Research


My research leverages visual research methodology to study social media phenomena. In particular, I explore visual media's role in problematic information (online hate, misinformation, scams) and how it produces intersectional harms in online spaces against core identities of individuals and groups. In analyzing these happenings on social media, I borrow largely from media studies, communications, computer vision, cultural analytics, CSCW/HCI, and ideas of intersectionality.

This work spans macroscale views of summary statistics and trends across sets of visual media online, down to zoomed-in case studies and individual human experiences. I am a deeply mixed-methods researcher, conducting interview studies and performing qualitative and quantitative analysis of large collections of visual media. I am also a methods nerd, and a large part of my work focuses on creating online tools and methods so that other researchers can answer questions about visual media in their own communities and environments.

I do this work within the Center for an Informed Public at the University of Washington, with generous support from a National Science Foundation (NSF) Fellowship and a Herbold Fellowship.

I describe the three pillars of my work below; publications from this work and press coverage of it are available on my publications and press page.

Innovating Visual Research Methods

Many researchers study social media platforms and sociotechnical systems (e.g., generative AI systems) that are largely visual, but lack the tools and methodological training to safely and rigorously incorporate visual media into their studies, particularly human subjects studies or studies involving teams of human researchers. I build tools that fill this gap, enabling other researchers to perform visual research, and I expand upon the methods behind these and other tools.

My Diamond ranking tool, used in my forthcoming CSCW paper, is an example of this. It was also used in a forthcoming AIES paper with my colleague Sourojit Ghosh, whose study relied on the tool in interviews and in crowdsourcing data around Stable Diffusion outputs.

I am currently conducting a research project interviewing other HCI researchers about incorporating visual methodologies into their work, to better understand opportunities and limitations of visual research in HCI.

I have recent forthcoming work at CSCW (preprint soon) that showcases how we can reclaim color quantization, a "classic" method from computer graphics, for studying large collections of hateful images.
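To give a sense of what this can look like in practice, here is a minimal sketch, not the pipeline from the paper, of how color quantization can summarize a large image collection: each image is reduced to a small palette of dominant colors, and those palettes are aggregated across the set. The function names, the eight-color palette size, and the JPEG-only file glob are illustrative assumptions.

```python
# Minimal sketch: summarizing a collection of images via color quantization.
# Requires Pillow (pip install Pillow).
from collections import Counter
from pathlib import Path
from PIL import Image

def dominant_palette(path, n_colors=8):
    """Quantize one image to n_colors and return (rgb, pixel_share) pairs."""
    img = Image.open(path).convert("RGB").quantize(colors=n_colors)
    palette = img.getpalette()                   # flat [r, g, b, r, g, b, ...]
    counts = img.getcolors(maxcolors=n_colors)   # [(pixel_count, palette_index), ...]
    total = sum(count for count, _ in counts)
    return [
        (tuple(palette[i * 3 : i * 3 + 3]), count / total)
        for count, i in counts
    ]

def collection_summary(image_dir, n_colors=8, top_k=20):
    """Aggregate dominant colors across a folder of images."""
    agg = Counter()
    for path in Path(image_dir).glob("*.jpg"):
        for rgb, share in dominant_palette(path, n_colors):
            agg[rgb] += share
    return agg.most_common(top_k)  # most prominent colors across the set
```

The resulting color summaries can then be compared across subsets of a collection, for example across platforms or time periods.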

Visual Media in Problematic Information

The crux of my PhD is exploring the unique role of visual media in online hate, misinformation, and scams. While many assume this involves an in-depth media forensics approach to deepfakes of politicians (which does happen), my work centers on the much more common phenomenon of imagery (photos and videos) being framed and misused to push false, and often harmful, narratives and scams. Presently, I have two active projects in this space: studying visual online hate rhetoric at the US-Mexico border and studying users' folk theorizations around religious AI spam on Facebook.

My work on the rhetoric of the US-Mexico border crisis involves a cross-platform, longitudinal study in which I leverage quantitative methods to better collect, process, and analyze large amounts of images and videos, along with the relationships between them. I manage a team of undergraduate and Masters students who have been helping to explore this data and the hateful, anti-immigrant narratives within it since January. Three preliminary posters, two authored with my students, have emerged from this work and will appear at CSCW and Trust & Safety (preprints soon).
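As one concrete illustration of what "relationships between images" can mean computationally, and not a description of this study's actual pipeline, perceptual hashing is a common way to surface reposts and near-duplicates of the same image across platforms. The directory layout and distance threshold below are assumptions for the sketch.

```python
# Illustrative sketch: linking near-duplicate images across platforms
# with perceptual hashing (pip install Pillow imagehash). This is an
# assumed technique for illustration, not the study's actual method.
from itertools import combinations
from pathlib import Path
from PIL import Image
import imagehash

def hash_images(image_dir):
    """Compute a perceptual hash for every JPEG in a directory."""
    return {
        path.name: imagehash.phash(Image.open(path))
        for path in Path(image_dir).glob("*.jpg")
    }

def near_duplicates(hashes, max_distance=8):
    """Pairs of images whose hashes differ by at most max_distance bits."""
    return [
        (a, b, hashes[a] - hashes[b])
        for a, b in combinations(hashes, 2)
        if hashes[a] - hashes[b] <= max_distance
    ]
```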

In early 2024, shrimp Jesus took over the internet, and as a visual researcher, I quickly fell down the rabbit hole. I have spent this summer using computation and a four-person research team to analyze six months of AI Jesus spam imagery across over 100 pages, and to conduct an interview study with Facebook users to understand folk theorizations of this phenomenon and add a user perspective to the "AI slop" epidemic.

Visual Identity Online

The presentation of identity in online systems, particularly social media, has always fascinated me, and it is a space I am actively exploring. How do groups experience algorithmically recommended visual content about themselves, and what content resonates with them the most? These questions drove my study of Latinidad on TikTok, forthcoming at CSCW 2024 (preprint here).

How do in-group aesthetic signals develop, and how do they interact with the idolatry of politicians and influencers? This is a question I am actively exploring in my research on misinformation and rumoring around the US Presidential election, including the role of generative AI spam and visual memery in making sense of the assassination attempt on Donald Trump.

The harms of generative AI image outputs toward users, particularly femme people and children, such as non-consensual nude photos or CSAM, are also a growing area of interest for me as they emerge in many of my existing projects and datasets. How have advances in AI technology made it easier than ever to represent these identities in harmful ways? And how does this content evade content moderation via sociotechnical imagery methods?