My research involves leveraging visual research methodology to study sociotechnical systems, particularly ones that manifest on the internet. I use problematic information (online hate, misinformation, scams) as a "home base" and key lens into how visual media and systems can cause intersectional harms towards core identities of individuals and groups. In analyzing various happenings on social media, I borrow largely from media studies, communication, computer vision, cultural analytics, CSCW/HCI, and ideas of intersectionality.
This work ranges from macroscale views of summary statistics and trends in online visual media sets to more zoomed-in case studies and human experiences. I am a deeply mixed-methods researcher, conducting interview studies and performing both qualitative and quantitative analyses of data. I am also a methods nerd, and much of my work focuses on creating online tools and methods so that other researchers can answer questions about visual media in their own communities and environments.
I do this work within the Center for an Informed Public at the University of Washington, with generous support from a National Science Foundation (NSF) Fellowship and a Herbold Fellowship.
I describe the three pillars of my work below; publications from this work and press about it are available on my publications and press page.
Many researchers study social media platforms and sociotechnical systems (e.g., generative AI systems) that are largely visual, but lack the necessary tools and methodological training to safely and rigorously incorporate visual media into their studies, particularly human subjects studies or studies involving teams of human researchers. I build tools to fill this gap, enabling other researchers to perform visual research, and I expand upon the methods behind these and other tools.
My Diamond ranking tool, used in my forthcoming CSCW paper, is an example of this. It was also used in a forthcoming AIES paper with my colleague Sourojit Ghosh, where it empowered his study in interviews and in crowdsourcing data around Stable Diffusion outputs. We have a manuscript in revisions about this tool and HCI researchers' use of visual elicitation at large.
I am currently writing up our process for a TikTok netnography that we ran over the course of five months during the election season. Additionally, I have a recent poster at CSCW that showcases how we may reclaim color quantization, a "classic" method from computer graphics, for studying large collections of hateful images.
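For readers unfamiliar with it, color quantization reduces an image's pixels to a small set of representative colors, which makes it cheap to summarize and compare the palettes of large image collections. The sketch below is a minimal, hypothetical illustration using uniform per-channel binning on a toy pixel list; the function names and parameters are mine for illustration, not the actual implementation from the poster.

```python
from collections import Counter

def quantize_color(rgb, levels=4):
    # Uniformly bin each 0-255 channel into `levels` buckets,
    # then map the bucket back to a representative channel value.
    step = 256 // levels
    return tuple(min(c // step, levels - 1) * step for c in rgb)

def dominant_colors(pixels, levels=4, top=3):
    # Count quantized colors to surface the dominant palette of an image
    # (or, aggregated, of a whole image collection).
    counts = Counter(quantize_color(p, levels) for p in pixels)
    return counts.most_common(top)

# Toy "image": mostly dark red pixels with a couple of near-white ones.
pixels = [(200, 10, 10)] * 8 + [(250, 250, 245)] * 2
print(dominant_colors(pixels, levels=4, top=2))
# → [((192, 0, 0), 8), ((192, 192, 192), 2)]
```

In practice, k-means-style quantization (as in Pillow's `Image.quantize`) is a common alternative to uniform binning, since it adapts the palette to each image's actual color distribution.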
The crux of my PhD is exploring the unique role of visual media in online hate, misinformation, and scams. While many assume this involves an in-depth media forensics approach to deepfakes of politicians (which does happen), my work centers on the much more common phenomenon of imagery (photos and videos) being framed and misused to push false, and often harmful, narratives and scams. Presently, I have two active projects in this space: studying visual online hate rhetoric at the US-Mexico border and studying users' folk theorizations around religious AI spam on Facebook.
My work on the rhetoric of the US-Mexico border crisis has involved a cross-platform longitudinal study in which I leverage quantitative methods to better collect, process, and analyze large amounts of images and videos, along with their relationships. I manage a team of undergraduate and Master's students who have been helping to explore this data and the hateful, anti-immigrant narratives within it since January. Several posters have emerged from this work, and the first paper on it will appear at CSCW 2025. Other manuscripts are in preparation.
In early 2024, shrimp Jesus took over the internet, and as a visual researcher, I quickly fell down the rabbit hole. I have spent this summer using computation and a human research team of four to analyze six months of AI Jesus spam imagery across over 100 pages, and to conduct an interview study with Facebook users to understand folk theorizations of this phenomenon and add a user perspective to this "AI slop" epidemic. This manuscript is in active revisions.
The presentation of identity in online systems, particularly social media, has always been fascinating to me, and it is a space I am actively exploring. How do groups experience algorithmically recommended visual content about themselves, and what content resonates with them most? These were the questions of my study of Latinidad on TikTok, at CSCW 2024.
How do in-group aesthetic signals develop, and how does this interact with the idolatry of politicians and influencers? How does this play out across identities like religion and spirituality, and emergent in-groups like betting market bros? These are the questions I am exploring in the wake of the 2024 election.
The harms of generative AI image outputs towards users, particularly femme people and children, such as non-consensual nude photos or CSAM, are also a growing area of interest for me, as they emerge in many of my existing projects and datasets. How have advances in AI technology made it easier than ever to represent these identities in harmful ways? And how does this content evade content moderation via sociotechnical imagery methods?