2025
UX Improvements and UI components for Meta
This project explores how social media users perceive and interact with AI-generated content in their feeds. Through a survey of 128 Facebook users, we examined the clarity of AI content labeling, the frequency of AI-generated posts, and the effectiveness of content control features. The findings highlight gaps in user awareness and content moderation, informing the design of more transparent, consent-based AI warning systems.
Based on these insights, I designed a content warning system that blurs images and requires active user consent before viewing AI-generated media, providing clear context on potential harms while empowering users to make informed choices.
This warning concept was presented in the paper "The Securitization Paradox: AI-Generated Content and Polarization in the 2024 European Elections" at the 28th IPSA World Congress of Political Science held in Seoul, South Korea on 15 July 2025.
UX Researcher
UX/UI Designer
Surveys
UX/UI Design
Figma
Google Forms
April 2025 to July 2025
Users encounter AI-generated content on social media without clear labeling or understanding of its potential impact. This lack of transparency can reduce trust, limit informed engagement, and hinder users’ ability to control their feeds. The challenge was to design a system that clearly indicates AI-generated content, provides context on why content may be harmful, and empowers users to make informed choices about what they see.
I compared how social media platforms handle NSFW content versus AI-generated content, analysing their warnings, blurring and blocking behaviour, consent buttons, and explanatory text. The comparison showed that clear labeling, user control, and plain-language explanations are key to helping users understand and safely engage with sensitive content.
Comparison of AI-generated content labels vs. NSFW content warnings across different social platforms
A total of 128 participants were recruited via Facebook to share their experiences with AI-generated content in their feeds. The survey collected data on content labeling clarity, frequency of AI posts, use of content control features, and user preferences for AI-generated media.
Have you noticed content on Facebook or Instagram that you believe was generated by AI?
How frequently do you see AI-generated posts in your feed?
Have you ever used 'Not Interested' or similar features to hide content you didn’t want to see?
When designing an AI-content warning flow, I drew inspiration from existing NSFW and sensitive content warnings on platforms like Meta, but tailored it to emphasize user consent and education.
I started with a brainstorming session, mapping out what users would expect from such a warning: clarity, control, and trust. To structure the ideas, I used MoSCoW prioritisation:
Must-haves included active user consent (a “Show content” button), clear text explaining potential harms of AI-generated media, contextual information about why the content was flagged, and prompts encouraging critical source evaluation.
Should-haves focused on subtle content moderation cues: removing colour, blurring images, adding an easy “Report” option, and ensuring accessible AI literacy information.
Could-haves explored stronger interventions, such as fully hiding suspicious posts, blocking shares, or removing engagement metrics to reduce amplification.
Won’t-haves eliminated solutions that harm usability, such as small, easily ignored labels or overwhelming technical jargon.
This process ensured the design balanced transparency, user agency, and safety, while avoiding overreach or confusion.
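To make the interaction concrete, below is a minimal, hypothetical React/TypeScript sketch of the must-have flow: a blurred, desaturated preview, explanatory text about why the post was flagged, an active “Show content” consent button, and an easy report action. The component and prop names are illustrative assumptions only; the actual deliverable was a Figma prototype, not production code.

```tsx
// Hypothetical sketch of the AI-content warning overlay described above.
// Names and structure are illustrative, not Meta's actual implementation.
import React, { useState } from "react";

interface AIContentWarningProps {
  imageUrl: string;     // the flagged AI-generated image
  reason: string;       // contextual explanation of why it was flagged
  onReport: () => void; // handler for the "Report" action
}

export function AIContentWarning({ imageUrl, reason, onReport }: AIContentWarningProps) {
  // Content stays hidden until the user actively consents to view it.
  const [revealed, setRevealed] = useState(false);

  if (revealed) {
    return (
      <div>
        <img src={imageUrl} alt="AI-generated content" />
        <button onClick={onReport}>Report</button>
      </div>
    );
  }

  return (
    <div role="alert">
      {/* Blurred, desaturated preview so the post is not shown without consent */}
      <img
        src={imageUrl}
        alt="Hidden AI-generated content"
        style={{ filter: "blur(16px) grayscale(100%)" }}
      />
      <p>This image appears to be AI-generated. {reason}</p>
      <p>Consider checking the original source before sharing.</p>
      {/* Active consent: nothing is revealed until the user opts in */}
      <button onClick={() => setRevealed(true)}>Show content</button>
      <button onClick={onReport}>Report</button>
    </div>
  );
}
```

In this sketch the consent step, contextual explanation, and report option map directly onto the must-have and should-have items above.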
MoSCoW prioritisation for AI content warnings
AI posts hidden on feed
Active user consent to view
AI information page
Easy report function