Blogpost by Dr Devina Sarwatay, Prof. Jessica Ringrose, and Chiara Fehr
To mark Safer Internet Day on 11 February this year, researchers and representatives from charities and institutions gathered at the University of Surrey to discuss the harms experienced, mainly by children, in an increasingly complex digital environment.
On this day, Dr Devina Sarwatay and Prof. Jessica Ringrose came together to comment on the rising harms and risks of AI and algorithms, and the concerns that must be raised around the rise of synthesised personhood.
Dr Devina Sarwatay: AI Literacy for the Synthetic Self: Safeguarding Young People from AI-generated Image-based Sexual Abuse
Dr Devina Sarwatay opened her presentation by citing research from the National Literacy Trust (UK) that shows a massive increase in the use of generative AI among the young people (aged 13-18) who participated in the Trust's survey. Where nearly 40% of boys and 24% of girls reported using generative AI in 2023, those figures rose to 78% of boys (nearly double) and almost 76% of girls (more than triple) in 2024. Teachers surveyed also agreed that students must be taught to engage with these tools more critically.
Building on the existing work of Papacharissi (2012) on the ‘networked self’ and Bhandari and Bimo (2022) on the ‘algorithmized self’, Dr Sarwatay proposed the concept of the ‘synthetic self’ (2025). The idea of a ‘synthetic self’ has already been explored to some extent in robotics and engineering (Prescott & Camilleri, 2019; Lallee et al., 2015), in art and performance (Saban, 2023; Stelarc, 2022), and in the study of digital cultures (Follows, 2023; Lovink, 2017). What Sarwatay posits is that the ‘synthetic self’ goes beyond the presentation of the self online as imagined in the networked era and internalises the logics of the algorithmized self to the point where the use of AI technologies to alter our appearances and experiment or play with our identities online gives rise to a new self. Within the postdigital contexts we live in, it becomes increasingly important to understand which (new) risks and harms await us and which (new) literacies we need to cultivate to face them.
This is because we are already using these new tools and technologies not just to create our own synthetic selves but also to flirt with creating and editing the likenesses of others, and to build relationships with(in) these synthetic selves. A case in point is the app ‘Replika’, which allows users to create chatbots and have conversations with them. Bardhan’s (2022) piece unpacked how men used this app to create AI girlfriends and verbally abuse them. While one may argue that chatbots do not really feel anything and no one is really being abused in this Human-Computer Interaction (HCI) setting, we need to examine what this means for the human in that setting and what behaviours we are normalising.

Image by Getty Images/Futurism (https://futurism.com/chatbot-abuse)
Similarly, Sarwatay drew attention to AI and generative AI tools that allow us to create synthetic selves that are ‘too real’, to the point where it is becoming increasingly difficult, if not impossible, to tell the difference between ‘real’ and AI-generated ‘people’ online. She also presented a short video of another app, ‘REVIVA: Bring your photos to life’, which was advertised on Instagram and available to download on the Apple App Store and Google Play Store. The advertisement opened with two side-by-side pictures, one of a male-presenting and one of a female-presenting person, who can be made to appear, through the use of the app, to be kissing. The voice-over narrates:
“Wanna see your crush kiss you? AI has got you covered. Just upload 2 photos, that’s it. Try REVIVA for AI kiss, hug, and more!”

Image from presentation by Devina Sarwatay on AI Literacy for the Synthetic Self: Safeguarding Young People from AI-generated Image-based Sexual Abuse at the workshop ‘The Evolving Nature of Image-Based Sexual Abuse (IBSA): Addressing New Challenges in Research, Policy and Practice’ at the University of Surrey
This is the promise of the app REVIVA, readily available on the Apple App Store and Google Play Store. Just by uploading two pictures, anyone can render a digital likeness of another person and themselves (or of two other people) partaking in what the advertising heavily implies are sexual acts. Deepfaking made easy, made available, and advertised on mainstream social media platforms to generate mass awareness.
This means it is not only the ‘synthetic self’ that is at risk here, but also the synthesis of others, which can be weaponised for image-based sexual abuse, consent violation, cyberbullying, and other harms. It is not only AI filters that allow us to edit our synthetic selves, playing with the tension between our real selves and how we want to present ourselves on platforms; it is also deepfakes that allow us to edit others, realising what was once confined to our imagination in the digital realm and offering a perhaps twisted digital materiality that can be easily manipulated, with the potential for real risks and harms. It is not only over the self that we can now exert more power, but over others too, perhaps without their consent or knowledge, through the means of AI synthesis.
In her work, Dr Sarwatay problematises this, questioning how subjectivity has fundamentally changed in light of AI technology and presenting the idea of the synthetic self. In recent years, AI image generation has steadily woven itself into the day-to-day lives of young people. The ‘self’ is thereby becoming increasingly synthesised through digital tools, some of which alter the physical appearance of the user, impacting the ‘self’ through the witnessing of newly created personas, as well as through the ability to synthesise images of other people. This ranges from AI filters that grant users smooth skin and small nose bridges, to AI girlfriends who are verbally abused by the men who created them, to Instagram accounts with AI personas who can no longer be told apart from ‘real’ people, to apps like REVIVA that reflect the larger problem of the violating practice of deepfaking.
This inarguably puts children and young people at risk on multiple levels. In the first instance, technologies like filters warp young people’s perceptions of and perspectives on appearance: as they interact with image-based content on platforms such as Instagram and TikTok, ‘AI beauty standards’ promote increasingly western features, negatively impacting the development of a healthy self-image. Furthermore, AI-facilitated interactions, such as those promoted by REVIVA and AI girlfriends, are likely to skew the expectations of young people, especially in relation to consent. Dr Sarwatay’s work raises major concerns around the prevalence of online communities dedicated to the manufacturing of child sexual abuse material, facilitating paedophilic activity and victimising all children. This includes text-to-image generative AI, AI chatbots offering ‘sexual communication with children’, as well as ‘nudifying technology’.
It is of utmost importance, she argues, that social media giants regulate the media environments they control, which have the potential to facilitate these harms. She notably calls out companies like META, whose recent reduction of content moderation panders to toxic masculinities and actively places vulnerable users at risk.
Prof. Jessica Ringrose: How do young people use social media? Consent, Snapchat and Image Based Abuse
Professor Jessica Ringrose takes a wide-ranging approach to the issue of online harms in her work. She began her talk by noting that in January 2025 the world witnessed the inauguration of Donald J. Trump as the 47th President of the USA.

Image credit: Julia Demaree Nikhison/AP Photo/ Bloomberg/Getty Images
A widely circulated snapshot from the inauguration showed the ‘tech bros’, the owners of Google, META, X/Tesla and Amazon, standing together witnessing the spectacle. Headlines such as “The tech bros have front-row seats at Trump’s inauguration, but what they want goes way beyond that” (Guardian, 2025) seemed to show that the world was waking up to how the owners of social media companies were part of the 1% of billionaires and were influencing the circulation of knowledge globally, with grave implications for the planet. She noted we might be in the midst of a paradigm shift in how people perceive power relations globally, and in relation to technology in particular. Ringrose questioned how we could regulate widespread sexual violence online, including new AI abuses and harms, given this context of political polarisation and disinformation, exemplified by the demise of X as a platform in the wake of Elon Musk’s ownership and deregulation. She argued that researchers need to take very seriously platform affordances, how the technology works, and how apps are used by young people, as these affordances are typically the means by which consent is bypassed.
Like Dr Sarwatay, Prof. Ringrose discussed the shifts from a networked, digitally connected self (Papacharissi, 2010) to a passive algorithmic self (Fehr, 2024), noting the rise of the ‘synthetic self’ (Sarwatay, 2025). While the networked self on platforms such as Facebook or Snapchat allows for a “reflexive process of fluid associations with social circles,” the algorithmized self, as drawn out by Bhandari and Bimo (2022), is constituted through a “reflexive engagement with previous self-representations”, making for a more passive self. The implications of the synthetic self for the real-life experiences of youth remain to be seen.
Ringrose’s work positions the world in a moment that comes after (post) digitalisation, a shift that has been accelerated by the COVID-19 pandemic. Citing Haraway (1991) and Latour (2005), she demarcates the human experience as one of ‘more than human relationalities’, part of actor-networks in which we interface with devices that extend the ‘self’ beyond the bounds of the physical body. It is therefore imperative, she argues, to embrace the Post-Digital Paradigm in the development and implementation of policy and practice in all aspects of life that are shaped by the digital (Evans & Ringrose, 2025). To achieve this post-digital consideration, Prof. Ringrose argued, it is important to create new post-digital research methods. Attuning to the postdigital is needed to capture the blurriness of how URL digital mediation reshapes IRL: the real and material psychical relationships of day-to-day life (Cramer, 2015). Ringrose outlined some of the creative methodologies she has used in her research with youth to document their experiences on social media apps, including drawing methodologies, which also help to address privacy and data concerns around the surveillance of young people’s social media use by prioritising what they want to share about their experiences.
Ringrose discussed how her approach enabled her to research how social media enables and facilitates image-based abuse of teens. Citing Treem and Leonardi (2012), boyd (2011), and Schrock (2015), she discussed how the affordances enabling online image-based abuse include spreadability, visibility, editability, persistence, association and searchability. A clear example of how the platform affordances of social media directly put young people at risk of digital sexual abuse is the app Snapchat, which Ringrose has studied extensively. Snapchat, she argues, operates on a non-consensual basis of snapping images to one another without asking. This may be a positive experience within known relationships; however, these patterns and habits open a space for surprise, non-consensually received content. The ephemeral affordance of ‘disappearing’ images around which the app is built makes it extremely difficult to report abuse. This is particularly problematic for girls, who regularly experience nonconsensual ‘dickpics’, or what has come to be classified as ‘cyberflashing’.

Image from presentation by Jessica Ringrose on Consent, Snapchat and Image Based Abuse
Complicating this risk are the gamified features of the platform, which include elements designed to keep young people engaged and on Snap, such as ‘Snap Scores’, follower count/influencer status and ‘like/love’ buttons. Many young people grow their intimate digital networks very wide, opening themselves up to risk from mutuals/loose ties, the semi-known and strangers. As noted, it is also extremely difficult to report on the platform, and young people are often afraid to report issues to parents or schools for fear of getting into trouble, due to outdated youth sexting laws. Therefore, young people are typically left to self-manage their privacy settings in-app and to regulate their networks and responses. Ringrose points to this neoliberal self-responsibilising and blaming of young people facing abuse as a form of social neglect and complicity on Snapchat’s part, since the app carries no responsibility.
Taken together, these talks show us new opportunities, but also challenges, with technology, especially when we consider the overlap of networked, algorithmic and synthetic selves in postdigital contexts. These modes of psychosocial and technosocial subjectivity do not replace one another; rather, they intra-act (to use Karen Barad’s new materialist term). Now more than ever, it is imperative that educational standards empower young people with essential digital and algorithmic literacy, and that we hold accountable and better regulate digital corporations, such as Snap Inc., which choose to place profit over the safety of their users.
References
Bardhan, A. (2022). Men Are Creating AI Girlfriends and Then Verbally Abusing Them. Futurism. https://futurism.com/chatbot-abuse
Follows, T. (2023). I Used Generative AI To Create A Synthetic Self And You Can Too. Forbes. https://www.forbes.com/sites/traceyfollows/2023/07/21/i-used-generative-ai-to-create-a-synthetic-self-and-you-can-too/
Lallee, S., Vouloutsi, V., Munoz, M. B., Grechuta, K., Llobet, J.-Y. P., Sarda, M., & Verschure, P. F. M. J. (2015). Towards the synthetic self: Making others perceive me as an other. Paladyn, Journal of Behavioral Robotics, 6(1). https://doi.org/10.1515/pjbr-2015-0010
Lovink, G. (n.d.). NXS Interview with Geert Lovink: Defining the Synthetic Self.
Prescott, T. J., & Camilleri, D. (2019). The Synthetic Psychology of the Self. In M. I. Aldinhas Ferreira, J. Silva Sequeira, & R. Ventura (Eds.), Cognitive Architectures (pp. 85–104). Springer International Publishing. https://doi.org/10.1007/978-3-319-97550-4_7
Science Gallery. (2022). Synthetic Self. Science Gallery Bengaluru. https://bengaluru.sciencegallery.com/psyche-exhibits/synthetic-self
Tanya Bonakdar Gallery. (2023). Analia Saban: Synthetic Self | September 14 – October 28, 2023. https://www.tanyabonakdargallery.com/exhibitions/745-analia-saban-synthetic-self-tanya-bonakdar-gallery-spruth-magers-los-angeles/
