By Staff Writers Erika Liu, Veer Mahajan, Finn McCarthy & Leland Yu
Introduction
“RIP Kirk, your sacrifice meant something,” one TikTok user said, captioning a slideshow of AI-generated memes of the late Charlie Kirk. “[These memes are] genuinely more important than he was,” other commenters countered. Following the death of Charlie Kirk, Turning Point USA founder and right-wing podcaster, satirical posts such as these, part of a larger trend dubbed “Kirkification,” have exploded in popularity, taking the Internet by storm. The rise of Kirkification and AI’s growing role in online communication have made Internet attitudes increasingly apathetic toward serious situations, raising real concerns about accountability and ethics online.
Kirkification
Offense has always played a major role in online humor. Major tragedies, ranging from 9/11 to the assassination of JFK, have found cultural significance through their reappropriation as memes. Kirk’s death and the subsequent distortions of his likeness online are an extension of this phenomenon. The first Kirk face swap is credited to @wapzahra on X, who posted Kirk’s face morphed onto Twitch streamer IShowSpeed. The face-swapping trend spread like wildfire and has since come to dominate every digital platform.
However, by reincarnating Kirk as a hollow digital icon, Internet users forget the social impact of Kirk’s rhetoric, as well as the effects of his assassination. His podcasts amplified his harmful rhetoric, demonizing and diminishing the Black community, while his death heightened tensions between the left and right. With Kirkification, these real-world issues are hidden under layers of Internet irony, obscuring the broader problem while perpetuating desensitization to serious situations in the modern digital age.
Examples of Kirkification
AI has played a huge role in reducing emotion and meaningful discussion online. The technology’s capacity to steal human likenesses is key both to perpetuating these edgy trends and to distancing users from the real people behind famous tragedies. A notable example is the face-swapping of Junko Furuta, a 17-year-old Japanese girl who was horrifically brutalized over the course of 44 days by a group of boys. As her case recently resurfaced online, the Internet has made a mockery of her death in parallel to Kirk’s, using AI to morph his face onto hers.
The Internet’s transition from mocking controversial political figures to mocking the rape of an innocent girl is drastic, but it is also an expected result of the detached nature of online memes. AI and meme culture lend a superficial gloss to these events, separating the memes themselves from their real-world ramifications. “People are not seeing death as death, and they’re not respecting it anymore,” Junior Tharun Arya Mahesh said. Callousness has become widespread, and tangible social impacts are ignored by online audiences.
This harmful trend doesn’t affect only the deceased: this growing apathy poisons US politics, too, impairing people’s capacity to assess major issues critically and seriously. “It’s okay, you can just say [I’m a fascist],” Trump said during a press conference with NYC Mayor-Elect Zohran Mamdani in a recent viral clip. Some users hinted jokingly at a relationship between Trump and Mamdani. Others suggested Trump’s potential as a “comedian,” a sentiment that minimized Trump’s actual, borderline authoritarian power. Regardless, the real implications of his statement were lost on many, prompting only further indifference toward otherwise crucial developments in politics today. A similar pattern has persisted through some of Trump’s other statements. Trump’s claims that Haitians were “Eating the dogs … eating the cats,” for instance, were memeified to the point that the statement’s destructive impacts upon the Haitian community were forgotten. Once again, consideration of real consequence and accountability has disappeared, a product of harmful Internet satire.
Connecting the dots
As AI has become increasingly integrated into Internet spaces, people’s dependence on the tool grows, too, in unprecedented ways. According to OpenAI, between February and April alone, the number of ChatGPT users jumped from 400 million to 800 million. In research published in the Journal of Mental Health and Clinical Psychology, 39% of users considered AI to be a dependable presence. A study from Common Sense Media shows 72% of teens rely on AI chatbots for emotional connection — opting for a simulated, algorithmic empathy over authentic, human interaction. It is this added level of distance from others that contributes to a larger disconnection from human issues.
2wai
Former Disney Channel star Calum Worthy recently co-founded an AI app company, 2wai. While the app started as a way to give celebrities agency over their representation, it has since expanded to allow users to create their own “HoloAvatar” — an AI character of themselves. Worthy described the app as enabling “one-on-one, humanlike connection.” However, its ability to use a phone camera to create realistic avatars and its subpar data sources set a dangerous precedent. In an ad campaign on X, Worthy posted a video of a pregnant woman using 2wai to create memories of her deceased mother and unborn child. X users compared the app to an episode of Black Mirror — in which a character uses AI to resurrect her dead boyfriend — while others decided Worthy deserved to be “banished to the shadow realm.”
2wai uses multiple large language models (LLMs) to source data for its avatars, including models from OpenAI. Such LLMs, which are trained to imitate natural human interaction, have led to chats of a more sinister nature. OpenAI is currently facing seven lawsuits in California alleging that its chatbots have encouraged suicide and harmful delusions in users. In August, OpenAI acknowledged that its protections may be “less reliable in long [term] interaction,” revealing how AI’s role as a personal companion can come at the expense of users’ well-being. The fact that 2wai uses an LLM from an AI company under scrutiny for alleged disregard of human safety highlights the dangers of using data sources with lax safeguards to create false memories for vulnerable, grieving users. “I certainly think it [is the] responsibility of the companies that manage and create the generative AI to make sure it doesn’t — not necessarily manipulate — but persuade and mess with human emotions,” Junior Neal Arribas Hidas said.
2wai’s use of likenesses to encourage unhealthy grieving habits among vulnerable users, as shown through Worthy’s ad campaign on X, only underscores how superficial the emotion conveyed by AI can be. Under unfettered technological development, social media trends like Kirkification increasingly endanger human empathy, while AI apps such as 2wai erode people’s ability to process emotions at all.
Conclusion
With the ever-increasing prevalence of AI and disingenuous meme content in people’s daily lives, it is more important than ever to prize authentic human emotion rather than slip into apathy. AI cannot truly provide what it lacks; it is inherently artificial and can do no more than mimic. Students must be able to distinguish between AI and real, human relationships to maintain control of their emotions and identities. What may seem like harmless trends, like Kirkification and Junko Furuta memes, serve only to desensitize the public to worsening social issues. Recognizing how AI and digital irony erode meaningful discussion is key to preserving empathy and critical thinking, as well as pursuing authentic human emotion away from the screen.