AI-Generated Social Media Posts: A Double-Edged Sword
The age of AI is here, and it's a game-changer, or so the tech enthusiasts tell us. But for many, the impact is more nuanced, especially when it comes to social media.
For me, the most concerning issue is the proliferation of AI-generated content on social media platforms. It's like a snake eating its own tail, creating an endless loop of algorithmic tripe that clutters our feeds.
As a wordsmith, I find the AI-generated posts on my Facebook feed particularly offensive. They often present themselves as in-depth analyses of the bands I love, but they're nothing more than shallow, formulaic summaries. These posts are packed with literary cliches, overstating the importance of the people involved and their interactions. It's as if the AI is trying to impress us with its 'deep' understanding of the world, but it's all just a facade.
The problem doesn't stop there. These AI-generated posts often conclude with grandiose statements that are more than a bit suspect. For instance, a seemingly innocent meeting between two people is portrayed as a pivotal moment in rock'n'roll history, or a simple application of AI is framed as a revolutionary breakthrough. These grandiose framings, along with the 'it's not X, it's Y' construction, are telltale signs of AI-generated content, and they're a cause for concern.
But how can we tell when something is AI-generated? PhD candidate Leon Furze has some insights. He's studying the implications of generative AI for writing instruction and education, and he's found that AI-generated text often uses negative parallelisms, empty verbs, and words like 'delving' and 'navigating'. It also tends toward a predictable sentence structure with little variation. Furze suggests that avoiding these tics is relatively simple: just tell the AI not to use them.
The AI-generated content on social media often includes other red flags, too. It tends to overemphasize the significance of events, use vague attributions, and generalize opinions. And then there's the stock vocabulary deployed to inflate mundane events: 'align with', 'crucial', 'enhance', 'interplay', 'key', and so on. These words are often used to make ordinary moments sound more important than they are.
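To make the red flags above concrete, here is a minimal sketch of how such a word-and-phrase heuristic could be scored programmatically. The phrase list and the scoring function are illustrative assumptions, not a real detector: simple word counts like this are easy to fool, and genuine AI-text detection is far less reliable than any such checklist.

```python
import re

# Hypothetical list drawn from the red-flag words discussed above.
RED_FLAG_WORDS = ["align with", "crucial", "enhance", "interplay", "key",
                  "delving", "navigating"]

# Rough pattern for the "it's not X, it's Y" construction.
NOT_X_ITS_Y = re.compile(r"\bit'?s not\b.+?,\s*(?:it'?s|but)\b", re.IGNORECASE)

def red_flag_score(text: str) -> int:
    """Count naive substring hits plus 'it's not X, it's Y' matches.

    Note: plain substring counting will also match words inside other
    words (e.g. 'key' in 'monkey') -- acceptable for a toy sketch only.
    """
    lowered = text.lower()
    score = sum(lowered.count(word) for word in RED_FLAG_WORDS)
    score += len(NOT_X_ITS_Y.findall(text))
    return score

sample = ("It's not hype, it's a pivotal moment. "
          "Their interplay was crucial to navigating rock history.")
print(red_flag_score(sample))  # counts 3 flag words plus 1 pattern match
```

A higher score here means only that the text leans on the listed stock phrases; it says nothing definitive about authorship, which is precisely the limitation the researchers quoted in this piece point out.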
Georgia Phillips, a creative writing lecturer, notices the stiffness of the tone in AI-generated content. She finds it lacking in flair and originality, devoid of a distinct voice. Phillips believes that AI-generated creative work can be dispiriting, as it doesn't provide the same pleasure as encountering well-crafted literature.
Dr. Leah Henrickson, a senior lecturer in digital media and cultures, has noticed a similar trend. She often receives AI-generated stories about history, which are 'magical' and 'fancily written'. But as she points out, these stories are often hard to verify, and it's not clear if they're based on fact or fiction.
The issue of AI-generated content on social media is complex. It's not just about the quality of the writing; it's also about the algorithms that drive it. Social media algorithms optimize for engagement, which can lead to a vicious cycle. As AI content becomes more prevalent, bots congregate around high-performing posts, and the algorithm learns to reward this kind of content.
Dr. Henrickson also mentions 'transactional text', which exists for its own sake, without the expectation that anyone will read it. This includes the terms and conditions we agree to when downloading apps. The AI-generated content on social media may share this transactional nature, where the point is producing content rather than anyone actually consuming it.
Meta, the company behind Facebook, has acknowledged the issue of spammy content on its platform. They've announced measures to 'crack down' on such content, which is crowding out authentic creators and harming the user experience. Spam networks create hundreds of accounts to share the same content, and fake engagement is coordinated to make it seem more popular than it is.
The public's reaction to AI-generated content is ambivalent. Dr. Henrickson's study found that many participants were frustrated with AI-generated content, feeling overwhelmed and unsure about its ethics and copyright issues. Others were excited, but the overall sentiment was one of confusion and uncertainty.
The impact of AI-generated content on education is a concern. Phillips believes that future generations deserve better than 'slop'. She argues that literature plays a crucial role in helping humans think critically and make sense of the world. If language is reduced to clichés and formulaic repetitions, our capacity to express ourselves will suffer.
Furze agrees, suggesting that treating education as a transactional process could lead to a homogenized learning experience. He envisions a future where AI can produce lively and enjoyable texts in various forms, but that's not the reality we're facing yet.