
Don’t touch me on my em dash, ChatGPT

Excessive use of this punctuation mark is a giveaway of AI writing, along with a lack of anecdotes, emotion and unique viewpoints

Picture: UNSPLASH/PATRICK FORE

A certain linguistic snobbery comes with using unconventional punctuation. For a humanities major, it’s a moment of one-upmanship in emails with colleagues or letters to friends. Here, your language prowess is on full display, and this dazzling grasp of grammar elevates your reputation as a writer. But what happens when artificial intelligence (AI) language models adopt this punctuation too?

I’m talking about a member of the hyphen family — the long horizontal bar called the em dash. If you know it, you either love it or hate it. And now there’s a new emotion this divisive mark elicits: suspicion.

But let’s take a few steps back. The first thing to know about these dashes is how multifaceted they are. They shape-shift to take on the role of commas, colons, brackets and ellipses, and you can use them for pauses and interruptions as well as summaries or long-winded tangents. They convey a wide range of emotion, signalling impatience, anger, passion and excitement. Perfectly on-brand for this infamous punctuation mark is that it isn’t accessible with a simple touch of your keyboard. Instead, it makes you work for it, hidden behind fiddly keystroke combinations.

Firmly rooted in literary history from Emily Dickinson to Stephen King — both devout em dash-ers — these markers aim to capture our interest and, mimicking an arrow in appearance, nudge us to where our attention should follow. As with all good things, language guidelines suggest less is more. During a newsroom critique in 2011, The New York Times managing editor for standards said: “I’ve noted before the risks of missteps, confusion or awkwardness in the use of dashes. Even if the dashes are correct and the syntax intact, we should avoid overdoing the device.”

Fast forward to 2025 and AI has entered the chat.

If you’re new to the world of large language models, ChatGPT is an interactive AI chatbot developed by OpenAI that can generate human-like text within seconds. It reached 100-million users in just two months after its launch, setting the record for the fastest-growing user base yet. By way of comparison, it took Instagram two-and-a-half years to reach similar levels of popularity. 

On the plus side, ChatGPT is said to have made professionals more than 50% faster at their writing tasks while improving the quality of their work. On the negative front, 89% of American students admit to using ChatGPT to do their homework for them, and universities are scrambling to adopt tools that detect AI writing to combat the threat of college cheating and plagiarism. Even outside academia, there are unprecedented cases of unethical ChatGPT use — for example, the lawyer facing legal charges after his firm used the tool and presented false case citations.

This brings to the fore the concept of trust, a somewhat abstract construct in this post-truth era. In a peer-reviewed article published in May, the authors found that ChatGPT users are trusted less, and their work is judged as less legitimate and less impressive. Insights from behavioural science show that trust is a key determinant of expertise and authority. So the risk ChatGPT users face is that their proficiency is called into question.

What ChatGPT offers in productivity, it still lacks in authenticity. Genuine human content resonates with us because it includes personal anecdotes, moments of emotion and unique viewpoints that AI language models haven’t (yet) mastered. Unless prompted, AI text tends to omit first-person and second-person pronouns such as I, you and we, which human writers rely on for a more personal and relatable tone. Yet despite criticism that AI lacks emotional depth, ChatGPT now offers a break-up text assistant that allows users to specify tone of voice, relationship length and how much detail and closure they want to give their ex.

But here’s the thing: ChatGPT loves patterns and humans are very good at pattern recognition. So much so, that TikTok is bursting with viral videos of jilted lovers exposing these break-up messages for being AI-generated.

Patterns of writing behaviour are so powerful that an entire profession is dedicated to them: forensic linguistics. These semantic sleuths analyse language across court documents, police interviews and social media posts for insights to present as evidence in legal issues.

Just as you might recognise a friend’s message in a group chat by their signature style, emoji use or their “idiolect” (individual dialect), forensic linguists help to solve crimes by identifying forged suicide notes and faked alibi messages. This helped the FBI identify Ted Kaczynski as the Unabomber and, closer to home, led a recent QAnon investigation to an SA suspect.

How is the em dash relevant in all of this? While ChatGPT works hard to be indistinguishable from text written by humans, there are some tells. Starting with this dash.

The frequent and sometimes excessive use of this punctuation by ChatGPT has led many to recognise it as a telltale sign of AI intervention. Gen Zs have even started to call it “the ChatGPT hyphen”. Other linguistic clues of ChatGPT text include repetitive phrasing, clichéd phrases and the overuse of sophisticated verbs and adjectives such as delve, align, underscore, noteworthy, versatile and commendable. If any of these words form part of your day-to-day vernacular, it may be a good idea to go easy on them while they’re under the AI microscope.

The summary here is this: we still expect human effort when it comes to writing, thinking, and certainly when ending our relationships. As language AI tools become more ubiquitous, this is likely to change — in the same way that accountants using calculators for the first time in the 1970s were seen as sell-outs and ostracised by members of their profession.

Am I worried that AI is coming for my job? 

The likely risks of misinformation and bias? 

All-too-convincing phishing emails and malware code? 

Sure. But what really keeps me up at night is needing to find a new favourite punctuation mark. 

Crymble is a behavioural linguist at BreadCrumbs Linguistics, a marketing firm that specialises in persuasive behavioural communication.

No AI was harmed in the making of this content.
