Why pay a writer when generative AI makes articles for free?  


To understand the difference between skilled writers and AI content, let's go back to the iconic noughties gameshow, "Who Wants to Be a Millionaire?".

Famously, "Ask the Audience" was one of the most popular lifelines, and it usually worked: between 91% and 92% of the time, the room full of strangers was correct. Likewise, generative AI platforms like ChatGPT, Claude, and Gemini are excellent at summarising pre-existing common knowledge. They analyse data from a full range of online sources, credible and otherwise, to find an overall consensus, which is usually broadly correct.

With AI, thought leadership becomes thought followship  

While this makes generative AI platforms excellent research tools, particularly for literature reviews, it doesn't guarantee accuracy. If ChatGPT had been around in the 6th century BC, it would have told you that the Earth was flat. It takes highly skilled human experts like Pythagoras to come up with original ideas that go against the grain.

This is why generative AI cannot create thought leadership pieces, only "thought followship". It cannot produce truly original thought. Unlike humans, who often come up with creative new ideas, these platforms work by summarising information which is already established ... and may not even be correct.

Interestingly, this creates a unique opportunity for firms that publish original content. If they can produce enough blogs, reports and whitepapers on a specialised topic, they can shape the overall AI discourse.

AI hallucinations increase as content specialises  

As you move into more specialised fields, the error rate of AI-generated content increases. Just like the studio audience, whose accuracy dropped after the £32,000 mark, generative AI struggles as questions get tougher. A 2024 study found that ChatGPT gets 52% of programming questions wrong, while 2025 research found that two thirds of the platform's medical diagnoses were also incorrect.

For firms relying on generative AI, this opens up huge regulatory and reputational risks. Consultancy Deloitte was left red-faced after it emerged that a $290,000 report commissioned by the Australian government was riddled with AI hallucinations. As the name suggests, the platform fabricated information and cited non-existent studies, plastering them all over the paper.

According to OpenAI's own 2025 tests, between 30% and 50% of its models' errors are caused by hallucinations. The cause of these hallucinations remains unknown and unfixed, presenting yet more risk.

The Deloitte scandal erupted across headlines in October 2025. One politician slammed the firm for the kind of behaviour that "a first-year university student would be in deep trouble for", while another accused the Big Four firm of having a "human intelligence problem". As a direct result, other government clients have lost trust in Deloitte, leading to a vicious cycle of yet more scrutiny, scandals and demands for refunds.

The same pattern has already repeated across multiple industries, causing brutal and often irreversible reputational damage. Multiple lawyers, for example, have been caught, shamed and fined by judges for submitting court documents containing hallucinations.

Unlike writers, AI struggles with satire and fake news 

When generative AI isn't hallucinating, there is another risk. Unlike humans, it is poor at filtering satirical or joke content out of its analysis. One of the most memorable 2025 examples arrived in the unlikely form of the offal-based Scottish delicacy, haggis.

A group of tricksters set up the "Haggis Wildlife Foundation" website to prank naive tourists. Using AI, they generated images of a guinea-pig-like creature with a roaring moustache feasting on "wild tartan". Unfortunately, even though it had helped create the pictures, generative AI fell victim to the prank: for several weeks, Google's AI Overview confirmed that haggis is "a small furry mammal native to Scotland".

In this case, the misinformation was relatively harmless. But the potential risks are enormous, as political groups can flood the internet with deepfakes and fake news.  

Accidentally including fake or joke content in a serious article comes with heavy penalties. Regulators like the FCA have zero tolerance for promotional content which does not follow the "fair, clear and not misleading" rule.

AI data analysis is not the same as writers’ expertise  

In the same way that respected professionals would not ask a studio audience to write their industry articles, reports and thought leadership, they shouldn't get a generative AI platform to produce them. While its research and data analysis functions are useful, it is not suitable for writing complex original pieces.

Relying on AI for content leads only to dull thought followship, which can easily be contaminated with hallucinations, mistakes or fake information, especially in specialised fields. The articles may be free to produce, but the potential reputational, regulatory and financial risks are extensive. It's a false economy.

By contrast, specialist writers arrive with far more than research skills. They come stacked with years – often decades – of insider industry experience spanning multiple companies. They have an instinct for what content works well, and an ear to the ground at networking events, meetings and conferences, which a generative AI platform can never obtain. As well as writing about services, writers use the services themselves. They can effortlessly make content relatable, original and relevant to new audiences.

Data is not the same as expertise. Before the age of ChatGPT, we all knew that whatever was online might not be correct. Nothing has changed. Generative AI platforms simply package the same information more conveniently and hallucinate to fill the gaps. That does not make a good article; it makes a ticking time bomb.

Look at Deloitte. Look at the lawyers whose careers were derailed. Do you really want to be the thought leader who thinks the world is flat? Or that haggis is an animal? Or the one who propagated fake news?

Why should firms pay a writer? Because they can’t afford not to.  
