Fake It Until You Break It
How Writing Platforms Fail On AI

The world of AI has moved fast and broken things since OpenAI’s ChatGPT crashed into our awareness in November 2022. Along with a proliferation of similar products, it is rapidly transforming our world with no prior consultation and few, if any, safeguards.
AI has become so pervasive that barely any discussion now seems complete without some reference to it. It is perhaps the ultimate buzzword to drop into dinner party conversation if you want to appear on-trend and knowledgeable.
It seems as though any problem, from getting Forest Green Rovers to score goals to curing the blind and healing the lame, just needs the application of more AI. No doubt it will soon appear as a new additive in brands of toothpaste and washing powder.
Inconvenient truths

There are of course inconvenient truths, which it is best to gloss over to avoid being seen as a party pooper. AI is reversing trends to reduce carbon emissions, such are its energy requirements, and it seems destined to decimate the jobs market in numerous sectors. But who needs a liveable planet or a job to pay the bills, when you can ask ChatGPT or one of its mates for a list of the ten best ways to annoy your partner, or a picture of a dancing cat playing the accordion in a tutu?
Medium slow

Writers are far from immune to the impact of AI. This was seen on Medium, where throughout 2023 and 2024 countless AI-generated articles were dumped on the platform at scale, and the platform seemed unwilling or unable to deal with the issue, consistently months behind the curve in responding. (This may be partly explained by the fact that some of Medium’s biggest investors are heavily involved with AI.)
Medium eventually issued a series of increasingly strong policies against posting AI-generated content, although I have seen only one reference to a writer being threatened with suspension for using it. Presumably there have been others. Earnings on the platform are now so low that they may act as a greater deterrent than any policy.
Substack
As a Medium refugee on Substack, I was interested to see how things are handled on this platform. I have already written about a product some Substack authors are using called “WriteStack”. We know they are using it because their names appear in the marketing for the product, with glowing endorsements:
The Write Stuff - An investigation of “WriteStack”
This week I saw an interesting article in which Linda Caroll highlights the way that some successful writers on Substack appear to be making extensive use of AI, along with reference to the environmental impact of the technology:
Linda links to this article in which Will Storr highlights his suspicions of extensive AI use on Substack:
It seems, therefore, that much of the writing we come across on Substack may not be human writing at all, but churned out by AI. That heart-breaking story with ten thousand likes, about someone with a drug dependency who turned their life around to become a best-selling author? That funny meme about a dancing cat in a Substack note? That writing advice about the one thing you need to do to become a writing guru yourself and get rich selling courses writing about writing? It could all be AI-generated synthetic semolina slop!
What is the platform doing about this? Well, I had a look at Substack’s terms and conditions, the small print, you know, the sort of thing you probably click agreement to without ever actually reading when buying a product, at least if you are as impatient or lazy as me.
The thing is, I couldn’t find any restrictions on AI use on Substack at all. I am fairly new to Substack, so please do correct me in the comments, but remarkably it appears there is no policy on AI usage. Zero. Zilch. Zip. Nada. Sweet FA.
It may be that, as with the very libertarian approach the owners have taken to extreme content, they take the view that anyone should be allowed to publish almost anything. Call me old-fashioned, but this feels rather like visiting a rifle range and finding the establishment has no policy on shooting other customers with the guns, as it would spoil the fun.
This laissez-faire approach to the use of AI on writing platforms seems to threaten the end of human writing. Since AI can work perhaps 10,000 times faster than a human author, the competition is ever so slightly unfair. But writers? Who needs them?
Training AI on our content
The only thing I could find by way of a Substack policy on AI came via a wider internet search: an old article explaining a setting that once allowed writers to prevent their articles being used to train AI. However, unless I am missing something (which happens often!), any reference to AI and the “toggle switch” seems to have been removed from the Substack settings.
This is the old article I found:
“To disable AI bots from using your published Substack content as part of a training model, first head to your publication’s Settings page.
Scroll to the Privacy section and slide the toggle to the right next to “Block AI training.”
This setting indicates to AI tools like ChatGPT and Google Gemini that their models should not be trained on your published content. This will only apply to AI tools that respect this setting, and blocking training may limit your publication’s discoverability in tools and search engines that return AI-generated results.”
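Settings like this typically work by publishing robots.txt directives that ask known AI training crawlers, such as OpenAI’s GPTBot or Google’s Google-Extended agent, not to fetch your pages; as the quoted article itself admits, only crawlers that choose to respect the file are affected. As a rough illustration (the exact directives Substack’s toggle generated are an assumption, not documented here), Python’s standard library can show how such a file is interpreted:

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt of the kind a "Block AI training" toggle
# might publish: block known AI-training crawlers, leave others alone.
ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /
"""

def crawler_allowed(agent: str, url: str) -> bool:
    """Return True if the robots.txt rules permit `agent` to fetch `url`."""
    parser = RobotFileParser()
    parser.parse(ROBOTS_TXT.splitlines())
    return parser.can_fetch(agent, url)

# AI-training crawlers listed in the file are asked to stay out...
print(crawler_allowed("GPTBot", "https://example.substack.com/p/my-post"))
# ...while ordinary search crawlers, with no matching rule, remain allowed.
print(crawler_allowed("Googlebot", "https://example.substack.com/p/my-post"))
```

Note that this is purely advisory: a crawler that ignores robots.txt sees no technical barrier at all, which is exactly why the original setting came with the caveat quoted above.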
If it has been removed it would be interesting to know the reason. Has the platform taken a commercial decision to benefit from our work being crawled? Have they surrendered and decided it is impossible to stop it happening? Or is there another explanation?
Does it matter?

Many now seem to take the view that AI is with us and unstoppable, and we just have to learn to live with it. However, that means letting it rip unchecked through numerous employment sectors, putting millions, perhaps billions of people out of work, with minimal if any safety net to support us. Many freelance writers report a reduction in work available to them, reflecting the wider impact of AI on the job market reported in these articles:
https://www.theguardian.com/technology/2025/may/31/the-workers-who-lost-their-jobs-to-ai-chatgpt
https://www.theguardian.com/business/2025/jun/30/uk-entry-level-jobs-chatgpt-launch-adzuna
https://observer.co.uk/news/business/article/cant-chatgpt-do-it-the-growing-cull-of-office-jobs
https://www.theguardian.com/technology/2025/jun/09/ai-advertising-industry-google-facebook-meta-ads
Universal Basic Income (UBI) is a pipe dream, and the reality seems to be that AI simply concentrates wealth even further in the hands of a tiny self-styled elite of tech bros, with ever larger yachts, tax havens and bunkers to escape the dystopian future they are creating: the world of “have nots and have yachts” referred to in this article:
If you are fine with that, no problem. If not, then we need to be pushing back firmly against AI at every opportunity, to ensure there is very belated consultation, and that some safeguards are introduced urgently.
Technology is advancing so fast that politicians seem consistently several years behind, and to either not see the dangers, or to be playing catch-up with a world spinning out of their control.
Further reading:
https://medium.com/eco-news/the-write-stuff-a38844898413
For the search engine that plants trees: https://www.ecosia.org/