Keen readers will have spotted a short note in the Dear Readers of the February 2023 issue of Global Cement Magazine, stating that all of the content within this title is ‘proudly’ written by humans. This was a thinly-veiled aside about ChatGPT, the advanced chatbot launched by OpenAI in November 2022, and the potential for it to ‘automate away’ jobs in the publishing industry. Not to be outdone, Google has since released a competitor, Bard.
For those still unaware, these advanced AI systems generate human-sounding text from prompts. Trained using a wide range of content from the internet, they are a huge step up compared to previous AI, with the ability to easily adjust to different audiences and styles. Potential applications include taking the legwork out of writing repetitive social media posts, news stories and press release materials. Computer programmers can ask the AI what is wrong with sections of code. Other uses could include writing new pages for Wikipedia, product descriptions for Amazon... or entire A-grade essays for lazy students.
The internet being the internet, ChatGPT can also advise readers how to remove a peanut butter sandwich from a VCR in the style of the King James Bible.1 Another humorous hypothetical sees Christopher Columbus arrive in the US in 2015, where he is amazed by the technological development and multiculturalism that he discovers.2
These texts, and others like them, are worth a read, but they are nothing that a decent student couldn’t conjure up in an afternoon. Maybe speed will prove to be an advantage for AI in some applications but, in order to write these passages, it still has to be prompted... by a human. AI has no first-hand experience, no feelings and - some say - no valuable ideas. Indeed, the musician Nick Cave, having read the lyrics to a ChatGPT song written ‘in his style,’ called it a ‘grotesque mockery of what it is to be human.’3
There are also limits on exactly what ChatGPT/Bard can and cannot do. For example, ChatGPT’s developers have been clear that, as there is no single ‘source of truth,’ it is capable of writing ‘plausible-sounding but incorrect or nonsensical answers.’2 Someone being uncharitable might call this ‘crap in, crap out.’ There is also the risk of AI generating offensive, harmful or even libellous content, plus the risk of plagiarism, although the developers say they are working on these issues.
Reassuring - or not - as this is, advanced AI is not going away and we will have to adapt. To start, it would be wise for reputable publishers to clearly mark AI-generated content as such. This will be the policy of Global Cement, although we currently have no intention of using AI. AI-generated content should also undergo thorough fact-checking, and named humans should take responsibility for it, as if they had written it themselves. The risks of not doing this were highlighted when an incorrect answer from Bard - on day one - wiped US$100bn from the market value of Google’s parent company Alphabet.4
But what if unscrupulous players don’t label such content and it proliferates online? With misinformation already rife, this could erode the accuracy of online information over time, as ChatGPT/Bard and their descendants bastardise their own ‘work.’ If that transpires, online information might become so unreliable that humans seeking accurate information might have to disregard the internet altogether. Alternatively, we might reach a point where literally any information could be retrieved, repackaged and ‘spun’ by a sufficiently advanced chatbot. This could open individuals and companies up to interrogation like never before, be it from investors, detractors, government officials or anyone else. It is entirely possible that content would migrate offline, lest it fall victim to the ‘wrong kind of analysis.’
But all is not lost! Knowledge that isn’t online would rise in value. In either scenario, human journalists, with their valued contacts, would rise in importance as sources of truth. Maybe these scenarios seem a bit far-fetched, but communication is about transmitting thoughts between people. AI could become a useful part of that, but not a replacement. You may think that AI tools are alarming, interesting, amusing, useful or not quite the revelation they are being marketed as. Whether you agree or disagree, don’t hesitate to contact me. Let’s communicate, human to human!
1. https://twitter.com/tqbf/status/1598513757805858820?lang=en;
2. https://openai.com/blog/chatgpt;
4. www.cnbc.com/2023/02/08/alphabet-shares-slip-following-googles-ai-event-.html