"Legislate AI to augment – not replace – journalists’ work": Advice from an interview with ChatGPT
- hamishmonk1
- May 29
- 6 min read

[The following is a piece I wrote for the Chartered Institute of Journalists' Spring 2025 Journal issue. Published May 2025]
The term ‘artificial intelligence’ (AI) was coined in an August 1955 proposal for a summer research workshop at Dartmouth College in New Hampshire, United States, drawn up by a group of academics and technology developers. This seminal moment, and the workshop that followed in 1956, is considered the foundation of AI as a field of study. By 1956, American computer scientists Herbert Simon and Allen Newell had written ‘Logic Theorist’ – a program capable of performing automated reasoning. And so, from humble beginnings, AI was born.
Since the middle of the 20th century, AI has grown exponentially – in terms of public recognition, capital investment, technical development, and application. In the creative sector, which contributes £125 billion in GVA to our economy annually, the merits and hazards of AI deployment are being fiercely debated – with the interests of creators, governments, and technology firms often coming into conflict.
To ensure AI is rolled out in a manner that is constructive for writers, governments must legislate to augment, not replace, human work – and, most importantly, be guided by the research. Accuracy, independence, impartiality, accountability, and humanity are the founding principles of journalism – and, by extension, democracy. Only humans can guarantee these principles endure.
The UK government’s stance on AI
Given the inevitability of AI’s integration into every facet of our lives and work, it is incumbent on the industries and governments it impacts to ensure the technology’s development is informed, unbiased, and supportive of human-made content.
The UK government, for its part, launched an AI and Copyright consultation on 17 December 2024 – essentially a tender for opinions on proposed major reforms to the country’s copyright framework. Alongside the launch, the government expressed its preference for a potential ‘opt-out’ system, whereby copyright holders would need to proactively register their works to avoid them being used by AI developers for training purposes. This measure would run against the principle of UK copyright law that choice and control are granted automatically to authors. As such, it is not just unworkable, but of little benefit to either copyright holders or AI companies.
Formally, the government argues that its ‘opt-out’ proposal seeks to balance the needs of AI developers with those of the creative industries. Informally, it supports Labour’s mission to boost business investment in the UK. While this is a necessary ambition, growth should not come at the cost of failing to protect creators and journalists holistically.
The industry response
The consultation closed on 25 February and – according to the Authors' Licensing and Collecting Society (ALCS), which supported its 13,500 members in submitting responses – drew a “huge reaction” from the creative sector. Before the deadline struck, Baroness Kidron stated in an article for The Bookseller that the government had been positively “overwhelmed”, having received over 2,500 submissions by 19 February – a figure bound to have increased in the final six days.
Barbara Hayes, ALCS CEO, has remained vocal on this matter beyond the consultation deadline, highlighting three key action points for UK regulators:
1. Reconsider the copyright exception and opt-out model, which is unfair, unworkable, and will likely result in prolonged legal challenges;
2. Ensure transparency around which works have been used and by whom. This is an essential first step for any progress on the issue; and
3. Develop an AI Copyright Hub (an adaptation of a government proposal from 2015), which would link rightsholders with users and make use of emerging tools to streamline transparent licensing.
Clearly, given the technology’s potential, authors and journalists are not opposed to AI – as long as copyright laws and the dynamic licensing market are upheld, and the transparency obligations of AI developers are observed. At the time of writing, the government’s legislative response to the consultation feedback remains undecided, or at least undisclosed.
From the (open) source’s mouth: How does AI think it should be regulated?
As a highly analytical, data-aggregating, ‘impartial’ ocean of resources, might AI itself be able to offer some advice, or perspective, on how it can be most safely utilised?
Here are the results of an ‘interview’ the CIoJ Journal conducted with OpenAI’s AI-powered chatbot, ChatGPT, on the technology in question and the ethics surrounding journalists’ work:
Question 1: “How will AI impact journalism in the long-term?”
ChatGPT spat out a generous, A4-long response to this question, pointing to the automation of routine tasks, the personalisation of news content, enhanced investigative tools, improved audience insights, global reach and localisation, as well as new kinds of journalism – such as “immersive news experiences using virtual or augmented reality (VR/AR), and narrative structures driven by AI’s understanding of individual readers.”
Nota bene: Enhanced investigative tools could be a double-edged rapier – particularly if the biases and prejudices of AI creators are, like Descartes’ manufacturer’s stamp, baked into the models that power them. VR and AR may also necessitate tighter data protection laws.
Question 2: “In what ways might AI be good for journalists in the future?”
The predictably bounteous response to this question discussed improvements for “data journalism, fact checking and credibility assessments, research and information aggregation, visual content creation, bias mitigation, audience engagement, personalisation,” and more.
What the chatbot did not acknowledge – perhaps restricted by the question’s parameters – were the social implications of continually feeding readers the content and opinions they want to believe. There have already been plenty of warnings about the creation of media echo chambers, and how they can result in polarised, extremist views.
Question 3: “In what ways might AI be bad for journalists in the future?”
Characteristically thorough, structured, and suspiciously toadying, ChatGPT offered seven ways that it, or its successors, will be bad for journalism. These were “job displacement, loss of human perspective, a proliferation of misinformation, quality dilution, erosion of trust, consolidation of media power, and lack of accountability.”
Though, with contemporary geopolitics in mind, it seems we do not need AI or machines to give rise to such trends.
Question 4: “How should governments manage the development of AI so that it does not harm journalists?”
The 10-point strategy concocted by ChatGPT involved promoting transparency and accountability in AI development; supporting reskilling; encouraging collaboration between journalists and developers; guarding ethical standards; supporting independent journalists; fostering public awareness and digital literacy; ensuring fair labour practices; regulating the use of AI in news generation and distribution; investing in AI research; and, most importantly, encouraging the development of AI to augment, not replace, human work.
ChatGPT added: “By managing AI development with a clear focus on supporting journalists and their work, governments can create a future where AI complements rather than competes with human-driven journalism.” Perhaps this should be the starting point for Labour. We’ve been warned.
Question 5: “Thanks for the interview. Would you like to write a title for the piece?”
After a series of prompts to help rationalise the request, ChatGPT provided the very title of this article. It is not bursting with originality, but nor is it unusable.
A final note of warning for journalists: we cannot rely too heavily on AI to write our headlines (just yet). Apple’s generative AI tool was recently found to have been attaching inaccurate headlines to curated news articles – drawing on both false information and omitted context. The Bureau of Investigative Journalism said of the story: “All of this points towards the start of a new era of disinformation, where it will become increasingly hard to tell fact from fiction.”
A postscript from the purlieus: AI’s energy footprint
With the ChatGPT interview for this article concluded, it came to the Journal’s attention that the very act of deferring to AI in this manner may have cost a surprising amount of water and energy.
In terms of processing, a ChatGPT query requires significantly more energy than a standard web search, due to the complex language models involved. According to RW Digital, ChatGPT uses (at the time of writing) up to 10 times more energy than a standard Google search. Sustainability News claims ChatGPT's monthly carbon footprint is equivalent to 260 flights between New York and London. Forbes, meanwhile, reports that handling a conversation of 5-50 prompts on ChatGPT (our interview falls within this bracket) consumes around 500 millilitres of water. Evidently, regulation of AI is not just an ethical pursuit – it is an existential one.
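For readers who like to see the sums, here is a minimal back-of-envelope sketch – illustrative only, not a measurement – that strings these cited figures together. The roughly 0.3 Wh assumed for a standard web search is our own assumption (a widely quoted historical estimate); the 10x multiplier reflects the RW Digital comparison, and the 500 millilitres of water per 5-50-prompt conversation reflects the Forbes figure above.

```python
# Back-of-envelope footprint estimate for a short ChatGPT interview.
# All figures are illustrative assumptions, not measurements.

GOOGLE_SEARCH_WH = 0.3           # assumed energy per standard web search (Wh)
CHATGPT_MULTIPLIER = 10          # ChatGPT query vs. web search (RW Digital claim)
WATER_PER_CONVERSATION_ML = 500  # per 5-50 prompt conversation (Forbes claim)

def interview_footprint(num_prompts: int) -> dict:
    """Rough energy (Wh) and water (ml) cost of a ChatGPT interview."""
    energy_wh = num_prompts * GOOGLE_SEARCH_WH * CHATGPT_MULTIPLIER
    # The water figure is quoted per conversation, not per prompt,
    # so it is applied once for any interview in the 5-50 prompt bracket.
    water_ml = WATER_PER_CONVERSATION_ML if 5 <= num_prompts <= 50 else None
    return {"prompts": num_prompts, "energy_wh": energy_wh, "water_ml": water_ml}

if __name__ == "__main__":
    # The Journal's interview ran to five questions plus a handful of follow-ups.
    print(interview_footprint(num_prompts=8))
```

On these assumed figures, even a short interview consumes tens of watt-hours and roughly a small bottle of water – trivial in isolation, but not at the scale of millions of daily queries.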
In an interview with The Guardian on the future of AI, novelist William Boyd prophesied: “It will become more efficient, of course… The only straw to clutch at is the sheer complexity and randomness of human individuality.”