What you need to know about using AI in the media

We’re hearing about it everywhere at the moment. AI, or artificial intelligence, is the new buzzword, and we wanted to know what storytellers in the global south need to pay attention to when using the technology in journalism or documentary filmmaking. We talked to three experts to break it down for us: Prof. Charlie Beckett, founding director of Polis, the think-tank for research and debate around international journalism and society in the Department of Media and Communications at LSE; Dr Irving Huerta, convenor of the Data Schools at the University of Cambridge; and Ruona Meyer, a freelance journalist, researcher and media trainer. Here’s what they said:


How can AI be used in the media?

Prof. Beckett: In the global south, there is a significant opportunity to empower journalism. I can see a lot of really effective organisations, like Arab Reporters for Investigative Journalism, who realised that through collaboration you can build capacity in a way that was much harder before. These technologies enable knowledge sharing, and they are going to be a goldmine for people everywhere, but especially in the global south, where there’s already a lot of censorship, distortion and misinformation.

On the flip side, newsrooms around the world, but perhaps especially in the global south, have been struggling forever. They face challenges of finance, political interference and low levels of literacy. There’s a technological divide. The challenge of finding the resources, doing the training, having the information, buying the technology and implementing it is obviously greater for news organisations in economies that are not thriving. This is as true of generative AI as of older AI technologies. There’s an inequality between big and small newsrooms. If you’re a big global south newsroom, like the Nation group in Kenya, you’re probably able to adapt this technology pretty easily. And with AI, but especially the recent generative AI, there’s also a big language bias. At the moment, it’s still hugely anglocentric, or English-centric.

Dr Huerta: AI has been used by the media for some time now: from automating information feeds that subsequently produce sports, economic and climate reports, to managing social media, and even controlling cameras in a TV studio. However, an important breakthrough in recent years is generative AI, such as ChatGPT, Midjourney, Google’s Bard and others, which generate text, images and even code. This type of AI can be useful for many tasks, but it must be used with care because of its propensity to produce inaccurate information and exacerbate discrimination. This should be enough to dissuade anyone in the media from relying solely on these services to write or verify an entire story. But these tools can assist with:

  • Providing ideas to pursue a story, which will then need to be reported and verified.
  • Offering quick trend analysis on publicly available datasets, as suggested by the Centre for Collaborative Investigative Journalism (see the sketch after this list).
  • Improving storytelling structures and even trying out new narrative models.
  • Assisting with programming languages popular among journalists, such as R and Python, whether for analysing large datasets or performing intensive computational tasks.
  • Analysing images, by sorting similar images and finding patterns in collections.
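To make the trend-analysis and coding-assistance points concrete, here is a minimal Python sketch of the kind of script a generative AI assistant might help a journalist draft. The dataset, figures and column names are all hypothetical; a real story would start from a verified public dataset.

```python
# A toy year-on-year trend check, the kind of script an AI assistant
# might help a journalist draft. All figures below are made up.
import pandas as pd

budget = pd.DataFrame({
    "year": [2019, 2020, 2021, 2022, 2023],
    "health_allocation_usd_m": [410, 395, 520, 515, 760],  # hypothetical
})

# Year-on-year percentage change flags unusual jumps worth investigating
budget["yoy_change_pct"] = budget["health_allocation_usd_m"].pct_change() * 100

# A jump above, say, 20% is a lead to report and verify, not a finding
leads = budget[budget["yoy_change_pct"] > 20]
print(leads)
```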

All of the above must be verified by trained media professionals against legal and editorial standards.

Media professionals should also report critically on AI, and do so more often. Technological enthusiasm often overshadows reporting on bias in algorithms and on the tangible consequences of using AI systems, such as the environmental damage caused by the high computational power required to run them.

There is a need for more stories about the impact of AI on human lives and the planet.

Ruona: For newsrooms in Africa, AI is most useful for combating the pressures that stifle media freedom: it can cut news production costs, improve access to information, reduce mis/disinformation, and help outlets surmount censorship and reach wider markets. Specifically, AI is helping newsrooms in Africa to create royalty-free images for stories, condense large volumes of data, produce studio-like video news bulletins without production budgets and offer their websites in foreign languages, while others are using AI to build chatbots that can fact-check claims.

But the power of AI is in what each newsroom or organisation or journalist uniquely decides to use it for.

For example, I regularly use the paid voice-over software Murf.AI to turn my African sources’ audio interviews into German voiceovers in seconds. This has meant I can diversify my multimedia content and income by being able to sell a Nigerian or Senegalese story to both African and German audiences.

AI and machine learning can greatly empower journalists, bring efficiency to news production and also keep journalists safe, thereby reducing burnout. I see AI as everyone’s colleague, capable of making the media industry more competitive and attentive to the audience.

But many specific AI tools come with subscription packages that are well beyond the reach of the average journalist, newsroom or filmmaker. Cost is a major challenge to the uptake of AI, even where there is a willingness to delve into it.


How can journalists and filmmakers start learning how to use AI?

Dr Huerta: I recommend Automating the News by Nicholas Diakopoulos. Entry-level users of AI tools can explore introductory courses at universities like MIT or Cambridge, and try Google’s Teachable Machine, which offers theoretical insights and practical exercises for becoming acquainted with machine learning.

Ruona: Start with the basics. AI is just like any other technology: it will only be as powerful as the savvy of the professional using it. I would recommend the JournalismAI Starter Pack and the Introduction to Machine Learning courses by the London School of Economics. For filmmakers, AI can assist with shot-list and script generation. It can help them automate rotoscoping, animation, the lighting and composition of CG characters, and speedy music composition, with no equipment or prior technical skills.

What do media practitioners need to be careful of when using this new technology?

Prof. Beckett: Journalists need to make sure that when they use these technologies, they can vouch for the authenticity of the information they’re gathering; for example, that they don’t fall for misinformation.

Dr Huerta: There are at least three considerations:

  • They need to be aware that generative AI is highly likely to produce inaccurate information and exacerbate structural discrimination. Consequently, these technologies cannot be wholly trusted to produce entire stories and facts. Media professionals must verify information.
  • The value of human interaction in storytelling should never be forgotten. Without it, storytellers are left with second-hand narratives that add nothing new.
  • Encompassing the previous two points, efforts should be made to avoid generating more junk information on the internet. These technologies can produce stories en masse, whether true or not. Responsibility lies in producing high-quality media products with real value for the public.

As an additional note, when using any generative AI tool, it is good practice to provide full disclosure of its use, as The Guardian has recently attempted to do.

Ruona: They need to be mindful of any legal regulations in their location, and of professional ethics, because audience trust can be eroded if certain uses of AI are not disclosed; in much the same way as one would disclose that a translation is paraphrased, or a website article would note that it has changed since publication.

Practitioners also need to be aware that generative AI is currently weighted towards data generation processes that originate mainly in the global minority world. As more media houses from Africa or Asia input their data, use the tools and develop products, artificial intelligence will achieve greater inclusivity. Machine learning processes are also being shaped by the needs of the newsrooms that have the resources to undertake them, and it is best not to copy machine learning projects or tools wholesale, without considering your unique audience needs and location.

For example, a newsroom in Finland may train machines to learn how to summarise lawmaker minutes and spot spending on meal subsidies. But a newsroom in Nigeria would struggle to replicate that, because lawmaker minutes have traditionally not been readily available there.

Finally, practitioners should be mindful that the AI itself can produce hallucinations, where the output contains completely wrong data or plagiarised material.

The human element will therefore always remain important for cross-checking, re-checking generative AI results and also for creating machine learning products that are cost effective and contextually relevant to journalists and their audiences.

What are three AI tools you recommend for storytellers and what are they best for?

Dr Huerta: ChatGPT and Bard, for coding assistance; ImageGraph, for both image analysis and understanding the principles underpinning machine learning and computer vision.

Ruona: You can use Google Pinpoint to transcribe audio and to extract information from large caches of documents (e.g. PDF files) as well as from images.

ChatGPT can help you summarise large amounts of information, find story angles, generate interview ideas and article outlines, and kick off your storyline development process (for visual storytellers), while Colourlab AI can automatically colour grade your projects and can be synced with everything from Premiere Pro and Final Cut Pro to DaVinci Resolve.
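As an illustration of the summarising workflow described above, here is a minimal sketch using OpenAI’s official Python client. The ChatGPT web interface itself needs no code, and the model name, file name and prompt here are assumptions for illustration only:

```python
# Minimal sketch: asking a ChatGPT-family model to summarise a document
# via OpenAI's Python client. Model name, file name and prompt are
# illustrative assumptions, not a prescribed workflow.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

with open("committee_report.txt", encoding="utf-8") as f:  # hypothetical file
    document = f.read()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: any available chat model would do
    messages=[
        {"role": "system", "content": "You are a news researcher. Summarise faithfully."},
        {"role": "user", "content": f"Summarise this in five bullet points:\n\n{document}"},
    ],
)

# The output is a starting point only: every claim must still be verified
print(response.choices[0].message.content)
```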

Though Pinpoint is free (you fill out some registration details with your Gmail address) and ChatGPT has a free version, Colourlab AI is paid for, and this is the situation with many types of AI software: they are often sold on payment plans, given that they are indeed intellectual property.


Are there AI tools that are particularly good for social impact?

Ruona: Seeing AI is a tool from Microsoft that allows people with visual impairments to live independent lives. Specific to journalism, it can help journalists get a description of their location for use in writing their stories; it can also help them identify currency notes when they have to spend money in the field; and with documents, it can capture not only printed text but also handwriting. It stands out for me because it comes in several languages and, best of all, is free.

GPTZero is an AI-detection tool that is good for social impact because it can be used to verify news in sensitive times, such as during elections, disasters or conflicts, when fake news proliferates. The main drawback? Like most AI tools we see, it only works with English.

The last tool is not publicly available, but I find it inspiring. It is the gender bias-focused bot built in-house by the Financial Times. This bot analyses pronouns and first names, then warns editors in that newsroom if women are not being quoted in their stories. This drives greater impact towards gender balance in the news. As I mentioned before, these are tools that have arisen out of one newsroom tackling an industry-wide, societal problem. So AI tools such as these can be replicated and tweaked elsewhere, if the human and technical resources are available.
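The FT’s bot itself is not public, but the core idea, counting gendered references in a story and warning editors when the balance tips, can be sketched in a few lines of Python. Everything below is a toy reconstruction of the concept, not the FT’s code:

```python
# Toy reconstruction of a gender-balance check: count gendered pronouns
# in a story and warn if women appear under-represented. Not the FT's code.
import re
from collections import Counter

FEMALE = {"she", "her", "hers"}
MALE = {"he", "him", "his"}

def pronoun_counts(text: str) -> Counter:
    counts = Counter()
    for word in re.findall(r"[a-z']+", text.lower()):
        if word in FEMALE:
            counts["female"] += 1
        elif word in MALE:
            counts["male"] += 1
    return counts

# Hypothetical story text for demonstration
story = ("He said the budget was final. He repeated this to reporters. "
         "His spokesman confirmed it. She disagreed, and her office "
         "released alternative figures.")
counts = pronoun_counts(story)
if counts["female"] < counts["male"]:
    print(f"Editor warning: possible gender imbalance: {dict(counts)}")
```

A production system would go further, matching first names against name databases and handling quotes rather than raw pronouns, but the warning loop is the same shape.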


What is the future of AI in the media industry?

Dr Huerta: Generative AI and machine learning are becoming more popular in newsrooms, and this trend can be expected to grow. That is why it is essential for appropriate regulations to be put in place by policymakers, and for ethical procedures to be established by media professionals. This is crucial to prevent the mass production of misinformation and discriminatory narratives, while harnessing AI’s capabilities for the public interest. Media professionals must also be conscious of their labour rights and be prepared to defend them, as AI is already transforming the job market.

Ruona: The future looks bright for well-resourced, connected newsrooms located in more democratic nations and regions. The innovation departments of these newsrooms will continue to train computers to build great interventions; they will be more audacious in using AI to tackle common challenges and achieve cross-border stories. Most will shift towards using AI to increase the personalisation of news, to increase views, revenue and, ultimately, sustainability. Everyone will grapple with how to balance traditional ethics with using AI while retaining audience trust.

But away from these privileged locations, AI uptake is already becoming the preserve of newsrooms and organisations that are the usual suspects for receiving the majority of donor grants, and of those that already have clout. Representatives of these privileged entities are often the ones at the roundtables of AI programmes being developed for small newsrooms or newsrooms outside the West.

Specific to the African continent, we already have pandemic-related and geopolitical pressures that increase the cost of producing journalism; AI is often not regarded as a priority by most newsrooms, despite the rest of the world advancing with it. This means that what is being fed into machines will likely not be representative of our societies in the near future. There is also the disparity between genders in digital literacy, and that means a future where AI is dominated again by the current majority (whatever that is for any given country or region).

But more far-reaching will be a future where there is greater need for clarity on how AI and machine learning data is collected, where it is stored, who gets compensated, and what constitutes fair compensation. Hopefully, we will see some concrete decolonisation and greater accessibility in these areas as well.