
POLIS

Journalism at LSE

Generating Change
A global survey of what news
organisations are doing with AI
Charlie Beckett and Mira Yaseen
Preface

Our news media world has been turned upside down again. As always, serious technological
change produces both dystopian and utopian hype. Much of this has been generated on social
media by corporate PR and politicians. News coverage and expert commentary have also
veered from excited coverage of positive breakthroughs in fields such as medicine to much
more frightening visions of negative forces unleashed: Generative AI (genAI) is producing a
tidal wave of automated, undetectable disinformation; it will amplify discrimination, extreme
speech and inequalities.
And its impact on journalism? Again, much of the coverage has focused on the unreliability of
many genAI tools and the controversy over their rapacious appetite for other people’s data to train
their algorithms. As the initial storm of hype turns into more practical considerations, we have
been talking to news organisations around the world about this new wave of technological
change. What are they doing with AI and genAI; what might they do in the future; and what
are their hopes and fears for its impact on the sustainability and quality of this hard-pressed
journalism industry?
Whether you are excited or appalled at what genAI can do, this report makes it clear that it is
vital to learn and engage with this technology. It will change the world we report upon. It needs
critical attention from independent but informed journalists. But our survey shows it is also
already changing journalism. It brings exciting opportunities for efficiency and even creativity.
As one respondent told us, “Freeing up time for journalists to continue doing their job is the
greatest impact achieved.”
But it also brings specific and general hazards. The good news from our respondents, at least,
is that they are aware of the opportunities and risks and are beginning to address them. The
best organisations have set up structures to investigate genAI and processes to include all their
staff in its adoption. They have written new guidelines and started to experiment with caution.
This is a critical phase (again!) for news media around the world. Journalists have never been
under so much pressure economically, politically and personally. GenAI will not solve those
problems and it might well add some, too.
Responsible, effective journalism is more needed than ever. We hope this report and our work
at JournalismAI contributes to that mission. We look forward to hearing from you. Let us know
what you are doing and how we can help.

Professor Charlie Beckett


Director, Polis, LSE, and leader of the LSE’s JournalismAI project

Contents

Preface
The JournalismAI Survey
Executive Summary & Key Findings
Introduction: How Did We Get Here?
Chapter 1: How AI is Being Used in Journalism Today
1.0 How Are Newsrooms Using AI?
1.1 Newsgathering
1.2 News Production
1.3 News Distribution
1.4 Why Newsrooms Use AI
1.5 What is Working and What is Not
Chapter 2: AI Strategy
2.0 The Need for Strategy
2.1 Newsrooms’ AI Strategies
2.2 How Newsroom Processes and Roles are Affected by AI
2.3 Ready for AI?
2.4 The Strategic Challenges to AI Adoption
2.5 Have Newsrooms’ Approaches to AI Integration Changed?
Chapter 3: Ethics and Editorial Policy
3.0 AI’s Impact on Editorial Quality
3.1 Algorithmic Bias
3.2 Newsroom Approaches to Ethical Concerns
3.3 Ethical Implications for Journalism at Large
3.4 The Role of Technology Companies
3.5 The Role of Universities and Intermediary Companies
Chapter 4: The Future of AI and Journalism
4.0 Where is This All Going?
4.1 The Need for Education and Training
4.2 Newsroom Collaboration
4.3 How Will AI Change Journalism?
Chapter 5: Generative AI and Journalism
5.0 Current Use Cases
5.1 Opportunities Presented by Generative AI
5.2 Challenges Presented by Generative AI
Chapter 6: The Global Disparity in AI Development and Adoption
6.0 The Global North/South Divide
6.1 Economic and Infrastructural Challenges
6.2 Language and Accessibility Challenges
6.3 Political Realities Affect Trust in AI
Conclusion: What Does AI Mean for Journalism?
Six Steps Towards an AI Strategy for News Organisations
Glossary
References
Readings & Resources
Acknowledgements

The JournalismAI Survey

This report is the second global survey that we have conducted. The sample for this report
is bigger with a greater emphasis on geographical diversity. It is based on a survey of 105
news and media organisations from 46 different countries regarding AI and associated
technologies. In 2019 we surveyed 71 news organisations from 32 different countries, of
which only 16 have participated again in this 2023 survey.
This year, we made it a point to reach a more diverse group of participants in terms
of the size of their organisations. We invited small and large newsrooms, including
emerging and legacy organisations. In addition to this, contributions came in from Latin
America, sub-Saharan Africa, the Middle East and North Africa (MENA), Asia Pacific,
Europe, and North America. This necessitated an additional chapter, focusing on regional
challenges to AI adoption.

News Organisations That Completed The Survey By Type

[Bar chart: Broadcaster 28%, Newspaper 20%, Magazine 16%, News Agency 16%, Publishing Group 13%, Other 7%]

The purpose of this report is the same as the first: to give a sense of what is happening with
AI and what risks and opportunities it offers. We asked participants how they are engaging
with generative AI (genAI) technologies and what their implications are for the future of journalism. We
hope it informs the debate, helps news organisations chart their way forward, and guides us
to develop our programmes to support that process.
The survey was supplemented with interviews and conversations at journalism
conferences. We are very grateful to everyone who has shared their thoughts
and experiences with us. The surveys and interviews were conducted between April
and July 2023.
We do not claim that the survey is representative of the global industry – that would be
almost impossible on an international scale – nor does it equally reflect all viewpoints
within the different parts of news organisations. But it does give an unprecedented insight
into how these technologies are perceived by those people leading their development or
application inside news organisations.
Our respondents represent diverse roles and expertise within their organisations; they
include journalists, technologists, and managers. We encouraged news organisations to
gather representatives from different departments to complete the survey collaboratively.

NB: The list of organisations that completed the survey can be found in the Acknowledgements.

The published quotes have generally been anonymised. Some organisation names were
added for context after receiving permission from the authors. Some quotes were edited
lightly to correct literals and for sense. The editorial responsibility for the report lies entirely
with the authors.

Executive Summary
& Key Findings

1 Artificial Intelligence (AI) continues to be unevenly distributed among small and large
newsrooms and regionally among Global South and Global North countries.
2 The social and economic benefits of AI are geographically concentrated in the Global
North, which enjoys the infrastructure and resources, while many countries in the
Global South grapple with the social, cultural, and economic repercussions of post-
independence colonialism.
3 More than 75% of respondents use AI in at least one of the areas across the news
value chain of news gathering, production and distribution.
4 Increasing efficiency and productivity to free up journalists for more creative work
were the main drivers for AI integration for more than half the respondents.
5 Around a third of the respondents said they had an institutional AI strategy or were
currently developing one.
6 Newsrooms have a wide range of approaches to AI strategy, depending on their size,
mission, and access to resources. Some early adopters are currently focusing on
achieving AI interoperability with existing systems, others have adopted a case-by-
case approach, and some media development organisations are working towards
building AI capacity in regions with low AI literacy.
7 Around a third of respondents believe their organisations are ready to deal with the
challenges of AI adoption in journalism, while almost half said they were only partially
ready or not ready yet.
8 Many respondents said AI integration is changing existing roles within the newsroom
through training and upskilling. Along the same lines, AI is changing the nature of a
journalist’s role and the skills that are sought after.
9 As we saw in our 2019 report, financial constraints and technical difficulties remain
the most pressing challenges for integrating AI technologies in the newsroom.
10 Ethical concerns are still significant for our respondents; many advocate for
explainable AI and setting ethical guidelines to mitigate algorithmic bias.
11 Setting de-biasing techniques emerged as a highly challenging area for
most respondents.

12 Cultural resistance, fears of job displacement, and scepticism of AI technologies
cannot be discounted.
13 Across the board, respondents noted that mitigating AI integration challenges requires
bridging knowledge gaps among various teams in the newsroom. Similarly, cross-
department collaboration was seen as necessary for achieving effective AI adoption.
14 The challenge of keeping pace with the rapid evolution of AI was consistently
mentioned throughout the survey.
15 About 40% of respondents said their approach to AI has not changed over the past
few years, either because they are still in the beginning of their AI journey or because
AI integration remains limited in their newsrooms. Concurrently, around a quarter said
their organisation’s approach to AI has evolved; they have gained hands-on experience
that helps them think more realistically about AI.
16 More than 60% of respondents are concerned about the ethical implications of AI
integration for editorial quality and other aspects of journalism. Journalists are trying
to figure out how to integrate AI technologies in their work while upholding journalistic
values like accuracy, fairness, and transparency.
17 Respondents called for transparency both from the designers of AI systems and
technology companies, and from the users – namely newsrooms – towards their audiences.
18 Journalists and mediamakers continued to stress the need for a ‘human in the loop
approach,’ in line with the results in our 2019 survey.
19 There are fears that AI technologies will further commercialise journalism,
boosting poor-quality and polarising content, leading to a further decline in public
trust in journalism.
20 Tech companies are driving innovation in AI and other technologies, but survey
participants voiced concerns about their profit-driven nature, the concentration of
power they enjoy, and their lack of transparency.
21 Around 80% of the respondents expect a larger role for AI in their newsrooms
in the future.
22 Survey participants expect AI to influence four main areas:
1 Fact-checking and disinformation analysis
2 Content personalisation and automation
3 Text summarisation and generation
4 Using chatbots to conduct preliminary interviews and gauge public sentiment
on issues

23 There are concerns that AI will exacerbate sustainability challenges facing less-
resourced newsrooms which are still finding their feet in a highly digitised world and
an increasingly AI-powered industry.
24 Almost 43% of responses emphasised the importance of training journalists and other
personnel in AI literacy and other nascent skills like prompt engineering.
25 The vast majority welcomed more collaboration between newsrooms and other
media organisations and academic institutions, hoping it would help lessen the
disparity between small and large newsrooms, as well as regionally between
newsrooms in Global North and Global South countries.
26 The need for a balancing act between tech and journalism, a theme that also emerged
in our 2019 survey, remains imperative to a future where AI technologies are leveraged
to serve journalism and its mission.
27 The vast majority of respondents, around 85%, have at least experimented with
generative AI (genAI) technologies in a range of ways, such as writing code, image
generation, and authoring summaries.
28 Some are apprehensive about using genAI in editorial tasks, while others are using
it regularly in coding, headline generation, and search engine optimisation.
29 There was a high level of agreement among participants that genAI presented a
new set of opportunities not provided by traditional AI. They highlighted some of
the affordances of genAI, such as accessibility and low requirements for advanced
technical skills.
30 Respondents were much more divided – almost half were not sure – as to whether
genAI also presented a new set of challenges. Some believe genAI presents similar
challenges to traditional AI, such as algorithmic bias, but raises the risk ceiling to a
new level.
31 Newsrooms globally contend with challenges related to AI integration, but the
challenges are more pronounced for newsrooms in the Global South. Respondents
highlighted language, infrastructural, and political challenges.

Introduction
How Did We Get Here?

Artificial Intelligence in journalism has been significant for some years. The LSE
JournalismAI project started back in 2019 and our first global report published in the
same year showed that it was a key emerging set of technologies. AI was producing
efficiencies for newswork and it was also creating opportunities for new practices and
products or services.
We showed in that previous report that a range of news organisations were using AI
across the journalism process from newsgathering, content creation and distribution,
to marketing and revenue-gathering. A varied set of technologies were being used, with
programmes based on training software to manipulate data. Advances in machine-
learning and Natural Language Processing (NLP) enabled newsrooms to build or adapt
tools and systems to support their journalism.
Generally, these were large-scale but relatively basic functions such as scraping
social media or automating very simple content creation. It was used by investigative
journalists to comb through large document leaks or to help automate paywalls and to
personalise content in straightforward ways. Some uses of machine learning – such as
search – were so routine and universal they were taken for granted.
In 2019 we found that news organisations were facing various challenges in adopting AI.
There was a lack of general knowledge, specific skills, and resources. There were also
inequalities between big news organisations and smaller ones, especially those in non-
English speaking or less developed markets.
Working with news organisations over the last five years, we could see that the impact
of AI was systemic and accelerating, just as it was in other industries and sectors.
The most successful organisations were those that took a strategic, holistic approach
and who recognised that these technologies required fundamental self-analysis of the
organisation’s capabilities and future planning.
In the wider context it is possible to see AI as a third wave of technological change
for journalism. The first wave was going online, accompanied by the digitalisation of
tools and the shift to mobile. The second wave was the arrival of social media and the
impact that had on content creation, consumption and competition. The technology
platforms now provided much of the infrastructure for journalism and the ‘user’ was
central to its dissemination.

The arrival of generative AI (genAI) in the last year has accelerated all these trends
and created new disruptions. This report is a survey of how news organisations have
continued to develop ‘traditional’ AI and how they are approaching the new challenges of
genAI. Clearly, it presents fresh opportunities, but it has special risks and characteristics.
There are continuities. Most news organisations we spoke to were taking a more
strategic approach to genAI, often based on the lessons from dealing with AI and other
technology beforehand.
It is important to stress that genAI is probably the most rapidly emerging technology for
media in this digital era. Some of the more extreme dystopian critiques and over-heated
marketing hype have distracted from a proper debate about immediate concerns. It
is good that we are now all aware of AI and able to interact directly with it and explore
its force and flaws. It is hoped that we will have a more inclusive debate about what it
means for society in general and journalism in particular.
Journalism is a special practice. On the one hand, around the world it is a sector under
great commercial, political and competitive pressure. It is weak in resources compared
to the giant corporations developing this technology. The potential for deep structural
threats to journalism in the future must be part of our thinking now. On the other hand,
news organisations have shown remarkable resilience and innovation in sustaining and
sometimes thriving despite the challenges they have faced. It might even be that in a
world where genAI is such a power, for ill as well as good, public interest journalism will
be more important than ever.

• Newsroom Definitions of AI + Generative AI


We continue to refer to AI as an umbrella term for a wide variety of related technologies
and to acknowledge that many processes described as AI often incorporate more
conventional technologies. We borrowed the same simple definition of AI we used in our
2019 report:

Artificial intelligence is a collection of ideas, technologies, and techniques that relate to
a computer system’s capacity to perform tasks normally requiring human intelligence.1

As for generative AI (genAI), which we discuss in detail in Chapter 5:

It is a subfield within Machine Learning (ML), a subfield of AI in its own right, that
involves the generation of new data, such as text, images, or code, based on a given set
of input data.2

We wanted to know if our respondents had an operational definition of AI. As they did in
2019, the responses reflected quite varied understandings of AI, echoing once again the
fluidity of the term and the complexity of the topic.
Some respondents offered a clear operational definition of AI as the use of machines or
computer systems to perform tasks that traditionally required human intelligence. Many
offered technical definitions that centred around the concepts of “automation”, “machine
learning” and “algorithms.” Almost half the respondents used one or more of those terms
in their definitions:

It entails the creation of algorithms and models that allow machines to carry
out operations like speech recognition, visual perception, problem solving, and
decision-making that ordinarily require human intelligence.

Other respondents related their operational definitions of AI to its potential benefits and
their motives for integrating it in the newsroom, such as increasing efficiency, or better
serving the newsroom’s audience and mission:

For us, AI represents a group of technologies that can assist and empower
[our team] by providing insights and automated support across a range of
editorial, operational and communications tasks.

Technologies used to automate gathering and analysis of data that serve our
editorial niche and mission.

Some highlighted the capacity of AI technologies to “learn” or improve themselves:

AI is the use of advanced algorithms that are able to process, interpret, classify and
find patterns in complex and/or large amounts of data in such a way that infers
‘intelligence’ or ‘human-like’ learning.

Several respondents emphasised the importance of ethical considerations in AI
development, while others mentioned concerns about the opacity of AI systems or the
need for human oversight:

The set of technologies, tools, processes... that make it possible to emulate human
capabilities in order to automate or improve them, not always for an ethical or
lawful purpose.

A few respondents said they did not have a working definition of AI yet:

We do not have a collective working definition yet. Mine, as the person in charge
of exploring AI within the newsroom, is that AI is a set of processes that a computer
does to aid and facilitate humans’ work, adding intelligence to it. By no means does it
replace human presence and it should always be checked and accompanied.

This report is presented in seven chapters. In order to facilitate comparisons between
this and the 2019 report, we kept the majority of the chapters the same, with the
exception of two new chapters.
The Introduction gives a brief background of the findings from the 2019 report and a
summative overview of the technological changes seen in the journalism industry in
recent years. We define key issues and summarise what you can expect in this report.
Chapter One focuses on how AI is currently being used by newsrooms. The chapter
looks at how newsrooms are using AI across the news value chain, as well as what has
been working and what has not.
Chapter Two unpacks the AI strategy, or lack thereof, in newsrooms. We look at the types of AI
approaches newsrooms have undertaken, some of the key challenges, and what impact
the technology can have on them.
Chapter Three is also similar to the previous report as we expand on ethics and
editorial policy.
Chapter Four looks to the future and role of AI in journalism.
Chapter Five touches on generative AI and journalism. It is a new chapter that looks at
the current use cases of genAI, as well as its opportunities and challenges.
Chapter Six reflects on the global disparity in AI development and adoption as well as
the challenges faced by the majority of the world’s population in the Global South.
The Conclusion ties all the above-mentioned chapters together and gives a brief analysis
of what all this means for journalism. We conclude the main body of the report with a six-
step roadmap towards an AI strategy that newsrooms could borrow from. You will also
find a glossary, endnotes, references and a list of suggested readings and resources.
This work was funded by the Google News Initiative and carried out by a team led by
Professor Charlie Beckett, director of the LSE’s international journalism think-tank, Polis.
We would like to thank all the journalists, technologists and researchers who took part in
the project. The project was managed by Tshepo Tshabalala, and the lead researcher and
co-author was Mira Yaseen.

Chapter 1
How AI is Being Used
in Journalism Today

1.0 How Are Newsrooms Using AI?


We asked newsrooms how they are using AI technologies today across three areas –
newsgathering, news production and news distribution – which cover all the content
creation stages from ideation to publishing. These three areas often intersect given the
nature of contemporary ‘networked’3 or ‘hybrid’4 journalism. For instance, fact-checking chatbots
are leveraged in news production to validate or refute certain claims. At the same time,
the data collected could assist in detecting misinformation trends and inspire a topic for
a feature article, thus contributing to the newsgathering process.

[Bar chart: share of respondents using AI by area – Newsgathering 75%, News Production 90%, News Distribution 80%]
1.1 Newsgathering
AI applications can assist newsrooms in gathering material from various sources
and help the editorial team gauge an audience’s interests as part of a data-driven
production cycle. The responses revealed that a large majority, almost three quarters of
organisations, use AI tools in newsgathering. The responses focused on two main areas:
1 Optical character recognition (OCR), Speech-to-Text, and Text Extraction:
Using AI tools to automate transcription, extract text from images, and structure
data after gathering.
2 Trend Detection and News Discovery: AI applications that can sift through large
amounts of data and detect patterns, such as data mining.
We list more detailed examples below of these two main areas of application of AI
in newsgathering.

1 Optical character recognition (OCR), Speech-to-Text, and Text Extraction:
The use of AI-powered tools for speech-to-text transcription and automated translation,
such as Colibri.ai, SpeechText.ai, Otter.ai, and Whisper, was a widely cited area of use.
They help streamline the production process and allow newsrooms to engage with
content in different languages:

Transcription services like Otter are invaluable for reporters on deadline, and our
tag tool streamlines production processes for editors.

For others, inaccuracies related to accent or language limitations mean the benefits of
transcription tools are not yet as accessible:

I tried to use an automatic transcription service like Otter.ai to transcribe my
interviews but it was very inaccurate. It struggled to transcribe interviews where
people had an accent.
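To illustrate the kind of transcription workflow respondents describe, here is a minimal sketch using OpenAI’s open-source Whisper model; the openai-whisper package is real, but the file name and model size are placeholder choices, not any surveyed newsroom’s actual pipeline:

# pip install openai-whisper
import whisper

# Larger models ("medium", "large") generally handle accents and
# non-English speech better, at the cost of speed.
model = whisper.load_model("base")

# Transcribe a recorded interview (placeholder file name).
result = model.transcribe("interview.mp3")
print(result["text"])  # full transcript

# Timestamped segments help with pull quotes and subtitles.
for segment in result["segments"]:
    print(f"{segment['start']:.1f}s: {segment['text']}")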

AI technologies provide a universal set of challenges pertaining to ethical and other
considerations that apply to industries and to newsrooms globally. However, early on in
the survey, we began to see an additional set of challenges, such as AI tools’ language
limitations. Newsrooms in Global South countries must contend with these constraints
from the first stage of newsgathering to news production. (More on this in Chapter 6).

2 Trend Detection and News Discovery:
AI applications help journalists uncover issues of interest to audiences in different
regions and get a sense of what they think about particular issues. Several respondents
mentioned using tools like Google Trends, web scraping, and data mining services like
Dataminr and Rapidminer to identify trending topics, detect news of interest, and gather
data from various sources to uncover stories. Here are some examples from our survey:

CrowdTangle is one of the tools we use regularly. It searches various social
media posts for ‘viral’ or talked-about posts.

We use software like Rapidminer and other Google initiatives to mine data to
detect trending and news of interest around the world.

We use speech-to-text algorithms to monitor public discourse, mainly on the
bigger broadcasters from the country (radio, TV, streaming). We also monitor viral
social media posts to identify possible disinformation circulating on these platforms.

In addition to text automation and trend detection, respondents provided various
other uses of AI technologies that help streamline routine, daily processes previously
performed manually or through lengthy processes, such as data classification
and content organisation. Examples from our respondents include tag generation,
notification services, chatbots and language models that assist in automating responses
and extracting data.
The responses reflected a general inclination to use third-party tools in newsgathering.
Few newsrooms, however, mentioned developing their own in-house automation
tools, like web scrapers or crawlers, to meet their specific needs:

Mostly automations by webhooks feeding into Slack. We have also built our own
scraping services feeding us information when a certain threshold is reached in the
data that we scrape.
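A rough sketch of that threshold-alert pattern, assuming the requests and beautifulsoup4 packages; the page URL, CSS selector, threshold and Slack webhook address are all hypothetical placeholders:

# pip install requests beautifulsoup4
import requests
from bs4 import BeautifulSoup

PAGE_URL = "https://example.com/live-figures"     # hypothetical source page
WEBHOOK = "https://hooks.slack.com/services/..."  # Slack incoming webhook (placeholder)
THRESHOLD = 100                                   # example alert level

# Scrape the figure being monitored (selector is invented).
html = requests.get(PAGE_URL, timeout=30).text
soup = BeautifulSoup(html, "html.parser")
value = int(soup.select_one(".monitored-count").text.replace(",", ""))

# Post into Slack only when the threshold is reached.
if value >= THRESHOLD:
    requests.post(WEBHOOK, json={
        "text": f"Scraper alert: monitored figure reached {value}."
    })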

We have an internal tool that includes an automated tagger for news website
articles and social media posts (which tags articles with topics/keywords) to collect
specific discourses on issues of accountability and classify them by topics. We use
neural networks for natural language sentiment analysis of refugee-related data
using Google Cloud APIs. We also use other APIs for analytics, such as the Lebanon
protests platform, to collect data on protest discourses and analyse main influences
(genders and job positions in profiles).
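For reference, a minimal sketch of the kind of Google Cloud Natural Language sentiment call that response describes, assuming the google-cloud-language package and configured credentials; the input text is invented:

# pip install google-cloud-language   (requires Google Cloud credentials)
from google.cloud import language_v1

client = language_v1.LanguageServiceClient()

text = "Example social media post about the protests."  # placeholder input
document = language_v1.Document(
    content=text, type_=language_v1.Document.Type.PLAIN_TEXT
)

response = client.analyze_sentiment(request={"document": document})
sentiment = response.document_sentiment
# score runs from -1.0 (negative) to 1.0 (positive);
# magnitude reflects the overall strength of emotion.
print(sentiment.score, sentiment.magnitude)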

These tools do not necessarily use AI technologies. Many processes described as AI
often incorporate more conventional technologies. These are systems that are created or
‘trained’ by humans.
Sometimes endeavours take on the form of collaborative projects with other organisations:

We have developed a tool with the OCCRP [Organized Crime and Corruption
Reporting Project] team to “Arabize” their engine by extracting hundreds of
thousands of pages to the ARIJ [Arab Reporters for Investigative Journalism]
datadesk using Google Optical Character Recognition (OCR) services, and we built
[our own] crawler to collect the data from specific resources to be cleansed by
researchers and journalists, then uploaded to our domain.

1.2 News Production


AI can be a valuable resource in content creation at a detailed level. The rise of publicly
accessible generative AI (genAI) technologies like ChatGPT has opened new possibilities
(and challenges) for the ways in which AI can be leveraged in content creation, as the
responses demonstrated. Around 90% said they used AI technologies in news production
in a variety of ways, such as fact-checking and proofreading, using natural language
processing (NLP) applications, trend analysis, and writing summaries and code using
genAI technologies.
For instance, NLP applications are assisting with factual claim-checking. They identify
claims and match them with previously fact-checked ones. Reverse-image search is also
used in verification:

We are starting to use NLP algorithms to assist journalists in finding fact-
checkable statements. This system includes collecting the most recent official data
to assist journalists during the fact-checking process.
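Claim matching of this kind is often implemented with sentence-embedding similarity. A minimal sketch using the sentence-transformers library, with invented example claims, not the respondent’s actual system:

# pip install sentence-transformers
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

# Previously fact-checked claims from an archive (invented examples).
archive = [
    "The city budget doubled between 2019 and 2021.",
    "Vaccine X was approved without clinical trials.",
]
new_claim = "Officials doubled the municipal budget in two years."

# Embed and compare; high cosine similarity flags a likely match
# for a human fact-checker to review.
archive_emb = model.encode(archive, convert_to_tensor=True)
claim_emb = model.encode(new_claim, convert_to_tensor=True)
scores = util.cos_sim(claim_emb, archive_emb)[0]

best = int(scores.argmax())
print(f"Closest match ({scores[best].item():.2f}): {archive[best]}")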

Newsrooms are already experimenting with and using genAI technologies like ChatGPT
in content production tasks, including the production of summaries, headlines, visual
storytelling, targeted newsletters and in assessing different data sources:

Our CMS has a Watson-powered tagging engine. We’re working on a ChatGPT-
powered headline suggestion tool, as well, but it’s in the early phases.

[We use] GPT-4 to create summaries and translation of articles written
by journalists for use on various platforms. We are also experimenting with
AI-generated images, headline alternatives, tagging articles, audio and
video production.

GenAI tools like ChatGPT are also being used to assist with code writing and
source assessments:

For production I am using ChatGPT to help with code writing. I have made a few
games/quizzes where even though the code is not completely written by ChatGPT, it
has certainly written quite a few functions.

We have also used either the ChatGPT interface or the OpenAI API to rationalise
different data sources.
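As a hedged illustration of what such a call can look like with the official openai Python package; the model choice and prompt are ours, not any respondent’s production setup:

# pip install openai   (requires an OPENAI_API_KEY environment variable)
from openai import OpenAI

client = OpenAI()

article_text = "..."  # the story body would go here

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system",
         "content": "You are a sub-editor. Suggest three factual, "
                    "non-clickbait headlines for the article provided."},
        {"role": "user", "content": article_text},
    ],
)
print(response.choices[0].message.content)

# Editorial review remains essential: suggestions are drafts,
# never published unchecked.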

AI technologies like Grammarly and spell checking tools are employed for editing,
proofreading, and improving the quality of written content.

1.3 News Distribution


Around 80% of respondents reported using AI technologies in news distribution, a
slightly smaller percentage compared to production, but the range of use cases was the
widest. Overall, the aim of using AI in distribution is to achieve higher audience reach
and better engagement. News distribution was also the most frequently mentioned area
impacted by AI-powered technologies in the newsroom, with 20% of respondents listing
it as one of the areas most impacted by AI technologies.

Respondents shared examples of using personalisation and recommendation systems
to match content more accurately and at scale with interested audiences – or, the other
way around, tailoring content to a specific medium or audience:

We have a multi-layered set of rules for customising our content to individual
news outlets, so it meets all of their internal rules for word-use: from British or
American spelling, to rules regarding biased words, opinionated words, cliches,
hyphenated words, and so on.
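As a toy illustration of such a rule layer – here only a British-to-American spelling pass; the word list is invented and real systems are far more elaborate:

import re

# Per-outlet word-use rules (illustrative subset).
UK_TO_US = {"colour": "color", "organise": "organize", "programme": "program"}

def apply_house_style(text: str, rules: dict[str, str]) -> str:
    # Replace whole words only, preserving simple capitalisation.
    for uk, us in rules.items():
        text = re.sub(rf"\b{uk}\b", us, text)
        text = re.sub(rf"\b{uk.capitalize()}\b", us.capitalize(), text)
    return text

print(apply_house_style("The programme will organise colour trials.", UK_TO_US))
# -> "The program will organize color trials."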

Recommender system for podcast episodes, using the EBU Peach engine.

Text-to-speech technology is another AI application used to adapt content to other
mediums, such as converting text to audio:

We are using voicebots to convert our text stories to audio format.
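A minimal sketch of text-to-audio conversion using the gTTS (Google Text-to-Speech) package; production voicebots would typically use higher-quality neural voices, and the story text here is invented:

# pip install gTTS
from gtts import gTTS

story_text = "Local council approves new housing plan."  # placeholder story
tts = gTTS(story_text, lang="en")
tts.save("story.mp3")  # audio version ready for podcast feeds or apps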

AI-powered social media distribution tools like Echobox and SocialFlow were
mentioned by several respondents, who said they used them to optimise social media
content scheduling.
Respondents also mentioned using chatbots to create more personalised experiences
and achieve faster response rates:

The WhatsApp chatbot is also used for news distribution, as users immediately
receive a link to our debunk if we have already verified the content they sent. Also, it
sends daily text and audio summaries with Maldita’s top stories.5

Enhancing the visibility of content in searches is key for all digital content, not least for
newsrooms. AI-driven SEO tools can help newsrooms boost discoverability and better
understand their audiences’ interests:

We mostly make use of SEO to help increase the visibility of our stories on our
website. We have found that human interest local stories tend to do better than stories
about celebrities or other topics.

Ubersuggest6 helps me see which keywords are highly searched online, Google
Discover shows me which stories and keywords are trending, CrowdTangle shows
me which social media posts are over performing. This helps me create relevant
news stories that people are interested in. Using SEO keywords that are searched
often increases the likelihood of the stories reaching a higher number of people.
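For programmatic keyword monitoring of this kind, one option is the unofficial pytrends wrapper around Google Trends; a minimal sketch with invented keywords (being unofficial, the interface can change without notice):

# pip install pytrends   (unofficial Google Trends API)
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-US")

# Compare search interest for candidate story keywords over the past week.
pytrends.build_payload(["housing crisis", "interest rates"], timeframe="now 7-d")
interest = pytrends.interest_over_time()

print(interest.tail())  # recent relative search volumes (0-100 scale)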

We asked our respondents to share some of the impressive applications of AI
technology they have come across which are used by media organisations.
Here is a selection of the most common examples:

1 BloombergGPT: A large-scale language model trained on financial data
to support various NLP tasks such as summarising financial documents,
generating reports, and providing insights on market trends.7
2 The Washington Post’s Heliograf: Automates the process of generating short
news articles from structured data, such as sports scores and earnings reports
to allow journalists to focus on more in-depth reporting.8
3 The Times of London’s JAMES: An AI-powered content management system
that uses ML algorithms to analyse user behaviour and interests to deliver
personalised news content.9
4 Czech Radio’s Digital Writer: An AI-powered tool that generates news articles
from structured data, helping automate news production by converting data into
human-readable news stories.10
5 Reuters’s Lynx Insight: This platform utilises AI algorithms to analyse massive
data sets and provides journalists with valuable results and background
information to support investigative reporting.11
6 The Washington Post’s Arc XP: A suite of tools for content management, publishing,
and audience engagement that enables enterprise companies, retail brands, and
media and entertainment organisations to create and distribute content, drive
digital commerce, and deliver powerful multi-channel experiences.12
7 Newtral’s Claim Hunter: The platform listens to and transcribes audio content,
detecting statements that need fact-checking and automates the process
of identifying claims made in speeches, interviews, or other audio sources,
enabling efficient fact-checking.13

8 The Reuters News Tracer: It utilises machine learning algorithms to rapidly
identify breaking news stories and verify their credibility. It helps journalists by
sifting through massive amounts of data, social media posts, and eyewitness
reports to deliver reliable and real-time news updates.14
9 Newtral’s automated fact-checking tool: The platform uses NLP and machine
learning techniques to identify potentially false or misleading information. This
tool aims to improve the efficiency and accuracy of fact-checking processes.15
10 Duke Reporters’ Lab’s FactStream: FactStream is an automated fact-checking
system developed by the Duke Reporters’ Lab which identifies false claims in live
speeches, debates, and public events by comparing them to previously fact-
checked claims to provide instant feedback on their accuracy.16

1.4 Why Newsrooms Use AI
Clearly, the integration of AI applications has the potential to streamline various aspects
of journalistic work. However, we sought to delve into the underlying incentives of the
respondents in employing AI. More than half cited increasing efficiency and enhancing
productivity as core objectives driving their adoption of AI. They said they hoped to
automate monotonous and repetitive tasks, thereby streamlining workflows and allowing
journalists to engage in “more creative, relevant, and innovative work”:

Many of our traditional news processes can be quite laborious and are reliant on
human instinct that can vary drastically from person-to-person. Machine learning (or
AI) should ideally streamline many of those newsroom processes, give insight into
the viability of current processes and ultimately free up the ‘human element’ to
focus on other areas.

Almost all our use cases for AI are to speed up news production. It’s always
about speeding up, I don’t think I’ve had one conversation about using it to
improve quality.

For the fact-checkers at Madrid-based Maldita, the impact of AI tools was felt strongly
during the Covid-19 pandemic, as they helped accelerate and scale the organisation’s
response to Covid-19 misinformation:

By automating some tasks we are able to dedicate more time to other important
things like fact-checking or investigations. It also allows our readers to receive quick
answers when they inquire about a potential hoax. For instance, during the first
weeks of the Covid-19 pandemic, our WhatsApp service was manual, meaning that
a Maldita journalist would have to filter through all messages and count how many
times content had been sent to us. We went from receiving 200 daily queries to over
2,000 during lockdown, which meant that we could simply not get back to all users
at a time when they desperately needed answers (some of the disinformation they
were receiving could be seriously harmful for their health).

Around a third of respondents said they hoped AI technologies would help them reach a
wider audience, personalise reader experiences, and enhance audience engagement, a
theme that featured strongly in the previous section about AI uses in news distribution:

We hope to gather more insights in understanding our audiences, as well as wider
distribution of our newsletters.

To increase audience engagement on all social media platforms and on the
news site itself. There are specific pageview targets to hit every month, and
monitoring analytics and what people are interested in helps me do so.

1.5 What is Working and What is Not


Overall, the successful use of AI technologies varied widely among respondents, though
transcription and audio-editing tools were seen as advantageous by many. Web scraping,
social media monitoring, image generation, recommendation systems, and other
distribution tools were also mentioned as successful AI applications:

Scraping web pages and creating Slack alerts based on filters have been the
most successful applications so far.

Proofreading and basic copy editing have been very successful; video
production using Stable Diffusion has also worked well.

Recommender systems and NLP systems affecting distribution have been the
most significant success.

Automated transcription and claim-checking have proven to be successful.

Respondents highlighted that even with successful AI applications, testing and
improvement are continuous, reflecting the evolving nature of AI and the consistent need
for human intervention:

We’ve successfully categorised hundreds of thousands if not millions of
communications, so that was particularly successful, but we did hit a limit with
traditional machine learning approaches, and we’re interested to see if we can
develop a stronger strategy that integrates a range of approaches.

We have been very successful in the part of automatically monitoring Twitter
and in the part of detecting phrases extracted from audio/video. However, we are
having some difficulties in detecting political tension/polarisation and we are still
trying to improve our claim matching system, since sentence similarity models have
some challenges, such as temporality (maybe one claim that was false in the past
is true now) …We are working on it.

Many respondents, often smaller, emerging newsrooms, are still in the early stages of
AI adoption:

It is still too early to ascertain any failures, we have been testing many individual
tools and integrations, most of them have been helpful but none of them are heavily
integrated into our workflow.

Newsrooms in Global South countries expressed challenges related to language or
accents when, for instance, a tool is used outside of its intended market. We will deal
with this in more detail in Chapter 6.

Other than language challenges, very few respondents mentioned failures in specific
AI applications. However, when discussed, some respondents attributed any failures to
organisational issues, rather than to technical limitations:

The biggest failure has been slow progression on already identified use cases
because of organisational issues, lack of focus and resources.

For some of our third-party available machine learning offerings, we found that
we didn’t have a strong onboarding process or clear explanations, so uptake has
been slower than anticipated.

One respondent explained how their organisation decided to discontinue their work on
an “automated service to write short stories about companies’ performance in the stock
market,” because it did not gain popularity with the audience:

[It] did not create enough user value (the users rather looked at the stock graph),
and when the pandemic hit and all stocks went south, our thresholds were reached
for almost all companies, spamming our users.

Chapter 2
AI Strategy

2.0 The Need for Strategy


We have seen how AI technologies are being used or explored in newsrooms in
various ways across the production process. To ensure their best use, newsrooms
need to develop a more strategic approach to adoption. Our survey showed that many
newsrooms had not yet evolved a more formal strategy. And where they had, these
varied according to organisational circumstances and policies. Generally, newsrooms
are adopting a more strategic approach, partly in reaction to the challenge of genAI, but
this is still a fluid area and strategies have to be flexible.

2.1 Newsrooms’ AI Strategies


Around a third of the respondents said their organisation had an AI strategy or were
currently developing one, similar to the results we saw in our 2019 survey. Responses
to this question strongly reflected the diversity among the participants in terms of
experience with AI, strategy-building objectives, and approaches. Some newsrooms
have had AI technologies integrated at an institutional level for some time now, and are
conducting strategy reviews to better leverage AI across their organisations.
For instance, The Associated Press (AP) is conducting a strategy review to better
understand areas for opportunities in AI and where it is not necessarily the required solution:

We are currently developing a plan for an AI strategy that cuts across all
departments at AP. We have one working group that has been tasked with reviewing
aspects of the news operation for opportunities and avoidance of AI. A tool or
service needs to meet our journalistic standards and business mission to support
AP members and customers.

Some organisations take on a two-pronged approach in their AI strategy, working with
technology partners while enhancing their own in-house capabilities:

We partner with vendors that are moving fast, so we can move quickly,
too. Meanwhile, we are building in-house capabilities so we can have control
and ownership.

Jordan-based ARIJ, a media development hub in the Arab region, recently launched its AI
strategy to guide the organisation internally, but they plan to make it accessible to help
guide other Arabic-speaking media organisations in their own AI integration efforts:

[We] will provide this strategy in a playbook style to all Arabic-speaking
newsrooms so they can benefit from it, in developing their own strategy.

Depending on many factors, a formal strategy might not be needed at all. Some
respondents said they adopt a case-by-case approach to AI, without necessarily
developing a particular institutional-level strategy. They focus on how best to achieve
their objectives, whether through AI or other, more conventional technologies:

It will very often be deployed as a feature of an existing product (i.e., via an
upgrade); in other cases we will build our own models. These different use cases do
not necessarily need to be joined up in an overall “strategy” just because they involve
the same underlying technology.

Even those with comprehensive AI strategies, such as AfricaBrief, highlight the need
to incorporate training and to continuously evolve their strategy to adapt to nascent AI
technologies, such as generative AI. Their responses reflect the challenge of keeping up
with the fast-paced evolution of AI technologies, a consistent theme throughout the survey:

AfricaBrief’s vision is to enhance news production using AI technologies, with
objectives including news aggregation automation, data analysis for insights, and
personalised content. Their roadmap includes phased implementation of ChatGPT
for news gathering and NLP for data analysis. Resource allocation is dedicated to AI
investments and talent development. Data management follows regulations for
privacy and security. Ethical considerations are addressed, including bias mitigation.
Monitoring and evaluation are performed using performance metrics. Collaboration
and partnerships are sought for staying updated with advancements and best
practices in AI for news production.

Right now we are fine-tuning our strategy in order to take into account the recent
development of GPT.

Several newsrooms that have not developed an AI strategy said they plan to do so in the
near future. For some, the absence of an AI strategy seems to be the result of competing
newsroom priorities and a lack of resources, rather than a lack of interest. Respondents
expressed their newsrooms’ support for individual efforts in experimentation, reflecting the
fact that many newsrooms have not reached an institutional level of AI integration:

Our organisation does not have a formal strategy for AI-related activities.
We rely on the initiative and enthusiasm of some of our colleagues who are
interested in AI.

Not yet. We have been training some team members and searching for funding
to design and develop products that involve [AI].

The responsibility to develop and lead AI integration differs from one newsroom to another:

Who Leads on AI Strategy and Implementation?

A dedicated cross-functional team: 26%
Innovation/Digital teams: 29%
Tech/IT departments: 11%
Data Team: 9%
Other*: 26%

*Includes other departments, such as IT, business, management, editorial and product.

2.2 How Newsroom Processes and Roles are Affected by AI
Whether they are at the beginning of their AI integration journey or have more experience
with AI technologies, we found newsrooms are dedicating time and resources to building
their AI capacities. We asked respondents if AI integration efforts had impacted their
workflows and processes, as well as existing roles in the newsroom.
Around a quarter of respondents said the impact of AI adoption on their workflows and
processes in the newsroom has been significant. It has helped cut costs; streamlined
and scaled processes; and increased efficiency in fact-checking, social media monitoring,
content distribution, and accounting:

We saved more than 80% in the process of monitoring and searching for
verifiable phrases … We are convinced that this field will have a more positive impact
in the future.

AI has impacted our news production processes, automating tasks like news
gathering and content creation with ChatGPT. It has also streamlined internal
workflows, improved productivity, and holds potential for advanced tasks like NLP
and data analysis.

One respondent explained how the automation of some processes using AI technologies
changes the nature of their work, rather than replaces it:

AI optimises distribution onsite and on social media. While we are no longer
scheduling all posts individually or curating every part of every homepage, that work
has shifted. We are more easily able to think big-picture, and to change outcomes
more swiftly by adjusting broader rules that may affect dozens of posts or positions
on a page. Put another way, AI streamlines workflows, but it doesn’t replace work
entirely. It changes the nature of the work and expands our impact.

Respondents seemed to appreciate how AI technologies have allowed them to reallocate
journalists’ time to more complex editorial tasks:

Freeing up time for journalists to continue doing their job is the greatest
impact achieved.

The vast majority of respondents (almost 75%) are still in the early stages of AI
adoption and have not witnessed a noticeable impact yet, but expect to in the future:

[It] will definitely have an impact in the future as AI takes over more of the
mundane tasks of newsgathering.

Currently, the impact of AI is not yet significant and widespread, but it already
emerges as an enabler, for sure.

Like more experienced news organisations, they hope AI integration will enable journalists
to spend more time on field work and special projects:

Right now, it does not have a significant impact. However, the impact may be quite
significant if we embrace AI… This will free the journalists to work on other sectors,
especially when they go in the field and remote areas to conduct very unique
interviews and videos for very unique stories that our audiences are interested in.

Responses to whether AI technologies have impacted existing roles in newsrooms followed a
similar pattern; around 60% said AI integration has not done so. Many, however, expected
this to change in the future:

Not yet, but we are working on new vacancies for AI that include prompt
engineers, AI and ML engineers, and data scientists.

Not yet because it’s a transition still in early stages. AI is augmenting rather than
totally changing roles.

I could see us creating more AI-specific roles in the future, likely as news
technologists who work closely with journalists.

Some newsrooms said AI integration has led to the creation of new roles related to AI in a
variety of areas, such as data analytics:

AP’s NLG of earnings reports nearly a decade ago liberated our reporters from
the grind of churning out rote earnings updates and freed them up to do more
meaningful journalism. More recently AP has created three new roles that focus on
AI across news operations and products.

Yes, we have created at least one new role focused on managing AI experiences,
and expect we may have more, but the growth is slow and deliberate. As much as
we can, we are leveraging the talent we have already. For instance: Our real estate
and development editors lead our real estate AI content creation. One of our leading
digital audience producers is overseeing our social media optimisation.

Yes, there was a need to allocate a dedicated data analyst within the team.

Noticeably, many organisations underscored that AI integration is changing existing roles
within the organisation, through training and upskilling in AI literacy and specific skills like
data analysis or prompt engineering, rather than creating entirely new roles:

Overall, even though the adoption of AI-powered technologies has not always
resulted in the development of new, AI-specific roles, it has prompted the evolution
of current roles and the acquisition of new skills by the staff in order to effectively
use AI technologies in their journalistic endeavours.

Some have already begun building prompt engineering capabilities, but not only within the
IT department:

We have convinced our IT department that while prompt engineering does
require a certain technological understanding, IT staff are not equipped to assess
the result when it comes to journalistic production. On the other hand, designing a
successful prompt, “getting the machine to tell what I want it to tell”, has some
similarities with journalistic processes. We are already training a journalist in
prompt design.

Other responses echoed a similar need to engage journalists and build on their
capabilities in AI and digital skills, as opposed to relying solely on having the expertise
within the IT department:

Yes, there are new AI-specific roles. The digital team helps us monitor trends,
but as the digital editor, I do that too. Newsgathering and distribution have also
changed. I check trends and write content based on that.

Yes, the journalists had to train the algorithms and, for doing so, they received
training about how the algorithm works, what kind of data we need and how to
gain accuracy. On the other side, the journalistic team have shared the editorial
criteria that guide their decisions with the engineering team, by providing keys on
why and what is considered a factual claim.

In line with these responses, it is no surprise that hiring criteria in newsrooms are
changing, as one respondent remarked:

I think the impact has been more felt in considering who to hire and who not to
hire. I would need fewer writers once AI is deployed.

The responses reflect the ongoing challenge of balancing technical and journalistic skills
throughout AI integration in the newsroom.

2.3 Ready for AI?


Back in 2019 our report said that:

These are still relatively new, diverse and complex technologies, coming in the
wake of a series of other digital challenges. So it is not surprising that respondents
are deeply divided on AI readiness. They split approximately in half between those
who felt they were catching the wave and others who hadn’t done more than dip
their toes in the water. There was a vein of optimism, especially as many of our
respondents were early-adopters who feel they have already made the first steps.

This year’s report also showed a disparity in AI readiness across newsrooms. Over the
last five years we have observed a broad increase in preparedness but the arrival of genAI
means news organisations have a fresh set of challenges.
Many newsrooms, around a 1/3, expressed confidence in their readiness to deal with
the challenges of AI adoption in journalism. They emphasised their efforts in advancing
tools and technologies to facilitate their work, as well as their ability to adapt quickly to
changing technologies. They believe they have inquisitive, skilled personnel capable of
utilising AI effectively:

Yes, we are an online-only organisation and are used to quickly changing technologies and adapting fast.

We have quite a few people who are interested in AI and skilled in using
technology. I don’t see a problem in the technical challenges.

A large proportion, around 53%, said they were not ready yet or only partially ready to deal
with the challenges of AI integration in the newsroom. They cited financial constraints and
the lack of technical expertise as key challenges. The next section explores this in detail.

2.4 The Strategic Challenges to AI Adoption


Financial constraints and technical challenges were identified as primary challenges to AI
adoption. Sometimes, one leads to the other. For instance, technical issues sometimes
stem from a lack of resources. Smaller and emerging news organisations often struggle
to allocate the necessary funding to hire qualified personnel to implement and maintain
AI systems. Similarly, newsrooms face challenges in dedicating time and resources to
designing and implementing upskilling programmes in AI.

The Most Pressing Challenges for AI Integration in the Newsroom

[Chart: Technical challenges 41%; Ethical challenges 25%; Cultural challenges 22%; Managerial challenges 12%]

Newsrooms were not entirely sure which skillset(s) to look for in technical personnel.
Newsrooms with several years of experience with AI integration in the newsroom
mentioned specifically the challenge of achieving compatibility and interoperability with
existing systems and platforms:

The main blocker to deploying an ML-based tagging system, for example, is technical; it needs to be integrated with other systems.

Tested tools have to be implemented into current business structures, which requires quite a lot of development and testing.
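
As a rough illustration of the kind of ML-based tagging step these respondents mention, the sketch below uses a zero-shot classifier from the Hugging Face transformers library. The taxonomy labels are our own illustrative assumptions; as the respondents note, the real blocker is usually the integration around a call like this, not the call itself.

```python
# Minimal sketch of an ML-based tagging step, assuming the Hugging Face
# `transformers` library is installed. The label set is illustrative.
from transformers import pipeline

# Zero-shot classification can tag articles against an editorial taxonomy
# without first training a custom model.
tagger = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

LABELS = ["politics", "business", "sport", "culture", "science"]

def tag_article(text: str, threshold: float = 0.5) -> list:
    """Return taxonomy labels scoring above the threshold for one article."""
    result = tagger(text, candidate_labels=LABELS, multi_label=True)
    return [label for label, score in zip(result["labels"], result["scores"])
            if score >= threshold]

# In production the hard part is rarely this call but the wiring around it:
# mapping labels to CMS tag IDs, re-tagging on edits, and keeping the
# taxonomy in sync across systems.
```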

These responses highlight the huge strides some newsrooms have made in AI adoption at an institutional level. In our 2019 report, many of the respondents, including early adopters, were at the beginning of their AI journeys. Technical challenges focused on which projects to prioritise, how to demystify AI, and how to provide general AI literacy training to personnel.
The responses also highlight a disparity between smaller, emerging newsrooms in Global South countries on the one hand, and large, well-resourced, more experienced news organisations in Global North countries on the other. While responses from the former focused on finding the resources to hire the technical expertise needed, the latter have already deployed AI technologies in various areas and are now focused on achieving interoperability:

We are a mid-sized regional nonprofit startup with a strong engineering team and an innovative organisational culture… But, we have nowhere near the technical power of large national organisations.

Mitigating AI integration challenges goes beyond hiring the right technical staff. It
requires bridging knowledge gaps that exist among various teams in the newsroom,
a challenge that is more consistent across the board. Responses reflected a need to
enhance AI literacy and technical skills among journalists and technical staff alike:

[We] have varying levels of understanding of what AI is. In a team of about 20 people, less than a quarter have training on it and we have yet to get everybody up to speed. I think my organisation will be equipped to take advantage of its potential once we are all on the same page.

One of the greatest challenges we have is that the Innovations/Technical team
is not very informed on AI and the solutions AI can bring to journalism. We don’t
have in-house experts who can help in AI coding and training. But some have shown
interest in learning and we hope soon they will be well versed with AI.

Some organisations expressed a desire to collaborate with other, more experienced organisations, to fill the knowledge gap:

Our greatest challenge lies in the technical expertise and understanding [what’s] needed to introduce and create AI systems. We may not have to create the system from scratch, but we do need to have trustworthy partners if we will be outsourcing them.

The challenge of keeping pace with the rapid evolution of AI is also experienced more
evenly by most newsrooms, and is testament to the need for continuous adaptation, a
theme we have seen consistently in the survey:

The technologies are evolving so quickly it’s difficult to know what technology to
take on for fear it will soon be outdated.

On technical challenges: Given our lean technical resources, it’s always challenging to support new integrations that may not have immediate ROI. AI implementations take a lot of setup work, so we’re continuously looking at ways to increase speed-to-market.

Ethical challenges were also prominent in discussions around AI adoption in newsrooms. Our respondents raised concerns about the transparency and explainability of AI algorithms:

The ethical question is the most important because you have to keep it
transparent for the readers.

Algorithmic bias is another concern:

If not properly addressed, AI algorithms can perpetuate biases and discrimination, amplifying societal inequalities.

Respondents stressed the need for guidelines, standards, and regulations to ensure
the ethical use of AI in newsrooms and to address the potential risks associated with
its implementation:

As a news organisation, we are wary of the ethical implications and believe we need to have clear guidelines in place before rolling it out widely in terms of newsgathering and production.

Despite a general enthusiasm for AI integration, cultural challenges remain a noticeable hurdle. Some respondents noted scepticism, resistance to integrating AI technologies in their work, fear of job displacement and concerns about the way AI is changing the nature of the journalistic profession:

Our newsroom managers are great allies, and there are many champions for
experimentation among our ranks. In our newsrooms, there is some fear about the
implications of AI – the impact on jobs, products and subscribers.

Most people understand the big changes happening with AI, but changing
workflows is always hard for any given profession.

Managerial challenges revolved around organisational structures and competing newsroom priorities:

The larger the organisation, with multiple layers of management, the harder it is to experiment without lots of meetings and presentations.

No overarching management strategies or training being shared with the editorial team, and a general fear that using AI will contribute to our own future redundancy.

Resistance to, and over-enthusiasm about, AI were also mentioned:

Management seems to want to insist on the use of AI even when it is not necessary.

2.5 Have Newsrooms’ Approaches to AI Integration Changed?


Around 40% of organisations said their approach to AI technologies in the newsroom has not changed much since our last report in 2019. Many are still in the early stages of implementation, while in others AI use remains limited to one department or a small number of staff, which was not sufficient to steer the institutional approach to AI:

Not yet, because so far it has mainly involved the IT department and just a handful
of journalists dedicated to testing and validating them in limited contexts.

No, we have not because we are still at the pilot stage.

There are no significant changes since we have very limited use of AI.

I think we do not yet have the collective consciousness that the tools that we
use are AI, so no, it has not changed.

However, around a quarter of respondents said their organisation’s approach to AI has evolved. Experimenting and learning by doing has helped organisations gain a deeper and more realistic understanding of the potential of AI integration in journalistic work:

Completely. I’ve been working on little AI projects as part of my innovations remit for a few years now but they have just been curiosities really. But the moment ChatGPT was launched, the upper management suddenly [became] really enthusiastic about AI.

Generally, they feel more confident about their engagement with AI, and better equipped to
handle the challenges posed by continuously emerging AI technologies:

Yes, our organisation’s approach to and use of AI-powered technologies in the newsroom has evolved… as we gained more hands-on experience and explored various use cases, we have likely gained insights into the capabilities, limitations, and ethical considerations of AI technologies in the newsroom.

Their experiences have helped them set reasonable expectations of AI technologies:

Our approach became more realistic taking into consideration our resources and the very fast evolution of the industry. We became more focused on AI technologies that can complement the work of journalists in gathering and aggregating data that is relevant and to help them identify trends and conduct in-depth analysis.

Editors who lead AI implementations have a deeper understanding of the potential of AI — how the tech falls short. For those who have hands-on experience, there is more enthusiasm about the future, and less fear about AI replacing journalists’ work, at least in the near term. And having case studies/success stories makes it easier to build trust with those who are more sceptical.

Hands-on experience with AI technologies in the newsroom has helped some uncover
benefits they were not expecting when they started out:

Although our AI tools were first implemented as a way to save time and be more
effective, we have discovered that the data they gather is very useful to understand
how disinformation works as well as other research uses.

Others noted that more departments are now involved with AI integration efforts in the
newsroom with the goal of adopting an institutional approach to AI, compared to when AI
experimentation efforts were considered the domain of technical experts only:

Since we first began using AI in 2017, it has moved from the engineering team
to the newsroom.

Chapter 3
Ethics and Editorial Policy

3.0 AI’s Impact on Editorial Quality


Ethical concerns are central to the debate about AI in all industries, and journalism is no exception, particularly as a profession meant to serve the public interest. More than 60% of respondents expressed concerns about the ethical implications of AI integration for editorial quality and other aspects of journalism. For journalists, the central question is: how do we integrate AI technologies in journalism while upholding journalistic values like accuracy, fairness, accountability, and transparency? The examples below provide a summary of the breadth and depth of ethical concerns related to AI integration in the newsroom:

The adoption of AI in journalism raises potential concerns related to bias, editorial independence, transparency, verification, data ethics, and human judgement. It is important for journalists and news organisations to carefully consider these concerns and take necessary steps to ensure responsible and ethical use of AI in their editorial work, while upholding journalistic principles of accuracy, fairness, and integrity.

Upholding trust, accuracy, fairness, transparency, and diversity in news content, while mitigating biases and maintaining journalistic integrity, is a priority for us in the era of AI-powered technologies.

3.1 Algorithmic Bias


Since AI systems mirror societal biases, respondents worried that a reliance on AI
technologies could exacerbate biased news coverage and misrepresentation of
marginalised groups:

AI systems are trained on vast amounts of data, and if the training data contains
biases, those biases can be amplified in the AI outputs. This can lead to biased
content recommendations, skewed perspectives, or unfair representation in news
coverage. It is essential to address and mitigate algorithmic biases to ensure fair
and inclusive journalism.

I don’t trust the current technologies to include perspectives of people who tend
to be marginalised.

Algorithmic bias is a potentially larger problem for content in languages other than English:

AI-generated models are built on databases that include bias, especially when it comes to content in Arabic, and this will be reflected in the AI-generated content.

We asked respondents if they employed any debiasing techniques. Few organisations provided solid examples:

We are employing rudimentary (i.e. not advanced) de-biasing techniques for recommender systems and classification-focused natural language processing (NLP) systems. Firstly, we check both recommender systems and NLP applications for basic types of bias via an evaluation framework. Secondly, we mitigate unwanted bias e.g. by changing training data for NLP systems or introducing rules that override the raw output of recommender systems. We are not yet far enough into our work with generative AI (genAI) to know what biases we need to mitigate for in practice. But I expect that this will be a concern in the future and more advanced methods might be required here. Ideally, however, this should be a community effort focusing on the large foundation models out there – but this is increasingly impossible as transparency in training data and learning methods is eroding (also allocating more responsibility to the news organisations that choose to use genAI anyway – that is organisations like us).
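
To make the "rules that override the raw output" idea concrete, here is a minimal sketch of one such post-processing rule for a recommender system. The grouping field and the share cap are illustrative assumptions, not this respondent's actual framework.

```python
from collections import Counter

def rebalance_top_k(ranked_items, k=10, max_share=0.5, group_key="desk"):
    """Post-process a ranked recommendation list so no single group (e.g.
    one news desk) fills more than max_share of the top k slots. Items are
    dicts like {"id": ..., "desk": ..., "score": ...}, best first."""
    cap = int(k * max_share)
    counts = Counter()
    selected, backlog = [], []
    for item in ranked_items:
        if len(selected) == k:
            break
        if counts[item[group_key]] < cap:
            selected.append(item)
            counts[item[group_key]] += 1
        else:
            backlog.append(item)
    # If too few eligible items exist, fall back to the best of the rest.
    selected.extend(backlog[: k - len(selected)])
    return selected
```

A full evaluation framework would also measure the imbalance before deciding whether to intervene; this sketch only shows the override step.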

We have tested an approach where a red-team analyses the algorithms in order to find biases in them. Another interesting approach is to have an ethical committee to supervise all the workflow of the algorithms from its inception, data annotation, etc. However this kind of work is resource-consuming and it is very difficult to implement it properly in newsrooms because of the small size of the AI teams involved.

Respondents largely agreed about the significance of addressing algorithmic bias by establishing debiasing techniques, but the responses suggest that building and implementing ethical guidelines for AI adoption is one of the most challenging areas for media organisations, in terms of complexity and time:

Although I understand the concept of de-biasing, I don’t even know the steps of
doing so or even how to implement such a strategy.

I can’t say we’ve done that yet but de-biasing training is being talked about. That is the aspect of AI that we’ve found is the most time-consuming, so I do worry that it might not be prioritised.
Designing de-biasing techniques often requires multidisciplinary collaboration:

Journalism should address algorithmic bias with bias elimination techniques to guarantee equity. Journalists, news organisations, ethics and academic experts are involved in establishing these ethical techniques and practices in the use of AI in journalism.

Several respondents said they did not know whether their organisation deployed any, while others said their use is still “too limited” for them to develop such techniques. It is important to keep in mind that our respondents come from journalistic and technical fields with widely ranging tech expertise, which might explain why they did not offer many examples of de-biasing techniques.

3.2 Newsroom Approaches to Ethical Concerns


In addition to debiasing techniques, respondents suggested measures that would help
mitigate some of the ethical concerns discussed. Their responses focused mainly on
transparency, considering the “black box” nature of AI systems and the need to maintain
roles performed by humans when AI technologies are part of a process:

The automated nature of AI algorithms raises questions about transparency, accuracy, and potential biases. Audiences might doubt the authenticity once they know info is AI generated.

AI systems often operate as black boxes, making it challenging to understand how they make decisions or why specific content is recommended.

They called for transparency from the designers of AI systems as well as transparency
from those who apply the systems, such as newsrooms. They argued that audiences
should be made aware when AI systems are used in content creation or other tasks:

We need to understand how the algorithm works to be able to trust it. Regimes
are sometimes closely tied with tech companies. So we need transparent AI.

How does AI know what it knows? We must be sceptical of these systems, and
as transparent as possible with editors and readers when we use them.

It is important to note that today it is almost impossible to perform journalistic duties without using AI technologies in some way, however minor. So it is not clear where the line is drawn between an AI-assisted production process that requires disclosure and one that does not. Most of our respondents seemed to refer to the explicit use of AI in content production, i.e. using ChatGPT or other genAI technologies to summarise or author pieces, as areas where disclosure was needed.

An emphasis on the need for a ‘human in the loop’ approach has not changed much since our 2019 survey. Newsrooms continue to view human intervention as crucial to mitigating potential harms like bias and inaccuracy by AI systems:

No matter how advanced AI becomes, human criteria will always be essential in the whole fact-checking process.

The constant and mandatory intervention of the human factor in [AI] integration
is necessary.

Contextualisation is key in journalism and AI systems cannot perform it (yet):

Context and interpretation is everything in our industry, and this is something that AI technologies will struggle to duplicate. We cannot let our audiences think that we have outsourced this critical function to technology.

It is not always clear how “human” values can be integrated with AI, which explains why
it is difficult to develop and implement ethical guidelines and de-biasing techniques.
Aligning metrics with human values can be complex, as one respondent said:

... Most alignment procedures require translation of values into metrics that can
be operationalised within data science/ML – and something might be lost in
translation here, even when we try to integrate values in our AI systems.

Some respondents suggested keeping editorial tasks AI-free for the time being:

For now, we believe that it is best to keep AI out of direct editorial roles in any manner, way or form. Editorial decisions are based not just on ethics but on a variety of factors like real-time situations which can change any minute. AI, we believe, is not yet equipped to make decisions; however, we do think that in the coming days, AI could assist the editorial team in chalking out strategies related to distributing workflow.

I think the focus of AI in journalism should be on fact-checking, data analysis and content distribution, not on fields that will decrease the human role in the journalism field.

Ethical implications for using generative AI technologies are addressed in Chapter 5.

3.3 Ethical Implications for Journalism at Large
We wanted to know if our respondents thought AI technologies are changing the public’s perception of journalism and if there are other implications for journalism as an industry. Their responses centred around two interrelated concerns: the concern that AI technologies would further commercialise the journalism industry, which would likely lead to the second concern, a decline in public trust in journalism.

Newsroom Concerns for AI’s Ethical Implications

[Chart: share Concerned vs Not Concerned. Editorial Quality: 60% concerned, 40% not; Readers’ Perceptions: 82% concerned, 18% not; Industry in General: 82% concerned, 18% not]

Respondents feared that AI technologies would heighten competitive pressures on newsrooms, leading to the mass-production of poor-quality journalism. Here are some examples from the survey:

I think it’s going to result in a lot of mass-produced clickbait as news organisations compete for clicks. We will not be participating in that contest. Most of the public already have a very poor opinion of journalism, and that seems unlikely to change either way as a result of this technology.

If journalists rely on AI for content creation the same way as influencers do, it
will be a huge threat to the industry. There have to be rules and boundaries.

If the industry seeks to only maximise revenue then it could have a negative
impact on editorial standards and ethics at large.

According to some respondents, the risk of disenchanting audiences comes at a time when public trust in journalism generally appears to be eroding:

I am concerned that the public already has declining trust in the media and a
decreased appetite for news. I am not entirely sure what the public’s attitude is
toward AI, but if they are largely sceptical of it, I worry that this might have negative
effects for newsrooms that do use AI in their work.

I worry about how the reader will react if they hear that a story in the newspaper or website was written by a robot. I worry about the lack of trust of machines and the apparent absence of a human touch to news gathering, writing, and distribution.

3.4 The Role of Technology Companies


Research and development at technology companies is driving innovation in AI and other technologies. With the emergence of generative AI (genAI) technologies that use large language models (LLMs), like OpenAI’s ChatGPT and DALL-E and Google’s Bard, a plethora of dependent tools has been created and made accessible to the public. These tools can automate a large number of tasks in almost all industries, presenting great potential for increasing efficiency and productivity. The opportunities they present for journalism are still being explored, but may be transformational. At the end of the day, content creation is the bread and butter of journalism. The relationship between technology companies and journalists is increasingly significant.
Many respondents agreed that technology companies foster innovation and develop
useful tools:

Tech companies are at the forefront of AI R&D, driving innovation and pushing the boundaries of what AI can achieve. This has the potential to automate processes, improve efficiency, and solve complex problems.

They also raised concerns about the profit incentive driving these innovations and the concentration of power technology companies enjoy:

Existing information is already biased, reflecting the patriarchal, euro-centric world we already live in. I believe AI will only exacerbate this phenomenon, especially because the tech world producing these technologies also tends to be hegemonic and profit-driven more than anything else.

Many respondents demanded more transparency from tech companies around the
data used and how the systems are designed. They hoped technology companies
would play a more proactive role in training journalists on AI tools and collaborate with
civil society, media, and government to ensure technical innovations are aligned with
humanistic values.
Several respondents appreciated the accessibility and affordability of some tools they provide. At the same time, respondents voiced concerns about the ethics of technology development. They mentioned algorithmic bias created through black-boxed AI systems, privacy concerns, and accountability issues:

They also face the risk of neglecting ethical issues and social impacts, such as
privacy, fairness, accountability and transparency, in their pursuit of competitive
advantage and profit.

Tech companies often collect and analyse massive amounts of user data to
train their AI systems.

Some highlighted that technology is advancing at a rapid pace that journalism cannot
keep up with:

As a negative, the urgency and eagerness, speed, with which they want these advances to be adopted, in many industries and at all levels. Their market/commercial fight ends up affecting everyone.

Other respondents worried that tech innovations create a dependency on technologies that become industry norms which newsrooms are forced to embrace:

The worst problem is the monopoly, the absence of control, the black boxes and
the fact that they develop tools and technologies that they want us to use without
first asking if we want them or how we want them.

They can enforce a new dependence, as we have also seen with other waves of new technologies. They can become gatekeepers with a worldview that users of their technologies have to adapt to (an example is the bias controls that OpenAI have put in place in the GPT models which are aligned with their commercial values and a certain set of American values).

With these critiques in mind, many respondents called for more transparency from
technology companies pertaining to the AI systems they develop and the training data
they use:

I would like to see technology companies be more proactive in communicating to the public how they are using their systems. People have a right to know and understand what they’re being subjected to when they use these platforms.

We would like to see AI tools focus more heavily on explainability. AI art bots
should develop ethical credit-sharing processes.

They also hoped technology companies will provide more training to journalists on AI
tools that can enhance their work, especially in small newsrooms and organisations in
less-resourced regions:

I would like to see them collaborating with small news agencies such as ours.
We need technology companies to offer free extensive training to community
journalists. Most of the time, community media organisations don’t have the
resources and funding to offer relevant AI training programmes.

They also called on technology companies to pursue more collaboration with journalists,
civil society and governments to ensure the technologies they develop are aligned with
humanistic values:

I would like to see them adopt a more responsible and collaborative approach to
AI, engaging with stakeholders and regulators, and ensuring that their products and
services are aligned with human values and rights.

Also I would like them to engage in informed conversation with journalists all
over the globe even in markets that are not interesting for them and especially
where information gaps are affecting the most vulnerable.

Other respondents highlighted the opportunities tech companies have in leveraging AI
for “social good”:

Tech companies have the opportunity to leverage AI for social good, such as
improving healthcare, addressing climate change, and assisting in disaster
response. They can also contribute to bridging the digital divide by making AI more
accessible and inclusive.

3.5 The Role of Universities and Intermediary Companies


Around 90% of respondents welcomed a stronger role being played by universities,
journalism schools and other intermediary companies in assisting with the adoption of
AI in newsrooms through research, training, and collaboration:

Universities, intermediary companies, journalism schools, and research institutions can contribute to AI adoption in newsrooms through research and development, education and training, intermediary solutions, and collaborative partnerships. They can conduct research, provide training programmes, offer customised solutions, and collaborate with newsrooms to accelerate the adoption of AI in journalism, shaping the future of the industry in the AI era.

Some explained that academia can play an important role in a much-needed critical
examination of AI and in addressing the ethical question:

They have to do more research on how AI can be used more effectively in public
interest journalism and also come up with a guide on how to use AI.

I believe schools and universities can serve as key catalysts in the adoption of AI
in newsrooms by providing education, research, ethical guidance, and fostering
critical thinking skills.

While welcoming a larger role being played by academic and other institutions in AI adoption, respondents from various regions said that journalism study programmes have not effectively evolved to reflect significant technological developments that have drastically impacted journalism, i.e. digitisation and the emergence of data journalism:

From our experience, journalism schools in the MENA are not coping with the digital changes (even before talking about AI). Curricula of most schools do not equip journalists with necessary knowledge and skills to use digital tools for fact-checking, for data journalism, digital security etc.
ARIJ (MENA)

Universities, especially journalism schools, need to start integrating AI learnings in their teachings much sooner. Most journalism graduates I see coming into our newsrooms have very little understanding, unless they themselves are naturally inquisitive.
South Africa-based newsroom

Journalism schools should be core in educating a new generation of tech- and AI-knowledgeable journalists (even though most J-schools are lagging massively behind, which is a real danger for both effective and responsible adoption of AI among legacy publishers).
Ekstra Bladet (Denmark)

Chapter 4
The Future of AI and Journalism

4.0 Where is This All Going?


The vast majority of respondents, around 80%, expect an increased use of AI in their newsrooms. Four main areas for future AI integration were mentioned:

1 Fact-checking and disinformation analysis
2 Content personalisation and automation
3 Text summarisation and generation
4 Using chatbots to conduct preliminary interviews and gauge public sentiment on issues

1 Fact-checking and disinformation analysis: Many respondents highlighted the importance of AI in combating misinformation and polarisation. They mentioned using AI protocols to enhance fact-checking processes, analyse false narratives, identify hate speech, and monitor social media platforms for disinformation:

We are rethinking our media and social media monitoring programmes and methodologies to rely more on AI automation tools and to integrate the analysis of the role of algorithms in mis- and disinformation and hate speech.

So much time of ours is spent looking for eligible claims to fact-check – whether it
be from social media posts in various platforms, speeches, interviews, news reports,
among others. I think that within the next two to five years, my organisation may
introduce more AI-powered technologies for monitoring disinformation.

2 Content personalisation and automation: Several respondents mentioned the potential for AI to personalise news content and optimise distribution. This includes personalising the home stream for readers, implementing AI functionalities in content distribution, and utilising machine learning for customised metered paywalls. The aim is to enhance user experience and deliver tailored content:

Personalisation/automation are part of the main home stream. This is something we are already working on and have deployed to some of our smaller sister sites already, but not quite ready to deploy on to a site of our size. However, we are hoping to have something in the next few years.

We are exploring AI-powered technologies, including chatbots like ChatGPT, to enhance
newsroom operations and engage with the audience through personalised news updates
on messaging platforms.

3 Text summarisation and generation: AI-powered technologies for text summarisation and
generation were mentioned as valuable tools for newsrooms. This includes using generative
language models to produce summaries, titles, and push messages for articles. Here are
some examples from the survey:

We hope to create a service using GPT-4 to “eat” through stock market announcements, creating easy-to-understand article drafts from them, and by training the model with our feedback we would make it better and also teach it to tell us what is important and not, hopefully. It’s in the experimentation phase so far, but we hope to have a prototype by the summer and then expand on that further.

We will be using generative LLMs for summarisation tasks (e.g. proposal of titles or
push messages).

We are experimenting with using chatbots for headline and SEO title generation, and
summarisation.

We hope to integrate AI tools into the newsroom to help on more high-end editing
tasks, such as suggesting headlines and creation of multiple versions of stories. We are
also exploring new types of news products and forms.

4 Using chatbots to conduct preliminary interviews and gauge public sentiment on issues:
Some respondents expressed interest in utilising chatbots for conducting preliminary
interviews and gauging public sentiment on specific issues, allowing journalists to identify
interesting cases for further investigation and in-depth interviews:

I foresee the application of more AI powered technologies in our newsroom. For example, a chatbot that explains our products/packages to our readers and also a chatbot that can help the newsroom to monitor social media platforms and send us alerts when key sources or personalities or organisations post on their social media handles like Twitter, Instagram or Facebook.

I see the use of chatbots to perform interviews as something we could use on some projects. If there is a specific issue that affects lots of people, a chatbot could perform rudimentary interviews to get a general feel for what people are saying and out of those basic interviews, the more interesting cases could be followed up with an interview by a journalist.
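
As a rough illustration of this "preliminary interview" idea, the sketch below runs a scripted question flow and flags respondents worth a journalist's follow-up. The questions and trigger keywords are invented for the example; a real deployment would sit behind a web form or a messaging platform rather than a console.

```python
# Minimal sketch of a preliminary-interview chatbot. All questions and
# keywords are illustrative assumptions, not any newsroom's actual setup.

QUESTIONS = [
    "How has the issue affected you personally?",
    "When did you first notice it?",
    "Would you be willing to speak to a journalist on the record?",
]

# Invented trigger terms; in practice a newsroom would tune these per story.
FOLLOW_UP_KEYWORDS = {"hospital", "evicted", "lost my job"}

def run_interview(get_answer) -> dict:
    """Run the scripted flow; get_answer is any callable that takes a
    question string and returns the respondent's reply."""
    answers = {q: get_answer(q) for q in QUESTIONS}
    text = " ".join(answers.values()).lower()
    return {
        "answers": answers,
        "follow_up": any(k in text for k in FOLLOW_UP_KEYWORDS),
    }

if __name__ == "__main__":
    print(run_interview(input))  # console demo; swap in a chat frontend
```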

AI tools for social media monitoring, content curation, news verification, and language
translation were also mentioned as areas of interest. These tools would aid in monitoring
social media platforms, curating relevant content, verifying information, and translating
content across different languages. The goal is to improve news production, content quality,
and audience engagement.
Others, especially smaller newsrooms, are assessing their use of AI and working on aligning
their future strategy with the resources available to them:

For us now it is very important to evaluate what we have done so far and rethink
what we can realistically invest in in terms of human resources, financial resources
and technology. AI powered technologies are evolving faster than the capacities of
small newsrooms and organisations. We are currently conducting an internal
discussion to strategise our next steps in terms of AI related activities both in our
newsrooms and training and support programmes for other small independent media
in the region.

4.1 The Need for Education and Training


The first report produced in 2019 described newsrooms’ struggles in building AI literacy
across the organisation. This continues to be an objective for less-resourced newsrooms
and ones at the beginning of their AI journeys. Almost 43% of responses emphasised the
importance of training journalists and other personnel in AI literacy skills and technologies:

We aim to spread AI literacy widely among our community of journalists and fact-checkers.

We will invest in basic training for all members of my organisation, focusing on how AI works and how data-for-AI-tools works.

At the same time, this year’s discussions around training were more focused on specific and
nascent skills like prompt engineering, advanced technologies like large language models
(LLMs), and multidisciplinary training across various departments to enhance interoperability:

We will train journalists on new skills such as prompt engineering and create
workshops where they could play with new AI advances.

Respondents noted the need for a holistic approach to AI training that goes beyond
technical skills, asserting the need for cross departmental collaboration so various
functions are more in sync:

I would allocate resources to inter-departmental collaboration on innovation. I’d offer
courses in applied AI to any employee (journalist and developer) who is interested. I’d
establish well-funded data science teams in the editorial rooms – and a unit dedicated to
value alignment, as lacking alignment with editorial values is both intrinsically wrong and will
result in the brakes being pulled (rightfully) before AI-prototypes are employed at any real
scale in most legacy newsrooms.

We are implementing initiatives to promote interoperability between departments to share processes and information. Subsequently, a training plan is envisaged to help overcome technical gaps.

... I would tear down most siloes so that journalists, developers, data scientists and so
on work closer together.

Around a quarter of responses highlighted the need to hire AI specialists, data scientists, and developers with expertise in AI technologies. These experts would bridge the gap between journalism and technology, working closely with journalists to integrate AI tools into newsroom processes. Here are some examples from our survey:

Hire an AI manager with an understanding of both editorial and tech.

Hire data scientists and developers.

Recruit more tech-savvy IT graduates.

Hire more engineers with experience in building AI tools and project managers.

The vast majority of responses, more than 90%, highlighted the need for training in a variety of skills
and competencies:

We seek to give training in AI-augmented journalism for competences including data literacy, AI literacy, digital storytelling, and ethical considerations. This includes skills in data collection, analysis, visualisation, understanding AI principles and ethical implications, crafting narratives with AI, and responsible reporting with AI-generated content.

Building AI literacy is urgently needed. Everybody in our newsroom should have at least a basic understanding of how AI systems – as far as they are relevant for our field of work – come about and work. They should also know about legal, ethical and business implications.

Some emphasised how the type of training needed depended on the role:

The competencies would be different from different teams and job roles – for
example, a product manager might need training on how to improve reader
experience on the site, whereas a news producer might need training on how to use
AI to better produce articles, videos, podcasts and other multimedia projects.

4.2 Newsroom Collaboration


Almost half the respondents think there is not enough collaboration between newsrooms
and other entities, such as academic institutions, media development organisations, and
tech companies, owing to several challenges, such as newsroom competition as well as
competing priorities within the newsroom:

I think collaboration is always nice, but most organisations are currently busy
trying to find themselves within the maelstrom of digital transformation – too much
collaboration and talking about what one is doing can also hinder actual progress.

Competition amongst newsrooms will be a challenge for [collaboration].

As discussed in Chapter 3, a large majority, 85%, welcomed more collaboration between newsrooms and other media organisations and academic institutions as this can be useful in lessening the disparity between small and large newsrooms:

How Newsrooms Feel About More Collaboration Among Newsrooms on AI

[Chart: More Collaboration Would be Useful 85%; Not Necessarily 15%]

More exchange between advanced newsrooms and small newsrooms can be
beneficial to bridge knowledge and resources gaps.

Similarly, respondents highlighted the potential of collaboration between newsrooms in Global South countries as well as between Global South and Global North-based newsrooms as a step toward enhancing AI adoption globally and bridging the global AI adoption gap. (More on this in Chapter 6).

4.3 How Will AI Change Journalism?


As discussed throughout this report, overall, respondents acknowledged the transformative
potential of AI to automate tasks, personalise content, improve productivity, and enhance
audience engagement:

AI will transform the news industry through increased personalised products, generation of news in multimedia, intensified verification functions, increased productivity. However, small media organisations that will not be able to cope with this transformation will not be able to sustain themselves.

They discussed their ethical concerns as we demonstrated in Chapter 2, as well as how AI could impact media viability. Many respondents expressed concerns about AI exacerbating sustainability challenges facing less-resourced newsrooms who are still finding their feet in a highly digitised world and an increasingly AI-powered industry:

AI may empower the smallest of the newsrooms to experiment and reach further than has been possible. AI could also help the large, fortified newsrooms become larger and stronger, snuffing out the little guys.

AI could become a crossroad and an insurmountable hurdle for news
organisations that do not realise that AI is just a new aspect of the constant
progress of digital transformation. … some news organisations have been very slow
in digitising their business models (or haven’t even succeeded in doing so) – now
the next shock is just around the corner.

Several newsrooms expected AI to make them “leaner,” as an increasing number of tasks become automated:

It may mean job losses because the work is currently being done by say five
people, and may only need one person.

It will have a drastic impact… If machines can write stories, edit them and distribute them, it follows that newsrooms have to be leaner.

Others said that AI will not “replace jobs.” Rather, AI will redefine the role of journalists;
“steering AI… requires new competencies and new functions.”
Another respondent said:

We believe AI isn’t a threat to jobs. But people who learn to effectively use AI to
leverage their work will be in demand, and soon many roles will expect people to be
able to use these tools.

The need for a balancing act between tech and journalism, a theme that emerged in our
2019 survey, remains imperative to a future where AI technologies are leveraged to serve
journalism and its mission:

It will involve a rethinking of the entire workflow and, at least during the adoption
phase, additional work to adapt to this new approach. There will be more
collaboration and intersection between journalistic and technical figures.

Others worried that the reliance on AI technologies will undermine journalistic values, for
instance by pushing polarising content. This in turn would reduce public trust in journalism,
which many think is in decline as noted previously:

It might facilitate the path for some newsrooms but it can threaten core values
of journalism, negatively affecting the news industry. It can make our work more
efficient but less reliable if used badly.

At the moment I am too pessimistic because too many media are forgetting that
public interest and voyeurism are not the same thing.

I think it’s going to change what we even consider news. Unfortunately, it might
create a greater pivot to biased political and social commentary as humans feel the
need to differentiate themselves.

How organisations are affected by AI depends on various factors, including size, region,
and access to resources:

Different types of news organisations may have different opportunities and threats from AI, depending on their size, resources, audience and goals. AI will not explode the classic organisation, but it will require adaptation and innovation from news professionals and stakeholders.

Chapter 5
Generative AI and Journalism

5.0 Current Use Cases


Generative AI (genAI) has been facilitated by technological advances such as the creation of Large Language Models (LLMs) and increases in server space and processing power, producing programmes with accelerated learning power to process ‘language’ as text, audio and imagery.17 They are not ‘sentient’ or ‘intelligent’ in a genuine human way, but they give the appearance of being intelligent. They are sometimes inaccurate and even make up facts (‘hallucinations’) because they are language machines, not ‘truth’ machines. They can accelerate or amplify existing AI abilities and with prompts or adaptations can provide new tools and services. There are continuities with ‘traditional’ AI but they also represent a new – and somewhat unpredictable – phase for news organisations. So our respondents generally described engagement with genAI, understandably, as an experimental process.

Newsroom Experimentation with Generative AI Tech

[Chart: Yes 85%; Unclear 14%; No 1%]

The vast majority of respondents, around 85% at least, have experimented with genAI technologies to varying degrees and in a range of ways, as you’ll see in the responses below. Some examples include writing code, generating images, and authoring summaries. Others are more project-oriented and, at the extreme end of the spectrum, some newsrooms said they already use genAI technologies regularly:

I have used them to construct emails, get code snippets and rephrase a sentence I
feel just isn’t right.

We have experimented with natural language processing, OpenAI’s ChatGPT. We use it to generate content that we use to develop infographics for our socials.

Some respondents made sure to indicate that their uses of genAI technology have not included content generation, reflecting an apprehension about using genAI technologies in editorial tasks:

We are using it but not to generate content. We have experimented with ChatGPT for
analysing large swaths of data. Graphic designers have tried tools like DALL-E as a
reference/source of inspiration in the brainstorming process.

Some mentioned specific projects their organisations are working on that use
genAI technologies:

We’re working on a range of GPT-3/4 techniques for data extraction and code development.

We have created a presenter and his programme 100% with generative artificial
intelligence, the image, what he looks like, what he says, the voice... everything is AI,
but supervised.

Several respondents said they are now using genAI technologies regularly in their
newsrooms in various ways, such as headline suggestions, search engine optimisation,
and producing summaries:

We are encouraging everyone to experiment with these. For example, our social media team uses ChatGPT to summarise articles. Our newsletter team creates infoboxes to use in newsletters, etc.

We use them on a daily basis for various tasks, such as summarising articles,
evaluating content quality, search engine optimisation, and generating copy.

We use Bing Co-Pilot for suggesting headlines and sublines for topics,
gathering background information and generating unique images for an article.
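
As a rough sketch of how daily drafting tasks like these can be wired up, the example below asks a chat model for a headline and summary. It assumes the OpenAI Python SDK (v1+) and an OPENAI_API_KEY in the environment; the model choice and prompt wording are illustrative, not any newsroom's actual setup.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_headline_and_summary(article_text: str) -> str:
    """Ask the model for a draft headline plus a two-sentence summary.
    The output is a draft for editors, never published unreviewed."""
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative; any chat-capable model works
        messages=[
            {"role": "system",
             "content": ("You draft material for news editors. Propose one "
                         "headline and a two-sentence summary, using only "
                         "facts from the article provided.")},
            {"role": "user", "content": article_text},
        ],
        temperature=0.3,  # keep drafts close to the source text
    )
    return response.choices[0].message.content
```

Keeping the temperature low and the instructions restrictive reflects the human-in-the-loop stance most respondents describe: the model drafts, the editor decides.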

The use of genAI by newsrooms depends on their mission, size, experience and many
other factors. Unlike the examples we just mentioned, media development organisations
in MENA are using ChatGPT not for the purpose of integrating AI in their work. Rather,
they are using it in media literacy training to demonstrate its shortcomings, including
inaccuracies and bias in Arabic content, for example:

ChatGPT or Bing AI is used by our journalists for example to generate texts used in our production and also to detect biases to be used as examples in training on media information literacy. E.g. generating speech of women candidates in Arabic. Terminologies were not gender sensitive. Journalists had to re-edit. Another example is related to testing the accuracy of the answers to fact-check claims of politicians in Arabic. The answers do not provide critical information especially when it comes to the economy and banking sector that are sponsoring the media content in Lebanon while they are a main driver of the economic crises.

Though newsrooms are largely still experimenting with ChatGPT and other genAI
technologies, most have not had enough time to build comprehensive assessments.
This is expected given that genAI tools became accessible to the public in late 2022
with the launch of OpenAI’s ChatGPT. Despite their novelty, many respondents expect a
larger role for genAI technologies in content creation, including in writing summaries and
headlines, content customisation, and coding:

AI can help journalists generate summaries, headlines, captions and other types
of content using natural language generation techniques. AI can also help
journalists create engaging and personalised stories for different audiences and
platforms using natural language understanding and recommendation systems.

...Offload the work of content generation and adaptation to what
the user is looking for and focus on more core journalistic functions
(curation, investigation, analysis).

5.1 Opportunities Presented by Generative AI


We have reviewed the diverse use cases of generative AI (genAI) technologies the respondents shared, but will these use cases and others in the future present new opportunities for journalism that ‘old’ AI technologies did not?
There was a high level of agreement among the respondents; almost three-quarters agreed with this statement, particularly with respect to assisting journalists in “generating copy,” like summaries and headlines, personalised distribution, and research and brainstorming:

Do Generative AI Technologies Present New Opportunities?

[Chart: Yes 73%; Not sure 26%; No 1%]

... Generative AI can help us create engaging and diverse content, such as
headlines, summaries, captions, quotes, or even stories, based on data or
information we provide… help us personalise and tailor our content to different
audiences, platforms, and formats, using natural language generation and
adaptation techniques … and enable us to explore new angles and perspectives on
topics that we may not have considered before, by generating questions,
hypotheses, or scenarios that stimulate our curiosity and creativity. In short,
genAI can enhance our journalistic skills and values, and empower us to produce
more relevant and impactful stories in ways that we can’t even imagine.

They pointed out the affordances of genAI technologies, such as their accessibility,
low requirements for technical skills, and what was described as their ability to
understand “context”, which make them stand out from other AI technologies that
generally require deep specialist expertise in areas like programming. Here are some
insights from our respondents:

[T]heir ability to understand context offers a unique ability to create models that understand language much better and, in doing so, can bring us one step closer to automated fact-checking. Therefore, in the future, generative language technologies may be more of a help than a challenge.

GenAI can help because of the democratic way in which they have arrived, that
is: I don’t need an intermediary, a developer to make me the application that I need,
it’s like a Chrome extension. I make my life easier, the ease with which today, in
2023, you can do artificial intelligence compared to 2020 is impressive.

GenAI seems to require a lot less technical skills to the end user with much
faster response times, allowing us to bring it to fruition quickly throughout
the organisation.

GenAI can change the way we interact with information, allowing us to grasp
massive amounts of data, and level the playing field between high and low data
skills. They can give us much more control on the information we use to write
news, as they assist us in the time consuming writing tasks.

With all those affordances in mind and as the experimentation journey carries on, journalists are trying to find out how drastically genAI technologies could raise the productivity threshold. This is happening as these models continue to improve. As millions of people experiment with these tools, the models are ingesting massive amounts of data that, it is hoped, will enhance them.

5.2 Challenges Presented by Generative AI
Interestingly, respondents were more divided over whether generative AI (genAI) presents a different set of challenges in the newsroom compared to other AI technologies. Slightly over half the respondents, 52%, were not sure if this was the case, whereas 40% did view genAI as presenting new challenges in the newsroom.
The respondents argued that the types of challenges genAI presents are not very
different from the ones posed by other AI technologies (i.e. transparency, bias,
inaccuracy, and privacy issues). However, they think genAI technologies exacerbate to a
considerable degree those challenges, therefore potentially producing more harm:

GenAI has higher tendencies to produce biased outputs.

How to approach transparency and trust in relation to genAI is really a big challenge.

Do Generative AI Technologies Create New Challenges?

[Chart: Yes 40%; No 8%; Not Sure 52%]

For some, genAI raises the risk ceiling to a new level:

The requirements for robustness (e.g. factuality and no harmful bias) are even larger in the case of genAI, as mistakes are potentially more harmful when they occur than with most other AI technologies.

In particular, many respondents are concerned about the repercussions of genAI on misinformation and fake news. They expressed fears that it would exacerbate the problem even further, expanding its scale:

GenAI will allow the production and distribution of disinformation at a scale we haven’t seen before – this will potentially impact news consumption, but may also send people to more trusted sources.

Yes. So far, I can’t rely on AI for fact-checking, especially as the most common mass tool (ChatGPT) is faking data. In the current stage, AI can help me in writing, drafting, but I’d never trust the accuracy until a [human] editor reviews it.

I am very concerned about the generation of content without verification. The generative models that we currently have do not have a stage to verify their content and that is worrying. We have already had some examples; even when we have done some tests, we have seen that there is a generation of random content, not even oriented towards something, but directly producing a solution that does not exist.

Some respondents believed genAI would produce more sophisticated manipulated
content, requiring in return more sophisticated validation methods. Here are some
examples from our survey:

... AI-generated content (photos, videos, audio, text) is trickier to debunk because there’s no reference material to cross-check it with. It’s completely a work of fiction as opposed to, let’s say, a photo that was manipulated – with this kind of disinformation, you at least have an original photo with which to compare the false version. AI-generated content doesn’t work like that.

Generating stories and copy using AI could reduce trust, increase inaccuracies
and perpetuate editorial bias.

I am worried about the power posed by genAI and there is a need to have tools
that automatically fact check [content produced by] ChatGPT in real time.

There is a need to ensure that journalists do not resort to ChatGPT for story
analysis. AI software that can help identify stories written by bots will be very
helpful in ensuring original content remains central in news production.

Chapter 6
The Global Disparity in AI
Development and Adoption

6.0 The Global North/South Divide


Currently, the social and economic benefits of AI are geographically concentrated primarily in the Global North.18 This is due to various reasons, such as the availability of technical infrastructure, the abundance of capital, and well-funded research institutions in these countries.19 This chapter sheds light on the global inequality in AI development and adoption. As you’ll notice, we took a more analytical approach to this chapter than the rest of the report. Why?
To collectively benefit from AI technologies in a more equitable manner, we ought to gain a better
understanding of how and why global AI inequality exists. One step towards that is to pay close
attention to the challenges of AI adoption faced by the majority of the world’s population, which
resides in Global South countries.20
Let’s first clarify the Global North/South distinction and why we adopted it in this report. The Global
North/South terminology does not refer to a “geographic region in any traditional sense but rather
to the relative power and wealth of countries in distinct parts of the world”21:

In the late twentieth century, the Global North and South terminology replaced previous
descriptors of the global order. It was generally agreed that the Global North would include
the United States, Canada, England, nations of the European Union, as well as Singapore,
Japan, South Korea, and even some countries in the southern hemisphere: Australia, and
New Zealand. The Global South, on the other hand, would include formerly colonised
countries in Africa and Latin America, as well as the Middle East, Brazil, India, and parts of
Asia. Many of these countries are still marked by the social, cultural, and economic
repercussions of colonialism, even after achieving national independence. The Global South
remains home to the majority of the world’s population, but that population is relatively
young and resource-poor, living in economically dependent nations.22

We opted to use the North-South distinction to extend a power-conscious framing that considers
the power dynamics governing AI development and adoption in newsrooms globally, while
upholding that the Global North and the Global South are by no means monolithic, as each includes
socially and politically diverse countries.

6.1 Economic and Infrastructural Challenges
As discussed in Chapter 2, AI technologies pose a range of ethical and other challenges to
all industries, including journalism. These are experienced by newsrooms across the board,
regardless of size, resources, or geographic location. For newsrooms in Global South countries
however, the challenges are much more pronounced. Respondents in these countries
highlighted knowledge gaps, resource constraints, language barriers, as well as infrastructural,
legal, and political challenges.
A MENA-based respondent mentioned the political and economic realities low-resourced
independent media operate under, emphasising the challenge of competing with AI-powered
local and foreign state propaganda (i.e. bots, disinformation campaigns), amid low internet
penetration rates:

We’re talking about a war-torn region. You have millions of refugees and millions
living in deep economic crises, from Lebanon to Egypt. In our region, millions are
deprived of internet access, which should be a basic right, or have limited access to it. As
an independent media outlet producing professional content, you are dealing with low
internet penetration rates and repressive state propaganda dominating the digital
sphere… This creates digital illiteracy, which is very difficult to confront, and this is a key
challenge for us.

Some challenges are shared across large areas of the Global South. Respondents in Sub-
Saharan Africa, MENA, and the Asia-Pacific all mentioned low internet penetration rates and a
difficulty in hiring technical experts:

Technology is not fully embraced in most media houses in Malawi. Part of the reason
[being] their poor internet infrastructure and internet penetration [which is] quite low.

Adequate technology infrastructure and widespread internet connectivity are essential
for implementing AI solutions. In Egypt, there may be disparities in access to reliable
internet connections, particularly in rural areas. Addressing the infrastructure gap and
ensuring widespread connectivity is crucial to facilitate the adoption of AI technologies.

6.2 Language and Accessibility Challenges


Other challenges are unique to local contexts. For instance, AI-related language challenges in
India, or in other countries that are home to hundreds of languages, are distinct:

The adoption of AI in India and especially northeast India faces a whole lot of
challenges. We have over 200 tribes with their distinct languages and culture. There is a
lack of skilled workforce, data quality and availability issues, evolving ethical and
regulatory frameworks, infrastructure and connectivity gaps.

Some challenges to AI adoption are interrelated. Low internet penetration leads to low digital
literacy, which makes it easier for disinformation to thrive. Similarly, resource constraints
make it difficult to hire or even find AI experts:

Developing and implementing AI technologies requires a skilled workforce with
expertise in AI, data science, and related fields. In Egypt we may face challenges in
terms of the availability of professionals with the necessary skills and knowledge.

[The] Botswana government does not promote transparency and does not have
comprehensive data privacy laws and policies that promote access to information.
This makes it difficult to promote dynamism in adopting AI-powered technologies in a
country that is quick to repress online content.

Local developers are incentivised to work for foreign companies, which are more likely to offer
higher pay:

Our newsrooms have limited resources and tech capacity is expensive.

Scarcity of resources [is a challenge]. In Argentina (and in LatAm) developers tend
to work for foreign companies that can pay high salaries.

Technology companies invest the vast majority of their resources in Western markets. Most
tools are made for English speakers, which creates accessibility challenges for both non-
English speakers and English speakers with non-Western accents.
A Philippines-based respondent summarised how resource constraints, knowledge gaps,
and language barriers intersect:

AI technologies developed have been predominantly available in English, but not in
many Asian languages (with the possible exception of [Mandarin] Chinese). We have
to catch up doubly to create AI systems, and AI systems that work with our local
languages. There are also limited funding opportunities to allow us to explore using AI
systems in our jobs. And lastly, some countries in Southeast Asia (like ours, the
Philippines) aren’t as advanced as our neighbours, so there are only a handful of AI
experts in the country, much less AI experts in journalism.

Respondents gave us several examples of issues they discovered when using AI tools in
non-English languages or with non-Western English accents:

Coral has been very successful as a comment moderation tool, but we still find the
‘grey’ area comments require a human element, especially as it is an American tool not
built with the South African audience in mind.

Machine Learning (ML) for encoding is a real dealbreaker, Trint for speech-to-text
is highly recommended, translation to any language other than English and Mandarin
or Cantonese needs improvement.
Voice AI tools do not sound like Africans, [they are] not authentic at all.

It is hoped that genAI technologies, which our respondents described as more accessible
than traditional AI technologies, will help bridge the regional disparities in AI adoption.
Cautious optimism is advised, however. If we look at ChatGPT, for instance, the most famous
publicly accessible genAI tool, we find that it is not available to a large proportion of the
world’s population for various reasons. OpenAI does not support access to ChatGPT in
Russia, Venezuela, Zimbabwe, or Cuba (most likely because of US sanctions), or in China.23
Egypt has reportedly banned ChatGPT over privacy concerns.24 Most of these countries are
among the most populous in the world.

Tools like ChatGPT are not available in Zimbabwe unless you use a VPN, and
you need to have a foreign number to get the code.

There are limitations for our country in some platforms (i.e. ChatGPT doesn’t
work in Egypt) and most of the tools don’t natively support Arabic.

GenAI technologies such as ChatGPT are also out of reach for hundreds of millions of
people around the world due to accessibility issues such as internet penetration rates,
particularly in rural areas.

6.3 Political Realities Affect Trust in AI


Algorithmic bias disproportionately affects marginalised communities, potentially causing
serious harm, as research has shown (e.g. racial discrimination in facial recognition
technologies).25 Similarly, perpetuating bias is arguably a larger problem for content in
languages other than English, as mentioned in Chapter 3:

AI-powered tools are more advanced in the English language, and experiments in
technology that is contextualised to the MENA region are modest as well. This
affects the accuracy of data collected and of sentiment analysis, for example.

AI scholars have warned that ignoring social, political and cultural contexts contributes
to increasing algorithmic bias and widening global AI disparity.26 Respondents noted how
many AI tools and applications fail to understand local contexts and cultures:

Most tools are not applicable to our language or contexts.

Generative AI [technologies do not support] Indian languages or our cultural
nuances in their responses.

Scepticism of AI technologies among newsrooms in Global South countries also stems
from a distrust of the entities involved in the development and large-scale adoption of
AI technologies, such as global technology companies and local, government-funded
technology and media institutions. For instance, in MENA, an alignment between
technology companies and governments was seen as a major obstacle to trust. One
respondent noted that the newsrooms in MENA with resources for AI technologies were
aligned with nondemocratic governments:

In the case of the MENA, big media organisations are mouthpieces of
governments that are not democratic and thus do not invest in quality journalism
that contributes to accountability and democratic change. Thus AI-powered
technologies will not reach the independent small media platforms that are
reaching youth and contributing to fostering critical thinking.

It is feared that smaller newsrooms that advance public interest and accountability
journalism will struggle to survive. This could have significant implications for the entire
news ecosystem.
Even if local AI models were abundantly available, trust would remain an issue. Discussing
the mobile application “Allam”, a Saudi government-developed chatbot similar to ChatGPT, one
respondent explained how such projects remain tied to political considerations, diminishing
user trust in these models:

This is a local model, do we trust the datasets used by Arab state institutions?
[One wonders] if the datasets used were balanced or representative or if the data
were manipulated? Unfortunately, this is one of the issues we deal with regionally.
We don’t have pan-Arab models created by independent Arab institutions whose
choices when it comes to training datasets can be trusted. You know how
sensitive some of these contexts are … AI requires massive funding to be
competitive … Arab political realities raise urgent questions about the reliability of
[local AI] models. Are they going to be open source? Are they adaptable to Arab
newsrooms’ needs? Can newsrooms add their own datasets, for instance?

It is important to note that concerns about AI technologies’ enablement of government
surveillance and control are not unique to the Global South; they have been intrinsic to critical
discussions of AI technologies in the Global North as well.27 As early as 2013, Edward
Snowden’s revelations exposed in detail the interdependencies between governments and
technology companies.28 The PRISM program illustrated how the US government used the
surveillance infrastructure built by technology companies like Google and Facebook, and the
data they collected for marketing purposes, to advance its own surveillance practices.29

Despite the myriad complex challenges newsrooms in Global South countries face, respondents
from these newsrooms expressed enthusiasm for building capacity in, and sharing, AI
expertise. Arguably, they have to if they want to survive as AI transforms journalism. This is
particularly true for smaller funding-dependent newsrooms whose mission is rooted in public
interest journalism and holding power to account.
When we asked respondents if they thought there was enough collaboration between newsrooms
around the development of AI technologies, several respondents noted that collaboration could be
especially useful for newsrooms in Global South countries that are experiencing similar challenges:

We think that collaboration would be especially useful between newsrooms from the Global
South, such as ours. We think that developing models in non-English languages (Spanish, in
our case) is really important for newsrooms.

This could involve joint efforts to create tailored AI algorithms for the African context and
establishing industry standards for responsible AI use.

Collaboration between Global South and Global North newsrooms was also highlighted as a step
toward lessening global AI disparity:

There is a big gap between the Global North and South. Both of them need to be resilient
together and collaborate to expose biases in AI, and have a serious conversation about AI
regulations and policies.

Conclusion
What Does AI Mean
for Journalism?

There is a caveat: all of this is a reaction to a moving story. ‘Amara’s law’, the adage coined
by American scientist and futurologist Roy Amara, applies here: “we overestimate the impact
of technology in the short term and underestimate the effect in the long run.” Some new
technologies take time. The first newspaper went online in 1980, but it took another 17 years
before BBC Online went live. OpenAI only released ChatGPT in late November 2022 but by
January 2023 they were claiming one million users. Things are moving fast and some things
might get broken. Working practices will not be the same and some jobs will be replaced.
New ones will be created with different skills and responsibilities. Many journalists who have
experimented with genAI can see how it can make their work much more efficient and add
new dimensions to what they offer to the public.
As this report has shown, this is a volatile technology for news organisations. Most are aware
of the inherent risks in AI technologies generally and the dangers of bias or inaccuracy. They
are discovering that applying AI in news production has immediate possibilities, but how it
will shape future practice is uncertain.
It is important to understand the wider context. There are major issues around regulation,
intellectual property and commercial competition. There are big societal concerns related to
AI around misinformation, discrimination and bias as well as the dangers of media capture
by corporations or even governments. We should not lose sight of the bigger picture that
goes way beyond the news sector.
However, as journalists who report on the world, we should be much more aware of our role
in critically reporting on how AI is changing our lives, in an informed and independent way.
Our survey suggests that there is an awareness of this, although most people are putting most
of their energy into understanding and working through the immediate practical challenges.

Whether this is a brave new world or not depends to a large degree on humans making
policy and ethical choices within news organisations. If we want to make bland,
automated clickbait then this technology makes that a lot easier. But it also offers the
opportunity for ‘good’ journalists to do more ‘human’ work with the support of AI. In a
world of machine-created information, much of it unreliable, responsible, public service
journalism is in a great position to prove its value. AI also offers ways for journalism
to reinvent itself in imaginative ways. However, genAI has also created the threat of
‘disintermediation’ for the news media. Why should people go to a news organisation
for information if they can just prompt a chatbot? This survey suggests that many
newsrooms are now working hard to answer that question in a way that affirms the
utility and importance of journalism as part of our social, economic and political lives.
We look forward to working with them on that journey.

Six Steps Towards an AI Strategy for News Organisations


1 Get informed. See the LSE JournalismAI website for online introductory training,
the AI Starter Pack, a Case Study hub and a series of reports on innovation case
studies. Other sources are available! (See Readings & Resources section)
2 Broaden AI literacy. Everyone needs to understand the components of AI that
are impacting journalism the most, because it will impact on everyone’s job –
not just editorial, and not just the ‘tech’ people.
3 Assign responsibility. Someone in your organisation should be given
responsibility for monitoring developments, both in your workplace and more
widely – for example by assigning AI innovation and R&D leads – and for keeping
a conversation going within your organisation about AI.
4 Test, iterate, repeat. Experiment and scale but always with human oversight
and management. Don’t rush to use AI until you are comfortable with the
process. Always review the impact.
5 Draw up guidelines. They can be general or specific. This is a useful learning
process when done inclusively to engage all stakeholders. And be prepared to
review and change them over time.
6 Collaborate and network. There are many institutions such as universities
or intermediaries like start-ups who are working in this field. Talk to other
news organisations about what they have done. Generative AI technologies
may present new opportunities for newsroom collaboration given the high
enthusiasm about and accessibility of genAI tools.

Glossary

Algorithm:
“A procedure for solving a mathematical problem in a finite number of steps that frequently
involves repetition of an operation”. More broadly, “a step-by-step procedure for solving a
problem or accomplishing some end.”30
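To make the definition concrete, here is a classic illustration of our own (not from any of the report’s sources): Euclid’s algorithm, written in Python, computes the greatest common divisor of two numbers by repeating a single operation a finite number of times.

    def gcd(a: int, b: int) -> int:
        """Euclid's algorithm: a finite, step-by-step procedure."""
        while b != 0:
            a, b = b, a % b  # the one repeated operation
        return a

    print(gcd(48, 18))  # prints 6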

Artificial intelligence (AI):
“A collection of ideas, technologies, and techniques that relate to a computer system’s
capacity to perform tasks normally requiring human intelligence.”31

Automation:
“The technique, method, or system of operating or controlling a process by highly automatic
means, as by electronic devices, reducing human intervention to a minimum.”32

Bias:
A systematic prejudice or error affecting the rationality and fairness of a decision. Rooted in
decision theory, cognitive psychology and statistics, the notion of bias is extremely important
as both journalism and artificial intelligence techniques ultimately rely on human decisions,
and are as such subject to “cognitive” biases (confirmation bias, bandwagon effect, etc.).
When mirrored in bad, incomplete or flawed data sets to train AI algorithms, this may result
in equally flawed AI-powered decisions: “Algorithms can have built-in biases because they
are created by individuals who have conscious or unconscious preferences that may go
undiscovered until the algorithms are used, and potentially amplified, publicly.”33

Bot:
‘Bot’ is short for ‘robot’ and usually refers to ‘agent-like’ software – i.e., software that
exhibits autonomy or autonomous characteristics. A bot is “a piece of software that can
execute commands, reply to messages, or perform routine tasks, such as online searches,
either automatically or with minimal human intervention.”34 Bots perform both perfectly
legitimate activities (e.g. smart assistants, search engine spiders) and malicious ones (e.g.
covertly spreading false information and political propaganda in coordination with other
bots, within a so-called “botnet”).35

Data Mining:
“Data mining is most commonly defined as the process of using computers and automation
to search large sets of data for patterns and trends, turning those findings into business
insights and predictions. Data mining goes beyond the search process, as it uses data to
evaluate future probabilities and develop actionable analyses.”36

Deepfakes:
This is the negative form of the broader concept of ‘synthetic media’: audio and video altered
through machine learning and deep learning techniques for maximum, real-time realism in
fakery. The term originally comes from a Reddit user who, in 2017, used such techniques to
realistically and dynamically add the faces of celebrities to pornographic content,37 and it is
now widely used for any kind of content, including the politically charged.38

Deep Learning:
“Deep learning is a subset of machine learning in artificial intelligence (AI) that has networks
capable of learning unsupervised from data that is unstructured or unlabelled. Also known as
deep neural learning or deep neural network”, it is one of the most advanced contemporary
applications of “AI”, powering a broad range of image, voice and text recognition tools.39

Generative AI (genAI):
“Generative AI is a sub-field of machine learning that involves generating new data or
content based on a given set of input data. This can include generating text, images,
code, or any other type of data. Typically, genAI uses deep learning algorithms ‘to learn
patterns and features in a given dataset, and then generate new data based on the
underlying input data.’”40

Hallucinations:
“Hallucination is the term employed for the phenomenon where AI algorithms and deep
learning neural networks produce outputs that are not real, do not match any data the
algorithm has been trained on, or do not follow any other identifiable pattern. It cannot be
explained by your programming, the input information, or other factors such as incorrect
data classification, inadequate training, inability to interpret questions in different languages,
or inability to contextualise questions.”41

Large Language Models (LLMs):
“Large Language Models are a subset of artificial intelligence that has been trained on
vast quantities of text data … to produce human-like responses to dialogue or other natural
language inputs. In order to produce these natural language responses, LLMs make use
of deep learning models, which use multi-layered neural networks to process, analyse, and
make predictions with complex data.”42
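As a hedged illustration of what producing “human-like responses” looks like in practice, the sketch below uses the open-source Hugging Face transformers library with GPT-2, a small, older open model chosen purely because it runs locally. It is not a model discussed in this report, and the prompt text is invented.

    # Requires: pip install transformers (plus PyTorch or TensorFlow).
    from transformers import pipeline

    # Load a small open model; newsroom-relevant LLMs are vastly larger.
    generator = pipeline("text-generation", model="gpt2")

    result = generator(
        "The newsroom adopted AI tools because",  # invented prompt
        max_new_tokens=30,
        num_return_sequences=1,
    )
    print(result[0]["generated_text"])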

Machine learning (ML):
“Machine learning is an application of artificial intelligence (AI) that provides systems
with the ability to automatically learn and improve from experience without being
explicitly programmed.”43
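A minimal sketch of that idea, assuming the open-source scikit-learn library; the headlines and labels below are invented for illustration. No topic rules are written by hand: the model infers them from labelled examples, which are its “experience”.

    # Requires: pip install scikit-learn
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    headlines = [
        "Central bank raises interest rates again",
        "Striker scores twice in cup final",
        "Inflation eases as fuel prices fall",
        "Injury rules captain out of the season",
    ]
    labels = ["business", "sport", "business", "sport"]

    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(headlines, labels)  # learning from labelled examples

    # The model was never told what "business" means; it inferred it.
    print(model.predict(["Markets rally after rate decision"]))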

Optical Character Recognition (OCR):
“Optical Character Recognition (OCR) is the electronic conversion of images of text into
digitally-encoded text using specialised software. OCR software enables a computer to
convert a scanned document, a digital photo of text, or any other digital image of text into
machine-readable, editable data. OCR typically involves three steps: opening and/or scanning
a document in the OCR software, recognising the document in the OCR software, and then
saving the OCR-produced document in a format of your choosing.”44
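A minimal sketch of those three steps, assuming the open-source Tesseract engine and its Python wrapper pytesseract (both must be installed separately); “scan.png” is a placeholder file name.

    # Requires: pip install pillow pytesseract, plus the Tesseract engine.
    from PIL import Image
    import pytesseract

    image = Image.open("scan.png")              # 1. open the scanned image
    text = pytesseract.image_to_string(image)   # 2. recognise the text in it

    with open("scan.txt", "w", encoding="utf-8") as f:
        f.write(text)                           # 3. save editable output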

Natural Language Processing (NLP):
“Natural Language Processing, usually shortened as NLP, is a branch of artificial intelligence
that deals with the interaction between computers and humans using natural language.
The ultimate objective of NLP is to read, decipher, understand, and make sense of the human
languages in a manner that is valuable. Most NLP techniques rely on machine learning to
derive meaning from human languages.”45

Natural Language Generation (NLG):
NLG is a subset of NLP. “While natural language understanding focuses on computer reading
comprehension, [NLG] enables computers to write. NLG is the process of producing a human
language text response based on some data input. This text can also be converted into a
speech format through text-to-speech services. NLG also encompasses text summarisation
capabilities that generate summaries from input documents while maintaining the integrity
of the information.”46
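A minimal sketch of data-to-text NLG, assuming a simple template-filling approach of the kind behind automated sports recaps; production systems are far more sophisticated, and the match data here is invented.

    match = {"home": "Rovers", "away": "United",
             "home_goals": 3, "away_goals": 1, "scorer": "Okafor"}

    def recap(m: dict) -> str:
        """Turn structured match data into a human-readable sentence."""
        if m["home_goals"] > m["away_goals"]:
            verb = "beat"
        elif m["home_goals"] == m["away_goals"]:
            verb = "drew with"
        else:
            verb = "lost to"
        return (f'{m["home"]} {verb} {m["away"]} '
                f'{m["home_goals"]}-{m["away_goals"]}, '
                f'with {m["scorer"]} leading the scoring.')

    print(recap(match))  # Rovers beat United 3-1, with Okafor leading the scoring.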

Neural Network:
“A programme or system which is modelled on the human brain and is designed to imitate
the brain’s method of functioning, particularly the process of learning.”47 “[A] computer
architecture in which a number of processors are interconnected in a manner suggestive of
connections between neurons in a human brain and which is able to learn by a process of
trial and error.”48
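A minimal sketch of the “interconnected processors” idea, written with NumPy; this is our own illustration, not a quoted source. The weights are random, so the output is meaningless until the network learns by trial and error, i.e. by adjusting the weights to reduce error on examples.

    # Requires: pip install numpy
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.random(4)          # 4 input "neurons"
    W1 = rng.random((3, 4))    # connections: input layer -> 3 hidden neurons
    W2 = rng.random((1, 3))    # connections: hidden layer -> 1 output neuron

    hidden = np.tanh(W1 @ x)   # each hidden neuron sums its inputs, then "fires"
    output = W2 @ hidden       # the output neuron does the same
    print(output)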

Prompt Engineering:
“Prompts are instructions given to an LLM to enforce rules, automate processes, and
ensure specific qualities (and quantities) of generated output. Prompts are also a form of
programming that can customise the outputs and interactions with an LLM.”49
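A minimal sketch of a prompt as a form of “programming”: rules and output constraints are written into the instruction sent to the model. The send_to_llm() call is a hypothetical placeholder, not a real API; substitute whatever client your model provider offers.

    PROMPT_TEMPLATE = """You are a sub-editor for a news site.
    Rules:
    - Summarise the article below in exactly two sentences.
    - Use neutral, factual language; do not add information.
    - Output plain text only, with no headings.

    Article:
    {article}
    """

    def build_prompt(article: str) -> str:
        """Enforce the same rules on every request by templating them."""
        return PROMPT_TEMPLATE.format(article=article)

    prompt = build_prompt("(article text goes here)")
    # response = send_to_llm(prompt)  # hypothetical placeholder call
    print(prompt)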

Search Engine Optimization (SEO):
“In simple terms, SEO means the process of improving your website to increase its visibility in
Google, Microsoft Bing, and other search engines whenever people search for products you
sell, services you provide, [or] information on topics in which you have deep expertise and/or
experience. The better visibility your pages have in search results, the more likely you are to
be found and clicked on.”50

Synthetic Media:
“Synthetic media is an umbrella term that refers to digital content generated by AI or
algorithmic means, often with the intention of appearing real.”51 Deepfakes are one type of
synthetic media.

References

Introduction
1. Brennen, J. Scott, et al. “An Industry-Led Debate: How UK Media Cover Artificial Intelligence.” Reuters Institute, 2018, https://reutersinstitute.politics.ox.ac.uk/sites/default/files/2018-12/Brennen_UK_Media_Coverage_of_AI_FINAL.pdf. Accessed 14 August 2023.

2. Foy, Peter. “What is Generative AI? Key Concepts & Use Cases.” MLQ.ai, 5 December 2022, https://www.mlq.ai/what-is-generative-ai/. Accessed 10 August 2023.

3. Russell, Adrienne. Networked: A Contemporary History of News in Transition. Wiley, 2011.

4. Chadwick, Andrew. The Hybrid Media System: Politics and Power. Oxford University Press, USA, 2013.

Chapter 1
5. Maldita. “Disinformation on WhatsApp: Maldita.es’ chatbot and the ‘Frequently Forwarded’ attribute.” Maldita.es, 3 June 2021, https://maldita.es/nosotros/20210603/disinformation-whatsapp-chatbot-frequently-forwarded-attribute. Accessed 14 August 2023.

6. Neil Patel. “Ubersuggest: Free Keyword Research Tool.” Neil Patel, https://neilpatel.com/ubersuggest/. Accessed 14 August 2023.

7. Bloomberg. “Introducing BloombergGPT, Bloomberg’s 50-billion parameter large language model, purpose-built from scratch for finance.” Bloomberg.com, 30 March 2023, https://www.bloomberg.com/company/press/bloomberggpt-50-billion-parameter-llm-tuned-finance/. Accessed 14 August 2023.

8. The Washington Post. “The Washington Post leverages automated storytelling to cover high school football.” Washington Post, 1 September 2017, https://www.washingtonpost.com/pr/wp/2017/09/01/the-washington-post-leverages-heliograf-to-cover-high-school-football/. Accessed 14 August 2023.

9. Kunova, Marcela. “The Times employs an AI-powered ‘digital butler’ JAMES to serve personalised news.” Journalism.co.uk, 24 May 2019, https://www.journalism.co.uk/news/the-times-employs-an-ai-powered-digital-butler-james-to-serve-personalised-news/s2/a739273/. Accessed 14 August 2023.

10. Czech Radio. “Artificial Intelligence Writes Stories for Czech Radio. The Launch of the Digital Writer Project.” Czech Radio, https://www.czech.radio/artificial-intelligence-writes-stories-czech-radio-launch-digital-writer-project-8384063. Accessed 14 August 2023.

11. Kobie, Nicole. “Reuters is taking a big gamble on AI-supported journalism.” Wired UK, 10 March 2018, https://www.wired.co.uk/article/reuters-artificial-intelligence-journalism-newsroom-ai-lynx-insight. Accessed 14 August 2023.

12. ArcXP. Arc XP: Enterprise CMS and DXP solution, https://www.arcxp.com/. Accessed 15 August 2023.

13. Abels, Grace. “What is the future of automated fact-checking? Fact-checkers discuss.” Poynter, 28 June 2022, https://www.poynter.org/fact-checking/2022/how-will-automated-fact-checking-work/. Accessed 14 August 2023.

14. Reuters. “Reuters News Tracer.” Reuters News Agency, 15 May 2017, https://www.reutersagency.com/en/reuters-community/reuters-news-tracer-filtering-through-the-noise-of-social-media/. Accessed 14 August 2023.

15. Campos, Alba Martín. “Los servicios públicos externalizados por el Gobierno: del reparto de vacunas a la destrucción de narcolanchas en Cádiz.” Newtral, 29 March 2022, https://www.newtral.es/servicios/. Accessed 14 August 2023.

16. Adair, Bill. “FactStream app now shows the latest fact-checks from Post, FactCheck.org and PolitiFact.” Reporters’ Lab, 7 October 2018, https://reporterslab.org/factstream/. Accessed 14 August 2023.

Chapter 5
17. NVIDIA. “Generative AI – What is it and How Does it Work?” NVIDIA, https://www.nvidia.com/en-us/glossary/data-science/generative-ai/. Accessed 28 August 2023.

Chapter 6
18. Yu, Danni, et al. “The ‘AI divide’ between the Global North and Global South.” The World Economic Forum, 16 January 2023, https://www.weforum.org/agenda/2023/01/davos23-ai-divide-global-north-global-south/. Accessed 23 August 2023.

19. Chan, Alan, et al. “The Limits of Global Inclusion in AI Development.” arXiv, 2 February 2021, https://arxiv.org/abs/2102.01265. Accessed 23 August 2023.

20. Braff, Lara, and Katie Nelson. “Chapter 15: The Global North: Introducing the Region – Gendered Lives.” Milne Publishing, https://milnepublishing.geneseo.edu/genderedlives/chapter/chapter-15-the-global-north-introducing-the-region/. Accessed 23 August 2023.

21. Braff, Lara, and Katie Nelson. “Chapter 15: The Global North: Introducing the Region – Gendered Lives.” Milne Publishing, https://milnepublishing.geneseo.edu/genderedlives/chapter/chapter-15-the-global-north-introducing-the-region/. Accessed 23 August 2023.

22. Braff, Lara, and Katie Nelson. “Chapter 15: The Global North: Introducing the Region – Gendered Lives.” Milne Publishing, https://milnepublishing.geneseo.edu/genderedlives/chapter/chapter-15-the-global-north-introducing-the-region/. Accessed 23 August 2023.

23. OpenAI. “Supported countries – OpenAI API.” OpenAI platform, https://platform.openai.com/docs/supported-countries. Accessed 23 August 2023.

24. EdGavit. “How to Use Chatgpt in Egypt: 8 Proven Method Step-By-Step Guide.” GptCypher.com, 28 June 2023, https://gptcypher.com/how-to-use-chatgpt-in-egypt/#1_REGULATORY_CONSTRAINTS. Accessed 23 August 2023.

25. Najibi, Alex. “Racial Discrimination in Face Recognition Technology.” Science in the News, 24 October 2020, https://sitn.hms.harvard.edu/flash/2020/racial-discrimination-in-face-recognition-technology/. Accessed 23 August 2023.

26. Chan, Alan, et al. “The Limits of Global Inclusion in AI Development.” arXiv, 2 February 2021, https://arxiv.org/abs/2102.01265. Accessed 23 August 2023.

27. van Dijck, Jose. “Datafication, dataism and dataveillance: Big Data between scientific paradigm and ideology.” Surveillance & Society, 9 May 2014, https://ojs.library.queensu.ca/index.php/surveillance-and-society/article/view/datafication. Accessed 28 August 2023.

28. Zuboff, Shoshana. “Big other: Surveillance Capitalism and the Prospects of an Information Civilisation.” Journal of Information Technology, vol. 30, no. 1, 2015, https://doi.org/10.1057/jit.2015.5. Accessed 25 August 2023.

29. Andrejevic, Mark. “Automating surveillance.” Surveillance & Society, vol. 17, no. 1-2, 2019, https://research.monash.edu/en/publications/automating-surveillance. Accessed 25 August 2023.

Glossary
30. “Algorithm Definition & Meaning.” Merriam-Webster, 7 August 2023, https://www.merriam-webster.com/dictionary/algorithm. Accessed 10 August 2023.

31. “An Industry-Led Debate: How UK Media Cover Artificial Intelligence.” Reuters Institute, https://reutersinstitute.politics.ox.ac.uk/our-research/industry-led-debate-how-uk-media-cover-artificial-intelligence. Accessed 10 August 2023.

32. “Automation Definition & Meaning.” Dictionary.com, https://www.dictionary.com/browse/automation. Accessed 15 August 2023.

33. Gillis, Alexander S. “What is Machine Learning Bias?” TechTarget, https://www.techtarget.com/searchenterpriseai/definition/machine-learning-bias-algorithm-bias-or-AI-bias. Accessed 15 August 2023.

34. “Bot Definition & Meaning.” Dictionary.com, https://www.dictionary.com/browse/bot. Accessed 15 August 2023.

35. Rouse, Margaret. “What is an Internet Bot?” Techopedia, 24 April 2020, https://www.techopedia.com/definition/24063/internet-bot. Accessed 14 August 2023.

36. Rutgers. “What Is Data Mining? A Beginner’s Guide (2022).” Rutgers Bootcamps, https://bootcamp.rutgers.edu/blog/what-is-data-mining/. Accessed 14 August 2023.

37. Vincent, James. “Why we need a better definition of ‘deepfake.’” The Verge, 22 May 2018, https://www.theverge.com/2018/5/22/17380306/deepfake-definition-ai-manipulation-fake-news. Accessed 14 August 2023.

38. Parkin, Simon. “The rise of the deepfake and the threat to democracy.” The Guardian, 22 June 2019, https://www.theguardian.com/technology/ng-interactive/2019/jun/22/the-rise-of-the-deepfake-and-the-threat-to-democracy. Accessed 14 August 2023.

39. Bruce, Peter. “A Deep Dive into Deep Learning.” Scientific American Blogs, 10 April 2019, https://blogs.scientificamerican.com/observations/a-deep-dive-into-deep-learning/. Accessed 14 August 2023.

40. Foy, Peter. “What is Generative AI? Key Concepts & Use Cases.” MLQ.ai, 5 December 2022, https://www.mlq.ai/what-is-generative-ai/. Accessed 14 August 2023.

41. Ribeiro, José Antonio. “ChatGTP and the Generative AI Hallucinations.” Medium, 15 March 2023, https://medium.com/chatgpt-learning/chatgtp-and-the-generative-ai-hallucinations-62feddc72369. Accessed 15 August 2023.

42. Foy, Peter. “What is a Large Language Model (LLM)?” MLQ.ai, 8 December 2022, https://www.mlq.ai/what-is-a-large-language-model-llm/. Accessed 14 August 2023.

43. “What Is the Definition of Machine Learning?” Expert.ai, 14 March 2022, https://www.expertsystem.com/machine-learning-definition/. Accessed 14 August 2023.

44. Russell, John. “Library Guides: Optical Character Recognition (OCR): An Introduction.” Library Guides, 8 December 2022, https://guides.libraries.psu.edu/OCR. Accessed 14 August 2023.

45. Education Ecosystem (LEDU). “A Simple Introduction to Natural Language Processing.” Becoming Human: Artificial Intelligence Magazine, 15 October 2018, https://becominghuman.ai/a-simple-introduction-to-natural-language-processing-ea66a1747b32. Accessed 14 August 2023.

46. Kavlakoglu, Eda. “NLP vs. NLU vs. NLG: the differences between three natural language processing concepts.” IBM, 12 November 2020, https://www.ibm.com/blog/nlp-vs-nlu-vs-nlg-the-differences-between-three-natural-language-processing-concepts/. Accessed 15 August 2023.

47. “Neural network definition and meaning.” Collins English Dictionary, https://www.collinsdictionary.com/dictionary/english/neural-network. Accessed 15 August 2023.

48. “Neural network Definition & Meaning.” Merriam-Webster, 10 August 2023, https://www.merriam-webster.com/dictionary/neural%20network. Accessed 14 August 2023.

49. White, Jules, et al. “A Prompt Pattern Catalog to Enhance Prompt Engineering with ChatGPT.” NASA/ADS, https://ui.adsabs.harvard.edu/abs/2023arXiv230211382W/abstract. Accessed 14 August 2023.

50. Search Engine Land. “What Is SEO – Search Engine Optimization?” Search Engine Land, https://searchengineland.com/guide/what-is-seo. Accessed 14 August 2023.

51. Munts, Maggie. “Zero Trust and Visual Vulnerability: What Does the Deep Fake Era Mean for the Global Digital Economy?” Journal of International Affairs, 21 October 2022, https://jia.sipa.columbia.edu/online-articles/zero-trust-and-visual-vulnerability-what-does-deep-fake-era-mean-global-digital. Accessed 15 August 2023.
Readings & Resources

JournalismAI resources
The JournalismAI Starter Pack – our guide designed to help small and local publishers learn
about the opportunities offered by AI.

The JournalismAI Case Studies Database – our collection of 110+ examples of news
organisations worldwide making use of AI technologies to meet different needs.

Introduction to Machine Learning for Journalists – our short course that covers the basics
of machine learning for journalism.

The JournalismAI Report: New Powers, New Responsibilities
Beckett, C. (November 2019). London School of Economics and Political Science.

Other online resources


Big Data from the South(s): Beyond Data Universalism (2019) Stefania Milan and Emiliano
Treré – This academic article introduces the principles of a theory of datafication of and in the
Global South(s) and calls for a ‘de-Westernization of critical data studies.’

Data Colonialism: Rethinking Big Data’s Relation to the Contemporary Subject (2018).
Nick Couldry and Ulises Mejias. Television & New Media – An academic article that proposes
understanding datafication processes through the history of colonialism. The authors view the
processing of social data as a “new form of data colonialism” that normalises the exploitation
of human beings through data, the same way historic colonialism appropriated territory and
resources for profit.

Elements of AI – A free online course that helps demystify AI, by combining theory with
practical exercises.

Generative AI In The Newsroom – A collection of articles written by journalists using
generative AI in their newsrooms, published by Prof. Nick Diakopoulos.

Large language models, explained with a minimum of maths and jargon (2023)
Lee, T and Trott, S.

Sketching the Field of AI Tools for Local Newsrooms – A database of AI tools for local
newsrooms built by Partnership on AI. (December 2022).

Artificial Intelligence in Local News: A survey of US newsrooms’ AI readiness
Rinehart, A. and Kung, E. (March 2022). Associated Press.

AI, Journalism, and Public Interest Media in Africa
Ogola, G. (May 2023). International Media Support (IMS).

Journalists AI toolbox
(2023) Mike Reilly – a live website listing AI and genAI tools for newsrooms.

Responsible Practices for Synthetic Media – a framework on how to responsibly develop,
create, and share synthetic media: the audiovisual content often generated or modified by AI,
published by Partnership on AI. (February 2023).

Spanish technological development of artificial intelligence applied to journalism:
companies and tools for documentation, production and distribution of information
Sánchez-García, P., Merayo-Álvarez, N., Calvo-Barbero, C., and Diez-Gracia, A. (2023).

Towards Guidelines for Guidelines on the Use of Generative AI in Newsrooms
Cools, H. and Diakopoulos, N. (2023).

Books
Beginner’s prompt handbook: ChatGPT for local news publishers
Amditis, J. (March 2023).
Reporting on artificial intelligence: a handbook for journalism educators
Maarit, J. (Ed). (2023). Unesco.

For a wider selection of articles about the applications and implications of AI in journalism, with
case studies and practical insights, go to blogs.lse.ac.uk/polis. This will be updated regularly.
Please send us suggestions for further readings and resources.

Acknowledgements

The editorial responsibility for the content of this report lies solely with the author, Professor
Charlie Beckett.
Thanks to lead researcher and co-author Mira Yaseen, to Arab Reporters for Investigative
Journalism (ARIJ) for their assistance with research and outreach to MENA-based
organisations, and to Dr Trust Matsilele, James Gatica Matheson and Vivek Mallik-Das for
additional regional data collection and research.
This research project was overseen by the LSE JournalismAI manager Tshepo Tshabalala.
JournalismAI would not have been possible without the support of the Google News
Initiative. Special thanks to GNI’s David Dieudonné for his vital work to make this happen.
Although they did not contribute directly to this report, credit should also be given to
JournalismAI’s programme managers, Lakshmi Sivadas and Sabrina Argoub, as well as the
previous manager, Mattia Peretti, whose work over the past three years is the bedrock that
made much of this possible.
Last but not least, we want to thank again the media organisations who made this
report possible by taking the JournalismAI survey. The list follows on the next page
(some organisations opted to participate in this research anonymously and have
not been included in the list below):

NEWS ORGANISATIONS THAT COMPLETED THE
JOURNALISMAI SURVEY

Sub-Saharan Africa
Africa Check
AfricaBrief
Alpha Media Holdings
CGTN
CITEZW
Daily Maverick
Dataphyte
House and Garden Magazine
INK Center for Investigative Journalism
Khwezi Times News
Nairobi News – Nation Media Group
Nation Publications Limited (NPL)
New Vision Printing and Publishing Company Limited
Newskoop Radio News Agency
NTV Uganda
Ohambileyo
Portal Publishing
Primedia Media Broadcasting
Radio Africa Group
Stears
The Post

Asia Pacific
EastMojo
Ekushey Television (ETV)
IE Online Media
Initium Media
KBR
Malaysiakini
NZME
SBS
Scroll.in
Stuff Limited
The Current Pk
The Paper
The Quint
Times Internet
UDN Group
VERA Files

Europe
AFP
Aftonbladet
ARTE G.E.I.E.
Austria Presse Agentur (APA)
Časoris
CMI France
Czech Radio
E24
Ekstra Bladet
Evangelischer Presseverband Für Bayern (EPV)
Group Nice-Matin
Maldita.es
Newtral
Observador
RTVE
Sveriges Radio
The Economist
VRT

Latin America
Abraji
Chequeado
Cuestión Pública
El Surti
El Tiempo
Folha de Sao Paulo
Il Sole 24 Ore
La Gaceta de Tucumán
La Nación – Argentina
Mutante
Perfil
PodSonhar
Rede Gazeta
T13
TN
TV Azteca
Unitel

International
OCCRP
Reuters
The Associated Press (AP)

Middle East and North Africa (MENA)
AlManassa
AlMasry AlYoum
ARIJ
Daraj
Jummar
Khuyout
Maharat Foundation
Masrawy
MBC Group, Egypt
Megaphone
Nawa Network – media platform of Filastiniyat
Raseef22
Scientific Arab
Ultrasawt
Welad ElBalad

North America
McClatchy
MuckRock
NPR
Semafor
The Texas Tribune
Zenger

POLIS
Journalism at LSE

Get Involved
The author welcomes feedback on this report at [email protected]
If you have any questions about the project, or if you want to be involved in future
JournalismAI initiatives, do not hesitate to get in touch with Tshepo Tshabalala
at [email protected]

JournalismAI, Polis
Department of Media and Communications
The London School of Economics and Political Science
Houghton Street
London WC2A 2AE

blogs.lse.ac.uk/polis/2023/06/26/how-newsrooms-around-the-world-use-ai-a-journalismai-2023-global-survey/

The London School of Economics and Political Science is a School of the University
of London. It is a charity and is incorporated in England as a company limited by
guarantee under the Companies Acts (Reg no 70527).

The School seeks to ensure that people are treated equitably, regardless of age,
disability, race, nationality, ethnic or national origin, gender, religion, sexual
orientation or personal circumstances.

Design: LSE Design Unit (info.lse.ac.uk/staff/divisions/communications-division/design-unit)

All images from unsplash.com:
p.12 – [email protected], p.24 – [email protected], p.38 – Wonderlane,
p.48 – Soundtrap, p.56 – Charlesdeluvio, p.64 – Sajad Nori, p.70 – manas rb,
p.82 – Jake Lorefice, p.88 – Markus Krisetya
