Journalism at LSE
Generating Change
A global survey of what news
organisations are doing with AI
Charlie Beckett and Mira Yaseen
Preface
Our news media world has been turned upside down again. As always, serious technological
change produces both dystopian and utopian hype. Much of this has been generated on social
media by corporate PR and politicians. News coverage and expert commentary have also
veered from excited coverage of positive breakthroughs in fields such as medicine to much
more frightening visions of negative forces unleashed: Generative AI (genAI) is producing a
tidal wave of automated, undetectable disinformation; it will amplify discrimination, extreme
speech and inequalities.
And its impact on journalism? Again, much of the coverage has focused on the unreliability of
many genAI tools and the controversy over their rapacious appetite for other people’s data to train
their algorithms. As the initial storm of hype turns into more practical considerations, we have
been talking to news organisations around the world about this new wave of technological
change. What are they doing with AI and genAI; what might they do in the future; and what
are their hopes and fears for its impact on the sustainability and quality of this hard-pressed
journalism industry?
Whether you are excited or appalled at what genAI can do, this report makes it clear that it is
vital to learn and engage with this technology. It will change the world we report upon. It needs
critical attention from independent but informed journalists. But our survey shows it is also
already changing journalism. It brings exciting opportunities for efficiency and even creativity.
As one respondent told us, “Freeing up time for journalists to continue doing their job is the
greatest impact achieved.”
But it also brings specific and general hazards. The good news from our respondents, at least,
is that they are aware of the opportunities and risks and are beginning to address them. The
best organisations have set up structures to investigate genAI and processes to include all their
staff in its adoption. They have written new guidelines and started to experiment with caution.
This is a critical phase (again!) for news media around the world. Journalists have never been
under so much pressure economically, politically and personally. GenAI will not solve those
problems and it might well add some, too.
Responsible, effective journalism is more needed than ever. We hope this report and our work
at JournalismAI contributes to that mission. We look forward to hearing from you. Let us know
what you are doing and how we can help.
Contents
Preface
The JournalismAI Survey
Executive Summary & Key Findings
Introduction: How Did We Get Here?
Chapter 1: How AI is Being Used in Journalism Today
1.0 How Are Newsrooms Using AI?
1.1 Newsgathering
1.2 News Production
1.3 News Distribution
1.4 Why Newsrooms Use AI
1.5 What is Working and What is Not
Chapter 2: AI Strategy
2.0 The Need for Strategy
2.1 Newsrooms’ AI Strategies
2.2 How Newsroom Processes and Roles are Affected by AI
2.3 Ready for AI?
2.4 The Strategic Challenges to AI Adoption
2.5 Have Newsrooms’ Approaches to AI Integration Changed?
Chapter 3: Ethics and Editorial Policy
3.0 AI’s Impact on Editorial Quality
3.1 Algorithmic Bias
3.2 Newsroom Approaches to Ethical Concerns
3.3 Ethical Implications for Journalism at Large
3.4 The Role of Technology Companies
3.5 The Role of Universities and Intermediary Companies
The JournalismAI Survey
This report is the second global survey that we have conducted. The sample for this report
is bigger with a greater emphasis on geographical diversity. It is based on a survey of 105
news and media organisations from 46 different countries regarding AI and associated
technologies. In 2019 we surveyed 71 news organisations from 32 different countries, of
which only 16 have participated again in this 2023 survey.
This year, we made it a point to reach a more diverse group of participants in terms
of the size of their organisations. We invited small and large newsrooms, including
emerging and legacy organisations. In addition to this, contributions came in from Latin
America, sub-Saharan Africa, the Middle East and North Africa (MENA), Asia Pacific,
Europe, and North America. This necessitated an additional chapter, focusing on regional
challenges to AI adoption.
[Figure: Survey respondents by organisation type: Broadcaster, Newspaper, Magazine, News Agency, Publishing Group and Other]
The purpose of this report is the same as the first: to give a sense of what is happening with
AI and what risks and opportunities it offers. We asked participants how they are engaging
with generative AI (genAI) technologies and its implications for the future of journalism. We
hope it informs the debate, helps news organisations chart their way forward, and guides us
to develop our programmes to support that process.
The survey was supplemented with interviews and conversations at journalism
conferences. We are very grateful to everyone who has shared their thoughts
and experiences with us. The surveys and interviews were conducted between April
and July, 2023.
We do not claim that the survey is representative of the global industry – that would be
almost impossible on an international scale – nor does it equally reflect all viewpoints
within the different parts of news organisations. But it does give an unprecedented insight
into how these technologies are perceived by those people leading their development or
application inside news organisations.
Our respondents represent diverse roles and expertise within their organisations; they
include journalists, technologists, and managers. We encouraged news organisations to
gather representatives from different departments to complete the survey collaboratively.
NB: The list of organisations that completed the survey can be found in the acknowledgments.
The published quotes have generally been anonymised. Some organisation names were
added for context after receiving permission from the authors. Some quotes were edited
lightly to correct literals and for sense. The editorial responsibility for the report lies entirely
with the author.
Executive Summary
& Key Findings
1 Artificial Intelligence (AI) continues to be unevenly distributed among small and large
newsrooms and regionally among Global South and Global North countries.
2 The social and economic benefits of AI are geographically concentrated in the Global
North, which enjoys the infrastructure and resources, while many countries in the
Global South grapple with the social, cultural, and economic repercussions of
post-independence colonialism.
3 More than 75% of respondents use AI in at least one of the areas across the news
value chain of news gathering, production and distribution.
4 Increasing efficiency and productivity to free up journalists for more creative work
were the main drivers for AI integration for more than half the respondents.
5 Around a third of respondents said they had an institutional AI strategy or were
currently developing one.
6 Newsrooms have a wide range of approaches to AI strategy, depending on their size,
mission, and access to resources. Some early adopters are currently focusing on
achieving AI interoperability with existing systems, others have adopted a case-by-
case approach, and some media development organisations are working towards
building AI capacity in regions with low AI literacy.
7 Around a third of respondents believe their organisations are ready to deal with the
challenges of AI adoption in journalism, while almost half said they were only partially
ready or not ready yet.
8 Many respondents said AI integration is changing existing roles within the newsroom
through training and upskilling. Along the same lines, AI is changing the nature of a
journalist’s role and sought-after skills.
9 As we saw in our 2019 report, financial constraints and technical difficulties remain
the most pressing challenges for integrating AI technologies in the newsroom.
10 Ethical concerns are still significant for our respondents; many advocate for
explainable AI and setting ethical guidelines to mitigate algorithmic bias.
11 Designing de-biasing techniques emerged as a highly challenging area for
most respondents.
12 Cultural resistance, fears of job displacement, and scepticism of AI technologies
cannot be discounted.
13 Across the board, respondents noted that mitigating AI integration challenges requires
bridging knowledge gaps among various teams in the newsroom. Similarly, cross-
department collaboration was seen as necessary for achieving effective AI adoption.
14 The challenge of keeping pace with the rapid evolution of AI was consistently
mentioned throughout the survey.
15 About 40% of respondents said their approach to AI has not changed over the past
few years, either because they are still in the beginning of their AI journey or because
AI integration remains limited in their newsrooms. Concurrently, around a quarter said
their organisation’s approach to AI has evolved; they have gained hands-on experience
that helps them think more realistically about AI.
16 More than 60% of respondents are concerned about the ethical implications of AI
integration for editorial quality and other aspects of journalism. Journalists are trying
to figure out how to integrate AI technologies in their work while upholding journalistic
values like accuracy, fairness, and transparency.
17 Respondents called for transparency from the designers of AI systems and
technology companies, and from users, namely newsrooms, towards their audiences.
18 Journalists and mediamakers continued to stress the need for a ‘human in the loop
approach,’ in line with the results in our 2019 survey.
19 There are fears that AI technologies would further commercialise journalism,
boosting poor-quality and polarising content, leading to a further decline in public
trust in journalism.
20 Tech companies are driving innovation in AI and other technologies, but survey
participants voiced concerns about their profit-driven nature, the concentration of
power they enjoy, and their lack of transparency.
21 Around 80% of the respondents expect a larger role for AI in their newsrooms
in the future.
22 Survey participants expect AI to influence four main areas:
1 Fact-checking and disinformation analysis
2 Content personalisation and automation
3 Text summarisation and generation
4 Using chatbots to conduct preliminary interviews and gauge public sentiment
on issues
23 There are concerns that AI will exacerbate sustainability challenges facing
less-resourced newsrooms which are still finding their feet in a highly digitised world
and an increasingly AI-powered industry.
24 Almost 43% of responses emphasised the importance of training journalists and other
personnel in AI literacy and other nascent skills like prompt engineering.
25 The vast majority welcomed more collaboration between newsrooms and other
media organisations and academic institutions, hoping it would help lessen the
disparity between small and large newsrooms, as well as regionally between
newsrooms in Global North and Global South countries.
26 The need for a balancing act between tech and journalism, a theme that also emerged
in our 2019 survey, remains imperative to a future where AI technologies are leveraged
to serve journalism and its mission.
27 The vast majority of respondents, around 85%, have at least experimented with
generative AI (genAI) technologies in a range of ways such as writing code, image
generation, and authoring summaries.
28 Some are apprehensive about using genAI in editorial tasks, while others are using
them regularly in coding, headline generation, and search engine optimisation.
29 There was a high level of agreement among participants that genAI presented a
new set of opportunities not provided by traditional AI. They highlighted some of
the affordances of genAI, such as accessibility and low requirements for advanced
technical skills.
30 Respondents were much more divided – almost half were not sure – as to whether
genAI also presented a new set of challenges. Some believe genAI presents similar
challenges to traditional AI, such as algorithmic bias, but raises the risk ceiling to a
new level.
31 Newsrooms globally contend with challenges related to AI integration, but the
challenges are more pronounced for newsrooms in the Global South. Respondents
highlighted language, infrastructural, and political challenges.
Introduction
How Did We Get Here?
Artificial Intelligence has played a significant role in journalism for some years. The LSE
JournalismAI project started back in 2019 and our first global report published in the
same year showed that it was a key emerging set of technologies. AI was producing
efficiencies for newswork and it was also creating opportunities for new practices and
products or services.
We showed in that previous report that a range of news organisations were using AI
across the journalism process from newsgathering, content creation and distribution,
to marketing and revenue-gathering. A varied set of technologies were being used, with
programmes based on training software to manipulate data. Advances in machine-
learning and Natural Language Processing (NLP) enabled newsrooms to build or adapt
tools and systems to support their journalism.
Generally, these were large-scale but relatively basic functions such as scraping
social media or automating very simple content creation. AI was used by investigative
journalists to comb through large document leaks or to help automate paywalls and to
personalise content in straightforward ways. Some uses of machine learning – such as
search – were so routine and universal they were taken for granted.
In 2019 we found that news organisations were facing various challenges in adopting AI.
There was a lack of general knowledge, specific skills, and resources. There were also
inequalities between big news organisations and smaller ones, especially those in
non-English-speaking or less developed markets.
Working with news organisations over the last five years, we could see that the impact
of AI was systemic and accelerating, just as it was in other industries and sectors.
The most successful organisations were those that took a strategic, holistic approach
and who recognised that these technologies required fundamental self-analysis of the
organisation’s capabilities and future planning.
In the wider context it is possible to see AI as a third wave of technological change
for journalism. The first wave was going online, accompanied by the digitalisation of
tools and the shift to mobile. The second wave was the arrival of social media and the
impact that had on content creation, consumption and competition. The technology
platforms now provided much of the infrastructure for journalism and the ‘user’ was
central to its dissemination.
The arrival of generative AI (genAI) in the last year has accelerated all these trends
and created new disruptions. This report is a survey of how news organisations have
continued to develop ‘traditional’ AI and how they are approaching the new challenges of
genAI. Clearly, it presents fresh opportunities, but it has special risks and characteristics.
There are continuities. Most news organisations we spoke to were taking a more
strategic approach to genAI, often based on the lessons from dealing with AI and other
technology beforehand.
It is important to stress that genAI is probably the most rapidly emerging technology for
media in this digital era. Some of the more extreme dystopian critiques and over-heated
marketing hype have distracted from a proper debate about immediate concerns. It
is good that we are now all aware of AI and able to interact directly with it and explore
its force and flaws. It is hoped that we will have a more inclusive debate about what it
means for society in general and journalism in particular.
Journalism is a special practice. On the one hand it is around the world a sector under
great commercial, political and competitive pressure. It is weak in resources compared
to the giant corporations developing this technology. The potential for deep structural
threats to journalism in the future must be part of our thinking now. On the other hand,
news organisations have shown remarkable resilience and innovation in sustaining and
sometimes thriving despite the challenges they have faced. It might even be that in a
world where genAI is such a power, for ill as well as good, public interest journalism will
be more important than ever.
We wanted to know if our respondents had an operational definition of AI. As they did in
2019, the responses reflected quite varied understandings of AI, echoing once again the
fluidity of the term and the complexity of the topic.
Some respondents offered a clear operational definition of AI as the use of machines or
computer systems to perform tasks that traditionally required human intelligence. Many
offered technical definitions that centred around the concepts of “automation”, “machine
learning” and “algorithms.” Almost half the respondents used one or more of those terms
in their definitions:
It entails the creation of algorithms and models that allow machines to carry
out operations like speech recognition, visual perception, problem solving, and
decision-making that ordinarily require human intelligence.
Other respondents related their operational definitions of AI to its potential benefits and
their motives for integrating it in the newsroom, such as increasing efficiency, or better
serving the newsroom’s audience and mission:
For us, AI represents a group of technologies that can assist and empower
[our team] by providing insights and automated support across a range of
editorial, operational and communications tasks.
Technologies used to automate gathering and analysis of data that serve our
editorial niche and mission.
Several respondents emphasised the importance of ethical considerations in AI
development, while others mentioned concerns about the opacity of AI systems or the
need for human oversight.
A few respondents said they did not have a working definition of AI yet:
We do not have a collective working definition yet. Mine, as the person in charge
of exploring AI within the newsroom, is that AI is a set of processes that a computer
does to aid and facilitate humans’ work, adding intelligence to it. By no means does it
replace human presence, and it should always be checked and accompanied.
This report is presented in seven chapters. In order to facilitate comparisons between
this and the 2019 report, we kept the majority of the chapters the same, with the
exception of two new chapters.
The Introduction gives a brief background of the findings from the 2019 report and a
summative overview of the technological changes seen in the journalism industry over
the past years. We define key issues and summarise what you can expect in
this report.
Chapter One focuses on how AI is currently being used by newsrooms. The chapter
looks at how newsrooms are using AI across the news value chain as well as what has
been working and what has not been working.
Chapter Two unpacks newsrooms’ AI strategies, or lack thereof. We look at the types of AI
approaches newsrooms have undertaken, some of the key challenges and what impact
the technology can have on them.
Chapter Three is also similar to the previous report as we expand on ethics and
editorial policy.
Chapter Four looks to the future and role of AI in journalism.
Chapter Five touches on generative AI and journalism. It is a new chapter that looks at
the current use cases of genAI, as well as its opportunities and challenges.
Chapter Six reflects on the global disparity in AI development and adoption as well as
the challenges faced by the majority of the world’s population in the Global South.
The Conclusion ties all the above-mentioned chapters together and gives a brief analysis
of what all this means for journalism. We conclude the main body of the report with a six-
step roadmap towards an AI strategy that newsrooms could borrow from. You will also
find a glossary, endnotes, references and a list of suggested readings and resources.
This work was funded by the Google News Initiative and carried out by a team led by
Professor Charlie Beckett, director of the LSE’s international journalism think-tank, Polis.
We would like to thank all the journalists, technologists and researchers who took part in
the project. The project was managed by Tshepo Tshabalala and the lead researcher and
co-author was Mira Yaseen.
Chapter 1
How AI is Being Used
in Journalism Today
1.0 How Are Newsrooms Using AI?

[Figure: Share of respondents using AI across the news value chain: newsgathering, news production and news distribution]
1.1 Newsgathering
AI applications can assist newsrooms in gathering material from various sources
and helping the editorial team gauge an audience’s interests as part of a data-driven
production cycle. The responses revealed that a large majority, almost three quarters of
organisations, use AI tools in newsgathering. The responses focused on two main areas:
1 Optical character recognition (OCR), Speech-to-Text, and Text Extraction:
Using AI tools to automate transcription, extract text from images, and structure
data after gathering.
2 Trend Detection and News Discovery: AI applications that can sift through large
amounts of data and detect patterns, such as data mining.
We list more detailed examples below of these two main areas of application of AI
in newsgathering.
1 Optical character recognition (OCR), Speech-to-Text, and Text Extraction:
The use of AI-powered tools for speech-to-text transcription and automated translation,
such as Colibri.ai, SpeechText.ai, Otter.ai, and Whisper, was a widely cited area of use.
They help streamline the production process and allow newsrooms to engage with
content in different languages:
Transcription services like Otter are invaluable for reporters on deadline, and our
tag tool streamlines production processes for editors.
For others, inaccuracies related to accent or language limitations mean the benefits of
transcription tools are not yet as accessible.
AI technologies provide a universal set of challenges pertaining to ethical and other
considerations that apply to industries and to newsrooms globally. However, early on in
the survey, we began to see an additional set of challenges, such as AI tools’ language
limitations. Newsrooms in Global South countries must contend with these constraints
from the first stage of newsgathering to news production. (More on this in Chapter 6).
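To make the transcription step above concrete, here is a minimal illustrative sketch using the open-source Whisper model; it assumes the openai-whisper package and a hypothetical audio file, and is an illustration rather than any respondent's actual setup:

```python
# Minimal transcription sketch with OpenAI's open-source Whisper model
# (pip install openai-whisper). The audio file name is a placeholder;
# Whisper detects the spoken language automatically by default.
import whisper

model = whisper.load_model("base")           # small multilingual checkpoint
result = model.transcribe("interview.mp3")   # returns text plus timed segments

print(result["text"])                        # the full transcript
for seg in result["segments"]:               # per-segment timestamps
    print(f"[{seg['start']:.1f}s-{seg['end']:.1f}s] {seg['text']}")
```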
2 Trend Detection and News Discovery:
AI applications help journalists uncover issues of interest to audiences in different
regions and get a sense of what they think about particular issues. Several respondents
mentioned using tools like Google Trends, web scraping, and data mining services like
Dataminr and Rapidminer to identify trending topics, detect news of interest, and gather
data from various sources to uncover stories. Here are some examples from our survey:
We use software like Rapidminer and other Google initiatives to mine data to
detect trending topics and news of interest around the world.
Mostly automations by webhooks feeding into Slack. We have also built our own
scraping services feeding us information when a certain threshold is reached in the
data that we scrape.
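As a rough illustration of the threshold-based Slack alerting this respondent describes, the sketch below assumes a hypothetical Slack incoming-webhook URL, target page, keyword and threshold, none of which come from the survey:

```python
# Illustrative sketch only: scrape a page, count keyword mentions, and post
# a Slack alert via an incoming webhook when a threshold is crossed.
# The webhook URL, page, keyword and threshold are all placeholders.
import requests
from bs4 import BeautifulSoup

SLACK_WEBHOOK = "https://hooks.slack.com/services/T000/B000/XXXX"  # placeholder
THRESHOLD = 100  # hypothetical alert threshold

def scrape_mention_count(url: str, keyword: str) -> int:
    html = requests.get(url, timeout=10).text
    text = BeautifulSoup(html, "html.parser").get_text().lower()
    return text.count(keyword.lower())

count = scrape_mention_count("https://example.com/news", "wildfire")
if count >= THRESHOLD:
    requests.post(SLACK_WEBHOOK, json={"text": f"Alert: 'wildfire' mentioned {count} times"})
```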
We have an internal tool that includes an automated tagger for news website
articles and social media posts (which tags articles with topics/keywords) to collect
specific discourses on issues of accountability and classify them by topic. We use
neural networks for natural language sentiment analysis of refugee-related data
using Google Cloud APIs. Other APIs for analytics include the Lebanon protests
platform, used to collect data on protest discourses and analyse main influences
(genders and job positions in profiles).
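The Google Cloud sentiment step this respondent mentions can be reproduced in outline with the google-cloud-language client library; the sketch below is a minimal, assumption-laden example (placeholder text; credentials supplied via the GOOGLE_APPLICATION_CREDENTIALS environment variable), not the respondent's actual pipeline:

```python
# Illustrative sentiment-analysis sketch with the Google Cloud Natural
# Language API (pip install google-cloud-language). Text is a placeholder.
from google.cloud import language_v1

client = language_v1.LanguageServiceClient()  # reads credentials from the environment

document = language_v1.Document(
    content="The new shelter gave displaced families real hope.",  # placeholder post
    type_=language_v1.Document.Type.PLAIN_TEXT,
)
sentiment = client.analyze_sentiment(request={"document": document}).document_sentiment

# score runs from -1 (negative) to 1 (positive); magnitude is overall strength
print(f"score={sentiment.score:.2f}, magnitude={sentiment.magnitude:.2f}")
```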
We have developed a tool with the OCCRP [Organized Crime and Corruption
Reporting Project] team to “Arabize” their engine by extracting hundreds of
thousands of pages to the ARIJ [Arab Reporters for Investigative Journalism]
datadesk using Google Optical Character Recognition (OCR) services, and we built
[our own] crawler to collect the data from specific resources to be cleansed by
researchers and journalists, then uploaded to our domain.
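The respondent above used Google's OCR services; an open-source analogue of that extraction step, using Tesseract via pytesseract, looks roughly like the sketch below (the file name is a placeholder, and the Arabic language pack must be installed separately):

```python
# Illustrative OCR sketch with Tesseract (pip install pytesseract pillow);
# requires the Tesseract binary and its Arabic ("ara") language data.
from PIL import Image
import pytesseract

page = Image.open("document_page.png")                # placeholder scan
text = pytesseract.image_to_string(page, lang="ara")  # Arabic OCR
print(text)
```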
1.2 News Production
Newsrooms are already experimenting with and using genAI technologies like ChatGPT
in content production tasks, including the production of summaries, headlines, visual
storytelling, targeted newsletters, and the assessment of different data sources.
GenAI tools like ChatGPT are also being used to assist with code writing and
source assessments:
For production I am using ChatGPT to help with code writing. I have made a few
games/quizzes where even though the code is not completely written by ChatGPT, it
has certainly written quite a few functions.
We have also used either the ChatGPT interface or the OpenAI API to rationalise
different data sources.
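As a hedged illustration of what “rationalising” disparate data sources with the OpenAI API might look like, the sketch below maps messy category labels onto one canonical schema; the model name, labels and prompt are illustrative assumptions, not taken from any respondent:

```python
# Illustrative sketch: normalise inconsistent source labels with an LLM.
# Assumes the openai package (>=1.0) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

messy_labels = ["Govt spending", "public expenditure", "state budget outlays"]  # placeholders
prompt = (
    "Map each label to one canonical category from "
    "[public_finance, health, education]. Reply as 'label -> category', one per line.\n"
    + "\n".join(messy_labels)
)

response = client.chat.completions.create(
    model="gpt-4o-mini",          # illustrative model choice
    messages=[{"role": "user", "content": prompt}],
    temperature=0,                # favour deterministic, schema-like output
)
print(response.choices[0].message.content)
```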
AI technologies like Grammarly and spell checking tools are employed for editing,
proofreading, and improving the quality of written content.
1.3 News Distribution
Respondents shared examples of using personalisation and recommendation systems
to match content more accurately and at scale with interested audiences. Or the other
way around, tailoring content to a specific medium or audience:
Recommender system for podcast episodes, using the EBU Peach engine.
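The EBU Peach engine is a dedicated product; purely to show the underlying idea of content-based recommendation, here is a minimal sketch with scikit-learn over a placeholder corpus:

```python
# Minimal content-based recommendation sketch: TF-IDF vectors plus cosine
# similarity. The three articles are placeholders, not survey data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

articles = [
    "Parliament debates new climate bill",
    "Heatwave strains the national power grid",
    "Local team wins the championship final",
]

tfidf = TfidfVectorizer(stop_words="english").fit_transform(articles)
scores = cosine_similarity(tfidf[0], tfidf).ravel()  # similarity to article 0

# Recommend the most similar article that is not the item itself
best = max((i for i in range(len(articles)) if i != 0), key=lambda i: scores[i])
print("Readers of article 0 might also like:", articles[best])
```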
AI-powered social media distribution tools like Echobox and SocialFlow were
mentioned by several respondents, who said they used them to optimise social media
content scheduling.
Respondents also mentioned using chatbots to create more personalised experiences
and achieve faster response rates:
The WhatsApp chatbot is also used for news distribution, as users immediately
receive a link to our debunk if we have already verified the content they sent. Also, it
sends daily text and audio summaries with Maldita’s top stories.
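Maldita has not published the code behind this service; purely as an illustration of the matching step such a chatbot needs, the sketch below pairs incoming messages with already-verified debunks using Python's standard library (the claims and URLs are invented):

```python
# Illustrative sketch: answer repeat queries automatically by fuzzy-matching
# incoming messages against already-verified debunks (stdlib only).
import difflib

debunks = {  # placeholder claim -> debunk URL pairs
    "drinking hot water cures the virus": "https://example.org/debunks/101",
    "5g towers spread the disease": "https://example.org/debunks/102",
}

def match_debunk(message: str):
    hits = difflib.get_close_matches(message.lower(), debunks.keys(), n=1, cutoff=0.6)
    return debunks[hits[0]] if hits else None  # None -> route to a journalist

print(match_debunk("Is it true that drinking hot water cures the virus?"))
```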
Enhancing the visibility of content in searches is key for all digital content, not least for
newsrooms. AI-driven SEO tools can help newsrooms boost discoverability and better
understand their audiences’ interests:
We mostly make use of SEO to help increase the visibility of our stories on our
website. We have found that human interest local stories tend to do better than stories
about celebrities or other topics.
Ubersuggest helps me see which keywords are highly searched online, Google
Discover shows me which stories and keywords are trending, CrowdTangle shows
me which social media posts are over performing. This helps me create relevant
news stories that people are interested in. Using SEO keywords that are searched
often increases the likelihood of the stories reaching a higher number of people.
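For keyword and trend research of the kind described above, one rough option is the unofficial pytrends wrapper around Google Trends; the sketch below is a minimal example with placeholder keywords and timeframe, and is not a tool any respondent named:

```python
# Illustrative sketch: relative search interest via the unofficial pytrends
# wrapper (pip install pytrends). Keywords and timeframe are placeholders.
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-US", tz=0)
pytrends.build_payload(["cost of living", "housing crisis"], timeframe="now 7-d")

interest = pytrends.interest_over_time()  # pandas DataFrame of 0-100 scores
print(interest.tail())                    # most recent data points
```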
We asked our respondents to share some of the impressive applications of AI
technology they have come across which are used by media organisations.
1.4 Why Newsrooms Use AI
Clearly, the integration of AI applications has the potential to streamline various aspects
of journalistic work. However, we sought to delve into the underlying incentives of the
respondents in employing AI. More than half cited increasing efficiency and enhancing
productivity as core objectives driving their adoption of AI. They said they hoped to
automate monotonous and repetitive tasks, thereby streamlining workflows and allowing
journalists to engage in “more creative, relevant, and innovative work”:
Many of our traditional news processes can be quite laborious and are reliant on
human instinct that can vary drastically from person-to-person. Machine learning (or
AI) should ideally streamline many of those newsroom processes, give insight into
the viability of current processes and ultimately free up the ‘human element’ to
focus on other areas.
Almost all our use cases for AI are to speed up news production. It’s always
about speeding up, I don’t think I’ve had one conversation about using it to
improve quality.
For the fact-checkers at Madrid-based Maldita, the impact of AI tools was felt strongly
during the Covid-19 pandemic, as they helped accelerate and scale the organisation’s
response to Covid-19 misinformation:
By automating some tasks we are able to dedicate more time to other important
things like fact-checking or investigations. It also allows our readers to receive quick
answers when they inquire about a potential hoax. For instance, during the first
weeks of the Covid-19 pandemic, our WhatsApp service was manual, meaning that
a Maldita journalist would have to filter through all messages and count how many
times content had been sent to us. We went from receiving 200 daily queries to over
2,000 during lockdown, which meant that we could simply not get back to all users
at a time when they desperately needed answers (some of the disinformation they
were receiving could be seriously harmful for their health).
Around a third of respondents said they hoped AI technologies would help them reach a
wider audience, personalise reader experiences, and enhance audience engagement, a
theme that featured strongly in the previous section about AI uses in news distribution:
1.5 What is Working and What is Not

Scraping web pages and creating Slack alerts based on filters have been the
most successful applications so far.
Proofreading and basic copy editing have been very successful; video
production using stable diffusion has also worked well.
Recommender systems and NLP systems affecting distribution have been the
most significant success.
Respondents highlighted that even with successful AI applications, testing and
improvement are continuous, reflecting the evolving nature of AI and the consistent need
for human intervention:
Many respondents, often smaller, emerging newsrooms, are still in the early stages of
AI adoption:
It is still too early to ascertain any failures, we have been testing many individual
tools and integrations, most of them have been helpful but none of them are heavily
integrated into our workflow.
Other than language challenges, very few respondents mentioned failures in specific
AI applications. When failures were discussed, some respondents attributed them to
organisational issues rather than to technical limitations:
The biggest failure has been slow progression on already identified use cases
because of organisational issues, lack of focus and resources.
For some of our third-party available machine learning offerings, we found that
we didn’t have a strong onboarding process or clear explanations, so uptake has
been slower than anticipated.
One respondent explained how their organisation decided to discontinue their work on
an “automated service to write short stories about companies’ performance in the stock
market,” because it did not gain popularity with the audience:
[It] did not create enough user value (the users rather looked at the stock graph),
and when the pandemic hit and all stocks went south, our thresholds were reached
for almost all companies, spamming our users.
Chapter 2
AI Strategy
We are currently developing a plan for an AI strategy that cuts across all
departments at AP. We have one working group that has been tasked with reviewing
aspects of the news operation for opportunities and avoidance of AI. A tool or
service needs to meet our journalistic standards and business mission to support
AP members and customers.
Some organisations take on a two-pronged approach in their AI strategy, working with
technology partners while enhancing their own in-house capabilities:
We partner with vendors that are moving fast, so we can move quickly,
too. Meanwhile, we are building in-house capabilities so we can have control
and ownership.
Jordan-based ARIJ, a media development hub in the Arab region, recently launched its AI
strategy to guide the organisation internally, but they plan to make it accessible to help
guide other Arabic-speaking media organisations in their own AI integration efforts:
[We] will provide this strategy in a playbook style to all Arabic-speaking
newsrooms so they can benefit from it in developing their own strategy.
Depending on many factors, having a strategy might not be needed at all. Some
respondents said they adopt a case-by-case approach to AI, without necessarily
developing an institutional-level strategy. They focus on how they can achieve their
objectives, whether through AI or other conventional technologies.
Even those with comprehensive AI strategies, such as AfricaBrief, highlight the need
to incorporate training and to continuously evolve their strategy to adapt to nascent AI
technologies, such as generative AI. Their responses reflect the challenge of keeping up
with the fast-paced evolution of AI technologies, a consistent theme throughout the survey:
Right now we are fine-tuning our strategy in order to take into account the recent
development of GPT.
Several newsrooms that have not developed an AI strategy said they plan to do so in the
near future. For some, the absence of an AI strategy seems to be the result of competing
newsroom priorities and a lack of resources, rather than a lack of interest. Respondents
expressed their newsrooms’ support for individual efforts in experimentation, reflecting the
fact that many newsrooms have not reached an institutional level of AI integration:
Our organisation does not have a formal strategy for AI-related activities.
We rely on the initiative and enthusiasm of some of our colleagues who are
interested in AI.
Not yet. We have been training some team members and searching for funding
to design and develop products that involve [AI].
The responsibility to develop and lead AI integration differs from one newsroom to another:

- Innovation/Digital teams: 29%
- A dedicated cross-functional team: 26%
- Other*: 26%
- Data team: 9%

*Includes other departments, such as IT, business, management, editorial and product.
2.2 How Newsroom Processes and Roles are Affected by AI
Whether they are at the beginning of their AI integration journey or have more experience
with AI technologies, we found newsrooms are dedicating time and resources to building
their AI capacities. We asked respondents if AI integration efforts had impacted their
workflows and processes, as well as existing roles in the newsroom.
Around a quarter of respondents said the impact of AI adoption on their workflows and
processes in the newsroom has been significant. It has helped cut costs; streamlined
and scaled processes; and increased efficiency in fact-checking, social media monitoring,
content distribution, and accounting:
We saved more than 80% in the process of monitoring and searching for
verifiable phrases … We are convinced that this field will have a more positive impact
in the future.
AI has impacted our news production processes, automating tasks like news
gathering and content creation with ChatGPT. It has also streamlined internal
workflows, improved productivity, and holds potential for advanced tasks like NLP
and data analysis.
One respondent explained how the automation of some processes using AI technologies
changes the nature of their work, rather than replaces it:
Freeing up time for journalists to continue doing their job is the greatest
impact achieved.
The majority of respondents, almost 75%, are still in the early stages of AI
adoption and have not witnessed a noticeable impact yet, but expect to in the future:
[It] will definitely have an impact in the future as AI takes over more of the
mundane tasks of newsgathering.
Currently, the impact of AI is not yet significant and widespread, but it already
emerges as an enabler, for sure.
Like more experienced news organisations, they hope AI integration will enable journalists
to spend more time on field work and special projects:
Right now, it does not have a significant impact. However, the impact may be quite
significant if we embrace AI… This will free the journalists to work on other sectors,
especially when they go in the field and remote areas to conduct very unique
interviews and videos for very unique stories that our audiences are interested in.
Not yet, but we are working on new vacancies for AI that include Prompt
engineers, AI and ML engineers, and data scientists.
Not yet because it’s a transition still in early stages. AI is augmenting rather than
totally changing roles.
I could see us creating more AI-specific roles in the future, likely as news
technologists who work closely with journalists.
Some newsrooms said AI integration has led to the creation of new roles related to AI in a
variety of areas, such as data analytics:
AP’s NLG [natural language generation] of earnings reports nearly a decade ago liberated our reporters from
the grind of churning out rote earnings updates and freed them up to do more
meaningful journalism. More recently AP has created three new roles that focus on
AI across news operations and products.
Yes, we have created at least one new role focused on managing AI experiences,
and expect we may have more, but the growth is slow and deliberate. As much as
we can, we are leveraging the talent we have already. For instance: Our real estate
and development editors lead our real estate AI content creation. One of our leading
digital audience producers is overseeing our social media optimisation.
Yes, there was a need to allocate a dedicated data analyst within the team.
Overall, even though the adoption of AI-powered technologies has not always
resulted in the development of new, AI-specific roles, it has prompted the evolution
of current roles and the acquisition of new skills by the staff in order to effectively
use AI technologies in their journalistic endeavours.
Some have already begun building prompt engineering capabilities, but not only within the
IT department.
Other responses echoed a similar need to engage journalists and build on their
capabilities in AI and digital skills, as opposed to relying solely on having the expertise
within the IT department:
Yes, there are new AI-specific roles. The digital team helps us monitor trends,
but as the digital editor, I do that too. Newsgathering and distribution have also
changed. I check trends and write content based on that.
Yes, the journalists had to train the algorithms and, to do so, they received
training about how the algorithm works, what kind of data we need and how to
gain accuracy. On the other side, the journalistic team has shared with the
engineering team the editorial criteria that guide their decisions, by providing keys
on why and what is considered a factual claim.
In line with these responses, it is no surprise that the hiring criteria in newsrooms are
changing, as one respondent remarked:
I think the impact has been more felt in considering who to hire and who not to
hire. I would need fewer writers once AI is deployed.
The responses reflect the ongoing challenge of balancing technical and journalistic skills
throughout AI integration in the newsroom.
2.3 Ready for AI?

These are still relatively new, diverse and complex technologies, coming in the
wake of a series of other digital challenges. So it is not surprising that respondents
are deeply divided on AI readiness. They split approximately in half between those
who felt they were catching the wave and others who hadn’t done more than dip
their toes in the water. There was a vein of optimism, especially as many of our
respondents were early-adopters who feel they have already made the first steps.
This year’s report also showed a disparity in AI readiness across newsrooms. Over the
last five years we have observed a broad increase in preparedness but the arrival of genAI
means news organisations have a fresh set of challenges.
Many newsrooms, around a third, expressed confidence in their readiness to deal with
the challenges of AI adoption in journalism. They emphasised their efforts in advancing
tools and technologies to facilitate their work, as well as their ability to adapt quickly to
changing technologies. They believe they have inquisitive skilled personnel capable of
utilising AI effectively:
We have quite a few people who are interested in AI and skilled in using
technology. I don’t see a problem in the technical challenges.
A large proportion, around 53%, said they were not ready yet or only partially ready to deal
with the challenges of AI integration in the newsroom. They cited financial constraints and
the lack of technical expertise as key challenges. The next section explores this in detail.
2.4 The Strategic Challenges to AI Adoption

[Figure: Challenges to AI adoption cited by respondents: technical challenges 41%, ethical challenges 25%, cultural challenges 22%, managerial challenges 12%]
Newsrooms were not entirely sure which skillset(s) to look for in technical personnel.
Newsrooms with several years of experience with AI integration in the newsroom
mentioned specifically the challenge of achieving compatibility and interoperability with
existing systems and platforms:
These responses highlight the huge strides some newsrooms have made in AI adoption
at an institutional level. In our 2019 report, many of the respondents, including early
adopters, were at the beginning of their AI journeys. Technical challenges focused on
which projects to prioritise, how to demystify AI and providing general AI literacy training
to personnel.
The responses also highlight a disparity between smaller, emerging newsrooms in Global
South countries on the one hand, and large, well-resourced, more experienced news
organisations in Global North countries. While responses by the former focused on finding
the resources to hire the technical experience needed, the latter have already deployed AI
technologies in various areas and are now focused on achieving interoperability:
Mitigating AI integration challenges goes beyond hiring the right technical staff. It
requires bridging knowledge gaps that exist among various teams in the newsroom,
a challenge that is more consistent across the board. Responses reflected a need to
enhance AI literacy and technical skills among journalists and technical staff alike:
One of the greatest challenges we have is that the Innovations/Technical team
is not very informed on AI and the solutions AI can bring to journalism. We don’t
have in-house experts who can help in AI coding and training. But some have shown
interest in learning and we hope soon they will be well versed with AI.
The challenge of keeping pace with the rapid evolution of AI is also experienced more
evenly by most newsrooms, and is testament to the need for continuous adaptation, a
theme we have seen consistently in the survey:
The technologies are evolving so quickly it’s difficult to know what technology to
take on for fear it will soon be outdated.
The ethical question is the most important because you have to keep it
transparent for the readers.
Algorithmic bias is another concern.
Respondents stressed the need for guidelines, standards, and regulations to ensure
the ethical use of AI in newsrooms and to address the potential risks associated with
its implementation:
Our newsroom managers are great allies, and there are many champions for
experimentation among our ranks. In our newsrooms, there is some fear about the
implications of AI – the impact on jobs, products and subscribers.
Most people understand the big changes happening with AI, but changing
workflows is always hard for any given profession.
The larger the organisation, with multiple layers of management, the harder
it is to experiment without lots of meetings and presentations.
Resistance to, and over-enthusiasm about, AI were also mentioned.

2.5 Have Newsrooms’ Approaches to AI Integration Changed?
Not yet, because so far it has mainly involved the IT department and just a handful
of journalists dedicated to testing and validating them in limited contexts.
There are no significant changes since we have very limited use of AI.
I think we do not yet have the collective consciousness that the tools that we
use are AI, so no, it has not changed.
Generally, they feel more confident about their engagement with AI, and better equipped to
handle the challenges posed by continuously emerging AI technologies:
Our approach became more realistic, taking into consideration our resources and
the very fast evolution of the industry. We became more focused on AI
technologies that can complement the work of journalists in gathering and
aggregating data that is relevant and to help them identify trends and conduct
in-depth analysis.
Hands-on experience with AI technologies in the newsroom has helped some uncover
benefits they were not expecting when they started out:
Although our AI tools were first implemented as a way to save time and be more
effective, we have discovered that the data they gather is very useful to understand
how disinformation works as well as other research uses.
Others noted that more departments are now involved with AI integration efforts in the
newsroom with the goal of adopting an institutional approach to AI, compared to when AI
experimentation efforts were considered the domain of technical experts only:
Since we first began using AI in 2017, it has moved from the engineering team
to the newsroom.
Chapter 3
Ethics and Editorial Policy
AI systems are trained on vast amounts of data, and if the training data contains
biases, those biases can be amplified in the AI outputs. This can lead to biased
content recommendations, skewed perspectives, or unfair representation in news
coverage. It is essential to address and mitigate algorithmic biases to ensure fair
and inclusive journalism.
I don’t trust the current technologies to include perspectives of people who tend
to be marginalised.
AI generated models are built on databases that include bias especially when it
comes to content in Arabic and this will be reflected in the AI generated content.
Although I understand the concept of de-biasing, I don’t even know the steps of
doing so or even how to implement such a strategy.
I can’t say we’ve done that yet but debias training is being talked about. That is
the aspect of AI that we’ve found is the most time consuming so I do worry that it
might not be prioritised.
Designing de-biasing techniques often requires multidisciplinary collaboration.
Several respondents said they did not know whether their organisation deployed any,
while others said their use is still “too limited” for them to develop such techniques.
It is important to keep in mind that our respondents come from journalistic and technical
fields with widely ranging tech expertise which might explain why they did not offer many
examples of de-biasing techniques.
They called for transparency from the designers of AI systems as well as transparency
from those who apply the systems, such as newsrooms. They argued that audiences
should be made aware when AI systems are used in content creation or other tasks:
We need to understand how the algorithm works to be able to trust it. Regimes
are sometimes closely tied with tech companies. So we need transparent AI.
How does AI know what it knows? We must be sceptical of these systems, and
as transparent as possible with editors and readers when we use them.
An emphasis on the need for a ‘human in the loop approach’ has not changed much
since our 2019 survey. Newsrooms continue to view human intervention as crucial to
mitigating potential harms like bias and inaccuracy by AI systems:
The constant and mandatory intervention of the human factor in [AI] integration
is necessary.
It is not always clear how “human” values can be integrated with AI, which explains why
it is difficult to develop and implement ethical guidelines and de-biasing techniques.
Aligning metrics with human values can be complex, as one respondent said:
... Most alignment procedures require translation of values into metrics that can
be operationalised within data science/ML – and something might be lost in
translation here, even when we try to integrate values in our AI systems.
Some respondents suggested keeping editorial tasks AI-free for the time being:
For now, we believe that it is best to keep AI out of direct editorial roles in any
manner, way or form. Editorial decisions are based not just on ethics but on a
variety of factors like real-time situations which can change any minute. AI, we
believe, is not yet equipped to make decisions. However, we do think that in the
coming days AI could assist the editorial team in chalking out strategies related to
distributing workflow.
3.3 Ethical Implications for Journalism at Large
We wanted to know if our respondents thought AI technologies are changing the
public’s perception of journalism and if there are other implications for journalism as an
industry. Their responses centred around two interrelated concerns: The concern that
AI technologies would further commercialise the journalism industry which would likely
lead to the second concern, a decline in public trust in journalism.
[Figure: Respondents’ concerns about AI’s ethical implications, by area: editorial quality, readers’ perceptions and the industry in general]
If journalists rely on AI for content creation the same way as influencers do, it
will be a huge threat to the industry. There have to be rules and boundaries.
If the industry seeks to only maximise revenue then it could have a negative
impact on editorial standards and ethics at large.
According to some respondents, the risk of disenchanting audiences comes at a
time when public trust in journalism generally appears to be eroding:
I am concerned that the public already has declining trust in the media and a
decreased appetite for news. I am not entirely sure what the public’s attitude is
toward AI, but if they are largely sceptical of it, I worry that this might have negative
effects for newsrooms that do use AI in their work.
I worry about how the reader will react if they hear that a story in the
newspaper or website was written by a robot. I worry about the lack of trust of
machines and the apparent absence of a human touch to news gathering, writing,
and distribution.
3.4 The Role of Technology Companies

Tech companies are at the forefront of AI R&D, driving innovation and pushing
the boundaries of what AI can achieve. This has the potential to automate
processes, improve efficiency, and solve complex problems.
They also raised concerns about the profit-incentive driving these innovations and the
concentration of power technology companies enjoy.
Many respondents demanded more transparency from tech companies around the
data used and how the systems are designed. They hoped technology companies
would play a more proactive role in training journalists on AI tools and collaborate with
civil society, media, and government to ensure technical innovations are aligned with
humanistic values.
Several respondents appreciated the accessibility and affordability of some tools they
provide. At the same time, respondents voiced concerns about the ethics of technology
development. They mentioned algorithmic bias created through black boxed AI-systems,
privacy concerns, and accountability issues:
They also face the risk of neglecting ethical issues and social impacts, such as
privacy, fairness, accountability and transparency, in their pursuit of competitive
advantage and profit.
Tech companies often collect and analyse massive amounts of user data to
train their AI systems.
Some highlighted that technology is advancing at a rapid pace that journalism cannot
keep up with:
As a negative: the urgency, eagerness and speed with which they want these
advances to be adopted, in many industries and at all levels. Their market/
commercial fight ends up affecting everyone.
The worst problem is the monopoly, the absence of control, the black boxes and
the fact that they develop tools and technologies that they want us to use without
first asking if we want them or how we want them.
They can enforce a news dependence, as we have also seen with other waves of
new technologies. They can become gatekeepers with a worldview that users of
their technologies have to adapt to (an example is the bias controls that OpenAI have
put in place in the GPT models, which are aligned with their commercial values and a
certain set of American values).
With these critiques in mind, many respondents called for more transparency from
technology companies pertaining to the AI systems they develop and the training data
they use:
We would like to see AI tools focus more heavily on explainability. AI art bots
should develop ethical credit-sharing processes.
They also hoped technology companies will provide more training to journalists on AI
tools that can enhance their work, especially in small newsrooms and organisations in
less-resourced regions:
I would like to see them collaborating with small news agencies such as ours.
We need technology companies to offer free extensive training to community
journalists. Most of the time, community media organisations don’t have the
resources and funding to offer relevant AI training programmes.
They also called on technology companies to pursue more collaboration with journalists,
civil society and governments to ensure the technologies they develop are aligned with
humanistic values:
I would like to see them adopt a more responsible and collaborative approach to
AI, engaging with stakeholders and regulators, and ensuring that their products and
services are aligned with human values and rights.
Also I would like them to engage in informed conversation with journalists all
over the globe even in markets that are not interesting for them and especially
where information gaps are affecting the most vulnerable.
Other respondents highlighted the opportunities tech companies have in leveraging AI
for “social good”:
Tech companies have the opportunity to leverage AI for social good, such as
improving healthcare, addressing climate change, and assisting in disaster
response. They can also contribute to bridging the digital divide by making AI more
accessible and inclusive.
3.5 The Role of Universities and Intermediary Companies

Some explained that academia can play an important role in a much-needed critical
examination of AI and in addressing the ethical question:
They have to do more research on how AI can be used more effectively in public
interest journalism and also come up with a guide on how to use AI.
I believe schools and universities can serve as key catalysts in the adoption of AI
in newsrooms by providing education, research, ethical guidance, and fostering
critical thinking skills.
While welcoming a larger role being played by academic and other institutions in AI
adoption, respondents from various regions said that journalism study programmes
have not effectively evolved to reflect significant technological developments that have
drastically impacted journalism, such as digitisation and the emergence of data journalism:
From our experience, journalism schools in the MENA are not coping with the
digital changes (even before talking about AI). Curricula of most schools do not equip
journalists with necessary knowledge and skills to use digital tools for fact-checking,
for data journalism, digital security etc.
ARIJ (MENA)
Chapter 4
The Future of AI and Journalism
We are rethinking our media and social media monitoring programmes and
methodologies to rely more on AI automation tools and to integrate the analysis of the
role of algorithms in mis- and disinformation and hate speech.
So much time of ours is spent looking for eligible claims to fact-check – whether it
be from social media posts in various platforms, speeches, interviews, news reports,
among others. I think that within the next two to five years, my organisation may
introduce more AI-powered technologies for monitoring disinformation.
We are exploring AI-powered technologies, including chatbots like ChatGPT, to enhance
newsroom operations and engage with the audience through personalised news updates
on messaging platforms.
3 Text summarisation and generation: AI-powered technologies for text summarisation and
generation were mentioned as valuable tools for newsrooms. This includes using generative
language models to produce summaries, titles, and push messages for articles (see the
sketch after this list). Here are some examples from the survey:
We hope to create a service using GPT-4 to “eat” through stock market announcements,
creating easy-to-understand article drafts from them, and by training the model with our
feedback we would make it better and also teach it to tell us what is important and not,
hopefully. It’s in the experimentation phase so far, but we hope to have a prototype by the
summer and then expand on that further.
We will be using generative LLMs for summarisation tasks (e.g. proposal of titles or
push messages).
We are experimenting with using chatbots for headline and SEO title generation, and
summarisation.
We hope to integrate AI tools into the newsroom to help on more high-end editing
tasks, such as suggesting headlines and creation of multiple versions of stories. We are
also exploring new types of news products and forms.
4 Using chatbots to conduct preliminary interviews and gauge public sentiment on issues:
Some respondents expressed interest in using chatbots to conduct preliminary
interviews and gauge public sentiment on specific issues, allowing journalists to identify
interesting cases for further investigation and in-depth interviews.
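To make these workflows concrete, here is a minimal sketch of the kind of summarisation and headline pipeline described in point 3 above, written in Python. It assumes the OpenAI Python client (the openai package) and a GPT-4-class chat model; the function name, prompts and settings are illustrative only and do not represent any respondent's actual system. As respondents stress, drafts produced this way would still go to a human editor.

from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

def draft_from_announcement(announcement: str) -> str:
    """Turn a raw announcement into a plain-language draft summary,
    headline options and a push message, for human editorial review."""
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative; any capable chat model would do
        messages=[
            {"role": "system",
             "content": ("You are a newsroom assistant. Summarise the "
                         "announcement below in plain language, propose "
                         "three headline options and one short push "
                         "message, and flag anything you are unsure of.")},
            {"role": "user", "content": announcement},
        ],
        temperature=0.3,  # keep drafts conservative; an editor reviews the output
    )
    return response.choices[0].message.content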
AI tools for social media monitoring, content curation, news verification, and language
translation were also mentioned as areas of interest. These tools would aid in monitoring
social media platforms, curating relevant content, verifying information, and translating
content across different languages. The goal is to improve news production, content quality,
and audience engagement.
Others, especially smaller newsrooms, are assessing their use of AI and working to align
their future strategy with the resources available to them:
For us now it is very important to evaluate what we have done so far and rethink
what we can realistically invest in in terms of human resources, financial resources
and technology. AI powered technologies are evolving faster than the capacities of
small newsrooms and organisations. We are currently conducting an internal
discussion to strategise our next steps in terms of AI related activities both in our
newsrooms and training and support programmes for other small independent media
in the region.
At the same time, this year’s discussions around training were more focused on specific and
nascent skills like prompt engineering, advanced technologies like large language models
(LLMs), and multidisciplinary training across various departments to enhance interoperability:
We will train journalists on new skills such as prompt engineering and create
workshops where they could play with new AI advances.
Respondents noted the need for a holistic approach to AI training that goes beyond
technical skills, asserting the need for cross-departmental collaboration so that various
functions are more in sync:
I would allocate resources to inter-departmental collaboration on innovation. I’d offer
courses in applied AI to any employee (journalist and developer) who is interested. I’d
establish well-funded data science teams in the editorial rooms – and a unit dedicated to
value alignment, as lacking alignment with editorial values is both intrinsically wrong and will
result in the brakes being pulled (rightfully) before AI-prototypes are employed at any real
scale in most legacy newsrooms.
... I would tear down most siloes so that journalists, developers, data scientists and so
on work closer together.
Around a quarter of responses highlighted the need to hire AI specialists, data scientists, and
developers with expertise in AI technologies. These experts would bridge the gap between
journalism and technology, working closely with journalists to integrate AI tools into newsroom
processes. Here are some examples from our survey:
Hire more engineers with experience in building AI tools and project managers.
The vast majority of responses, more than 90%, highlighted the need for training in a variety of skills
and competencies:
Some emphasised how the type of training needed depended on the role:
The competencies would be different from different teams and job roles – for
example, a product manager might need training on how to improve reader
experience on the site, whereas a news producer might need training on how to use
AI to better produce articles, videos, podcasts and other multimedia projects.
I think collaboration is always nice, but most organisations are currently busy
trying to find themselves within the maelstrom of digital transformation – too much
collaboration and talking about what one is doing can also hinder actual progress.
[Chart: Is more collaboration between newsrooms needed? More collaboration would be useful: 85%; Not necessarily: 15%]
More exchange between advanced newsrooms and small newsrooms can be
beneficial to bridge knowledge and resources gaps.
AI could become a crossroad and an insurmountable hurdle for news
organisations that do not realise that AI is just a new aspect of the constant
progress of digital transformation. … some news organisations have been very slow
in digitising their business models (or haven’t even succeeded in doing so) – now
the next shock is just around the corner.
It may mean job losses because the work is currently being done by say five
people, and may only need one person.
It will have a drastic impact…If machines can write stories, edit them and
distribute them, it follows that newsrooms have to be leaner.
Others said that AI will not “replace jobs.” Rather, AI will redefine the role of journalists;
“steering AI… requires new competencies and new functions.”
Another respondent said:
We believe AI isn’t a threat to jobs. But people who learn to effectively use AI to
leverage their work will be in demand, and soon many roles will expect people to be
able to use these tools.
The need for a balancing act between tech and journalism, a theme that emerged in our
2019 survey, remains imperative for a future where AI technologies are leveraged to serve
journalism and its mission:
It will involve a rethinking of the entire workflow and, at least during the adoption
phase, additional work to adapt to this new approach. There will be more
collaboration and intersection between journalistic and technical figures.
Others worried that reliance on AI technologies will undermine journalistic values, for
instance by pushing polarising content. This in turn would reduce public trust in journalism,
which many think is already in decline, as noted previously:
It might facilitate the path for some newsrooms but it can threaten core values
of journalism, negatively affecting the news industry. It can make our work more
efficient but less reliable if used badly.
At the moment I am too pessimistic because too many media are forgetting that
public interest and voyeurism are not the same thing.
I think it’s going to change what we even consider news. Unfortunately, it might
create a greater pivot to biased political and social commentary as humans feel the
need to differentiate themselves.
How organisations are affected by AI depends on various factors, including size, region,
and access to resources.
Chapter 5:
Generative AI and Journalism
[Chart: Has your organisation experimented with genAI technologies? Yes: 85%; Unclear: 14%; No: 1%]
The vast majority of respondents, around 85%, have experimented with genAI
technologies to varying degrees and in a range of ways, as the responses below show.
Examples include writing code, generating images, and authoring summaries. Others
are more project-oriented; at the far end of the spectrum, some newsrooms said they
already use genAI technologies regularly:
I have used them to construct emails, get code snippets and rephrase a sentence I
feel just isn’t right.
We have experimented with natural language processing, OpenAI’s ChatGPT. We use
it to generate content that we use to develop infographics for our socials.
Some respondents made sure to indicate that their uses of genAI technology have not included
content generation, reflecting an apprehension about using genAI technologies in editorial tasks:
We are using it but not to generate content. We have experimented with ChatGPT for
analysing large swaths of data. Graphic designers have tried tools like DALL-E as a
reference/source of inspiration in the brainstorming process.
Some mentioned specific projects their organisations are working on that use
genAI technologies:
We have created a presenter and his programme 100% with generative artificial
intelligence, the image, what he looks like, what he says, the voice... everything is AI,
but supervised.
Several respondents said they are now using genAI technologies regularly in their
newsrooms in various ways, such as headline suggestions, search engine optimisation,
and producing summaries:
We use them on a daily basis for various tasks, such as summarising articles,
evaluating content quality, search engine optimisation, and generating copy.
We use Bing Co-Pilot for suggesting headlines and sublines for topics,
gathering background information and generating unique images for an article.
The use of genAI by newsrooms depends on their mission, size, experience and many
other factors. Unlike the examples we just mentioned, media development organisations
in MENA are not using ChatGPT to integrate AI into their work. Rather, they are using it
in media literacy training to demonstrate its shortcomings, such as inaccuracies and bias
in Arabic content.
Though newsrooms are largely still experimenting with ChatGPT and other genAI
technologies, most have not had enough time to build comprehensive assessments.
This is expected given that genAI tools became accessible to the public in late 2022
with the launch of OpenAI’s ChatGPT. Despite their novelty, many respondents expect a
larger role for genAI technologies in content creation, including in writing summaries and
headlines, content customisation, and coding:
AI can help journalists generate summaries, headlines, captions and other types
of content using natural language generation techniques. AI can also help
journalists create engaging and personalised stories for different audiences and
platforms using natural language understanding and recommendation systems.
...Offload the work of content generation and adaptation to what
the user is looking for and focus on more core journalistic functions
(curation, investigation, analysis).
... Generative AI can help us create engaging and diverse content, such as
headlines, summaries, captions, quotes, or even stories, based on data or
information we provide… help us personalise and tailor our content to different
audiences, platforms, and formats, using natural language generation and
adaptation techniques … and enable us to explore new angles and perspectives on
topics that we may not have considered before, by generating questions,
hypotheses, or scenarios that stimulate our curiosity and creativity. In short,
genAI can enhance our journalistic skills and values, and empower us to produce
more relevant and impactful stories in ways that we can’t even imagine.
They pointed out the affordances of genAI technologies, such as their accessibility,
low requirements for technical skills, and what was described as their ability to
understand “context”, which make them stand out from other AI technologies that
generally require deep specialist expertise in areas like programming. Here are some
insights from our respondents:
GenAI can help because of the democratic way in which they have arrived, that
is: I don’t need an intermediary, a developer to make me the application that I need,
it’s like a Chrome extension. I make my life easier, the ease with which today, in
2023, you can do artificial intelligence compared to 2020 is impressive.
GenAI seems to require a lot less technical skills to the end user with much
faster response times, allowing us to bring it to fruition quickly throughout
the organisation.
GenAI can change the way we interact with information, allowing us to grasp
massive amounts of data, and level the playing field between high and low data
skills. They can give us much more control on the information we use to write
news, as they assist us in the time consuming writing tasks.
With all those affordances in mind, and as the experimentation journey carries on,
journalists are trying to find out how drastically genAI technologies could raise the
productivity threshold. This is happening as these models continue to improve: as
millions of people experiment with these tools, the models are ingesting massive
amounts of data that, it is hoped, will enhance them.
5.2 Challenges Presented by Generative AI
Interestingly, respondents were more divided over whether generative AI (genAI) presents
a different set of challenges in the newsroom compared to other AI technologies. Slightly
over half the respondents, 52%, were not sure if this was the case, whereas 40% did view
genAI as presenting new challenges in the newsroom.
The respondents argued that the types of challenges genAI presents are not very
different from the ones posed by other AI technologies (i.e. transparency, bias,
inaccuracy, and privacy issues). However, they think genAI technologies exacerbate
those challenges to a considerable degree, therefore potentially producing more harm:
[Chart: Does genAI present a different set of challenges compared to other AI technologies? Yes: 40%; No: 8%; Not sure: 52%]
The requirements for robustness (e.g. factuality and no harmful bias) are even
greater in the case of genAI, as mistakes are potentially more harmful when they
occur than with most other AI technologies.
In particular, many respondents are concerned about the repercussions of genAI for
misinformation and fake news. They expressed fears that it would exacerbate the
problem even further and expand its scale:
Yes. So far, I can’t rely on AI for fact-checking. Especially that the most common
mass tool (ChatGPT) is faking data. In the current stage, AI can help me in writing,
drafting, but I’d never trust the accuracy until a [human] editor reviews it.
Some respondents believed genAI would produce more sophisticated manipulated
content, requiring in return more sophisticated validation methods. Here are some
examples from our survey:
Generating stories and copy using AI could reduce trust, increase inaccuracies
and perpetuate editorial bias.
I am worried about the power posed by genAI and there is a need to have tools
that automatically fact check [content produced by] ChatGPT in real time.
There is a need to ensure that journalists do not resort to ChatGPT for story
analysis. AI software that can help identify stories written by bots will be very
helpful in ensuring original content remains central in news production.
Chapter 6
The Global Disparity in AI
Development and Adoption
In the late twentieth century, the Global North and South terminology replaced previous
descriptors of the global order. It was generally agreed that the Global North would include
the United States, Canada, England, nations of the European Union, as well as Singapore,
Japan, South Korea, and even some countries in the southern hemisphere: Australia, and
New Zealand. The Global South, on the other hand, would include formerly colonised
countries in Africa and Latin America, as well as the Middle East, Brazil, India, and parts of
Asia. Many of these countries are still marked by the social, cultural, and economic
repercussions of colonialism, even after achieving national independence. The Global South
remains home to the majority of the world’s population, but that population is relatively
young and resource-poor, living in economically dependent nations.22
We opted to use the North-South distinction to extend a power-conscious framing that considers
the power dynamics governing AI development and adoption in newsrooms globally, while
upholding that the Global North and the Global South are by no means monolithic, as each
includes socially and politically diverse countries.
6.1 Economic and Infrastructural Challenges
As discussed in Chapter 2, AI technologies pose a range of ethical and other challenges to
all industries, including journalism. These are experienced by newsrooms across the board,
regardless of size, resources, or geographic location. For newsrooms in Global South countries,
however, the challenges are much more pronounced. Respondents in these countries
highlighted knowledge gaps, resource constraints, language barriers, as well as infrastructural,
legal, and political challenges.
A MENA-based respondent mentioned the political and economic realities low-resourced
independent media operate under, emphasising the challenge of competing with AI-powered
local and foreign state propaganda (i.e. bots, disinformation campaigns), amid low internet
penetration rates:
We’re talking about a war-torn region. You have millions of refugees and millions
living in deep economic crises, from Lebanon to Egypt. In our region, millions are
deprived of internet access, which should be a basic right, or have limited access to it. As
an independent media outlet producing professional content, you are dealing with low
internet penetration rates and repressive state propaganda dominating the digital
sphere… This creates digital illiteracy, which is very difficult to confront, and this is a key
challenge for us.
Some challenges are shared across large areas of the Global South. Respondents in Sub-
Saharan Africa, MENA, and the Asia-Pacific all mentioned low internet penetration rates and a
difficulty in hiring technical experts:
Technology is not fully embraced in most media houses in Malawi. Part of the reason
[being] their poor internet infrastructure and internet penetration [which is] quite low.
The adoption of AI in India and especially northeast India faces a whole lot of
challenges. We have over 200 tribes with their distinct languages and culture. There is a
lack of skilled workforce, data quality and availability issues, evolving ethical and
regulatory frameworks, infrastructure and connectivity gaps.
Some challenges to AI adoption are interrelated. Low internet penetration leads to low digital
literacy, which makes it easier for disinformation to thrive. Similarly, resource constraints
make it difficult to hire or even find AI experts:
[The] Botswana government does not promote transparency and does not have
comprehensive data privacy laws and policies that promote access to information.
This makes it difficult to promote dynamism in adopting AI-powered technologies in a
country that is quick to repress online content.
Local developers are incentivised to work at foreign companies which are more likely to offer
higher pay:
Technology companies invest the vast majority of their resources in Western markets. Most
tools are made for English speakers, which causes accessibility challenges to both non-
English speakers and English speakers with non-Western accents.
A Philippines-based respondent summarised how resource constraints, knowledge gaps,
and language barriers intersect. Respondents gave us several examples of issues they
discovered when using AI tools with non-English languages or non-Western English accents:
Coral has been very successful as a comment moderation tool, but we still find the
‘grey’ area comments require a human element, especially as it is an American tool not
built with the South African audience in mind.
Machine Learning (ML) for encoding is a real dealbreaker, Trint for speech-to-text
is highly recommended, translation to any other language than English and Mandarin
or Cantonese needs improvement.
Voice AI tools do not sound like Africans, [they are] not authentic at all.
It is hoped that genAI technologies, which our respondents described as more accessible
than traditional AI technologies, will help bridge the regional disparities in AI adoption.
Cautious optimism is advised. If we look at ChatGPT, for instance, the most famous publicly
accessible genAI tool, we find that it is not available to a large proportion of the world’s
population for various reasons. OpenAI does not support access to ChatGPT in Russia,
Venezuela, Zimbabwe, or Cuba, most likely due to US sanctions, nor in China.23 Egypt has
reportedly banned ChatGPT over privacy concerns.24 Most of these countries are among the
most populous in the world.
Tools like ChatGPT are not available in Zimbabwe unless you use a VPN, and
you need to have a foreign number to get the code.
There are limitations for our country in some platforms (i.e. ChatGPT doesn’t
work in Egypt) and most of the tools don’t natively support Arabic.
GenAI technologies such as ChatGPT are also out of reach for hundreds of millions of
people around the world due to accessibility issues such as internet penetration rates,
particularly in rural areas.
AI scholars have warned that ignoring social, political and cultural contexts contributes
to increasing algorithmic bias and widening global AI disparity.26 Respondents noted how
many AI tools and applications fail to understand local contexts and cultures.
Scepticism of AI technologies by newsrooms in Global South countries also stems
from a distrust of the entities involved in the development and large-scale adoption of
AI technologies, such as global technology companies and local, government-funded
technology and media institutions. For instance, in MENA, an alignment between
technology companies and governments was seen as a major obstacle to trust. One
respondent noted that the newsrooms in MENA with resources for AI technologies were
aligned with nondemocratic governments.
It is feared that smaller newsrooms that advance public interest and accountability
journalism will struggle to survive. This could have significant implications for the entire
news ecosystem.
Even if local AI models were abundantly available, trust would remain an issue. Discussing
the mobile application “Allam”, a chatbot similar to ChatGPT developed by the Saudi
government, one respondent explained how such projects remain tied to political
considerations, diminishing user trust in these models:
This is a local model, do we trust the datasets used by Arab state institutions?
[One wonders] if the datasets used were balanced or representative or if the data
were manipulated? Unfortunately, this is one of the issues we deal with regionally.
We don’t have pan-Arab models created by independent Arab institutions whose
choices when it comes to training datasets can be trusted. You know how
sensitive some of these contexts are … AI requires massive funding to be
competitive … Arab political realities raise urgent questions about the reliability of
[local AI] models. Are they going to be open source? Are they adaptable to Arab
newsrooms’ needs? Can newsrooms add their own datasets, for instance?
Despite the myriad complex challenges newsrooms in Global South countries face, respondents
from the region’s newsrooms expressed enthusiasm for building capacity in, and sharing, AI
expertise. Arguably, they have to if they want to survive as AI transforms journalism. This is
particularly true for smaller funding-dependent newsrooms whose mission is rooted in public
interest journalism and holding power to account.
When we asked respondents if they thought there was enough collaboration between newsrooms
around the development of AI technologies, several respondents noted that collaboration could be
especially useful for newsrooms in Global South countries that are experiencing similar challenges:
We think that collaboration would be especially useful between newsrooms from the Global
South, such as us. We think that developing models in non-English languages (Spanish, in
our case) is really important for newsrooms.
This could involve joint efforts to create tailored AI algorithms for the African context and
establishing industry standards for responsible AI use.
Collaboration between Global South and Global North newsrooms was also highlighted as a step
toward lessening global AI disparity:
There is a big gap between the Global North and South. Both of them need to be resilient
together and collaborate to expose biases in AI, and have a serious conversation about AI
regulations and policies.
Conclusion
What Does AI Mean
for Journalism?
There is a caveat that this is all a reaction to a moving story. ‘Amara’s law,’ the adage coined
by American scientist and futurologist Roy Amara, applies here: “we overestimate the impact
of technology in the short term and underestimate the effect in the long run.” Some new
technologies take time. The first newspaper went online in 1980, but it took another 17 years
before BBC Online went live. OpenAI only released ChatGPT in late November 2022 but by
January 2023 they were claiming one million users. Things are moving fast and some things
might get broken. Working practices will not be the same and some jobs will be replaced.
New ones will be created with different skills and responsibilities. Many journalists who have
experimented with genAI can see how it can make their work much more efficient and add
new dimensions to what they offer to the public.
As this report has shown, this is a volatile technology for news organisations. Most are aware
of the inherent risks in AI technologies generally and the dangers of bias or inaccuracy. They
are discovering that applying AI in news production has immediate possibilities, but how it
will shape future practice is uncertain.
It is important to understand the wider context. There are major issues around regulation,
intellectual property and commercial competition. There are big societal concerns related to
AI around misinformation, discrimination and bias as well as the dangers of media capture
by corporations or even governments. We should not lose sight of the bigger picture that
goes way beyond the news sector.
However, as journalists who report on the world, we should be much more aware of our role
in critically reporting on how AI is changing our lives, in an informed and independent way.
Our survey suggests that there is an awareness of this, albeit most people are putting most
of their energy into understanding and working through the immediate practical challenges.
Whether this is a brave new world or not depends to a large degree on humans making
policy and ethical choices within news organisations. If we want to make bland,
automated clickbait then this technology makes that a lot easier. But it also offers the
opportunity for ‘good’ journalists to do more ‘human’ work with the support of AI. In a
world of machine-created information, much of it unreliable, responsible public service
journalism is in a great position to prove its value. AI also offers ways for journalism
to reinvent itself in imaginative ways. However, genAI has also created the threat of
‘disintermediation’ for the news media. Why should people go to a news organisation
for information if they can just prompt a chatbot? This survey suggests that many
newsrooms are now working hard to answer that question in a way that affirms the
utility and importance of journalism as part of our social, economic and political lives.
We look forward to working with them on that journey.
Glossary
Algorithm:
“A procedure for solving a mathematical problem in a finite number of steps that frequently
involves repetition of an operation”. More broadly, “a step-by-step procedure for solving a
problem or accomplishing some end.”30
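For illustration, Euclid's greatest-common-divisor procedure is a classic example of such a finite, step-by-step procedure built on repetition of a single operation (sketched here in Python):

def gcd(a: int, b: int) -> int:
    # Repeat one operation (replace the pair with (b, a mod b)) until b is zero.
    while b:
        a, b = b, a % b
    return a

# Example: gcd(48, 18) steps through (18, 12), (12, 6), (6, 0) and returns 6.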
Automation:
“The technique, method, or system of operating or controlling a process by highly automatic
means, as by electronic devices, reducing human intervention to a minimum.”32
Bias:
A systematic prejudice or error affecting the rationality and fairness of a decision. Rooted in
decision theory, cognitive psychology and statistics, the notion of bias is extremely important
as both journalism and artificial intelligence techniques ultimately rely on human decisions,
and are as such subject to “cognitive” biases (confirmation bias, bandwagon effect, etc.).
When mirrored in bad, incomplete or flawed data sets to train AI algorithms, this may result
in equally flawed AI-powered decisions: “Algorithms can have built-in biases because they
are created by individuals who have conscious or unconscious preferences that may go
undiscovered until the algorithms are used, and potentially amplified, publically.”33
Bot:
‘Bot’ is short for ‘Robot’ and usually refers to ‘agent-like’ software – ie, software that
exhibits autonomy or autonomous characteristics. A bot is “a piece of software that can
execute commands, reply to messages, or perform routine tasks, such as online searches,
either automatically or with minimal human intervention.”34 Bots perform either perfectly
legitimate (eg. smart assistants, search engine spiders) and malicious activities (eg.,
covertly spread false information and political propaganda in coordination with other bots,
within a so-called “botnet”).35
Data Mining:
“Data mining is most commonly defined as the process of using computers and automation
to search large sets of data for patterns and trends, turning those findings into business
insights and predictions. Data mining goes beyond the search process, as it uses data to
evaluate future probabilities and develop actionable analyses.”36
Deepfakes:
This is the negative form of a broader concept of ‘synthetic media’. Audio and video altered
through machine learning and deep learning techniques for maximum, real-time realism in
fakery. The term originally comes from a Reddit user that, in 2017, used such techniques to
realistically and dynamically add faces of celebrities to pornographic content,37 and is now
widely used for any kind of content, the politically charged included.38
Deep Learning:
“Deep learning is a subset of machine learning in artificial intelligence (AI) that has networks
capable of learning unsupervised from data that is unstructured or unlabelled. Also known as
deep neural learning or deep neural network”, it is one of the most advanced contemporary
applications of “AI”, powering a broad range of image, voice and text recognition tools.39
Generative AI (genAI):
“Generative AI is a sub-field of machine learning that involves generating new data or
content based on a given set of input data. This can include generating text, images,
code, or any other type of data. Typically, genAI uses deep learning algorithms [‘to learn
patterns and features in a given dataset, and then generate new data based on the
underlying input data.]’”40
Hallucinations:
“Hallucination is the term employed for the phenomenon where AI algorithms and deep
learning neural networks produce outputs that are not real, do not match any data the
algorithm has been trained on, or any other identifiable pattern. It cannot be explained by
your programming, the input information, other factors such as incorrect data classification,
inadequate training, inability to interpret questions in different languages, inability to
contextualise questions.”41
Natural Language Generation (NLG):
NLG is a subset of NLP. “While natural language understanding focuses on computer reading
comprehension, [NLG] enables computers to write. NLG is the process of producing a human
language text response based on some data input. This text can also be converted into a
speech format through text-to-speech services. NLG also encompasses text summarisation
capabilities that generate summaries from input documents while maintaining the integrity
of the information.”46
Neural Network:
“A programme or system which is modelled on the human brain and is designed to imitate
the brain’s method of functioning, particularly the process of learning.”47 “[A] computer
architecture in which a number of processors are interconnected in a manner suggestive of
connections between neurons in a human brain and which is able to learn by a process of
trial and error.”48
Prompt Engineering:
“Prompts are instructions given to an LLM to enforce rules, automate processes, and
ensure specific qualities (and quantities) of generated output. Prompts are also a form of
programming that can customise the outputs and interactions with an LLM.”49
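For illustration, a newsroom prompt of the kind described might enforce rules and constrain output like this (a hypothetical example, not one drawn from the survey):

You are a sub-editor. Summarise the article below in exactly three
sentences of plain English. Do not add facts that are not in the text;
if a claim cannot be verified from the text, omit it. Return the summary
first, then one suggested headline of no more than eight words.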
Synthetic Media:
“Synthetic media is an umbrella term that refers to digital content generated by AI or
algorithmic means, often with the intention of appearing real.”51 Deepfakes are one type of
synthetic media.
References
Introduction
1. Brennen, J. Scott, et al. “An Industry-Led Debate: How UK Media Cover Artificial
Intelligence.” Reuters Institute, 2018, https://reutersinstitute.politics.ox.ac.uk/
sites/default/files/2018-12/Brennen_UK_Media_Coverage_of_AI_FINAL.pdf.
Accessed 14 August 2023.
2. Foy, Peter. “What is Generative AI? Key Concepts & Use Cases.” MLQ.ai,
5 December 2022, https://www.mlq.ai/what-is-generative-ai/. Accessed
10 August 2023.
3. Russell, Adrienne. Networked: A Contemporary History of News in Transition.
Wiley, 2011.
4. Chadwick, Andrew. The Hybrid Media System: Politics and Power. Oxford
University Press, USA, 2013.
Chapter 1
5. Maldita. “Disinformation on WhatsApp: Maldita.es’ chatbot and the ‘Frequently
Forwarded’ attribute.” Maldita.es, 3 June 2021, https://maldita.es/nosotros/20210603/disinformation-whatsapp-
chatbot-frequently-forwarded-attribute. Accessed 14 August 2023.
6. Neil Patel. “Ubersuggest: Free Keyword Research Tool.” Neil Patel, https://neilpatel.
com/ubersuggest/?utm_source=neilpatel.com&utm_medium=blog&utm_
content=StepByStepGuideGrowingTrafficUbersuggest. Accessed 14 August 2023.
8. The Washington Post. “The Washington Post leverages automated storytelling to
cover high school football.” Washington Post, 1 September
2017, https://www.washingtonpost.com/pr/wp/2017/09/01/the-washington-post-
leverages-heliograf-to-cover-high-school-football/. Accessed 14 August 2023.
9. Kunova, Marcela. “The Times employs an AI-powered ‘digital butler’ JAMES to serve
personalised news.” Journalism.co.uk, 24 May 2019, https://www.journalism.
co.uk/news/the-times-employs-an-ai-powered-digital-butler-james-to-serve-
personalised-news/s2/a739273/. Accessed 14 August 2023.
10. Czech Radio. “Artificial Intelligence Writes Stories for Czech Radio. The Launch
of the Digital Writer Project.” Czech Radio, December 2023, https://www.czech.
radio/artificial-intelligence-writes-stories-czech-radio-launch-digital-writer-
project-8384063. Accessed 14 August 2023.
11. Kobie, Nicole. “Reuters is taking a big gamble on AI-supported journalism.” Wired
UK, 10 March 2018, https://www.wired.co.uk/article/reuters-artificial-intelligence-
journalism-newsroom-ai-lynx-insight. Accessed 14 August 2023.
12. ArcXP. Arc XP: Enterprise CMS and DXP solution, https://www.arcxp.com/. Accessed
15 August 2023.
14. Reuters. “Reuters News Tracer.” Reuters News Agency, 15 May 2017, https://www.
reutersagency.com/en/reuters-community/reuters-news-tracer-filtering-through-
the-noise-of-social-media/. Accessed 14 August 2023.
15. Campos, Alba Martín. “Los servicios públicos externalizados por el Gobierno: del
reparto de vacunas a la destrucción de narcolanchas en Cádiz.” Newtral, 29 March
2022, https://www.newtral.es/servicios/. Accessed 14 August 2023.
16. Adair, Bill. “FactStream app now shows the latest fact-checks from Post, FactCheck.
org and PolitiFact.” reporterslab.org/, 7 October 2018, https://reporterslab.org/
factstream/. Accessed 14 August 2023.
Chapter 5
17. NVIDIA. “Generative AI – What is it and How Does it Work?” NVIDIA, https://www.
nvidia.com/en-us/glossary/data-science/generative-ai/. Accessed 28 August 2023.
Chapter 6
18. Yu, Danni, et al. “The ‘AI divide’ between the Global North and Global South.”
The World Economic Forum, 16 January 2023, https://www.weforum.org/
agenda/2023/01/davos23-ai-divide-global-north-global-south/. Accessed 23
August 2023.
19. Chan, Alan, et al. “The Limits of Global Inclusion in AI Development.” arXiv, 2 February
2021, https://arxiv.org/abs/2102.01265. Accessed 23 August 2023.
20. Braff, Lara, and Katie Nelson. “Chapter 15: The Global North: Introducing the
Region – Gendered Lives.” Milne Publishing, https://milnepublishing.geneseo.edu/
genderedlives/chapter/chapter-15-the-global-north-introducing-the-region/.
Accessed 23 August 2023.
21. Braff, Lara, and Katie Nelson. “Chapter 15: The Global North: Introducing the
Region – Gendered Lives.” Milne Publishing, https://milnepublishing.geneseo.edu/
genderedlives/chapter/chapter-15-the-global-north-introducing-the-region/.
Accessed 23 August 2023.
22. Braff, Lara, and Katie Nelson. “Chapter 15: The Global North: Introducing the
Region – Gendered Lives.” Milne Publishing, https://milnepublishing.geneseo.edu/
genderedlives/chapter/chapter-15-the-global-north-introducing-the-region/.
Accessed 23 August 2023.
24. EdGavit. “How to Use Chatgpt in Egypt: 8 Proven Method Step-By-Step Guide |
Bypass & Securely Use Chat Gpt.” GptCypher.com, 28 June 2023, https://gptcypher.
com/how-to-use-chatgpt-in-egypt/#1_REGULATORY_CONSTRAINTS. Accessed
23 August 2023.
26. Chan, Alan, et al. “The Limits of Global Inclusion in AI Development.” arXiv, 2 February
2021, https://arxiv.org/abs/2102.01265. Accessed 23 August 2023.
27. van Dijck, Jose. “Datafication, dataism and dataveillance: Big Data between scientific
paradigm and ideology | Surveillance & Society.” Open Journals @ Queen’s, 9 May
2014, https://ojs.library.queensu.ca/index.php/surveillance-and-society/article/
view/datafication. Accessed 28 August 2023.
28. Zuboff, Shoshana. “Big other: Surveillance Capitalism and the Prospects of an
Information Civilisation.” Journal of Information Technology, vol. 30, no. 1, 2015.
journals.sagepub.com/, https://doi.org/10.1057/jit.2015.5. Accessed 25 August
2023.
29. Andrejevic, Mark. “Automating surveillance.” Communications & Media Studies, vol.
17, no. 1-2, 2019. https://research.monash.edu/en/publications/automating-
surveillance. Accessed 25 August 2023.
Glossary
30. “Algorithm Definition & Meaning.” Merriam-Webster, 7 August 2023, https://www.
merriam-webster.com/dictionary/algorithm. Accessed 10 August 2023.
31. “An Industry-Led Debate: How UK Media Cover Artificial Intelligence.” Reuters
Institute, https://reutersinstitute.politics.ox.ac.uk/our-research/industry-led-
debate-how-uk-media-cover-artificial-intelligence. Accessed 10 August 2023.
33. Gillis, Alexander S. “What is Machine Learning Bias? | Definition from WhatIs.”
TechTarget, https://www.techtarget.com/searchenterpriseai/definition/machine-
learning-bias-algorithm-bias-or-AI-bias. Accessed 15 August 2023.
36. Rutgers. “What Is Data Mining? A Beginner’s Guide (2022).” Rutgers Bootcamps,
https://bootcamp.rutgers.edu/blog/what-is-data-mining/. Accessed 14 August 2023.
37. Vincent, James. “Why we need a better definition of ‘deepfake.’” The Verge, 22 May
2018, https://www.theverge.com/2018/5/22/17380306/deepfake-definition-ai-
manipulation-fake-news. Accessed 14 August 2023.
38. Parkin, Simon. “The rise of the deepfake and the threat to democracy.” The Guardian,
22 June 2019, https://www.theguardian.com/technology/ng-interactive/2019/
jun/22/the-rise-of-the-deepfake-and-the-threat-to-democracy. Accessed 14
August 2023.
39. Bruce, Peter. “A Deep Dive into Deep Learning – Scientific American Blog Network.”
Scientific American Blogs, 10 April 2019, https://blogs.scientificamerican.com/
observations/a-deep-dive-into-deep-learning/. Accessed 14 August 2023.
40. Foy, Peter. “What is Generative AI? Key Concepts & Use Cases.” MLQ.ai, 5 December
2022, https://www.mlq.ai/what-is-generative-ai/. Accessed 14 August 2023.
42. Foy, Peter. “What is a Large Language Model (LLM)?” MLQ.ai, 8 December 2022,
https://www.mlq.ai/what-is-a-large-language-model-llm/. Accessed 14 August 2023.
43. “What Is the Definition of Machine Learning? | expert.ai.” Expert.ai, 14 March 2022,
https://www.expertsystem.com/machine-learning-definition/. Accessed 14
August 2023.
44. Russell, John. “Library Guides: Optical Character Recognition (OCR): An Introduction:
Home.” Library Guides, 8 December 2022, https://guides.libraries.psu.edu/OCR.
Accessed 14 August 2023.
46. Kavlakoglu, Eda. “NLP vs. NLU vs. NLG: the differences between three natural
language processing concepts.” IBM, 12 November 2020, https://www.ibm.com/
blog/nlp-vs-nlu-vs-nlg-the-differences-between-three-natural-language-
processing-concepts/. Accessed 15 August 2023.
47. Harris, Marvin. “Neural network definition and meaning | Collins English Dictionary.”
Collins Dictionary, https://www.collinsdictionary.com/dictionary/english/neural-
network. Accessed 15 August 2023.
48. Merriam Webster. “Neural network Definition & Meaning.” Merriam-Webster, 10
August 2023, https://www.merriam-webster.com/dictionary/neural%20network.
Accessed 14 August 2023.
49. White, Jules, et al. “A Prompt Pattern Catalog to Enhance Prompt Engineering with
ChatGPT.” NASA/ADS, https://ui.adsabs.harvard.edu/abs/2023arXiv230211382W/
abstract. Accessed 14 August 2023.
50. Search Engine Land. “What Is SEO – Search Engine Optimization?” Search Engine
Land, https://searchengineland.com/guide/what-is-seo. Accessed 14 August 2023.
51. Munts, Maggie. “Zero Trust and Visual Vulnerability: What Does the Deep Fake Era
Mean for the Global Digital Economy?” Journal of International Affairs, 21 October 2022,
https://jia.sipa.columbia.edu/online-articles/zero-trust-and-visual-vulnerability-
what-does-deep-fake-era-mean-global-digital. Accessed 15 August 2023.
Readings & Resources
JournalismAI resources
The JournalismAI Starter Pack – our guide designed to help small and local publishers learn
about the opportunities offered by AI.
The JournalismAI Case Studies Database – our collection of 110+ examples of news
organisations worldwide making use of AI technologies to meet different needs.
Introduction to Machine Learning for Journalists – our short course that covers the basics
of machine learning for journalism.
Data Colonialism: Rethinking Big Data’s Relation to the Contemporary Subject (2018).
Nick Couldry and Ulises Mejias. Television & New Media – An academic article that proposes
understanding datafication processes through the history of colonialism. The authors view the
processing of social data as a “new form of data colonialism” that normalises the exploitation
of human beings through data, the same way historic colonialism appropriated territory and
resources for profit.
Elements of AI – A free online course that helps demystify AI, by combining theory with
practical exercises.
Large language models, explained with a minimum of maths and jargon (2023)
Lee, T. and Trott, S.
Sketching the Field of AI Tools for Local Newsrooms – A database of AI tools for local
newsrooms built by Partnership on AI. (December 2022).
Journalists AI toolbox
(2023) Mike Reilly – a live website listing AI and genAI tools for newsrooms.
Towards Guidelines for Guidelines on the Use of Generative AI in Newsrooms.
Cools, H. & Diakopoulos, N. (2023).
Books
Beginner’s prompt handbook: ChatGPT for local news publishers
Amditis, J. (March 2023).
Reporting on artificial intelligence: a handbook for journalism educators
Jaakkola, M. (Ed.). (2023). UNESCO.
For a wider selection of articles about the applications and implications of AI in journalism, with
case studies and practical insights, go to blogs.lse.ac.uk/polis. This will be updated regularly.
Please send us suggestions for further readings and resources.
Acknowledgements
The editorial responsibility for the content of this report lies solely with the author, Professor
Charlie Beckett.
Thanks to lead researcher and co-author Mira Yaseen, to Arab Reporters for Investigative
Journalism (ARIJ) for their assistance with research and outreach to MENA-based
organisations, and to Dr Trust Matsilele, James Gatica Matheson and Vivek Mallik-Das for
additional regional data collection and research.
This research project was overseen by the LSE JournalismAI manager Tshepo Tshabalala.
JournalismAI would not have been possible without the support of the Google News
Initiative. Special thanks to GNI’s David Dieudonné for his vital work to make this happen.
Although they may not have actively contributed to this report, credit should be given to
JournalismAI’s programme managers, Lakshmi Sivadas and Sabrina Argoub, as well as the
previous manager, Mattia Peretti, whose work over the past three years is the bedrock that
made a lot of this possible.
Last but not least, we want to thank again the media organisations who made this
report possible by taking the JournalismAI survey. The list follows on the next page
(some organisations opted to participate in this research anonymously and have
not been included in the list below):
NEWS ORGANISATIONS THAT COMPLETED THE
JOURNALISMAI SURVEY
Europe
AFP
Aftonbladet
ARTE G.E.I.E.
Austria Presse Agentur (APA)
Časoris
CMI France
Czech Radio
E24
Ekstra Bladet
Evangelischer Presseverband Für Bayern (EPV)
Group Nice-Matin
Il Sole 24 Ore
Maldita.es
Newtral
Observador
RTVE
Sveriges Radio
The Economist
VRT

Latin America
Abraji
Chequeado
Cuestión Pública
El Surti
El Tiempo
Folha de Sao Paulo
La Gaceta de Tucumán
La Nación – Argentina
Mutante
Perfil
PodSonhar
Rede Gazeta
T13
TN
TV Azteca
Unitel

International
OCCRP
Reuters
The Associated Press (AP)
Middle East and North Africa (MENA)
AlManassa
AlMasry AlYoum
ARIJ
Daraj
Jummar
Khuyout
Maharat Foundation
Masrawy
MBC Group, Egypt
Megaphone
Nawa Network – media platform of Filastiniyat
Raseef22
Scientific Arab
Ultrasawt
Welad ElBalad

North America
McClatchy
MuckRock
NPR
Semafor
The Texas Tribune
Zenger
POLIS
Journalism at LSE
blogs.lse.ac.uk/polis/2023/06/26/how-
newsrooms-around-the-world-use-ai-a-
journalismai-2023-global-survey/
The London School of Economics and Political Science is a School of the University
of London. It is a charity and is incorporated in England as a company limited by
guarantee under the Companies Acts (Reg no 70527).
The School seeks to ensure that people are treated equitably, regardless of age,
disability, race, nationality, ethnic or national origin, gender, religion, sexual
orientation or personal circumstances.