Panel Two Recordings 

The AI Turn in Journalism: Disruption, Adaptation, and Democratic Futures

Tomás Dodds, Rodrigo Zamith, and Seth C. Lewis

University of Wisconsin-Madison, University of Massachusetts Amherst and University of Oregon

In this paper, we argue that, unlike previous changes in digital media technologies over the past few decades, this AI “turn” in journalism forces us to fundamentally rethink journalism’s identity and its relationship with audiences. While artificial intelligence complicates and challenges existing professional, social, political, and economic structures, it also offers new ways to realize journalistic goals once seen as impractical, if not impossible. Drawing on four orienting themes—adoption and hype, power and dependency, audiences and democratic implications, and education and empowerment—we unpack the multifaceted implications of this AI turn and its consequences for the journalistic field. Through a synthesis of emerging empirical research and theoretical reflection, we demonstrate how AI is reshaping newswork: accelerating production, personalizing content, expanding accessibility, and simultaneously deepening issues of misinformation, burnout, and public distrust. This transformation also recasts power dynamics in the news ecosystem, with growing dependencies on technology platforms. We highlight the urgency of interdisciplinary responses in journalism education and civic media literacy, as well as the need for critical engagement with the values and assumptions encoded in AI systems. Ultimately, we suggest that the AI turn is not merely a continuation of prior disruptions but a constitutive moment—one that presents both risks and opportunities. It could entrench problematic trends or, if approached intentionally, serve as a generative opening to reimagine journalism in ways that strengthen its democratic function. The decisions made in this moment will shape not only the future of journalism, but also the future of public life.

Navigating the AI Frontier: Challenges of Journalists in the Age of Misinformation

Ritesh Ranjan and S. Arulchelvan

Anna University

“By the time truth finishes its morning coffee, lies have already booked a world tour.” (Jonathan Swift, 1710). With widespread access to social media and smartphones, information spreads rapidly across large populations. Artificial intelligence plays a key role in fact-checking and combating false or misleading content to ensure accuracy. Theories such as Media Ecology Theory and ethical theories in journalism are relevant in light of journalists’ fact-checking procedures and their public informational role. AI can produce content in any format, including text, pictures, audiovisuals, and motion graphics, by drawing on data from many sources, making the distinction between what is and is not authentic more difficult.

Misinformation, amplified by AI-generated content, poses significant threats to individuals, societies, and institutions. The rapid spread of such content on social media polarizes public opinion and undermines trust. While AI tools can assist in verifying information, distinguishing between human and AI-generated content remains challenging. Journalists have an ethical duty to fact-check and educate audiences, necessitating awareness and adoption of these technologies.

The study explores the challenges journalists face when using AI tools to combat misinformation and examines how journalists employ artificial intelligence for fact-checking and public awareness. Interviews were conducted with reporters, editorial staff, and editors from media organizations.

Journalists already utilize both AI and traditional methods for fact-checking. However, under time constraints, AI provides rapid responses. To effectively combat disinformation, a centralized agency, backed by media organizations and the government, is essential. Media organizations, unions, and institutions should offer media professionals more frequent and up-to-date training sessions focused on emerging technologies to help them stay ahead in a rapidly evolving digital landscape.

How does AI empower journalists and the public?

Laeed Zaghlami

Algiers University

The biggest lesson for news organisation leaders is to empower journalists and audiences, listen to them, remove the friction they experience, and then watch them do amazing things. The next phase could be fundamentally different from the current news environment, and also more ambiguous. The main reason newsrooms use artificial intelligence is to improve journalism and organisational efficiency and productivity. It is assumed that AI enables almost anyone to produce content of the same quality as news from the best publishers, including well-written text, punchy video, and even engaging conversational voice-overs.

The main issue is how the potential impact of AI will affect the integrity of journalism and the media business as a whole, with a focus on journalistic credibility, copyright, and content distribution. The implementation of artificial intelligence should be a catalyst for building bridges; for this reason, there is a need for close cooperation between managers, within departments, and between academics. Overall, the idea is to create multidisciplinary teams, as well as a more diversified information system.

In fact, artificial intelligence has to perform tasks that humans cannot realistically do. The idea here is not to throw artificial intelligence out of the window, but to look at the problem through this lens and see whether artificial intelligence really is the way forward.

What’s more, AI helps to free us from routine, tedious, redundant, and boring tasks, but in no way should it surpass or even erase us. At the end of the day, you have to maintain and exercise authority. AI can be your best partner, but it will become your worst nightmare if you expect it to do everything for you: “We should consider AI as a tool for empowerment.”

In practice, I will talk about AI implementation in Algerian journalism in terms of constraints, challenges and prospects.

Ethical Concerns of Using Artificial Intelligence in Journalism: A Qualitative Analysis

Dali Osepashvili

International Black Sea University

The use of artificial intelligence in journalism is now starting to take hold. In recent years, since the emergence of ChatGPT in late 2022, a number of concerns and worries have arisen regarding the ethical aspects of using AI (Barceló-Ugarte et al., 2024; Gutierrez-Caneda et al., 2024; Forja-Pena, García-Orosa, & López-García, 2024; Kalfeli & Angel, 2025; etc.).

The purpose of the presentation is to show the results of a study on how Georgian media professionals assess the ethical challenges associated with the use of artificial intelligence in journalism and what threats they see in this regard. Semi-structured interviews were used for this purpose. The study was conducted from February 1 to April 20, 2025. In total, 10 media professionals (journalists and representatives from academia) participated in this qualitative study.

Although the active use of artificial intelligence in Georgian journalism has not yet begun, most media professionals have a skeptical attitude. They see ethical challenges as the main threat.

According to this study, the main concerns are copyright, plagiarism-related challenges, a lack of regulations, and fears related to personal data. Most of the respondents believe that artificial intelligence tools can only be used by journalists as auxiliary aids for creating content. As they noted, despite the ethical challenges, they will have to accept and incorporate this innovation, because the journalism of the future will inevitably involve artificial intelligence.

Code & Conscience: Upholding Public Service Media’s Values in the AI Age

Laura Pranteddu, Laura Amigo and Colin Porlezza

Institute of Media and Journalism, USI (Università della Svizzera Italiana)

AI increasingly influences the news-making process (Simon, 2024) and offers opportunities to enhance journalistic workflows (Franks et al., 2022). Journalists acknowledge its potential benefits; however, they remain concerned about its impact on core journalistic values such as accuracy, autonomy, and objectivity (Komatsu et al., 2020). Public Service Media (PSM), in particular, have an “uneasy relationship” with AI (Horowitz et al., 2022, p. 130): while AI may support democratic practices, it also risks undermining key public service values, such as universality, independence, and non-commercial content. Therefore, PSM play a pivotal role in ensuring the responsible integration of AI technologies. Drawing on algorithmic governance in PSM (Tambini, 2021), this study examines how European PSM develop self-regulatory frameworks regarding AI, by exploring:

RQ1. How is the responsible use of AI technologies reflected in codes of ethics or guidelines at the organizational level of PSM?
RQ2. How are key values such as responsibility, accountability, and transparency specifically approached when it comes to the deployment of AI applications in PSM?

The research focuses on four PSM from Finland, France, Germany, and Switzerland. We interviewed senior editors overseeing AI projects at these PSM in 2022 and carried out a document analysis of existing ethics codes in both 2022 and 2024, to account for recent changes following the introduction of generative AI. The data were analyzed using thematic analysis (Braun & Clarke, 2006).

Findings show that PSM have updated their ethics codes to address generative AI, focusing on (non-)allowed uses, responsibility, and transparency to ensure that AI processes are understandable and accessible to staff and the public. However, a gap remains in addressing the broader implications of AI design, particularly in aligning AI systems with the public service mandate. Overall, the findings confirm PSM’s uneasy relationship with AI, especially in translating ethical principles into actionable AI governance frameworks.