Panel Three Recordings 

AI in Newsrooms

Lindsay Pantry

School of Journalism, Media and Communication, University of Sheffield

Lindsay Pantry is a university teacher at the University of Sheffield’s School of Journalism, Media and Communication. This presentation will outline the School’s approach to integrating artificial intelligence (AI) into journalism education, highlighting current pedagogical strategies and emphasising the need to equip journalism students with a critical understanding of AI tools, foster awareness of algorithmic bias, and explore the implications of generative AI for news production. She will also introduce some of the award-winning, innovative techniques the School is using to prepare its students to be ethical, trauma-informed journalists.

The Adoption of Artificial Intelligence Technologies in the MENA region: Exploring Potentials and Challenges

Rana Arafat

City, University of London

Widely regarded as a revolutionary technology, Artificial Intelligence (AI) has driven influential changes across sectors including politics and media. Middle Eastern newsrooms are still taking their first steps in exploring the potential of incorporating AI into their daily news production practices. The economic divide between countries with strong and weak economies in the region significantly influences the pace and format of AI adoption in media organizations (Harb & Arafat, 2024). While AI can accelerate the gathering, filtering, composing, and sharing of news, it poses various risks related to algorithmic bias, editorial integrity, the spread of disinformation and fake news, and privacy and data security violations (Bontridder & Poullet, 2021). In particular, newsrooms operating under authoritarian regimes in the MENA region are challenged by governments’ attempts to use generative AI to whitewash and promote state-centred actions and policies (Harb & Arafat, 2024). Taking the Gaza war as an example, AI has become an integral part of targeted war propaganda in the region, raising concerns about the dangers of AI-powered political manipulation and generative disinformation. In this panel talk, I will share insights from my latest research investigating the potentials and challenges of AI use in Middle Eastern newsrooms, and the threats it poses to press freedom, digital colonization, and local innovation.

Visual AI in Dutch Journalism: Navigating Risk, Responsibility, and Transparency in an Age of Synthetic Imagery

Astrid Vandendaele

Leiden University and Vrije Universiteit Brussel

This presentation draws on the findings of the research project AI in Beeld (Vandendaele, De Jong, Van der Woude & Arends, 2025), a collaboration between Leiden University and various Dutch media partners, to explore the evolving role of AI-generated images in journalism. While much attention has been given to generative AI in text production, this study focuses on the underexamined but rapidly advancing domain of visual AI. The capacity of AI tools to produce photorealistic imagery without cameras introduces new opportunities for creative illustration, cost efficiency, and visual storytelling—particularly in contexts where photography is logistically or ethically constrained.

However, the use of visual AI also presents serious challenges to journalism safety and integrity. Through desk research, tool analysis, and 59 in-depth interviews with image editors, journalists, ombudspersons, and AI experts, the study identifies key risks: from diminishing public trust and copyright infringements to increased vulnerability to deepfakes, bias reproduction, and newsroom knowledge silos. These risks directly affect journalists’ capacity to uphold core norms such as truth-telling, transparency, and visual accountability.
Despite the transformative potential of these technologies, few newsrooms currently apply clear editorial guidelines. The report highlights an urgent need for normative frameworks and professional standards tailored to visual AI—ones that secure human editorial control, protect against deceptive imagery, and ensure AI use is disclosed in ways that do not undermine audience trust.

By mapping the ethical tensions and recommending multilevel guidelines, this presentation aims to contribute to international discussions on AI governance in journalism. It will reflect on how visual AI not only transforms production processes but reconfigures what it means to produce trustworthy, safe, and socially accountable journalism in the digital age.

The Transformative Role of AI in Journalism and Content Creation


Asmita Roy and Dhruv Sumanbhai Chauhan

University of Leeds

Artificial intelligence (AI) is revolutionizing the media landscape, particularly in journalism and content creation. This paper investigates how AI technologies are fundamentally altering the traditional structures, workflows, and values that define modern journalism. With AI-driven tools such as natural language processing, machine learning, and recommendation algorithms, media organizations are increasingly relying on automation to streamline content production, personalize news delivery, and analyze audience behavior. These technologies have enabled faster reporting, scalable content creation, and the democratization of media access. However, their rapid integration has also introduced complex ethical, professional, and societal challenges. Among the most pressing issues are algorithmic bias, misinformation amplification, loss of editorial transparency, and the erosion of journalistic independence. Furthermore, the economic dynamics of content monetization and newsroom employment are undergoing significant shifts due to AI adoption. This study employs a mixed-methods approach, including case studies of major media houses, content analysis of AI-generated news, and interviews with journalists and technologists. The findings reveal that while AI offers efficiency and innovation, it also necessitates the reconfiguration of journalistic ethics, regulatory frameworks, and audience trust. This research contributes to the growing discourse on responsible AI use in journalism by offering actionable insights for policymakers, media practitioners, and technologists. Ultimately, the paper underscores the need for a balanced approach—where AI enhances journalism without compromising its democratic and ethical foundations.