Laura Pranteddu, PhD Candidate at Università della Svizzera italiana, talks about how we can make sure public service media is protected in the age of AI

I began researching artificial intelligence in digital journalism in 2021. Initially, my research focused on understanding which journalistic tasks were being automated through AI in news production and to what extent journalists were involved in these automation processes. A recurrent question driving my research is whether newsroom AI tools can be purposefully designed, rather than merely implemented, to embody core ethical and journalistic values from the outset. This also involves examining whether, and to what extent, journalists participate actively in the creation and design of AI tools. My early work, developed during a research project funded by the Swiss Federal Office of Communications (OFCOM), examined the presence, or absence, of formal ethical guidelines for AI use across selected public broadcasters and laid the groundwork for deeper comparative analyses once generative AI gained broader adoption in 2022-2024.

At that point, AI codes of ethics or guidelines were still in their infancy. With the release of models like ChatGPT, however, generative AI became a public phenomenon, raising urgent governance questions around transparency, responsibility, and accountability. In 2024, following this transformative shift, I revisited the ethical guidelines for AI use of four European public broadcasters: SRG SSR (Switzerland), Bayerischer Rundfunk (Germany), Yle (Finland), and France Télévisions with Radio France (France). This comparative analysis, conducted in collaboration with Dr. Laura Amigo and Prof. Colin Porlezza, aimed to identify how these broadcasters were adapting their governance frameworks to the new reality created by generative AI.
Ethical frameworks under pressure
Transparency was the most visibly addressed value in the updated frameworks. By mid-2024, all four broadcasters had introduced some form of labelling for AI-generated or AI-assisted content. However, the approaches vary, and clarity is often missing. Bayerischer Rundfunk (BR) requires visual tags and RTS asks for the flagging of AI-modified visuals, while SRG SSR and Yle support labelling in principle. Few of the guidelines, however, explain what terms like “AI-generated” actually mean.
Responsibility often hinged on the principle of “human in the loop”, a reassuring phrase, but one not always backed by clear practice. Most broadcasters reiterated that humans, not machines, remain editorially accountable, but how this plays out varies. At Swissinfo (a digital information platform and a unit of the Swiss public service media SRG SSR), for example, journalists are explicitly held responsible for AI-assisted outputs, including translations. Every result must meet the outlet’s quality standards and be reviewed for bias, fairness, and inclusivity. Elsewhere, the commitments are more general. SRF and BR refer to human oversight, but without always specifying who edits, signs off, or assumes legal and ethical responsibility.
Accountability proved to be the least developed value in most frameworks. All the broadcasters examined include internal oversight measures, from workflow controls to editorial checks. BR, for instance, tests AI tools in sandbox environments before deployment. But beyond these internal safeguards, external accountability is largely absent. None of the guidelines reviewed included formal audit procedures, independent review boards, or public reporting pathways. Ethical commitments are documented but rarely opened to scrutiny.
Three directions as possible trajectories
1. What if ethics came first, not last?
Too often, ethical concerns are raised only once a tool is already live. But what happens if we start earlier? Imagine every AI project kicking off with an editorial roundtable, not just a tech spec. What stories are we prioritising? Whose voices are we amplifying? Formalised impact assessments could help frame these questions before design becomes destiny. Ethics by design might be a good and responsible way to make sure that journalistic values don’t get engineered out by accident.
2. What if journalists and developers didn’t speak past each other?
In many newsrooms, editorial and technical teams still live on parallel tracks: one codes, the other edits. But when they meet, in co-design sessions, joint training, or even hallway conversations, mutual understanding can grow. Journalists begin to grasp what is possible, while engineers better understand which values are at stake. The BBC, for instance, has started to develop this kind of responsible approach to technical development through its Machine Learning Engine Principles. This is not only about turning reporters into programmers or engineers into editors. It is also about building bridges, managing expectations, and finding enough common ground so that the tools we build don’t just work, they make sense. Since AI tools need to be tailored to the needs of newsrooms, it is also necessary to ensure that journalists are convinced of the utility and, in particular, the accuracy of what is being developed.
3. What if we stopped reinventing the wheel, alone?
Across Europe, public service media are tackling similar problems, from labelling AI content to setting internal boundaries for its use. To better coordinate these efforts, institutions like the European Broadcasting Union (EBU) have started connecting the dots, for instance through dedicated reports. These are not binding standards, but they offer shared learning based on use cases, helping to establish a common foundation on which to build a responsible use of (generative) AI.
These cases and discussions from other EBU members can offer guidance for strategic decision-making by newsroom leaders. The challenge ahead is not just how to integrate new tools; it is how to keep public service values alive, and lived, in the systems we adopt. Because in the end, it’s not only about what AI can do for journalism. It’s also about what journalism needs to ask of AI.
Looking ahead
Recent initiatives in Switzerland suggest that public institutions such as EPFL and ETH Zurich are beginning to explore alternatives to commercial AI infrastructures. A fully open-source language model, trained on national computing resources, is set to be released with an emphasis on transparency and multilingual access. While its effectiveness remains to be seen, it signals a broader effort to develop AI tools outside the dominant logic of proprietary platforms. For newsrooms, such national or regional responses could help ease longstanding dependencies on Big Tech companies and encourage more transparent, accountable AI systems. This matters for journalism: AI governance in public service media is not just a technical issue; it is a democratic one. AI codes of ethics or guidelines are a vital starting point, but without clear responsibilities, enforceable protocols, and public oversight, editorial AI risks drifting away from the values PSM are meant to uphold.