No industry remains untouched by artificial intelligence.
Every profession, from doctors and lawyers to marketing professionals and social media influencers, has been tainted by AI-generated content. But journalists? Theirs is a profession built on reader trust, yet recent research has found a boom in unregulated AI use in the industry, with AI-generated articles even making their way into the pages of widely read and trusted newspapers. As other industries take stock of their ethical values to determine appropriate AI usage, journalism needs to do the same. AI tools can automate repetitive tasks and support data-driven storytelling, but the technology falters due to bias and hallucinations. The lack of a legally binding, industry-wide standard for appropriate AI use leaves newsrooms to navigate these unexplored waters on their own. Below, we will explore current frameworks for assessing the ethical use of AI in the newsroom, analyze the gaps, and propose a consolidated ethical approach centered on transparency and human oversight.
As AI technology accelerates, ethical governance lags. However, several trusted global organizations have taken the lead in the effort to catch up. These organizations are attempting to help newsrooms regain editorial control and reader trust, though they still struggle to address the full range of dilemmas that editors, publishers, and writers face today.
The United Nations Educational, Scientific, and Cultural Organization (UNESCO) adopted recommendations on the ethics of AI in 2021, centered on human oversight. The section outlining recommendations for the communication and information industry provides some guidance for journalists. The policy dictates that AI systems should promote freedom of information, freedom of expression, transparency, and disclosure of official data. It encourages everyone in the media industry to incorporate AI into their work ethically. The framework also asks countries to promote transparent and educational environments where media can report on the harms and benefits of AI, and consumers can build digital and media literacy skills to combat misinformation and hate speech.
However, the media framework takes up only four paragraphs of the 44 pages outlining UNESCO’s ethical AI policies. Journalism and journalists are barely mentioned specifically, and the policy acts more as a statement of values and a general request to build more comprehensive policies in the future. That leaves the media with little to go on.
The Institute of Electrical and Electronics Engineers (IEEE) took a similar approach. Its Ethically Aligned Design framework upholds reporters’ freedom to educate the public on artificial intelligence issues and the sanctity of human judgment and oversight. However, the policy is likewise a statement of ideals and values, not a fleshed-out guide for navigating AI dilemmas in a newsroom. How should the media handle AI usage disclosure? Or navigate inappropriate uses? What defines an inappropriate use? All this uncertainty upends newsrooms and leaves audience trust in limbo, but robust policy can provide a map for this uncharted terrain.
Publishers, editors, and journalists are in desperate need of concrete, thorough, and ethically sound guidance to regulate, inform, and justify the use of AI in the newsroom. In creating a strict and informed AI policy, newsrooms should base their guidance on four non-negotiable principles: transparency, accountability, inclusivity, and fairness.
Below is a starting point for applying these principles to AI use in the newsroom:
AI models deployed without ethical guardrails risk reinforcing social inequalities and stereotypes of marginalized groups while entrenching users’ pre-existing beliefs and deepening polarization. UNESCO, in its research on stereotypes generated by AI, found that large language models assigned domestic roles to women at much higher rates than men while producing negative content about gay individuals and certain ethnic groups. Researchers from Stanford University found that AI models perpetuated extreme racist stereotypes linked to the pre-Civil Rights era when prompted to describe people speaking African American English. That risk of perpetuating bias through inevitably flawed AI systems can seep into translation, data analysis, story idea generation, and other work that newsrooms offload to these tools. Additionally, AI recommender systems designed to serve personalized content, including news, can keep showing readers content they agree with, strengthening their preconceived notions about the world and closing them off from alternative viewpoints or information that would broaden their perspective.
The endless mountain of AI content on the internet can also confuse readers and threaten their trust in traditional news outlets. Research from the AI detection company Pangram found that 60,000 AI-generated news articles are published every day, most prominently in the technology, beauty, and business beats. Low-quality news sites and bad actors use AI to churn out vast amounts of this content (known as “pink slime”) at near-zero cost with the goal of farming ad revenue. These content farms cloud the news landscape and make it even more essential that trustworthy, professional news sites create solid AI policies and disclose them to their readers.
Leading news organizations such as The New York Times, the BBC, and ProPublica are successfully balancing innovation with ethics by keeping human oversight at the forefront of their policies, taking the reins away from algorithms and putting them back in the hands of trusted editors. The BBC, for instance, requires its staff to make clear the use of AI in the creation, presentation, and distribution of their content. Its guidelines also call for consideration and monitoring of inherent biases, hallucinations, and plagiarism in the output of AI tools, and all staff and freelancers must submit a proposal for the use of AI to a senior editorial official for approval. The guidelines also outline appropriate use cases for AI, such as demonstrating AI output in news pieces on the subject or altering the voices of sources who wish to remain anonymous. ProPublica explicitly uses AI to sort through large databases in crucial investigations and identify patterns that expose bias in other systems. The New York Times has made clear its use of AI in assisting with headline and summary creation, generating audio versions of articles, and analyzing data.
To establish appropriate and thorough AI guidelines compatible with the Society of Professional Journalists’ Code of Ethics, newsrooms should create multidisciplinary teams that bring together technological and ethical expertise. Human oversight should remain at the center of AI use, and any AI output that appears in news stories should be continuously audited for accuracy.
AI should be a supplement to human capabilities, not a substitute for well-researched and reliable journalism. To enforce policy and ensure compliance, newsrooms should deploy verification tools like Pangram to detect undisclosed and inappropriate instances of AI-generated content. While AI tools offer tantalizing efficiency, they do not uphold the same credibility and ethical ideals that good newsrooms do. By adopting a framework rooted in transparency and accountability, newsrooms can leverage AI tools without compromising the democratic function of the press.
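As an illustration, a pre-publication screen against a detection service might look like the sketch below. This is a minimal example, not Pangram’s documented API: the endpoint URL, request fields, and `ai_likelihood` response field are assumptions made for illustration, and any real integration should follow the vendor’s own documentation.

```python
# Minimal sketch: flag drafts for human review before publication.
# The endpoint, payload shape, and "ai_likelihood" field below are
# hypothetical placeholders, not Pangram's documented API.
import requests

DETECTION_URL = "https://api.example-detector.com/v1/classify"  # hypothetical
API_KEY = "YOUR_API_KEY"

def screen_draft(text: str, threshold: float = 0.5) -> bool:
    """Return True if the draft should be routed to an editor for review."""
    response = requests.post(
        DETECTION_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"text": text},
        timeout=30,
    )
    response.raise_for_status()
    score = response.json()["ai_likelihood"]  # assumed: 0.0 (human) to 1.0 (AI)
    return score >= threshold

if __name__ == "__main__":
    draft = "Paste the article draft here."
    if screen_draft(draft):
        print("Flagged: route to a senior editor before publishing.")
    else:
        print("No AI signal above threshold; continue standard editorial checks.")
```

The key design choice is that the tool only flags: a human editor, not the detector, makes the final call, keeping the workflow consistent with the human-oversight principle above.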
Ensure your publication maintains the highest standards of integrity in today’s AI age. Verify the authenticity of your content today.