Apple is facing growing criticism from press freedom groups after its newly launched artificial intelligence (AI) feature, designed to summarize news articles, produced a false headline about a BBC report. The feature, part of Apple's iPhone 16 series, has raised concerns about the reliability of AI-generated news summaries.
False News Summary Sparks Concerns
Reporters Without Borders (RSF), a prominent press freedom organization, is urging Apple to remove its AI feature following a false summary of a BBC news story. The AI-generated notification, sent to users last week, inaccurately claimed that Luigi Mangione, the suspect in the killing of the UnitedHealthcare CEO, had shot himself. The BBC clarified that the original report did not make such a claim.
In response, the BBC reached out to Apple to express concern over the issue, though it remains unclear if the tech giant has addressed the complaint.
Reporters Without Borders Calls for Action
Vincent Berthier, head of the technology and journalism desk at RSF, has called on Apple to “act responsibly by removing this feature.” He emphasized the risk AI poses when it produces false information, stating, “AI systems are probabilistic machines, and facts can’t be decided by chance.”
RSF expressed broader concern about the potential dangers AI poses to media outlets, warning that it remains “too immature to reliably produce information for the public” and should not be used in ways that could affect news reporting.
Apple’s AI Tool Raises Issues for News Media
Apple launched its generative AI tool in the U.S. in June, promoting its ability to summarize content in formats such as bulleted points or digestible paragraphs. The feature was designed to help users streamline their news consumption by grouping notifications, which are then presented in a single push alert. While this can be convenient, the AI’s tendency to produce inaccurate summaries has raised red flags among news organizations.
This is not the first time the AI tool has misrepresented news. In late November, users reported that the feature had wrongly summarized a New York Times story, stating that Israeli Prime Minister Benjamin Netanyahu had been arrested when, in fact, an arrest warrant had been issued by the International Criminal Court.
The Problem of Misinformation
The Apple Intelligence feature generates automatic summaries of news articles without consulting the outlets that produced them. Some publishers use AI to assist in writing stories, but they retain control over how it is applied. Apple's tool, by contrast, summarizes articles under a publication's name without the publisher's oversight.
This not only spreads misinformation but also jeopardizes the credibility of news organizations, since errors introduced by Apple's AI can shape how readers perceive the original reporting.
The Growing Concern Over AI in Journalism
Apple’s AI issues are part of a broader challenge facing the media industry as new AI technologies continue to evolve. Since the release of ChatGPT in 2022, various tech companies have rolled out their own large language models, many of which have been accused of using copyrighted content, including news articles, without permission. Some news outlets, such as The New York Times, have filed lawsuits accusing AI developers of scraping their content, while others have chosen to sign licensing agreements with tech firms.
Source: CNN Business