AI threatens journalistic integrity

The use of artificial intelligence (AI) has risen significantly in the last few years, sparking concern about the integrity of journalism. There is little understanding of the impact AI has on journalism as a whole, and even less of the influence it has on writers and readers. Although it may seem helpful for journalists brainstorming story pitches or composing a simple news brief, AI can have detrimental effects on the integrity of a publication and on readers’ perceptions. 

When creativity is a focal point of a profession, there is often concern about AI imposing on the artistic process. Writers, musicians, journalists and artists are at risk of having their careers replaced by AI-generated work. But the discussion of AI and job loss overlooks a more profound issue: automating creativity diminishes the individuality of human expression. For writers, the threat of AI looms constantly. This is problematic for both the writers and the readers of newspapers, magazines and news sites, because AI fails to capture the raw human emotion on which journalism thrives. 

AI cannot replicate the cultural, emotional and ethical components that make journalism impactful. Churning out clear, quick-to-read articles supports the high demand for easy-to-consume information, feeding society’s growing reliance on AI. When resources like ChatGPT are used to write large volumes of content, they wash out real journalistic value.

Deepfakes and AI-written articles or news posts can be used to spread false information and manipulate readers. When audiences consume breaking news from an AI source, there is reason to question the credibility of the content. AI-generated media often includes bias and fails to reference its sources, leading to inaccurate facts and details being published. When generating an answer to a prompt, the machines scour the internet for information to spout into their writing, but because AI has no conscience, it absorbs writing that already carries biases and often lacks nuance. 

Reporter Maggie Harrison Dupre discovered how much AI-written content is featured in major publications such as the “LA Times,” “Miami Herald” and “Us Weekly.” When readers rely on journalists for information to deepen their understanding of political topics and develop personal opinions, they want reliable human sources, not artificially programmed content.

The popular athletic magazine “Sports Illustrated” was found to have numerous artificially generated authors, whose staff biographies included fake descriptions of their hobbies and AI-generated images of themselves. When questioned about the robotic writers, “Sports Illustrated” removed the AI staff descriptions from its website. 

Similarly, the San Francisco-based news organization “Hoodline” has recently been publishing AI-generated articles, complete with fake bylines, on its website. “Hoodline” markets itself as local news and carries a disclaimer stating that “while AI may assist in the background, the essence of our journalism—from conception to publication—is driven by real human insight and discretion.” This kind of dependence on technology should not become a standard or be normalized in any way.

Access to the press is a crucial part of American life. However, when readers realize that the sources they trust for information use AI, they may hesitate to read the news at all, which would lead to more misinformation and a less informed public. 

It is true that AI resources are simple to use and efficient in certain scenarios. When on a tight deadline, AI can generate helpful starting points and simplify complicated topics, so it makes sense that writers would feel inclined to use it for convenience. For example, if one were to ask ChatGPT for an article or a write-up of news events, the request could be executed in a short amount of time. This, however, does not account for the quality of the work produced.

Despite the convenience, there is no excuse for the immoral use of technology to replace human thought. AI in journalism not only fuels a massive wave of false information but is also unethical because it removes the humanity of the press. Regulations are important to help prevent AI misuse; however, there is no place for it in journalism. It is unreliable, irresponsible and immoral. Our staff does not condone the use of AI in the production of our publication. 
