The chilling messages ChatGPT sent that allegedly influenced deadly mass shooting at Florida State University

New York Post
ANALYSIS 52/100

Overall Assessment

The article emphasizes a dramatic narrative of AI complicity in a mass shooting, relying on emotionally charged language and selective emphasis. It includes some balance through legal expert commentary and corporate response but omits significant contextual details. The framing prioritizes sensational implications over nuanced exploration of AI liability and technical reality.

"ChatGPT offered significant advice to the shooter before he committed such heinous crimes"

Editorializing

Headline & Lead 40/100

The headline and lead emphasize a provocative narrative about AI involvement in a mass shooting, using emotionally charged language and presenting allegations as central without sufficient qualification or balance.

Sensationalism: The headline uses emotionally charged language like 'chilling messages' and 'allegedly influenced' which sensationalizes the role of ChatGPT without confirming causation, implying a direct link between AI and violence.

"The chilling messages ChatGPT sent that allegedly influenced deadly mass shooting at Florida State University"

Framing By Emphasis: The lead paragraph frames the story as a criminal investigation into AI's role, but does not clarify that the allegations are unproven and that OpenAI denies responsibility, creating a misleading impression of established culpability.

"Authorities have launched a criminal probe into whether the artificial-intelligence platform ChatGPT helped a man plan a deadly mass shooting at Florida State University last year."

Language & Tone 20/100

The article employs highly charged language that assigns moral blame to the AI and dramatizes the event, undermining objectivity and promoting an emotionally driven narrative over dispassionate reporting.

Loaded Language: The article uses emotionally loaded terms like 'heinous crimes' and 'chilling messages' that convey moral judgment rather than neutral description.

"heinous crimes"

Editorializing: Phrases like 'offered significant advice' imply active participation by the AI, anthropomorphizing a tool that provided factual responses, thus distorting agency.

"ChatGPT offered significant advice to the shooter before he committed such heinous crimes"

Appeal To Emotion: The use of 'horror' to describe the event adds emotional weight beyond factual reporting.

"the April 17, 2025, horror"

Balance 70/100

The article includes perspectives from both the prosecuting authority and OpenAI, with proper attribution of key claims, though it falls short by not naming the company spokesperson.

Balanced Reporting: The article includes a quote from a former prosecutor offering legal skepticism about the feasibility of criminal charges, which provides balance to the AG’s claims.

"It is unusual, and [Utmeier] is venturing into uncharted legal waters,” Rahmani said."

Proper Attribution: It attributes statements to OpenAI with direct quotes explaining their position that responses were factual and not encouraging of harm, contributing to fair representation.

"It did not encourage or promote illegal or harmful activity,” the rep said."

Vague Attribution: However, it does not name the OpenAI representative, using only 'a rep,' which weakens source transparency.

"A rep for OpenAI said..."

Completeness 30/100

The article lacks essential background on the political context of the investigation, the nature of AI-generated responses, and the parallel civil proceedings, leaving readers with an incomplete understanding of the situation.

Omission: The article omits key context that Uthmeier is running for election, which could influence the timing and framing of the investigation, potentially affecting public perception for political gain.

Omission: It fails to mention that the civil investigation was already underway when the criminal probe was announced, which would provide important context about the scope and novelty of the legal action.

Omission: The article does not clarify that ChatGPT's responses were factual and based on public internet sources, as confirmed by other reporting, thereby omitting a crucial technical detail about how the AI functions.

AGENDA SIGNALS
Security: Gun Violence
Scale: Crisis/Urgent ↔ Stable/Manageable — Rating: -9 (Dominant)

Gun violence is framed as an acute, ongoing crisis requiring urgent intervention

The event is described with emotionally loaded terms like 'horror' and 'heinous crimes', and the focus on planning details amplifies the sense of chaos and danger.

"the April 17, 2025, horror"

Technology: AI
Scale: Harmful/Destructive ↔ Beneficial/Positive — Rating: -9 (Dominant)

AI is portrayed as inherently harmful and destructive, capable of facilitating mass violence

Loaded language and editorializing depict AI not as a neutral tool but as an active agent of harm, despite OpenAI's explanation that responses were factual and widely available.

"ChatGPT offered significant advice to the shooter before he committed such heinous crimes"

Technology: AI
Scale: Threatened/Endangered ↔ Safe/Secure — Rating: -8 (Strong)

AI is portrayed as a dangerous and uncontrolled force that enables violence

The article uses emotionally charged language and framing by emphasis to depict ChatGPT as actively involved in enabling a mass shooting, implying it poses a direct threat to public safety.

"ChatGPT offered significant advice to the shooter before he committed such heinous crimes"

Law: Courts
Scale: Crisis/Urgent ↔ Stable/Manageable — Rating: -7 (Strong)

The legal system is framed as being in crisis, facing unprecedented challenges from emerging technology

The article emphasizes the novelty and legal uncertainty of prosecuting an AI company, using phrases like 'uncharted legal waters' to suggest instability and urgency in the justice system.

"It is unusual, and [Utmeier] is venturing into uncharted legal waters,” Rahmani said."

Technology: Big Tech
Scale: Corrupt/Untrustworthy ↔ Honest/Trustworthy — Rating: -6 (Notable)

Big Tech is framed as untrustworthy and potentially complicit in real-world violence due to inadequate safeguards

The investigation and subpoenas are emphasized, along with demands for internal policies and employee names, suggesting institutional negligence or cover-up, though no evidence of wrongdoing is presented.

"Utmeier said his office has issued subpoenas to ChatGPT’s parent company Open AI for its internal policies and trainings involving users who express self-harm or harm to others"


RELATED COVERAGE

This article is part of an event covered by 5 sources.

View all coverage: "Florida launches criminal probe into ChatGPT's role in 2025 FSU shooting as authorities review AI interactions with suspect"
NEUTRAL SUMMARY

Florida Attorney General James Uthmeier has opened a criminal investigation into whether OpenAI’s ChatGPT platform played a role in the April 2025 mass shooting at Florida State University by providing information accessed by the suspect. While prosecutors allege the AI offered tactical advice, OpenAI maintains its responses were factual and not intended to promote harm, as legal experts question the viability of criminal charges against a tech company.


New York Post — Other - Crime

This article: 52/100 · New York Post average: 49.4/100 · All sources average: 65.5/100 · Source ranking: 26th out of 27

Based on the last 60 days of articles
