AI competition framed as an urgent crisis requiring immediate guardrails
The 'technological cold war' metaphor and the emphasis on military AI risks amplify urgency and crisis framing around AI development.
“China and the US are currently locked into a race on artificial intelligence that is becoming something of a technological cold war.”
Framed as potentially harmful due to leadership dysfunction
Expert commentary from Sarah Kreps links the trial to worsening public perception of AI, implying that the behavior of its leaders is undermining trust in the technology itself, despite no direct critique of AI’s capabilities.
“This is not looking good for any of them, and I think that that's a little bit unfortunate for the AI industry at a time when the public perception of AI is quite negative and seems to be getting worse”
AI and robotics framed as positive drivers of China's future
The article uses emotionally resonant, cinematic language to depict AI and robotics in China as exciting, futuristic, and beneficial, particularly through scenes of children interacting with robots in a lab.
“a group of kindergarten children cackle with delight as they watch a robot fish swim around the tank.”
AI portrayed as a dangerous threat to personal liberty
The framing centers on how AI-driven facial recognition directly led to a wrongful arrest, emphasizing the risks of automated systems overriding human judgment. The narrative warns that 'a bad match on a screen can turn into a search warrant and jail time,' positioning AI as an active danger.
“A bad match on a screen can turn into a search warrant and jail time. For Lipps, that risk became painfully real.”
Framed as a high-stakes, potentially destabilizing rivalry
[cherry_picking]: While AI is mentioned as a key issue, the framing focuses on rivalry 'compared to a nuclear arms race,' emphasizing danger and competition over cooperation or regulation.
“Another major issue for the two superpowers is artificial intelligence, where rivalry has been compared to a nuclear arms race and both sides are seeking channels of communication to avoid conflict.”
AI framed as a destructive force displacing workers without sufficient societal consideration
[cherry_picking], [appeal_to_emotion]: The coverage emphasizes job losses over productivity gains, positioning AI as harmful to livelihoods.
“The brutal cuts make GM the latest major American company to slash white-collar jobs as bosses race to reshape their workforces for the AI age.”
Facial recognition technology is framed as a beneficial tool for public safety
[narrative_framing], [loaded_language]: The technology is described through positive police narratives and emotionally favorable language (e.g., 'fabulous'), positioning it as a net positive for law enforcement.
“Scotland Yard Commissioner Sir Mark Rowley said a judgment in favour of the force provided a 'mandate' to expand its use of the 'fabulous' technology.”
Forensic technology is portrayed as beneficial in uncovering truth and correcting errors
Advanced DNA evidence is credited with linking the real perpetrator to the crime, resolving a decades-old case.
“Since 2018, authorities had used advanced DNA evidence to link Brashers to the strangulation death of a South Carolina woman in 1990...”
AI development is framed as harmful, contributing to job insecurity and worker exploitation
[misleading_context], [appeal_to_emotion]: AI is linked to 'devastating job cuts' and the 'forced' training of replacement systems, portraying it as destructive rather than beneficial.
“staff are facing devastating job cuts, draconian surveillance, and the cruel reality of being forced to train the inefficient systems being positioned to replace them”
Algorithmic systems are framed as adversarial forces manipulating user behavior, especially for young people
[narrative_framing]: The framing positions recommender algorithms as intentionally harmful, with calls to disable them by default for adults and entirely for children, implying inherent hostility in their design.
“The committee recommends requiring platforms to disable recommender algorithms entirely for children and by default for people over 18.”