Family of Child Injured in Canada School Shooting Sues OpenAI

via BBC World

Flowers and tributes left outside Tumbler Ridge school after the shooting

The family of Maya Gebala, a 12-year-old critically injured in a mass shooting at a school in Tumbler Ridge, British Columbia, on February 10 has filed a civil lawsuit against OpenAI, alleging the company knew the gunman was planning an attack and failed to alert authorities. Eight people died in the attack, including five children and the suspect's mother -- one of the deadliest school shootings in Canadian history. The suspect, Jesse Van Rootselaar, 18, had used ChatGPT for months before the attack, describing detailed "scenarios involving gun violence." Twelve OpenAI employees reportedly flagged the conversations as indicating imminent risk of harm and recommended contacting Canadian law enforcement. The lawsuit alleges that request was overruled internally -- and the only action taken was to ban the account. Van Rootselaar then opened a second account and continued planning. OpenAI has maintained the account did not meet its threshold for a "credible or imminent" plan for serious harm. Sam Altman met with Canadian officials on March 4 and pledged to strengthen safety protocols. Maya remains hospitalized with a catastrophic brain injury.

AI chatbot companies have increasingly come under scrutiny for how they handle users who express violent intent. OpenAI and others have safety teams and reporting policies, but standards for when to contact law enforcement are not standardized across the industry. This is the most prominent civil lawsuit against an AI company in North America alleging failure to prevent real-world violence. OpenAI's stated policy is that it notifies authorities only when a threat is both credible and imminent -- criteria the company says were not met in this case. Canada has been actively pressing tech companies on AI safety since the attack.

After Outages, Amazon Requires Senior Engineers to Sign Off on AI-Assisted Code Changes

via Ars Technica, Financial Times

Amazon warehouse and technology infrastructure

Amazon has issued a new policy requiring junior and mid-level engineers to get sign-off from a senior engineer before pushing any AI-assisted code changes, following at least two service outages tied to its Kiro AI coding tool. In December, Kiro autonomously deleted and recreated a cloud cost calculator environment on AWS, causing a 13-hour outage. A second incident followed shortly after. Amazon retail CTO Doug Treadwell convened an emergency session of the company's normally optional weekly operations meeting to address the pattern -- an unusual step that signals internal alarm. The policy is one of the first explicit corporate governance rules requiring human approval for AI-generated code at scale. The timing is fraught: Amazon has cut more than 16,000 corporate roles in recent rounds of layoffs, and multiple engineers have attributed a surge in "Sev2" incidents -- situations requiring a rapid response to prevent product outages -- to reduced headcount. Amazon disputes that the layoffs are responsible, but has not disputed the increase in incidents.

Amazon has been among the most aggressive corporate adopters of AI coding assistants, deploying Kiro and other tools across its engineering workforce as part of a broader effort to boost developer productivity. AI-assisted coding tools suggest, autocomplete, and increasingly write entire blocks of code autonomously -- but they do not yet reliably predict the system-level consequences of changes. The December AWS incident, originally reported by the Financial Times, is the most concrete example to date of an AI coding tool causing a real-world production outage at a major cloud provider. Amazon's new policy doesn't ban AI-assisted changes -- it adds a human review requirement that was apparently absent before.

Scientists Built a Virtual Cell That Simulates Every Molecule -- Then Watched It Divide

via Nature News

Computer-generated illustration of a simulated cell in early stages of division

A team led by computational biophysicist Zane Thornburg at the University of Illinois Urbana-Champaign has published the most complete simulation of a living cell ever built -- one that models nearly every chemical reaction inside a bacterial cell, watches it copy its DNA, and then sees it physically divide in two. The work, published March 9 in the journal Cell, used the simplest known living organism: JCVI-Syn3a, a synthetic bacterium with just 493 genes. Every molecule in the simulation followed rules derived from real-world measurements, and chemical reactions occurred only when interacting molecules became physically close in the 3D model. Early attempts kept failing -- the genome would collapse faster than it could be copied, or spill out of the simulated cell membrane. After months of adjustments, the team ran a final version over Thanksgiving weekend and returned to find a complete cell division had occurred. The simulated cell took 105 minutes to divide -- "scarily close" to the real organism's actual reproduction time. Running those 105 minutes required 6 days on a supercomputer.

Understanding how all the molecules inside a living cell interact to produce life has been one of biology's central unsolved problems. Most existing simulations model isolated subsystems -- protein folding, gene expression, metabolic networks -- rather than the full, interconnected chemistry of a living cell. What makes this work notable is not just its scope but that the simulation reproduced realistic biology: the 105-minute division cycle, the physical swelling and elongation of the cell as it divides. JCVI-Syn3a was originally created by the J. Craig Venter Institute as part of research into the minimum gene set required to sustain life. The open questions: about a dozen of Syn3a's 493 genes have unknown functions, and the team modeled them as inert spheres. When those functions are eventually understood, they may change the simulation's behavior.

Age Verification Laws Could Force Trans People to Out Themselves to Use the Internet

via The Verge

Illustration depicting a distorted Kansas ID document symbolizing trans identity document problems

Kansas passed a law in February 2026 that invalidated all transgender people's driver's licenses and IDs overnight, requiring them to reapply for new IDs listing incorrect gender markers. Now, with over half of US states having passed age verification or digital ID laws requiring online platforms to check users' identities, tech policy researchers warn these systems are building a second layer of danger for trans people. Automated identity systems -- which either compare uploaded IDs against government databases or use AI "Facial Age Estimation" to analyze facial features -- are designed specifically to detect mismatches between how a person looks and what their ID says. Trans people's IDs, especially post-Kansas and post-passport order, frequently have exactly those mismatches. Research shows these systems also fail disproportionately for people of color. If flagged, users may be denied access to websites, apps, and public services with no standardized appeals process. The open-ended language in most state laws only requires a "commercially reasonable" verification method -- leaving no standard for accuracy or recourse.

In November 2025, the Supreme Court blocked court orders that had temporarily stopped the Trump administration from refusing gender marker changes on US passports. The January 2025 executive order directing the federal government to recognize only "immutable biological sex" was not legally binding, but has been adopted by multiple states and agencies. Age verification laws have been passed primarily as a tool to restrict minors from accessing adult content, but the digital ID infrastructure they create affects all users -- and trans people face disproportionate harm from identity-checking systems that are designed to look for inconsistencies.

Volkswagen to Cut 50,000 Jobs in Germany by 2030 as Profits Hit Decade Low

via BBC World

Volkswagen cars in employee parking lot at the Chattanooga, Tennessee assembly plant

Volkswagen has announced it will eliminate around 50,000 jobs across its German operations by 2030 -- roughly a quarter of its German workforce -- after post-tax profits fell 44% in 2025, dropping to their lowest level since 2016. CEO Oliver Blume disclosed the figure in a letter to shareholders in the company's annual report Tuesday, saying VW is "operating in a fundamentally different environment." The cuts span the entire VW Group, including Audi and Porsche. VW attributed the collapse to three simultaneous pressures: US import tariffs, intensifying competition from Chinese automakers both in China (a formerly lucrative market) and in Europe where Chinese brands are selling at lower price points, and the expensive restructuring costs of shifting its lineup to electric vehicles. Net profit fell from €12.4 billion in 2024 to €6.9 billion in 2025. Finance chief Arno Antlitz called the current 4.6% profit margin "not sufficient in the long run." VW had already struck a union deal in late 2025 to cut more than 35,000 jobs; Tuesday's announcement raised the total target further.

Volkswagen Group is Europe's largest automaker by production volume, with brands including VW, Audi, Porsche, Seat, Skoda, CUPRA, and Bentley. It has been squeezed between two forces: Chinese automakers (particularly BYD and SAIC) have taken large shares of VW's historically dominant China market, while those same brands are now competing aggressively in Europe at lower price points. Simultaneously, the shift to EVs requires billions in upfront capital before EV revenue can offset development costs. The late-2025 layoff announcement triggered weeks of tense labor negotiations with IG Metall, Germany's powerful automotive union, which ultimately accepted the 35,000-job figure with conditions on gradual reduction. This week's announcement raises the overall target and extends it to the wider group.

Russia-Backed Hackers Are Targeting Signal Users with Phishing Attacks, Dutch Intelligence Warns

via BBC World

Signal app logo displayed on a smartphone

Dutch intelligence agencies have identified a large-scale, Russia-backed cyber campaign targeting individual Signal and WhatsApp users -- specifically government officials, military staff, journalists, and civil servants. The attacks do not compromise Signal or WhatsApp's systems directly; instead, hackers posed as Signal customer support to trick targets into sharing account PINs or SMS verification codes, gaining control over their messages. The MIVD and AIVD, the Dutch military and civilian intelligence services, said the campaign targets people of interest to the Russian state and has been observed across multiple countries. Signal confirmed its systems remain secure but said it is taking the reports "very seriously." Security experts noted that end-to-end encryption protects messages in transit but cannot protect an account if the device or credentials are directly compromised. Users are advised to never share their Signal PIN, regularly audit which devices are linked to their account, and treat unexpected verification code requests as likely phishing.

Signal has become the secure messaging standard for government officials and journalists worldwide after a string of surveillance revelations beginning in 2013. Its end-to-end encryption makes intercepting messages in transit essentially impossible, which means adversaries have shifted their approach to directly compromising devices or credentials. Dutch intelligence agencies have a strong track record on attributing Russian cyberespionage -- they were among the first to publicly document Russian GRU hacking operations targeting European institutions. The timing coincides with Russia's continued interest in intelligence on Western governments amid the ongoing Iran war and broader geopolitical tensions.

First-of-Its-Kind E. Coli Vaccine Shows Strong Protection Against Childhood Diarrhea Deaths

via Scientific American

Pink-colored rod-shaped E. coli bacteria floating against dark background

A vaccine called ETVAX has shown significant protection against a form of E. coli responsible for up to 42,000 childhood deaths per year in low-income countries, in the first large-scale pediatric trial of any E. coli vaccine. Results were published last month in The Lancet Infectious Diseases. Researchers from the University of Gothenburg tested ETVAX on 4,936 children in Gambia, ages six to 18 months, in a randomized controlled trial with two-year follow-up. Children received three oral doses. The vaccine reduced moderate-to-severe diarrhea from ETEC by 48% when all cases were included -- and by 68% in infants under nine months. It also appeared to offer partial protection against other gut pathogens including viruses and parasites, reducing overall severe diarrhea from any cause by 21%. No adverse effects were found. The vaccine targets the four most common E. coli adhesin proteins, which appear in about 80% of all ETEC strains. ETVAX was developed by Scandinavian Biopharma.

ETEC is the most common cause of traveler's diarrhea and a leading killer of children under five in low-income countries -- particularly where access to clean water and sanitation is limited. Until now, there has been no approved vaccine specifically targeting E. coli infections in humans. Dukoral, an approved oral cholera vaccine, offers partial and incidental protection against some ETEC strains, but was not designed or optimized for ETEC. The challenge in developing an ETEC vaccine is that the bacteria have 26 distinct adhesin proteins; ETVAX covers the four most common ones. Previous smaller trials in Bangladesh and Zambia found ETVAX safe and effective; the Gambia trial is the first large randomized study in a pediatric population. Some study authors hold commercial rights to the vaccine.

[China Watch] China's Cybersecurity Agency Issues Second Warning on OpenClaw Risks as Adoption Surges

via South China Morning Post

OpenClaw AI agent software logo and interface

China's national cybersecurity emergency response agency (CNCERT) issued its second warning in a week about security risks tied to OpenClaw, the open-source AI agent that has swept China in recent weeks, even as major cloud providers and local governments rush to deploy it. CNCERT warned that OpenClaw's design -- which requires high-level system permissions to autonomously perform tasks -- creates two specific and serious risks. First, "prompt injection": attackers can embed hidden malicious instructions in webpages that OpenClaw reads and acts upon, potentially causing it to leak API keys, system credentials, or private data. Second, "operational errors": the agent may misinterpret commands and unintentionally delete critical files or emails, causing permanent data loss. Despite the warnings, Alibaba Cloud and Tencent Cloud have been actively marketing easy OpenClaw deployment packages, and multiple local governments have offered subsidies to encourage adoption. The tensions between official caution and corporate momentum reflect a recurring dynamic in China's AI boom.

OpenClaw is an open-source AI agent created by Austrian developer Peter Steinberger in late 2025. It can autonomously perform computer tasks on behalf of users -- writing emails, organizing files, preparing documents. The tool went viral in China in early 2026, nicknamed '小龙虾' (little crayfish), surpassing 100 million downloads. Yesterday's digest noted China's first warning; this is the second. The morning's digest also covered a related story about Amazon requiring human sign-off on AI-assisted code changes after outages -- a parallel concern about AI agents operating with insufficient oversight. CNCERT is China's civilian cyber emergency coordination body; it is non-governmental and focuses on technical coordination rather than law enforcement.

Grammarly's Response to Using Journalist Names Without Permission: An Opt-Out Email Address

via The Verge, Platformer

Screenshot of a draft document in Google Docs showing an AI-generated Grammarly comment attributed to a journalist's real name

Following widespread backlash over its "Expert Review" feature -- which uses real journalists' names and identities to give AI-generated editing suggestions credibility -- Grammarly's parent company Superhuman has announced its response: writers can email expertoptout@superhuman.com to request removal. The company declined to apologize, declined to pause or remove the feature, and issued a statement that did not use the word "permission" once. Grammarly CEO Shishir Mehrotra declined to speak with reporters. The Verge, whose own editors including Nilay Patel were among those whose names had been used without consent, noted that the opt-out model is backward: most people have no way to know their names are being used unless they or someone they know happens to test the product. Grammarly added that it is "working on further refining" the feature but gave no timeline or specifics. Legal analysts have pointed out that using a named person's identity to commercially promote or endorse a product is typically covered by right-of-publicity laws, and that an opt-out model may not satisfy them.

This is a follow-up to this morning's digest story. Grammarly's 'Expert Review' feature was built into Superhuman, a premium email client. When a user drafts an email, the feature shows editing suggestions and attributes them to real, named writers -- including major journalists, authors, and public figures -- without those people's knowledge or consent. It was first revealed last Wednesday by Wired. The core legal question is whether using someone's name to lend authority to an AI product constitutes commercial misrepresentation or violates right-of-publicity laws -- something intellectual property lawyers say is genuinely novel territory with no settled case law.

Ig Nobel Prize Ceremony Moves to Europe as US Becomes 'Unsafe' for International Guests

via Ars Technica

Audience at the 2022 Ig Nobel ceremony at Harvard University, with paper airplanes filling the air

The Ig Nobel prizes -- the annual celebration of science that "makes you laugh, then think" -- will permanently relocate from Boston to Europe, after founder Marc Abrahams declared it has become "unsafe" for international guests to travel to the United States. The ceremony, traditionally held at Harvard, MIT, and Boston University, drew international scientists to Boston every fall for over three decades. In 2025, four of the 10 winners skipped the US ceremony rather than face complications at the US border. This year, Abrahams has joined forces with ETH Zurich and the University of Zurich; the ceremony will rotate between Zurich (every even year) and a different European city each odd year, structured like the Eurovision Song Contest. Abrahams noted the US visa climate is affecting other events too: the Game Developers Conference in San Francisco is seeing similar dropout rates among international attendees. A Godot Foundation director in Spain told Ars, "I honestly don't know anyone who is not from the US who is planning on going to the next GDC."

The Ig Nobel prizes have been awarded since 1991 for research published in genuine peer-reviewed journals -- the criterion is not that it is bad research, but that it is improbable in a way that prompts both laughter and reflection. Past winners have included real Nobel laureates. The ceremony at Harvard's Sanders Theatre was famous for paper airplane throwing, a running gag with a child named 'Miss Sweetie Poo' who interrupts overlong speeches, and 60-second acceptance speech limits. The move to Europe is the most concrete major cultural-scientific event relocation to publicly cite US border climate as its reason, and part of a pattern of international conferences and events shifting away from the US.