US Bombs Military Sites on Kharg Island and Threatens Iran's Oil Lifeline

via AP News, Scientific American, SCMP China

Smoke rises from a strike site on Iran's Kharg Island.

President Trump said US forces had "obliterated" military targets on Kharg Island, the export hub that handles most of Iran's crude shipments, while warning that the island's oil infrastructure could be hit next if Tehran keeps interfering with shipping through the Strait of Hormuz. The strike matters less as a battlefield event than as a signal that Washington is now openly threatening Iran's economic jugular, not just its military assets. The IEA says the war has already created the biggest oil-supply disruption in modern market history, and maritime traffic through Hormuz remains far below normal as shipowners hesitate to enter the chokepoint. An American official also told AP that 2,500 additional Marines and an amphibious assault ship are heading to the region, suggesting the White House expects either a longer campaign or a riskier next phase at sea.

Kharg Island in the northern Gulf is Iran's main oil export terminal. The Strait of Hormuz, between Iran and Oman, carries roughly a fifth of globally traded oil and gas, so even partial disruption quickly spills into fuel, shipping, and fertilizer costs worldwide.

US and China Open Paris Trade Talks Ahead of Trump-Xi Summit

via AP News, SCMP China

Chinese and US flags displayed side by side before trade talks.

US Treasury Secretary Scott Bessent and Chinese Vice-Premier He Lifeng will meet in Paris on Sunday and Monday for talks meant to clear the ground for Trump's state visit to Beijing on March 31. AP reports the meetings follow last year's one-year tariff truce, which paused a trade war that had briefly pushed reciprocal duties into triple digits before both sides stepped back. Analysts quoted by SCMP expect limited breakthroughs this weekend, but they will watch for narrower bargains on tariffs, investment restrictions, soybeans, and rare earths that can be packaged as deliverables for the summit. In practice, Paris looks less like a grand settlement than a staging session: each side wants enough stability to make a leader-level meeting worth holding, without giving away leverage before the cameras are on.

Trump and Xi agreed last year to a temporary trade truce after tariff rates briefly surged to extreme levels. The Paris meetings are the latest attempt to stabilize the relationship before a Beijing summit, not a sign that the deeper disputes over industrial policy and export controls are resolved.

Zelenskyy Says US Sanctions Waiver on Russian Oil Could Hand Moscow $10 Billion

via AP News, BBC World

Volodymyr Zelenskyy speaks at a news conference in Paris.

Ukrainian President Volodymyr Zelenskyy said Washington's 30-day waiver on some Russian oil sanctions, issued as the Iran war scrambles global energy markets, is "not the right decision" because it could give the Kremlin about $10 billion more to spend on the invasion. Standing beside French President Emmanuel Macron in Paris, Zelenskyy argued that every extra dollar from energy exports ends up financing missiles, drones, and artillery rather than diplomacy. The criticism shows how the Middle East war is now colliding with Europe's other live conflict: measures designed to calm oil markets can also loosen pressure on Moscow just as Ukraine is trying to hold Western unity together. BBC notes that Kyiv and several European allies fear any easing, even if framed as temporary crisis management, will lengthen the war rather than create room for negotiations.

Since the full-scale invasion in 2022, Western sanctions have tried to curb Russia's oil revenue without detonating global energy prices. The Iran war is now testing that balance by making policymakers more willing to tolerate Russian exports if they help offset lost Middle Eastern supply.

Staff Say xAI Is Losing People and Focus Amid Constant Reorganization

via Ars Technica

xAI founder Elon Musk speaking at an AI event.

A Financial Times report republished by Ars describes xAI employees as frustrated by nonstop internal churn, weak coordination, and leadership decisions that keep resetting priorities before teams can finish the last push. The complaints matter because Elon Musk has been trying to build xAI into a real challenger to OpenAI and Anthropic at the same time he is integrating parts of the company with his broader social platform and product stack. Inside the company, staff say the result is not creative urgency but organizational whiplash: projects change direction abruptly, morale is slipping, and some early talent has already left. That is a familiar Silicon Valley story, but at xAI the stakes are higher because frontier-model development is capital intensive and timing sensitive. A company that cannot keep its own research and product teams aligned can burn through enormous amounts of money without closing the gap.

xAI was launched in 2023 as Musk's answer to OpenAI. Since then it has tried to move simultaneously in model training, consumer products, and AI coding tools while competing for the same scarce researchers and computing capacity as much larger incumbents.

Invisible Unicode Supply-Chain Attack Hit GitHub, NPM, and Other Package Repositories

via Ars Technica, Aikido Security

Illustration representing hidden Unicode code in software packages on GitHub and NPM.

Researchers at Aikido Security say they found 151 malicious packages uploaded between March 3 and March 9 that used invisible Unicode characters to hide dangerous code from human reviewers. The trick is nasty because the package names and most of the source still look normal, while the malicious functions are written in characters that disappear in many editors, terminals, and code-review tools. Ars reports the campaign hit GitHub and also spread through ecosystems including NPM and Open VSX, extending a long-running class of software supply-chain attacks rather than inventing an entirely new one. What changes here is the defense problem: manual review, which teams often fall back on when automated scanning misses typosquatting or obfuscation, becomes much less useful when the bad code is literally not visible. That turns ordinary dependency hygiene into an even more adversarial text-rendering problem.

Supply-chain attacks try to compromise developers indirectly by poisoning third-party packages or dependencies instead of attacking the final target head-on. Modern software stacks pull in huge numbers of libraries, so one convincing fake package can spread surprisingly far before anyone notices.
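To illustrate why invisible characters defeat manual review, here is a minimal sketch of a scanner that flags invisible or zero-width Unicode code points in source text. This is an invented example, not Aikido's actual tooling, and the specific code-point list is only a plausible starting set rather than a complete defense.

```python
import unicodedata

# Code points that render as nothing (or nearly nothing) in many editors
# and diff viewers, which makes them useful for hiding code from reviewers.
INVISIBLE = {
    "\u200b",  # ZERO WIDTH SPACE
    "\u200c",  # ZERO WIDTH NON-JOINER
    "\u200d",  # ZERO WIDTH JOINER
    "\u2060",  # WORD JOINER
    "\ufeff",  # ZERO WIDTH NO-BREAK SPACE (byte-order mark)
    "\u3164",  # HANGUL FILLER
}

def find_invisible(source: str):
    """Return (line, column, code point name) for each suspicious character."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for col, ch in enumerate(line, start=1):
            # Flag the known-invisible set plus Unicode variation selectors,
            # which some campaigns have used to encode hidden payloads.
            if ch in INVISIBLE or "\ufe00" <= ch <= "\ufe0f":
                name = unicodedata.name(ch, f"U+{ord(ch):04X}")
                hits.append((lineno, col, name))
    return hits

# A line that looks like a plain assignment but carries a zero-width space.
sample = "const total = a +\u200b b;"
for lineno, col, name in find_invisible(sample):
    print(f"line {lineno}, col {col}: {name}")
```

Running a scanner like this in CI, rather than relying on human eyes, is the general shape of the mitigation: the characters are invisible to reviewers but perfectly visible to a program.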

[China Watch] China Approves Its First Implantable Brain-Computer Interface

via SCMP China, Scientific American

A patient interacts with a brain-computer interface implant system in China.

Neuracle Medical Technology has won China's first approval for an implantable BCI system, a regulatory milestone that allows the device to be used commercially for patients with partial spinal cord injuries. Scientific American describes the approval as a world first for this class of invasive interface, moving the technology out of the trial-only zone and toward actual clinical deployment. The immediate goal is modest but meaningful: helping patients regain some hand movement by reading brain activity and turning it into control signals. The larger story is strategic. China has been pouring money and policy support into neurotechnology, and this approval gives it a chance to claim not just lab progress but a real regulatory lead over rivals such as Neuralink. For a field usually dominated by futuristic demos, commercial approval is the harder benchmark because it says a government regulator is willing to let patients use the device outside pure experimentation.

A brain-computer interface translates neural signals into commands for software or hardware. Most implants remain experimental because the technical challenges are hard and regulators must weigh surgical risk, long-term reliability, and whether patients receive meaningful functional improvement.

OpenAI and Ginkgo Show an AI-Robot Loop for Running Real Biology Experiments

via Scientific American

Technicians work in Ginkgo Bioworks' automated laboratory.

OpenAI and Ginkgo Bioworks are showing what a more autonomous scientific workflow could look like: a model proposes biological experiments, a robot-heavy lab runs them, the results flow back into the model, and the system quickly decides what to test next. Scientific American says the collaboration demonstrates iteration at a pace that would be difficult for a human-only team to match, especially in large search spaces where thousands of possible biological tweaks are plausible. The important point is not that AI suddenly "solved" biology. It is that model-driven experimentation is starting to look operational rather than aspirational. If the loop is reliable, it could shift scientists' work away from manually planning each test and toward setting objectives, constraints, and validation standards. That would matter far beyond biotech startups because many scientific fields are bottlenecked less by ideas than by the speed and cost of physically checking them.

Ginkgo Bioworks runs highly automated biology labs that can test large numbers of experimental variants. The broader AI-lab idea is to close the loop between hypothesis generation and physical testing, making research feel more like active search than a slow sequence of separate manual steps.
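The propose-test-refine loop described above can be sketched abstractly. Everything in this example is invented for illustration: the "model" is a toy proposer that biases new guesses toward the best result so far, and the "lab" is a noisy stand-in function with a hidden optimum, not anything resembling OpenAI's or Ginkgo's actual systems.

```python
import random

def propose(history, n=8):
    """Stand-in for the model: suggest candidate 'tweaks' to test next.
    With no data it explores; otherwise it samples near the best result."""
    if not history:
        return [random.uniform(0, 10) for _ in range(n)]
    best, _ = max(history, key=lambda pair: pair[1])
    return [best + random.gauss(0, 1.0) for _ in range(n)]

def run_lab(candidate):
    """Stand-in for the automated lab: 'measure' one candidate.
    The hidden optimum is at 7; real measurements are noisy."""
    return -(candidate - 7) ** 2 + random.gauss(0, 0.1)

def closed_loop(rounds=15):
    history = []  # (candidate, measured result) pairs fed back to the proposer
    for _ in range(rounds):
        for candidate in propose(history):
            history.append((candidate, run_lab(candidate)))
    return max(history, key=lambda pair: pair[1])

best, score = closed_loop()
print(f"best candidate ~ {best:.2f}, score ~ {score:.2f}")
```

The point of the sketch is structural: the human sets the objective and the search space, while the loop itself decides what to test next based on what came back, which is exactly the shift from planning individual experiments to setting objectives and constraints.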

AI-Driven Memory Shortage Is Slowing Research Labs

via Nature News

Server memory modules and processors used for AI computing.

Nature reports that the AI boom is now squeezing scientific research through an unexpectedly mundane bottleneck: memory. Labs in machine learning and other data-heavy disciplines are facing rising prices and longer waits for the high-bandwidth memory and related components needed to run large models and advanced simulations. The result is not just inconvenience. Some projects are being delayed, resized, or redesigned because researchers cannot get the hardware mix they planned for, while others are spending more time on efficiency tricks simply to keep moving. That is a useful reminder that AI competition is no longer only about better models; it is also about supply chains for the chips and memory systems those models depend on. When industry demand surges fast enough, even well-funded academic work can end up competing for leftovers, which then shapes what kinds of science are practical to attempt.

Modern AI systems depend heavily on specialized memory that can move large amounts of data quickly. Shortages of that memory do not just slow AI companies; they also affect universities and labs that share the same suppliers and cannot pay hyperscaler prices.

NASA Says Artemis II Is Back on Track for April 1, but the Risk Questions Haven't Gone Away

via Scientific American, Ars Technica

NASA's Space Launch System rocket on the launch pad for Artemis.

NASA says it is again targeting April 1 for the launch of Artemis II, the first crewed mission around the Moon since Apollo, after another round of delays tied to rocket safety concerns. The immediate news is straightforward: managers have decided the stack is ready to roll back out. The harder story is that the agency is still being pressed on how much residual risk remains and how candidly it has explained those risks in public. Scientific American reports that NASA officials acknowledged the mission is inherently dangerous, while Ars notes that recent briefings often sidestepped direct questions about just how risky specific known issues are. None of that means the launch should not happen. It does mean Artemis is entering the phase where technical ambition, political symbolism, and communication discipline all matter at once, because the mission's credibility now depends on more than a successful countdown.

Artemis II is the first crewed flight of NASA's Moon-return program and would send astronauts around the Moon without landing. Because it is the first human mission on the new rocket and spacecraft stack, every delay and engineering concern draws outsized scrutiny.

Black Students Remain the Fastest-Growing Group on the Common App

via Higher Ed Dive

Students walk across a college campus during Common App admissions season.

New mid-season Common App data suggest the Supreme Court's 2023 ban on race-conscious admissions has not produced an immediate collapse in the number of Black applicants using the platform. Higher Ed Dive reports that applicants from underrepresented minority groups are up 5% year over year, with Black applicants rising 8% and applicants identifying as two or more races up 7%, making them the fastest-growing groups in the current cycle. White applicants still make up the largest share, but their share of US applicants edged down again, continuing a trend that Common App says has been visible for more than a decade. The important caveat is that applications are not admissions offers, and platform-wide numbers cannot tell you what happened at any single selective university. Still, the early data matter because they cut against the simplest prediction that the post-affirmative-action era would immediately shrink the top of the admissions funnel.

Common App is a nonprofit application portal used by hundreds of colleges, so its aggregate data are an early way to track who is applying. The Supreme Court's 2023 decision barred colleges from explicitly considering race in admissions, making application-pool shifts a closely watched leading indicator.