NRC's editors select the best articles from The Economist for a broader perspective on international politics and economics.
The threat from new pathogens is even graver than that posed by AI-backed hackers.
This article is from The Economist.
[Photo: a pipette adding a sample to test tubes in a biotechnology laboratory.]
Artificial intelligence (AI) will soon add biology to its list of superhuman abilities. Anthropic’s Mythos model—already withheld from general release owing to its hacking skills—recently succeeded on a third of the most difficult data-crunching tasks pulled together by biology experts. Mythos could do things that were beyond all of the tested humans, such as reverse-engineering a cell type from raw DNA data.
As we report, problem-solving like that means AI may soon grant people extremely dangerous powers: to synthesise viruses, generate novel neurotoxins or assemble omnicidal “mirror life”. Such dangers are the dark side of AI’s wonderful promise to democratise intelligence. It is even conceivable that an AI could give a misanthropic loner the power to end humanity.
Biosecurity risks are thus far worse than cyber-security ones. If one engineered virus could cause billions of deaths, humanity has no room to learn from mistakes. There may be no “defender’s dividend”, in which AI itself helps forestall the danger. Software can be fixed quickly, but human biology is far less malleable. Making models safe for release will therefore require breakthroughs in the fundamental science of AI.
How much time is there? Today’s public AI models are book-smart: they ace paper tests, yet fortunately still appear to give novices little practical help at the laboratory bench. But Anthropic, the maker of the non-public Mythos, warns that its model may soon be able to guide novices through tricky lab work. And because Mythos and its peers have not been tested for such practical abilities, they may already possess them.
Models with these talents will—like nuclear weapons—never be safe in public hands. And today’s techniques for making them safe fall short. One option, for example, is to try to make them refuse dangerous requests. “Jailbreaking” these models by tricking them into giving forbidden answers has become harder, but in one recent study 90% of the novice participants were still able to extract answers about virology from models that ought to have clammed up. Gambling the future of humanity on such defences would be a mistake.
Another measure is to exclude dangerous data from models’ training runs. SecureBio, a think-tank, suggests removing information on mirror life, on obtaining live pathogens, on bypassing biodefence guardrails and on assessing pandemic potential. The trouble is that a sufficiently capable model may work out the excised knowledge from first principles. Similar attempts to remove child-sexual-abuse material from the training data of image generators did not succeed: a system trained on benign images can depict obscenities it has never seen.
A third idea is to focus on the physical world. Governments’ security services could and should pay more attention to the vendors of technologies with both legitimate and nefarious uses, such as DNA synthesis. “Know your customer” regulations should limit such services to established researchers. But creating viruses is not like building a nuke, which requires scarce and traceable material. In biology, using off-the-shelf technology for lethal ends is relatively easy. The state cannot monitor every Petri dish.
Scientific breakthroughs will therefore be needed to create new kinds of safeguards. One promising approach is the equivalent of brain surgery on models after they are trained. Another technique teaches models to favour wrong answers in some areas; yet another could be to uncover and disable the neurons that activate in work on synthetic biology. That would require advances in foundational AI science so as to crack open the “black box” of existing neural networks.
Until such techniques exist, governments must limit access to systems that might enable bioterrorism. This matters especially for open-source models, which cannot be recalled once they have been disseminated, and whose use cannot be monitored. Responsible researchers should be able to use AI to advance the frontiers of science—DeepMind’s Isomorphic Labs is developing novel cancer therapies, for example—but under security protocols. There is no point harnessing AI to improve lives if it also gives terrorists the power to make humans extinct.
© 2026 The Economist Newspaper Limited. All rights reserved.