A safety-centric perspective on innovation and risk in the use of artificial intelligence in genomics
@Article{ref1,
author={Hassan, Hassan A.
and An{\v{z}}el, Aleksandar
and {\.{I}}lgen, Bahar
and Luchner, Marina
and Schramowski, Patrick
and Blasse, Anja
and Mayer, Johannes U.
and Ladewig, Katharina
and Hattab, Georges
and Sprang, Maximilian},
title={A safety-centric perspective on innovation and risk in the use of artificial intelligence in genomics},
journal={Trends in Genetics},
year={2026},
month={April},
publisher={Elsevier},
abstract={Foundation models such as Nucleotide Transformer and Evo models have emerged as transformative tools, enabling multimodal DNA-RNA-protein design with demonstrated capabilities in identifying regulatory elements and generating functional biological elements including CRISPR-Cas systems and transposon arrays. The first viable artificial intelligence-designed bacteriophage genomes have been successfully created and tested in laboratory settings, with some exhibiting greater fitness and faster lysis dynamics than wild-type phages, proving that the principle of genomic design is practically feasible and elevating dual-use risk from a theoretical to an immediate concern. Multiple complementary techniques are being developed to enhance model transparency, validate predictions, and enable the detection of model failures in high-stakes applications. Implemented technical safeguards, including data filtering to remove human pathogens, are not sufficient to protect against the potential harms of genomics foundation models and need to be improved. The EU AI Act has classified genomic artificial intelligence applications as high risk, establishing new regulatory standards that mandate rigorous bias assessments, transparency requirements, and comprehensive risk management throughout the development and deployment lifecycle.},
issn={0168-9525},
doi={10.1016/j.tig.2026.04.001},
url={https://doi.org/10.1016/j.tig.2026.04.001}
}
Adopting a safety-centric approach, this article explores how generative artificial intelligence (AI), and more specifically foundation models for biological sequences, can exacerbate data quality issues, technical biases, and dual-use potential, particularly in critical applications such as clinical genetics, precision medicine, and pathogen engineering. This work centres on how misuse risks emerge throughout the innovation pipeline and how those risks intersect with the growing accessibility of generative genomic models. Particular attention is given to dual-use governance and infrastructure hardening in sequence analysis workflows. The work aims to provide scientists, regulators, and policymakers with a toolkit for discussing beneficial innovation in genomic AI while maintaining robust safeguards against harm and misuse.
