A new European research initiative aims to replace fragmented security solutions with a comprehensive approach that protects artificial intelligence systems from initial design through to real-world operation.
Funded under the EU's Horizon Europe programme, the SHASAI project (Secure Hardware and Software for AI systems) will tackle cybersecurity risks to ensure AI technologies remain resilient and trustworthy. The initiative focuses on the entire lifecycle of AI systems rather than applying isolated fixes.
“With SHASAI, we aim to move beyond fragmented security solutions and address AI cybersecurity as a lifecycle challenge,” said Leticia Montalvillo Mendizabal, Cybersecurity Researcher at IKERLAN and SHASAI Project Coordinator.
“By combining secure hardware and software, risk-driven engineering and real-world validation, the project will help organisations deploy AI systems that are not only innovative, but also resilient, trustworthy and compliant with European regulations.”
Real-world scenarios
The project consortium will validate its methods across three real-world scenarios to ensure the results transfer across sectors. The use cases are AI-enabled cutting machines in the agrifood sector, eye-tracking systems for assistive healthcare technologies, and a tele-operated last-mile delivery vehicle in the mobility sector.
SHASAI plans to translate high-level security principles into concrete technical practices that support Europe's broader efforts to promote trustworthy AI. The project aligns with frameworks including the EU AI Act, the Cyber Resilience Act (CRA), the NIS2 Directive and the EU Cybersecurity Strategy.