The NCSC Guidelines for Secure AI System Development provide comprehensive recommendations to ensure that AI systems are designed, developed, deployed, and operated securely throughout their entire lifecycle. The guidelines are structured into four key areas: secure design (focusing on risk assessment and threat modeling from the outset), secure development (addressing supply chain security, documentation, and technical debt management), secure deployment (covering protection of infrastructure and models, incident management, and responsible release), and secure operation and maintenance (emphasizing logging, monitoring, update management, and information sharing after deployment). The guidelines advocate a ‘secure by default’ approach aligned with international best practices, and prioritize transparency, accountability, and organizational leadership so that security becomes a top business priority. They are intended to help organizations mitigate both traditional and AI-specific security risks, ensuring that AI technologies are reliable, fair, and resilient against evolving cyber threats.
Publication's URL: https://www.ncsc.gov.uk/collection/guidelines-secure-ai-system-development/

Publication's scorecard
Country: GBR
Scope: Cyber
Typology: Standard
Publication's date: November 27, 2023
Category: Data Protection & AI
Sector: Cross-Sector