Research project

AISEC: AI Secure and Explainable by Construction

Project overview

AI applications have become pervasive: from mobile phones and home appliances to stock markets, autonomous cars, robots, and drones. Each application domain comes with a rich set of requirements such as legal policies, safety and security standards, company values, or simply public perception.

As AI takes over a wider range of tasks, we gradually approach the time when security laws, or policies, ultimately akin to Isaac Asimov's "three laws of robotics", will need to be established for all working AI systems. Its name a near-homophone of Asimov's first name, the AISEC project aims to build a sustainable, general-purpose, and multi-domain methodology and development environment for policy-to-property secure and explainable by construction development of complex AI systems.

This project employs types, supported by lightweight verification methods (such as SMT solvers), to create and deploy a novel framework for documenting, implementing, and developing policies for complex deep learning systems. Types serve as a unifying mechanism to embed security and safety contracts directly into the programs that implement AI. The project develops Vehicle -- an integrated development environment with infrastructure catering for different domain experts: from lawyers and security experts to verification experts and system engineers designing complex AI systems. It is built, tested, and used in collaboration with industrial partners in two key AI application areas: autonomous vehicles and natural language interfaces (i.e. chatbots).
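To make the idea of a machine-checkable safety contract concrete, here is a minimal, hedged sketch (not Vehicle's actual syntax, and a deliberately tiny stand-in for a real network): a "contract" over a one-layer linear model, checked soundly for *all* inputs in a box via interval arithmetic rather than by testing individual points. All names and the example property are illustrative assumptions, not taken from the project.

```python
# Illustrative sketch only: this is NOT Vehicle syntax or the AISEC framework.
# It shows the general idea of verifying a property over a whole input region,
# here with interval arithmetic instead of an SMT solver.

from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float

    def __add__(self, other: "Interval") -> "Interval":
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def scale(self, c: float) -> "Interval":
        a, b = c * self.lo, c * self.hi
        return Interval(min(a, b), max(a, b))

def linear_output_bounds(weights, bias, input_boxes):
    """Soundly bound y = w . x + b when each x_i ranges over input_boxes[i]."""
    acc = Interval(bias, bias)
    for w, box in zip(weights, input_boxes):
        acc = acc + box.scale(w)
    return acc

# Hypothetical contract ("property"): for all x in [0,1]^2,
# the output must stay within [-1, 3].
weights, bias = [1.5, -0.5], 0.25
out = linear_output_bounds(weights, bias, [Interval(0, 1), Interval(0, 1)])
contract_holds = (-1.0 <= out.lo and out.hi <= 3.0)
print(out, contract_holds)  # the computed bounds and whether the contract holds
```

Because interval propagation over-approximates the reachable outputs, a passing check here is a proof of the property for the whole input box; real tools in this space discharge such obligations with SMT solvers or dedicated neural-network verifiers rather than by hand.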

Staff

Lead researcher

Professor Ekaterina Komendantskaya

Research interests

  • Logic in Computer Science
  • Theorem Proving
  • Machine Learning

Collaborating research institutes, centres and groups