Continual post-deployment observation of artificial intelligence systems helps ensure their secure operation
Post-Deployment Monitoring of AI Systems: A Growing Imperative
In the rapidly evolving world of artificial intelligence (AI), the importance of post-deployment monitoring and reporting is becoming increasingly apparent. This article explores the current state of regulations and best practices in this area, focusing on the EU AI Act Code of Practice.
Regulatory Frameworks and AI Post-Deployment Monitoring
The EU's Digital Services Act and Digital Markets Act have introduced partial regulatory monitoring, setting a precedent for more comprehensive oversight. Key regulatory frameworks now require providers to establish post-market monitoring systems, enable monitoring protocols for AI systems that incorporate general-purpose AI models, and maintain transparent, auditable documentation across the AI system lifecycle.
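As an illustration of what auditable lifecycle documentation could look like in practice, a deployment log might be kept as structured, serialisable records. This is a minimal sketch with hypothetical names (`DeploymentRecord`, `append_record`), not a format prescribed by the EU AI Act or any other regulation:

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class DeploymentRecord:
    """One auditable entry in an AI system's lifecycle log (illustrative)."""
    model_name: str
    model_version: str
    event: str       # e.g. "deployed", "retrained", "rolled back"
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )
    notes: str = ""

def append_record(log: list, record: DeploymentRecord) -> None:
    """Append the record as a plain dict so the log is easy to serialise."""
    log.append(asdict(record))

audit_log: list = []
append_record(
    audit_log,
    DeploymentRecord("credit-scorer", "2.1.0", "deployed", notes="initial rollout"),
)
print(json.dumps(audit_log, indent=2))
```

Keeping each lifecycle event as an append-only, timestamped entry is one simple way to support the traceability and auditability that such frameworks ask for.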
Best Practices for Post-Deployment Monitoring
Industry best practices emphasize continuous technical and ethical evaluation, prompt action upon monitoring insights, live model monitoring platforms, detailed model and data documentation, clear protocols for feedback integration, and a structured implementation and monitoring approach.
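To make "live model monitoring" concrete, the sketch below shows one deliberately simple heuristic: flag drift when the mean of live prediction scores moves beyond a threshold from the baseline recorded at deployment. The function name, threshold, and data are illustrative assumptions; production platforms use richer statistics (e.g. distribution-level tests), but the control loop is the same.

```python
import statistics

def drift_alert(baseline: list, live: list, threshold: float = 0.1) -> bool:
    """Return True when the live mean prediction score shifts more than
    `threshold` away from the baseline mean (a simple drift heuristic)."""
    return abs(statistics.fmean(live) - statistics.fmean(baseline)) > threshold

# Scores captured at deployment vs. scores observed this week (illustrative).
baseline_scores = [0.62, 0.58, 0.61, 0.60, 0.59]
weekly_scores = [0.75, 0.78, 0.72, 0.76, 0.74]

if drift_alert(baseline_scores, weekly_scores):
    # "Prompt action upon monitoring insights": route to human review.
    print("ALERT: prediction distribution has drifted; trigger review")
```

The point of the example is the feedback loop, not the statistic: monitoring output feeds a clear protocol (here, a review trigger) rather than sitting in a dashboard.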
The Gap in Post-Deployment Information
Despite this progress, a significant gap remains between the current state and the ideal of post-deployment monitoring. Many companies publish disclosure-focused information (what they choose to share) but lack assessment-focused information (independent evaluation of real-world performance and harms). The overall scarcity of post-deployment information, and of standardised processes to collect it, is a key factor behind this gap.
The Need for Transparency and Accountability
AI is being rapidly deployed across industries, from mundane tasks to complex applications. The information asymmetry between AI companies, public bodies, and civil society compounds the gap: governments often do not know whether entities such as courts or utility companies are using AI, and AI companies have declined commitments to share information voluntarily with public bodies.
The Future of Post-Deployment Monitoring
Post-deployment monitoring for AI is still a nascent field. The effectiveness of post-deployment monitoring varies across sectors, but it is necessary to establish what works best for AI. Trusted flagger mechanisms, industry-focused and consumer-focused AI monitoring, and ongoing testing and monitoring are all crucial elements in bridging the gap between the current state and the ideal scenario.
In conclusion, establishing robust, transparent, and ethically informed post-deployment systems is essential for understanding AI's impacts and potential for society-level harms. This includes monitoring AI model integration and application usage, gathering impact and incident feedback, documenting all relevant data and model changes, and using that intelligence to manage risks responsibly and iteratively improve AI system performance and societal outcomes.
As AI systems continue to develop, integrating cybersecurity measures and sound data and cloud computing practices becomes a crucial element of post-deployment monitoring: robust systems must keep AI data held in the cloud transparent, secure, and accountable.
Technology itself plays a vital role in making this monitoring effective. Advanced tooling for real-time monitoring, ethical evaluation, and data documentation helps close the distance between today's practice and the ideal of transparent, accountable, and continually improving AI systems.