The European Commission has announced the launch of a pilot project to test whether its draft ethical rules for developing and applying artificial intelligence technologies can be implemented in practice.
It’s also aiming to garner feedback and encourage international consensus-building around what it dubs “human-centric AI”, targeting, among other talking shops, the forthcoming G7 and G20 meetings to broaden discussion of the topic.

The Commission’s High-Level Expert Group on AI, a body comprised of 52 experts from across industry, academia and civil society announced last summer, published its draft ethics guidelines for trustworthy AI in December. A revised version of the document was submitted to the Commission in March.

The group has boiled the expert consultancy down to a set of seven “key requirements” for trustworthy AI, in addition to machine learning technologies needing to respect existing laws and regulations, namely:

- Human agency and oversight: “AI systems should enable equitable societies by supporting human agency and fundamental rights, and not decrease, limit or misguide human autonomy.”
- Robustness and safety: “Trustworthy AI requires algorithms to be secure, reliable and robust enough to deal with errors or inconsistencies during all life cycle phases of AI systems.”
- Privacy and data governance: “Citizens should have full control over their own data, while data concerning them will not be used to harm or discriminate against them.”
- Transparency: “The traceability of AI systems should be ensured.”
- Diversity, non-discrimination and fairness: “AI systems should consider the whole range of human abilities, skills and requirements, and ensure accessibility.”
- Societal and environmental well-being: “AI systems should be used to enhance positive social change and enhance sustainability and ecological responsibility.”
- Accountability: “Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes.”
This work will have two strands: (i) a piloting phase for the guidelines involving stakeholders who develop or use AI, including public administrations, and (ii) a continued stakeholder consultation and awareness-raising process across Member States and different groups of stakeholders, including industry and service sectors:
- (i) Starting in June 2019, all stakeholders and individuals will be invited to test the assessment list and provide feedback on how to improve it. In addition, the AI high-level expert group will set up an in-depth review with stakeholders from the private and public sectors to gather more detailed feedback on how the guidelines can be implemented across a wide range of application domains. All feedback on the guidelines’ workability and feasibility will be evaluated by the end of 2019.
- (ii) In parallel, the Commission will organise further outreach activities, giving representatives of the AI high-level expert group the opportunity to present the guidelines to relevant stakeholders in the Member States, including industry and service sectors, and providing these stakeholders with an additional opportunity to comment on and contribute to the AI guidelines.