As artificial intelligence and machine learning begin to permeate every aspect of technology, growing concerns around the ethical use of AI are coming to the fore. As such, we’ve seen the likes of Facebook, Google and other big tech entities create panels and boards to look into the ethical use of AI.

Joining those firms is the European Union (EU), which earlier this week outlined seven requirements for ethical AI development.

As Engadget points out, the seven requirements pay particular attention to groups that would be vulnerable should AI not be developed ethically.

“Trustworthy AI should respect all applicable laws and regulations, as well as a series of requirements; specific assessment lists aim to help verify the application of each of the key requirements,” the EU explained.

The Union also added that the assessment list, featured below, should be used as a guideline when developing or deploying AI, but is not designed to interfere with policy or regulation.

The seven requirements are as follows:

  • Human agency and oversight: AI systems should enable equitable societies by supporting human agency and fundamental rights, and not decrease, limit or misguide human autonomy.
  • Robustness and safety: Trustworthy AI requires algorithms to be secure, reliable and robust enough to deal with errors or inconsistencies during all life cycle phases of AI systems.
  • Privacy and data governance: Citizens should have full control over their own data, while data concerning them will not be used to harm or discriminate against them.
  • Transparency: The traceability of AI systems should be ensured.
  • Diversity, non-discrimination and fairness: AI systems should consider the whole range of human abilities, skills and requirements, and ensure accessibility.
  • Societal and environmental well-being: AI systems should be used to enhance positive social change, sustainability and ecological responsibility.
  • Accountability: Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes.

Around the middle of this year, the commission that put together this list will launch a pilot phase in which several stakeholders will be able to provide their input as part of the European AI Alliance.

In early 2020, the EU says, an AI expert group will review the findings made during the pilot phase and make any changes to the assessment list as needed.

With AI having far-reaching implications, we should see more initiatives of this kind launched in the coming months and years.

Whether they can indeed ensure the development of ethical and unbiased AI remains to be seen.