EU Ethics Guidelines For Trustworthy AI: Chapter 1

The European Commission has released a set of ongoing guidelines on how to build AIs that can be trusted by society. We present an annotated analysis.

The EU Commission keeps pace with technological advancements, setting up pilot groups to understand how they can be used for the prosperity of the Union. Examples include the EU Blockchain Observatory, which we looked into in “EU Blockchain Observatory and Forum Blockchain AMA”, and the EU bug bounty initiative, covered in “EU Bug Bounty – Software Security as a Civil Right”.

The Commission’s instrument in this case is the High-Level Expert Group on AI (AI HLEG), an independent expert group set up in June 2018. The aim of the HLEG is to draft two deliverables: AI Ethics Guidelines and Policy and Investment Recommendations. It’s the former that we’ll be focusing on here.

The aim of these guidelines is to promote so-called “Trustworthy AI”, which comprises the following three components:

  1. It should be lawful, complying with all applicable laws and regulations
  2. It should be ethical, ensuring adherence to ethical principles and values
  3. It should be robust, both from a technical and social perspective since, even with good intentions, AI systems can cause unintentional harm

Component 1 is not part of the group’s mandate, but components 2 and 3 are.

Taken individually, these rules look difficult to follow, but nevertheless achievable. What complicates matters considerably is that:

Each of these three components is necessary but not sufficient in itself to achieve Trustworthy AI. Ideally, all three work in harmony and overlap in their operation. In practice, however, there may be tensions between these elements (e.g. at times the scope and content of existing law might be out of step with ethical norms).

Despite that, these rules mostly concern “typical” AIs, the ones in our phones, the diagnostic ones in the doctor’s office, the resume-sorting ones or the ones inside autonomous vehicles; they can also be applied to super-advanced AIs such as those in cybernetics and autonomous robots. In fact, the interaction of the three rules, together with the tensions between them, may be too complex for a machine to handle, instead requiring the human-machine intersection of cybernetics, something already explored by the sci-fi genre.

If that sounds too futuristic, keep in mind that brain implants and cyborgs are already here, though not in the sense of the movies. Their most outspoken representative is UK scientist “Captain Cyborg”, aka Dr Kevin Warwick, a pioneer who has led research projects:

which investigate the use of machine learning and artificial intelligence techniques to suitably stimulate and translate patterns of electrical activity from living cultured neural networks to use the networks for the control of mobile robots. Hence a biological brain actually provided the behavior process for each robot.

He underwent surgery to implant a silicon chip transponder in his forearm with which he could “operate doors, lights, heaters and other computers without lifting a finger”. That was back in 1998 with Project Cyborg 1.0. With Project Cyborg 2.0, in 2002, he looked at how a new implant could send signals back and forth between his nervous system and a computer:

Professor Warwick was able to control an electric wheelchair and an intelligent artificial hand, developed by Dr Peter Kyberd, using the neural interface. In addition to being able to measure the nerve signals transmitted along the nerve fibres in Professor Warwick’s left arm, the implant was also able to create artificial sensation by stimulating via individual electrodes within the array. This bi-directional functionality was demonstrated with the aid of Kevin’s wife Irena and a second, less complex implant connecting to her nervous system.

These human enhancement experiments already raise serious bioethical issues; imagine adding AI to the mix.

In the past we’ve looked at cases which demonstrate the power that AI technology has already achieved. One is “Atlas Robot – The Next Generation”, which showcases the capabilities of the new generation of Atlas robots, and another is “Achieving Autonomous AI Is Closer Than We Think”, where we looked into the USAF project of AI-powered software running on a Raspberry Pi capable of beating an experienced pilot in simulated air combat.

There is another issue that AI ethics has to cope with – Autonomous Robot Weaponry. Before you rush to declare it unethical by default, remember that even in war there are still rules and ethics that should be adhered to, such as the Geneva Convention.

HLEG’s consortium is not the first of its kind. The non-profit Partnership on AI, formed by Amazon, DeepMind/Google, Facebook, IBM, and Microsoft, serves the same cause, beating HLEG to it by almost three years. But while the Partnership is a private sector initiative, HLEG is endorsed by the public sector, which goes to show that, despite the private sector being quicker off the mark, there’s still forward thinking within the bureaucracy. More importantly, HLEG tries to fill the void left exploitable by the struggle of law and governments to keep up with the latest technological advancements.

So what does HLEG try to address? In its own words:

We believe that AI has the potential to significantly transform society. AI is not an end in itself, but rather a promising means to increase human flourishing, thereby enhancing individual and societal well-being and the common good, as well as bringing progress and innovation.

In particular, AI systems can help to facilitate the achievement of the UN’s Sustainable Development Goals, such as promoting gender balance and tackling climate change, rationalizing our use of natural resources, enhancing our health, mobility and production processes, and supporting how we monitor progress against sustainability and social cohesion indicators.

To do this, AI systems need to be human-centric, resting on a commitment to their use in the service of humanity and the common good, with the goal of improving human welfare and freedom.

While offering great opportunities, AI systems also give rise to certain risks that must be handled appropriately and proportionately. We now have an important window of opportunity to shape their development. We want to ensure that we can trust the sociotechnical environments in which they are embedded.

In other words, as with every technology out there, AI can be turned to good or evil, and HLEG is trying to funnel this unstoppable river of evolution in the right, ethical, direction. The notion is that human beings and communities will have confidence in AI only when a clear and comprehensive framework for achieving its trustworthiness is in place.

The discussion centers on the socio-economic issues raised by AI, which these guidelines try to address. For example:

  • Who is responsible when a self-driven car crashes or an intelligent medical device fails?

  • How can AI applications be prevented from promulgating racial discrimination or financial cheating?

  • Who should reap the gains of efficiencies enabled by AI technologies and what protections should be afforded to people whose skills are rendered obsolete?

Because ultimately, as people integrate AI more broadly and deeply into industrial processes and consumer products, best practices need to be spread and regulatory regimes adapted.

From HLEG’s perspective:

“the guidelines aim to provide guidance for stakeholders designing, developing, deploying, implementing, using or being affected by AI who voluntarily opt to use them as a method to operationalise their commitment”.

The key word here is voluntarily; no one can be forced to live by these rules. What the group could very well do in the near future, however, especially given that it acts as an instrument of the EU Commission and hence of the public sector, is recommend that governments, as part of their procurement procedures, only accept contracts from private sector companies that abide by the guidelines, which would then act as a certificate of ethical quality assurance.

The Guidelines themselves are split into three chapters:

Chapter I – Foundations of Trustworthy AI identifies and describes the ethical principles that must be adhered to in order to ensure ethical and robust AI.

Chapter II – Realising Trustworthy AI translates these ethical principles into seven key requirements that AI systems should implement and meet throughout their entire life cycle.

Chapter III – Assessing Trustworthy AI sets out a concrete and non-exhaustive Trustworthy AI assessment list to operationalise the requirements of Chapter II, offering AI practitioners practical guidance. 

We’ll consider each chapter in turn.

Chapter I – Foundations of Trustworthy AI focuses on four ethical principles, rooted in fundamental rights, which must be respected in order to ensure that AI systems are developed, deployed and used in a trustworthy manner. Those principles are:

(i) Respect for human autonomy
AI should help humans and not manipulate them.

(ii) Prevention of harm
AIs should not do harm, be it mental or physical. Also, “they must be technically robust and it should be ensured that they are not open to malicious use”. Let me start by saying that this one is hard to safeguard.

Microsoft’s 2016 AI Twitter chatbot incident serves as such a lesson. The researchers’ intention was that the chatbot, Tay, would be capable of acquiring intelligence through conversations with humans. Instead it was tricked into altering its innocent and admittedly naive personality, resembling a teenage girl, to adopt an anti-feminist and racist character. Later Microsoft admitted to there being a bug in its design. This reminds us that, after all, AI is just software and thus prone to the same issues that any program faces throughout its existence.

By extension, who can tell what will happen if the software agents that power robotic hardware get hacked or infected with a virus? How can we take adequate precautions against such an act?

You could argue that this is human malice and that with appropriate safety nets it can be avoided. Reality is quick to prove this notion false, as bugs in every piece of software ever developed, leading to vulnerabilities or malfunctions, are discovered every day. But for the sake of continuing this argument, let’s pretend that humans develop bug-free software, something that eradicates the possibility of hacking and virus spreading. Then what about the case of machines self-modifying and self-evolving their own code base?

(iii) Fairness
AI should be free from unfair bias, discrimination and stigmatization.

It is a well-known secret that AIs reflect the biases of their makers. Take, for example, the case where resume-sorting algorithms derived the race of candidates from their CVs and used it either against them or in their favour when deciding whether to advance them.
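
As a rough illustration of how such bias can be surfaced, here is a minimal sketch of a demographic-parity check over a model’s shortlisting decisions. The group labels, the numbers and the resume-sorting model itself are hypothetical; real fairness audits rely on richer metrics and proper statistical testing.

```python
from collections import defaultdict

def demographic_parity(decisions):
    """Selection rate per group and the largest gap between any two groups.

    `decisions` is an iterable of (group, selected) pairs, where `selected`
    is True if the candidate was shortlisted. All names here are hypothetical.
    """
    totals = defaultdict(int)
    shortlisted = defaultdict(int)
    for group, selected in decisions:
        totals[group] += 1
        if selected:
            shortlisted[group] += 1
    rates = {g: shortlisted[g] / totals[g] for g in totals}
    gap = max(rates.values()) - min(rates.values())
    return rates, gap

# Toy decisions attributed to an imaginary resume-sorting model.
decisions = ([("group_a", True)] * 60 + [("group_a", False)] * 40 +
             [("group_b", True)] * 30 + [("group_b", False)] * 70)
rates, gap = demographic_parity(decisions)
print(rates)  # {'group_a': 0.6, 'group_b': 0.3}
print(gap)    # ~0.3 -- a large gap between groups is a red flag worth investigating
```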

(iv) Explicability
As AI becomes more and more integrated into all aspects of human activity, there’s a pressing need to find a way to peek into its decision-making process. This is very important in sectors such as healthcare, which are critical to human wellbeing. For an AI to be trustworthy it should be able to explain its actions, not act as a black box.

We explored an example of this in “TCAV Explains How AI Reaches A Decision”, where we looked at SkinVision, a mobile app that, from a picture of a mole, can decide whether it is malignant or not. Were the diagnosis incorrect, misinterpreting a malignant mole as benign could have dire consequences. But the other way around is not without defects either: it would cause uninvited stress to users and turn them into an army of pseudo-patients knocking down their already burned-out practitioner’s door.
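
To make the idea of peeking inside a model concrete, below is a minimal sketch of permutation feature importance, one simple way to estimate which inputs a classifier leans on. It is not the TCAV technique covered in the linked article, and the feature names, the stand-in model and the data are invented for illustration.

```python
import random

def model(features):
    """Stand-in scorer for a 'lesion' based on hypothetical features.
    In reality this would be a trained classifier, not a hand-written formula."""
    asymmetry, border, colour, diameter = features
    return 0.5 * asymmetry + 0.3 * border + 0.15 * colour + 0.05 * diameter

def permutation_importance(predict, dataset):
    """Estimate each feature's influence by shuffling it across the dataset
    and measuring how much individual predictions change on average."""
    baseline = [predict(x) for x in dataset]
    importances = []
    for i in range(len(dataset[0])):
        column = [x[i] for x in dataset]
        random.shuffle(column)
        perturbed = [x[:i] + (v,) + x[i + 1:] for x, v in zip(dataset, column)]
        changes = [abs(predict(p) - b) for p, b in zip(perturbed, baseline)]
        importances.append(sum(changes) / len(changes))
    return importances

# Toy dataset of hypothetical (asymmetry, border, colour, diameter) measurements.
data = [tuple(random.random() for _ in range(4)) for _ in range(200)]
print(permutation_importance(model, data))  # larger value = more influential feature
```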

For such an AI algorithm to be successful, it is of foremost importance that it can replicate the doctor’s actions. In other words, it has to be able to act as a doctor, leveraging the doctor’s knowledge. But why is it so necessary for the algorithm to be blindly trusted, for the diagnosis to be autonomous?

Because: 

Across the globe, health systems are facing the problem of growing populations, increasing occurrence of skin cancer and a squeeze on resources. We see technology such as our own as becoming ever more integrated within the health system, to both ensure that those who need treatment are made aware of it and that those who have an unfounded concern do not take up valuable time and resources. This integration will not only save money but will be vital in bringing down the mortality rate due to earlier diagnosis and will help with the further expansion of the specialism.

Then there’s the possibility of tensions arising between these principles, as in situations where “the principle of prevention of harm and the principle of human autonomy may be in conflict”. An example is the use of surveillance to prevent harm, which conflicts with people’s right to privacy. In “OpenFace – Face Recognition For All” we saw that tension applied to face recognition technology.

There are many applications besides surveillance, such as identity verification in order to eliminate impersonation, VR and gaming, or even making businesses more customer-centric by helping them identify returning customers. On the other hand, the use of such technology raises many privacy and civil liberty concerns, as in the hands of an authoritarian government it could become a tool for controlling the masses.

It also compromises privacy by tracking public activity, introducing the ability to link a person’s physical presence to the places they have been, something that until now was only feasible through credit card transaction monitoring or capturing the MAC address of their mobile device. Imagine the ethical issues arising from personalized advertising alone.

Potentially it contributes to an already troublesome scenario where privacy and its protective measures, like cryptography, are heavily attacked, blurring the line between invading privacy and using surveillance as a countermeasure to crime and terror.

As expected, there are no fixed recommendations in cases like this, since they are deemed too fluid to reach a solid conclusion, a situation worsened by the incapability of law and ethics to keep up with the challenges such a technology heralds. As such, law and ethics have no answer to any of the aforementioned dilemmas. One thing is certain, however – this technology grants great power, and with great power comes great responsibility.

Chapter II: Realizing Trustworthy AI

This chapter, in essence, reiterates the concepts met in the previous one, but in more concrete terms, via a list of seven requirements:

  1. Human agency and oversight
    Including fundamental rights, human agency and human oversight

  2. Technical robustness and safety
    Including resilience to attack and security, fallback plan and general safety, accuracy, reliability and reproducibility

  3. Privacy and data governance
    Including respect for privacy, quality and integrity of data, and access to data

  4. Transparency
    Including traceability, explainability and communication

  5. Diversity, non-discrimination and fairness
    Including the avoidance of unfair bias, accessibility and universal design, and stakeholder participation

  6. Societal and environmental wellbeing
    Including sustainability and environmental friendliness, social impact, society and democracy

  7. Accountability
    Including auditability, minimisation and reporting of negative impact, trade-offs and redress.  

The chapter concludes with technical and non-technical methods to realize Trustworthy AI. “Technical” here doesn’t mean examples of code and algorithms, but once again suggestions, with the added difference that they look into the methodologies that should be employed for building such trust.

As such, the lifecycle of building trustworthy AI should involve: 

“white list” rules (behaviors or states) that the system should always follow, and “black list” restrictions on behaviors or states that the system should never transgress.
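
As a toy illustration of the “white list”/“black list” idea, the sketch below guards an agent’s proposed actions against allowed and forbidden behaviors. The rule names and the actions are invented for illustration; a real system would enforce such constraints at the architecture level, not with a simple lookup.

```python
# Hypothetical guard implementing "white list" / "black list" behavior rules.
ALWAYS_ALLOWED = {"report_status", "request_human_review"}   # white list
NEVER_ALLOWED = {"delete_audit_log", "disable_safety_stop"}  # black list

def vet_action(action: str) -> bool:
    """Return True only if the proposed action may be executed."""
    if action in NEVER_ALLOWED:
        # Black-listed behaviors must never be carried out.
        print(f"blocked: {action}")
        return False
    if action in ALWAYS_ALLOWED:
        return True
    # Anything outside both lists is refused and escalated for human oversight.
    print(f"escalating for review: {action}")
    return False

for proposed in ["report_status", "disable_safety_stop", "recalibrate_sensor"]:
    vet_action(proposed)
```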

There are also methods to ensure value-by-design, methods that should allow the AI to explain itself, methods for testing and validating, and methods for quality assessment.

The “non-technical” methods include Regulation; Codes of conduct; Standardization; Certification; Accountability via governance frameworks; Education and awareness to foster an ethical mindset; Stakeholder participation and social dialogue; Diversity; and Inclusive design teams.

Chapter III: Assessing Trustworthy AI

This chapter revolves around a checklist prepared for stakeholders who’d like to implement Trustworthy AI in their organizations or products. In every modern company this list will have to be used in relation to the role of its departments and employees.

As such, the Management/Board:

would discuss and evaluate the AI system’s development, deployment or procurement, serving as an escalation board for evaluating all AI innovations and uses, when critical concerns are detected.

whereas the Compliance/Legal/Corporate department:

“would use [the list] to meet the technological or regulatory changes”.

Quality Assurance would:

“ensure and check the results of the assessment list and take action to escalate issues arising”

while Developers and project managers would:

“include the assessment list in their daily work and document the results and outcomes of the assessment”.

This is the kind of list that could be used as the entry barrier for the private sector to be able to seal contracts with the public sector: “Have you checked everything on the list? If yes, there’s your contract”.

The guidelines conclude with Examples of Opportunities where AI can be put to innovative use, as in Climate action and sustainable infrastructure, Health and well-being, and Quality education and digital transformation.

I would also add the following to this list, extracted from the “How Will AI Transform Life By 2030? Initial Report”:

Transportation
It’s a sector that will be heavily affected by automation through self-driving vehicles. As autonomous vehicles become better drivers than people, city-dwellers will own fewer cars, live further from work, and spend time differently, leading to an entirely new urban organization.

Home/Service Robots
Over the next fifteen years, coincident advances in mechanical and AI technologies promise to increase the safe and reliable use and utility of home robots in a typical North American city. Special purpose robots will deliver packages, clean offices, and enhance security.

Low resource communities
Poor communities, often overlooked and left to their own devices without the necessary attention, are expected to find hope in the presence of AI: under the banner of data science for social good, AI has been used to create predictive models to help government agencies more effectively use their limited budgets to address problems such as lead poisoning. Similarly, the Illinois Department of Human Services (IDHS) uses predictive models to identify pregnant women at risk for adverse birth outcomes in order to maximize the impact of prenatal care.

Various others would include mobile devices that shut off all communication when they sense that their owner needs some rest, intelligent agents that start a conversation with you when they sense loneliness in the sound of your voice or by reading your facial expressions, self-driving cars that mobilize disabled people or make the roads safe again, and more.

The document also includes the flip side of the coin, with examples of Critical Concerns arising from the use of AI, such as Identifying and tracking individuals, Covert AI systems (impersonating humans), AI-enabled citizen scoring in violation of fundamental rights and, of course, Lethal autonomous weapon systems (LAWS), which we’ve already explored in “Autonomous Robot Weaponry – The Debate”.

To sum up the guidelines, Chapter I was about the ethical principles and rights that should be built into AI, Chapter II laid out the seven key requirements needed to realize an AI that is Trustworthy, while Chapter III went through the non-exhaustive assessment list necessary for organizations to implement Trustworthy AI and included a few examples of beneficial opportunities as well as critical concerns.

Wrapping up, the guidelines can be considered a good attempt by the EU to catch up with the coming revolution. As with every technology, there’s bad use and good use, and the guidelines try to foster the correct use in every stakeholder.

Scientists and policy makers can give answers to some of the questions raised by the report, but not to others, hence it increasingly seems that decisions will be made on a case-by-case basis of trial and error.

Ethics aside, there’s still the question of how the future workplace is going to be shaped by the use of AI; see “Do AI, Automation and the No-Code Movement Threaten Our Jobs?”

The question that should be addressed as soon as possible has to be whether everyone will be positively and equally affected by the coming revolution. Answer that and the task is almost done.

Credit: Nikos Vaggalis (i-programmer.info)  