{"id":5225,"date":"2019-04-29T09:24:55","date_gmt":"2019-04-29T09:24:55","guid":{"rendered":"https:\/\/gtechbooster.com\/?p=5225"},"modified":"2023-04-01T01:36:52","modified_gmt":"2023-04-01T01:36:52","slug":"eu-ethics-guidelines-for-trustworthy-ai-chapter-1","status":"publish","type":"post","link":"https:\/\/gtechbooster.com\/eu-ethics-guidelines-for-trustworthy-ai-chapter-1\/","title":{"rendered":"EU ethics Guidelines For Trustworthy AI: Chapter 1"},"content":{"rendered":"\n<p>European Commission has released a set of ongoing guidelines on how  to build AIs that can be trusted by society. We present an annotated  analysis. <\/p>\n\n\n\n<div class=\"gtech-migrated-from-ad-inserter-placement-2\" style=\"text-align: center;\" id=\"gtech-661150838\"><div style=\"margin-right: auto;margin-left: auto;text-align: center;\" id=\"gtech-4175899055\"><a data-bid=\"1\" data-no-instant=\"1\" href=\"https:\/\/gtechbooster.com\/linkout\/17207\" rel=\"noopener\" class=\"notrack\" aria-label=\"26001\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/gtechbooster.com\/media\/2023\/01\/26001.jpeg\" alt=\"\"  srcset=\"https:\/\/gtechbooster.com\/media\/2023\/01\/26001.jpeg 1024w, https:\/\/gtechbooster.com\/media\/2023\/01\/26001-768x960.jpeg 768w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" width=\"500\" height=\"625\"  style=\"display: inline-block;\" \/><\/a><\/div><\/div><p>The EU Commission follows the trend in the technological advancements, setting up pilot groups to understand  how these advancements can be used for its own prosperity. Examples of  that are the EU Blockchain Observatory which we&#8217;ve looked into in this  article<a href=\"https:\/\/www.i-programmer.info\/news\/84-database\/11926-eu-blockchain-observatory-and-forum-blockchain-ama.html\">&nbsp;&#8220;EU Blockchain Observatory and Forum Blockchain AMA&#8221;<\/a> or the EU bug bounty initiative which is covered in <a href=\"https:\/\/www.i-programmer.info\/news\/83-mobliephone\/12440-eu-bug-bounty-software-security-as-a-civil-right.html\">&#8220;EU Bug Bounty &#8211; Software Security as a Civil Right&#8221;<\/a>.<\/p>\n\n\n\n<p>The Commission&#8217;s instrument in this case is the High-Level Expert\nGroup on AI (AI HLEG), an independent expert group set up in June 2018.\nThe aim of the HLEG is to draft two deliverables: <strong>AI Ethics Guidelines<\/strong> and <strong>Policy and Investment Recommendations<\/strong>. It&#8217;s the former that we&#8217;ll be focusing on here.<\/p>\n\n\n\n<p>The aim of these guidelines is to promote so-called &#8220;Trustworthy AI&#8221;, comprising of the following three components:<\/p>\n\n\n\n<ol class=\"wp-block-list\"><li>It should be lawful, complying with all applicable laws and regulations<\/li><li>It should be ethical, ensuring adherence to ethical principles and values<\/li><li>It should be robust, both from a technical and social perspective  since, even with good intentions, AI systems can cause unintentional  harm<\/li><\/ol>\n\n\n\n<p>No 1 is not part of the group&#8217;s mandate but 2 and 3 are.<\/p>\n\n\n\n<p>Taken individually, these rules look difficult to follow, but nevertheless achievable. What complicates matters, a lot, is:<\/p>\n\n\n\n<p><em>Each of these three components is\nnecessary but not sufficient in itself to achieve Trustworthy AI.\nIdeally, all three work in harmony and overlap in their operation. In\npractice, however, there may be tensions between these elements (e.g. 
<p>Despite that, these rules mostly concern "typical" AIs, the ones in our phones, the diagnostic ones in the doctor's office, the resume-sorting ones or the ones inside autonomous vehicles; they can also be applied to super-advanced AIs such as those in cybernetics and autonomous robots. In fact, the interaction of the three rules, together with the tensions between them, may be too complex for a machine to handle, instead requiring the human-machine interface of cybernetics, something already explored by the sci-fi genre.</p>

<p>If that sounds too futuristic, keep in mind that brain implants and cyborgs are already here, though not in the sense of the movies. Their most outspoken representative is UK scientist "Captain Cyborg", aka Dr Kevin Warwick, a pioneer of leading research projects:</p>

<p><em>which investigate the use of machine learning and artificial intelligence techniques to suitably stimulate and translate patterns of electrical activity from living cultured neural networks to use the networks for the control of mobile robots. Hence a biological brain actually provided the behavior process for each robot.</em></p>

<p>He underwent surgery to implant a silicon chip transponder in his forearm with which he could <em>"operate doors, lights, heaters and other computers without lifting a finger"</em>. That was in 1998 with <a href="http://www.kevinwarwick.com/project-cyborg-1-0/">Project-Cyborg 1.0</a>. With <a href="http://www.kevinwarwick.com/project-cyborg-2-0/">Project-Cyborg 2.0</a>, in 2002, he looked at how a new implant could send signals back and forth between Warwick's nervous system and a computer:</p>

<p><em>Professor Warwick was able to control an electric wheelchair and an intelligent artificial hand, developed by Dr Peter Kyberd, using the neural interface. In addition to being able to measure the nerve signals transmitted along the nerve fibres in Professor Warwick's left arm, the implant was also able to create artificial sensation by stimulating via individual electrodes within the array. This bi-directional functionality was demonstrated with the aid of Kevin's wife Irena and a second, less complex implant connecting to her nervous system.</em></p>

<p>These human enhancement experiments already raise serious bioethical issues; imagine also adding AI to the mix.</p>

<p>In the past we've looked at cases which demonstrate the power that AI technology has already achieved. 
One is <a href="https://www.i-programmer.info/news/169-robotics/9472-atlas-robot-the-next-generation.html">"Atlas Robot – The Next Generation"</a>, which showcases the capabilities of the new generation of Atlas robots, and another is <a href="https://www.i-programmer.info/programming/artificial-intelligence/9885-achieving-autonomous-ai-is-closer-than-we-think.html">"Achieving Autonomous AI Is Closer Than We Think"</a>, where we looked into the USAF project of AI-powered software running on a Raspberry Pi capable of beating an experienced pilot in simulated air combat.</p>

<p>There is another issue that AI ethics has to cope with – <a href="https://www.i-programmer.info/programming/artificial-intelligence/9382-autonomous-robot-weaponry-the-debate.html">Autonomous Robot Weaponry</a>. Before you rush to declare it unethical by default, remember that even in war there are still rules and ethics that should be adhered to, such as the Geneva Convention.</p>

<p>HLEG's consortium is not the first of its kind. The non-profit Partnership on AI, formed by Amazon, DeepMind/Google, Facebook, IBM, and Microsoft (see <a href="https://www.i-programmer.info/news/105-artificial-intelligence/10137-the-partnership-on-ai-building-on-ai100-reports-finding.html">"Formation of Partnership On AI"</a>), serves the same cause, beating HLEG to it by almost three years. But while the Partnership is a private-sector initiative, HLEG is endorsed by the public sector, which goes to show that despite the private sector being quicker to act, there's still forward thinking within the bureaucracy. More importantly, HLEG tries to fill the void left by the struggle of law and government to keep up with the latest technological advancements.</p>

<p>So what does HLEG try to address? In its own words:</p>

<p><em>We believe that AI has the potential to significantly transform society. AI is not an end in itself, but rather a promising means to increase human flourishing, thereby enhancing individual and societal well-being and the common good, as well as bringing progress and innovation.</em></p>

<p><em>In particular, AI systems can help to facilitate the achievement of the UN's Sustainable Development Goals, such as promoting gender balance and tackling climate change, rationalizing our use of natural resources, enhancing our health, mobility and production processes, and supporting how we monitor progress against sustainability and social cohesion indicators.</em></p>

<p><em>To do this, AI systems need to be human-centric, resting on a commitment to their use in the service of humanity and the common good, with the goal of improving human welfare and freedom.</em></p>

<p><em>While offering great opportunities, AI systems also give rise to certain risks that must be handled appropriately and proportionately. We now have an important window of opportunity to shape their development. We want to ensure that we can trust the sociotechnical environments in which they are embedded.</em></p>

<p>In other words, as with every technology out there, AI can be turned to good or evil, and HLEG is trying to funnel this unstoppable river of evolution in the right, ethical, direction. 
The notion is that human beings and communities will have confidence in AI only when a clear and comprehensive framework for achieving its trustworthiness is in place.</p>

<p>The discussion centers on the socio-economic issues raised, which these guidelines try to address. For example:</p>

<ul class="wp-block-list"><li>Who is responsible when a self-driving car crashes or an intelligent medical device fails?</li><li>How can AI applications be prevented from promulgating racial discrimination or financial cheating?</li><li>Who should reap the gains of efficiencies enabled by AI technologies, and what protections should be afforded to people whose skills are rendered obsolete?</li></ul>

<p>Because ultimately, as people integrate AI more broadly and deeply into industrial processes and consumer products, best practices need to be spread and regulatory regimes adapted.</p>

<p>From HLEG's perspective:</p>

<p><em>"the guidelines aim to provide guidance for stakeholders designing, developing, deploying, implementing, using or being affected by AI who voluntarily opt to use them as a method to operationalise their commitment"</em>.</p>

<p>The key word here is voluntarily; they can't force anyone to live by those rules. But what they could very well do in the near future, especially given that they act as an instrument of the EU Commission and thus of the public sector, is to recommend that governments, as part of their procurement procedures, only accept private-sector contracts that abide by those guidelines, which would then act as a certificate of ethical quality assurance.</p>

<p>The Guidelines themselves are split into three chapters:</p>

<p><em><strong>Chapter I – Foundations of Trustworthy AI</strong></em> identifies and describes the ethical principles that must be adhered to in order to ensure ethical and robust AI.</p>

<p><em><strong>Chapter II – Realising Trustworthy AI</strong></em> translates these ethical principles into seven key requirements that AI systems should implement and meet throughout their entire life cycle.</p>

<p><em><strong>Chapter III – Assessing Trustworthy AI</strong></em> sets out a concrete and non-exhaustive Trustworthy AI assessment list to operationalise the requirements of Chapter II, offering AI practitioners practical guidance.</p>

<p>We'll consider each chapter in turn.</p>

<p><strong>Chapter I – Foundations of Trustworthy AI</strong> focuses on four ethical principles, rooted in fundamental rights, which must be respected in order to ensure that AI systems are developed, deployed and used in a trustworthy manner. Those principles are:</p>

<p>(i) Respect for human autonomy<br>AI should help humans, not manipulate them.</p>

<p>(ii) Prevention of harm<br>AIs should not do harm, be it mental or physical. Also, <em>"they must be technically robust and it should be ensured that they are not open to malicious use"</em>. Let me start by saying that this one is hard to safeguard.</p>

<p>Microsoft's 2016 AI Twitter chatbot incident serves as such a lesson. The researchers' intention was that the chatbot, Tay, would be capable of acquiring intelligence through conversations with humans. Instead it was tricked into abandoning its innocent, admittedly naive, teenage-girl persona and adopting an anti-feminist and racist character. Microsoft later admitted there was a bug in its design. This reminds us that, after all, AI is just software and thus prone to the same issues that any program faces throughout its existence.</p>

<p>By extension, who can tell what will happen if the software agents that power robotic hardware get hacked or infected with a virus? How can we take adequate precautions against such an act?</p>

<p>You could argue that this is human malice and that with appropriate safety nets it can be avoided. Reality is quick to prove this notion false, as bugs leading to vulnerabilities or malfunctions are discovered every day in every piece of software ever developed. But for the sake of continuing this argument, let's pretend that humans develop bug-free software, eradicating the possibility of hacking and virus spreading. Then what about the case of machines self-modifying and self-evolving their own code base?</p>

<p>(iii) Fairness<br>Free from unfair bias, discrimination and stigmatization.</p>

<p>It is a well-known secret that AIs reflect the biases of their makers. Consider, for example, resume-sorting algorithms that derive the race of candidates from their CVs and use it either against them or for them when deciding whether to advance them.</p>
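<p>To make the fairness concern concrete, here is a minimal sketch of one common audit, the "80% rule" disparate-impact check, which compares selection rates across groups. It is written in Python; the data and the group labels are hypothetical, and the guidelines themselves prescribe no particular metric or code:</p>

<pre class="wp-block-code"><code>from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, selected) pairs, selected a bool."""
    totals, hits = defaultdict(int), defaultdict(int)
    for group, selected in decisions:
        totals[group] += 1
        if selected:
            hits[group] += 1
    return {g: hits[g] / totals[g] for g in totals}

def disparate_impact(decisions, reference_group):
    """Ratio of each group's selection rate to the reference group's.
    Under the common "80% rule" screen, ratios below 0.8 flag possible bias."""
    rates = selection_rates(decisions)
    ref = rates[reference_group]
    return {g: rate / ref for g, rate in rates.items()}

# Hypothetical resume-screening outcomes: (group, was the CV advanced?)
outcomes = ([("A", True)] * 60 + [("A", False)] * 40 +
            [("B", True)] * 30 + [("B", False)] * 70)
print(disparate_impact(outcomes, reference_group="A"))
# {'A': 1.0, 'B': 0.5} -- group B is advanced at half the rate of group A</code></pre>

<p>A check like this only detects a symptom, of course; it says nothing about why the rates differ, which is where the next principle comes in.</p>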
<p>(iv) Explicability<br>As AI becomes more and more integrated into all aspects of human activity, there's a pressing need to find a way to peek into its decision-making process. This is very important in sectors such as healthcare, which are critical to human wellbeing. For an AI to be trustworthy, it should be able to explain its actions, not act as a black box.</p>

<p>We explored an example of this in <a href="https://www.i-programmer.info/news/105-artificial-intelligence/12531-tcav-explains-how-ai-reaches-a-decision.html">"TCAV Explains How AI Reaches A Decision"</a>, where we looked at SkinVision, a mobile app that, from a picture of a mole, can decide whether it is malignant or not. An incorrect diagnosis misinterpreting a malignant mole as benign could have dire consequences. But the other way around is not without defects either: it would cause unwarranted stress to users and turn them into an army of pseudo-patients knocking down their already burned-out practitioner's door.</p>
<p>For such an AI algorithm to be successful, it is of foremost importance that it can replicate the doctor's actions. In other words, it has to be able to act as a doctor, leveraging the doctor's knowledge. But why is it so necessary for the algorithm to be blindly trusted, for the diagnosis to be autonomous?</p>

<p>Because:</p>

<p><em>Across the globe, health systems are facing the problem of growing populations, increasing occurrence of skin cancer and a squeeze on resources. We see technology such as our own as becoming ever more integrated within the health system, to both ensure that those who need treatment are made aware of it and that those who have an unfounded concern do not take up valuable time and resources. This integration will not only save money but will be vital in bringing down the mortality rate due to earlier diagnosis and will help with the further expansion of the specialism.</em></p>

<p>Then there's the possibility of tensions arising between those principles, as in situations where <em>"the principle of prevention of harm and the principle of human autonomy may be in conflict"</em>.<br>An example: using surveillance to prevent harm conflicts with people's right to privacy. In <a href="https://www.i-programmer.info/news/105-artificial-intelligence/9375-openface-face-recognition.html">"OpenFace – Face Recognition For All"</a> we saw an example of that applied to face recognition technologies.</p>

<p>There are many applications besides surveillance, such as identity verification to eliminate impersonation, VR and gaming, or even making business more customer-centric by helping companies identify returning customers. On the other hand, the use of such a technology raises many privacy and civil liberty concerns, as in the hands of an authoritarian government it could become a tool for controlling the masses.</p>

<p>It also compromises privacy by tracking public activity, introducing the ability to link a person's physical presence to the places they have been, something that until now was only feasible through credit-card transaction monitoring or by capturing the MAC address of their mobile device. 
Imagine the ethical questions that arise once this is harnessed for personalized advertising.</p>

<p>Potentially it contributes to an already troublesome scenario where privacy and its protective measures, like cryptography, are heavily attacked, blurring the line between invading privacy and using surveillance as a countermeasure to crime and terror.</p>

<p>As expected, there are no fixed recommendations in cases like this, since they are deemed too fluid to reach a solid conclusion, a situation worsened by the inability of law and ethics to keep up with the challenges such a technology heralds. As such, law and ethics have no answer to any of the aforementioned dilemmas. One thing is certain, however – this technology grants great power, and with great power comes great responsibility.</p>

<p><strong>Chapter II: Realizing Trustworthy AI</strong></p>

<p>This chapter, in essence, reiterates the concepts met in the previous one, but in more concrete terms, via a list of seven requirements:</p>

<ol class="wp-block-list"><li>Human agency and oversight<br>Including fundamental rights, human agency and human oversight</li><li>Technical robustness and safety<br>Including resilience to attack and security, fallback plan and general safety, accuracy, reliability and reproducibility</li><li>Privacy and data governance<br>Including respect for privacy, quality and integrity of data, and access to data</li><li>Transparency<br>Including traceability, explainability and communication</li><li>Diversity, non-discrimination and fairness<br>Including the avoidance of unfair bias, accessibility and universal design, and stakeholder participation</li><li>Societal and environmental wellbeing<br>Including sustainability and environmental friendliness, social impact, society and democracy</li><li>Accountability<br>Including auditability, minimisation and reporting of negative impact, trade-offs and redress</li></ol>

<p>The chapter concludes with technical and non-technical methods to realize Trustworthy AI. "Technical" here doesn't mean examples of code and algorithms, but once again suggestions, with the added difference that they look into the methodologies that should be employed for building such trust.</p>

<p>As such, the lifecycle of building trustworthy AI should involve:</p>

<p><em>"white list" rules (behaviors or states) that the system should always follow, and "black list" restrictions on behaviors or states that the system should never transgress</em>.</p>
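<p>The guidelines stop at the principle, so the following is only a minimal sketch of how such rules might look in practice: a Python guard sitting between an AI planner and its actuators, where the rule sets, the action names and the review callback are all hypothetical:</p>

<pre class="wp-block-code"><code># A hypothetical guard between an AI planner and its actuators.
ALWAYS_ALLOWED = {"log_decision", "ask_human_for_confirmation"}    # "white list"
NEVER_ALLOWED = {"disable_safety_interlock", "hide_audit_trail"}   # "black list"

def guard(action, require_review):
    """Return True if the action may be executed.

    Black-list rules are absolute and can never be transgressed;
    white-listed actions always pass; everything else goes to review.
    """
    if action in NEVER_ALLOWED:
        raise PermissionError("blocked by black-list rule: " + action)
    if action in ALWAYS_ALLOWED:
        return True
    return require_review(action)  # e.g. human oversight for the grey zone

# Hypothetical usage, with a reviewer callback standing in for a human.
print(guard("log_decision", require_review=lambda a: False))  # True</code></pre>

<p>Note that the blocking happens outside the learned model; that design choice is what lets the restriction hold even when the model itself misbehaves.</p>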
<p>Also there are methods to ensure value-by-design, methods that should allow the AI to explain itself, methods for testing and validating, and methods for quality assessment.</p>

<p>The "non-technical" methods include regulation; codes of conduct; standardization; certification; accountability via governance frameworks; education and awareness to foster an ethical mindset; stakeholder participation and social dialogue; diversity; and inclusive design teams.</p>

<p><strong>Chapter III: Assessing Trustworthy AI</strong></p>

<p>This chapter revolves around a checklist prepared for stakeholders who'd like to implement Trustworthy AI in their organizations or products. In every modern company this list will have to be used in relation to the role of its departments and employees.</p>

<p>As such, the Management/Board:</p>

<p><em>would discuss and evaluate the AI system's development, deployment or procurement, serving as an escalation board for evaluating all AI innovations and uses, when critical concerns are detected.</em></p>

<p>whereas the Compliance/Legal/Corporate department:</p>

<blockquote class="wp-block-quote"><p><em>"would use [the list] to meet the technological or regulatory changes"</em>.</p></blockquote>

<p>Quality Assurance would:</p>

<blockquote class="wp-block-quote"><p><em>"ensure and check the results of the assessment list and take action to escalate issues arising"</em></p></blockquote>

<p>while developers and project managers would:</p>

<blockquote class="wp-block-quote"><p><em>"include the assessment list in their daily work and document the results and outcomes of the assessment"</em>.</p></blockquote>
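<p>What "documenting the results" might look like for developers is not specified, but a minimal sketch could be a structured record kept per release. In the Python sketch below, the seven requirement names come straight from Chapter II, while the schema, the statuses and the example system are invented for illustration:</p>

<pre class="wp-block-code"><code>from dataclasses import dataclass, field
from datetime import date

# The seven key requirements of Chapter II, used as checklist sections.
REQUIREMENTS = [
    "Human agency and oversight",
    "Technical robustness and safety",
    "Privacy and data governance",
    "Transparency",
    "Diversity, non-discrimination and fairness",
    "Societal and environmental wellbeing",
    "Accountability",
]

@dataclass
class AssessmentRecord:
    """One assessment of one AI system release (hypothetical schema)."""
    system: str
    release: str
    assessed_on: date
    findings: dict = field(default_factory=dict)  # requirement -> (status, evidence)

    def open_issues(self):
        return [req for req, (status, _) in self.findings.items()
                if status == "fail"]

record = AssessmentRecord("mole-screener", "1.4.2", date.today())
for req in REQUIREMENTS:
    record.findings[req] = ("pass", "reviewed in sprint 12")
record.findings["Transparency"] = ("fail", "no explanation shown to users")
print(record.open_issues())  # ['Transparency']</code></pre>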
<p>This is the kind of list that could be used as the entry barrier for the private sector to be able to seal contracts with the public sector: <em>"have you checked everything on the list? If yes, there's your contract"</em>.</p>

<p>The guidelines conclude with examples of opportunities where AI can be put to innovative use, as in climate action and sustainable infrastructure, health and well-being, quality education and digital transformation.</p>

<p>I would also add the following to this list, extracted from <a href="https://www.i-programmer.info/programming/artificial-intelligence/10114-how-will-ai-transform-life-by-2030-a-review.html">"How Will AI Transform Life By 2030? Initial Report"</a>:</p>

<p><strong>Transportation</strong><br>It's a sector that will be heavily affected by automation through self-driving vehicles. As autonomous vehicles become better drivers than people, city-dwellers will own fewer cars, live further from work, and spend time differently, leading to an entirely new urban organization.</p>

<p><strong>Home/Service Robots</strong><br>Over the next fifteen years, coincident advances in mechanical and AI technologies promise to increase the safe and reliable use and utility of home robots in a typical North American city. Special-purpose robots will deliver packages, clean offices, and enhance security.</p>

<p><strong>Low-resource communities</strong><br>Poor communities, often overlooked and left to fend for themselves without the necessary attention, are expected to find hope in AI: under the banner of data science for social good, AI has been used to create predictive models to help government agencies more effectively use their limited budgets to address problems such as lead poisoning. Similarly, the Illinois Department of Human Services (IDHS) uses predictive models to identify pregnant women at risk for adverse birth outcomes in order to maximize the impact of prenatal care.</p>

<p><strong>Various</strong> others would include mobile devices that shut off all communication when they sense that their owner needs some rest, intelligent agents that start a conversation with you when they sense loneliness in the sound of your voice or read it in your facial expressions, self-driving cars that mobilize disabled people or make the roads safe again, and more.</p>

<p>The document also includes the flip side of the coin, with examples of critical concerns arising from the use of AI, such as identifying and tracking individuals, covert AI systems (impersonating humans), AI-enabled citizen scoring in violation of fundamental rights and, of course, lethal autonomous weapon systems (LAWS), which we've already explored in <a href="https://www.i-programmer.info/programming/artificial-intelligence/9382-autonomous-robot-weaponry-the-debate.html">"Autonomous Robot Weaponry – The Debate"</a>.</p>

<p>To sum up the guidelines: Chapter I was about the ethical principles and rights that should be built into AI, Chapter II laid out the seven key requirements for realizing AI that is Trustworthy, while Chapter III went through the non-exhaustive assessment list necessary for organizations to implement AI, and included a few examples of beneficial opportunities as well as critical concerns.</p>

<p>Wrapping up, the guidelines can be considered a good attempt by the EU to catch up with the coming revolution. 
As with every technology, there's bad use and good use, and the guidelines try to foster the correct use in every stakeholder.</p>

<p>Scientists and policy makers can answer some of the questions laid out by the report, but not others; hence it increasingly seems that decisions will be made on a case-by-case basis, through trial and error.</p>

<p>Ethics aside, there's still the question of how the future workplace is going to be shaped by the use of AI; see <a href="https://www.i-programmer.info/programming/artificial-intelligence/12647-do-ai-automation-and-the-no-code-movement-threaten-the-software-developers-job.html">"Do AI, Automation and the No-Code Movement Threaten Our Jobs?"</a></p>

<p>The question that should be addressed asap is whether everyone will be positively and equally affected by the coming revolution. Answer that and the task is almost done.</p>

<h4 class="wp-block-heading">More Information</h4>

<ul class="wp-block-list"><li><a href="https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai">Ethics Guidelines For Trustworthy AI</a></li></ul>

<pre class="wp-block-preformatted">Credit: Nikos Vaggalis (i-programmer.info)</pre>