Europe Region Special Section: Big Trends

Trust, Regulation, and Human-in-the-Loop AI Within the European Region

Artificial intelligence (AI) systems employ learning algorithms that adapt to their users and environment, with learning either completed before deployment (pre-trained) or allowed to continue during deployment. Because AI can optimize its behavior, a deployed system's behavior can diverge from its factory-tested behavior after release, often at the perceived expense of safety, reliability, and human controllability. Since the Industrial Revolution, trust has ultimately resided in regulatory systems set up by governments and standards bodies. Research into human interactions with autonomous machines demonstrates a shift in the locus of trust: we must trust non-deterministic systems such as AI to self-regulate, albeit within boundaries. This radical shift is one of the biggest issues facing the deployment of AI in the European region.

Trust has no accepted definition, but Rousseau28 defined it as "a psychological state comprising the intention to accept vulnerability based upon positive expectations of the intentions or behavior of another." Trust is an attitude that an agent will behave as expected and can be relied upon to reach its goal. Trust breaks down after an error or a misunderstanding between the agent and the trusting individual. The psychological state of trust in AI is an emergent property of a complex system, usually involving many cycles of design, training, deployment, measurement of performance, regulation, redesign, and retraining.

Trust matters, especially in critical sectors such as healthcare, defense, and security, where duty of care is foremost. Trustworthiness must be planned rather than an afterthought. We can trust in AI, such as when a doctor uses algorithms to screen medical images.20 We can also trust with AI, such as when journalists reference a social network algorithm to analyze sources of a news story.37 Growing adoption of AI into institutional systems relies on citizens trusting these systems and having confidence in the way they are designed and regulated.

Regional approaches for managing trust in AI have recently emerged, leading to different regulatory regimes in the U.S., the European region, and China. We review these regulatory divergences. Within the European region, research programs are examining how trust impacts user acceptance of AI. Examples include the UKRI Trustworthy Autonomous Systems Hub,a the French Confiance.ai project,b and the German AI Breakthrough Hub.c Europe appears to be developing a "third way," alongside the U.S. and China.19

Healthcare contains many examples of AI applications, including online harm risk identification,24 mental health behavior classification,29 and automated blood testing.22 In defense and security, examples include combat management systems9 and using machine learning to identify chemical and biological contamination.1 There is a growing awareness within critical sectors15,33 that AI systems need to address a "public trust deficit" by adding reliability to the perception of AI. In the next two sections, we discuss research highlights around two key trends: building safer and more reliable AI systems to engender trust, and putting humans in the loop with regard to AI systems and teams. We conclude with a discussion of applications and what we consider to be the future outlook for this area.

Recent Changes in the Regulatory Landscape for AI

The E.U. is an early mover in the race to regulate AI, and with the draft E.U. AI Act,d it has adopted an assurance-based regulatory environment using yet-to-be-defined AI assurance standards. These regulations build upon GDPR data governance and map AI systems into four risk categories. The lowest risk categories self-regulate with transparency obligations. The highest risk categories require first-party or third-party assessments enforced by national authorities. Some applications are banned outright to protect individual rights and vulnerable groups.

The U.K. AI Council AI Roadmape outlines a sector-specific audit-led regulatory environment, along with principles for governance of AI systems including open data, AI audits, and FAIR (Findable, Accessible, Interoperable, Reusable) principles. An example of sector-specific governance is the U.K. online safety bill,f which assigns a duty of care to online service providers and mandates formal risk assessments by the U.K. telecom regulator OFCOM.

Outside the European region, the U.S. National Security Commission on AI report 2021g outlined a market-led regulatory environment, with government focus areas of robust and reliable AI, human-AI teaming, and a standards-led approachh to testing, evaluation, and validation. China's AI development plan27 emphasizes societal responsibility; companies chosen by the Chinese state to be AI champions follow national strategic aims, and state institutions determine the ethical, privacy, and trust frameworks around AI.

The European region, driven by U.K. and E.U. AI regulation, is creating a "third way" alongside the AI regulation adopted by the U.S. and China. This "third way" is characterized by a strong European ethical stance around AI applications, for example limiting the autonomy of military AI systems, in direct contrast to China, where autonomy for AI-directed weapons is actively encouraged as part of its military-civil fusion strategy.14 It also is characterized by a strong European focus on a citizen's right to data privacy and the limits set on secondary data processing by AI applications, in contrast to China and the U.S., where state-sponsored strategic aims or weak commercial self-regulation around AI applications frequently override data privacy concerns. An example of this "third way" in action is the European city of Vienna becoming the first city in the world to earn the IEEE AI Ethics Certification Mark,30 which sets standards for transparency, accountability, algorithmic bias, and privacy of AI products. How different regional approaches to AI regulation perform in the heat of geo-political AI competition is likely to shape how regional AI research is conducted for many years to come.

Building Safe and Reliable AI to Engender Trust

Assuring safe, reliable AI systems can provide a pathway to trust. However, even in well-regulated regions such as Europe, non-deterministic AI systems require more than the quality assurance protocols designed for conventional software systems. New methods are emerging for the assurance of the machine learning life cycle, from data management to model learning and deployment.2

Exploratory data analysis and generative adversarial networks help assure that training data comes from a trusted source, is fit for purpose, and is unbiased. Built-in test (BIT) techniques, such as watchdog timers and behavioral monitors, support model deployment, as do "last safe" model checkpointing and explainable AI methods. Active research focuses on explainable machine learning.5 Approaches include explanation by simplification, such as local interpretable model-agnostic explanations (LIME) and counterfactual explanations; feature relevance techniques, such as Shapley Additive Explanations (SHAP) and analysis of random feature permutations; contextual and visual explanation methods, such as sensitivity analysis and partial dependence plots; and full life-cycle approaches, such as the use of provenance records. Research challenges for assurance of machine learning include detection of problems before critical failures, continuous assurance of adaptive models, and assessing levels of independence when multiple models are trained on common data.
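
To make the feature-relevance idea concrete, the sketch below estimates importance by random feature permutation, one of the techniques listed above. It is a minimal illustration using scikit-learn on a public dataset; the dataset and model are placeholders chosen for brevity, not drawn from any cited work.

```python
# Minimal sketch: feature relevance via random feature permutations.
# The dataset and model are illustrative placeholders.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in held-out accuracy;
# large drops indicate features the model genuinely relies on.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```

Model-agnostic methods such as LIME and kernel SHAP follow a similar pattern: probe the trained model with perturbed inputs and attribute the change in output to individual features.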


Manufacturing and smart-city deployments increasingly use digital twins,36 simulations of operating environments, to provide pre-deployment assurance. Digital twins also are used in healthcare,8 for example to assure pre-surgical practice, and in other critical sectors. A recent U.K.-hosted RUSI-TAS Conference35 discussed how digital twins can provide AI models with a safe space to fail. Other research trends include probing vulnerabilities of AI to accidents or malicious use, including examining how malicious actors can exploit AI.11 Attack vectors include adversarial inputs, data poisoning, and model stealing. Possible solutions include safety checklists12 and analysis of hostile agents that use AI to subvert democracies.31
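
As an illustration of the adversarial-input attack vector, the following sketch implements the fast gradient sign method (FGSM), a standard technique for crafting small perturbations that push a classifier toward error. The model, image, and label names are hypothetical placeholders rather than code from the cited works.

```python
# Minimal sketch: fast gradient sign method (FGSM) adversarial input.
# `model` is assumed to be a trained image classifier; `image` and `label`
# are a single input batch and its true class (hypothetical placeholders).
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    """Return a perturbed copy of `image` nudged toward misclassification."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction that increases the loss, then clip to valid pixels.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()
```

Probing a model with such perturbations before deployment is one way to estimate how robust it is likely to be against malicious inputs.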

Safe and reliable AI has recently received more attention in the European region than in the U.S. and China, and it is no coincidence that every one of the works cited in this section is from authors based in this region. This level of activity is probably motivated by the assurance- and audit-based European regulatory stances. The more we understand the vulnerabilities and assurance protocols of AI, the safer and more reliable AI systems will become. Safe, transparent systems that address user concerns will encourage public trust.

Human and Society in the Loop

Human-in-the-Loop (HITL) systems are grounded in the belief that human-machine teams offer superior results, building trust by inserting human oversight into the AI life cycle. One example is when humans mark false positives in email spam filters. HITL enhances trust in AI by optimizing performance, augmenting data, and increasing safety. It also provides transparency and accountability: unlike many deep learning systems, humans can explain their decisions in natural language.

However, the AI powering social media, commerce, and other activities may erode trust and even sow discord.4 If perceived as top-down oversight from experts, HITL is unlikely to address public trust deficits. Society-in-the-Loop (SITL) seeks broader consensus by extending HITL methods to larger demographics,16,25 for instance by crowdsourcing the ethics of autonomous vehicles to hundreds of thousands of people. Another approach is co-design with marginalized stakeholders. The same imperative drives Councils for the Orientation of Development and Ethics (CODEs) in AI and data-driven projects in developing countries,i where representatives of local stakeholder groups provide feedback during project life cycles. SITL combined with mass data literacy7 may reweave the fabric of human trust in and with AI.

A growing trend is to add humans into deep learning development and training cycles. Human stakeholders co-design AI algorithms to encourage responsible research and innovation (RRI), embed end-user values, and consider the potential for misuse. During AI training, traditional methods such as adversarial training and active learning are applied to deep learning models,13,21 with humans labeling uncertain or subjective data points during training cycles. Interactive sense-making17 and explainable AI5 also can enhance trust by visualizing AI outputs to reveal training bias, model error, and uncertainty.
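
A minimal sketch of the human-labeling step described above uses uncertainty sampling: the model ranks unlabeled examples by its own confidence and routes the least certain ones to a human annotator. The function and variable names are illustrative assumptions, not drawn from the cited works.

```python
# Minimal sketch: uncertainty sampling for human-in-the-loop training.
import numpy as np

def select_for_human_review(model, X_unlabeled, batch_size=10):
    """Return indices of the unlabeled samples the model is least sure about."""
    probs = model.predict_proba(X_unlabeled)   # scikit-learn-style classifier
    confidence = probs.max(axis=1)             # probability of the top class
    return np.argsort(confidence)[:batch_size]

# Typical loop: fit on the labeled pool, ask a human to label the most
# uncertain examples, add those labels to the pool, and retrain.
```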

Research into HITL is much more evenly spread across the European, U.S., and Chinese regions than work on safe and reliable AI, with about half the work cited in this section from authors based in the European region. Where the European region does differentiate itself is with a stronger focus on HITL to promote ethical AI and responsible innovation, as opposed to the U.S. and China, where there is a tighter focus on using HITL to increase AI performance.

Applications in Critical Sectors

AI offers considerable promise in the following sectors. Each illustrates high-risk, high-reward scenarios where trust is critical to public acceptance.

Defense. General Sir Patrick Sanders, head of U.K. Strategic Command, recently emphasized, "Even the best human operator cannot defend against multiple machines making thousands of maneuvers per second at hypersonic speeds and orchestrated by AI across domains."18 While human-machine teaming dominates much current military thinking, by taking humans out of the loop AI transforms the tempo of warfare beyond human capacity. From strategic missile strikes to tactical support for soldiers, AI impacts every military domain and, if an opponent has a high tolerance for error, it offers unstoppable advantages. Unless regulated by treaty, future warriors and their leaders will likely trust AI as a matter of necessity.

Law enforcement and security. Law enforcement is more nuanced. Though used only for warnings, Singapore's police robots have provoked revulsion in the European press,34 and the E.U. AI Act reflects this attitude by classifying law enforcement as high-risk. Some groups have claimed ambiguities in the E.U. AI Act leave the door open for bias, unrestrained surveillance, and other abuses,32 but at minimum it provides a framework for informed progress while asserting the European region's core values.

Healthcare. Healthcare interventions directly impact lives. Research into diagnostic accuracy shows that AI can improve healthcare outcomes.6,10,23,26 However, starting with patients and physicians, trust cascades upward, and as COVID-19 has shown, trust is ultimately political and thus needs to be nurtured carefully.

Transportation. Self-driving cars may receive the most publicity, but AI also is applied to mass transit, shipping, and trucking. Transportation involves life-or-death decisions, and the introduction of AI is changing the character of liability and assurance. These changes raise a fundamental question that is being debated today: Whom does the public trust to safely operate a vehicle?

Future Outlook

We think future standards for assurance will need to address the non-deterministic nature of autonomous systems. Whether robotic or distributed, AI is effectively an entity, and regulation, management, and marketing will need to account for its capacity to change.

Many projects currently are exploring aspects of bringing humans into the loop for co-design and training of AI systems and for human-machine teaming. We think this trend will continue and, if coupled with genuine transparency, especially around admitting AI mistakes and offering understandable explanations for why these mistakes happened, will offer a credible pathway to improving public trust in the AI systems being deployed into society.

We think that, increasingly, trust with AI will shape how citizens trust information, which has the potential to reduce the negative impact of attempts to propagate disinformation. If citizen trust in the fabric of AI used within society is reduced, then trust in AI itself will weaken. This is likely to be a major challenge for our generation.

Creating regulatory environments that allow nation-states to gain commercial, military, and social advantages in the global AI race may be the defining geopolitical challenge of this century. Regulation around AI has been developing worldwide, moving from self-assessment guidelines3 to frameworks for national or transnational regulation. We have noted that there are clear differences between the European region and other areas with robust capacity in AI, notably the need for public acceptance. The future will be a highly competitive environment, and regulation must balance the benefits of rapid deployment, the willingness of individuals to trust AI, and the value systems which underlie trust.

Acknowledgments. This work was supported by the Engineering and Physical Sciences Research Council (EP/V00784X/1), Natural Environment Research Council (NE/S015604/1), and Economic and Social Research Council (ES/V011278/1; ES/R003254/1).
