Agents for good? Reconciling agentic AI with existing AI governance frameworks

2024 was another “bumper year” for AI development and governance, as organisations began to grapple with the “real-world” challenge of balancing AI innovation against the compliance and risk demands it places on senior stakeholders.

One concept that gained significant traction was “agentic AI”, which refers to advanced AI technologies capable of autonomous problem-solving and task execution. Predicted to be a major technological breakthrough in 2025, agentic AI poses unique challenges for existing AI governance frameworks such as the EU AI Act. In the article below, we explore some of the implications agentic AI may have for current regulations and governance frameworks, focusing in particular on how to reconcile the need for human oversight with the inherent autonomy of these systems.

What is Agentic AI?

“Agentic AI” was a buzz term in the technology sector during 2024. Referring to AI technologies that can act autonomously to solve complex problems and perform tasks, agentic AI is predicted to be one of the big technology breakthroughs of 2025. Already, OpenAI and Google have unveiled models that can take actions on a user’s behalf, and Salesforce has launched its workplace AI agent, which it claims can “use advanced reasoning abilities to make decisions and take action” without relying on human engagement. Alongside the buzz, however, come concerns around whether existing AI governance frameworks, such as the EU AI Act, are equipped to address the risks that agentic AI may pose.

Writing in The Atlantic, Jonathan Zittrain highlights three distinct qualities that set agentic AI apart. The first is that AI “agents” can be given “a high-level…goal and independently take steps to bring it about, through research or work of their own”. The second is that they can interact with the world by connecting with external software tools, i.e. not simply talking with us, but “acting out in the world”. The third is that agentic AI can operate indefinitely, creating the risk that human operators deploy it and then forget about it. “With no framework for how to identify what they are, who set them up, and how and under what authority to turn them off, agents may end up like…satellites lobbed into orbit and then forgotten”. These agents could then interact and “collide” with each other without human supervision.

Agentic AI and the EU AI Act

As Oliver Patel, Enterprise AI Governance Lead at AstraZeneca, notes, “there is no mention of the words ‘agent’ or ‘agentic’ in the EU AI Act, ISO 42001 or the NIST AI Risk Management Framework”. Although agentic AI may technically be covered by the EU AI Act’s existing definition of “AI System” (which appears to have been designed to be broad enough to allow for future developments), the lack of specific reference to such terms highlights that the Act may leave some gaps in this area.

What the Act is clear on is the need for “human oversight” of high-risk AI systems. Article 14, which becomes applicable on 2 August 2026, provides that “high-risk AI systems shall be designed and developed in such a way… that they can be effectively overseen by natural persons”. Whether an agentic AI system falls into the “high-risk” category will be considered on a case-by-case basis. However, the Act allows the Commission to take into account the “extent to which the AI system acts autonomously” when assessing whether a system poses risks to health and safety or an adverse impact on fundamental rights. This suggests that agentic AI systems operating with considerable autonomy will be more likely to stray into the “high-risk” category.

Under the Act, human oversight is intended to prevent or minimise the risks to health, safety or fundamental rights posed by high-risk AI systems. How that oversight is to be achieved in practice, though, remains open to interpretation. The Act is clear that the humans monitoring AI should have “an adequate level of AI literacy, training and authority”, and it requires providers of high-risk AI to enable those humans to properly understand the system and monitor its operation, including by deciding when to intervene in or interrupt it. Deployers of high-risk AI are also required to ensure that human oversight is properly assessed and documented. As Caitlin Andrews of the IAPP notes, however, views on the meaning of “oversight” will differ, and the practice of oversight has its pitfalls. Johannes Walter, of the ZEW – Leibniz Centre for European Economic Research, has observed that humans often struggle to assess the quality of algorithmic advice and, as a result, may fail to correct harmful AI decisions where it is unclear how the AI arrived at its result.

The process of human oversight may be further complicated by agentic AI. Emerging AI systems, such as Google’s Gemini 2.0 (which incorporates prototypes designed to explore the capabilities of AI assistants and “human-agent interaction”) and OpenAI’s Sora and Canvas, may act as AI agents in that they can autonomously tackle multi-step problems. There are concerns that the requirement for human oversight may be inherently incompatible with agentic AI systems, which by definition are designed to act on their own to achieve specific goals. Patel, in his “Top 10 AI Governance Predictions for 2025”, anticipates that “the unique risks and impacts of agentic AI systems will challenge and expose existing AI governance frameworks”.

The UK approach

Agentic AI is clearly on the UK government’s radar. The last government’s 2024 consultation on its “pro-innovation approach to AI regulation” white paper identified “autonomy risks” as one of the key risk areas, and noted that “new research on the advancing capabilities of agentic AI demonstrates that we may need to consider potential new measures to address emerging risks as the foundational AI technologies that underpin a range of applications continue to develop”.

The UK government has consistently (even with the change of administration in 2024) refrained from publishing any overarching legislative framework, ostensibly because it does not wish to “rush to regulate” and “potentially implement the wrong measures that may insufficiently balance addressing risks and supporting innovation”. Following the government’s 13 January 2025 announcement of its “AI Opportunities Action Plan”, a 50-point plan to make the UK a global leader in AI, the development of further guidelines and best practices in 2025 seems likely. But despite the stated aim in the King’s Speech of establishing “appropriate legislation”, formal laws to regulate AI still seem some way off. The plan adopts all 50 recommendations published by the government’s AI Opportunities Adviser, Matt Clifford. These include recommendations for the government to publish best-practice guidance and case studies, but make no mention of legislation; instead, Clifford defers to regulators to enable “safe AI innovation”. Prime Minister Sir Keir Starmer has noted that Britain has “freedom… in relation to…regulation to do it in a way that we think is best for the UK”. However, Sachin Dev Duggal, CEO of AI startup Builder.ai, told CNBC in response that, although the government’s AI action plan “shows ambition”, proceeding without clear rules is “borderline reckless”.

Trust issues

Zittrain notes, “the blinding pace of modern tech [can] make us think that we must choose between free markets and heavy-handed regulation—innovation versus stagnation”. Zittrain instead argues for “the right kind of standard-setting and regulatory touch”, which can make new technology “safe enough for general adoption”. So how best to navigate the risks of agentic AI to ensure there is this “right kind of standard-setting and regulatory touch”?

The Economist highlights “trust” as a key issue that may have a bearing on the regulatory approach towards agentic AI. “Checking whether a chatbot has given a right or wrong answer is usually easy. Determining whether an AI agent has booked the best restaurant or holiday it could within your budget may be more difficult”. This also illustrates the potential problems around fulfilling the human oversight requirement under the EU AI Act. Even with AI literacy and training, a human may not recognise when something has gone wrong with the system, especially with agentic AI, where there may not be the means to fully monitor and determine how an AI agent has acted. Existing AI governance frameworks may need to be augmented or revised to take account of the new risks around agentic AI, and providers and deployers may need further guidance on how to maintain compliance when developing and marketing these innovative new systems.

What to do in the meantime

While we await developments and guidance, companies looking to implement agentic AI should act with prudence and consider introducing safety practices when procuring these solutions. For example, they might set internal guardrails around which tasks the agentic AI is allowed to perform and which applications or data it can access, and consider limiting employee or user access to certain groups of individuals or situations. Businesses should ensure a level of human oversight by employees with the necessary “AI literacy”, competence and authority, and should ensure that oversight observations are periodically logged and documented. More generally, the risks around agentic AI further emphasise that companies should be implementing robust AI governance frameworks, policies and processes within their businesses.
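For organisations whose technical teams want to see how such procurement safeguards might translate into practice, the sketch below is a minimal, purely illustrative Python example of the kind of controls described above: an allowlist of permitted tasks and data sources, a restricted user group, and an append-only log that a human reviewer can periodically check and document. All names and structures (ALLOWED_TASKS, request_agent_action, the log file and so on) are hypothetical assumptions for illustration only and are not drawn from any particular product, framework or legal requirement.

```python
# Illustrative sketch only: hypothetical guardrails for an agentic AI deployment.
import datetime
import json

# "Internal guardrails": the tasks the agent may perform and data it may access.
ALLOWED_TASKS = {"summarise_document", "draft_email"}
ALLOWED_DATA_SOURCES = {"public_marketing_docs"}
AUTHORISED_USERS = {"alice@example.com"}      # restrict use to trained staff

OVERSIGHT_LOG = "agent_oversight_log.jsonl"   # documented trail for periodic human review


def request_agent_action(user: str, task: str, data_source: str) -> bool:
    """Gate a proposed agent action against the allowlists and record the decision."""
    permitted = (
        user in AUTHORISED_USERS
        and task in ALLOWED_TASKS
        and data_source in ALLOWED_DATA_SOURCES
    )
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user,
        "task": task,
        "data_source": data_source,
        "permitted": permitted,
    }
    # Append-only log that a human overseer can sample and sign off at intervals.
    with open(OVERSIGHT_LOG, "a", encoding="utf-8") as log:
        log.write(json.dumps(record) + "\n")
    return permitted


if __name__ == "__main__":
    # Permitted: authorised user, allowed task, allowed data source.
    print(request_agent_action("alice@example.com", "summarise_document", "public_marketing_docs"))
    # Blocked and logged: task outside the allowlist.
    print(request_agent_action("alice@example.com", "send_payment", "public_marketing_docs"))
```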

It remains to be seen whether the EU will provide guidance during 2025 on how agentic AI fits into the AI Act, or whether any other specific legislation emerges globally. What we can be fairly sure of is that 2025 will be another bumper year for developments not only in agentic AI as the “next big thing”, but in AI governance in general as it attempts to keep pace.


How we can help

Shoosmiths has launched AI Comply, a software solution powered by Enzai that supports clients with their AI compliance in an evolving landscape of regulation. By combining Shoosmiths’ legal expertise with Enzai’s cutting-edge AI governance software, AI Comply empowers businesses to monitor their AI use and ensure alignment with the latest regulatory standards, including the EU AI Act’s AI literacy requirements. AI Comply is a flexible solution, designed to adapt to clients’ unique business requirements and to developments in law, including anything that may be on the horizon around agentic AI.

Disclaimer

This information is for general information purposes only and does not constitute legal advice. It is recommended that specific professional advice is sought before acting on any of the information given. Please contact us for specific advice on your circumstances. © Shoosmiths LLP 2025.
