AI regulations are already out of date — IT leaders need to think ahead

computerworld • 27 Mar 2026, 12:00

Most AI regulations passed in the last few years are already irrelevant, but enterprises should think ahead and build rudimentary governance plans now so that compliance comes quicker later, legal experts said in two panel discussions at Nvidia’s GTC trade show last week.

Current AI regulations target frontier models, high-risk models, and transparency. They typically focus on LLMs and the prevention of voice and video deepfakes.

But future-proofing AI systems wasn’t a consideration when these laws were conceived, said William Dunning, managing associate for AI regulation at UK-based law firm Simmons & Simmons, in the “AI in the Age of Regulation: Build Trust Through Compliance” panel discussion. The legislation doesn’t cover new technologies such as agentic systems or self-updating models, which are expected to be significant in the coming years, Dunning said.

Technologists have discussed self-updating AI systems as agents that develop new skills by interacting with other agents or referencing materials such as company documents or research papers.
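To see why that pattern strains oversight, consider a minimal, hypothetical Python sketch of a self-updating agent. None of the names come from a real framework; the point is simply that the skill set a reviewer signs off on before deployment is not the skill set running in production.

```python
# Minimal sketch of a "self-updating" agent pattern: the agent's skill set
# grows at runtime, which is why fixed, point-in-time oversight is hard.
# All names here are illustrative, not from any specific framework.
from typing import Callable, Dict


class SelfUpdatingAgent:
    def __init__(self) -> None:
        # Skills the agent shipped with: the only ones a pre-deployment
        # review would ever see.
        self.skills: Dict[str, Callable[[str], str]] = {
            "summarize": lambda text: text[:100] + "...",
        }

    def learn_skill(self, name: str, fn: Callable[[str], str]) -> None:
        # New capability acquired after deployment, e.g. distilled from a
        # company document or handed over by another agent.
        self.skills[name] = fn

    def act(self, skill: str, payload: str) -> str:
        return self.skills[skill](payload)


agent = SelfUpdatingAgent()
# The capability set at audit time is no longer the capability set in production.
agent.learn_skill(
    "classify_contract",
    lambda text: "high-risk" if "indemnity" in text else "standard",
)
print(agent.act("classify_contract", "This clause includes an indemnity obligation."))
```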

“Some oversights, such as human oversight, are going to be really challenging when it comes to things like AI agents,” Dunning said.

It also isn’t clear how regulations will apply to emerging technologies such as world models for smart robots.

Current laws focus on system-to-human interactions, so people know when AI is being used. But system-to-system AI interaction hasn’t been considered, panelists said.

“There’s quite a lot of uncertainty as to how current AI regulations are going to work,” Dunning said.

The main change in the next 12 months will be a shift from policymaking to enforcement, said Nikki Pope, senior director for AI and legal ethics at Nvidia. The EU AI Act will be enforced this year, but significant ambiguity remains around what will actually be enforced, creating uncertainty for businesses, she said.

The wide-ranging EU AI Act, which passed in 2024, requires companies to label deepfakes and regulate high-risk AI content, said Minesh Tanna, partner and global AI lead at Simmons & Simmons, in the separate “Global Governance and the Future of AI Safety Regulation” panel discussion at GTC.
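As a rough illustration of what a labeling obligation can mean in practice, the sketch below embeds a machine-readable “AI-generated” disclosure in a PNG’s metadata using the Pillow library. This is a deliberately simple placeholder, not a compliance recipe; production workflows are more likely to rely on provenance standards such as C2PA’s Content Credentials.

```python
# Hypothetical example: attach a machine-readable "AI generated" disclosure
# to an image. Real deployments would use a provenance standard (e.g. C2PA)
# rather than bare PNG text chunks, which platforms often strip.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

img = Image.new("RGB", (512, 512))  # stand-in for model output

meta = PngInfo()
meta.add_text("ai_generated", "true")
meta.add_text("generator", "example-model-v1")  # hypothetical model name

img.save("labeled_output.png", pnginfo=meta)
```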

The US federal government considers the EU AI Act too intrusive, but California has passed laws to regulate deepfakes and AI-generated content, Tanna noted.

Pope said something similar during the AI in the Age of Regulation discussion. No federal regulations will come out of the United States anytime soon, and states are publishing their own rules, she said. California, for instance, is focused on transparency, watermarking, and how AI will affect individuals and groups.

The National Institute of Standards and Technology (NIST), meanwhile, is working to address AI trustworthiness and safety. Product liability litigation will likely continue as AI companies face lawsuits for alleged harm caused to people, Pope said.

“If it isn’t safe, it’s going to cause harm, and existing frameworks like product liability will kick in to compensate victims and to deter companies from causing that harm,” Tanna said in the Global Governance discussion.

Looking ahead, regulation is likely to cover a wide range of workplace tasks, from how AI is trained, to bias in hiring and interviews, to distinguishing between AI and human-generated content. Companies need governance plans to comply or face fines and lawsuits. With governance comes trust, panelists said.

“There is going to be culpability with respect to any harms that are caused,” said Jennifer Barrera, president and CEO of the California Chamber of Commerce, during the Global Governance panel discussion. For example, AI-based hiring requires experts, systems, and guardrails to avoid bias and discrimination violations, Barrera said.
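One long-established guardrail of that kind is the “four-fifths rule” from the US EEOC’s Uniform Guidelines, a screening test for adverse impact in selection procedures. The Python sketch below applies it to hypothetical numbers from an AI resume filter; it is an illustration, not legal advice.

```python
def adverse_impact_check(selections: dict[str, tuple[int, int]],
                         threshold: float = 0.8) -> dict[str, bool]:
    """selections maps group -> (candidates_selected, candidates_total).

    Flags any group whose selection rate falls below 80% of the
    highest group's rate (the four-fifths rule).
    """
    rates = {g: sel / total for g, (sel, total) in selections.items()}
    best = max(rates.values())
    return {g: (rate / best) >= threshold for g, rate in rates.items()}


# Hypothetical screening results from an AI resume filter.
print(adverse_impact_check({
    "group_a": (30, 100),  # 30% selected
    "group_b": (18, 100),  # 18% selected -> ratio 0.6, flagged for review
}))
# {'group_a': True, 'group_b': False}
```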

Tanna compared the goal of AI regulation to airplane travel. People should feel safe using AI, the way passengers feel when they board a plane.

AI is evolving fast, but the ultimate goal is to build trust, and panelists offered a few tips on where to start.

Lawyers, IT leaders, and engineers need to take an inventory of the AI tools in use across the organization. The use cases for a tool like Microsoft Copilot may be benign, but its usage still needs to be recorded, with guidelines defining how it should be used.
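In its simplest form, such an inventory is just a structured record per tool. The Python sketch below is one hypothetical shape for it; the field names, policy ID, and addresses are illustrative, not drawn from any standard.

```python
from dataclasses import dataclass, field


@dataclass
class AIToolRecord:
    name: str
    vendor: str
    use_cases: list[str]
    risk_tier: str               # e.g. "minimal", "limited", "high"
    guideline: str               # pointer to the internal usage policy
    owners: list[str] = field(default_factory=list)


inventory = [
    AIToolRecord(
        name="Microsoft Copilot",
        vendor="Microsoft",
        use_cases=["drafting documents", "summarizing meetings"],
        risk_tier="limited",
        guideline="POL-AI-004: no customer data in prompts",  # hypothetical policy ID
        owners=["it-governance@example.com"],
    ),
]

# Even benign tools get a record, so a compliance review starts from a
# complete picture rather than a discovery exercise.
for record in inventory:
    print(record.name, record.risk_tier, record.guideline)
```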

Engineers also need to be a central part of AI governance. Lawyers interpret regulation but are not technical experts, and bridging that gap is essential. Both sides are needed to operationalize AI governance. It’s not dissimilar to the work done in cybersecurity, in which engineers work with management to identify gaps and put security measures in place, panelists said.

Engineers can identify the missing pieces and the steps needed to close that gap, which helps lawyers create compliance and governance measures, Tanna said.
