

ZDNET’s Key Takeaways:
- The global regulatory environment is shifting rapidly, creating new expectations and compliance responsibilities for organizations.
- Business executives can turn these compliance obligations into instruments for steering ethical and strategic AI innovation.
- By collaborating with internal teams and trusted external partners, organizations can not only meet compliance standards but also turn them into catalysts for measurable performance outcomes.

The growing phenomenon often described as an “AI gold rush” has generated immense pressure on governments, regulators, and public agencies worldwide. As corporations strive to seize competitive advantages from rapidly evolving AI technologies, policymakers are racing to enact frameworks that safeguard individual rights and the security of personal data. Among the most notable examples is the European Union’s landmark AI Act, which has emerged as a model for comprehensive legislative oversight. To provide a global comparative perspective, international law firm Bird & Bird created the AI Horizon Tracker, a resource offering analysis across twenty‑two jurisdictions to illustrate the diverse regional strategies that are shaping policy frontiers.

Digital and business leaders must now identify practical approaches to comply with these emerging policies. Although adhering to regulations can initially appear burdensome, compliance need not stifle creativity. In fact, it can serve as an essential foundation for guiding thoughtful experimentation with AI. The following five perspectives from leading executives illustrate how governance can be transformed into a framework for innovation.

1. Explore Within Constraints
Art Hu, Global Chief Information Officer at Lenovo, emphasized that there is no universal equation for reconciling AI innovation with effective governance. According to him, responses differ drastically across industries, sectors, and public institutions because each operates under unique legislative and ethical expectations. Hu explained to ZDNET that leaders should pay close attention to forthcoming regulatory trends and align their innovation strategies accordingly. He warned that mistakes in this domain now carry far greater consequences than before: the potential downside—the “tail risk,” as he described it—has intensified considerably. To mitigate that risk, he advocates a disciplined approach to exploration, rooted in structured experimentation. Hu recommends the adoption of clearly defined “whitelists” and sandbox environments—controlled spaces in which novel AI concepts can be developed safely. Within these boundaries, companies can innovate freely while avoiding long‑term adverse outcomes that unregulated experimentation might provoke.
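The whitelist-and-sandbox pattern Hu describes can be sketched in a few lines: experiments run only if their model and data classification appear on a pre-approved list, and everything else is denied by default. This is a minimal illustrative sketch, not Lenovo's actual tooling; the model names, data classes, and policy entries are all assumptions.

```python
# Illustrative "whitelist" gate for a sandboxed AI experiment.
# Entries and names are hypothetical; real policies would live in
# version-controlled config reviewed by governance and security teams.

APPROVED_EXPERIMENTS = {
    # (model, data_classification) pairs cleared for the sandbox
    ("local-llm", "synthetic"),
    ("local-llm", "public"),
    ("vision-classifier", "public"),
}

def can_run_in_sandbox(model: str, data_class: str) -> bool:
    """Return True only if this exact combination was explicitly whitelisted."""
    return (model, data_class) in APPROVED_EXPERIMENTS

# Deny-unless-allowed: anything not on the list is blocked,
# so novel combinations must go through review before they run.
assert can_run_in_sandbox("local-llm", "synthetic")
assert not can_run_in_sandbox("local-llm", "customer-pii")
```

The key design choice is the default-deny posture: teams can move fast inside approved boundaries, while anything new triggers an explicit governance conversation rather than silent experimentation.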

2. Work Alongside Partners
Paul Neville, Director of Digital, Data, and Technology at The Pensions Regulator (TPR), the UK pensions watchdog, cautioned that AI represents not a mere technological refresh but a transformational shift in how organizations operate. As he has reiterated in multiple forums, many assume that the future of AI simply involves automating current processes to make them faster or more efficient. However, this perspective lacks imagination and fails to resolve today’s most fundamental challenges. Neville argued that visionary leaders must instead craft and communicate bold new visions of what technology can achieve. Only by reimagining work patterns and operational models can organizations extract AI’s full potential.

At TPR, Neville’s team collaborates closely with the UK government to interpret and apply new statutory requirements, including those introduced through forthcoming pensions legislation. This cooperation ensures that emerging rules not only safeguard citizens’ interests but also enable advanced digital services tailored to modern expectations. For example, the integration of AI within TPR’s operations presents opportunities to create interactive, dynamic, and visually engaging platforms that enhance user engagement. Neville regards this synergy between technological innovation and legislative intent as a model for future public‑sector transformation.

3. Manage Bespoke Cases
Martin Hardy, Cyber Portfolio and Architecture Director at Royal Mail, believes that compliance and risk management can become powerful enablers of AI exploration. Within the cybersecurity discipline, organizations engage extensively in threat modeling—activities that often focus on generic scenarios. Hardy noted that the greatest value arises when security professionals apply their expertise to specialized, bespoke situations that generic tools cannot easily address. AI technologies can accelerate this process by automating the baseline analysis, thereby liberating experts to focus on unique or sector‑specific threats. For instance, an AI system that performs eighty percent of the foundational work enables security architects to concentrate on high‑impact vulnerabilities, such as potential attacks by targeted adversaries.
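Hardy's division of labour amounts to a triage step: route well-known, generic threats to automated handling and reserve human architects for the bespoke cases. The sketch below is an illustrative assumption, not Royal Mail's tooling; the threat categories are invented.

```python
# Hypothetical triage of threat-model findings: an automated pass covers
# the generic baseline, leaving only bespoke cases for an expert queue.

GENERIC_THREATS = {"phishing", "credential-stuffing", "known-cve"}

def triage(threats: list[str]) -> tuple[list[str], list[str]]:
    """Split findings into an auto-handled baseline and an expert-review queue."""
    baseline = [t for t in threats if t in GENERIC_THREATS]
    bespoke = [t for t in threats if t not in GENERIC_THREATS]
    return baseline, bespoke

baseline, bespoke = triage(["phishing", "targeted-supply-chain", "known-cve"])
assert baseline == ["phishing", "known-cve"]     # handled automatically
assert bespoke == ["targeted-supply-chain"]      # reaches the security architect
```

The point of the split mirrors Hardy's observation: if automation reliably absorbs the repetitive majority of findings, expert attention concentrates where generic tools add the least value.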

Nevertheless, Hardy underscored the double‑edged nature of data‑driven AI adoption. Storing massive volumes of information introduces vulnerabilities: if a system were ever compromised, adversaries could potentially access a detailed map of an organization’s weak points. Businesses thus face a dilemma: failing to leverage AI could leave them behind competitors, while unguarded implementation might expose them to serious breaches. The key, according to Hardy, is to embrace AI intelligently—with vigilance, transparency, and strategic safeguards.

4. Foster Key Relationships
Ian Ruffle, Head of Data and Insight at RAC, highlighted that the delicate balance between governance and creativity ultimately depends on company culture. Technology alone cannot guarantee progress; success lies in people’s capacity to use technology with wisdom and accountability. Ruffle explained to ZDNET that senior executives cannot monitor every granular risk, which is why cultivating trust and collaboration among internal specialists is crucial. In practice, this means empowering teams to interpret data ethically and to remain aware that every dataset represents real individuals. Building a culture of empathy, respect, and mutual responsibility strengthens compliance and innovation simultaneously.

Ruffle further observed that nurturing strong relationships among data protection officers, information security professionals, and leadership teams yields long‑term benefits that surpass the short‑term allure of cutting‑edge technologies. To truly balance governance and innovation, organizations must preserve the human dimension—the capacity for reflection, ethical reasoning, and creative problem solving. As he metaphorically put it, leaders “walk a tightrope,” requiring constant human oversight to navigate complex challenges effectively.

5. Ask Crucial Questions
Dr. Erik Mayer, Transformation Chief Clinical Information Officer at Imperial College London and Imperial College Healthcare NHS Trust, stressed the importance of precision when dealing with data governance in AI projects. He warned that excessive data cleansing can introduce bias by eliminating variables that the model might need to produce accurate insights. To address this, Mayer’s team maintains ongoing dialogues with regulators to clarify expectations and metrics for AI validation. They investigate practical questions such as: What key performance indicators are necessary to secure regulatory approval? How complete and unbiased is the dataset? What is the provenance and definition of each data element?

From Mayer’s perspective, these inquiries are essential because governance is not simply an administrative necessity—it is a method to ensure that AI systems function safely in real‑world applications. He cautioned that data cleaning, if done without careful documentation, could erase essential nuances. The preferred approach is to retain the rawest feasible data and to record each transformation step in detail. This disciplined transparency becomes the cornerstone of compliance and reliability. Ultimately, he argued, organizations must be able to affirm that their AI implementations are safe, verifiable, and continuously monitored for performance and fairness. Sustainable success therefore depends not merely on initial regulatory approval but on perpetual validation and iterative improvement over time.
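Mayer's discipline of retaining the rawest feasible data while documenting every transformation can be sketched as a small provenance log: the raw records are never mutated, and each cleaning step records what went in, what came out, and why. This is a minimal sketch under assumed field names, not the team's actual pipeline.

```python
# Hedged sketch of "record each transformation step": raw data stays
# untouched, and an audit log captures the effect of every cleaning step
# so reviewers can see exactly which records were dropped and by what rule.
# Record fields are invented for illustration.

raw_records = [
    {"patient_id": 1, "age": 42, "reading": 7.1},
    {"patient_id": 2, "age": None, "reading": 6.4},
    {"patient_id": 3, "age": 35, "reading": None},
]

audit_log = []  # provenance: one entry per transformation applied

def apply_step(records, name, fn):
    """Apply a cleaning step and record its input/output counts."""
    out = fn(records)
    audit_log.append({"step": name, "in": len(records), "out": len(out)})
    return out

cleaned = apply_step(
    raw_records,
    "drop_missing_reading",
    lambda rs: [r for r in rs if r["reading"] is not None],
)

# The raw data survives intact; the log documents what the step removed.
assert len(raw_records) == 3
assert len(cleaned) == 2
assert audit_log == [{"step": "drop_missing_reading", "in": 3, "out": 2}]
```

Because every step is named and counted, a regulator reviewing the model can trace each exclusion back to a documented rule rather than an undocumented cleansing pass, which is precisely the bias risk Mayer warns about.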

Source: https://www.zdnet.com/article/5-ways-to-use-governance-challenge-regulation-ai-innovation/