The 5 governance pitfalls you can still avoid
The problem
You’ve heard of shadow IT. Shadow AI is its AI counterpart: it occurs when departments or individuals deploy AI agents without proper organizational visibility or oversight. Much like shadow IT before it, shadow AI emerges from legitimate business needs combined with the increasing accessibility of AI development tools. And as AI has become more widely available and easier to use, shadow AI has proliferated.
What once required specialized data science teams can now be implemented by business analysts or developers using accessible platforms and APIs. This democratization offers tremendous potential for innovation, but it also creates significant governance challenges, including:
AI agents deployed without security review or risk assessment
Inconsistent controls across similar use cases
Duplication of effort as multiple teams solve similar problems
Fragmentation of institutional knowledge about AI capabilities
Increasing technical debt as ad-hoc solutions proliferate
Misused data
Want to learn how to avoid the five common pitfalls in deploying AI? Watch our webinar.
LinkedIn, the world’s largest professional network, offers an AI-powered Hiring Assistant that exemplifies the regulatory blind spots created by embedded AI agents. Though functioning as a productivity tool, it qualifies as a high-risk system under EU AI Act Article 6.
The governance gap
While LinkedIn provides the technology, your organization bears deployer accountability, with legal exposure under multiple regulations that don't differentiate between providers and users. When AI-driven hiring decisions require explanation (mandated by the EU AI Act, Article 86, ‘Right to Explanation of Individual Decision-Making’), the compliance burden falls to your organization, not the platform.
This pattern multiplies across your technology stack, where embedded AI agents remain largely invisible to governance frameworks but fully visible to regulators.
Key warning signs
Unexplained API calls to AI services
Business processes improving without documented technology changes
Data access patterns showing unusual volume or timing
Teams reluctant to detail their productivity improvements
Rising costs for cloud-based AI services without corresponding IT projects (one way to scan for these signals is sketched below)
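The first and last of these signals are often the easiest to check. The minimal sketch below scans a network egress or proxy log for calls to known AI API endpoints; the log format, the `egress.log` path, and the domain list are illustrative assumptions you would replace with your own proxy, CASB, or billing data.

```python
# Minimal sketch: flag hosts in an egress/proxy log that call known AI API endpoints.
# Assumes a CSV log with a header row: timestamp,source_host,destination_domain,bytes_out.
# The format and the domain list are illustrative assumptions, not a definitive detection rule.
import csv
from collections import defaultdict

# Hypothetical list of AI service domains worth flagging for review
AI_API_DOMAINS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
    "api.cohere.ai",
}

def find_shadow_ai_signals(log_path: str) -> dict[str, int]:
    """Return a count of AI-service calls per internal source host."""
    calls_per_host: dict[str, int] = defaultdict(int)
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["destination_domain"].strip().lower() in AI_API_DOMAINS:
                calls_per_host[row["source_host"]] += 1
    return dict(calls_per_host)

if __name__ == "__main__":
    # Hosts with heavy, previously undocumented AI traffic are candidates for follow-up.
    for host, count in sorted(find_shadow_ai_signals("egress.log").items(),
                              key=lambda kv: kv[1], reverse=True):
        print(f"{host}: {count} calls to AI APIs")
```

Reconciling the flagged hosts against your project inventory and cloud invoices turns a vague suspicion of shadow AI into a concrete list of teams to talk to.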
What you can do now
Effective shadow AI mitigation requires balancing control with enablement. Here are steps your organization can take to minimize risk and optimize your customers’ experiences:
Create sanctioned pathways for AI agent experimentation with appropriate guardrails
Implement discovery tools that can identify AI agent signatures across your technology ecosystem
Establish AI registries where teams document agents, their capabilities, and governance controls (see the registry sketch below)
Develop graduated governance that scales with an agent's potential impact and access
Build communities of practice that share knowledge and promote responsible development
The goal isn't to eliminate innovation, but to bring it into a framework where risks can be identified and managed effectively.
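To make the registry and graduated-governance ideas concrete, here is a minimal sketch of what a registry entry might capture. The field names, tiers, and control mappings are illustrative assumptions rather than a standard schema; the point is that every agent gets a named owner, a documented purpose, and controls that scale with its potential impact.

```python
# Minimal sketch of an AI agent registry entry with graduated governance tiers.
# Field names and tier-to-control mappings are illustrative assumptions, not a standard schema.
from dataclasses import dataclass, field
from enum import Enum

class GovernanceTier(Enum):
    SANDBOX = "sandbox"          # experimentation only, synthetic or public data
    STANDARD = "standard"        # production use, routine review cadence
    HIGH_IMPACT = "high_impact"  # customer- or decision-affecting, full review plus monitoring

@dataclass
class AgentRegistryEntry:
    agent_id: str
    owner_team: str
    business_purpose: str
    capabilities: list[str] = field(default_factory=list)
    data_domains: list[str] = field(default_factory=list)
    governance_tier: GovernanceTier = GovernanceTier.SANDBOX
    security_review_completed: bool = False

    def required_controls(self) -> list[str]:
        """Return the controls expected at this agent's tier (illustrative mapping)."""
        base = ["registered_owner", "documented_purpose"]
        if self.governance_tier is GovernanceTier.STANDARD:
            return base + ["security_review", "access_logging"]
        if self.governance_tier is GovernanceTier.HIGH_IMPACT:
            return base + ["security_review", "access_logging",
                           "human_oversight_plan", "regulatory_assessment"]
        return base

# Example: a team registers an experimentation agent before promoting it to production.
entry = AgentRegistryEntry(
    agent_id="invoice-triage-bot",
    owner_team="finance-ops",
    business_purpose="Route supplier invoices to the right approver",
    capabilities=["document_classification"],
    data_domains=["supplier_invoices"],
)
print(entry.required_controls())
```

Even a lightweight registry like this gives security and compliance teams a single place to look before an agent is promoted from experimentation to production.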
The problem
AI agents require data to function. Often lots of it. From multiple sources. And with minimal human intervention.
This scenario creates fundamental governance challenges, including:
Agents needing broader access patterns than human users
Traditional role-based access control proving insufficient for dynamic agent behavior
Data policies designed for human users failing to anticipate agent-specific scenarios
Context-awareness gaps when data moves between systems via agents
Complex entitlement management across multiple agents working in concert
Organizations often discover these gaps only after deployment, when agents encounter unexpected limitations or, worse, when they access and use data in ways that violate organizational policies or regulations.
A healthcare technology company deployed an AI agent to streamline insurance verification processes. The agent was granted access to patient records to extract necessary information. However, the access controls didn't properly limit what data the agent could process, and it inadvertently included protected health information in logs visible to unauthorized personnel, creating both a compliance violation and a security incident.
Key warning signs
AI agents with unnecessarily broad access rights
Frequent access exceptions being granted for agent operations
Data policy violations involving automated processes
Manual workarounds being implemented to "feed" agents necessary data
Inconsistent data handling practices across similar agent deployments
What you can do now
Addressing data access and policy gaps requires extending your data governance framework to explicitly address agent-specific scenarios. Consider the following best practices:
Develop agent-specific data access patterns that implement the principle of least privilege while enabling functionality (see the sketch below)
Create data policy frameworks that explicitly address automated processing
Implement dynamic access controls that can adapt based on the agent's task context
Establish data lineage tracking for agent operations to maintain visibility
Regularly audit agent data access against both policies and usage patterns
By addressing these issues proactively, organizations can enable agent functionality while maintaining appropriate data protection.
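As one way to combine least-privilege access with lineage tracking, the sketch below checks an agent's data request against the fields declared for its current task and records every grant and denial. The scope declarations, field names, and insurance-verification scenario are assumptions for illustration, not a drop-in policy engine.

```python
# Minimal sketch: task-scoped, least-privilege access check for an agent,
# with every decision recorded for data lineage and later audits.
# Scope contents, field names, and the scenario are illustrative assumptions.
from datetime import datetime, timezone

# Fields each agent may read, per task context (the "least privilege" declaration)
AGENT_TASK_SCOPES = {
    ("insurance-verification-agent", "verify_coverage"): {
        "member_id", "policy_number", "coverage_status",
    },
}

ACCESS_LOG: list[dict] = []  # stand-in for a lineage/audit store

def authorize_fields(agent_id: str, task: str, requested_fields: set[str]) -> set[str]:
    """Grant only the requested fields that fall inside the agent's declared task scope."""
    allowed = AGENT_TASK_SCOPES.get((agent_id, task), set())
    granted = requested_fields & allowed
    denied = requested_fields - allowed
    ACCESS_LOG.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent_id": agent_id,
        "task": task,
        "granted": sorted(granted),
        "denied": sorted(denied),
    })
    return granted

# Example: the agent asks for more than it needs; diagnosis codes are denied,
# and the denial itself is recorded for audit.
granted = authorize_fields(
    "insurance-verification-agent",
    "verify_coverage",
    {"member_id", "policy_number", "diagnosis_codes"},
)
print(granted)         # member_id and policy_number only
print(ACCESS_LOG[-1])  # the lineage record, including what was denied
```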
The problem
The promise of AI agents lies in autonomy: their ability to complete tasks with reduced human intervention. But this proposition creates a fundamental governance challenge: how to determine appropriate human oversight without undermining efficiency.
Most organizations struggle to balance competing objectives:
Maximizing automation benefits while maintaining appropriate control
Ensuring humans remain accountable for outcomes without becoming bottlenecks
Keeping humans engaged enough to provide meaningful oversight
Maintaining skills and knowledge that might atrophy as tasks are automated
Designing oversight that scales with deployment size and complexity
Without thoughtful design, organizations tend toward two problematic extremes:
Excessive manual reviews that negate efficiency gains
Rubber-stamp approvals that provide only the illusion of oversight
A marketing agency deployed an AI agent to generate and publish social media content for clients. The system was designed with a cursory human approval step, but as volume increased, reviewers found themselves approving dozens of posts in minutes. Eventually, the agent published content containing factual inaccuracies about a client's product that triggered a consumer backlash and contract cancellation.
Key warning signs
Oversight processes that haven't been redesigned for agent-specific workflows
Human approvers consistently accept agent recommendations without modification
Increasing approval speeds with no corresponding efficiency improvements
Oversight responsibilities added to already full workloads
Lack of clarity about who's ultimately responsible for agent outcomes
What you can do now
Effective human oversight requires intentional design. Identify and implement the following governance best practices:
Implement risk-proportional oversight where human involvement scales with potential impact (see the sketch below)
Design meaningful intervention points where human judgment adds value
Create exception-based workflows that focus human attention on anomalous cases
Establish clear accountability chains from agent actions to responsible individuals
Develop oversight skills and literacy so humans can effectively evaluate agent recommendations
The goal isn't maximum oversight, but optimal oversight: human intervention designed precisely where it adds the most value and mitigates the most risk.
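One way to pair risk-proportional oversight with exception-based workflows is to score each proposed agent action and route only risky or anomalous ones to a named reviewer. The scoring factors, thresholds, and routing labels below are illustrative assumptions; the point is that human attention is spent where it can actually change the outcome.

```python
# Minimal sketch: risk-proportional, exception-based routing of agent actions.
# Scoring factors, thresholds, and routing labels are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ProposedAction:
    description: str
    audience_size: int           # how many people the action can affect
    is_customer_facing: bool
    deviates_from_history: bool  # flagged as anomalous vs. past approved actions

def risk_score(action: ProposedAction) -> int:
    """Crude additive risk score; replace with your own model or review rubric."""
    score = 0
    if action.is_customer_facing:
        score += 2
    if action.audience_size > 10_000:
        score += 2
    if action.deviates_from_history:
        score += 3
    return score

def route(action: ProposedAction) -> str:
    """Auto-approve low-risk routine actions; escalate the rest to an accountable human."""
    score = risk_score(action)
    if score >= 5:
        return "hold: senior reviewer sign-off required"
    if score >= 2:
        return "queue: standard human review"
    return "auto-approve: logged for periodic sampling"

# Example: a routine internal draft is auto-approved; an anomalous public post is held.
print(route(ProposedAction("Internal summary email", 20, False, False)))
print(route(ProposedAction("Public claim about a client's product", 50_000, True, True)))
```

In the marketing-agency scenario above, a queue like this would have separated the anomalous, customer-facing posts from routine ones instead of asking reviewers to rubber-stamp everything at the same speed.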
The problem
Regulatory frameworks are evolving rapidly to address AI, but they weren't designed with autonomous agents specifically in mind.
This creates significant compliance challenges, including:
Uncertainty about how existing regulations apply to agent-specific scenarios
Compliance requirements designed for human decision-makers that don't translate cleanly
Documentation standards that don't capture agent operation nuances
Jurisdiction questions when agents operate across regulatory boundaries
Rapidly evolving requirements that may conflict with deployment timelines
Organizations often find themselves either over-interpreting regulations (creating unnecessary constraints) or under-interpreting them (creating compliance risk), with limited precedent to guide decisions.
Compliance isn't easy. Learn how to operationalize AI compliance.
A financial advisory firm deployed an agent to provide preliminary investment recommendations. The system was designed without considering regulations requiring suitability documentation for investment advice. When regulators conducted a routine examination, they found the firm couldn't produce records demonstrating how the agent's recommendations aligned with client objectives and risk tolerance, resulting in significant penalties and reputation damage.
Key warning signs
Compliance teams excluded from agent development processes
Regulatory reviews occurring only at deployment rather than throughout development
Uncertainty about which regulations apply to specific agent capabilities
Documentation gaps around agent decision-making
Inconsistent approaches to similar regulatory questions across the organization
What you can do now
Addressing regulatory misalignment requires proactive engagement with compliance questions, including:
Conduct regulatory applicability assessments for each agent's specific functionality
Engage regulators early when compliance questions emerge
Build compliance by design into agent development methodologies
Implement comprehensive documentation of decision-making processes (see the sketch below)
Establish a regulatory change management process to track evolving requirements
By treating regulatory alignment as a design requirement rather than a deployment checkpoint, organizations can reduce compliance risk while maintaining development momentum.
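As an illustration of building compliance documentation into the agent's workflow rather than reconstructing it later, the sketch below writes a decision record at the moment a recommendation is produced. The fields are loosely modeled on the suitability gap in the advisory example above; they are illustrative assumptions, not legal advice or a regulator-approved schema.

```python
# Minimal sketch: a decision record written when an agent produces a recommendation,
# so compliance can later show how the output aligned with documented client objectives.
# Field names and the scenario are illustrative assumptions, not a regulator-approved schema.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
from typing import Optional

@dataclass
class DecisionRecord:
    agent_id: str
    decision_id: str
    recommendation: str
    inputs_considered: dict             # e.g. client objectives, risk tolerance, constraints
    rationale: str                      # plain-language link between inputs and output
    applicable_requirements: list[str]  # which rules the team assessed as applying
    human_reviewer: Optional[str]
    timestamp: str = ""

    def __post_init__(self):
        if not self.timestamp:
            self.timestamp = datetime.now(timezone.utc).isoformat()

def persist(record: DecisionRecord, path: str = "decision_records.jsonl") -> None:
    """Append the record to a simple JSON-lines store (stand-in for your system of record)."""
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")

# Example: document a preliminary recommendation before it reaches the client.
persist(DecisionRecord(
    agent_id="advisory-agent",
    decision_id="rec-2024-0001",
    recommendation="Increase allocation to short-duration bond funds",
    inputs_considered={"objective": "capital preservation", "risk_tolerance": "low"},
    rationale="Low risk tolerance and a preservation objective favor short-duration fixed income.",
    applicable_requirements=["suitability documentation"],
    human_reviewer="advisor_on_record",
))
```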
The problem
AI agents often operate as black boxes, making decisions through complex processes that aren't easily inspected or understood. This lack of explainability creates significant governance challenges:
Difficulty determining if agents are operating as intended
Inability to diagnose problems when they occur
Challenges validating that agents remain aligned with organizational values
Limited ability to learn from and improve agent operations over time
Eroding trust from stakeholders who can't understand agent decision-making
As agents become more autonomous and handle more complex tasks, the explainability gap widens. And it often becomes apparent only when something goes wrong.
Don't let missing explainability ruin your AI initiatives. Learn how to scale AI with data governance.
A large retail organization deployed an AI agent to optimize pricing across thousands of products. When sales of certain product categories unexpectedly declined, the team couldn't determine whether the agent was functioning correctly or if external factors were to blame. Without visibility into the agent's decision-making, they were forced to roll back the system entirely and revert to manual pricing. And it was a costly, disruptive process that could have been avoided with better explainability.
Key warning signs
Inability to answer basic questions about why an agent took specific actions
Post-incident reviews that can't identify root causes
Stakeholder confusion or skepticism about agent operations
Growing reliance on the original development team to interpret agent behavior
Ad-hoc explanations that vary depending on who provides them
What you can do now
Building appropriate explainability requires both technical and process approaches, including:
Establish explainability requirements based on use case risk and stakeholder needs
Implement monitoring that captures decision factors, not just outcomes (see the sketch below)
Create visualization tools that make agent operations understandable to key audiences
Develop explanation frameworks that translate technical details into business terms
Train teams on how to interpret and communicate agent operations
The right level of explainability varies by context—not all decisions require the same degree of transparency—but intentional design is essential to avoid critical gaps.
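A modest first step toward closing the explainability gap is to log the factors behind each decision alongside the outcome, so a post-incident review can ask "why" and not just "what." The factor names, weights, and pricing scenario below are illustrative assumptions, not a full explainability framework.

```python
# Minimal sketch: log the factors behind each agent decision alongside the outcome,
# so reviews can later answer "why did the agent do X?" instead of only "what did it do?".
# The factor names, weights, and pricing scenario are illustrative assumptions.
import json
from datetime import datetime, timezone

def log_decision(event_log: list, agent_id: str, subject: str,
                 outcome: str, factors: dict[str, float]) -> None:
    """Record the outcome together with the weighted factors that produced it."""
    event_log.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent_id": agent_id,
        "subject": subject,
        "outcome": outcome,
        # Sorted so the dominant drivers are obvious in post-incident review
        "factors": dict(sorted(factors.items(), key=lambda kv: abs(kv[1]), reverse=True)),
    })

events: list[dict] = []

# Example: a pricing agent lowers a price; the log keeps the "why", not just the "what".
log_decision(
    events,
    agent_id="pricing-agent",
    subject="SKU-12345",
    outcome="price lowered 4%",
    factors={"competitor_price_gap": -0.6, "inventory_level": 0.3, "demand_trend": -0.1},
)
print(json.dumps(events[-1], indent=2))
```

With records like these, the retail team in the example above could at least have checked whether competitor pricing or inventory signals were driving the declines before rolling the system back.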