Agentic artificial intelligence (AI) promises to transform how organizations operate. Unlike earlier AI tools designed to summarize documents or generate contentAgentic artificial intelligence (AI) promises to transform how organizations operate. Unlike earlier AI tools designed to summarize documents or generate content

Laura I. Harder: How to Prepare Boards for the Security Risks of Agentic AI

2026/03/19 13:28
6 min read
For feedback or concerns regarding this content, please contact us at crypto.news@mexc.com

Agentic artificial intelligence (AI) promises to transform how organizations operate. Unlike earlier AI tools designed to summarize documents or generate content, these systems can act autonomously, execute tasks and interact with enterprise systems. For boards overseeing technology risk, that shift introduces a fundamentally different category of security concern. Laura I. Harder, Vice President of the Information Systems Security Association (ISSA) International and an offensive cyber officer in the U.S. Air Force Reserves, believes many leaders underestimate how quickly those risks can materialize. “The risk to organizations really comes down to having too much agency,” Harder says. “Agents can change permissions, change functionality and create actions that you maybe weren’t expecting.” As organizations move from experimenting with AI to operationalizing autonomous agents, boards must move just as quickly to establish governance structures, guardrails and oversight mechanisms capable of managing systems that can make decisions and take action without human intervention.

Agentic AI Changes the Security Equation

For the past several years, most corporate AI deployments have centered on tools that analyze information or generate outputs. Those capabilities introduced privacy and data integrity concerns, but the systems themselves rarely executed actions inside enterprise environments. Agentic AI changes that dynamic. Instead of simply offering recommendations or filtering resumes, agents can trigger workflows, access databases and interact with software systems across an organization. “It’s now not just giving us advice. It’s taking action and it acts on its own,” Harder says.

Laura I. Harder: How to Prepare Boards for the Security Risks of Agentic AI

That autonomy creates new security challenges because the systems can be manipulated. Just as humans can fall for social engineering, AI agents can be tricked into executing unintended tasks through techniques such as prompt injection. Harder points to real-world examples where hidden instructions embedded in inputs alter how AI behaves. “The AI is going to behave based off of the instructions it’s given,” she says. These threats are compounded by the opaque nature of many AI models. Organizations often rely on third-party tools without full visibility into how decisions are made. The result is a system capable of executing actions while operating in ways that are difficult to predict.

The Hidden Risk Boards Often Overlook

When boards begin evaluating agentic AI, Harder says the most underestimated vulnerability is permissions. Every AI agent operates within a network of systems, data sources and applications. The level of access granted to those systems determines the potential damage if something goes wrong. Harder describes this as the system’s “blast radius.” An agent that is given broad permissions may be able to interact with far more data and infrastructure than leaders realize.

A common example occurs when AI systems are connected to internal collaboration tools or document repositories. If a widely shared folder contains sensitive information, an agent operating in that environment will be able to access and use that data within the permissions granted to the user, service account, or integration it runs under. In practice, that means the agent can surface or act on information that may have been broadly accessible but not actively monitored.

Third-party AI services introduce an additional layer of risk. “If you’re using a model, what information does that model have access to, and can your information be used to train that model?” Harder asks. Without clear controls, proprietary information, intellectual property or sensitive customer data could unintentionally leave the organization through AI interactions.

Building Governance That Can Keep Up With AI

AI governance must be treated as a structured program rather than a technology add-on. Organizations should begin by establishing a dedicated AI governance board, often modeled after existing privacy or risk governance committees. That group should adopt established frameworks such as the NIST AI Risk Management Framework or international standards like ISO 42001. “Having AI governance and AI protections is not just a product that you can purchase,” she says.

These frameworks provide guidance on policies, risk assessments and operational controls. But they still require organizations to define how AI will function within their environment and what data it will be allowed to access. “You need policies, procedures and inventories,” Harder says. “Those pieces will help build the infrastructure that your teams can work from.” One emerging practice is the creation of an “AI bill of materials” that inventories every AI tool used inside the organization, what systems it connects to and what data it can access. Without that visibility, organizations cannot fully understand the exposure created by autonomous systems interacting with enterprise infrastructure.

Guardrails That Prevent AI From Going Rogue

Even with governance structures in place, agentic systems require technical safeguards that limit how they operate. The most effective strategy is to design security controls from the beginning. Systems should initially be developed inside closed, controlled sandbox environments using test data (not production data) and limited privileges. “As you are building your agentic system, you should do so in a sandbox,” she says. “It’s a controlled environment where synthetic systems can operate with low risk and no privilege.”

Testing must also include red teaming, where security professionals attempt to break the system or manipulate its behavior. These exercises expose vulnerabilities before systems are deployed into production environments. “Having a human in the loop ensures that if and when your AI tool decides to make a decision that maybe you didn’t want it to, there’s some sort of restriction,” Harder says. Isolation techniques can also limit risk. In some architectures, agents are contained inside virtual machines where policies restrict what commands they can execute and what systems they can access.

Board Oversight Ultimately Matters

For boards, the rise of agentic AI is a governance and accountability challenge and Harder stresses that organizations remain responsible for the actions their AI systems take. “You cannot go back and say, ‘I didn’t know it could do this,'” she says. “You have to do your due diligence.” That responsibility carries both legal and fiduciary implications. Boards must ensure that autonomous technologies are implemented with clear oversight, constrained authority and continuous monitoring. “Do not connect agents to privileged tools until you can prove that it has constrained authority, human checkpoints and monitoring,” Harder says. As agentic AI continues to move from experimentation into core operations, the organizations that succeed will be those that treat governance and security as foundational requirements rather than afterthoughts.

Follow Laura I. Harder on LinkedIn for more insights.

Comments
Market Opportunity
The AI Prophecy Logo
The AI Prophecy Price(ACT)
$0.01352
$0.01352$0.01352
-1.74%
USD
The AI Prophecy (ACT) Live Price Chart
Disclaimer: The articles reposted on this site are sourced from public platforms and are provided for informational purposes only. They do not necessarily reflect the views of MEXC. All rights remain with the original authors. If you believe any content infringes on third-party rights, please contact crypto.news@mexc.com for removal. MEXC makes no guarantees regarding the accuracy, completeness, or timeliness of the content and is not responsible for any actions taken based on the information provided. The content does not constitute financial, legal, or other professional advice, nor should it be considered a recommendation or endorsement by MEXC.