Compliance teams at large technology companies operate under a level of regulatory scrutiny that most organizations never encounter. FTC settlements, GDPR transfer requirements, CCPA obligations, SOC audits: each one generates its own documentation burden, and the teams responsible for meeting those obligations often do so through manual processes that consume hundreds of hours per audit cycle. Sumit Sharma has spent the last several years building automation systems to replace those manual workflows. His work has covered automated control monitoring and evidence generation, third-party risk assessment tooling used by tens of thousands of employees, and security awareness training platforms serving over 650,000 users globally. He has also contributed to the professional knowledge base through ISACA Journal publications, peer review work for the Cloud Security Alliance and IEEE, and speaking engagements on third-party risk management and AI governance. We spoke with him about what compliance automation looks like when it’s actually running at scale, where AI fits into risk assessment today, and what most vendors get wrong about how these programs operate inside large companies.
This project involved three components: continuous control monitors, a failure escalation mechanism, and automated evidence generation. The generated evidence was available in a single system, organized by control, for auditors to readily consume. This reduced the manual overhead on business and technology teams, who previously had to pull evidence for specific samples during each audit cycle. With any system like this, issues can arise. A couple of examples of what broke along the way: the monitoring logic was configured incorrectly, or the wrong data source was selected, either of which led to inaccurate monitoring or evidence generation.
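For readers who want to picture the moving parts, here is a minimal, hypothetical sketch of a continuous control monitor with failure escalation and evidence capture. It is not the system described above; the control name, check logic, and escalation channel are invented purely for illustration.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Callable

@dataclass
class ControlResult:
    control_id: str
    passed: bool
    checked_at: str
    details: dict = field(default_factory=dict)

# Hypothetical registry: each control ID maps to a callable that inspects a data source.
CONTROL_CHECKS: dict[str, Callable[[], ControlResult]] = {}

def register_control(control_id: str):
    def wrapper(fn):
        CONTROL_CHECKS[control_id] = fn
        return fn
    return wrapper

@register_control("AC-01-mfa-enforced")
def check_mfa_enforced() -> ControlResult:
    # A real monitor would query an identity provider API here;
    # the data source is stubbed to keep the sketch self-contained.
    non_compliant_users: list[str] = []
    return ControlResult(
        control_id="AC-01-mfa-enforced",
        passed=len(non_compliant_users) == 0,
        checked_at=datetime.now(timezone.utc).isoformat(),
        details={"non_compliant_users": non_compliant_users},
    )

def escalate(result: ControlResult) -> None:
    # Placeholder for the escalation mechanism (ticket, pager, email).
    print(f"[ESCALATION] control {result.control_id} failed: {result.details}")

def run_monitoring_cycle(evidence_store: list[ControlResult]) -> None:
    """Run every registered control, escalate failures, and persist
    each result as audit evidence regardless of outcome."""
    for control_id, check in CONTROL_CHECKS.items():
        result = check()
        evidence_store.append(result)  # evidence auditors can consume later
        if not result.passed:
            escalate(result)

if __name__ == "__main__":
    evidence: list[ControlResult] = []
    run_monitoring_cycle(evidence)
    print(f"{len(evidence)} evidence record(s) generated")
```

The key design point the sketch tries to capture is that evidence is written on every run, pass or fail, so auditors consume the same records the monitoring itself produces rather than asking teams to assemble samples after the fact.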
The portal overhaul was mostly from a user interface (UI) standpoint. Before we made the changes, we had some internal metrics to start from; for example, the customer satisfaction score of the tool was lower than the expected baseline. We had also been seeing a lot of internal user tickets complaining about UI issues, slowness, and difficulty moving between screens within the tool, which pointed us toward a UI change to better guide users. So in a way we heard user feedback and acted on it. Before rolling out the new portal, we met with a few users and teams who used the tool more frequently than others. We also heard feedback from upstream and downstream system users, which gave us additional perspective, helped us focus our requirements in the right direction, and let us make improvements to the key UI components. Regarding adoption, we started internal communications on what changes we planned to bring with the new UI and when, to avoid any surprises. We also invited some users to user acceptance testing to get their firsthand feedback. Upon rollout, we uploaded videos to the portal walking users through all the new features.
I have seen organizations automating manual workflows, such as sending reminders, and also building risk assessment logic that rates a third party based on certain criteria. Additionally, I think certain organizations are trying to integrate risk reviews with other reviews within the third-party life cycle to further create a seamless process for internal and external users.
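To illustrate what criteria-based third-party rating logic can look like, here is a minimal sketch. The criteria, weights, and risk tiers below are assumptions made up for this example, not any organization’s actual model.

```python
# Hypothetical inherent-risk scoring for a third party.
# Criteria, weights, and tier thresholds are illustrative only.
CRITERIA_WEIGHTS = {
    "handles_personal_data": 3,
    "has_production_access": 3,
    "processes_payments": 2,
    "subprocessors_involved": 1,
    "operates_in_restricted_region": 2,
}

def rate_third_party(answers: dict[str, bool]) -> str:
    """Return a risk tier from yes/no intake answers.

    `answers` maps each criterion to True/False, typically collected
    from a vendor intake questionnaire."""
    score = sum(weight for criterion, weight in CRITERIA_WEIGHTS.items()
                if answers.get(criterion, False))
    if score >= 7:
        return "high"
    if score >= 4:
        return "medium"
    return "low"

# Example: a vendor that handles personal data and has production access.
print(rate_third_party({
    "handles_personal_data": True,
    "has_production_access": True,
    "processes_payments": False,
}))  # -> "medium"
```

In practice the output tier would then drive how deep the security, privacy, and legal reviews go, which is where the integration with the rest of the third-party life cycle comes in.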
What I have been noticing is that practitioners are taking a more risk-centric view of AI, which means they are looking at it as a new cybersecurity or compliance surface rather than just a new innovation. They are pushing for auditable controls mapped across the entire AI lifecycle as opposed to high-level ethics statements. There is also strong demand for cross-framework alignment (NIST, ISO, EU AI Act) to reduce fragmentation. Overall, AI governance must be adopted to make AI development safer and faster, where governance is not just a checkbox exercise but something that can enable trust, innovation, and speed. This can also be a key differentiator for organizations that are either building AI or adopting it.
I would answer this question a little differently. Every third party requires sign-off from legal, procurement, privacy, and security, and this is the right industry practice that regulators want to see. Your question seems to be more about how you run a project with so many stakeholders. When you work with multiple cross-functional stakeholders, a project’s problem statement and the impact it will have play a key role. A nice-to-have project will not fly with so many stakeholders. Hence, before starting or conceptualizing any project, one must clearly document the problem and the impact. Projects that are required to satisfy a regulatory requirement can be an easy sell because no one wants the company to face fines or a damaged reputation due to non-compliance. However, projects aimed at proactive risk mitigation can get a lot of pushback. Potential reasons include operational overhead on different functions and a lack of resources to manage it. To address these concerns, you should identify key metrics for this group of stakeholders so they can easily quantify the impact on their teams. This helps them better prepare for those operational constraints and also helps you align on the right timeline for the project go-live. This way you do not run a project that is set up to fail, but a well-thought-out one where requirements are clearly captured; it takes more time, but you deliver a highly impactful project.
We are already seeing or reading about instances where AI agents can access sensitive data and coordinate with other agents. From an AI perspective, I feel these risks should be mapped to fundamental principles around internal control and governance. Traditional frameworks such as COSO emphasize segregation of duties, monitoring, and risk assessments that ensure reliable operations. However, they do not address the novel risks introduced by agentic AI, such as over-privileged access, inter-agent collusion, and prompt-based manipulation. There is a need for a control framework that integrates classical IT general controls (ITGC) with emerging AI-specific considerations. Organizations must think about measuring the autonomy of such agents, including what they can access or invoke without human intervention. Model drift will require tracking, and organizations must log the individual steps, action chains, and feedback loops of agents. Also, as mentioned before, such frameworks must align with global regulatory requirements, which gives organizations an opportunity to rationalize their control environment rather than creating multiple similar controls to satisfy AI requirements for different country-level or regional regimes.
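As a rough illustration of measuring agent autonomy and logging action chains, here is a hypothetical sketch. The tool names, allow-lists, and approval rule are invented assumptions, not a prescribed control framework.

```python
import json
from datetime import datetime, timezone

# Hypothetical allow-list defining what an agent may invoke autonomously
# versus what requires human approval (a crude autonomy boundary).
AUTONOMOUS_TOOLS = {"search_docs", "summarize_ticket"}
HUMAN_APPROVAL_TOOLS = {"export_customer_data", "change_iam_policy"}

ACTION_LOG: list[dict] = []  # append-only chain of agent actions for auditors

def record_action(agent_id: str, tool: str, allowed: bool, reason: str) -> None:
    ACTION_LOG.append({
        "agent_id": agent_id,
        "tool": tool,
        "allowed": allowed,
        "reason": reason,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })

def gate_tool_call(agent_id: str, tool: str, human_approved: bool = False) -> bool:
    """Segregation-of-duties style gate: autonomous tools pass, sensitive
    tools require an explicit human approval flag, anything else is denied."""
    if tool in AUTONOMOUS_TOOLS:
        record_action(agent_id, tool, True, "within autonomous scope")
        return True
    if tool in HUMAN_APPROVAL_TOOLS and human_approved:
        record_action(agent_id, tool, True, "human approval recorded")
        return True
    record_action(agent_id, tool, False, "outside autonomy boundary")
    return False

gate_tool_call("agent-42", "search_docs")
gate_tool_call("agent-42", "export_customer_data")  # denied without approval
print(json.dumps(ACTION_LOG, indent=2))
```

The point of the sketch is that every tool invocation, allowed or denied, lands in an append-only log, which is the kind of traceability that lets an auditor reconstruct what an agent did and under whose approval.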
I believe there is no single environment that has taught me the most about managing technology risk. Working in consulting gave me broad exposure across industries, clients, and local and global regulations. Banking taught me why technology risk is so important to manage in a financial institution, simply because one systemic issue can have global ramifications across the bank and can lead to financial loss, which directly impacts the bank’s revenue and, on top of that, its customers’ investments. Coming into tech with all this experience helped me understand the leadership mindset and how important risk management is to them and to the business. Unlike consulting or banking, big tech companies operate at massive scale, velocity, and global regulatory exposure. What I learned is that the basic risk management fundamentals still apply, but they need to move beyond point-in-time checks to more continuous risk monitoring. Also, the blast radius of a failure is immediate and user-impacting, requiring risk-based decision making. Risk is important, but it should not slow down the business. And since the risk dimension here is more about managing user data and its impact, some regulatory requirements from other industries, such as banking, may not apply. Hence risk management needs to be tightly coupled with product design, data architecture, and automation rather than being a mere policy. What I learned, and am still learning, in tech is balancing innovation speed with regulatory obligations, which has definitely sharpened my ability to design projects that scale and are more preventive than reactive.
I feel vendors selling AI compliance tools underestimate how fragmented and complex large companies are. There is an underlying assumption that there is one centralized governance body, when in practice this responsibility is split across multiple compliance teams with overlapping authority. Tools are built as dashboards and at times ignore that the actual compliance processes are executed through multiple systems and development pipelines. Also, if tools do not help automate manual workflows such as evidence generation, it is difficult for them to scale. Another mistake, I feel, is treating compliance as a static checklist rather than a continuous process that should incorporate regulatory updates and model changes. Tools are often built to be ready for regulator reporting without thinking about usability for the engineers and program managers who will actually work in the tool day in, day out. Also, bigger organizations care less about flashy risk scores and are more concerned about traceability, auditability, and accountability when a regulator asks, “who approved it and why?”


