
March 2, 2026

Bottleneck to breakthrough: AI governance that scales


About this episode


Is your AI governance a safety net or a bottleneck? Most companies treat AI governance as a “one-size-fits-all” checklist, but for one organization, that led to a nightmare backlog: hundreds of stalled AI use cases.

Bryan McGowan, Global Trusted AI Leader at KPMG, reveals how to build a governance framework that moves at the speed of innovation and enables scale without sacrificing security.

The conversation also looks ahead to what’s coming next, including the governance challenges posed by agentic AI, identity and access controls, and the importance of testing and validation across the AI lifecycle.


Host
Ojas Rege


Guest
Bryan McGowan




Ojas Rege is SVP and GM of Privacy and Data Governance at OneTrust, with 35 years of experience in enterprise security and applications. He advises global organizations on responsible data and AI strategy. His perspective on technology has been featured in Bloomberg, CIO Magazine, Financial Times, and Forbes. Ojas holds BS and MS degrees in Computer Engineering from MIT and an MBA from Stanford, is a Fellow of the Ponemon Institute, and holds CIPP/E and CIPM privacy certifications.

Bryan McGowan is a Principal in the KPMG Advisory practice and leader of Global and US Trusted AI. With over 24 years of experience running large and complex projects, he plays a key role in advancing strategic AI initiatives by integrating AI systems into business processes.


Bryan McGowan:

The client called and said, "Hello? We've published our AI governance policy. We feel it's a very well-thought-out policy. It's built around some industry standards, such as NIST and ISO, and we've rolled that out across our organization."

 

 

Ojas Rege:

Bryan McGowan is the Global Trusted AI Leader for KPMG. He works with clients to strategically and safely integrate AI initiatives into their business processes. He's a trusted advisor in the industry.

 

With this large tech company, nothing sounded out of the ordinary to Bryan. An informed AI governance policy, check; based on industry standards, check; rolling it out across the company, check again. But people don't usually call Bryan if things are going smoothly.

 

 

Bryan McGowan:

It was taking them weeks and weeks of time to get through an individual review or risk assessment of a given AI system.

 

 

Ojas Rege:

Turns out the company's well-thought-out AI governance policy was causing quite the backlog.

 

 

Bryan McGowan:

The backlog at that time was actually a couple hundred systems.

 

 

Ojas Rege:

A couple of hundred AI systems and use cases all piling up. That's no small task, especially when that number is growing daily.

 

 

Bryan McGowan:

Maybe today it's 100, maybe tomorrow it's 200, but if you're only working through 5 to 10 systems a month and your backlog's growing by 20 to 30, it's a never-ending cycle you're never able to get through. And so they had to step back and say, "There's got to be a better way for us to think through and to implement this program. And if we're ever going to scale it more broadly across our organization, we're going to have to rethink the way we're doing things today."

 

 

Ojas Rege:

So what happens when your governance framework doesn't scale? And what do you do about it?

 

I'm Ojas Rege, general manager, privacy and data governance, and this is Trustonomy, an original podcast from OneTrust. This season, we're diving into a space that's shifting rapidly, AI-ready governance. We're going to discuss the real work of building programs that keep pace with innovation. We'll look at what it takes to modernize governance for the AI era.

 

In this first episode, we're starting with a story about AI governance as a process, one that involves knowing when to pause, step back, and take a hard look at the approach itself. The key is identifying what's truly high risk, in other words, which use cases need a deep dive and which ones you can fast path.

 

Back to Bryan. An AI governance backlog isn't just frustrating; it undermines buy-in for the governance program itself and can impact revenue and reputation.

 

 

Bryan McGowan:

At some point, you're going to start to slow production, and you're going to have a negative impact on the business, and you're either going to have to have a trade-off of not completing your governance process or potentially going live with risk and vulnerabilities that you may not want to have in production.

 

 

Ojas Rege:

A trade-off between not completing your governance process and potentially going live with vulnerabilities is obviously not what anyone wants, including Bryan's client. They were acutely aware of the negative impact that might have.

 

 

Bryan McGowan:

There were a number of organizations that were maybe not being as prudent in terms of governance, protecting some of the data, and/or monitoring the accuracy and reliability of the outcomes, and that were having very public failures and/or having penalties handed down to them from regulators for some of those mistakes. And so I think it was really a culmination of all of those facts that said, "We need to get this right, but we need to do it in a way that's going to be scalable and not add additional burdens to the organization."

 

 

Ojas Rege:

That's the gold standard, isn't it? An AI governance policy that ensures AI systems are developed and used ethically, safely, and responsibly, and a policy that's scalable, that helps you do your job better and faster.

 

 

Bryan McGowan:

So one of the challenges that they were acknowledging to us was a fairly lengthy process that probably from start to finish was taking 30 to 45 days on average to get through all of those steps and get to a point where folks were comfortable. And the reason was it was taking them weeks and weeks to go through this process manually and gather all the information that was required, review the documentation, make sure that they were meeting the essence and the spirit, confirming if there are technical guardrails, how those had been configured, et cetera. And they were doing that for all of the use cases that were coming through the door.

 

 

Ojas Rege:

This reminds me, I think it's that old classic Charlie Chaplin movie where he is on the assembly line and he's trying to do things, and the cans keep coming, and he can't keep up, and then everything's a mess.

 

 

Bryan McGowan:

Yeah.

 

 

Ojas Rege:

Not every use case needs the same level of scrutiny. Bryan's client needed a better approach.

 

 

Bryan McGowan:

One of the early, I'll call it revelations that stood out was really taking this concept of a risk-tiered approach. It seems relatively simple, but I think that the concept of stepping back and thinking about risk is not a one-size-fits-all, and therefore, governance shouldn't be a one-size-fits-all approach for every AI system in the environment.

 

 

Ojas Rege:

How did you think about tiering risk and understanding the difference in risk patterns?

 

 

Bryan McGowan:

In terms of thinking through the patterns, it's what are we trying to enable? So very simply, voice to text, text to speech, video to text, audio to text, those are very simple traits. But if you think about other things like document summarization, document review, creating certain types of outputs, those can also be outputs. So the way that we're introducing data, via attachments or via upload or via search, and also the way that outputs are being generated, and/or whether we're applying templates or certain data formats, those are the sorts of things that we want to think through that ultimately would drive some of the risk assessment applied to an individual system.

 

 

Ojas Rege:

Got a terminology question for you. We talked about why one size fits all does not quite work for this, but what do we call the alternative? How do you refer to it?

 

 

Bryan McGowan:

We just call it a risk-tiered approach to governance. I don't know if that's the industry-accepted term, but in our view, there are different risks and considerations. And if we look to even some of the leading regulations, so I'll pick on the EU AI Act and its concept of a risk-tiered approach, and we're seeing it in several others, there are certain things that are riskier or that present more risk and/or exposure for organizations. So we're taking our cues from some of those leading frameworks and regulations and really trying to think through what are the key risks and challenges that we're trying to manage against, and how might we ultimately tier our approach so that we're not applying that one size fits all?

 

 

Ojas Rege:

Yeah, because I was thinking, if you said one size fits all, and then you said no size fits all, then you'd worry that no size fits anyone ... A risk-tiered approach is a nice, easy, practical, and obvious way to phrase it.

 

A risk-tiered approach to governance, as Bryan says, was one of the key suggestions that helped with the tech company's backlog. The other had to do with a shift in who did what or, more accurately, in what did what.

 

 

Bryan McGowan:

There was not a technology solution in play, and so we explored that with them as well, just to drive more automation, because of the amount of information that you're trying to collect; if you're trying to do all that manually, it really is creating just another compliance burden on the organization. Hey, maybe we should think about automated workflows. Maybe there are ways that we can build APIs to bring data in from some of the underlying hyperscaler platforms and/or tools that they're using. Maybe there are things that we can do around auto-discovery of assets in the environment to make it less burdensome in terms of how we're gathering the information on the inventory that exists.

 

 

Ojas Rege:

Bryan and his team had solid recommendations, but moving from a one-size-fits-all manual process to a more flexible AI governance strategy that's risk-tiered and includes automation also requires a major shift in mindset.

 

 

Bryan McGowan:

We set up a multi-day workshop where we had the key stakeholders, both from the overarching governance committee and from each of the existing technology reviews, third-party reviews, security reviews, et cetera, and got them all into a room and started to really map out what that process was going to look like, in order to think about a potential fast path, where something may move through with fewer approvals and/or fewer requirements, and what it might look like for the things that we want to spend more time on.

 

It was not a hard sell because the manual process was very burdensome. And from the governance team perspective, they're trying to make this less burdensome, we're trying to find ways to make this more efficient and not add additional work to your plate, but we also need your help in order to make sure that we're designing this in a way that leverages the information that you have available versus asking you to create new information. So I think the collaboration was very important.

 

And one of the things I noticed early on is that, when you think about the various components of a governance program, it requires parts of the organization that probably haven't spoken together much in the past to be much more collaborative earlier on in the process. And so I think those were all very key components to really bring folks along on the journey and make sure the program had their support.

 

 

Ojas Rege:

And if there was any remaining skepticism, it faded when the team saw the results of the new approach.

 

 

Bryan McGowan:

Being able to take that process that was 30 to 45 days and bring it down to two weeks or less in most cases was, I think, very exciting for folks who had spent the last couple of months trying to go through this process manually, watching the backlog pile up in terms of the workload they hadn't yet been able to get to, because it was just taking so long to conduct the process in the previous model.

 

 

Ojas Rege:

The reason people go down the one-size-fits-all path is that it's the path of least resistance; it sounds the easiest. And so when they realize it doesn't work, it feels really daunting to put something in place that would work; it just feels like overwhelming complexity. What are your practical tips for implementation?

 

 

Bryan McGowan:

Yeah. We have a series of intake questions. Those intake questions are built around some of the categories: type and nature of the system, underlying technology, are we connecting proprietary data, are there regulatory considerations, et cetera. Behind those questions is a series of rules that we can configure, and those rules will automatically determine which principles of our AI framework are required and, by default, which corresponding risks and controls will be needed as well. And those are all also driven off of a risk assessment.

 

So as you're answering the questions, what you get on the backside using the rules logic is a very streamlined low, moderate, high, unacceptable, as an example. And then you can kick off your various different workflows from there. And so I think having that simplified yet consistent approach, and to the extent that you can apply that in a consistent manner, it takes away a lot of the judgment and a lot of the back and forth in terms of how might we rate this one versus this other one, et cetera.
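The rules-driven tiering Bryan describes, intake answers feeding configurable rules that roll up to a low/moderate/high/unacceptable rating, can be sketched in a few lines. This is a hypothetical illustration only; the question fields, weights, and thresholds below are invented for the example and are not KPMG's actual rules.

```python
# Hypothetical sketch of a rules-driven risk-tiering step: intake answers
# feed configurable rules, which roll up to a single risk tier.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    applies: Callable[[dict], bool]  # predicate over the intake answers
    weight: int                      # risk points contributed when it applies

# Illustrative rules only -- fields and weights are invented for this sketch.
RULES = [
    Rule("uses_proprietary_data", lambda a: a.get("proprietary_data", False), 2),
    Rule("regulated_domain",      lambda a: a.get("regulated", False),        3),
    Rule("autonomous_actions",    lambda a: a.get("autonomy") == "autonomous", 4),
    Rule("external_outputs",      lambda a: a.get("outputs_external", False), 2),
]

def risk_tier(answers: dict) -> str:
    """Map a set of intake answers to a tier via the configured rules."""
    score = sum(rule.weight for rule in RULES if rule.applies(answers))
    if score >= 8:
        return "unacceptable"
    if score >= 5:
        return "high"
    if score >= 2:
        return "moderate"
    return "low"

# An internal summarizer over public docs tiers low; a system touching
# regulated, proprietary data tiers much higher.
print(risk_tier({"proprietary_data": False}))                    # -> low
print(risk_tier({"proprietary_data": True, "regulated": True}))  # -> high
```

Because the rules and thresholds live in data rather than in a reviewer's head, two assessors answering the same intake questions always land on the same tier, which is the consistency, and the removal of case-by-case judgment, that Bryan is pointing at.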

 

 

Ojas Rege:

How will the emergence of AI agents change this governance process?

 

 

Bryan McGowan:

I definitely think there's going to be some unique considerations on the agentic side. If we're moving into an era where we start to have, and there's a lot of terminology out there, so a super agent or an orchestration agent or whatever other terminology you want to use, but if I have an agent that is able to operate either autonomously or semi-autonomously and can orchestrate and/or call upon actions from other agents, I now need to think about the involvement and the human in the loop differently, just as an example.

 

So what I mean by that is, if I have an overarching, I'll call it super agent that's going to call upon, as I mentioned, four or five other agents, it's going to execute a series of different tasks before I ultimately see an output. So in addition to my normal review of the output that I may perform on, say, a document or an analysis that's been created, I'm also going to want some sort of immutable logging that I'll likely look at to start to think through whether the right steps were completed. Am I comfortable with the sources that are being cited in this document, as an example?
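One way to make the "immutable logging" idea concrete is a tamper-evident, hash-chained step log: each entry commits to the hash of the previous one, so any after-the-fact edit to the history breaks verification. The sketch below is a minimal illustration with invented agent and field names, not a description of any particular product's logging.

```python
# Minimal tamper-evident (hash-chained) step log for an orchestrating agent.
# Editing any past entry changes its hash and breaks the chain on verify().
import hashlib
import json

GENESIS = "0" * 64  # placeholder "previous hash" for the first entry

def append_step(log: list, agent: str, action: str, detail: str) -> None:
    """Append one step, chaining it to the hash of the previous entry."""
    prev_hash = log[-1]["hash"] if log else GENESIS
    entry = {"agent": agent, "action": action, "detail": detail, "prev": prev_hash}
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)

def verify(log: list) -> bool:
    """Recompute every hash and check the chain is intact."""
    prev = GENESIS
    for entry in log:
        body = {k: entry[k] for k in ("agent", "action", "detail", "prev")}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

# Example run: a super agent dispatching work to sub-agents.
log = []
append_step(log, "super_agent", "dispatch", "call research agent")
append_step(log, "research_agent", "search", "query internal docs")
append_step(log, "super_agent", "compose", "draft summary")
print(verify(log))  # -> True
```

A reviewer checking such a log can confirm which agents acted, in what order, and that nothing was rewritten after the fact, exactly the "were the right steps completed" review described above.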

 

 

Ojas Rege:

There are many new opportunities and challenges introduced with agentic AI, like how do we learn to manage AI agents when we're still struggling to manage people? And what are the implications of having AI agents on enterprise access and authorization?

 

 

Bryan McGowan:

I think it's one thing if you're using an AI system that is launched and whose interaction is driven by an individual. It may inherit my permissions, and based upon whether I have access to something, it's going to determine whether or not the system can access certain information. That's a common model that we're seeing today. But moving into a more autonomous or semi-autonomous model, there's going to need to be a digital identity for these agents as well. And so you need to think through how you're going to approach that, how you're going to manage that access, and what level of access and autonomy we're going to allow if an agent seeks to create an additional identity or additional connections that we haven't configured.

 

And some of these I'm alluding to, if folks wanted to read more, we did just publish a study that we did with the University of Amsterdam, which essentially was looking at, can you have a zero-employee company, and what might that look like? And so some of those learnings in terms of agents starting to create additional connections that weren't initially configured, those are things that came out of that study that I think, again, go back to there's some additional ethics, governance, access considerations that we're going to need to think through as we get into the agentic era.

 

 

Ojas Rege:

Final question: We've covered a lot of ground, any learnings or challenges we haven't covered that you feel people should be watching out for?

 

 

Bryan McGowan:

One of the pieces of work that we're doing right now that we're really starting to see, for lack of a better term, just high-value add from, is building testing programs. So in addition to the, I'll call it the upfront risk assessments that may be done within a governance program, making sure that you have some sort of a testing program to validate: are all of these things that we're asking folks to do actually working as we think they are?

 

And so what we've been finding from some of the work with our clients to date is that, in many cases, we look at the governance documentation that's been provided, and most of that is someone having submitted and/or certified that certain guardrails are in place and that certain tests have been completed.

 

As we're doing this backend testing, the approach I'm describing, we call it system cards, where we're doing app-level, code-level, and system-level tests, including prompting, we're ultimately trying to determine: are these guardrails working as expected? And the findings from that process have been really eye-opening.

 

I'm thinking of one of my financial services clients: they've opened tickets with some of their technology partners for guardrails that were supposed to do a certain thing but were shown not to be working. So having some sort of mechanism to test and validate that within the governance program is something where we're seeing companies find a lot of value recently.

 

 

Ojas Rege:

You have to think about governance across the lifecycle of the AI system, not just building it, but testing and validating it, and figuring out all these boundaries can be tricky. Here are a few of my takeaways. First, AI is ubiquitous; it's everywhere, touching every codebase, dataset, and process. The hidden challenge isn't just risk; it's scale. If your governance can't keep up, your backlog grows, slowing innovation and holding back the business.

 

Next, one size doesn't fit all. A risk-based, flexible approach to governance lets you focus on the high-risk use cases while fast-tracking the low-risk ones, preventing backlog and keeping innovation moving.

 

And finally, as we move into an era of agentic AI, managing identity and access becomes critical. Governance must include agent identity and authorization to set the guardrails that keep AI safe and effective.

 

This is Trustonomy. In this series, we're exploring what AI-ready governance looks like in action, not just theory. We'd love to hear what's working for you, what lessons you've learned, and what you want to hear more about in future episodes. Visit us at onetrust.com/podcasts/trustonomy. I'm Ojas Rege, thanks for listening.
