McKinsey’s 2025 State of AI report found that 88% of organizations now use AI in at least one business function. Generative AI adoption jumped from 33% to 72% in a single year. The interesting bit, though, is that only 7% have scaled AI across their organization.
That’s because most organizations don’t have a clear AI strategy.
We hear it all the time: “We know we should be doing more with AI, but we have no idea where to start.”
The answer is almost always the same. Before you buy tools, hire an AI lead, or sign up for the enterprise plan on anything, you need to understand where you actually stand. You need an AI audit.
At Refound, we’ve run dozens of these across industries from manufacturing to media to financial services. This guide covers what an AI audit is, what it involves, and how to run one yourself. If you want to do it internally, this gives you the framework. If you decide to bring in outside help, you’ll at least know what good looks like.
What is an AI audit?
When most people hear “AI audit,” they think compliance. That’s a real and growing field, with frameworks like NIST’s AI Risk Management Framework and the EU AI Act creating new compliance requirements. But that’s not what we’re talking about here.
An AI audit, in the operational sense, is a structured assessment of your organization’s AI readiness and maturity. It maps where you are today, identifies where AI can have the most impact, and produces a prioritized roadmap for getting there.
Here’s how the two types compare:
| | Compliance AI Audit | Operational AI Audit |
|---|---|---|
| Primary question | “Are our AI systems safe and legal?” | “Where should we be using AI, and how ready are we?” |
| Focus | Risk, bias, fairness, regulatory adherence | Readiness, opportunity, ROI, implementation |
| Triggered by | Regulation, legal requirements, incidents | Strategy, competition, growth objectives |
| Output | Compliance reports, risk assessments | Prioritized roadmap, ROI projections, maturity score |
| Who leads it | Legal, compliance, risk teams | Operations, strategy, or external consultants |
| Frameworks | NIST AI RMF, EU AI Act, ISO 42001 | Custom organizational assessments |
Both matter. If you’re deploying AI in high-stakes areas like hiring, lending, or healthcare, you need compliance auditing. This guide focuses on the operational side because that’s what most companies searching for “AI audit” actually need: a clear-eyed look at where they stand and where to go next.
The output is a specific, prioritized list of opportunities with clear ROI projections and a phased plan for implementation.
Why companies run AI audits
Nobody wakes up and says “I’d love to audit something today.” There’s always a trigger. In my experience, it’s usually one of these:
The competitive wake-up call. A direct competitor ships an AI-powered feature and your leadership team realizes they don’t even have a strategy. This is the most common trigger I see, and it usually comes with urgency attached.
The shadow AI problem. Your teams are already using ChatGPT, Copilot, Claude, and a dozen other tools you’ve never approved. Customer data is flowing through personal accounts. Nobody knows what’s happening or what the risks are. An ISACA-cited survey of over 12,000 workers found that 60% had used AI tools at work, but only 18.5% were aware of any company policy about it.
Even worse, 68% were accessing AI through personal accounts, and 37% had shared employee data through unapproved platforms. In our audits, the actual numbers are always higher than leadership expects.
The board is asking questions. “What’s our AI strategy?” is now a standard board question. Boards love asking broad questions like this. If you don’t have a clear answer, an audit gives you one.
Budget season. Someone has $500K earmarked for “AI initiatives” and no idea where to spend it. An audit turns a vague budget line into specific, defensible investments with projected returns.
New leadership. A new CEO, CTO, or division head wants to understand the current state before making changes. An audit gives them the baseline.
AI investments aren’t paying off. S&P Global data shows that 42% of companies abandoned most of their AI projects in 2025, up from just 17% the year before. The most common reasons: unclear value and uncontrolled costs. An audit prevents you from joining that statistic by ensuring you invest in the right things first.
If any of these sound familiar, you’re in the right place.
Discover Your AI Maturity Level
Take our 5-minute assessment to find out where you stand on your AI journey and get personalized recommendations.
What an AI audit actually covers
We’ve refined our audit framework over dozens of engagements. It covers six areas, and each one matters. Skip any of them and you’ll end up with blind spots.
1. Current AI usage inventory
First, you map what’s already happening. This is where shadow AI shows up. You’ll survey teams across the organization to find out what tools people are using, what they’re using them for, and whether anyone is paying attention.
We typically find three categories: sanctioned tools (officially approved and deployed), tolerated tools (leadership knows about them but hasn’t formally addressed them), and shadow tools (people are using them and nobody in management has a clue). The third category is always bigger than expected.
A Komprise IT survey found that 90% of organizations are concerned about shadow AI from a privacy and security standpoint, and nearly 80% have already experienced negative AI-related data incidents. It’s already happening, and an inventory is the first step toward getting it under control.
2. Data readiness assessment
AI runs on data. Every company, including yours, has data. The question is whether your data is accessible, structured, clean, and governed in a way that makes it useful.
This means evaluating your data infrastructure, quality, and governance. Can you actually get customer data out of your CRM in a usable format? Are your operational records consistent? Do you even know what data you have across systems? Most organizations score themselves much higher on data readiness than reality warrants.
3. Team capabilities and culture
Who on your team has AI skills? Not just technical skills, but practical experience applying AI to business problems. Are people excited about AI or anxious about it? Is there a culture of experimentation, or do people need permission to try anything new?
This dimension is often the most important and the most overlooked. You can buy the best tools in the world, but if your team doesn’t know how to use them, or worse, actively resists them, you’re burning money. We wrote a detailed guide on rolling out AI to your team that covers this.
4. Process automation opportunities
This is where the ROI lives. You systematically walk through high-volume, repetitive processes and evaluate which ones are candidates for AI automation.
The best opportunities share a few traits: they’re time-consuming, they follow somewhat predictable patterns, they involve structured or semi-structured data, and the cost of errors is manageable. Invoice processing, customer inquiry routing, report generation, data entry, quality checks. These are the quick wins that fund bigger initiatives. For a deeper dive into what automation looks like in practice, see our AI automation service.
5. Build vs. buy analysis
For each opportunity, you need to decide: buy an off-the-shelf tool, customize an existing platform, or build something custom? This decision depends on how unique your process is, how much control you need, your technical capacity, and your budget.
We see companies make two common mistakes here. Some try to build everything custom when a $50/month SaaS tool would solve 80% of the problem. Others spend six figures on enterprise platforms they’ll never fully deploy because they don’t have the internal expertise to configure and maintain them.
6. Risk assessment
What could go wrong? Data privacy issues. Regulatory requirements. Workforce displacement concerns. Vendor lock-in. Integration complexity. Hallucination risk for customer-facing applications.
The goal isn’t to scare anyone. It’s to go in with eyes open. Every opportunity has risks, and responsible organizations address them upfront rather than scrambling after launch.
The AI maturity model: where does your company stand?
One of the most useful outputs of an AI audit is a clear picture of your organization’s maturity level. We assess this across two dimensions: people (how ready your team is) and process (how embedded AI is in your operations).
There are well-known AI maturity frameworks out there. MITRE’s model has five levels across six pillars. Gartner’s has five levels evaluated across seven areas. Both are thorough, and both are designed for large enterprises with dedicated AI governance teams.
For the mid-market and growth-stage companies we typically work with, those models are overkill. I’ve found that four levels, assessed across two core dimensions, capture the real differences between organizations without requiring a six-week assessment just to score yourself:
| Stage | People | Process | What it looks like |
|---|---|---|---|
| Grounded | Limited awareness. Few people use AI tools. | No formal AI in workflows. | “We’ve been meaning to look into AI.” |
| Curious | Individuals experimenting. Pockets of enthusiasm. | Some ad-hoc tool usage. Shadow AI present. | “A few people on the team use ChatGPT, but it’s not official.” |
| Climbing | Formal training underway. AI champions identified. | Pilot projects running. Some processes automated. | “We have a couple of AI projects in production and we’re expanding.” |
| Leading | AI literacy across the org. Dedicated AI roles. | AI embedded in core workflows. Continuous optimization. | “AI is part of how we work. We’re building custom solutions.” |
Most companies we audit land somewhere between Curious and Climbing. They’ve got scattered adoption but no coherent strategy. The audit gives them the map to move up.
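If you want to make the scoring concrete, here’s a minimal sketch of how the two-dimension model could be encoded. The 1-4 scale, the function name, and the min-of-the-two rule are our assumptions for illustration, not part of any formal framework; the rule reflects the idea that an organization is only as mature as its weaker dimension.

```python
# Hypothetical encoding of the four-stage model. Scores are 1-4 on each
# dimension; the overall stage is the lower of the two, since a strong
# process score can't compensate for a team that isn't ready (or vice versa).
STAGES = ["Grounded", "Curious", "Climbing", "Leading"]

def maturity_stage(people_score: int, process_score: int) -> str:
    """Map 1-4 scores on the people and process dimensions to a stage."""
    weakest = min(people_score, process_score)
    return STAGES[weakest - 1]

# Score departments separately rather than averaging them away:
print(maturity_stage(3, 2))  # a team whose process maturity lags -> "Curious"
print(maturity_stage(4, 4))  # strong on both dimensions -> "Leading"
```

Scoring per department, as in the usage lines above, is what lets the audit capture the uneven maturity discussed below instead of producing one misleading org-wide number.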
A few things to keep in mind when assessing yourself:
Maturity isn’t uniform. Your marketing team might be Climbing while your finance team is still Grounded. That’s normal. The audit should capture these differences rather than averaging them away.
Higher isn’t always better right now. A 15-person company doesn’t need an AI Center of Excellence. Match your ambition to your resources and your industry. What matters is that you’re moving deliberately, not that you’re at the top of the chart.
People maturity usually lags process maturity. Companies buy tools faster than their teams learn to use them. If you’re investing in AI automation, invest equally in training your people. Otherwise you’ll have expensive software that nobody touches.
What to prioritize at each stage
Your maturity level determines what to focus on first:
- Grounded: Don’t start with tools. Start with education. Get leadership aligned on why AI matters for your specific business. Run an AI training workshop to build baseline literacy. Then do the audit.
- Curious: Formalize what’s already happening. Create an AI usage policy. Identify your most enthusiastic users and make them AI champions. Pick one or two quick wins and make them official.
- Climbing: This is where strategy matters most. You have momentum. The risk is scattering it across too many initiatives. The audit should help you prioritize ruthlessly and build toward an AI-first operating model.
- Leading: Your audit focus shifts from “where to start” to “where to optimize.” Look for cross-functional opportunities, custom AI development, and competitive moats.
We built an AI maturity quiz based on this model if you want a quick read on where your organization sits across both dimensions.
How to run an AI audit (DIY version)
If you have someone internal with a reasonable understanding of AI and good cross-functional relationships, you can run a solid audit yourself. Here’s the framework.
Phase 1: Stakeholder interviews (Week 1-2)
Interview 10-20 people across departments and levels. You want a mix of executives, middle managers, and individual contributors.
Ask these questions:
- What are the most time-consuming parts of your job?
- Where do you see information bottlenecks or repeated manual work?
- Are you using any AI tools today? Which ones, and for what?
- What would you automate if you could?
- What concerns do you have about AI in your role?
Record these conversations (with permission). The patterns that emerge are gold. When people in different departments mention the same bottleneck, you’ve found an opportunity.
Phase 2: Tool and data inventory (Week 2-3)
Build a complete picture of your current technology landscape. Document:
- Every AI tool currently in use (approved or not). Survey IT, check expense reports, ask team leads directly.
- Data sources and systems. Where does your data live? CRM, ERP, spreadsheets, email, shared drives? How does it flow between systems?
- Integration points. What talks to what? Where are the manual handoffs between systems?
- Data quality. Pick 3-5 critical data sets and assess completeness, accuracy, and accessibility. Don’t try to audit everything. Sample.
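The sampling step above can be as simple as a completeness check over a few dozen representative records. This is a sketch under assumptions: the records, field names, and the idea of treating empty strings as missing are all illustrative, not a prescribed methodology.

```python
# Hypothetical sample: a few CRM contact records with the gaps you
# typically find (missing emails, blank fields, inconsistent entry).
records = [
    {"name": "Acme Corp", "email": "ops@acme.example", "industry": "Manufacturing"},
    {"name": "Beta LLC", "email": "", "industry": "Media"},
    {"name": "Gamma Inc", "email": "hello@gamma.example", "industry": ""},
    {"name": "", "email": "info@delta.example", "industry": "Finance"},
]

def completeness(sample: list[dict]) -> dict[str, float]:
    """Share of non-empty values per field across a sampled data set.

    Assumes every record has the same fields as the first one.
    """
    fields = sample[0].keys()
    return {
        f: sum(1 for r in sample if r.get(f)) / len(sample)
        for f in fields
    }

for field, share in sorted(completeness(records).items()):
    print(f"{field}: {share:.0%} complete")
```

Run the same check on a sample from each of your 3-5 critical data sets and you have a defensible, numeric starting point for the data readiness assessment instead of a gut feeling.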
Phase 3: Opportunity mapping (Week 3-4)
Take what you learned from interviews and the technology inventory and start identifying specific opportunities. For each one, document:
- The current process. What happens today, step by step.
- The pain point. What’s slow, error-prone, or expensive about it.
- The AI approach. How AI could improve or replace this process.
- Estimated impact. Time saved, cost reduced, quality improved. Be specific.
- Implementation difficulty. Easy (off-the-shelf tool), medium (configuration needed), hard (custom development).
- Dependencies. What needs to be true for this to work? Clean data? API access? Team training?
Plot your opportunities on a simple 2x2 matrix: impact (vertical) vs. difficulty (horizontal). Start with the high-impact, low-difficulty quadrant. Those are your quick wins.
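The 2x2 sort is easy to do on a whiteboard, but if your opportunity register lives in a spreadsheet, a few lines of code can do the triage. Everything here is illustrative: the 1-5 scores, the threshold of 3, and the quadrant names are assumptions layered on the matrix described above.

```python
# Hypothetical opportunity register entries; "impact" and "difficulty"
# are coarse 1-5 scores assigned during opportunity mapping.
opportunities = [
    {"name": "Invoice processing", "impact": 4, "difficulty": 2},
    {"name": "Custom forecasting model", "impact": 5, "difficulty": 5},
    {"name": "Meeting note summaries", "impact": 2, "difficulty": 1},
    {"name": "Ticket routing", "impact": 4, "difficulty": 3},
]

def quadrant(opp: dict, threshold: int = 3) -> str:
    """Place an opportunity in the 2x2: impact vertical, difficulty horizontal."""
    high_impact = opp["impact"] >= threshold
    low_difficulty = opp["difficulty"] < threshold
    if high_impact and low_difficulty:
        return "quick win"
    if high_impact:
        return "strategic project"
    if low_difficulty:
        return "fill-in"
    return "deprioritize"

quick_wins = [o["name"] for o in opportunities if quadrant(o) == "quick win"]
print(quick_wins)  # only the high-impact, low-difficulty entries survive
```

The value isn’t the code. It’s forcing every opportunity through the same two questions so that the loudest stakeholder doesn’t automatically win the prioritization argument.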
Phase 4: Roadmap and recommendations (Week 4-5)
Turn your opportunity map into a phased plan:
Phase 1 (Quick wins, months 1-3): Deploy proven tools for well-understood problems. These build momentum and demonstrate value. Look for opportunities that save at least 5 hours per person per week.
Phase 2 (Strategic projects, months 3-6): Tackle medium-complexity opportunities that require some integration work or process redesign.
Phase 3 (Transformation, months 6-12): Take on the bigger bets. Custom AI solutions, cross-functional process redesign, new AI-powered products or services.
For each phase, estimate costs, expected ROI, and required resources. Be honest about what you don’t know. An honest “we need to pilot this before we can estimate ROI” is better than made-up numbers.
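For the opportunities where you do have enough information, the ROI math for a quick win is back-of-the-envelope arithmetic. Every number below is a stated assumption for illustration: 10 people saving 5 hours a week at a $60 loaded hourly cost, against a $30/user/month tool.

```python
# Back-of-the-envelope ROI for a hypothetical quick win. All inputs are
# assumptions to be replaced with your own audit numbers.
people = 10
hours_saved_per_week = 5
hourly_cost = 60               # fully loaded cost per hour, assumed
tool_cost_per_user_month = 30  # assumed SaaS pricing

annual_savings = people * hours_saved_per_week * hourly_cost * 52
annual_tool_cost = people * tool_cost_per_user_month * 12
roi = (annual_savings - annual_tool_cost) / annual_tool_cost

print(f"Annual savings:   ${annual_savings:,}")    # $156,000
print(f"Annual tool cost: ${annual_tool_cost:,}")  # $3,600
print(f"ROI multiple:     {roi:.1f}x")
```

Note what the sketch leaves out: rollout time, training cost, and the fact that “hours saved” rarely converts one-to-one into productive hours. Discount your estimate accordingly, and say so in the deliverable.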
What companies actually discover
After running audits across industries, certain findings come up again and again. Here’s what you’ll probably find:
Shadow AI is everywhere. In every single audit we’ve run, AI usage was significantly higher than leadership believed. People aren’t being sneaky about it. They’re just solving problems with whatever tools are available.
The issue isn’t that they’re using AI. It’s that they’re doing it without guidelines, governance, or security awareness.
Data is messier than anyone admits. Everyone says their data is “pretty good.” It almost never is. Duplicate records, inconsistent formatting, critical data trapped in spreadsheets on someone’s desktop, undocumented tribal knowledge that never made it into any system. The gap between perceived data readiness and actual data readiness is consistently the widest gap in every audit.
Quick wins are hiding in plain sight. Every organization has at least 3-5 processes that could be meaningfully improved with AI tools that already exist, like ChatGPT or Claude. The typical finding is 10-20 hours per week of recoverable time per team, concentrated in report generation, data reconciliation, and communication tasks.
The real bottleneck is people, not technology. The tools exist. The data can be cleaned. What’s usually missing is someone to own the initiative, a culture that supports experimentation, and the training infrastructure to bring people along. We’ve written about building AI champions within your organization because this pattern comes up so frequently.
Industry patterns
The specifics vary by industry, but certain patterns repeat:
Professional services firms (agencies, consultancies, law firms) are usually Curious-stage. They have high shadow AI adoption among individual contributors, very little firm-level strategy, and massive opportunities in proposal generation, research synthesis, and client communication. Their biggest risk is client data leaking through unapproved tools.
Manufacturing and operations companies tend to be Grounded or early Curious. They have rich operational data locked in legacy systems and see the biggest ROI from quality control automation, predictive maintenance, and supply chain optimization. Their biggest challenge is data accessibility, not data volume.
SaaS and technology companies are often Climbing already but unevenly. Engineering teams may be using Claude Code daily while customer success still does everything manually. Their audit usually reveals that the gap isn’t technology adoption but cross-functional coordination.
Each industry has different starting points, but the audit framework applies the same way. The six areas don’t change. What changes is where you’ll find the biggest opportunities and the steepest challenges.
What a good audit deliverable looks like
Whether you run this internally or hire someone, the output should be a document your leadership team can actually act on, not a 90-page report that sits on a shelf.
A strong AI audit deliverable includes:
- Executive summary (1-2 pages). Current maturity level, top 3-5 opportunities, recommended first move, estimated total addressable ROI. This is for the CEO who has 10 minutes.
- AI maturity scorecard. Your organization scored across both dimensions (people and process), broken down by department. Visual, scannable, honest. Include the benchmark: where you stand relative to your industry.
- Opportunity register. Every identified opportunity in a standardized format: current state, proposed AI approach, estimated impact, implementation difficulty, dependencies, risks. Sorted by priority. This is the core of the deliverable.
- Shadow AI inventory. What tools are in use, by whom, for what purpose, and what data is flowing through them. This alone often justifies the audit.
- Data readiness assessment. An honest evaluation of your data infrastructure, quality, and governance gaps. Not aspirational. Current state.
- Phased implementation roadmap. Quick wins (month 1-3), strategic projects (month 3-6), and transformation initiatives (month 6-12). With estimated costs and resource requirements for each phase.
- Risk register. Identified risks and recommended mitigations for each priority opportunity.
If someone hands you a deliverable that’s missing any of these, push back. The whole point of an audit is to enable decisions. Every section above directly supports a specific decision your leadership team needs to make.
The first 90 days after an audit
An audit is only useful if you act on it. Here’s what the first three months typically look like for organizations that execute well:
Days 1-14: Align leadership. Present the findings to your executive team. Get agreement on which 2-3 opportunities to pursue first. Assign clear ownership. If you skip this step, nothing else happens.
Days 15-30: Launch quick wins. Deploy the easiest, highest-impact opportunities. These are usually tool deployments that require minimal integration. A marketing team switching to an AI writing assistant. A support team deploying an AI summarization tool. Customer ops using AI for ticket categorization. The goal is visible wins that build organizational confidence.
Days 30-60: Address shadow AI. Use the inventory to create a formal AI usage policy. Decide which shadow tools to sanction, which to replace with approved alternatives, and which to shut down. Communicate clearly. This isn’t about punishment. It’s about giving people better, safer versions of what they’re already doing.
Days 60-90: Start strategic projects. With quick wins generating momentum and shadow AI under control, begin the medium-complexity initiatives from your roadmap. These usually require some integration work, process redesign, or team training. Build in measurement from day one so you can demonstrate ROI to justify the next phase.
The companies that get the most value from an audit treat it as a starting gun, not a finish line. The audit tells you where to run. You still have to run.
When to bring in outside help
You might not need an outside consultant for this. If you have an internal leader who understands AI, has credibility across departments, and can dedicate several weeks to the effort, a DIY audit can work well. The framework above gives you what you need.
There are situations where outside help makes more sense:
You need speed. An internal audit typically takes 4-6 weeks because the person running it has a day job. An experienced external team can do it in 2-3 weeks because it’s all they’re doing.
You need objectivity. Internal politics shape what people say. An outside interviewer hears things that never make it up the chain. People are more candid when the conversation isn’t going back to their boss.
You need expertise. If nobody on your team has run an AI audit before, you’ll spend significant time figuring out the methodology. An experienced team has pattern-matched across dozens of organizations and knows what to look for.
You need executive buy-in. Sometimes the findings carry more weight when they come from outside. This shouldn’t be how it works, but it’s reality.
What to expect on timeline and cost
A DIY audit costs your internal person’s time for 4-6 weeks, plus the opportunity cost of pulling them off their regular work.
External AI audits from boutique consultancies typically range from $5,000 to $50,000 depending on company size and scope. Large consulting firms (McKinsey, Deloitte, Accenture) charge significantly more. You’re paying for the brand name but not much else. For most mid-market companies, a focused engagement from someone who specializes in this work delivers better ROI than an enterprise consulting engagement.
Timeline-wise, expect 2-3 weeks for an external audit and 4-6 weeks internally. The fastest we’ve turned one around was a week for a small company with good data hygiene.
If your organization is ready to go from understanding the framework to acting on it, we’ve built our AI audit service around exactly this process. Same framework, delivered faster, with the pattern recognition that comes from having done it many times.
Want to run an AI audit at your company? Here’s how we do it →