How to Evangelize AI Adoption in Enterprise Organizations
Enterprise AI evangelism is the practice of driving AI tool adoption, skill building and business value across a large workforce through structured enablement, live demonstrations and scalable operating models.
Most enterprise organizations already have access to AI tools. The gap is not technology. It is enablement. In this episode of the Marketing AI Sparkcast, host Aby Varma spoke with David Kuoch, Vice President and AI Evangelist at Citi, about how he drives AI adoption across an organization of 220,000 employees. Kuoch has presented to more than 5,000 people within Citi and delivered over 50 live demos in the past year. He operates within a 15,000 person enterprise data operations group, sitting at the intersection of engineering, data governance and education. His approach offers a replicable framework for any enterprise leader responsible for evangelizing AI adoption.
Spotify | Apple | iHeart Radio
Why Enterprise AI Enablement Requires Two Goals at Once
Effective enterprise AI enablement requires pursuing two goals simultaneously: raising the floor and raising the ceiling. Raising the floor means bringing every employee to a baseline level of competency with AI tools. Raising the ceiling means connecting AI initiatives to measurable business outcomes.
"I look at it as raising the floor, then also raising the ceiling," Kuoch said. "AI enablement work is raising the floor, bringing people to the level where they can understand what's going on. And then the other part is, what's the business value?"
The floor work centers on experimentation, education and reducing the fear that AI will eliminate jobs. Kuoch is direct on this point. "AI's gonna replace the people that don't know how to use AI," he said. "Would you rather be the person that's on the front lines learning and getting your hands dirty?"
The ceiling work centers on moving from isolated proofs of concept to scalable workflows that produce consistent, repeatable results. Enterprise leaders who treat these as separate workstreams risk losing momentum on both. The employees who build foundational skills through hands on learning are the same ones who will identify the highest value use cases.
How Live Demos Drive Faster AI Adoption Than Slide Decks
Live demonstrations using real employee workflows are more effective at driving AI adoption than traditional presentations. Kuoch deliberately avoids slide based sessions in favor of building AI solutions in real time using examples drawn from the audience's actual work.
"I'm not a big fan of PowerPoint slides," Kuoch said. "What I like to do is when I sit in these demos, I'm gonna show them exactly the work that needs to be done. I'll take an example of what someone's actually doing in their day, and we do it right here on the spot."
This approach works because it shifts the framing from theoretical to personal. When an employee sees AI applied to a task they perform every day, the value becomes immediately tangible. Kuoch has delivered more than 50 of these sessions across Citi in a single year, each one tailored to the specific work of the audience in the room.
For enterprise leaders building AI evangelism programs, this principle is worth adopting early. The discomfort of working without a polished deck is far outweighed by the credibility it builds with employees who are skeptical about AI's relevance to their roles.
Where Employees Should Start Their AI Journey
Employees should start their AI journey by examining their own daily workflows and identifying tasks that are manual, repetitive or time consuming. That is the low hanging fruit where AI can deliver immediate, visible results.
"Look at your day to day things that you're doing, and where you see opportunities, that's where you start," Kuoch said. "It's hard to understand that until you actually get your hands dirty and do the work."
This guidance applies across departments. Whether someone works in marketing, compliance or finance, the entry point is the same. Find the manual work, apply AI to it and learn from the experience. Kuoch shared that his own learning curve required approximately 40 attempts before he reached a breakthrough. That struggle, he argues, is not something organizations should try to bypass.
"If you don't experiment, you don't go through the growing pains of what it is, you don't really see it," Kuoch said. "Sometimes it almost handicaps the organization. I think it's healthy for an organization to struggle and get people to go through that."
Organizations that skip the experimentation phase and jump directly to advanced use cases create a knowledge gap that undermines long term adoption. The experimentation phase is where employees develop the intuition they need to identify higher value opportunities later.
How Problem Framing and Prompt Engineering Form the Foundation of AI Enablement
Problem framing and prompt engineering are the two most critical skills in any enterprise AI enablement framework. Problem framing means defining the desired outcome before selecting a tool. Prompt engineering means structuring inputs so that AI produces high quality, consistent outputs.
"Focus on what your desired outcome is," Kuoch said. "Maybe it's a report, maybe it's some kind of metric. Start off with that one and work backwards. It's almost like reverse engineering."
For prompt engineering, Kuoch teaches a framework called CoStar. CoStar stands for context, objective, style, tone, audience and response. When employees structure their prompts using this method, they produce significantly better results. Without it, a common principle applies. "Junk that comes in, you get junk out," Kuoch said. "A lot of times people don't know how to do prompt engineering."
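To make the framework concrete, here is a minimal sketch of what a CoStar structured prompt could look like in practice. The six labeled fields follow the framework as described above; the CoStarPrompt class, its build method and the example values are illustrative assumptions, not Citi's internal tooling.

# Minimal sketch of a CoStar structured prompt.
# The class, helper and example values are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class CoStarPrompt:
    context: str    # background the model needs
    objective: str  # the desired outcome, defined up front
    style: str      # how the output should be written
    tone: str       # the voice of the output
    audience: str   # who will read the result
    response: str   # the required output format

    def build(self) -> str:
        # Assemble the six elements into one labeled prompt string.
        return "\n".join([
            f"# CONTEXT\n{self.context}",
            f"# OBJECTIVE\n{self.objective}",
            f"# STYLE\n{self.style}",
            f"# TONE\n{self.tone}",
            f"# AUDIENCE\n{self.audience}",
            f"# RESPONSE\n{self.response}",
        ])

prompt = CoStarPrompt(
    context="Quarterly data quality metrics for the operations team.",
    objective="Summarize the three largest issues and their likely causes.",
    style="Concise executive summary.",
    tone="Neutral and factual.",
    audience="Senior managers who have not seen the raw data.",
    response="Three bullet points, each under 40 words.",
).build()

Filling in all six fields before sending the request is what keeps the "junk in, junk out" problem at bay: the model receives the outcome, the audience and the format rather than a vague ask.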
Beyond these two foundational skills, Kuoch's full enablement framework includes process decomposition (turning messy workflows into structured steps), data interpretation (using AI to add intelligence to static reports), output design, workflow thinking and governance. Each principle addresses a specific gap that surfaces when organizations move from casual experimentation to structured adoption.
Enterprise leaders should assess which of these principles their teams need most and build training programs around those specific gaps rather than attempting to cover everything at once.
When to Shift From Prompt Libraries to AI Operating Models
Organizations should shift from prompt libraries to AI operating models when multiple teams are independently building similar prompts and duplicating effort across the enterprise. Prompt libraries are a natural first step in AI adoption, but they do not scale.
"You have thousands and thousands of people basically recreating the wheel," Kuoch said. "It's great because you're raising the floor and people are getting comfortable with AI, but where it becomes problematic is, how do we get consistency?"
In a large organization, dozens of teams build their own prompt libraries in isolation. The same ideas get duplicated hundreds of times with no mechanism for ensuring consistency. Kuoch describes this as a maturity signal. When it appears, the organization needs to shift its thinking.
"At some point an organization needs to say, we're mature enough to have these prompt libraries, but we need to shift to operating models," Kuoch said. "That's when you get the ability to scale things and make it reproducible."
An AI playbook standardizes how a specific task is performed using AI so the output is consistent regardless of who runs it. An AI operating model defines how AI powered workflows integrate into the team's broader processes. The distinction matters because it is what unlocks measurable business value. When every team member uses the same playbook to generate a report, the organization gains consistency, reproducibility and the ability to automate.
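As a rough illustration of the difference, a playbook can be captured as a small, versioned artifact that anyone on the team can run. The sketch below is an assumption about what such an artifact might contain; the field names, the call_model parameter and the run_playbook helper are hypothetical, not a description of Citi's operating model.

# Illustrative sketch of an AI playbook as a versioned, structured artifact.
# All names and values here are assumptions, not a specific product or process.
PLAYBOOK = {
    "name": "weekly-data-quality-report",
    "version": "1.2.0",  # version control keeps a single source of truth
    "inputs": ["exceptions.csv", "owner_mapping.csv"],
    "prompt_template": "Summarize the top issues in {exceptions} for {audience}.",
    "output_format": "summary table plus executive narrative",
    "guardrails": ["no client identifiers in output", "human review before distribution"],
}

def run_playbook(playbook: dict, inputs: dict, call_model) -> str:
    """Execute a playbook so the output is consistent regardless of who runs it."""
    prompt = playbook["prompt_template"].format(**inputs)
    return call_model(prompt)

Because the template, inputs and guardrails live in one versioned object, the result no longer depends on which individual happens to run the task.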
How to Balance Standardization and Flexibility in Enterprise AI Programs
Enterprise AI programs should use a modular design approach that applies a core framework broadly and allows individual teams to customize specific elements for their context. This balances the need for organizational consistency with the reality that different teams have different workflows.
"I try to look at the biggest funnel, see how many people I can get within that funnel," Kuoch said. "And then you go one layer under that and say, this is just for this part of the organization. So it requires a little bit more customization."
The core framework remains the same across compliance, marketing and finance. Each team then adds its own tone, policies and procedures as modular components. Version control is essential because the goal is to maintain a single source of truth while allowing controlled iterations for different groups.
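One way to picture this modular design is a shared core configuration with thin, team-owned overlays on top of it. The snippet below is a simplified sketch under that assumption; the team names, fields and values are invented for illustration.

# Sketch of a modular setup: one shared core, team-specific overlays layered on top.
# Team names, fields and values are illustrative, not Citi's actual configuration.
CORE = {
    "framework_version": "2.0",
    "prompt_structure": "CoStar",
    "review_policy": "human review required before external use",
}

TEAM_OVERLAYS = {
    "compliance": {"tone": "formal, regulation aware", "retention": "7 years"},
    "marketing":  {"tone": "on brand, customer facing", "retention": "2 years"},
}

def team_config(team: str) -> dict:
    # The core stays the single source of truth; each team overrides only
    # the modular pieces it owns, such as tone, policies and procedures.
    return {**CORE, **TEAM_OVERLAYS.get(team, {})}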
This approach solves one of the most common failure modes in enterprise AI programs. When teams feel that a framework does not accommodate the realities of their workflow, adoption stalls. Kuoch described situations where teams were unknowingly duplicating work that another group had completed six months earlier, simply because there was no visibility across the organization.
"I've sat in meetings where it's like, oh, someone's already six months ahead of you," Kuoch said. "No one's talking to each other. They all have the same ideas."
Modular design gives teams latitude to make AI tools work for their specific context while preserving enough consistency for the organization to scale.
Why Internal AI Communities Accelerate Enterprise Adoption
Internal AI communities accelerate enterprise adoption by creating a self sustaining feedback loop where knowledge compounds over time. Kuoch is part of an internal AI community of approximately 4,000 people at Citi that serves as the organization's primary knowledge sharing network for AI.
"As things come in, this is what's happening, this is what we're recording, we send that out to those people so they can share with the rest of the organization," Kuoch said.
The community includes accelerators, early adopters and employees who are new to AI. Information flows through dedicated channels, recorded sessions and regular communications about which large language models are performing best and which techniques are producing results.
Kuoch's philosophy on sharing is direct. "AI is not a secret. Share what you actually have. Tell people what it is because they're gonna build something better from what you have," he said. "It's kind of like making pizza. You share the recipe with someone else, and someone's gonna throw some different ingredients on it and create something better."
This openness addresses a challenge unique to large organizations. New employees arrive without context on what the organization has already learned. A well maintained community with recorded sessions and searchable channels gives those employees a way to catch up without requiring the evangelism team to teach the same lessons over and over. In a 220,000 person organization, Kuoch noted that topics discussed six months ago can still be entirely new to someone joining a different division.
How to Deliver AI Business Value by Designing for Structured Data
Organizations deliver AI business value by focusing on one desired outcome, working backward from that outcome and designing for structured data from the very first step. This approach prevents the fragmented, one off outputs that limit the scalability of early AI projects.
"Focus on that one desired outcome," Kuoch said. "You focus on what it is, let's say I want a report, and you work backwards from it. I don't need to know everything else because we can work backwards."
The overlooked step in this process is building structured metadata into the initial AI workflow. When the first output produces well organized data, that data becomes reusable across multiple downstream applications. A single structured dataset can feed a presentation, an executive summary, a client report and more without requiring the information to be reprocessed.
"I want to always build a structured database," Kuoch said. "Whatever I'm taking in, I can extract it for a PowerPoint slide, for an executive summary, whatever it is. That core data is so essential because now I don't have to reprocess that again."
This principle applies across company sizes but carries particular weight in enterprises where reprocessing data at scale is costly. Teams that build their first AI workflows without considering data structure create isolated outputs that cannot be extended. Teams that design for structured metadata from the start build a foundation that compounds in value as the organization layers additional AI powered workflows on top of it.
What Smaller Organizations Can Learn From Enterprise AI Adoption
The core principles of enterprise AI enablement transfer directly to mid market companies and small businesses. Problem framing, prompt engineering, desired outcome thinking and community based learning apply regardless of company size. The difference is speed.
"Small businesses have the luxury to iterate and move fast," Kuoch said. "You can break things and it's okay. It's a wonderful thing when you design something and it doesn't work. You got that lesson, now you can design for something else."
Smaller teams can move through the experimentation phase in weeks rather than months, which means they can reach the operating model stage faster. However, Kuoch cautions that speed without guardrails creates risk. Even organizations outside regulated industries should establish governance practices early.
"Go fast, build it, break it as much as you can, but also keep in consideration that there's a certain guardrail and governance that you have to design for," Kuoch said.
The questions are the same regardless of scale. What data is being fed into AI tools? Who is reviewing the outputs? What happens when the model produces an incorrect result? Teams that answer these questions early will be better positioned to scale their AI programs without hitting governance walls later.
Ready to Scale AI in Marketing Across Your Enterprise?
Driving AI adoption at scale requires more than access to tools. It requires a deliberate enablement strategy, a framework for moving from experimentation to operating models and a culture of open knowledge sharing. If your organization is navigating the complexities of enterprise AI evangelism, contact Spark Novus to explore how we can help your team build an AI strategy that delivers real business value.
FAQs About AI Evangelism in Enterprise Organizations
How do you drive AI adoption across an enterprise organization?
Start with strategy aligned to a clear business north star. Then translate that into real workflows through education and live demonstrations using everyday work scenarios. Adoption comes from clarity and relevance, not access. Reinforce this with structured, role-based learning that shows teams how AI fits into how they already operate and how it contributes to measurable outcomes, supported by clear guardrails and leadership alignment.
What is the difference between a prompt library and an AI operating model?
A prompt library is a collection of prompts created by individual marketers during experimentation. An AI operating model is a standardized way of executing marketing tasks using AI, defining workflows, inputs, outputs, and guardrails so campaigns are consistent, on brand, and measurable. The shift from prompt libraries to operating models is what enables marketing organizations to scale AI adoption and drive repeatable business performance.
How do you balance standardization and flexibility in enterprise AI programs?
Balance AI standardization and flexibility through a modular operating model. Establish a shared framework that defines how marketing work gets done with AI, including workflows, inputs, outputs, brand guardrails, and measurement. Then allow teams to adapt execution based on channel, audience, and campaign needs. This ensures consistent brand expression and measurable performance at scale, while giving teams the flexibility to move quickly in their context without creating fragmentation.
What is the CoStar prompt engineering framework?
CoStar is a prompt engineering framework that structures AI inputs around six elements: Context, Objective, Style, Tone, Audience, and Response. There are many frameworks that can be effective, and CoStar is a strong example that brings clarity and structure to how teams work with AI. In a marketing organization, teaching teams to use a framework like this improves the consistency and quality of AI-generated content, keeps outputs aligned with brand standards, and reduces rework. This structured approach helps teams move beyond inconsistent experimentation and builds confidence in AI as a reliable part of the marketing workflow.
Should organizations skip the AI experimentation phase?
Organizations should not skip the experimentation phase. It is where teams build the foundational understanding needed to use AI effectively. Skipping it creates a knowledge gap that weakens long-term adoption and leads to poor execution. The key is to run structured, workflow-based experimentation tied to business goals, so learning translates into scalable, high-impact use cases.
How do you measure the success of enterprise AI adoption?
Measure success across two dimensions. Adoption metrics track how many employees are actively using AI and how their capability is progressing over time. Business impact metrics track the value delivered through AI-enabled workflows, including improvements in speed, consistency, and efficiency, along with measurable impact on marketing performance such as pipeline, engagement, conversion, and cost per outcome.
What role do internal AI communities play in enterprise adoption?
Internal AI communities play a critical role in enterprise adoption by creating a structured, peer-driven network for sharing knowledge and real-world use cases. They help reduce duplicated effort across teams, accelerate learning, and provide an effective onboarding path for new employees. When supported by leadership and aligned to business priorities, these communities surface practical applications faster than top-down training alone and help embed AI into day-to-day workflows across the organization.