Image of the title slide for “Taking the Sexy Out of AI – AI for Ops.”

This isn’t the AI we were promised – session recap from RevOpsAF

At RevOpsAF 2025, Courtney Sylvester and I stepped onstage with a message that cut through the hype: AI in ops isn’t living up to the promise.

Our session, “Taking the Sexy Out of AI,” wasn’t a rejection of artificial intelligence—it was a call for reality, nuance, and better expectations.

Because let’s be honest: most teams didn’t get flying cars. We got broken routing, misleading dashboards, and a mountain of AI data quality issues that derailed even the most well-intentioned automation projects.

“You were sold a dream…”

From the start, the AI pitch has been hard to ignore. Smarter campaigns. Faster processes. Automated everything. But as Courtney and I shared, the actual implementation? That’s where it breaks.

We thought it would be like Iron Man’s JARVIS… instead it’s Clippy.

What looked like “plug-and-play” on social turned into a tangle of issues:

  • Executive directives made in a vacuum
  • Ops teams left to bridge capability and interest gaps
  • Ambiguity around onboarding, integrations, and data flows
  • No real AI governance plan (or sometimes, even a discussion)

If your AI “solution” has introduced more fire drills than accurate forecasts, I promise – you’re not alone.

AI data quality issues are still the biggest blocker for ops teams

Courtney and I didn’t sugarcoat it—AI data quality issues are where most ops projects hit a roadblock.

When your data quality foundation is full of outdated contacts, misrouted leads, and fields that don’t match up, AI doesn’t come to the rescue. It just makes the chaos faster and louder.

Our advice? Start small. Clean the data first.

Here’s a 4-step process we shared to actually do that:

  1. Identify accounts or contacts to review
  2. Scrape company websites or LinkedIn for updates
  3. Summarize findings (like employee changes or new product lines)
  4. Standardize and sync back into your CRM
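The four steps above can be sketched as a simple loop. Everything here is a hypothetical stand-in: `fetch_public_profile` stubs out the website/LinkedIn scrape with static data, and the final sync step just prints instead of calling a real CRM API.

```python
def fetch_public_profile(domain):
    # Step 2: in practice, pull fresh facts from the company website or
    # LinkedIn. Stubbed with static data so the sketch is runnable.
    return {"employees": 120, "new_products": ["Analytics Suite"]}

def summarize(profile):
    # Step 3: condense findings into a short, human-readable note.
    products = ", ".join(profile["new_products"]) or "none"
    return f"{profile['employees']} employees; new products: {products}"

def standardize(record, profile):
    # Step 4: normalize fields before syncing back to the CRM.
    record["employee_count"] = profile["employees"]
    record["notes"] = summarize(profile)
    return record

# Step 1: identify accounts to review (here, anything missing a headcount).
stale_accounts = [
    {"name": "Acme Corp", "domain": "acme.example", "employee_count": None},
]

for account in stale_accounts:
    profile = fetch_public_profile(account["domain"])
    cleaned = standardize(account, profile)
    print(cleaned["name"], "->", cleaned["notes"])  # a real sync would write to the CRM here
```

The point of the structure is that each step is a small, swappable function: replace the stub with a real enrichment source or CRM client without touching the rest of the loop.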

A photo of a man saying, “It’s not glamorous, but it works.”

Is it glamorous? Nope. Is it effective? Absolutely.

“AI should solve problems… not create new ones.”

Another theme we hit on: intent matters. Many teams race to implement AI tools without a clear business objective—just pressure from leadership or a fear of “falling behind.”

As I put it on stage, “FOMO is not a GTM strategy.”

If your tools aren’t solving real operational challenges, they’re just more work with a shinier interface.

Playing the long game

Once your data is clean and your priorities are clear, there’s room to experiment.

We laid out a slower, smarter strategy that’s less about wow-factor and more about wins:

  • Monitor LinkedIn activity before you engage with a lead
  • Use AI to summarize what they’re posting about
  • Auto-generate talking points and campaign briefs
  • Push scripts into your CRM or enablement tool for SDRs
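The workflow above can be strung together as a small pipeline. This is a hedged sketch with stubbed inputs: `recent_posts` is placeholder monitoring data, `summarize_posts` uses a plain heuristic where a real version would call an LLM, and `push_to_crm` just prints where a real version would hit your CRM or enablement tool’s API.

```python
def summarize_posts(posts):
    # Distill what the lead has been posting about. A real version
    # would send the post text to an AI model for summarization.
    topics = {p["topic"] for p in posts}
    return "Recently posting about: " + ", ".join(sorted(topics))

def build_talking_points(lead, summary):
    # Auto-generate a short brief an SDR can act on.
    topics = summary.split(": ", 1)[1]
    return [
        f"Open with their interest in {topics}.",
        f"Tie {lead['company']}'s pain points back to our solution.",
    ]

def push_to_crm(lead, points):
    # Stub: a real version would write to your CRM via its API.
    print(f"Synced {len(points)} talking points for {lead['name']}")

lead = {"name": "Jordan Lee", "company": "Acme Corp"}
recent_posts = [{"topic": "forecast accuracy"}, {"topic": "data hygiene"}]

summary = summarize_posts(recent_posts)
push_to_crm(lead, build_talking_points(lead, summary))
```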

Work smarter, not harder.

This isn’t just AI for AI’s sake—it’s about giving your team better context, faster.

Governance: the guardrails your AI actually needs

We didn’t just call out the technical headaches—we zeroed in on one of the biggest risks ops teams overlook: AI without governance.

Without clear governance, things can spiral out of control fast. We’re talking biased outputs, privacy issues, and automations that make decisions no one signed off on. And by the time you notice? The damage is done.

The good news: Governance doesn’t have to be complicated.

Think of it as basic table stakes:

  • Assign ownership: someone needs to be responsible for how AI is used
  • Make the logic visible: know what your models are doing and why
  • Monitor continuously: build in regular check-ins, updates, and fail-safes
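Those three guardrails can be as lightweight as a logged wrapper around your scoring logic. A minimal sketch, assuming nothing about your stack: the scoring rule, the `AUDIT_LOG` list, and the example owner address are all hypothetical placeholders.

```python
import datetime

AUDIT_LOG = []  # monitor continuously: a reviewable trail of every decision

def score_lead(lead):
    # Make the logic visible: a plain, inspectable rule instead of a
    # black box. Swap in your real model, but keep the reason explainable.
    score = 50 + (25 if lead.get("title") == "VP" else 0)
    reason = "base 50" + (" + 25 for VP title" if score > 50 else "")
    return score, reason

def governed_score(lead, owner="ops-team@example.com"):
    # Assign ownership: every automated decision is stamped with the
    # person or team responsible for how this AI is used.
    score, reason = score_lead(lead)
    AUDIT_LOG.append({
        "when": datetime.datetime.now().isoformat(),
        "owner": owner,
        "lead": lead["name"],
        "score": score,
        "reason": reason,
    })
    return score

print(governed_score({"name": "Jordan Lee", "title": "VP"}))  # → 75
```

The design choice that matters is the audit entry: when an output looks biased or a decision no one signed off on shows up, you have a record of what fired, why, and who owns it.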

As we put it, “Don’t wait until something breaks to figure this out.” Because broken AI doesn’t just affect dashboards—it undermines trust.

AI isn’t the goal—it’s just one of the tools

One of the key takeaways from Courtney? AI isn’t the finish line—it’s a tool. And like any tool, it only works if the foundation is solid.

It’s not a shortcut. It’s not a fix-all. It’s something you layer in after you’ve aligned with your stakeholders, cleaned up your data, and figured out what problems actually need solving.

Otherwise, those AI data quality issues are just going to keep surfacing.

Fix the foundation first.

Courtney and I didn’t throw shade at AI—we just gave it a reality check. It’s not magic. It’s not instant. And it’s only as good as the messy processes and questionable data behind it.

Our advice?

  • Start small
  • Clean your data
  • Align with your stakeholders
  • Put guardrails in place

Then—and only then—let AI do its thing.

Because if your chatbot is recommending products that don’t exist or your lead scoring is causing your sales team to ghost real buyers… It’s probably time to hit reset.

  Grab the AI prompt engineering handbook and start building a foundation that you can actually work with.

Leave a comment