I Co-Founded an AI Company in 2013. Here's What This Wave Gets Right - and What It Gets Dangerously Wrong.
I co-founded an AI company in 2013. We called it "predictive analytics" because nobody knew what machine learning was yet. Half our sales pitch was explaining that no, we were not a fortune-telling service.
I know what it felt like to be inside an AI hype cycle. I know the pitch decks that overpromised. I know the Gartner reports that announced AI would transform every industry within three years. I know the "big data is going to change everything" conference panels where everyone agreed and nobody defined their terms. I have personally sat on those panels.
I'm watching 2026 and I'm getting a strong sense of déjà vu. Not entirely - there are things about this moment that are genuinely different from 2013, and I'll be specific about them. But the hype cycle mechanics are running the same playbook, and the failure modes are going to be familiar.
What We Built in 2013 - and What Actually Happened
SimpleRelevance built narrow ML models, trained on historical purchase data, product metadata, and behavioral signals, to generate personalized recommendations. The models worked. In A/B tests, they consistently outperformed the "people who bought this also bought" recommendation logic that retailers were using.
The companies that deployed us well - that integrated our output into their actual customer communications and measured outcomes honestly - saw real lift. 12% higher email click-through rates. 8–15% improvements in conversion for recommended products.
The companies that didn't deploy us well were a longer list. And the reasons were almost never about the technology. They were about the gap between what the technology could do and what their organization was actually set up to change.
The most common failure pattern: a VP of Digital Marketing bought us, ran a pilot, got excited about the pilot results, and then couldn't get the engineering team to prioritize the integration work needed to scale it. The pilot sat in a deck for two years. Eventually the VP moved to another company and the project died.
This is not a technology failure. This is an organizational change failure that technology enabled people to avoid confronting. We enabled them to feel innovative without actually doing the hard part. That's a hell of a business model, right up until it isn't.
The Parallel: "Big Data" Then vs. "AI" Now
The 2013 version of the current moment was called "big data." The narrative was: companies are sitting on enormous amounts of untapped data, and new analytical tools will unlock transformative insights that drive competitive advantage.
The narrative was partially true. The data was real. The tools were real. Some companies extracted genuine value.
What also happened: thousands of companies bought Hadoop clusters and hired data scientists without having a clear problem they were solving. The data lakes became data swamps. The data scientists spent 80% of their time cleaning data and 20% generating reports that nobody read. Vendors vastly overstated how quickly their technology could generate value for a typical enterprise.
Most companies used big data the way my grandmother uses her iPad - they owned a remarkably powerful tool and used it to check the weather. Expensive weather.
Gartner's hype cycle ran its course. The "big data" label went through the trough of disillusionment by 2016 and came out the other side more boring, more operational, and more genuinely useful. The companies that actually extracted value were the ones that had been disciplined about specific use cases from the beginning.
I'm watching the same cycle play out with AI now. The vendor overpromise is louder because the technology is genuinely more impressive. The adoption gap will be just as real.
What's Actually Different This Time
I want to be clear that this is not "big data repackaged." There is a genuine step-function difference in what foundation models can do compared to the narrow ML of 2013.
In 2013, every application of AI required a purpose-built model trained on domain-specific data. You couldn't take a recommendation model trained on retail purchase data and apply it to industrial procurement. You trained a model for each specific problem. This meant high development cost, long timelines, and narrow applicability.
Foundation models change this fundamentally. A single model can understand language, context, and reasoning across an enormous range of domains without being retrained for each one. You can take a language model and apply it to sales qualification, customer support, contract analysis, and market research - often with nothing more than a well-designed prompt. The accessibility of AI application has increased by an order of magnitude.
This means the adoption curve for legitimate AI value creation is actually faster now than it was in 2013. A sales leader who wants to build an account research agent doesn't need a data science team. They need access to an API and a weekend. I know because I did exactly that, insomnia included.
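That weekend project is smaller than it sounds. Here's a minimal sketch of the shape, assuming an OpenAI-style chat-completions endpoint; the model name, prompt wording, and `research_account` helper are my illustration for this post, not the agent I actually built.

```python
# Minimal account-research-agent sketch. Assumes an OpenAI-style
# chat-completions API; model, endpoint, and prompt are illustrative.
import json
import os
import urllib.request


def build_research_prompt(company: str, notes: str) -> str:
    """Assemble one prompt asking the model to research an account."""
    return (
        f"You are a B2B sales researcher. For the company '{company}', "
        "summarize: (1) likely buying triggers, (2) probable decision-makers "
        "by title, (3) three discovery questions to open with.\n\n"
        f"Context from my CRM notes:\n{notes}"
    )


def research_account(company: str, notes: str, model: str = "gpt-4o-mini") -> str:
    """POST the prompt to a chat-completions API and return the reply text."""
    req = urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        data=json.dumps({
            "model": model,
            "messages": [
                {"role": "user", "content": build_research_prompt(company, notes)}
            ],
        }).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]


if __name__ == "__main__":
    # No API key needed just to see the prompt the agent would send.
    print(build_research_prompt("Acme Corp", "Met CTO at a conference; evaluating vendors in Q3."))
```

The point isn't the forty lines. It's that in 2013 the equivalent capability required a trained model, a data pipeline, and a team; now it's a prompt and an HTTP call.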
The potential is real. I'm not being a contrarian for sport. But the hype-to-reality gap is also real, and the failure modes are predictable.
What's Dangerously the Same
Vendor overpromise. The AI vendor pitches of 2026 have the same structure as the big data vendor pitches of 2013: here's an impressive demo with carefully selected data, here's a category of problem you've always struggled with, here's the transformation that awaits you on the other side of signing. What the demo doesn't show is the 6 months of data cleaning, integration work, change management, and iteration between signing and getting any value. Buying a $200K revenue intelligence platform and not changing your process is like buying a Peloton and hanging laundry on it.
The adoption gap. Buying an AI tool and integrating AI into how your organization actually works are completely different activities. Most companies are doing the former while expecting the results of the latter. This is the same mistake enterprises made with Hadoop in 2013. Every sales leader I talk to says they're "using AI." When I ask how, 90% say "we have a Gong license." That's like saying you're a chef because you own a microwave.
Data quality problems. In 2013, "big data" initiatives repeatedly ran aground on the same rocks: the data was messier than anyone admitted during the sales process. In 2026, AI applications are hitting the same rocks. Your CRM data is incomplete. Your historical activity data has gaps. The language model can only work with what it's given, and what it's given is often not what the demo assumed.
The 3 Questions I Ask Every AI Vendor Now
When an AI vendor is pitching me, I ask these three questions. If they can't answer all three specifically, I don't buy.
1. Show me a customer at my stage and scale who has had this deployed in production for 12 months. What are their actual before/after metrics? Not a pilot. Not "early results." 12 months in production.
2. What does the data integration actually require? Be specific. What APIs? What data prep? What ongoing maintenance? Who owns this after deployment? The gap between "quick setup" in the pitch and "6-month integration project" in reality is where most AI vendor disappointments live.
3. What does it cost if it doesn't work? I want to understand the exit. Can I turn it off without losing critical data or rebuilding processes? Vendors who are genuinely confident in their product are comfortable with this question. Vendors who aren't will wriggle.
What I'm Building Now and Why
The tools I'm building at joepeck.ai are designed around the lessons from 2013. Specifically:
They're narrow. Each tool does one thing: the Deal Coach qualifies deals against MEDDPICC. The Forecast Truth Machine scores pipeline against behavioral signals. Narrow tools that do one thing well are more reliable and more adoptable than broad platforms that promise to transform everything.
They're workflow-native. They're designed to fit into how a sales leader already works - review your pipeline, paste deal notes - not to require a new way of working. The adoption gap kills AI projects. I build for the lowest possible adoption friction.
They're honest about their limitations. AI that pretends it's always right is a liability. These tools tell you what they're scoring and why, so you can agree or override with context the AI doesn't have.
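To make "narrow and honest about limitations" concrete, here is a toy sketch of the scoring shape, not the actual Deal Coach: a plain keyword pass that reports which MEDDPICC elements your pasted notes show any evidence for, and which came up empty so you can override with context the tool doesn't have. The cue lists are illustrative; the real thing is LLM-backed.

```python
# Toy illustration of a narrow, explainable deal checker: flag which
# MEDDPICC elements the pasted notes mention at all, with the evidence.
# Cue lists are illustrative placeholders, not a real qualification rubric.
MEDDPICC_CUES = {
    "Metrics": ["roi", "kpi", "quantified"],
    "Economic buyer": ["economic buyer", "budget holder", "cfo"],
    "Decision criteria": ["criteria", "requirements", "evaluation"],
    "Decision process": ["decision process", "timeline", "procurement"],
    "Paper process": ["legal", "contract", "security review"],
    "Identify pain": ["pain", "problem", "cost of"],
    "Champion": ["champion", "internal advocate"],
    "Competition": ["competitor", "incumbent", "alternative"],
}


def score_deal(notes: str) -> dict:
    """Return each MEDDPICC element with the cues found, or None if missing."""
    text = notes.lower()
    report = {}
    for element, cues in MEDDPICC_CUES.items():
        hits = [cue for cue in cues if cue in text]
        report[element] = hits or None  # None = no evidence; go ask the question
    return report
```

Every element comes back with the exact evidence that triggered it, or an explicit gap. That's the "show what you're scoring and why" contract; the rep stays the judge.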
The technology is better than 2013. The lessons from 2013 are the same. I've now been wrong about the timeline twice, which makes me the ideal person to tell you how to avoid being wrong once.
See what I've built at joepeck.ai.
Want to talk through your revenue strategy?
I work with a small number of companies at a time. If this resonated, let's connect.
Let's Talk