
SaaS Backwards - Reverse Engineering SaaS Success
Join us as we interview CEOs and CMOs of fast-growing SaaS firms to reveal what they are doing that’s working, and lessons learned from things that didn’t work as planned. These deep conversations dive into the dynamic world of SaaS B2B marketing, go-to-market strategies, and the SaaS business model. Content focuses on the pragmatic as well as strategic, providing a well-rounded diet for those running SaaS firms today. Hosted by Ken Lempit, Austin Lawrence Group’s president and chief business builder, who brings over 30 years of experience and expertise in helping software companies grow and their founders achieve their visions.
Ep. 163 - Is Your SaaS Ready for the Agentic AI Era?
Guests: Ken Lempit, James Ollerenshaw, and Rob Curtis
AI isn’t just a bolt-on anymore—it’s rewriting the rules of SaaS from the ground up.
In this episode, Standup Hiro co-founder Rob Curtis and tech strategist James Ollerenshaw join host and GTM expert Ken Lempit to unpack how Agentic AI is forcing SaaS leaders to rethink everything: product design, go-to-market, customer trust, and even internal culture.
From natural language interfaces to the rise of AI “co-workers,” they break down why SaaS companies that adapt will dominate—and why bolt-on AI features won't be enough to survive.
📌 Key Takeaways:
- The Age of AI Co-Workers: Forget copilots. The next SaaS revolution will be about true AI teammates, systems that work with you, not just for you.
- Natural Language Is the New UI: SaaS buyers are expecting to talk, not click. Voice-first experiences and conversational commands will soon be table stakes.
- Trust Is the New Battlefield: Enterprises are more forgiving of human errors than machine mistakes. To win with AI, SaaS leaders must prove not just capability, but reliability.
- Bolt-Ons vs. Reinvention: Tacking AI onto legacy products won't cut it. Companies willing to re-architect around AI-native principles will seize the biggest opportunities.
- Invisible Software Is Coming: Many SaaS tools will fade into the background, powering workflows behind the scenes. The battle for customer love (and margin) starts now.
Heads up: This episode runs longer than usual—but for good reason.
We go deep into the real-world implications of Agentic AI, from product strategy to pricing models, trust, and the future of SaaS UX. If you're a SaaS leader trying to stay ahead of the AI curve, every minute is packed with actionable insights that could reshape how you build, sell, and scale.
---
Not Getting Enough Demos?
Your messaging could be turning buyers away before you even get a chance to pitch.
🔗 Get a Free Messaging & Conversion Review
We’ll analyze your website and content through the eyes of your buyers to uncover what’s stopping them from booking a demo. Then, we’ll give you a personalized report with practical recommendations to help you turn more visitors into sales conversations.
And the best part?
💡 It’s completely free.
No commitments, no pressure—just actionable advice to help you book more demos.
Your next demo is just a click away—claim your free review now.
Ken Lempit: Welcome to SaaS Backwards, the podcast that looks at what's working and what isn't in the world of B2B SaaS.
This is the second episode in our new series on the impact of artificial intelligence on the SaaS industry, where we're unpacking what AI really means for builders, sellers, and buyers of enterprise and departmental software solutions.
In our last conversation, we explored how agentic AI and no-code tools are rewriting the SaaS playbook, lowering the barrier to building software, and calling into question how much value SaaS vendors really own.
Today we're diving a little deeper into the world of agents: what they mean for SaaS and what opportunities they present, like voice, which we'll cover in next week's episode. Today we'll be asking questions like: How close are we to a world where we don't have to click through dashboards, but can use natural language to drive SaaS apps? Something like Captain Picard talking to the Enterprise. And what happens when agents are capable of initiating workflows, handling escalations, or even rewriting business logic on the fly?
To explore these questions, I'm joined by two brilliant minds: James Ollerenshaw, a marketing strategist with deep AI experience and a healthy skepticism earned from watching more than one hype cycle play out, and Rob Curtis, co-founder of Hiro Studio, a venture builder working at the cutting edge of agentic AI and a frequent chronicler of this moment in tech via his Substack. I'm Ken Lempit, your host, but today I'm gonna turn hosting responsibility over to Rob Curtis. Rob, take us into the episode.
Rob Curtis: James, Ken, great to be here talking about what feels like the future but is, candidly, happening right now. This week we're talking about agents. Agents have been in the news quite a lot this week, and I wanted to share two stories that give very different signals about agents, but I think perhaps present both cautionary tales and some optimism for this audience.
I don't know if you saw the ads: "Stop hiring people." Large billboards in Times Square and San Francisco. They went viral. They spoke to the deepest fear of being replaced by our creations. That company, Artisan, has just raised its next funding round.
There's a lot of conversation going on at the moment about how we can take out more human cost and replace it with more automation. At the same time, 11x, maker of more synthetic human workers, especially SDRs, has been sued by ZoomInfo for using their logo at the end of what I believe was a failed pilot. They're also being looked into after allegations emerged from their staff that they've been inflating their retention numbers. The company is claiming something like seventies to eighties percent retention, and their staff are saying the real numbers are significantly below that.
Why does this matter? Not only are we all in the market for SDRs, but it tells a story about what investors are investing in and what the world is looking for, which is more effective ways to use the tools to achieve the same outcomes. And that's going to affect software creators like listeners of this podcast. Like you said, Ken, it starts to feel like the Star Trek Enterprise is upon us. I love the idea of using natural language. We trained ourselves to talk like robots to Google: "Restaurant near me, Vietnamese vegan." Nobody talks like that.
With the rise of LLMs and chat interfaces, we are starting to talk like humans again. Ken, you've been around a while through the world of SaaS. We are now in a world where natural language allows us to operate our SaaS products, potentially initiate new systems, log in, make payments, do a whole range of things in natural language. One example is the difference between looking in Retool for your data versus asking a simple question: "Analyze Q4 performance." Ken, as you look around your clients and this industry, how ready do you think SaaS providers are for the world of natural language?
Ken Lempit: So I think a lot of this depends on where they are in their development and who they sell to. We're working with a client in the manufacturing software space whose product was built decades ago. The original code is probably 20 years old, and it's being reinvented on an AI-native platform. The most exciting part isn't the dialogue-like interface (it's a pretty plain-Jane interface that operates the mainline function of the system). The most exciting part is actually the ability to inquire of this very complex data set in natural language.

Companies prepared to reinvent themselves, or that are in launch at the moment, are the ones that are gonna be most likely to lead with these natural language capabilities. And it's really exciting to see data able to be accessed in ways that were unimaginable only two years ago. You could not imagine doing what you can do today in a command line or chat interface.

There's gonna be opportunity where folks have great data, where their data model's solid and they can reveal their data to an AI interface. And I think the posture's gonna be most aggressive with young companies, either reinventing themselves or starting fresh. Though if you have a great data lake and you're willing to expose it to an agentic or natural language interface, maybe you can have stuff that's just as exciting as what we saw only a couple of weeks ago in this startup client's demo.
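One common way teams prototype the kind of natural-language data access Ken describes is to have the system translate the question into a structured query and run that query against the database, rather than letting a model touch the data directly. A minimal sketch of that flow; the intent patterns and the `sales` schema here are hypothetical stand-ins (a real system would use an LLM where the hand-written patterns are):

```python
import re

# Hypothetical mapping from question patterns to parameterized SQL.
# In a real system an LLM would generate the query; two hand-written
# patterns stand in for it here so the translate-then-execute flow is visible.
INTENTS = [
    (re.compile(r"analyze (q[1-4]) performance", re.I),
     "SELECT region, SUM(revenue) FROM sales WHERE quarter = '{0}' GROUP BY region"),
    (re.compile(r"top (\d+) customers", re.I),
     "SELECT customer, SUM(revenue) FROM sales GROUP BY customer ORDER BY 2 DESC LIMIT {0}"),
]

def question_to_sql(question: str) -> str:
    """Translate a natural-language question into SQL, or raise if unsupported."""
    for pattern, template in INTENTS:
        match = pattern.search(question)
        if match:
            return template.format(match.group(1).upper())
    raise ValueError(f"Don't know how to answer: {question!r}")

print(question_to_sql("Analyze Q4 performance"))
```

The separation matters: the generated SQL can be logged, reviewed, and permission-checked before it runs, which is much easier to audit than free-form model access to the data.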
Rob Curtis: Thanks, Ken. That's super interesting. And James, you are one person that often reminds us of the expectations of enterprise buyers, in particular around things like accuracy. In a world of natural language versus a world of, say, command prompts, it's gonna be really interesting to see which of those is most capable of bringing out the most accurate instructions to lead to the best results. What are your thoughts?
James Ollerenshaw: Before I go into that question, I want to go back to what we were saying about us entering the world of Star Trek, and the Enterprise crew talking to the computer in natural language. I think that's an incredible thing to have happened. That show is 30 years old or something, and it was real science fiction to say "Computer" and have this conversation. The computer had to converse back and help us solve problems; we work with the computer in real time using natural voice. Ken, you set me up as the skeptic here. I have been through a couple of hype cycles with AI and see where this can be a challenge, but I'm a great enthusiast for it. I've got my Star Trek necklace on, so I'm all in on this kind of future working.
But it's gotta work. Does it work effectively? If you are in an enterprise and you have a process, that process needs to be reliable. What we see with the latest wave of AI is that it is not always reliable. It hallucinates; it interprets instructions in different ways depending on how they're expressed. We see that in typing into ChatGPT interfaces, and now we're able to talk to the computer. That creates more scope for variation in how we create those commands, and even more scope for variation in how the computer responds. That can be wonderful, but when trying to get consistent outcomes, it's not always helpful.

As we start to talk to computers more, how can we be confident we are getting consistent results? Not always the same results, but consistent quality, integrity of data, fairness of customer data protection. These are very real questions for those implementing these tools into processes. Everybody's excited about buying agents and not hiring people. I still see a great need for the human in the loop.
Rob Curtis: I wanna bring it back to natural language. The question we kicked off with was: how do we feel natural language versus prompts is gonna bring out the best in these systems? Do you have a take on where the line is now between a prompt that's spoken like a human versus "Q4 results variances"? What do you think is going to drive the best results?
James Ollerenshaw: That's a good question. It's a difficult one to answer. It depends. I've been involved in AI with regulated industries, particularly financial services and, to some extent, healthcare as well. These tools are not reliable enough there to be used without significant human intervention. When interacting with Gemini or ChatGPT, we put in the prompt and see the result. And if we are educated enough and paying attention enough to the output, we can determine its quality. But as that gets much faster, as it moves to voice and we have less time, and maybe the people we put to work on it are less educated in a particular process, its outcomes, and what good looks like, that question of reliability and risk of errors becomes greater.
So the need to have capable humans in the loop becomes greater. It depends on the industry and tasks. If we're trying to sell holidays, and having a conversation with a computer about where might be nice to go, can the agent then go and actually book things for me? That's a much lower-risk situation than managing my bank account, or the results of a cancer scan. Companies need to think about this carefully. In many cases regulations will prevent application of these tools. But even if it's booking a holiday, you don't want it to make a mistake. You want the car available on the correct date, the hotel to the preferences you've set. So I think the capabilities are exciting, but it might move a little slower in reality than many of the vendors would have us think.
Rob Curtis: Yeah. As I think about the stories we read about the technology, one that comes to mind is using pleases and thank-yous with LLMs. The insertion of a single word can change the attitude of the LLM and its outputs. These are probabilistic models, so one could imagine that any particular string of words will create a perhaps different presentational answer. But whether it actually presents a different conclusion is to be seen. This is going to force us to think carefully about how we use natural language, or force the platform players to be able to take stream of consciousness and turn it into reliable prompts that drive the software in its intended fashion.
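Rob's point, that a single polite word can shift a probabilistic model's output in presentation without changing its conclusion, can be illustrated with a toy sampler. The two next-reply distributions below are invented purely for illustration; real LLM token probabilities are not public, but the mechanism (same task, shifted probability mass) is the same:

```python
import random

# Toy next-reply distributions: the same request with and without "please"
# shifts probability mass between phrasings. Every reply completes the task,
# so the "conclusion" is unchanged; only the presentation varies.
# The weights are invented purely for illustration.
DISTRIBUTIONS = {
    "summarize the report":        {"Done.": 0.6, "Sure, done.": 0.3, "Here you go!": 0.1},
    "please summarize the report": {"Done.": 0.2, "Sure, done.": 0.4, "Here you go!": 0.4},
}

def sample_reply(prompt: str, rng: random.Random) -> str:
    """Draw one reply from the prompt's (toy) probability distribution."""
    dist = DISTRIBUTIONS[prompt]
    return rng.choices(list(dist), weights=dist.values(), k=1)[0]

rng = random.Random(0)  # fixed seed so the run is repeatable
for prompt in DISTRIBUTIONS:
    replies = [sample_reply(prompt, rng) for _ in range(1000)]
    print(prompt, "->", round(replies.count("Here you go!") / 1000, 2))
```

Over many samples the "please" prompt yields the friendly phrasing far more often, even though any single run can produce any of the three replies, which is exactly the reliability problem James raises.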
I want to go back, Ken, to something that came up in our last episode. You predicted last week that many SaaS leaders are going to have to rethink their platforms for the agent age. I'm seeing some real reinvention and some bolt-on AI assistants. Ken, what's your take on how this is playing out so far?
Ken Lempit: I think the bolt-ons are a great way to get experience, what I like to call time over target. They don't require a complete rethinking of a product to start getting some traction. If you've got a hundred million or more in revenue on a SaaS platform, it's not like you're just gonna snap your fingers and change direction right away. You need some market confirmation that the things you'd like to build are gonna be valuable.

I think as a user organization and user individual, the copilots are probably less valuable than they are to the vendors themselves, right? This is the way they figure out how people want to engage with their product in new ways. But they don't seem tempting to me, not in my experience. It depends on what that copilot is doing. If you're a user in an area where you have to do repetitive data entry, I guess a copilot could be very attractive. Copilots are, for the most part, an interim part of this migration to smarter software. There are probably gonna be applications where they become more robust, do 60, 70, 80% of the work, and then ask for permission to complete a project or process.

I think what you're gonna see next is this: "Let me help you with this document categorization project, and I'm gonna confirm along the way that I've done it right." There are also gonna be applications where the risk isn't as great, and we're gonna let these processes rip, then do a little QC and make sure they weren't stupid. I don't know if there is a definitive answer here. It's almost like human experience: almost anything we can imagine as a way of working is gonna be represented by copilots and semi-autonomous agents.
Rob Curtis: Ken, as you say that, it makes me think about the use cases in which people are using agents in their businesses right now. For example, Cursor or Replit or any number of these coding apps: the app is there to drive the software, to bridge the gap between a complex system and a user who might need help, guides, support, learning, and education in order to use that product. Using copilots in those instances is a game changer. That's where vibe coding has ultimately come from. And you see these popping up more, copilots as a way of improving product usage, depth, and retention.

Then you've got a second set of agents, and I'll just throw out Operator by OpenAI as a good example; their job is to connect workflows across multiple systems. What's the purpose of that agent? Is sticking an agent on your knowledge base enough? Or is your desire to integrate your platform into a wider ecosystem of other platforms? Is that where your agent strategy comes in? 'Cause those are very different ways of using a similar technology. And James, I am curious to hear what you think.
James Ollerenshaw: Yeah, I think it's the data behind that agent as well. It's easy enough to bolt ChatGPT or Gemini onto your existing SaaS platform and get time over target with minimal development change. But if you are building AI native, and your whole data setup is engineered for the kinds of interaction that AI systems can have and start to apply in agentic ways, that's more towards what you were saying, Rob. It's a second order of benefits. That's a significant re-engineering of the underlying technology that creates those capabilities. So there's value in the bolt-on, to explore my data in new ways and get more out of it, but I don't think it's a game changer. If I have a system where all the data is set up for AI to explore and work with in that way, and then go off to carry out tasks and talk through APIs to other systems which are also set up that way, that's when things start to look very different.
Ken Lempit: I think we need some other language here. The copilot is the bolt-on; I'm using it almost in a pejorative sense. There's room to distinguish between things that are suggesting the next sentence in my email, which is of low value, and what you were describing as truly a co-worker: co-working with this intelligent agent, as opposed to it just trying to augment my experience a little bit. So maybe we need "co-worker" as our thought process. I don't think we're gonna change the vernacular here on the podcast, but for the purpose of the podcast, co-working and being my partner in execution is where the added value comes. Right?
Rob Curtis: I think that's where you start to see meaningful time and cost savings. An SDR is somebody operating multiple platforms, connecting the data on those together, and driving a computer. There are other human skills that come along with it, but some of the synthetic co-worker platforms are designed to take away a lot of that kind of intersystem interaction, 'cause that's non-strategic work.

I think we've all seen in the media this week the announcement from the Shopify CEO. There was a leaked memo. He recently told managers that individuals will be evaluated on how well they operate AI. And indeed, when it comes to things like headcount and new spend, the question will be: how have you exhausted your options on AI before we start even considering bringing on a new human worker?
A lot of the coverage has been about the replacement of humans. I have a slightly cynical view on this, which is that public companies have two audiences to take care of that many private companies don't. One is public market stakeholders. Look, no CEO "leaks" anything. SaaS CEOs now know any internal memo is probably going to make it to the media. And I would posit that Shopify is speaking to these two audiences, public market stakeholders and the next batch of Y Combinator startup founders, to say: we are taking AI very seriously. Here are great signals as to why we should maintain our stock price. But if you are thinking of coming up behind us and replatforming us as AI native, then we are also ready to take you on.

Because those are signals every CEO is expected to put out to the market: I've got an AI plan, and this is not going to affect my business. If you're using public comms to make that point, rather than software tools and releases, then perhaps that speaks to a gap between what Shopify is capable of doing today and what they need to do, which is get ready to become more AI native to fend off competitors. Ken, when you think about this from Shopify, does it raise questions about their ability to compete with AI natives, or do you think this is just part of the maturity curve that all SaaS companies will eventually go on?
Ken Lempit: That's interesting. I hadn't thought about it as revealing weakness before. I thought of it as a heartless point of view. Not illogical; it was a logical extension of: I've got a couple thousand employees, and I don't want 10% more. How do I coax my teams to align on that mission of not growing my headcount while growing revenue and market share? So this seemed like a corrosive message. At the same time, it could be argued that as an employee, I wanna be on the leading edge. I don't want to be a victim; I want to be a victor in the AI skills race, as long as it's not a race to the bottom. So from my standpoint, it was more about the culture of the organization, the experience of employees, and how do you put that kind of message in front of people? I guess my style would've been to encourage them to build their skillset for their own benefit as well as our company's, and finally our customers', which might be something, James, you may have heard from me recently.

On the product front, I think Shopify is a dominant commerce platform just under enterprise and down. I don't think Nike's gonna build its store on Shopify, though. I think they're in a powerful position. And this might be the kind of posturing that, if there is a weakness in the product roadmap, would mask it for a time. I don't think that they're in a really exposed position, but it's not like I've done a deep dive on them. They seem dominant. That's my take.

I think it's a cultural issue. Culture eats strategy for breakfast; that's what we used to say. And I'm gonna say culture eats AI for breakfast. If you have two companies that do very similar things, and in one the culture is much more supportive and healthy while the other is much more militaristic and top-down, and they have the same tech stack inside them, the one with the great culture's gonna win. I just don't think you can beat people to success. You have to encourage and grow them to success.
Rob Curtis: James, companies are in a bind managing all of their different stakeholders. They've both got to satisfy investors who want to know that they've got a hand on things, and also very rapidly take their teams through what's gonna feel like a very aggressive change management process. And I wonder if you've got any ideas about how a leader can balance those things. What can a leader say to a team about AI that is not going to have a kind of toxic culture implication?
James Ollerenshaw: If you take that message at face value, it was blunt, but a call for upskilling with AI. There's the quite famous quote, "AI won't take your job, but a person using AI will," or some version of that, which was Richard Baldwin at the 2023 World Economic Forum Growth Summit. We see that happen. If you're able to use these tools effectively, you are a more valuable and efficient employee than somebody that can't. Every company boss has got to adjust to this. Rob, to what you were saying earlier, company bosses have got to signal to their investors that they are engineering their companies and their employee base that way.

I see two perspectives on that. The long view is that this is an industrial revolution: machines take on new human skills, and we've just seen another wave of that. The people that can work with those machines can adapt to that future. Probably you're not gonna take away work, because people figure out other things to do, but it can be messy.
What I've seen happen in practice, in companies that I've worked for building and selling AI and the customers buying it, is that they haven't laid anybody off. Now, I've tended to be working mostly in FinTech, so this doesn't represent all areas of the market, and things are moving very fast. The technology comes in, and people gain efficiencies and better outcomes, and use existing teams more effectively. So it may mean less hiring; it doesn't necessarily mean layoffs. And I think there's a message from the Shopify leader in that: to be able to stay with this company, we need you to use this technology effectively. That's a great transferable skill for any leader or employee.
Rob Curtis: White-collar work across many industry verticals is going to be affected. I don't envy the leaders that have to message that to their teams while also messaging to external stakeholders. "The best talent in the market, come to us because we are AI forward" is a very difficult message to pair with "Stay with us because we are AI forward, and here is how we're going to support you, help you, invest in you." And I would say the Shopify CEO's note was a reality check. Any staff member that was surprised probably had not read the tea leaves about their own culture, because it's very innovative. But I would've loved to have seen: we care about our talent, and here are the things we're doing to protect and build our talent to be ready for these new technologies. I have led teams through major macroeconomic change. Balancing both of those things is incredibly hard, and protecting culture in the face of major disruption, and in particular a significant change in what shareholders need, that's tough.

James, I wanna go back now to the impact of agents on buyers in particular. Let's think about some of these agents that work across different platforms. I wonder what you think it will take for enterprises to trust an agent that is as fallible, or more so, than a human in driving complex processes across systems. How do you think enterprises think about it? Are they comparing them to humans? Are they comparing them to perfection? What do you think that dynamic looks like?
James Ollerenshaw: People do compare AI to humans, most definitely. Humans are intolerant of the machine getting it wrong. It freaks people out when they see a small error rate by a computer. I've seen this with a company that I used to work in, serving the mortgage industry, handling application data and turning documents into data. We'd see 50 examples of the machine getting it wrong, and everybody was freaking out about this, when the total number of data points was thousands. We were talking about an error rate of a small fraction of a percent, compared to 15 to 20% by people, which the industry was entirely used to and comfortable with. So the fear factor around a machine getting it wrong is just higher.
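The asymmetry James describes is easy to quantify. A quick calculation, assuming for illustration 50 machine errors across 10,000 documents (the transcript only says "thousands") against the 15% end of the human error range he cites:

```python
# Illustrative figures: 50 machine errors in 10,000 documents vs. a 15%
# human error rate on the same volume (both numbers assumed for the sketch).
machine_errors, documents = 50, 10_000
human_rate = 0.15

machine_rate = machine_errors / documents   # 0.005, i.e. 0.5%
human_errors = int(human_rate * documents)  # 1,500 errors on the same volume

print(f"machine: {machine_rate:.1%} ({machine_errors} errors)")
print(f"human:   {human_rate:.0%} ({human_errors} errors)")
print(f"humans make {human_errors / machine_errors:.0f}x more errors here")
```

On these assumed numbers the humans make thirty times more errors, yet, as James says, it is the machine's 50 that cause the alarm.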
So what's it gonna take...
Rob Curtis: Why, for all these smart people focused on math, focused on metrics, are they wrong? That is a bad decision. Why are they getting it wrong?
James Ollerenshaw: Why don't we have self-driving cars? A lot of the self-driving car technology is already good enough and safer than humans driving, but we don't trust it.
Rob Curtis: We've got Waymo in a bunch of states, so I think we are starting to trust it.
James Ollerenshaw: People are adjusting, but do you see widespread trust in AI? I don't. Bit by bit, we start to trust the machines more once they've proven themselves as tools. But we have a very high barrier to this. If you are in an enterprise running an operation, you've established certain outcomes using human teams. It makes no sense, but it's interesting to see the intolerance even of the smaller error rates that machines can deliver against humans.
Ken Lempit: I think we have a business opportunity here, gentlemen. If we create a standards body around the effectiveness of the execution of agentic AI, you could benchmark your application against that standard of performance. Then you could have different price levels for higher-quality execution against an agreed standard. Now, that might not be possible in every scenario, but self-driving is a great example. How safe is it compared to the average taxi or Uber driver? You don't see that kind of institutional quality assessment that you do in other areas of commercial endeavor. So that might be one way to address it: hold agents accountable, just like we hold humans accountable. Somebody's gonna be looking at your work output and holding you accountable. James, you had 35 title errors over 10,000 transactions. That's a great job. But what if you had 300? Would it still be a great job? We don't have any external measure of quality for these things. As a buyer of this technology, how do you discern: is Oracle's agentic ERP better or worse than SAP's? Almost impossible to know. So maybe there's an opportunity in front of organizations to start characterizing and calibrating their agents and giving transparency into the results.
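Ken's standards-body idea could work like certified accuracy tiers: an agreed benchmark suite yields an error rate, and the rate maps to a tier that pricing and procurement could key off. A hypothetical sketch; the tier names and thresholds are invented, not from any real standard:

```python
# Hypothetical quality tiers for an agentic-AI benchmark: an agreed test
# suite yields an error rate, and the rate maps to a certified tier that
# pricing could key off. Thresholds and names are invented for illustration.
TIERS = [           # (max error rate, tier name)
    (0.001, "platinum"),
    (0.005, "gold"),
    (0.02,  "silver"),
]

def certify(errors: int, cases: int) -> str:
    """Map a measured error count from the benchmark run to a quality tier."""
    rate = errors / cases
    for threshold, tier in TIERS:
        if rate <= threshold:
            return tier
    return "not certified"

# Ken's example: 35 title errors over 10,000 transactions vs. 300.
print(certify(35, 10_000))   # 0.0035 -> gold
print(certify(300, 10_000))  # 0.03   -> not certified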
If you're tasked with buying an agentic ERP for manufacturing, how do you know how good it's gonna be? I've got Fred and Mary over here; they rarely, if ever, make a mistake. How do I know, when I create an automated replacement for 80 or 90% of their work, what that quality is gonna be like? That's one of the fears, and they're borne out when some small percentage of errors happens. I read a thing where a Waymo car was stuck going in circles. It's sensationalized. How often does an Uber driver play with his phone, hit the curb, and have a flat tire? I bet it's more often. But it's not sensationalized.
Rob Curtis: Take it out of life-and-death examples and just think about performance as a fiduciary, right? If confirmation bias led you to spend more money per unit produced, your board would have a difficult conversation: you've got a cognitive bias encouraging you to choose suboptimal solutions that are costing us money. If we're in a world of 15% versus 1.5% error rates, and somebody is choosing a 10x-worse solution over a cognitive bias, maybe it's time for us to start having a conversation.

Self-driving cars are to some extent a trolley problem. Do you hit six school kids or do you hit a priest? That is a philosophical question that we have never been able to answer as humanity, because it's an impossible question. But when we're talking about loan applications, yes, these can be life and death in extreme situations, but fundamentally they're revenue drivers. It's gonna be really interesting. Last week I talked about how quality as the primary criterion is going to switch, and we're going to be willing to take 80% of the value or accuracy for 20% of the cost. I'm curious to see how outcomes play into the evaluation going forward.
James Ollerenshaw: Your logic is impeccable. The machine outperforms the human many times, in many cases. But my experience of seeing these systems deployed is that's not how those buying or deploying them feel about it, and I can't mount a logical argument against that. I've been on the side of the company selling the technology, arguing the case that, look, the machine is demonstrably, factually better. I mean, Ken, you're talking about having ways of measuring it: side by side, real data, run in tests. The sensitivity to the machine getting it wrong is way higher. Maybe humans are kind to humans. It's just gonna take us some time to adapt. We completely trust our dishwasher to do its job, but I'm in Mexico at the moment, and people here are suspicious of dishwashers; you just don't see them as much. So there is not as much trust in that very familiar machine that we all use every day. It's a matter of familiarity and experience with it, and over time we can expect that to change.
Ken Lempit: So I think the dishwasher's a
cool analogy and
I'll tell you why.
First of all, I think
I do a better job than the dishwasher.
I may not be as water or time efficient,
but my quality is higher.
But you bring up embodied versus
disembodied, right? The software agent is a disembodied
unknowable. You can't monitor it in
the same way as I can open the dishwasher
after an hour and ten minutes, I can see
very quickly how well it did. When these agents are
embodied, when I can sit side by side with a robot,
It becomes
more like a colleague and less like a mystery.
Now it might
also be more threatening,
right? Which is why a
lot of these embodied humanoid robots are short, to
make them less threatening.
The embodied
agent, even if it doesn't need to be embodied,
might increase the
acceptance of these co-workers.
They really are co-workers
in a lot of ways. They
may also be more capable of processing,
integrating, and acting on
our behalf.
The human aspect, the organizational change management, is what you were really talking about. Those things can make or break a project. I think it's gonna come down to organizational change management. Maybe there'll be industry standards for the quality of robotic operators that are needed.
In financial services transaction processing, we need to be very accurate; core systems in finance live decades longer than you'd believe because they're known good. The known-good process is of utmost importance. Could you imagine if Morgan Stanley said, the AI drops the ball 4% of the time, we'll clean it up afterwards? That could be a competitive disadvantage. It's gonna be 'it depends' in terms of adoption.
Rob Curtis: What I'm hearing is that the way in which we sell software and we sell agents is going to have to evolve. We are going to have to be more tenacious. We are going to have to find new ways, not just to sit back and say we'll allow the world to catch up with us. I wonder how every salesperson becomes a therapist, to explain not only that maybe there is a bias on the table, but how to overcome it. That's our job, to sell these products, and more are coming out. It's an existential question. We cannot necessarily wait for the world to catch up, and we can't afford to ignore the reality that humans trust humans in different ways. This is one to watch as AI comes: how does it change buyer behavior, and what can we do to influence that further?
I wanna bring us onto our last topic before we get to predictions, which is one of my favorite parts of our show each week. There was a LinkedIn or X comment from Greg Isenberg of Late Checkout. He does a lot of talking about the future of SaaS, and one of the things he talked about, in relation to agents, was two futures for SaaS companies: many becoming agent owners themselves and operating with the customer, owning the customer relationship and, in many ways, owning the customer; and a second group of software providers that will become invisible software. They'll be the APIs that agents talk to to get things done.
A simple consumer example is
OpenTable.
I used to
go to restaurant websites to book a reservation, then I went to OpenTable, and now I never go
to a restaurant landing page ever again.
If I'm going to Operator by OpenAI, which is capable of making restaurant reservations, it now goes directly to the aggregator itself. OpenTable operates all of its features to get me a reservation. If I ask OpenAI, I've got a reservation at the table that I want, at the restaurant that I want, at the time that I want. And I think to myself, will I ever go to OpenTable again? For limited use cases, maybe discoverability, but for functional work, probably never again. I can see this paradigm playing out with many SaaS platforms. If you're asking, what are the Q4 reports, do you need to go into Retool? Maybe you don't need to go into Retool as many times. So you start consolidating licenses.
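The handoff Rob describes, where an agent calls a provider's API directly instead of a human using its UI, can be sketched roughly as follows. Everything here is invented for illustration: the booking function stands in for whatever reservation endpoint a provider such as OpenTable might expose, and the "agent" is a stub where a real system would use an LLM with tool calling.

```python
from dataclasses import dataclass

# "Invisible software" pattern: the user talks to an agent; the agent
# talks to the provider's API; the provider's UI is never seen.

@dataclass
class Reservation:
    restaurant: str
    time: str
    party_size: int
    confirmation: str

def book_table(restaurant: str, time: str, party_size: int) -> Reservation:
    """Stand-in for a provider API call (e.g. a hypothetical
    POST /reservations). A real integration would handle auth,
    availability checks, and error responses."""
    confirmation = f"RES-{abs(hash((restaurant, time, party_size))) % 10_000:04d}"
    return Reservation(restaurant, time, party_size, confirmation)

def agent_handle(request: str) -> str:
    """Toy 'agent': maps a natural-language request onto the API call.
    A real agent would use an LLM with tool/function calling here."""
    res = book_table("Quintonil", "19:30", 2)
    return (f"Booked {res.restaurant} for {res.party_size} "
            f"at {res.time} (confirmation {res.confirmation}).")

print(agent_handle("Get me a table for two tonight"))
```

The design point is that the provider's differentiation collapses to the quality of `book_table`; everything the user experiences belongs to the agent.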
I'm curious about what agents do to the primary relationship between a customer and a SaaS provider, and, secondarily, is being invisible software a bad thing? James, I'll start with you.
James Ollerenshaw: I think it comes down to reliability. If booking the restaurant through those SaaS tools can be automated through agents that use them and interact with OpenTable, I'll just let the computer talk to the computer and do it. But how much does it matter to me that I get a good interaction with that restaurant? I sometimes find myself wanting to go to the restaurant's own website and do the booking myself, because I would have a certain understanding about that restaurant. And I also know that if I book directly with a restaurant, they're not having to pay a fee to OpenTable, so I get better treatment, better terms, a better conversation if something goes wrong. I see that with hotel bookings. The systems and the incentives that are built into them have an effect on all of this. As we trust the machine, what's really going on there? That's one side of it.
Then, Ken, you used Morgan Stanley as an example, the AI getting it wrong 4% of the time and people clearing up the difference. That's what the AI systems of all the different AI fintech companies I've worked with are doing. We'll put the AI in; it's gonna be more accurate than the people, but it's gonna fail probably something like 4 or 5% of the time, and we're gonna fill in that gap using people. The challenge is how we communicate around that to make it seem reasonable. So I think we'll have invisible software that is good enough most of the time, and we can stitch up the gaps with people. But in instances where we care more about the relationship that we have, with the restaurant or the hotel, or about the outcome of a process, we might choose to have the visibility we need.
Rob Curtis: Do you think that's like 90/10? There are things that absolutely cannot fail and things that nobody cares about, right? The question for SaaS providers is gonna be: how many seats have I got? How much can I charge for this? What's your opinion on how much of, say, a hundred seats is gonna fall into the most extreme version of 'I must get this right, I want better terms'? I wonder what you think the breadth of that looks like. Do you have a sense? Is this 50/50, or is this like 80% of cases where people are happy to automate? What does your gut tell you?
James Ollerenshaw: I think this is where you start to see outcome-based pricing and usage-based pricing, compared to seat-based pricing, becoming a consideration. In a situation where you're still gonna need a fair amount of human interaction along the process, seats can still make sense. If that's going away, we need different ways of pricing it. It's gonna depend on what your software does, the markets you're selling into, and all the other effects we've been talking about. But you're right, it's gonna look different for different types of SaaS companies.
Rob Curtis: Ken, what do you think?
Ken Lempit: There's been talk that if you have API access and you're revealing that to agents, your pricing power goes away. I think the opposite. If you're accessing me by API, I know every time you request work, and I can build pricing against that, 'cause I know what work I'm doing for you and what value you should be getting: a transfer, an expectation of value. It's gonna take some work, because customers are not used to paying much for an API call. Sometimes a platform will have an infinitesimal API call charge. But once API calls are our unit of work, then we have to start charging for a unit of work. Pricing is up in the air; I think it's gonna be one of the big consequences, and probably margin is up in the air too.
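The unit-of-work pricing Ken describes could look like a tiered per-call rate card metered against each API request. The tiers and prices below are invented for illustration; the sketch just shows how a bill accumulates once the API call, rather than the seat, is the billable unit.

```python
# Hypothetical tiered rate card: (calls in tier, price per call in USD).
TIERS = [
    (10_000, 0.01),         # first 10,000 calls at $0.01 each
    (90_000, 0.005),        # next 90,000 calls at $0.005 each
    (float("inf"), 0.002),  # everything beyond at $0.002 each
]

def monthly_bill(calls: int) -> float:
    """Price a month of metered API calls against the tiered rate card."""
    total, remaining = 0.0, calls
    for tier_size, rate in TIERS:
        used = min(remaining, tier_size)  # calls consumed in this tier
        total += used * rate
        remaining -= used
        if remaining == 0:
            break
    return round(total, 2)

print(monthly_bill(5_000))    # inside the first tier → 50.0
print(monthly_bill(150_000))  # spans all three tiers → 650.0
```

In practice the meter itself is the easy part; the hard part, as Ken notes, is mapping a per-call price to the value the customer perceives.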
Being a broker is less of a value proposition than owning the transaction cycle. You brought up OpenTable: they own the transaction cycle; for almost everything they process, they're the beginning, middle, and end of the transaction. If they become just a broker and don't own the audience, they lose the first-party data; they lose a lot of things in being relegated to the broker role. So I think it's gonna depend on your business model. On the other hand, for OpenTable there's the cost of audience acquisition; if they can focus on being the restaurant's service provider, maybe that's a more efficient business model and they'll benefit.
Part of the SWOT analysis is that some of your top line might be threatened as a result of being relegated to a broker role, but maybe it's less expensive to be in business as a result, especially if you're still that important to the restaurant. OpenTable, Toast, all these different systems do a lot more than just the booking, depending on which vendor we're talking about. So maybe they get a cleaner value prop. It's a big 'depends' answer.
Rob Curtis: It's really interesting. It's really difficult to love a company that provides APIs into a platform you engage with day to day. And my hunch is that this is going to change the way in which we think about how to compete: your dashboards don't matter; user experience probably doesn't matter. These big domains of differentiation, I think, are going to start to push folks that fall into the invisible software category into commoditization. What I like about OpenTable is that it sends me a little WhatsApp, and compared to the restaurant, I can achieve the same functional thing; but because they own the customer relationship, they have more ways to surprise and delight. And I think companies that fall into the realm of just being an API provider need to take a hard look at their business and say: do I need this design team? Do I need this user experience team? Do I need a marketplace of restaurants at all? Or do I just do transaction processing for your restaurant bill?
And I think that occupies an incredibly different space in the mind of the consumer. For a new software provider like myself, that's a great opportunity: how can I provide that API call at a cheaper price? In the commoditization world, if you're invisible software and there's no customer loyalty or love, then the agent is probably going to be operating on instructions like, find me the cheapest way of doing this thing. These are the real questions that many SaaS providers are going to want to be asking themselves as they start to see, for their particular industry and use case, what is happening to players like me, and how do I compete in a world where most cannot afford the investment in owning the agentic layer?
It's complex, it's multi-step, it's gonna be a real battle. And if you're falling into the realm of invisible software, it's going to be: how do you protect your margin by drastically reducing the scope of what value looks like for your customers? These are gonna be interesting threads to pull on as we see more real examples hit the market. Ken, I'm gonna hand over to you. This has been a great discussion, but as always, we're on to predictions, so I'll let you take it from here.
Ken Lempit: I think I have to go back to the prediction around what is an agent, what is a co-worker. So I'm gonna say: the co-worker is king. The idea that I can engage with and benefit from a relationship with a digital co-worker is where there's a lot of opportunity. And I think this idea of calibration of that skill level is something that might actually happen. That's my prediction: calibration of AI effectiveness.
Rob Curtis: James, predictions for this week?
James Ollerenshaw: When you look at the history of technological revolutions, they are rarely binary. We don't have a clean before and after. We end up with a mixture: some of the old gets pushed out but often doesn't completely go away as the new comes in. I think it's gonna be much the same with this era of AI and Agentic AI. There are gonna be many instances where we are very happy to let the AI take care of it. Sometimes we're not; sometimes we want to do it ourselves. And in between, there'll be areas of grey where we wanna be able to be involved with the computer. And I wanna go back to the dishwasher. How many times have you thought, I'd love to see what's going on inside the dishwasher? Why can't the dishwasher have a glass door? There are some reasons why it doesn't; that's not important. In many cases we wanna see what's going on inside the machine and how things are working. And so, sometimes, yeah, we can set it and forget it; sometimes we'll wanna do it ourselves; and sometimes it's gonna be in between.
Rob Curtis: What was your prediction?
James Ollerenshaw: Agents are coming, but it's gonna be non-binary. We'll see a great expansion in their use, but it will not be total.
Rob Curtis: Yeah, that feels right. And there's a lot of 'it depends,' and what's your...
James Ollerenshaw: Yeah, those dependencies are really important.
If you are a software company, what is your niche? What is your place in that broad ecosystem? You might want to take a narrow part and focus on that. But for the broader business and economy, we'll see a greater shift into AI, though some of the old is gonna stick around too.
Rob Curtis: Thank you, James. Last week I predicted that agent optimization is going to be the future of discoverability. This week I wanna talk about voice, and we're gonna have a whole show on voice next week. I think voice is going to drive the agentic revolution across a majority of consumer use cases, and start getting into SaaS.
What does that mean?
Agents will begin to be optimized for tasks that can be described simply, in a short conversation. I write prompts that are this big right now, but I suspect that as we get closer and closer to being able to speak to computers, we are going to find ourselves influenced in terms of the things we can ask for and how we engage with them.
I'm gonna predict that
there is going to
be a large
aha moment around voice this year in 2025.
I'm gonna hold myself to a very specific prediction here,
and I'm also going to predict that it
will not be Siri. Some
new player is going to come out and
change the game in terms
of how consumers and buyers
and people across the world think about the role of voice in AI.
And I think Siri is gonna miss the boat on this one.
I think it's gonna be a new
player that's gonna come out with a unique twist on how to use voice plus Agentic AI to solve
real meaningful problems for consumers.
Ken Lempit: I want another bite at the apple now. Building on this vision you have for voice, I think it's also gonna be the year of the continuous conversation, where your AI will remember what you talked about previously. Right now, when you engage by voice or keyboard, it's like you're starting over. It's like Groundhog Day, right? Every interaction is almost a new thing. Unless it's right in the thread, you can't go to ChatGPT and say, hey, remember last Wednesday we talked about building a three-story apartment building? It's like, sorry. So I think continuity of conversation is gonna become really important this year, and that'll be a breakout.
Rob Curtis: You'll be pleased to hear that OpenAI pushed out a major memory enhancement that makes that true as of yesterday. Memory's been a big constraint, and OpenAI pushed out an update late last night, which will be the 10th of April for listeners. It can now remember many of your conversations the first time round. I've not tried it yet, 'cause I'm also finding it exhausting to have to say the same things over and over. But we're seeing big developments in memory that should make the utility increase a hundredfold.
Ken Lempit: This has been a really great conversation. Those of you loyal listeners to the SaaS Backwards podcast will recognize this is at least twice as long as the usual episode, even after we've edited it down. For that I apologize, but I hope you'll also appreciate the depth of this conversation. Rob Curtis is building applications in his venture studio; Standup Hiro is about to come out, and I think by the time this is published, Standup Hiro will be available. That's H-I-R-O. James Ollerenshaw, technology marketing executive, AI guy multiple times over, AI marketing lead multiple times over.
Thank you so much. If you haven't subscribed already to the podcast, please do so wherever podcasts are distributed, and also you can get the podcast on
YouTube. Just search SaaS backwards.
Hey Rob, if people wanna reach you, how can they do that?
Rob Curtis: I encourage everyone to go to www.standuphiro.com. There are plenty of ways to get ahold of us from there and to speak to some of our agents. And you can also reach me on LinkedIn, Rob Curtis.
Ken Lempit: My demand generation agency for software-as-a-service companies is Austin Lawrence. You can reach us at austinlawrence.com, and I'm on LinkedIn at linkedin.com/in/kenlempit. Thanks, everyone, for staying with us, and we'll see you next time, when we get deep into voice Agentic AI.