Things your AI product manager is tired of hearing
Turning tired AI narratives into common sense.
Getting off the hype train
While AI companies compete relentlessly to release bigger, better, more performant models on an almost weekly basis, a key question remains unanswered: what is the value of all this to your product’s end users?
It is so easy to get overwhelmed or overenthusiastic as you watch the news, browse LinkedIn and generally hop on the hype train.
On the other hand, as a product leader, you also cannot put your head in the sand and pretend AI is hype that will just blow over.
Take it from someone who has been immersed in AI for over a decade: this is truly the moment a lot of AI product professionals have been waiting for… when their niche expertise finally hits the mainstream.
But as the saying goes: ‘Be careful what you wish for!’
In this first article of AI Product Focus, I want to highlight some of the tired narratives surrounding AI that seem to have taken the product ecosystem by storm, and turn them into common sense. These repeated phrases often reflect misconceptions or oversimplified views of what AI is, can do and should do as part of a wider product ecosystem and organisation.
Ready to hop off the hype train? Let’s disembark together…
1. “Show me the AI”
If I had a dollar for every stakeholder who has said this to me over the last two years, believe me when I say - I would not be writing newsletters right now!
This is probably my favorite, yet also the most frustrating one, as it reflects misunderstandings on many different levels.
This misconception is often related to the myth of magic that surrounds the use of AI in products. There is indeed something satisfying about seeing “the AI” at work. In the stakeholder’s mind’s eye, AI is often represented through visible cues - dancing dots, flashy animations - indicating that the magic is operating. And what could be more innovative than a shiny new tool that everyone can see standing out from the crowd?
This is, however, a very short-term view of what innovation and differentiation truly mean in today’s market.
The truth of the matter is, there are very few situations in which emphasizing the magical myth of AI solutions is truly beneficial.
Differentiation and innovation stem from solving a real user problem in a novel, disruptive way that delivers exponential benefits over existing solutions. AI solutions, in the form of AI Agents or underlying AI algorithms, can be a great accelerator for this - but they are a means to an end, not a magic show. Fairy dust will only get you so far…
AI is not a spectacle, it is a potential solution to a user problem.
Let’s not forget that AI-driven algorithms have been part of users’ lives long before the ChatGPT era: Netflix’s recommendation engine is a prime example, traffic prediction in navigation apps another. These are in-product AI solutions that have made a difference for their end users in a way that was completely embedded in the wider product, and a core part of the product strategy… without any shiny visual cues.
In addition, these visual cues often pose problems when it comes to accessibility, readability and user experience. But that is a debate for a different day, or article!
Which brings to light the next common misunderstanding…
2. Can’t we just use ChatGPT?
ChatGPT, and of late DeepSeek, are on everyone’s lips and in everyone’s LinkedIn articles. Only rarely are these applications properly understood; more often than not, they are confused with the large language models that power them.
So the answer is: using ChatGPT is using an application. Yes, you can use it, but what you integrate into your product is not ChatGPT - what you can decide to use in your product are the GPT models that OpenAI makes available through its API.
One more time, for those in the back: ChatGPT is not a large language model, it is an application that uses a large language model.
With that misunderstanding out of the way: large language models have been trained on very large datasets, but they aren’t one-size-fits-all - they need customization for specific domains, use cases and unique requirements.
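To make the distinction concrete, here is a minimal sketch (in Python, using OpenAI’s SDK) of what “using a GPT model in your product” actually looks like: your product calls the model through the API, with its own instructions and context. The model name, system prompt and insurance example are purely illustrative placeholders, not recommendations.

```python
# Minimal sketch: calling a GPT model through the OpenAI API from your own product.
# A domain-specific system prompt is the most basic form of customization;
# model name and prompt text below are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder: choose the model that fits your cost/quality trade-off
    messages=[
        {
            "role": "system",
            "content": "You are a support assistant for an insurance product. "
                       "Only answer questions about policies and claims.",
        },
        {"role": "user", "content": "How do I file a claim for water damage?"},
    ],
)
print(response.choices[0].message.content)
```

The point is not the ten lines of code - it is that everything around them (the prompt, the context you pass in, the scope you allow) is product work, not something ChatGPT hands you for free.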
Whether you are adding LLM capabilities to an existing feature, creating an LLM-specific feature, building a new LLM-driven product or an AI Agent - these need as much strategic and design consideration as any other product or feature - if not more…
Some considerations include development cost, integration cost, token cost, the expected revenue of the LLM-driven feature, its impact, the problem it solves… you get the gist.
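Token cost in particular lends itself to a quick back-of-envelope estimate before you commit to a build. The sketch below uses entirely made-up prices and usage figures - substitute your provider’s actual pricing and your own traffic assumptions.

```python
# Back-of-envelope token cost estimate.
# All numbers below are illustrative assumptions, not real prices:
# always check your provider's current pricing page.
requests_per_month = 50_000          # assumed usage of the LLM-driven feature
input_tokens_per_request = 1_200     # prompt + context
output_tokens_per_request = 300      # typical response length
price_per_1k_input_tokens = 0.001    # USD, hypothetical
price_per_1k_output_tokens = 0.003   # USD, hypothetical

monthly_cost = requests_per_month * (
    input_tokens_per_request / 1_000 * price_per_1k_input_tokens
    + output_tokens_per_request / 1_000 * price_per_1k_output_tokens
)
print(f"Estimated monthly token cost: ${monthly_cost:,.2f}")  # ~$105 with these assumptions
```

Set that number against the expected revenue or savings of the feature and you already have a first, rough ROI conversation to bring to stakeholders.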
3. Product management is dead
The AI hype has organized many funerals in its wake: design, engineering… and apparently also product management.
These statements are incredibly reductive - so can we please stop using them?
When they come from product managers, these statements amount to shooting yourself in the foot. Of course, such headlines attract people’s attention: but at what cost?
The cost of exposing product management to exactly that type of thinking: surely ChatGPT can write the tickets, surely ChatGPT can come up with the product marketing plan…
When they come from stakeholders, or the overall ecosystem, not only do they bring a gloom-and-doom feeling - they also reflect an overly simplified view of what an AI product manager (or a product manager in general, for that matter) brings to an organisation.
AI, large language models and their applications are not the death of any discipline.
They are an accelerator at best, a tool or a technology at most - and, in the case of AI product managers, an incredible opportunity to elevate their roles to what they were always meant to be: professionals who align business goals, AI-driven products and customer needs.
4. We need an AI Agent
Maybe you do, maybe you don’t.
Now don’t get me wrong: I am the VP of Product at OpenDialog, a B2B SaaS platform that allows product professionals, engineers and all types of builders to build and manage Conversational AI Agents… so my biased answer will probably always be: of course you need an AI Agent.
Here’s the catch… putting in place an AI Agent for use cases that customers do not buy into will set back AI Agent adoption across your organisation (and, in turn, hurt the platform provider you choose) - what I often call the infernal cycle of innovation. In French we have an expression for this: ‘fausse bonne idée’ (a false good idea).
Here’s how it goes:
an AI Agent gets built for an unclear use case, or worse, doesn’t get the attention its discoverability needs, therefore
the AI Agent’s usage doesn’t live up to the business expectations, therefore
the conclusion is made that AI Agents are not a worthy investment… so
end of the AI Agent adventure.
It is in nobody’s best interest to create an AI Agent for its own sake. There needs to be a well-thought-through use case and scope for which an AI Agent is indeed the most efficient solution, validated by product discovery work showing that users are willing to adopt it.
To make it super clear: mass adoption of ChatGPT or the DeepSeek application does not mean that all consumers are ready to use AI Agents for anything, at any time.
Adoption is the key to success of any AI Agent initiative.
So, the next time you encounter a business problem, that is the ideal moment to ask yourself: could an AI Agent solve this problem faster and more efficiently, and would users be willing to adopt it for this use case? If the answer is yes, then… by all means: you need an AI Agent.
If you haven’t asked yourself the question of why yet, time to rewind.
5. Our Conversational AI Agent should answer any question
Let me ask you a question: when you go to the doctor’s office, do you expect them to answer questions about plumbing?
Similarly, users come to your organisation and your product to solve a specific problem - and while it might be tempting to be the answer to all their problems, the reality is: if you want to do it right, staying in your swimlane is the better way to win the race.
“But users ask Conversational AI Agents to tell them jokes, or about the weather”, I can hear you think. While that might be true, it doesn’t mean you have to indulge endlessly in useless small talk.
Let me put it differently: if your AI Agent were in fact one of your human employees, and that employee spent an hour talking to a customer about the weather or the latest football game without solving their initial problem or making a sale, what would your reaction be? And more importantly, what is the cost to the organisation?
This is where it becomes important to stay focussed on the end goal of your Conversational AI Agent and your user: what task are they here to accomplish?
This requires strategic thinking about what questions or responses you do want to engage with, and which you want to gently reorient towards the desired outcome.
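As an illustration only - and certainly not how a platform like OpenDialog handles scoping - here is a deliberately crude sketch of that decision: engage when a message touches the agent’s actual use case, gently reorient otherwise. The keyword list and redirect message are stand-in assumptions; a real product would rely on proper intent classification and conversation design.

```python
# Illustrative only: a crude "is this in scope?" check that decides whether the
# agent engages or gently reorients. The keyword list and redirect message are
# stand-in assumptions, not a recommended implementation.
IN_SCOPE_TOPICS = {"policy", "claim", "renewal", "coverage", "premium"}

REDIRECT_REPLY = (
    "I can best help you with your policy, claims or renewals - "
    "shall we pick up where we left off with your request?"
)

def should_engage(user_message: str) -> bool:
    """Very rough proxy for 'is this message about our actual use case?'."""
    words = set(user_message.lower().split())
    return bool(words & IN_SCOPE_TOPICS)

message = "Can you tell me a joke about the weather?"
if should_engage(message):
    print("Hand the message to the agent's main conversation flow.")
else:
    print(REDIRECT_REPLY)
```

The mechanism matters far less than the product decision behind it: which conversations are worth your agent’s (and your users’) time, and which should be steered back to the task at hand.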
Conclusion
In the end, it all comes down to focussing on what really matters, and the true heart of product management:
Uncovering user needs
Defining valuable use cases
Integrating the most efficient technology for the problem at hand
Defining feature scope and breadth of functionality
Evaluating product ROI and iterating as needed
Join the AI Product Focus tribe
Each one of these tired narratives really merits its own article on how best to tackle it strategically, and I will go into them a little deeper in upcoming articles. Ready to continue your journey towards true AI Product Focus?
You can expect AI Product Focus to appear on a regular basis with articles including opinion pieces, a practical AI Product Building series, research, frameworks and more.
Up next: [AI Product Building Series] Building an AI-driven Product: 0 to 1 - an introduction.