How to ask?
I spent the weekend using Claude Code to reorganise my notes and update my website. Through the process, I realised that working with agentic AI tools (and, by extension, chat-based LLMs) is akin to working with engineers in a product org.
I mean the actual mechanics: writing up what I wanted before asking for it, being specific about edge cases, reviewing the output, and pushing back. The stuff that separates good product managers from bad ones.
This surprised me.
Take planning. Most people just start typing what they want. “Make me a website”. “Write me a story”. “Analyse this data”. And then they’re disappointed when what comes back isn’t quite right.
Good PMs don’t work this way. Before they ask anyone to build anything, they’ve already written a document explaining what they want and why. They’ve thought through the edge cases. They’ve gathered examples of what good looks like. That is how you get what you actually want instead of “what you literally asked for”.
The same thing works with LLMs. Before I ask Claude to do something complex, I dump my raw thoughts into it and ask it to write a plan. This surfaces how it’s thinking about the problem. Often, it’s different from how I thought about it. Sometimes it’s better. Sometimes it’s worse. Either way, I can correct course before any real work happens.
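For readers who want the shape of this in code, here is a minimal sketch using the Anthropic Python SDK. The model name, prompts, and notes are placeholders I made up for illustration; the same pattern works typed straight into a chat window or Claude Code.

```python
# Minimal sketch of the "plan before build" pattern.
# Assumes `pip install anthropic` and ANTHROPIC_API_KEY set in the environment.
import anthropic

client = anthropic.Anthropic()

raw_notes = """
I want my notes reorganised by topic instead of date.
Old links must keep working. The site is built with a static site generator.
"""

# Step 1: ask for a plan, not the finished work.
plan = client.messages.create(
    model="claude-3-5-sonnet-latest",  # placeholder: use whichever model you have
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": (
            "Here are my raw thoughts:\n" + raw_notes +
            "\nWrite a short plan for how you would approach this. "
            "List the assumptions you are making. Do not start yet."
        ),
    }],
)

# Step 2: read the plan, correct course, and only then ask for the real work.
print(plan.content[0].text)
```

The point is the sequencing: a cheap planning pass you can argue with before any real work happens.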
Then there’s precision. Bad PMs say things like “use the customer info” and “make sure it works”. Good PMs reference specific database fields and define exact test cases. The gap between vague and specific is where most miscommunication lives.
LLMs are, if anything, more sensitive to this than humans. A person will ask clarifying questions. An LLM will just guess. And its guesses, while often plausible, tend to miss what you actually wanted.
I’ve started breaking complex tasks into steps for this reason. When I was updating my personal website, I didn’t ask Claude Code to redesign the whole thing. I asked it to audit the current structure first. Then propose changes. Then update the navigation and so on.
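Here is a sketch of that step-by-step shape over the API, under the same assumptions as above (Anthropic Python SDK, placeholder model and prompts). In practice I did this interactively in Claude Code, which also has access to the files; the sketch only shows the sequencing and the pause for review between steps.

```python
# Sketch of running a decomposed task step by step, carrying context forward.
import anthropic

client = anthropic.Anthropic()
MODEL = "claude-3-5-sonnet-latest"  # placeholder

steps = [
    "My site has these pages: index, about, notes/<topic>, blog/<date>. "
    "Audit this structure and summarise its problems.",
    "Propose changes to the structure. Do not make them yet.",
    "Draft the updated navigation based on your proposal.",
]

messages = []  # accumulated conversation, so each step sees the previous ones
for step in steps:
    messages.append({"role": "user", "content": step})
    reply = client.messages.create(model=MODEL, max_tokens=1024, messages=messages)
    text = reply.content[0].text
    messages.append({"role": "assistant", "content": text})
    print(f"--- {step}\n{text}\n")
    input("Review the output above, then press Enter for the next step...")
```

Each pause is where I push back before the next step runs.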
The last piece is review. Code gets reviewed before it ships. Product specs get reviewed before anyone builds them. But most people treat LLM output as final the moment it appears.
One trick that works is role switching. I have the LLM do the work as a “developer” or “analyst”, then evaluate its own output as a “senior engineer” or “manager”. This forces a change in perspective and catches real problems.
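As a sketch, under the same assumptions as above (Anthropic Python SDK, placeholder model and prompts), the role switch is just two calls with different framing:

```python
# Sketch of role switching: produce a draft in one role, then review it in another.
import anthropic

client = anthropic.Anthropic()
MODEL = "claude-3-5-sonnet-latest"  # placeholder

task = "Write a Python function that deduplicates a list while preserving order."

# Pass 1: do the work as a "developer".
draft = client.messages.create(
    model=MODEL,
    max_tokens=1024,
    messages=[{"role": "user", "content": "You are a developer. " + task}],
).content[0].text

# Pass 2: evaluate the same output as a "senior engineer".
review = client.messages.create(
    model=MODEL,
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": (
            "You are a senior engineer reviewing a colleague's work.\n"
            "Task: " + task + "\n\nSubmission:\n" + draft +
            "\n\nList concrete problems and suggest fixes."
        ),
    }],
).content[0].text

print(review)
```

In a chat session this is just two messages; the fresh framing is what does the work.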
As things stand, I believe being AI-native has nothing to do with being technical. It is about developing the ability to figure out what you want, communicate it precisely, and verify that you got it.
Anyone who has spent years translating fuzzy information into concrete instructions already has that ability. Engineers. Product managers. Editors. Consultants. Anyone.
I don’t know if this will last. Maybe LLMs will get good enough at reading intent that precision won’t matter. But right now, the bottleneck isn’t the model; it’s our ability to say what we mean.