Prompting mistakes that make you look like a beginner

Most developers use AI tools daily, but few use them well. Learn the common prompting mistakes that waste your time and how to fix them.

Most developers now use AI tools daily. Cursor, Copilot, Claude, ChatGPT. They've become part of the workflow. You paste in some code, describe a problem, get a response, and move on.

But there's a wide gap between using these tools and using them well. Some developers get consistently useful outputs. Others fight the same frustrations repeatedly, blaming the tool when the problem is often how they're asking.

You can forget the formal notion of "prompt engineering" here. You're trying to get help with your actual work: debugging, understanding unfamiliar code, building features, learning new frameworks. The skill is closer to knowing how to ask a good question than it is to programming.

Here's what separates developers who get value from these tools from those who don't.

You're starving the AI of context

This is the most common mistake, and it compounds every other problem.

A developer pastes an error message into the chat:

TypeError: Cannot read property 'map' of undefined

And asks: "Why am I getting this error?"

The AI doesn't know what language you're using (probably JavaScript, but maybe TypeScript). It doesn't know what framework. It doesn't know what map is being called on, what data you expected to be there, where that data comes from, or what your code is trying to do. So it gives you a generic explanation of what the error means and lists five possible causes.

You gave it nothing to work with. The generic response is the only reasonable one.

Now compare this:

"I'm building a React app that fetches user data from an API. The fetch works—I can see the data in the network tab. But when I try to render it, I get Cannot read property 'map' of undefined. Here's my component:"

JavaScript
function UserList() {
  const [users, setUsers] = useState()

  useEffect(() => {
    fetch('/api/users')
      .then((res) => res.json())
      .then((data) => setUsers(data))
  }, [])

  return (
    <ul>
      {users.map((user) => (
        <li key={user.id}>{user.name}</li>
      ))}
    </ul>
  )
}

Now the AI can see the actual problem: useState() with no initial value leaves users undefined, so the component calls map on undefined during the first render, before the data arrives. It can give you a specific fix (initialize with an empty array, or add a loading check) rather than a generic explainer.
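
Here's what the empty-array version of that fix might look like (the loading-check variant works just as well):

JavaScript
import { useState, useEffect } from 'react'

function UserList() {
  // Start with an empty array so the first render has something to map over
  const [users, setUsers] = useState([])

  useEffect(() => {
    fetch('/api/users')
      .then((res) => res.json())
      .then((data) => setUsers(data))
  }, [])

  return (
    <ul>
      {users.map((user) => (
        <li key={user.id}>{user.name}</li>
      ))}
    </ul>
  )
}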

The principle: give the AI the same context you'd give a colleague looking over your shoulder. What you're building, what you expected, what happened instead, and the relevant code. This takes thirty extra seconds and saves you three rounds of back-and-forth.

You accept code you don't understand

The AI gives you a solution. It runs. You move on.

Two weeks later, something breaks in that code and you're staring at it like a stranger wrote it. Effectively, a stranger did. You can't debug it because you never understood what it was doing.

This is the most dangerous pattern in AI-assisted development. You've accumulated logic you can't maintain. Your codebase gradually fills with black boxes that work until they don't.

The fix is simple but requires discipline: don't accept code you couldn't explain to someone else.

You don't need to understand every line before pasting it. But after you paste it, read it. If something is unclear, ask: "Can you explain what the reduce function is doing here?" or "Why are you using useCallback instead of just defining the function directly?"

Sometimes this reveals that the AI's solution is overcomplicated. You'll ask why it's using some pattern, and realize a simpler approach works fine for your case. Other times, you'll learn something genuinely useful, like a pattern or API you didn't know about.

Either way, you end up with code you own rather than code you're hosting.

You ask for too much at once

"Build me an authentication system with JWT tokens, refresh token rotation, password reset flow, and OAuth integration with Google and GitHub."

What you get back is either a superficial sketch that handles none of these properly, or a wall of code so large you can't evaluate whether it's correct.

AI tools work better as collaborators than contractors. They're good at helping you think through one piece at a time, less good at architecting complete systems in a single response.

Break it down:

  • "I need to implement JWT authentication in Express. Let's start with just the login endpoint. What's the basic structure?"
  • Then: "How should I handle token expiration?"
  • Then: "Now I want to add refresh tokens. What changes?"

Each response is small enough to understand and verify. You can test each piece before moving on. You catch misunderstandings early instead of discovering them after you've built three features on a broken foundation.
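
For example, that first ask might come back as something like this (a minimal sketch assuming Express and the jsonwebtoken package; findUser and checkPassword are hypothetical stand-ins for your own user lookup and password check):

JavaScript
const express = require('express')
const jwt = require('jsonwebtoken')

const app = express()
app.use(express.json())

// POST /login: verify credentials, then issue a short-lived JWT
app.post('/login', async (req, res) => {
  const { email, password } = req.body

  // Placeholders for your own user lookup and password verification
  const user = await findUser(email)
  if (!user || !(await checkPassword(user, password))) {
    return res.status(401).json({ error: 'Invalid credentials' })
  }

  const token = jwt.sign({ sub: user.id }, process.env.JWT_SECRET, {
    expiresIn: '15m',
  })
  res.json({ token })
})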

This also lets you inject your own judgment along the way. Maybe the AI suggests a library you don't want to add as a dependency. Maybe it assumes you're using a database you're not using. Small steps let you correct course.

The instinct to ask for everything at once comes from treating the AI like a vending machine: put in a request, get out a result. It's more useful as a thinking partner. You're building together, not ordering a finished product.

You describe problems vaguely

"My code doesn't work" is not a problem description. Neither is "it's broken" or "I'm getting an error."

This comes down to the same discipline that makes someone good at debugging in general: being precise about expected behavior versus actual behavior.

Compare:

Vague: "My API calls aren't working"

Specific: "I'm making a POST request to /api/users with a JSON body. The server receives the request (I see the log) but req.body is empty. I'm using Express. Here's my route and how I'm making the request."

The vague version forces the AI to play twenty questions. Is it a CORS issue? Network error? Server-side problem? Client-side problem? Authentication? Parsing?

The specific version points directly at the problem. The AI can immediately recognize this as a missing body parser middleware issue (or whatever it actually is) and give you the fix.
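
If it really is the missing middleware, the fix is a one-liner (a sketch assuming Express 4.16 or later, where express.json() is built in):

JavaScript
const express = require('express')
const app = express()

// Without this line, Express never parses JSON request bodies,
// so req.body stays undefined in the route handlers below
app.use(express.json())

app.post('/api/users', (req, res) => {
  console.log(req.body) // now contains the parsed JSON
  res.status(201).json(req.body)
})

app.listen(3000)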

Precision saves time. Thirty seconds spent describing the problem accurately saves five minutes of clarifying questions. And the act of describing precisely often helps you solve it yourself. Half the time I carefully write out a problem, I spot the issue before hitting send.

You don't tell it what you actually want

The AI has defaults. For JavaScript, it might reach for TypeScript. For React, it might use hooks from libraries you haven't installed. For styling, it might give you Tailwind when you're using plain CSS. For HTTP clients, it might suggest axios when you prefer fetch.

If you don't specify, you get its defaults, then spend time either converting to what you wanted or adding dependencies you didn't need.
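
To make the axios-versus-fetch case concrete (a hypothetical illustration, not output from any particular tool):

JavaScript
// The default-flavored answer: pulls in axios, a dependency you didn't ask for
// import axios from 'axios'
// const { data } = await axios.post('/api/users', { name })

// What you actually wanted: the built-in fetch, no new dependency
async function createUser(name) {
  const res = await fetch('/api/users', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ name }),
  })
  if (!res.ok) throw new Error(`Request failed: ${res.status}`)
  return res.json()
}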

State your constraints upfront:

  • "I want to use just the standard library, no external dependencies"
  • "We're using React with plain CSS, not Tailwind or CSS-in-JS"
  • "Keep it simple. This is a prototype, not production code"
  • "I need this to work in Node 16, so no features from later versions"
  • "We use Prisma for database access, not raw SQL"

This also applies to style preferences. If you have opinions about code organization, error handling, or naming conventions, say so. "I prefer explicit error handling over try-catch wrappers," or "keep functions small, under 20 lines," or "don't use abbreviations in variable names."

You're giving the AI information it needs to give you something useful. Without constraints, it guesses. With constraints, it collaborates.

You trust it too much

AI tools hallucinate. They present incorrect information with the same confidence as correct information. This is structural to how these systems work and won't be fixed soon.

Specific failure modes I've seen:

Hallucinated APIs: The AI suggests a function that doesn't exist, or exists but with a different signature. It looks right. It's plausible. But it won't run.

Deprecated patterns: Training data includes old code. The AI might suggest a React lifecycle method in a hooks-based component, or use a library feature that was removed two versions ago.

Plausible-sounding nonsense: For less common frameworks or libraries, the AI sometimes generates code that looks reasonable but reflects a misunderstanding of how the tool works.

Confident incorrectness: Asked about library behavior, it might state something definitively that's simply wrong. Not hedged, not uncertain. Just wrong.

The appropriate response is verification, not avoidance. Check that the function exists. Check that the library you're told to install is real and maintained. Test the code. Read documentation when something seems off.

A useful heuristic: the more specific and obscure the claim, the more likely it needs verification. The AI is more reliable on common patterns in popular frameworks than on edge cases in niche libraries.

These tools remain useful. But they're useful the way a smart colleague with patchy knowledge is useful. You'd check their work. Check the AI's work too.

You don't iterate

The first response is rarely the best response. It's a starting point.

Maybe the code works but it's verbose. You can ask: "Can you simplify this? It feels more complicated than it needs to be."

Maybe you realize you forgot to mention a constraint. Add it: "Actually, this needs to handle the case where the user isn't logged in. How would that change things?"

Maybe you want to understand better: "Why did you use a Set here instead of an array?"

Maybe you want alternatives: "What's another way to approach this? I'm not sure about adding that dependency."
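
To make the first of those asks concrete, here's a hypothetical before-and-after (illustrative, not output from any particular tool):

JavaScript
// First draft: works, but with more ceremony than the problem needs
function getActiveUserNamesFirstDraft(users) {
  const result = []
  users.forEach((user) => {
    if (user.isActive === true) {
      result.push(user.name)
    }
  })
  return result
}

// After "Can you simplify this?": same behavior, half the code
const getActiveUserNames = (users) =>
  users.filter((user) => user.isActive).map((user) => user.name)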

The developers who get the most from AI tools treat the conversation as a collaboration. They push back, ask for changes, request explanations. They don't accept the first draft as final.

This mirrors how experienced developers work with human collaborators. You don't take the first suggestion uncritically. You discuss, refine, sometimes reject. The AI is a participant in your thinking process, not an oracle dispensing answers.

The underlying skill

What ties all of this together looks like a communication skill, but it's really about clarity of thought.

To give good context, you have to understand what context is relevant. To ask precise questions, you have to know what you're actually confused about. To state constraints, you have to know what you want. To verify outputs, you have to have enough understanding to evaluate them.

The developers who struggle with AI tools often struggle with the same things in other contexts: writing unclear bug reports, asking vague questions in meetings, producing requirements that leave out critical details.

Good use of AI depends on clear thinking. The AI can't read your mind, so you have to articulate what's in there. That articulation is valuable whether or not the AI gives you what you need.

The goal is to become good at knowing what you want and communicating it precisely. The AI is just the context where that skill becomes immediately, obviously useful.

Practice your prompting with Imagine

Want to put these prompting skills to work on something more substantial? Try them out at Imagine and see how the same habits that make you effective with AI coding assistants apply directly to building real applications.