Prompting and Practical Workflows
The quality of what you ask determines the quality of what you get. This module teaches you how to prompt AI tools effectively, structure requests for professional work, and build AI into your daily workflows without adding new risk.
Why prompting is a professional skill
Ask ten different people to use an AI tool for the same task and you'll get ten very different experiences — not because the tool is inconsistent, but because of what they asked for and how they asked for it. Prompting is a learnable skill, and the gap in output quality between someone who prompts well and someone who doesn't is significant.
This matters more in a professional context than a casual one. A vague prompt in a personal setting means a slightly off answer you can ignore. A vague prompt in a professional context means a draft you have to almost entirely rewrite, or an analysis that misses the point of what you actually needed.
AI tools respond to the context you give them. The more clearly you communicate your role, your task, your constraints, and the format you need, the more useful the output. Vague in, vague out. Specific in, specific out.
There's a common misconception that good prompting means finding "magic words" or specific tricks. It doesn't. Good prompting means communicating clearly — the same skills you use when briefing a colleague, writing a project scope, or explaining a problem to a new team member. The difference is that with AI, you have to provide context that a colleague would already know from working with you.
Think of it this way: if you hired a highly capable contractor with no context about your project and asked them "can you help with the thing we talked about?", you'd get a confused response. Give that same contractor a clear brief — who you are, what you're working on, what specifically you need, what constraints apply — and they'll produce something useful. AI is the same.
The anatomy of a good prompt
A well-structured prompt has four components: a role or persona, the task itself, additional context, and the output format you need. You won't always need all four — simple requests can be simple, and a request like "explain what a webhook is in plain English" needs no role, context, or format instruction. But for any professional task — a draft document, an analysis, a set of options — building in these components will dramatically improve what you get back, and understanding each one helps you diagnose why a prompt isn't working and fix it.
The most common mistake is skipping the context. "Write me a status report" gives the AI almost nothing to work with. "Write a status report for a core insurance platform migration project, week 6 of 20. We're on schedule for the policy data migration milestone but have a risk around the legacy API that needs escalating to the steering committee. Audience is non-technical executives." — that produces something you can actually use.
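The four components can be made concrete with a small helper that assembles a prompt from them. This is an illustrative sketch only: `build_prompt` and its field names are invented for this module, not part of any particular tool.

```python
def build_prompt(role, task, context=None, output_format=None):
    """Assemble a prompt from the four components.

    Only `task` is required -- simple requests can stay simple.
    """
    parts = []
    if role:
        parts.append(f"You are {role}.")
    parts.append(f"Task: {task}")
    if context:
        parts.append(f"Context: {context}")
    if output_format:
        parts.append(f"Format: {output_format}")
    return "\n".join(parts)

# The status-report example above, expressed through the template:
prompt = build_prompt(
    role="an assistant to a project manager on an insurance platform migration",
    task="Write a weekly status report.",
    context=("Week 6 of 20. On schedule for the policy data migration "
             "milestone; a legacy API risk needs escalating to the "
             "steering committee."),
    output_format="Short report for non-technical executives.",
)
print(prompt)
```

Notice that every argument except `task` is optional: the structure mirrors the advice above, where you add components as the stakes of the task go up.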
Prompting patterns that consistently work
Beyond the basic four-component structure, there are several prompting patterns that are particularly useful in professional IT contexts. These aren't tricks — they're ways of giving the AI better information so it can produce better output.
Pattern 1: Weak prompt vs strong prompt — side by side
Situation: You're a QA engineer who has inherited a test suite with poor coverage. You want to use AI to help identify gaps.
Weak approach: "Find gaps in my test coverage" — requires the AI to have context it doesn't have.
Strong approach: "I'm working on a test suite for an insurance billing module. The module handles premium calculation, payment processing, and invoice generation. Here is the list of test cases we currently have: [paste list]. What business scenarios and edge cases are likely missing from this coverage? Think particularly about boundary conditions, error states, and regulatory compliance scenarios relevant to Canadian insurance billing."
The result: The AI has the context it needs to generate genuinely useful gap analysis — not generic test suggestions, but scenarios specific to the domain you described.
Pattern 2: Iteration. Treat AI conversations as a dialogue, not a one-shot query. If the first response isn't quite right, don't start over — refine. "That's good but too formal for this audience — can you make it more direct?" or "The summary is too long — cut it to the five most important points" or "Add a section on risks we haven't mentioned yet." Each iteration costs you seconds and improves the output significantly.
Pattern 3: Ask for options, not answers. Rather than "what should I do about this architecture decision?", try "give me three different approaches to solving this integration problem, with the tradeoffs of each." You get richer material to work with and your judgment decides which direction to take. This is particularly valuable for decisions where context you hold (political, relational, historical) is as important as technical considerations.
Pattern 4: Provide examples. If you need output in a specific style or format, show the AI what you mean rather than trying to describe it. "Here's an example of a requirements statement we use on this project: [example]. Write five more in the same format covering [topics]." Matching existing formats and styles is something AI tools do very well when given a reference.
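The iteration pattern maps directly onto how chat-style tools work: each refinement is appended to the same message history, so earlier turns keep supplying context. Below is a minimal Python sketch of that mechanic; the `send` function is a hypothetical stand-in for whatever chat client your tool actually provides.

```python
def send(messages):
    # Hypothetical stand-in for a real chat-completion call.
    # A real client would send the whole history and return the reply.
    return f"(reply to: {messages[-1]['content']!r})"

messages = [{"role": "user",
             "content": "Draft a status update for the billing module."}]
reply = send(messages)
messages.append({"role": "assistant", "content": reply})

# Refine in the same conversation instead of starting over:
messages.append({"role": "user",
                 "content": "Good, but too formal for this audience. "
                            "Make it more direct and cut it to five points."})
reply = send(messages)
messages.append({"role": "assistant", "content": reply})
```

Because the whole list is sent each time, the second request doesn't need to restate what the status update is about — which is why refining within one conversation beats restarting.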
Integrating AI into your professional workflows
The most effective AI users aren't the ones who use it for everything — they're the ones who've identified the specific points in their workflow where it creates the most value and built it in there deliberately. The goal isn't to use AI more. It's to work better.
Here are the highest-value AI integration points for IT professionals — the first layer of drafting, structuring, and preparation: meeting preparation and follow-up, first drafts of documentation, routine communication, structuring an analysis, and explaining unfamiliar code.
What to avoid: Using AI for tasks where accuracy is critical and verification is difficult. Specific client data, regulatory requirements, legal commitments, or technical specifications that will be used in production systems — these need primary sources, not AI synthesis. This isn't a reason not to use AI; it's a reason to be clear about whether you're in draft-and-review mode or reference mode.
Never treat AI as the primary source for specific regulatory requirements, client contract terms, financial figures, legal obligations, technical specifications you'll build to, or medical/safety-critical information. AI can help you understand these areas and prepare questions — but the authoritative source must always be the original document, not an AI summary of it.
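The draft-and-review distinction can be built into a workflow so the review step can't be skipped by accident. A minimal illustrative sketch, where `draft_with_ai`, `finalize`, and the reviewer name are all hypothetical, not part of any real tool:

```python
def draft_with_ai(brief):
    # Hypothetical stand-in for an AI call that returns a first draft.
    return f"[DRAFT] {brief}"

def finalize(draft, reviewer, source_checked):
    """Draft-and-review mode: nothing ships without a named human
    reviewer, and factual claims must be checked against the
    primary source, not the AI's summary of it."""
    if reviewer is None:
        raise ValueError("a human must review the AI draft")
    if not source_checked:
        raise ValueError("verify facts against the primary source first")
    return f"{draft} (reviewed by {reviewer})"

report = finalize(draft_with_ai("week 6 status update"),
                  reviewer="J. Chen", source_checked=True)
print(report)
```

The point of the sketch is the shape of the workflow, not the code: the AI output is explicitly labelled a draft, and it can only become a deliverable by passing through a human gate.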
Module summary — what you've learned
The four components
Role, task, additional context, and output format. Use all four for complex professional tasks. The most commonly skipped — and most valuable — is context.
Iteration is the method
Treat AI as a dialogue. If the first output is close but not right, refine within the same conversation. Don't restart — context is valuable and conversation history is free.
Ask for options, not answers
For decisions with real consequences, ask AI to generate options and tradeoffs — not to make the decision. Your judgment, context, and accountability determine the final call.
Where to integrate it
Highest value: pre/post meeting preparation, documentation first drafts, communication, analysis structuring, code explanation. Lowest appropriate use: primary source for facts, regulations, and specs.
Module 03 — Judge — covers the other side of the coin: where AI fails, how to spot it, and how to use AI responsibly in professional and enterprise environments. This is where most professionals have the biggest gaps in their understanding.
You've finished Use — the practical prompting foundation. Your progress has been saved. When you're ready, continue to Module 03: Judge.