AcademyAI Fundamentals › Module 02

Prompting and Practical Workflows

The quality of what you ask determines the quality of what you get. This module teaches you how to prompt AI tools effectively, structure requests for professional work, and build AI into your daily workflows without adding new risk.

⏱ 30–35 min · 3 knowledge checks · Practical examples throughout
1. Why prompting is a professional skill

Ask ten different people to use an AI tool for the same task and you'll get ten very different experiences — not because the tool is inconsistent, but because of what they asked for and how they asked for it. Prompting is a learnable skill, and the gap in output quality between someone who prompts well and someone who doesn't is significant.

This matters more in a professional context than a casual one. A vague prompt in a personal setting means a slightly off answer you can ignore. A vague prompt in a professional context means a draft you have to almost entirely rewrite, or an analysis that misses the point of what you actually needed.

Core principle

AI tools respond to the context you give them. The more clearly you communicate your role, your task, your constraints, and the format you need — the more useful the output. Vague in, vague out. Specific in, specific out.

There's a common misconception that good prompting means finding "magic words" or specific tricks. It doesn't. Good prompting means communicating clearly — the same skills you use when briefing a colleague, writing a project scope, or explaining a problem to a new team member. The difference is that with AI, you have to provide context that a colleague would already know from working with you.

Think of it this way: if you hired a highly capable contractor with no context about your project and asked them "can you help with the thing we talked about?", you'd get a confused response. Give that same contractor a clear brief — who you are, what you're working on, what specifically you need, what constraints apply — and they'll produce something useful. AI is the same.

2. The anatomy of a good prompt

A well-structured prompt has four components. You don't always need all four — simple requests can be simple — but understanding each one helps you diagnose why a prompt isn't working and fix it.

The four components of an effective prompt (worked example):

1. Role / context: "You are a senior Business Analyst working on an insurance policy administration system modernisation project for a large Canadian carrier."
2. Task: "Draft five discovery questions to ask the business stakeholders at our kick-off session. Focus on understanding current pain points with the legacy system, data migration concerns, and integration dependencies with downstream claims systems."
3. Additional context: "The audience is VP-level and above. They are non-technical but commercially aware. Avoid jargon. Questions should be open-ended and designed to surface concerns they may not volunteer unprompted."
4. Output format: "Present as a numbered list. For each question, include a one-line note explaining what intelligence it's designed to surface."

In short:
- Role / context — who you are, what project, what environment
- Task — what exactly you need the AI to produce
- Context — constraints, audience, nuances that shape the output
- Format — how you want the output structured

You won't always need all four sections. A simple request like "explain what a webhook is in plain English" needs no role, context, or format instruction. But for any professional task — a draft document, an analysis, a set of options — building in these components will dramatically improve what you get back.
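The four-component structure can be sketched as a small template helper. This is a minimal illustration, not part of any tool's API; the function name and parameters are hypothetical, and only the role and task are treated as required, matching the advice above:

```python
def build_prompt(role: str, task: str, context: str = "", output_format: str = "") -> str:
    """Assemble the four prompt components into one request string.

    Only role and task are required; simple requests can skip the rest.
    """
    sections = [role, task]
    if context:
        sections.append(context)
    if output_format:
        sections.append("Format: " + output_format)
    return "\n\n".join(sections)


# Hypothetical usage, mirroring the worked example above:
prompt = build_prompt(
    role="You are a senior Business Analyst on an insurance platform modernisation project.",
    task="Draft five discovery questions for the kick-off session with business stakeholders.",
    context="Audience is VP-level, non-technical but commercially aware. Avoid jargon.",
    output_format="Numbered list; one-line note per question on what it is designed to surface.",
)
```

The point of the sketch is the diagnostic value: when a prompt underperforms, check which of the four arguments you effectively left blank.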

The most common mistake

Skipping the context. "Write me a status report" gives the AI almost nothing to work with. "Write a status report for a core insurance platform migration project, week 6 of 20. We're on schedule for the policy data migration milestone but have a risk around the legacy API that needs escalating to the steering committee. Audience is non-technical executives." — that produces something you can actually use.

Knowledge Check
Which of the following prompts is most likely to produce a useful result for a professional task?
3. Prompting patterns that consistently work

Beyond the basic four-component structure, there are several prompting patterns that are particularly useful in professional IT contexts. These aren't tricks — they're ways of giving the AI better information so it can produce better output.

Pattern 1: Weak prompt vs strong prompt — side by side

❌ Weak prompt
"Summarise this meeting"
Why it underperforms: No audience, no format, no indication of what's important. You'll get a generic transcript summary that may not capture the decisions or actions you actually need.
✓ Strong prompt
"Summarise the key decisions, open questions, and action items from this meeting transcript. Audience is the project sponsor who didn't attend. Format: three sections with bullet points. Flag any items that need a decision before next week's steering committee."
Why it works: Clear audience, clear structure, clear priority filter. The output will be something you can forward directly with minimal editing.
❌ Weak prompt
"Explain this code"
Why it underperforms: You'll get a generic line-by-line explanation. Useful, but probably not what you actually needed — which might be understanding the business logic, the performance implications, or the risk of changing a specific block.
✓ Strong prompt
"I'm reviewing this Java method that handles insurance premium calculations. Explain what it does in plain business terms, identify any edge cases that might cause incorrect results, and flag anything that looks like it might have performance issues at scale. Assume the audience is a non-technical BA."
Why it works: Specific domain, specific concerns, specific output audience. The AI now knows to prioritise business-language explanation over technical detail.

Pattern 2: Iteration. Treat AI conversations as a dialogue, not a one-shot query. If the first response isn't quite right, don't start over — refine. "That's good but too formal for this audience — can you make it more direct?" or "The summary is too long — cut it to the five most important points" or "Add a section on risks we haven't mentioned yet." Each iteration costs you seconds and improves the output significantly.
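The iteration pattern maps directly onto how chat-style model APIs represent a conversation: a growing list of messages, where each refinement is appended rather than sent as a fresh request. A minimal sketch — the role/content dictionary shape follows the common chat-API convention, with no specific vendor or endpoint assumed:

```python
# A chat-style conversation history. Refining within the same conversation
# means appending a new user turn; the earlier turns stay in the list, so the
# model keeps the full context of the first draft when producing the revision.
messages = [
    {"role": "user", "content": "Summarise the key decisions and action items from this transcript: ..."},
    {"role": "assistant", "content": "<first draft of the summary>"},
    # Refine rather than restart:
    {"role": "user", "content": "That's good but too formal for this audience. Make it more direct."},
]
```

Starting over throws that history away; appending keeps it, which is why iteration beats re-prompting from scratch.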

Pattern 3: Ask for options, not answers. Rather than "what should I do about this architecture decision?", try "give me three different approaches to solving this integration problem, with the tradeoffs of each." You get richer material to work with and your judgment decides which direction to take. This is particularly valuable for decisions where context you hold (political, relational, historical) is as important as technical considerations.

Pattern in practice — QA context

Situation: You're a QA engineer who has inherited a test suite with poor coverage. You want to use AI to help identify gaps.

Weak approach: "Find gaps in my test coverage" — requires the AI to have context it doesn't have.

Strong approach: "I'm working on a test suite for an insurance billing module. The module handles premium calculation, payment processing, and invoice generation. Here is the list of test cases we currently have: [paste list]. What business scenarios and edge cases are likely missing from this coverage? Think particularly about boundary conditions, error states, and regulatory compliance scenarios relevant to Canadian insurance billing."

The result: The AI has the context it needs to generate genuinely useful gap analysis — not generic test suggestions, but scenarios specific to the domain you described.

Pattern 4: Provide examples. If you need output in a specific style or format, show the AI what you mean rather than trying to describe it. "Here's an example of a requirements statement we use on this project: [example]. Write five more in the same format covering [topics]." Matching existing formats and styles is something AI tools do very well when given a reference.
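Pattern 4 can be sketched as assembling a few-shot prompt: reference examples first, then the request to match them. The helper below is illustrative only (its name and parameters are not from any library):

```python
def few_shot_prompt(instruction: str, examples: list[str], topics: list[str]) -> str:
    """Build a prompt that shows reference examples before asking for new
    items in the same format (the 'provide examples' pattern)."""
    example_block = "\n".join("Example: " + e for e in examples)
    topic_block = ", ".join(topics)
    return (
        example_block
        + "\n\n"
        + instruction
        + " covering: " + topic_block
        + ". Match the format of the examples exactly."
    )


# Hypothetical usage with a made-up requirements statement:
prompt = few_shot_prompt(
    instruction="Write five more requirements statements in the same format",
    examples=["The system shall generate an invoice within 24 hours of policy renewal."],
    topics=["payment processing", "premium calculation"],
)
```

Showing one real example from your project usually beats several sentences describing the format.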

Knowledge Check
You've asked AI to draft a project risk register and the output is good but the risk descriptions are too technical for your executive audience. What's the best next step?
4. Integrating AI into your professional workflows

The most effective AI users aren't the ones who use it for everything — they're the ones who've identified the specific points in their workflow where it creates the most value and built it in there deliberately. The goal isn't to use AI more. It's to work better.

Here are the highest-value integration points for IT professionals, organised by the type of work:

Where AI adds the most value in IT professional workflows:

- Before a meeting: prepare questions, research context fast, draft agendas
- After a meeting: summarise notes/transcripts, extract action items, draft follow-up emails
- Documentation: first-draft requirements, process descriptions, user guides, release notes
- Learning: explain unfamiliar tech, summarise long docs, compare approaches
- Analysis & decisions: structure complex problems, generate options and tradeoffs, identify missed considerations
- Code & technical: generate boilerplate, explain inherited code, write unit tests
- Communication: first-draft status reports, difficult emails, presentation structure

In all cases, AI produces the first layer; your expertise and judgment determine the final quality.

High-value AI integration points for IT professionals — the first layer of drafting, structuring, and preparation

What to avoid: Using AI for tasks where accuracy is critical and verification is difficult. Specific client data, regulatory requirements, legal commitments, or technical specifications that will be used in production systems — these need primary sources, not AI synthesis. This isn't a reason not to use AI; it's a reason to be clear about when you're in draft-and-review mode versus reference mode.

Do not use AI as a primary source for

Specific regulatory requirements, client contract terms, financial figures, legal obligations, technical specifications you'll build to, or medical/safety-critical information. AI can help you understand these areas and prepare questions — but the authoritative source must always be the original document, not an AI summary of it.

Knowledge Check
An architect is evaluating three approaches to an API integration and wants to use AI to help with the decision. Which approach will get the most useful result?
5. Module summary — what you've learned

The four components

Role/context, task, additional context, and output format. Use all four for complex professional tasks. The most commonly skipped — and most valuable — is context.

Iteration is the method

Treat AI as a dialogue. If the first output is close but not right, refine within the same conversation. Don't restart — context is valuable and conversation history is free.

Ask for options, not answers

For decisions with real consequences, ask AI to generate options and tradeoffs — not to make the decision. Your judgment, context, and accountability determine the final call.

Where to integrate it

Highest value: pre- and post-meeting preparation, documentation first drafts, communication, analysis structuring, code explanation. Not appropriate: as a primary source for facts, regulations, and specs.

Ready for Module 03

Module 03 — Judge — covers the other side of the coin: where AI fails, how to spot it, and how to use AI responsibly in professional and enterprise environments. This is where most professionals have the biggest gaps in their understanding.

Module 02 Complete

You've finished Use — the practical prompting foundation. Your progress has been saved. When you're ready, continue to Module 03: Judge.