AcademyPM Accelerator › Module 01

Planning at Delivery Speed

The planning artefacts that take a PM days — scope documents, schedule frameworks, risk registers, stakeholder maps, kickoff decks — follow recognisable structures that AI handles well. What AI can't do is fill them with the project intelligence you've built from conversations, context, and delivery experience. This module shows how to use AI for the structure while keeping your judgment at the centre.

⏱ 30–35 min · 3 knowledge checks · Guidewire / insurance IT delivery context
1. Where PM planning time actually goes — and what AI changes

Ask a PM what took most of their time at the start of a new engagement and the answer is usually some version of: building the project plan, writing the scope document, preparing the kickoff deck, setting up the risk register, drafting the governance framework. These artefacts are essential. They're also structurally similar across every Guidewire implementation, every insurance IT delivery, every project of similar type.

AI is good at structurally similar things. A Guidewire PolicyCenter implementation has a recognisable set of phases, workstreams, integration dependencies, and risk categories. A Guidewire BillingCenter go-live has a predictable set of cutover risks and UAT considerations. A mid-size insurance IT project has a standard governance structure. AI generates credible first-draft structures for all of these — in minutes, not days.

What it doesn't know: the specific client context, the political dynamics between the IT and business sponsors, the technical debt that will make the integration harder than the scope suggests, the previous implementation that failed in a similar way, the team composition that will affect how realistic the schedule actually is. That intelligence is yours. AI builds the shell; you fill it with what makes it true.

Without AI — project initiation week
Half day building project plan structure from scratch or a previous project
Half day drafting scope document sections
3–4 hours populating risk register with standard categories
3 hours building kickoff deck slides and flow
Remaining time for stakeholder conversations and actual project intelligence
With AI — same initiation week
45 min reviewing and adapting AI project plan skeleton to this specific engagement
45 min reviewing scope document draft — filling in client-specific context AI can't know
30 min reviewing AI risk register — adding the risks that come from this specific client, team, and history
45 min reviewing kickoff deck — adding the narrative and specific framing this client needs
Significantly more time for stakeholder conversations and project intelligence gathering
The planning discipline that doesn't change

AI-generated plans look credible. That's their risk as well as their value. A plan that looks professionally structured but contains assumptions that don't apply to this engagement is worse than a rougher plan built from your actual knowledge of the project. AI generates the structure; you are responsible for making sure every assumption in that structure is tested against reality before it's presented to a sponsor or client.

2. Scope and schedule acceleration — the PM adds what AI can't

Project scope documents and schedule frameworks for Guidewire implementations follow a well-established structure. AI can generate a credible first draft from a minimal briefing. Your job is to inject the project-specific intelligence that makes the difference between a template and a deliverable.

Prompt — Guidewire PolicyCenter implementation scope document
Project context: Guidewire PolicyCenter implementation, Ontario personal auto and commercial auto lines, mid-size regional insurer, greenfield implementation (no legacy Guidewire). Approximately 18-month programme. Core integrations: billing system (legacy mainframe), claims system (ClaimCenter, already live). Approximately 12 FTE client team, 8 FTE implementation partner.
Task: Generate a project scope document structure for this implementation. Cover: project objectives, in-scope deliverables by workstream, explicitly out-of-scope items, assumptions, dependencies, and constraints.
Known project-specific context: The client IT team has strong mainframe expertise but limited Guidewire experience — the plan must include knowledge transfer. The commercial auto line has complex rating rules that are not yet fully documented — scope will include requirements workshops as a Phase 1 deliverable. Data migration from the legacy policy system is in scope for personal auto only; commercial auto will be keyed fresh.
Format: Section-by-section document structure. Mark any item whose scope boundary depends on a decision not yet made with [CONFIRM WITH CLIENT]. I will review every assumption against what's been agreed in the discovery sessions.
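
The [CONFIRM WITH CLIENT] convention in that prompt can be kept machine-checkable rather than buried in prose. A minimal sketch — every item and field name below is a hypothetical illustration, not part of a real scope document:

```python
# Hypothetical sketch: scope items carry an explicit flag when the
# boundary depends on a decision not yet made, so open questions can
# be pulled out for the next discovery session. Illustrative data only.
scope_items = [
    {"workstream": "Data migration",
     "item": "Personal auto policy data migration from legacy system",
     "confirm_with_client": False},
    {"workstream": "Data migration",
     "item": "Commercial auto policies keyed fresh (no migration)",
     "confirm_with_client": False},
    {"workstream": "Integration",
     "item": "Billing integration pattern (published API vs file-based)",
     "confirm_with_client": True},
]

# Every open scope boundary, ready for the sponsor conversation.
open_items = [i["item"] for i in scope_items if i["confirm_with_client"]]
print(open_items)
```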
What the PM adds after reviewing the AI draft

Checks every assumption: AI assumes standard Guidewire integration patterns — PM confirms whether the billing system integration has a published API or requires custom file-based integration, which changes the integration workstream scope and schedule significantly.

Adds the political context: AI doesn't know that the IT Director has committed to a go-live date at the board level that's tighter than the schedule the PM considers achievable. That dependency needs to be explicit in the document, with the PM's professional assessment of the risk.

Removes the generic items: AI includes a data quality workstream in the default structure. The PM knows from discovery that the legacy policy data is relatively clean and this workstream is likely a monitoring task, not a major programme component. Adjust accordingly.

Validates the out-of-scope list: AI lists "mobile portal development" as out-of-scope — the PM confirms this is correct, then adds "integration with the external MVR service" which AI missed as a scope boundary that needs explicit agreement.
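
Many schedule assumptions reduce to simple capacity arithmetic, which makes them easy to test before the plan reaches a sponsor. A minimal sketch — the function and all numbers are hypothetical illustrations, not figures from a real plan:

```python
# Illustrative only: sanity-checking an AI-proposed phase duration
# against the availability the team can actually commit.

def adjusted_phase_weeks(planned_weeks: float,
                         assumed_days_per_week: float,
                         committed_days_per_week: float) -> float:
    """Stretch a phase so the total number of working days stays constant."""
    total_days = planned_weeks * assumed_days_per_week
    return total_days / committed_days_per_week

# An AI draft assumes a 6-week UAT with testers available 5 days/week,
# but discovery says the business team can commit only 2 days/week.
print(adjusted_phase_weeks(6, 5, 2))  # → 15.0 weeks, not 6
```

The point is not the arithmetic itself but the habit: every duration in an AI-generated schedule encodes an availability assumption, and each one should be checked against what this team has actually committed.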

Knowledge Check
AI generates a project schedule for a Guidewire BillingCenter implementation. The schedule includes a 6-week UAT phase, which is standard for implementations of this type. You know from a previous engagement with this client that their business team has historically been able to commit only 2 days per week to testing due to operational demands. What should you do?

3. Risk register first drafts — standard categories plus your specific intelligence

Risk registers for insurance IT projects have a predictable set of standard categories: requirements quality, data migration complexity, integration dependencies, testing resource availability, business change readiness, regulatory compliance, key person dependency. AI populates these reliably. What it misses are the project-specific risks that come from this client's history, this team's dynamics, and this implementation's particular context.

Prompt — risk register for Guidewire implementation
Project context: 18-month Guidewire PolicyCenter implementation, Ontario personal and commercial auto, regional insurer, greenfield. Integration with legacy billing (mainframe) and live ClaimCenter.
Task: Generate a risk register with the standard risk categories for a Guidewire implementation of this type. For each risk: description, probability (H/M/L), impact (H/M/L), mitigation approach, and owner category (PM, technical lead, business sponsor, vendor).
Known specific risks: The commercial auto rating rules are not yet fully documented. The client's mainframe team lead — the only person with deep knowledge of the billing system APIs — is planning to retire mid-programme. The insurer has a regulatory filing due in month 8 that will require business team attention during what would otherwise be peak UAT.
Format: Risk register table. Include the specific risks I've flagged as high-priority items. Flag any risk whose mitigation depends on a decision or commitment not yet obtained — these need early sponsor conversations.
The risks AI consistently misses

AI generates comprehensive standard risk categories. The risks it misses are the ones that require knowing something about this specific engagement: the key person who holds institutional knowledge and is planning to leave, the regulatory deadline that conflicts with the testing schedule, the political tension between the IT and business sponsors that will affect decision-making speed, the previous implementation attempt that failed and why. These are the risks that actually drive project outcomes. You have them; AI doesn't. They go in the register explicitly — not as generic "key person dependency" but as specific named risks with specific mitigations.
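
The difference between a category entry and a named risk can be made concrete. A hypothetical sketch — the field names and risk details below are illustrative, echoing this module's examples, not a real register schema:

```python
# Illustrative sketch: the same underlying risk, first as the generic
# category AI reliably produces, then as the specific named entry the
# PM adds. All details are hypothetical examples.
from dataclasses import dataclass

@dataclass
class Risk:
    description: str
    probability: str   # "H" / "M" / "L"
    impact: str        # "H" / "M" / "L"
    mitigation: str
    owner: str

# What AI generates: correct, but too generic to act on.
generic = Risk(
    description="Key person dependency on legacy system knowledge",
    probability="M", impact="H",
    mitigation="Document critical knowledge; cross-train team members",
    owner="PM",
)

# What the PM adds: the same category, specific enough to drive action.
specific = Risk(
    description=("Mainframe team lead — sole holder of billing API "
                 "knowledge — retires mid-way through an 18-month programme"),
    probability="H", impact="H",
    mitigation=("Structured knowledge-transfer sessions from month 1; "
                "billing API documentation as a named early deliverable; "
                "sponsor commitment to a backfill hire"),
    owner="PM",
)
```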

Knowledge Check
AI generates a 24-item risk register for your PolicyCenter implementation. It's well-structured and covers all standard categories. You review it and notice that it doesn't include a risk you consider the most likely cause of delivery failure: the business sponsor has committed to go-live in month 14 at the board level, but the technical team's preliminary estimate puts the realistic delivery at month 18. This gap hasn't been formally acknowledged yet. What do you do?

4. Kickoff and stakeholder materials — structure from AI, narrative from you

Project kickoff materials follow a structure that AI handles well: objectives, scope, timeline, governance model, team introductions, workstream overview, what success looks like, what we're asking of each stakeholder group. The structure is predictable. What makes a kickoff effective — the specific framing that this client needs to hear, the tone that matches the sponsor's style, the acknowledgement of the previous attempt that didn't go well — comes from you.

Prompt — kickoff deck narrative and agenda
Context: I'm preparing materials for the kickoff of a Guidewire PolicyCenter implementation at a regional Ontario insurer. This is a strategic programme — the insurer's current policy system is end-of-life and this implementation is business-critical. Audience: IT leadership, business leadership, and the implementation team (~35 people). Duration: 2 hours.
Task: Generate: 1) a recommended kickoff agenda with time allocations, 2) talking points for the "what success looks like" section, and 3) a specific-asks section — what we need from each stakeholder group to deliver successfully.
Specific framing to incorporate: The insurer had a failed Guidewire implementation attempt 3 years ago with a different partner. This is explicitly not being positioned as "we're doing it again" — the framing is that we've learned from the industry's experience, the product has matured, and this team and approach are different. The IT Director is visibly nervous about committing to another large programme. The business sponsor is enthusiastic and needs to be partnered with, not managed.
Format: Agenda with time slots. Talking points as bullet frameworks — I'll personalise the language. Specific-asks section: one paragraph per stakeholder group (IT leadership, business sponsors, business SMEs, implementation team). Tone: direct, confident, not over-promising.

The context section of that prompt — the failed previous implementation, the nervous IT Director, the enthusiastic business sponsor, the "not doing it again" framing — is what makes the AI output useful rather than generic. AI can generate a standard kickoff deck in minutes. A kickoff that actually works for this specific room takes your knowledge of the people and history in it. The more specific context you provide, the less rewriting you do after.

Knowledge Check
AI generates kickoff talking points that include the line: "We have a proven implementation methodology that has delivered successful Guidewire implementations across Canada." You know that your firm's methodology is solid but that two of the reference implementations AI is likely referring to had significant overruns and one required a scope reduction to achieve go-live. What should you do with this talking point?

5. Module summary

AI builds the structure, you build the truth

Scope documents, schedules, risk registers, kickoff materials — AI generates credible first-draft structures from your project brief. Your job: inject the project-specific intelligence that makes each assumption in that structure accurate for this engagement.

Every assumption requires a test

AI-generated plans look professionally structured. That's their risk as well as their value. Before any planning artefact reaches a sponsor or client, every assumption must be tested against what you actually know about this project. A polished wrong plan is worse than a rough right one.

Specific risks require specific entries

AI covers standard risk categories. The risks that actually determine project outcomes — the key person about to leave, the schedule gap that hasn't been acknowledged, the political dynamic between sponsors — come from you. They go in the register by name, not by category.

Context is the multiplier

The more specific context you give AI — client history, team dynamics, known constraints, framing requirements — the less rewriting you do after. Generic prompts produce generic output. Project-specific context produces a first draft you can actually use.

Ready for Module 02

Module 02 — Status, Reporting, and the Truth Problem — covers the ongoing delivery side of AI assistance: status updates, steering committee reporting, and the specific discipline of accuracy when AI makes it dangerously easy to sound confident about things that aren't confirmed.
