Planning at Delivery Speed
The planning artefacts that take a PM days — scope documents, schedule frameworks, risk registers, stakeholder maps, kickoff decks — follow recognisable structures that AI handles well. What AI can't do is fill them with the project intelligence you've built from conversations, context, and delivery experience. This module shows how to use AI for the structure while keeping your judgment at the centre.
Where PM planning time actually goes — and what AI changes
Ask a PM what took most of their time at the start of a new engagement and the answer is usually some version of: building the project plan, writing the scope document, preparing the kickoff deck, setting up the risk register, drafting the governance framework. These artefacts are essential. They're also structurally similar across every Guidewire implementation, every insurance IT delivery, every project of similar type.
AI is good at structurally similar things. A Guidewire PolicyCenter implementation has a recognisable set of phases, workstreams, integration dependencies, and risk categories. A Guidewire BillingCenter go-live has a predictable set of cutover risks and UAT considerations. A mid-size insurance IT project has a standard governance structure. AI generates credible first-draft structures for all of these — in minutes, not days.
What it doesn't know: the specific client context, the political dynamics between the IT and business sponsors, the technical debt that will make the integration harder than the scope suggests, the previous implementation that failed in a similar way, the team composition that will affect how realistic the schedule actually is. That intelligence is yours. AI builds the shell; you fill it with what makes it true.
AI-generated plans look credible. That's their risk as well as their value. A plan that looks professionally structured but contains assumptions that don't apply to this engagement is worse than a rougher plan built from your actual knowledge of the project. AI generates the structure; you are responsible for making sure every assumption in that structure is tested against reality before it's presented to a sponsor or client.
Scope and schedule acceleration — the PM adds what AI can't
Project scope documents and schedule frameworks for Guidewire implementations follow a well-established structure. AI can generate a credible first draft from a minimal briefing. Your job is to inject the project-specific intelligence that makes the difference between a template and a deliverable.
Checks every assumption: AI assumes standard Guidewire integration patterns — PM confirms whether the billing system integration has a published API or requires custom file-based integration, which changes the integration workstream scope and schedule significantly.
Adds the political context: AI doesn't know that the IT Director has committed to a go-live date at the board level that's tighter than the schedule the PM considers achievable. That dependency needs to be explicit in the document, with the PM's professional assessment of the risk.
Removes the generic items: AI includes a data quality workstream in the default structure. The PM knows from discovery that the legacy policy data is relatively clean and this workstream is likely a monitoring task, not a major programme component. The PM trims the plan accordingly.
Validates the out-of-scope list: AI lists "mobile portal development" as out-of-scope — the PM confirms this is correct, then adds "integration with the external MVR service" which AI missed as a scope boundary that needs explicit agreement.
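The four checks above can be front-loaded into the briefing itself, so the AI draft starts from verified facts rather than defaults. A minimal sketch, assuming a Python helper that assembles the prompt; all project details are hypothetical placeholders, not a real engagement:

```python
# Sketch: a scope-draft briefing that injects PM-verified intelligence
# up front, instead of letting AI substitute standard assumptions.
briefing = {
    "Project": [
        "Guidewire PolicyCenter implementation, personal lines, mid-size carrier",
    ],
    "Confirmed facts (do not replace with standard assumptions)": [
        "Billing integration is file-based; no published API exists, "
        "so size the integration workstream for custom development",
        "Legacy policy data verified clean in discovery; treat data "
        "quality as a monitoring task, not a major workstream",
    ],
    "Constraints": [
        "Board-committed go-live date is tighter than the PM's achievable "
        "estimate; surface this as an explicit schedule risk",
    ],
    "Explicitly out of scope (needs client sign-off)": [
        "Mobile portal development",
        "Integration with the external MVR service",
    ],
}

# Assemble the briefing into a single prompt for the AI assistant.
prompt_lines = [
    "Draft a first-cut scope document and schedule framework "
    "for the project below."
]
for heading, items in briefing.items():
    prompt_lines.append(heading + ":")
    prompt_lines.extend(f"- {item}" for item in items)
prompt = "\n".join(prompt_lines)
```

The design point is the ordering: corrections to AI's defaults go in before generation, not after, which is where most of the rewriting time is saved.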
Risk register first drafts — standard categories plus your specific intelligence
Risk registers for insurance IT projects have a predictable set of standard categories: requirements quality, data migration complexity, integration dependencies, testing resource availability, business change readiness, regulatory compliance, key person dependency. AI populates these reliably. What it misses are the project-specific risks that come from this client's history, this team's dynamics, and this implementation's particular context.
AI generates comprehensive standard risk categories. The risks it misses are the ones that require knowing something about this specific engagement: the key person who holds institutional knowledge and is planning to leave, the regulatory deadline that conflicts with the testing schedule, the political tension between the IT and business sponsors that will affect decision-making speed, the previous implementation attempt that failed and why. These are the risks that actually drive project outcomes. You have them; AI doesn't. They go in the register explicitly — not as generic "key person dependency" but as specific named risks with specific mitigations.
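The difference between a category and a named risk is concrete. A sketch of the two entries side by side, with hypothetical names and details:

```python
# What AI produces by default: a correct but generic category entry.
generic = {
    "category": "Key person dependency",
    "mitigation": "Cross-train team members",
}

# What the PM writes: the same category, made specific enough to act on.
specific = {
    "category": "Key person dependency",
    "risk": "Lead configuration analyst, sole holder of rating-engine "
            "knowledge, has signalled intent to leave before UAT",
    "impact": "Rating defects found in UAT cannot be triaged at pace; "
              "go-live date at risk",
    "mitigation": "Pair a second analyst on rating work from sprint 3; "
                  "document rating decisions in the shared wiki weekly",
    "owner": "PM",
    "review": "Fortnightly at delivery steering",
}
```

The generic entry satisfies a template review; the specific entry changes what the team does next sprint.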
Kickoff and stakeholder materials — structure from AI, narrative from you
Project kickoff materials follow a structure that AI handles well: objectives, scope, timeline, governance model, team introductions, workstream overview, what success looks like, what we're asking of each stakeholder group. The structure is predictable. What makes a kickoff effective — the specific framing that this client needs to hear, the tone that matches the sponsor's style, the acknowledgement of the previous attempt that didn't go well — comes from you.
The context section of a kickoff prompt — the failed previous implementation, the nervous IT Director, the enthusiastic business sponsor, the "not doing it again" framing — is what makes the AI output useful rather than generic. AI can generate a standard kickoff deck in minutes. A kickoff that actually works for this specific room takes your knowledge of the people and history in it. The more specific context you provide, the less rewriting you do after.
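A sketch of what that context section looks like in practice, assuming a hypothetical client history; the structural asks come from the standard outline, the context block is the part only the PM can supply:

```python
# Everything in this context block is PM knowledge the AI cannot infer.
# Client details are hypothetical, for illustration only.
context = (
    "This client attempted a ClaimCenter implementation two years ago that "
    "was cancelled during UAT. The IT Director is nervous and will read the "
    "deck for signs of the same mistakes. The business sponsor is "
    "enthusiastic and wants an ambitious tone. Frame the whole deck as "
    "'here is what we are doing differently this time' without labelling "
    "the previous attempt a failure."
)

prompt = (
    "Draft a project kickoff deck outline covering: objectives, scope, "
    "timeline, governance model, team introductions, workstream overview, "
    "what success looks like, and what we are asking of each stakeholder "
    "group.\n\n"
    "Context that must shape tone and framing:\n" + context
)
```

Without the context block, the same prompt returns a deck that would suit any implementation and land badly in this particular room.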
Module summary
AI builds the structure, you build the truth
Scope documents, schedules, risk registers, kickoff materials — AI generates credible first-draft structures from your project brief. Your job: inject the project-specific intelligence that makes each assumption in that structure accurate for this engagement.
Every assumption requires a test
AI-generated plans look professionally structured. That's their risk as well as their value. Before any planning artefact reaches a sponsor or client, every assumption must be tested against what you actually know about this project. A polished wrong plan is worse than a rough right one.
Specific risks require specific entries
AI covers standard risk categories. The risks that actually determine project outcomes — the key person about to leave, the schedule gap that hasn't been acknowledged, the political dynamic between sponsors — come from you. They go in the register by name, not by category.
Context is the multiplier
The more specific context you give AI — client history, team dynamics, known constraints, framing requirements — the less rewriting you do after. Generic prompts produce generic output. Project-specific context produces a first draft you can actually use.
Module 02 — Status, Reporting, and the Truth Problem — covers the ongoing delivery side of AI assistance: status updates, steering committee reporting, and the specific discipline of accuracy when AI makes it dangerously easy to sound confident about things that aren't confirmed.
Planning at Delivery Speed is done. Continue to Module 02: Status, Reporting, and the Truth Problem.