AcademyPM Accelerator › Module 03

Risk, Issues, and Stakeholder Judgment

Risk management in insurance IT delivery is partly a structured process and partly a judgment call about people — who's actually aligned, who's hedging, where the real resistance is, and what conversation needs to happen before the issue log entry becomes a programme crisis. AI handles the structured process well. The judgment about people is irreducibly yours.

⏱ 30–35 min · 3 knowledge checks · Risk management / issue escalation
1. What AI can and can't do for risk management

Risk management has two components: the structured process of identifying, documenting, rating, and monitoring risks — and the judgment component of knowing which risks actually matter for this specific project, team, and client. AI handles the structured process reliably. The judgment component requires your knowledge of the specific context.

AI strong
Standard risk category generation
For any Guidewire implementation, AI can generate a comprehensive list of standard risk categories — requirements quality, data migration, integration complexity, testing resources, business change readiness, regulatory, key person dependency. Reliable first draft that you refine with specific context.
AI strong
Risk entry formatting and documentation
Give AI your risk description and it will format it with probability/impact ratings, mitigation approach, owner, and review date. Useful for maintaining a clean register without the formatting overhead. You supply the content; AI structures it consistently.
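To make the structured side concrete, a register entry can be modelled as a small data structure. This is a hypothetical sketch: the field names, the 1–5 scales, and the probability × impact exposure score are illustrative assumptions, not a prescribed register format.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical sketch: field names and the 1-5 scales are illustrative
# assumptions, not a standard. Adapt to your programme's register format.
@dataclass
class RiskEntry:
    risk_id: str
    description: str   # the specific, named risk, supplied by the PM
    probability: int   # 1 (rare) to 5 (almost certain)
    impact: int        # 1 (negligible) to 5 (severe)
    mitigation: str
    owner: str
    review_date: date

    def exposure(self) -> int:
        """Simple probability x impact score for sorting the register."""
        return self.probability * self.impact

risks = [
    RiskEntry("R-014", "Mainframe team lead retiring mid-programme",
              4, 5, "Begin knowledge transfer now", "PM", date(2026, 3, 1)),
    RiskEntry("R-007", "Regulatory filing in month 8 pulls business team from UAT",
              5, 5, "Re-sequence UAT around the filing window",
              "Business sponsor", date(2026, 1, 15)),
]

# Highest exposure first: the entries most likely to warrant steering attention.
for r in sorted(risks, key=lambda r: r.exposure(), reverse=True):
    print(f"{r.risk_id}: {r.exposure()}")
```

The value of an entry like R-007 comes entirely from the PM-supplied specifics; AI's contribution is the consistent structure around them.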
AI strong
Risk prompting — what am I missing?
Paste your current risk register to AI and ask "what risk categories for a Guidewire implementation of this type are typically on this register but missing from mine?" Useful periodic check that catches standard categories you haven't explicitly considered for this specific project.
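If you run this check on a regular cadence, the question can be templated so the register is pasted in consistently each time. A minimal sketch, assuming a hypothetical helper name and signature; the question text paraphrases the gap-check prompt above.

```python
# Hypothetical helper: the function name and signature are illustrative;
# the question text paraphrases the gap-check prompt suggested above.
def gap_check_prompt(register_lines: list[str],
                     implementation_type: str = "Guidewire PolicyCenter") -> str:
    register = "\n".join(register_lines)
    return (
        f"Here is my current risk register for a {implementation_type} "
        f"implementation:\n{register}\n\n"
        "What risk categories for an implementation of this type are typically "
        "on such a register but missing from mine?"
    )

prompt = gap_check_prompt([
    "R-001 Key person dependency - High/High",
    "R-002 Data migration quality - Medium/High",
])
print(prompt)
```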
PM judgment
Project-specific risks from context
The mainframe team lead who's retiring mid-programme. The business sponsor whose enthusiasm is outrunning their authority. The IT Director who privately thinks this project is a mistake but won't say so in steering. The history of the previous failed implementation that's making the operations team resistant to change. AI doesn't have this. You do.
PM judgment
Risk severity calibration for this client
"Key person dependency" is on every risk register. Whether it's High or Medium probability depends on whether you know the key person is actually planning to leave, not just that the category exists. AI rates by standard probability defaults. You calibrate by what you know about this specific situation.
PM judgment
Which risks to surface to which stakeholders
Not every risk in the register surfaces to the steering committee at every meeting. Knowing which risks need which conversations — and with whom — is a judgment call about who has the authority and motivation to help, and for whom surfacing the risk would create more problems than it solves. That's political intelligence AI doesn't have.
Knowledge Check
You use AI to review your risk register and ask "what am I missing?" AI suggests adding a risk for "regulatory filing requirements conflicting with testing schedule." You know from your project context that there is in fact a provincial regulatory filing due in month 8 that will pull the business team's attention during UAT. What do you do?
2. Issue escalation discipline — when to escalate and how

Issues are risks that have materialised. The escalation decision — what to escalate, to whom, when, and how — is where PM judgment is most visible. Escalate too fast and you're seen as unable to manage at the delivery level. Escalate too slowly and problems compound until they become crises. The escalation question is always: can this be resolved at the working level, or does it need authority, resources, or a decision that only the governance level can provide?

AI assists with escalation communications — drafting the issue description, structuring the options, framing the decision required. The judgment about when and whether to escalate is yours.

Prompt — issue escalation note
Context: I need to escalate an issue to the steering committee. The issue is that the client's IT Director has declined to approve the integration design proposed by the implementation partner, citing concerns about performance impacts on the mainframe. The implementation partner considers the design sound and has provided evidence. The disagreement has been unresolved for 3 weeks and is now blocking the integration workstream.
Task: Draft an issue escalation note for the steering committee. Structure: issue description, timeline (when it started, what has been tried), impact on programme, options for resolution (with trade-offs), and the specific decision required from the steering committee.
Options I've identified:
Option A: Steering committee requests an independent technical review — 2 weeks, $15K, resolves the technical dispute with a neutral opinion.
Option B: Steering committee directs both parties to accept the implementation partner's design with a 90-day post-go-live performance review as a condition.
Option C: Escalate to CIO level to direct the IT Director — politically sensitive, last resort.
Format: One-page escalation note. Factual, neutral tone — not assigning blame to either party. Clear decision request at the end. I will verify all facts before distributing.
Knowledge Check
The escalation note AI generates is well-structured and neutral. However, you know from private conversations that the IT Director's real concern isn't technical performance — he's worried about his team's ability to support the new integration model and doesn't want to say so publicly. Do you include this in the escalation note?
3. Stakeholder intelligence — what AI cannot read

Stakeholder management in insurance IT delivery is not a process. It's a continuously updated map of who wants what, who's aligned and who isn't, where the real resistance is coming from, and what the political landscape means for every project decision. AI can help you structure stakeholder maps, draft stakeholder communications, and prepare for difficult conversations. What it can't do is tell you any of the things that actually matter about the specific people on your engagement.

👁️ What AI can give you

Stakeholder map templates, communication plan structures, stakeholder analysis frameworks (power/interest grids), question sets for stakeholder interviews, draft communications for each stakeholder group, preparation frameworks for difficult conversations.
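The power/interest grid mentioned above is simple enough to sketch. The quadrant labels follow the common Mendelow-style framing; the 1–10 scale, the threshold, and the example scores are illustrative assumptions, and the scores themselves are exactly the stakeholder intelligence only you can supply.

```python
# Illustrative power/interest classification. Quadrant labels follow the
# common Mendelow-style framing; the 1-10 scale and threshold are assumptions.
def quadrant(power: int, interest: int, threshold: int = 5) -> str:
    """Map power/interest scores to a standard engagement approach."""
    if power > threshold and interest > threshold:
        return "Manage closely"
    if power > threshold:
        return "Keep satisfied"
    if interest > threshold:
        return "Keep informed"
    return "Monitor"

# The scores are the judgment call: AI can only guess them from role titles.
stakeholders = {
    "IT Director": (9, 9),
    "CFO": (9, 8),          # role defaults alone would under-score her interest
    "Claims team lead": (3, 8),
}
for name, (power, interest) in stakeholders.items():
    print(f"{name}: {quadrant(power, interest)}")
```

The framework is the easy part; mis-scored inputs produce a confidently wrong map, which is the point of the knowledge check below.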

🧠 What only you can provide

Whether the VP Operations actually supports this programme or is waiting for it to fail. Who the IT Director listens to when he's uncertain. Whether the "aligned" business sponsor is genuinely committed or just saying yes to avoid conflict. What the CEO said at an all-hands last month that signals the real priority. The political history between the IT and business divisions that shapes every governance interaction.

Prompt — stakeholder map analysis
Context: I'm developing a stakeholder map for a Guidewire PolicyCenter implementation. I have 12 key stakeholders across IT leadership, business leadership, and operational teams.
Task: Generate a stakeholder analysis framework for this implementation type. For each stakeholder category, suggest: the typical interests and concerns, the engagement approach, the communication frequency, and the key questions I should be answering for each person.
My stakeholder notes: [paste your stakeholder descriptions and what you know about each person]
Format: Per stakeholder: role, what they typically care about in implementations of this type, suggested engagement approach, key questions I should be able to answer about where they stand. I will add the specific intelligence I have about each person — this gives me the framework; I'll fill in the reality.
Knowledge Check
Your AI-generated stakeholder map categorises the CFO as "Low power, Low interest" in this implementation — typical for a finance executive who isn't directly involved in a technology programme. You know that the CFO personally approved the budget as a strategic initiative and has told you privately that she considers this the most important IT investment the company has made in five years. What do you do?
4. The political landscape — AI as thinking partner, not political analyst

Every major insurance IT programme has a political landscape — competing interests, historical tensions, sponsors who are publicly committed but privately uncertain, technical teams who feel their concerns aren't being heard. Navigating it is the PM's most important and least documented skill.

AI is a useful thinking partner for political navigation — not because it knows anything about your specific political landscape, but because articulating the situation to AI forces you to be explicit about what you actually know, and AI can surface options and considerations you might not have thought of. The analysis that results is only as good as what you put in. But the act of putting it in is often clarifying.

Prompt — political navigation thinking partner
Context: I'm managing a Guidewire implementation where there's a growing tension between the IT Director (who wants to control all architectural decisions) and the implementation partner's technical lead (who has 15 years of Guidewire experience and believes some of the IT Director's requirements will create significant problems). I need to manage this without the IT Director feeling undermined and without allowing decisions that the partner believes are technically unsound.
Task: Help me think through the stakeholder dynamics and potential approaches. I'm not looking for a script — I'm looking for a structured way to think through the options and their implications.
What I know about the people: The IT Director has 20 years at this company — his credibility is built on his infrastructure expertise, not application development. He's protective of his authority and sensitive to being seen as technically outmatched by an external partner. The implementation partner's technical lead is technically excellent but has limited tolerance for what he sees as uninformed client interference. The CIO, to whom both report, respects the IT Director but is ultimately focused on a successful go-live.
Format: Structured thinking — not a step-by-step script. Identify the underlying interests of each party, where those interests could be aligned, where they fundamentally conflict, and what the PM's specific role should be in navigating each. I'll take this as input to my own thinking, not as a plan to execute.
The value of using AI as a thinking partner

AI's value in this prompt isn't that it knows something about your IT Director. It's that structuring the prompt forces you to articulate the underlying interests explicitly — "protective of authority, sensitive to being seen as outmatched" — which makes the options clearer. AI then surfaces considerations you might not have thought through: what does the IT Director need in order to be seen as the decision-maker on this, even if the technical outcome follows the partner's approach? How do you give both parties a narrative where they can credibly claim ownership? The PM's political intelligence shapes the analysis; AI's structured thinking is the accelerant.

5. Module summary

AI for structure, PM for specifics

Standard risk categories, register formatting, escalation note drafts, stakeholder map frameworks — AI handles these well. The specific risks from this engagement's context, the stakeholder intelligence you've gathered, the political navigation — these are yours.

Specific risks drive decisions

Generic risk categories get noted and monitored. Specific named risks — the regulatory filing in month 8, the mainframe lead retiring in month 9 — drive decisions. Use AI to identify gaps in your categories; use your intelligence to make the entries specific enough to act on.

Escalation options reflect real dynamics

Escalation notes are factual. But the options you present should be shaped by what you actually know about the real issue — including what the stated dispute is hiding. Adding an option that addresses the real concern, without attributing it publicly, is professional stakeholder intelligence at work.

AI as political thinking partner

Use AI to structure your thinking about complex stakeholder dynamics — not because it knows the people, but because articulating the situation explicitly is itself clarifying. AI surfaces options and considerations; your intelligence determines which ones are viable.

Ready for Module 04

Module 04 — Running the Room — covers the real-time PM work: meeting facilitation, action tracking, difficult conversation preparation, and the decisions you make in the room when the plan meets reality. AI prepares you. The judgment calls in the meeting are entirely yours.

Module 03 Complete

Risk, Issues, and Stakeholder Judgment is done. Continue to Module 04: Running the Room.