AcademyBA Accelerator › Module 03

Documentation That Writes Itself

BA documentation — BRDs, process docs, impact assessments, traceability matrices — is where analysis time goes to die. This module shows how AI handles the structure and first draft so you can focus on the content that only you can provide.

⏱ 30–35 min · 3 knowledge checks · Guidewire / insurance examples
1. The documentation burden — what it actually costs

Documentation is the part of BA work that most experienced analysts are honest about privately: it's necessary, it's important, and it consumes time that could be spent on analysis. A mid-size Guidewire implementation might require a BRD for each functional area, process documentation for both current-state and future-state flows, an impact assessment for each significant change, and a traceability matrix connecting everything. That's a significant volume of structured writing — most of which follows predictable patterns.

This is exactly where AI is most useful. Not because it replaces the analytical thinking that determines what goes in the document, but because it handles the structural scaffolding, the consistent language, and the first-draft volume that makes documentation so time-consuming to produce from a blank page.

The right mental model

Think of AI as your documentation co-author who works fast, knows every standard template, never complains about formatting, and will draft any section you describe to them. They don't know your project — you do. You describe the content, they produce the structure and prose. You review, correct, and approve. That's the collaboration.

There's an important professional point here too: the time you save on documentation first-drafts is time you can reinvest in the harder, higher-judgment work — stakeholder relationship management, scope negotiation, architecture tradeoff analysis, reviewing what the dev team has built against what was actually asked for. AI doesn't reduce the value of BA work; it shifts where your time goes within it.

2. BRDs and functional specifications — structure first, content second

A Business Requirements Document is a structured container. The structure is largely standard; the content is project-specific. AI is excellent at generating the structure and first-pass prose for standard sections, which you then populate with the actual requirements content from your elicitation work.

Here's how AI leverage maps across a typical BRD's sections:

Executive summary (High AI value)
Overview of business need, project objectives, and expected outcomes. AI drafts from your project description; you adjust for political nuance and strategic framing.

Business objectives (High AI value)
Structured list of measurable objectives. AI formats from your notes; you verify each objective is genuinely measurable and traceable.

Scope and out-of-scope (Medium AI value)
AI can structure scope statements from your description, but scope boundaries are a business decision. AI suggests; you confirm with stakeholders.

Stakeholder analysis (Medium AI value)
AI generates a template stakeholder table from a list you provide. You populate influence/interest ratings and add the political context AI doesn't have.

Functional requirements (Low AI value)
The core of the document. AI helps with formatting and consistency, but the requirements themselves must come from elicitation — not AI generation.

Assumptions and constraints (High AI value)
AI is good at generating a comprehensive checklist of assumption categories for you to work through — it often surfaces things you would have missed.

Risks and dependencies (Medium AI value)
AI can suggest common risks for this type of project in this domain. Flag these for review — not all will be relevant, and project-specific risks require your knowledge.

Glossary (High AI value)
AI is excellent at generating domain glossaries for insurance and Guidewire terminology. Verify technical definitions; add project-specific terms.
Prompt — BRD executive summary and objectives
Role / context: I'm a BA writing a Business Requirements Document for a Guidewire PolicyCenter implementation at Northshore Mutual Insurance, a mid-size Ontario P&C insurer. The project replaces their 20-year-old custom policy administration system for personal lines (auto and home).
Task: Draft the Executive Summary and Business Objectives sections of the BRD. The executive summary should explain the business context, the need for change, and what the project is expected to deliver. The business objectives section should list 5–7 measurable objectives.
Context — project background: Key drivers: the legacy system can no longer be modified cost-effectively; the insurer is losing competitive ground on time-to-market for new products; straight-through processing rates are under 30% vs industry benchmarks of 65%+; the system requires manual workarounds for regulatory reporting. Expected outcomes from discovery sessions: improved STP rates, faster product launch capability, reduced operational overhead, better data quality for analytics and regulatory reporting.
Format: Executive summary: 2–3 paragraphs, professional and factual, suitable for VP-level audience. Business objectives: numbered list, each objective written as a measurable outcome (include draft success metrics where possible). Flag any metric I should confirm with the business before including.
Knowledge Check
You use AI to generate the assumptions and constraints section of a BRD for a ClaimCenter implementation. The AI produces 18 assumptions covering data migration, integration, regulatory, and operational areas. What is your next step?
3. Process documentation — current state, future state, and the gap

Process documentation is the BA artefact that most often determines whether a development team builds the right thing. A well-documented current-state process shows why the change is needed. A well-documented future-state process shows what "done" looks like. The gap between them is where the requirements live.

AI is particularly useful for process documentation because structured process descriptions follow predictable patterns — steps, actors, decision points, exception paths — that AI can produce quickly from a verbal description or rough notes.

Prompt — current-state process documentation from session notes
Role / context: I'm a BA documenting the current-state auto insurance claims FNOL (First Notice of Loss) process at a mid-size Ontario insurer. I have rough notes from a process walk-through session with the claims operations team.
Task: From my notes below, produce a structured current-state process description for the FNOL intake process. Include: the triggering event, each process step with the responsible actor, decision points with the decision criteria, exception paths, and the end state (when the process is considered complete).
Notes from session: Customer calls in or submits online. If online, CSR reviews and calls back within 2 hours. CSR takes loss details: date, location, description, involved parties. System check for policy validity — if policy lapsed, supervisor intervention required. If valid, CSR creates claim shell in legacy system manually. If accident involves injuries, immediate escalation to injury specialist queue. Towing and rental auth if vehicle undriveable — CSR has $500 auth limit, anything above needs supervisor. Claim number generated manually from separate numbering spreadsheet. Customer gets claim number verbally — no automated email. Reserve set by CSR at intake based on rough estimate, reviewed by adjuster within 24h. Average FNOL call: 18 minutes.
Format: Numbered steps with actor clearly labelled. Decision points formatted as: [Decision: criteria → path A / path B]. Exception paths indented under the step that triggers them. End with a "pain points evident in current process" section — identify inefficiencies implied by the notes that are likely driving the modernisation.
The pain points section

Asking AI to identify process pain points from your current-state notes is one of the most underused BA applications. The manual claim number spreadsheet, the verbal-only claim number to the customer, the 18-minute average call time — AI will surface these as improvement opportunities in a structured way that makes your gap analysis much easier to write. You still need to validate that these are actually pain points the business wants to address, but the identification step is fast.

Knowledge Check
You've used AI to document the current-state FNOL process from your session notes and it looks accurate. You're now writing the future-state process for how FNOL will work in Guidewire ClaimCenter. What is the correct approach?
4. Impact assessments — what changes and what it means

Impact assessments are one of the most consistently underestimated BA deliverables in insurance IT projects. A change that looks straightforward — adding a new field to PolicyCenter, modifying a payment workflow — often has ripple effects across reporting, downstream systems, regulatory filings, operational processes, and training requirements. Surfacing these proactively is where a strong BA adds significant project value.

AI is good at impact assessment for two reasons: it has broad knowledge of how insurance systems interconnect, and it can systematically work through dimensions of impact that a time-pressured BA might miss.

Prompt — impact assessment for a PolicyCenter change
Role / context: I'm a BA on a Guidewire PolicyCenter implementation. A scope change has been approved: the insurer wants to add a new endorsement type — Rideshare Coverage — to their personal auto policies in PolicyCenter.
Task: Produce a structured impact assessment for adding Rideshare Coverage as a new endorsement type in PolicyCenter. Assess the potential impact across: the PolicyCenter configuration (product model, rating, forms), downstream system integrations (claims, billing, reporting), regulatory and filing requirements in Ontario, operational impacts (underwriting guidelines, CSR training, broker portal), and data and analytics impacts.
Context: The insurer currently has no rideshare endorsement. Ontario rideshare coverage has specific FSRA regulatory requirements — note where these may apply but flag that specific requirements must be verified from FSRA directly. The insurer uses a broker portal integrated with PolicyCenter. Billing is in a separate Guidewire BillingCenter instance.
Format: Section per impact area. For each area: rate the impact level (High/Medium/Low), describe the specific changes required, and list the stakeholders who need to be consulted. End with an overall impact summary and recommended next steps. Flag anywhere that the assessment requires information I'd need to verify with the business or from regulatory sources.

A prompt like this produces a genuinely useful impact-assessment starting point in 60–90 seconds, work that would take 2–3 hours to draft manually. The output won't be complete: it won't know this insurer's specific integration architecture, their FSRA filing status, or their existing underwriting guidelines for rideshare. But it will structure your thinking, surface impact areas you might have missed, and flag where you need to go for verified information.
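One way to make that "flag for verification" discipline concrete is to treat the AI output as a structured checklist rather than prose, so every speculative item carries an owner. The sketch below is hypothetical: the impact areas echo this module's example, and the owner names are placeholders you would replace with your project's actual stakeholders.

```python
# Hypothetical sketch: an AI-generated impact assessment held as data,
# so every area has a rating and an explicit verification owner.
IMPACT_AREAS = [
    ("PolicyCenter configuration (product model, rating, forms)", "High", "Guidewire config team"),
    ("Downstream integrations (ClaimCenter, BillingCenter, reporting)", "High", "Integration architect"),
    ("Regulatory and FSRA filing requirements", "Medium", "Compliance team"),
    ("Operational (UW guidelines, CSR training, broker portal)", "Medium", "Operations lead"),
    ("Data and analytics", "Low", "Data team"),
]

def open_verifications(areas):
    """Turn speculative impact areas into named follow-up actions."""
    return [f"Confirm '{name}' ({rating} impact) with {owner}"
            for name, rating, owner in areas]

for item in open_verifications(IMPACT_AREAS):
    print("-", item)
```

The design point is accountability: nothing AI suggested gets into the final assessment as confirmed, and nothing gets silently dropped, until a named person has signed off on it.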

Real scenario — where AI impact assessment saved a sprint

Situation: A BA is asked to write an impact assessment for a "simple" change — adding a co-applicant field to the PolicyCenter auto quote flow. They use AI to generate the assessment.

What AI surfaced: The co-applicant relationship affects the MVR (motor vehicle record) pull logic, the named insured definition for claims purposes, the billing account structure if the co-applicant is the payor, the privacy consent workflow under PIPEDA, the broker portal display logic, and the credit bureau integration for rate calculation. Six impact areas for what the sponsor called a "one-field change."

The result: The BA presented the impact assessment in sprint planning. Three of the six areas were confirmed as in-scope impacts that needed separate stories. The "one sprint" estimate became four. Finding this in sprint planning rather than mid-sprint saved the project a messy scope conversation later.

Knowledge Check
AI generates an impact assessment for a PolicyCenter change that includes "FSRA filing requirements may be triggered if the endorsement affects rating factors — verify with the compliance team." What should you do with this item?
5. Module summary

BRD structure vs content

AI handles structure and first-draft prose for standard sections. The requirements content comes from your elicitation. Never use AI to generate functional requirements — those come from stakeholders, not AI training data.

Process documentation

Current-state: describe your session notes to AI and get structured process documentation. Future-state: describe the agreed design. Never ask AI to design the future state — that's a stakeholder decision, not an AI output.

Impact assessment leverage

AI systematically works through impact dimensions you might miss under time pressure. Use the output as a structured checklist. Turn speculative items into action items with named owners; don't present them as confirmed, or drop them, without that accountability.

The consistent principle

Across every documentation type: AI provides structure, consistency, and volume. You provide domain knowledge, stakeholder context, and professional accountability for every word that goes into a client deliverable.

Ready for Module 04

Module 04 — Stakeholder Communication — covers the other half of BA work: the emails, presentations, difficult conversations, and executive summaries that determine whether your excellent analysis actually lands. AI makes BA communication faster and more polished — without losing the relationship intelligence that makes it effective.

Module 03 Complete

Documentation That Writes Itself is done. Continue to Module 04: Stakeholder Communication.