Requirements work is where BA value is most visible — and most time-consuming. This module shows how AI accelerates user story writing, acceptance criteria, elicitation follow-up, and requirements quality checking without removing the human judgment that makes requirements actually work.
Where AI fits in requirements work — and where it doesn't
Requirements work has two distinct components that AI helps with very differently, and conflating them is the fastest way to create problems.
The first component is elicitation — the process of drawing out requirements from stakeholders through interviews, workshops, observation, and analysis. This is fundamentally a human activity. AI cannot sit in your workshop, read the room when a stakeholder hesitates, notice the tension between what the IT VP says and what the business VP says, or ask the follow-up question that changes the direction of the conversation. This is where your experience and judgment are irreplaceable.
The second component is articulation and documentation — translating what you learned in elicitation into clear, testable, complete requirements artefacts. User stories, acceptance criteria, use cases, business rules, data dictionaries. This is where AI is genuinely transformative — not because it replaces your thinking, but because it dramatically accelerates the drafting, review, and refinement of artefacts once you have the content.
AI transforms the articulation side of requirements work — not the elicitation. The session is still yours.
This module focuses on the second component, articulation, where AI helps you produce better artefacts faster. If requirements elicitation prep is what you need, that's Module 01. What we're doing here is the work that happens after the session: turning notes, recordings, and rough outputs into polished, testable, complete requirements that a development team can actually build to.
Better user stories, faster — the AI drafting loop
User story writing is deceptively time-consuming. A single functional area in a Guidewire implementation might require twenty or thirty well-formed stories. Writing them from scratch — getting the actor, the action, the value proposition, and the acceptance criteria right every time — is slow. AI can produce first drafts from your notes or session summaries in minutes, which you then review, correct, and complete.
The quality difference between a good and a weak user story is significant. Weak stories are vague, untestable, or bundle multiple behaviours together. Good stories are atomic, testable, and written at the right level of granularity. AI can help you maintain consistency across a large story set in a way that manual drafting often doesn't.
User story quality — before and after AI-assisted refinement
❌ Weak story
As a user, I want to be able to view and manage my policy information so that I can keep my details up to date.
Problems: "User" is too vague — which actor? "View and manage" bundles multiple behaviours. "Policy information" is undefined. "Keep details up to date" doesn't specify what update means. Untestable as written.
✓ Stronger story
As a personal lines policyholder, I want to update my primary residential address in PolicyCenter so that my policy documents and correspondence are sent to my current location.
Improvements: Specific actor (personal lines policyholder). Atomic behaviour (address update only). Specific system (PolicyCenter). Clear value statement. Testable — can verify address updates and document routing.
Here's the prompt pattern that produces this level of quality from rough session notes:
Prompt — user story generation from session notes
Role / context
I'm a Business Analyst on a Guidewire PolicyCenter implementation for a mid-size Ontario P&C insurer. We've just completed a discovery session covering policyholder self-service functionality. I have rough notes from the session.
Task
Based on the notes I'll paste below, generate a set of user stories for the policyholder self-service portal. Each story should follow the standard format: "As a [specific actor], I want to [specific action] so that [specific value]." Keep each story atomic — one behaviour per story.
Context + notes
Actors: personal lines policyholder, broker, underwriter. Key themes from session: address updates, payment method changes, document downloads (pink slips, certificates), policy renewal opt-in, adding/removing drivers. Out of scope: claims submission (separate workstream). System: PolicyCenter portal integration.
[paste your session notes here]
Format
Group stories by actor. For each story, flag if there's an obvious dependency on another story or a business rule that needs clarification. Do not write acceptance criteria yet — I'll do that in a follow-up prompt once I've reviewed the stories.
What to do with the output
Review every story before accepting it. AI will produce plausible stories that may not reflect what was actually discussed in the session, especially around scope boundaries and actor distinctions. Add, remove, split, and merge as needed. The value AI provides is starting volume and consistency; the content accuracy comes from you.
Knowledge Check
After using AI to generate a set of 25 user stories from your session notes, you review them and find that most look reasonable. What is the most professionally responsible next step before sharing them with the development team?
Correct. The stories are a first draft — your review is what makes them reliable. AI generated them from your notes description, not from actual knowledge of what was agreed in the session. Stories that look plausible may not reflect actual scope, actual actors, or actual business rules. Your job is to verify each one against what you actually learned, correct what's wrong, fill gaps, and cull anything outside scope. You own these stories — the dev team will build to them.
Acceptance criteria that actually work
Acceptance criteria are where requirements become testable — and where weak BA work most exposes itself. Vague ACs mean the development team builds what they think you meant, the QA team tests what they think you meant, and the business stakeholder accepts or rejects based on what they think they asked for. The mismatch is where most defect conversations originate.
AI is particularly useful for AC writing because it is excellent at generating structured, scenario-based criteria in Given/When/Then (Gherkin) format — and it's good at spotting the scenarios a BA might have missed when focused on the happy path.
What makes acceptance criteria actually testable
✓ Specific and unambiguous — "the system displays the updated address" not "the system handles the address change"
✓ Covers the failure path — what happens when validation fails, data is invalid, or the system is unavailable
✓ Covers edge cases — what happens at the boundary (last day of policy period, maximum number of drivers, zero balance invoice)
✓ Observable and verifiable — QA can verify it without asking the BA what was meant
✓ Business rule explicit — if a business rule governs the behaviour, it's stated in the AC, not assumed
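These properties translate directly into Given/When/Then scenarios. Here is a minimal sketch for the address-update story from earlier in this module — scenario names, statuses, and message wording are illustrative assumptions, not confirmed PolicyCenter behaviour or client business rules:

```gherkin
# Illustrative sketch only. Field labels, statuses, and notifications are
# assumptions to verify with the business, not confirmed system behaviour.

Feature: Update primary residential address

  Scenario: Successful address update (happy path)
    Given a personal lines policyholder is logged into the portal
    And their policy is active
    When they submit a new primary residential address with a valid Canadian postal code
    Then the address is updated on the policy
    And the correspondence address is updated on all of their active policies

  Scenario: Validation error (invalid postal code)
    Given a personal lines policyholder is editing their address
    When they submit a postal code that is not a valid Canadian postal code
    Then the update is rejected
    And a field-level validation message identifies the postal code as invalid

  Scenario: Business rule trigger (rating territory boundary crossed)
    Given a mid-term address change that crosses a rating territory boundary
    When the policyholder submits the new address
    Then the change is flagged for a re-rating review by underwriting
    And the policyholder is notified that a review is in progress
```

Note how each Then clause is something QA can observe and verify without asking the BA what was meant — that is the test of a well-written criterion.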
Prompt — acceptance criteria with edge case coverage
Role / context
I'm a BA writing acceptance criteria for a Guidewire PolicyCenter implementation. The user story is: "As a personal lines policyholder, I want to update my primary residential address in PolicyCenter so that my policy documents and correspondence are sent to my current location."
Task
Write acceptance criteria for this story in Given/When/Then format. Cover: the happy path (successful address update), validation failures (invalid postal code, missing required fields), the downstream impact (what systems or documents should reflect the new address), and any edge cases specific to insurance policy administration — for example, mid-term address changes that may affect risk rating.
Context
Business rules confirmed in session: address changes during the policy period trigger a re-rating review by underwriting if the change crosses rating territory boundaries. The policyholder should be notified of this. Canadian postal codes only. The address update should propagate to the insured's correspondence address in all active policies, not just the one they're editing.
Format
Given/When/Then for each scenario. Label each scenario: happy path, validation error, business rule trigger, edge case. After the criteria, list any assumptions you've made that I should verify with the business.
Notice what the prompt does: it gives the AI the specific business rules that came out of the session. AI doesn't know that an Ontario P&C insurer has territory-based rating or that a postal code change might trigger a re-rating review. That domain knowledge came from your elicitation work — you're giving it to the AI so the generated ACs actually reflect the business context, not generic software behaviour.
This is the collaboration model in practice: you bring the domain knowledge, AI brings the structured drafting speed and the edge case thinking that's easy to miss when you're focused on the happy path.
Edge case identification — Guidewire ClaimCenter context
Story: As a claims adjuster, I want to assign a claim to an available adjuster in a specialist queue so that complex claims are handled by qualified personnel.
What a BA might write: Happy path, unassigned claim pool, successful assignment. Three or four scenarios.
What AI adds when prompted to think about edge cases: What happens when the specialist queue is empty? What if the adjuster being assigned to already has a caseload above threshold? What if the claim type changes mid-assignment (reclassification)? What happens to assignment if an adjuster goes on leave mid-claim? What if the same claim is being assigned simultaneously by two supervisors?
The result: A materially more complete AC set that surfaces real implementation questions — many of which the dev team would have had to come back and ask anyway. Better to surface them in requirements than in a sprint review.
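One of those AI-surfaced edge cases, written out in the same Given/When/Then style — queue names, statuses, and messages here are illustrative assumptions that would need business confirmation before going to dev:

```gherkin
# Illustrative sketch only. Queue names, statuses, and system responses are
# assumptions to confirm with the business, not known ClaimCenter rules.

Scenario: Specialist queue has no available adjusters (edge case)
  Given a complex claim requires assignment to the specialist queue
  And no adjuster in that queue is currently available
  When a supervisor attempts to assign the claim
  Then the assignment is not completed
  And the claim remains in the unassigned pool flagged as awaiting a specialist
  And the supervisor is shown the reason the assignment could not complete
```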
Knowledge Check
You've used AI to generate acceptance criteria for a complex Guidewire billing story. The output includes 12 scenarios including several edge cases you hadn't considered. One of the edge cases involves how an NSF (non-sufficient funds) payment should affect the policy status — something you didn't discuss in the session. What should you do?
Exactly right. AI surfacing an edge case you didn't cover is genuinely valuable — that's one of its best contributions to AC work. But the behaviour for an NSF in insurance billing is absolutely a business rule specific to this client: it may trigger a grace period, a notice, a cancellation workflow, or a combination depending on the insurer's policy. This isn't a standard pattern AI can reliably know. Flag it as TBD, raise it with the business, and get explicit confirmation before it goes to dev. You've just found a gap before it became a defect.
Elicitation follow-up — turning session outputs into requirements
The window immediately after a discovery session is critical and often poorly used. You have fresh notes, a recording if you're lucky, and a head full of context that will fade within 24 hours. This is the highest-leverage moment for AI assistance in requirements work.
Four specific post-session tasks where AI dramatically reduces the time between "session ends" and "requirements are documented":
📋
Session summary and action items
Paste your raw notes or a transcript. Ask AI to extract: decisions made, open questions raised, action items assigned, and topics deferred. Takes 3 minutes instead of 30.
🔍
Gap and ambiguity identification
From session notes, ask AI: what requirements topics weren't addressed, what statements were ambiguous, and what dependencies were implied but not confirmed. Surfaces what you need to follow up on.
✉️
Follow-up questions to stakeholders
Based on identified gaps, draft a structured follow-up email to specific stakeholders. AI formats it professionally and groups related questions — you add any relationship context it doesn't have.
📊
Initial process mapping
From a verbal description of a business process captured in session, ask AI to produce a structured step-by-step process description or a list of decision points. Starting point for your process diagrams.
Prompt — post-session gap analysis
Role / context
I'm a BA who just completed a 90-minute discovery session on a Guidewire ClaimCenter implementation covering the first notice of loss (FNOL) process for auto claims. I have rough session notes below.
Task
Analyse my session notes and identify: 1) Requirements topics we covered that still need confirmation or clarification, 2) Topics that are typically important in an FNOL process that we didn't address at all, 3) Statements from the session that are ambiguous and need follow-up, 4) Any apparent contradictions between stakeholder statements.
Notes
[paste session notes here]
Format
Four numbered sections matching the four analysis areas. For each item, suggest the right stakeholder to follow up with based on their role (Claims Manager, IT Lead, Underwriting, etc.). End with a prioritised list of the top five follow-up items.
The time compounding effect
Each of these four tasks individually saves 30–60 minutes. Run them all after a single discovery session and you've saved 2–3 hours of processing time — while actually producing more complete documentation than you would have manually. Do this across a 20-session engagement and you've freed up a meaningful amount of time for the higher-judgment work that AI can't do: stakeholder relationship management, conflict resolution, architecture tradeoff analysis.
Knowledge Check
During a discovery session on Guidewire ClaimCenter, two stakeholders gave contradictory information: the Claims Manager said that all claims over $10,000 must be reviewed by a senior adjuster before payment, while the Finance VP said that only claims over $25,000 require senior review. You didn't resolve this in the session. How should you handle it?
Correct. A contradicted business rule is not a requirements decision — it's an escalation. Documenting it and surfacing it explicitly is exactly the right BA behaviour. AI can help you draft the clarification request professionally and structure it so both stakeholders understand what they need to resolve. What you must not do is resolve the contradiction yourself by picking a number, defaulting to industry practice, or letting developers decide. This is a business decision that has downstream financial, compliance, and operational consequences.
Module summary
✅
Elicitation vs articulation
AI helps you prepare for elicitation and transforms articulation. The session itself — reading the room, resolving conflict, drawing out the unstated — remains your professional skill.
✅
User story generation
Give AI your session notes and a clear prompt. Review every story against what was actually discussed. Add, remove, split, merge. You provide the accuracy; AI provides the volume and consistency.
✅
Acceptance criteria depth
Include your business rules in the prompt. Ask AI to cover edge cases. Flag AI-generated scenarios for unknown business rules as TBD — raise them with the business before they go to dev.
✅
Post-session processing
Session summary, gap analysis, follow-up drafting, and process mapping — each saves 30–60 minutes. Run them all immediately after sessions while context is fresh.
Ready for Module 03
Module 03 — Documentation That Writes Itself — covers the broader documentation artefacts: BRDs, process documentation, impact assessments, and traceability matrices. If requirements work is where you spend time, documentation is where that time accumulates. Module 03 is where that changes.
✓
Module 02 Complete
Requirements at Speed is done. Continue to Module 03: Documentation That Writes Itself.