Academy › Developer Accelerator › Module 01

Code at a Different Speed

AI-assisted development isn't about generating code you don't understand. It's about compressing the low-signal parts of the job — boilerplate, scaffolding, documentation lookup, routine pattern implementation — so your thinking goes to the parts that actually require a developer: system design, edge case reasoning, and production ownership.

⏱ 30–35 min · 3 knowledge checks · Guidewire / insurance / integration contexts
1

Where developer time actually goes — and what AI changes

Ask most developers to describe a productive day and you'll hear something like: solved a hard design problem, figured out a tricky integration issue, got a complex piece of logic right. Ask them to describe an average day and you'll hear something different: wrote boilerplate, looked up API documentation for the third time this month, reformatted data between two systems that don't agree on anything, wrote unit tests for straightforward cases.

The gap between these two descriptions is where AI creates most of its value for developers. Not by replacing the hard design thinking — but by compressing the average day so it contains more of the productive one.

In insurance IT specifically, this compression has real delivery impact. Guidewire configuration involves significant repetitive structure — PCF page components, Gosu rule implementations, product model definitions, integration message handlers. Integration work involves constant schema translation, error handling patterns, and data mapping that follows recognisable patterns. AI handles the recognisable pattern; you handle the decision about whether this specific implementation of it is right.

Without AI — average developer day
- 45 min writing unit tests for 6 straightforward methods
- 30 min looking up Guidewire PCF widget attributes for the fourth time
- 60 min writing data transformation code between two known schemas
- 20 min writing boilerplate error handling and logging
- Remaining time for actual design thinking and complex logic

With AI — same developer day
- 12 min generating and reviewing unit test suite — AI drafts, you verify coverage
- 8 min asking AI about PCF widget behaviour with a specific question
- 20 min generating data transformation skeleton — you validate mapping logic
- 5 min generating standard error handling pattern
- Much more remaining time for design thinking and complex logic
The productivity argument — and its limit

The time savings are real. But the savings accrue to you only if you spend the recovered time on higher-value work — not on generating more AI code you don't fully understand. A developer who uses AI to produce three times as much code in the same day, without proportionally increasing the rigour of their review, hasn't become three times more productive. They've created three times more surface area for production problems. The right model: AI compresses the low-signal work; you invest the recovered time in judgment, review, and design quality.

2

High-value AI use cases for insurance IT developers

Not all developer AI use creates equal value. Some tasks are genuinely high-return for AI assistance — they're time-consuming, follow recognisable patterns, and the output is easy to verify. Others are lower-return, either because the task requires design judgment that AI isn't equipped for, or because verifying AI output takes as long as writing it yourself. Knowing which is which is what separates effective AI use from random experimentation.

🔧

Boilerplate and scaffolding

POJO/DTO classes from a schema, service stubs, repository patterns, PCF page skeletons, message handler frameworks. High AI return — these follow rigid patterns, are tedious to write, and verification is straightforward. Generate, review structure, adapt to project conventions.
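What "generate, review structure, adapt" looks like in practice: a DTO drafted from a schema. This is a minimal sketch in plain Java — the field names are illustrative, not a real Guidewire or vendor schema.

```java
// Hypothetical DTO generated from an address schema — field names are
// illustrative. Review step: confirm names and types against the actual
// schema, then adapt to project conventions (validation annotations, etc.).
record AddressDto(
        String streetNumber,
        String streetName,
        String city,
        String province,
        String postalCode) {

    // Example of the derived boilerplate AI drafts well: a normalising
    // factory that trims whitespace before construction.
    static AddressDto of(String streetNumber, String streetName,
                         String city, String province, String postalCode) {
        return new AddressDto(streetNumber.trim(), streetName.trim(),
                city.trim(), province.trim(), postalCode.trim());
    }
}
```

The review here is fast precisely because the pattern is rigid — you are checking field names and types, not reasoning about logic.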

🔄

Data transformation and mapping

Schema translation between systems, field mapping implementations, format conversion. Insurance integration work is full of this. AI generates the mapping skeleton from two schemas; you validate every field mapping against the business rules and transformation requirements.
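A minimal sketch of a generated mapping skeleton, with the review points marked as comments. The legacy status codes and target names here are hypothetical, standing in for whatever the two real systems use.

```java
import java.util.Map;

// Hypothetical mapping between a legacy status code and a target schema
// value. AI drafts the field-by-field skeleton; the developer validates
// each mapping against the actual business rules.
class PolicyStatusMapper {

    // Legacy system uses numeric codes; target schema uses names.
    // Review point: confirm every code in the legacy extract is covered.
    private static final Map<String, String> STATUS_CODES = Map.of(
            "01", "ACTIVE",
            "02", "CANCELLED",
            "03", "EXPIRED");

    static String mapStatus(String legacyCode) {
        // Review point: confirm unknown codes should fail loudly
        // rather than default silently to some status.
        String mapped = STATUS_CODES.get(legacyCode);
        if (mapped == null) {
            throw new IllegalArgumentException("Unknown status code: " + legacyCode);
        }
        return mapped;
    }
}
```

The design choice worth reviewing is the failure mode: a silent default here would let bad records flow through migration unnoticed, which is exactly the class of defect the ownership principle in section 4 is about.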

🧪

Unit test generation

AI generates test cases for defined methods — happy path, nulls, boundary values, exception paths. You review for coverage gaps and verify every assertion tests the right thing. (Same discipline as QA Module 02: assertions that pass for the wrong reason are worse than no tests.)
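A sketch of the pattern: a simple method under test plus the kinds of cases AI typically drafts for it — happy path, boundary, and the exception path. The method and figures are illustrative, and plain assertions stand in for whatever test framework the project uses.

```java
// Illustrative method under test: rounds a premium to the nearest cent
// and rejects negative input. Not a real rating calculation.
class PremiumRounder {

    static double roundPremium(double premium) {
        if (premium < 0) {
            throw new IllegalArgumentException("Premium cannot be negative");
        }
        return Math.round(premium * 100.0) / 100.0;
    }

    // AI-drafted test cases — review each assertion to confirm it tests
    // the right thing, not just that it passes.
    static void runTests() {
        check(roundPremium(123.456) == 123.46, "happy path");
        check(roundPremium(0.0) == 0.0, "zero boundary");
        check(roundPremium(99.994) == 99.99, "rounds down below half a cent");
        boolean threw = false;
        try {
            roundPremium(-1.0); // exception path
        } catch (IllegalArgumentException e) {
            threw = true;
        }
        check(threw, "negative premium rejected");
    }

    private static void check(boolean condition, String label) {
        if (!condition) throw new AssertionError("Failed: " + label);
    }
}
```

The coverage gap to look for in a draft like this: does it exercise the cases your business rules actually care about, or only the generic ones the method signature suggests?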

📝

Documentation and comments

JavaDoc, inline comments, API documentation, README sections. AI drafts from your code; you verify accuracy and add the context that only you know — why this decision was made, what the edge cases are, what changed from the original design.

🔍

Syntax and API lookup

Gosu syntax, Guidewire API method signatures, Java library usage, REST endpoint patterns. Faster than documentation when you have a specific question. Verify against official documentation for anything that goes into production — AI can confidently state outdated API details.

♻️

Code refactoring suggestions

Paste a method and ask for refactoring options — extract method candidates, complexity reduction, naming improvements. Useful for getting a second perspective. You decide which suggestions improve readability for the team that will maintain this code, not just for abstract style.

Where AI is lower-return for developers

System architecture decisions, database schema design, security implementation patterns in regulated environments, complex business logic that requires domain understanding, and integration designs where the failure modes matter. These aren't off-limits for AI — AI can provide useful perspective on all of them. But the design judgment is yours, the verification overhead is higher, and the cost of a wrong output getting into production is significant. Use AI as a thinking partner, not as the decision-maker.

Knowledge Check
A junior developer on your team starts using AI for all their development work. After two weeks you notice they're producing significantly more code volume than before, but code reviews are taking longer because the reviewers are finding subtle logic errors — things that look plausible but don't correctly handle the insurance business rules. What is the root problem and what should you recommend?
3

Prompting for code — the patterns that produce usable output

Vague prompts produce vague code. The more context you give AI about the environment, the conventions, and the specific requirements, the less time you spend rewriting the output to fit your actual situation. Developer AI prompting has a specific structure that works better than just describing what you want.

Prompt — Gosu business rule implementation
Environment / stack: Guidewire PolicyCenter 10.x, Gosu language. I'm implementing a payment plan eligibility rule for Ontario personal auto policies.
Task: Write a Gosu function that determines which payment plan options are available for a given PolicyPeriod. Return a List<PaymentPlanSummary>.
Business rules (confirmed):
- Monthly plan: available if total premium >= $500 and policy is not in cancellation pending status.
- Quarterly plan: available if total premium >= $300.
- Annual (full pay): always available.
- High-risk policies (surcharge code = "HR"): monthly plan not available regardless of premium.
- Apply rules in order — if monthly is excluded, still evaluate quarterly.
Format / constraints: Follow Guidewire Gosu conventions. Use existing PolicyPeriod API methods — do not invent method names. Add inline comments for each eligibility check. Return empty list if no plans available. I will review against the actual PaymentPlanSummary API before using this.

The last sentence of that prompt — "I will review against the actual PaymentPlanSummary API before using this" — isn't just a safety note for the AI. It's a professional commitment to yourself. AI will sometimes use method names that look right but don't exist, or use an API in a way that's slightly off from the actual implementation. The review against official documentation or your actual codebase is always the final step.
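For illustration, the business rules in that prompt can be traced in plain Java — a sketch with simple boolean and numeric inputs standing in for Guidewire's PolicyPeriod and PaymentPlanSummary APIs, which a real implementation must be verified against.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the eligibility rules from the prompt above, with plain
// inputs instead of the real PolicyPeriod API. Plan names are stand-ins
// for PaymentPlanSummary objects.
class PaymentPlanEligibility {

    static List<String> eligiblePlans(double totalPremium,
                                      boolean cancellationPending,
                                      boolean highRisk) {
        List<String> plans = new ArrayList<>();
        // Monthly: premium >= $500, not cancellation pending, and not
        // high-risk (surcharge code "HR" excludes monthly regardless).
        if (totalPremium >= 500.0 && !cancellationPending && !highRisk) {
            plans.add("MONTHLY");
        }
        // Quarterly: premium >= $300 — still evaluated when monthly
        // is excluded, per the "apply rules in order" constraint.
        if (totalPremium >= 300.0) {
            plans.add("QUARTERLY");
        }
        // Annual (full pay): always available.
        plans.add("ANNUAL");
        return plans;
    }
}
```

Tracing the logic this way is part of the review: it surfaces, for example, that the high-risk exclusion applies only to the monthly check, which is what the confirmed rules say — but also the kind of subtlety a plausible-looking draft can get wrong.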

Prompt — REST integration handler with error handling
Environment / stack: Java 17, Spring Boot 3.x. Building a REST client for a third-party address validation service used in Guidewire PolicyCenter quote workflow.
Task: Generate a Spring service class that calls the address validation endpoint, handles the response, and implements appropriate error handling and retry logic.
API contract: POST /validate-address, request: {streetNumber, streetName, city, province, postalCode}, response: {valid: boolean, correctedAddress: object, confidence: 0.0-1.0, errorCode: string}.
Error handling:
- On HTTP 5xx — retry up to 3 times with exponential backoff.
- On HTTP 4xx — do not retry; log and return a validation failure.
- On timeout (10s) — retry once, then fail gracefully (don't block quote workflow).
- Degrade gracefully — if address validation is unavailable, log a warning and allow the quote to proceed unvalidated.
Format: Full service class with Spring annotations, RestTemplate or WebClient (your choice), retry using Spring Retry. Include structured logging at appropriate levels. Mark any place where I need to add environment-specific configuration (URLs, timeouts) as TODO. No hardcoded values.
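The 5xx retry rule in that contract can be sketched independently of the framework. This is a plain-Java illustration — a real implementation would use Spring Retry as the prompt specifies, and the RetryableException type here is a stand-in, not a Spring class.

```java
import java.util.function.Supplier;

// Plain-Java sketch of the retry rule above: retry a retryable
// (5xx-style) failure up to maxAttempts with exponential backoff;
// non-retryable (4xx-style) failures would simply not be caught here.
class RetryingCaller {

    static class RetryableException extends RuntimeException {
        RetryableException(String message) { super(message); }
    }

    static <T> T callWithRetry(Supplier<T> call, int maxAttempts,
                               long initialBackoffMs) {
        long backoff = initialBackoffMs;
        for (int attempt = 1; ; attempt++) {
            try {
                return call.get();
            } catch (RetryableException e) {
                if (attempt >= maxAttempts) {
                    // Retries exhausted — propagate so the caller can
                    // degrade gracefully (e.g. proceed unvalidated).
                    throw e;
                }
                try {
                    Thread.sleep(backoff);
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    throw e;
                }
                backoff *= 2; // exponential backoff between attempts
            }
        }
    }
}
```

Even in the framework version, these are the behaviours to verify in review: which exceptions trigger a retry, what happens when retries are exhausted, and whether a failure blocks the quote workflow or degrades gracefully.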
Why the context sections matter most

The task section tells AI what to produce. The context and constraints sections are what make it produce something you can actually use. The business rules, API contract, error handling requirements, and framework constraints — everything that makes your implementation different from the generic version — lives in context. The more complete your context, the less you have to rewrite. And rewriting still requires you to understand what you're changing and why.

Knowledge Check
AI generates a Gosu method that implements your payment plan eligibility logic. The code looks correct and the logic follows the business rules you provided. Before submitting it for code review, what is the minimum responsible verification you should do?
4

The ownership principle — every line in production is yours

This is the through-line of the entire pathway, stated plainly: you are responsible for every line of code that goes into production under your name, regardless of who or what wrote the first draft.

This isn't a conservative position or a hedge against AI adoption. It's the professional standard that makes AI-assisted development sustainable and trustworthy. When something breaks in production — when an insurance rating calculation produces wrong premiums, when a claims integration drops messages, when a data migration corrupts policy records — no professional inquiry ends with "but AI wrote that part." The developer who submitted the code owns it.

In insurance IT this ownership carries specific weight. The systems you're working on process financial transactions, hold sensitive personal information, and produce outputs that are subject to regulatory scrutiny. A defect in a Guidewire rating implementation doesn't just create a support ticket — it affects policyholders, creates financial exposure for the insurer, and in some cases creates regulatory liability. The standard of review you apply before shipping reflects your understanding of that context.

AI generates (first draft: plausible, fast) → developer review (API verification, logic tracing, edge case testing) → your code (you own every line regardless of origin) → production (real policies, real premiums)

AI writes the first draft — you own the output that reaches production

Knowledge Check
A data transformation function you submitted — largely generated by AI, reviewed by you, and approved in code review — has been in production for two weeks when a bug is found. The bug causes certain policy endorsements to be dropped during migration processing. Your team lead asks you to explain the root cause. What is the professionally correct response?
5

Module summary

Compress the low-signal work

Boilerplate, scaffolding, data transformations, unit tests, documentation — high-return AI use cases. Generate fast, review rigorously, adapt to project conventions. Time recovered goes to design quality and complex reasoning, not more AI generation.

Context-rich prompts

Environment, task, business rules, API contract, constraints. The context section is what makes AI output usable without a full rewrite. The more specific your context, the closer the first draft is to what you actually need.

Minimum responsible review

API method verification, logic tracing through edge cases, return type confirmation, dev environment testing where available. You should be able to explain every line before it reaches review — regardless of who wrote the first draft.

Every line is yours

AI authorship is not a professional defence. Every line you submit under your name is your responsibility — for correctness, for security, for maintainability. Production defects don't have an AI exception. Own the code or don't ship it.

Ready for Module 02

Module 02 — Reviewing What You Didn't Write — goes deeper into the specific review discipline for AI-generated code: the failure patterns that look plausible but aren't, the security considerations that AI consistently underweights, and what a rigorous AI code review actually looks like in practice.

Module 01 Complete

Code at a Different Speed is done. Continue to Module 02: Reviewing What You Didn't Write.