AcademySA Accelerator › Module 03

Integration Architecture in Insurance IT

Integration design is where insurance IT SA engagements typically spend the most time and carry the most risk. Legacy systems with specific interface constraints, Guidewire platform integration patterns, data migration architecture, and the regulatory environment of Canadian insurance all shape design choices that generic architecture patterns don't anticipate. This module is where AI assistance is most valuable — and where SA judgment is most critical.

⏱ 40–45 min · 3 knowledge checks · Guidewire integration patterns + data migration
1. Guidewire integration patterns — where AI is useful and where it isn't

Guidewire has well-established integration architecture — the Integration Framework (GIF), the Guidewire Cloud APIs, the messaging patterns supported by each product. AI knows these patterns at a general level. What AI doesn't know is the specific version constraints your client is on, the specific cloud vs. self-managed deployment differences that affect API availability, or the specific integration scenarios that Guidewire's published patterns handle poorly.

The most useful AI role in Guidewire integration design is generating the first-cut options based on the integration requirement type — which prompts you to consider patterns you might have moved past too quickly — and then helping articulate the tradeoffs between options. The decision about which pattern fits this specific client's environment is yours.

Real-time event-driven integration — When: <60s latency required
Guidewire fires events (policy change, claim created, payment applied) that are consumed by downstream systems via message queue or event streaming platform. Guidewire Integration Framework handles event publication; downstream systems subscribe and process asynchronously.
⚠️ SA must validate: event payload completeness for downstream needs, ordering guarantees for multi-event sequences (e.g. policy issued + billing account created must arrive in order), and dead-letter handling for failed downstream processing.
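The two validation points above — per-policy ordering and dead-letter handling for failed processing — can be sketched in a downstream consumer. This is an illustrative sketch, not Guidewire Integration Framework code: the event shape (`policyId`, `sequence`) and the in-memory dead-letter list are assumptions standing in for whatever the real broker and DLQ provide.

```python
from collections import defaultdict

class EventConsumer:
    """Sketch of a downstream consumer for Guidewire-published events.
    Event field names and the in-memory DLQ are illustrative."""

    MAX_RETRIES = 3

    def __init__(self, dead_letter_queue):
        self.dead_letter_queue = dead_letter_queue
        self.last_seq = defaultdict(int)   # highest in-order sequence seen per policy
        self.pending = defaultdict(dict)   # out-of-order buffer per policy

    def handle(self, event):
        policy_id, seq = event["policyId"], event["sequence"]
        if seq != self.last_seq[policy_id] + 1:
            # Buffer out-of-order events rather than processing them early
            self.pending[policy_id][seq] = event
            return
        self._process_with_retry(event)
        self.last_seq[policy_id] = seq
        # Drain any buffered successors that are now in order
        nxt = seq + 1
        while nxt in self.pending[policy_id]:
            self._process_with_retry(self.pending[policy_id].pop(nxt))
            self.last_seq[policy_id] = nxt
            nxt += 1

    def _process_with_retry(self, event):
        for _ in range(self.MAX_RETRIES):
            try:
                self._process(event)
                return
            except Exception:
                continue
        # Retries exhausted: park the event for manual review, don't drop it
        self.dead_letter_queue.append(event)

    def _process(self, event):
        pass  # downstream business logic goes here
```

The key design point: an event that arrives ahead of its predecessor (e.g. billing account created before policy issued) is buffered, not processed, so the downstream system never sees the sequence inverted.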
Batch integration — When: latency >1hr acceptable, high volume
Scheduled data extracts from Guidewire database or APIs, transformed and loaded into downstream systems. Simple to implement, operationally straightforward, well understood by most teams. Appropriate for reporting, analytics, and non-time-sensitive synchronisation.
⚠️ SA must validate: batch window availability (Guidewire maintenance windows, downstream processing windows), incremental vs. full extract approach, failure recovery and re-run procedures, and data consistency during batch windows when Guidewire transactions are active.
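The incremental-extract and failure-recovery points above hinge on one detail: where the watermark is captured relative to the extract. A minimal sketch, assuming illustrative callables (`fetch_changed_since`, `load_watermark`, `save_watermark`) rather than any real Guidewire API:

```python
from datetime import datetime, timezone

def incremental_extract(fetch_changed_since, load_watermark, save_watermark):
    """Watermark-driven incremental batch extract (sketch).

    Capturing the new watermark BEFORE the extract query means rows
    committed while the extract runs are re-read on the next run
    (a safe overlap) rather than silently skipped — this addresses
    data consistency while Guidewire transactions are active."""
    previous = load_watermark()
    new_watermark = datetime.now(timezone.utc)
    rows = fetch_changed_since(previous)
    # Persist the watermark only after a successful extract, so a
    # failed run is recovered by simply re-running from the old one.
    save_watermark(new_watermark)
    return rows
```

Re-reading the overlap window means downstream loads must be idempotent (upsert, not insert) — a property worth stating explicitly in the design.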
Synchronous API integration — When: transactional, requires immediate response
Guidewire or external system makes synchronous REST/SOAP call and waits for response before proceeding. Required for rate lookups, address validation, credit bureau checks that happen during the quote or bind transaction and must complete before the user can proceed.
⚠️ SA must validate: timeout thresholds (what happens in Guidewire if the external call times out — does the transaction fail or degrade gracefully?), circuit breaker pattern for external service failures, and response time SLAs from external providers under peak load.
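The circuit breaker mentioned above can be sketched as a thin wrapper around the external call. The thresholds here are placeholders, not recommendations — real values come from the provider's SLA and Guidewire's configured transaction timeout:

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker for an external call during quote/bind
    (sketch; thresholds are illustrative assumptions)."""

    def __init__(self, failure_threshold=5, reset_after_s=30.0):
        self.failure_threshold = failure_threshold
        self.reset_after_s = reset_after_s
        self.failures = 0
        self.opened_at = None

    def call(self, fn, fallback):
        # While open, skip the external call entirely and degrade
        # gracefully instead of stalling the bind transaction.
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after_s:
                return fallback()
            self.opened_at = None  # half-open: allow one trial call
            self.failures = 0
        try:
            result = fn()
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.opened_at = time.monotonic()
            return fallback()
        self.failures = 0
        return result
```

The architectural question the breaker forces is the one flagged above: what `fallback` means in a bind transaction — a default rate, a deferred lookup, or a blocked bind — is a business decision the SA must surface, not a library setting.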
Data replication / co-existence integration — When: brownfield, parallel running required
During brownfield Guidewire implementations, the new system and legacy system must operate simultaneously for a period. Policy data may exist in both, with a defined source-of-truth boundary. Requires careful master data strategy and clear cutover sequencing.
⚠️ SA must validate: the source-of-truth boundary is explicitly defined and enforced — ambiguous boundaries create data drift. Reconciliation processes must be designed, not assumed. The cutover sequencing must be tested with production-representative data volumes, not only test data.
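"Reconciliation processes must be designed, not assumed" — the core of such a process is a keyed comparison across both systems. A minimal sketch, with illustrative field names (`policyNumber` as the join key), producing the raw material for a drift report rather than any automated fix:

```python
def reconcile(legacy_records, guidewire_records, key="policyNumber"):
    """Parallel-run reconciliation pass (sketch; field names illustrative).

    Reports policies missing on either side and policies present on
    both sides whose values differ — i.e. the data drift an ambiguous
    source-of-truth boundary creates."""
    legacy = {r[key]: r for r in legacy_records}
    target = {r[key]: r for r in guidewire_records}
    return {
        "missing_in_target": sorted(legacy.keys() - target.keys()),
        "missing_in_legacy": sorted(target.keys() - legacy.keys()),
        "mismatched": sorted(
            k for k in legacy.keys() & target.keys() if legacy[k] != target[k]
        ),
    }
```

In practice the mismatch check would compare a curated subset of fields with tolerance rules (e.g. rounding on premium amounts), which is exactly the design work that must happen before parallel running starts.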
Knowledge Check
The product team asks for a synchronous API call from Guidewire PolicyCenter to an external telematics scoring service during the auto insurance quote process. The telematics provider's SLA states a 3-second average response time, with a 95th percentile of 8 seconds. The quote binding transaction in Guidewire has a 5-second timeout configured for external calls. What architectural concern must you raise before recommending this integration pattern?
2. Data migration architecture — the highest-risk SA decision

Data migration is consistently the highest-risk workstream in a Guidewire brownfield implementation — and the area where AI-assisted design needs the most SA scrutiny. The data migration architecture encompasses what data moves, when, through what mechanism, with what validation, and what the fallback is if migration validation fails during cutover. AI generates reasonable frameworks for this problem. The SA must apply the specific intelligence about the legacy data environment that makes the framework real.

Prompt — data migration architecture framework
Context — Guidewire PolicyCenter brownfield implementation for an Ontario P&C insurer. Replacing a 25-year-old policy administration system. Personal auto and commercial auto lines in scope. Approximately 180,000 active policies plus 8 years of historical data.
Task — Generate a data migration architecture framework for this project. Include: migration scope decisions (what to migrate vs. archive vs. leave in legacy access), migration approach options (big bang vs. phased by line of business vs. phased by policy cohort), validation strategy, cutover architecture, and rollback conditions.
Known constraints — The legacy system will remain accessible (read-only) for 3 years post go-live for claims on pre-migration policies. Regulatory requirement: all policy records must be retained and accessible for 7 years. Data quality assessment is in progress — preliminary results show approximately 12% of commercial auto records have incomplete vehicle schedules. The go-live date is fixed by a regulatory filing deadline.
Format — Framework document with sections for each component. For the migration approach options, present each as a genuine option with specific advantages and disadvantages given the fixed go-live date and the 12% data quality issue in commercial auto. Flag the data quality issue as a named risk that affects the migration approach decision — I will validate these options against the actual data quality assessment results before recommending an approach.
The 12% data quality issue is an architecture decision, not just a risk

AI will note the 12% incomplete commercial auto vehicle schedule issue as a risk in the migration framework. The SA must go further: this data quality issue affects which migration approach is viable. A big-bang migration that includes commercial auto with known data quality problems creates a go-live risk that the fixed regulatory deadline cannot accommodate. A phased approach that migrates personal auto first (if data quality is clean) and handles commercial auto with additional remediation time may be the architectural response to this constraint — not just the risk mitigation. The SA's job is to let the data quality reality shape the architecture, not treat it as a problem to be managed around a predetermined plan.
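The "let data quality shape the architecture" logic above can be made concrete as a wave-planning check: given a fixed go-live date, how many dirty records can realistically be remediated in time, and what must be deferred. This is a sketch with assumed field names (`needs_remediation`) and a deliberately simple capacity model:

```python
def plan_migration_waves(policies, remediation_per_week, weeks_to_cutover):
    """Sketch: partition policies into migration waves driven by data
    quality and remediation capacity, instead of forcing everything
    into a big-bang plan. Field names are illustrative."""
    clean = [p for p in policies if not p["needs_remediation"]]
    dirty = [p for p in policies if p["needs_remediation"]]
    capacity = remediation_per_week * weeks_to_cutover
    return {
        "wave_1": clean + dirty[:capacity],   # migrates at go-live
        "wave_2_deferred": dirty[capacity:],  # remediate first, migrate later
    }
```

Run against the scenario above (8,700 policies needing manual remediation against a fixed deadline), a model like this turns "handle them as exceptions" into a quantified question: is the deferred set zero, or is a second wave architecturally required?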

Knowledge Check
The data quality assessment completes and reveals that the 12% incomplete vehicle schedule issue is concentrated in commercial auto policies written before 2018. These 8,700 policies have missing or inconsistent vehicle schedule data that cannot be automatically mapped to the new Guidewire data model — they require manual remediation. The project manager wants to proceed with the original big-bang migration plan and handle the 8,700 policies as "exceptions" to be remediated post-go-live. As the SA, what is your response?
3. Legacy system connectivity — constraints AI doesn't know about

Legacy insurance systems have interface characteristics that no AI training set fully captures — because they're specific to each insurer's system version, configuration, and customisation history. The SA working on a legacy system integration must discover these constraints through direct investigation: technical documentation review, conversations with the legacy system team, and in many cases, direct testing.

📡

Interface capability discovery

Before designing any legacy integration, the SA must establish: what interfaces the legacy system actually exposes (documented interfaces may differ from what's actually deployed), what data is available through each interface, what the throughput and message size limits are, and what the error handling behaviour is for various failure conditions. AI can generate a discovery checklist; you must do the discovery.

🔄

Data format idiosyncrasies

Legacy insurance systems frequently use data formats, field naming conventions, and coding schemas that predate modern standards. The transformation layer between Guidewire and the legacy system must handle these — and the rules are often tribal knowledge held by the legacy system team rather than documented anywhere. The SA must extract this knowledge before designing the transformation architecture.

📅

Change window constraints

Legacy insurance systems in production often have rigid change windows — scheduled maintenance periods during which the system is unavailable. For a real-time integration, these windows create gaps in availability that must be architecturally handled. Queue-based integration can buffer messages during maintenance windows; synchronous integration cannot. This constraint shapes the integration pattern choice.
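Whether queue-based buffering actually survives a maintenance window is a sizing question, not a pattern question. A back-of-envelope check, with an assumed headroom factor — the result must still be validated against the actual MQ queue-depth configuration:

```python
def required_queue_depth(peak_msgs_per_min, window_minutes, headroom=1.5):
    """Back-of-envelope queue sizing for buffering messages across a
    legacy maintenance window. The 1.5x headroom factor is an
    assumption, not a standard; validate against real traffic data
    and the queue's configured maximum depth."""
    return int(peak_msgs_per_min * window_minutes * headroom)
```

For example, a 40 msgs/min peak across a 2-hour (120-minute) window requires depth for 40 × 120 × 1.5 = 7,200 buffered messages. If the configured queue maximum is below that, the integration pattern choice changes — which is exactly the constraint this card describes.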

🏛️

Support and knowledge risk

In many insurance IT engagements, the team that built and still understands the legacy system is small and aging out of the workforce, and its knowledge is not fully documented. The SA must assess the knowledge risk: if the one person who understands the legacy system's MQ interface leaves during the project, what happens to the integration design? This is an architectural consideration that affects how much tolerance the design should have for legacy system opacity.

4. Using AI for integration design — the right conversation pattern

Integration architecture design with AI works best as an iterative dialogue — not a single prompt. You start with the integration requirement, AI generates the pattern space, you add constraints, AI narrows and refines, you challenge the assumptions, AI identifies gaps you can then fill from your knowledge. Each cycle tightens the design toward something specific and defensible.

Integration design iteration — constraint challenge
Turn 1 — initial options [Generate options for integration requirement] — AI provides 3 pattern options.
Turn 2 — add constraints "Option B (event-driven with cloud broker) is eliminated by a 6-month security review backlog for new cloud platforms. Option A (direct MQ) has a 512KB message size limit we discovered in testing. Revise the analysis to reflect these constraints and identify what Option A's message size limit means for the commercial auto policy change payloads."
Turn 3 — challenge assumptions "You've assumed the legacy system's MQ interface supports durable subscriptions. We haven't confirmed this — the legacy team has only provided read-only documentation from 2019. What should I test or confirm before committing to Option A?"
Turn 4 — architecture review "Review the Option A design as currently specified for: single points of failure, failure scenarios not addressed, NFR gaps against the 30-second latency SLA, and anything a critical reviewer at an architecture review board would raise. Be specific — I'll take this to the client's architecture review board next week."
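One concrete answer to Turn 2's question — what a 512KB MQ message limit means for large commercial auto policy change payloads — is the claim-check pattern: send a reference instead of the payload when it exceeds the limit. A sketch; `blob_store` and its `put` method are illustrative stand-ins for whatever shared storage both systems trust:

```python
import json

MAX_MQ_MESSAGE_BYTES = 512 * 1024  # the limit discovered in testing

def prepare_message(payload, blob_store):
    """Claim-check pattern sketch for payloads exceeding the MQ
    message size limit. blob_store is an assumed object exposing
    put(key, data) -> reference; field names are illustrative."""
    body = json.dumps(payload).encode("utf-8")
    if len(body) <= MAX_MQ_MESSAGE_BYTES:
        return {"inline": True, "body": body}
    # Park the oversized payload in shared storage, send a reference
    ref = blob_store.put(payload["policyId"], body)
    return {"inline": False, "claimCheck": ref}
```

Note what this adds to the design: a second storage system inside the integration boundary, with its own availability, retention, and security review implications — precisely the kind of consequence to put in front of the architecture review board, not discover after.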
Knowledge Check
After three rounds of AI-assisted integration design iteration, you have a well-documented Option A architecture with all constraints applied. AI's final review raises one concern: "The design assumes the legacy system's MQ queue has sufficient depth to buffer messages during the nightly maintenance window (2am-4am). This should be validated against actual queue configuration." You haven't validated this. The architecture review board presentation is tomorrow. What do you do?
5. Module summary

Pattern selection requires constraint application

AI generates the integration pattern options reliably. The right pattern for your specific client depends on constraints AI doesn't know: legacy system capabilities, team skills, regulatory requirements, timeline constraints. Apply the constraints before selecting the pattern.

Data migration architecture is the highest-risk SA decision

The migration scope, approach, validation strategy, and rollback design shape the entire go-live risk profile. Data quality issues are architecture decisions, not just risks. Let the data reality shape the migration approach rather than fitting the data into a predetermined plan.

Legacy system constraints must be discovered

Interface capabilities, message size limits, throughput constraints, change windows, knowledge risk — these are specific to each legacy system and must be discovered through direct investigation. AI can generate the discovery checklist; the SA must do the discovery before committing to a design.

Open items must be explicit, not footnoted

Unvalidated assumptions in architecture presentations should be explicit open items with resolution dates and fallback options — not footnotes. Architecture review boards approve designs based on what's in the main document. Know what you've confirmed and present that status clearly.

Ready for Module 04

Module 04 — Communication & Governance — covers the other half of the SA role: translating complex architecture decisions for non-technical audiences, running architecture review processes, and maintaining the SA's position as the trusted technical authority across the engagement.

Module 03 Complete

Integration Architecture in Insurance IT is done. Continue to Module 04: Communication & Governance.