AI generates architecture patterns remarkably fast — integration topologies, component diagrams, reference architectures, technology option comparisons. What it generates is always a pattern answer to a pattern question. The Solution Architect's job is to know which constraints the pattern doesn't account for, which tradeoffs the diagram doesn't show, and which decisions require judgment that no pattern can provide.
What AI generates quickly — and what Solution Architects must own
Solution architecture has always had a pattern recognition component. An experienced SA working on a Guidewire PolicyCenter integration to a legacy billing system has seen variations of this problem before. They know the standard integration patterns, the common failure modes, the typical component boundaries. AI has seen more variations than any individual SA ever will — which makes it genuinely useful for generating starting-point architectures and identifying options that might not otherwise have been considered.
The problem is that patterns are context-free by nature. A reference architecture for Guidewire real-time integration doesn't know that your client's legacy system can only handle 50 concurrent connections, that data sovereignty requirements mean certain components must stay on-premises, or that the insurer's regulatory environment in Quebec creates constraints that don't exist in Ontario. Those constraints are what make the architecture specific — and they're entirely invisible to AI unless you provide them.
The SA who uses AI well generates options fast, challenges each option against the actual constraint set, and makes the decision with full awareness of the tradeoffs. The SA who uses AI poorly presents an AI-generated architecture as the architecture — and discovers the missing constraints when the design is already in implementation.
⚙️ Client-specific system constraints — throughput limits, connection limits, data format idiosyncrasies of legacy systems
🏛️ Regulatory and data sovereignty requirements — provincial, federal, and jurisdiction-specific rules that shape component placement
📜 Non-functional requirements from the actual contract — specific SLAs, RTO/RPO commitments, performance thresholds
👥 Operational reality — what the client's team can actually build, operate, and support
⚠️ Named risks from engagement context — vendor limitations, team skill gaps, timeline constraints that affect architecture choices
High-value SA prompting patterns
The most productive SA use of AI is options generation and structured comparison — not asking "what's the best architecture?" but "given these constraints, what are the viable options and what are the tradeoffs of each?" The former gets a pattern answer. The latter gets analysis you can stress-test against your specific context.
Prompt — Guidewire integration options analysis
Environment
Guidewire PolicyCenter 10.x implementation for a mid-size Ontario P&C insurer. We need to integrate PolicyCenter with a legacy claims system (IBM mainframe, COBOL-based) that will not be replaced for at least 3 years. Integration requirement: real-time policy data updates must reach the claims system within 30 seconds of a policy change event in PolicyCenter.
Task
Generate 3 viable integration architecture options for this requirement. For each option, describe: the integration pattern, the key components, the data flow, and the primary technology choices. Do not recommend a single option — present all three as viable alternatives.
Known constraints
The legacy system has a published MQ interface for inbound messages. It cannot receive REST calls directly. The client's IT team has strong MQ skills but limited cloud platform experience. Data sovereignty: all policy data must remain within Canada. The client has an existing IBM DataPower gateway. No new middleware vendors can be introduced without a procurement process that takes 3 months minimum.
Format
For each option: Option name, Pattern description, Component list, Data flow summary, Advantages (specific to these constraints), Disadvantages (specific to these constraints), Key risk. Do not invent constraints or assume capabilities not stated. Flag any assumption you're making about the legacy system's MQ interface behaviour.
Why "do not recommend a single option" matters
Left to its defaults, AI will frequently select one option as preferred and frame the others as inferior. This selection is based on general best practice — not on your specific constraint set. By explicitly asking for three viable options without a recommendation, you force AI into the analysis mode that's actually useful: generating the option space for you to evaluate with your constraint knowledge. The recommendation decision is yours. AI's role is to make sure you've considered the options you might have overlooked.
Prompt — current-state architecture assessment framework
Context
I'm conducting a current-state architecture assessment for a mid-size Canadian P&C insurer considering a core system modernisation programme. I need to structure the assessment framework before starting stakeholder interviews.
Task
Generate a current-state architecture assessment framework for a P&C insurance company. Include: assessment domains, key questions for each domain, artefacts to collect, and typical pain points to investigate in each area.
Scope
The insurer operates in Ontario and Quebec. Personal auto, commercial auto, and home lines. Core systems include a 25-year-old policy administration system, a separate billing system, and Guidewire ClaimCenter (implemented 4 years ago). About 400 staff. No cloud infrastructure currently — all on-premises.
Format
Assessment framework with 6-8 domains. For each domain: domain name, why it matters for a modernisation decision, 5-6 diagnostic questions, artefacts to request, and common findings in similar assessments. I will adapt this framework based on what I learn in the first three stakeholder conversations.
Knowledge Check
AI generates three integration options for your PolicyCenter-to-legacy-claims integration. Option B uses an event-driven architecture with a cloud-based message broker. The analysis notes it as "highly scalable and modern." You know that the client's IT team has limited cloud experience and their security team has a 6-month backlog on cloud platform security reviews. What is the correct way to handle Option B?
Correct — and this is the options analysis discipline that makes SA work valuable. Removing Option B prevents the client from seeing the full landscape and understanding why it's not being recommended. Presenting it as "modern direction" without the constraints misleads. Regenerating without cloud excludes an option the client might ask about anyway. The right approach: keep Option B, attach the real constraints — specific ones, not generic cautions — and let the client make an informed decision with full information. Your job is to give them the complete picture with the constraint layer applied. If Option B is genuinely off the table for this engagement, the constraints you've added will make that clear without you having to exclude it.
Option 3 is the correct approach. Excluding options from the analysis removes information the client needs. Advocating for modern approaches without attaching real constraints is advocacy, not architecture guidance. Regenerating without cloud produces a cleaner presentation but a less complete one — and if the client asks "did you consider cloud-based messaging?" you have no good answer. The constraint layer is what you add to the AI-generated analysis. It belongs in the document, attached to the specific option it affects, so the client can see exactly what makes each option viable or not viable for their specific situation.
The constraint layer — what patterns don't account for
Every AI-generated architecture pattern has an implicit constraint set: the "typical" environment it was designed for. When your actual environment differs from typical — which it almost always does in insurance IT — the pattern needs adjustment. The SA's job is to identify every place the pattern assumption diverges from the actual constraint and document what changes as a result.
⚙️
System capacity constraints
Legacy insurance systems have specific throughput limits, connection pool sizes, and message format requirements that AI patterns don't know about. A reference architecture that assumes a modern REST-capable system may need significant modification for a mainframe with a fixed MQ interface and a 512-byte message size limit. These constraints shape the entire integration design.
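A constraint like a fixed message size limit can be sanity-checked before the design hardens, not discovered in integration testing. A minimal sketch in Python: the 512-byte limit is the figure from this example, while the event payload and its field names are invented for illustration.

```python
# Hypothetical pre-flight check for a fixed-format legacy MQ interface.
# The 512-byte limit is from the example above; the payload is illustrative.

MQ_MAX_MESSAGE_BYTES = 512

def fits_legacy_mq(payload: bytes, limit: int = MQ_MAX_MESSAGE_BYTES) -> bool:
    """True if a serialized policy-change event fits the legacy message limit."""
    return len(payload) <= limit

# A verbose JSON event that a modern integration pattern would happily emit,
# and that grows with every field added to the policy model:
event_json = (
    b'{"policyNumber":"ON-2024-0001","event":"ENDORSEMENT",'
    b'"changedFields":["vehicle.vin","coverage.limit"]}'
)

print(fits_legacy_mq(event_json))
```

The point is where the check runs: if realistic payloads exceed the limit at design time, the answer is a compact fixed-width format or a claim-check pattern, not bigger buffers or runtime error handling.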
🏛️
Regulatory and data sovereignty
Canadian insurance IT operates under PIPEDA, provincial privacy legislation, and OSFI guidelines. Quebec's Law 25 creates specific requirements that differ from the rest of Canada. These constraints determine where data can reside, how it must be encrypted, and what audit trails are required — all of which affect component placement and architecture choices that patterns don't reflect.
👥
Operational team capability
The architecturally correct answer and the operationally sustainable answer are sometimes different. A microservices architecture might be the right technical approach for a Guidewire integration ecosystem — and completely unsupportable by a 3-person IT team with no container orchestration experience. Architecture that can't be operated is architecture that will create production problems.
📋
Contractual and SLA commitments
NFRs in the actual contract create hard constraints on architecture decisions. A 99.9% uptime SLA with a 4-hour RTO means something specific about redundancy, failover, and recovery design. AI patterns assume generic SLAs. Your architecture must account for the specific ones your client has committed to — or that you are recommending they commit to.
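The arithmetic behind these commitments is worth making explicit during design rather than during an incident review. A sketch of the downtime budget implied by the figures above (99.9% uptime, 4-hour RTO):

```python
# Translate an uptime SLA into a concrete annual downtime budget,
# then compare it against a committed recovery time objective (RTO).

HOURS_PER_YEAR = 365 * 24  # 8760

def annual_downtime_budget_hours(uptime_pct: float) -> float:
    """Allowed downtime per year, in hours, for a given uptime percentage."""
    return HOURS_PER_YEAR * (1 - uptime_pct / 100)

budget = annual_downtime_budget_hours(99.9)  # ~8.76 hours per year
rto_hours = 4.0                              # contractual recovery commitment

# One full RTO-length outage consumes roughly half the annual budget:
# a second major incident in the same year breaches the SLA.
print(f"budget: {budget:.2f} h/yr, one RTO event uses {rto_hours / budget:.0%}")
```

That ratio, not the percentage on its own, is what drives the redundancy, failover, and recovery design the paragraph above refers to.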
🔗
Vendor and product constraints
Guidewire has specific integration patterns, API capabilities, and performance characteristics that vary by version and deployment model (cloud vs. self-managed). Integration partners have their own constraints. The pattern AI generates may be correct in general but incompatible with the specific Guidewire version, deployment configuration, or third-party system the client is actually using.
⏱️
Timeline and sequencing constraints
The best architecture and the achievable architecture given a specific delivery timeline are often different things. An architecture decision that requires a 3-month procurement process, a 6-month security review, or a team training programme may not be viable for a project with a regulatory go-live deadline. Acknowledging these constraints in the architecture document is part of the SA's job — not a compromise to be apologised for.
Knowledge Check
You're presenting an architecture for a Guidewire PolicyCenter integration to the client's CTO. AI helped you generate the reference architecture and the options analysis. The CTO asks: "Does this architecture meet our Quebec Law 25 data residency requirements?" You haven't specifically validated the architecture against Quebec Law 25. What do you do?
Correct — and this is one of the most important professional disciplines for SAs working in Canadian insurance IT. Quebec Law 25 creates specific, binding requirements around personal information handling, consent, and data residency that have architecture implications — particularly for any component that handles personal insurance data. Claiming general compliance when you haven't done the specific assessment is professionally risky and potentially misleading. The honest answer — "I haven't validated against Law 25 specifically and I will" — is more trustworthy than a confident but unverified claim. It also gives you the information you need to adjust the architecture if the assessment reveals gaps.
Option 3 is the correct professional response. General "designed with data sovereignty in mind" statements don't constitute Law 25 compliance validation — the law has specific requirements that need specific verification. Deflecting to the legal team is appropriate for legal interpretation, but the architecture question of whether specific components create Law 25 exposure is an SA responsibility. Raising a change request treats a regulatory compliance requirement as optional scope — which is the wrong framing for a regulatory obligation. The right answer: acknowledge what you haven't validated, commit to doing it, and adjust the architecture if the validation reveals gaps.
Architecture ownership — the SA signs the design
The architecture document has the SA's name on it. When it goes to the client's CTO, the steering committee, or the implementation team, it represents the SA's professional judgment about how to solve the technical problem given the actual constraints. AI-generated origin doesn't change that — it just means the first draft came faster.
Architecture ownership has a specific meaning in insurance IT: if the design goes into implementation and the integration fails to meet the latency SLA, if the data migration approach causes data quality issues, if the component that was supposed to be highly available turns out to have a single point of failure — the SA who signed the design owns that outcome professionally. Not as blame, but as accountability that creates the standard of care the work deserves.
The SA who says "AI recommended this integration pattern" when it fails production load testing has abdicated the professional responsibility they were engaged to carry. The SA who says "I recommended this pattern based on the requirements as I understood them at the time, and here's what I missed" is operating at the professional standard the role requires.
The architecture review prompt — challenge your own design
Before presenting any AI-assisted architecture, run it through an explicit challenge prompt: "Review this architecture for: single points of failure, constraint violations, NFR gaps, security considerations, and operability concerns given a team with [specific skills]. What would a critical reviewer raise?" AI will find things you've missed — and finding them before the client review is significantly better than finding them during it.
Knowledge Check
Six weeks after your Guidewire integration architecture is approved and implementation begins, the development team discovers that the message transformation component you specified can only handle 200 messages per minute — but peak policy change volume during renewal season is estimated at 800 messages per minute. This constraint wasn't in the requirements document you were given. What is the professionally correct response?
Correct — and this is the ownership standard for SAs. Peak throughput under renewal season conditions is a foreseeable operational scenario for a P&C insurance policy integration. An SA conducting an NFR assessment should be asking "what is peak volume, when does it occur, and how does it compare to average?" regardless of whether the requirements document spells it out. "It wasn't in the requirements" is accurate but insufficient — experienced SAs probe for constraints that requirements documents commonly omit. Own the gap, fix the design, document the lesson. The client deserves an SA who improves their practice from this kind of miss, not one who deflects to process.
Option 2 is the professionally correct response. A change request may ultimately be appropriate, but leading with it frames an architecture gap as a scope issue — which deflects accountability. Escalating for a root cause review of who should have provided the data is the right long-term process improvement, but it doesn't solve the immediate architecture problem and it shifts focus away from your own role in the gap. Framing it as "normal architecture evolution" is accurate in some respects but doesn't acknowledge that this particular evolution was foreseeable and preventable. Own it, fix it, learn from it — that's the standard.
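The scale of a throughput miss like this one is easy to quantify once the numbers are on the table. A back-of-envelope sketch using the figures from the scenario (the one-hour peak window is an assumed value, not one the scenario states):

```python
# Back-of-envelope: how fast does a backlog build when peak inbound volume
# exceeds the transformation component's throughput?

capacity_per_min = 200   # what the component can process (from the scenario)
peak_per_min = 800       # estimated renewal-season peak (from the scenario)
peak_minutes = 60        # assumed duration of the peak window

backlog = max(0, peak_per_min - capacity_per_min) * peak_minutes
drain_minutes = backlog / capacity_per_min

print(f"backlog after peak: {backlog} messages")
print(f"time to drain at full capacity: {drain_minutes:.0f} min")
```

A 36,000-message backlog taking three hours to drain, against the 30-second latency requirement from the integration example earlier in this module, is an architecture gap rather than a tuning problem, which is why the ownership response matters.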
Module summary
✅
AI generates patterns; SAs own constraints
Reference architectures, options comparisons, technology analysis — AI produces these reliably. The constraint layer — system limits, regulatory requirements, operational capability, SLA commitments — is what you add. The constraint layer is where the value is.
✅
Options analysis over recommendation
Ask AI for viable options with tradeoffs, not for the best answer. The recommendation decision requires constraint knowledge AI doesn't have. Generate the option space; apply the constraint layer; make the call with full information.
✅
Validate before presenting
Challenge your own design with an explicit review prompt. Regulatory requirements — Law 25, PIPEDA, OSFI — need specific validation, not general assurance. NFRs need to be checked against actual business data, including peak scenarios. Don't present what you haven't validated.
✅
Your name is on the design
AI-generated origin doesn't transfer professional accountability. When the architecture goes to the CTO, it's yours. When it goes into implementation, it's yours. When it fails a load test, it's yours. Own the design, own the gaps, own the corrections.
Ready for Module 02
Module 02 — Design Decisions & Tradeoff Documentation — covers the discipline of recording architecture decisions so they survive beyond the SA engagement: what was decided, what alternatives were considered, what constraints shaped the choice, and what future architects need to know to extend the design safely.
✓
Module 01 Complete
Architecture at Speed is complete. Continue to Module 02: Design Decisions & Tradeoff Documentation.