The Engine That Transforms How IHG Grows

Hello Wei,

Thank you for the opportunity to share the architectural thinking we started at the recent dinner. This microsite represents my vision for IHG's agentic enterprise architecture and how I would build it as VP reporting to you. The video provides the overview; the tabs of this microsite hold the details.

Looking at the competitive field, Marriott is spending $1.1B on an "agentic mesh" pilot at six properties while Hilton is running 41 disconnected use cases. IHG's path is different and already articulated: Jolie has named the strategy Invisible AI, and you are building the organization to deliver it. The VP, AI Platform & Engineering role is the execution force that turns both into reality.

The operating mode I want to bring is the one you described: intensely competitive about impact, genuinely optimistic about what's possible, urgency and hope running in parallel.

I'm looking forward to making this a reality with you.

Thank you,
Pavol
Watch the vision · ~5 min
Wei, here's what I would build for IHG, and how.

The Foundation Is Built and
The Platform Comes Next

From what my research tells me, although gaps remain, IHG has done the initial work and made platform commitments rather than just buying technology. GCP has been your cloud since 2022, BigQuery anchors the data layer, Vertex AI and Gemini are already live in production through the travel planner, Salesforce Einstein 1 powers loyalty, the new AI-driven RMS has rolled across the eligible estate, Oracle OPERA Cloud is the approved next-generation PMS, and IHG Concerto still runs reservations underneath. The infrastructure foundation is in place.

The constraint that defines every architectural choice from here, however, is one that you have named publicly: IHG operates a 70%-franchised network, which means AI adoption cannot be mandated top-down. The strategy has to turn operators, franchisees, and employees into AI creators rather than just users.

The platform that wins is the one that treats them as customers rather than delivery targets, and that's the platform I propose to build.

What IHG Already Has · The Foundation
Platform & Engineering Foundation
Google Cloud Platform
Cloud provider since 2022
AWS + Aviatrix Multicloud
AWS + GCP multicloud, Aviatrix mesh
Microsoft Azure
Third cloud (via Amadeus migration)
Equinix
Data center interconnection fabric
BigQuery
Enterprise data warehouse
Reltio Data Cloud
Cloud MDM for unified guest/property data
Google Cloud Data Fusion
ETL: Teradata → BigQuery migration
Vertex AI + Gemini
AI travel planner live
Anthropic Claude
Multi-model reasoning, on Vertex AI
NL Search · Multi-Model
Google + OpenAI + Anthropic (May 2026)
Machine Translation
20+ brands, global reach
Apigee API Gateway
Secure API gateway for external systems
Kafka + GCP Pub/Sub
Event-streaming backbone across the stack
Hapi Integration Platform
PMS data normalization across 3,600+ hotels
ServiceNow
Enterprise IT service management
BigPanda
AIOps event correlation
Tricentis Tosca
Continuous test automation across SDLC
GitHub
Source control & dev platform
Commerce, Distribution & Hotel Operations
IHG Concerto
Reservation backbone · attribute-based pricing
Amadeus GRS
Next-gen cloud reservation system
Oracle OPERA Cloud (PMS)
2,000 hotels in 2025 · approved Jan 2026
HotelKey PMS
Cloud PMS for select-service brands
Shiji PMS
Cloud PMS, China + global estate
Revenue Analytics N2Pricing
RM platform · 3,400 hotels (2025)
DerbySoft Distribution
B2B distribution + channel switch
FreedomPay
Unified payments, 4,000+ US/Canada properties
Guest, Loyalty & Workforce Experience
Salesforce Einstein 1
First in hospitality on Loyalty Mgmt + Data Cloud
Salesforce Service Cloud
360 guest view, service & case mgmt
Salesforce Marketing Cloud
Email, SMS, push to guests
IHG One Rewards App
Digital travel companion
Adobe AEM + Workfront
Web content & brand workflow, 16+ languages
Medallia
Guest feedback, HeartBeat surveys
Genesys Cloud
Global contact center, 8 sites unified
Speakeasy AI · Voice Cloud
Conversational chatbots & voice
Apple + Google Business Messages
Native messaging channels for guests
Workday
Enterprise HRIS, global HR system
Paradox (Olivia)
Conversational AI hiring, 6,500+ hotels
The Competitive Reality · Where the Field Stands
Hilton · Moving Fast
41 active AI use cases deployed across operations
Google + OpenAI + Anthropic: multi-partner AI strategy
AI planning tool designed to keep guests inside Hilton's ecosystem
CEO committed to AI as a core competitive differentiator for 2025–2027
Marriott · Investing Heavily
$1–1.2B projected tech spend, fundamental re-architecture
Cloud-native rebuild of core operational systems
AI agents for room assignments and enterprise reporting
Unified guest data strategy across all brands
Industry Signal · +250% in 2025
250% increase in hospitality AI investment in 2025 vs prior year
15–25% revenue uplift from AI dynamic pricing at scale
80% of hotels using or planning AI-driven personalization
$58.56B global AI in hospitality market by 2029 at 30% CAGR

One Rewards Is IHG's Most Valuable AI Asset.

20%
Member Spend Premium
One Rewards members already spend 20% more than non-members per stay, and AI personalization is what makes that number grow further.
9×
Direct Booking Rate
Members are 9× more likely to book directly, and every additional point of loyalty intelligence protects IHG's margin against the OTAs.
7,000+
Hotels · 100 Countries
Scale creates signal: more properties means more guest touchpoints and more data, which compounds into a data advantage that grows with every property added.

My read is that the ingredients are already assembled: GCP as the foundation, One Rewards as the fuel, and the franchisee network as the constraint that shapes every architectural decision. What's missing is the layer that turns those ingredients into durable advantage: AI built on IHG's own logic, scars, and foresight rather than a generic stack.

Three Domains of AI at IHG
For Compounding Returns

From what I've seen working across hospitality and other regulated industries, AI for a 7,000-hotel company falls into three categories, and most operators only build the first two well.

Domain 1 makes the team faster, Domain 2 reimagines the guest experience, and Domain 3 (the back-of-house magic) is where margin compounds quietly over time. The third is the category Jolie has named "Invisible AI": AI that makes the employee more effective so the guest experience feels more human.

Done right, all three domains run on one platform and share one governance layer. Below I propose representative use cases in each domain that show where the platform compounds first.

Domain 01 · The workforce is the ROI
AI for the Team
Automation is just the floor; scope is where the real unlock lives.
In a 70%-franchised network, centralized AI delivered top-down can't reach the work that actually matters. What I propose instead is turning employees, operators, and franchisees into AI creators rather than passive users. Trophy projects don't scale, but democratized adoption does. In your own words, "one worker saving an hour a day is interesting, but ten thousand workers saving an hour a day is a new operating model."
1.1
AI-Native Software Delivery
Engineering Operating Model
Velocity Multiplier
3–5×
Engineering throughput when SDLC is redesigned around AI, not bolted onto it
The cost of writing code is collapsing toward zero, and the engineering instincts that came with expensive code (caution, gated approvals, multi-stage handoffs) were optimized for a world that no longer exists. Most organizations respond by bolting Copilot or Gemini Code Assist onto the same SDLC and calling it transformation, which I see as a category error. The deeper play is to redesign the SDLC for the world we're in now: code is cheap, judgment is the bottleneck, and the operating model has to shift from certainty about process to certainty about outcomes.
What I propose is an engineering organization where Claude Code, Gemini Code Assist, and agentic PR review aren't treated as novelties but as the operating model itself: architects describing systems in machine-readable contracts, agents implementing the boilerplate, generating the Tricentis Tosca test suites that already gate IHG's SDLC, and shipping across the AWS + Azure + GCP multicloud without translating between teams. Senior engineers end up spending their time on architecture rather than plumbing.
GitHub, the GCP DevOps stack, and frontier coding agents already exist. What's missing is the executive conviction to rebuild the SDLC around them, plus the architectural taste to know which parts to automate and which to keep human.
1.2
Hospitality-Tuned Cognitive Partners
Function-Specific Productivity
Gartner Benchmark
25%
Operational efficiency lift for organizations blending human expertise with AI
IHG has 300,000+ colleagues across 7,000 properties, and 70% of those properties are franchisee-run, which means top-down AI rollout is structurally impossible. "Assistant" undersells what these tools actually are: a cognitive partner is software built on IHG's processes, policies, and brand DNA. Generic AI tools don't survive contact with revenue managers, front-desk teams, or property-level operators; hospitality-context-aware cognitive partners do.
Purpose-built cognitive partners for the roles that matter most: revenue managers making pricing decisions on top of N2Pricing, guest services teams responding to high-value One Rewards members, operations teams managing multi-property portfolios on OPERA Cloud, franchisees building their own tools on the platform, and corporate functions handling procurement, finance, and legal workflows. Not generic chat in a sidebar, but partners that understand IHG's context deeply and that franchisees actively want to use, because the platform makes their job better.
GCP foundation, Gemini and Claude on Vertex AI, IHG's proprietary process and brand data, and the same runtime that powers the workforce intelligence layer (1.3) provide the engine. The platform-as-product disposition is what makes the difference between adoption and the slow, low-adoption rollout most enterprise platforms suffer.
1.3
Workforce & Talent Intelligence
People Lifecycle
Turnover Reduction
30%+
Reduction in 90-day exit rate when onboarding, scheduling, and retention signals are fused into one intelligence layer
Hospitality runs on people, and hospitality loses people faster than almost any industry. Annual hourly turnover hovers around 70–80% across the sector, and each lost hire is $5,000–$15,000 in re-hire and re-training cost before the productivity hit lands. IHG's enterprise foundation is already in place: Workday is the global HRIS, Paradox Olivia is live across 6,500+ hotels as the hiring conversation layer, and Medallia HeartBeat reaches every guest-facing colleague for sentiment. What's missing is the intelligence layer that connects the pieces into one view of the workforce.
A people lifecycle intelligence layer that does for the workforce what the platform does for the guest. Hiring shifts from reactive to predictive: which roles are about to turn over at which properties, which candidates fit which brand cultures, which assignments will retain a high-performer. Onboarding shifts from a packet of PDFs to a brand-tuned cognitive partner that walks a new hire through their first 90 days in their language. Retention shifts from exit interviews to early-warning signals that surface to the GM weeks before a notice would land on the desk.
Workday, Paradox, Medallia HeartBeat, and the cognitive-partner runtime from 1.2 already provide the signal. The intelligence layer sits on top: behavior, sentiment, schedule, and brand context fused into a single retention model that runs at the property level and is observable at the portfolio level.
Domain 02 · The agent layer question
AI for the Guest
The guest journey is becoming machine-mediated. The question isn't if we show up there; it's who owns the agent layer.
The visible AI: search, planning, personalization, loyalty, and the new agentic distribution surface where AI agents (not humans) decide what brands surface at all. "Your competitors aren't just other brands anymore. They're the AI agents deciding whether to surface your products." IHG's announced multi-model NL Search initiative is the entry move. The platform is what makes IHG agent-readable across the whole guest lifecycle, not just at the front door.
2.1
Guest Intelligence
Guest Experience
Value Signal
20%+
Member spend premium today; AI personalization is the lever that grows it further.
IHG's announced natural-language hotel search across 7,000 properties is a public commitment to multi-model AI at the front door. But search is only the entry point. Guests deserve a system that knows their preferences, anticipates their needs, and personalizes every touchpoint from first search to post-stay follow-up. The data exists in One Rewards and Salesforce Data Cloud. The AI capability exists in Vertex AI alongside Claude on multi-model. The missing piece is the Guest DNA architecture that connects them.
A unified guest intelligence layer that builds a behavioral profile across every stay, search, and Service Cloud interaction, then uses it to personalize recommendations, room assignments, dining, offers, and Marketing Cloud journeys for the individual guest, at 7,000-hotel scale.
Vertex AI + Gemini + Claude + Salesforce Einstein 1 (first in hospitality on Loyalty Mgmt + Data Cloud) + One Rewards behavioral history already in place. This extends what exists, it doesn't start over.
2.2
Loyalty Intelligence
IHG One Rewards
Direct Booking Multiplier
9×
Members already book direct 9× more. AI turns loyalty data into a retention engine.
IHG One Rewards is IHG's most strategically valuable asset, and IHG is the first global hospitality company to have standardized on Salesforce Einstein 1 + Loyalty Management + Data Cloud. That foundation is rare. What it enables is rarer still: predictive loyalty intelligence at the individual level. Members already spend 20% more and book direct 9× more often, and that's before loyalty intelligence is in the picture.
Predictive loyalty intelligence that goes far beyond points and tiers. AI models that predict which members are at risk of churning, which are ready to upgrade their travel patterns, and which promotions will convert for which individual, before a competitor makes the offer. The activation runs through Marketing Cloud to reach the member in the channel they actually use. Turn the 20% spend lift into 35%+ through hyper-personalized engagement at the moment that matters.
Salesforce Einstein 1 + Loyalty Management + Marketing Cloud are live and integrated. BigQuery holds the behavioral history. The AI intelligence layer is the missing accelerant.
2.3
Agentic Commerce
The Agent Layer
Channel Shift Horizon
30%+
Industry estimates of bookings flowing through AI agents by 2030. Owning the agent layer means owning that channel before it becomes the next OTA tax.
Guest journeys are getting machine-mediated faster than anyone wants to admit. ChatGPT's commerce surface, Gemini's agent mode, Perplexity's booking push, Apple Intelligence's app-intents layer: these are the next OTA, and the margin math is going to be familiar. You named this risk publicly when you wrote that competitors aren't just other brands anymore, they're the AI agents deciding whether to surface our products at all. The Multi-Model NL Search rollout in May 2026 was IHG's first public move on that question. It can't be the last.
What I propose is making IHG agent-readable on the outside and agent-orchestrated on the inside. On the outside: governed APIs through Apigee, structured data and intent vocabularies that let third-party agents (whether ChatGPT, Gemini, Perplexity, or whatever comes next) discover, compare, and book IHG without depending on the OTA layer. On the inside: a first-party agent surface in the One Rewards app and on ihg.com that handles the multi-step trip composition Hilton's planner promises but inside IHG's own ecosystem. The One Rewards 20% spend premium and 9× direct-booking lift become defensible against the agent wave instead of vulnerable to it.
Apigee for the governed external surface, NL Search Multi-Model as the front door already in market, Vertex AI + Claude + Gemini for the reasoning, Einstein 1 + One Rewards for the personalization, and the agent runtime in Layer 5 of the platform for execution. Hilton's planner is a walled garden. The IHG move is the opposite: agent-friendly on the outside, agent-orchestrated on the inside.
2.4
Multilingual Guest Experience
Global Reach as a Moat
Native-Tongue Moments
100%
Of guest interactions delivered in the guest's preferred language at every channel: search, app, voice, message, on-property, and post-stay
IHG is the most globally distributed of the major hospitality brands: 20+ brands across 100+ countries, with 16+ languages already supported in the Adobe AEM + Workfront content stack. That global footprint is currently underleveraged from an AI standpoint. Machine translation is already a chip on the foundation board, but translation alone isn't multilingual hospitality. A French guest checking into a Holiday Inn in Tokyo isn't looking for English subtitles, they're looking for hospitality that feels native to them across every interaction.
Multilingual AI as a first-class capability across every guest touchpoint: search (multi-model NL Search in any language), messaging (Apple Business Messages, Google Business Messages, WhatsApp in the guest's language), voice (Speakeasy AI tuned to regional accents and dialects), on-property (cognitive partners helping front-desk colleagues converse with guests in languages they don't personally speak), and post-stay (Service Cloud and Marketing Cloud delivering follow-up in the guest's native context). The 16+ languages becomes 30+, then 50+, and the experience stops feeling like "an American brand in a foreign market" and starts feeling like a brand built for the world.
Adobe AEM + Workfront + Machine Translation + Speakeasy AI Voice + Apple/Google Business Messages + Genesys Cloud (8 sites already unified) are in place. The intelligence layer ties them into one consistent voice. This is a moat the US-centric majors structurally can't match in their next planning cycle.
Domain 03 · The signature
AI for the Operation
The magic the guest never sees: reservations, revenue, distribution, operations.
Front of house gets the headlines, but back of house is where margin compounds over time. What I'm describing here is AI as a peer that runs the engine room, not a tool that helps humans run it. While Marriott pilots its "agentic mesh" at six properties and Hilton juggles 41 disconnected use cases, IHG's path is different: Invisible AI, deployed across the estate with discipline, on the platform that already works. This is where real advantage hides, and where most of the P&L upside lives.
3.1
Autonomous Reservations
Concerto · Distribution · OPERA
Margin Recovery
$100s M
Revenue retained from intelligent cancellation, rebooking, and distribution decisions made by agents instead of call queues
IHG Concerto is the reservation backbone. Oracle OPERA Cloud (newly approved in January 2026) is the next-generation PMS. Together they handle billions in transactions across 7,000 hotels worldwide. Today, much of it is procedural: humans interpret edge cases, escalations queue, distribution complexity gets managed by exception. Every minute a complex booking sits unresolved is revenue at risk. A modern reservation system isn't a database lookup; it's a real-time negotiation between guest, property, channel, and inventory. AI is the only operator that can run that negotiation at IHG's scale.
What I propose is reservation handling that operates with judgment rather than just rules: group bookings reconciled autonomously, distribution channel optimization in real time, loyalty edge cases resolved without a human in the loop, and cancellations and rebookings that retain revenue rather than leak it. The reservation desk evolves from a phone bank of last resort into a supervisor of agents handling the routine work directly.
Concerto, the OPERA Cloud rollout, and the One Rewards data already exist. The missing layer is agentic orchestration that lets the system act on its own, which is exactly what Layer 5 (Agent Runtime) of the platform delivers, working with Layer 1 (Integration & Tool Fabric) underneath.
3.2
Intelligent Revenue
Pricing & Yield
Industry Benchmark
15–25%
Revenue lift at properties with AI-native dynamic pricing implemented at scale
In 2025, IHG completed rollout of its new AI-driven revenue management system to 3,400 hotels: its eligible estate. Extraordinary infrastructure, but infrastructure on its own is table stakes now, not differentiation. An AI-native pricing intelligence layer on top of the RMS turns a tool into a competitive weapon. Static rules and nightly batch jobs leave revenue on the table. The gap between what IHG's pricing knows today and what it could know (in real time, at every property) is measured in hundreds of millions of dollars.
What I propose is AI-native dynamic pricing at hourly granularity, incorporating demand signals, competitor rates, local events, weather, loyalty segment mix, and historical patterns simultaneously, with consistent strategy across the portfolio and intelligent execution at the property level. Every hotel ends up always priced for what the market will bear, not what yesterday's model predicted.
The RMS rollout already provides the data pipeline and property-level integration across the estate. The AI intelligence layer sits on top, no rip and replace required.
3.3
Operations Intelligence
Portfolio Operations
Maintenance Cost Reduction
30–50%
Fewer emergency repairs and maintenance savings at properties with predictive AI
Across 7,000+ properties, reactive maintenance, manual staffing decisions, and energy waste represent billions in avoidable cost. Each property experiences these inefficiencies in isolation. The accelerated PMS rollout (2,000 hotels in 2025 alone) creates an unprecedented operational data pipeline. Every additional hotel on OPERA Cloud increases the portfolio signal density. AI changes the model: pooling signals across the entire portfolio to predict failures before they happen, optimize staffing against actual demand, and manage energy intelligently.
What I propose is a portfolio-level operations intelligence layer that gives every property manager the predictive awareness previously only available at the largest, most sophisticated individual properties: maintenance predicted before it fails, staffing optimized against demand and local events, energy managed against occupancy forecasts. Managed centrally, delivered locally.
PMS modernization is already in flight. Operations intelligence is the natural next layer on top: the kind of long-arc bet most operators de-prioritize, and the one that separates winners from the rest of the field over a five-year horizon.

My intent is that these three domains aren't separate programs; they're one platform expressed three ways. Domain 1 makes the team faster, Domain 2 reimagines what the guest experiences, and Domain 3 is the back-of-house work the guest never sees. Most operators only do the first two well, which is why Domain 3 is where IHG can build operational leverage that compounds quietly and is hard to replicate.

A GCP-Native AI Platform
Built to Compound

Most enterprise AI is a collection of point tools that don't reinforce each other. The platform I propose is a unified, GCP-native system where each layer makes the next one more useful: data feeds models, models drive agents, agents produce outcomes that come back as better data. Governance is built in from day one, so engineers spend their time building rather than assembling.

The platform spans architecture, governance, and the adoption culture that makes it work. It serves three masters: the business units who need AI use cases shipped fast, the risk function that needs every model accountable and auditable, and the colleagues and franchisees who need to adopt it. Good platform design makes all three allies.

This is the platform that runs underneath those three domains, and each layer below is sized to what AI for the Team, AI for the Guest, and AI for the Operation demand. The compounding only works because all three share one platform and one governance layer.

Design Principles
Adoption-First, Not Adoption-Last
My read on most enterprise AI platforms is that they fail not because they're technically wrong, but because they're practically ignored. So I propose to build this one to be always easier to use than to avoid, friction-free by design rather than by evangelism.
Governed from Day 1
Governance retrofitted onto an AI platform is slow, expensive, and tends to create adversarial relationships with risk and compliance, which is why I propose to embed governance from day one, treating it as a feature of the platform rather than a tax on it.
Multi-Cloud and Multi-Model as a Competitive Lever
The platform goes deep on GCP where most of the workload lives, but it should stay flexible enough to let workloads run where they're the best fit. Multi-model orchestration lives on Vertex AI: Gemini by default, Claude for reasoning depth, specialized models where they earn it. When the business case demands a different cloud or AI, best tool wins.
Platform and Org Solution Layers
Layer 1 · Integration & Tool Fabric
Pub/Sub · Kafka · Adapter Registry · MCP Tool Catalog · Domain Boundaries
The Connective Tissue
Event Bus & Streaming
Pub/Sub on GCP, Kafka where partner systems use it. The fabric where guest, reservation, pricing, and property events flow in real time. Same bus everyone consumes from and emits to.
Adapter Registry
Partner-agnostic connectors for OPERA Cloud (PMS), IHG Concerto (reservations), the RMS, the CMS, and Salesforce Einstein 1 (loyalty). Reusable archetypes per system, not per-deal one-offs.
Canonical Transformer
Normalizes data into a shared semantic model (Guest, Reservation, Property, Rate, Stay) so agents work in IHG's domain language, not vendor schemas.
Tool Catalog (MCP)
Discoverable registry of tools agents can call via Model Context Protocol, with per-tool authorization, scopes, and rate limits. Makes "agent calls cancel-booking" auditable, not magic.
Domain Service Boundaries
Clear ownership lines across Guest, Reservation, Property, Revenue, Loyalty, and Operations. Keeps the platform from becoming integration spaghetti.
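To make "auditable, not magic" concrete, here is a minimal sketch of what a catalog entry and its authorization check could look like. Every name here (ToolSpec, the cancel-booking scopes, the rate limit) is an illustrative placeholder, not IHG's actual schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ToolSpec:
    """One entry in the agent-facing tool catalog (illustrative)."""
    name: str
    domain: str                  # owning domain service, e.g. "Reservation"
    required_scopes: frozenset   # scopes the agent identity must hold
    rate_limit_per_min: int

class ToolCatalog:
    def __init__(self):
        self._tools = {}

    def register(self, spec: ToolSpec):
        self._tools[spec.name] = spec

    def authorize(self, tool_name, agent_scopes):
        """Return the spec only if the agent holds every required scope."""
        spec = self._tools.get(tool_name)
        if spec is None:
            raise KeyError(f"unknown tool: {tool_name}")
        if not spec.required_scopes <= set(agent_scopes):
            raise PermissionError(f"agent lacks scopes for {tool_name}")
        return spec

catalog = ToolCatalog()
catalog.register(ToolSpec(
    name="cancel-booking",
    domain="Reservation",
    required_scopes=frozenset({"reservations.write"}),
    rate_limit_per_min=60,
))

# An agent identity with the right scope gets the tool; one without does not.
spec = catalog.authorize("cancel-booking", {"reservations.write", "guest.read"})
print(spec.domain)  # Reservation
```

The point of the sketch: every tool call passes through one registry with explicit ownership and scopes, so "which agent ran which tool on whose behalf" is a lookup, not a forensic exercise.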
Layer 2 · Data & Intelligence Foundation
BigQuery · Vertex AI Feature Store · Real-time event streaming
Already Exists · Extend
BigQuery Enterprise DW
Unified data lake across all guest, loyalty, operational, and commercial signals
Vertex AI Feature Store
Reusable, governed feature sets shared across all models and use cases
Real-Time Event Streams
Guest touchpoint streaming from all 7,000+ properties into the intelligence layer
Data Governance & Lineage
End-to-end data provenance, so every model knows where its training data came from
Unified Guest Identity
One Rewards as the connective thread, so every signal resolves to a guest identity
Layer 3 · Model Management
Vertex AI · Gemini · Claude · Hospitality-tuned · LLMOps
Multi-model on Vertex AI
Gemini Foundation Models
GCP-native generative AI via Vertex AI, already in production at IHG, now scaled systematically across use cases
Claude (Anthropic) Integration
Best-in-class reasoning and long-context tasks (contract analysis, complex guest resolution, structured knowledge work) routed to Claude where it outperforms
Hospitality-Tuned Models
Fine-tuned on IHG's proprietary data: guest preferences, property signals, loyalty behavior
Model Registry & Routing
Cost-based and capability-based routing across providers. The right model for the job, decided at runtime rather than hard-coded into prompts.
LLMOps Infrastructure
Model versioning, evaluation pipelines, drift detection, and production health monitoring across all providers
Prompt Management
Centralized prompt registry, version-controlled, tested, and governed across all teams and models
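The routing idea above can be sketched in a few lines. Model names, capability tiers, and prices below are placeholders for illustration, not actual Vertex AI catalog entries or pricing:

```python
# Illustrative routing table: names and prices are assumptions, not real quotes.
MODELS = [
    {"name": "gemini-flash",  "tier": "simple",    "usd_per_1k_tokens": 0.0002},
    {"name": "gemini-pro",    "tier": "standard",  "usd_per_1k_tokens": 0.002},
    {"name": "claude-reason", "tier": "reasoning", "usd_per_1k_tokens": 0.015},
]

def route(task_tier: str, budget_usd_per_1k: float) -> str:
    """Pick the cheapest model whose capability tier covers the task
    and whose price fits the per-call budget."""
    order = {"simple": 0, "standard": 1, "reasoning": 2}
    candidates = [
        m for m in MODELS
        if order[m["tier"]] >= order[task_tier]
        and m["usd_per_1k_tokens"] <= budget_usd_per_1k
    ]
    if not candidates:
        raise ValueError("no model fits tier and budget; escalate to the use-case owner")
    return min(candidates, key=lambda m: m["usd_per_1k_tokens"])["name"]

print(route("simple", 0.01))     # gemini-flash
print(route("reasoning", 0.02))  # claude-reason
```

Decided at runtime, a table like this is versioned and governed like any other config; hard-coded into prompts, it fossilizes.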
Layer 4 · AI Gateway & Token Economics
Quotas · Cost-based routing · Semantic caching · API gateway · Spend telemetry
Costs Become Architecture, Not Surprise
Token Quota Enforcement
Per-project and per-domain token budgets. Costs become predictable commitments per business owner, not runtime surprises.
Cost-Based Routing
Route each call to the model that best fits the job and the budget. Cheap models handle simple work; reasoning models handle complex tasks; specialized models handle regulated workflows.
Semantic Caching
Cache results by meaning, not just exact prompt match. Repeat questions get answered once and reused, dropping cost and latency together.
API Gateway
Centralized auth, rate limiting, and observability for every model and agent call. One place to see what's running, what it costs, and who invoked it.
Spend Telemetry
Real-time cost dashboards tied to business outcomes. Every dollar of model spend traces back to a use case, a domain, and an outcome.
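"Cache by meaning, not exact match" reduces to one mechanism: embed the prompt, compare by similarity, reuse the answer above a threshold. The sketch below uses a toy bag-of-words embedding so it runs standalone; a real deployment would call an embedding model instead:

```python
import math

def toy_embed(text):
    """Placeholder embedding: a production system would call an embedding model."""
    words = text.lower().split()
    return {w: words.count(w) for w in set(words)}

def cosine(a, b):
    dot = sum(a.get(k, 0) * b.get(k, 0) for k in set(a) | set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class SemanticCache:
    def __init__(self, threshold=0.9):
        self.threshold = threshold
        self.entries = []  # (embedding, answer) pairs

    def get(self, prompt):
        emb = toy_embed(prompt)
        for cached_emb, answer in self.entries:
            if cosine(emb, cached_emb) >= self.threshold:
                return answer  # hit by meaning, not by exact string match
        return None

    def put(self, prompt, answer):
        self.entries.append((toy_embed(prompt), answer))

cache = SemanticCache()
cache.put("what time is checkout", "Checkout is at noon.")
print(cache.get("what time is checkout"))  # Checkout is at noon.
```

The threshold is the governance knob: too low and guests get someone else's answer, too high and the cache never fires, so it's tuned per use case, not set globally.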
Layer 5 · Agent Runtime
Vertex AI Agents · Orchestration · Identity · State · Coordination
Where Agents Run
Multi-Step Orchestration
Multi-step agentic workflows across the Human + Agent team model, spanning guest, operations, and revenue use cases.
Agent Identity
Every agent is a first-class service identity, distinct from the user who authorized it. Auditable down to which agent ran which tool on whose behalf.
State & Memory
Conversational and operational state persisted across long-running workflows; resilient to retries, escalations, and human handoffs.
Multi-Agent Coordination
Agents delegate, sub-task, and escalate based on capability and authorization, not hard-coded handoffs.
Retry, Circuit Breaker & Graceful Failure
Loops are bounded, calls retry with backoff, and humans pick up when the system asks for help. Agents fail safely, not silently.
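"Fail safely, not silently" is a small amount of code applied everywhere. A minimal sketch of the retry-with-backoff and circuit-breaker pattern, with illustrative class and parameter names:

```python
import time

class CircuitOpen(Exception):
    """Raised when the breaker trips; a human or supervisor agent takes over."""

class CircuitBreaker:
    def __init__(self, max_failures=3):
        self.max_failures = max_failures
        self.failures = 0

    def call(self, fn, *args, retries=2, base_delay=0.01):
        if self.failures >= self.max_failures:
            raise CircuitOpen("breaker open: escalate to a human")
        for attempt in range(retries + 1):          # bounded loop, never infinite
            try:
                result = fn(*args)
                self.failures = 0                   # success resets the breaker
                return result
            except Exception:
                time.sleep(base_delay * (2 ** attempt))  # exponential backoff
        self.failures += 1
        raise RuntimeError("retries exhausted")

# A flaky downstream call that succeeds on the second attempt.
calls = {"n": 0}
def flaky_booking_lookup():
    calls["n"] += 1
    if calls["n"] < 2:
        raise TimeoutError
    return "booked"

breaker = CircuitBreaker()
print(breaker.call(flaky_booking_lookup))  # booked
```

One transient timeout is absorbed by the retry; repeated failures open the breaker, and the next call raises an explicit escalation instead of spinning.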
Layer 6 · Application Intelligence
Guest · Revenue · Loyalty · Operations · Corporate
Ship Use Cases
Guest Experience APIs
Personalization, recommendations, and trip intelligence served to app, web, and property systems
Revenue Optimization
AI-driven dynamic pricing, demand forecasting, and yield management at property and portfolio level
Loyalty Intelligence Engine
Predictive member engagement, lifetime value models, and next-best-offer generation
Operations AI
Predictive maintenance, intelligent staffing, energy optimization across the portfolio
Corporate Cognitive Partners
Function-specific agents for finance, procurement, legal, HR, and marketing. Not generic assistants, partners that understand IHG's operating context.
Layer 7 · Responsible AI & Governance
Intake Portal · Trust Tiers · Risk-Tiered Review · Value Realization
Embedded from Day 1
Trust Tiers (your framework)
Tier 3 exploratory · Tier 2 operational · Tier 1 mission-critical. Match the data-quality bar and review depth to the use-case stakes. Don't wait for perfect data; route by tier.
AI Use Case Intake Portal
Single front door for any AI use case proposed across the 7,000-hotel estate, 20+ brands, and corporate functions. Built on ServiceNow, available to corporate teams and franchisee operators, multi-language. Replaces the maze of emails, decks, and one-off approval threads that slows AI to a crawl in most enterprises.
Risk-Tiered Review Orchestration
Each Trust Tier triggers its own parallel review stream. Tier 3 gets a lightweight review and ships fast. Tier 2 routes through Risk, Privacy, Brand, and Architecture in parallel. Tier 1 gets full ARB-level scrutiny. Rigor matches stakes; speed matches conviction.
Value Realization & Portfolio Dashboard
Every approved use case is tracked from intake to production to measurable business outcome. Executive view shows AI spend, ROI, operational impact, model dependency, and risk exposure across the entire IHG estate, updated in real time.
Sequence Governance
Humans think first. AI refines, evaluates, and scales. The platform encodes the sequence so diversity of judgment is preserved at the source, not commoditized into median outputs.
Responsible AI Framework
Bias testing, fairness controls, and ethical review embedded in the model deployment pipeline. Optimism without responsibility is negligence (your words, our default).
Variance Budgets & Graceful Degradation
Non-deterministic systems need civil-engineering instincts: explicit variance budgets, redundancy, human-in-the-loop as architecture not afterthought.
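A variance budget can start as something as simple as a hard ceiling on the spread of repeated runs; a hypothetical check (the function and budget values are invented for illustration):

```python
import statistics

def within_variance_budget(outputs: list[float], budget: float) -> bool:
    """Hypothetical gate: stdev of repeated runs of the same task must stay within budget."""
    return statistics.stdev(outputs) <= budget
```

The same civil-engineering instinct applies as with load tolerances: the budget is declared up front, measured continuously, and breaching it triggers the human-in-the-loop path rather than a silent retry.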
Disclosure Culture
"Show Your Work" norms turn AI-use disclosure into a status-positive signal. The shame tax on disclosure is what kills enterprise adoption, and we retire it on day one.
Audit Readiness · GenAI Guardrails · Data Privacy
Output filters, hallucination detection, PII isolation, consent management. When regulators ask questions, the answers already exist.
Layer 8 · Adoption & Culture
AI fluency · Self-service tooling · Communities of practice
Make It Inevitable
Self-Service Developer Platform
Engineers across the business build on the platform without a ticket queue or a meeting request
AI Fluency Programs
Meet colleagues where they are, from prompt-curious to production-capable, at every level
Communities of Practice
Expertise spreads organically, not by mandate, but because the platform makes it easy to share
AI Lab
Rapid prototyping space where the next platform capability gets born, not where PoCs go to die
Impact Measurement
Every AI initiative tied to a business metric. Adoption is measured in outcomes, not usage stats.
From EA to EAIA · How the Discipline Itself Shifts

Beyond the layers above, your role spans architecture and AI together; the & in your title is doing real work. The biggest question I'm hearing inside other enterprises right now is what happens to traditional Enterprise Architecture when AI agents become first-class citizens of the operating model. My read is that EA doesn't get replaced; it pivots. Capability maps stay valuable, reference patterns stay valuable, the Architecture Review Board stays valuable. What changes is the artifacts those practices produce, the cadence at which they're versioned, and the audience they serve. Here is how I see the shift.

Traditional EA Practice → EAIA Equivalent
Capability maps as static yearly deliverables → Living capability and agent maps, updated at the cadence of each Trust Tier release rather than once a year
Reference architectures locked once a year → Reference patterns versioned per Trust Tier (3 / 2 / 1), each with its own template, review cadence, and exit criteria
Architecture Review Board as a gate → Enabling guardrails encoded as policy-as-code; the ARB shifts from gatekeeper to pattern curator and exception handler
Business-IT alignment artifacts produced as one-way translation → Outcome-as-platform: every capability ties to a measurable business metric, surfaced in real time to the owner who needs it
Application portfolios tracked by IT → Application, agent, and tool portfolios co-owned with the business and tracked side by side
TOGAF and Zachman as the primary frameworks → TOGAF foundation extended with non-deterministic-system patterns: variance budgets, graceful degradation, sequence governance, agent identity
Architecture as static documentation in wikis and PDFs → Architecture as executable contracts: machine-readable interfaces that agents can read, act on, and check themselves against
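The "architecture as executable contracts" idea above can be sketched as a machine-readable spec that an agent, or a CI job, validates payloads against (the capability name and fields here are invented for illustration):

```python
# Hypothetical machine-readable contract for a pricing-agent interface.
# An agent can read the contract, act on it, and check its own payloads against it.
CONTRACT = {
    "capability": "revenue.quote_price",
    "inputs": {"hotel_id": str, "stay_date": str, "room_type": str},
    "outputs": {"price": float, "currency": str},
}

def conforms(payload: dict, spec: dict) -> bool:
    """True if the payload carries exactly the contracted fields, with the contracted types."""
    return (payload.keys() == spec.keys()
            and all(isinstance(payload[k], t) for k, t in spec.items()))
```

The contract lives in version control next to the code it describes, which is what moves architecture out of the wiki and into the pipeline.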

The pivot keeps everything an EA team already knows how to do and points it at faster cadences, more directly measurable outcomes, and machine-readable artifacts. The EA practice survives the AI era when it leans into the rigor it already has and updates the artifacts to match the new substrate. The discipline becomes EAIA: Enterprise AI Architecture, where the same governance, standards, and pattern thinking apply to a world where some of the actors running on the platform are not human.

This view is a conversation starter rather than an exhaustive treatment: it's indicative of where my thinking is going and what I see as the priorities right now. We'll add to this as we go, and we'll add a lot.

How I Would Build It
Sequenced Across Four Milestones

In my experience, the AI transformations that fail are the ones that start with the slide deck and never arrive at reality; most of the plans I review are Gantt charts, not operating models.

What I propose instead is grounded in Retire, Reinvent, Build: retire what no longer earns its place, reinvent what's structurally sound but procedurally stuck, and build the orchestration layer that turns IHG's existing stack (GCP, BigQuery, Vertex AI + Gemini, OPERA Cloud, the new AI-driven RMS, and the Salesforce Einstein 1 loyalty backbone) into a single working platform.

What follows is the milestone sequence: each step earns the right to the next, and none of them skip the hard parts.

The Architectural Thesis · Retire, Reinvent, Build
Retire
What no longer earns its place.
Siloed governance committees that gate every architectural change and slow greenfield work to a crawl.
Bolt-on AI thinking. Copilots dropped on legacy SDLC, treating AI as a feature instead of an operating model.
Vendor-led roadmaps that fragment the platform and turn every new capability into another integration debt.
Cloud-agnostic hedging that dilutes the GCP investment IHG already made.
Reinvent
Structurally sound. Procedurally stuck.
Concerto reservations: from procedural and exception-driven → agentic and self-supervising.
RMS pricing: from rules and nightly batches → AI-native, real-time, demand-aware across the eligible estate.
One Rewards loyalty: from points and tiers → predictive intelligence that anticipates churn and upgrade signals at the individual level.
Architecture governance: from gatekeeping committees → enabling guardrails. The same shift you wrote about as the operating model for the AI era.
Build
The net-new foundation.
The orchestration layer. Multi-model agent runtime spanning Gemini, Claude, and fine-tuned hospitality models.
Responsible AI + Trust Tiers, embedded from day one. Tier 3 exploratory, Tier 2 operational, Tier 1 mission-critical. Match the data-quality bar to the use-case stakes.
Forward Deployed Engineering pods (the Swarm Model). Small, multi-disciplinary teams paired to business problems, not technology stacks. The platform Strategic AI Solutions Delivery runs on.
An unlearning program for senior leadership. Explicit retraining of the directors and VPs promoted for predictability who now lead in non-deterministic environments. Not a values slide. A program.
Disruptor density. Recruit ten AI-native engineers and architects who recognize each other. One disruptor gets worn down. Ten form a new norm.
"Show Your Work" disclosure culture. Turn AI-use transparency into a status-positive signal. The shame tax is killing enterprise adoption.
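The multi-model runtime in the Build list above reduces, at its simplest, to a routing decision; a sketch with illustrative task classes and model assignments (not a claim about which model actually wins each category):

```python
# Illustrative routing table: which model family serves which task class.
ROUTES = {
    "guest_chat": "gemini",                   # latency-sensitive, native to the GCP stack
    "contract_analysis": "claude",            # deep multi-step reasoning
    "demand_forecast": "hospitality-tuned",   # fine-tuned in-house model
}

def pick_model(task: str, default: str = "gemini") -> str:
    """Route a task class to a model family; fall back to the platform default."""
    return ROUTES.get(task, default)
```

The table is the point: routing lives in one reviewable place, so swapping a provider is a one-line change instead of a migration project.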
The Governance Spine · Runs Alongside Every Milestone

Across the regulated enterprises I've worked with, the AI program that scales is the one whose governance was designed for scale on day one, not retrofitted in year two. The pattern that works in practice: a single intake, risk-tiered review, value-realization tracking, and an executive portfolio view, all running from the first use case forward. Below is how that spine matures across the four milestones, so governance becomes the connective tissue rather than the brake.

Phase 1 · with M1
Foundation
A single front door for any AI use case across the enterprise.
  • Centralized AI intake on ServiceNow, available to corporate teams and franchisee operators
  • Trust Tier framework (3 / 2 / 1) codified using your published structure
  • Cross-functional coordinating group stood up across Risk, Privacy, Brand, Architecture, and Vendor
  • Baseline risk register and enterprise use case catalog
By end of M1, every AI use case has one front door.
Phase 2 · with M2
Orchestration
Workflows replace ad-hoc approval threads.
  • Risk-tiered review workflow: each Trust Tier triggers its own parallel review stream
  • Standardized templates for business case, model card, data privacy, and brand alignment
  • Value realization framework defined so every use case has a measurable outcome before launch
  • Integration with Apigee for technical gates and BigPanda for operational signals
By end of M2, intake-to-deployment runs in weeks, not months.
Phase 3 · with M3
Scale
Governance reaches the franchisee network.
  • Franchise-aware extension: pre-approved use case catalog so franchisees self-serve from a vetted library
  • Multi-jurisdiction policy-as-code (GDPR, LGPD, APPI, PIPL, US state regimes)
  • Multi-model registry tracking which model from which provider touched which use case for which regulatory frame
  • Executive portfolio dashboard live: spend, ROI, risk exposure, and model dependency at a glance
By end of M3, governance scales with the platform instead of dragging behind it.
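Policy-as-code in Phase 3 means rules a pipeline can evaluate, not prose in a PDF. An illustrative sketch with a default-deny posture (the rules and jurisdiction entries are invented for shape only, not legal guidance):

```python
# Hypothetical per-jurisdiction policy rules, evaluated automatically at intake.
POLICIES = {
    "GDPR":  {"requires_consent": True, "allows_profiling": False},
    "US-CA": {"requires_consent": True, "allows_profiling": True},
}

def permitted(jurisdiction: str, use: dict) -> bool:
    """Evaluate a proposed data use against the jurisdiction's rules."""
    policy = POLICIES.get(jurisdiction)
    if policy is None:
        return False  # default-deny for unknown jurisdictions
    if use.get("profiling") and not policy["allows_profiling"]:
        return False
    if policy["requires_consent"] and use.get("has_consent") is not True:
        return False
    return True
```

Because the rules are data, adding LGPD, APPI, or PIPL is a reviewed pull request rather than a committee cycle, and every decision is reproducible for an auditor.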
Phase 4 · with M4
Steady-State
Governance becomes a product the business consumes.
  • Self-service intake with AI-assisted use case classification and risk pre-screening
  • Predictive review: AI flags likely failure modes before humans do
  • Continuous value tracking with closed-loop signal back to the use case owners
  • IHG governance posture published as a hospitality-industry reference pattern
By end of M4, governance is no longer a function; it's a capability.
Why a Spine, Not a Stage Gate

Most enterprises treat governance as a stage gate after the platform is built, which is how you get the bottleneck this section is designed to avoid. The spine model runs governance and platform in parallel from day one, with each phase earning the right to the next. The four phases above borrow from a transformation pattern I've seen succeed at a major regulated enterprise: centralized intake, risk-tiered review, cross-functional orchestration, and value tracking, all designed for scale on day one rather than retrofitted in year two.

Milestone Plan
M1
Orient & Architect
  • Deep discovery: Map the real state of the GCP stack, data assets, in-flight AI initiatives, and existing team capabilities, not the slide deck version but the actual version
  • Stakeholder mapping: Understand your strategic priorities, Jolie's product agenda, and what each business domain leader needs from AI, without assuming I already know
  • Architecture assessment: Audit the Vertex AI, BigQuery, Salesforce Einstein, and RMS integrations for gaps, redundancies, and quick-win connection points
  • Team design: Define the org architecture before any hires are made, shaping the Head of Strategic AI Solutions Delivery role, the FDE model, and the governance structure
  • Platform hypothesis v1: Synthesize discovery into a clear architectural direction, shared with you and stakeholders before a single line of platform code is written
"The platform hypothesis I bring you at the end of M1 reflects IHG's actual state, not a pre-packaged playbook."
M2
Platform Foundation Live
  • Enterprise AI platform skeleton: Model registry, feature store extensions, agent orchestration framework, and LLMOps infrastructure live on GCP
  • First two use cases in production (Tier 3 first, Tier 1 last): Domain 1.2 (Hospitality-Tuned Cognitive Partners) is the Tier 3 starting point, internal-facing, lower-risk, and the fastest internal-confidence win. Domain 3.2 (Intelligent Revenue) follows as the Tier 2 use case: operational, with measurable RevPAR lift. Mission-critical Tier 1 use cases ship only after the platform has earned trust through the lower tiers.
  • Team formed: Head of Strategic AI Solutions Delivery hired, first wave of engineers onboarded, Forward Deployed Engineering teams established in one business domain
  • Governance · Phase 2 live: the risk-tiered review workflow, standardized templates, and value-realization framework are all operating against the intake portal stood up in M1. (See Governance Spine above.)
  • Self-service tooling: First version of the developer platform live, so teams outside the AI org can start building on the platform without a ticket queue
"The first use cases will earn the trust that every subsequent milestone depends on. The platform model itself is what's being proven."
M3
First Wins at Scale
  • 3+ business domains running on platform: Guest, Revenue, and Loyalty intelligence use cases in production, with measurable business outcomes documented and socialized
  • AI Lab launched: Dedicated space for rapid prototyping, with the first external cohort established with universities, startups, and internal product teams
  • Governance · Phase 3 live: the franchise-aware extension reaches the 70%-franchised network, multi-jurisdiction policy-as-code is in force, and the executive portfolio dashboard ships. Governance now scales with the platform, not behind it.
  • LLMOps mature: All production models monitored, versioned, evaluated, and maintained, with no "deploy and forget" AI anywhere in IHG's estate.
  • Colleague cognitive partners: First internal productivity tools live, and colleagues and franchisees feel the difference. The narrative shifts from "AI is coming" to "AI is here."
"By M3, AI has shifted from corporate initiative to operating environment. Adoption stops needing internal evangelism."
M4
Industry Leadership
  • IHG cited as a reference customer: The platform generates results you can bring to boards, investors, and industry forums, not as aspiration but as demonstrated outcome
  • All three Domains operating at scale: Team, Guest, and Operations AI all in production, interconnected systems reinforcing each other instead of running in parallel
  • The team runs itself: Forward Deployed Engineering teams embedded in business domains, swarm model operating in practice, Human + Agent delivery standard across the org
  • Private intelligence at scale: The AI Lab's output feeds back into the core platform, with AI becoming a high-resolution mirror of IHG's unique logic, scars, and foresight. The kind of private intelligence no competitor can replicate.
  • Measurable competitive outcomes: RevPAR lift, loyalty retention improvement, operational cost reduction: numbers that belong in IHG's annual report.
"By M4, the platform is in production with measurable outcomes. The next year's work builds on top of it rather than alongside it."

My experience is that the milestone sequence matters more than any single milestone. The retirements come first because unlearning is harder than building. The reinvents follow because they piggyback on infrastructure IHG already owns. The builds earn their place at the end rather than dictating the start. By M4 there are no "AI initiatives" anymore. There's an operating system, and the work shifts from launching it to scaling it.

A New Kind of Engineering Org
Built for an AI-Native Era

The team I propose is built for end-to-end accountability.

VP, AI Platform & Engineering, reporting to you, owns both the platform engine and the AI solutions delivery org as one system. Every AI capability that ships at IHG runs on this shared platform, under shared strategy, governance, and standards.

Platform and delivery move at the same speed, because they are the same system.

Forward Deployed Engineering
Inspired by the model pioneered in frontier tech: dedicated AI SWAT teams embedded directly alongside the business domains they serve. No ticket queues. No lengthy handoff cycles. Engineers working hand-in-hand with the teams who own guest experience, loyalty, revenue management, and operations. Co-owning outcomes rather than handing off deliverables.
The Swarm Model
The org doesn't wait for a reorg to allocate capacity to what matters most. Resources rally dynamically around the highest-priority strategic initiatives, then reorganize as business needs shift. The bottleneck is never headcount; it's focus. Governance makes fluidity a feature, not a bug.
Team Architecture · One Accountable Line
Reporting Structure · Platform Engine + Delivery Org under VP, AI Platform & Engineering, reporting to SVP Wei Manfredi
SVP, Enterprise AI & Architecture · Wei Manfredi Reports to CPTO Jolie Fleming
VP, AI Platform & Engineering This Role · Pavol
Platform Engine + Delivery Org
↳ The Engine
Principal AI Engineer + Lead AI Engineers 2 Open · Platform
↳ GCP Core, LLMOps, Multi-model orchestration, MCP
Director, Product Management · AI Platform Open · Platform-as-product owner
Manager, AI Governance & Trust Tiers Open · Day 1 hire
Integration Manager + Solutions Engineer 2 Open Roles
AI Lab · Rotating Cohort Launch at M3
↳ The Delivery Org
Head of Strategic AI Solutions Delivery Direct Report · Open
↳ Runs Human + Agent delivery under shared strategy and standards
Forward Deployed Engineering · Domain teams Guest · Revenue · Operations
↳ Franchisee-facing engineering · university partners
Solution Delivery Leads Per-pod hires
Adoption & Change Management Open · M2
How This Team Works
Human + Agent
AI agents act as active delivery participants rather than tools being managed: team members contributing to outcomes, which means engineers get multiplied rather than replaced. The team that ships in year two ends up operating at a different scale than the team that started in year one.
Forward Deployed Engineering
FDE teams earn trust by working inside business domains, not behind a backlog. They understand the revenue manager's context, the guest services lead's constraints, and the operations team's pressures. That's how AI gets adopted: not by mandate, but by solving the real problems those teams face.
Focus Over Headcount
The swarm model means the answer to "we need more AI" is rarely "hire more people"; it's "reallocate focus." Governance structures make dynamic prioritization feel like a feature rather than a friction, so the team is always working on what matters most rather than what's queued up in the backlog.

My read is that the structure is the strategy. Keeping platform thinking and delivery thinking moving as one system, under shared strategy, governance, and standards, is what makes the operating model work. The team this produces is small, embedded, agent-multiplied, and focused. Not a department, an operating model.

My Vision and Delivery Experience
Applied to IHG

To recap, I don't take the rarity of this role lightly.

I've done the public research, combined it with my own experience and opinions, laid out the thinking across this microsite, and had a lot of fun doing it. Every architectural decision I have ever made has been pressure-tested in production over many years across hospitality, financial services, life sciences, and high tech. I expect you will now pressure-test this document as well.

What I intend to bring: an open mind, flexibility, the engineering depth to build the platform, the executive fluency to defend it, and the operating-model conviction you also mentioned: the hard part isn't the architecture, it's the delivery.

I am ready to begin.

Thank you again,
Pavol
01
AI & Cloud Stack Depth
GCP-native, multi-cloud capable. GCP is IHG's stack and mine: Vertex AI, BigQuery, Gemini, serverless. Depth across AWS and Azure as well, with the conviction to go native on the stack IHG has chosen rather than hedge into cloud-agnostic mediocrity.
Claude and Anthropic, practitioner not observer. Production experience with the Claude API: multi-turn reasoning, tool use, MCP-based integrations. Claude Code is my daily engineering environment, and this microsite was built with it. Anthropic's constitutional AI and safety-first design map directly to the responsible AI layer IHG needs.
Agentic systems and LLMOps fluency. Built and shipped agentic AI workflows using Claude, Gemini, and custom ML in production. Multi-model strategy in practice: knowing when Gemini serves, when Claude's reasoning depth is the right call, and how to keep models healthy, observable, and accountable once they ship.
02
Hospitality DNA
500,000+ guests at premium event scale. Led the global delivery of a 100% serverless, real-time guest experience platform for an international premium hospitality event. Engineers across the US, Mexico, and Europe, with on-site hypercare during go-live.
Guest DNA recommendation engine. Architected a hospitality personalization platform on proprietary guest behavioral signals. This is exactly Domain 2.1 (Guest Intelligence); I've done a version of it before.
Real-time integrations at hospitality scale. Delivered integrations across ticketing, B2B and B2C commerce, accommodation, and registration systems under real speed, resilience, and DevSecOps constraints.
03
Enterprise at Scale
Accenture Managing Director and Certified Master Technology Architect. Led programs that earned C-suite trust across financial services, hospitality, logistics, life sciences, and high tech. The firm's highest technical designation, earned through delivery rather than tenure.
Global delivery across the US, Europe, and Latin America. Directed engineers across multiple continents, time zones, and organizational cultures. IHG's 100-country footprint is familiar territory, not a learning curve.
Atlanta, GA · Day 1 in the building. Same city, no relocation, no remote compromise. If the policy is three days a week in the office, I'll be there five when the work calls for it.
04
Thought Leadership & Voice
Co-founder of OpenSlava, public speaker. Co-founded one of Central Europe's largest open technology conferences. Recent talks on 100% serverless platforms at scale, AI applications in practice, and the reality of AI transformation beyond the hype cycle.
I bridge the executive and the engineer. Equally comfortable presenting to a board and reviewing a pull request. The VP role works best when both registers live in the same person.
Master of Computer Science, Comenius University. CS fundamentals aren't decoration; they're what lets me reliably tell the difference between an AI that will work in production and one that will only be impressive in a demo.