
It took 527 stolen credentials to leak the data of over 5 million guests. In March 2026, CyberNews discovered that an attacker had compromised the credentials of accounts on Chekin and Gastrodat, hospitality platforms based out of Spain and Austria, and obtained plain-text copies of usernames, passwords, and tokens.

NB: This is an article from Caleta


The attacker used these credentials to extract guest names, booking details, and even ID documents, with everything streamed directly to their Telegram account by a simple set of Python scripts.

This type of leak exposes guests to identity theft and opens both guests and hotels to targeted phishing attacks. Meanwhile, hotels are rapidly deploying AI agents and sometimes handing them the very capabilities the attacker worked to obtain: plain-text credentials and the autonomy to use them.

Good Agents Need Good Controls

AI agents act on your behalf to complete tasks: answer a question about the property, re-assign guest rooms, look up availability and rates. When built well, these agents can seem like magic, with seemingly instant access to the right information and systems, all without you needing to intervene. However, good agents need good controls. Preventing a runaway agent from deleting a reservation or guest profile, sending PII through a text message, or being exploited by a bad actor requires diligent work on the part of the developer. When AI agents aren't controlled, they represent risk at a new scale for your property: rather than credentials leaking to a single bad actor, the agents themselves systematize the use of your plain-text credentials.

Take Computer Use Agents (CUAs to their friends). These AI agents work analogously to a human: look at the screen, click a button, type in a text box. Like a human, a CUA needs to know your password in order to log in. Unlike a human, it takes screenshots of your pages (often with guest credit card data and PII visible), processes them, and uses that information to take its next action.

In 2025, Microsoft researchers identified this as one of seven major risk classes associated with CUAs: you’re giving an agent access to private guest and company information with no guarantee of where that data ends up. When you give a CUA the login information for your PMS, it can now access anything that the login entails. Login-based permission management systems were built for human actors, not AI agents or automated systems.

All of us have seen a sticky note with the PMS password written on it stuck to a monitor. CUAs and other inadequately guardrailed agents take that security breach, scale it, and add a whole new layer of attack patterns and potential hallucinations on top of it.

This Isn’t a Hypothetical

When McDonald’s McHire app was breached, exposing the private data of 64 million job applicants, the cause was traced to a single default credential from their AI chatbot provider. No multi-factor authentication, no access controls, no API restrictions: one password set to “123456” guarding 64 million people’s data.

Around the same time, attackers breached multiple hotels’ Booking.com credentials using a technique called ClickFix, a social engineering method where fake error messages trick hotel staff into running malicious commands on their own machines. This type of attack is even more potent against AI agents, which can be misdirected, prompt-injected, or otherwise confused by hidden text that a human would never even see.

These aren’t isolated incidents. Between January 2025 and February 2026, at least 20 separate security incidents occurred across AI-powered applications. Nearly all had preventable causes, such as mishandled credentials or lax authentication.

Where Did My Data Go?

When something goes wrong in these systems, who is there to hold accountable? Can you even track down what went wrong?

In many cases, these systems aren’t built with auditability as a first-class feature. When the AI took a screenshot of your VIP guest’s profile, did it see their passport information? Cardholder data? Where did that data go when the screenshot was processed? Is that data being used, non-anonymized, to train a third party’s AI model? If the agent wasn’t built with this in mind, tracing that path during an audit may not just be difficult; it may be impossible.

Auditability is an age-old standard in hospitality. Folio items can’t be deleted, voiding them requires a reason code, and every void is audited at the end of the day. Hold your AI partners to that standard and higher. You need to be able to understand where every bit of your data is being used, how it is being used and, when an AI agent makes a decision, why that decision was made and what was impacted.

The Real Cost of Getting This Wrong

Even after a data breach is discovered and the leak is stopped, the problems keep building. The guest data now lives in a phishing pipeline, putting those guests at risk of scams and extending your liability for years afterwards. MGM’s 2023 breach cost $100 million in operational disruption in a single quarter, with another $45 million added in settlement costs. Marriott paid $52 million in penalties and entered a 20-year consent order.

Second-order costs stack up even higher. Cyber insurance premiums can spike by up to 50%. Violating PCI-DSS can result in fines of $100k per month, or in the worst case cost your property the ability to process payment cards at all. Customer return rates can fall by 38% or more. IBM found that in 2025, the average data breach cost $4.44 million once forensics, legal fees, notification, and lost business are accounted for.

It isn’t a question of whether you need a safe and compliant AI system; it’s a question of whether you can afford the alternative. For your teams, these aren’t abstract numbers. For a major brand or chain, a breach is a massive hit to the bottom line and to the trust they’ve worked so hard to build. For a small chain or independent, it can be a death sentence.

Why They Trust Us

Hotels run on trust above all else. Guests trust us with their most personal information and expect us to protect it, and hold us accountable if we don’t. Will those 5 million guests come back to the hotel whose system leaked their data? Or has that trust been permanently eroded, taking your brand reputation, revenue, and legal safety with it?

AI agents running at your properties need to be built to strengthen guest trust and held to stricter standards than human team members. AI adoption will unlock remarkable efficiencies and personalization at scale, but doing it right means scrutinizing the AI systems you adopt. When a vendor offers instant, no-setup integration into all of your systems, that speed comes at the cost of security and opens you up to the same type of credential attack that hit Chekin and Gastrodat.

Ask your AI vendors where your data goes, how their audit trail works, and who controls your information. If they can’t answer clearly, pause and assess before that agent gets your PMS login.

Where We’re Going

To me, the core of hospitality is the personal experience. When I worked at Four Seasons, I saw real magic happen every day. When a guest falls into a cactus on their hike, a real person can sit with them, help pull out the needles, and even get them to laugh about it. A real person can walk them to the restaurant and comp their lunch. That turns a one-star day into a five-star review. That’s the kind of magic that only hoteliers who care deeply can bring.

AI doesn’t make that human connection less important; it makes it exponentially more important. Staff now have the ability to customize every interaction for each guest in real time, making every guest feel like the center of the world. AI is a force to serve that human connection, and it should never put those guests at risk.

Five million guests had their data leaked. The question you should be asking yourself and your vendors is simple: are AI tools solving problems for my property, or are they just automating risk?
