I recently caught a pie-baking competition at a local county fair.
While admiring the pies vying for the blue ribbon, I naturally started thinking about how pies and hotel demand actually have something in common: sample size.
I mean, think about it: Would you enter a small slice of pie into a competition? Or would you enter the whole pie for the win?
In the revenue technology realm, what about using small slices of hotel demand—such as lost business—to make big-picture forecasts? Is that enough, or should technology evaluate a larger set of considerations to determine total hotel demand?
To illustrate further, let’s bring this back to an era some hoteliers refer to as The Glory Days. You know, before the internet and third-party OTAs became a thing, and hotels were living their best life.
As part of the process of determining total hotel demand, hotels had to find ways to quantify their lost business. Lost business, in general, comprises two types of data: regrets and denials.
Both represent a lost opportunity; however, they are classified for different reasons:
- A regret represents a missed booking because a rate exceeded the shopper’s budget
- A denial represents a guest turned away because the date or room type was not available
Not exactly what you’d call a scientific method for evaluation, right? But at the end of the day, classifying missed opportunities in a simple manner helped fill a void in understanding total hotel demand.
But even in those simpler times, it still wasn’t simple to determine the exact reason behind every piece of lost business.
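To make the distinction concrete, here is a minimal sketch of how a front-desk or reservations workflow might tag an unconverted inquiry as a regret or a denial. The `Inquiry` fields and the classification rules are illustrative assumptions, not any vendor's actual data model; note that an inquiry fitting neither rule simply goes unclassified, which is exactly the ambiguity described above.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class LostBusinessType(Enum):
    REGRET = "regret"   # rate exceeded the shopper's budget
    DENIAL = "denial"   # requested date or room type was unavailable


@dataclass
class Inquiry:
    # Hypothetical fields for a single unconverted shopping request
    quoted_rate: float
    shopper_budget: Optional[float]  # None if the shopper never stated a budget
    date_available: bool
    room_type_available: bool


def classify_lost_business(inquiry: Inquiry) -> Optional[LostBusinessType]:
    """Tag an unconverted inquiry as a regret, a denial, or neither."""
    if not (inquiry.date_available and inquiry.room_type_available):
        return LostBusinessType.DENIAL
    if inquiry.shopper_budget is not None and inquiry.quoted_rate > inquiry.shopper_budget:
        return LostBusinessType.REGRET
    return None  # no clear reason captured: the gap the article points out


# Example: a $249 quote against a stated $200 budget is logged as a regret
lost = classify_lost_business(
    Inquiry(quoted_rate=249.0, shopper_budget=200.0,
            date_available=True, room_type_available=True)
)
print(lost)  # LostBusinessType.REGRET
```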
Now fast-forward back to today.
Guests have ample channels to search, shop and book hotel accommodations. Thousands of travel sites exist, ranging from OTAs, metasearch and brand.com sites to traditional voice and GDS channels.
This growing technology superstorm means that collecting complete lost business data has become unmanageable and impractical, especially when it comes to using denials to forecast hotel demand.