
Claude blows ChatGPT out of the water with the Hospitable MCP Server

  • April 29, 2026
  • 5 replies
  • 52 views

Andrew A.
New Participant

Claude (Opus 4.7) vs ChatGPT (5.5) with Hospitable MCP — Real Use Case

I wanted to properly test Claude Opus 4.7 vs ChatGPT using the Hospitable MCP in a real scenario—not benchmarks, but something actually useful for hosting.

 

The Question

Where do my guests actually come from, and how has that changed over the past ~5 years?

We operate in Tbilisi, so this isn’t just curiosity—regional shifts matter a lot given everything going on globally.

Setup

  • MCP setup was easy in both tools

  • Same prompt used in both (written manually)

  • No special tuning or iteration upfront

The Prompt I Used

“Can you visualize over time since the very beginning of each listing (grouped is fine) where our guests have come from? I want to see a changing or constant ‘demographic’. Watch out though, the location is not normalized that comes back from Hospitable / Airbnb. Codex had a lot of trouble with this. So look at the data then think it through first!”
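For context, the shape of analysis I was asking for is roughly this. A minimal sketch with made-up sample data (real Hospitable/Airbnb exports are far messier, which is exactly the point of the warning in the prompt):

```python
from collections import Counter
from datetime import date

# Toy reservations as (check-in date, guest country); illustrative only.
reservations = [
    (date(2021, 6, 1), "Russia"),
    (date(2021, 7, 10), "Israel"),
    (date(2022, 3, 15), "Russia"),
    (date(2022, 8, 20), "Germany"),
    (date(2023, 5, 5), "Israel"),
]

# Count bookings per (year, guest country)...
by_year = {}
for check_in, country in reservations:
    by_year.setdefault(check_in.year, Counter())[country] += 1

# ...then turn counts into per-year shares so the mix is
# comparable across years with different booking volumes.
shares = {
    year: {c: n / sum(counts.values()) for c, n in counts.items()}
    for year, counts in by_year.items()
}
print(shares[2021])  # each country's share of that year's bookings
```

Per-year shares (rather than raw counts) are what make a "changing or constant demographic" visible across years.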

 

Claude Experience

Claude basically just ran with it. The first response already included:

  • Clean chart

  • Clear breakdown

  • Actual insights (not just description)

What stood out most:

It didn’t just analyze — it challenged the dataset.

It flagged gaps, suggested ways to improve the analysis, and then executed on that idea in the next run.

Second run:

  • Stronger dataset coverage

  • Updated visuals (after asking)

  • More confident, real-world insights

It felt like:
“Here’s what your data says—and here’s what it actually means.”

Downside:
It burned through usage fast (2 runs plus a few follow-ups and I was basically done, but so was my usage for the next 3 hours!).

 

ChatGPT Experience

With the same prompt, ChatGPT took a more structured, step-by-step approach.

  • It correctly identified the messy data

  • But instead of acting immediately, it:

    • Explained the issue in detail

    • Suggested prompts to run

    • Waited for me to drive

From there, the workflow became:

  1. Run suggested prompt

  2. Get results + long explanations

  3. Get another suggested prompt

  4. Repeat

Outputs included:

  • Dashboards (a bit “retro” in feel)

  • Lots of bullet-point insights

  • Heavy focus on methodology rather than conclusions

Even when I asked for a CEO-style summary, it still mixed:

  • Final insights

  • With explanations of how it got there

It kept suggesting next steps like:
“Monthly booking share by region (with vs without enrichment)”

Each step required manual prompting and steering.

 

The Real Difference

This wasn’t about which model is “better”—it’s about how they think. Wait, who wrote this? ChatGPT. It really is about which model is better.

Claude

  • Proactive

  • Makes decisions

  • Improves the analysis independently

  • Focuses on insight

ChatGPT

  • Guided

  • Waits for direction

  • Suggests frameworks and workflows

  • Focuses on process

 

What This Means for Hosts

If you're using MCP with Hospitable:

Claude is great when you want good answers quickly
→ “Tell me what’s going on in my data”

ChatGPT is the one to pick when you want to steer every step yourself
→ “Help me build this analysis step by step and let me guide you through every single one”

 

My Takeaway

For this kind of messy, real-world question:

Claude got me to meaningful insights faster and with less effort
ChatGPT almost got there too—but needed much more steering

 

One Practical Lesson

If the underlying data is messy (which it often is in Hospitable/Airbnb exports), the model’s ability to interpret and adapt becomes the real differentiator.
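To make that lesson concrete: the guest location comes back as free text (“Tbilisi, Georgia”, “US”, “USA”, blanks, city-only strings, ...), so any demographic breakdown needs a normalization pass first. Here is a minimal sketch of the idea; the alias table is made up and a real one would have to be grown from whatever actually appears in your own export:

```python
import re

# Hand-built alias table (illustrative only, not exhaustive).
COUNTRY_ALIASES = {
    "us": "United States", "usa": "United States",
    "united states": "United States",
    "uk": "United Kingdom", "united kingdom": "United Kingdom",
    "georgia": "Georgia",  # beware: also a US state name
    "germany": "Germany", "deutschland": "Germany",
}

def normalize_location(raw):
    """Map a free-text location to a country name, or None if unknown."""
    if not raw or not raw.strip():
        return None
    # Locations often look like "City, Region, Country": try the last
    # comma-separated part first, then the whole string.
    parts = [p.strip().lower() for p in re.split(r"[,/]", raw)]
    for candidate in (parts[-1], raw.strip().lower()):
        if candidate in COUNTRY_ALIASES:
            return COUNTRY_ALIASES[candidate]
    return None  # leave unknowns for manual review instead of guessing

print(normalize_location("Tbilisi, Georgia"))  # -> Georgia
```

Returning None for unknowns (rather than guessing) is what lets a model, or you, flag coverage gaps honestly, which is exactly the behavior Claude showed in the first run.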

Curious if others here have tested MCP with both—are you seeing the same pattern, or different results?


PS: Slightly ironic footnote—this post was polished with ChatGPT… because I had already burned through my Claude usage 😄

5 replies

Andrew A.
New Participant
  • Author
  • April 29, 2026

Quick follow-up / correction to my previous post

I made a small mistake: my original comparison was actually with ChatGPT 5.3, not 5.5. The chat defaulted to the 5.3 model because I hadn’t explicitly selected 5.5.

I reran everything in a fresh chat using 5.5, and the difference was noticeable.

What improved with 5.5:

  • Thought longer before responding (more deliberate = more careful reasoning). This took 10-15 minutes per run, though!

  • Less verbose overall, with far fewer emojis

  • Focused more on the data itself, less obsessed with the methodology

That said, the workflow was still quite hands-on:

  • It paused multiple times to describe the reservations schema and show raw samples before cleaning

  • I still had to give it the “hint” to improve the dataset (which it initially chose not to use on its own)

  • It wanted to align on visualization structure before proceeding

  • Built a dashboard (in HTML canvas), but:

    • Initially all black and not differentiated by color

    • Couldn’t fix the styling in the same pass due to too many issues

    • Needed a restart to get a usable version

After that:

  • It built a solid dashboard structure

  • Initially only worked with ~1/3 of the data (JSON size issues via MCP) with a huge disclaimer

  • Later handled full data via pagination

  • Finally produced deeper insights (beyond just demographics—actually quite interesting as a host)
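The pagination point generalizes: whenever a single response would blow past a size limit, fetching page by page gets you the full dataset. A rough sketch of the pattern, with a fake backend standing in for the real data source (the page/per-page convention here is an assumption, not the documented Hospitable API; check the actual docs before wiring it up):

```python
def fetch_all(fetch_page, per_page=100):
    """Collect every item from a page-based source.

    fetch_page(page, per_page) must return the list of items on that
    page (short or empty once past the end); wire it to your client.
    """
    items, page = [], 1
    while True:
        batch = fetch_page(page, per_page)
        items.extend(batch)
        if len(batch) < per_page:  # a short page means we've hit the end
            break
        page += 1
    return items

# Fake backend with 250 records, standing in for a real API:
data = list(range(250))
def fake_page(page, per_page):
    start = (page - 1) * per_page
    return data[start:start + per_page]

print(len(fetch_all(fake_page)))  # 250
```

This is presumably what ChatGPT did internally once it switched from "one huge JSON blob" to paginated retrieval.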

Bottom line (after testing 5.3 and 5.5):

ChatGPT 5.5 is a clear step up from 5.3—especially in:

  • Interaction quality

  • Data handling

  • Output structure

But even with that…

👉 Claude still wins for this use case—pretty comfortably.

Less steering, faster path to insight, more “just gets it” behavior.

Curious if others have seen similar improvements with 5.5?


Petra Podobnik
Hospitable Team Member
  • April 29, 2026

Thanks @Andrew A. for sharing it here with hosts. 💜


Tom Beerley
Hospitable Hero
  • April 29, 2026

@Andrew A. thanks for sharing this. I pasted your prompt verbatim into Claude and got some quantitative insight into what I kinda knew already: that our town is a classic, stable, drive-in regional beach town that draws almost entirely from our tri-state area.

The summary it gave me was:

The actual demographic story: It's been remarkably constant. Of the 117 stays where origin was shared, PA is #1 every quarter (40 stays total), MD is #2 (28), then a long tail through NY (9), VA (8), NJ (5), DE (3). The drive market — PA + NJ + MD + DE + VA — accounts for ~76% of every month's known stays. You're running a regional Mid-Atlantic destination, which is exactly what Ocean City is. There's no demographic shift to interpret.

 


Tom Beerley
Hospitable Hero
  • April 29, 2026

@Petra Podobnik has there been talk of creating a dedicated prompt library for MCP users? Rather than posting about our efforts through conversational threads, I can envision a database of prompts, with explanation of what it is, what it does, what problem it solves… and sample outputs (sample text, sample graphics), categorized or tagged in different ways, maybe even with a voting mechanism for “I used this and found it useful” so that they could be ranked by popularity. It’s amazing how quickly things are getting built, and I’m sure I’m not alone in being hungry for more concrete examples of what we could be doing :-)


Tom Beerley
Hospitable Hero
  • April 30, 2026

@Petra Podobnik nevermind, I just saw another post where you said there will be an MCP Library. Perfect, and awesome!