Your Buyer Just Asked ChatGPT What to Offer. Now What?
I watched an agent lose a deal last month because their buyer went home after an open house and asked ChatGPT what they should offer.
ChatGPT pulled comps from the neighborhood (not wrong, technically), estimated a value range (reasonable, on paper), and suggested the buyer come in 8% under asking. The buyer, who had been ready to write a competitive offer that afternoon, suddenly wanted to "think about it." By the time they came back around, the property had six offers and was in contract.
The agent was furious. At the technology. At the buyer. At the idea that a chatbot had just tanked their deal.
I get the frustration. But the chatbot didn't tank anything. The conversation that should have happened before the open house never did.
Here's what I mean. In San Francisco, the listed price on a property is almost never the price it's going to sell for. You already know this if you've been here longer than five minutes. A house listed at $1.5 million in Noe Valley might sell for $1.8 million or $2.1 million depending on the week, the competition, the condition, and about forty other variables that don't show up in any data set a chatbot can access.
ChatGPT doesn't know that the seller's disclosure has a footnote about the foundation that's going to scare off half the competition. It doesn't know that the listing agent priced it low intentionally to generate a bidding war. It doesn't know that the unit two floors up in the same building just had a $180,000 special assessment. It doesn't know that the "view" disappears the moment the lot next door gets developed, which anyone paying attention to the planning commission agenda would already know is coming.
AI is excellent at pulling data. It is terrible at reading a room.
And that's not going to change. Not because the technology isn't improving (it is, fast, and I would know: I'm a Founding Partner at Maira, an AI company built specifically for real estate). It's because the value of a San Francisco home has never lived in the data alone. It lives in the interpretation. It lives in the context that only comes from standing inside the property, knowing the building, knowing the block, knowing the history of what sells and what sits and why.
The Inman piece that got everyone talking nailed the core point: AI isn't creating doubt in real estate transactions. It's surfacing doubt that was already there. When a buyer gets wobbly after asking ChatGPT, it's not because the AI gave a better answer. It's because the buyer never fully understood the basis for the decision they were about to make.
That's an agent problem, not a technology problem.
The agents who are going to thrive in this environment aren't the ones fighting AI. They're the ones who make it irrelevant by doing the work upfront. By the time my buyer walks into an open house, they already understand why the list price is where it is, what the realistic range looks like, what the competition might do, and how we're going to structure our approach. When they go home and ask ChatGPT, the answer they get back either confirms what we already discussed (which builds trust) or contradicts it (which gives us a conversation to have, not a crisis to manage).
I use AI every single day in my business. I helped build an AI company. And I'm telling you: the best thing AI has done for real estate is force agents to be clearer, earlier, and more honest with their clients. If a chatbot can destabilize your deal, the deal wasn't stable to begin with.
The technology isn't the threat. The silence before the open house is.
-
AI tools like ChatGPT can provide general market data, but they lack the hyperlocal knowledge needed for San Francisco's 70+ micro-markets. Use AI for research, but rely on a local agent for strategy.
