Kate Eichhorn
Testing and Optimizing an AI-based Lead-Generation System
Role: UX Researcher
Client: Roopler, Los Angeles
Focus: UX Research
Timeline: 2017-2018

Background

The client, a start-up led by two former executives from one of the country's largest real estate listing companies and a top-selling real estate broker, was building an AI-powered lead-generation system for real estate brokers, teams, and agents. The platform focused on delivering exclusive, pre-qualified leads and sustaining engagement until prospective clients were ready to meet with an agent or broker to buy or sell a property. Despite rapid adoption in certain markets, client feedback revealed variability in agents' trust of the automated system and inconsistency in when and how automated conversations were handed off to agents and brokers for follow-up.

Objectives
​​
- Improve usability and agents' trust in the platform.
- Ensure AI interactions with potential leads are consistent, professional, and believable.
- Improve lead qualification so that only market-ready leads are passed on to agents and brokers.
- Refine the criteria for determining when leads should be handed off to agents and brokers (i.e., when to initiate a phone call or schedule an in-person meeting).

Strategy

Phase 1: Market & Cultural Research
- Conducted exploratory interviews with real estate agents and brokers to understand lead-generation workflows, follow-up practices, and decision-making under time pressure.
- Mapped how agents prioritize leads within existing CRM systems and where automated tools either supported or disrupted their routines.
- Benchmarked competing lead-generation platforms to identify common patterns, expectations, and gaps in agent-facing experiences.

Phase 2: Messaging, Trust & Human Handoff Design
- Led research to define the language and framing needed to make AI-generated leads legible, credible, and actionable for agents.
- Identified clear thresholds for when automated engagement should transition to a human agent, based on user expectations, trust signals, and risk tolerance.
- Conducted usability testing with both active users and prospective clients to evaluate how AI messaging, summaries, and handoff cues were interpreted in real-world scenarios.
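The handoff thresholds above were research findings, not code, but a rule of that shape might be sketched as follows. Every signal name and cutoff value here is a hypothetical illustration, not the client's actual criteria.

```python
from dataclasses import dataclass


@dataclass
class LeadSignals:
    """Hypothetical engagement signals for one automated conversation."""
    intent_score: float       # estimated readiness to transact, 0.0-1.0
    asked_for_human: bool     # lead explicitly requested an agent
    messages_exchanged: int   # length of the automated thread so far
    topic_is_sensitive: bool  # e.g., pricing negotiation or legal questions


def should_hand_off(s: LeadSignals) -> bool:
    """Return True when the conversation should move to a human agent.

    Illustrative rule: an explicit request or a sensitive topic always
    hands off; otherwise hand off once estimated intent is high enough
    that further automation risks eroding trust.
    """
    if s.asked_for_human or s.topic_is_sensitive:
        return True
    if s.intent_score >= 0.8:
        return True
    # Long threads with middling intent also go to a human for review.
    return s.messages_exchanged > 20 and s.intent_score >= 0.5


# Example: a high-intent lead triggers a handoff.
print(should_hand_off(LeadSignals(0.85, False, 5, False)))  # True
```

In practice the research located these cutoffs in agents' expectations and risk tolerance rather than fixed numbers; the value of the exercise was making the rule explicit enough to test.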
​
Phase 3: Stakeholder Engagement & Feedback Loops
- Established lightweight feedback loops (surveys, usage signals, onboarding feedback) to support ongoing iteration of the product post-launch.

Outcomes and Impact

- Defined clear human-handoff thresholds, improving agent confidence in when and how to step into automated conversations.
- Established a shared cross-functional language around AI trust, transparency, and responsibility across product, engineering, and sales.
- Delivered validated insights on how tone, timing, and explanation affect trust in AI-assisted sales workflows.