Every technique I use to recruit respondents — accurately, quickly, and affordably — for ecomm brands, agencies, and Fortune 500s.
"How do you get people to take your surveys?" I get this question a lot. The short answer is: I pay them.
Even if you've never thought about recruitment, understanding it will save you thousands in fees and help you reach the right people.
This guide covers market surveys — not in-app user surveys, customer feedback tools, or post-purchase forms. A market survey measures populations using representative samples. It involves paying people to take your survey.
Part I covers how to define and size the population you want to sample. Part II covers the vendors you can use to recruit from that population.
Before you recruit anyone, you need to know exactly who you're trying to reach — and how many of them exist.
Use verifiable metrics — age, gender, geography — that you can cross-reference with Census.gov or another trusted data source. Generational labels (e.g., Gen Z, Millennials) don't count.
Your job: define which of these people you care about, using metrics you can verify.
Once you've defined the demographics, look up the actual population count. For 18–34 year olds in the U.S., that's approximately 75 million people — derived from the Census by combining age brackets. Knowing the base population size anchors everything downstream: your sample size, margin of error, and quota structure.
74 out of 333 dots are highlighted — proportional to the actual population share.
Gender splits roughly 50/50, but the age distribution within 18–34 is uneven — about 40% fall in the 18–24 bracket and 60% in the 25–34 bracket. Set quotas based on actual composition, not intuition.
The 25–34 cohort (darker dots) is 50% larger than the 18–24 cohort (lighter dots) — a meaningful difference when setting quotas.
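To make the proportional split concrete, here's a minimal Python sketch that turns the composition above (roughly 50/50 gender, 40/60 across the two age brackets) into per-cell quotas. The function name is mine, just for illustration:

```python
# Sketch: translate population composition into respondent quotas.
# Shares come from the text: ~50/50 gender, 40% aged 18-24, 60% aged 25-34.

def build_quotas(n: int) -> dict:
    """Split a total sample of n across gender x age cells proportionally."""
    gender_share = {"female": 0.5, "male": 0.5}
    age_share = {"18-24": 0.4, "25-34": 0.6}
    quotas = {}
    for gender, gshare in gender_share.items():
        for age, ashare in age_share.items():
            quotas[f"{gender} {age}"] = round(n * gshare * ashare)
    return quotas

print(build_quotas(1100))
# -> {'female 18-24': 220, 'female 25-34': 330, 'male 18-24': 220, 'male 25-34': 330}
```

Each cell mirrors its real share of the population, so the sample composition matches the Census composition by construction.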
For strategic work — defining markets, sizing segments — I typically use n=1,100 to achieve a ±3% margin of error. Each dot below represents one respondent, split proportionally across the same four groups as the population in Step 3.
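The ±3% figure for n=1,100 follows from the standard worst-case margin-of-error formula at 95% confidence (z = 1.96, p = 0.5). A quick check:

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Worst-case margin of error at 95% confidence for a simple random sample."""
    return z * math.sqrt(p * (1 - p) / n)

print(round(margin_of_error(1100) * 100, 2))  # -> 2.95, i.e. the ~+/-3% quoted above
```

Note the diminishing returns: quadrupling the sample only halves the margin of error, which is why n=1,100 is a common stopping point.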
These are the primary sources I use. If you're based outside the U.S., reference your own national census equivalent.
One of the most common mistakes I see: using age brackets that don't align with U.S. Census brackets. If your brackets don't match, you can't compare your survey data to the broader population. Use the ones on the right.
| ✕ Non-Census brackets | ✓ Census-aligned brackets |
|---|---|
| 15–25 | 18–24 |
| 26–35 | 25–34 |
| 36–50 | 35–44 |
| 51–65 | 45–54 |
| 65+ | 55–64 |
|  | 65+ |
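If you collect raw ages rather than brackets, a small helper can bucket them into the Census-aligned brackets above at analysis time. The function name here is mine, for illustration:

```python
# Sketch: bucket raw respondent ages into Census-aligned brackets so survey
# results can be compared directly against Census population data.

CENSUS_BRACKETS = [(18, 24), (25, 34), (35, 44), (45, 54), (55, 64)]

def census_bracket(age: int) -> str:
    for lo, hi in CENSUS_BRACKETS:
        if lo <= age <= hi:
            return f"{lo}-{hi}"
    return "65+" if age >= 65 else "under 18"

print(census_bracket(30))  # -> 25-34
```

Collecting exact age and bucketing later is more flexible than hard-coding brackets into the survey itself — you can always re-bucket; you can never un-bucket.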
Are you targeting a household decision or an individual one? For products used across a household — air purifiers, streaming services, cleaning supplies — think in households. For personal products — toothbrushes, skincare, supplements — think in individuals.
Scale reference: there are approximately 130 million households and 333 million individuals in the United States. That distinction will affect your incidence rate and your respondent fees.
If you need to target a niche segment — daily Vitamin C users, Tesla owners, fintech app adopters — Census data won't get you there directly. Search for existing industry reports and use them to estimate prevalence rates within the broader population. A few reliable sources:
- Step 01 Define verifiable demographics aligned with U.S. Census data — age brackets, gender, geography.
- Step 02 Find the actual size of the base population you're sampling from.
- Step 03 Set quotas based on the real composition of that population, not intuition.
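For the niche-segment case, the arithmetic is simple: multiply the Census base population by the prevalence rate you pull from an industry report. A sketch with a placeholder 12% rate (not a real figure — substitute whatever your report says):

```python
# Sketch: size a niche segment by applying an industry-report prevalence rate
# to a Census base population. The 12% rate is a placeholder.

base_population = 75_000_000   # U.S. 18-34 year olds, per the Census (Part I)
prevalence = 0.12              # hypothetical: share who fit the niche criteria

segment_size = base_population * prevalence
incidence_rate = prevalence    # rough IR if the panel mirrors the base population

print(f"{segment_size:,.0f} people, ~{incidence_rate:.0%} incidence")
```

That rough incidence rate is exactly the number vendors will ask for when quoting, so it's worth estimating before you request pricing.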
This list prioritizes self-service options — they're more cost-effective and give you more control. Self-serve isn't complicated, and the savings are real if you're willing to put in a bit of time.
You manage everything: pricing, respondent communication, quality control.
The Wild West of recruitment vendors. Filled with bots. The interface looks like early-2000s internet. And yet: unmatched speed and cost. You set the price, you communicate with respondents directly, you control everything.
I use Turk for pilot testing — typically 20 respondents before running the full study. Catching design problems before you spend real money on data collection is one of the highest-leverage things you can do in this process.
Originally built for academic research, Prolific focuses on representative general-population samples rather than specific shopper segments. It doesn't have the raw scale of CINT, but the interface is clean and the pricing is transparent.
One pricing quirk worth knowing: you pay for respondents who screen out (i.e., don't qualify for your specific criteria). Most platforms only charge for completers. Factor that in when you're estimating costs for surveys with tight screeners.
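To see how screen-out charges change the math, here's a rough estimate under assumed fees — the rates below are placeholders, not Prolific's actual pricing:

```python
# Sketch: total cost when a platform charges for screen-outs as well as
# completes. All fee values are hypothetical placeholders.

completes_needed = 400
incidence_rate = 0.25          # 1 in 4 entrants qualifies
fee_complete = 3.00            # per completed response (placeholder)
fee_screenout = 0.30           # per screened-out respondent (placeholder)

entrants = completes_needed / incidence_rate
screenouts = entrants - completes_needed

total = completes_needed * fee_complete + screenouts * fee_screenout
print(f"{entrants:.0f} entrants, {screenouts:.0f} screen-outs, ${total:,.2f}")
```

With a tight screener, screen-outs outnumber completes several times over, so even a small per-screen-out fee can add meaningfully to the quote.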
Self-serve portal to configure your sample, then invoiced through an analyst. Good middle ground if you want control without full DIY.
My default recommendation for most projects. CINT claims access to 335M+ respondents — that number is inflated, but the scale is real. The platform integrates Census data directly, making it straightforward to set age and gender quotas. Think of it as an Airbnb for panel respondents: CINT aggregates panels from a wide range of partners rather than owning them outright.
Data quality is the main drawback. Include open-text questions in every survey you field through CINT — it's the most reliable way to identify and remove low-quality respondents before you analyze anything.
A direct CINT competitor with competitive pricing and comparable reach. The output is acceptable. The interface is not — you'll end up consulting several PDFs to figure out how to link with your survey platform, and the reconciliation process is a headache.
Worth considering if CINT quotes run high, but go in with low expectations for the setup experience.
Request a quote or book a call. The vendor coordinates pricing and supplier matching. Expect higher costs and more hand-holding — which can be worth it if you don't want to manage the process yourself. I've used all of these and would recommend any of them. Reach out to me directly if you need an introduction.
Subscription or seat-based models. The most expensive category — but worth it for teams doing ongoing brand tracking or continuous consumer research.
Two variables drive most of the cost: Incidence Rate (IR) — how common your target respondent is relative to the general panel — and Length of Interview (LOI) — how long the survey takes to complete.
A high IR and short LOI means cheap, fast data. A niche segment with a 30-minute survey will cost significantly more per complete. Know your IR before you request a quote — vendors will ask for it, and a ballpark estimate helps you avoid sticker shock.
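A crude way to reason about it: model cost per complete (CPI) as a screening component that scales with 1/IR plus a time component that scales with LOI. The rates below are illustrative, not any vendor's price sheet:

```python
# Rough cost model: CPI rises as IR falls and as LOI grows.
# base_rate and the per-minute rate are illustrative placeholders.

def estimate_cpi(ir: float, loi_minutes: float, base_rate: float = 1.0) -> float:
    """Crude CPI sketch: screening cost grows with 1/IR, time cost with LOI."""
    screening_cost = base_rate / ir      # more screen-outs as IR drops
    time_cost = 0.15 * loi_minutes       # placeholder per-minute rate
    return screening_cost + time_cost

print(estimate_cpi(0.8, 5))    # common respondent, short survey -> cheap
print(estimate_cpi(0.05, 30))  # niche segment, long survey -> an order of magnitude more
```

Even with made-up rates, the shape of the curve is the point: halving your IR roughly doubles the screening portion of your cost.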
No matter which vendor you use, include at least one open-text question. Bots can't write coherent sentences. Respondents rushing through for the payment reward will write gibberish or copy-paste from nearby fields. A single open-text question lets you flag and remove low-quality responses before they contaminate your analysis.
Look for: duplicate responses, single characters, irrelevant content, or responses that have nothing to do with the question asked.
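Those checks are easy to automate. A minimal sketch that flags the patterns listed above — the function name and thresholds are mine, not a platform feature:

```python
# Sketch: flag low-quality open-text answers -- duplicates, single characters,
# and responses too short to be coherent. Thresholds are placeholders.

from collections import Counter

def flag_low_quality(answers: list[str], min_words: int = 3) -> list[int]:
    """Return indices of responses worth manual review or removal."""
    counts = Counter(a.strip().lower() for a in answers)
    flagged = []
    for i, a in enumerate(answers):
        text = a.strip()
        if len(text) <= 1:                    # single characters / empty
            flagged.append(i)
        elif counts[text.lower()] > 1:        # duplicate responses
            flagged.append(i)
        elif len(text.split()) < min_words:   # too short to be coherent
            flagged.append(i)
    return flagged

answers = ["good", "I use it every morning before work", "x", "good"]
print(flag_low_quality(answers))  # -> [0, 2, 3]
```

Treat the flags as candidates for manual review, not automatic deletion — a terse but genuine answer shouldn't cost a real respondent their payment.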
Some platforms charge per response rather than per respondent — which sounds reasonable until you do the math. Here's a real example from a client decision:
| | Option A — Per-Response Subscription | Option B — Alchemer + CINT |
|---|---|---|
| Platform cost | $0.50 per response (annual cap) | $1,020/year (Alchemer) |
| Respondent fee | Included | $1.36 per respondent × 1,600 ≈ $2,176 |
| Responses | 16,000 (1,600 people × 10 questions) | 1,600 respondents |
| Total cost | $8,000 | ≈ $3,196 |
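The arithmetic behind the comparison: per-response pricing multiplies cost by the question count, while per-respondent pricing doesn't (exact totals shift slightly with how the panel fee is rounded):

```python
# Why per-response pricing loses: every question multiplies the bill.

respondents, questions = 1_600, 10

# Option A: pay per response (every question x every respondent)
option_a = respondents * questions * 0.50

# Option B: flat platform fee plus a per-respondent panel fee
option_b = 1_020 + respondents * 1.36

print(f"A: ${option_a:,.0f}  B: ${option_b:,.0f}")  # -> A: $8,000  B: $3,196
```

The gap widens with every question you add — a 20-question survey doubles Option A's cost and leaves Option B's unchanged.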
The per-response model sounds flexible. It isn't. The economics work against you almost every time.
Vendors love to advertise exclusive, vetted respondent databases. Sometimes those panels are genuinely high-quality. More often, "proprietary panel" means a panel they own but supplement heavily with external sample partners — which is the same aggregated pool everyone else is drawing from.
Ask vendors directly: what percentage of a typical sample comes from your proprietary panel vs. external partners? The answer tells you a lot.
- Do Use demographic data to define your sample before you touch a vendor portal. Know your IR.
- Do Choose the lowest-cost vendor that meets your quality bar. Start self-serve; escalate to assisted only when needed.
- Do Always include an open-text question. It's your quality filter.
- Don't Pay pass-through markups for built-in panel tools from survey platforms.
- Don't Take "proprietary panel" claims at face value.

