I asked 232 Americans if they thought AI was "mostly good" or "mostly bad"
I wanted to know whether people think AI tools such as Claude, ChatGPT, and Gemini will "mostly help" or "mostly hurt" society. So last week I surveyed 232 Americans age 18+.
That's a big question. And big questions are hard to answer cold. So before answering, respondents read a set of statements for and against AI.
Here are the statements.
Statement #1: Opponents worry that relying on AI could weaken our ability to think and reason — that outsourcing writing, research, and problem-solving will make us intellectually lazy.
Statement #2: Opponents also worry about the economic impact. AI could automate many kinds of work, leading to widespread job loss and greater inequality.
Statement #3: Proponents say AI could accelerate breakthroughs in medicine, helping scientists discover treatments for diseases like cancer or Alzheimer's.
Statement #4: Proponents also say AI could reshape education by giving every student access to a personalized tutor — something once available only to those who could afford it.
I recruited 232 respondents through RepData, a survey vendor that draws from a large panel of Americans 18 and older. The sample is broadly representative — 50% female, 50% male — though because I run these surveys for fun rather than as formal research, I didn't enforce strict age or gender quotas.
You can take the survey here.
After reading the four statements, respondents answered: "Given these perspectives, where do you stand on AI today?"
Here are the results.
Percent of people who think AI will mostly help vs. mostly harm society.
I also asked people about the strength of their belief, then broke down each group by whether they feel strongly or not so strongly.
In the next part of the survey, I shifted the focus from society to individual lives. I wanted to know whether people think these tools will be "mostly good" or "mostly bad" for them personally.
Again, I didn't want to ask a "big" question cold. So I had respondents read four short scenarios first — two where AI helped someone and two where it hurt someone.
Here's what they read.
The case against AI: After a semester of using AI to write her essays, a college student sat down to write a cover letter on her own and couldn't organize her thoughts. "It was like a muscle I'd stopped using," she said.
The case against AI: A freelance graphic designer lost three long-term clients in one year after they all switched to AI-generated designs. He's now retraining in a completely different field at age 41.
The case for AI: An AI system analyzing genetic data flagged a treatment pathway that a woman's doctors had missed for years. Within months of starting the new treatment, her symptoms improved dramatically.
The case for AI: A student in a rural town with no AP courses used a free AI tutoring tool to learn calculus, prep for the SAT, and revise his college essays. He became the first person in his family to get into a four-year university.
After they read the four scenarios, I asked, "Now thinking about your own life, do you believe AI will be mostly good or mostly bad for you personally?"
Here are the results.
Percent of people who think AI will be mostly good vs. mostly bad for them personally.
Again, I asked people if they feel strongly or not so strongly.
Next, I wanted to see if there was a relationship between the first two questions.
I placed all 232 respondents on a simple two-by-two grid based on their answers.
The grid plotted what they think AI will do to society (Question 1), and what they think it will do to them personally (Question 2).
Here's what it looks like.
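For readers who want to reproduce this kind of grid, here's a minimal sketch of the tabulation. The responses below are made-up examples, not the actual survey data — each respondent is a pair of answers: their societal view ("help" or "harm") and their personal view ("good" or "bad").

```python
from collections import Counter

# Hypothetical example responses: (societal view, personal view).
respondents = [
    ("help", "good"), ("help", "good"), ("harm", "bad"),
    ("help", "bad"),  ("harm", "good"), ("harm", "bad"),
]

# Count how many respondents fall into each quadrant of the 2x2 grid.
grid = Counter(respondents)

for (society, personal), count in sorted(grid.items()):
    print(f"{society} society / {personal} for me: {count}")
```

With real data, each quadrant count divided by the total gives the share of respondents in that cell.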
I also asked respondents how often they use AI: Daily, Weekly, Monthly, or Rarely / Never.
People who use AI daily or weekly were far more likely to say AI will "mostly help" society and be "mostly good" for them personally.
People who rarely / never use AI were far more likely to say AI will "mostly harm" society and be "mostly bad" for them personally.
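The usage split above can be computed the same way. This sketch uses toy numbers (again, not the survey's actual data) to show how to get the share of "mostly help" answers within each usage group.

```python
from collections import defaultdict

# Hypothetical example responses: (usage frequency, societal view).
responses = [
    ("Daily", "help"), ("Daily", "help"), ("Weekly", "help"),
    ("Weekly", "harm"), ("Rarely/Never", "harm"), ("Rarely/Never", "harm"),
]

# Group societal views by how often each respondent uses AI.
by_usage = defaultdict(list)
for usage, view in responses:
    by_usage[usage].append(view)

# Share of "mostly help" answers within each usage group.
shares = {u: views.count("help") / len(views) for u, views in by_usage.items()}
```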
There's one last part to the survey.
After each question, I asked respondents to explain their thinking. The grid tells me where people landed, but I wanted to know why they landed there.
After the first question about AI impacting society I asked, "Can you explain your thinking? I'm curious what was on your mind as you answered the last question."
After the second about AI impacting them personally I asked, "As you answered the last question, what part of your life, if any, were you thinking about?"
I grouped respondents by their quadrant in the grid, then pulled out recurring themes from their responses — the fears, the hopes, the reasoning people used to justify their positions.
Here's what I found.
People repeated the same tradeoffs they had just read.
In both Helps Society + Good for Me and Harms Society + Bad for Me, respondents mentioned things like medical breakthroughs and economic inequality.
The off-diagonal quadrants – Helps Society + Bad for Me and Harms Society + Good for Me – are the most interesting.
These are the people who see a conflict between what AI does to society and what it does to them personally.
Helps Society + Bad for Me
People here worry about keeping up: "use it or lose it," "good for the young," "not for me."
They're pessimistic about themselves.
Harms Society + Good for Me
Many said some version of: "I'll use it responsibly — other people won't."
They're pessimistic about other people.
Retired respondents cluster in the most pessimistic quadrant.
Half of respondents who said they're retired fell into Harms Society + Bad for Me.
Familiarity seems to matter.
As we saw in the 2×2 grid, many people in this group said they use AI rarely or never.
Tools vs. forces.
Frequent users see a tool. People who rarely use it see it as a force.
This survey contained six questions. The median completion time was three minutes.
Most of that time wasn't spent answering questions — it was spent reading.
The respondents' written answers were thoughtful.
Not because they were trying to be polite or impress anyone, but because they had something to react to. By the time they reached the written questions, they'd already read several perspectives and had a few ideas in mind.
When people are just trying to get through a survey, their answers tend to be short and generic. These weren't.
I also asked respondents whether they were employed and what industry they worked in. The idea was to see whether views on AI differed across types of work. But with a sample of 232 people, the industry groups were too small to analyze responsibly.
One clear pattern did emerge, though: many respondents who identified as retired selected "Rarely or never" when asked how often they use AI tools — a reminder that age almost certainly plays a large role in how people experience and evaluate this technology.
If you're working on a survey and want a second set of eyes — on the questions, the structure, or whether the format is doing enough work — that's what a Survey Roast is for.
You send me a draft. I send back a 15-minute Loom video with tips and recommendations.
No jargon. Just honest feedback.
You can see an example here.
Cheers,
Sam

