Simile AI Raises $100M to Simulate Populations, Not Just Chatbots
The Stanford team that coined "foundation model" and invented generative agents is betting that AI's real value isn't in assistants but in predicting what millions of people will do.
By Theo Ballard | February 12, 2026
Most AI companies want to build you a smarter assistant. Simile wants to build a million versions of you.
The San Francisco startup emerged from stealth today with $100 million in Series A funding to develop what its founders call "generative agents," AI systems that simulate realistic human behavior at population scale. The round was led by Index Ventures, with participation from Bain Capital Ventures, A* Capital, and Hanabi Capital. Notably, AI luminaries Fei-Fei Li and Andrej Karpathy also invested.
This isn't another chatbot company chasing the ChatGPT wave. Simile is making a different bet entirely: that the real commercial potential of large language models lies not in simulating one helpful personality, but in simulating thousands of distinct individuals whose behavior can be predicted and analyzed.
The Population Simulation Thesis
Here's the core idea. When you talk to ChatGPT or Claude, you're interacting with a single simulated personality, an assistant optimized to be helpful. But the underlying language models were trained on text from billions of people. In theory, they should be able to simulate any of them.
Karpathy, who contributed to the round as an angel investor, has called this direction "under-explored territory" in AI development. Why simulate one person when you could simulate a population?
Simile's answer is to create what they call "digital twins," AI agents modeled on real individuals that can predict how those people would respond to new products, policies, or experiences. Companies can then run scenarios, essentially asking "what if" questions about customer reactions or market responses, without the cost and delay of traditional market research.
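In broad strokes, a "what if" query over a pool of digital twins looks like polling many persona-conditioned models and tallying their answers. The sketch below is an assumption about the general shape of such a system, not Simile's actual API; the model call is replaced by a stub.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class DigitalTwin:
    participant_id: str
    profile: str  # condensed summary of the qualitative interview

def simulate_response(twin: DigitalTwin, question: str) -> str:
    # Placeholder: a real system would send a persona-conditioned
    # prompt like this to a language model and parse its answer.
    prompt = f"You are {twin.participant_id}. Background: {twin.profile}\nQ: {question}"
    return "no" if "fee" in question else "yes"  # deterministic stub

def run_scenario(twins: list[DigitalTwin], question: str) -> Counter:
    # Ask every twin the same hypothetical and tally the responses.
    return Counter(simulate_response(t, question) for t in twins)

twins = [
    DigitalTwin("P01", "risk-averse saver, values low fees"),
    DigitalTwin("P02", "frequent trader, insensitive to pricing"),
]
votes = run_scenario(twins, "Would you keep the account if we added a $5 monthly fee?")
print(votes)  # tallied "market reaction" across the simulated population
```

The point of the structure is the aggregation step: individual twins may be noisy, but the distribution of answers across a large simulated population is the product.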
"Simile expanded our qualitative research scope by 15x without losing depth," said Andrew Crocco, Head of User Research at Wealthfront, one of the company's early customers. "Speaking with their simulated customers helps us quickly identify areas of opportunity for ongoing research."
How It Works
The technical approach starts with interviews. Simile conducts two-hour qualitative interviews with real people, capturing not just demographics but decision-making patterns, values, and reasoning styles. This data, combined with behavioral information like transaction histories, is used to build agents that represent specific individuals.
The research backing this approach is substantial. In a November 2024 paper titled "Generative Agent Simulations of 1,000 People," the Simile team demonstrated that their agents replicate participants' responses on the General Social Survey 85% as accurately as the participants replicate their own answers when re-surveyed two weeks later. In other words, the agents come close to people's own consistency ceiling.
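That normalization is worth making concrete: the metric divides the agent's agreement with a participant by that participant's agreement with their own earlier answers. The numbers below are illustrative, not the paper's exact figures.

```python
def normalized_accuracy(agent_agreement: float, retest_agreement: float) -> float:
    """Agent accuracy expressed relative to the participant's own
    test-retest consistency, which is the ceiling for any predictor."""
    return agent_agreement / retest_agreement

# Illustrative: if an agent matches 68% of a participant's survey
# answers, and the participant matches their own earlier answers
# 80% of the time, the normalized score is 0.85.
print(normalized_accuracy(0.68, 0.80))  # → 0.85
```

The design choice reflects a real limit of the benchmark: no predictor can beat a person's agreement with themselves, so raw accuracy alone would understate how good the agents are.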
The founding team has serious credibility here. CEO Joon Sung Park, along with co-founders Percy Liang and Michael Bernstein, published the original research on generative agents at Stanford. They also introduced the term "foundation model" in a 2021 paper.
Why Now
The timing reflects a broader shift in how companies think about AI applications. For the past two years, the industry has focused almost exclusively on single-agent assistants, chatbots and copilots designed for one-on-one interaction. But a growing contingent of researchers and investors sees multi-agent simulation as the next frontier.
Index Ventures partner Nina Achadjian, who led the investment, compared the market pull to the early days of Wiz. "When the largest companies in the world, across industries and geographies, express the same pain, the market pull is undeniable," she wrote. "I haven't experienced this level of market pull since the early days of Wiz."
The customer list supports this claim. Simile counts Wealthfront, Telstra, CVS Health, Suntory Beverages & Food, and Banco Itaú among its clients. These aren't pilot projects. Srikant Narasimhan, VP of Enterprise Customer Experience at CVS Health, described using Simile to "fail safely in a controlled environment" before testing ideas with real customers.
The Moat Question
The obvious question: if frontier language models are already trained on text from diverse populations, what stops OpenAI, Anthropic, or Google from building this themselves?
Simile's edge appears to be in the fine-tuning data. While base models learn general patterns of human text, Simile's agents are trained on specific individuals through structured interviews and behavioral data. The company has essentially built a proprietary dataset of how real people think and make decisions, something that can't be scraped from the internet.
There's also the question of focus. For the big AI labs, population simulation is a research curiosity. For Simile, it's the entire business. That kind of concentrated effort tends to matter in enterprise software, where depth of integration and customization wins contracts.
Open Questions
Simile's vision raises issues that go beyond typical startup risks. Can AI agents truly capture the complexity of human decision-making, or will they reflect only the patterns easiest to model? The 85% figure is impressive, but it is a normalized score, measured against participants' own test-retest consistency rather than perfect prediction, and a meaningful share of individual answers still come out wrong. In high-stakes contexts like policy decisions or product launches, that margin could matter enormously.
There's also the question of emergent properties. What happens when thousands of simulated agents interact with each other? Karpathy has wondered publicly about this in discussions of agent networks, asking what unexpected behaviors might arise from "similes in loops." The answer might be fascinating, or concerning, or both.
And then there's bias. Simile claims their architecture reduces accuracy biases across racial and ideological groups compared to agents given only demographic descriptions. But AI systems trained on human data have a troubling history of encoding the prejudices in that data. The company will need to demonstrate ongoing vigilance here.
For now, the market seems willing to bet that these challenges are solvable. One hundred million dollars is serious money, even in AI. And the team that gave us both "foundation models" and "generative agents" has earned at least the benefit of the doubt.
The question is no longer whether AI can simulate human behavior. It's whether it can simulate human behavior well enough to trust.
Simile is based in San Francisco and New York. The company was founded in 2025 and has not disclosed its valuation.