
Y Combinator Startup Podcast · July 29, 2025
Scaling and the Road to Human-Level AI | Anthropic Co-founder Jared Kaplan
Highlights from the Episode
Jared Kaplan · Anthropic co-founder
00:00:00 - 00:15:33
Future of AI: human-level tasks and organizational knowledge →
We can speculate about the future of AI. In AI 2027, people did just that. This suggests that over the next few years, AI models may perform tasks taking not just minutes or hours, but days, weeks, months, or even years. Eventually, we envision AI models, perhaps millions working together, capable of performing the work of entire human organizations or even the entire scientific community.
Jared Kaplan · Anthropic co-founder
00:00:00 - 00:15:33
Key ingredients for human-level AI →
A crucial ingredient is relevant organizational knowledge. We need to train AI models that don't start from scratch, but can learn to operate within companies, organizations, and governments. They should possess the same context as someone who has worked there for years. Models need to work with knowledge, and they also require memory. I distinguish memory from knowledge in that, for long tasks, you must track your own progress: you need to build relevant memories and draw on them as you go.
Jared Kaplan · Anthropic co-founder
00:00:00 - 00:15:33
Scaling laws predict AI progress →
We were truly amazed by these precise trends, comparable to those found in physics or astronomy. This gave us strong conviction that AI would predictably continue to advance. As early as 2019, we observed these trends across many orders of magnitude in compute, dataset size, and neural network size. When something holds true across such a wide range, you expect it to continue for a long time. This has been a fundamental principle underlying AI improvements.
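The trends described here are power laws, which appear as straight lines on a log-log plot; that is what makes them checkable across many orders of magnitude. A minimal sketch of the idea, using illustrative constants that are assumptions for this example (loosely in the range of published language-model scaling fits, not values quoted in the episode):

```python
import math

# Illustrative scaling-law sketch (assumed constants, not exact fits):
# loss falls as a power law in parameter count N, i.e. L(N) = (N_c / N) ** alpha
alpha, N_c = 0.076, 8.8e13

def predicted_loss(n_params: float) -> float:
    """Power-law prediction of loss for a model with n_params parameters."""
    return (N_c / n_params) ** alpha

# A power law is a straight line in log-log space, so a trend observed
# across many orders of magnitude invites confident extrapolation.
n_small, n_large = 1e6, 1e12  # six orders of magnitude apart
slope = (math.log(predicted_loss(n_large)) - math.log(predicted_loss(n_small))) \
        / (math.log(n_large) - math.log(n_small))
print(round(-slope, 3))  # the log-log slope recovers the exponent alpha
```

Because the model is an exact power law, the measured log-log slope equals -alpha regardless of which two model sizes are compared.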
Jared Kaplan · Anthropic co-founder
00:00:00 - 00:15:33
AI capabilities: flexibility and task horizon →
I view AI capabilities along two axes. The less interesting, yet still very important, axis is AI's flexibility—its ability to meet us where we are. For example, AlphaGo would score very low on this axis. While AlphaGo was incredibly intelligent and surpassed any human Go player, it could only operate within the confines of a Go board. However, since the advent of large language models, we've made steady progress. We now have AI that can handle numerous modalities, similar to human capabilities.
Jared Kaplan · Anthropic co-founder
00:00:00 - 00:15:33
Building products at the AI frontier →
I always recommend a few things. First, it's a good idea to build things that don't quite work yet. This is always a good approach; we should always be ambitious. Specifically, AI models are improving very quickly, and this trend will continue. This means if you build a product that doesn't quite work because, for example, Claude 4 isn't advanced enough, you can expect Claude 5 to make that product functional and valuable. Therefore, I always suggest experimenting at the boundaries of AI's capabilities, as those boundaries are rapidly expanding.
Jared Kaplan · Anthropic co-founder
00:18:35 - 00:19:37
AI as a manager and human-AI collaboration →
As people work with AI, skeptics correctly point out that AI makes many mistakes. While it can produce brilliant, surprising results, it also makes basic errors. A key difference between AI and human intelligence is that humans can often judge whether something was done correctly, even if they can't do it themselves. For AI, the judgment and generative capabilities are much closer. This suggests that a major role for people interacting with AI is to act as managers, sanity-checking its work.
Jared Kaplan · Anthropic co-founder
00:21:38 - 00:24:17
Leveraging AI's breadth of knowledge →
In many scientific fields, especially biology, psychology, or history, success often hinges on synthesizing vast amounts of information from diverse areas. AI models, during their pre-training phase, absorb a significant portion of human knowledge. I believe there's great potential in leveraging this AI capability, as it possesses far more knowledge than any single human expert. This allows us to gain insights by integrating numerous areas of expertise, for example, across different biological research domains.