
No Priors: Artificial Intelligence | Technology | Startups · December 5, 2025

Scaling Legal AI and Building Next-Generation Law Firms with Harvey Co-Founder and President Gabe Pereyra

Highlights from the Episode

Gabe Pereyra, Harvey Co-Founder and President
00:17:41 - 00:19:45
Challenges in evaluating AI for legal tasks
This is one of the biggest problems. We had early conversations about the right evaluation structure. The hardest aspect of legal work is that most tasks involve long-form text generation. Some legal tasks are highly verifiable, such as finding change-of-control provisions in a data room, which lends itself to a traditional dataset, but others are not: for a generated merger agreement, it's difficult to give a simple binary evaluation of good or bad. This has been a significant research problem, both for the labs we collaborate with and for us internally. The open question remains: how do you build that reward function?
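One rough way to picture the two regimes Gabe describes: a verifiable task can be scored against a labeled dataset, while long-form drafting needs an approximate reward, such as a rubric applied by a model acting as judge. The sketch below is illustrative only, not Harvey's actual evaluation stack; `call_model`, the rubric, and the scoring helpers are hypothetical placeholders.

```python
# Hypothetical sketch: two evaluation styles for legal AI outputs.
# `call_model` is a stand-in for whatever LLM client is in use; it is not a real API.
from typing import List

def call_model(prompt: str) -> str:
    """Placeholder for an LLM call."""
    raise NotImplementedError

# 1. Verifiable task: did the model find every change-of-control provision?
def score_provision_extraction(predicted: List[str], gold: List[str]) -> float:
    """Exact-match recall against a labeled data room -- a traditional dataset works here."""
    found = sum(1 for clause in gold if clause in predicted)
    return found / len(gold) if gold else 1.0

# 2. Non-verifiable task: grade a drafted merger agreement with a rubric-based judge.
RUBRIC = """Score the draft from 1-5 on each criterion:
- Are the required sections present (reps and warranties, indemnification, closing conditions)?
- Is the drafting internally consistent (defined terms, cross-references)?
- Does it reflect the instructions in the deal summary?
Return one line per criterion: '<criterion>: <score>'."""

def score_merger_draft(deal_summary: str, draft: str) -> float:
    """Approximate 'reward' for long-form generation: an LLM judge applies a rubric."""
    judgment = call_model(f"{RUBRIC}\n\nDeal summary:\n{deal_summary}\n\nDraft:\n{draft}")
    scores = [int(line.split(":")[-1]) for line in judgment.splitlines() if ":" in line]
    return sum(scores) / (5 * len(scores)) if scores else 0.0
```

The open question in the quote is essentially whether a judge-style score like the second function can be made reliable enough to serve as a reward signal.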
Gabe Pereyra, Harvey Co-Founder and President
00:00:43 - 00:02:03
Evolving AI for legal workflows
Yes, that's how the product began. When we first raised capital from OpenAI, we gained access to GPT-4. The leap from GPT-3 to GPT-4 was so significant that our initial thought was to simply provide the model to lawyers and let them experiment with it. The legal industry is incredibly text-heavy, so interacting with these models offered immense value. However, as soon as lawyers started using it, we encountered the models' limitations: they hallucinate and aren't connected to much of our context. Consequently, the first two years of the company focused on building an integrated development environment (IDE) for lawyers around these models, connecting them to all the necessary context for individual lawyers to be productive.
Gabe Pereyra, Harvey Co-Founder and President
00:06:46 - 00:08:28
Agentic AI for legal tasks
We're starting to implement this now. At DeepMind, much of my RL research focused on this. When we first accessed GPT-4, we had a strong intuition that we'd be able to string together many model calls or eventually create reasoning models where the entire agent is differentiable. On the very first day we had access to GPT-4, Winston spent 14 hours in his room, redoing many of his associate tasks. His work essentially involved a hacky, agentic process: he'd look up case law, summarize it, and then use that summary to draft documents. Witnessing this gave us early insight into the future direction of this technology.
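The "hacky, agentic process" described here amounts to chaining model calls by hand: research, summarize, then draft. A minimal sketch of that loop is below; `call_model` and `search_case_law` are hypothetical stand-ins, not a real API.

```python
# Hypothetical sketch of chaining model calls for an associate-style task.
from typing import List

def call_model(prompt: str) -> str:
    raise NotImplementedError  # stand-in for an LLM call

def search_case_law(query: str) -> List[str]:
    raise NotImplementedError  # stand-in for a legal research tool

def draft_memo(issue: str) -> str:
    # Step 1: look up relevant case law for the issue.
    cases = search_case_law(issue)
    # Step 2: summarize each case with a separate model call.
    summaries = [call_model(f"Summarize the holding and key facts:\n{c}") for c in cases]
    # Step 3: feed the summaries back in to draft the memo.
    context = "\n\n".join(summaries)
    return call_model(f"Using these case summaries, draft a memo on: {issue}\n\n{context}")
```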
Gabe Pereyra, Harvey Co-Founder and President
00:10:38 - 00:12:26
Restructuring law firms with AI
Another conversation we're having is how to generally restructure firms. We have some intuitions, but much depends on the firm, region, size, specialty, and client types. A significant challenge with law firms is that they are collections of various practice areas. Firms specializing in litigation differ from those focusing on large transactions, which also differ from those handling mid-sized transactions. Typically, large firms manage a combination of these. Therefore, we spend a lot of time analyzing practice area by practice area. For example, we might sit with a fund formation group and their private equity clients to consider workflows, staffing, and pricing.
Gabe Pereyra, Harvey Co-Founder and President
00:14:16 - 00:16:18
Expert legal reasoning as an RL environment
He is incredibly good at understanding the entire legal entity, much like a senior engineer grasps a complex system. For instance, when they undertook the largest debt offering ever, they had to invent a new financial instrument. His expertise lies in structuring transactions, like knowing how to raise a specific amount of money for a particular part of a deal. His value isn't just in relationships, but in his technical understanding of how to architect these complex financial structures, similar to architecting large software projects. This translates to an RL environment where public models often lack the process of analyzing an entity and determining the correct structure for a merger, given all the context. This reasoning process is crucial.
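One way to read the RL-environment framing: each episode presents an entity and a transaction objective, the agent proposes a structure, and the reward reflects how a senior expert would grade the proposal. The sketch below is a loose illustration under those assumptions; the class names and rubric-based reward are hypothetical, not a description of any actual environment.

```python
# Hypothetical sketch of deal structuring as an RL environment.
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class DealTask:
    entity_profile: str   # description of the legal entity and its constraints
    objective: str        # e.g. "raise a given amount of debt for this subsidiary"
    expert_rubric: str    # criteria a senior partner would apply

def grade_against_rubric(structure: str, rubric: str) -> float:
    raise NotImplementedError  # e.g. an LLM judge or expert review, as in the earlier sketch

class DealStructuringEnv:
    def __init__(self, tasks: List[DealTask]):
        self.tasks = tasks
        self.current: Optional[DealTask] = None

    def reset(self) -> str:
        """Start an episode: return the context the agent must reason over."""
        self.current = self.tasks.pop()
        return f"{self.current.entity_profile}\n\nObjective: {self.current.objective}"

    def step(self, proposed_structure: str) -> Tuple[float, bool]:
        """Score the proposed transaction structure (0-1) and end the episode."""
        reward = grade_against_rubric(proposed_structure, self.current.expert_rubric)
        return reward, True
```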
Gabe Pereyra, Harvey Co-Founder and President
00:20:12 - 00:22:28
Customization and data integration for enterprise AI
Our operating model isn't a full Palantir approach where we build custom software from scratch. It's closer to Sierra's agent engineering program. Initially, we excelled at building a horizontal platform with minimal customer-specific customization. For instance, with legal clients, we developed features like a workflow builder, allowing customers to customize the product themselves. We only undertook significant customization for very large clients, such as PwC. However, we're now frequently engaging with law firms who want to leverage their data to build models or agents. This often requires us to integrate directly into their environment to connect all their data.
Gabe Pereyra, Harvey Co-Founder and President
00:25:39 - 00:27:24
Empowering law firms with AI, not competing
For us, the best outcome is helping every law firm become AI-first, rather than building one ourselves. The real problem we're trying to solve is making every law firm more profitable. This involves improving how they work with clients and enabling clients to receive better, faster, and cheaper legal services. Solving that equation at scale presents a much bigger opportunity than building a single law firm, which would create conflicts and limit scalability. While we've been asked about this, it's not our company's focus.
Gabe Pereyra, Harvey Co-Founder and President
00:41:20 - 00:42:33
Transitioning from individual to organizational AI productivity
It's crazy. The interesting thing will be the transition from individually smart models to their use in massive organizations. This continues the trend seen over the past 20 years with SaaS, where software enabled significant growth. For example, law firms have grown tenfold since the advent of computers and the internet. I believe this will happen again, though perhaps differently than in the last two decades. Many still focus on copilots and individual productivity. However, we are increasingly considering organizational productivity and building scalable systems. For our internal engineering team, an interesting question arises: making someone program 20% faster doesn't necessarily mean building a product 20% faster.
