01

About_Us

Preference Model is building the next generation of training data to power the future of AI.

02

Our_Thesis

Today's models are powerful but fail to reach their potential across diverse use cases because so many of the tasks that we want to use these models for are outside of their training distribution. Preference Model creates reinforcement learning environments that encapsulate real-world use cases, enabling AI systems to practice, adapt, and learn from feedback grounded in reality.
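To make the practice-and-feedback loop concrete, here is a minimal, entirely hypothetical sketch of what a reinforcement learning environment wrapped around a real-world task might look like. The task, class name, and reward rule below are illustrative assumptions, not Preference Model's actual API.

```python
import random

class ToyTriageEnv:
    """Hypothetical toy environment: the agent labels support tickets as
    'urgent' or 'routine' and receives reward feedback, mimicking an RL
    environment grounded in a real-world use case. Illustration only."""

    def __init__(self, seed=0):
        self.rng = random.Random(seed)

    def reset(self):
        # Observation: a ticket whose true label is hidden from the agent.
        self._label = self.rng.choice(["urgent", "routine"])
        keyword = "outage" if self._label == "urgent" else "question"
        return f"ticket: customer reports {keyword}"

    def step(self, action):
        # Reward comes from the task outcome, not a static labeled dataset.
        reward = 1.0 if action == self._label else -1.0
        return reward, True  # (reward, episode_done)

env = ToyTriageEnv()
obs = env.reset()
action = "urgent" if "outage" in obs else "routine"  # a trivial policy
reward, done = env.step(action)
```

A model dropped into such an environment can attempt the task repeatedly, receive graded feedback, and adapt — the loop the thesis above describes, reduced to a few lines.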

03

Our_Investors

We raised $16M led by a16z, with participation from SignalFire, South Park Commons, Scale Angel Group, and amazing angels like Dr. Fei-Fei Li, Ian Goodfellow, and Julian Schrittwieser.

04

Our_Research

Path attribution methods are a gradient-based approach to explaining deep models. These methods require choosing a hyperparameter known as the baseline input. What does this hyperparameter mean, and how important is it? In this article, we investigate these questions using image classification networks as a case study. We discuss several different ways to choose a baseline input and the assumptions implicit in each. Although we focus here on path attribution methods, our discussion of baselines is closely connected with the concept of missingness in the feature space, a concept that is critical to interpretability research.
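As a minimal sketch of the method the abstract describes: Integrated Gradients, the canonical path attribution method, averages the model's gradients along a straight-line path from a chosen baseline to the input, then scales by the input-baseline difference. The toy quadratic "model" below is an assumption for illustration; it lets us check the completeness property (attributions sum to the difference in model output between input and baseline).

```python
import numpy as np

def integrated_gradients(grad_f, x, baseline, steps=256):
    """Riemann-sum (midpoint rule) sketch of Integrated Gradients along
    the straight-line path from `baseline` to `x`."""
    alphas = (np.arange(steps) + 0.5) / steps          # midpoint alphas in (0, 1)
    path = baseline + alphas[:, None] * (x - baseline)  # (steps, d) path points
    avg_grad = grad_f(path).mean(axis=0)                # average gradient on path
    return (x - baseline) * avg_grad                    # per-feature attribution

# Toy differentiable "model": f(x) = (w . x)**2, with its analytic gradient.
w = np.array([0.5, -1.0, 2.0])
f = lambda x: (x @ w) ** 2
grad_f = lambda X: 2.0 * (X @ w)[:, None] * w  # gradient at each path point

x = np.array([1.0, 2.0, 3.0])
zero_baseline = np.zeros(3)  # the all-zeros baseline, one common choice
attr = integrated_gradients(grad_f, x, zero_baseline)

# Completeness: attributions sum to f(x) - f(baseline).
print(np.allclose(attr.sum(), f(x) - f(zero_baseline), atol=1e-3))  # → True
```

The zero baseline used here is exactly the kind of choice the article examines: it implicitly treats zero-valued features as "missing," which is a modeling assumption, not a neutral default.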


05

Our_Team

Our founding team previously built data and infrastructure systems at Anthropic, Stripe, and Datology. We’re partnering with frontier AI labs to build capabilities for the next generation of LLMs.

Jennifer_Zhou

CEO, Founder
Previously Anthropic, Stripe

Ning_Cao

Cofounder
Previously Datology, Moontide Capital

Tom_Macie

Member of Technical Staff
Previously Anthropic, Stripe

Luke_Johnston

Member of Technical Staff
Previously professional poker, Hive

06

Get_In_Touch

We are a small team committed to making a big impact.

07

Use_Cases