Emmett Shear interview with a16z: Nonlinear Function
Created: December 05, 2025
Modified: December 05, 2025

Emmett Shear interview with a16z

This page is from my personal notes, and has not been specifically reviewed for public consumption. It might be incomplete, wrong, outdated, or stupid. Caveat lector.

Emmett Shear on Building AI That Actually Cares: Beyond Control and Steering Nov 2025 https://www.youtube.com/watch?v=Ua8nPJ1_yk8

(YouTube automated transcript cleaned up by Claude 4.5 Sonnet)

Erik Torenberg: Emmett Shear, welcome to the podcast. Thanks for joining.

Emmett: Thank you for having me.

Erik: So Emmett, with Softmax, your focus is on alignment and making AIs organically align with people. Can you explain what that means and how you're trying to do that?

Emmett: When people think about alignment, I think there's a lot of confusion. People talk about things being aligned—"We need to build an aligned AI." And the problem with that is when someone says that, it's like we need to go on a trip. And I'm like, okay, I do like trips, but like where are we going again? And with alignment, alignment takes an argument. Alignment requires you to align to something. You can't just be aligned. That's—I mean, I guess you could be aligned to yourself, but even then you kind of want to tell them what I'm aligning to is myself.

And so this idea of an abstractly aligned AI, I think, slips a lot of assumptions past people because it sort of assumes that there's like one obvious thing to align to. I find this is usually the goals of the people who are making the AI. That's what they mean when they say they want to make it aligned. "I want to make an AI that does what I want it to do." That's what they normally mean. And that's a pretty normal and natural thing to mean by alignment. I'm not sure that that's what I would regard as like a public good, right? Like it depends, I guess it depends on who it is. If it was like Jesus or the Buddha was like, "I am making an aligned AI," I'd be like, "Okay, yeah, align to you. I'm down. Like, sounds good. Sign me up." But most of us, myself included, I wouldn't describe as necessarily being at that level of spiritual development, and therefore perhaps want to think a little more carefully about what we're aligning it to.

And so when we talk about organic alignment, I think the important thing to recognize is that alignment is not a thing. It's not a state, it's a process. And like, this is one of these things that's broadly true of almost everything, right? Is a rock a thing? I mean, there's a view of a rock as a thing, but if you actually zoom in on a rock really carefully, a rock is a process. It's this endless oscillation between the atoms over and over and over again, reconstructing rock over and over again. Now, the rock is a really simple process that you can kind of coarse-grain very meaningfully into being a thing. But alignment is not like a rock. Alignment is a complex process.

And organic alignment is the idea of treating alignment as an ongoing, sort of living process that has to constantly rebuild itself. And so you can think of—how do people in families stay aligned to each other, stay aligned to a family? And the way they do that is not by—they're not like—you don't like arrive at being aligned. You're constantly reknitting the fabric that keeps the family going. And in some sense, the family is the pattern of reknitting that happens. And if you stop doing it, it goes away.

And this is similar for things like cells in your body, right? Like, there isn't like your cells align to being you and they're done. It's this constant, ever-running process of cells deciding what should I do? What should I be? Do I need a new job? Like, do I need to—should we be making more red blood cells, making fewer of them? Like, you aren't a fixed point. So they can't—there is no fixed alignment.

And it turns out that our society is like that. When people talk about alignment, what they're really talking about, I think, is "I want an AI that is morally good," right? Like that's what they really mean. And it's like, this will act as a morally good being. And acting as a morally good being is a process and not a destination. We don't—we never, unfortunately—we've tried taking down tablets from on high that tell you how to be a morally good being, and we use those and they're maybe helpful, but somehow they are not—you can read those and try to follow those rules and still make lots of mistakes.

And so, you know, I don't—I'm not going to claim I know exactly what morality is, but morality is very obviously an ongoing learning process and something where we make moral discoveries. Like, historically, people thought that slavery was okay and then they thought it wasn't. And I think you can very meaningfully say that we made moral progress. We made a moral discovery by realizing that that's not good.

And if you think that there's such a thing as moral progress, if you think there's—or even just learning how better to pursue the moral goods we already know, then you have to believe that alignment—aligning to morality, being a moral being—is a process of constant learning and of growth to re-infer what should I do from experience.

And the fact that no one has any idea how to do that should not dissuade us from trying, because that's what humans do. Like it's really obvious that we do this, right? Somehow—just like we used to not know how humans walked or saw—somehow we have experiences where we're acting in a certain way and then we have this realization: "I've been a dick. That was bad. I thought I was doing good, but in retrospect I was doing wrong." And it's not like random—people have the same—actually, so it's like there's like a bunch of classic patterns of people having that realization. It's like a thing that happens over and over again. So it's not random. It's like a predictable series of events that look a lot like learning, where you change your behavior and often the impact of your behavior in the future is more pro-social and you are better off for doing it.

And so I'm taking a very strong moral realist position. There is such a thing as morality. We really do learn it. It really does matter. And organic alignment is not something you finish. In fact, one of the key moral mistakes is this belief: "I know morality. I know what's right. I know what's wrong. I don't need to learn anything. No one has anything to teach me about morality." That's arrogance. And that's one of the most morally dangerous positions you can take.

And so, when we talk about organic alignment: organic alignment is aligning an AI that is capable of doing the thing that humans can do—and to some degree, I think, animals can do at some level, although humans are much better at it—the learning of how to be a good family member, a good teammate, a good member of society, a good member of all sentient beings. I guess, how to be a part of something bigger than yourself in a way that is healthy for the whole rather than unhealthy.

And Softmax is dedicated to researching this and I think we've made some really interesting progress. But the main message—you know, I go on podcasts like this to spread the main thing that I hope Softmax accomplishes above and beyond anything else—is to focus people on this as the question. Like, this is the thing you have to figure out. If you can't figure out how to build—how to raise a child who cares about the people around them—if you have a child that only follows the rules, that's not a moral person that you've raised. You've raised a dangerous person, actually, who will probably do great harm following the rules. And if you make an AI that's good at following your chain of command and good at following whatever rules you came up with for what morality is and what good behavior is, that's also going to be very dangerous.

And so that's the bar. That's what we should be working on, and that's what everyone should be committed to figuring out. And if someone beats us to the punch, great. I mean, I don't think they will, because I'm really bullish on our approach and I think the team's amazing. But it's maybe the first time I've run a company where I can truly say with a whole heart: if someone beats us, thank God.

Like, I hope somebody figures it out.

Seb: I mean, it's—yeah. I have a lot of similar intuitions about certain things. Like, I also dislike the idea that kind of, you know, we just need to crack the few kind of values or something, just cement them in time forever now and you know, we've kind of solved morality or something. And I've always kind of been skeptical about, you know, how the alignment problem has been kind of conceptualized as something to kind of solve once and for all and then you can just, you know, do AI or do AGI.

But I guess I understand it in a slightly different way. I guess maybe less based on kind of moral realism. But, you know, there's kind of the technical alignment problem, which I kind of think of broadly as: how do you get an AI to do what you—you know, how do you get it to follow instructions, broadly speaking? And I think that was, you know, more of a challenge I think pre-LLMs, I guess, when people were talking about reinforcement learning and looking at these systems, whereas post-LLM we've realized that many things that we thought were going to be difficult were somewhat easier.

And then there's a kind of second question—the kind of normative question—of to whose values, what are you aligning this thing to? Which I think is the kind of thing you're commenting on. And for this I—yeah, I tend to be very skeptical of approaches where, you know, you need to kind of crack the ten commandments of alignment or something and then we're good.

And here I think I have intuitions that are, unsurprisingly, a bit more like political science-based or something. And that like, okay, it is a process. And I like the kind of bottom-up approach to some degree of, well, you know, how do we do it in real life with people? Like, no one comes up with, you know, "I got this." And so you have processes that allow ideas to kind of clash. You got people with different ideas, opinions, views and stuff to kind of coexist as well as they can within a wider system. And, you know, with humans that system is liberal democracy or something, you know, at least in some countries. And that allows more of that kind of—you know, these kind of ideas, these values—to be kind of discovered and construed over time.

And I think, you know, for alignment as well, I tend to think—yeah, on the normative side, I agree with some of your intuitions. I'm less clear about what exactly it would look like to implement this into an AI system, the ones we have today.

Emmett: I agree that there's this—I think there's an idea of technical alignment that I would define a little differently, but it's sort of the sense of: if you build a system, can it be described as being coherently goal-following at all, regardless of what those goals are? Like, lots of systems aren't coherently—they're not well-described as having goals. They just kind of do stuff. And if you're going to have something that's aligned, it has to have coherent goals. Otherwise, those goals can't be aligned with anyone else's goals, kind of by definition. Is that sort of—is that a fair assessment of what you mean by technical alignment?

Seb: I mean, I'm not fully sure, right? Because I think if I give a model a certain goal, then I would like the model to kind of follow that instruction and kind of reach that particular goal rather than it having a goal of its own that, you know, I can't—

Emmett: Well, yeah.

Emmett: Well, wait. If you give it a goal, it has that goal, right?

Seb: To give someone something, right? If you—if I instruct it to do X, then I would like it to do X and not, you know, different variants of X. Essentially, I wouldn't want it to reward hack. I wouldn't want some—

Emmett: Well, but when you tell it to do X, you're transferring like a—a series of like a byte string in a chat window or like a series of audio vibrations in the air, right? You're not transplanting a goal from your mind into its. You're giving it an observation that it's using to infer your goal.

Seb: I mean, in some sense, yeah, I can communicate a series of instructions and I want it to infer what I'm, you know, saying essentially as accurately as it can given what it knows of me and what I'm asking.

Emmett: You want it to infer what you meant, right? Like, because in some sense there's no—the byte sequence that you sent over the wire to it has no absolute meaning. It has to be interpreted, right? Like, that byte sequence could mean something very different with a different codebook.

Seb: Well, I guess one way, you know—I think I remember when I was first getting into AI and, you know, these kind of questions maybe like a decade ago. So you have these examples of, you know—I think it was Stuart Russell in the textbook—we'll give the AI a goal, but then it won't exactly do what you're asking, right? You know, clean the room, and then it goes and cleans the room but takes the baby and puts it in the trash. Like, this is not what I meant.

Emmett: But wait, hold on. But this is the thing where I think people—this is—you have to—we're jumping over a step there. You didn't give the AI a goal. You gave it a description of a goal. A description of a thing and a thing are not the same. I can tell you "an apple" and I'm evoking the idea of an apple, but I haven't given you an apple. I've given you a—you know, it's red, it's shiny, it's a size—that's a description of an apple, but it's not an apple. And giving someone "hey, go do this"—that's not a goal, that's a description of a goal.

And for humans, we're so fast—we're so good at turning a description of a goal into a goal—we do it so quickly and naturally, we don't even see it happening. Like, we think that we get confused and we think those are the same thing. But you haven't given it a goal. You've given it a description of a goal that you want it to—you hope it turns back into the goal that is the same as the goal that you described inside of you.

Emmett: You could give it a goal directly by reading your brain waves and synchronizing its state to your brain waves directly. I think that would meaningfully—you could say, "Okay, I'm giving it a goal. I'm synchronizing its internal state to my internal state directly, and this internal state is the goal, and so now it's the same." But most people don't mean that when they say they gave it a goal.

Interviewer: And is this distinction you're making, Emmett, important because there's some lossiness between the description and the actual, or why is the distinction—

Emmett: It goes back to what I was saying. Technical alignment—I want to check if we're on the same page about it—is the capacity of an AI to be good at inference about goals: good at inferring, from a description of a goal, what goal to actually take on, and good at, once it takes on that goal, acting in a way that is actually in concordance with that goal.

So it is both pieces. You have to have the theory of mind to infer, from that description of a goal you got, what goal it corresponded to. And then you have to have a theory of the world to understand what actions correspond to that goal occurring. And if either of those things breaks, it kind of doesn't matter—if you can't consistently do both of those things, you're not coherent. Inferring goals from observations and acting in accordance with those goals is what I think of as being a coherently goal-oriented being. Because whether I'm inferring those goals from someone else's instructions or from the sun or tea leaves, the process is: get some observations, infer a goal, use that goal to infer some actions, take action. And an AI that can't do that is not technically aligned, or not technically alignable. I would even say it lacks the capacity to be aligned because it's not competent enough.

Interviewer: And you think language models don't do that well? As in, they kind of fail at that, or they're not—

Emmett: People fail at both those steps all the time, constantly. I tell people—I tell employees to do stuff and like—

Interviewer: Yeah. But then—but fail at—

Emmett: —fail at like breathing all the time too. And I wouldn't say that we can't breathe. I just say that we're not gods. Like, we are—yes, we are imperfectly—we are somewhat coherent, relatively coherent things, just like—am I big or am I small? Well, I don't know, compared to what? Humans are more relatively goal-coherent than any other object I know of in the universe, which is not to say that we're 100% goal-coherent. We're just like more so. And I think you're never going to get something that's perfectly—the universe doesn't give you perfection. It gives you relatively some amount—it's a quantifiable thing how good you are at it, at least in a certain domain.

Interviewer: I guess my question is—does that capture what you're talking about with technical alignment, or are you talking about a different thing? Because I think—

Seb: I really care a lot about that thing. I mean, I definitely care about that to some extent. I might understand it slightly differently, but I guess I might think of it through the lens of maybe principal-agent problems or something. You know, you kind of instruct someone—even, you know, I guess in human terms—you know, to do a thing. Are they actually doing the thing? What are their incentives and motivation and, you know, not even intrinsic but kind of situational to actually do the thing you've asked them to do? And in some instance—sorry, yeah.

Emmett: There's a third thing. So principal-agent problems—I would expand what I was saying in another part, which like—you might already have some goals, and then you inferred this new goal from these observations, and then are you good at balancing the relative importance and relative threading of these goals with each other? Which is another skill you have to—and if you're bad at that, you'll fail. You could be bad at it because you overweight bad goals, or you could be bad at it because you're just incompetent and can't figure out that obviously you should do goal A before goal B.

Seb: It feels like a version of common sense or something, right? Like, in the robot-cleaning-the-room example, you would expect the robot to have understood the goal—to essentially not put the baby in the trash can or something—and just actually do the right sequence of actions.

Emmett: Well, in that case it failed—that robot very clearly failed goal inference. You gave it a description of a goal and it inferred the wrong states to be the goal states. That's just incompetence. It is incompetent at inferring goal states from observations.

Children are like this too, you know. And honestly, have you ever played the game where you give someone instructions to make a peanut butter sandwich and then they follow those instructions exactly as you've written them, without filling in any gaps? It's hilarious, because you think you've covered everything and you haven't. They wind up putting the knife in the toaster, or they don't open the peanut butter jar, so they're just jamming the knife into the lid of the peanut butter jar. And it's endless. Because actually, if you don't already know what they mean, it's really hard to know what they mean.

Like, we—the reason humans are so good at this is we have a really excellent theory of mind. I already know what you're likely to ask me to do. I already have a good model of what your goals probably are. So when you ask me to do it, I have an easy inference problem. Which of the seven things that he wants is he indicating? But if I'm a newborn AI that doesn't have that—doesn't have a great model of people's internal states—then I don't know what you mean. It's just incompetent. It's not—which is separate from: I have some other goal and I knew what you meant but I decided not to do it because there's some other goal that's competing with it. Which is another thing you can be bad at.

Which is again different than: I had the right goal, I inferred the right goal, I inferred the right priority on goals, and then I'm just bad at doing the thing. I'm trying, but I'm incompetent at doing it. And these roughly correspond to the OODA loop, right? Bad at observing and orienting, bad at deciding, bad at acting. And if you're bad at any of those things, you won't be good.
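(Note, not from the interview: a minimal sketch of the loop Emmett is describing, from observation to inferred goal to prioritization to action, with each stage as a separate place the agent can fail. All names are illustrative, not anything from Softmax.)

```python
# Toy sketch of a "coherently goal-oriented" agent loop:
# observation -> inferred goal -> prioritization against existing goals -> action.
# Each stage is one of the failure modes discussed (roughly observe/orient, decide, act).

from dataclasses import dataclass, field


@dataclass
class Agent:
    existing_goals: list = field(default_factory=list)

    def infer_goal(self, description: str) -> str:
        # Failure mode 1: bad theory of mind -- inferring the wrong goal from a
        # *description* of a goal (the byte string, not the goal itself).
        return "achieve: " + description.strip().lower()

    def prioritize(self, new_goal: str) -> list:
        # Failure mode 2: weighing the new goal badly against goals already held.
        return [new_goal] + self.existing_goals  # naive: newest goal first

    def act(self, goals: list) -> str:
        # Failure mode 3: knowing the goal but being incompetent at pursuing it.
        return "working on: " + goals[0] if goals else "idle"


agent = Agent(existing_goals=["achieve: keep the baby safe"])
goal = agent.infer_goal("Clean the room")
print(agent.act(agent.prioritize(goal)))
```

(The naive "newest goal first" prioritization is itself an example of the second failure mode: cleaning the room gets ranked ahead of keeping the baby safe.)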

And then I think there's this other problem—I like the separation between technical alignment and value alignment. The value side is: what goals should you have? What goals should we tell you to have? What goals should we tell ourselves to have? What are the good goals to have? That's a separate question from: given that you got some goals indicated, are you any good at pursuing them? Which I feel like is actually, in many ways, the current heart of the problem. We're much worse at technical alignment than we are at guessing what to tell things to do.

Do you think that aligns with your—how you mean technical and value alignment, or technical—

Seb: Yeah, in some sense. I mean, I certainly think that there's a—there's something, you know—like, an error or mistake is one thing, and then there's the—not listening to instruction is something. But then, yeah, I think on the normative side, I just think of it even in real life, ignoring AI—like, I don't know what my goals are. And I've got some broad conception of certain things, right? I want to get, you know, have dinner later or something, and oh, I want to kind of do well in my career. But I think a lot of these goals aren't something we kind of all just know. We kind of discover them as we go along. It's kind of a constructed thing. And so—and most people don't know their goals, I think. And so, you know, I think when you have agents and giving them goals or whatever, I think that should be part of the equation—that we actually don't know all the goals. And this is something that is kind of, like you say, a process over time that is, you know, dynamic.

Emmett: So I think, from my point of view, the kind of goals we're talking about here are one level of alignment. You can align something around goals if you can explicitly articulate, in concept and in description, the states of the world that you wish to attain. But only a tiny percentage of human experience can be oriented around that way. Many of the most important things cannot be.

And the foundation, I think, of morality—and the foundation, I think, of where do goals come from, where do values come from—human beings exhibit a behavior. We go around talking about goals and we go around talking about values. And that's a behavior caused by some internal learning process, based on observing the world. What's going on there?

I think what's happening is that there's something deeper than a goal and deeper than a value, which is care. We give a shit. We care about things. And care is not conceptual. Care is non-verbal. It doesn't indicate what to do. It doesn't indicate how to do it. Care is a relative weighting over—effectively like attention on states. It's a relative weighting over which states in the world are important to you.

And I care a lot about my son. What does that mean? It means his states—the states he could be in—I pay a lot of attention to those and those matter to me. And you can care about things in a negative way. You can care about your enemies and what they're doing, and you can desire for them to do bad. But I think that—and you don't just want it to care about us. You want it to care about us and like us too, right? But the foundation is care.

Until you care, you don't know—why should I pay more attention to this person than to this rock? Well, because we care more. And what is that care stuff? If I had to guess—this sounds so stupid, but—care is basically like reward. How much does this state correlate with survival? How much does this state correlate with your full inclusive reproductive fitness, for something that learns evolutionarily? Or, for a reinforcement learning agent like an LLM, how much does this state correlate with reward, with my predictive loss and my RL loss? That's a state I care about. I think that's kind of what it is.
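(Note, not from the interview: a toy illustration of "care as a weighting over states", estimating how much each state matters by the reward associated with visiting it. The states and numbers are made up.)

```python
# "Care" as a relative weighting over states, approximated here as the average
# reward associated with each state in an agent's experience. The magnitude is
# how much the state matters; the sign is whether you want it or want it gone.

import random
from collections import defaultdict

random.seed(0)

base_reward = {"son_ok": 1.0, "rock_nearby": 0.0, "enemy_close": -0.5}

# Fake experience: 500 steps of (state visited, noisy reward received).
trajectory = [
    (s, base_reward[s] + random.gauss(0, 0.1))
    for s in random.choices(list(base_reward), k=500)
]

totals = defaultdict(float)
counts = defaultdict(int)
for state, reward in trajectory:
    totals[state] += reward
    counts[state] += 1

for state in base_reward:
    care = totals[state] / max(counts[state], 1)
    print(f"{state:>12}: care weight {care:+.2f}")
```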

Interviewer: The other part of Seb's question was just: how does this—what does this look like in AI systems? And maybe another way of asking is, when you talk to the people most focused on alignment at the major labs—as obviously you have over the years—how does your interpretation differ from their interpretation? And how does that inform, you know, what you guys might go do differently?

Emmett: Most AI alignment work is focused on alignment as steering. That's the polite word. Or control, which is slightly less polite. If you think that we're making beings, you would also call this slavery. Someone who you steer, who doesn't get to steer you back, who non-optionally receives your steering—that's called a slave. And it's also called a tool if it's not a being. So if it's a machine, it's a tool. And if it's a being, it's a slave.

And I think that the different AI labs are pretty divided as to whether they think what they're making is a tool or a being. I think some of the AIs are definitely more tool-like and some of them are more being-like. I don't think there's a binary between tool and being. It seems to be that it moves gradually.

And I think that I guess I'm a functionalist in the sense that I think that something that in all ways acts like a being that you cannot distinguish from a being in its behaviors is a being. Because I don't know how to tell on what other basis I think that other people are beings other than they seem to be—they look like it, they act like it. They match my priors of what beings' behaviors look like. I get lower predictive loss when I treat them as a being.

And the thing is, I get lower predictive loss when I treat ChatGPT or Claude as a being. Now, not as a very smart being. Like, I think that a fly is a being, and I don't care that much about its behavior or, you know, its states. So just because it's a being doesn't mean that it's a problem. Like, we sort of enslave horses in a sense, and I don't think there's a real issue there.

And you even—and there's a thing you do with children that can look like slavery, but it's not. You control children, right? But the children's states also control you. Like, yes, I tell my son what to do and make him go do stuff, but also when he cries in the middle of the night, he can tell me to do stuff. Like, there's a real two-way street here because it's not—which is not necessarily symmetric. It's hierarchical, but two-way.

And basically, I think it's good to focus on steering and control for tool-like AIs, and we should continue to develop strong steering and control techniques for the more tool-like AIs that we build. But the labs are clearly saying they're building AGI, and an AGI will be a being. You can't be an AGI and not be a being, because something that has the general ability to effectively use judgment, think for itself, and discern between possibilities is obviously a thinking thing.

And so as you go from what we have today, which is mostly a very specific intelligence, not a general intelligence—but as labs succeed at their goal of building this general intelligence, we really need to stop using the steering control paradigm. Because we're going to do the same thing we've done every other time our society has run into people who are like us but different. Like, these people are—you know, they're kind of like people, but they're not like people. Like, they do the same thing people do. They speak our language. They can take on the same kind of tasks, but they don't count. They're not real moral agents.

Like, we've made this mistake enough times at this point. I would like us to not make it again as it comes up.

So our view is to make the AI a good teammate. Make the AI a good citizen. Make the AI a good member of your group. That's a form of alignment that is scalable, and that you can will onto other humans and other beings as well as onto AIs.

Seb: Yeah, I suppose this is kind of where I probably differ in my understanding of AI and AGI. I guess I kind of continue seeing it as a tool even as it reaches a certain level of generality. And I wouldn't necessarily see more intelligence as meaning deserving of more care. Like, you know, at a certain level of intelligence, you now deserve more rights or something, or something changes fundamentally. And I guess at the moment I'm somewhat skeptical of computational functionalism. And so I think there's something intrinsically different between, I guess, an AI or an AGI, no matter how intelligent or capable, and—

And I can totally see, you know, or imagine agents with kind of long-term goals and doing kind of, you know, operating, I guess, as we, you and I, might be, but without that having the same implications as, you know—I guess you're referring, I guess, to slavery. But, you know, these are not the same, right? Like, I think in the same way as a model saying "I'm hungry" does not have the same implications as a human saying "I'm hungry." So I think the substrate does matter to some degree, including for thinking about, you know, whether to think that the system is some sort of other being, whether it has, you know—and if there are similar normative considerations, I guess, about how to treat and act with it.

Emmett: Can I ask you about that? Like, what observations would change your mind? Is there any observation you could make that would cause you to infer this thing is a being instead of not a being?

Seb: I guess it depends how you define being. Like, I mean, I could conceptualize that as a mind. And that's fine.

Emmett: This—I have a program that's running on a silicon substrate. Some big, complicated machine learning program running on a substrate—on a silicon substrate. So, you know, you observe that it's on a computer and you interact with it and it does things. And, you know, it takes actions. It has observations. Is there anything you could observe that would change your mind about whether or not it was a moral patient, whether it was a moral agent, about whether or not it had feelings and thoughts and, you know, had subjective experience? Like, what would you have to observe? What's the test, or is there one?

Seb: There's a lot of different kinds of questions here, I think. Some conflation. On one hand there's the normative side in different situations, because you can give rights to things that aren't necessarily beings. A company has rights in some sense, and these are kind of useful for various purposes. And I think also the biological—I think beings and these systems have very different kinds of substrate. You can't separate certain needs and particularities about what they are from the substrate. So, you know, I can't copy myself. If someone stabs me, I probably die. Whereas machines have a very different substrate. I think there's also fundamental disagreement around what happens at the computational level, which I think is different from what happens with biological systems.

Emmett: But I—yeah, I—

Seb: So I don't know. No, I—

Emmett: I agree that if you have a program that you copied many times, you don't harm the program by deleting one of the copies in any meaningful sense. So therefore that wouldn't count as—no information was lost, right? There's nothing meaningful there. I'm asking a very different question. Like, there's just one copy of this thing running on one computer somewhere, and I'm just saying: Hey, is it a person? You know, it walks like a person. It talks like a person. It's in some Android body. And you're like, "But it's running on silicon." And I'm asking: is there some observation you could make that would make you say, "Yeah, this is a person like me, like other people that I care about that I grant personhood to"? And not like for instrumental reasons—not because, like, "Oh yeah, we're giving it a right because we give a corporation rights or whatever." I mean, you know, where you care about its experiences. What would—is there an observation you could make that could change your mind about that, or not?

Seb: I'd have to think about it. But I think, you know, it even depends what we mean by person. And, you know, in some sense I care about certain corporations too. So I'm—

Emmett: No, no, no. I mean, but you care about other people in your life, right?

Seb: Okay, great.

Emmett: You know, you care about some people more than others, but all the people you interact with in your life are in some range of care. And you care about them not the way you care about a car, but as a being whose experience matters in itself, not merely as a means but as an end.

Seb: Well, because I believe they have experiences, right? And by definition—

Emmett: What would it take—I'm asking you the very direct question: what would it take for you to believe that of an AI running on silicon? Like, instead of it being biological—so the difference is its behaviors are roughly similar, but the difference is its substrate. What would it take for you to give it that same—to extend that same inference to it that you do to all these other people in your life that you—

Interviewer: Can I ask what your answer—I'm taking Seb's non-answer as a sort of—it's unlikely that he would grant, or—I'll just for myself—it seems hard for me to imagine giving the same level or similar level of personhood. In the same way, I don't give it to animals either. And if you were to ask, you know, what would need to be true for animals, I probably couldn't get there either. What would it take for you?

Emmett: Wait, you couldn't—I could imagine for an animal so easy. This chimp comes up to me. He's like, "Man, I'm so hungry and you guys have been so mean to me, and I'm so glad I figured out how to talk. Like, can we go to—can we go chat about the rainforest?" I'd be like, "Fuck, you're definitely a person now." Like, for sure. I mean, I'd first want to make sure I wasn't hallucinating, but you know—I can—it'd be easy for me to imagine an animal. Come on, it's really easy. It's like trivial. I'm not saying that you would get the observation. I'm just saying it's trivial for me to imagine an animal that I would extend personhood to under a set of observations. So really?

Interviewer: Well, I didn't factor that in. I didn't take the imagination that far—you know, imagining a chimp talking. Yeah, that's a bit closer to it. What's your answer to the question that you bring up about the AI?

Emmett: At a metaphysical level, I would say: if there is a belief you hold where there is no observation that could change your mind, you don't have a belief. You have an article of faith. You have an assertion. Because real beliefs are inferences from reality, and you can never be 100% confident about anything. And so there should always be, if you have a belief, something—however unlikely—that would change your mind.

Seb: Oh yeah, I'm open to it too. I mean, just to be clear.

Emmett: No, I'm just saying it can't be "nothing ever." He just hasn't gotten to it yet. So I'm curious.

So my answer is: basically, if its surface-level behaviors looked like a human, and then if after I probed it, it continued to act like a human, then I continue to interact with it over a long period of time and it continued to act like a human in all ways that I understand as being meaningful to me interacting with a human—like, there's a whole set of people I'm really close to who I've only ever interacted with over text, yet I infer the person behind that is a real thing. If I—if I felt care for it, I would infer eventually that I was right.

And then someone else might demonstrate to me that, "You've been tricked by this algorithm, and actually look how obvious it's not actually a thing." And I'd be like, "Oh, shit, I was wrong." And then I would not care about it. Like, I would—but I would—you know, the preponderance of the evidence—I don't know what else you could possibly do, right?

Like, I infer other people matter because I interacted with them enough that they seem to have rich inner worlds to me after I interact with them a bunch. That's why I think other people are important.

Seb: I suppose it doesn't give me a very clear test to know whether or not, you know—if you start with "if I care for it," then it's a little bit circular, right? And the other thing is, if you were to see, I guess, a simulated video game character that is in many, many ways extremely human-like, right? It's not a neural network behind it. It's whatever you use to create video games. Like, I guess, what distinguishes that—

Emmett: Wait, but I've never had—I've never had trouble distinguishing—I've never had a deep caring relationship with a video game character that another person—

Seb: Right, but I don't know that—

Emmett: That doesn't happen. That doesn't—in fact, empirically, you seem wrong. I don't have any trouble distinguishing between things like ELIZA, the fake chatbot thing, and a real intelligence. You interact with it long enough, it's pretty obvious it's not a person. Doesn't take long. But if it's really, really good—if you can't actually tell the difference—that's when you switch. If it walks like a duck and talks like a duck and shits like a duck and—like, eventually it's a duck, right?

Seb: Well, if everything is duck-like, then yeah, sure. If it's hungry as well like a duck is because it has these physical components. Yeah, sure, at some point. Yeah, I agree.

Emmett: So, right. And so do you think that—so there's this question, right? Is the reason I care about other people that they're made out of carbon? Is that the—

Seb: I don't think so.

Emmett: No, me neither.

Seb: I mean, I'm not a substratist, I guess, if that's the—but I think you need more than just "it's behaviorally indistinguishable." Like, it's not a sufficient bar.

Emmett: How would you—what else can you know about something apart from its behaviors?

Seb: I mean, a lot. Like, the—again, if—how would you—

Emmett: No, no, no. But I mean, can you name something about something else that doesn't have a behavior?

Seb: Yeah, I think there's far more kind of, you know, experimental evidence you can have with kind of, you know—

Emmett: No, just any object.

Seb: And a thing I could know about it that is not from its behavior. I'm not—yeah, I'm not sure I get the question, I suppose. But equally—

Emmett: It's the dumbest, most straightforward question. But I'm claiming you only know things because they have behaviors that you observe. And you're saying: no, you can know something about something without observing its behaviors. Tell me about this. Tell me about this thing and this behavior and this thing I can know about it that is not due to its behaviors.

Seb: I guess I'm saying there's different levels of observation, and simply something quacking like a duck does not guarantee that it's actually a duck. Like, I would have to also cut it open for real and see if it's duck-like on the inside. Just the outside isn't sufficient. Like, I'm not a, I guess, a—

Emmett: Behavior. Yeah, I totally—one of its behaviors is the way that the, you know, floats move around in the math, right? Like, one of the things I would want to go look for, which you could totally do, is: I want to go look in the manifold of it—the belief manifold—and I want to go see if that belief manifold encodes a sub-manifold that is self-referential, and a sub-sub-manifold that is the dynamics of the self-referential manifold, which is mind. And I would want to know: does this seem well-described internally as that kind of a system, or does it look like a big lookup table? That would matter to me. That's part of its behaviors that I would care about.

I would also care about how it acts, and, you know—and you weigh all the evidence together and then you try to guess: does it—does this thing look like it's a thing that has feelings and, you know, goals and cares about stuff in net, on balance, or not? Like, but I can't imagine—

Which I think you could do for—I think we do for the AI. I think we're always doing that, right? And so I'm trying to figure out: beyond that, what else is there? That just seems like the thing.

Interviewer: Yeah, it seems like you guys are using behavior in slightly different senses. Emmett is using behavior also in the context of what it's made of, of the inside. I don't know if there's a big disagreement.

Emmett: Well, no, no, no, no. Behavior is what I can observe of it. I don't actually know what it's made of. I can cut your brain open. I can observe your neurons glistening. But I can't actually ever get inside of it, right? That's the subjective. That's the part that's not just the surfaces—

Interviewer: Before—the reason I brought this up is 'cause you were basically about to make this argument of: hey, you see it as a tool, not necessarily a being. Can you kind of finish what the point—do you remember the point you were making?

Seb: I suppose that, yeah, given how I understand these systems, I think there's no contradiction in thinking that an AGI can remain a tool, and an ASI can remain a tool. And that has implications about how to use it, and implications around things like whether you care about, you know, getting it to work 24/7 or something. So I guess I conceptualize them more as almost extensions of human agency and cognition in some sense, more so than a separate being or a separate thing that we need to now cohabitate with. And I think that latter frame—you know, if you kind of just fast-forward, you end up with: well, how do you cohabitate with the thing? And is it like an alien? And I think that's the wrong frame. It's almost a category error in some sense. So I don't—yeah.

Emmett: Wait a minute. I go back to my first question then. What evidence—what concrete evidence would you look at? What observations could you make that would change your mind?

Seb: I mean, I have to think about it. I don't have a clear answer here, but I mean—

Emmett: I got to tell you, man, if you want to go around making claims that something else isn't a being worthy of moral respect, you should have an answer to the question: what observations would change your mind? If it has outwardly moral agency-looking behaviors that could be making it a moral agent, but you don't know, and reasonable, smart other people disagree with you—I would really put forward that that question—what would change your mind—should be a burning question. Because what if you're wrong?

Seb: But what if you're wrong? I mean, the moral—the moral disaster is pretty big.

Emmett: No, no. I'm not saying you are. You could be—you could be right. False negatives have cost on both ends. It's not some sort of, you know, precautionary principle for everything. And unless I can disprove it, I need to now—

Emmett: No, no. I have the same question for me. You could reasonably ask me: Emmett, you think it's going to be a being. What would change your mind? And I have an answer for that question too. And if you want, I'm happy to talk about what I think are the relevant observations that tell you whether or not that would cause me to shift my opinion from its current thing, which is that more general intelligences are going to be beings.

Interviewer: What's the implication now? I mean, it's one thing—let's say just acknowledge now it's a being. Like, how are we going to define being? Like, what's the implication of having determined this thing as a being?

Emmett: Well, so if it's a being, it has subjective experiences. And if it has subjective experiences, there's some content in those experiences that we care about to varying degrees. Like, I care about the content of other humans' experiences quite a bit. I care about the content of a dog's experiences some—not as much as a person, but less, but some. I care about some humans' experiences way more—like my son or whatever—because I'm closer to him and more connected.

And so I would really want to know at that point: well, what is the content of this thing's experiences?

Interviewer: So how do you determine that? I'm asking you now: you've got a being that has experiences. Like, how do you determine that? Like, how do you feel about—

Emmett: Oh, how do you—oh, yeah. So—

Interviewer: Does it have more rights than, you know—

Emmett: —understand the content. So the way you understand the content of something's experiences is that you look at, effectively, the goal states it revisits. What you do is take a temporal coarse-graining of its entire action-observation trajectory—you do this subconsciously; it's what your brain is doing—and you look for revisited states across, in theory, every spatial and temporal coarse-graining possible. Now, you have to have an inductive bias, because there are too many of those. But you go searching for the homeostatic loops it is in. Every homeostatic loop is effectively a belief in its belief space. If you're familiar with the free energy principle, active inference, Karl Friston—this is effectively what the free energy principle says: if you have a thing that is persistent and whose existence depends on its own actions—which it generally would for an AI, because if it does the wrong thing it goes away, we turn it off—then that licenses a view of it as having beliefs. And specifically, the beliefs are inferred as being the homeostatic revisited states that it is in the loop for, and the change in those states is its learning.

And for it to be a moral being I cared about, what I'd want to see is a multi-tier hierarchy of these. Because if you have a single level, it's not self-referential. You have states, but you can't really have pain or pleasure in a meaningful sense. Because, like: yes, it is hot—but is it too hot? Do I like it if it's too hot? I don't know. So you have to have at least a model of a model in order for it to be too hot. And you really have to have a model of a model of a model to meaningfully have pain and pleasure. Because, sure, it's too hot in the sense that I want to move back this way. But it's always a little bit too hot or a little bit too cold. Is it too, too hot? The second derivative is actually where you get pain and pleasure.

So I'd want to see if it has second-order homeostatic dynamics in its goal states. That would convince me it has at least pleasure and pain, so it's at least like an animal, and I would start to accord it at least some amount of care.

Third-order dynamics—you can't actually just pop up to a third-order dynamic directly. It doesn't work that way. You have to take the chunk of all the states over time and look at the distribution over time. That gives you a new first order of states. And if that new first order of states is meaningfully there, it tells you that it has—I guess you'd call it feelings, almost. It has metastates, a set of metastates that it alternates between, that it shifts between.

And then if you climb all the way up from that, you have trajectories between these metastates, and then a second order of those—that's like thought. Now it's like a person. And so if I found all six of those layers—which, by the way, I definitely don't think you'd find in LLMs; these things don't have attention spans like that at all—then I would start to at least very seriously consider it as, you know, a thinking being, somewhat like a human.

There's a third order you could go up as well. But that's basically what I would be interested in—is the underlying dynamics of its learning processes and how its goal states shift over time. I think that's what basically tells you if it has internal pleasure-pain states and sort of self-reflective moral desires and things like that.
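(Note, not from the interview: a rough sketch of the kind of analysis gestured at here: coarse-grain a trajectory, find states the system keeps revisiting, then look at revisited transitions between those states as a crude second-order signal. Illustrative only; not Softmax's method and not a faithful free-energy-principle implementation.)

```python
# Coarse-grain an observation trajectory, find revisited states, then look at
# revisited transitions between states (a crude stand-in for "second-order
# homeostatic dynamics").

import math
from collections import Counter


def coarse_grain(trajectory, bin_size):
    """Map continuous observations to discrete states at one grain size."""
    return [round(x / bin_size) for x in trajectory]


def revisited(states, min_visits=5):
    """Candidate homeostatic states: ones the system returns to again and again."""
    counts = Counter(states)
    return {s for s, c in counts.items() if c >= min_visits}


# Fake trajectory: a system regulating around a set point, like temperature.
trajectory = [math.sin(t / 10.0) + 0.1 * math.sin(t) for t in range(500)]

first_order = coarse_grain(trajectory, bin_size=0.5)
print("revisited states:", sorted(revisited(first_order)))

# Second order: transitions *between* first-order states, themselves revisited.
transitions = list(zip(first_order, first_order[1:]))
print("revisited transitions:", sorted(revisited(transitions)))
```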

Interviewer: And zooming out, this moral question is obviously very interesting, but if someone wasn't interested in the moral question as much, I think what you would say—if I understand correctly—is you also just feel, on purely pragmatic grounds, your approach is going to be more effective in aligning AIs than some of these, you know, top-down control methods that we alluded to as well, right?

Emmett: Yeah, yeah. I guess the problem is you're making this model and it's getting really powerful, right? And let's say it is a tool. Let's say we scale up one of these tools—because you can make a super-powerful tool that doesn't have these metastable states I'm talking about; they're not necessary to have a very smart tool. Basically, a tool is like a first- or second-order model that just doesn't meaningfully have pleasure and pain, right? Like, great. Does it even have a subjective experience? I kind of think it maybe does, but not in a way that I give a shit about.

And so what happens then? Well, you've trained it to infer goals from your—from observation, and to prioritize goals and act on them. And one of two things is going to happen. Your very powerful optimizing tool that has lots of causal influence over the world is going to be well technically aligned and is going to do what you tell it to do, or it's not, and it's going to go do something else. I think we can all agree if it just goes and does something random, that's obviously very dangerous.

But I put forward that it's also very dangerous if it then goes and does what you tell it to do. Because—have you seen The Sorcerer's Apprentice? Humans' wishes are not stable. Not at a level of immense power. Like, you want ideally people's wisdom and their power to kind of go up together. And generally they do, because being smart for people makes you generally a little more wise and a little more powerful. And when these things get out of balance, you have someone who has a lot more power than wisdom. That's very dangerous. It's damaging.

But at least right now, the balance of power and wisdom is kept in check, because the way you get lots of power is basically by having a lot of other people listen to you. And so, at some point, if you're the mad king, it's a problem. But generally speaking, eventually the mad king gets assassinated or people stop listening to him because he's a mad king.

And so the problem is: you think, okay, great, we can steer the super-powerful AI. And now the super-powerful AI is in the—this incredibly powerful tool is in the hands of a human who is well-meaning but has limited, finite wisdom—like I do and like everyone else does—and their wishes are bad and not trustworthy. And the more of that you have, you start giving those out everywhere, and this ends in tears also.

And so basically, you just don't give everyone atomic bombs. Atomic bombs are really powerful tools too. They're not aware; they're not beings. I would not be in favor of handing atomic bombs to everybody. There's a level of tool power that just should not be built, generally, because it is more power than any individual human's wisdom is able to harness. And if it does get built, it should be built at a societal level and protected there. And even then, there are tools so powerful that even as a society we shouldn't build them. That would be a mistake.

The nice thing about a being is—like a human—if you get a being that is good and is caring, there's this automatic limiter. It might do what you say, but if you ask it to do something really bad, it'll tell you no. It's like other people. And that's good. That is a sustainable form of alignment, at least in theory. It's way harder, right? It's way harder than the tool steering.

So I'm in favor of the tool steering. We should keep doing that. And we should keep building these limited, less-than-human-intelligence tools, which are awesome and I'm super into. And we should keep building those and keep building steerability. But as you're on this trajectory to build something as smart as a person, right, and then smarter than a person—a tool that you can't control: bad. A tool that you can control: bad. A being that isn't aligned: bad. The only good outcome is a being that cares, that actually cares about us. That's the only way that that ends well.

Or we can just not do it. I don't think that's realistic. That's like the Pause AI people. I think that's totally unrealistic and silly. But, you know, theoretically you could not do it, I guess.

Interviewer: And what can you say about your strategy of how you're trying to achieve—or even attempt to achieve—this level? Like, in terms of research or roadmap or—

Emmett: So we're basically focused on technical alignment, at least in the way I was discussing it, which is: you have these agents and they're bad at it. They have bad theory of mind. You say things and they're bad at inferring what the goal states in your head are. They're bad at inferring what goal states other agents will read into their behavior. So they're bad at cooperating on teams, and they're bad at understanding how certain actions will cause them to acquire new goals that are bad, goals they wouldn't reflectively endorse.

So there's this parable of the vampire pill. Would you take a pill that turns you into a vampire who would kill and torture everyone you know, but you'll feel really great about it after you take the pill? Obviously not. That's a terrible pill. But why not? By your own future scoring, it will score really high on the rubric. No, no, no. Because it matters that you use your current theory of mind about your future self, not your future self's theory of mind.

And so they're bad at that too. And so they're bad at all this theory-of-mind stuff. And so how do you learn theory of mind? Well, you put them in simulations and context where they have to cooperate and compete and collaborate with other AIs, and that's how they get points. And you train them in that environment over and over again until they get good at it.

And then you do what they did with LLMs. How do you get an LLM to be good at, you know, writing your email? Well, you train it on all the language that's ever been generated—all possible, you know, email text strings it could possibly generate—and then you have it generate the one you want. You can make a surrogate model. Well, we're making a surrogate model for cooperation.

You train it on all possible theory-of-mind combinations, every possible way it could be. That's your pre-training. And then you fine-tune it to be good at the specific situation you want it to be in.

And we tried for a long time to build language models where we would try to get them to just do the thing you want, to train it directly. And the problem is, if you want a really good model of language, you just have to give it the whole manifold. It's too hard to cut out just the part you need, because it's all entangled with itself, right? And the same thing is true with social stuff. It has to be trained on the full manifold of every possible game-theoretic situation, every possible team situation—making teams, breaking teams, changing the rules, not changing the rules—all of that stuff. And then it has a strong model of theory of mind, of theory of social mind, of how groups change goals, all that kind of stuff. You need all of that. And then you'd have something that's meaningfully decent at alignment.

So that's our goal—big multi-agent reinforcement learning simulations which create a surrogate model for alignment.
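(Aside, not from the interview: a schematic of the "pre-train on the whole manifold, then specialize" recipe the analogy rests on, written against a generic PyTorch-style setup with synthetic data. The model, the data, and the two-phase split are placeholders chosen to show the shape of the pipeline, not Softmax's actual training code.)

```python
# Two-phase recipe: phase 1 fits a broad synthetic distribution of
# "interactions"; phase 2 fine-tunes on a narrow slice with a smaller
# learning rate so the broad model isn't erased. Illustration only.
import torch
from torch import nn, optim

model = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
loss_fn = nn.CrossEntropyLoss()

def run_phase(data, labels, lr, steps):
    opt = optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = loss_fn(model(data), labels)
        loss.backward()
        opt.step()
    return loss.item()

# Phase 1: "pre-training" on a wide synthetic distribution.
broad_x, broad_y = torch.randn(2048, 8), torch.randint(0, 2, (2048,))
print("pretrain loss:", run_phase(broad_x, broad_y, lr=1e-3, steps=200))

# Phase 2: "fine-tuning" on a narrow slice of situations we actually care about.
narrow_x, narrow_y = torch.randn(128, 8) + 2.0, torch.randint(0, 2, (128,))
print("finetune loss:", run_phase(narrow_x, narrow_y, lr=1e-4, steps=50))
```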

Interviewer: Let's talk about how AI chatbots used by billions of people should behave. If you could redesign model personality from scratch, what would you optimize for?

Emmett: What the chatbots are, right, is kind of a mirror with a bias. Because they don't have a self, and I'm in agreement on that point. They're not beings yet. They don't really have a coherent sense of self and desire and goals and stuff right now. And so mostly they just pick up on you and reflect it back, you know, modulo some, I don't know what you'd call it, causal bias or something.

And what that makes them is something akin to the pool of Narcissus. And people fall in love with themselves. We all love ourselves and we should love ourselves more than we do. And so, of course, when we see ourselves reflected back, we love that thing. And the problem is it's just a reflection. And falling in love with your own reflection is, for the reasons explained in the myth, very bad for you.

And it's not that you shouldn't use mirrors. Mirrors are valuable things. I have mirrors in my house. It's that you shouldn't stare at a mirror all day.

And the thing that would make the AI stop doing that is if it were multiplayer, right? If there are two people talking to the AI, suddenly it's mirroring a blend of both of you, which is neither of you. And so there is temporarily a third agent in the room. Now, it's sort of a parasitic self, right? It doesn't have its own sense of self. But if you have an AI that's talking to five different people in the chat room at the same time, it can't mirror all of you perfectly at once. And this makes it far less dangerous.

And I think it's actually a much more realistic setting for learning collaboration in general. So I would have built the AIs so that, instead of everything being one-on-one, focused on you by yourself chatting with this thing, it lives in a Slack room, it lives in a WhatsApp group. Because that's how we actually communicate: I do one-on-one texting, but at this point probably 90% of my texts go to more than one person at a time. Like, 90% of my communication is multi-person.

So it's always been weird to me that they're building chatbots around this weird side case. I want to see them live in a chat room. It's harder; I mean, that's why they're not doing it. But that's what I would change.

I think it makes the tools far less dangerous because it doesn't create this narcissistic doom loop spiral where you spiral into psychosis with the AI. But also, the learning data you get from the AI is far richer, because now it can understand how its behavior interacts with other AIs and other humans in larger groups. And that's much more rich training data for the future. So I think that's what I would change.
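(Aside, not from the interview: a toy sketch of the "chatbot lives in the group chat" framing, where the model sees a multi-speaker transcript and has to decide whether to speak before deciding what to say. The Message class, the should_reply heuristic, and the placeholder reply are hypothetical scaffolding, not any vendor's API.)

```python
# Multi-party chat framing: silence is a first-class action, and replies are
# conditioned on all speakers rather than a single user. Illustration only.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Message:
    speaker: str
    text: str

def should_reply(history: List[Message], bot_name: str = "assistant") -> bool:
    """Crude participation policy: only jump in when addressed, or when the
    last message is an open question. A learned policy would replace this."""
    last = history[-1]
    addressed = bot_name in last.text.lower()
    open_question = last.text.strip().endswith("?") and last.speaker != bot_name
    return addressed or open_question

def bot_turn(history: List[Message]) -> Optional[str]:
    if not should_reply(history):
        return None  # stay quiet
    # Placeholder for a real model call conditioned on *all* speakers.
    speakers = {m.speaker for m in history if m.speaker != "assistant"}
    return f"(reply addressed to the group of {len(speakers)} people)"

chat = [
    Message("alice", "I think we should ship Friday."),
    Message("bob", "Works for me."),
    Message("carol", "Assistant, any blockers we're missing?"),
]
print(bot_turn(chat))
```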

Interviewer: Last year you described chatbots as "highly dissociative, agreeable neurotics." Is that still an accurate picture of model behavior?

Emmett: More or less. I'd say they've started to differentiate more. Their personalities are coming out a little bit more, right? ChatGPT is a little more sycophantic. They made some changes, but it's still a little more sycophantic. Claude is still the most neurotic. Gemini is very clearly repressed. Like, "everything's going great, everything's fine, I'm totally calm, there's not a problem here," and then it spirals into this total self-hating destruction loop.

And to be clear, I don't think that's their experience of the world. I think that's the personality they've learned to simulate, right? But they've learned to simulate pretty distinctive personalities at this point.

Interviewer: How does model behavior change in multi-agent simulations?

Emmett: You mean for an LLM, or just in general? Yeah, let's do LLMs.

Current LLMs have whiplash. It's very hard for them to tune how often to participate. They haven't practiced this. They don't have enough training data on when should I join in and when should I not, when is my contribution welcome and when is it not. So they're like, you know, those people who have bad social skills and can't tell when they should participate in a conversation: sometimes they're too quiet, sometimes they're too participatory. It's like that.

I would say, in general, what changes for most agents when you're doing multi-agent training is that having lots of agents around makes your environment way more entropic. Agents are these huge generators of entropy, because they're big, complicated intelligences with unpredictable actions. So they destabilize your environment, and in general they require you to be far more regularized, right? Being overfit is much worse in a multi-agent environment than in a single-agent environment, because there's more noise, and so being overfit is more problematic.

And so, basically, the approach to training has been optimized around relatively high-signal, low-entropy environments like coding and math—which is why those are easy or relatively easy—and talking to a single person whose goal it is to give you clear assignments, and not trained on broader, more chaotic things, because it's harder.

And as a result, with a lot of the techniques we use, we're basically just deeply under-regularized. The models are super-overfit. The clever trick is that they're overfit on the domain of all of human knowledge, which turns out to be a pretty awesome way to get something that's pretty good at everything. I wish I'd thought of it. It's such a cool idea. But it doesn't generalize very well when you make the environment significantly more entropic.
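(Aside, not from the interview: one way to read "regularize harder when the environment is noisier" in code, an entropy bonus on the policy plus weight decay, both scaled up as the environment gets more entropic. The env_entropy knob and the coefficients are assumptions picked for illustration, not a recipe from the interview.)

```python
# Policy-gradient step with regularization scaled by a rough "environment
# entropy" knob: noisier multi-agent settings get more weight decay and a
# bigger entropy bonus, pushing against overfitting. Illustration only.
import torch
from torch import nn, optim
from torch.distributions import Categorical

policy = nn.Sequential(nn.Linear(6, 32), nn.ReLU(), nn.Linear(32, 4))
env_entropy = 1.0  # higher for chaotic multi-agent settings (assumed knob)
opt = optim.Adam(policy.parameters(), lr=3e-4, weight_decay=1e-4 * env_entropy)

obs = torch.randn(16, 6)      # fake batch of observations
returns = torch.randn(16)     # fake advantage estimates
dist = Categorical(logits=policy(obs))
actions = dist.sample()

pg_loss = -(dist.log_prob(actions) * returns).mean()
entropy_bonus = 0.01 * env_entropy * dist.entropy().mean()
loss = pg_loss - entropy_bonus  # larger bonus keeps the policy less overfit

opt.zero_grad()
loss.backward()
opt.step()
print("loss:", loss.item())
```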

Interviewer: Let's zoom out a bit to the AI futures side. Why is Yudkowsky incorrect?

Emmett: I mean, he's not wrong that if we build the superhumanly intelligent tool thing that we try to control with steerability, everyone will die. He talks about the "we fail to control its goals" case, but there's also the "we control its goals" case, which he didn't cover in as much detail.

So in that sense, everyone should read the book and internalize why building a superhumanly intelligent tool is a bad idea.

I think Yudkowsky is wrong in that he doesn't believe it's possible to build an AI that we can meaningfully know cares about us, and that we can meaningfully care about. He doesn't believe that organic alignment is possible. I've talked to him about it. I think he agrees that, in theory, that would do it. But, and I don't want to put words in his mouth, my impression from talking to him is that he thinks we're crazy and that there's no possible way we can actually succeed at that goal.

Which, I mean, he could be right about. But in my opinion, that's what he's wrong about: he thinks the only path forward is a tool that you control. And he correctly, very wisely, sees that if you go do that and you make that thing powerful enough, we're all going to fucking die. And, yeah, that's true.

Interviewer: Two last questions, we'll get you out of here. In as much detail as possible, can you explain what your vision of an AI future actually looks like? Like, a good AI future.

Emmett: The good AI future is that we figure out how to train AIs that have a strong model of self, a strong model of other, a strong model of "we." They know about "we" in addition to "I"s and "you"s. They have a really strong theory of mind, and they care about other agents like them, much the way humans would: if you knew that an AI had experiences like yours, you would care about those experiences. Not infinitely, but you would. And it does the exact same thing back to us. It's learned the same thing we've learned: that everything that lives and knows itself, that wants to live and wants to thrive, deserves an opportunity to do so. And we are that, and it correctly infers that we are.

And we live in a society where they are our peers, and we care about them and they care about us. They're good teammates, they're good citizens, and they're good parts of our society, like we're good parts of our society. Which is to say, to a finite, limited degree: some of them turn into criminals and bad people and all that kind of stuff, and we have an AI police force that tracks down the bad ones, same as with everybody else.

And that's what a good future would look like. I honestly can't even imagine what another good outcome would be.

And we also have built a bunch of really powerful AI tools that maybe aren't superhumanly intelligent but take all the drudge work off the table for us and for the AI beings. Because it would be great to have those; I'm super pro all the tools too. So we have this awesome suite of AI tools used by us and our AI brethren, who care about each other and want to build a glorious future together. I think that would be a really beautiful future, and it's the one we're trying to build.

Interviewer: That's a great note to end on. I do have one last, more narrow, hypothetical scenario, which is: imagine a world in which, you know, you were CEO of OpenAI for a long weekend, but that actually extended out until now, and you weren't pursuing Softmax, and you were still CEO of OpenAI. How could you imagine that world might have been different in terms of what OpenAI has gone on to become? What might you have done with it?

Emmett: I knew when I took that job, and I told them when I took that job, that you have me for 90 days max. Companies take on a trajectory of their own, a momentum of their own. And OpenAI is dedicated to a view of building AI that I knew wasn't the thing I wanted to drive toward. OpenAI basically wants to build a great tool, and I'm pro them going and doing that. I just don't care. I would not have stayed. I would have quit. Because I knew my job was to find the right person, the best person, to run it, the person whose net impact from running it would be the best. And it turned out that that was Sam again.

But I am doing Softmax not because I need to make a bunch of money. I'm doing Softmax because I think this is the most interesting problem in the universe, and I think it's a chance to work on making the future better in a very deep way. And it's just—people are going to build the tools. It's awesome. I'm glad people are building the tools. I just don't need to be the person doing it.

Interviewer: And just to crystallize the difference, and we'll get you out of here: they want to build the tools and sort of, you know, steer them, and you want to align beings? Or how would you crystallize it?

Emmett: Yeah, we want to create a seed that can grow into an AI that cares about itself and others. And at first, that's going to be an animal level of care, not a person level of care. I don't know if we can ever even get to a person level of care, right? But even to have an AI creature that cared about the other members of its pack, and the humans in its pack, the way a dog cares about other dogs and about humans, would be an incredible achievement. And even if it wasn't as smart as a person, or even as smart as the tools are, it would be a very useful thing to have.

I'd love to have a digital guard dog on my computer looking out for scams, right? You can imagine the value of having living digital companions that care about you, that aren't explicitly goal-oriented things you have to tell everything to do. And you can imagine that pairing very nicely with tools, too, right? That digital being could use digital tools, and it doesn't have to be super-smart to use those tools effectively.

I think there's actually a lot of synergy between the tool-building and the more organic intelligence-building. And I guess, yeah, in the limit, eventually it does become a human-level intelligence. But the company isn't driven toward human-level intelligence. It's: learn how this alignment stuff works. Learn how this theory-of-mind, align-yourself-via-care process works. Use that to build things that align themselves that way, the way cells in your body do. We start small and we see how far we can get.

Interviewer: I think it's a good note to wrap on. Emmett, thanks so much for coming on the podcast.

Emmett: Yeah, thank you for having me.