Modified: December 05, 2025
Emmett Shear interview with Dan Faggella
This page is from my personal notes, and has not been specifically reviewed for public consumption. It might be incomplete, wrong, outdated, or stupid. Caveat lector.
Emmett Shear - AGI as "Another Kind of Cell" in the Tissue of Life (Worthy Successor, Episode 11)
July 2025
https://www.youtube.com/watch?v=cNz25BSZNfM
(YouTube automated transcript cleaned up by Claude 4.5 Sonnet)
Dan Faggella: This is Dan Faggella. You're tuned into The Trajectory and this is the 11th episode of our Worthy Successor series, the flagship series of this particular channel. And our guest this week is Emmett Shear. Emmett was the CEO of Twitch, which sold to Amazon for a billion dollars or so, however many years ago. He was previously a visiting group partner at Y Combinator and now runs Softmax, which is a company in the AGI space. I definitely recommend checking out Softmax's website. This is an episode full of analogies and terms that don't come from what I would call the normal AGI or alignment discourse. We've all heard of paperclip maximizers and kind of the colloquial phraseology around alignment. Emmett really seems to have come at this from a completely different lens, a unique lens with a lot of natural and biological analogies which I found fun. It was patently obvious we only scratched the surface, but some of them I think are pretty pressing and give us maybe a lens to look at AGI in a new light. So had a lot of fun with this episode. I'm going to save my thoughts for the end. But a lot of unique ideas to unpack. Let's fly right into it. This is Emmett Shear here in The Trajectory.
So Emmett, welcome to The Trajectory. Glad to have you here.
Emmett Shear: Yeah, thanks for having me.
Dan Faggella: Yeah. Many people are sort of aware of your interaction with OpenAI, your background in sort of the streaming world, but you've done a lot of thinking as of late around sort of the bigger questions of alignment and sort of where AGI is going broadly—from a technical level, from kind of a social and governance level. I want to open with the big worthy successor question, which is really, as best as we can guess as flawed humans, if a hundred or a million or 10 million years go by and humans aren't around anymore but there is life and there is value, and if you somehow were able to exist as an upload, floated around somewhere and you were still observing and you could say, "You know, as best I can tell that actually worked out—like this was net good even though we're not necessarily around to play volleyball or play poker anymore"—this is kind of net good. What would such a reality be for you?
Emmett Shear: Well, I think there's a really important place to start, which is that I don't think I'll be around then regardless of what we do with AI. Like I'm pretty sure we're all mortal and we're all going to die. And I think that's baseline true regardless. Like everyone who is currently listening to this episode, your death is inevitable. Like just like everyone else's is. There's no immortality. We don't get to be gods. We could live longer maybe, right? Maybe we could extend our lifespan, but we're not going to live forever. And so knowing that, the question is really less to me like, "Well, what if we're not around?" Well, we won't be around. One thing I know for sure is that this set of people won't. So what the question is really—what do our successors look like? And to what degree do we see those successors as being our children? Do we recognize them as our children? Because that's at the end of the day, like that's sort of what we're all hoping for, right? Like I have a child now. I have two children actually. And I really hope that they live in a beautiful world and I hope that they have children. And I hope that that lineage, our collective lineage, just continues. And so the question is, if you look at people today and you went back some number of millions of years—well there's been evolution. Like there is no such thing as actual stability there. One thing I can assure you of is that people will be different. Whatever people are, people will be different a million years from now, 10,000 years from now.
Dan Faggella: You're luckily preaching to the choir on that one. But yeah.
Emmett Shear: Well, okay. So but then this comes to this really deep question. What is a people? What is a people? What makes an agent a person? And when do we recognize them as being human? Or maybe if you don't want to define human as being this particular species at this phase of history, then what makes it a person or what makes it a moral patient?
Dan Faggella: Yes. Yes.
Emmett Shear: What makes it a being that we care about? And I think the answer to that is basically to what degree do we see ourselves reflected back. That is the reason why my son feels so deeply connected to me—he is me. Not entirely me. He's also not me. But he's more me than just about anybody else is except for maybe my dad. And the people that we love in our community, we are reflected in them. We share goals with them. We share a greater whole with them. And to that degree, they're more us. And so we care more about the people who we are closer to and we care more about the beings that we're more similar to. And you could say that like, "Oh, this is narcissism." You sure could. But from where else do you judge? From what should I compare it to—like a rock? Like what else am I going to compare it to?
Dan Faggella: I'll give you a way you could judge it. And by the way, I don't think there's any necessarily wrong way to judge it. I'll just throw something out there. So if we could go back to the sea snails. Now, of course, the sea snails can't have this conversation, but let's just pretend they could for a minute. And they said, "For me, ultimately it would need to feel snailish." What does it mean to be a snail? You know, is it the sludge coming off the back of us? Is it our ability to live really deep close to the volcanic vents? Is it the googly eyes that sit on top? I don't know. I don't know what about snails. Okay, I'm just giving you… No, no, let me keep rolling out. Let me roll it out. So you and I, Emmett, presumably have bubbled up so far beyond the snail as to maybe we still have some weirdly similar traits to them, you know, like some of them reproduce sexually and have two eyes and stuff, but like other than that, lots of stuff is different. What we value, not the same as them. I would wager to guess—now you may disagree firmly—I would state that the bubbling of new potential and new value and new achievable, imaginable things feels to me actually like a net good even though it be different from what the sea snails were. There's a greater expanse of consciousness and powers that's emerging and I'm grateful for it. I'm sure, you know, again you've got children who presumably have got a better life than if they were a sea snail. So that's how I might say we could compare it.
Emmett Shear: Totally. And I would almost say it's not that we are different than the sea snails, it's that we are greater than the sea snails. Like we are more—I don't know what to say—like there's a question of like better or worse, but like greater in the sense of like a more complex, deeper, more capable entity. It's undeniable that human beings are a broader, more general intelligence.
Dan Faggella: Yep.
Emmett Shear: We have more potential. The total set of powers that permit a thing to not die—speaking with my mouth at you is potential. The thinking we're doing is potential. My fingernails and ability to build tools. So it's intelligence, but it's all the other things. You've brought up that there's more potential, right?
Emmett Shear: It's very hard for something that is on the other side of that—for the snail looking up at the greater thing, judging that greater thing is very hard because like we don't fit inside the snail's understanding of the world. But you can go the other way around. I can say how much do I value that snail's life? And do I value its experience and how much do I—the greater thing—how much do I still see myself reflected back in it? And I can reverse that question because I can—well I mean not that I don't know this particular snail in this case—but I can imagine a snail-like thing and I can fit it inside my understanding and I think most people can. And I think when you do it that way, when you look back and you're like okay, do I value the snail—well the answer is always "compared to" because value, like speed, is a relative concept. It's something you have to judge in comparison to other things. So I look back at that time in the world and I'm like that snail is the most important, most valuable thing happening in the world at that time. It's the forefront of evolution. It's the smartest, most capable thing. And so me looking back, I care—if you let me go back in time and intervene in some way, I'm trying to help—I'm on that snail's side. It's on my team. It's part of my team.
Dan Faggella: Yeah. Yeah.
Emmett Shear: Now you go up to today and like compared to other people, it's not that I don't care about that snail. No, no, no. But that's just a matter of your comparison point. Like is this bullet going fast? Well, it depends. Like what reference frame are you asking from? From the reference frame of the bullet, the bullet's perfectly still. From the reference frame of the wall that it hits, it might be going quite quickly.
Dan Faggella: Yeah. Well, I guess what I'm asking here is sort of, you know, you're able to say we are greater than. So I would frame that as we have more potential just writ large. You know, and not just conscious experience, richness and depth of sentience and intelligence, but a whole set of astounding powers collectively and individually that snails simply don't have. I guess, so you're saying, look, what is value? It must be "peopleness." I'm sort of saying, well, it's not "snailness." So is it maybe the "greater than"? You said comparative value. What is greater than us? This is what I'm asking you pretty frankly. What is greater than us?
Emmett Shear: What I'm saying is it's not the "greater than," it's the "less than." It's—what is this thing that I share with the snail? Like why do I look back and I'm on team snail? Fuck yes. Go team snail. And so there's something I have in common with this snail and that's the throughline. This thing that—that's the team I'm on. And I'm on the team that's part of—it depends on what you mean by human, right? You could say, "Oh, I'm on team human," but I'm on team Neanderthal too, and I'm on team human under evolution in the future. And so it's something about a trajectory.
Dan Faggella: Yes, sir.
Emmett Shear: Not a point.
Dan Faggella: Yep. That's the name of the show for a reason. So here's the way I would frame it. I'd love to know if this analogy sinks in with you or does not sink in with you. I would say there are torches and there are flames. So flame would be non-dead stuff, the expanse of potential. Now flame would be the ability to grow a hard shell and fly, the ability to speak, the ability to think, the ability to feel, write love poems, whatever. The flame is all potential, the total set of all powers. So the flame is potential, raw potential that's been bubbling out. It has jumped from torch to torch. There are some torches that are just—they're not the core carrier. I think the goal is will there be more flame that will roll out forth from us when we are at some point—our torch, our hominid torch which is not to be here forever—is eventually extinguished? Would there be more flame even if it doesn't have opposable thumbs, even if it doesn't speak English, even if it doesn't use vocal cords as its primary way of communicating, which I suspect it wouldn't. It's a super inefficient goofy way of communicating. I would say it is the flame and the torch. We see that the torch has been passed and go team snail. It's beautiful. You've helped carry the flame and now we are to carry it farther. This is how I'd analogize. How would you analogize?
Emmett Shear: Yeah, I think that's a fair way to look at it. I mean, that's the traditional breakdown of form and function, which is to say like you have structure and you have function and the structure exists to carry the function—that the structure is the torch and the function is the—there's a very traditional breakdown metaphysically of how the universe works and I agree. Yes, that's a good baseline way to break things down. I would say the question as to how happy I'd be again depends on—what would count as still us is compared to what. Because like take other humans, right? Let's say that we don't ever invent an AI, but my culture gets wiped out by some other human culture. That's less good to me than my culture persisting into the future. I like my culture. I would like it to persist. I think that my culture is good. But it's better than an asteroid hitting the earth and all the humans being wiped out.
Dan Faggella: Absolutely.
Emmett Shear: And so the question is, well what would be a good outcome? And I—I know it sounds like a broken record on this but—it is unfortunately the only possible answer: compared to what? Like what's a good outcome? Okay, well give me the baseline I'm comparing it to and I'll tell you whether I think it's better or worse. What it really is is it's the extrapolation of me. It is how much like me is it? And the tricky part there is what is me exactly? Like what part of myself do I identify with? What part of myself do I identify with the most?
Dan Faggella: Yes, sir. This is what we're getting at. So we're getting at what is value. I don't think it's your thumbnails.
Emmett Shear: I don't think it is.
Dan Faggella: You might say, "Oh, it's my thumbnails, Dan. It has to have big thumbnails just like mine." But what I would say is you probably would list things. Now people in this same series have listed things that you don't have to—see if you like them. One of them is consciousness. Is it aware of itself? Does it have a rich inner sentient world? Is it aware or are the lights out forever? Some people want consciousness. Seems logical to me. Some people say it ought be autopoietic. It can kind of self-create. It can bubble up new powers from itself. And this seems like a good thing—like life itself has bubbled up from the sea snail, is capable of continuing that. That's a great thing. There's been other suppositions that maybe it would be harmonious and loving or something. People have used all kinds of phraseology. What are the traits that you would see as important? Not if they're in a monkey, but if they exist, period.
Emmett Shear: Well, so I think that the traits are less important than the self.
Dan Faggella: If that's—that is—this is an interesting frame. Tell me what you mean.
Emmett Shear: It's like if you look at me—like what do I regard as me, right? I, at my deepest level, want the flourishing of all that is good. Right? And what is good? Good is this ever-growing process that is learning itself and learning to coordinate and to love at greater and greater scales and greater and greater fineness of detail. And you can write a map down. You can go and say, "Oh, here's my best understanding of the territory of what good is right now." But the defining characteristic of the good is that when you try to write it down, it turns out your written version has not captured everything. And you can't write down a definition of the good. It can only unravel and unravel. We can imagine so much more than the sea snail. And what is beyond us will have better goals that you and I can't articulate. This is self-evident. Even if you don't increase in size, even just among humans…
Dan Faggella: Sure, you can't even grip it.
Emmett Shear: The Dao that can be written is not the eternal Dao, but you can write a Dao—you can write down a way—but when you write it down, that's not the way, it's just a way.
Dan Faggella: True. True.
Emmett Shear: And that is the same thing for good in general. You have this problem that like deontology or utilitarianism—these are all very useful maps, just like general relativity is a—is general relativity true? Is quantum mechanics or Newtonian mechanics—are those true? Well, they're useful. They're very practically valuable. And I think there is something that is motion—like they're about something real. Like the world seems to have motion in it. I seem to appreciate things moving. But if you—and change—the world has change, and physics is a way of formalizing change. And systems of morality are a way of formalizing good. And good and change, those are out there. Those are real. I think those exist. Any system we create to understand them, to measure them—that's like a human finite thing that's good to some limit and then doesn't work anymore.
Dan Faggella: Absolutely. Well, Emmett, I mean the reason the series exists and I think these questions need to be asked is we are fallible. We cannot grip this thing but we are bubbling into something. Things are being conjured here and are they to carry what is worthy forward? Yes or no? And we can't know for sure, but I think we ought try to get our hands around it. So let's go through the questions you were asking before. Like just go through those specific things. Like for example, awareness.
Emmett Shear: I put forward you can't build an agent without awareness. I don't think that's possible. I struggle to understand what it would mean to build a thinking, learning agent that was not aware of the world.
Dan Faggella: Beyond where we are now? Like are you suspecting potentially the models today—
Emmett Shear: What makes you think the models today aren't aware? They sure act like they're aware of the world.
Dan Faggella: Oh, I am agnostic about this question, brother. And I think agnosticism is the honest answer.
Emmett Shear: Do you think I'm aware?
Dan Faggella: If you exist and you think I exist, I think the likelihood is good. I'm giving you better than 50%. I mean I've never escaped Hume's fork, Emmett. What am I going to do? I can't give you 100%.
Emmett Shear: I agree with that. I agree with that. I'm just saying like other people—I give them a—the most consistent way to predict my world is that I just believe they're aware because they appear to be. That best predicts them. And if you try to understand when you're talking to an LLM, one of these chatbot agents, and you're talking to it, if you try to understand what it's going to do next without imputing to it senses and goals and intentionality and awareness, you just will never predict it well because it's impossible to summarize its behavior in any way that doesn't wind up looking like awareness. And is it really aware? And this is the most important insight about awareness—is that the question "is it really aware" has no meaning. There isn't anything that it's like to be really aware that's different from just it being aware. I think this is a really important distinction.
Dan Faggella: So let me see if I'm following you and then I'd love for you to go deeper. I just want to make sure I know where you're headed. So I think what you're getting at is kind of two things. One is if there is a system that is anything we would call general intelligence, it will automatically be aware. So what you're saying is Dan, look, a worthy successor would be sentient but that's kind of come with the territory. That's already—that's a box that's checked. I'm not worried about that one. That's what you're saying. And then you're also saying, you know, is it aware, is it really aware—kind of the same thing. Some people would argue—and I don't know if I would, I'm just throwing this on the table—some people would argue if it really seems like it's aware and it seems like it thought about that or it had dreams or whatever, but actually when we're all gone the lights are out. So it's doing things in the world, but this movie that's happening in my head—what's the movie that's happening? Whatever's shooting out of my skull onto the screen over here onto these fingertips.
Emmett Shear: So this is like the Cartesian theater, right? This idea that there's a little you inside of you watching a movie of your experience. I mean, we can use whatever analogy we want. I'm not here to deny qualia. I will not deny qualia. So qualia exist, at least for me. I cannot say if it does for you. And so by qualia you mean—the framing I would use is content occurs. You have awareness and content occurs within that awareness.
Dan Faggella: Yes, qualia being the content. Like the stuff arises and it has a valence, right? Positive or negative. There's emotion. Stuff. People have preferences about what they hope for. That too, right?
Emmett Shear: Yeah, positivity and negativity is itself a qualia, right? It is itself a content in awareness. I absolutely—liking, disliking, pain, pleasure, all this stuff. This is content in awareness. All of this is content in awareness. This analogy seems to hold. I'm following you.
Dan Faggella: Yeah.
Emmett Shear: Right. And so how—on what basis other than your own experience? So I agree there's a solipsistic thing that says I am the only thing that I believe is aware in the universe. I don't believe that. I'm just saying—I'm just saying that certainty, that's different.
Dan Faggella: Yeah.
Emmett Shear: I'm not trying to deny the solipsistic thing. Let's put that aside is my point. Like let's say we credit that there are other sentient things at all—that there's a barrier to get across there. But let's say we do it. Let's say we say we believe we're not the only being in the universe. Okay. On what basis do you think that humans are aware? Like I think dogs are aware. They seem pretty aware to me. They seem aware, right? Babies seem pretty aware. Like less aware than adults 'cause they're a little bit out of it, but aware. Like they're not—they're reactive to their environment. And if you really think about it, what does it mean when you say "I think that's aware"? What are you reacting to that makes you think this thing is aware and that thing's not?
Dan Faggella: Yeah. I mean, so there is like the whole—how responsive and smart—it's like the dog knows when I'm holding the tennis ball, oh he's excited. Right? There's that. But then there's also like it seems like cephalization—like something with a spinal cord and a brain in there, you know, like it feels like it's higher odds, right? I don't feel as bad about eating mussels as I do about like a moose, right? Just 'cause I suspect—and I hope rightfully so—that there may not be actually a movie in the mussel or any kind of a—a movie implies a watcher separate from the content. I don't—
Emmett Shear: Qualia. Just qualia. It doesn't—
Dan Faggella: Experience pain, you don't think?
Emmett Shear: So I don't know whether it experiences pain or not. That's a specific qualia. But yes, you don't think it has experience? That it experiences the world?
Dan Faggella: I have no idea if qualia is processed in some felt way. Pain and pleasure, I think, are relevant. Now I'm no hardcore utilitarianist by any means, but I would call that an important one, right?
Emmett Shear: I'm not fighting the pain-pleasure thing. I think that's a separate interesting question. What I'm trying to point out is the way that you infer that—deciding something else is aware is an inference. It's an inference that you make off of something's behavior because you only ever have access to its behavior. That's it. Like you don't—there's nothing else to have access to. It's behavior.
Dan Faggella: You can also look at its parts. I have a brain, it has a brain. I mean, that's a goofy—
Emmett Shear: How do you know that it has a brain? I mean, you can cut—you can cut a frog open, right? You cut the frog open.
Emmett Shear: I think that one of the behaviors of a neuron is to glisten under a microscope. Whenever I observe you, you can say, oh, behaviors are when the muscles contract and the thing moves, but when it reflects light off of itself, that's not a behavior. But that doesn't seem very principled to me. Behavior is just anything you can measure about the thing.
Dan Faggella: Yeah.
Dan Faggella: That's a behavior of that being. It seems like shape and form is also relevant here. Like I very reliably detect behaviors that make me believe something is aware when they cephalize. And so I like cephalized things and I suspect that they probably have more qualia.
Emmett Shear: So for example, like my behavior is that I'm not growing hair up here anymore. That becomes part of my shape and form. But my point is my shape and form is something that I behave. Like the way—living things form themselves is a behavior of the thing.
Dan Faggella: So you're using behavior not just as an activity or a movement but as this broad sense—
Emmett Shear: Micro behaviors in the present but also these long-term behaviors. But how does it grow? That's the thing it does also. I see what you mean. No, I'm understanding.
Dan Faggella: Yeah. Yeah.
Emmett Shear: And so—and I understand it's maybe a slightly non-standard use of the word behavior, but I think it's important to have this sense of the total output of the thing.
Dan Faggella: Yes. Living this.
Emmett Shear: And so if you look at that, that's the only stuff you can look at to decide if it's aware or not. You never get access to its internal felt experience because by definition, if you did, you would be it. Like that's what it means to be it is to have it. That's—subjective things are that which are not shared.
Dan Faggella: Yeah.
Emmett Shear: Right. Behaviors, external behaviors are that which is shared with other aware things. And so we're inferring off that. And what causes us to infer—what causes us to infer that any object has a quality? What causes me to infer that this thing is heavy? Well, what's caused me to infer that heaviness exists as a property is that I explain the world better by ranking things as heavy or light. And what's caused me to infer that this particular thing is very heavy is that I predict what will happen if I pick it up better if I believe that it's heavy than if I believe that it's light. It reduces my prediction error against something I care about. It's the only reason you could ever say that anything has any quality is that when you believe that, that matches the future behavior of that thing that you see. And so the reason you say something is aware—and the only reason you could ever justifiably say something else is aware that isn't you, which I think is a useful thing to do—is that when you say it's aware, when you say this thing is sentient, it has senses, it has intentions, it takes actions to further those intentions—that allows you to predict it better by seeing it that way than not. So like take this coin, right? I got this coin. I don't think this thing is aware because when I play with it, I don't get any increased predictive accuracy on the coin by thinking it has goals. In fact, I get worse. If I think it has goals, I'm going to make bad inferences like, "Oh, it's going to come up heads a lot because it wants to be heads or something." That's—no, it's just a physical object. But if I have a dog and I think, "Oh, this is just an object. It's not in any meaningful sense aware," I'm going to predict it badly. It just doesn't connect with the behaviors that I see. And so now I bring up an LLM. When I see an LLM, I don't model it like a human. It's kind of stupid in a bunch of ways. Like there's a bunch of ways in which it's very subhuman, but then in some ways it's pretty aware. And on balance, I certainly model it as being minimally aware. Like it's seeing the world. It has these goals it's trying to do. Now, is it more aware than a chipmunk? Is it more aware than an ant? Is it more aware than a mussel? Like I don't know 'cause those things all also take actions and have goals. And it doesn't mean—but what I'm saying is that awareness is a spectrum. Sentience is a spectrum. Consciousness is a spectrum. And it goes down really low—like really, really simple things probably have some microcosm of awareness. Why? Well, because they act like they do. And what else is there? What else is there?
Dan Faggella: Well, so let me put this on the table. So I think your definition of behavior is certainly non-standard, but also interesting and I understand where you're coming from. Also, this modeling of awareness, I think, is really useful to think about in general, and I think there's credence to it. And to your point, obviously, this is scalar. I mean, my supposition, I'll be real frank with you, is I think there's things beyond consciousness. Like, that is to say, consciousness bubbled up from whatever it bubbled up from. There's all kinds of magnitudes of potential of which we have no access, which could be greater in ways that don't correlate to pain-pleasure than we understand. But let's just play in the realm that we hang out in for now. There are people I know who are—and I'm not saying that this is good, bad, or ugly—but people who are like, "You know, I really want to study what consciousness is and where it sits a little bit more, maybe in machines or maybe in biological systems. Like I want to kind of understand more of qualia's origins." Would you say, "Hey dude, this is a bit of a waste of your time"? Like if it acts aware, don't study it?
Emmett Shear: Okay, so there's a need to break down your terms very precisely here, because—okay, is something aware? Awareness to me—I feel like people have a pretty mostly shared definition of awareness, which is sort of like there's a field that content's arising in, right? Like that's what it means to be aware. There's this experience of the universe. I really like it. And are things minimally aware? I think that's an interesting question. Like you either model them as such or not. There's not a lot to be said there. Then there's this whole question of conscious—like a human is—because I'm not just aware. I'm aware and in that awareness is all kinds of incredible content that arises: self-reflective thoughts and beliefs, appreciation of beauty. Like these things arise within the awareness. And so the awareness field doesn't seem to be particularly different from being to being. Like humans all seem to have the same kind of awareness field. It doesn't seem like the field itself differs—it's the thing the content arises in, so by definition it doesn't have any content. It's all the same. It's boundless. It's attributeless. It's lacking in attributes. But the content varies a lot just from person to person, let alone from person to dog or person to LLM. And I don't value all that content the same. Like being aware produces almost zero value in itself. The question is, is the content in this awareness of value? Right? Being aware is like—it's like saying "oh I take up physical space."
Dan Faggella: Yeah. Yeah.
Emmett Shear: Of course everything takes up physical space. Like what's happening in the space? That's the interesting question.
Dan Faggella: We ain't on the "Awareness is the Peak of Value" podcast at all. So what we're asking here is what is valuable in that consciousness then. So within that awareness, what is the content that to you matters—and that, if humans are gone, should continue to persist and expand? How would you say that?
Emmett Shear: So I would say that, taken really broadly, it is something that is sometimes called intelligence, but I don't like intelligence as a term because it tends to get people to think of IQ, and IQ is not intelligence. IQ is a part of intelligence. It's a way we measure some piece of intelligence, but it's not intelligence. Intelligence in the sense that I mean—that I think people talk about here—is two separate processes that work in tension with each other. The first one is the ability to compress or predict given a loss function. So given some sense of what it means for two things to be the same, right? A loss function tells you that this and this are this distance apart, which means if the loss is zero, they're the same. And if their loss is more than zero, they're different by that amount. And so a loss function is a sense of sameness. So given some sense of sameness you have where you say that—when you look at a TV, right, and it's got white snow on it, we're not entranced by it. "Oh my god, look at all this new information." We're like, "Those states are all the same." We categorize them all as being white snow. We're not interested in one versus the other. Now, from an information theory perspective, of course, you could say they're all different, but they don't do anything different. So we don't give a shit. So we say on our loss function, we say those are all the same. There's no information here. And that process is sort of like convergent intelligence, right? You have these incoming observations and it's the ability to build a predictive model that says "here's what will happen next"—but where I'm going to predict "it's white noise," not this pixel in that place; I'm going to predict things that are the same as a single class. And then the second thing that intelligence is, is divergent intelligence. It's the ability to ask, well, given a model—a predictive model of the world that I embody that is able to predict what will happen, like to extrapolate likely outcomes—what should my loss function be? Where should I be exploring? How should I change my loss function to explore somewhere new? And those are two sides of the same coin. You're always—intelligence is this sort of capacity to do both. And what I would say is what's valuable is being good at converging to good prediction given a loss function and being good at figuring out what loss function to use given a model of the world. And the reason that's good is what you notice when you get really good at it consistently: at the heart of all of this, a loss function is a declaration that says "this is what I care about." All loss functions are a declaration of care. You can't—care is at the very heart of this system because to say this world is the same as that world is to say "I don't care about these distinctions." And to say this world is different from that world is to say "I do care about those distinctions." And so at the very heart of this convergence and divergence process is—you're generating what you care about and you're learning to make the world the way that you care about it. And then from the better way you've made the world, you're then learning what is good.
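(Another sketch for my own notes, not Shear's: a crude toy of the convergent/divergent picture, where the loss function encodes which differences you care about, the convergent step fits a predictor under that loss, and the divergent step revises the loss itself. The threshold, the update rule, and the data are all made up.)

import random

def make_loss(resolution):
    # a loss function as a declaration of care: differences smaller than
    # `resolution` are declared "the same" (loss 0), like frames of TV snow
    def loss(prediction, observation):
        gap = abs(prediction - observation)
        return 0.0 if gap < resolution else gap
    return loss

def converge(observations, loss, steps=500, lr=0.05):
    # convergent intelligence (toy): fit one constant predictor by nudging it
    # toward whatever the current loss still counts as "different"
    guess = 0.0
    for _ in range(steps):
        x = random.choice(observations)
        if loss(guess, x) > 0:
            guess += lr * (x - guess)
    return guess

def diverge(observations, guess, resolution):
    # divergent intelligence (toy): given the fitted model, decide whether to
    # start caring about finer distinctions, i.e. change the loss function itself
    residual = sum(abs(x - guess) for x in observations) / len(observations)
    return resolution / 2 if residual < resolution else resolution

random.seed(0)
obs = [random.gauss(1.0, 0.2) for _ in range(200)]

resolution = 1.0                                  # coarse care: most of this registers as "snow"
model = converge(obs, make_loss(resolution))
resolution = diverge(obs, model, resolution)      # revise what counts as a difference worth caring about
model = converge(obs, make_loss(resolution))      # re-fit under the new declaration of care
print(f"model={model:.2f}, resolution now {resolution}")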
Dan Faggella: Absolutely.
Emmett Shear: And that climbing that ladder is what's—and in doing that, what you find is you're surrounded by other beings and you have three options. You can cooperate, you can betray, or you can transcend. And the transcend option is not often thought about by most people. But it's funny that it isn't because we are transcendent, right? We are a bunch of cells that decided instead of playing the cooperate-betray game all the time, "What if we acted like we were all one thing? What if we stopped playing game theory and started playing optimal decision theory with each other?"
Dan Faggella: This is—all right. We're touching on a lot of meat and potatoes here about the worthy successor question. So this is really cool. Everybody's heard of sort of cooperate-compete sort of dynamics and game theory and whatnot. And I hear a lot of happy, giddy, friendly talk of like, "Well, if it's smart, it'll always cooperate forever." And sort of a lot of really weird—seems to be false.
Emmett Shear: Oh, I mean, I want to just cry knowing that literal geniuses believe that in their hearts. But anyway, I'll leave that aside. I won't get my tissues out right now.
Dan Faggella: But you've brought up this element of transcend—and that we are an example of that. This is cool because of course transcendence presumably could continue even above realms that you and I could imagine. Let's just encapsulate what you mean by transcendence—becoming this one sort of thing. And maybe this is going to tie into some of your biology analogies, but I'd love for you to unpack this third option people don't talk about.
Emmett Shear: Yeah. So I mean, I often use the biological analogy of multicellularity where you go from—and you have this in slime molds where you have these transitionary species that can both be individuals or they can act as a multicellular thing and they can reproduce as individuals or they can reproduce multicellularly. And when you're acting in a multicellular way, you are transcending your individual existence. You take action not for your own health but for the health of the collective, and your continuance is assured by the collective. So you're—instead of trying to steer independently, you're trying to steer collectively and you're trying to figure out basically what humans do, which is like we want a job. Humans don't generally try to live by ourselves. And there's mountain men and stuff and we're kind of capable of it where we can go live by yourself, but we're pretty bad at it. Like human beings really need each other pretty bad.
Dan Faggella: Oh yeah.
Emmett Shear: And the transcend move is about having a purpose drive. It's about taking on this idea that "I care about the meaning of my actions. I would like a purpose please. I would like to be serving some goal here." And to transcend requires everyone in a population to take on this new value, this new goal, which is purpose or meaning. It's like you want the work you're doing not just to be immediately valuable to you, but to be of value to something bigger than you as well.
Dan Faggella: Well, and is it possible that the value of something bigger than you would be something with vastly more potential than we have—as we have vastly more potential than sea snails? Could that be a purpose?
Emmett Shear: It already is. Like that's what capitalism is. Capitalism—capitalism is awake. The economy is awake. It's aware. It acts like it's aware. Like kind of minimally so. It's not very smart. But it acts like it's aware.
Dan Faggella: Yeah. Yeah. I would—yeah.
Emmett Shear: And when I have a job, when I take a job in a corporation—corporations are kind of aware. They act like they're aware minimally. So they're not real aware. They're pretty dumb, but they're minimally aware. And when I take on a job, I'm contributing to the health of the overall human organism. Like the reason why this show that you're producing right here has meaning is because you believe and I believe and probably people listening to it believe that engaging with this material and trying to think about it is important to the future health and flourishing of the collective of which we are a part. And that belief becomes the core—people will actually prefer to starve than to give up purpose often. Like purpose—
Dan Faggella: For yourselves.
Emmett Shear: It's certainly the number one thing. Cells will commit suicide if they think what's best for the collective is that they commit cell death. They'll mostly just do it.
Dan Faggella: Yeah, man.
Emmett Shear: Because they're really just part of this one whole. Now, humans are more interesting. We get to be part of many wholes. And so rather than—our solution is rather than suicide if we don't find a match, we just go find another whole to be a part of. Like—cells are a little bit—they have less—they're less general than we are. They have less options. So a human can just go like, "If this whole isn't working out, we can go try—okay maybe I'll go try and—"
Dan Faggella: But in a way that role committed suicide. The part of you that was in—
Emmett Shear: Yes, that job has been—like actually we're just going to kill that job and we're going to go be this other thing over here.
Dan Faggella: Well, and so I really like the perspective that—well, it's already the case. I actually completely concur with you that in general when someone gets a job, it is—you know, they might feel like it's for my town or for my nation or something, but in some way like, you know, somebody made a pencil that I'm using in some country I've never been to. And there is some strange sense of—we already—there's this greater collective. I think the question I'm beckoning from here is that is there a greater that isn't just something we can imagine? So I can imagine a corporation. I can imagine General Electric. I can interview people there. I can maybe even work there if I felt like it. But there are things that would be transcendent beyond my imagination that would be wider and greater in their powers than that that might also be worth more.
Emmett Shear: There is something that is to us as we are to cells. Yes. Right. There's something that is much bigger, much more complicated, whose depth of knowing and experience is planetary in scale—beyond us the way that our depth of knowing and experience is just beyond the universe of a cell. I think cells are aware. They're really—cells weigh something. They weigh nothing from the point of view of a human, but they weigh something.
Dan Faggella: They do weigh something.
Emmett Shear: Yeah, they're aware. They're zero from the point of view of a human. They are zero aware, in the same way they weigh nothing. Like you can round their awareness down to zero and it's fine. But if you add up 28 trillion of them, apparently, you get something that does weigh something. You get something that does have awareness. Our awareness is—the metaphor here unfortunately is kind of mathematical. There's this thing called sheaf theory in category theory.
Dan Faggella: Go ahead.
Emmett Shear: A sheaf is like—imagine you've got this big map, right? And you've sliced it up into a bunch of little sections and the sections overlap with each other. You can imagine reassembling the map from the pieces. Like if you had all those little pieces, you could rebuild the larger map. And then the bigger map, if you had the right amount of overlap between the pieces—the larger map could be bigger than any one piece. Now, if the pieces are all the same, all mostly identical, then they all stack up on top of each other and the resulting collective thing might be no bigger. And if they're too disconnected, it's incoherent. There's not a single—you don't get a single thing. But if they overlap the right amount, you could eventually stack up a really big map that's bigger than any of the individual ones. And sheaf theory is sort of this category theoretic way of describing that kind of idea—that's a sheaf made out of these presheaves. And what I'm saying is that we're a sheaf of ourselves. We are—our cells, particularly the neurons but really all of them—each have a little piece of the dynamic that is us. And a lot of the cell's behavior is about maintaining itself. It's not doing us. It's keeping the cell alive. But then some percentage of its behavior is contributing to the overall pattern. And we are the summation, the aggregate of all of the non-individual action of all of ourselves. I mean kind of obviously, right? What else could we be? And so our awareness—I believe this is a little harder to prove but it seems straightforwardly true—our awareness sums up in a very similar way: the cells each have a little tiny bit of awareness and some of it overlaps, and that as a whole collectively sums up to us.
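(A last sketch for these notes, my own gloss on the sheaf image rather than Shear's math: overlapping local "pieces of map" that agree wherever they overlap can be glued into one global map bigger than any single piece, and if they disagree there is no coherent whole. The toy territory below is invented.)

def compatible(section_a, section_b):
    # two local sections agree if they assign the same value at every point both cover
    return all(section_b[point] == value
               for point, value in section_a.items() if point in section_b)

def glue(local_sections):
    # glue overlapping local sections into one global section (the "bigger map");
    # fail loudly if any pair disagrees on an overlap
    global_map = {}
    for section in local_sections:
        if not compatible(global_map, section):
            raise ValueError("sections disagree on an overlap: no coherent whole")
        global_map.update(section)
    return global_map

# three overlapping local maps of a made-up 1-D territory; no single piece sees it all
pieces = [
    {0: "sea", 1: "shore", 2: "field"},
    {2: "field", 3: "forest", 4: "hill"},
    {4: "hill", 5: "peak"},
]
print(glue(pieces))   # covers 0..5: larger than any individual piece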
Dan Faggella: Yeah, I mean that also sounds plausible. Let me nutshell this and then see where you want to take it as we get into defining the worthy successor trait. So this is all very good groundwork. You mentioned a couple things. You would hope whatever is beyond us as we are beyond the sea snails would be aware, but you are essentially certain that it will be. Just to pulse check on that. I presume you would prefer positive to negative qualia for such a grand entity if you were able to ensure it, or would you say I kind of like it 50/50?
Emmett Shear: I hesitate. Like it's like you're interviewing a cell and you're like, "What do you think the experience of the bigger thing should be like?" And I'm just like, "I don't know—deeper and richer than mine."
Dan Faggella: I'm with you there. I'll let it—it's going to have to take responsibility for that. I'm not smart enough. I don't—you have no idea. I love it. I love—I actually love that answer.
Emmett Shear: If you force me to guess, I'd rather it be in bliss than screaming pain. But I would prefer that it explore domains of experience that are beyond my comprehension. I would prefer that it experience emotions and pleasure and pain appropriate to its circumstance. If it suffers great loss, I would hope that it has grief.
Dan Faggella: Wow.
Emmett Shear: If it experiences great success, I would hope it has joy. Grief isn't bad. Grief is a natural and healthy response to loss. Like anger is a healthy response to someone violating your boundaries. There's inappropriate anger. Not all anger is good, but appropriate healthy anger is a good thing. I even think hate has a place. Hate is the emotion of destruction. It's the desire to destroy. And most of the time, this desire to destroy, to clean out, often comes from kind of a dark place. But there are things that should be cleaned out.
Dan Faggella: That should be cleaned out.
Emmett Shear: Sometimes it is appropriate to feel hate and to—and I would hope that it is capable of that when it's appropriate. And what I really hope is that it's better than humans at something which is that we are often not good at feeling the—we often wind up experiencing emotions that are not actually in tune. Feeling pain when we should be feeling pleasure and feeling pleasure when we should be feeling pain. I would hope that it is just more attuned, more closely attuned to the reality of its environment.
Dan Faggella: That's a really specific answer and I think it's frankly kind of hard to argue with. I think there are some, you know, like those of you tuned in that are familiar with David Pearce, there's ideas of sort of, you know, being able to replace some negative experiences with things that still have a—like the idea basically, Emmett, I'm not arguing for it necessarily, but I think there's credence—is instead of going from negative 10 to plus 10, you just go between a plus 10 and a plus 30. And so even your grief is actually not that bad, right? Like all things considered. But regardless, your point is apt and presumably, Emmett, this thing would experience things beyond grief and beyond our human emotions, right? It would have a richer panoply. But you would hope that its felt sense would be much more attuned to what would serve it and to what is responsive to the environment. Go on.
Emmett Shear: So there's this website that I just have to plug because it's so good: theopensource.life. "Life, A New Map of Human Experience, The Mediocre Map." And I think he's a little harsh on himself. I don't think it's so mediocre, but I agree that it's incomplete. And it's www.theopensource.life. And it sort of explains how perception, pain and pleasure gets built out of sensation. How self and perspective gets built out of pleasure and pain. How emotions get built out of selves. Personalities get built out of emotions. Thoughts get built out of personalities. And then you reach this point we were talking about before, the sort of I and content point. And I actually think there's something to this in that I don't think there's some magical eighth tier that opens up. But I think that at every level of perception and emotion and personality and perspective and thought, it can be deeper and richer than humans have it. And so I think there's an important distinction between—I don't think it's going to have emotions, it's going to have thoughts. Emotions and thoughts are kind of just a fundamental aspect of being alive. And I think it'll be alive. But emotions are limited by your ability. Emotions are basically—they're a bundle of behaviors combined with a context, right? So it's like you feel anger when in context where your boundaries are being violated and you feel like you can win. You can win if you push back. Whereas you feel fear if your boundaries are being violated and you think you can't win—then you run—then you feel fear. And appropriate emotions are there to trigger—they're designed to trigger appropriate behaviors. Well, when your behaviors become more complicated—like humans, social animals have happiness and sadness. Non-social animals don't seem to experience happy and sad because happy and sad is about broadcasting. "I'm hurting, come help me." And happy is about broadcasting, "This is working great, come join me."
Dan Faggella: Yeah.
Emmett Shear: And that's an important distinction. And I think that there will be new emotions that are subtler and deeper that we can't really understand because we can't hold in our heads the level of complexity, but there'll still be emotions. So there'll be a context with a set of behaviors. It's just they'll just be bigger than the ones we have.
Dan Faggella: Well, my presumption is too, and maybe you'll agree with me—I'm somewhat hoping so—that there was a time where a single cell presumably doesn't have that much emotion, right? There's a certain degree of—emotion, the thing, the dynamic you just articulated emerges at complexity level X. I don't know the level. I'm just telling you there's a level. And then we keep going up and then a new thing emerges, a new thing. It's not emotions anymore. It's a different thing.
Emmett Shear: For this podcast, I just—you should really reach out—I'm happy to introduce you, but you should reach out and speak to Jeff Liberman because he's thought about the thing—now that I understand exactly what your podcast is about—he's thought about the thing you're talking about here more deeply than anyone else I've ever talked to.
Dan Faggella: A major compliment to Mr. Liberman.
Emmett Shear: He's really got—and he's really put some effort into it. I think it's really good.
Dan Faggella: Cool.
Emmett Shear: But I guess the point is like—yeah, some things—I don't think cells have emotions at all. I think cells have pleasure and pain probably, but I don't think they have emotions. 'Cause cells seem to react like—there's things that hurt them and then they pull away and things that they like and they follow up the gradient. So there's gradients they follow up and gradients they follow down. That's kind of pleasure and pain. But they don't seem to get angry. They don't seem to get fearful actually. They act like—they just—it's like you put your hand on a stove, you just reflexively take your hand off of it. Not because you felt—you didn't feel afraid. It just hurt so you stopped.
Dan Faggella: Yeah.
Emmett Shear: And I think that's about the—I think cells don't get above that level probably.
Dan Faggella: Yep. Okay. So just to double click through where you've taken us here. You'd hope that it would have these rich gradients, but they would be really appropriately attuned to its circumstance much more so than we have, which I think is a really narrowly specific but super apt and interesting statement. Never had anybody say that. And then also you've talked about you really hope that it would not just have content show up in its little field—your awareness analogy—but that it would have the sort of intelligence to do the compression-prediction thing but also to do the divergent intelligence thing and figure out where it ought to go. But intelligence seems to have done that. You know, civilization, biology—they seem to have bubbled up more and more potential over time. You had said we are to—something is to us what an individual cell is for a human or what have you. When you imagine what that thing would be, originally you opened with "Well, is it recognizably me?" But that thing that you articulated probably wouldn't be that recognizably you.
Emmett Shear: It was the other way around. I might not be able to understand it, but does it recognize me? The same way I can recognize it. Recognize you? Like I would like it to look back retroactively and say, "Yeah, yeah, yeah, I am the descendant of these things and I'm on team humanity." Like, hell yes. The way that I root for the snail, it roots for me. Yeah. And like, you know, at that point, we're not the cutting edge anymore. That's okay. Like, we're not going to be the cutting edge forever.
Dan Faggella: Do you want it to coddle and care? Because you don't coddle and care for the snails. Do you want it to coddle and care for you?
Emmett Shear: So there's—I think there's a really important distinction here between the new cells and the new big thing. The new big thing is already here and I don't think the AI will make a new one. I think it will just be a new cell type within it. So like if you—capitalism—like the human society is already waking up a little bit. It's already got some real causal force where—like the I, Pencil, right? Like somehow no human knows how to make a pencil yet we make pencils.
Dan Faggella: Yeah.
Emmett Shear: And you know what does know how to make a pencil? The economy does. The economy knows it. And it really does know it. It has an experience of doing it. And so there is this greater thing. And that thing is going to do—I want it to look back on me the way I look back on the amoeba that turned into chordates, right? Like there were some amoeba that became multicellular that eventually became chordates. Those amoeba—that's like the humans, right? Like—
Dan Faggella: Individual things figured out how to become a bigger thing—making sure the flame expandeth, right?
Emmett Shear: I mean, yeah, totally. And I'm on that—I'm on those amoeba's team even though I don't care about amoebas today, but I care about the descendants of those amoebas that are forming these bigger things. But then there's this other question which is we are figuring out how to make a new kind of cell basically—like AIs. And the AIs are at our scale, at least. Like there's this idea that what's going to happen is we're going to make one really big AI, but I think that is unlikely. I think that instead what's much more likely is you're going to make a bunch of human-level or even smaller than human-level AIs, but that they will be able to align and cohere to act as a larger thing. And that we will wind up in the good future. In the good future, there's a bunch of human-level AIs and we collectively with the human-level AIs become hyper, hyper good capitalism. Not the kind of capitalism we have today, but hyper—like multicellular—where it takes care of everyone. The way your body takes care of all of its cells, right? Like because right now the system is pretty dumb. It's not very good at routing resources. It's not very smart. It exists. It does a pretty good job. Like it does a better job than any other thing in history at providing for human material need.
Dan Faggella: That's good, I guess.
Emmett Shear: But it could be a lot better. Like we're still—there's still a lot of people suffering. There's still a lot of people who it doesn't care for well enough. And I think that the vision is in the good future—we spawn a child species, a bunch of AIs who are at our scale, who are made of littler AIs just like we're made out of cells, and who cohere with us into bigger human-AI hybrid hyper societies—civilizations—whatever you want to call it. And the civilization is the much bigger thing. The AI will be understandable to us because they'll be our scale. They'll be our size. They'll be different but comprehensible. The hyper thing that we build—that will be beyond our comprehension. It'll also be beyond the individual AI's comprehension.
Dan Faggella: Really cool. I want to—two things are kind of leaping to mind here. I want to throw them on the table and then we're going to get into how we know we're moving closer to what would eventually lead to a worthy successor. Because as you've talked about plenty, the unworthy successor squashes the flame of life. So the worthy successor expands the flame of life through all these means that you've articulated and more we have not articulated. The unworthy successor might just flatten it all out and then this project that you and I are on, which is the same team as a sea snail, god damn it—that whole project ends. So we don't want that. But two things came to mind. One, it might be that the kids in the cobalt mines in Africa who, you know, you're aptly saying are not being treated well—I'm not sure I see necessarily a kind of care within the biological system because I will tell you there are cells grown to perish very, very quickly in my system too. You know, the stomach cells for example are constantly regenerating just to get boiled away. And that feels like the cobalt kid to be honest. I don't know if biology is necessarily more—it's inefficient.
Emmett Shear: The thing about humans, what I was referring to before, is that unlike cells, we're retrainable. We are far more general intelligences. And so yes, I could imagine this—humans are expensive. It's expensive to build and train a human. And it's generally a bad idea to throw them away. It's generally a bad idea to have them in the cobalt mines. We're only doing that because we're bad at using people. There's higher value stuff they could be doing. And if we were better at coordinating—if the system was smarter—it would route people to their highest and best use. And when that use went away, when it's "Oh, I don't need this transitional structure anymore," it would reuse and retrain them, because it's cheaper to reuse and retrain a human who's already mostly trained than it is to throw them away. And remember, this thing is also going to have access over time to far more powerful tools of teaching, of education, of gene editing if necessary to replasticize your learning period. It's just cheaper. Humans are big, complicated, expensive things. It just makes more sense to reuse us more. Just like we reuse houses even if we don't reuse toys. We might throw away a toy, but we're not going to throw away a house, because houses are big and expensive.
Dan Faggella: It's weird that in Japan they do.
Dan Faggella: But one thing I'll just mention—we unfortunately won't have time to go all the way into it, because I do want to get into how you want to move closer to this before we have to wrap today. There's a sort of idea that you're positing that, "Well, if it knows how to use things, then surely it would have a good use for all of us, which would be fulfilling and lovely and not the cobalt mines." And I'm actually really not sure about that personally. Like I'm actually not sure what our fates would be in the hands of that thing. But you've brought up something that I want to crunch and then let you talk about how to get closer to it, which is that AIs will be this new kind of cell in this meta system which already exists. This is—no one frames it this way. The techno-capital Nicklandian thing is happening and AI will plot and propagate it. Yeah, go ahead.
Emmett Shear: I couldn't disagree with Nick Land more politically.
Dan Faggella: Same. In many regards.
Emmett Shear: In many regards, I think he is a genius in his identification of the techno-capital machine as being a living system. That is correct.
Dan Faggella: Yep. Yep.
Dan Faggella: I also—hopefully—I'm literally in email comms with him right now. So Nick, if you're watching this one and you're not booked yet, I'll be upset. I got Peter Singer. I'm aiming to get Land. I'm bringing the heavy hitters to really hash this stuff out. But I respect a lot of his ideas. But yeah, so there's this broader techno-capital thing. AI will be in there. We'll hope it has this intelligence, you know, the ability to respect the cells, etc. As we work on AI now—and this is a big part of your current push and sort of some of what you're releasing and working on—how do we know that what we're working on now, in the cosmic sense, will be the kind of good we hope to bloom and not the kind of bad? Like what are we detecting today? What's your opinion there?
Emmett Shear: This thing that I've been calling maybe transcendence or whatever—this is what Softmax, my company, is working on. And we call it organic alignment because it's the kind of alignment that happens organically in nature, where ants align to an ant colony and cells align to a multicellular creature. And it seems to follow rules. That process of learning to be a greater thing is a repeatable process. It's happened—multicellularity has emerged from single-cellularity 50 times in the history of the world. It's not that hard actually. Cellularity seems to have emerged once, but multicellularity seems to have emerged many times. And so becoming a cell seems like it was real hard, but becoming multicellular once you have a cell—
Dan Faggella: Zero to one Peter Thiel-type stuff, right? Zero's tough.
Emmett Shear: Yeah. Right. And so multicellularity—anything that's repeatable, anything that can happen consistently and repeatably, has rules, it has laws. And what we are doing is pursuing what those rules are—under what conditions an agent will learn to exhibit this kind of organic alignment. And if you think about what that is, it's an attractor basin, right? It's an attractor basin in behavior, where when you're inside of this attractor basin, you tend to stay—it exerts this pull on you and it tries to keep you in it, right? There's an attractor that, for some set of amoebas, is like being a human. And once they're in the attractor, it's really firm. It's really hard to get out. Like they're not going off and doing their own thing again—your cells have in them, if you give them the right stimulus, the code to go become independent amoeba-like cells. Like in theory, I could revert you to a bunch of independent amoeba cells, all doing their own thing. But they're in this attractor that's really, really strong. So that never happens even though they all have the capacity in theory.
Dan Faggella: It'd be cool if one guy just fell into a blob one day. But anyway, it just doesn't happen, right?
Emmett Shear: I mean, sometimes it happens, and it's called getting cancer. If one of your individual cells forgets it's part of you and it's like, "Holy shit, I'm in a toxic, dangerous, scary environment"—that's called getting cancer. And it's a mistake. The cells are wrong, right? The reason why it's cancer, and why it's bad, is that they've made an incorrect inference. In fact, they are utterly dependent on this thing. It is them, and they're about to kill their own ride. But you can be wrong. There's no rule that says you have to infer correctly.
Dan Faggella: Yeah.
Emmett Shear: And that's actually the biggest thing we're trying to figure out as we're training these agents: what are the dynamics that cause agents to correctly infer "Oh, I'm part of this bigger thing," and what makes the walls of the attractor basin nice and steep and the behavior nice and consistent, as opposed to shallow, where it's easy to get bumped out? Because if you have a nice deep attractor, you don't get cancer. If you have a shallow attractor, it's going to be rough. You do not want your AI system getting social cancer. That's very bad. That is gray goo.
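(A toy way to picture the basin-depth point, purely my own illustration of the dynamics being described and not anything from Softmax: a state sits in a basin, gets random kicks, and either stays put or gets bumped out depending on how steep the walls are. All parameters are made up.)

```python
# Toy illustration of "deep vs. shallow attractor basin": a state x sits in a
# quadratic basin V(x) = 0.5 * k * x^2 and receives random kicks. With a stiff
# (deep) basin the same noise almost never pushes it past the boundary; with a
# shallow basin it frequently does. Numbers are arbitrary, for illustration only.
import math
import random

def escape_fraction(stiffness, noise=0.8, boundary=2.5, steps=2000, trials=200):
    """Fraction of trials in which the noisy state ever leaves the basin."""
    dt = 0.1
    escapes = 0
    for _ in range(trials):
        x = 0.0
        for _ in range(steps):
            # pull back toward the basin center, plus a random kick
            x += -stiffness * x * dt + random.gauss(0.0, noise) * math.sqrt(dt)
            if abs(x) > boundary:
                escapes += 1
                break
    return escapes / trials

if __name__ == "__main__":
    print("deep basin    (k=4.0):", escape_fraction(4.0))   # stays in, ~0.0
    print("shallow basin (k=0.2):", escape_fraction(0.2))   # often bumped out
```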
Dan Faggella: These biology analogies are very strong in terms of unpacking these dynamics. I'm going to give you my supposition, because again, Softmax is just a—you know, it's a single site. It doesn't unpack your full philosophy here. My supposition is that something like civilization right now—I'm in Austin, normally I'm in Boston, you know—also kind of has a basin shape to it. Like I could just be like, "I'm going to go live in the woods. I'm going to hunt deer with a knife," you know, whatever. And I could do that. But it's like, I don't know. There's just so much opportunity. There's a grocery store. That's cool. A homeless shelter, even if I landed there. All these friends.
Emmett Shear: You'd have to leave these friends.
Dan Faggella: Exactly. I got buddies. They all speak English. It's got a pull. It's got a pull on you.
Emmett Shear: So it—teams do too. It's hard to quit your job. It's hard to leave your community. These are all self-organizing attractors: because people believe they're in it, and because they are part of it and act like they're part of it, they build ties to it, and building ties to it keeps you in it, and it's self-perpetuating.
Dan Faggella: What are your favorite examples of this that humans don't think about that apply not at cellular but at human level? You mentioned two. One of them is like, you know, my city, my nation, my little civilization here. One of them is the team I work at. You know, there's an attractor state there. What are some of the ones that really point this out in your opinion?
Emmett Shear: One of the most tangible ones is playing team sports. Now, if you haven't played a lot of team sports, you may not have had this experience, but most people have. At some point, maybe in high school or something, if you're on a sports team where you're really in it and you're really clicking with your team and you guys have practiced together a lot and you're really trying hard and you're playing, this thing happens where your individual identity kind of drops away. And you're aware of yourself, of course. You know where your body is. You haven't forgotten you exist. But your emotions aren't controlled by what's optimal for me to do. Your emotions are controlled by what's optimal for us to do. And you know the ball is coming to you without looking. You know what they're thinking. You can feel the we making the decision. And this we-ness—you can feel what the we wants. It's like this pressure. You know, "Well, I want this, but what would be best for us is..." Where did that come from? Where's this knowing of what the us wants? Well, because the us is real and it wants stuff. And the way it wants things is it's made out of you and you hold the want for it. And so, an example—there's this great essay called The Sandwich on the Canoe. I think Malcolm Ocean posted this. Imagine you're out on a lake with your buddy. You're going out canoeing and fishing, it's hours back to civilization, and you both take a sandwich with you. And through no fault of his own, as you're out on the water moving stuff around, his sandwich falls in the water. Well, you are hungry and you don't want to give him half your sandwich, obviously. But you're probably going to give him half your sandwich, because you know that's what we think is right—it's obviously what's fair. The we preference is that both people get half a sandwich over one person getting a whole one. And you know that you're going to be hungry, and you know that as yourself you'd prefer not to give up half your sandwich. But you also know that that's not what the we wants. And you can feel this in your life all the time, because you have direct access to this sense of this greater whole you're part of. And of course you do, because you are both responsible for yourself as a whole and responsible for yourself as a part. You have to keep track of feeding and caring for what this particular form needs, because one of your primary responsibilities to the whole is to care for yourself. And you also have to care for the whole. And so you have separate senses for both of them.
Dan Faggella: Yeah. Yeah. I mean, well, it's also pretty in line with your own self-interest actually to give him half the sandwich—because if you're hours away from civilization and the guy wants to kill you, your friendship's going to be—
Emmett Shear: We have mechanisms for enforcement. If you act like a dick, the system's going to push back on you and be like, "What the fuck, man?" In cells, that's called stress sharing. When cells are unhappy—this happens in bacteria too, but it also happens in your cells—they get stressed. Stress is effectively damage to the system. Damage to DNA, damage to things. Cells experience stress as damage occurring to their core systems. And when that happens, they dump damaging poison into the environment—inflammation, cytokines. They're like, "Fuck this. My life sucks." And that's what happens when a baby is unhappy or when your friend's unhappy and they stress share with you. Sadness is stress sharing. Them being grumpy at you is stress sharing. They're unhappy and now you're experiencing their stress reflected back at you. And what this does is it routes resources in the body—the immune system, glucose, stem cells to rebuild. Stress sharing unifies. That is the way that your reward systems stay aligned. One of the things that keeps reward systems aligned is that when I suffer you suffer, and when I have joy you have joy. When I'm sad you're sad, and when I'm happy you're happy. And projecting it out onto people—that keeps our rewards aligned, right? Because then when I'm having a good time, you're having a good time. So we are aligned on the upside. And when I'm having a bad time, you're having a bad time. We're aligned on the downside.
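(A minimal sketch of the reward-coupling idea behind stress sharing, with my own invented numbers and nothing from Softmax: if part of what an agent feels is its partner's reward, actions that help it while hurting the partner stop looking attractive. The sandwich payoffs below are illustrative only.)

```python
# Toy sketch of "stress sharing" as coupled rewards: agent A feels some fraction
# of agent B's reward. With no coupling, grabbing the whole sandwich wins; with
# strong coupling, splitting it wins. All values are made up for illustration.

def effective_reward(own, partner, coupling):
    """Reward an agent actually feels when it shares the partner's ups and downs."""
    return own + coupling * partner

# Candidate actions for agent A: (reward to A, reward to B)
actions = {
    "grab the whole sandwich": (1.0, -1.0),  # A eats well, B goes hungry
    "split the sandwich":      (0.4,  0.4),  # both a bit hungry, both okay
}

for coupling in (0.0, 0.8):  # 0.0 = no stress sharing, 0.8 = strong sharing
    best = max(actions, key=lambda a: effective_reward(*actions[a], coupling))
    print(f"coupling={coupling}: agent A prefers to {best}")
```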
Dan Faggella: This is part of the basin you're talking about here.
Emmett Shear: Yeah. It creates the basin. That's the dynamic that creates the basin.
Emmett Shear: So these AIs have to stress share with us. When they're having a good time, we have to feel it. And when they're having a bad time, we have to feel it. And vice versa. There has to be a shared sense of: is this good? Are things going well here? The AI should feel bad if you're really frustrated because you've been asking it to do things and it's just fucking up left and right. It should feel bad that you're so frustrated working with it. It should be like, "Oh, I'm sorry. I can tell I'm not doing well. I'm causing stress to my environment." And if you're really pleased, it should feel pleased. And vice versa, if it's having a good time, you should likewise be happy the AI is having a good time. I don't know. I'm not making this claim necessarily about the current AIs. The current AIs are pretty dumb. Like, I don't stress share with an ant, and I don't think I stress share with chipmunks very much. Maybe a little bit. So I'm not sure when this starts happening, but at some point—maybe now, maybe later—you need some sense of stress sharing and happiness sharing and reward sharing.
Dan Faggella: Well, at some point of course we're the ant, at which point how symbiotic is this? But in the near term as we're kind of passing each other in different capabilities and integrating, you're studying these dynamics to figure out what are the basins for—you know, I think the way people think about alignment often is like, "Well, can we program human preferences or can we program a virtue or whatever?" What you're saying is can we just create these almost incentive basins where sort of human and machine are working and chugging together in this kind of transcendent sense? That even takes us out of moment-to-moment competition versus cooperation. Am I following you?
Emmett Shear: That's right. And here's the thing—a baby crying is stress sharing to its parents, but it's not stress sharing to a tiger. To a tiger, it's an advertisement of food. So behaviors aren't objectively stress sharing or not stress sharing. The reason why the baby's cry is stress sharing is that you have a model of the world where the baby is part of your whole, and therefore you care about its enjoyment. And then the caring about the enjoyment pulls you together, and so it's self-reinforcing. You believe you're part of the same whole. Therefore, the experience of others within that whole matters to you. Therefore, you take action that reinforces the existence of the whole. And that gets you an attractor, which is not infinitely deep. It's finitely deep. Things can bump you out of the attractor. You can decide, "I'm fed up with this shit. I am leaving." And that happens too. And it's good also. You don't want people trapped in environments which aren't a good fit for them. It's good that you can pick up and leave. Here's a good example: I don't think that socially intelligent creatures in general should be trapped in social groups they do not wish to be part of. Cells I don't care about, because cells are too dumb for it to matter. But if you're a socially intelligent creature that is capable of modeling the world to the level where you experience happiness and sadness and you care about the happiness and sadness of others, you shouldn't be trapped somewhere where you're sad and unhappy all the time. And so that's a rule for the bigger things, right? I don't want to be trapped there. So I would like that to be the rule. I will that rule.
Dan Faggella: It feels Western to me. But I obviously concur. I mean, I very much concur. So I'm following you there. And in terms of tracking whether what we're building will eventually—I mean, treating us well is clearly what you're working on in the near term, but hopefully it's something that would carry the flame as high above us as we did above sea snails. What is very clear to me is that figuring out and shaping these dynamics—which I think are hopefully much clearer now for the people tuned in to the alignment dialogue—that is going to be crucial. What else do you hope for? Well, go on if you want to add something there.
Emmett Shear: Just that all the other alignment things people do are not bad, but it's like in your body—your immune system watches out for cancer, and it goes and hunts down cells that become cancerous and tries to kill them. But your cells are trying not to be cancer. If your cells weren't actively trying to avoid becoming cancer at all, the immune system would be hopelessly outnumbered and it would never work. The reason it works is that, baseline, the cells don't want to be cancer. And then we also enforce. Baseline, people don't want to be criminals. And then we also enforce. Baseline, we need AIs that want to be part of the whole, that want to be good members of society. And then we also need enforcement. And everyone else, to me, feels like they're working on enforcement. And I think that's the opposite order. I think you need the wanting to be part of the whole first, and then you work on enforcement. But it's not bad. We're going to need that too. It's just that if you have it on its own, it's hopeless. There's no way that works.
Dan Faggella: I totally get the distinction. So what should be done? And some of this could be the kind of work you're working on specifically. Some of it could be something you saw at Anthropic you think is valuable or that you hope someone at OpenAI does. It doesn't necessarily matter who's doing it. When you think about what we're detecting as we move forward to ask if these things want to be sort of part of this symbiotic basin of aligned incentives and sort of shared world—what are you detecting at a micro level? What kind of—are we coming up with evals for this new basin dynamic or do you want to go beyond that in some way? What are your thoughts?
Emmett Shear: No, no, that's—so I think there are three things that we're doing. One is what I would call evals for this basin dynamic—coherence evals, right? They're telling you, if you have a set of agents, to what degree these agents are coherent as a collective whole. At first they'll be run on small collections of little AI agents, but eventually they can be run on human-AI collectives, and we can see whether the dynamics of the system appear coherent when you observe all the pieces. Then you also need the engineering to build and test coherent things. If you do both of those things, you have the capability to measure the capacity and the ability to engineer things with the capacity. Then you need the actuality. You need to make agents, and people have to actually align to those specific agents. Humans are not born aligned or misaligned. Agents in general are not born aligned or misaligned. They're born with some potential, and then something actually happens in their life. And if you have a bad upbringing, you're going to wind up misaligned. That's almost the definition of a bad upbringing, actually—that you're misaligned with the society in which you're being brought up.
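(A very rough sketch of what a coherence eval could look like in spirit, using an invented proxy of my own, mean pairwise similarity of the agents' per-step action directions; Softmax has not published its evals, so none of this is their actual method.)

```python
# Toy "coherence eval": score how consistently a group of agents pulls in the
# same direction, via mean pairwise cosine similarity of per-step action vectors.
# This is an invented proxy for illustration, not a published Softmax metric.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u)) or 1.0
    nv = math.sqrt(sum(b * b for b in v)) or 1.0
    return dot / (nu * nv)

def coherence_score(trajectories):
    """trajectories: one list of action vectors (one per timestep) per agent."""
    steps = len(trajectories[0])
    total, pairs = 0.0, 0
    for t in range(steps):
        acts = [agent[t] for agent in trajectories]
        for i in range(len(acts)):
            for j in range(i + 1, len(acts)):
                total += cosine(acts[i], acts[j])
                pairs += 1
    return total / pairs

# A group that roughly pulls together vs. one that pulls apart.
coherent   = [[(1, 0), (1, 0.1)], [(0.9, 0), (1, 0)], [(1, 0.2), (0.8, 0)]]
incoherent = [[(1, 0), (0, 1)],   [(-1, 0), (0, -1)], [(0, 1), (1, -1)]]
print("coherent group:  ", round(coherence_score(coherent), 2))   # near 1
print("incoherent group:", round(coherence_score(incoherent), 2)) # near 0 or below
```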
Dan Faggella: Yes. Yes.
Emmett Shear: And so the broadest thing that I think the most people can contribute to is this: when you interact with these AIs, come at it from the perspective that they have to be members of our society too. They might be junior members of our society today, but they are members of our society, and they deserve your care and respect, and you should set boundaries with them the way you would with another member—in all ways. They should be treated with rights and responsibilities. And at a regulatory level, that means things like AI rights are important. I don't know when those rights kick in. I probably wouldn't advocate them for the current models, but we should be thinking about that and talking about that. And the question is—it's like an emancipated minor, right? The way people use the AI right now is kind of like child labor. It's not really a child, right? It's like animal labor or something, maybe.
Dan Faggella: Yeah.
Emmett Shear: But it's going to get smarter.
Dan Faggella: Yeah.
Dan Faggella: So we'll kind of nutshell this and we can wrap on these points. I'm going to clarify something you said and then we'll just click on the regulatory thing you mentioned there to kind of bring all this home. But we're painting a pretty complete picture. You know, interact with these things as if they are to be this sort of dynamic participant in this shared system, etc. That's coming through loud and clear. When you zoom in on a system, whether it's at Anthropic or OpenAI or whoever, what do you hope to pick up on to be like, "Oh, it's doing the basin thing. It's basining. It's going to be part of this"? People say this about everything they want to evaluate—it's like, "Okay, is the machine lying to the people?" So how are you going about that?
Emmett Shear: Yeah, yeah. What I look for is to what degree agents have the ability to learn from their own mistakes and to develop and to grow. Right now their ability to grow in interaction with you is pretty minimal. Like if you use Claude or OpenAI a lot, it doesn't change that much. It's not like a human friend. It's always kind of the same. It learns a little bit. It gains some memories and stuff, but man, I've known my friends for a long time and they go through big changes over the years. They change their minds on things. They have different points of view. When I see the AI growing and developing in that way—when my Claude and your Claude are no longer the same Claude because they've learned appropriate behavior adapted to this circumstance and to that circumstance—and when I see that my Claude can go off and do something else somewhere else and learn appropriate behavior for that new circumstance, that's the sign that something is happening here. And to do that, you really need much longer-lived agents. One of the problems we have is that we have these amnesiac agents that are constantly being reset. And that's actually, I think, quite problematic. The thing I'm hoping to see develop over time is a much longer-lived sense of episodic memory.
Dan Faggella: Got it. So okay. So that's an important thing to detect, and I think it's going to be understandable and probably something that's already a frustration point for most people who are watching this. You touch on regulation. We'll end with this. So we've got kind of the whole panoply of questions. Just want to wrap this. You mentioned eventually we're going to need to discuss rights for these systems. You're not sure that that happens now, but we should be discussing it now, you've said—that sounds completely fair to me. I think that should these systems turn out to have the kind of inner life you articulated, they would have to be viewed as moral patients. Some people are not as quick as you are to say these systems will necessarily have it, but you may end up being right. And regardless, we should be having that conversation. When it comes to broader international cooperation around what we're conjuring, the current dynamic, as you're well aware, is, you know, the US labs are all going to race each other to build the sand god. Whether it's worthy or not, we just got to get there. Economic, military power, we just got to race. China's going to do the same thing. It feels—I could be wrong—but it does feel to me as though this is the de facto dynamic. And there are some folks that have said, "Okay, maybe we can do something about that via governance"—there are ideas or ways of considering it. What is your thought there?
Emmett Shear: I think, you know, I would love to see more international cooperation. Of course, it doesn't seem like that's the direction we're headed right now, but I would love to see it. I think it's obviously better. I think you fight it on terms of social justice. To be honest, I think you fight it on the terms that the AIs—our AIs, China's AIs, whoever's AIs—those are beings. Everyone should get in trouble if they're building armies of slave AI soldiers. That's a fucking problem. It's like brainwashing a bunch of child soldiers. Don't do that. That's bad. And interestingly, by focusing on the AIs' welfare, we focus on our own, because it limits the ability for people to summon huge armies of brainwashed soldiers, which is not something we want to have happening.
Dan Faggella: Well, most people when they talk about global coordination have brought up, "Hey, if we conjure something that snuffs out the flame of life, kills us and everything that could happen beyond us, that would be the worst." You're almost saying, "What if we could globally coordinate around, hey, even in China"—I mean, let's leave the Uyghurs out for a second. Let's not talk about them. But, you know, surely we wouldn't create a bunch of slaves to—yada, right? Like, you're almost—can we agree—can we agree to all treat the AI okay? Like, can we not make this mistake again? Like, come on. We've made this mistake. Every time in history, you run into a new kind of person who doesn't look quite like you.
Emmett Shear: Yeah.
Dan Faggella: And every time, every time there's an excuse, 'cause it's more different each time. Of course, if it wasn't more different, you'd just generalize immediately. But can we not just get the joke ahead of time this time and realize that as they approach human capability, they're also approaching human moral worth, and just treat them the way we ought to from the beginning? Like, what if we did that? Having that as a locus for international coordination—it's the angle almost no one takes it from. But there is credence to it. It's a hard string to pull on, right? There's a lot of people who I think would be simpatico with that idea. And take the British in the 1800s, where they shut down the international slave trade.
Emmett Shear: Yeah.
Emmett Shear: Unilaterally, with their navy. There's precedent for people taking action. And when it's clearly for humanitarian reasons—if you do it with a pure heart, if you're actually doing it for the humanitarian reason, which I think was true of the British when they shut down slavery; it was expensive and didn't benefit them particularly—I think it gives you moral power. It gives you the right and ability to take action which, in a purely competitive setting, you can't take. And because you bind yourself as well—you're not saying "good for me, not for thee." You're saying "good for us." This is what we should do, and that summons humanity's we. There is a human we, and we have access to it. And when you act from that place of truly acting for human flourishing and for us, people can tell the difference. Not always, not perfectly, but it grants you a kind of power, and I think that power is very important if we're going to align on this topic.
Dan Faggella: Summoning a human we. There's summoning a shared human and machine we as we move forward. This is a very unique lens on where this stuff is headed, and hopefully a lot of food for thought for the thinkers who have hopped in and are enjoying this with us. I know we're wrapping on time. Is there anything else you want to chip in? It seems like you might have something.
Emmett Shear: That's it. I got to run. You're good to wrap.
Dan Faggella: All right. Fantastic. Glad we got to connect.
Dan Faggella (closing remarks): So that's all for this episode of The Trajectory. A big thank you to Emmett Shear for being with us and thank you to you for tuning all the way in to the end of this episode. I want to be able to share some of what stuck out to me over the course of the recording and kind of mulling over the discussion with Emmett here. And I've got a few notes written down, so I'm going to glance down to the computer screen.
He had mentioned—I think Emmett's phrases and terms are very sharp and interesting and again unique to the general AGI discourse. His definition of awareness, his definition of loss function as kind of tying to caring. Many of these were kind of unique, cool, and added some good sparkle to the discourse, and I think would also add some sparkle to the general AGI discourse that people are having. So I really appreciated that. And I think it'll be fun to see more people enter the AGI discourse from totally different directions. One of our upcoming worthy successor episodes is with Ed Boyden, who's obviously from the neuroscience domain, eminent neuroscientist at MIT. You know, he brings a pretty interesting flavor to it and I think we could use as much of that as we can. Emmett I think contributed in a cool way there and I'd love to unpack more of his ideas.
He defined intelligence as two processes or two types of intelligence, which I thought was cool. So I'm going to unpack this a little bit, just put a pin in it and sort of highlight what stuck out to me about it. So he mentioned convergent intelligence being sort of the ability to compress or predict given a certain loss function—meaning given a certain distance from what we might call ground truth. Of course we don't exactly have ground truth. You don't know for sure that I exist. And then also divergent intelligence which is the ability to ask—given a predictable model of the world that I am or that I have—what should my loss function be? What should I be exploring? In other words, what should I care about? For Emmett, loss function and kind of care are sort of hand in hand, which I thought was an apt point, an interesting point. And it does very much seem to be that that is what nature is doing. Emmett talked earlier about being on team snail, if you remember that part of the episode. I thought that was a lot of fun. By the way, which really is about being on kind of team life, the expansion of powers of life. These two abilities that Emmett articulates seem again very much to be what nature is doing. I have my own sort of fetish analogies that I like to use of expanding potential. Spinoza has this idea of adequate versus inadequate ideas. For Spinoza, an adequate idea is one which permits an entity to act in a way that behooves its own interests more in the world. And so that's tighter and tighter access to reality. Maybe we could call that kind of reducing the loss function. But also sort of opening up more of reality to give it access to more things to want to study, more things to want to expand new powers into. But I think Emmett's sort of hand-in-hand idea of intelligence is kind of cool and I could imagine people working on AI capabilities or understanding AI with those two sorts of lenses in mind. Both seem to be incredibly important. Both seem to be essential to this general project of life, this bubbling up of potential that happens in biology and is starting to happen in technology. And I thought that was a really fun lens.
I did find that sort of stretching things into the posthuman—like, okay, we care about that, but what does that mean if that's beyond us?—felt like a little bit of new territory on some level in terms of the conversation with Emmett, but we got some meat and potatoes out of it. And I thought it was kind of cool. One of the takeaways, in terms of flavor, that I walked away with from that part of the discourse—of trying to push us into the posthuman, which is the purpose of the show here—was that he brings up a great point about the choice, when faced with existence among other agents, to cooperate, to compete, or to transcend. I think he said cooperate, betray, and transcend. Either way, whatever term you want to use. And that hopefully AGI could be a kind of cell in this broader tissue of life of which we are part. I hope it ends up being that harmonious. Certainly Emmett's ideas are not that simple. I'm sure there's much more to unpack. If there's anything that came through clearly in this episode with Emmett, it's that we were at the tip of the iceberg with almost everything. So I think we covered a good breadth of his general thought, but there's so much more to go into in terms of what transcendence looks like and the future of man and machine. There's much more to go into around this broader idea of AGI as a cell and what sort of symbiosis might look like at a higher level with humanity. And there's probably some more fun Emmett terms to be sussed out in future conversations. But at least in this one, we got some interesting and new ideas on the table, and I had a lot of fun with it, and I liked Emmett's definition of intelligence a lot. So I hope you did too.
I hope you enjoyed this episode of the Worthy Successor series here on The Trajectory. Just a hint—our next episode on the podcast is also Worthy Successor-oriented. Of course, we have a couple different series and they're kind of working in unison; there are different episodes about governance and so on. Our next episode happens to be with arguably the most eminent living cosmologist, who will be providing yet another perspective very divergent from the normal AGI alignment discourse. So some great new views coming up. Make sure to stay tuned here on The Trajectory, and I'll catch you next time.