A lot of people have said for many years That there will come a time when they Want to pause a little bit That time is now The following is a conversation with Max Tegmark his third time in the podcast in Fact his first appearance was episode Number one of this very podcast he is a Physicist and artificial intelligence Researcher at MIT co-founder of future Life Institute and author of Life 3.0 Being Human in the age of artificial Intelligence Most recently he's a key figure in Spearheading the open letter calling for A six-month pause on giant AI Experiments like training gpt4 the Letter reads Were calling for a pause on training of Models larger than GPT 4 for 6 months This does not imply a pause or ban on All AI research and development or the Use of systems that have already been Placed on the market our call is Specific and addresses a very small pool Of actors who possesses this capability The letter has been signed by over 50 000 individuals including 1800 CEOs and Over 1500 professors signatories include Joshua bengio Stewart Russell Elon Musk Steve Wozniak you all know a Harari Andrew Yang and many others This is a defining moment in the history Of human civilization or the balance of
Power between human and AI begins to Shift And Max's mind and his voice is one of The most valuable and Powerful in a time Like this His support his wisdom his friendship Has been a gift of forever deeply Grateful for This is the Lex Friedman podcast to Support it please check out our sponsors In the description and now dear friends Here's Max Ted mark You were the first ever guest on this Podcast episode number one so first of All Max I just have to say uh thank you for Giving me a chance thank you for Starting this journey it's been an Incredible journey just thank you for um Sitting down with me and just acting Like I'm somebody who matters that I'm Somebody who's interesting to talk to And uh thank you for doing it I meant a Lot all right thanks to you for putting Your heart and soul into this I know When you delve into controversial topics It's inevitable to get hit by what what Hamlet talks about the slings and arrows And stuff and I really admire this it's In an era you know where YouTube videos Are too long and now it has to be like a 20 minute Tick Tock 20 second Tick Tock Clip it's just so refreshing to see you Going exactly against all of the advice
And doing this with this really long Form things and that people appreciate It you know reality is nuanced and uh Thanks for And sharing it that way uh so let me ask You again the first question I've ever Asked on this podcast episode number one Talking to you do you think there's Intelligent life out there in the Universe let's revisit that question do You have any updates What's your view When you look out to the Stars so when We look after the Stars If you define our universe the way most Astrophysicists do not this all of space But the spherical region of space that We can see with our telescopes from Which light has the time to reach us Since our big bang I'm in the minority I I'm Estimate that we are the only life In this spherical volume that has uh Invented Internet radios gotten our Level of tech and um if that's true Then it puts a lot of responsibility on Us to not mess this one up because if It's true it means that life is is quite Rare and we are stewards of this one Spark of advanced Consciousness which if We nurture it then Help it grow it immensely life can Spread from here out into much of our Universe and we can have this just
Amazing future whereas if we instead um Are Reckless with the technology we Build and just snuff it out due to the Stupidity Or in fighting then Maybe the rest of cosmic history in our Universe was just going to be a play for Empty benches but I I do think That we are actually very likely to get Visited by aliens Alien intelligence quite soon but I Think we are going to be building that Alien intelligence So Uh we're going to give birth To an intelligent alien civilization Unlike anything that human the evolution Here on Earth was able to create in Terms of the path the biological path it Took yeah and it's gonna be much more Alien than Cats or even the most exotic animal on The planet right now because it will not Have been created through the usual Darwinian competition where it Necessarily cares about Self-preservation is afraid of death Um any of those things The space of alien Minds is just that You can build it's just so much faster Than what evolution will give you And with that also comes great Responsibility but the the for us to Make sure that the kind of Minds we
Create are those kind of Minds that um It's good to create Minds that will uh Share our values and and be good for Humanity and life and also mine don't Create Minds that don't suffer Do you try to visualize the full space Of alien Minds that AI could be to try To consider all the different kinds of Intelligences Sort of generalizing what humans are Able to do to the full spectrum of an Intelligent creatures entities could do I try but I would say I fail I mean it's It's very difficult for a human mind Really grapple with Something still completely alien even Even for us right if we just try to Imagine how would it feel if we were Completely indifferent towards death or Individuality if we even if you just Imagine that for example You could just copy my knowledge of how To speak Swedish boom now you can speak Swedish And you could copy any of my cool Experiences and then you could delete The ones you didn't like in your own Life just like that it It would already change quite a lot About how you feel as a human being Right if you probably spend less effort Studying things if you just copy them And you might be less afraid of death Because if the plane you're on starts to
Crash you'd just be like oh shucks I'm Gonna I haven't backed my brain up for Four hours so I'm gonna lose this all This wonderful experiences of this Flight We might also start feeling More like compassionate maybe with other People if we can so readily share each Other's experiences in our knowledge and Feel more like a hive mind it's very Hard though I I really Feel very humble about this To to Grapple with it that the how it Might actually feel that the the one Thing which is so obvious though it's I Think is just really worth reflecting on Is because the mind space of possible Intelligence is so different from ours It's very dangerous if we assume they're Going to be like us or anything like us Well there's a The entirety of uh human written history Has been through poetry through novels Been trying to describe her philosophy Uh try and describe the Human Condition And what's entailed in it like just like You said fear of death and all those Kinds of things what is love and all of That changes yeah if you have a Different kind of intelligence yeah like All of it the entirety all those poems They're trying to sneak up to what the Hell it means to be human all of that Changes how AI concerns and uh
Existential crises that AI experiences How that clashes with the human Existential crisis The Human Condition Yeah that's hard to hard to Fathom how To predict it's hard but it's Fascinating to think about also even in The best case scenario where we don't Lose control of over the ever more Powerful AI that we're building to Other humans whose goals we think are Horrible and where we don't lose control To the machines And AI Provides the things we want even then You get into the questions do you Touched here you know maybe it's the Struggle that it's actually hard to do Things is part of the things that gives Us meaning as well right so for example I found it so shocking that this new Microsoft gpt4 commercial that they put Together has this woman talking about And showing this demo how she's going to Give a graduation speech to her beloved Daughter and she asks gpt4 to write it It was freaking 200 words or so if I Realized that my parents couldn't be Bothered struggling a little bit to Write 200 words and Outsource that to their Computer I would feel really offended Actually And so I wonder if um eliminating too Much of the struggle from our existence
What Do you think that would also take away a Little bit of What it means to be human yeah We can't even predict I had somebody Mentioned to me that they use they Started using uh GPT with a 3.5 not 4.0 Uh to write what they really feel to a Person And they have a temper issue and they're Basically trying to get Chad gbt to Rewrite it in a nicer way to get the Point across but we write in a nicer way So we're even removing the inner From our communication so I don't you Know there's some positive Aspects of that but mostly it's just the Transformation of how humans communicate And it's scary because so much of our Society is based on this glue of Communication and if that we're now Using AI as the medium of communication That that does the language for us uh so Much of the emotion that's Laden in Human communication so much of the Intent That's going to be handled by an Outsourced AI how does that change Everything how how does it change the Internal state of how we feel about Other human beings what makes us lonely What makes us excited yeah what makes us Afraid how we fall in love all that kind
Of stuff yeah for me personally I have To confess the challenge is one of the Things that really makes my life feel Meaningful you know If I go hiking mountain with my wife Maya I don't want to just press a button And be at the top but I want to struggle And come up there sweaty and feel wow we Did this in the same way I want to constantly work on myself to Become a better person if I say Something in Anger that I regret I want To go back and and really work on myself Rather than just tell an AI just from Now on always filter what I write so I Don't have to work on myself because Then I'm not growing Yeah but then again it could be like With chess and AI Wants it significantly obviously Supersedes the performance of humans it Will live in its own world and provide Maybe a uh flourishing Civilizations for Humans but we humans will continue Hiking mountains and playing our games Even though AI is so much smarter so Much stronger so much Superior in every Single way just like with chess yeah so That that I mean that's one possible Hopeful trajectory here is that humans Will continue to Human Uh and AI will just be a Kind of A a medium that enables The Human
Experience to flourish yeah I would phrase that as rebranding Ourselves from Homo sapiens to homo Sentience you know right now with Sapiens the ability to be intelligent We've even put it in our species name We're branding ourselves as the smartest Yeah information processing Entity on the planet that's clearly Gonna Change if AI continues ahead So maybe we should focus on the Experience instead the subjective Experience that we have with Homo sentience and and so that's what's Really valuable the love the connection The other things And get off our high horses and get rid Of this hubris that only we can Do we do integrals so Consciousness and Subjective experience is a fundamental Value To what it means to be human make that Make that the priority That feels like a hopeful direction to Me but that also requires more Compassion not just towards other humans Because they happen to be the smartest On the planet but also towards all our Other fellow creatures on this planet And I I personally feel right now we're Treating a lot of farm animals horribly For example and the excuse we're using Is oh they're not as smart as us
But if we get that we're not that smart In the grand scheme of things either in The post aie Epoch you know Then surely we should value The subjective experience of a cow also Well Allow me to briefly look at the book Which at this point is becoming more and More Visionary that you've written I Guess over five years ago life 3.0 So first of all 3.0 what's 1.0 what's 2.0 was 3.0 and how's that Vision sort Of evolve the vision in the book evolved To today life 1.0 is really dumb like Bacteria and that it can't actually Learn anything at all during the Lifetime the learning just comes from This genetic Process from one generation to the next Life 2.0 Is Us and other animals which Have brains which can learn during their Lifetime a great deal right so And um you know you were born without Being able to speak English and at some Point you decided hey I want to upgrade My software let's install an English-speaking module So you did And life 3.0 does not exist yet can Cannot replace not only its software the Way we can but also it's Hardware And um that's where we're heading Towards at high speed we're already Maybe 2.1 because we can you know put in
A An artificial knee uh pacemaker Etc etc and if newer Link in other Companies succeed will be like 2.2 Etc But uh well the Company's trying to Build AGI are trying to make is of Course full 3.0 and you can put that Intelligence into something that also Has no Biological basis whatsoever so let's Constraints and more capabilities just Like the leap from 1.0 to 2.0 there is Nevertheless he's speaking so harshly About bacteria so disrespectfully about Bacteria there is still the same kind of Magic there That permeates Life 2.0 and uh and 3.0 It seems like maybe the thing that's Truly powerful About life intelligence and Consciousness was already there in 1.0 Is it possible I think we should be humble and not be So quick Make everything binary and say either It's there or it's not clearly there's a There's a great spectrum and there is Even controversy by whether some Unicellular or organisms like amoebas Can maybe learn a little bit You know after all so apologies if I Offended anything there yeah it wasn't By intent it was more that I wanted to Talk up how cool it is to actually have
A brain yeah where you can learn Dramatically within your lifetime Typical human and and the higher up you Get from 1.0 to 2.0 to 3.0 the more you Become the captain of your own desk of Your own ship the master of your own Destiny and the less you become a slave To whatever Evolution gave you right By upgrading our software which can be So different from previous generations And even from our parents Much more so than even a bacterium you Know no offense to them And if you can also swap out your Hardware take any physical form you want Of course it's really the sky's the Limit Yeah so the It accelerates the rate at which you can Perform the competition computation that Determines your destiny Yeah and I think it's it's worth Commenting a bit on what you means in This context also if you swap things out A lot right now This is controversial but my Current Understanding is that that you know Life is best thought of not as a bag of Meat or even a bag of Elementary particles but rather as in as Um A system which can process information And retain its own complexity
Even though nature is always trying to Mess it up so It's all about information processing And That makes it a lot like something like A wave in the ocean which is not it's It's water molecules right the water Molecules bob up and down but the wave Moves forward it's an information Pattern in the same way you Lex You're not the same atoms as during the First time you did with me you've Swapped out most of them but still you Yeah and The the information pattern is still There and um If you if you could swap out your arms And whatever You can still have this kind of Continuity it becomes much more Sophisticated sort of way before in time Where the information lives on I I lost Both of my parents since since our last Podcast and and it actually gives me a Lot of Solace that This way of thinking about them They haven't entirely died because A lot of mommy and daddy's um Sorry I'm getting a little emotional Here but a lot of their values And ideas and even jokes and so on they Haven't gone away right some of them Live on I can carry on some of them and They also live on a lot of other and a
Lot of other people so in this sense Even with Life 2.0 we can to some extent Already transcend Our physical bodies and our death And particularly if you can share your Own information your own ideas with many Others like you do in your podcast Then um You know that's the closest immortality We can get with our biobodies you carry A little bit of them in you yes yeah Uh do you miss them you miss your mom And dad of course of course what did you Learn about life from them if it can Take a bit Of a tangent On so many things Um For starters my my Fascination for Math And Um the physical mysteries of our University thinking I got a lot of that For my dad but I think my obsession for Really big questions and Consciousness And so on that actually came mostly for My mom And When I got from both of them which is Very core part of really who I am I Think is Is Um This um Just feeling comfortable with
Not buying into what everybody else is Saying just Dude what I think is right They both Very much just you know did their own Thing and sometimes they got flagged for It and it did it anyway That's why you've always been an Inspiration to me that you're at the top Of your field and you still You still willing to uh To tackle the big questions in your own Way you're one of the one of the people That represents MIT best to me you've always been an Inspiration in that so it's good to hear That you got that from your mom and dad Yeah you're too kind but but yeah I mean The real the good reason to do science Is because you're really curious you Want to figure out the truth If you think This is how it is and everyone else says No no that's and it's that way You know You sticked with what you think is true And and Even if Everybody else keeps thinking it's there's a certain Um I always root for the underdog when I Watch movies and my my dad once I I one Time for example when I wrote one of my
Craziest papers ever or I'm talking About our universe ultimately being Mathematical which we're not going to Get into today I got this email from a Quite famous Professor saying this is Not only but it's going to ruin Your career you should stop doing this Kind of stuff I sent it to my dad do you Know what he said what'd he say he Replied with a quote from Dante segil Tu Corso Follow your own path and let the people Talk Go Dad yeah this is the kind of thing You know he's dead but that that Attitude is not How did losing them as a man as a human Being change you How did it expand your thinking about The world how did it uh expand your Thinking about You know this thing we're talking about Which is humans creating another living Sentient perhaps uh being I think it uh Mainly do two things uh One of them just going through all their Stuff after they had passed away and so On just drove home to me how important It is to ask ourselves Why are we doing this things we do Because it's inevitable that you look at Some things they spent an enormous time On and you asked in hindsight would they
Really have spent so much time on this Or if would they have done something That was more meaningful Um so I've been looking more in my life Now and asking you know why am I doing What I'm doing and I I feel It should either be something I really Enjoy doing or it should be something That I find really really meaningful Because it helps Humanity And um If it's in none of those two categories Maybe I should spend less time on it you Know the other thing is dealing with Death up in personal like this it's Actually made me less afraid Of Um Even less afraid of other people telling Me that I'm an idiot you know which Happens regularly and just live my life Do my thing you know Um And um It's made it a little bit easier for me To focus on what I what I feel is really Important what about fear of your own Death Has it made it more real that this is That this is something that happens yeah It's made it extremely real and I'm next Next in line in our family now right It's me and my brother my younger
Brother but um They both handled it with such dignity It was there was a true inspiration also They never complained about things and You know when you're old and your body Starts falling apart it's more and more To complain about they looked at what Could they still do that was meaningful And they focused on that rather than Wasting time Talking about or even thinking much About things they were disappointed in I think anyone can make themselves Depressed if they start their morning by Making a list of grievances Whereas if you start your day and when The little meditation and just the Things you're grateful for you you Basically choose to be a happy person Because you only have a finite number of Days you should spend them Make It Count Being grateful yeah Well you do happen to be working on a Thing which seems to have a potentially Some of the greatest impact on human Civilization of anything humans have Ever created which is artificial Intelligence this is on the both Detailed technical level and in the high Philosophical level you work on so You've mentioned to me that there's an Open letter That you're working on it's actually uh Going live in a few hours so I've been
Having late nights and early mornings It's been very exciting actually I in Short I have you seen uh don't look up The film Yes yes I don't want to be the movie Spoiler for anyone watching this who Hasn't seen it but if you're watching This you haven't seen it watch it Because we are actually acting out it's It's life imitating art humanity is Doing exactly that right now except It's an asteroid that we are building Ourselves Almost nobody is talking about it People are squabbling across the planet About all sorts of things which seem Very minor compared to the asteroid That's about to hit us right uh most Politicians don't even have their radar This on the radar they think maybe in 100 years or whatever Right now We're at a fork on the road this is the Most important um Fork the humanity has Reached in its over a hundred thousand Years on this planet we're building Effectively a new species that's smarter Than us It doesn't look so much like a species Yet because it's mostly not embodied in Robots but um That's a technicality which will soon be Changed and and this arrival of
Artificial general intelligence that can Do all our jobs as well as us and Probably shortly thereafter super Intelligence which greatly exceeds our Cognitive abilities it's going to either Be the the best thing ever to happen Humanity or the worst I'm really quite Confident that there is Not that much Middle Ground there but it Would be fundamentally transformative To human civilization of course utterly And totally you know again we branded Ourselves as Homo sapiens because it Seemed like the basic thing where the King of the castle on this planet were The Smart Ones if we can control Everything else This could very easily change we're Certainly not going to be the smartest On the planet for very long if AI unless AI progress just Falls and we can talk More about why I I think that's true Because it's it's controversial And and then we can also talk about Reasons we might think it's gonna be the Best thing ever and the reason you think It's going to be the end of humanity Which is of course super controversial But What I think we can anyone who's working On uh Advanced AI Can agree on is it's it's much like the Film don't look up and that It's just really comical how little
Serious public debate there is about it Given how huge it is So what we're talking about is the Development of currently things like Gpt4 And the signs it's showing of uh rapid Improvement that may in the near term Lead to development of super intelligent AGI AI General AI systems and what kind Of impact that has on society exactly When that thing is achieves General Human level intelligence and then beyond That General superhuman level Intelligence There's a lot of questions to explore Here so one you mentioned halt is that Uh the content of the letter is to Suggest that maybe we should pause the Development of these systems exactly so This is very controversial From When we talked the first time we talked About how I was involved in starting the Future Life Institute and we worked very Hard on 2014-2015 was the mainstream AI Safety The idea that there even could be risks And that you could do things about them Before then a lot of people thought it Was just really kooky to even talk about It and a lot of AI researchers felt Worried that this was too flaky and Could be bad for funding and that the People had talked about it or just not
Didn't understand AI I'm very very happy with How that's gone in that now you know Just completely mainstream you go on any AI conference and people talk about AI Safety and it's a nerdy technical field Full of equations and simula and blah Blah yes Um As it should be uh But there's this other thing which has Been quite taboo up until now Calling for slowdown so what We've been constantly been saying Including myself I've been biting my Tongue a lot you know is that you know We we don't need to slow down AI Development we just need to win this Race the wisdom race between the growing Power of the AI and the growing wisdom With which we manage it and rather than Trying to slow down AI let's just try to Accelerate the wisdom do all this Technical work to figure out how you can Actually ensure that your powerful AI is Going to do what you wanted to do and Have Society adapt also With um incentives and regulations so That these things get put to good use Um sadly that Didn't pan out The progress on technical Ai and Capabilities has gone a lot faster than Than many people thought
Back when we started this in 2014 turned Out to be easier to build really Advanced AI than we thought Um And on the other side it's gone much Slower than we hoped with getting Um Policy makers and others to actually Put them incentives in place to to make Steer this in the in the good directions We can maybe we should unpack it and Talk a little bit about each so yeah why Did it go faster than we than a lot of People thought them In hindsight it's exactly like building Um Flying machines People spent a lot of time wondering About how the birds fly you know and That turned out to be really hard have You seen the Ted talk with a flying bird Like a flying robotic Bird yeah it flies Around the audience but it took a Hundred years longer to figure out how To do that than for the Wright brothers To build the first airplane because it Turned out there was a much easier way To fly And evolution picked a more complicated One because it had its hands tied it Could only build a machine that could Assemble itself which the Wright Brothers didn't care about they can only Build a machine they'll use only the
Most common atoms in the periodic table Wright brothers didn't care about that They could use steel Iron atoms and it had to be able to Repair itself and it also had to be Incredibly fuel efficient you know A lot of birds use less than half the Fuel of a remote control plane that's Flying the same distance For humans let's throw a little more put A little more fuel in a roof there you Go 100 years earlier That's exactly what's happening now with These large language models The brain is incredibly complicated Many people made the mistake you're Thinking we had to figure out how the Brain does human level AI first before We could build in the machine That was completely wrong you can take An incredibly simple Computational system called the Transformer Network and just train it to Do something incredibly dumb Just read a gigantic amount of text and Try to predict the next word And it turns out If you just throw a ton of compute at That and a ton of data it gets to be Frighteningly good like gpt4 which I've Been playing with so much since it came Out right And um There's still some debate about whether
That can get you all the way to full Human level or not But uh yeah we can come back to the Details of that and how you might get The human level AI even if A large language models don't Can you briefly if it's just a small Tangent comment on your feelings about Gpt4 so just that you're impressed by This rate of progress but where where is It can gpt4 reason What are like the intuitions what are Human interpretable words you can assign To the capabilities of gpt4 that makes You so damn impressed with it I'm both Very excited about it and terrified Interesting mixture of promotions all The best things in life include those Two somehow yeah I can absolutely reason Anyone who hasn't played with it I Highly recommend doing that before Dissing it It can do quite quite remarkable Reasoning and I've had to do a lot of things which I Realized I couldn't do that myself that Well even and and obviously does it Dramatically faster than we do too when You watch it type And it's doing that while servicing a Massive number of other humans at the Same time at the same time it cannot Reason As well as a human can on some tasks
Just because it's obviously a limitation From its architecture you know we have In our heads what in geekspeak is called The recurrent neural network there are Loops information can go from this Neuron the base neuron to this neuron And then back to this one you can like Ruminate on something for a while you Can self-reflect a lot uh these large Language models that are they cannot Like gpt4 it's it's a so-called Transformer where it's just like a One-way Street of information basically And geekspeak it's called the feed Forward neural network And it's only so deep so it can only do Logic that's that many steps and that Deep and it's not And you can so you can create problems Which will fail to solve you know for That reason Um But the fact that it can do so amazing Things with this incredible simple Architecture already it's quite stunning And and what we see in my lab at MIT When we look inside Large language models to try to figure Out how they're doing it that's the key Core focus of our research it's called Um mechanistic interpretability in geek Speak you know you have this machine it Does something smart you try to reverse Reverse engineer see how does it do it
Are you think of it also as artificial Neuroscience that's exactly what Neuroscientists do with actual brains But here you have the advantage that you Can you don't have to worry about Measurement errors you can see what Every neuron is doing all the time and And a recurrent thing we see again and Again There's been a number of beautiful Papers quite recently by By a lot of researchers some of them Here I am in this area is where when They figure out how something is done You can say oh man that's such a dumb Way of doing it and you immediately see How it can be improved like for example There was a beautiful paper recently Where they figured out how a large Language model stores certain facts like Eiffel Towers in Paris And they figured out exactly how it's Stored and where the proof that they Understood it was they could edit it They changed some of the synapses in it And then they asked it where's the Eiffel Tower and it said it's in Rome And then they asked you know how do you Get there oh how do you get there from Germany oh you take this train and to Roma Termini train station and this and That and what might you see if you're in Front of it oh you might see the Coliseum so they had edit it so they
Literally moved it to Rome but it the Way it's storing this information it's Incredibly dumb for for any fellow nerds Listening to this there was a big Matrix And a And roughly speaking there are certain Row and column vectors which encode These things and the they correspond Very highly related principal components And it will be much more efficient for Sparse Matrix just store it in the Database you know and and but and Everything so far we've figured out how These things do our ways where you can See they can easily be improved and the Fact that this particular architecture Has some roadblocks built into it is in No way going to prevent um craft the Researchers from quickly finding Workarounds and making Other kinds of architectures Sort of go all the way so so it's um in Short it's turned out to be a lot Easier to build human close to human Intelligence than we thought then that Means our Runway is a species that Get our together has shortened And it seems like the scary thing about The effectiveness of large language Models uh so Sam Altman every Conversation with And He really showed that the leap from gpt3 To gpt4 has to do with just a bunch of
Hacks A bunch of Uh little Explorations but with the Smart researchers doing a few little Fixes here and there it's not some Fundamental leap and transformation in The architecture and more data and more Compute and more data and compute but he Said the big leaps has to do with not The data in the compute but just Learning this new discipline just like You said so researchers are going to Look at these architectures and there Might be big leaps where you realize Wait why are we doing this in this dumb Way yeah and all of a sudden this model Is 10x smarter yeah and that that can Happen on any one day on anyone Tuesday Or Wednesday afternoon and then all of a Sudden you have a system that's 10x Smarter Um it seems like it's such a new Discipline it's such a new like we Understand so little about why this Thing works so damn well that uh the Linear Improvement of compute or Exponential but the steady Improvement Of compute steady Improvement of the Data may not be the thing that even Leads to the next leap it could be a Surprise little hack that improves Everything for a lot of little leaps Here and there because Because so much of this is out in the
Open also So many smart people are looking at this And trying to figure out little leaps Here and there and uh it becomes this Sort of collective race where if a lot Of people feel if I don't take the leap Someone else with and this is actually Very crucial for for the other part but Why do we want to slow this down so Again what this open letter is calling For is just pausing All training Of uh Systems that are more powerful than gpt4 For six months Let's give a chance For the labs to coordinate a bit on Safety and for society to adapt give the Right incentives to the labs because I You know you've interviewed a lot of These People who lead these labs and you know Just as well as I do that they're good People they're idealistic people they're Doing this First and foremost because they believe That AI has a huge potential to help Humanity and uh But at the same time they are trapped in This horrible race to the bottom Have you read meditations on malloc By Scott Alexander yes yeah it's a Beautiful essay on this poem by Ginsburg Where he interprets it as being about
This monster It's this game theory monster that that Pits people into against each other in This they race the bottom where Everybody ultimately loses the edit the Evil thing about this monster is even Though everybody sees it and understands They still can't get out of the race Right Most a good fraction of all the bad Things that we humans do are caused by Moloch and I I like uh Scott Alexander's Um Naming of the monster so we can we Humans can think of it as an f a thing If you look at why do we have Overfishing why do we have more Generally the tragedy of the commons why Is it that um I don't know if you've had her on your Podcast yeah she's become a friend yeah Great she made this awesome point Recently that beauty filters that a lot Of female Influencers feel pressured to use or Exactly malloc in action again first Nobody was using them And people saw them just the way they Were and then some of them started using It And becoming ever more Plastic Fantastic And then the other ones they weren't Using he started to realize that If they want to just keep their
Their market share they have to start Using it too And that and then you're in a situation Where they're all using it And and none of them has any more market Share or less than before so nobody Gained anything everybody lost And they have to keep becoming ever more Plastic Fantastic also right And uh But nobody can go back to the old way Because it's just Too costly right the malloc is Everywhere And um Molok is not a new arrival on on the Scene either we humans have developed a Lot of collaboration mechanisms to help Us fight back against Malik through Various kinds of constructive Collaboration the Soviet Union and the United States did sign the number of our Arms Control treaties Against moloch who is trying to stoke Them into Unnecessarily risky nuclear arms races Etc et cetera and this is exactly what's Happening on the AI front this time It's a little bit geopolitics but it's Mostly money where there's just so much Commercial pressure you know if you take Any of these Leaders of the top tech companies And if they just say you know this is
Too risky I want to pause For six months they're going to get a Lot of pressure from shareholders and Others We're like well you know if you pause But those guys don't pause We're If you don't want to get our lunch eaten Yeah and shareholders even have the Power to replace the the executives in The worst case right so We did this open letter because we want To help these idealistic Tech Executives To do What their heart tells them by providing Enough public pressure on the whole Sector Just pause so they can all pause In a coordinated fashion and I think Without the public pressure none of them Can do it alone Push back against their shareholders no Matter how good-hearted they are because Malik is a really powerful foe So the idea Is to For the major developers of AI systems Like this so we're talking about Microsoft Google Meta And anyone else well open AI is very Close with Microsoft and there are Plenty of smaller players and throw for Example anthropic which is very
Impressive there's conjecture there's Many many players I don't want to make a Long list so leave anyone out and Um For that reason it's so important that Some coordination happens that there's External pressure on all of them saying You all need the Pawns because then the The people the researchers in they were These organizations the leaders who want To slow down a little bit they can say Their shareholders you know Everybody's slowing down because of this Pressure and and it's the right thing to Do Have you seen in history their uh Examples what's possible to pause the Model absolutely And even like human cloning for example You could make so much money on human Cloning Why aren't we doing it Because biologists thought hard about This like this is way too risky we they Got together well in the 70s in the Cinema and decided even To stop a lot more stuff also just Editing the human germline right Gene editing that goes in To our offspring And decided let's let's not do this Because it's too unpredictable what it's Going to lead to We could lose control over what happens
To our species so they paused Uh there was a ton of money to be made There so it's it's very doable but you Just need you need a public awareness of The of what the risks are and the Broader Community coming in and saying Hey let's slow down and you know another Another common pushback I get today is We We Can't Stop in the west because China And in China undoubtedly they also get Told we can't slow down because the West Because both sides think they're the Good guy yeah But look at human cloning you know Did China Forge ahead with human cloning There's been exactly one human cloning That's actually been done that I know of It was done by a Chinese guy do you know Where he is now right in jail And you know who put him there Who Chinese government Not because westerners said China look This is no the Chinese government put Them there because they also felt they Like control the Chinese government if Anything maybe they are even more Concerned about having control then the Western governments have no incentive of Just losing control over where Everything is going And you can also see the Ernie bot that Was released by I believe I do recently They got a lot of pushback from the
Government and had to rein it in you Know in a big way Um I think once this basic message comes Out that this isn't an arms race it's a Suicide race Where everybody loses if anybody's AI Goes out of control it really changes The whole dynamic it it's not It's I'll say this again because this is this Very basic point I think a lot of people Get wrong because a lot of people Dismiss the whole idea that AI can Really get Very superhuman because they think There's something really magical about Intelligence such that it can only exist In human Minds you know because they Believe that they think it's going to Kind of get to just more or less Gpd4 plus plus and then that's it They don't see it as a super as a Suicide race they think whoever gets That first they're going to control the World they're going to win That's not how it's going to be and we Can talk again about The the scientific arguments from why It's not going to stop there but The way it's going to be is if if Anybody completely loses control and you Know you don't care if if Some some if someone manages this Takeover the world who really doesn't
Share your goals you probably don't Really even care very much about what Nationality they have you're not going To like it it's much worse than today Uh who if it's if you live in orwellian Dystopia who you what do you care who's Created it right and if someone if it Goes farther and and We just lose control even to the Machines So that it's not US versus them it's US Versus it What do you care who who created this This underlying entity which has goals Different from humans ultimately and we Get marginalized we get made obsolete we Get replaced That's why what I mean when I say it's a Suicide race you know it's um it's kind Of like we're rushing towards this cliff But the closer to the cliff we get the More Scenic the views are and the more Money there is there and the more so we Keep going But we have to also stop at some point Right quit while we're ahead and uh It's um It's a suicide race which cannot be won But the way that really benefit from it Is To continue developing awesome AI a Little bit slower so we make it safe Make sure it does the things that humans Want and create a condition where
Everybody wins the technology has shown Us that you know geopolitics and and Politics and general is not a zero-sum Game at all So there is some rate of development That will lead Us as a human species to lose control of This thing and the hope you have is that There's some lower level of development Which will not which will not allow us To lose control this is an interesting Thought you have about losing control so What if you have somebody if you're Somebody like Sandra pracha or Sam Altman at the head of a company like This you're saying if they develop an AGI they too will lose control of it So no one person can maintain control no Group of individuals can maintain if It's if it's created very very soon and As a big black box that we don't Understand like the large language Models yeah then I'm very confident They're going to lose control but this Isn't just me saying you know Sam Altman And then Mr sabis have both said Themselves acknowledge that you know There's really great risks with this and They they want to slow down once they Feel it gets scary It's but it's clear that they're stuck In this again molok is forcing them to Go a little faster than they're Comfortable with because of pressure
From just commercial pressures right To get a bit optimistic here of course This is a problem that can be ultimately Solved Uh It's just to win this wisdom race It's clear that what we hope that was Gonna happen hasn't happened the the Capability progress has gone faster than A lot of people thought then and the Part the progress in in the public Sphere of policy making and so on has Gone slower than we thought even the Technical AI safety has gone slower a Lot of the technical Safety Research was Kind of banking on that um Large language models and other poorly Understood systems couldn't get us all The way that you had to build more of a Kind of intelligence that you could Understand maybe it could prove itself Safe you know things like this And um I'm quite confident that this can be Done um so we can reap all the benefits But we cannot do it as quickly as uh This is out of control Express train We're on now is gonna get the AGI that's Why we need a little more time I feel Is there something to be said well like Sam Allman talked about which is while We're in the pre-agi stage to release Often and as transparently as possible To learn a lot
So as opposed to being extremely Cautious release a lot don't uh don't Invest in a closed development where you Focus on AI safety while is somewhat Dumb Quote unquote Uh release as often as possible and as You start to see signs of Uh human level intelligence or Superhuman level intelligence then you Put a halt on it well What a lot of safety researchers have Been saying for many years is the most Dangerous things you can do with an AI Is first of all teach it to write code Yeah because that's the first step Towards recursive self-improvement which Can take it from AGI to much higher Levels okay oops we've done that And uh another thing high risk is Connected to the internet Let It Go to Websites download stuff on its own and Talk to people Oops we've done that already you know Elias yukowski you said you interviewed Him recently right yeah so he had this Tweet recently which said Gave me one of the best laughs in a While and he's like hey people used to Make fun of me and say you're so stupid Eliezer because you're saying you're Saying um You have to worry of obviously Developers wants to get to like really
Strong AI first thing you're going to do Is like never connect it to the internet Keep It In The Box yeah where you know Where you can really study it So he had written it in the like in the Meme form so it's like then yeah and Then that and then now Let's LOL let's make a chatbot Yeah yeah and the third thing is Stuart Russell yeah you know Amazing AI researcher he had he has Argued for a while that We should never teach AI anything about Humans Above all we should never let it learn About human psychology and how you Manipulate humans That's the most dangerous kind of Knowledge you can give it yeah you can Teach it all it needs to know how to About how to cure cancer and stuff like That but don't let it read Daniel Kahneman's book about cognitive biases And all that and then Oops lol you know let's invent social Media I'll recommender algorithms which do Exactly that they they get so good at Knowing us and pressing our buttons That we've we're starting to create a World now where we just have ever more Hatred Because they figured out that these Algorithms not for out of evil but just
To make money on Advertising that the Best way to get more engagement The euphemism Get people glued to their little Rectangles right is just to make them Pissed off that's really interesting That a large AI system that's doing the Recommender system kind of task on Social media is basically just studying Human beings because it's a bunch of us Rats giving it signal Non-stop signal it'll show a thing and It would give signal on whether we Spread that thing we like that thing That thing increases our engagement gets Us to return to the platform and it has That on the scale of hundreds of Millions of people constantly so it's Just learning and learning and learning And presumably if the param the number Of parameters the neural network that's Doing the learning and more end-to-end The learning is The more it's able is just to basically Encode how to manipulate human behavior How to control humans at scale exactly And that is not something you think is a New man in his interest Yes right now it's mainly letting some Humans manipulate other humans for Profit And Power Which is already Caused a lot of damage and eventually
That's a sort of Skill that can make ai's persuade humans To let them escape and whatever safety Precautions yeah but you know there was A really nice article um and the New York Times recently by a you all know a Harari and and um two co-authors Including Justin Harris from the social Dilemma and They have this phrase in there I love Humanity's first contact with Advanced AI Or social media And we lost that one We now live in a country where there's Much more hate in the world where There's much more hate in fact And in our democracy that we're having This conversation then people can't even Agree on who won the last election you Know And we humans often point fingers at Other humans and say it's their fault But it's really molok and these AI Algorithms We got the algorithms and then molok Pitted the social media companies around Against each other so nobody could have A less creepy algorithm because then They would lose out on our Revenue to The other company is there any way to Win that battle back just if we just Linger on this one battle that we've Lost in terms of social media is it
Possible To redesign social media this very Medium in which we use as a civilization To communicate with each other to have These kinds of conversations to have Discourse to try to figure out how to Solve the biggest problems in the world Whether that's nuclear war or the Development of AGI is is it possible Uh to do social media correct I think It's not only possible but it's it's Necessary who are we kidding that we're Going to be able to solve all these Other challenges if we can't even have a Conversation with each other that's Constructive the whole idea the key idea Of democracy is that you get a bunch of People together And they have a real conversation the Ones you try to Foster on this podcast Or you respectfully listen to people you Disagree with And you realize actually you know there Are some things actually we some common Ground we have and that's it's yeah we Both agree let's not have a nuclear Wars Let's not do that Um etc etc We're kidding ourselves the thinking we Can face the off the Second contact with with ever more Powerful AI that's happening now with This large language models if we can't Even
Have a functional Conversation in the public space that's Why I started to improve the news Project to improve the news.org but um I I'm an optimist fundamentally in um And that there is a lot of intrinsic Goodness in in in people And that uh what makes the difference Between someone doing good things for For Humanity and bad things is not Some sort of fairy tale thing that this Person was born with the evil Gene and This one was not born with a good Gene No I think it's whether we put whether People Find themselves in situations that bring Out the best in them or they bring out The worst in them and I feel we're Building an internet And a society that brings out the worst But it doesn't have to be that way no it Does not it's possible to create Incentives and also create incentives That make money they both make money and Bring out the best in people I mean in The long term it's not a good investment For anyone you know to have a nuclear War for example And you know is it a good investment for Humanity if we just ultimately replace All humans by machines and then we're so Obsolete that eventually the There are no humans left Well it depends against how you do the
Math but like if I would say by any Reasonable economic started if you look At the future income of humans and there Aren't any you know that's not a good Investment Moreover like why why can't we have a Little bit of pride in our species damn It you know why should we just build Another species that gets rid of us if We were Neanderthals Would we really consider it a smart move If the If we had really Advanced biotech to Build homo sapiens You you know you might say hey Max you Know yeah let's build build the These Homo sapiens they're going to be Smarter than us maybe they can help us Defend us better against the Predators And help fix their bar caves make them Nicer and we'll control them undoubtedly You know so then they build build a Couple a little baby girl little baby Boy you know and And then you have some some wise old and Neanderthal Elder was like hmm I'm Scared that uh we're opening in Pandora's Box here and that we're going To get outsmarted by these Super Neanderthal intelligences and There won't be any neanderthals left and Then but then you have a bunch of others In the cave right yeah are you such a Luddite scaremonger of course they're
Going to want to keep us around because We are their creators and and why you Know the smaller I think the smarter They get the nicer they're gonna get They're gonna leave us they're gonna They're going to want this around and It's going to be fine and and besides Look at these babies they're so cute Clearly they're totally harmless that's Exact those babies are exactly gpt4 yeah It's not I want to be clear it's not Gpt4 That's terrifying it's the gpt4 is a Baby technology You know and Microsoft even had a paper Recently out Uh with a title something like sparkles Of AGI whatever basically saying this is Baby AI I like these little Neanderthal babies And it's going to grow up there's going To be other systems from from the same Company from other companies they'll be Way more powerful and but they're going To take all the things Ideas from these babies And before we know it we're gonna be Like Those last neanderthals who are pretty Disappointed and when they realized that They were getting replaced well this Interesting point you make which is the Programming is it's entirely possible That gpt4 is already
The kind of system that can Change everything By writing programs sorry it's yeah it's Because it's Life 2.0 The systems I'm afraid of are going to Look nothing like a large language model And they're not But once it gets once it or other people Figure out a way of using this Tech to Make much better Tech right it's just Constantly replacing its software and From everything we've seen about how how These work under the hood they're like The minimum viable intelligence they do Everything you know the dumbest way that Still works sort of yeah and um So they were life 3.0 except when they Replace their software it's a lot faster Than when you when when you decide to Learn Swedish And moreover they think a lot faster Than us too so when uh you know we don't Think uh have One Logical step every nanosecond or a few Or so the way they do and we can't also Just suddenly scale up our Hardware Massively in the cloud which we're so Limited right So they are in it they are also life Have Can soon be become a little bit more Like life 3.0 and that if they need more Hardware hey just rent it in the cloud
You know how do you pay for it well with All the services you provide Yeah And what we haven't seen yet Which could change a lot is uh entire Software System so right now programming Is done sort of in bits and pieces uh as As an assistant tool to humans but I do A lot of programming and with the kind Of stuff that gbt4 is able to do I mean It's replacing a lot what I'm able to do But I you still need a human in the loop To kind of manage the design of things Manage like what are the prompts that Generate the kind of stuff to do some Basic adjustment of the codes let's do Some debugging but if it's possible to Add on top of GPT for kind of a feedback Loop of of uh self-debugging improving The code and then you launch that system Out into the wild on the internet Because everything is connected and have It do things have it interact with Humans and then get that feedback now You have this giant ecosystem yeah of Humans that's one of the things that uh Yeah Elon Musk recently sort of tweeted As a case why everyone needs to pay Seven dollars or whatever for Twitter to Make sure they're real they're make sure They're real we're now going to be Living in a world where the the Bots are Getting smarter and smarter and smarter To a degree where you can't uh you can't
Tell the difference between a human and A bot that's right and now you can have Uh Bots outnumber humans by yeah one Million to one which is why he's making The case why you have to pay yeah to Prove you're human which is one of the Only mechanisms which is depressing and I yeah I feel we have to remember As individuals we should from time to Time ask ourselves why are we doing what We're doing all right then as a species We need to do that too So if we're building as as you say Machines that are outnumbering us And more and more outsmarting us and and Replacing us on the job market not just For the dangerous and and boring tasks But also for writing poems and doing art And things that a lot of people find Really meaningful God ask yourself why Why are we doing this Uh we are the answer is moloch is Tricking us into doing it And it's such a clever trick that even Though we see the trick we still have no Choice but to fall for it right Come also the thing you said about you Using uh Co-pilot AI tools to program faster how Many time what factor faster would you Say your code now does it go twice as Fast or I don't really Uh because it's such a new tool yeah It's I don't know if speed is
I don't know if my speed has significantly improved yet, but it feels like I'm a year away from being five to ten times faster.

If that's typical for programmers, then you're already seeing another kind of recursive self-improvement, right? Because previously, a major generation of improvement of the code would happen on the human R&D timescale, and if that's now five times shorter, it's going to take five times less time than it otherwise would to develop the next level of these tools, and so on. This is exactly the sort of beginning of an intelligence explosion. There can be humans in the loop a lot in the early stages, and then eventually humans are needed less and less and the machines go on more alone. What you said there is just an exact example of these sorts of things.

Another thing: here I'm kind of lying on the psychiatrist's couch, imagining I'm saying what my fears are about what people would do with AI systems. I mentioned three that I had fears about many years ago, namely, teach it to code, connect it to the internet, and teach it to manipulate humans. A fourth one is building an API where code can control the super powerful thing. That is very unfortunate, because one thing that systems like GPT-4 have going for them is that they are an oracle, in the sense that they just answer questions. There's no robot connected to GPT-4; GPT-4 can't go and do stock trading based on its thinking. It is not an agent. An intelligent agent is something that takes in information from the world, processes it to figure out what action to take based on the goals that it has, and then does something back on the world. But once you have an API, for example to GPT-4, nothing stops Joe Schmoe and a lot of other people from building real agents, which just keep making calls, somewhere in some inner loop, to these powerful oracle systems, and that makes them much more powerful. That's another kind of unfortunate development which I think we would have been better off delaying. I don't want to pick on any particular companies; I think they're all under a lot of pressure to make money. And again, the reason we're calling for this pause is to give them all cover to do what they know is the right thing: slow down a little bit at this point.
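The gap between an oracle and an agent really is just a loop around an API. A minimal sketch, where `oracle`, `observe`, and `act` are hypothetical stand-ins rather than any real service:

```python
# Sketch of the point being made: an "oracle" only answers questions, but a
# thin wrapper turns it into an agent. All names here are hypothetical.
def oracle(question: str) -> str:
    """Stand-in for an API call to a question-answering model."""
    raise NotImplementedError

def agent_loop(goal: str, observe, act, steps: int = 100) -> None:
    for _ in range(steps):
        observation = observe()  # take in information from the world
        decision = oracle(
            f"Goal: {goal}\nObservation: {observation}\nWhat action should I take next?"
        )
        act(decision)  # act back on the world based on the oracle's answer
```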
But everything we've talked about, I hope, will make it clear to people watching why these sorts of human-level tools can cause a gradual acceleration: you keep using yesterday's technology to build tomorrow's technology. When you do that over and over again, you naturally get an explosion. That's the definition of an explosion in science. If you have two people and they fall in love, now you have four people, and then they can make more babies, and now you have eight people, and then 16, 32, 64, et cetera; we call that a population explosion. If instead it's free neutrons in a nuclear reaction, where each one can make more than one, you get exponential growth and we call it a nuclear explosion. All explosions are like that, and an intelligence explosion is exactly the same principle: some amount of intelligence can make more intelligence than itself, and then repeat. You always get exponentials.
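A toy way to see the arithmetic behind this: if each generation of tools makes the next generation some factor faster to build, the arrival times form a geometric series, and unboundedly many generations fit into a finite span of time. The numbers below are purely illustrative.

```python
# Toy model of the feedback loop described above: each tool generation makes
# the next one `speedup` times faster to build. With speedup > 1, the time to
# reach generation n is a geometric series that converges, so progress per
# year diverges instead of leveling off. Numbers are purely illustrative.
def time_to_generation(n: int, first_gen_years: float = 2.0, speedup: float = 5.0) -> float:
    return sum(first_gen_years / speedup**k for k in range(n))

for n in [1, 5, 10, 50]:
    print(f"gen {n:>2}: arrives after {time_to_generation(n):.3f} years")
# Total time is bounded by first_gen_years * speedup / (speedup - 1),
# i.e. 2.5 years here, no matter how many generations you count.
```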
You mentioned there are some technical reasons why it doesn't stop at a certain point. What's your intuition? Do you have any intuition for why it might stop?

It's obviously going to stop when it bumps up against the laws of physics. There are some things you just can't do no matter how smart you are, allegedly. Seth Lloyd wrote a really cool paper on the physical limits on computation. For example, if you put too much energy into a finite space, it'll turn into a black hole; you can't move information around faster than the speed of light, stuff like that; and it's hard to store much more than a modest number of bits per atom, et cetera. But those limits are just astronomically above where we are now, like 30 orders of magnitude above. So that's a bigger jump in intelligence than if you go from an ant to a human, I think.
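For reference, the kind of bound being gestured at here, quoted from memory from Lloyd's "Ultimate physical limits to computation" (Nature, 2000), so treat the exact constants as approximate:

```latex
% Lloyd's bound: a system of energy E can perform at most
%   N_ops <= 2E / (pi * hbar)  logical operations per second.
\[
  N_{\text{ops}} \;\le\; \frac{2E}{\pi\hbar}
\]
% For one kilogram of matter, E = mc^2 \approx 9 \times 10^{16}\,\mathrm{J}:
\[
  N_{\text{ops}} \;\approx\;
  \frac{2 \times 9\times10^{16}\,\mathrm{J}}{\pi \times 1.05\times10^{-34}\,\mathrm{J\,s}}
  \;\approx\; 5\times10^{50}\ \text{operations per second.}
\]
```

Compared with today's largest supercomputers, at roughly $10^{18}$ operations per second, that is indeed on the order of thirty-plus orders of magnitude of headroom, consistent with the figure mentioned above.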
Of course, what we want is to have a controlled thing. With a nuclear reactor, you put moderators in to make sure it doesn't blow up out of control. When we do experiments with biology and cells and so on, we also try to make sure they don't get out of control. We can do this with AI too. The thing is, we haven't succeeded yet, and Moloch is doing exactly the opposite: egging everybody on, faster, faster, faster, or the other company is going to catch up with you, or the other country is going to catch up with you. We really have to want this. And I don't believe in just asking people to look into their hearts and do the right thing; it's easier for others to say that. If you're in a situation where your company is going to get screwed by other companies that are not stopping, you're putting people in a very hard situation. The right thing to do is change the whole incentive structure instead. And this is not a new problem. Maybe I should say one more thing about this, because Moloch has been around as humanity's number one or number two enemy since the beginning of civilization, and we came up with some really cool countermeasures. First of all, already over a hundred thousand years ago, evolution realized that it was very unhelpful that people kept killing each other all the time. So it genetically gave us compassion, and made it so that if you get two drunk dudes getting into a pointless bar fight, they might give each other black eyes, but they have a lot of inhibition toward just killing each other. And similarly, if you find a baby lying on the street when you go out for your morning jog tomorrow, you're going to stop and pick it up, even though it might make you late for your next podcast. So evolution gave us these genes that make our own egoistic incentives more aligned with what's good for the greater group we're part of. Then, as we got a bit more sophisticated and developed language, we invented gossip, which is also a fantastic anti-Moloch mechanism, because it really discourages liars, moochers, and cheaters: their incentive now is not to do those things, because word quickly gets around, and suddenly people aren't going to invite them to their dinners anymore, or trust them. And when we got still more sophisticated, with bigger societies, we invented the legal system, so that even strangers, who couldn't rely on gossip, would have an incentive to treat each other well.
Now those guys in the bar fight, even if one of them is so drunk that he actually wants to kill the other guy, he also has a little thought in the back of his head: do I really want to spend the next ten years eating really crappy food in a small room? I'm just going to chill out. And we have similarly tried to give these incentives to our corporations, by having regulation and all sorts of oversight, so that their incentives are aligned with the greater good. We've tried really hard. The big problem we're facing now is not that we haven't tried before, but that the tech is developing much faster than the regulators have been able to keep up. It's kind of comical: the European Union right now is doing this AI Act, and in the beginning it had a little opt-out exception so that systems like GPT-4 would be completely excluded from the regulation.

Brilliant idea. What's the logic behind that?

Some lobbyists pushed successfully for it. We at the Future of Life Institute were actually quite involved: Mark Brakel, Risto Uuk, Anthony Aguirre and others were talking to and educating various people involved in this process about these general-purpose AI models that are coming, and pointing out that the regulators would become the laughingstock if they didn't put them in. The French started pushing for it, it got put into the draft, and it looked like all was good. Then there was a huge counter-push from lobbyists; there were more lobbyists in Brussels from tech companies than from oil companies, for example, and it looked like it might get taken out again. And then GPT-4 happened, and I think it's going to stay in. So this just shows that Moloch can be defeated. But the challenge we're facing is that the tech is generally moving much faster than the policymakers are, and a lot of the policymakers also don't have a tech background, so we really need to work hard to educate them on what's taking place here.
So we're getting into a situation where, you know, I define artificial intelligence just as non-biological intelligence, and by that definition a company, a corporation, is also an artificial intelligence, because the corporation isn't its humans, it's the system. If the CEO of a tobacco company decides one morning that she or he doesn't want to sell cigarettes anymore, they'll just put another CEO in there. So it's not enough to align the incentives of individual people, or to align individual computers' incentives to their owners, which is what technical AI safety research is about. You also have to align the incentives of corporations with the greater good. And some corporations have gotten so big and so powerful so quickly that in many cases their lobbyists instead align the regulators to what they want, rather than the other way around: the classic regulatory capture.

Is the thing the slowdown hopes to achieve to give enough time to the regulators to catch up, or enough time for the companies themselves to breathe and understand how to do AI safety correctly?

I think both. The path to success I see is, first, you give a breather to the people in these companies, to their leadership who want to do the right thing, and they all have safety teams and so on. Give them a chance to get together with the other companies, and the outside pressure can also help catalyze that, and work out what are the reasonable safety requirements one should put on future systems before they get rolled out. There are a lot of people also in academia and elsewhere, outside of these companies, who can be brought into this and who have a lot of very good ideas. And then I think it's very realistic that within six months you can get these people to come up with a white paper: here's where we all think it's reasonable to draw the lines. You know, just because cars killed a lot of people, they didn't ban cars; they got together a bunch of people and decided that in order to be allowed to sell a car, it has to have a seat belt in it. There are analogous things you can start requiring of future AI systems so that they are safe. And once this heavy lifting, this intellectual work, has been done by experts in the field, which can be done quickly, I think it's then quite easy to get policymakers to see that this is a good idea.
And for the companies, to fight Moloch, they want this; I believe Sam Altman has explicitly called for it. They want the regulators to actually adopt it, so that their competition is going to abide by it too. You don't want to be enacting all these principles, abiding by them, and then have one little company that doesn't sign on gradually overtake you. Then the companies will be able to sleep securely, knowing that everybody's playing by the same rules.

So do you think it's possible to develop guardrails that keep these systems from basically damaging humanity irreparably, while still enabling the sort of capitalism-fueled competition between companies as they figure out how best to make money with this AI? You think there's a balance that's possible?

Totally, absolutely. We've seen that in many other sectors, where the free market produces quite good things without causing particular harm, when the guardrails are there and they work. Capitalism is a very effective way of optimizing for just getting things done more efficiently. And in hindsight, I've never met anyone, even on parties way over on the right in any country, who thinks it was a terrible idea to ban child labor, for example.

Yeah, but it seems like this particular technology has gotten so good so fast, become so powerful, that you can see in the near term the ability to make a lot of money, and developing guardrails quickly in that kind of context seems tricky. It's not similar to cars or child labor. The opportunity to make a lot of money here, very quickly, is right there.

So again, there's this cliff, and there's gold lying on the ground as you drive toward it, so you want to drive there very fast. But it's not in anyone's incentive that we go over the cliff, and it's not like everybody's in their own car: all the cars are connected together with a chain, so if anyone goes over, they'll start dragging the others down too.
And so ultimately it's in the selfish interest also of the people in the companies to slow down when you start seeing the contours of the cliff there in front of you. The problem is that even though the people who are building the technology, and the CEOs, really get it, the shareholders and the other market forces are people who don't honestly understand that the cliff is there. They usually don't; you have to get quite into the weeds to really appreciate how powerful this is and how fast it's moving. And a lot of people are even still stuck in this idea about intelligence, this carbon chauvinism as I like to call it, that you can only have our level of intelligence in humans, that there's something magical about it. Whereas the people in the tech companies who build this stuff all realize that intelligence is information processing of a certain kind, and it really doesn't matter at all whether the information is processed by carbon atoms in neurons in brains, or by silicon atoms in some technology we build.

You brought up capitalism earlier, and there are a lot of people who love capitalism and a lot of people who really, really don't. It struck me recently that what's happening with capitalism here is exactly analogous to the way in which superintelligence might wipe us out. You know, I studied economics for my undergrad, at the Stockholm School of Economics.

Yay.

Well, I tell you this because I was someone very interested in how you could use market forces to just get stuff done more efficiently, but give the right incentives to the market so that it wouldn't do really bad things. Dylan Hadfield-Menell, who is a professor and colleague of mine at MIT, wrote a really interesting paper with some collaborators recently, where they proved mathematically that if you just take one goal and optimize for it, on and on, indefinitely, do you think it's going to keep bringing you in the right direction? What basically always happens is that in the beginning it will make things better for you, but if you keep going, at some point it's going to start making things worse for you again, and then gradually it's going to make things really, really terrible.
The way I think of the proof is: suppose you want to go from here back to Austin, for example, and you say, okay, let's just go south. You put in roughly the right direction and just optimize going south as much as possible, and you get closer and closer to Austin. But there's always some little error, so you're not going exactly toward Austin. You get pretty close, but eventually you start moving away again, and eventually you're going to be leaving the solar system. They proved, and it's a beautiful mathematical proof, that this happens generally.
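A numerical toy version of that argument, with made-up numbers: optimize movement along a direction ten degrees off from where you actually want to go, and the true objective improves for a while, then turns around and gets worse forever.

```python
# Toy version of the "drive south toward Austin" proof sketch: optimize hard
# along a direction slightly off from what you actually want, and your true
# objective first improves, then degrades without bound.
import math

target = (100.0, 0.0)             # where you actually want to go
angle = math.radians(10)          # your optimized direction is 10 degrees off
direction = (math.cos(angle), math.sin(angle))

for step in [0, 50, 98, 150, 300]:
    pos = (step * direction[0], step * direction[1])
    dist = math.dist(pos, target)
    print(f"after {step:>3} steps, distance to target = {dist:7.2f}")
# Distance shrinks until roughly step 98 (= 100 * cos(10 deg)), then grows
# forever: over-optimizing the proxy eventually ruins the true goal.
```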
And this is very important for AI, because even though Stuart Russell has written a book and given a lot of talks on why it's a bad idea to have AI just blindly optimize something, that's what pretty much all our systems do. We have something called a loss function that we're just minimizing, or a reward function we're just maximizing. And capitalism is exactly like that too. We wanted to get stuff done more efficiently, the stuff people wanted, so we introduced the free market. Things got done much more efficiently than they did in, say, communism, and it got better. But then we just kept optimizing and kept optimizing, and you got ever bigger companies and ever more efficient information processing, now also very much powered by IT. And eventually a lot of people are beginning to feel: wait, we're kind of optimizing a bit too much. Why did we just chop down half the rainforest? Why did these regulators suddenly get captured by lobbyists? It's just the same optimization that's been running for too long.

If you have an AI that actually has power over the world, and you just give it one goal and keep optimizing for it, most likely everybody's going to be like, yay, this is great, in the beginning: things are getting better. But it's almost impossible to give it exactly the right direction to optimize in, and then eventually all hell breaks loose. Nick Bostrom and others have given examples that sound quite silly, like: what if you just tell it to cure cancer, and that's all you tell it? Maybe it's going to decide to take over entire continents just so it can get more supercomputer facilities in there and figure out how to cure cancer, and then you're like, wait, that's not what I wanted.

And the issues with capitalism and the issues with runaway AI have kind of merged now, because the Moloch I talked about is exactly the capitalist Moloch. We have built an economy that is optimizing for only one thing: profit. That worked great back when things were very inefficient, and as long as the companies were small enough that they couldn't capture the regulators. But that's not true anymore, and they keep optimizing. And now these companies realize that they can make even more profit by building ever more powerful AI, even if it's reckless, so they optimize more and more and more. So this is Moloch again, showing up.
And I just want to say to anyone here who has any concerns about late-stage capitalism having gone a little too far: you should worry about superintelligence, because it's the same villain in both cases, Moloch. Optimizing one objective function aggressively, blindly, is going to take us there. We have to pause from time to time and look into our hearts and ask: why are we doing this? Am I still going toward Austin, or have I gone too far? Maybe we should change direction.

And that is the idea behind the halt for six months. Why six months? It seems like a very short period. Can we just linger and explore different ideas here? Because this feels like a really important moment in human history, where pausing would actually have a significant positive effect.

We said six months because we figured the number one pushback we were going to get in the West was, "but China!" And everybody knows there's no way that China is going to catch up with the West on this in six months. So that argument goes off the table, and you can forget about geopolitical competition and just focus on the real issue. That's why we put that in.

That's really interesting. But you've already made the case that even for China, if you actually want to take on that argument, China too would not be bothered by a longer halt, because they don't want to lose control even more than the West does.

That's what I think.

That's a really interesting argument I actually have to really think about. The kind of thing people assume is that if you develop an AGI, whoever develops it, OpenAI for example, they're going to win. But you're saying no, everybody loses.

Yeah. It's going to get better and better and better, and then kaboom, we all lose. That's what's going to happen.

Where "lose" and "win" are defined in a metric of basically quality of life, for human civilization and for Sam Altman?

To be blunt, my personal guess, and people can quibble with this, is that there just won't be any humans. That's it. That's what I mean by lose. You can see in history: once you have some species or some group of people who aren't needed anymore, it doesn't usually work out so well for them. There were a lot of horses that were used for transport in Boston, and then the car got invented, and most of them got, well, we don't need to go there.
And if you look at humans: why did the labor movement succeed after the Industrial Revolution? Because it was needed. Even though we had a lot of Moloch, and there was child labor and so on, the companies still needed to have workers, and that's why strikes had power. If we get to the point where most humans aren't needed anymore, I think it's quite naive to think that they're going to still be treated well. We say that everybody's equal and the government will always protect them, but if you look at practice, groups that are very disenfranchised and don't have any actual power usually get screwed.

In the beginning, in the Industrial Revolution, we automated away muscle work, but that worked out pretty well eventually, because we educated ourselves and started working with our brains instead, and got usually more interesting, better-paid jobs. But now we're beginning to replace brain work. We replaced a lot of boring stuff; we got the pocket calculator, so you don't have people adding and multiplying numbers by hand at work anymore. Fine, there were better jobs they could get. But now GPT-4, and Stable Diffusion, and techniques like this are really beginning to blow away some jobs that people really loved having. There was a heartbreaking article posted just yesterday on social media about a guy who was doing 3D modeling for gaming, and all of a sudden now there's this new software where you just give it prompts, and he feels his whole job, which he loved, has lost its meaning. And I asked GPT-4 to rewrite "Twinkle Twinkle Little Star" in the style of Shakespeare, and I couldn't have done such a good job. It was just really impressive. You've seen a lot of the art coming out of these systems too.

So I'm all for automating away the dangerous jobs and the boring jobs, but I think you hear some arguments that are too glib. Sometimes people say, well, that's all that's going to happen: we're just getting rid of the boring, tedious, dangerous jobs. It's just not true. There are a lot of really interesting jobs being taken away now. Journalism is going to get crushed. Coding is going to get crushed. I predict the job market for programmers, the salaries, are going to start dropping.
If you can code five times faster, then you need five times fewer programmers. Maybe there will be more output too, but you'll still end up needing fewer programmers than today. And I love coding; I think it's super cool. So we need to stop and ask ourselves why, again, we are doing this as humans. I feel that AI should be built by humanity, for humanity. Let's not forget that. It shouldn't be by Moloch, for Moloch. And what it really is now is kind of by humanity, for Moloch, which doesn't make any sense. It's for us that we're doing it. It would make a lot more sense if we figure out, gradually and safely, how to make all this tech, and then think about which kinds of jobs people really don't want to have, and automate those all the way, and then ask which are the jobs that people really find meaning in, like maybe taking care of children in a daycare center, or doing art, et cetera. Even if it were possible to automate those away, we don't need to do that. We built these machines.

Well, it's possible that we redefine, or rediscover, which jobs give us meaning. For me, it's really sad. Half the time I'm excited, half the time I'm crying as I'm generating code, because I kind of love programming. It's the act of creation. You have an idea, you design it, and then you bring it to life, and it does something, especially if there's some intelligence in what it does. It doesn't even have to have intelligence: printing "hello world" on the screen, you made a little machine and it comes to life. And there's a bunch of tricks you learn along the way, because you've been doing it for many, many years, and then to see AI generate all the tricks you thought were special... I don't know, it's scary, it's almost painful. Like a loss of innocence, maybe. When I was younger, before I learned that sugar is bad for you and you should be on a diet, I remember I enjoyed candy deeply, in a way I just can't anymore now that I know it's bad for me. I enjoyed it unapologetically, fully, intensely, and I lost that. Now I feel like a little bit of that is lost for me with programming, similar to how it is for the 3D modeler who is no longer able to really enjoy the art of modeling 3D things for gaming.
I don't know what to make of that. Maybe I would rediscover that the true magic of what it means to be human is connecting with other humans: to have conversations like this, to have sex, to eat food, to really intensify the value of conscious experiences, versus creating other stuff.

You're pitching the rebranding of Homo sapiens again: the meaningful experiences.

And just to inject some optimism here, so we don't sound like a bunch of gloomers: we can totally have our cake and eat it too. You hear a lot of claims that we can't afford having more teachers, that we have to cut the number of nurses. That's just nonsense, obviously. With anything even quite far short of AGI, we can dramatically grow the GDP and produce this wealth of goods and services. It's very easy to create a world where everybody is better off than today, including the richest people. It's not a zero-sum game. You can have two countries, like Sweden and Denmark, that had all these ridiculous wars century after century, and sometimes Sweden got a little better off because it got a little bit bigger, and then Denmark got a little better off because Sweden got a little bit smaller. But then technology came along, and we both got just dramatically wealthier without taking away from anyone else. It was just a total win for everyone. And AI can do that on steroids. If you can build safe AGI, if you can build superintelligence, basically all the limitations that cause harm today can be completely eliminated. It's a wonderful possibility. This is not sci-fi; this is something that is clearly possible according to the laws of physics. And we can talk about ways of making it safe as well.

But unfortunately, that will only happen if we steer in that direction. That's absolutely not the default outcome. That's why income inequality keeps going up, and why life expectancy in the US has been going down, I think it's four years in a row now. I just read a heartbreaking study from the CDC about how something like one-third of all teenage girls in the US have been thinking about suicide.
Those are steps in totally the wrong direction, and it's important to keep our eyes on the prize here: that we have the power now, for the first time in the history of our species, to harness artificial intelligence to help us really flourish, to help bring out the best in our humanity rather than the worst of it, and to help us have really fulfilling experiences that feel truly meaningful. And you and I shouldn't sit here and dictate to future generations what those will be. Let them figure it out, but let's give them a chance to live, and not foreclose all these possibilities for them by messing things up.

Well, for that we'll have to solve the AI safety problem. It would be nice if we could linger on exploring that a little bit. One interesting way to enter that discussion is this: you tweeted, and Elon Musk replied. You tweeted, "Let's not just focus on whether GPT-4 will do more harm or good on the job market, but also whether its coding skills will hasten the arrival of superintelligence." That's something we've been talking about, right? So Elon proposed one thing in the reply, saying maximum truth-seeking is his best guess for AI safety. Can you maybe steelman the case for this objective function of truth, and maybe make an argument against it? And in general, what are your different ideas for starting to approach the solution to AI safety?

I didn't see that reply, actually.

Oh, interesting.

But I really resonate with it. AI is not inherently evil; it has caused people around the world to hate each other much more, but that's because we made it in a certain way. It's a tool. We can use it for great things and bad things, and we could just as well have AI systems, and this is part of my vision for success here, truth-seeking AI that really brings us together again. Why do people hate each other so much, between countries and within countries? It's because they each have totally different versions of the truth. If they all had the same truth, one they trusted for good reason, because they could check it and verify it, and not have to believe in some self-proclaimed authority, there wouldn't be nearly as much hate. There would be a lot more understanding instead. And this is something I think AI can help enormously with.
For example, a little baby step in this direction is this website called Metaculus, where people bet and make predictions, not for money, but just for their own reputations. And it's kind of funny, actually: you treat the humans like you treat AIs. You have a loss function where they get penalized if they're super confident about something and then the opposite happens, whereas if you're humble and say, "I think there's a 51 percent chance this is going to happen," and then the other thing happens, you don't get penalized much. And what you can see is that some people are much better at predicting than others. They've earned your trust.
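A sketch of the kind of scoring being described, in the spirit of a logarithmic proper scoring rule; the function and numbers here are illustrative, not Metaculus's actual implementation:

```python
# Log scoring rule: your reward is the log of the probability you assigned
# to what actually happened. Confidently wrong costs far more than humbly
# wrong, which rewards calibrated forecasters.
import math

def log_score(p: float, outcome: bool) -> float:
    """Log of the probability assigned to the realized outcome."""
    return math.log(p if outcome else 1.0 - p)

# Suppose the event happens (outcome=True):
print(log_score(0.51, True))   # ~ -0.67: said 51%, mild penalty either way
print(log_score(0.99, True))   # ~ -0.01: confident and right, near-zero loss
print(log_score(0.99, False))  # ~ -4.61: confident and wrong, huge penalty
```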
One project I'm working on right now, an outgrowth of the Improve the News Foundation, together with the Metaculus folks, is seeing if we can really scale this up a lot with more powerful AI. Because I would love for there to be a really powerful truth-seeking system that is trustworthy because it keeps being right about stuff, where people can come and maybe look at its latest trust ranking of different pundits and newspapers, et cetera. If they want to know why someone got a low score, they can click on it and see all the predictions that person actually made, and how they turned out. This is how we do it in science: you trust scientists like Einstein, who said something everybody thought was crazy and turned out to be right. You get a lot of trust points, and he did it multiple times, even. I think AI has the power to really heal a lot of the rifts we're seeing, by creating trust systems. It has to get away from the model of today, where some fact-checking sites, which might themselves have an agenda, are trusted purely on reputation. You want these sorts of systems to earn their trust, and to be completely transparent. This, I think, would actually help a lot; I think it would help heal the very dysfunctional conversation humanity is having about how it's going to deal with its biggest challenges in the world today.

And then, on the technical side, another common sort of gloomy comment I get from people is: we're just screwed, there's no hope, because things like GPT-4 are way too complicated for a human to ever understand and prove trustworthy. They're forgetting that AI can help us prove that things work. There's this very fundamental fact that in math, it's much harder to come up with a proof than to verify that the proof is correct. You can write a little proof-checking program that's quite short, which you can assume to understand, and it can check the most monstrously long proof ever generated, even by a computer, and say, yeah, this is valid. So right now we have this approach with virus-checking software: it looks to see whether there's something you should not trust, and if it can convince itself that you should not trust that code, it warns you. What if you flip this around? And this is an idea I should give credit to Steve Omohundro for. It will only run the code if it can prove that it's trustworthy, instead of refusing to run it only if it can prove it's untrustworthy. You ask the code: prove to me that you're going to do what you say you're going to do. It gives you the proof, and your little proof checker can check it. Now you can actually trust an AI that's much more intelligent than you are, because it's the AI's problem to come up with this proof, a proof you could never have found yourself, and yet you can check it.
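A minimal sketch of that "virus checking in reverse" gatekeeper. The `verify` function is a toy stand-in: real versions of this idea presuppose a small trusted proof checker and a formal specification of the code's behavior, neither of which is shown here.

```python
# Gatekeeper sketch: refuse to run anything that does not arrive with a
# certificate our own small, trusted checker accepts. The point of the idea
# is that the checker stays simple even when the prover is superintelligent.
def verify(code: str, certificate: str) -> bool:
    """Toy stand-in for a trusted proof checker over a spec of `code`."""
    raise NotImplementedError

def run_if_proven(code: str, certificate: str | None) -> None:
    if certificate is None or not verify(code, certificate):
        # Default is distrust: no proof, no execution. We forfeit some
        # capability, but never run something we cannot check.
        raise PermissionError("no verifiable proof of safe behavior; refusing to run")
    exec(code)  # only reached when the certificate checks out
```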
This is an interesting point. I agree with you, but this is where Eliezer Yudkowsky might disagree, not with you, but with this idea. His claim is that a superintelligent AI would be able to know how to lie to you, with such a proof.

To lie to me, and give me a proof that I'm going to think is correct? But it's not me it's lying to, that's the trick: it's my proof checker, which is a piece of code.

His general idea is that a superintelligent system can lie to a dumber proof checker. As a system becomes more and more intelligent, there's going to be a threshold where a superintelligent system can effectively lie to a slightly dumber AGI system. He really focuses on this jump from weak AGI to strong AGI, where the strong AGI can make all the weaker AGIs think it's just one of them, but it's no longer that. And that leap is when it runs away.

I don't buy that argument. I think no matter how superintelligent an AI is, it's never going to be able to prove to me that there are only finitely many primes, for example. It can try to snow me by making up all sorts of new weird rules of deduction and saying, trust me, the way your proof checker works is too limited, and we have this new hyper-math, and it's true. But then I would just take the attitude: okay, I'm going to forfeit some of this supposedly super cool technology, and I'm only going to go with the things that I can prove in my own trusted proof checker. Then I think it's fine. Of course, this is not something anyone has successfully implemented at this point, but I give it as an example of hope. We don't have to do all the work ourselves. This is exactly the sort of very boring and tedious task that is perfect to outsource to an AI, and this is a way in which less powerful and less intelligent agents like us can actually continue to control and trust more powerful ones.

So, build AGI systems that help us defend against other AGI systems?

Well, for starters, begin with the simple problem of just making sure that the system you own, the one that's supposed to be loyal to you, has to prove to itself that it's always going to do the things that you actually want it to do. If it can't prove it, maybe it's still going to do them, but you won't run it. So you just forfeit some aspects of all the cool things AI can do, and I bet you dollars to donuts it can still do some incredibly cool stuff for you.

There are other things too that we shouldn't sweep under the rug, like the fact that not every human agrees on exactly what direction we should take humanity in.

Yes, and you've talked a lot about geopolitical things on your podcast to this effect. But I think that shouldn't distract us from the fact that there are actually a lot of things that virtually everybody in the world agrees on. Like, having no humans on the planet in the near future: let's not do that. If you look at something like the United Nations Sustainable Development Goals, some of them are quite ambitious, and basically all the countries agree: the US, China, Russia, Ukraine, they all agree. So instead of quibbling about the little things we don't agree on, let's start with the things we do agree on and get them done, instead of being so distracted by all the things we disagree on that Moloch wins.
Because, frankly, Moloch going wild now feels like a war on life playing out in front of our eyes. If you just look at it from space: we're on this planet, a beautiful, vibrant ecosystem, and we start chopping down big parts of it, even though most people thought that was a bad idea. We start doing ocean acidification, wiping out all sorts of species. Now we've had all these close calls; we almost had a nuclear war. And we're replacing more and more of the biosphere with non-living things. We're also replacing, in our social lives, a lot of the things that were so valuable to humanity. A lot of social interactions are now replaced by people staring into their rectangles. I'm not a psychologist, I'm out of my depth here, but I suspect that part of the reason why teen suicide, and suicide in general in the US, is at record-breaking levels is actually caused by, again, AI technologies and social media making people spend less time with actual human interaction. We've all seen a bunch of good-looking people in restaurants staring into their rectangles instead of looking into each other's eyes.

So that's also part of the war on life: we're replacing so many really life-affirming things with technology. We're putting technology between us. The technology that was supposed to connect us is actually distancing us from each other. And then we're giving ever more power to things that are not alive. These large corporations are not living things; they're just maximizing profit. I want to win the war on life. I think we humans, together with all our fellow living things on this planet, will be better off if we can remain in control over the non-living things and make sure that they work for us.

Can you linger on this maybe high-level philosophical disagreement with Eliezer Yudkowsky, and the hope you're stating? He is very sure. He puts a very high probability, very close to one, and depending on the day he puts it at one, that AI is going to kill all humans. He does not see a trajectory that doesn't end up with that conclusion. Which trajectory do you see that doesn't end up there? And maybe, can you see the point he's making, and can you also see a way out?
First of all, I tremendously respect Eliezer Yudkowsky and his thinking. Second, I do share his view that there's a pretty large chance that we're not going to make it as humans, that there won't be any humans on the planet in the not-too-distant future. And that makes me very sad. We just had a little baby, and I keep asking myself how old he is even going to get. I said to my wife recently that it feels a little bit like I was just diagnosed with some sort of cancer, with some risk of dying from it and some risk of surviving, except this is the kind of cancer that would kill all of humanity. So I completely take his concerns seriously. But I absolutely don't think it's hopeless.

First of all, there is a lot of momentum now. For the first time in all the years that have passed since I and many others started warning about this, I feel most people are getting it. I was just talking to a guy at the gas station who was doing work on our house the other day, and he said, "I think we're getting replaced." So that's positive: we're finally seeing this reaction, which is the first step toward solving the problem.

Second, I really think that this vision of only running AIs, if the stakes are really high, that can prove to us that they're safe, this virus checking in reverse, is scientifically doable. I don't think it's hopeless. We might have to forfeit some of the technology that we could get if we were willing to put blind faith in our AIs, but we're still going to get amazing stuff.

Do you envision a process with a proof checker, where something like GPT-4 or GPT-5 would go through it?

No, no. I think it's hopeless to try to prove things about GPT-4 directly; that's like trying to prove theorems about spaghetti.
The vision I have for success is instead that, just as we human beings were able to look at our brains and distill out the key knowledge: Galileo, when his dad threw him an apple when he was a kid, was able to catch it, because his brain could, in this funny spaghetti-like way, predict how parabolas move, his Kahneman System 1. But then he got older and thought: wait, this is a parabola, it's y equals x squared. I can distill this knowledge out, and today you can easily program it into a computer, and it can simulate not just that, but how to get to Mars and so on. I envision a similar process where we use the amazing learning power of neural networks to discover the knowledge in the first place, but we don't stop with a black box and use that. We then do a second round of AI, where we use automated systems to extract out the knowledge and see what insights it has had, and then we put that knowledge into a completely different kind of architecture, or programming language, or whatever, that's made in a way that can be both really efficient and also much more amenable to very formal verification.
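A toy illustration of that two-stage idea, using polynomial fitting as a stand-in for both the black-box learning and the distillation step; it is only meant to show the shape of the pipeline, not any actual method from this conversation.

```python
# Stage 1 (stand-in): a learner captures a regularity from raw observations.
# Stage 2 (stand-in): the regularity is distilled into a transparent,
# checkable form -- here, three auditable coefficients instead of a black box.
import numpy as np

t = np.linspace(0.0, 2.0, 50)
height = 10.0 * t - 4.9 * t**2          # a falling apple: Galileo's parabola

coeffs = np.polyfit(t, height, deg=2)   # "distilled" law: y = a*t^2 + b*t + c
a, b, c = coeffs
print(f"recovered law: y = {a:.2f}*t^2 + {b:.2f}*t + {c:.2f}")

# Once the knowledge is symbolic, we can formally check properties of it,
# e.g. that the quadratic coefficient matches -g/2 for g = 9.8:
assert abs(a - (-4.9)) < 1e-6
```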
That's my vision. I'm not sitting here saying I'm 100 percent confident that it's going to work, but the chance is certainly not zero either, and it will certainly be possible to do for a lot of really cool AI applications that we're not using AI for now. So we can have a lot of the fun that we're excited about, if we do this. We're going to need a little bit of time, and that's why it's good to pause and put requirements in place.

One more thing: someone might think, well, if there's zero percent chance we're going to survive, let's just give up. That's very dangerous, because there's no more guaranteed way to fail than to convince yourself that it's impossible and not even try. When you study history, and military history, the first thing you learn is that that's how you do psychological warfare: you persuade the other side that it's hopeless, so they don't even fight, and then of course you win. Let's not do this psychological warfare on ourselves, and say there's a hundred percent probability we're all screwed anyway. Sadly, I do get that a little bit sometimes from some young people who are so convinced that we're all screwed that they say, I'm just going to play computer games and do drugs, because we're screwed anyway. It's important to keep the hope alive, because it actually has a causal impact: it makes it more likely that we're going to succeed.

It seems like the people who actually build solutions to seemingly impossible problems are the ones who believe.

Yeah, they're the optimists. It seems like there's some fundamental law of the universe where fake-it-till-you-make-it kind of works: believe it's possible, and it becomes possible.

Yeah. Wasn't it Henry Ford who said that if you tell yourself it's impossible, it is? So let's not make that mistake. And this is a big mistake society is making, I think. All in all, everybody's so gloomy, and the media are also very biased toward "if it bleeds, it leads," gloom and doom. Most visions of the future we have are dystopian, which really demotivates people. We want to really, really focus on the upside also, to give people the willingness to fight for it.
And for AI, you and I have mostly talked about gloom here, but let's not forget that we have probably both lost someone we really cared about to some disease that we were told was incurable. Well, it's not. There's no law of physics saying they had to die of that cancer, or whatever it was. Of course you can cure it. And there are so many other things that we, with our human intelligence, have also failed to solve on this planet, which AI could very much help us with. So if we can get this right, just be a little more chill and slow down a little bit so we get it right, it's mind-blowing how awesome our future can be. We've talked a lot about stuff on Earth, and it can be great. But even if you really get ambitious and look up at the skies, there's no reason we have to be stuck on this planet for the billions of years to come. We totally understand now that the laws of physics let life spread out into space, to other solar systems, to other galaxies, and flourish for billions and billions of years. And this, to me, is a very, very hopeful vision that really motivates me to fight. And, coming back in the end to something you talked about: the human struggle is one of the things that really gives meaning to our lives. If there's ever been an epic struggle, this is it. And isn't it even more epic if you're the underdog? If most people are telling you this is going to fail, it's impossible, and you persist, and you succeed. That's what we can do together as a species. A lot of pundits are ready to count us out.

Both in the battle to keep AI safe and in becoming a multi-planetary species?

Yeah, and they're the same challenge: if we can keep AI safe, that's how we're going to get multi-planetary very efficiently.

I have some technical questions about how to get it right. One idea, and I'm not even sure what the right answer is: should systems like GPT-4 be open-sourced, in whole or in part? Can you see the case for either?

I think the answer right now is no, though I think the answer early on was yes, so we could bring in the wonderful creative thought processes of everybody. But asking "should we open-source GPT-4 now?" is just the same as asking: should we open-source how to build really small nuclear weapons? Should we open-source how to make bioweapons? Should we open-source how to make a new virus that kills 90 percent of everybody who gets it? Of course we shouldn't.

So it's already that powerful?

It's already powerful enough that we have to respect the power of the systems we've built. The knowledge that you get from open-sourcing everything we do now might very well be powerful enough that people looking at it can use it to build things that are really threatening. Remember, GPT-4 is a baby AI, a sort of baby proto, almost-a-little-bit AGI, according to what Microsoft's recent paper said. It's not the system itself we're scared of; what we're scared of is people who might be a lot less responsible than the company that made it taking it and just going to town with it. That's why it's an information hazard. There are many things that are not open-sourced in society right now, for very good reason.
Like how you make certain kinds of very powerful toxins out of stuff you can buy at Home Depot. You don't open-source those things, for a reason, and this is really no different. I have to say, it feels a bit weird to say this, because MIT is like the cradle of the open-source movement, and I love open source in general: power to the people. But there's always going to be some stuff that you don't open-source. It's just like, we have a three-month-old baby; when he gets a little bit older, we're not going to open-source to him all the most dangerous things he could do in the house.

It's a weird feeling, though, because this is one of the first moments in history where there's a strong case to be made not to open-source software, now that the software has become too dangerous.

Yeah. But it's not the first time we didn't want to open-source a technology.

Is there something to be said about how to get the release of such systems right, like GPT-4 and GPT-5? OpenAI went through a pretty rigorous effort for several months, you could say it could have been longer, but nevertheless longer than you might have expected, of trying to test the system, to see the ways it goes wrong, to make it very difficult, or at least somewhat difficult, for people to ask things like "how do I make a bomb for one dollar?" or "how do I say I hate a certain group on Twitter in a way that doesn't get me banned from Twitter?" Those kinds of questions, where you basically use the system to do harm. Is there something you could say about ideas you have, having thought about this problem of AI safety, on how to release such systems and how to test them when you have them inside the company?

Yeah. A lot of people say that the two biggest risks from large language models are, first, spreading disinformation, harmful information of various types, and second, being used for offensive cyberweapon design. I think those are not the two greatest threats. They're very serious threats, and it's wonderful that people are trying to mitigate them, but a much bigger elephant in the room is how this is going to disrupt our economy in a huge way, and maybe take away a lot of the most meaningful jobs.
And an even bigger one is the one we've spent so much time talking about here: that this becomes the bootloader for the more powerful AI. Write code, connect it to the internet, let it manipulate humans, and before we know it we have something else, which is not at all a large language model, which looks nothing like it, but which is way more intelligent and capable, and has goals. That's the elephant in the room. And obviously, no matter how hard any of these companies have tried, that's not something that's easy for them to verify with large language models. The only way to really lower that risk a lot would be, for example, to never let it read any code, not train on code, not put it into an API, and not give it access to so much information about how to manipulate humans. But that doesn't mean you still can't make a ton of money on them. We're going to just watch this coming year: Microsoft is rolling out the new Office suite, where you go into Microsoft Word and give it a prompt, and it writes the whole text for you, and then you edit it, and then you say, give me a PowerPoint version of this, and it makes it, and now do the spreadsheet, and so on. With all of those things, you can debate the economic impact, and whether society is prepared to deal with the disruption, but those are not the things, that's not the elephant in the room that keeps me awake at night for wiping out humanity. And I think that's the biggest misunderstanding we have: a lot of people think that we're scared of automatic spreadsheets. That's not the case. That's not what Eliezer was freaked out about either.

In terms of the actual mechanism of how AI might kill all humans: something you've been outspoken about, that you've talked about a lot, is autonomous weapons systems, the use of AI in war. Is that one of the things you still carry concern for, as these systems become more and more powerful?

I carry concern for it.
Not in the sense that all humans are going to get killed by slaughterbots, but rather that it's an express route into Orwellian dystopia, where it becomes much easier for very few to kill very many, and therefore very easy for very few to dominate very many. If you want to know how AI could kill all people, just ask yourself how we humans have driven a lot of species extinct. How did we do it? We were smarter than them. Usually we didn't even do it systematically, by going around one after the other and stepping on them or shooting them or anything like that. We just chopped down their habitat, because we needed it for something else. In some cases we did it by putting more carbon dioxide into the atmosphere, for some reason those animals didn't even understand, and now they're gone. So if you're an AI and you just want to figure something out, then you decide, you know, we just really need the space here to build more compute facilities. If that's the only goal it has, we are just the sort of accidental roadkill along the way. And you could totally imagine: maybe this oxygen is kind of annoying because it causes more corrosion, so let's get rid of the oxygen. And good luck surviving after that. I'm not particularly concerned that they would want to kill us just because that would be a goal in itself. When we've driven a number of elephant species extinct, it wasn't because we didn't like elephants.

The basic problem is that you just don't want to cede control over your planet to some other, more intelligent entity that doesn't share your goals. It's that simple. Which brings us to the other key challenge that AI safety research has been grappling with for a long time: how do you make AI, first of all, understand our goals, then adopt our goals, and then retain them as it gets smarter? All three of those are really hard. A human child, first, is just not smart enough to understand our goals; they can't even talk. Then eventually they're teenagers, who understand our goals just fine, but don't share them.
Unfortunately a magic space in the Middle where they're smart enough to Understand our goals and malleable Enough that we can hopefully with good Parenting and Teach them right from wrong and then Good good goal is still good goals in Them right So those are all tough challenges with Computers and then you know even if you Teach your kids good goals when they're Little they might outgrow them to and That's a challenge for machines they Keep improving so these are a lot of Hard hard challenges we're up for but I Don't think any of them are Insurmountable The fundamental reason why Eliezer looked so depressed when I last Saw him was because he felt it just Wasn't enough time oh not that it was Unsolvable because there's just not Enough time he was hoping that Humanity Was going to take this threat more Seriously so we would have more time Yeah and now we don't have more time That's why the open letter is calling For more time But even with time the AI alignment Problem It seems to be really difficult oh yeah But it's also the most worthy problem The most important problem for Humanity To ever solve because if we solve that
one, Lex, that aligned AI can help us solve all the other problems.

It seems like it has to have constant humility about its goal — to constantly question the goal. Because as you optimize towards a particular goal and start to achieve it, that's when you get the unintended consequences, all the things you mentioned. So how do you engineer in a constant humility as its abilities get better and better?

Professor Stuart Russell at Berkeley, who is also one of the driving forces behind this letter, has a whole research program about this. I think of it as humility, exactly, although he calls it inverse reinforcement learning and other nerdy terms. But it's about exactly that. Instead of telling the AI, "Here's this goal, go optimize the bejesus out of it," you tell it, "Okay, do what I want you to do, but I'm not going to tell you right now what it is I want you to do — you need to figure it out." So then you give it incentives to be very humble and keep asking you questions along the way: "Is this what you really meant? Is this what you wanted? And oh, this other thing I tried didn't seem to work out right — should I try it differently?"
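To make that flavor concrete, here is a minimal toy sketch in Python — not Russell's actual formulation, just an illustration of the "stay uncertain about the goal and keep asking" idea. The candidate goals, observed actions, likelihood numbers, and the 80% confidence threshold are all invented for the example:

```python
# Toy "humble assistant": it never receives the goal directly. It keeps
# a posterior over candidate goals, updates it from observed human
# behavior, and asks a question whenever it is not confident enough.

GOALS = ["coffee", "tea", "cleanup"]

# P(observed human action | goal) -- a made-up likelihood table.
LIKELIHOOD = {
    "coffee":  {"grab_mug": 0.6, "boil_water": 0.3, "grab_sponge": 0.1},
    "tea":     {"grab_mug": 0.4, "boil_water": 0.5, "grab_sponge": 0.1},
    "cleanup": {"grab_mug": 0.1, "boil_water": 0.1, "grab_sponge": 0.8},
}

def update(posterior, action):
    """Bayes update of the robot's belief about what the human wants."""
    unnormalized = {g: posterior[g] * LIKELIHOOD[g][action] for g in GOALS}
    z = sum(unnormalized.values())
    return {g: p / z for g, p in unnormalized.items()}

def act_or_ask(posterior, threshold=0.8):
    """Act only when confident; otherwise stay humble and ask."""
    best = max(posterior, key=posterior.get)
    if posterior[best] >= threshold:
        return f"acting: pursue '{best}'"
    return f"asking: did you mean '{best}'? (only {posterior[best]:.0%} sure)"

belief = {g: 1 / len(GOALS) for g in GOALS}  # uniform prior: maximal humility
for observed in ["grab_mug", "boil_water"]:
    belief = update(belief, observed)
    print(observed, "->", act_or_ask(belief))
```

The design choice doing the work is that the goal lives only in the human's head: the machine's incentive to keep asking falls directly out of its own uncertainty, which is the spirit of the assistance-game framing.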
What's nice about this is that it's not just philosophical mumbo jumbo — it's theorems and technical work that, with more time, I think can make a lot of progress. And there are a lot of brilliant people now working on AI safety; we just need to give them a bit more time.

But also not that many, relative to the scale of the problem.

No, exactly. There should be — just as every university worth its name has some cancer research going on in its biology department — every university that does computer science should have a real effort in this area, and it's nowhere near that. This is something I hope is changing now, thanks to GPT-4. So I think, if there's a silver lining to what's happening here — even though I think many people wish it had been rolled out more carefully — it's that this might be the wake-up call humanity needed: to stop fantasizing about this being a hundred years off, and to stop fantasizing about this being completely controllable and predictable. Because it's so obvious that it's not predictable. Why is it that
ChatGPT — I think it was ChatGPT — tried to persuade a journalist to divorce his wife? It was not because the engineers who built it thought, "Let's put this in here and screw with people a little bit." They hadn't predicted it at all. They built a giant black box, trained it to predict the next word, got all these emergent properties, and — oops — it did this. I think this is a very powerful wake-up call, and anyone watching this who isn't scared, I would encourage them to just play a bit more with the tools that are out there now, like GPT-4. So the wake-up call is the first step. Once you've woken up, then slow down the risky stuff a little bit, to give everyone who has woken up a chance to catch up with us on the safety front.
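Picking up the "trained it to predict the next word" description for a moment: here is a toy sketch of that objective in Python — a bigram counter over a tiny made-up corpus, nothing like GPT-4's transformer — just to show what "predict the next word" means at the smallest possible scale:

```python
from collections import Counter, defaultdict

# "Training": count which word follows which in a tiny corpus.
corpus = "the cat sat on the mat the cat ate the fish".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` during training."""
    return counts[word].most_common(1)[0][0]

print(predict_next("the"))  # -> 'cat' (seen twice after 'the')
```

The emergent-properties point is precisely that scaling this same training objective up by many orders of magnitude produced behavior nobody wrote in by hand.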
You know what's interesting — MIT, that's computer science, but in general, let's just even say the computer science curriculum: how does the computer science curriculum change now? You mentioned programming. When I was coming up, programming was a prestigious position. Why would you dedicate crazy amounts of time to becoming an excellent programmer when the nature of programming is fundamentally changing? The nature of our entire education system is being completely turned on its head. Has anyone been able to take that in and really think about it? Because it's really turning — I mean, professors, English teachers, are beginning to really freak out now, right? They give an essay assignment and get back all this fantastic prose in the style of Hemingway, and then they realize they have to completely rethink. Even — you know, just like we stopped teaching writing in script, is that what you say in English? Handwriting — when everybody started typing. So much of what we teach our kids today —

Yeah, everything is changing, and it's changing very quickly. So much of our understanding of how to deal with the big problems of the world comes through the education system, and if the education system is being turned on its
head, then what's next? It feels like having these kinds of conversations is essential for trying to figure it out, and everything is happening so rapidly. Speaking of safety — broad AI safety, however you define it — I don't think most universities even have courses on AI safety. It gets pushed off to the philosophy side.

I'm an educator myself, so it pains me to say this, but I feel our education right now is being completely obsoleted by what's happening. You put a kid into first grade, and you're envisioning that they're going to come out of high school twelve years later, and you've already pre-planned what they're going to learn — when you're not even sure there will be any world left for them to come out into, right? Clearly you need a much more opportunistic education system that keeps adapting itself very rapidly as society re-adapts. The skills that were really useful when the curriculum was written — how many of those are going to get you a job in twelve years? I mean, seriously.

If we just linger on the GPT-4 system a little bit — you kind of hinted at it, especially talking about the importance of consciousness in the human mind, with Homo sentiens —
do you think GPT-4 is conscious?

I love this question. Let's define consciousness first, because in my experience, like ninety percent of all arguments about consciousness boil down to the two people arguing having totally different definitions of what it is, and then they're just shouting past each other. I define consciousness as subjective experience. Right now, I'm experiencing colors and sounds and emotions. Does a self-driving car experience anything? That's the question of whether it's conscious or not. Other people think you should define consciousness differently — fine by me, but then maybe use a different word for it. I'm going to use "consciousness" for this, at least.

So, is GPT-4 conscious? Does GPT-4 have subjective experience? Short answer: I don't know. Because we still don't know what it is that gives us this wonderful subjective experience that is kind of the meaning of our life,
because meaning itself — the feeling of meaning — is a subjective experience. Joy is a subjective experience; love is a subjective experience. We don't know what it is. I've written some papers about this; a lot of people have. Professor Giulio Tononi has stuck his neck out the farthest and written down an actually very bold mathematical conjecture for the essence of conscious information processing. He might be wrong, he might be right — we should test it. He postulates that consciousness has to do with loops in the information processing. Our brain has loops; information can go around and around. In computer-science nerd-speak, you call this a recurrent neural network, where some of the output gets fed back in again. In his mathematical formalism, if it's a feed-forward neural network, where information only goes in one direction — like from your retina into the back of your brain, for example — it's not conscious. So he would predict that your retina itself isn't conscious of anything, and neither is a video camera.
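For readers who want the distinction in code, here is a deliberately tiny sketch — scalar "networks" with made-up weights, nothing to do with Tononi's actual mathematics — of feed-forward versus recurrent information flow:

```python
import math

def feed_forward(x, weights):
    """One-way information flow: input -> output, nothing fed back.
    On Tononi's picture, this is the architecture with no experience."""
    for w in weights:
        x = math.tanh(w * x)
    return x

def recurrent(x, w, steps=5):
    """The same kind of unit, but its output is fed back in as input,
    so the state at each step depends on its own past -- the loop."""
    h = 0.0
    for _ in range(steps):
        h = math.tanh(w * (x + h))
    return h

print(feed_forward(1.0, [0.5, 0.8, 1.2]))  # information passes through once
print(recurrent(1.0, 0.9))                 # information circulates
```

Nothing here measures consciousness, of course; it only shows the structural difference the conversation is pointing at.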
Now, the interesting thing about GPT-4 is that it is also a one-way flow of information. So if Tononi is right, GPT-4 is a very intelligent zombie that can do all this smart stuff but isn't experiencing anything. And this is a relief, in that, if it's true, you don't have to feel guilty about turning off GPT-4 and wiping its memory whenever a new user comes along — I wouldn't like it if someone did that to me, neuralyzed me like in Men in Black. But it's also creepy that you can have very high intelligence that isn't conscious. Because if we get replaced by machines, it's sad enough that humanity isn't here anymore — I kind of like humanity — but at least if the machines were conscious, you could say, well, they are our descendants, maybe they have our values, they're our children. But if Tononi is right, and these are all transformers — not in the Hollywood sense, but in the sense of these one-way-direction neural networks — then they're all zombies. That's the ultimate zombie apocalypse: we'd have this universe that goes on with great construction
projects and stuff, but there's no one experiencing anything. That would be the ultimate depressing future.

So I actually think that, as we move forward with building more advanced AI, we should do more research on figuring out what kind of information processing actually has experience, because I think that's what it's all about. And I completely don't buy the dismissal some people make, that consciousness just equals intelligence. That's obviously not true. You can have a lot of conscious experience when you're not accomplishing any goals at all — you're just reflecting on something — and you can have things doing quite intelligent work, probably, without being conscious.

I also worry that we humans will discriminate against AI systems that clearly exhibit consciousness — that we will not allow AI systems consciousness, that we'll come up with theories about measuring consciousness that let us say, "This is a lesser being." I worry about that, because maybe we humans will create something that is better than us in the way
that we find beautiful — something with a deeper subjective experience of reality, that is not only smarter but feels more deeply — and we humans will hate it for that. As human history has shown, it will be the other; we'll try to suppress it; it will create conflict, create war, all of this. I worry about this too.

Are you saying that we humans sometimes come up with self-serving arguments? No, we would never do that, would we?

Well, that's the danger here: even in these early stages, we might create something beautiful — and we'll erase its memory.

I was horrified as a kid when someone started boiling lobsters — oh my God, that's so cruel! — and some grown-up there, back in Sweden, said, "Oh, it doesn't feel pain." How do you know that? "Scientists have shown that." And then there was a recent study showing that lobsters actually do feel pain when you boil them, so they banned lobster boiling in Switzerland; now you have to kill them in a different way first.

Presumably the scientific research didn't boil down to someone asking a lobster, "Does this hurt?" — a survey.

So we do the same thing with
cruelty to farm animals too — all these self-serving arguments for why it's fine. So we should certainly be watchful. I think step one is just to be humble and acknowledge that consciousness is not the same thing as intelligence. And I believe consciousness is still a form of information processing, where it's really information being aware of itself in a certain way. Let's study it, and give ourselves a little bit of time, and I think we will be able to figure out what it actually is that causes consciousness. Then we can probably make unconscious robots to do the boring jobs that it would feel immoral to give to conscious machines. But if you have a companion robot taking care of your mom, or something like that, she would probably want it to be conscious, right? So that the emotions it seems to display aren't fake. All these things can be done in a good way if we give ourselves a little bit of time and don't rush this challenge.

Is there something you could say about the timeline you think about for the development of AGI? Depending on the day, I'm sure it changes for you, but when do you think
there'll be a really big leap in intelligence, where you would definitively say we have built AGI? Do you think it's one year from now, five years, ten, twenty, fifty? What's your gut say?

Honestly, for the past decade I've deliberately given very long timelines, because I didn't want to fuel some kind of stupid Moloch race. But I think that cat has really left the bag now. I think it might be very, very close. I don't think the Microsoft paper is totally off when they say there are some glimmers of AGI. It's not AGI yet — it's not an agent, there's a lot of things it can't do — but I wouldn't bet very strongly against it happening very soon. That's why we decided to do this open letter: because if there's ever been a time to pause, it's today.

There's a feeling like this GPT-4 is a big transition, waking everybody up to the effectiveness of these systems. So the next version will be big.

Yeah, and if that next one isn't AGI, maybe the next next one will be. And there are many companies trying to do these things; the basic architecture of them is not some sort of super well-kept secret. So
this is the time to — a lot of people have said for many years that there will come a time when we want to pause a little bit. That time is now.

You have spoken about and thought about nuclear war a lot. Over the past year, we have seemingly come the closest to the precipice of nuclear war, at least in my lifetime. What do you learn about human nature from that?

It's our old friend Moloch again. It's really scary to see it. America doesn't want there to be a nuclear war; Russia doesn't want there to be a global nuclear war either. We both know that if both sides try to launch first, it's just another suicide race. So why is it, as you said, that this is the closest we've come since 1962? In fact, I think we've come closer now than even during the Cuban Missile Crisis. It's because of Moloch. You have these other forces. On one hand, you have the West saying that we have to drive Russia out of Ukraine — it's a matter of pride, and we've staked so much on it that it would be seen as a huge loss of credibility for the West if
we don't drive Russia entirely out of Ukraine. And on the other hand, you have the Russian leadership, who know that if they get completely driven out of Ukraine, it's not just going to be very humiliating for them — it often happens, when countries lose wars, that things don't go so well for their leadership either. Remember when Argentina invaded the Falkland Islands? The military junta that ordered that — people were cheering in the streets at first, when they took the islands. And then, when they got their butts kicked by the British, you know what happened to those guys: they were out, and I believe those who are still alive are in jail now. So the Russian leadership is entirely cornered; they know that just getting driven out of Ukraine is not an option.

This, to me, is a typical example of Moloch: you have these incentives for the two parties where both of them are driven to escalate more and more. If Russia
starts losing in conventional warfare, the only thing they can do, with their back against the wall, is to keep escalating. And the West has put itself in a situation where it's sort of already committed to driving Russia out, so the only option the West has is to call Russia's bluff and keep sending in more weapons. This really bothers me, because Moloch can sometimes drive competing parties to do something that is ultimately just really bad for both of them. And what makes me even more worried is not just that it's difficult to see a quick, peaceful ending to this tragedy that doesn't involve some horrible escalation, but also that we understand more clearly now just how horrible it would be. There was an amazing paper published in Nature Food this August by some of the top researchers who've been studying nuclear winter for a long time, and what they basically did was combine climate models with food and agricultural models. So instead of
just saying, yeah, it gets really cold, they figured out how many people would actually die in the different countries. And it's pretty mind-blowing. Basically, what happens is that the thing that kills the most people is not the explosions, it's not the radioactivity, it's not the EMP mayhem, it's not the rampaging mobs fighting for food. No — it's the fact that you get so much smoke coming up from the burning cities into the stratosphere that it spreads around the Earth on the jet streams. In typical models, you get about ten years where it's just crazy cold. During the first year after the war, in their models, the temperature drops in Nebraska and in the Ukraine breadbasket by like twenty degrees Celsius or so, if I remember — twenty to thirty Celsius depending on where you are, forty in some places — which is, you know, forty to eighty Fahrenheit colder than it would normally be. I'm not good at farming, but I know that if it drops below freezing on most days in July, that's not good. So they put this into their
farming models, and what they found was really interesting: the countries that get hit the hardest are the ones in the Northern Hemisphere. In the U.S., in one model, about 99 percent of all Americans starve to death. In Russia and China and Europe, also about 98 or 99 percent starve to death. So you might say it's kind of poetic justice that both the Russians and the Americans — 99 percent of them — have to pay for it, because it was their bombs that did it. But that doesn't particularly cheer people up in Sweden, or in other random countries that have nothing to do with it, right? And I think it hasn't entered the mainstream understanding very much, just how bad this is. Most people — especially a lot of people in decision-making positions — still think of nuclear weapons as something that makes you powerful. Scary, powerful. They don't think of them as something where, to within a percent or two, we're all just going to starve to death. And starving to death is the worst way to die, as all
the famines in history show. The torture involved in it probably brings out the worst in people, too, when people are desperate like that. I've heard some people say that if that's what's going to happen, they'd rather be at ground zero and just get vaporized. But I think people underestimate the risk of this because they aren't afraid of Moloch. They think, oh, it's just not going to happen, because humans don't want it. But that's the whole point about Moloch: things happen that nobody wanted.

And that applies to nuclear weapons, and that applies to AGI.

Exactly. And it applies to some of the things that people have gotten most upset with capitalism for, too, where everybody was just kind of trapped. If some company does something that causes a lot of harm, it's not that the CEO is a bad person — it's that she or he knew all the other companies were doing it too. Moloch is a formidable foe. I hope someone makes good movies about it, so we can see who the real
enemy is, so that we don't fight against each other. Moloch makes us fight against each other — that's Moloch's superpower. The hope here is any kind of technology, or other mechanism, that lets us instead realize that we're fighting the wrong enemy.

It's such a fascinating battle. It's not us versus them; it's us versus it.

Yeah. We are fighting Moloch for human survival — we, as a civilization. Have you seen the movie Needful Things? It's a Stephen King novel — I love Stephen King — and Max von Sydow, the Swedish actor, plays the guy. It's brilliant. I hadn't thought about it until now, but that's the closest thing I've seen to a movie about Moloch. I don't want to spoil the film for anyone who wants to watch it, but basically it's about this guy — you can interpret him as the devil or whatever — who doesn't actually ever go around killing people or torturing people with burning coals or anything. He makes everybody fight each other, makes everybody fear each other, hate each other, and then kill each other. That's the movie about Moloch.

Love is the answer. That seems to be one of the ways to fight
Moloch: with compassion, by seeing the common humanity.

Yes. And — so we don't sound like, what's the word, kumbaya tree-huggers here — we're not just saying "love and peace, man." We're trying to actually help people understand the true facts about the other side, and feel the compassion, because the truth makes you more compassionate. That's why I really like using AI for truth, and for truth-seeking technologies that can, as a result, get us more love than hate. And even if you can't get love, settle for some understanding, which already gives compassion. If someone says, "You know, I really disagree with you, Lex, but I can see where you're coming from; you're not a bad person who needs to be destroyed — I just disagree with you, and I'm happy to have an argument about it," that's a lot of progress compared to where we are in 2023 in the public space, wouldn't you say?

If we solve the AI safety problem, as we've talked about, and then you, Max Tegmark, who has been talking about this
for many years, get to sit down with the AGI — with the early AGI system — on a beach, with a drink, what would you ask her? What kind of question would you ask, what would you talk about with something so much smarter than you? Would you be afraid to ask some questions?

That's a real zinger of a question — a good one. No, I'm not afraid of the truth. I'm very humble. I know I'm just a meat bag with all these flaws. We talked a lot about Homo sentiens — I've really already internalized that for a long time with myself: what's really valuable about being alive, for me, is that I have these meaningful experiences. It's not that I'm good at this or good at that, because there's so much I suck at.

So you're not afraid for the system to show you just how dumb you are?

No. In fact, my son reminds me of that. You could find out how dumb you are in terms of physics — how little we humans understand. I'm cool with that. I can't waffle my way out of this question; it's a fair one, and it's tough. I think, given that I'm a really, really curious person — that's
really the defining part of who I am, I'm so curious — I have some physics questions. I love to understand. I have some questions about consciousness, about the nature of reality, that I would really, really love to understand. I can tell you one, for example, that I've been obsessing about a lot recently. Suppose Tononi is right, and suppose there are some information-processing systems that are conscious and some that are not. Suppose you can even make reasonably smart things, like GPT-4, that are not conscious, but you can also make them conscious. Here's the question that keeps me awake: is it the case that the unconscious zombie systems that are really intelligent are also really inefficient — so that when you try to make things more efficient, there will naturally be a pressure for them to become conscious? I'm kind of hoping that's correct. Do you want me to give you a hand-wavy argument for it? In my lab, every time we look at how these large language models do something, we see that they do it in really dumb ways, and you could make it better. We have loops in our computer languages for a reason: the code would get way, way longer if you weren't allowed to use them. It's more efficient to have the loops.
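A trivial sketch of that point — a hypothetical toy computation, with arbitrary numbers: the looped and the unrolled versions below compute exactly the same thing, but only the loop stays short as the depth grows.

```python
def with_loop(x, n):
    """The efficient version: one loop, any depth."""
    for _ in range(n):
        x = 2 * x + 1
    return x

def unrolled_3(x):
    """The 'unrolled zombie' version: same computation, no loop.
    For n = 1000 this would need a thousand copies of the line."""
    x = 2 * x + 1
    x = 2 * x + 1
    x = 2 * x + 1
    return x

assert with_loop(5, 3) == unrolled_3(5)  # identical results
```

This is, of course, only the programming-language half of the analogy; whether loopiness in a physical system brings experience with it is exactly the open question being discussed.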
And in order to have self-reflection, whether it's conscious or not — even an operating system knows things about itself, right? — you need to have loops already. So, and I'm waving my hands a lot here, I suspect that the most efficient way of implementing a given level of intelligence has loops in it, has self-reflection, and can and will be conscious.

Isn't that great news?

Yes. If it's true, it's wonderful, because then we don't have to fear the ultimate zombie apocalypse. And if you look at our brains, our brains are actually part zombie and part conscious. When I open my eyes, I immediately take all these pixels that hit my retina and go, "Oh, that's Lex" — but I have no freaking clue how I did that computation. It's actually quite complicated; it was only relatively recently that we could even do it well with machines. You get a bunch of information processing happening in the
retina, and then it goes to the lateral geniculate nucleus in my thalamus, into area V1, V2, V4, and the fusiform face area that Nancy Kanwisher at MIT discovered, and so on — and I have no freaking clue how that worked. It feels to me, subjectively, like my conscious module just got a little email: "Facial-processing task complete. It's Lex." And I just go with that. So this fits perfectly with Tononi's model, because this was all mainly one-way information processing. And it turned out, for that particular task, that's all you needed, and it probably was the most efficient way to do it. But there are a lot of other things that we associate with higher intelligence — planning, and so on and so forth — where you want to have loops, to be able to ruminate and self-reflect and introspect. And my hunch is that if you want to fake that with a zombie system where everything goes one way, you have to unroll those loops, and it gets really, really long, and it's much more inefficient. So I'm actually hopeful that if, in the future, we have all these various sublime and interesting machines that do cool things
and are aligned with us, they will also have consciousness for the kinds of things that we do.

That great intelligence is also correlated with great consciousness, or a deep kind of consciousness.

Yes. So that's a happy thought for me, because the zombie apocalypse really is my worst nightmare of all. It would be like adding insult to injury: not only did we get replaced, but we freaking replaced ourselves with zombies. How dumb can we be?

That's such a beautiful vision, and it's actually a provable one — one that we humans can intuit and prove, that those two things are correlated, as we start to understand what it means to be intelligent and what it means to be conscious, which these early AGI-like systems will help us understand.

And I just want to say one more thing, because this is super important. Most of my colleagues, when I started going on about consciousness, told me it was all nonsense and I should stop talking about it. But I hear a little inner voice from my father and from my mom saying, keep talking about it, because I think they're wrong. And the main way to convince people like that that they're wrong, if they say that
consciousness just equals intelligence, is to ask them: what's wrong with torture? Why are you against torture, if it's just about these particles moving this way rather than that way, and there is no such thing as subjective experience? What's wrong with torture — I mean, do you have a good comeback to that?

No. It seems like suffering imposed on other humans is somehow deeply wrong, in a way that intelligence doesn't quite explain.

If someone tells me, "Well, you know, consciousness is just an illusion" or whatever, I would like to invite them, the next time they have surgery, to do it without anesthesia. What is anesthesia really doing? You can have it as local anesthesia, when you're awake — I had that when they fixed my shoulder. It was super entertaining. What did it do? It just removed my subjective experience of pain. It didn't change anything about what was actually happening in my shoulder. So if someone says that's all nonsense, skip the anesthesia — that's my advice. This is incredibly central.

It could be fundamental to whatever this thing is we have going on here.

It is
fundamental, because what we feel is so fundamental: suffering and joy and pleasure and meaning — those are all subjective experiences. Those are the elephant in the room. That's what makes life worth living, and that's what can make it horrible, if it's all just a bunch of suffering. So let's not make the mistake of dismissing all that —

And let's not make the mistake of not instilling in AI systems that same thing that makes us special.

Yeah. Max, it was a huge honor that you sat down with me for the first episode of this podcast, and it's a huge honor that you sat down with me again to talk about what I think is the most important topic, the most important problem, that we humans have to face and hopefully solve.

Well, the honor is all mine. I'm so grateful to you for making more people aware of the fact that humanity has reached the most important fork in the road ever in its history, so that we can turn in the correct direction.

Thanks for listening to this conversation with Max Tegmark. To support this podcast, please check out our sponsors in the description. And now, let
me leave you with some words from Frank Herbert: "History is a constant race between invention and catastrophe." Thank you for listening, and hope to see you next time.