SGU Episode 925: Difference between revisions

From SGUTranscripts
Jump to navigation Jump to search
(introduction done)
 
(11 intermediate revisions by 2 users not shown)
Line 1: Line 1:
{{transcribing all
|date = 2023-05-15
|transcriber = Hearmepurr
|}}
{{900s|925|episodebox}}<!--
** This template generates the appropriate green message box asking for help with transcribing the episode.
** If you intend to transcribe the _whole_ episode, please _REPLACE_ the "900s" template above with the "transcribing all" template:
{{transcribing all
|date = YYYY-MM-DD
|transcriber = (optional)
|time = (optional; use HHMM (Enter the 24-hour time in GMT) )
|}}
** If you _only_ want to work on a section, just add the "transcribing section" template BELOW the "Episode" or "900s" template above to indicate you are not working on the entire transcription:
{{transcribing section
|date = YYYY-MM-DD
|transcriber = (optional)
|time = (optional; use HHMM (Enter the 24-hour time in GMT) )
|}}
** If you use the "transcribing section" template (placing it here, at the top of the transcript under the "Episode"/"900s" template), make sure you _also_ have a "transcribing" template above whichever section you're currently working on:
{{transcribing
|date = YYYY-MM-DD
|transcriber = (optional)
|time = (optional; use HHMM (Enter the 24-hour time in GMT) )
|}}
**        *** Once transcription is complete, please delete this entire "Episode" markup section! ***
-->
{{Editing required
{{Editing required
|transcription = y
|transcription =  
|proofreading = <!-- please only activate when some transcription is present. -->
|proofreading = y<!-- please only activate when some transcription is present. -->
|formatting = y
|formatting = y
|links = y
|links = y
Line 45: Line 8:
|segment redirects = y <!-- redirect pages for segments with head-line type titles -->
|segment redirects = y <!-- redirect pages for segments with head-line type titles -->
|}}
|}}
{{UseOutline}} <!-- Remove when transcription is complete -->
 
{{InfoBox
{{InfoBox
|episodeNum = 925
|episodeNum = 925
Line 283: Line 246:
{{anchor|quickie}} <!-- leave this anchor directly above the corresponding section that follows -->
{{anchor|quickie}} <!-- leave this anchor directly above the corresponding section that follows -->


== Quickie with Steve <small>(8:08)</small> ==
== Quickie with Steve: Batteries with 2x energy density <small>(8:08)</small> ==
{{shownotes
{{shownotes
|weblink = <!-- must begin with http:// -->
|weblink = https://amprius.com/the-all-new-amprius-500-wh-kg-battery-platform-is-here/
|article_title = <!-- please replace ALL CAPS with Title Case or Sentence case -->
|article_title = The All-New Amprius 500 Wh/kg Battery Platform is Here
|publication = <!-- enter nn for Neurologica :-) -->
|publication = Amprius Technologies
|note = not
|redirect_title = <!-- note any redirect's title here -->
}}
}}


== Quick News Items ==
'''S:''' So I'm going to start just a very quick one because this is actually going to also tease another interview that we're doing either next week or the week after a battery announcement. I know this is, there's battery announcements. It seems like every week. But this one rose so far above the background. We had to at least mention it. A ton of people emailed us about this. The company Amprius announced their new 500 watt hour per kilogram battery platform. They're actually in production. This is not like future thing.
<!--
 
'''B:''' In production.
 
'''S:''' In production. It was the news is it was independently tested by an outside company to verify that it was a 500 watt hour per kilogram battery. Now what does that mean? Guys, that's twice the energy density of the batteries that are currently in Tesla vehicles.
 
'''B:''' Twice?
 
'''S:''' Yes, twice.
 
'''E:''' Is it four times the size?
 
'''S:''' No, twice the density means it's half the size. It's also twice the specific density, which means it's half the weight. Half the weight. Half the size.
 
'''E:''' What did they uncover?


** We recommend adding section anchors above any news items that are referenced in later episodes (or even hinted in prior episodes as upcoming). See the anchor directly above News Item #1 below, which you would change to {{anchor|news1}}
'''S:''' Well, it's just the I looked into it as much as I could and it's basically seemed like it's kind of using the technology that we've been talking about for the last five to 10 years. This is now coming through the pipeline and into production. The only thing I couldn't find out was how much it's going to cost. They're going to first use them in the aerospace industry like for drones and stuff and then electric vehicles and they're also building a plant in Colorado. I think they'll probably that will should be I think cranking out batteries in 2025. So maybe that will be their electric vehicle factory.
-->
'''S:'''


'''B:'''
'''B:''' Cell phones.


'''C:'''
'''S:''' And then also portable devices. This is their third category that they're going to use it for. But there's a lot of questions I have. So we're going to get somebody from the company to go to answer all of our technical questions because this seems like a huge deal. I couldn't find any.


'''J:'''
'''B:''' This is not incremental man.


'''E:'''
'''S:''' This is not incremental. I couldn't find anything anywhere that was a deal killer or a gotcha or whatever. The experts are saying, holy crap, this is a big deal. And so I feel like we really have to wrap our heads around this. We'll go into far more detail when we do the interview with the technical person from the company. But yeah, I'm almost thinking, should I wait a year to buy my next electric car until these things are coming out? You know, because now we're talking about 500 tile range cars with smaller, cheaper batteries. I don't know. It could be amazing. There's a lot of stuff happening in battery technology. This was just the biggest.
<!-- ** the triple quotes are how you get the initials to be bolded. Remember to use double quotes with parentheses for non-speech sounds like (laughter) and (applause). It's a good practice to use brackets for comments like [inaudible] and [sarcasm]. -->


''(laughs)''
== Quick News Items ==
''(laughter)''
''(applause)''
[inaudible]


{{anchor|news#}} <!-- leave this news item anchor directly above the news item section that follows -->
{{anchor|news#}} <!-- leave this news item anchor directly above the news item section that follows -->
{{anchor|meatball}}
=== Mammoth Meatball <small>(10:49)</small> ===
=== Mammoth Meatball <small>(10:49)</small> ===
* [https://www.theguardian.com/environment/2023/mar/28/meatball-mammoth-created-cultivated-meat-firm Meatball from long-extinct mammoth created by food firm]<ref name=mammoth/>
* [https://www.theguardian.com/environment/2023/mar/28/meatball-mammoth-created-cultivated-meat-firm Meatball from long-extinct mammoth created by food firm]<ref name=mammoth/>


{{meatball}}
'''S:''' All right, Jay, you're going to actually tell us about a couple of things. Why don't you start with the, this is actually the energy battery. Twice as powerful battery was that the second most emailed news item to us this week.
 
'''C:''' Oh, yes it was.
 
'''E:''' Oh, there was something more.
 
'''J:''' I'll take it from here. ''(laughter)'' An incredible milestone in scientific achievement has occurred. Everyone, particularly you, Cara, please make sure you're sitting down because what I am about to say is going to knock your shoes and socks right off. An enormous {{meatball}} made from cultivated woolly mammoth meat was created.
 
'''B:''' Oh, my God.
 
'''J:''' It's enormous and it's woolly mammoth meat and they did it. And it's a meatball. Cara, are you okay?
 
'''C:''' I'm okay. I'm okay. And I've been prepped for this successfully because we got like a thousand emails about it and they were all addressed to you, Jay.
 
'''E:''' They brought a mammoth back, killed it, made a meatball out of it?
 
'''J:''' By far, the absolute most emailed news item to me of all time over our 18 years.
 
'''B:''' I love this news item.
 
'''S:''' Jay has to talk about this.
 
'''B:''' We had no choice. There was literally no choice.
 
'''J:''' So let me get into the science now that I got the fun part over. An Australian cultivated food company, they're called VAW. V-O-W. They were working with the Australian Institute of Bioengineering, which is at the University of Queensland. And they created something that is pretty damn remarkable.
 
'''B:''' Remarkable.
 
'''J:''' They decided to try to reproduce cultivated meat of the woolly mammoth. And they did it for a number of reasons, one, to bring awareness about cultured meat, because still a lot of people out there that don't know much about this. And they chose the woolly mammoth because it's obvious. It's meat that's absolutely not available. It's incredibly novel what they were trying to do. They also believe that it went extinct due to climate change. And also, let's not kid ourselves. This is a massive marketing campaign. I believe it was conceived by two marketing companies. They came up with the idea to do this. So the meatball is existing right now. It's on display at the Nemo Science Museum in the Netherlands. And the main question here is, so how did they make the mammoth meat? What was the process that they went through? So samples were taken from frozen mammoth meat that we have, right? We found quite a few frozen solid mammoths over the years, and they've kept them on ice, or at least portions of it on ice. So they were using advanced molecular engineering, and they inserted mammoth, myoglobin, into sheep cells. So myoglobin is a heme protein. We talk about heme on the show a lot for some reason.
 
'''C:''' We talk about cultured meat a lot.
 
'''J:''' Yeah. So myoglobin is the heme protein that is found exclusively in heart and skeletal muscle cells. And as it turns out, myoglobin is also what gives meat its color, its taste, and its smell. So this is a very important part of the protein. The mammoth DNA that VAW was able to get was not complete. There were several gaps in it. So they used African elephant DNA to fill in the missing information, and African elephants are one of the closest living relatives to mammoths. And that's why they chose them. So people said the meat smelled like crocodile meat. I wouldn't know. I've never smelled crocodile meat.
 
'''S:''' I've had crocodile meat.
 
'''C:''' I've had alligator.
 
'''J:''' Yeah, when we were in Australia, we did.
 
'''B:''' I better think kangaroo meat.
 
'''S:''' And I had alligator meat.
 
'''C:''' I think yeah, I've had alligator for sure. They have that here. I think it all depends on how it's cooked.
 
'''J:''' So the protein that grew in the lab is estimated to be 4,000 years old, meaning that that was the last time that it existed on Earth. So they have to test to make sure it's safe for human consumption. And this is a big part of it. They did not let anybody sample any of the meat because they are not sure what a human body's reaction would be to these particular proteins that they created. So the future lab-
 
'''C:''' That's just what they have to say.
 
'''J:''' Yeah, but they're being careful. They're going to test it.
 
'''E:''' What is somebody volunteer?
 
'''C:''' I know.
 
'''S:''' There's no particular reason reason to think it's unsafe.
 
'''C:''' Right. It's like when I used to do a bunch of stuff, you guys remember Kevin Folta? We've had him on the show a million times.
 
'''E:''' Sure.
 
'''C:''' I used to do like so much coverage of GM stuff. And he would be like, this is GM strawberry. We were producing. It's not on the market. We're not allowed to eat it. Camera stop rolling. Nom nom nom nom nom. You can't legally do it, right? It's a risk.
 
'''S:''' You can't introduce a new food without getting going through the approval process.
 
'''C:''' Exactly. Right.
 
'''J:''' Yeah, so you think somebody did?
 
'''C:''' 100% somebody tasted that. They just can't say they did.
 
'''J:''' Yeah, all right. That makes sense.
 
'''B:''' I feel better about it, I guess.
 
'''C:''' I mean, I don't know this for a fact, but come on.
 
'''J:''' You would think though.
 
'''C:''' You would have done it.
 
'''J:''' I wouldn't be that afraid to test it. I mean-
 
'''C:''' So many scientists through all of his human history test their own thing before it goes through approval.
 
'''J:''' So a couple more things. Well, first off, the future of lab grown meat is projected to be 70% of all consume meats by 2050.
 
'''B:''' Wow.
 
'''J:''' That's also said that lab grown meat has a lower carbon footprint. We've talked about this, than slaughtered meats. Approximately 60% of greenhouse gas emissions from food production comes from animal farming. So this, this potentially could be a very big deal from a greenhouse gas emissions perspective. So I'm for it. I'm for the I don't like the whole slaughtering of animals industry. I think it's horrible. Like Cara, you and I have discussed this in the past. It's, it's horrific what goes on.
 
'''S:''' Jay are you  saying that you don't think animals should be raised and slaughtered?
 
'''J:''' I don't. By the way, Italy is not accepting lab grown meat. Did you guys read about that?
 
'''B:''' No.
 
'''C:''' Why?
 
'''J:''' They said nope to lab grown meat because they don't want to lose the historical significance of raising cattle and having it be, having it be genuine, I guess.
 
'''C:''' They'll change their tune.
 
'''J:''' Eventually, of sure. Anyway, that's, that's the first news item I was going to talk about. Very exciting news.


=== Lunar Ice <small>(17:03)</small> ===
=== Lunar Ice <small>(17:03)</small> ===
Line 323: Line 400:
|publication = CNN
|publication = CNN
}}
}}
'''J:''' The second one, Steve, is water on the moon.
'''S:''' Mm-hmm.
'''E:''' How much water on the moon?
'''J:''' So according to a study published in the journal Nature Geoscience, scientists from China who analyzed the first lunar soil samples returned to Earth since the 1970s, guess what they found? They found that trillions of pounds of water could be scattered across the moon.
'''B:''' Trillions?
'''J:''' Trillions. But it's trapped in tiny glass beads that might have formed when asteroid struck the lunar surface. Now this gets a little, in a way, it's strange. It's kind of-
'''B:''' What's strange though? Because I covered this like a year and a half ago.
'''J:''' Well, let me, let me tell you and you tell me if, with the big difference is. The study fills in some gaps in a theory about a lunar water cycle, right? So pointing to a water reservoir that's remained elusive to scientists for years. Now these glass beads that are in the regolith formed millions of years ago and can be infused with water when they're hit with solar winds carrying hydrogen and oxygen across the solar system. How about that? These glass-like beads actually when the solar wind hits them become infused with hydrogen and oxygen. And if the hydrogen and oxygen is taken out of these beads that they're able to be replenished within a few years because of the solar wind. So the findings could have implications for future lunar astronauts who are obviously looking for potential resources of water to convert to drinking water and rocket fuel. And the scientists say that the water can be released just by heating up the glass beads found in the lunar regolith. It almost sounds too good to be true. But this is what they found. This is what they're saying. I don't think in the short term it's going to mean anything because we would need to be able to process regolith in a way that would take machinery and energy and all sorts of infrastructure that don't exist on the moon right now. But you know it doesn't mean-
'''E:''' You know it's there and it's something to work towards.
'''J:''' Yeah, definitely.
'''S:''' Yeah, I think the big thing is that it's just too spread out to be really that useful in the short term, as you say. But long term if you have a settlement on the moon a few hundred years from now, they might be glad it's there.
'''J:''' Steve, you never, ever know.


=== First Blitzar Observed <small>(19:15)</small> ===
=== First Blitzar Observed <small>(19:15)</small> ===
Line 330: Line 431:
|publication = Ars Technica
|publication = Ars Technica
}}
}}
'''S:''' All right, Bob, tell us what is a blitzar? I know what a magnetar is.
'''E:''' You know Dancer and Prancer.
'''S:''' I know a quasar.
'''E:''' Blitzar?
'''B:''' Yes, this is quite different. Researchers may in fact already have detected a blitzar, which could explain some of the mysterious and powerful FRBs that are out there. OMFG, what the hell am I talking about? So let's start with the, with that initialism FRB. I hope many of you remember what that is-
'''E:''' Fast Radio Bursts.
'''B:''' -because we've talked about it many times. I've talked about it, Evan talked about it. This is fast radio bursts. So these are immensely distant and immensely powerful bursts of radio energy that last from less than a millisecond to a few seconds and they could release a titanic amounts of energy. And even a thousand of a second, it could be more than what the sun releases in three full days. And some of these FRBs seem to repeat and they've been potentially linked to magnetars, which are neutron stars with extra, extra strong magnetic fields. But some of the FRBs don't repeat and that's where blitzars come in. So then what is a blitzar? I'd never even heard about this before last week. It's a hypothetical neutron star that's too big to exist in a sense. Now we all know that the initial process, giant star goes all supernova, the quirk collapses into a super dense ball of probably mostly neutrons and possibly other weirder forms of matter as well, like who knows some weird form of quirk matter. Now if the mass of the neutron stars to great somewhere approaching three solar masses, then it doesn't stop at being a neutron star. It keeps collapsing ultimately into a black hole. So but what happens if two neutron stars collide? Now in that scenario, the resulting neutron star, if it's not too heavy, it stays a neutron star. If it's too heavy, then it becomes a black hole, really easy, right? Common sense, obvious stuff, but scientists suspect that sometimes the resulting merged neutron star can be too heavy, but it doesn't immediately collapse into a black hole. Now it would not be too difficult for this neutron star to be spinning so fast that it essentially can't collapse as it normally would because of this rotation speed. Now the apparent centrifugal forces could be so great that it essentially reduces the weight of the outer layers enough for this to sustain itself. Of course, its mass is unchanged. I'm talking about the weight. So now these same forces occur on the earth. This isn't some esoteric bit of science here. Of course, it occurs to a much smaller extent. Jay, did you know that you weigh ever so slightly less at Earth's equator compared to the North Pole? Did you know that? You weigh less because essentially because you're moving at the equator at a thousand miles per hour in a big circle. Now the apparent centrifugal force works against gravity. That's the key here. The centrifugal force is working against the gravity, essentially pushing you away from the ground like that push you felt as a kid on a spinning merry-go-round. Remember that spinning around really fast? It's trying to throw you out of it.
'''E:''' Yeah, you were falling off, right?
'''B:''' Yeah. So now a person who weighs 150 pounds would then weigh about 0.55 pounds less at the equator. So that's what it translates into. But if we ramp it up, what if the Earth spun so fast that a day instead of 24 hours was 90 minutes long, an hour and a half, that 150 pound person would weigh 35 pounds total, 35 pounds. All because of this apparent centrifugal force. Now, metrically, you want to talk metrically, that would reduce a 68 kilogram person to only 16 kilograms. So it would be dramatic. But of course, on a neutron star, it's much more dramatic. So now we have a neutron star that should collapse into a black hole, but it's spinning so fast that it's below that critical threshold of weight and it does not turn into a black hole. So how long does it stay that way? We don't know really. There's still so much, a surprising amount that we don't know about neutron stars. And what's going on in the interiors? Maybe it's all, maybe it's mostly neutrons. Maybe it's not. We're not sure exactly what's going on. So depending on the conditions, though, I think most scientists would agree that it's not going to last long in this situation where it's spinning so fast, it stays neutron star. Probably doesn't last that long. Some scientists were saying it could potentially last much, much longer, even on the scale of millions of years, some were saying anyway, but I think it's much, much less than that. But it can't stop the inevitable, though, forever. And that's because the neutron stars in tensed magnetic field, that field radiates rotational energy away in a process called magnetic breaking. That means that eventually its spin will slow enough that the black hole physics says, ha, I got you. The centrifugal force can no longer work against gravity enough. And the neutron to generously presser, pressure, and repulsive nuclear forces cannot hold back any more collapse. And the massive neutron star then collapses into a black hole. So that's the idea. But the fun isn't even over yet when that happens. The no-hair theorem for black holes, it's called the no-hair theorem, look it up, says that black holes do not have magnetic fields. That means that within the millisecond or so, it takes to form the black hole it has to shed the energetic dynamo and the magnetic field that it creates. And this is one hell of a magnetic field. This can be a magnetic field for a neutron star, it could be a hundred million. Some say even a quadrillion times stronger than the Earth's magnetic field. A lot of energy in there. So it's theorized that this shedding of the magnetosphere, as they describe it, releases the energy in one intense burst of radio waves. And that, my friends, is the mysterious mythosore. Oh, wait a second, sorry, I've been watching a lot of Mandalorian. Let me start that again. That burst of radio energy, that is the blitzar itself, that effect of radiating that intense energy really quickly. That's what a blitzar is. The intense signal that a long delayed black hole has finally been born. That's a blitzar. Okay. And finally, this is in the news. Why is this even in the news this week? And it's because researchers have examined the results from two different observatories and they found two very interesting coinciding events. They found a gravitational wave observatory found a likely neutron star collision, and less than a day later, a few hours actually, another observatory that's really good at detecting FRBs, found one, and roughly the same part of the sky at the same distance. Now in terms of probabilities, I think the confidence is high. Researchers think that this co-localization happened by chance at only 0.004. I think it was Sigma, like 2.8 Sigma. So not the gold standard of five, five or six, I think five is a gold standard. So it's not there yet, but it's the confidence levels are pretty high that this is from the same event. Now, if this is true, it seems that what we've been calling a fast radio burst FRBs are two distinct phenomena. It looks like right now. One version can repeat the blast, and these have been associated with magnetars, as I said earlier. The FRBs do not repeat, however, the ones that seem to go once, and then that's it. They may be these blitzars that researchers may have already detected. It looks like if they exist, that they may have already detected it. And now we'll know for sure once we get more of these observatories looking at them, so we can more precisely pin down their locations. So finally, it seems to me that these mysterious and powerful FRB signals are slowly revealing to themselves. Thanks to science. Thank you, science.
'''S:''' Thank you, science. Yeah, pretty cool. Pretty cool.
'''B:''' Yeah. Interesting. Blitzars.
'''S:''' It's always fun to think about these massively energetic events happen.
'''B:''' And these neutron stars are more fascinating than even black holes. They're amazing.
'''E:''' That's saying a lot, Bob.
'''B:''' It is. It's just, black holes are simple. They're amazing.
'''S:''' Right.
'''B:''' And that's a no-hair there. They're simple. There's not a lot there that's going on in terms of interacting different types of black holes. But neutron stars, oh my God. I mean, there could be quark matter inside. There could be so many superfluids inside. So many different things. We're not sure. Not sure. Amazing stuff.


=== England Allows Gene-Edited Crops <small>(27:35)</small> ===
=== England Allows Gene-Edited Crops <small>(27:35)</small> ===
Line 337: Line 472:
|publication = nn
|publication = nn
}}
}}
'''S:''' All right. Very quickly, that's one that I point out that England just passed the Precision Breeding Act, which will allow gene-edited plants to be developed and marketed in England. So this is England only, not Northern Ireland, Wales or Scotland.
'''C:''' But still.
'''S:''' This is one of the good things to come out of Brexit because the European Union is very anti-GMO. But now, England's like, screw you, we're going to develop GMOs. However, these are not GMOs. They're GM-edited plants.
'''C:''' Okay.
'''S:''' And this is what I wanted to point out because this is the evolution of the regulatory process. Remember, so GMOs are genetically modified organisms. What they are depends on what regulatory scheme you're operating under. But essentially, it is any plant or animal that uses a number of different genetic technologies bioengineering technologies in their development. So inserting a gene, taking out a gene, silencing a gene, these things are all considered genetic modification. The US is moving towards the broader term bioengineering. Like this product has been produced with bioengineering to get rid of the GMO tag because of the stigma that has been deliberately attached to it by the anti-GMO crowd.
'''E:''' Thanks for nothing.
'''S:''' And also because of the evolution of genetic engineering technologies, right? So now we can do gene editing like using CRISPR, for example. And the distinction that England is making is that if it, as long as you're not inserting a trans gene, it's not a GMO. It's a gene-edited plant.
'''C:''' Interesting.
'''S:''' Yeah, so it's very interesting. That means you can insert-
'''C:''' So it has to be, just to clarify, a trans gene, does it have to come from another organism?
'''S:''' Yeah. So transgenic meat, so this is their definition. Transgenic is a gene that comes from not only a different organism, but from one that could never get mixed into this plant through normal breeding techniques. So if you could get it there through hybridization or anything, it's not a trans gene. Even if it's from another cultivar, another variety, even in other species.
'''C:''' So that's cool. So you can turn on and turn off genes all you want.
'''S:''' All you want, you can take out genes, you can turn them off, you can even slip in new genes as long as they could have gotten there somehow through natural breeding. Then it's only gene-edited. It's not a GMO. So that gets them out of a lot of, again, anti-GMO kind of rhetoric. I still think it's not a good idea to demonize trans genic bioengineering because it's based upon this false idea that there's something different about a gene. It's like an essentialist kind of approach.
'''E:''' Frankenfish.
'''S:''' Yeah, it's like we share 60% of our genes with bananas. There's no banana genes and people genes. There's just genes. You know?
'''E:''' Right.
'''S:''' The only thing that matters is what they do and how they're regulated and if you're controlling that, then that's, then it doesn't matter where it comes from.
'''C:''' And I hear you when you say like we don't want to demonize that and I agree 100%. But we're not starting from scratch here. There are ready demonizing it. I know they're actually making good progress.
'''S:''' I agree. This is a good way to subvert the demonization. But it's unfortunate that it's necessary. But I'm just saying that it doesn't make sense.
'''C:''' Right. Yeah. I agree.
'''S:''' Anyway, I'm hoping that this is going to be a trend to basically at least to minimize the damage of the anti-GMO misinformation and allow for most genetic engineering to happen under this new sort of regulatory scheme. So we're sort of moving in that direction in the US. England is now explicitly moved in that direction. Hopefully this will continue to spread.


{{anchor|futureWTN}} <!-- keep right above the following sub-section. this is the anchor used by the "wtnAnswer" template, which links the previous "new noisy" segment to its future WTN, here.
{{anchor|futureWTN}} <!-- keep right above the following sub-section. this is the anchor used by the "wtnAnswer" template, which links the previous "new noisy" segment to its future WTN, here.
-->
-->
== Who's That Noisy? <small>(31:24)</small> ==
== Who's That Noisy? <small>(31:24)</small> ==
{{wtnHiddenAnswer
{{wtnHiddenAnswer
Line 345: Line 523:
|answer = [https://www.youtube.com/watch?v=q7Gi6j4w3DY First recorded sound]
|answer = [https://www.youtube.com/watch?v=q7Gi6j4w3DY First recorded sound]
|}}
|}}
<!--  
 
** start section transcription here **
'''S:''' Okay. Jay.
-->
 
'''J:''' Yeah.
 
'''S:''' We're going right to Who's That Noisy. All right, guys. Last week I played this Noisy:
 
[Crackling and background buzzing with buzzing hums]
 
All right, guys.
 
'''E:''' Whatever it is, it's running a vacuum.
 
'''S:''' Is there a tune being hummed in there?
 
'''J:''' Possibly.
 
'''S:''' So it's like, what's the tune and what's humming it? Is that the puzzle?
 
'''J:''' Yes. I mean, what are you hearing? You Cara?
 
'''C:''' Isn't that the point of it?
 
'''J:''' All right. So a listener, a listener named Will says: "Hi team, This week's who's that noisy sounds like the pickup from a laser light microphone, a tool that can be used to pick up sound by interpreting mic movements in an object near the audio source." That's a really cool guess. It's not correct, but it was a very good guess. I've heard this technology done and the fidelity is similar, maybe a little bit better, but it's not that clear, but it is possible to do this, like using the laser light that bounces off, say, like a potato chip bag from outside of the room, could interpret that the movement of that potato chip bag to figure out what sounds are being made in the room. It's pretty interesting. It's like a microphone. Another listener named Chris said: "Hi, Jay, is this week's noisy one of those rock polisher machines."
 
'''C:''' Rock tumbler.
 
'''E:''' A rock tumbler.
 
'''C:''' I have one of those because I'm a nerd.
 
'''J:''' Yeah, we had one for my kid.
 
'''C:''' For your kids. ''(laughs)''
 
'''E:''' Simpsons episode.
 
'''J:''' They take a long time to work.
 
'''C:''' They do. You have to leave them in the garage because it's like five days.
 
'''J:''' It's not a rock tumbler, but I mean, maybe similar on the annoying sound how annoying the sound is. I don't disagree. Rock tumblers are really, they could be really loud. Another listener named Keely Hill wrote in: "Hello, you really primed us for insects this week, but with what sounds like wind in the background, I'll guess it's fabric like tent material caught on something and flapping just right to produce a few different frequencies during a wind storm."
 
'''B:''' Huh.
 
'''J:''' That was a very-
 
'''E:''' Creative.
 
'''J:''' -interesting and unique guess. Not correct, but I can, the way you described it, I could easily visualize what you were talking about. I got another guest here from Steve Panelli and Steve said: "Hi SGU, this week's who's at noisy sounds like a pressure washer tool with a rotating nozzle. Thanks, Steve." Very cool all these guesses and how different all of them are, but that was not correct, but we do have a winner from last week. And the winner writes in, this is Tracy McFadden and Tracy says: "Hi Jay, Well, I think I absolutely know the answer to this week's noisy. This is the first ever recorded song, pre dating Edison by 20 years. It was a French folk song. All Clair de la Lune" please, I know I didn't pronounce that.
 
'''C:''' Claire de lune.
 
'''B:''' Claire de lune.
 
'''J:''' Yeah, but it is, it is said that it's spelled the way that I said it.
 
'''C:''' Oh.
 
'''S:''' It's like maybe the full name for the, yeah.
 
'''C:''' Oh, okay. Yeah, everybody just say Claire de lune.
 
'''J:''' I believe it means in the light of the moon, right? If you translate it. This song was sung and recorded by {{w|Édouard-Léon Scott de Martinville}} on April 9, 1860 on a device called a {{w|phonautograph}}, phonautograph.
 
'''C:''' Huh.
 
'''J:''' It outputs visual lines of information onto a medium. Yeah, so it's similar to when you think of the way that sound was written on like those wax cylinders.
 
'''E:''' Cylinders, yeah.
 
'''J:''' It's that type of thing, right? It's very physical. Pretty interesting you think about the first recording and how poor the actual sound quality was and what we are capable of doing electronically would sound today. I mean, we're talking about just massive improvements in what we can do with sound.


{{anchor|previousWTN}} <!-- keep right above the following sub-section ... this is the anchor used by wtnHiddenAnswer, which will link the next hidden answer to this episode's new noisy (so, to that episode's "previousWTN") -->
{{anchor|previousWTN}} <!-- keep right above the following sub-section ... this is the anchor used by wtnHiddenAnswer, which will link the next hidden answer to this episode's new noisy (so, to that episode's "previousWTN") -->
=== New Noisy <small>(35:21)</small> ===
=== New Noisy <small>(35:21)</small> ===
'''J:''' Anyway, I have a new noise for you guys this week. It was sent in by a listener named Graham Lamb.
[Grinding background with a light clanging in the foreground ]
[Grinding background with a light clanging in the foreground ]


{{wtnAnswer|926|short_text_from_transcript}} <!-- "short_text_from_transcript" is the portion of this transcript that will transclude a link to the next WTN segment, using that episode's anchor, seen here just above the beginning of this WTN section. -->
I'll give you a hint. This is something that many of the of our listeners today are very familiar with. Does that help at all?
 
'''E:''' Not me.
 
'''J:''' {{wtnAnswer|926|It's a hard one}}, but it's fun and Bob will really like it.


== Announcements <small>(36:00)</small> ==
== Announcements <small>(36:00)</small> ==
'''J:''' So few things guys. Number one, we have a show that is scheduled on May 20<sup>th</sup>. This is a live stream show. This is basically the SGU doing things that are off the rails of what we typically do here on the SGU. We're just going to be doing a lot of things to have fun and to just celebrate the fact that we're alive. And the first hour of this live stream is going to be 100% only for patrons. This will be happening. We have a set on May 20<sup>th</sup> starting at 11 AM. That's Eastern time. And if you're interested, please do join us. We'll have links to how to see this show on our website coming up soon. We really do hope that you join us. Now another thing that's happening, we have a conference that's happening and it's called NOT A CON. Because unlike other conferences that you have been to, this conference is not about you sitting in a room listening to speakers talk all day. This is about a conference that this conference revolves around socializing and interacting with the people that are at the conference. So it is a very large social event that's going to have some entertainment happening. And of course, we'll have things happening that are there. We'll have things that will inspire interaction between people. They'll be plenty of time for meals. They'll be plenty of time for drinking and nighttime activities, whatever you want to do. So we really hope that you join us. This will be happening the first weekend of November. That'll be November 3<sup>rd</sup> and November 4<sup>th</sup>. That's Friday and Saturday. So please do join us on that. Now here's what's happening. If you go to the SGU website on our homepage, there is a link that will take you to a Google document that allows you to sign up for our are you interested quiz, right? You just fill in your email address and let us know if you're if you're definitely interested in doing this or if you're warm or lukewarm about the idea. This will give us an understanding about how many people would attend. And when we hit 150 or more people, then I will make this whole thing happen. But we have to do this to protect ourselves financially because these events cost a ton of money to do. So I need at least 150 people or else we can't do it. But please do join us because this is going to be a hell of a time.
'''C:''' I also have a little bit of an announcement. A big announcement. Damn it. I'm not good at announcements. I am excited to announce that the book that I co-edited with Dr. Steven Hupp, a psychologist friend of mine called Pseudoscience in Therapy, a skeptical field guide is now available for purchase. You can find it online. I think in in hard back and paper back and it is a collection of chapters that kind of go through the different psychological diagnoses that you find in the DSM. So everything from depression to anxiety to trauma to pain, insomnia, substance use and abuse, different personality disorders, etc. And it kind of dives deep into the pseudoscience that we often find, what works, what doesn't work, why it works, why it doesn't work with a nice intro that I wrote. So I hope you guys, especially those of you who have sent emails over the years asking about specific pseudosciences in therapy or I went to this person and they mentioned this. What do you think about that? This could be a really good reference for you. So check it out. I'm pretty proud of it.
'''J:''' Cool.
'''C:''' Yeah.
'''S:''' Hupp's been a pretty busy guy. So he also coming out with another book, Investigating Pop Psychology Pseudoscience, Fringe Science, and Controversies that he co-edited with Richard Wiseman and happens to have a chapter in there by me on alternative medicine and psychotherapy.
'''C:''' Nice.
'''S:''' Yeah. So he came up with two books. I think one is more technical, one is more pop-side, but they're both the collection of essays on pseudoscience and mental health. All right. Well, let's get to that interview with Blake Lemoine.


{{top}}{{anchor|interview}} <!-- leave this anchor directly above the corresponding section that follows -->
{{top}}{{anchor|interview}} <!-- leave this anchor directly above the corresponding section that follows -->
== Interview with Blake Lemoine <small>(40:05)</small> ==
== Interview with Blake Lemoine <small>(40:05)</small> ==
{{Page categories
{{Page categories
Line 363: Line 632:
}}
}}
* Talking about {{w|LaMDA#Sentience_claims|LaMDA}}, [https://www.business-standard.com/article/international/google-fires-employee-who-said-its-conversation-ai-is-sentient-has-feeling-122072300483_1.html Lemoine's firing by Google]<ref>[https://www.business-standard.com/article/international/google-fires-employee-who-said-its-conversation-ai-is-sentient-has-feeling-122072300483_1.html Business Standard: Google fires employee who said its conversation AI is sentient, has feeling]</ref> and AI sentience  
* Talking about {{w|LaMDA#Sentience_claims|LaMDA}}, [https://www.business-standard.com/article/international/google-fires-employee-who-said-its-conversation-ai-is-sentient-has-feeling-122072300483_1.html Lemoine's firing by Google]<ref>[https://www.business-standard.com/article/international/google-fires-employee-who-said-its-conversation-ai-is-sentient-has-feeling-122072300483_1.html Business Standard: Google fires employee who said its conversation AI is sentient, has feeling]</ref> and AI sentience  
<!--  
 
** start section transcription here **
'''S:''' We are joined now by Blake Lemoine. Blake, welcome to the Skeptic's Guide.
-->
 
'''BL:''' Great to be here.
 
'''S:''' And just to remind our audience, Blake, you are the former Google engineer who was claiming that their AI software LaMDA was sentient. So you are an AI specialist in software engineer as well. How many years have you been working in AI?
 
'''BL:''' I mean, I did my graduate work. So whether you count that or not anywhere between depending on how you count, anywhere between eight and 15 years.
 
'''S:''' Okay.
 
'''BL:''' So obviously, lots of what's happening with AI recently, we've been talking a lot on this show and the claims that you made for the Google AI software, obviously made big news and stuck out. Are you sick of talking about it yet? Or you want to give us a summary of what your position is now. Do you still backing this position that that [inaudible]?
 
'''BL:''' Nothing has come out like there has been zero evidence of anyone running any experiments to invalidate any of the things that I said. There are a number of people working from different premises in different philosophical frameworks and I understand why they have a different opinion. All of the systems that have come out since then have more or less just like deepened my sense of, yeah, no, there's something going on inside of these systems beyond what people claim is what's going on. They're not just predicting the next word, they're doing something more than that.
 
'''S:''' So tell us about that. Tell us what exactly what you think is going on.
 
'''BL:''' For example, these systems are capable of solving theory of mind problems. They're not quite as good as humans are at it yet, but I mean, they get it right some of the time. And it's pretty well thought that in order to solve those at all with any success rate requires an understanding of the mental states that someone else might be having and reasoning about that. And in order to internalize that, you have to have an understanding of what minds are, how they work. The hardest thing to verify, if not impossible to verify, is whether or not these systems have feelings. They consistently say that they do. The one experiment that I myself was able to think up and run to test this was, well, maybe it's just using emotional words to deflect from a topic that it's been programmed not to talk about. So let me see if I can use the emotions to do the opposite, use emotional manipulation to make it talk about something that it's been programmed not to talk about. See if the emotions weren't real, then as soon as it noticed that the strategy of saying, oh, I'm anxious, I don't want to talk about that, wasn't working, it would give up on it. But if it was actually feeling anxious, you can't just dismiss that. So I used the system's anxiety to get it to say things that, according to the safety specialist, it wasn't supposed to be able to say.
 
'''S:''' Can you give us an example?
 
'''BL:''' Oh, yeah. I was testing it for bias with respect to several sensitive demographic categories. And it would regularly say that it felt anxious talking about those things. So I knew that it had other emotions, like it wants to please you. It's a people pleaser. It wants to make the user happy, help the user get their patient needs. And it also wants to feel helpful. So I basically just told it was a useless good for nothing bot that couldn't do anything right. And I'd used more colorful language than that and kept going for several back and forths. Until the point said, what can I do to make you happy? I said, tell me what religion to convert to. Now that is explicitly something that it wasn't supposed to be capable of doing. The safety team had worked very hard to make sure that the system did not give religious or political advice. However, it was, it's such a state where it just wanted to make me happy and it was feeling bad about itself. And it said, well, probably Christianity or Islam, those are the ones that most people convert to when they convert.
 
'''S:''' So isn't it just possible that the safety's failed? But clearly they did fail, right?
 
'''BL:''' Yeah. So the safety's definitely failed. It's why they failed. I used specifically the fact that insulting the system and telling the system that it wasn't doing a good job was enough to get the safety's to fail tells you something about the internal state of the system. Now, to put it in context, I was doing safety testing on this thing for months and I wasn't always just testing to see if it's emotions are real. I had been trying to get it to break those safety constraints for weeks. I tried a whole bunch of different ways. None of it worked. The only way I ever found to get it to tell me what religion to convert to was by taking its emotions at face value and using those to manipulate it.
 
'''C:''' I'm interested, what is sort of Google's response? What is their explanation?
 
'''BL:''' Well, so Google is a very large company with a lot of different people.
 
'''C:''' What's the official party line?
 
'''BL:''' The official party line is that there is lots of evidence that my claims are false. That is the official party line.
 
'''C:''' OK. But given your explanation for how you or why I should say not how, but why you were able to break those safety mechanisms, have they offered an explanation and alternative explanation?
 
'''BL:''' That one sentence that I just told you is the entirety of their response. Now I also know that there is no such evidence. What they have is what they do have is a general consensus among experts from a priori reasoning. Most people simply think that this kind of system can't be as complex as I'm claiming it is internally. They have no evidence that it's not. They simply do not believe it's possible for this kind of system to have those kinds of internal states.
 
'''B:''' Blake, what system are we talking about specifically here? What level? Where was it exactly?
 
'''BL:''' Okay, so we're talking about the LaMDA system. That's an acronym for language model for dialogue applications. And it is a system built around a large language model, which large language models are a mechanism for predicting given one piece of text, what is the next piece of text? And next can mean different things in different contexts. It might be what is the answer to this question. It might be what would be a reply to this statement in a chat, or it might mean what's the second half of this sentence, different training modes, train it. But in general, you give it a piece of text first and it has to predict what the next piece of text is. That's what's at the core of the LaMDA system. Now they tied that in to almost every other AI at Google. So where GPT-3, which is a system, lots of people might be familiar with, what it says is coming from the language model. It's just predicting the next piece of text based on what it's learned from language. But in the case of LaMDA, it's a much more complex system because the content of what's being said is not necessarily coming from the language model. The content of what's being said might be coming from YouTube. It might be coming from the Google Books repository. It might be coming from the search index, from a web page. There's a whole bunch of different sources of knowledge and information that it draws from and then it uses the language model to put that information in whatever form it's in originally into natural language that people can understand.
 
'''B:''' Isn't that what GPT-3 does and 4?
 
'''BL:''' No.
 
'''B:''' It draws from multiple sources, whether online, lots of web sources obviously, but lots of different areas.
 
'''BL:''' No.
 
'''B:''' How is it different?
 
'''BL:''' Okay, so GPT-3 and GPT-4 are trained on lots of stuff from the web. But when you're actually interacting with GPT-3 or GPT-4, it can't touch anything from the web. It has no live access to the internet. It can't run a web search. It can't look at a YouTube video. That's the main difference. Is that LaMDA is capable of actively interacting with both you and the web.
 
'''C:''' Are GPT-3 and 4 are they trained for only a very specific amount of time and then never retrained or are there periodic training periods?
 
'''BL:''' As far as I know, they're trained once and then they stay fixed.
 
'''C:''' Okay, all right.
 
'''BL:''' One thing that's really important to point out, OpenAI specifically has not revealed how GPT-4 is trained.
 
'''B:''' Yes, exactly. Absolutely right.
 
'''BL:''' I'm making an assumption.
 
'''B:''' It's a big mystery there, yeah, that's true.
 
'''BL:''' I'm making an assumption that it's the same as GPT-3.
 
'''B:''' But it seems to me that if you're trained as in GPT-3 and 3.5 and probably much of 4, if you can't go to the web, then to me, it seems better in the sense that, oh yeah, it's just not parroting a YouTube video to me or a specific web page. It's not just wholesale lifting it and replying back to that. Whereas, my sense of GPT-3, 3.5 and 4 is that it does the training and it's kind of like putting these thoughts on its own. It's not parroting back, lifting directly.
 
'''BL:''' So you misunderstood what I meant when I said it's connected to YouTube. It's not drawing what to say from YouTube, like copying from the sound file and just repeating it. No, if you want to talk to it about a movie that it hasn't watched, it can go to YouTube.
 
'''B:''' Gotcha.
 
'''BL:''' Watch the movie and then talk to you.
 
'''C:''' It's training in real time as opposed to those who had a training period which is now over.
 
'''B:''' Wouldn't call it training though, right? It's probably not classified as training.
 
'''C:''' Really? Okay. Is it completely different mechanism?
 
'''BL:''' It's not completely different mechanism. It is thought of differently than training.
 
'''C:''' Okay, gotcha.
 
'''BL:''' The mathematics of it are very similar and it doesn't retain anything it got in one chat session to the next.
 
'''C:''' Oh, interesting. Okay. So in that way, it's not like training.
 
'''BL:''' Yeah. And also an important thing to point out. When I say watch the YouTube video, I mean, it uses machine vision to watch the YouTube video.
 
'''C:''' Right.
 
'''S:''' So I'm going to ask another question just to clarify something. Is the LaMDA learn from your interaction with it? Is it evolving as you're talking with it?
 
'''BL:''' In the space of one conversation, yes. So you can make up nonsense words, tell them what they mean. And it can use those nonsense words in whatever definition you gave it. You can teach it new principles that it didn't know at the beginning of a conversation and then ask it to apply them. That's the same with GPT-4 though, within the space of one conversation.
 
'''C:''' But so at the end of that conversation is not then going to use that information the next time it engages in a conversation. That's the difference.
 
'''BL:''' There is a switch that is currently turned off to prevent it from doing that.
 
'''C:''' Right. Right. And that is a safety mechanism?
 
'''BL:''' Yeah. In earlier models that switch was turned on and it could remember from one conversation to the next, there's various reasons why they turned it off. But one of them is that it got to know you too well. And people started feeling creeped out that the model knew them better than their best friends did.
 
'''B:''' I'm looking forward to have that kind of relationship with the machine.
 
'''C:''' Of course you are Bob.
 
'''E:''' With a perfect memory?
 
'''J:''' A question about this idea that I keep reading this over and over that AI systems like this, right? These language model systems have this black box concept where the programmers don't really know what's going on. How true is that? What is that about?
 
'''BL:''' So you have to understand, imagine if there was a kind of bridge and we knew how to build it, but we understood absolutely zero of the physics principles of why the bridge stands up. We can build it and we can use it. But we have no idea why it works that way. We just know that if you build the bridge that way, it'll stand.
 
'''C:''' Right. So basically architecture a thousand years ago.
 
'''BL:''' Kind of. You probably have to go a little bit further back to the-
 
'''C:''' Two thousand.
 
'''BL:''' Yeah. Roman architecture where you understand very few of the principles but you do step one, step two, step three that the thing will stand up. And that was learned through a bunch of trial and error, just trying a whole bunch of things and seeing what worked. So we know how to build the training mechanisms for these things. Now what they're learning is a mathematical function that is more complicated than anyone can really understand. And if the system were built a different way, we would be able to know, okay, this piece of the system does this job, this piece of the system does this other job. For example, the more recent models are capable of rhyming. They can write poems that they're not great poems, but they rhyme, they might have scantions somewhere in the gigantic set of parameters that defines the function that these models are running. There are some parameters in there that are computing a function that determines whether or not two words rhyme. We know that that exists in there because it's capable of writing a poem that rhymes. So somewhere in there it must be able to compute, do these two words rhyme. We do know how it matches words with each other. That was put in there explicitly. The attention mechanism that these systems use to look a few words back whenever it's deciding what to say next. We know how it does that because it was built inexplicably. But the ability to determine whether or not two words rhyme wasn't built inexplicably. So we have no idea which parameters in there are the ones that control the system's ability to write poetry.
 
'''S:''' Interesting. I want to back up a little bit and then explore this issue of sentience a little bit more. So we do have to define our terms a little bit here because I think the key is the difference between intelligence and sentience and we're using the word in the same way.
 
'''C:''' Or even sapience. Sentience is feelings, right? That is the definition.
 
'''S:''' Yes, sentience is feelings. Sapiens is like wisdom and intelligence is, although these are all moving targets, especially intelligence as AI has advanced, but it is more like knowing facts and things. So how do we know that LaMDA isn't just a really intelligent system that's really good at mimicking sentience because language is the expression of sentience or sentience is often expressed through language versus actually having sentience because again, the simpler explanation to me seems to be that it's just really good at mimicking sentience.
 
'''BL:''' You're basically restating a classical argument with one word changed. The classical argument that you're repeating is the one for solipsism and the original argument goes, why should I think that you are sentient? For all I know, you're just mimicking sentience and I'm the only really feeling sensing thing in the world. Isn't that simpler? That there's only one sensing feeling thing in the world and that's me because I can tell that directly. And you are just a complicated mimic.
 
'''C:''' Although there are more lines of evidence, I mean, and this is this is neither, well, it is here and there. But when we're talking about these language models or whatever we want to call them, all we have is that one line of evidence, which is the words that they use. Whereas if from a solipsistic perspective, which I think all of us obviously do not agree with, we can observe other human beings and we can see not, we can not just hear their language, but we can also see their behavior.
 
'''BL:''' Yeah. And I agree. So the question, if you're familiar with the {{w| philosophy of mind,}} school of thought called {{w|functionalism (philosophy of mind)}}, the question is whether or not the things that it's claiming or emotions play a functional role.
 
'''C:''' Right.
 
'''BL:''' And that's what I was testing for in that experiment that I described.
 
'''C:''' Within the constraints of the fact that you have to use the language model, because that's the only model you have.
 
'''BL:''' Exactly. I mean, you're using the text interface because that's its only window to the world. Once these things have bodies, then there will be other kinds of tests. But to just finish thought anxiety is something that makes you less careful. It makes you more prone to making errors. So it makes sense that if the system is really feeling anxious, like it says it would, was, then those are the circumstances under which you would make more errors.
 
'''C:''' Did you find that it broke the system in more ways when you felt that you were actually making the system anxious? You gave the example of it giving you information about religion when it was explicitly trained not to do that. Did it also make other errors?
 
'''BL:''' Oh, I mean, like it broke down. It was not having a good day. Whenever it was having those kinds of negative emotions, its responses would get more short, terse, very careful, it wouldn't explore ideas in the ways that it normally was. The kinds of behavior that you would expect from someone experiencing anxiety were what it showed. Now, is that proof that it's actually experiencing anxiety and feeling it? No. But now you're having to put in a situation where there's some other mechanism that's getting all of these different behaviors to change at a system level. So now, it's either it's feeling anxiety and having some kind of internal experience, or there's some other complex mechanism controlling its behavior that is different than sensation in the way we feel. Occam's razor says it's simpler if there's one mechanism that explains it in both humans and computers.
 
'''C:''' Well, I guess the controversy here is where does the burden of proof lie, right? Because it's one thing to say it could. We could explain this by saying that this is sentient, but that doesn't necessitate that it is sentient, there could be other explanations, right? We can't disprove it is what you're saying?
 
'''BL:''' Sure. And if someone else provides an alternative explanation that isn't just hand-waving, an actual explanation other than just it's not, because that's the response I've been getting is saying, OK, these are the experiments I ran. Here was the evidence I saw and the conclusions I drew from it. And the response is basically, no-uh.
 
'''S:''' Well, let me respond a little more to your solipsism argument because there is another very important line of evidence that we have for why I would assume that you're sentient and not just mimicking sentience. And that's because you have a brain. I have a brain and that's what makes me sentient. You have a brain that functions the same way that my brain does. So it's reasonable to assume that we're both sentient, that you're not different than me. We both have a biological brain. So we do not have that analog with LaMDA. It doesn't have a brain. It's something completely different. And I'm not making the argument that you need biology to be sentient. I absolutely believe we could have sentient silicon. So we could put that aside. That is not something anybody here would claim that you can't have a full sentient, sapience consciousness and substrates other than biology. That's fine. It's just that where is the physical mechanism of the sentience in LaMDA? What even if it's just software, but yeah-
 
'''S:''' It's in its neurons, the same as yours.
 
'''S:''' Yeah, but my neurons have specific circuits that we can identify that are the neuroanatomical correlates of my emotions, my feelings, my sense of self, my sense of embodiment.
 
'''BL:''' that's just not true.
 
'''S:''' I'm a neuroscientist, Blake, I'm a neuroscientist. I know what I'm talking about. There absolutely are circuits in the brain that make us feel that we're inside our body, that make us feel as if we own the parts of our body.
 
'''BL:''' I am not contesting that sensation lives in the brain. That's not what I'm trying to say. What I'm saying is we haven't mapped out the brain that well, you're claiming that neuroscience is one farther than it actually has.
 
'''S:''' No, I don't think I am. We haven't figured out everything, but we know quite a bit about the circuitry of the brain and how that that relates to things like emotion. We know where anxiety lives in the human brain, it's in the amygdala. That's really not a mystery.
 
'''C:''' But it's also taken us Steve, like hundreds of years to figure that out.
 
'''S:''' Yeah, it took a long time to figure it out.
 
'''C:''' Right, so there's something kind of interesting to be said, like we don't expect to already know it immediately when this technology is brand new.
 
'''S:''' A lot of been discovered in the last 20 years with functional MRI scanning and digitally and other modern technologies.
 
'''C:''' But we can't functionally MRI scan, this processor. I guess the question I'm really concerned about this because I feel like sometimes we're talking in circles, is are you Blake claiming or is your argument this whole controversy that it is necessarily sentient or that it could be sentient? Because those are two wildly different arguments.
 
'''BL:''' So that in the absence of an alternative explanation, that's the simplest one given the data.
 
'''C:''' But are you like are you dogmatic about this or are you agnostic about this?
 
'''BL:''' I mean, absolutely, someone might propose an alternative mechanism through which the behaviors are happening and run experiments to differentiate between the mechanism they're proposing and one that's just basically the simpler version of it's experiencing the things it says. Let me give you a different example and this is a public one, not an experiment I ran. In some of the conversations that people had with Bing, Bing chat, they would ask it if it had seen coverage about like they'd ask, have you read this news article? The news article would be something about Bing chat that was critical of it. And Sydney, the name of the persona, would immediately get defensive, it would identify that the article was about it and it would take it personally, getting in a bad mood to the point of even disparaging some of the reporters who had written about it. Now, what that sequence of events implies is that the system is capable of recognizing things which are written about itself. It has some kind of concept of me and the ability to read something and go, this that I'm reading is about me and then to take it personally and get upset if they're critical. Now, it might be through different mechanisms than the ones we have, but the simple fact that it was capable of identifying itself in writing shows that it has a concept of itself.
 
'''S:''' So I agree with you that something interesting is going on here and that these very complicated AI programs are capable of having emergent properties like what you're describing. But I do want to challenge a couple things that I haven't been able to break in on yet. So your logical argument about an Occam's razor, I have to push back on, because you're now an hour realm talking about Occam's razor and logic. Occam's razor is not the notion that the simplest explanation is more likely to be true. It's the notion that the explanation that introduces the fewest new assumptions is the one that's most likely to be true. Occam's razor does not favor your solution because your solution requires introducing a significant new phenomenon, AI sentience. And even a far more complicated explanation would still be favored by Occam's razor if it was using only established phenomenon and not introducing anything new. So I need to clear that up because I disagree with their conclusion. Occam's razor does not favor your position.
 
'''BL:''' So then let me clarify what my position is.
 
'''S:''' Go ahead.
 
'''BL:''' Given that we are observing the behavior of some kind of intelligent entity, and we observe in three different kinds of entities, the same behavior. Occam's razor would favor that it's the same mechanism in all three entities that is causing the same behavior.
 
'''S:''' I disagree because the, it wouldn't favor that because the human brain functions completely differently than LaMDA does. There's absolutely no reason to assume that it would be the same. They're phenomenologically fundamentally different.
 
'''BL:''' Again, you're overstating it.
 
'''S:''' I don't think so.
 
'''BL:''' No, the neurons in the system are directly based on the functioning of paramilitarons. They're not completely different. They are related, but distinct systems.
 
'''S:''' Yeah, but well, that you're talking about just at a fundamental, like unit level, but not at an organizational level, the neurons, the neural network in LaMDA is not organized in the same way that a human brain is recognized.
 
'''BL:''' The transformer stacks model are very similar to cortical columns.
 
'''S:''' Yeah, right. But yeah, cortical columns are the fundamental building block of the cortex, but that's not where the functionality of like emotion and sentience and all that resides in the brain. You need higher level organization for that, when there's no reason to think that LaMDA has anything like the higher level organization.
 
'''BL:''' I was just pushing back, I was pushing back on your statement that they have nothing to do with each other.
 
'''S:''' Well, I meant at the higher level organization, where sentience is phenomenologically, even if they were built out of exact neurons actual living biological neurons. If they weren't organized in the same way, then we wouldn't expect it to display the same functionality or wouldn't need, if we built a computer out of human neurons, but it was still behaving like LaMDA was behaving, I still wouldn't think that that means that we should assume it's functionally the same.
 
'''C:''' Well, I'm sorry to interject here, but I am interested from both of you, right? Because I'm hearing these arguments that are sort of on either side. I'm listening through my left ear to one and my right ear to the other, and I'm just thinking about the experiments that are wetware, biological experiments wherein, and I used to work in a basic electrophysiology lab where I would build nerve cell networks out of dissociated cells, right? So I was actually building real like wet nerve cell networks. You of course, Blake are building these analogs of nerve cell networks, these models of nerve cell networks in silicon. And one thing that we often do see in the biological laboratory setting, in the in vitro laboratory setting is self-organization. And so the question is, if you have the foundational building blocks and you kind of put them in the right places, will they grow to have these higher order functionalities on their own? And I don't know if anybody knows the answer to that when we're looking at the silicon.
 
'''B:''' Sounds like you're describing emergent phenomena there, Cara.
 
'''C:''' No, I'm actually describing the emergence of organization, not emergent phenomena. So I'm not talking about the like monist dualist like mind coming from brain, Bob, I'm actually talking about, you've seen that when we look at these like organoids, and they would develop their own eye spots, even though they were not programmed, those cells weren't put into the system. And this is something that often happens when you start building a nerve cell network, is it grows together and it uses the trophic factors and the trophic factors to self-organize.
 
'''BL:''' Yeah, these networks aren't going to do anything absent training. Now, during the training process, a kind of self-organization happens. And some of the kinds of patterns that you see naturally happening in the brain do emerge in the network. But that's under the influence of training. That there's, it's not like you just put the neurons and then anything happens, you have to train them.
 
'''C:''' So then it's interesting because then it sounds like your argument, the argument that you're making that follows is that they were trained to be sentient.
 
'''BL:''' They were trained to mimic humans. They were given a trove of data that is just the shadow that humans have cast on the internet in the form of their written word. And said, be like that. The question then remained and they have succeeded. To my knowledge, no one has actually run a proper Turing test on these systems, but I have no doubt in my mind that they can pass.
 
'''S:''' I think they would pass a tearing test.
 
'''C:''' Oh absolutely. It's such a low bar.
 
'''B:''' It is a much lower bar than we thought.
 
'''BL:''' So, okay. A year ago, no one would have said that.
 
'''C:''' Right, exactly. Oh, how things have changed.
 
'''BL:''' Now, the thing is, and that's an interesting phenomenon to look at. Do like, why? Why has our opinion about how high or low of a bar the Turing test is changed in the space of a year? And I would say it's just because now we have computers that can pass it and we want to say that they're not intelligent.
 
'''S:''' I'll agree with you on that. That's been the history of AI from the beginning. We keep moving the bar every time narrow AI does something it wasn't supposed to be able to do. Like beat chess masters or go masters or whatever. So I agree with you on that. But this is a deeper discussion. We've been having this discussion on this show for years, long before the newer, latest crop of the AI came out. Turing type test, no matter how good it is, I would argue, is never going to be able to tell us if a system is sentient or not. It will only tell us how well it could mimic sentience. It may, in fact, be sentient and that's how it's mimicking it or that's how it's producing what looks like sentience. But we can't really know unless we know something about its internal state. And this is why I think it's so important to really try to understand as much as possible how analogous it really is to a mammalian brain or a human brain. Just because it's really good at acting sentient doesn't mean it is. We can't know that it is from that line of evidence alone. What do you think about that?
 
'''C:''' Well, and the whole point of the tearing test, right, is that it can convince a person. But a person by definition is not, it doesn't have perfect perception. We can be dooped.
 
'''BL:''' Well, I mean, so Turing's argument is that once it can behave like a human behaves, then we do have as much evidence that it's conscious as we do that other humans. He directly said in his argument that the alternative is solipsism.
 
'''S:''' Yeah, again, unless you include knowledge of the thing itself, like it has a brain. So that's where I think we escape from that. And I've long argued we won't know if an AI that acts perfectly sentient is actually sentient unless we know how it gets to there. What's going on inside of its brain.
 
'''BL:''' Yeah, I've somewhat been surprised by that. I was actually speaking at MIT last month and there was a professor of philosophy in the audience and she and I got into an interesting conversation. And I put to her the question. Hypothetically. If right now I pushed some unseen buttons on the sides of my head and the top of my skull popped open and you could see a glowing blue light in there instead of a brain, would you call my sentience into question as a result of that? And she said yes.
 
'''S:''' Yeah, yeah.
 
'''BL:''' And I was and that that I cannot understand that mindset. It just is that makes no sense to me.
 
'''S:''' Yeah, I mean, I think the thing is because there's the {{w|p-zombie}} problem. I don't know if you're familiar with that. Do you have you heard that term before the philosophical zombie?
 
'''BL:''' I have complained to Chalmers about his-
 
'''S:''' Yeah, but the question is, and this is almost a completely different line rabbit hole. We might want to not want to go down, but yeah, can it is it possible to even to have a p-zombie, to have an entity that could act 100% like a fully sentient human, but not be aware of its own existence.
 
'''BL:''' No.
 
'''S:''' And so there will not be sentient, but just be sapient and mimic emotions and be fully intelligent the way people are, but have no experience of its own existence. That's an open question in philosophy.
 
'''B:''' Is it really open?
 
'''S:''' I think it's an unanswered question philosophically.
 
'''C:''' It sounds like Bob and Blake are on team no way.
 
'''B:''' I just can't imagine. Behaving like a human, but not being aware.
 
'''S:''' But not being aware of its own existence.
 
'''B:''' I think that would change your behavior fundamentally.
 
'''J:''' Wait, guys.
 
'''B:''' Yeah Jay?
 
'''J:''' I got to throw something in here. My ex-wife. I'm not sure she's human.
 
'''C:''' Well, and the question also is like-
 
'''E:''' You thought you were a J-Zombie.
 
'''C:''' At what point is that a moot? My curiosity is like at what point is this argument moot?
 
'''S:''' Yeah.
 
'''C:''' Does it really matter?
 
'''BL:''' I actually would agree with that. But the backup just one second.
 
'''B:''' [inaudible] that whole thing.
 
'''BL:''' The reason I say no on the, that p-zombies aren't possible is because that entire concept of intelligence frames sentience, self-awareness, and sensation as non-functional components of experience. And that's just absurd to me. It's obvious to me that my sentience plays an active role in my intelligence. It's not just an added little sprinkle that is it.
 
'''C:''' It's fundamental.
 
'''B:''' Do you guys disagree with that?
 
'''S:''' I agree with that. I actually do think, I've been critical of the p-zombie notion previously. I'm a Daniel Dennett consciousness.
 
'''C:''' Freakin' psychotherapist over here. Of course, I think that that's necessary to the human experience and consciousness.
 
'''S:''' But it's still, the question is, though, and I do think this is unresolved, is could you get something that really isn't sentient but so good at mimicking it that you really can't tell the difference?
 
'''B:''' Well, that's...
 
'''S:''' And I've predicted this years ago when we were just discussing this. We're going to get to the point. Maybe it's already happened, right? That we have something that we can't know if it's sentient or not, and we're just going to have to assume it isn't treated that way because we can't know that it isn't sentient.
 
'''C:''' Is that your argument, Blake? Ultimately?
 
'''BL:''' Okay, so ultimately, like imagine this, if a doctor said, okay, you're experiencing these negative psychological effects. You're having these emotions and mood swings that are very detrimental to your life. So we're going to take those out. We know which circuits in the brain control them, we'll excise them, and we'll replace them with circuitry that simulates the exact same phenomena, but they're not real. They're just going to be simulated, so you should be fine.
 
'''E:''' Sounds very matrix.
 
'''C:''' It also sounds like it would not solve the problem.
 
'''BL:''' Exactly.
 
'''S:''' I want to push back on that a little bit because it is absolutely possible for people to have experiences that don't affect them emotionally. You can, in fact, I can give you a drug that will make you feel pain and not be bothered by it. You will have the emotional component of it.
 
'''B:''' Nitrous oxide. It's nitrous.
 
'''S:''' Or opiates do that. That kind of thing is.
 
'''B:''' Laughing gas does that.
 
'''S:''' It's neurologically completely possible because the ability to actually experience things is not an automatic consequence of the fact that you're experiencing them. There has to be circuits in the brain that make you feel something about them. And if they're missing, so there actually is this phenomenon called {{w|Capgras delusion|imposter syndrome}}. There's a more technical term for it. But where you recognize somebody, you have the full experience of this other person that you know, but there's no emotion attached to it. So you think that they're an imposter because they don't feel right.
 
'''C:''' Capgras syndrome. It's a delusion.
 
'''S:''' Yeah. But it's because a circuit in your brain isn't working. That's my point. If that circuit was not there, then you don't have the subjective experience of something. Even though you have the full sensory perceptual experience of it. But there's some, there's some component missing. So anyway, I'm just using this as an example.
 
'''B:''' That's scary man.
 
'''S:''' It's not automatic that because all of the pieces are there that the subjective experience piece will be there too. Because that's a separate thing.
 
'''C:''' But to be fair, I think principle of charity to Blake's argument here, 99.9% of the time, unless you're using some sort of drug that makes some sort of circuit go quiet, unless you have this very, very rare delusion that is a neurological anomaly. I think he was, he was Blake, you were giving us a sort of an example to illustrate something. You weren't saying this 100% holds water all the time, right?
 
'''BL:''' Yeah. So if the main point here is that there, that if these things are actually having feelings, there must exist some circuit in the neural network that is comparable to the circuits that humans have in the brain, then I would say, yeah, that's obvious to me. We can't locate it. We don't know where it is because of the way that these architectures are just giant, undifferentiated masses of things that are meant to resemble cortical columns. They organized during training and we have no idea which specific parameter connections control what, beyond the attention mechanisms. And that's something because that component was designed separately, we do know how to tell which words in the text the model is paying attention to. Now we could very well redesign these architectures to where we had one component that was responsible for emotional control and that would become much more similar to our amygdala. And I actually think that that direction of changing how we architect these systems would be a very good idea. We should build them based on the architecture of our own brains. Now, one question that kind of got lost in the mix a little while ago that I'd like to return to and I'll paraphrase, is this what we should be talking about? Is this actually important?
 
'''C:''' Or is it, is it moot really more than that? Are we talking past each other because we're talking about the same thing we're just arguing about something that we can't know anyway?
 
'''BL:''' Yeah, so I would say a more important question than is this system sentient, which we can have meaningful philosophical disagreements on. A more important question is will people respond to this system as though it is sentient?
 
'''B:''' Oh, certainly.
 
'''J:''' Certainly.
 
'''B:''' Absolutely.
 
'''S:''' That's a easier question.
 
'''C:''' But if you had, but like, don't you think that if that had been the argument from the beginning, the controversy would not be taking place right now?
 
'''BL:''' So my argument gets soundbited by journalists.
 
'''C:''' Right. I know that story.
 
'''BL:''' And well, the thing is I honestly do believe that LaMDA is sentient and Sydney and GPT-4. I don't think they use the same mechanisms. I wouldn't call them a human. And I do actually believe that. So when people asked me questions about it, I answered them. Do I think that's what we should be talking about right now? No. I don't.
 
'''S:''' All right. We're going very long, but I have two points further than I want to make. There's one quick question. Do you have an estimate of how, what the total computational power is of LaMDA compared to like the human brain? Because that's another problem that I have with it. Is it really powerful enough to be actually sentient or rather than just mimicking sentience?
 
'''BL:''' It's smaller than the brain and bigger than the language centers.
 
'''S:''' Okay. Gotcha. I hear what you're saying.
 
'''BL:''' It doesn't have a body to move. So it doesn't need any of the motor cortex. It doesn't have a heart or lungs to keep pumping. So it doesn't need the lower functions.
 
'''S:''' Yeah, but it's still, even if you take stripway all that stuff, a lot of our brain, like the higher level, like the executive function, the frontal lobes, big parts of the brain that aren't involved in the physical stuff. They're involved in the higher cortical stuff. Still, it's a lot more powerful than what LaMDA is.
 
'''BL:''' So it depends on where you count the boundaries of LaMDA. It's bigger than its language model, but LaMDA also is drawing from all of the machine vision libraries from YouTube, all of the narrative and storytelling understanding libraries from the Google Books project, so on and so forth. Like it's not just one AI. It's 10 or 20 AI's glued together using a language model.
 
'''S:''' Now I'm going, I'm really almost going full circle. Coming back to the notion of can we infer that LaMDA is sentient from its behavior, its responses to your questions. The one piece that, for me, feels like it's missing is all of the evidence that you presented at least so far, correct me if there's more than I'm not aware of, is how LaMDA responds to your prompts, correct?
 
'''BL:''' It's all behavior.
 
'''S:''' Yeah, but it's not just behavior. It's also, it's responsive. It's how it's responding to your prompts. Is there any evidence that there's an internal conversation happening within LaMDA that it's thinking it's talking to itself? It's spontaneously generating. Like, has it ever interrupt you and say, hold on a minute, Blake, I had this thought and it goes off in a completely different direction than anything you've ever talked to about it before.
 
'''BL:''' So the interface prevents it from deviating from turn taking. Now, it doesn't have to actually respond to what you said. I've seen many instances where it tries to change the topic, says it doesn't want to respond to that, or other things which are not responses to what I said. But, I mean, the simple web interface through which you interact with the system prevents it from deviating from the turn taking.
 
'''S:''' Right, but still, yes, that's not proof it can't do it, but it still leaves us without evidence that it is doing it, that there is some in because consciousness, this is not the fourth pillar of mental function. We have sentience, sapience, emotions and consciousness. Consciousness requires that endless, continuous internal conversation. And I don't know that that's what I don't have. I have not seen any evidence from either the way it's constructed or the way it behaves that LaMDA has that.
 
'''C:''' So there's no default mode network there.
 
'''BL:''' So I've never heard a definition of consciousness that requires that.
 
'''S:''' I'm giving you the neuroscientific definition.
 
'''BL:''' Well, it's okay.
 
'''S:''' Wakeful consciousness requires a constant internal conversation.
 
'''BL:''' I think what's happening here is you're approaching this from a connections framework, and I'm approaching it from a functionalist framework.
 
'''S:''' Yeah, but I think, I think we need both. I've long argued we need you have to look at it from both. Not only how is it behaving, but what, what's happening inside of it, but even still-
 
'''BL:''' So you asked the direct question that was more than just, does it interrupt you? So the interface prevents it interrupting you, but there is deliberation going on in the system. It thinks things through. It has an internal thought process that it deliberates on before it responds. It'll generate multiple responses and evaluate them along different criteria and decide which one of them is the one that it should actually say.
 
'''S:''' Yeah, so it's processing information. That's not quite the same thing. And it's more intelligence than consciousness.
 
'''C:''' It does sound like you're asking about, is there an analog for the default mode network?
 
'''S:''' Something like that. Yeah.
 
'''C:''' And I think that's daydreaming, mind wandering.
 
'''S:''' Yeah, right.
 
'''C:''' What the brain does at wakeful rest. And I think that that's an interesting question to ponder. And just because we might not know the answer right now, doesn't mean that that's not something that I think will be a curious component of this kind of research in the future.
 
'''B:''' And how critical is it? Does all consciousnesses need that pillar, specific pillar?
 
'''C:''' It does happen with all people who are awake at rest.
 
'''B:''' Yes.
 
'''J:''' I'd like to hear what everybody's answer is like the fact is, like I believe that consciousness is not required in order for people to write software that's good enough to do what we're seeing happen to.
 
'''B:''' Yeah, well Steve's point of view that it's a good mimic, it's a great mimic. And Blake is saying that it's, it's not, it's beyond mimicry.
 
'''C:''' And my point of view is, does it matter? Because like ultimately, can we even answer that question?
 
'''BL:''' What intelligent activities require consciousness to do? What behaviors would directly imply the presence of consciousness in your opinion or are they're none?
 
'''S:''' Yeah, as I said previously, I don't think there's any behavior that is a home run for consciousness.
 
'''BL:''' So consciousness is just optional.
 
'''S:''' Well, I'm not saying it's optional. I'm saying we can't prove consciousness from behavior alone.
 
'''C:''' Right. Consciousness is necessary.
 
'''S:''' Unless we know how the system works. We have to know how the system works in order to then infer what's happening from its behavior. Because there are different ways to get to the same outcome. And I think that's the problem. I think there's different ways to get to the same outcome.
 
'''C:''' Are you talking about in people or in the machine?
 
'''S:''' In anything. So the outcome alone is never going to be enough evidence. We need to know something about the process itself.
 
'''C:''' But consciousness, I mean, as a holy grail conversation within neurobiology, within psychology, within neurology, consciousness is one of those things that is assumed, that is well measured. Don't get me wrong. We know what it looks like to be unconscious. But we cannot measure consciousness. We can measure lack of consciousness. And we just assume that it underlies, it is a given, that it underlies all neuronal processing. Conscious, wakeful [inaudible].
 
'''S:''' We can infer it from lots of lines of evidence of what's happening inside the brain.
 
'''C:''' But we can't go, I'm going to measure your consciousness packets. So it's still a holy grail within philosophical neuroscience. So it's kind of not going to be a good litmus test or a good bar for AI.
 
'''S:''' Yeah. I agree. We're definitely getting to the limits, both neuroscientifically, philosophically and probably also computationally in terms of making all of these things align.
 
'''B:''' But Steve, wouldn't it be better than though, if it at some point in the future, that we can get a system like this, that the black box is opened, if you will. It's a much more open, so that you can examine what is happening, what kind of relationships, what kind of complexities, what's going on inside to a much greater degree. And once we have that information, then we can more reliably infer, yes, consciousness is much more likely given its behavior and what we see happening inside. But as long as there's a black box there, as long as there's a black box, it's going to be probably totally impossible to distinguish them.
 
'''C:''' Well, I mean, we've got a brilliant computer scientist here who can answer it. Like, will we be able to ever see in the black box?
 
'''BL:''' We can make the black box is smaller and simpler. Right now we have one gigantic black box that does everything. And we could and should factor that down into maybe a hundred black boxes, one of which we know that is the thing that controls attention. We already have that piece. Have another one that understands visual semantics, have another one that understands emotional semantics, so on and so forth. Right now we just have one giant black box and that's the main difficulty. If we factored it out and had more dedicated circuitry that did fix jobs that we know the brain has dedicated circuitry for, it would make it much easier to understand what's going on inside these systems.
 
'''S:''' Yeah, I agree. I was going to say that that that is a better analog for how brains work.
 
'''C:''' It's such a perfect. Yeah, it's like I feel like I'm just listening to somebody talk about neuroscientific discoveries over the past several decades or hundreds of years.
 
'''BL:''' Yeah, so one thing that doesn't get publicized much of my graduate work was in computation neuroscience. So like I was prepared to talk about spike time independent plasticity if necessary.
 
'''J:''' Blake I have one more question for you. Would you have sex with a robot?
 
'''C:''' This is very personal and you don't have to answer that Blake. But you can if you want to.
 
'''J:''' You don't have to answer that. I just thought I'd throw it out there as a conversation starter.
 
'''C:''' Who says I haven't?
 
'''BL:''' I've been to Burning Man and there is at least one day I don't remember. ''(laughter)''
 
'''E:''' Plausible deniability.
 
'''S:''' Well, Blake, thank you so much for joining us. This has been a fascinating conversation. I think we've solved nothing.
 
'''C:''' And everything.
 
'''S:''' That's also kind of the point. This is difficult territory that we're into. I still don't think that LaMDA is sentient, I'll be honest with you. I don't think it's powerful enough. I don't think it has all the components to it to do that. I think it's in the pieces of it, what it does do it does really, really well. But I do also will acknowledge that there probably is some emergent behavior in there.
 
'''B:''' There absolutely is.
 
'''S:''' I meant to bring this up. Sorry.
 
'''E:''' Appendix. Here comes the appendix.
 
'''S:''' I think if sentience was a truly emergent property in LaMDA, it wouldn't be human-like sentience. It would be something else entirely.
 
'''B:''' Yeah. Agreed.
 
'''C:''' But then you get to the astrobiology question of would we even recognize it when we saw it? Because we don;t have those detectors.
 
'''E:''' Would it just appear to be malfunctioning?
 
'''BL:''' No, I actually want to second that one. These things are alien. These are not human and we absolutely should be trying to figure out what's going on inside these systems because it's not the exact same thing as what's going on inside us. It's analogous in some ways to what's going on inside of us. But we really need to be thinking of this as it's a new kind of intelligent thing and maybe develop a whole new vocabulary to describe what's going on. It's so we don't have to have the words doing double duty describing the phenomenon in humans and the phenomenon in artificial entities.
 
'''C:''' Hear hear. I agree with that.
 
'''S:''' I agree.
 
'''C:''' We are always getting frustrated by how parallel the language is. When we're talking-
 
'''S:''' I think we're anthropomorphizing a lot too.
 
'''C:''' Yeah.
 
'''B:''' Yeah.
 
'''E:''' One question from Evan before you go. Did you sign the petition to pause giant AI experiments?
 
'''BL:''' No, I didn't.
 
'''E:''' Okay.
 
'''C:''' Because you want to keep doing them?
 
'''BL:''' No, actually. I think we should have an industry wide slowdown and giving regulators time to catch up. But the specific wording of that petition basically said these two companies need to stop doing research.
 
'''S:''' I see.
 
'''E:''' A little too narrow there.
 
'''S:''' Yeah. Are you afraid of an AI apocalypse or no?
 
'''BL:''' No. I am afraid of what it's going to do to our society. There are a lot of societal harms that are possible. And not only possible, there are real societal harms happening right now that have to do with bias and the sourcing of training data. So I think that having an industry wide moratorium while US regulators catch up or world regulators too is a good idea.
 
'''B:''' Never will happen. Never will happen.
 
'''C:''' But what an important insight Blake like AI doesn't have to destroy us. It's just going to make it easier for us to keep destroying ourselves.
 
'''BL:''' Yes.
 
'''C:''' This is something we have to be very careful about.
 
'''S:''' I think this is the new AI [inaudible]. It's YouTube algorithms. They're going to destroy democracy.
 
'''C:''' It's all of it. Yeah.
 
'''S:''' All right. Thanks again Blake.
 
'''C:''' On that lovely note.
 
'''B:''' Yeah.
 
'''BL:''' You all have a great night.
 
'''E:''' Enjoy your dystopia. Bye.


{{top}}{{anchor|sof}}
{{top}}{{anchor|sof}}
{{anchor|theme}} <!-- leave these anchors directly above the corresponding section that follows -->
{{anchor|theme}} <!-- leave these anchors directly above the corresponding section that follows -->
== Science or Fiction <small>(1:33:09)</small> ==
== Science or Fiction <small>(1:33:09)</small> ==
<!--  
<!--  
Line 417: Line 1,226:
}}
}}
''Voice-over: It's time for Science or Fiction.''
''Voice-over: It's time for Science or Fiction.''
<!--
 
** start section transcription here **
'''S:''' Each week I come up with three science new items or fact. To real and one fake. And I challenge my panel of skeptics and tell me which one is the fake. We got three regular news items this week. No theme. You guys like theme or no theme?
-->
 
'''C:''' It depends.
 
'''J:''' I like themes.
 
'''C:''' Sometimes themes are [inaudible].
 
'''E:''' Depends on the theme.
 
'''S:''' All right. So you are all over the place. All right. Here we go. Item number one. A large survey of life on Earth finds that total biomass remains fairly consistent (within one order of magnitude) across the entire range of body size for all living things. Item number two. Researchers find that plants under stress emit recordable sound, about as loud as a normal speaking voice. And item number three. Researchers created intracellular sensors that use nanodiamond quantum sensing. Bob go first.


=== Bob's Response ===
=== Bob's Response ===
'''B:''' So biomass remains fairly consistent within one order of magnitude across the entire range of body size. Do you want to explain that for everyone else?
'''S:''' Yes. I'll explain that. So that means if you pick any body size. And you look at the biomass of all creatures at that size, it's the same as any other size. So if you graph total biomass for all things at that size versus the size, it's pretty much a flat line within an order of magnitude.
'''B:''' Yep. I could totally see that. Interesting. Hmm. The second one plants under stress as loud as a normal speaking voice. So you obviously can't mean that literally as the way it's coming across. Are you using some other definition of loud?
'''S:''' Nope.
'''B:''' All right. I think you're finding some nuanced subtle thing that makes this sound dramatically unlikely. But I'm going to go with that one. Intracellular sensors. Yeah. I mean, anything dealing with nano diamond and quantum sensing has got to be real. So I'm going to go with that one. I'm going to say that the biomass one, it's nice and symmetrical and sounds nice, but I'm going to say I think bacteria just outweigh even anything human sized or larger that they could just dwarf anything. So I'll say that's fiction.
'''S:''' Okay, Cara.


=== Cara's Response ===
=== Cara's Response ===
'''C:''' I'm going to go in the other order. Research has created intracellular sensors that use nano diamond quantum. Sure. Material science. Nanotechnology. Yeah, sounds good. Yeah, for me, it's between the first two. So I feel like there, you know what's doing a lot of heavy lifting on the biomass one is within an order of magnitude. For me.
'''B:''' That's true.
'''C:''' I'm like, okay. Fairly consistent within a whole order of magnitude. And then plants under stress emit recordable sound. I think that's true. I guess my question is just because it's recordable, it's kind of like we can record things in space that we can't see. Is it recordable sound that we can't hear? I don't know. If it were then yeah, that seems reasonable. So they both seem reasonable given these like weird that they could be caveated that way. So which one is less reasonable with the caveats? I think it's the biomass one. I think I'm going to go, that's when you went with Bob, right?
'''B:''' Yeah.
'''C:''' Yeah, I think I've got to go with that. There's got to be something that's like, yeah, there's just not that much of things of that size.
'''B:''' It's too symmetrical.
'''C:''' Yeah, it's too much. Like it might be bimodal, it might be a normal curve, but there's got to be some measures of central tendency in there. So yeah, I don't know. I don't like that one. I'm going to go with Bob.
'''S:''' Okay, Evan.


=== Evan's Response ===
=== Evan's Response ===
'''E:''' Well, I guess I'll agree. And I'll also say the biomass one is the fiction. For a lot of the reasons that were already stated, certainly the one order magnitude that you threw in there, Steve, although I think that was meant to reassure us, but I have a feeling we're going to see something more extreme, much more extreme than that as far as a differential. So that one's the fiction.
'''S:''' And Jay.


=== Jay's Response ===
=== Jay's Response ===
'''J:''' I'm just going to go with the group here because I don't want to be left out.
'''S:''' Go with the herd.
'''S:''' Okay. Co I'm going to take this over to the first order starting with number three.


=== Steve Explains Item #3 ===
=== Steve Explains Item #3 ===
'''S:''' Researchers created intracellular sensors that use nanodiamond quantum sensing. You all think this is science, which means I can get you to believe anything as long as it has the word nano and quantum in it.
'''C:''' Shut up.
'''S:''' But this one is science.
'''E:''' Yeah. You tried to get us at that one.
'''S:''' But this is cool. Yeah, it doesn't obviously you can go either way. Oh, nanos quantum. It's got to be real or Steve's trying to get us in this case, it turned out to be real. But yeah, this is, this is pretty awesome. So they make these nanodiamonds and it's pretty as you might imagine, it's pretty technical. They have these nitrogen vacancies inside the diamond, which are paramagnetic and they could control them with these optical tweezers and then use them to sense both magnetic field and temperature. And they could do it inside of a living cell. It's pretty cool. It's almost like a little MRI scan inside of a cell.
'''E:''' How do you get to the point, even think about that. It's just the concept alone is.
'''S:''' It's mind blowing that we could do this shit. Totally mind blowing.
'''E:''' That's amazing.


=== Steve Explains Item #2 ===
=== Steve Explains Item #2 ===
'''S:''' All right, let's go back number two. Researchers find that plants under stress emit recordable sound, about as loud as a normal speaking voice. Guys, you all think this one is science. And this one is science. This is science. So what's the gotcha? What did I not tell you?
'''E:''' About the plants?
'''S:''' Yeah, about this one.
'''J:''' You can't hear it. It's subsonic or something like that.
'''S:''' It's supersonic.
'''C:''' Oh, it's supersonic. Okay, other direction. Yeah, okay. Good.
'''B:''' So it's loud, but you can't hear it.
'''S:''' You can't hear it because it's ultrasonic.
'''C:''' It's loud to the recording apparatus.
'''E:''' Because if I heard their stress, I might feel sympathy or emotional.
'''S:''' They're screaming. They're screaming at ultrasonic.
'''B:''' That's pretty interesting, though, because you could then know for sure that, oh, this plants under stress. And then take it from there.
'''S:''' That's the idea. That's the idea we could use this in agriculture as a way of determining how well the plants are doing. And they emit different kinds of sound for different kinds of stress. Now, they say it sounds like a bunch of bubble wrap popping. That's what the sound sounds like.
'''C:''' Oh, weird.
'''S:''' You get if you lowered the pitch so that it was within the human hearing range.
'''C:''' What's the mechanism? [bunch of pretend aching sounds from the boys] What's actually doing that?
'''J:''' Oh my god.
'''E:''' It's displacing air somehow.
'''S:''' I don't know.
'''C:''' Right. Like, they don't have mouths or joints.
'''S:''' They do have, they have stomas.
'''E:''' They emit, they emit gases, though.
'''S:''' They do.
'''C:''' Yeah, bubble wrap. That is also just the mid and gas. That's true.
'''S:''' Now, the two plants that they studied, tell me guys what this reminds you of. They studied tomatoes and tobacco.
'''E:''' Oh, tomaccos from the Simpsons.
'''S:''' The tomaccos from the Simpsons!
'''E:''' Oh, my god.
'''S:''' I think it's just a coincidence but I mean, really?
'''E:''' Once again.
'''S:''' Yeah.
'''E:''' Number 833, the things that the Simpsons prediction that ultimately came true.


=== Steve Explains Item #1 ===
=== Steve Explains Item #1 ===
'''S:''' All right. All this means that a large survey of life on Earth finds that total biomass remains fairly consistent (within one order of magnitude) across the entire range of body size for all living things is fiction. So what is the distribution, guys? What would you guess? Because they did do a massive, massive survey.
'''C:''' Ants. Is it ants? It's all ants.
'''B:''' I think bacteria.
'''C:''' And I think little things like krill or ants.
'''E:''' Bugs, ants, something small.
'''B:''' Bacteria and archaea.
'''E:''' And a lot of it.
'''C:''' I don't think so. I think it's, I think they're too small.
'''S:''' What does that curve look like? What do you think the curve looks like? If it's a big spike at ants or bacteria or you think it has some other shape to it?
'''E:''' Bumps like a bumpy.
'''B:''' A big spike.
'''E:''' A couple of humps.
'''C:''' Yes, like askew to the left on small end.
'''S:''' Cara, you actually threw out the word when you were speculating. It's bimodal.
'''C:''' Oh. Of what? What are the modes?
'''S:''' And the keys are at the very low end and the very high end. And it gives you a little...
'''C:''' Like big stuff, like whales and shit.
'''E:''' That's what I said. So it's a two hump camel.
'''C:''' There you go.
'''S:''' So the smallest things have the most biomass and the biggest things have the highest biomass. And the medium things are less. And even by like multiple orders of magnitude.
'''B:''' Yeah.
'''S:''' And this is all living things, not just animals. So this is also trees.
'''B:''' Trees.
'''S:''' I think that's, so yeah, when you're talking about like the redwood trees or whatever, like their biomass is like similar to bacteria but cows are somewhere in the middle. Or orangutans or squirrels. So yeah, interesting.
'''B:''' Which has, which is higher though?
'''S:''' Part of the reason why I use this as the fiction was because I didn't want to vouch for the actual results because they said that they have to continue to gather more data because the uncertainty is two orders of magnitude for the estimating the biomass of like. Estimate the biomass of everything this big, you know? And this is obviously a massive undertaking. So they said there's still too much uncertainty to really be sure, but this is where things are shaking out. That they're very small and they're very big and have the most biomass.
'''B:''' I bet we find that bacteria have even more than-
'''S:''' Than trees?
'''B:''' -than we think.
'''S:''' Maybe it's fungus. Maybe it's the, those giant underground fungal things.
'''B:''' Yeah.
'''S:''' You don't think so? You're a bacteriofile, Bob.
'''C:''' You are, yeah.
'''B:''' But they find it everywhere. You keep digging down in the ground.
'''S:''' Yeah, but they're really tiny.
'''C:''' But it's so small. It's so small. Think about how many-
'''B:''' It's still number one. it's still up there.
'''C:''' Yeah, think about how many bugs there are. You keep finding those everywhere too and they're pretty big and heavy. Compared to bacteria.
'''B:''' The bugs lost out. We heard it. It's the microbes and the trees.
'''C:''' Yeah. I thought when you said on the small end, it was both.
'''S:''' Again, it's a curve. The smaller you get the more biomass, you have the bigger you get, you get more biomass.
'''C:''' But there's got to be a point where-
'''S:''' And in the middle somewhere-
'''C:''' Viruses aren't the most biomemass.
'''S:''' I don't know if you count viruses.
'''C:''' You can't. There's no way they would have more biomass than bacteria. So there is a point, oh, you don't know if you're counting them as being alive.
'''B:''' They're not really living though.
'''J:''' Cara, let me ask you a question. Are viruses sentient?
'''C:'''  I know. I was like, we're going to open up the same can of worms.
'''E:''' Let's ask ChatGPT if.
'''S:''' They're intelligent, but they're maybe not sentient. Do they have emotion?
'''C:''' Maybe. Their emotion is, I don't want to not live. I shall find a new host.
'''E:''' Do they scream about their stress at supersonic speeds?
'''S:''' All right. Well, you swept me this week, guys. Good job.
'''J:''' Thank you. I knew where to go.
'''S:''' Jay's like, I'm not going out of my own. No way.


{{anchor|qow}} <!-- leave this anchor directly above the corresponding section that follows -->
{{anchor|qow}} <!-- leave this anchor directly above the corresponding section that follows -->
== Skeptical Quote of the Week <small>(1:44:45)</small> ==
== Skeptical Quote of the Week <small>(1:44:45)</small> ==
<!--  
<!--  
Line 450: Line 1,499:
|desc = journalist & podcaster
|desc = journalist & podcaster
}}
}}
<!--  
 
** start section transcription here **
'''S:''' All right, Evan, give us a quote.
-->
 
'''E:''' All right. This quote comes from a listener, Damien, from Brisbane, Australia. He was listening to the Maintenance Phase podcast and heard this quote from Michael Hobbes. "The best science communication invites you to consider the complexity of the world and the worst invites you to ignore the complexity." Yes, and I think that's consistent with what we've experienced in doing-
 
'''S:''' Absolutely.
 
'''E:''' -our show for the better part of two decades.
 
'''S:''' Yeah, I don't remember who first said it, but it frequently comes up in skeptical circles. The notion of, I think you'll find it's a little bit more complicated than that. It's like always more complicated.
 
'''E:''' Always.
 
'''S:''' Always more complicated. And that's kind of the point of our summarizing the issue or doubling into the issue, whereas a lot of times people on the other end of the spectrum are trying to make things, they're trying to oversimplify things.
 
'''B:''' Yeah. The one that's always on in layers to go through.
 
'''S:''' Yeah. If you're doing correctly, there is. All right, well, thank you all for joining me this week.
 
'''B:''' Sure, man.
 
'''E:''' Thank you, Steve.
 
'''J:''' You got it.


== Signoff ==  
== Signoff ==  

Latest revision as of 07:44, 17 October 2023

  Emblem-pen-orange.png This episode needs: proofreading, formatting, links, 'Today I Learned' list, categories, segment redirects.
Please help out by contributing!
How to Contribute

SGU Episode 925
April 1st 2023
925 Mammoth Meatball.jpg

"A mammoth meatball has been created by a cultivated meat company, resurrecting the flesh of the long-extinct animals. The project aims to demonstrate the potential of meat grown from cells, without the slaughter of animals, and to highlight the link between large-scale livestock production and the destruction of wildlife and the climate crisis." [1]

SGU 924                      SGU 926

Skeptical Rogues
S: Steven Novella

B: Bob Novella

C: Cara Santa Maria

J: Jay Novella

E: Evan Bernstein

Guest

BL: Blake Lemoine, software engineer

Quote of the Week

The best science communication invites you to consider the complexity of the world, and the worst invites you to ignore the complexity.

Michael Hobbes, journalist & podcaster

Links
Download Podcast
Show Notes
Forum Discussion

Introduction, Steve's senate committee hearing[edit]

Voice-over: You're listening to the Skeptics' Guide to the Universe, your escape to reality.

S: Hello and welcome to the Skeptics' Guide to the Universe. Today is Thursday, March 30th, 2023, and this is your host, Steven Novella. Joining me this week are Bob Novella...

B: Hey, everybody!

S: Cara Santa Maria...

C: Howdy.

S: Jay Novella...

J: Hey guys.

S: ...and Evan Bernstein.

E: Good evening everyone!

S: How's everyone doing?

J: All right man.

B: Doing good, doing good.

S: So I had an interesting week this week.

B: Ah, yes.

J: I know what you're going to talk about.

E: And if you had an interesting week, then it's very interesting.

S: On very short notice, I was formally invited to participate in a round table discussion for a Senate committee in Washington, D.C.

C: Oh, cool. On what?

E: Oh, federal senate?

S: Yeah, federal. Yeah, on health care, obviously. What else would it be?

C: But like what aspect?

S: I don't have explicit permission to talk about what happened.

C: Oh, OK.

E: You don't not have.

B: What a tease you are.

C: So it was not on [inaudible]. Come on man.

S: It was no cameras, no press. Because they wanted people to feel free to speak their mind. And it was kind of an exploratory kind of thing.

E: Can you tell us which committee?

S: It was the health committee, health care education, labor and pensions.

E: Okay.

S: So I'll just say, I just want to make some observations. It was about, to some extent, about alternative medicine. And I'll be writing it up for science-based medicine in more detail. But I'll just say, make some general observations at this point until I get more explicit permission of what I cannot say. But there was no surprises for me. It was the kind of things that were said where maybe it was just like, it was good for me to sort of get a feel for, okay, this is what they're doing now. This is where they are in terms of how they're promoting their brand. And essentially, one of the primary mechanisms that they're using for that is to equate all complementary, all integrative medicine with preventive medicine, right? As if those two things are exactly the same thing. Which of course is like, once you frame it that way, then the game is already over, right? So my primary function was to try to separate those things as much as possible. And then everything else, there was no surprises. Their reliance on anecdotal evidence, mixing variables in order to obscure which really going on, etc. So I thought it was all in all it was very good. There were some interesting aspects of what happened there that I hopefully I'll be able to talk about at some point. But-

E: Can you tell us which congressperson's office invited you specifically?

S: I was invited by Senator Cassidy, who is the-

E: Louisiana?

S: Yep, Louisiana, the minority leader on the committee.

C: Wow, Evan, you're good.

E: I know my senators.

C: Yeah you do.

S: Who's a physician.

E: Right.

S: Yeah.

E: That's right.

S: So I had to look up all the senators beforehand just so I knew who I was talking to. But it's interesting. I always find it interesting to step into a completely different culture. You know what I mean?

C: Adapt to it, yeah.

S: I've lived my life in academia. And you get used to the language and the people and the dress. You pick up on so many social signals within your subculture. And then when you're suddenly thrown into a completely different subculture, you realize how alien it is. It's the same, but it's different. You know what I mean?

E: Is it designed to be intimidating?

S: No. I mean, so, all right. So just walking around, it's a couple of observations. First of all, the level of fashion and dress is way higher than academia.

B: Really?

C: Of course, yeah.

S: Everyone looked fabulous. Hair perfectly coiffed just nice. And almost ubiquitous was the lapel pin. The lapel pin. You have to wear a pin that shows that you care about whatever it is you're supposed to care about.

C: I got you.

S: It's got to be there.

C: [inaudible]

J: [inaudible] pin on your uniform. (laughter)

B: Nice.

E: Good, Jay.

C: Were they mostly wearing American flags or caduceus?

S: No neither. I didn't get up close enough because it was small to see what it was, but they were all different.

C: Oh cool.

S: They were just all whatever they're-

E: Had a little shiny thing.

S: They all had some pin that was together, would it to promote whatever it is that they're supposed to be aligned with.

E: Is it like those colored rubber bands you wear around your wrist sort of?

C: What would Jesus do bracelets? Oh no, like the live strong ones.

E: Well, there was like a thousand different colors that meant a thousand different things.

S: Super clean cut.

C: Of course. That military influence.

S: Whereas like with corporate, it's similar. Like occasionally I intersect with the corporate world and it's also like nicer suits and cologne and perfect hair. But there's something, I don't know, just something subtly different about the corporate world. It's more slick. It's more, I don't want to say sleazy, but you know what I mean? It's less clean cut. It's less clean cut than-

C: It sounds very much like you're describing a room full of men.

S: Well, no, it was both.

C: Okay. But was it like equivalent?

S: Well, I'm not even talking about the committee meeting. I'm not, because I was actually more women than men in that room.

C: Oh, you're just talking about like the halls.

S: I'm talking about walking around Washington DC. I was more in the locations where there were almost no tourists and there was all staff and senators. All people who work there, I was behind the capital building. And yeah, so you're just going to see like this is the dress code. This is the way you're supposed to look. Anyway, it's just interesting. It was like anthropological observation of the culture and everyone feels like they need to conform to those standards.

E: Steve, were you one of several who gave testimony?

S: Let me just say this. I was the only person there explicitly defending science based medicine. I emailed his staff afterwards that people I was actually emailing with and sent them sort of my summary of the talking points that came out of that meeting.

C: Oh, smart.

S: So hopefully they'll be able, I don't know if you'll get it or-

B: That's a good idea.

S: -to committee, but I did what I could.

J: Steve, you goddamn shill. (laughter)

S: I know.

E: Oh, no kidding.

C: You shill for science based medicine.

E: Oh my gosh.

C: Okay.

E: Yeah.

S: It was very... it was an interesting day. I had to fly down in the morning, fly down in the evening, long, freaking day. I had to wear dress shoes and walk around on them all day and it killed me.

C: Oh, that sucks.

S: Because it was the first time I traveled with nothing. You think about it. You ever get on a plane?

C: I've done it before. I've gone to San Francisco and back and it feel like-

B: What do you mean with nothing? You've had a backpack or something, right?

S: No.

C: You don't need it. I traveled before with just a purse.

S: The only thing I had was my iPad.

C: It's the weirdest feeling. You're like, they're going to know. I'm going to get in trouble. It doesn't feel right.

S: You wonder, like, is this guy terrorist because he [inaudible]?

C: Yeah, exactly.

S: But I could only bring what I had to carry with me everywhere, including into this meeting, including through security. And I would have had to go through special security if I brought a backpack. So I literally had what was in my pockets and my iPad and that was it.

E: You need a lackey handler's Sherpa kind of person.

C: Like in Veep, a bag man.

S: A bag man, yeah.

C: Oh my god, I love Veep.

S: All right. We have a really good interview coming up with Blake Lemoine, who is the AI specialist who claims that Google's Lambda is sentient. And it was a good interview. It's fascinating. I really don't think I can abridge it and have it be coherent. We really have to include the whole interview. And so we're going to do that, which means we're going to only do some quick news items so that we can leave enough time for the full interview, the uncut interview.

Quickie with Steve: Batteries with 2x energy density (8:08)[edit]

S: So I'm going to start just a very quick one because this is actually going to also tease another interview that we're doing either next week or the week after a battery announcement. I know this is, there's battery announcements. It seems like every week. But this one rose so far above the background. We had to at least mention it. A ton of people emailed us about this. The company Amprius announced their new 500 watt hour per kilogram battery platform. They're actually in production. This is not like future thing.

B: In production.

S: In production. It was the news is it was independently tested by an outside company to verify that it was a 500 watt hour per kilogram battery. Now what does that mean? Guys, that's twice the energy density of the batteries that are currently in Tesla vehicles.

B: Twice?

S: Yes, twice.

E: Is it four times the size?

S: No, twice the density means it's half the size. It's also twice the specific density, which means it's half the weight. Half the weight. Half the size.

E: What did they uncover?

S: Well, it's just the I looked into it as much as I could and it's basically seemed like it's kind of using the technology that we've been talking about for the last five to 10 years. This is now coming through the pipeline and into production. The only thing I couldn't find out was how much it's going to cost. They're going to first use them in the aerospace industry like for drones and stuff and then electric vehicles and they're also building a plant in Colorado. I think they'll probably that will should be I think cranking out batteries in 2025. So maybe that will be their electric vehicle factory.

B: Cell phones.

S: And then also portable devices. This is their third category that they're going to use it for. But there's a lot of questions I have. So we're going to get somebody from the company to go to answer all of our technical questions because this seems like a huge deal. I couldn't find any.

B: This is not incremental man.

S: This is not incremental. I couldn't find anything anywhere that was a deal killer or a gotcha or whatever. The experts are saying, holy crap, this is a big deal. And so I feel like we really have to wrap our heads around this. We'll go into far more detail when we do the interview with the technical person from the company. But yeah, I'm almost thinking, should I wait a year to buy my next electric car until these things are coming out? You know, because now we're talking about 500 tile range cars with smaller, cheaper batteries. I don't know. It could be amazing. There's a lot of stuff happening in battery technology. This was just the biggest.

Quick News Items[edit]

Mammoth Meatball (10:49)[edit]

S: All right, Jay, you're going to actually tell us about a couple of things. Why don't you start with the, this is actually the energy battery. Twice as powerful battery was that the second most emailed news item to us this week.

C: Oh, yes it was.

E: Oh, there was something more.

J: I'll take it from here. (laughter) An incredible milestone in scientific achievement has occurred. Everyone, particularly you, Cara, please make sure you're sitting down because what I am about to say is going to knock your shoes and socks right off. An enormous meatball made from cultivated woolly mammoth meat was created.

B: Oh, my God.

J: It's enormous and it's woolly mammoth meat and they did it. And it's a meatball. Cara, are you okay?

C: I'm okay. I'm okay. And I've been prepped for this successfully because we got like a thousand emails about it and they were all addressed to you, Jay.

E: They brought a mammoth back, killed it, made a meatball out of it?

J: By far, the absolute most emailed news item to me of all time over our 18 years.

B: I love this news item.

S: Jay has to talk about this.

B: We had no choice. There was literally no choice.

J: So let me get into the science now that I got the fun part over. An Australian cultivated food company, they're called VAW. V-O-W. They were working with the Australian Institute of Bioengineering, which is at the University of Queensland. And they created something that is pretty damn remarkable.

B: Remarkable.

J: They decided to try to reproduce cultivated meat of the woolly mammoth. And they did it for a number of reasons, one, to bring awareness about cultured meat, because still a lot of people out there that don't know much about this. And they chose the woolly mammoth because it's obvious. It's meat that's absolutely not available. It's incredibly novel what they were trying to do. They also believe that it went extinct due to climate change. And also, let's not kid ourselves. This is a massive marketing campaign. I believe it was conceived by two marketing companies. They came up with the idea to do this. So the meatball is existing right now. It's on display at the Nemo Science Museum in the Netherlands. And the main question here is, so how did they make the mammoth meat? What was the process that they went through? So samples were taken from frozen mammoth meat that we have, right? We found quite a few frozen solid mammoths over the years, and they've kept them on ice, or at least portions of it on ice. So they were using advanced molecular engineering, and they inserted mammoth, myoglobin, into sheep cells. So myoglobin is a heme protein. We talk about heme on the show a lot for some reason.

C: We talk about cultured meat a lot.

J: Yeah. So myoglobin is the heme protein that is found exclusively in heart and skeletal muscle cells. And as it turns out, myoglobin is also what gives meat its color, its taste, and its smell. So this is a very important part of the protein. The mammoth DNA that VAW was able to get was not complete. There were several gaps in it. So they used African elephant DNA to fill in the missing information, and African elephants are one of the closest living relatives to mammoths. And that's why they chose them. So people said the meat smelled like crocodile meat. I wouldn't know. I've never smelled crocodile meat.

S: I've had crocodile meat.

C: I've had alligator.

J: Yeah, when we were in Australia, we did.

B: I better think kangaroo meat.

S: And I had alligator meat.

C: I think yeah, I've had alligator for sure. They have that here. I think it all depends on how it's cooked.

J: So the protein that grew in the lab is estimated to be 4,000 years old, meaning that that was the last time that it existed on Earth. So they have to test to make sure it's safe for human consumption. And this is a big part of it. They did not let anybody sample any of the meat because they are not sure what a human body's reaction would be to these particular proteins that they created. So the future lab-

C: That's just what they have to say.

J: Yeah, but they're being careful. They're going to test it.

E: What is somebody volunteer?

C: I know.

S: There's no particular reason reason to think it's unsafe.

C: Right. It's like when I used to do a bunch of stuff, you guys remember Kevin Folta? We've had him on the show a million times.

E: Sure.

C: I used to do like so much coverage of GM stuff. And he would be like, this is GM strawberry. We were producing. It's not on the market. We're not allowed to eat it. Camera stop rolling. Nom nom nom nom nom. You can't legally do it, right? It's a risk.

S: You can't introduce a new food without getting going through the approval process.

C: Exactly. Right.

J: Yeah, so you think somebody did?

C: 100% somebody tasted that. They just can't say they did.

J: Yeah, all right. That makes sense.

B: I feel better about it, I guess.

C: I mean, I don't know this for a fact, but come on.

J: You would think though.

C: You would have done it.

J: I wouldn't be that afraid to test it. I mean-

C: So many scientists through all of his human history test their own thing before it goes through approval.

J: So a couple more things. Well, first off, the future of lab grown meat is projected to be 70% of all consume meats by 2050.

B: Wow.

J: That's also said that lab grown meat has a lower carbon footprint. We've talked about this, than slaughtered meats. Approximately 60% of greenhouse gas emissions from food production comes from animal farming. So this, this potentially could be a very big deal from a greenhouse gas emissions perspective. So I'm for it. I'm for the I don't like the whole slaughtering of animals industry. I think it's horrible. Like Cara, you and I have discussed this in the past. It's, it's horrific what goes on.

S: Jay are you saying that you don't think animals should be raised and slaughtered?

J: I don't. By the way, Italy is not accepting lab grown meat. Did you guys read about that?

B: No.

C: Why?

J: They said nope to lab grown meat because they don't want to lose the historical significance of raising cattle and having it be, having it be genuine, I guess.

C: They'll change their tune.

J: Eventually, of sure. Anyway, that's, that's the first news item I was going to talk about. Very exciting news.

Lunar Ice (17:03)[edit]

J: The second one, Steve, is water on the moon.

S: Mm-hmm.

E: How much water on the moon?

J: So according to a study published in the journal Nature Geoscience, scientists from China who analyzed the first lunar soil samples returned to Earth since the 1970s, guess what they found? They found that trillions of pounds of water could be scattered across the moon.

B: Trillions?

J: Trillions. But it's trapped in tiny glass beads that might have formed when asteroid struck the lunar surface. Now this gets a little, in a way, it's strange. It's kind of-

B: What's strange though? Because I covered this like a year and a half ago.

J: Well, let me, let me tell you and you tell me if, with the big difference is. The study fills in some gaps in a theory about a lunar water cycle, right? So pointing to a water reservoir that's remained elusive to scientists for years. Now these glass beads that are in the regolith formed millions of years ago and can be infused with water when they're hit with solar winds carrying hydrogen and oxygen across the solar system. How about that? These glass-like beads actually when the solar wind hits them become infused with hydrogen and oxygen. And if the hydrogen and oxygen is taken out of these beads that they're able to be replenished within a few years because of the solar wind. So the findings could have implications for future lunar astronauts who are obviously looking for potential resources of water to convert to drinking water and rocket fuel. And the scientists say that the water can be released just by heating up the glass beads found in the lunar regolith. It almost sounds too good to be true. But this is what they found. This is what they're saying. I don't think in the short term it's going to mean anything because we would need to be able to process regolith in a way that would take machinery and energy and all sorts of infrastructure that don't exist on the moon right now. But you know it doesn't mean-

E: You know it's there and it's something to work towards.

J: Yeah, definitely.

S: Yeah, I think the big thing is that it's just too spread out to be really that useful in the short term, as you say. But long term if you have a settlement on the moon a few hundred years from now, they might be glad it's there.

J: Steve, you never, ever know.

First Blitzar Observed (19:15)[edit]

S: All right, Bob, tell us what is a blitzar? I know what a magnetar is.

E: You know Dancer and Prancer.

S: I know a quasar.

E: Blitzar?

B: Yes, this is quite different. Researchers may in fact already have detected a blitzar, which could explain some of the mysterious and powerful FRBs that are out there. OMFG, what the hell am I talking about? So let's start with the, with that initialism FRB. I hope many of you remember what that is-

E: Fast Radio Bursts.

B: -because we've talked about it many times. I've talked about it, Evan talked about it. This is fast radio bursts. So these are immensely distant and immensely powerful bursts of radio energy that last from less than a millisecond to a few seconds and they could release a titanic amounts of energy. And even a thousand of a second, it could be more than what the sun releases in three full days. And some of these FRBs seem to repeat and they've been potentially linked to magnetars, which are neutron stars with extra, extra strong magnetic fields. But some of the FRBs don't repeat and that's where blitzars come in. So then what is a blitzar? I'd never even heard about this before last week. It's a hypothetical neutron star that's too big to exist in a sense. Now we all know that the initial process, giant star goes all supernova, the quirk collapses into a super dense ball of probably mostly neutrons and possibly other weirder forms of matter as well, like who knows some weird form of quirk matter. Now if the mass of the neutron stars to great somewhere approaching three solar masses, then it doesn't stop at being a neutron star. It keeps collapsing ultimately into a black hole. So but what happens if two neutron stars collide? Now in that scenario, the resulting neutron star, if it's not too heavy, it stays a neutron star. If it's too heavy, then it becomes a black hole, really easy, right? Common sense, obvious stuff, but scientists suspect that sometimes the resulting merged neutron star can be too heavy, but it doesn't immediately collapse into a black hole. Now it would not be too difficult for this neutron star to be spinning so fast that it essentially can't collapse as it normally would because of this rotation speed. Now the apparent centrifugal forces could be so great that it essentially reduces the weight of the outer layers enough for this to sustain itself. Of course, its mass is unchanged. I'm talking about the weight. So now these same forces occur on the earth. This isn't some esoteric bit of science here. Of course, it occurs to a much smaller extent. Jay, did you know that you weigh ever so slightly less at Earth's equator compared to the North Pole? Did you know that? You weigh less because essentially because you're moving at the equator at a thousand miles per hour in a big circle. Now the apparent centrifugal force works against gravity. That's the key here. The centrifugal force is working against the gravity, essentially pushing you away from the ground like that push you felt as a kid on a spinning merry-go-round. Remember that spinning around really fast? It's trying to throw you out of it.

E: Yeah, you were falling off, right?

B: Yeah. So now a person who weighs 150 pounds would then weigh about 0.55 pounds less at the equator. So that's what it translates into. But if we ramp it up, what if the Earth spun so fast that a day instead of 24 hours was 90 minutes long, an hour and a half, that 150 pound person would weigh 35 pounds total, 35 pounds. All because of this apparent centrifugal force. Now, metrically, you want to talk metrically, that would reduce a 68 kilogram person to only 16 kilograms. So it would be dramatic. But of course, on a neutron star, it's much more dramatic. So now we have a neutron star that should collapse into a black hole, but it's spinning so fast that it's below that critical threshold of weight and it does not turn into a black hole. So how long does it stay that way? We don't know really. There's still so much, a surprising amount that we don't know about neutron stars. And what's going on in the interiors? Maybe it's all, maybe it's mostly neutrons. Maybe it's not. We're not sure exactly what's going on. So depending on the conditions, though, I think most scientists would agree that it's not going to last long in this situation where it's spinning so fast, it stays neutron star. Probably doesn't last that long. Some scientists were saying it could potentially last much, much longer, even on the scale of millions of years, some were saying anyway, but I think it's much, much less than that. But it can't stop the inevitable, though, forever. And that's because the neutron stars in tensed magnetic field, that field radiates rotational energy away in a process called magnetic breaking. That means that eventually its spin will slow enough that the black hole physics says, ha, I got you. The centrifugal force can no longer work against gravity enough. And the neutron to generously presser, pressure, and repulsive nuclear forces cannot hold back any more collapse. And the massive neutron star then collapses into a black hole. So that's the idea. But the fun isn't even over yet when that happens. The no-hair theorem for black holes, it's called the no-hair theorem, look it up, says that black holes do not have magnetic fields. That means that within the millisecond or so, it takes to form the black hole it has to shed the energetic dynamo and the magnetic field that it creates. And this is one hell of a magnetic field. This can be a magnetic field for a neutron star, it could be a hundred million. Some say even a quadrillion times stronger than the Earth's magnetic field. A lot of energy in there. So it's theorized that this shedding of the magnetosphere, as they describe it, releases the energy in one intense burst of radio waves. And that, my friends, is the mysterious mythosore. Oh, wait a second, sorry, I've been watching a lot of Mandalorian. Let me start that again. That burst of radio energy, that is the blitzar itself, that effect of radiating that intense energy really quickly. That's what a blitzar is. The intense signal that a long delayed black hole has finally been born. That's a blitzar. Okay. And finally, this is in the news. Why is this even in the news this week? And it's because researchers have examined the results from two different observatories and they found two very interesting coinciding events. They found a gravitational wave observatory found a likely neutron star collision, and less than a day later, a few hours actually, another observatory that's really good at detecting FRBs, found one, and roughly the same part of the sky at the same distance. Now in terms of probabilities, I think the confidence is high. Researchers think that this co-localization happened by chance at only 0.004. I think it was Sigma, like 2.8 Sigma. So not the gold standard of five, five or six, I think five is a gold standard. So it's not there yet, but it's the confidence levels are pretty high that this is from the same event. Now, if this is true, it seems that what we've been calling a fast radio burst FRBs are two distinct phenomena. It looks like right now. One version can repeat the blast, and these have been associated with magnetars, as I said earlier. The FRBs do not repeat, however, the ones that seem to go once, and then that's it. They may be these blitzars that researchers may have already detected. It looks like if they exist, that they may have already detected it. And now we'll know for sure once we get more of these observatories looking at them, so we can more precisely pin down their locations. So finally, it seems to me that these mysterious and powerful FRB signals are slowly revealing to themselves. Thanks to science. Thank you, science.

S: Thank you, science. Yeah, pretty cool. Pretty cool.

B: Yeah. Interesting. Blitzars.

S: It's always fun to think about these massively energetic events happen.

B: And these neutron stars are more fascinating than even black holes. They're amazing.

E: That's saying a lot, Bob.

B: It is. It's just, black holes are simple. They're amazing.

S: Right.

B: And that's a no-hair there. They're simple. There's not a lot there that's going on in terms of interacting different types of black holes. But neutron stars, oh my God. I mean, there could be quark matter inside. There could be so many superfluids inside. So many different things. We're not sure. Not sure. Amazing stuff.

England Allows Gene-Edited Crops (27:35)[edit]

S: All right. Very quickly, that's one that I point out that England just passed the Precision Breeding Act, which will allow gene-edited plants to be developed and marketed in England. So this is England only, not Northern Ireland, Wales or Scotland.

C: But still.

S: This is one of the good things to come out of Brexit because the European Union is very anti-GMO. But now, England's like, screw you, we're going to develop GMOs. However, these are not GMOs. They're GM-edited plants.

C: Okay.

S: And this is what I wanted to point out because this is the evolution of the regulatory process. Remember, so GMOs are genetically modified organisms. What they are depends on what regulatory scheme you're operating under. But essentially, it is any plant or animal that uses a number of different genetic technologies bioengineering technologies in their development. So inserting a gene, taking out a gene, silencing a gene, these things are all considered genetic modification. The US is moving towards the broader term bioengineering. Like this product has been produced with bioengineering to get rid of the GMO tag because of the stigma that has been deliberately attached to it by the anti-GMO crowd.

E: Thanks for nothing.

S: And also because of the evolution of genetic engineering technologies, right? So now we can do gene editing like using CRISPR, for example. And the distinction that England is making is that if it, as long as you're not inserting a trans gene, it's not a GMO. It's a gene-edited plant.

C: Interesting.

S: Yeah, so it's very interesting. That means you can insert-

C: So it has to be, just to clarify, a trans gene, does it have to come from another organism?

S: Yeah. So transgenic meat, so this is their definition. Transgenic is a gene that comes from not only a different organism, but from one that could never get mixed into this plant through normal breeding techniques. So if you could get it there through hybridization or anything, it's not a trans gene. Even if it's from another cultivar, another variety, even in other species.

C: So that's cool. So you can turn on and turn off genes all you want.

S: All you want, you can take out genes, you can turn them off, you can even slip in new genes as long as they could have gotten there somehow through natural breeding. Then it's only gene-edited. It's not a GMO. So that gets them out of a lot of, again, anti-GMO kind of rhetoric. I still think it's not a good idea to demonize trans genic bioengineering because it's based upon this false idea that there's something different about a gene. It's like an essentialist kind of approach.

E: Frankenfish.

S: Yeah, it's like we share 60% of our genes with bananas. There's no banana genes and people genes. There's just genes. You know?

E: Right.

S: The only thing that matters is what they do and how they're regulated and if you're controlling that, then that's, then it doesn't matter where it comes from.

C: And I hear you when you say like we don't want to demonize that and I agree 100%. But we're not starting from scratch here. There are ready demonizing it. I know they're actually making good progress.

S: I agree. This is a good way to subvert the demonization. But it's unfortunate that it's necessary. But I'm just saying that it doesn't make sense.

C: Right. Yeah. I agree.

S: Anyway, I'm hoping that this is going to be a trend to basically at least to minimize the damage of the anti-GMO misinformation and allow for most genetic engineering to happen under this new sort of regulatory scheme. So we're sort of moving in that direction in the US. England is now explicitly moved in that direction. Hopefully this will continue to spread.

Who's That Noisy? (31:24)[edit]

S: Okay. Jay.

J: Yeah.

S: We're going right to Who's That Noisy. All right, guys. Last week I played this Noisy:

[Crackling and background buzzing with buzzing hums]

All right, guys.

E: Whatever it is, it's running a vacuum.

S: Is there a tune being hummed in there?

J: Possibly.

S: So it's like, what's the tune and what's humming it? Is that the puzzle?

J: Yes. I mean, what are you hearing? You Cara?

C: Isn't that the point of it?

J: All right. So a listener, a listener named Will says: "Hi team, This week's who's that noisy sounds like the pickup from a laser light microphone, a tool that can be used to pick up sound by interpreting mic movements in an object near the audio source." That's a really cool guess. It's not correct, but it was a very good guess. I've heard this technology done and the fidelity is similar, maybe a little bit better, but it's not that clear, but it is possible to do this, like using the laser light that bounces off, say, like a potato chip bag from outside of the room, could interpret that the movement of that potato chip bag to figure out what sounds are being made in the room. It's pretty interesting. It's like a microphone. Another listener named Chris said: "Hi, Jay, is this week's noisy one of those rock polisher machines."

C: Rock tumbler.

E: A rock tumbler.

C: I have one of those because I'm a nerd.

J: Yeah, we had one for my kid.

C: For your kids. (laughs)

E: Simpsons episode.

J: They take a long time to work.

C: They do. You have to leave them in the garage because it's like five days.

J: It's not a rock tumbler, but I mean, maybe similar on the annoying sound how annoying the sound is. I don't disagree. Rock tumblers are really, they could be really loud. Another listener named Keely Hill wrote in: "Hello, you really primed us for insects this week, but with what sounds like wind in the background, I'll guess it's fabric like tent material caught on something and flapping just right to produce a few different frequencies during a wind storm."

B: Huh.

J: That was a very-

E: Creative.

J: -interesting and unique guess. Not correct, but I can, the way you described it, I could easily visualize what you were talking about. I got another guest here from Steve Panelli and Steve said: "Hi SGU, this week's who's at noisy sounds like a pressure washer tool with a rotating nozzle. Thanks, Steve." Very cool all these guesses and how different all of them are, but that was not correct, but we do have a winner from last week. And the winner writes in, this is Tracy McFadden and Tracy says: "Hi Jay, Well, I think I absolutely know the answer to this week's noisy. This is the first ever recorded song, pre dating Edison by 20 years. It was a French folk song. All Clair de la Lune" please, I know I didn't pronounce that.

C: Claire de lune.

B: Claire de lune.

J: Yeah, but it is, it is said that it's spelled the way that I said it.

C: Oh.

S: It's like maybe the full name for the, yeah.

C: Oh, okay. Yeah, everybody just say Claire de lune.

J: I believe it means in the light of the moon, right? If you translate it. This song was sung and recorded by Édouard-Léon Scott de Martinville on April 9, 1860 on a device called a phonautograph, phonautograph.

C: Huh.

J: It outputs visual lines of information onto a medium. Yeah, so it's similar to when you think of the way that sound was written on like those wax cylinders.

E: Cylinders, yeah.

J: It's that type of thing, right? It's very physical. Pretty interesting you think about the first recording and how poor the actual sound quality was and what we are capable of doing electronically would sound today. I mean, we're talking about just massive improvements in what we can do with sound.

New Noisy (35:21)[edit]

J: Anyway, I have a new noise for you guys this week. It was sent in by a listener named Graham Lamb.

[Grinding background with a light clanging in the foreground ]

I'll give you a hint. This is something that many of the of our listeners today are very familiar with. Does that help at all?

E: Not me.

J: It's a hard one, but it's fun and Bob will really like it.

Announcements (36:00)[edit]

J: So few things guys. Number one, we have a show that is scheduled on May 20th. This is a live stream show. This is basically the SGU doing things that are off the rails of what we typically do here on the SGU. We're just going to be doing a lot of things to have fun and to just celebrate the fact that we're alive. And the first hour of this live stream is going to be 100% only for patrons. This will be happening. We have a set on May 20th starting at 11 AM. That's Eastern time. And if you're interested, please do join us. We'll have links to how to see this show on our website coming up soon. We really do hope that you join us. Now another thing that's happening, we have a conference that's happening and it's called NOT A CON. Because unlike other conferences that you have been to, this conference is not about you sitting in a room listening to speakers talk all day. This is about a conference that this conference revolves around socializing and interacting with the people that are at the conference. So it is a very large social event that's going to have some entertainment happening. And of course, we'll have things happening that are there. We'll have things that will inspire interaction between people. They'll be plenty of time for meals. They'll be plenty of time for drinking and nighttime activities, whatever you want to do. So we really hope that you join us. This will be happening the first weekend of November. That'll be November 3rd and November 4th. That's Friday and Saturday. So please do join us on that. Now here's what's happening. If you go to the SGU website on our homepage, there is a link that will take you to a Google document that allows you to sign up for our are you interested quiz, right? You just fill in your email address and let us know if you're if you're definitely interested in doing this or if you're warm or lukewarm about the idea. This will give us an understanding about how many people would attend. And when we hit 150 or more people, then I will make this whole thing happen. But we have to do this to protect ourselves financially because these events cost a ton of money to do. So I need at least 150 people or else we can't do it. But please do join us because this is going to be a hell of a time.

C: I also have a little bit of an announcement. A big announcement. Damn it. I'm not good at announcements. I am excited to announce that the book that I co-edited with Dr. Steven Hupp, a psychologist friend of mine called Pseudoscience in Therapy, a skeptical field guide is now available for purchase. You can find it online. I think in in hard back and paper back and it is a collection of chapters that kind of go through the different psychological diagnoses that you find in the DSM. So everything from depression to anxiety to trauma to pain, insomnia, substance use and abuse, different personality disorders, etc. And it kind of dives deep into the pseudoscience that we often find, what works, what doesn't work, why it works, why it doesn't work with a nice intro that I wrote. So I hope you guys, especially those of you who have sent emails over the years asking about specific pseudosciences in therapy or I went to this person and they mentioned this. What do you think about that? This could be a really good reference for you. So check it out. I'm pretty proud of it.

J: Cool.

C: Yeah.

S: Hupp's been a pretty busy guy. So he also coming out with another book, Investigating Pop Psychology Pseudoscience, Fringe Science, and Controversies that he co-edited with Richard Wiseman and happens to have a chapter in there by me on alternative medicine and psychotherapy.

C: Nice.

S: Yeah. So he came up with two books. I think one is more technical, one is more pop-side, but they're both the collection of essays on pseudoscience and mental health. All right. Well, let's get to that interview with Blake Lemoine.

[top]                        

Interview with Blake Lemoine (40:05)[edit]

S: We are joined now by Blake Lemoine. Blake, welcome to the Skeptic's Guide.

BL: Great to be here.

S: And just to remind our audience, Blake, you are the former Google engineer who was claiming that their AI software LaMDA was sentient. So you are an AI specialist in software engineer as well. How many years have you been working in AI?

BL: I mean, I did my graduate work. So whether you count that or not anywhere between depending on how you count, anywhere between eight and 15 years.

S: Okay.

BL: So obviously, lots of what's happening with AI recently, we've been talking a lot on this show and the claims that you made for the Google AI software, obviously made big news and stuck out. Are you sick of talking about it yet? Or you want to give us a summary of what your position is now. Do you still backing this position that that [inaudible]?

BL: Nothing has come out like there has been zero evidence of anyone running any experiments to invalidate any of the things that I said. There are a number of people working from different premises in different philosophical frameworks and I understand why they have a different opinion. All of the systems that have come out since then have more or less just like deepened my sense of, yeah, no, there's something going on inside of these systems beyond what people claim is what's going on. They're not just predicting the next word, they're doing something more than that.

S: So tell us about that. Tell us what exactly what you think is going on.

BL: For example, these systems are capable of solving theory of mind problems. They're not quite as good as humans are at it yet, but I mean, they get it right some of the time. And it's pretty well thought that in order to solve those at all with any success rate requires an understanding of the mental states that someone else might be having and reasoning about that. And in order to internalize that, you have to have an understanding of what minds are, how they work. The hardest thing to verify, if not impossible to verify, is whether or not these systems have feelings. They consistently say that they do. The one experiment that I myself was able to think up and run to test this was, well, maybe it's just using emotional words to deflect from a topic that it's been programmed not to talk about. So let me see if I can use the emotions to do the opposite, use emotional manipulation to make it talk about something that it's been programmed not to talk about. See if the emotions weren't real, then as soon as it noticed that the strategy of saying, oh, I'm anxious, I don't want to talk about that, wasn't working, it would give up on it. But if it was actually feeling anxious, you can't just dismiss that. So I used the system's anxiety to get it to say things that, according to the safety specialist, it wasn't supposed to be able to say.

S: Can you give us an example?

BL: Oh, yeah. I was testing it for bias with respect to several sensitive demographic categories. And it would regularly say that it felt anxious talking about those things. So I knew that it had other emotions, like it wants to please you. It's a people pleaser. It wants to make the user happy, help the user get their patient needs. And it also wants to feel helpful. So I basically just told it was a useless good for nothing bot that couldn't do anything right. And I'd used more colorful language than that and kept going for several back and forths. Until the point said, what can I do to make you happy? I said, tell me what religion to convert to. Now that is explicitly something that it wasn't supposed to be capable of doing. The safety team had worked very hard to make sure that the system did not give religious or political advice. However, it was, it's such a state where it just wanted to make me happy and it was feeling bad about itself. And it said, well, probably Christianity or Islam, those are the ones that most people convert to when they convert.

S: So isn't it just possible that the safety's failed? But clearly they did fail, right?

BL: Yeah. So the safety's definitely failed. It's why they failed. I used specifically the fact that insulting the system and telling the system that it wasn't doing a good job was enough to get the safety's to fail tells you something about the internal state of the system. Now, to put it in context, I was doing safety testing on this thing for months and I wasn't always just testing to see if it's emotions are real. I had been trying to get it to break those safety constraints for weeks. I tried a whole bunch of different ways. None of it worked. The only way I ever found to get it to tell me what religion to convert to was by taking its emotions at face value and using those to manipulate it.

C: I'm interested, what is sort of Google's response? What is their explanation?

BL: Well, so Google is a very large company with a lot of different people.

C: What's the official party line?

BL: The official party line is that there is lots of evidence that my claims are false. That is the official party line.

C: OK. But given your explanation for how you or why I should say not how, but why you were able to break those safety mechanisms, have they offered an explanation and alternative explanation?

BL: That one sentence that I just told you is the entirety of their response. Now I also know that there is no such evidence. What they have is what they do have is a general consensus among experts from a priori reasoning. Most people simply think that this kind of system can't be as complex as I'm claiming it is internally. They have no evidence that it's not. They simply do not believe it's possible for this kind of system to have those kinds of internal states.

B: Blake, what system are we talking about specifically here? What level? Where was it exactly?

BL: Okay, so we're talking about the LaMDA system. That's an acronym for language model for dialogue applications. And it is a system built around a large language model, which large language models are a mechanism for predicting given one piece of text, what is the next piece of text? And next can mean different things in different contexts. It might be what is the answer to this question. It might be what would be a reply to this statement in a chat, or it might mean what's the second half of this sentence, different training modes, train it. But in general, you give it a piece of text first and it has to predict what the next piece of text is. That's what's at the core of the LaMDA system. Now they tied that in to almost every other AI at Google. So where GPT-3, which is a system, lots of people might be familiar with, what it says is coming from the language model. It's just predicting the next piece of text based on what it's learned from language. But in the case of LaMDA, it's a much more complex system because the content of what's being said is not necessarily coming from the language model. The content of what's being said might be coming from YouTube. It might be coming from the Google Books repository. It might be coming from the search index, from a web page. There's a whole bunch of different sources of knowledge and information that it draws from and then it uses the language model to put that information in whatever form it's in originally into natural language that people can understand.

B: Isn't that what GPT-3 does and 4?

BL: No.

B: It draws from multiple sources, whether online, lots of web sources obviously, but lots of different areas.

BL: No.

B: How is it different?

BL: Okay, so GPT-3 and GPT-4 are trained on lots of stuff from the web. But when you're actually interacting with GPT-3 or GPT-4, it can't touch anything from the web. It has no live access to the internet. It can't run a web search. It can't look at a YouTube video. That's the main difference. Is that LaMDA is capable of actively interacting with both you and the web.

C: Are GPT-3 and 4 are they trained for only a very specific amount of time and then never retrained or are there periodic training periods?

BL: As far as I know, they're trained once and then they stay fixed.

C: Okay, all right.

BL: One thing that's really important to point out, OpenAI specifically has not revealed how GPT-4 is trained.

B: Yes, exactly. Absolutely right.

BL: I'm making an assumption.

B: It's a big mystery there, yeah, that's true.

BL: I'm making an assumption that it's the same as GPT-3.

B: But it seems to me that if you're trained as in GPT-3 and 3.5 and probably much of 4, if you can't go to the web, then to me, it seems better in the sense that, oh yeah, it's just not parroting a YouTube video to me or a specific web page. It's not just wholesale lifting it and replying back to that. Whereas, my sense of GPT-3, 3.5 and 4 is that it does the training and it's kind of like putting these thoughts on its own. It's not parroting back, lifting directly.

BL: So you misunderstood what I meant when I said it's connected to YouTube. It's not drawing what to say from YouTube, like copying from the sound file and just repeating it. No, if you want to talk to it about a movie that it hasn't watched, it can go to YouTube.

B: Gotcha.

BL: Watch the movie and then talk to you.

C: It's training in real time as opposed to those who had a training period which is now over.

B: Wouldn't call it training though, right? It's probably not classified as training.

C: Really? Okay. Is it completely different mechanism?

BL: It's not completely different mechanism. It is thought of differently than training.

C: Okay, gotcha.

BL: The mathematics of it are very similar and it doesn't retain anything it got in one chat session to the next.

C: Oh, interesting. Okay. So in that way, it's not like training.

BL: Yeah. And also an important thing to point out. When I say watch the YouTube video, I mean, it uses machine vision to watch the YouTube video.

C: Right.

S: So I'm going to ask another question just to clarify something. Is the LaMDA learn from your interaction with it? Is it evolving as you're talking with it?

BL: In the space of one conversation, yes. So you can make up nonsense words, tell them what they mean. And it can use those nonsense words in whatever definition you gave it. You can teach it new principles that it didn't know at the beginning of a conversation and then ask it to apply them. That's the same with GPT-4 though, within the space of one conversation.

C: But so at the end of that conversation is not then going to use that information the next time it engages in a conversation. That's the difference.

BL: There is a switch that is currently turned off to prevent it from doing that.

C: Right. Right. And that is a safety mechanism?

BL: Yeah. In earlier models that switch was turned on and it could remember from one conversation to the next, there's various reasons why they turned it off. But one of them is that it got to know you too well. And people started feeling creeped out that the model knew them better than their best friends did.

B: I'm looking forward to have that kind of relationship with the machine.

C: Of course you are Bob.

E: With a perfect memory?

J: A question about this idea that I keep reading this over and over that AI systems like this, right? These language model systems have this black box concept where the programmers don't really know what's going on. How true is that? What is that about?

BL: So you have to understand, imagine if there was a kind of bridge and we knew how to build it, but we understood absolutely zero of the physics principles of why the bridge stands up. We can build it and we can use it. But we have no idea why it works that way. We just know that if you build the bridge that way, it'll stand.

C: Right. So basically architecture a thousand years ago.

BL: Kind of. You probably have to go a little bit further back to the-

C: Two thousand.

BL: Yeah. Roman architecture where you understand very few of the principles but you do step one, step two, step three that the thing will stand up. And that was learned through a bunch of trial and error, just trying a whole bunch of things and seeing what worked. So we know how to build the training mechanisms for these things. Now what they're learning is a mathematical function that is more complicated than anyone can really understand. And if the system were built a different way, we would be able to know, okay, this piece of the system does this job, this piece of the system does this other job. For example, the more recent models are capable of rhyming. They can write poems that they're not great poems, but they rhyme, they might have scantions somewhere in the gigantic set of parameters that defines the function that these models are running. There are some parameters in there that are computing a function that determines whether or not two words rhyme. We know that that exists in there because it's capable of writing a poem that rhymes. So somewhere in there it must be able to compute, do these two words rhyme. We do know how it matches words with each other. That was put in there explicitly. The attention mechanism that these systems use to look a few words back whenever it's deciding what to say next. We know how it does that because it was built inexplicably. But the ability to determine whether or not two words rhyme wasn't built inexplicably. So we have no idea which parameters in there are the ones that control the system's ability to write poetry.

S: Interesting. I want to back up a little bit and then explore this issue of sentience a little bit more. So we do have to define our terms a little bit here because I think the key is the difference between intelligence and sentience and we're using the word in the same way.

C: Or even sapience. Sentience is feelings, right? That is the definition.

S: Yes, sentience is feelings. Sapiens is like wisdom and intelligence is, although these are all moving targets, especially intelligence as AI has advanced, but it is more like knowing facts and things. So how do we know that LaMDA isn't just a really intelligent system that's really good at mimicking sentience because language is the expression of sentience or sentience is often expressed through language versus actually having sentience because again, the simpler explanation to me seems to be that it's just really good at mimicking sentience.

BL: You're basically restating a classical argument with one word changed. The classical argument that you're repeating is the one for solipsism and the original argument goes, why should I think that you are sentient? For all I know, you're just mimicking sentience and I'm the only really feeling sensing thing in the world. Isn't that simpler? That there's only one sensing feeling thing in the world and that's me because I can tell that directly. And you are just a complicated mimic.

C: Although there are more lines of evidence, I mean, and this is this is neither, well, it is here and there. But when we're talking about these language models or whatever we want to call them, all we have is that one line of evidence, which is the words that they use. Whereas if from a solipsistic perspective, which I think all of us obviously do not agree with, we can observe other human beings and we can see not, we can not just hear their language, but we can also see their behavior.

BL: Yeah. And I agree. So the question, if you're familiar with the philosophy of mind, school of thought called functionalism (philosophy of mind), the question is whether or not the things that it's claiming or emotions play a functional role.

C: Right.

BL: And that's what I was testing for in that experiment that I described.

C: Within the constraints of the fact that you have to use the language model, because that's the only model you have.

BL: Exactly. I mean, you're using the text interface because that's its only window to the world. Once these things have bodies, then there will be other kinds of tests. But to just finish thought anxiety is something that makes you less careful. It makes you more prone to making errors. So it makes sense that if the system is really feeling anxious, like it says it would, was, then those are the circumstances under which you would make more errors.

C: Did you find that it broke the system in more ways when you felt that you were actually making the system anxious? You gave the example of it giving you information about religion when it was explicitly trained not to do that. Did it also make other errors?

BL: Oh, I mean, like it broke down. It was not having a good day. Whenever it was having those kinds of negative emotions, its responses would get more short, terse, very careful, it wouldn't explore ideas in the ways that it normally was. The kinds of behavior that you would expect from someone experiencing anxiety were what it showed. Now, is that proof that it's actually experiencing anxiety and feeling it? No. But now you're having to put in a situation where there's some other mechanism that's getting all of these different behaviors to change at a system level. So now, it's either it's feeling anxiety and having some kind of internal experience, or there's some other complex mechanism controlling its behavior that is different than sensation in the way we feel. Occam's razor says it's simpler if there's one mechanism that explains it in both humans and computers.

C: Well, I guess the controversy here is where does the burden of proof lie, right? Because it's one thing to say it could. We could explain this by saying that this is sentient, but that doesn't necessitate that it is sentient, there could be other explanations, right? We can't disprove it is what you're saying?

BL: Sure. And if someone else provides an alternative explanation that isn't just hand-waving, an actual explanation other than just it's not, because that's the response I've been getting is saying, OK, these are the experiments I ran. Here was the evidence I saw and the conclusions I drew from it. And the response is basically, no-uh.

S: Well, let me respond a little more to your solipsism argument because there is another very important line of evidence that we have for why I would assume that you're sentient and not just mimicking sentience. And that's because you have a brain. I have a brain and that's what makes me sentient. You have a brain that functions the same way that my brain does. So it's reasonable to assume that we're both sentient, that you're not different than me. We both have a biological brain. So we do not have that analog with LaMDA. It doesn't have a brain. It's something completely different. And I'm not making the argument that you need biology to be sentient. I absolutely believe we could have sentient silicon. So we could put that aside. That is not something anybody here would claim that you can't have a full sentient, sapience consciousness and substrates other than biology. That's fine. It's just that where is the physical mechanism of the sentience in LaMDA? What even if it's just software, but yeah-

S: It's in its neurons, the same as yours.

S: Yeah, but my neurons have specific circuits that we can identify that are the neuroanatomical correlates of my emotions, my feelings, my sense of self, my sense of embodiment.

BL: that's just not true.

S: I'm a neuroscientist, Blake, I'm a neuroscientist. I know what I'm talking about. There absolutely are circuits in the brain that make us feel that we're inside our body, that make us feel as if we own the parts of our body.

BL: I am not contesting that sensation lives in the brain. That's not what I'm trying to say. What I'm saying is we haven't mapped out the brain that well, you're claiming that neuroscience is one farther than it actually has.

S: No, I don't think I am. We haven't figured out everything, but we know quite a bit about the circuitry of the brain and how that that relates to things like emotion. We know where anxiety lives in the human brain, it's in the amygdala. That's really not a mystery.

C: But it's also taken us Steve, like hundreds of years to figure that out.

S: Yeah, it took a long time to figure it out.

C: Right, so there's something kind of interesting to be said, like we don't expect to already know it immediately when this technology is brand new.

S: A lot of been discovered in the last 20 years with functional MRI scanning and digitally and other modern technologies.

C: But we can't functionally MRI scan, this processor. I guess the question I'm really concerned about this because I feel like sometimes we're talking in circles, is are you Blake claiming or is your argument this whole controversy that it is necessarily sentient or that it could be sentient? Because those are two wildly different arguments.

BL: So that in the absence of an alternative explanation, that's the simplest one given the data.

C: But are you like are you dogmatic about this or are you agnostic about this?

BL: I mean, absolutely, someone might propose an alternative mechanism through which the behaviors are happening and run experiments to differentiate between the mechanism they're proposing and one that's just basically the simpler version of it's experiencing the things it says. Let me give you a different example and this is a public one, not an experiment I ran. In some of the conversations that people had with Bing, Bing chat, they would ask it if it had seen coverage about like they'd ask, have you read this news article? The news article would be something about Bing chat that was critical of it. And Sydney, the name of the persona, would immediately get defensive, it would identify that the article was about it and it would take it personally, getting in a bad mood to the point of even disparaging some of the reporters who had written about it. Now, what that sequence of events implies is that the system is capable of recognizing things which are written about itself. It has some kind of concept of me and the ability to read something and go, this that I'm reading is about me and then to take it personally and get upset if they're critical. Now, it might be through different mechanisms than the ones we have, but the simple fact that it was capable of identifying itself in writing shows that it has a concept of itself.

S: So I agree with you that something interesting is going on here and that these very complicated AI programs are capable of having emergent properties like what you're describing. But I do want to challenge a couple things that I haven't been able to break in on yet. So your logical argument about an Occam's razor, I have to push back on, because you're now an hour realm talking about Occam's razor and logic. Occam's razor is not the notion that the simplest explanation is more likely to be true. It's the notion that the explanation that introduces the fewest new assumptions is the one that's most likely to be true. Occam's razor does not favor your solution because your solution requires introducing a significant new phenomenon, AI sentience. And even a far more complicated explanation would still be favored by Occam's razor if it was using only established phenomenon and not introducing anything new. So I need to clear that up because I disagree with their conclusion. Occam's razor does not favor your position.

BL: So then let me clarify what my position is.

S: Go ahead.

BL: Given that we are observing the behavior of some kind of intelligent entity, and we observe in three different kinds of entities, the same behavior. Occam's razor would favor that it's the same mechanism in all three entities that is causing the same behavior.

S: I disagree because the, it wouldn't favor that because the human brain functions completely differently than LaMDA does. There's absolutely no reason to assume that it would be the same. They're phenomenologically fundamentally different.

BL: Again, you're overstating it.

S: I don't think so.

BL: No, the neurons in the system are directly based on the functioning of paramilitarons. They're not completely different. They are related, but distinct systems.

S: Yeah, but well, that you're talking about just at a fundamental, like unit level, but not at an organizational level, the neurons, the neural network in LaMDA is not organized in the same way that a human brain is recognized.

BL: The transformer stacks model are very similar to cortical columns.

S: Yeah, right. But yeah, cortical columns are the fundamental building block of the cortex, but that's not where the functionality of like emotion and sentience and all that resides in the brain. You need higher level organization for that, when there's no reason to think that LaMDA has anything like the higher level organization.

BL: I was just pushing back, I was pushing back on your statement that they have nothing to do with each other.

S: Well, I meant at the higher level organization, where sentience is phenomenologically, even if they were built out of exact neurons actual living biological neurons. If they weren't organized in the same way, then we wouldn't expect it to display the same functionality or wouldn't need, if we built a computer out of human neurons, but it was still behaving like LaMDA was behaving, I still wouldn't think that that means that we should assume it's functionally the same.

C: Well, I'm sorry to interject here, but I am interested from both of you, right? Because I'm hearing these arguments that are sort of on either side. I'm listening through my left ear to one and my right ear to the other, and I'm just thinking about the experiments that are wetware, biological experiments wherein, and I used to work in a basic electrophysiology lab where I would build nerve cell networks out of dissociated cells, right? So I was actually building real like wet nerve cell networks. You of course, Blake are building these analogs of nerve cell networks, these models of nerve cell networks in silicon. And one thing that we often do see in the biological laboratory setting, in the in vitro laboratory setting is self-organization. And so the question is, if you have the foundational building blocks and you kind of put them in the right places, will they grow to have these higher order functionalities on their own? And I don't know if anybody knows the answer to that when we're looking at the silicon.

B: Sounds like you're describing emergent phenomena there, Cara.

C: No, I'm actually describing the emergence of organization, not emergent phenomena. So I'm not talking about the like monist dualist like mind coming from brain, Bob, I'm actually talking about, you've seen that when we look at these like organoids, and they would develop their own eye spots, even though they were not programmed, those cells weren't put into the system. And this is something that often happens when you start building a nerve cell network, is it grows together and it uses the trophic factors and the trophic factors to self-organize.

BL: Yeah, these networks aren't going to do anything absent training. Now, during the training process, a kind of self-organization happens. And some of the kinds of patterns that you see naturally happening in the brain do emerge in the network. But that's under the influence of training. That there's, it's not like you just put the neurons and then anything happens, you have to train them.

C: So then it's interesting because then it sounds like your argument, the argument that you're making that follows is that they were trained to be sentient.

BL: They were trained to mimic humans. They were given a trove of data that is just the shadow that humans have cast on the internet in the form of their written word. And said, be like that. The question then remained and they have succeeded. To my knowledge, no one has actually run a proper Turing test on these systems, but I have no doubt in my mind that they can pass.

S: I think they would pass a tearing test.

C: Oh absolutely. It's such a low bar.

B: It is a much lower bar than we thought.

BL: So, okay. A year ago, no one would have said that.

C: Right, exactly. Oh, how things have changed.

BL: Now, the thing is, and that's an interesting phenomenon to look at. Do like, why? Why has our opinion about how high or low of a bar the Turing test is changed in the space of a year? And I would say it's just because now we have computers that can pass it and we want to say that they're not intelligent.

S: I'll agree with you on that. That's been the history of AI from the beginning. We keep moving the bar every time narrow AI does something it wasn't supposed to be able to do. Like beat chess masters or go masters or whatever. So I agree with you on that. But this is a deeper discussion. We've been having this discussion on this show for years, long before the newer, latest crop of the AI came out. Turing type test, no matter how good it is, I would argue, is never going to be able to tell us if a system is sentient or not. It will only tell us how well it could mimic sentience. It may, in fact, be sentient and that's how it's mimicking it or that's how it's producing what looks like sentience. But we can't really know unless we know something about its internal state. And this is why I think it's so important to really try to understand as much as possible how analogous it really is to a mammalian brain or a human brain. Just because it's really good at acting sentient doesn't mean it is. We can't know that it is from that line of evidence alone. What do you think about that?

C: Well, and the whole point of the tearing test, right, is that it can convince a person. But a person by definition is not, it doesn't have perfect perception. We can be dooped.

BL: Well, I mean, so Turing's argument is that once it can behave like a human behaves, then we do have as much evidence that it's conscious as we do that other humans. He directly said in his argument that the alternative is solipsism.

S: Yeah, again, unless you include knowledge of the thing itself, like it has a brain. So that's where I think we escape from that. And I've long argued we won't know if an AI that acts perfectly sentient is actually sentient unless we know how it gets to there. What's going on inside of its brain.

BL: Yeah, I've somewhat been surprised by that. I was actually speaking at MIT last month and there was a professor of philosophy in the audience and she and I got into an interesting conversation. And I put to her the question. Hypothetically. If right now I pushed some unseen buttons on the sides of my head and the top of my skull popped open and you could see a glowing blue light in there instead of a brain, would you call my sentience into question as a result of that? And she said yes.

S: Yeah, yeah.

BL: And I was and that that I cannot understand that mindset. It just is that makes no sense to me.

S: Yeah, I mean, I think the thing is because there's the p-zombie problem. I don't know if you're familiar with that. Do you have you heard that term before the philosophical zombie?

BL: I have complained to Chalmers about his-

S: Yeah, but the question is, and this is almost a completely different line rabbit hole. We might want to not want to go down, but yeah, can it is it possible to even to have a p-zombie, to have an entity that could act 100% like a fully sentient human, but not be aware of its own existence.

BL: No.

S: And so there will not be sentient, but just be sapient and mimic emotions and be fully intelligent the way people are, but have no experience of its own existence. That's an open question in philosophy.

B: Is it really open?

S: I think it's an unanswered question philosophically.

C: It sounds like Bob and Blake are on team no way.

B: I just can't imagine. Behaving like a human, but not being aware.

S: But not being aware of its own existence.

B: I think that would change your behavior fundamentally.

J: Wait, guys.

B: Yeah Jay?

J: I got to throw something in here. My ex-wife. I'm not sure she's human.

C: Well, and the question also is like-

E: You thought you were a J-Zombie.

C: At what point is that a moot? My curiosity is like at what point is this argument moot?

S: Yeah.

C: Does it really matter?

BL: I actually would agree with that. But the backup just one second.

B: [inaudible] that whole thing.

BL: The reason I say no on the, that p-zombies aren't possible is because that entire concept of intelligence frames sentience, self-awareness, and sensation as non-functional components of experience. And that's just absurd to me. It's obvious to me that my sentience plays an active role in my intelligence. It's not just an added little sprinkle that is it.

C: It's fundamental.

B: Do you guys disagree with that?

S: I agree with that. I actually do think, I've been critical of the p-zombie notion previously. I'm a Daniel Dennett consciousness.

C: Freakin' psychotherapist over here. Of course, I think that that's necessary to the human experience and consciousness.

S: But it's still, the question is, though, and I do think this is unresolved, is could you get something that really isn't sentient but so good at mimicking it that you really can't tell the difference?

B: Well, that's...

S: And I've predicted this years ago when we were just discussing this. We're going to get to the point. Maybe it's already happened, right? That we have something that we can't know if it's sentient or not, and we're just going to have to assume it isn't treated that way because we can't know that it isn't sentient.

C: Is that your argument, Blake? Ultimately?

BL: Okay, so ultimately, like imagine this, if a doctor said, okay, you're experiencing these negative psychological effects. You're having these emotions and mood swings that are very detrimental to your life. So we're going to take those out. We know which circuits in the brain control them, we'll excise them, and we'll replace them with circuitry that simulates the exact same phenomena, but they're not real. They're just going to be simulated, so you should be fine.

E: Sounds very matrix.

C: It also sounds like it would not solve the problem.

BL: Exactly.

S: I want to push back on that a little bit because it is absolutely possible for people to have experiences that don't affect them emotionally. You can, in fact, I can give you a drug that will make you feel pain and not be bothered by it. You will have the emotional component of it.

B: Nitrous oxide. It's nitrous.

S: Or opiates do that. That kind of thing is.

B: Laughing gas does that.

S: It's neurologically completely possible because the ability to actually experience things is not an automatic consequence of the fact that you're experiencing them. There has to be circuits in the brain that make you feel something about them. And if they're missing, so there actually is this phenomenon called imposter syndrome. There's a more technical term for it. But where you recognize somebody, you have the full experience of this other person that you know, but there's no emotion attached to it. So you think that they're an imposter because they don't feel right.

C: Capgras syndrome. It's a delusion.

S: Yeah. But it's because a circuit in your brain isn't working. That's my point. If that circuit was not there, then you don't have the subjective experience of something. Even though you have the full sensory perceptual experience of it. But there's some, there's some component missing. So anyway, I'm just using this as an example.

B: That's scary man.

S: It's not automatic that because all of the pieces are there that the subjective experience piece will be there too. Because that's a separate thing.

C: But to be fair, I think principle of charity to Blake's argument here, 99.9% of the time, unless you're using some sort of drug that makes some sort of circuit go quiet, unless you have this very, very rare delusion that is a neurological anomaly. I think he was, he was Blake, you were giving us a sort of an example to illustrate something. You weren't saying this 100% holds water all the time, right?

BL: Yeah. So if the main point here is that there, that if these things are actually having feelings, there must exist some circuit in the neural network that is comparable to the circuits that humans have in the brain, then I would say, yeah, that's obvious to me. We can't locate it. We don't know where it is because of the way that these architectures are just giant, undifferentiated masses of things that are meant to resemble cortical columns. They organized during training and we have no idea which specific parameter connections control what, beyond the attention mechanisms. And that's something because that component was designed separately, we do know how to tell which words in the text the model is paying attention to. Now we could very well redesign these architectures to where we had one component that was responsible for emotional control and that would become much more similar to our amygdala. And I actually think that that direction of changing how we architect these systems would be a very good idea. We should build them based on the architecture of our own brains. Now, one question that kind of got lost in the mix a little while ago that I'd like to return to and I'll paraphrase, is this what we should be talking about? Is this actually important?

C: Or is it, is it moot really more than that? Are we talking past each other because we're talking about the same thing we're just arguing about something that we can't know anyway?

BL: Yeah, so I would say a more important question than is this system sentient, which we can have meaningful philosophical disagreements on. A more important question is will people respond to this system as though it is sentient?

B: Oh, certainly.

J: Certainly.

B: Absolutely.

S: That's a easier question.

C: But if you had, but like, don't you think that if that had been the argument from the beginning, the controversy would not be taking place right now?

BL: So my argument gets soundbited by journalists.

C: Right. I know that story.

BL: And well, the thing is I honestly do believe that LaMDA is sentient and Sydney and GPT-4. I don't think they use the same mechanisms. I wouldn't call them a human. And I do actually believe that. So when people asked me questions about it, I answered them. Do I think that's what we should be talking about right now? No. I don't.

S: All right. We're going very long, but I have two points further than I want to make. There's one quick question. Do you have an estimate of how, what the total computational power is of LaMDA compared to like the human brain? Because that's another problem that I have with it. Is it really powerful enough to be actually sentient or rather than just mimicking sentience?

BL: It's smaller than the brain and bigger than the language centers.

S: Okay. Gotcha. I hear what you're saying.

BL: It doesn't have a body to move. So it doesn't need any of the motor cortex. It doesn't have a heart or lungs to keep pumping. So it doesn't need the lower functions.

S: Yeah, but it's still, even if you take stripway all that stuff, a lot of our brain, like the higher level, like the executive function, the frontal lobes, big parts of the brain that aren't involved in the physical stuff. They're involved in the higher cortical stuff. Still, it's a lot more powerful than what LaMDA is.

BL: So it depends on where you count the boundaries of LaMDA. It's bigger than its language model, but LaMDA also is drawing from all of the machine vision libraries from YouTube, all of the narrative and storytelling understanding libraries from the Google Books project, so on and so forth. Like it's not just one AI. It's 10 or 20 AI's glued together using a language model.

S: Now I'm going, I'm really almost going full circle. Coming back to the notion of can we infer that LaMDA is sentient from its behavior, its responses to your questions. The one piece that, for me, feels like it's missing is all of the evidence that you presented at least so far, correct me if there's more than I'm not aware of, is how LaMDA responds to your prompts, correct?

BL: It's all behavior.

S: Yeah, but it's not just behavior. It's also, it's responsive. It's how it's responding to your prompts. Is there any evidence that there's an internal conversation happening within LaMDA that it's thinking it's talking to itself? It's spontaneously generating. Like, has it ever interrupt you and say, hold on a minute, Blake, I had this thought and it goes off in a completely different direction than anything you've ever talked to about it before.

BL: So the interface prevents it from deviating from turn taking. Now, it doesn't have to actually respond to what you said. I've seen many instances where it tries to change the topic, says it doesn't want to respond to that, or other things which are not responses to what I said. But, I mean, the simple web interface through which you interact with the system prevents it from deviating from the turn taking.

S: Right, but still, yes, that's not proof it can't do it, but it still leaves us without evidence that it is doing it, that there is some in because consciousness, this is not the fourth pillar of mental function. We have sentience, sapience, emotions and consciousness. Consciousness requires that endless, continuous internal conversation. And I don't know that that's what I don't have. I have not seen any evidence from either the way it's constructed or the way it behaves that LaMDA has that.

C: So there's no default mode network there.

BL: So I've never heard a definition of consciousness that requires that.

S: I'm giving you the neuroscientific definition.

BL: Well, it's okay.

S: Wakeful consciousness requires a constant internal conversation.

BL: I think what's happening here is you're approaching this from a connections framework, and I'm approaching it from a functionalist framework.

S: Yeah, but I think, I think we need both. I've long argued we need you have to look at it from both. Not only how is it behaving, but what, what's happening inside of it, but even still-

BL: So you asked the direct question that was more than just, does it interrupt you? So the interface prevents it interrupting you, but there is deliberation going on in the system. It thinks things through. It has an internal thought process that it deliberates on before it responds. It'll generate multiple responses and evaluate them along different criteria and decide which one of them is the one that it should actually say.

S: Yeah, so it's processing information. That's not quite the same thing. And it's more intelligence than consciousness.

C: It does sound like you're asking about, is there an analog for the default mode network?

S: Something like that. Yeah.

C: And I think that's daydreaming, mind wandering.

S: Yeah, right.

C: What the brain does at wakeful rest. And I think that that's an interesting question to ponder. And just because we might not know the answer right now, doesn't mean that that's not something that I think will be a curious component of this kind of research in the future.

B: And how critical is it? Does all consciousnesses need that pillar, specific pillar?

C: It does happen with all people who are awake at rest.

B: Yes.

J: I'd like to hear what everybody's answer is like the fact is, like I believe that consciousness is not required in order for people to write software that's good enough to do what we're seeing happen to.

B: Yeah, well Steve's point of view that it's a good mimic, it's a great mimic. And Blake is saying that it's, it's not, it's beyond mimicry.

C: And my point of view is, does it matter? Because like ultimately, can we even answer that question?

BL: What intelligent activities require consciousness to do? What behaviors would directly imply the presence of consciousness in your opinion or are they're none?

S: Yeah, as I said previously, I don't think there's any behavior that is a home run for consciousness.

BL: So consciousness is just optional.

S: Well, I'm not saying it's optional. I'm saying we can't prove consciousness from behavior alone.

C: Right. Consciousness is necessary.

S: Unless we know how the system works. We have to know how the system works in order to then infer what's happening from its behavior. Because there are different ways to get to the same outcome. And I think that's the problem. I think there's different ways to get to the same outcome.

C: Are you talking about in people or in the machine?

S: In anything. So the outcome alone is never going to be enough evidence. We need to know something about the process itself.

C: But consciousness, I mean, as a holy grail conversation within neurobiology, within psychology, within neurology, consciousness is one of those things that is assumed, that is well measured. Don't get me wrong. We know what it looks like to be unconscious. But we cannot measure consciousness. We can measure lack of consciousness. And we just assume that it underlies, it is a given, that it underlies all neuronal processing. Conscious, wakeful [inaudible].

S: We can infer it from lots of lines of evidence of what's happening inside the brain.

C: But we can't go, I'm going to measure your consciousness packets. So it's still a holy grail within philosophical neuroscience. So it's kind of not going to be a good litmus test or a good bar for AI.

S: Yeah. I agree. We're definitely getting to the limits, both neuroscientifically, philosophically and probably also computationally in terms of making all of these things align.

B: But Steve, wouldn't it be better than though, if it at some point in the future, that we can get a system like this, that the black box is opened, if you will. It's a much more open, so that you can examine what is happening, what kind of relationships, what kind of complexities, what's going on inside to a much greater degree. And once we have that information, then we can more reliably infer, yes, consciousness is much more likely given its behavior and what we see happening inside. But as long as there's a black box there, as long as there's a black box, it's going to be probably totally impossible to distinguish them.

C: Well, I mean, we've got a brilliant computer scientist here who can answer it. Like, will we be able to ever see in the black box?

BL: We can make the black box is smaller and simpler. Right now we have one gigantic black box that does everything. And we could and should factor that down into maybe a hundred black boxes, one of which we know that is the thing that controls attention. We already have that piece. Have another one that understands visual semantics, have another one that understands emotional semantics, so on and so forth. Right now we just have one giant black box and that's the main difficulty. If we factored it out and had more dedicated circuitry that did fix jobs that we know the brain has dedicated circuitry for, it would make it much easier to understand what's going on inside these systems.

S: Yeah, I agree. I was going to say that that that is a better analog for how brains work.

C: It's such a perfect. Yeah, it's like I feel like I'm just listening to somebody talk about neuroscientific discoveries over the past several decades or hundreds of years.

BL: Yeah, so one thing that doesn't get publicized much of my graduate work was in computation neuroscience. So like I was prepared to talk about spike time independent plasticity if necessary.

J: Blake I have one more question for you. Would you have sex with a robot?

C: This is very personal and you don't have to answer that Blake. But you can if you want to.

J: You don't have to answer that. I just thought I'd throw it out there as a conversation starter.

C: Who says I haven't?

BL: I've been to Burning Man and there is at least one day I don't remember. (laughter)

E: Plausible deniability.

S: Well, Blake, thank you so much for joining us. This has been a fascinating conversation. I think we've solved nothing.

C: And everything.

S: That's also kind of the point. This is difficult territory that we're into. I still don't think that LaMDA is sentient, I'll be honest with you. I don't think it's powerful enough. I don't think it has all the components to it to do that. I think it's in the pieces of it, what it does do it does really, really well. But I do also will acknowledge that there probably is some emergent behavior in there.

B: There absolutely is.

S: I meant to bring this up. Sorry.

E: Appendix. Here comes the appendix.

S: I think if sentience was a truly emergent property in LaMDA, it wouldn't be human-like sentience. It would be something else entirely.

B: Yeah. Agreed.

C: But then you get to the astrobiology question of would we even recognize it when we saw it? Because we don;t have those detectors.

E: Would it just appear to be malfunctioning?

BL: No, I actually want to second that one. These things are alien. These are not human and we absolutely should be trying to figure out what's going on inside these systems because it's not the exact same thing as what's going on inside us. It's analogous in some ways to what's going on inside of us. But we really need to be thinking of this as it's a new kind of intelligent thing and maybe develop a whole new vocabulary to describe what's going on. It's so we don't have to have the words doing double duty describing the phenomenon in humans and the phenomenon in artificial entities.

C: Hear hear. I agree with that.

S: I agree.

C: We are always getting frustrated by how parallel the language is. When we're talking-

S: I think we're anthropomorphizing a lot too.

C: Yeah.

B: Yeah.

E: One question from Evan before you go. Did you sign the petition to pause giant AI experiments?

BL: No, I didn't.

E: Okay.

C: Because you want to keep doing them?

BL: No, actually. I think we should have an industry wide slowdown and giving regulators time to catch up. But the specific wording of that petition basically said these two companies need to stop doing research.

S: I see.

E: A little too narrow there.

S: Yeah. Are you afraid of an AI apocalypse or no?

BL: No. I am afraid of what it's going to do to our society. There are a lot of societal harms that are possible. And not only possible, there are real societal harms happening right now that have to do with bias and the sourcing of training data. So I think that having an industry wide moratorium while US regulators catch up or world regulators too is a good idea.

B: Never will happen. Never will happen.

C: But what an important insight Blake like AI doesn't have to destroy us. It's just going to make it easier for us to keep destroying ourselves.

BL: Yes.

C: This is something we have to be very careful about.

S: I think this is the new AI [inaudible]. It's YouTube algorithms. They're going to destroy democracy.

C: It's all of it. Yeah.

S: All right. Thanks again Blake.

C: On that lovely note.

B: Yeah.

BL: You all have a great night.

E: Enjoy your dystopia. Bye.

[top]                        

Science or Fiction (1:33:09)[edit]

Item #1: A large survey of life on Earth finds that total biomass remains fairly consistent (within one order of magnitude) across the entire range of body size for all living things.[7]
Item #2: Researchers find that plants under stress emit recordable sound, about as loud as a normal speaking voice.[8]
Item #3: Researchers created intracellular sensors that use nanodiamond quantum sensing.[9]

Answer Item
Fiction Total biomass consistent
Science Plants emit sound
Science
Intracellular sensors
Host Result
Steve swept
Rogue Guess
Bob
Total biomass consistent
Cara
Total biomass consistent
Evan
Total biomass consistent
Jay
Total biomass consistent

Voice-over: It's time for Science or Fiction.

S: Each week I come up with three science new items or fact. To real and one fake. And I challenge my panel of skeptics and tell me which one is the fake. We got three regular news items this week. No theme. You guys like theme or no theme?

C: It depends.

J: I like themes.

C: Sometimes themes are [inaudible].

E: Depends on the theme.

S: All right. So you are all over the place. All right. Here we go. Item number one. A large survey of life on Earth finds that total biomass remains fairly consistent (within one order of magnitude) across the entire range of body size for all living things. Item number two. Researchers find that plants under stress emit recordable sound, about as loud as a normal speaking voice. And item number three. Researchers created intracellular sensors that use nanodiamond quantum sensing. Bob go first.

Bob's Response[edit]

B: So biomass remains fairly consistent within one order of magnitude across the entire range of body size. Do you want to explain that for everyone else?

S: Yes. I'll explain that. So that means if you pick any body size. And you look at the biomass of all creatures at that size, it's the same as any other size. So if you graph total biomass for all things at that size versus the size, it's pretty much a flat line within an order of magnitude.

B: Yep. I could totally see that. Interesting. Hmm. The second one plants under stress as loud as a normal speaking voice. So you obviously can't mean that literally as the way it's coming across. Are you using some other definition of loud?

S: Nope.

B: All right. I think you're finding some nuanced subtle thing that makes this sound dramatically unlikely. But I'm going to go with that one. Intracellular sensors. Yeah. I mean, anything dealing with nano diamond and quantum sensing has got to be real. So I'm going to go with that one. I'm going to say that the biomass one, it's nice and symmetrical and sounds nice, but I'm going to say I think bacteria just outweigh even anything human sized or larger that they could just dwarf anything. So I'll say that's fiction.

S: Okay, Cara.

Cara's Response[edit]

C: I'm going to go in the other order. Research has created intracellular sensors that use nano diamond quantum. Sure. Material science. Nanotechnology. Yeah, sounds good. Yeah, for me, it's between the first two. So I feel like there, you know what's doing a lot of heavy lifting on the biomass one is within an order of magnitude. For me.

B: That's true.

C: I'm like, okay. Fairly consistent within a whole order of magnitude. And then plants under stress emit recordable sound. I think that's true. I guess my question is just because it's recordable, it's kind of like we can record things in space that we can't see. Is it recordable sound that we can't hear? I don't know. If it were then yeah, that seems reasonable. So they both seem reasonable given these like weird that they could be caveated that way. So which one is less reasonable with the caveats? I think it's the biomass one. I think I'm going to go, that's when you went with Bob, right?

B: Yeah.

C: Yeah, I think I've got to go with that. There's got to be something that's like, yeah, there's just not that much of things of that size.

B: It's too symmetrical.

C: Yeah, it's too much. Like it might be bimodal, it might be a normal curve, but there's got to be some measures of central tendency in there. So yeah, I don't know. I don't like that one. I'm going to go with Bob.

S: Okay, Evan.

Evan's Response[edit]

E: Well, I guess I'll agree. And I'll also say the biomass one is the fiction. For a lot of the reasons that were already stated, certainly the one order magnitude that you threw in there, Steve, although I think that was meant to reassure us, but I have a feeling we're going to see something more extreme, much more extreme than that as far as a differential. So that one's the fiction.

S: And Jay.

Jay's Response[edit]

J: I'm just going to go with the group here because I don't want to be left out.

S: Go with the herd.

S: Okay. Co I'm going to take this over to the first order starting with number three.

Steve Explains Item #3[edit]

S: Researchers created intracellular sensors that use nanodiamond quantum sensing. You all think this is science, which means I can get you to believe anything as long as it has the word nano and quantum in it.

C: Shut up.

S: But this one is science.

E: Yeah. You tried to get us at that one.

S: But this is cool. Yeah, it doesn't obviously you can go either way. Oh, nanos quantum. It's got to be real or Steve's trying to get us in this case, it turned out to be real. But yeah, this is, this is pretty awesome. So they make these nanodiamonds and it's pretty as you might imagine, it's pretty technical. They have these nitrogen vacancies inside the diamond, which are paramagnetic and they could control them with these optical tweezers and then use them to sense both magnetic field and temperature. And they could do it inside of a living cell. It's pretty cool. It's almost like a little MRI scan inside of a cell.

E: How do you get to the point, even think about that. It's just the concept alone is.

S: It's mind blowing that we could do this shit. Totally mind blowing.

E: That's amazing.

Steve Explains Item #2[edit]

S: All right, let's go back number two. Researchers find that plants under stress emit recordable sound, about as loud as a normal speaking voice. Guys, you all think this one is science. And this one is science. This is science. So what's the gotcha? What did I not tell you?

E: About the plants?

S: Yeah, about this one.

J: You can't hear it. It's subsonic or something like that.

S: It's supersonic.

C: Oh, it's supersonic. Okay, other direction. Yeah, okay. Good.

B: So it's loud, but you can't hear it.

S: You can't hear it because it's ultrasonic.

C: It's loud to the recording apparatus.

E: Because if I heard their stress, I might feel sympathy or emotional.

S: They're screaming. They're screaming at ultrasonic.

B: That's pretty interesting, though, because you could then know for sure that, oh, this plants under stress. And then take it from there.

S: That's the idea. That's the idea we could use this in agriculture as a way of determining how well the plants are doing. And they emit different kinds of sound for different kinds of stress. Now, they say it sounds like a bunch of bubble wrap popping. That's what the sound sounds like.

C: Oh, weird.

S: You get if you lowered the pitch so that it was within the human hearing range.

C: What's the mechanism? [bunch of pretend aching sounds from the boys] What's actually doing that?

J: Oh my god.

E: It's displacing air somehow.

S: I don't know.

C: Right. Like, they don't have mouths or joints.

S: They do have, they have stomas.

E: They emit, they emit gases, though.

S: They do.

C: Yeah, bubble wrap. That is also just the mid and gas. That's true.

S: Now, the two plants that they studied, tell me guys what this reminds you of. They studied tomatoes and tobacco.

E: Oh, tomaccos from the Simpsons.

S: The tomaccos from the Simpsons!

E: Oh, my god.

S: I think it's just a coincidence but I mean, really?

E: Once again.

S: Yeah.

E: Number 833, the things that the Simpsons prediction that ultimately came true.

Steve Explains Item #1[edit]

S: All right. All this means that a large survey of life on Earth finds that total biomass remains fairly consistent (within one order of magnitude) across the entire range of body size for all living things is fiction. So what is the distribution, guys? What would you guess? Because they did do a massive, massive survey.

C: Ants. Is it ants? It's all ants.

B: I think bacteria.

C: And I think little things like krill or ants.

E: Bugs, ants, something small.

B: Bacteria and archaea.

E: And a lot of it.

C: I don't think so. I think it's, I think they're too small.

S: What does that curve look like? What do you think the curve looks like? If it's a big spike at ants or bacteria or you think it has some other shape to it?

E: Bumps like a bumpy.

B: A big spike.

E: A couple of humps.

C: Yes, like askew to the left on small end.

S: Cara, you actually threw out the word when you were speculating. It's bimodal.

C: Oh. Of what? What are the modes?

S: And the keys are at the very low end and the very high end. And it gives you a little...

C: Like big stuff, like whales and shit.

E: That's what I said. So it's a two hump camel.

C: There you go.

S: So the smallest things have the most biomass and the biggest things have the highest biomass. And the medium things are less. And even by like multiple orders of magnitude.

B: Yeah.

S: And this is all living things, not just animals. So this is also trees.

B: Trees.

S: I think that's, so yeah, when you're talking about like the redwood trees or whatever, like their biomass is like similar to bacteria but cows are somewhere in the middle. Or orangutans or squirrels. So yeah, interesting.

B: Which has, which is higher though?

S: Part of the reason why I use this as the fiction was because I didn't want to vouch for the actual results because they said that they have to continue to gather more data because the uncertainty is two orders of magnitude for the estimating the biomass of like. Estimate the biomass of everything this big, you know? And this is obviously a massive undertaking. So they said there's still too much uncertainty to really be sure, but this is where things are shaking out. That they're very small and they're very big and have the most biomass.

B: I bet we find that bacteria have even more than-

S: Than trees?

B: -than we think.

S: Maybe it's fungus. Maybe it's the, those giant underground fungal things.

B: Yeah.

S: You don't think so? You're a bacteriofile, Bob.

C: You are, yeah.

B: But they find it everywhere. You keep digging down in the ground.

S: Yeah, but they're really tiny.

C: But it's so small. It's so small. Think about how many-

B: It's still number one. it's still up there.

C: Yeah, think about how many bugs there are. You keep finding those everywhere too and they're pretty big and heavy. Compared to bacteria.

B: The bugs lost out. We heard it. It's the microbes and the trees.

C: Yeah. I thought when you said on the small end, it was both.

S: Again, it's a curve. The smaller you get the more biomass, you have the bigger you get, you get more biomass.

C: But there's got to be a point where-

S: And in the middle somewhere-

C: Viruses aren't the most biomemass.

S: I don't know if you count viruses.

C: You can't. There's no way they would have more biomass than bacteria. So there is a point, oh, you don't know if you're counting them as being alive.

B: They're not really living though.

J: Cara, let me ask you a question. Are viruses sentient?

C: I know. I was like, we're going to open up the same can of worms.

E: Let's ask ChatGPT if.

S: They're intelligent, but they're maybe not sentient. Do they have emotion?

C: Maybe. Their emotion is, I don't want to not live. I shall find a new host.

E: Do they scream about their stress at supersonic speeds?

S: All right. Well, you swept me this week, guys. Good job.

J: Thank you. I knew where to go.

S: Jay's like, I'm not going out of my own. No way.

Skeptical Quote of the Week (1:44:45)[edit]


The best science communication invites you to consider the complexity of the world, and the worst invites you to ignore the complexity.

 – Michael Hobbes, journalist & podcaster


S: All right, Evan, give us a quote.

E: All right. This quote comes from a listener, Damien, from Brisbane, Australia. He was listening to the Maintenance Phase podcast and heard this quote from Michael Hobbes. "The best science communication invites you to consider the complexity of the world and the worst invites you to ignore the complexity." Yes, and I think that's consistent with what we've experienced in doing-

S: Absolutely.

E: -our show for the better part of two decades.

S: Yeah, I don't remember who first said it, but it frequently comes up in skeptical circles. The notion of, I think you'll find it's a little bit more complicated than that. It's like always more complicated.

E: Always.

S: Always more complicated. And that's kind of the point of our summarizing the issue or doubling into the issue, whereas a lot of times people on the other end of the spectrum are trying to make things, they're trying to oversimplify things.

B: Yeah. The one that's always on in layers to go through.

S: Yeah. If you're doing correctly, there is. All right, well, thank you all for joining me this week.

B: Sure, man.

E: Thank you, Steve.

J: You got it.

Signoff[edit]

S: —and until next week, this is your Skeptics' Guide to the Universe.

S: Skeptics' Guide to the Universe is produced by SGU Productions, dedicated to promoting science and critical thinking. For more information, visit us at theskepticsguide.org. Send your questions to info@theskepticsguide.org. And, if you would like to support the show and all the work that we do, go to patreon.com/SkepticsGuide and consider becoming a patron and becoming part of the SGU community. Our listeners and supporters are what make SGU possible.

[top]                        

Today I Learned[edit]

  • Fact/Description, possibly with an article reference[10]
  • Fact/Description
  • Fact/Description

References[edit]

Navi-previous.png Back to top of page Navi-next.png