{{transcribing all
|date = 2023-06-02
|transcriber = Hearmepurr
|}}
{{Editing required
|transcription =  
|proofreading = y <!-- please only activate when some transcription is present. -->
|formatting = y
|links = y
|segment redirects = y <!-- redirect pages for segments with head-line type titles -->
|}}
{{UseOutline}} <!-- Remove when transcription is complete -->
{{InfoBox
|episodeNum = 929
|episodeDate = {{900s|929|boxdate}} <!-- inserts the correct and formatted date -->
|verified = <!-- leave blank until verified, then put a 'y'-->
|episodeIcon = File:929 Blue Hole.jpg

'''S:''' Jay, it's Who's That Noisy time.

'''J:''' All right, guys, last week I played this noisy:

[background hissing with bird calls and a strong plunking in the foreground]

'''J:''' So what do you guys think?

'''E:''' A bird dropping marbles into a pond. Is that that kerplunk almost kind of sound?

'''J:''' It does have that kind of a drop noise to it, doesn't it?

'''S:''' The bird almost sounded fake to me. Remember those whistles you put a little water in it shaped like a bird?

'''C:''' Yeah, it sounds just like one of those.

'''E:''' Right?

'''C:''' That's so funny. I love those whistles. Those are like bird call whistles.

'''E:''' Yeah, those are fun.

'''E:''' I'll assume they're real birds.

{{anchor|vanguard}}'''J:''' Before I get into the answers, '''I have a correction from a previous Who's That Noisy.''' Do you guys remember I was talking about the Vanguard satellites?

'''S:''' Mhm.

'''E:''' Yes.

'''J:''' Well, a listener named Craig wrote in and said, "Hey, Jay, hope you're doing well. I just listened to your [[SGU Episode 928#vanguard|Who's That Noisy segment]] on the last podcast where you described the Vanguard satellite's transmitter power as 10 megawatts." And he says, "I think this was unlikely. The batteries required to output such a signal would mean the satellite could never be launched, especially back in the 50s. I thought perhaps you'd mistaken the abbreviation 'm', so lowercase 'm', capital 'W', which would be milliwatts." So he said 10 milliwatts seems more reasonable, and he looked it up and he is correct. So I made a mistake from megawatts to milliwatts. He said I was only eight orders of magnitude over. So I appreciate the correction. I have absolutely no problem [[Nintendo Wii Noisy (WTN 829)|putting corrections up]], so thanks for that. Listener named Myron Getman wrote in to Who's That Noisy. He said, "Jay, I'm pretty sure this week's Noisy is a {{w|prairie chicken}} booming to attract females." That is not correct. When I read this email, I just instantly pictured a chicken with a boombox. Remember those guys?
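
''[Editor's note: taken literally, 10 megawatts is <math>10^7</math> W and 10 milliwatts is <math>10^{-2}</math> W, a ratio of <math>10^9</math>, i.e. nine orders of magnitude; "eight" is as said on the show.]''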

'''B:''' Yeah.

'''J:''' Most people don't even know what the hell they are anymore.

'''E:''' The boombox? I remember that.

'''J:''' Another listener named Marie Terrill, she says, "Hi, Steve. I love you guys so much and listen every week. It's the highlight of my week. I think this week's What's That Noisy is the sound of an {{w|American dipper}} dipping into the water of a river. The other sounds support this, the river and the bird song of the dipper." So I listen to the sound of an American dipper and there is a little bit of a similarity. It is not an American dipper dipping into the water, but that is definitely not a bad guess. Another listener named Tim Welsh wrote in and said, "Hey, Jay, I'm pretty sure that noisy is a cat bird, which is a type of mockingbird. They sound a lot like R2-D2. Love the show and I hope I can make it to Notacon." Tim, that is not correct, but because you mentioned R2-D2, I will give you a 1/8th correct answer just because I love R2-D2. But we have a winner this week. A listener named Lydia Parsons wrote in and said, "Hello, Jay, my guest for this week's Who's That Noisy is the call of the {{w|greater sage-grouse}}."

'''E:''' Ooh, grouse.

'''J:''' "Hopefully that is correct because I recognized it almost immediately from my years of animal show binging as a kid." So Lydia, you got it right. That is a greater sage-grouse. It's also known as the sage hen. It's the largest grouse, which is a type of bird in North America. Its range is sagebrush country in the Western United States and Southern Alberta and Saskatchewan, Canada. So I mean, yeah, it's a bird, of course. I'm sure most people knew it was a bird, but this is a specific one, the greater sage-grouse. [plays Noisy]

'''E:''' What's that plunks?

'''S:''' That's the bird.

'''E:''' The bird makes the plunk sound?

'''S:''' Yeah.

'''J:''' Yeah, that is the sound.

'''S:''' It's not unlike a {{w|brown-headed cowbird}}. They kind of make also that little plinking. I think it's like a little bit of electronic sound. Really weird when you first hear that coming from a bird.

{{anchor|previousWTN}} <!-- keep right above the following sub-section ... this is the anchor used by wtnHiddenAnswer, which will link the next hidden answer to this episode's new noisy (so, to that episode's "previousWTN") -->
=== New Noisy <small>(1:1:56)</small> ===

'''J:''' All right, I got a new noisy this week sent in by a listener named Johnny Noble, and here it is.

[_short_vague_description_of_Noisy]

'''J:''' So my hint for this week is that it's not a bird and it's not any kind of sea mammal.

'''C:''' Thank you.

'''J:''' I deliberately went with something that is not either of those two things. So guys, if you heard something cool this week or {{wtnAnswer|930|if you think you know what this week's noisy is}}, just email me at WTN@theskepticsguide.org. Don't bother emailing me through the skeptics guide website because you cannot put attachments on there. Just use WTN@theskepticsguide.org. I'll get it. I'm the only one that'll get it and it's nice and easy.

== Announcements <small>(1:17:54)</small> ==

'''J:''' So we have things going on, guys. The SGU has things on the calendar. We got May 20<sup>th</sup>, which is not far now. What are we, as we sit here, we're about just over three weeks away.

'''E:''' Roughly three weeks, yeah.

'''J:''' Yeah. So that show is going to start at 11 a.m. Eastern time. The first hour will be for patrons and then the remaining five hours will be for the open public. We invite everyone to come check us out. We will have a link on our website as soon as that link is created. Probably within a few days before the event, so it's not going to be up very soon, but it'll be up there. And we're just going to be doing a lot of different things for fun, having conversations about stuff that we normally don't talk about and doing stuff. So join us, live stream, six hours if you're a patron, five hours if you're not a patron. That's Saturday, May 20<sup>th</sup>. In case anybody's interested, we will be at Dragon Con this year.

'''E:''' Atlanta, Georgia.

'''J:''' Yep. Just letting you know. Just letting you know we'll be there. If you're there, come up and say hi. And then the other big thing is there is a conference that we are having November 3<sup>rd</sup> and 4<sup>th</sup> of this year. It's called Notacon. Why is it called that, you might ask, Cara? Because-

'''C:''' I know.

'''J:''' Yes, well ''now'' you do. That's all I talk about. So this is a conference that is not going to be like any conference you've ever been to because we are not going to have typical conference-like things happening. This conference is about socializing because that is what an amazing number of people have been emailing us saying, when are we going to get NECSS back in person because we miss all of our friends and we want to socialize. And then after a lot of consideration, we realized maybe we should just do a con where people have the time to socialize and have fun and enjoy the skeptical community that the SGU has built. And that's exactly what we're doing. So there will be entertainment at this conference, but that's basically what it's going to be. We're going to be providing entertainment. George Hrab, Andrea Jones-Rooy and Brian Wecht will be joining the entire SGU and we're going to be doing a bunch of fun things over the course of that Friday and Saturday to entertain you. But it's not going to be lectures. It's not going to be people standing up there talking for 45 minutes. It's going to be fun stuff. There's going to be a lot of audience interaction. And most importantly, there will be plenty of time to hang out, to talk, to have meals and to just spend time with the other people that are attending the conference. So we're getting a lot of people that are very interested, emailing us questions. You don't have to ask questions. It's exactly what I said. It's going to be in White Plains, New York. The hotel is great and it has a pool. There will be a shuttle from Westchester Airport to the hotel. If you fly into the New York City airports, you can simply take other ground transportation, like an Uber or something like that. Probably what you should do is you should coordinate sharing transportation with other people that you know will be flying into whatever airport you choose to fly into. And also just split a room with someone. You know what I mean? There's no reason why you need to have a whole room to yourself. If you want to save money, just bunk in a room with someone or more than one person. But please do come to the conference because it's going to be a ton of fun. It'll be unlike anything else you've ever done. That's what I'm putting right in front of you. So [https://www.theskepticsguide.org/events go to our website]. The signup link is there. Buy tickets. Right now, everything is happening. So if you did pre-register with us, now's the time to buy your tickets. Now there are rooms that are being held for the conference, but I can only guarantee that the first hundred rooms will get the special rate. As soon as we get close to booking a hundred rooms at the hotel, I will try to get more. But I'm just putting it out there. If you want to stay in the hotel where this is happening, be one of the first hundred people to sign up and that'll guarantee you'll get a room in that hotel.

'''S:''' All right. Thanks, Jay.

{{anchor|followup}} <!-- leave these anchors directly above the corresponding section that follows -->
== Questions/Emails/Corrections/Follow-ups <small>(1:21:50)</small> ==
=== Question #1: P-Values ===
<blockquote><p style="line-height:115%"> When you all where talking about the full moon and suicide study last week kara said that “p-values as we know are pretty mean goes as they tell us a little bit more about the analysis than the actual (pause). That’s why effect sizes matter”. Could you please elaborate on this and the sentence Cara stop herself from finishing accidentally? How are p values better for understanding the analysis and what are then effect sizes better for? This seems like a really important statistical concept to grasp for us skeptics so I wanted to ask this. <br> –Anthony</p></blockquote>

'''S:''' All right, guys, we're going to do an email. This one comes from Antony. And Antony writes, "When you all were talking about the full moon and suicide study last week, Cara said that P values, as we know, are pretty mean. Goes as they tell us a little bit more about the analysis than the actual pause." That didn't make sense.

'''C:''' Didn't you say pretty meaningless?

'''S:''' Well, he said, "they are pretty mean goes as they tell us a little bit more about the analysis than the actual", and then you paused. "That's why effect sizes matter."

'''C:''' Yeah.

'''S:''' Yeah. I don't know.

'''C:''' I don't think that's exactly what I said, but OK.

'''S:''' I think we lost the translation there. "Could you please elaborate on this? And the sentence Cara stopped herself from finishing accidentally. How are P values better for understanding the analysis and what are then effect sizes better for?"

'''C:''' I can elaborate. Can I elaborate, Steve?

'''S:''' Yeah, go ahead.

'''C:''' And then you can elaborate?

'''S:''' Yeah.

'''C:''' All right. So the effect size is really the main thing you're looking for in a study. So here's the difference between the effect size and a P value. A P value specifically tells you whether or not something reaches a level of significance. So basically you have a population and you're taking a sample of that population. And based on the normal curve, if you look at sufficiently large enough samples of data, you're always going to find some significant contrast. Let's say you're doing T-tests, ANOVAs, correlational studies, whatever your statistical analysis is, when you compare enough things within that dataset, some of them are going to come up as related to one another in whatever way you're studying. And what you're asking the analysis is, is this due to chance or is this an actual effect? And all a P value tells you, whether your cutoff is 0.05, 0.1, whatever, is if it's greater or less than that cutoff, if it's significant or not. We can say that we think that this is a real effect and not that it's due to chance. But if that number is 0.001, 0.0001, 0.00001, that doesn't tell you anything. That's when you have to look at the effect size. The effect size tells you the magnitude of the relationship. Is it a strong relationship or is it a weak relationship? Does this variable affect this other variable a lot or a little bit? A P value is kind of an all or nothing response. Either it is significant or it's not, and you can easily hack that number, even unintentionally. It's a good statistic. It's important. It tells you if based on the way that you're doing your analysis, it's likely that these things are related or that they happen by chance alone. But it doesn't tell you how strong the relationship is. That's what the effect size is for. That's why we're seeing more and more journals that are requiring effect sizes to be published. Does that make sense?
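
''[Editor's note: a minimal sketch of Cara's point, not from the episode; the numbers are invented for illustration. With a very large sample, a tiny true difference can clear the p < 0.05 cutoff while the effect size (here Cohen's d) stays negligible:]''

<pre>
# Two groups of 100,000 draws whose true means differ by only 0.02 SD.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
a = rng.normal(0.00, 1.0, 100_000)  # "control" group
b = rng.normal(0.02, 1.0, 100_000)  # "treatment" group, tiny true effect

t, p = stats.ttest_ind(a, b)  # standard two-sample t-test
pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
cohens_d = (b.mean() - a.mean()) / pooled_sd

print(f"p = {p:.3g}")         # far below 0.05, so "statistically significant"
print(f"d = {cohens_d:.3f}")  # about 0.02: a negligible effect size
</pre>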

'''S:''' Yeah, it does. But let me give you a couple other ways of looking at it because it's more complicated than that, obviously. I know you know that.

'''C:''' Obviously.

'''S:''' There's multiple different statistical ways we could look at a study to say, is this significant? Is it statistically significant? Is it likely to be true? And how much of an effect is there? How much does this change the probability that this is true or not? So the technical definition actually of a p-value is the probability that the results would be what they were or greater given the {{w|null hypothesis}}, which is kind of a backwards way of looking at it.
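
''[Editor's note: in symbols, that definition is <math>p = \Pr(T \ge t_{\text{obs}} \mid H_0)</math>: the probability, computed assuming the null hypothesis <math>H_0</math> is true, of a test statistic at least as extreme as the one observed. It is not <math>\Pr(H_0 \mid \text{data})</math>, which is the Bayesian quantity Steve gets at below.]''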

'''C:''' But sadly, that's how we do all of our statistics is based on null hypothesis testing.

'''S:''' It doesn't really mean the chance that it's real or not. That's a big thing to make sure that we don't walk away with that. The probability that the effect is real.

'''C:''' The probability is not the probability that it's real. Yes, you're right. That's a very important point to make.

'''S:''' That's actually more of a Bayesian analysis. The Bayesian analysis is what is the pre-test probability and what's the post-test probability. So how much does this data change the probability that the hypothesis is actually true? That's actually a really good way of analyzing the data. For clinical studies, effect size is critical. And it's not just like what's the effect size, but is it clinically significant? Like if it's a reduction in pain, is it an amount of reduction that an individual person would notice or is it just a statistical phenomenon? It reduced your cold by on average one hour. It's clinically irrelevant, even if it's statistically significant. You can also slice that data up differently to make it more intuitive or to get a better perspective on if it's meaningful. I like the number needed to treat way of looking at it. So how many people would you need to treat with this treatment before one person is likely to have benefited? That's another way of looking at the effect size. Maybe you need to treat 100 people just to help one person versus how many people are harmed for how many people you get treated. So the p-value is just one way of looking at the data statistically. It's not a terribly good way.
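
''[Editor's note: a worked example of the number-needed-to-treat arithmetic Steve describes, with made-up event rates:]''

<pre>
# NNT = 1 / ARR, where ARR is the absolute risk reduction.
event_rate_control = 0.12  # 12% of untreated patients have the outcome
event_rate_treated = 0.10  # 10% of treated patients have the outcome

arr = event_rate_control - event_rate_treated  # absolute risk reduction, 0.02
nnt = 1 / arr                                  # about 50

print(f"ARR = {arr:.1%}, NNT = {nnt:.0f}")  # ARR = 2.0%, NNT = 50
</pre>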

'''C:''' It's not actually a very good way at all.

'''S:''' It's way overused and people interpret it incorrectly. You're far better looking at several different ways of looking at the statistics. And just please don't confuse a p-value for the probability that this is a real phenomenon because that's not what it is.

'''C:''' Well, and let's actually, if you don't mind, let's break that down for just a second because I think it'll be helpful. I don't know if everybody knows what the null hypothesis is, but when we do testing, we might say, what is the likelihood that if I give this plant coffee grounds that... I don't know, what's something that we know is going to work? Like if I give this plant plant food, that it will grow taller than the plants that I don't give plant food to. That is your hypothesis. That's hypothesis one. The null hypothesis is if I give this plant plant food, it will not grow any taller.

'''S:''' Is that the hypothesis is not true.

'''C:''' Right. It's saying that the hypothesis is not true. And what we do in science is we try to disprove the null. We cannot prove the hypothesis. We try to disprove the null. And so what we're saying with a p-value is what is the probability that if the null is true, meaning there is no relationship between these variables, that the plant food and the growth of the plant are in no way related. What is the probability that we will get a chance result?

'''S:''' Yeah, this data that we're looking at.

'''C:''' Yeah, the data that we're looking at will say by chance there's a relationship here when we know there's no relationship because the null hypothesis is true. So when you said earlier, how did you phrase it? It's not the probability that the thing works. It's the probability that the not thing doesn't work. And I know that that sounds crazy, but that is a really important way that we do science. So really for a lot of people, p-values is just a cutoff. It's a threshold. It's all it is.
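
''[Editor's note: a minimal simulation of this idea, not from the episode. When the null is true by construction (the "plant food" does nothing), about 5% of experiments still come out "significant" at p < 0.05:]''

<pre>
# Repeat the plant experiment 10,000 times with no real effect.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
trials, false_positives = 10_000, 0
for _ in range(trials):
    fed = rng.normal(10, 2, 20)    # heights of 20 plants given plant food
    unfed = rng.normal(10, 2, 20)  # same distribution: the null is true
    if stats.ttest_ind(fed, unfed).pvalue < 0.05:
        false_positives += 1

print(false_positives / trials)  # close to 0.05, the chosen cutoff
</pre>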

'''S:''' Is the data interesting? It doesn't mean it's real or not.

'''C:''' Is the data even worth continuing to talk about?

'''S:''' A good way to think about it, I think bottom line, because I know they're probably confused at this point, is that if something is not statistically significant, then probably there's no real effect there. If it is statistically significant, then it may be. It doesn't mean that it is.

'''C:''' Yeah, then there still might not be an effect.

'''S:''' It still might not be an effect. At least you're in the ballgame now. If you're not even statistically significant, you're not even in the ballgame. There's definitely nothing going on here.

'''C:''' Yeah, I almost think about it as the way that we often will talk about, and this is a non-statistical thing that we do. It's more of a critical thinking reasoning thing that we do. But Bob, I'll see you say this a lot, and I think it's important. Is there plausibility? Is there face validity to this claim? And that's sort of in some ways the way we should be looking at a p-value.

'''S:''' Yeah, now you're getting Bayesian.

'''C:''' Now it seems like there's something.

'''S:''' Yeah, we're talking about prior probability and post probability. All right. Let's go on. Let's go on with Science or Fiction.

{{top}}{{anchor|sof}}
{{anchor|theme}} <!-- leave these anchors directly above the corresponding section that follows -->
== Science or Fiction <small>(1:29:53)</small> ==
<!--  
|swept = <!-- all the Rogues guessed right -->
}}
''Voice-over: It's time for Science or Fiction.''

'''S:''' Each week I come up with three science news items or facts. Two real, one fake. And I challenge my panel of skeptics to tell me which one is the fake. We have a theme this week.

'''J:''' Uh-oh.

'''S:''' It's a topical theme.

'''E:''' Skin.

'''S:''' These are recent news items. They all deal with artificial intelligence.

'''J:''' Yeah, baby.

'''C:''' Oh, shit.

'''S:''' Let's see how much you've been paying attention.

'''J:''' Let's do it.

'''E:''' Oh, gosh.

'''S:''' Okay. Item number one. Chat GPT-4 was able to pass the uniform bar exam scoring in the 90th percentile. Item number two. The US Copyright Office has issued guidance that registrants must disclose any AI-generated material in their work and it will not issue copyrights for content created using artificial intelligence software.

'''C:''' What?

'''S:''' Item number three. An amateur Go player without any computer assistance beats the best Go playing AI in 14 out of 15 matches. All right, Evan, go first.

=== Evan's Response ===

'''E:''' Okay, Chat GPT-4. Is that the one that's accessible?

'''S:''' Yeah.

'''E:''' Isn't there one that's not accessible yet? Is that five? It must be five.

'''B:''' Five's not out.

'''E:''' Five's not out. That's what I was thinking. All right. So, Chat GPT-4 passed the uniform bar exam scoring in the 90th percentile.

'''S:''' So it could be a lawyer, basically.

'''E:''' There's not much to analyze here. It just either did or it didn't. I can't see anything in here. It's kind of the either hint one way or the other as to whether it's correct or not correct. And the second one. The US Copyright Office has issued guidance that registrants must disclose any AI generated material in their work. The Copyright Office? And it will not issue copyrights for content created using artificial intelligence software?

'''S:''' Yeah, you can't copyright anything that you make if you used artificial intelligence to make it. Right? So that artwork you made using Mid Journey can't be copyrighted.

'''E:''' Would the Copyright Office put that kind of restriction on there? I mean, do they care that much?

'''B:''' Are you talking about the thing being patented or the actual patent form itself?

'''S:''' Copyrighted. Intellectual property.

'''E:''' I mean, it sounds like I don't know. Maybe they haven't issued that guidance yet. I don't know. And do they care that much? They've been lax in so many other things. They give you a copyright on the freaking what? The peanut butter and jelly sandwich? I mean, right? But this they're going to go full blown, like, hardcore over? Maybe not. Maybe not. And then this amateur Go player without any computer assistance beat the best Go playing AI 14 out of 15 matches. I'm trying to think about, oh, the computers. We spoke about this years ago. There was a test with computers. And was it the IBM computer? The one that did the Jeopardy one? It was a different one, right? But it definitely was Go. And I remember it.

'''S:''' Bob, contain yourself.

'''E:''' I remember it performed well. Steve, don't interrupt his assistance in my guess here. But in any case, so, okay. Yeah, but I think that computer was like, what, designed to play Go? Or is Chat-GPT? I'm sorry, what is this? Best Go AI. Go playing AI. This is specifically Go playing AI. Hmm. Maybe the human still has advantages there. I'm going to say the Copyright Office one is the fiction. And I just don't know that they've been this tight with what gets copywritten and what doesn't. Because I've seen all kinds of weird stuff kind of get through. So I have a thing, that one's weak. So, copyright, fiction.

'''S:''' Okay, Cara.

=== Cara's Response ===

'''C:''' Sure, it passed the bar. 90th percentile. Details matter. Maybe it didn't pass that high. U.S. Copyright Office has issued guidance. I like that wording. Makes me think it doesn't have to be set in stone yet. It's just guidance. You just have to disclose it. And if you disclose it, we're not going to give you a copyright. I don't know if there's laws yet.

'''S:''' I mean, I'll just tell you, that means it's their policy.

'''C:''' It's their policy, right? But I don't know if there's actual legislation yet.

'''S:''' Well, they're a regulatory body. They don't make legislation. They just carry it out. So that's basically as good as it gets for, it's like the FDA making a decision about a drug. It's the same thing. It's not a law. It's just they're just doing their regulatory thing. This is what they're doing. They're not giving copyright to intellectual property created with AI.

'''C:''' I think that makes sense. I think that it makes sense that very often when there's something new or something worrisome from a regulatory perspective, there's a sweeping response in like a severe direction. And then it iterates over time. And I think probably the safe response in this case might be a little more draconian as opposed to just being completely open. I don't think we've had time yet to get into the nuance. So in order to prevent any problems, I could see why they would just say, no, no AI in anything you're trying to copyright right now. And then we'll figure it out later. I think the one that bugs me is the Go one because not only are you saying that a Go player beat, like a real person beat the AI 14 out of 15 times. You're saying that an amateur, and I'm not saying an amateur is bad, an amateur is great, but you're not even saying that a professional did it. I don't think that's true. I think that the AI probably was matching tit for tat with this Go player. So I don't know. That's the one that bugs me. I'm going to say that one's the fiction. I think it's easier to pass a test than it is to play a complex game that has a lot of like chaos theory built into it. And even still, I think it still probably did it.

'''S:''' Okay, Bob.

=== Bob's Response ===

'''B:''' Oh, man. I'm going to be so pissed at you over this. Yeah, the bar exam. I remember reading about that going back when 4 was released. 90th percentile seems a little high. I don't know if this is a new test and they hit 90. For some reason, I think it wasn't quite up to 90th percentile. But yeah, that's within the realm of reasonableness. The copyright office kind of makes sense. It's so crazy right now. I'm just kind of like, yeah, we're not going to get mired in this right now. And I could see them potentially changing their mind in the future. But they just don't want to just even deal with it. I can kind of see that as well. The one that's like, you've got to be shitting me is this third one with the Go. AlphaGo. AlphaGo, deep learning, big news item, defeated the champ. I don't remember how brutal of a beating it was. Or even if it was brutal. I know that AlphaGo won. It was huge because Go is much more complex than chess. Much harder to do. But they used their new system which basically didn't feed any rules into it at all. It was just like, here's the game, here's the rules. Figure it out. And when they did that with chess, that's when they created the most amazing chess, superhuman chess program ever that no person will ever beat. And if you said that this was chess, then that would absolutely be the fiction. There's no way anyone's beating that AlphaZero chess. But AlphaGo, I just don't... could it have been he played so counterintuitively like Kirk playing Spock that it just like blew out the algorithms? I doubt it. I mean, I really can't imagine it's going to beat this. But you know that I knew that you know that I know that you know that I know that that's bullshit.
 
'''C:''' He's so mad. I love it. Bob, what are you going to say? I got to know if I'm right or wrong.
 


'''B:''' I mean, my gut feeling was like he knows I'm going to, Steve knows I'm going to go for this and call it fiction.


'''E:''' He-he. Go.


'''B:''' He was trying to catch us out on this, specifically me.


'''C:''' You know what a rabbit hole, you know how dangerous it is.


'''B:''' Yes, I know. I know the rabbit hole. Everything else seems real. This is the one that's like, really? And I just know it's going to be like, oh, yeah, he did this stupid trick. He did the Data trick, like Data on Next Gen, where he played the champ and he didn't play to win. He played not to lose. And he frustrated the guy into quitting. Is that what happened? Did he do the Data maneuver? I don't know.


'''S:''' So that's your answer?


'''B:''' I'm going to put my trust in deep learning AlphaGo and just say no, the amateur did not beat it. Fiction.


'''S:''' Okay.
 


'''E:''' There you go, Bob.


'''S:''' Ok, and Jay.


=== Jay's Response ===


'''J:''' The first one, I do think that Chat GPT-4 passed the uniform bar exam in the 90th percentile. Yeah, I think that happened. Absolutely. The second one about the copyright office. So this is a little tricky. I'm going to read this correctly. They issued guidance that registrants must disclose any AI generated material in their work and it will not issue copyrights for content created using artificial intelligence software. Well, I agree with Evan that I don't think they care. I think that they're going to let you copyright something. It's a timestamp. Just says I made this on this date. And if there's ever a conflict, they will check dates to see who created it first. I don't think it matters if AI helped in any way. I think it is the right thing to do to disclose it. And that's a whole other conversation like what level of disclosure should you give and all that stuff. As life goes by, we will figure all that out. But no, I really don't think that the copyright office is paying attention to details on that level. They couldn't administer that. So I agree with Evan. That one is definitely the fiction.


'''B:''' Yeah, but that means AlphaGo is science.


'''E:''' Well, tune in next week, folks. We'll give you the answer to this week's... Steve, you don't have to play it up.


=== Steve Explains Item #1 ===


'''S:''' You all agree on the first one, so we'll start there. Chat GPT-4 was able to pass the uniform bar exam, scoring in the 90th percentile. You all think that one is science. And that one is science. Yeah. Now, Chat GPT isn't passing all of the exams that they're giving it. A lot of people are doing this. This is the one I think where it did the best. And it kind of makes sense, because the law is very language-based. And it blew away the tests. It would have passed in every state in the US. It exceeded. It got a combined score of 297, which is greater than the highest threshold in the highest state. That's basically better than 90 percent of human test takers. It did do a little bit better in the multiple choice than in the essay part. But it did well in the essay part, too, having to actually write out answers. So, yeah, it did really, really well. It's not that surprising. I know it's passed medical exams, although not every one. The specialty ones that it was given, it hasn't done better than humans in every professional exam that it's been given. But it's kicking butt so far. Okay.
'''B:''' Yeah, pretty impressive.
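
A quick aside on what that percentile figure means, as a toy sketch (the cohort below is made up, not real bar exam data; the numbers are chosen only so the output lands near the reported figure):

<syntaxhighlight lang="python">
import random

def percentile_rank(score, all_scores):
    """Percent of test takers scoring at or below `score`."""
    return 100.0 * sum(s <= score for s in all_scores) / len(all_scores)

random.seed(0)
# Hypothetical cohort of scaled bar-exam-style scores (mean 270, sd 20).
cohort = [random.gauss(270, 20) for _ in range(10_000)]
print(f"percentile rank of a 297: {percentile_rank(297, cohort):.0f}")  # ~90, matching the claim
</syntaxhighlight>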


'''S:''' I guess we'll go to, hmm, should we go to two or three? Two or three? Let's go to number three.
 


=== Steve Explains Item #3 ===


'''S:''' An amateur Go player without any computer assistance beat the best Go playing AI in 14 out of 15 matches. Bob and Cara think this is the fiction. Jay and Evan think this is science. So let me ask you a question. Did any of you guys see the email today where somebody sent us this news item?


'''C:''' No.


'''E:''' Wait, what?


'''C:''' Are you kidding me?


'''S:''' Because-


'''C:''' No Steve, we have jobs.


'''S:''' -an amateur Go player did beat the best Go playing AI in 14 out of 15 matches. But not without computer assistance. So this is the fiction. So this has been all over the news. I thought I was going to get you on the without any computer assistance thing because you got to read the details.


'''B:''' Oh, nice. Nice try. Nice try.


'''S:''' What happened was they used a computer to figure out the Go playing AI's weaknesses.


'''B:''' Oh, interesting.
 
'''S:''' And it found out a hole in its strategy. And then without further assistance, an amateur Go player was able to learn the technique from the computer and then use it against KataGo, K-A-T-A Go, the current, I guess, best Go program. Was able to use it against it and beat it 14 out of 15 times. It has to do with, like, encircling a group of your enemy's stones with your stones. Like, that's the strategy. It's kind of a weird, it is a weird strategy. And the other thing is, like, a human would see it coming a mile away. But it was just never part of the training data because it's not something a professional would do.


'''B:''' I don't think it had training data. So what do you mean by training data?


'''S:''' Well, I mean, whoever it was playing against to learn how to be good at playing Go.

'''B:''' No, I think that was the point of the latest deep learning models was that it didn't, there was no training data.


'''S:''' It wasn't pre-trained.


'''B:''' It trained against itself. It played itself. And that's how it learned what worked.


'''S:''' Well, it never encountered this strategy.


'''B:''' Wow.


'''S:''' So it was a hole. Because it doesn't, it's a good example of how powerful and brittle AI can be, right? At the same time. Because, yes, it can blow away Go masters, no problem. It's really, really good. But if you find something that just, it wasn't part of its pre-existing knowledge base, it doesn't have the real deep understanding to innovate or to see something, like, say, oh, what's, this is a pattern I haven't detected before. What does this mean?


'''B:''' Yeah.
 


'''S:''' It doesn't know. It can't figure it out from first principles because it doesn't know the first principles. It just knows the patterns it needs to do in order to win. You know what I mean? So-


'''B:''' Yeah. It probably played against itself millions or billions of times to learn how to be a good Go player. And somehow this, I don't know.


'''E:''' It never showed up.


'''B:''' Maybe it needs to do, it needs to play against itself a hundred billion times. I don't know. But I mean, I can't wait to read about this.


'''S:''' Yeah. Fascinating. All right.
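
A rough sketch of the general recipe described above, in miniature (everything here is a stand-in of my own invention, not the actual KataGo attack): freeze the trained agent, compare its judgment against a slower but reliable oracle, and search for inputs where the two disagree badly.

<syntaxhighlight lang="python">
import random

def frozen_policy_value(position):
    # Stand-in for the trained agent's value estimate: fast and confident,
    # but blind to a motif it never met during self-play. Here it just
    # counts "stones".
    return sum(position) / len(position)

def oracle_value(position):
    # Stand-in for ground truth. Any position containing three 1s in a row
    # (our toy "encirclement" motif) is secretly lost, however good the
    # count looks.
    for i in range(len(position) - 2):
        if position[i:i + 3] == [1, 1, 1]:
            return 0.0
    return sum(position) / len(position)

def probe_for_exploits(n_tries=10_000, gap=0.5):
    """Random search for positions the frozen agent badly overvalues."""
    exploits = []
    for _ in range(n_tries):
        pos = [random.randint(0, 1) for _ in range(9)]
        if frozen_policy_value(pos) - oracle_value(pos) >= gap:
            exploits.append(pos)
    return exploits

random.seed(1)
found = probe_for_exploits()
print(f"{len(found)} exploitable positions found, e.g. {found[0]}")
</syntaxhighlight>

Once a blind spot like this is found, a human can memorize the motif and play it by hand, which is loosely what happened here.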


=== Steve Explains Item #2 ===


'''S:''' Which means that the US Copyright Office has issued guidance that registrants must disclose any AI generated material in their work and it will not issue copyrights for content created using artificial intelligence software is science. It kind of had to make a decision, because somebody applied for copyright for artwork they generated using Midjourney, and the Copyright Office was, no, we're not going to let you copyright that because you didn't create that. The software created it. And it's being widely criticized as being stupid because it doesn't, it is being overly cautious. Maybe that's one way to justify it. But the idea is that there isn't sufficient human creativity in the process to say it's your intellectual work, which really isn't true. If you're an artist using it as a tool, there's a lot and it really-


'''C:''' But what if you're not?


'''S:''' -and it really stems from, well, the thing is, it's got to be case by case, but they're making a blanket statement. If you used AI, you don't get credit for it. And that's the problem: it's such a blanket statement that they're making.


'''E:''' You have to disclose it is what they're saying. You can do it, but you have to disclose it.


'''S:''' Yeah, but they won't copyright it.


'''C:''' No, they're saying once you disclose it, they won't.


'''S:''' Yeah, you have to disclose it.


'''J:''' I'm surprised, Steve, because that's a lot of work for them.


'''S:''' Yeah. Well, it just means that if you say I used AI to make this, they'll say, OK, you're not eligible for copyright. So it basically downplays the actual input of the human user, the prompt maker, if you will. Like the artwork that prompted this decision. It was, again, it was an artist using, I think it was Midjourney, who went through hundreds of iterations, and there was a lot of work involved in getting the picture that they wanted. And this is like, this is the discussion we've been having. How much of that output is the AI? How much is the user? Is it art? Is it intellectual property? But now a bureaucratic office had to make a decision that was very specific and practical, and they just said, no, we're not copyrighting it. So it's a very interesting decision. I'm not sure that I agree with it. I do think it's erring way in one direction. And again, there's a lot of the commentary that I was reading about it. It's like, the purpose of copyright is to promote creativity, right? It's to give people credit for the work that they do. And this is not going to accomplish that, right? Because why would somebody put in the hundreds of hours to create something if they're not going to get copyright on it? Because they were using a tool. Because of the tool that they were using. Again, you could use the photograph analogy. It's like saying, well, you didn't create that. You just took the picture, you know?


'''E:''' Oh, yeah, the monkey who took the selfie.


'''S:''' Yeah, whatever. Yeah, who gets credit for the monkey who took a selfie? So I guess we're all monkeys now.
'''E:''' Yeah.


'''S:''' I've been using Midjourney the whole time we've been recording this show, by the way. ''(laughter)''


'''E:''' Well, don't try to get a copyright on it.


'''S:''' I'm just in the back. I'm not. This is all for my personal use, but it's just in the background. Because you can put your prompt in and forget about it for five minutes. All right. Well, good job, Bob and Cara. Although you did back your way into a victory this week. But it still counts.


'''B:''' Yeah.
 


'''J:''' I was absolutely sure Evan and I were correct.


'''S:''' Yeah. Jay, I got to say, when you say I'm absolutely sure, you're almost definitely wrong.


'''J:''' I know, right?


'''E:''' I was not so sure.


{{anchor|qow}} <!-- leave this anchor directly above the corresponding section that follows -->


== Skeptical Quote of the Week <small>(1:48:40)</small> ==


'''S:''' All right, Evan, give us a quote.
 
'''E:''' "It would be useful if the concept of the umwelt were embedded in the public lexicon. It neatly captures that idea of limited knowledge, of unobtainable information, of unimagined possibilities." David Eagleman, who's a neuroscientist at Baylor College of Medicine.


'''S:''' So what is umwelt?


'''C:''' Umwelt.


'''E:''' Umwelt.


'''S:''' Umwelt?


'''C:''' Umwelt is sort of your...


'''S:''' Give us the gestalt of umwelt.


'''C:''' Well, that's like a good way to put the gestalt, actually, because it's sort of like your perspective, your experience as an individual.


'''E:''' Right. A mosquito has a certain perspective of the world. A human has a very different perspective of the world. So the umwelt of a mosquito is different than the umwelt of a human.


'''C:''' And individual humans have different umwelts, depending on their culture. Yeah, yeah, yeah.


'''S:''' Yeah. That's interesting.


'''E:''' So it's interesting.


'''S:''' Yeah.


'''C:''' And you know, David Eagleman famously, he studies creativity. He also famously studied synesthesia.


'''S:''' Oh, yeah?

'''C:''' And so he's very interested in the experience, like the relationship between the individual and their experience of how they perceive the world. I worked with David on a TV show once where he was doing this fascinating thing where he developed a vest using little cell phone motors that vibrate. And he basically mapped out almost a version of the cochlea on the vest so that people who can't hear could perceive sound through tactile stimulation.


'''E:''' Neat.


'''C:''' Yeah, based on the way that the vibrations would be up or down. And it was funny, because I was like, how on earth do they make sense of this? Same as you've seen the people who are blind, but they have the thing on their tongue that makes little prickles on their tongue. Like, how on earth do they? And it's like the brain just maps it eventually. It just does it. It's not even conscious. It's very cool.


'''S:''' Yeah, just the idea that the brain maps to the world and creates the illusion of reality inside our brains is a cool one and I think a necessary one for skepticism.


'''C:''' Yeah, for sure.


'''S:''' All right. Well, thank you all for joining me this week.


'''B:''' Sure, man.


'''C:''' Thanks Steve.


'''E:''' Thank you, Steve.


== Signoff ==


'''S:''' —and until next week, this is your {{SGU}}. <!-- typically this is the last thing before the Outro -->
{{Outro664}}{{top}}


== Today I Learned ==
* Fact/Description, possibly with an article reference<ref>[url_for_TIL publication: title]</ref> <!-- add this format to include a referenced article, maintaining spaces: <ref>[URL publication: title]</ref> -->

== Introduction, SGU almost turns 18, Changes in multimedia tech ==

Voice-over: You're listening to the Skeptics' Guide to the Universe, your escape to reality.

S: Hello and welcome to the Skeptics' Guide to the Universe. Today is Thursday, April 27th, 2023, and this is your host, Steven Novella. Joining me this week are Bob Novella...

B: Hey, everybody!

S: Cara Santa Maria...

C: Howdy.

S: Jay Novella...

J: Hey guys.

S: ...and Evan Bernstein.

E: Good evening everyone!

S: So this is the end of our 18th year recording SGU. I think.

C: What?

E: The last of 18.

S: Yeah, next week is our first episode of our 19th year.

E: We kick off the 19th year next week.

B: Do we get a prize? What do we get?

J: We're one year away from 20 years. I mean 20, 20 years.

E: And that's just a podcasting. I mean we were skeptical activists a dozen years prior to that.

C: Do we celebrate the 20th year at the beginning of the 20th year or at the end of the 20th year?

S: Yeah, I think when we complete 20 years, there's two more years until we complete 20 years.

C: Or do we just celebrate the whole year?

E: Yes. A year-long celebration. Yeah, I'm good with that.

B: So didn't we agree 20 years ago that we would quit at our 20th year? Didn't we do that? I have a memory of that. Is that a false memory?

S: That's a false memory.

B: Oh, jeez.

E: I'm not sure it was quit. It was maybe-

C: Retire?

E: -take stock and just kind of see.

B: Oh, we're going to sell stock? Cool.

C: We have stock?

J: Maybe after 20 years, Steve will give us one week off.

S: I'll think about it.

E: Yeah, Steve, you've never taken a week off.

S: That's correct.

E: The rest of us have missed episodes, not you.

S: If I do have a week off or if we go on vacation, we have to get one or two weeks ahead of the game. We have to get them in the can.

C: The proverbial can.

E: It's a recording lingo. The can.

S: Even on the holidays.

E: You think it was called the can because reels of film used to be placed in cans?

C: Yeah, that's exactly why.

E: In the can?

C: Yeah.

E: Ah, there you go. Do we have to explain what film is to this audience?

S: Film?

E: You can't take anything for granted.

C: I can't with those. Yeah, I think we've talked about it before, but you guys saw that video where the guy said that he showed his kid a diskette and they were like, oh, you 3D printed the save icon.

B: Oh my God.

E: Oh no, no. I didn't think I would ever become so old so quickly.

C: Right? Planned obsolescence.

E: Yeah.

C: That's us.

E: Unplanned.

C: I can't do it. I'm so sad.

E: It is a whole different world than 18 years ago. That's for certain. My gosh, all the things we didn't have in 2005.

S: Yeah, we started this show. We started this podcast before the iPhone.

E: Yep.

B: Yep. Yes, we did.

E: There were iPods, right?

S: There were iPods.

E: Those were, and that was hence the podcast.

S: Right, exactly.

E: The term podcast, that's what it was. And those were only a few years old at that point.

C: And they were huge.

E: Oh my gosh.

C: Remember how big they were?

E: That's right. They were the size of a smartphone nowadays, basically.

B: Well, they were cavernous. 200 Meg or something like that. That was just like, oh my God, do you know how many songs you could fit on there?

S: Yeah, at the time it was like a total revolution.

B: I remember I had the money and I had a choice. I can get an iPod with huge, deep well of storage. Or I can get, what were they called? It was the iPhone without the phone.

S: The iTouch.

C: The iPhone without the phone.

B: That's right. That's what it was.

E: The iTouch.

B: And you could get on the, the key was you could get on the internet through Wi-Fi. And it had a little bit of storage for music. And for me, that was just like the, the aha oh my god moment where I was like, oh my god, I could surf the web on the couch with something in my little thing in my hand. Oh my god.

E: Yeah, Bob.

B: I mean, after a little while, I'm like, yep, I'm getting the iPhone. It was like, that was the decision. It was nuts.

E: And then the cell phones were just connecting over what? 3G technology at the time.

B: Oh no, I think, no, I think like 2G was the earliest.

E: Was it still 2G then? Oh my gosh.

B: Or maybe even less than that for the very, very first. But yeah, it was about 2G or less.

E: I remember 3G being a big deal. Big deal.

B: Yeah, I think-

J: There was a lot of, there was a lot of those like, oh my god moments with the iPhone. I remember tripping out about apps. Like, wait a second, this is amazing. Like there's an app store and like people are coming out with all these different applications that I'm going to run on this device. That was huge. The other thing was the first time I downloaded a file from the internet to my phone and was able to open it, like a PDF or something like that, that was huge.

C: And now you can just access your whole computer on your phone.

J: Yep.

E: That's right.

S: All right, well, let's get started. Our last podcast of our 18th year.

E: All right, let's make it the best.

== News Items ==

=== Starship Launch <small>(5:07)</small> ===

S: So Jay, last week you talked about Starship not launching, almost launching. And in the interim, as predicted, it did launch. So what happened?

B: As predicted?

E: Yeah, yeah. What happened?

J: Yeah, I was listening back to the last show and Steve was going, we don't know what's going to happen.

E: That's right.

J: So here's what happened. So last week, SpaceX launched its Starship spacecraft. Wow, lots of S's in that sentence. And on top of, so Starship was on top of the super heavy booster and four minutes into the flight, SpaceX had to abort the test flight by remotely exploding the rocket. Not really that dramatic, right guys? They had, I think that might be really the first time I've ever seen a rocket exploded deliberately live. I've seen footage of NASA doing it in the past, but not live. I was surprised when that happened.

B: Yeah.

J: So there's a lot of people talking online about what went wrong and who's to blame. No big surprise here, but a lot of people of course are blaming Elon Musk. So let's dig into the details. So first, SpaceX calls its launch pad system stage zero, right? Because rockets have stages, stage one, stage two. They're calling the launch pad stage zero. And this is because it's an integral part of the entire rocket launch process.

B: Oh yeah. More than you might think.

J: They're a highly engineered construct that's capable of doing a lot of different things. Of course, its number one thing is to hold the rocket up, but it's complicated, and they're expensive, and they have to be very precisely engineered to work with the rocket that they're working with. NASA's launch pads use a couple of important features that SpaceX's launch pad doesn't have. So let me explain these to you and then you'll understand why after. So first, NASA has something called a flame diverter. This is also known as a flame trench. Now what this does is it channels the blast energy from these massive booster engines away from the launch pad. And this is why you typically see a huge plume of exhaust going in one direction during a launch, like to the side of the rocket. It's not going all around the rocket as much as it's going off in one direction. Second, NASA uses a water deluge system. This system runs a massive amount of water under the booster rockets during liftoff. You can see video of this easily online. You should take a look at it if you've never seen it. It's pretty cool. A lot of the exhaust you see during a NASA launch is white because it's water vapor. The water absorbs heat and acoustic energy generated by the engines, and that's really important because that heat can damage the rocket and the acoustic energy made by the sound of the rocket engines can damage the rocket.

B: Yeah, just the pure sound energy itself. Powerful. That would just mess you up if you were close.

J: Yeah, it's energy dense. It really is. It's an amazing thing. So Elon made a decision to not include these two features in his Stage Zero launch pad. This is where a lot of people are saying that Elon didn't listen to his engineers and he made a seemingly illogical decision.

S: Although I heard though that they were using this hardened concrete that could have survived. So it was a test to see if that material, it wasn't just regular concrete. Like it was this super hard material.

J: Yeah, it definitely was.

B: Jay, I think you also said that this is just telemetry. So it's just really, this is just data gathering and it seemed like that, at least from what I read, that he said, yeah, it's just telemetry. We don't need the diversion and the deluge.

J: Yeah, but don't get fooled. I mean, of course they wanted it all to work. I know that Elon himself said that he gives it a percentage chance of exploding and blah, blah, blah. There was definitely hedging of bets going on there. And I think that they knew that there was a chance that things could go sideways. But apparently, from what I read, right, because I'm deferring to the experts and to people who actually are in the business of launching rockets, Elon didn't want these two systems included because they won't likely be used or available on the moon and on Mars. So I think his logic is, they're not going to have it up there, so we've got to launch it down here the same way, so we can build the same launch pads on the moon and on Mars.

B: Well, wait a second. When would they ever launch something like that full rocket on the moon?

S: Yeah, they wouldn't have to.

B: Apples to apples.

J: He said that.

B: Okay.

J: So I'm just again, I can't, I don't put words in his mouth.

B: It doesn't make sense, but all right.

J: I'll just report back to you guys.

S: The whole one-sixth gravity thing, you know.

J: So going into the test launch, it seemed like Elon and SpaceX were expecting problems, and man, did they have problems. There were a lot of problems. And due to the massive energy put out by the Super Heavy booster engines, the launch pad was significantly damaged. So let's go over what happened during the launch. First, the rocket was on the launch pad for a longer than typical time, meaning the rocket was running on the launch pad. It wasn't just sitting there. The engines were on while it was sitting on the launch pad, and it was sitting there longer than what we would consider to be a normal amount of time. What they did was throttle up the engines slowly. And I think they did that because they were not using these hydraulic clamps that they have there that hold the rocket in place on the pad. So the rocket is clamped to the pad when it's just waiting to launch. And I think what's typical is that they will throttle the engines up and then release the rocket. So everything is basically raring to go and at speed. So I don't know why these decisions were made. I really don't think the information is out there yet. But what we do know is it took 15 seconds for the rocket to clear the tower. That's a long time.

B: Long time.

J: So the plume of dark ejecta you saw at the launch was largely made out of dirt, sand, and concrete as the engines blasted the hell out of the ground under the rockets. And at this point, large chunks of concrete were thrown all over the launch area and I've seen video of all of it and it's it's pretty cool to see. The silver holding tanks that you see right near the launch pad were heavily damaged by the debris. There's a video of the ocean getting pelted with concrete and even a news reporter's car, I saw this, the car was totaled by a huge chunk of concrete that went through the car.

E: Was it a Tesla?

J: I think it was a Subaru. It looked like Bob's car. And on top of that, the rocket itself was damaged at this point. So on an 8K video that Everyday Astronaut was showing on his channel, there were six engines on the-

B: 8K, wait, 8K?

J: Yep, yeah, 8K.

B: Holy crap.

J: Yeah, that's not in the professional world of cameras, Bob-

E: Standard now?

J: Yeah, it's not that big of a deal. There's a lot of 8K cameras out there.

B: It's a big deal to me, man.

J: So there were six engines on the Super Heavy booster missing their nozzles. The nozzle is that cone on the end of a rocket. For each engine, there's a cone, right? So if you go back to the Saturn V rocket that was used for the Apollo missions, there were five engines on that rocket.

S: Five huge engines, yeah.

J: Huge.

B: Like an F-1s, I think.

J: These Raptor 2s that we have on the booster here, they're smaller, but you lose six engines. What is that?

B: That's a chunk, man.

J: That's like 18% of your thrust is gone. If that nozzle is not on there, nothing is happening.

E: That'll screw up your calculations, I'm sure.

J: So it's very, very likely that those nozzles were ripped off by the debris that the rocket itself caused. So massive damage.

B: That's messed up, man.
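
Jay's figure checks out as a back-of-the-envelope estimate, assuming the commonly reported 33 Raptor 2 engines on the Super Heavy booster and roughly equal thrust per engine:

<syntaxhighlight lang="python">
total_engines = 33  # Raptor 2 engines reported on the Super Heavy booster
lost_nozzles = 6
print(f"{lost_nozzles / total_engines:.1%} of total thrust lost")  # 18.2%
</syntaxhighlight>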

J: At about 2.5 minutes after launch, the booster was supposed to separate from Starship, right? So you have two different... This rocket has the first stage, which is the booster, and the second stage, which is Starship, which has its own engines, right? So the way that SpaceX designed their rocket, the first stage is supposed to change its angle of direction a little bit and throttle its engines down as the Starship, now using its own engines, continues to fly on the intended flight path. So the separation is done by a change: the two ships are going in a slightly different direction and the booster is slowing down. So the two parts of the ship were supposed to separate from each other, and this didn't happen. And this is like the earliest point where the explosion was going to happen, and it was inevitable at this point because they weren't separating. The fact is the booster kept its engines throttled all the way up, which was either due to damage that it received or it could have been a software error. I haven't heard or read anything about what the actual cause was at this point. Maybe the likely thing is it was due to the damage. I would suspect, because the engines were damaged, something was going off there.

B: Jay, what about the idea that they were actually, because they lost 18% of their thrust, they were making them go a little bit longer to meet their trajectory. And so that was where the mistake came from, where they were firing longer because they lost they lost some of the thrust.

J: Yeah, the whole mission was seconds behind where it should have been, probably because of the missing engines. So I wouldn't doubt that. But it is odd that when they initiated that release moment the ship wasn't able to do it. Because it was still compressing the Starship onto the booster because of all that thrust that was happening. So the rocket now begins to tumble and then it stops gaining altitude. Then it falls about a kilometer and then SpaceX remotely destroyed the rocket. So this is an interesting point. Now we have this rocket that was exploded and the EPA in the United States, the Environmental Protection Agency and the FAA, they come in and they're like, this is no good. We don't like anything that we saw here and now they're involved and they're going to have to be significantly involved in clearing SpaceX in the future. Because the rocket's explosion sent debris out for miles and there of course was a lot of unused fuel going into the atmosphere. Rocket parts were everywhere. SpaceX has a lot to consider here. It's a big deal what happened. It is actually a big deal and it's a problem. And the big fear is that this test launch will somehow delay the Artemis program and it can.

B: Yeah.

J: Depending on how quickly they can figure out a solution to what took place. With the launch pad, and if there were any software issues or any updates that they have to make. All of that has to happen. The EPA and the FAA have to give their sign-off at a level that they probably weren't as involved as they're going to be now. And there is no moon mission without SpaceX at this point. So I'm worried. I need to find out more about what the long-term impact is going to be, and when that information comes in I'll mention it on the show. So one thing I want to say that has had an impact on this whole Starship build and everything about it is that it is a cost-saving endeavor. He chose to build the rocket out of steel to save money. The launch pad and the way that they designed it, that was to save money. The boosters and the way they designed those, to save money. Even details like how the booster separates from the Starship. Because typically there are hydraulics involved, and there is very much a mechanical thing that separates a stage 1 from a stage 2, say. But with this rocket, the booster disengages from the Starship that's above it by just changing direction and throttling down. It's like a non-mechanical way that doesn't cost any money. It's just like unscrewing a top off of a bottle. It's just as simple as that. It's not held on in a way that you would think a normal rocket would be. But all of this is to make space accessible. So, on one hand, I get it. I understand why SpaceX is doing this, because the whole point is making it less expensive to put material into outer space. That's the way you judge all this. How much does it cost to get a pound of stuff into outer space? And what SpaceX is doing is making this accessible at an order of magnitude less cost. It's huge. And it is something that I'm very excited about, because if they could do that, it's going to allow lots of companies to have missions to go into outer space to do a lot of different things, and it's going to be the real beginning of the space economy, when people can access space cheaply. So, I get it. I think that we saw a pretty significant failure here with some successes, right? Like Bob said, they wanted to collect a lot of telemetry. They got the telemetry. I'm pretty sure they didn't want their rocket to have to be exploded, for them to have to destroy it like they did, and they would have liked to have seen it go a lot farther. But the real thing that we're seeing here is that decisions were made. Everybody is saying that Elon Musk made those decisions, and there was a massive amount of failures stacked on top of each other because of a series of decisions that were made. Now, whether those decisions were directly there to save money or not, I don't think we know the truth yet. But we'll find out. It's just a matter of time before we get more details. But a little drama. I think it was a lot of fun. I loved watching the whole thing, and I like reading about all this stuff, and it's very exciting. I certainly don't want Artemis to be delayed, but we'll see what happens.

B: Yeah. Another little tidbit, Jay, that's related to this is the goal of making the rocket simpler and simpler and cheaper and cheaper. Of course, it's a laudable goal in a lot of ways because you want to reduce the expense per pound into orbit. But as they offloaded complexity from the rocket, they unloaded that complexity onto the launch pad, and that launch pad, I read, was actually more complex than the rocket itself. Amazingly complex. It actually constructs the rocket, putting the booster down and then putting the Starship on top of that. It actually constructs it on site, because those rocket pieces cannot lie on their side, because they're not strong enough. They're strong vertically but not horizontally, like conventional rockets you may have seen. So the launch pad has to actually construct it. It's a very complex launch pad. So when you have the launch pad damaged, that's bad, because it would be the hardest thing to replace. Losing rocket pieces, losing a Starship, losing a booster, that's no big deal. He's got those in the pipeline, already made and being made. That launch facility is not. You would have to make that from scratch. If there were more damage, that would have been truly devastating.

S: If the mission was a 100% success, at the end of that mission they were ditching the whole thing in the ocean. It's not like they were ever going to reuse it anyway. They didn't lose anything really. Blowing it up versus ditching it in the ocean doesn't make any difference to the program. It did clear the tower and they did get a lot of data out of it. If the only real problem was that they didn't use a water deluge system and it just caused a lot of damage to the rocket, then at least it's easy to fix.

B: Yeah, that's not a horrible problem. That deluge system, I think a lot of that work was already done and set up. They just didn't finalize it, according to what I read.

S: Interesting combination of successes and failures. We'll see how it goes from here.

=== False Belief Systems <small>(20:52)</small> ===

S: All right, Cara. How do we keep people from believing false things?

B: We can't. Next story.

C: See the Skeptics' Guide to the Universe. Yeah, this is a tough one. There's a new study that came out of Dartmouth. It was published in ''Nature Human Behaviour'', called "A Belief Systems Analysis of Fraud Beliefs Following the 2020 US Election". It's an interesting approach to trying to understand why individuals so readily accept the idea of fraud in elections, and why it's so hard to change their minds about it even when there is really, really good explicit evidence that says that fraud did not occur. And so, let me jump a little bit into the methodology first, and then we can talk about sort of why these researchers are talking about this "belief systems analysis", which is really based on kind of a Bayesian model of reasoning. So, basically, it's a really interesting idea for a study. In the middle of the 2020 election here in the US, they actually surveyed 1,642 different people while the votes were actively being counted, before we knew what the outcomes were. And so, it's a little bit like the halfway point between polling people before the election and then asking them after the fact what they think based on the result. They were sort of catching them in that really uncomfortable we-don't-know-what's-going-to-happen period. And then they actually gave them different hypothetical election outcomes and they looked at their responses to them. So, what they were specifically looking for was their belief in fraud, and they found some pretty predictable, but I think important to remember, outcomes. So, the first one was that when the person you guys wanted to win lost, or when the person that you wanted to lose won, you were more likely to say that it was fraudulent. Right?

S: I'm shocked. Shocked.

C: Shocked, right? They also found that the beliefs were stronger the stronger your partisan preference. So, if you were somebody who identified deeply as a Republican or deeply as a Democrat. Or we could say if if you're somebody who was were really, really interested in Trump winning or really, really interested in Biden winning then your beliefs on fraud were more intense. So, they were looking at, again, like I said, based on the hypothetical outcomes of the election because it was in the middle of the election they asked people first, who do you want to win? Biden or Trump? And then they asked the candidate to see basically how partisan they really were. Then they asked how likely is your candidate to win the true vote without any fraud presence and then how likely is it that fraud will actually affect this outcome? They were shown one of two U.S. maps with hypothetical winners. So, they were shown two different schematics, one in which Biden won and one in which Trump won. And so they were able to ask again about their fraud beliefs after those hypothetical scenarios. And then 3 months after the survey a smaller group of respondents were followed up with and they asked them their beliefs about the true vote winner and who had potentially "benefited" from election fraud. They found exactly what I mentioned. Both Democrats and Republicans were more likely to believe in fraud when their candidate lost. They were less likely to believe in fraud when their candidate won. The stronger their preference the stronger what they called desirability effect. So the stronger the effect of the actual belief of election fraud or lack of belief in election fraud. And so basically they asked a question, is this rational or is this irrational. When we're talking about motivated reasoning, we use that term a lot. Like this cognitive distortion or this cognitive bias on the show. We talk about motivated reasoning a lot. This idea that people deceive themselves in order to make decisions. In order to process information were our biased reasoning leads us to a particular conclusion and that it might be unconsciousness but that it's often "irrational". But the argument of the authors of this paper is maybe their views on whether or not the fraud was perpetrated. Whether or not their wanted candidate was elected fairly is a form of rational thinking. And if it is a form of rational thinking, how? So they decided to create their own Bayesian model and I'm not, obviously don't have time to dig deep into Bayesian theorem and how probability works. But I think kind of the most important take away of this idea is that a Bayesian model utilities prior knowledge and prior conditions. And it's more tailored. It's more specific. So instead of looking at probability at sort of a population level, it's saying, I want to know the probability based on these specific things that have already happened, or these specific beliefs of the individual. And I'm going to work that into the modeling. And so their kind of outcome was when you really start to talk to these individuals, and you look at all of the different belief structures that they have, which are anchored by much larger beliefs that are all tied together and interacting. Taking out one specific viewpoint, like did fraud occur or did it not occur, it's really hard to counter somebody's belief in that one viewpoint with available evidence. 
Because as we've talked about a million times on the show, this is not new information, they're just modeling it in a really interesting mathematical way. You can't take a piece of information out of somebody's entire structured belief system. It's not independent. It relates to everything else. And that's why, sadly and scarily, the authors posit that even when you provide available evidence, the individual actors who were questioned here are going to just flagrantly deny the evidence. Because if they didn't deny the evidence, it would undo their entire belief structure.
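To make the paper's Bayesian framing concrete, here is a minimal sketch of the underlying math, with made-up numbers rather than the study's actual model: when a wider belief system anchors the prior probability of fraud near certainty, even evidence that strongly favors "no fraud" barely moves the posterior.

```python
# A minimal illustration of Bayes' rule applied to a fraud belief.
# All numbers are hypothetical; this is not the paper's model.

def update(prior: float, p_evidence_if_fraud: float,
           p_evidence_if_no_fraud: float) -> float:
    """Posterior probability of fraud after seeing the evidence."""
    numerator = p_evidence_if_fraud * prior
    denominator = numerator + p_evidence_if_no_fraud * (1 - prior)
    return numerator / denominator

# Suppose audits and court rulings are 10x more likely if there was no fraud.
p_if_fraud, p_if_no_fraud = 0.05, 0.50

# A weak partisan starting near 50/50 is moved a lot by that evidence...
print(update(0.50, p_if_fraud, p_if_no_fraud))  # ~0.09

# ...but a prior of 99%, anchored by the rest of the belief system,
# barely budges: the fraud belief "rationally" survives the same evidence.
print(update(0.99, p_if_fraud, p_if_no_fraud))  # ~0.91
```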

E: But we see that in lots of different areas of thought.

C: We see it with religion. We see it in so many different examples. But in religion, I think, we see it over and over and over, because these are ideological, experiential kinds of frames. And so there's one thing that they said in the discussion that I thought bore repeating. I have a little pull quote here. "Our findings illustrate how specific combinations of beliefs can prevent rational people from accepting the results of democratic elections. Beliefs in election fraud have played a substantial role in undermining democratic governments worldwide and have grown and remained strikingly prevalent in the United States. Belief in fraud undermines both motivation to vote and acceptance of election results, which bear directly on the viability of an elected government." So they specifically wanted to focus on this topic because of its implications, its really, really severe future implications. So this is an example of a real-world application of one of these, what we often talk about as more theoretical problems with what they are referring to specifically as belief systems. So it really goes to support why we as science communicators, and why you, listener, as a family member, a friend, a colleague, somebody who's talking to another person about something that you disagree on, something where you're like, why are they believing this in the face of evidence, should rethink the way that we approach this. Instead of discounting these individuals as irrational, thinking about what frame they would need to have in which this thought process would be rational for them is going to be a much better approach, because it can help us start to dismantle some of those systems that support the individual belief, instead of just pinpointing and targeting that individual belief, which is probably not going to be very effective.

S: Yeah, I mean, I think we already know this, but-

C: Of course.

S: -the other approach is, because people can be more than one thing at once, right? So even though you may be ideologically a Trump supporter, and that leads you to accept that whole information ecosystem and construct, that I get. I know people that I think are otherwise savvy, intelligent people who, with a totally straight face, think that the election was rigged. In the face of, I mean, just to put a point on that, there were two news items just in this past week. The one was that the guy Lindell, the pillow guy, he had this $5 million challenge to prove his evidence of Chinese hacking of the election wrong. And a computer programmer proved it wrong, like straight up, and it went into arbitration, and the arbitrators said, yeah, he totally proved you wrong. It was an interesting story. And the guy found out, because basically, Lindell had this file with all this data on it, like, this data is voting data. And the guy looked at it, and he analyzed the data. It's like, well, first of all, this is not voting data. There's none of the packets or whatever, none of the information that should be there if this is voting data, and he was able to reverse engineer what they did. Somebody just made a table of nonsense data in Word, and then saved it as a hex file. And he was able to put it back into Word, and he knew it had to be a Word file, because it had formatting, you know what I mean? It wasn't the nonsense file that you get if you take a non-Word file and try to open it up in Word. So he basically proved it was nonsense data. It was fraudulent, you know?

E: It was fraud. That's what it was.

S: Yeah. But now Lindell's just straight up refusing to pay him the money. He's just going to-

J: Wow, he's not gonna-

E: He's going to fight it out in court.

C: Of course he's going to fight it out. He may not even have that money.

E: He'll probably lose.

S: The other one was that Trump hired another company to do an independent investigation to find fraud. And they found zero evidence for fraud. There was zero evidence. And they told that to the Trump campaign prior to January 6th.

C: Yet these people still believe the narrative.

S: Yeah. Multiple independent investigations have found no significant fraud. And they had-

B: Including how many courts?

S: Yeah. There were 60 court cases involved with it that all came up with nothing. So you have to think that the whole structure is fraudulent at multiple layers. I mean, it starts to become completely absurd in terms of the construct that you have to believe in.

B: But the damage is done.

C: Yet at the same time, I think what's important-

S: But once you put your chips down on that belief system, because it's part of your tribe, then you just will keep doubling and tripling and quadrupling down on that belief.

C: And I think it even kind of goes beyond that. Because while it's true, and I think it's important that I specify here, we're saying Trump, Trump, Trump. And it is true. When you actually look at the data, even within this article, if you look at prior win beliefs, significantly more Democrats had a stronger belief that Biden would win. It's interesting: more Democrats were sure that Biden was going to win than Republicans were sure that Trump was going to win, which kind of makes sense for 2020. When you look at the prior fraud beliefs, significantly more Republicans already thought there was fraud, and very few Democrats believed that there was any election fraud going on.

E: I think perhaps some of that in 2020 is a bit of an outlier, only because of the mail-in ballots. There's never been an election that had this level of mail-in ballots, all because of the COVID that occurred that year. So again, I think that makes that election cycle an outlier.

C: Well, there were a lot of things that made this election cycle an outlier.

E: Well, no, I understand that. But I think some of those people got into that mind frame ahead of time, because they knew six months ahead of time that that was going to be happening. So they sort of already convinced themselves that when you're going to have this much mail-in balloting, it has to be fraudulent just by the nature of it. I believe that was their thinking. And perhaps that's why there's an elevated level of it.

B: Yeah. And I think that sets the stage for claims of fraud for now, for the near future, regardless of the amount of mail-in voting.

E: Oh yeah, absolutely. This isn't the end of it. No, no, no.

C: No, I mean, we've seen it across the board, right? Like a news item that Steve, you didn't mention, but is huge, is Fox's settlement over the voting machines, right? So clearly this is another piece of evidence that fraud did not exist, that these voting machines were not broken. But regardless, the point that I was about to make, sorry, was that even though Democrats had a stronger prior win belief in their candidate and Republicans had a stronger prior fraud belief, both parties showed that they were more likely to believe in fraud when their candidate lost, and they were less likely to believe in fraud when their candidate won. So I think it's important that even though we're sort of singling out Trump supporters, because that happens to have been the outcome of this election and that happens to have been the real-world implication that we see, this is a human tendency.

E: Sure.

C: This is not a Republican tendency.

E: It's happened in prior elections, they've borne that out, I believe.

B: 2016, that was a little bit from the Democrats.

C: Right. And it's very-

E: 2016, 2004, there was that controversy.

B: A little bit.

S: To be fair, this is an order of magnitude difference.

B: Two orders of magnitude.

S: This is like two thirds of Republicans still believe that Trump won. I think that's because of the leaders, because there weren't really any major Democratic leaders who were saying, yes, there was fraud, yes, this election was not fair, and doing things about it the way Trump and Trump's supporters are. So it's different. It's fundamentally different.

C: It is. And my sort of reading of at least this specific study and the goals or maybe the takeaways of this specific study are while we're even sitting here in this sort of meta way pointing to the different reasons, what they're saying is it's not just this or that. It's all of it.

S: Yeah.

C: And we can't tease out this or that because there's a gestalt at play here. And until we really appreciate the gestalt, we're not going to be able to go in and excise out a belief. It just doesn't work that way.

S: There were previous studies too which showed that you can't just take a belief away from somebody. You have to give them an alternate belief. And I think that alternate belief, or belief system, or way of arriving at an understanding of how the world works, is skepticism. If you give these people a skeptical, critical thinking part of their identity, then you could say, all right, whatever my political affiliation, I'm going to back up and look at this as objectively as I can. Personally, one of the exercises that I do when I have a clear bias in a situation is to do the thought experiment: how would I think about this if I were on the other side? Play the devil's advocate in your own head.

C: You have to. And I would even go farther than that. I mean, as much as I completely and totally believe and agree with the view, I mean, clearly I'm a dyed-in-the-wool skeptic. I'm on this podcast. It's what I do. And as much as I fully and completely believe that that can serve as a foundational frame for individuals to adopt, one that can start shifting some of what we think of as irrational, but what they might believe is a purely rational next step based on the constructivist belief structure that they're already utilizing, I also think that we sometimes fall short, and we make the mistake of thinking that all of these decisions are based on logic. And I think that if we don't also offer a worldview, an option, a structure that's based on moral philosophy, that's based on concepts of right and wrong, we're lacking an entire portion of the human psyche. I think that individuals are going to make decisions based on what they think is true and based on what they think is good. And both of those things have to exist. And skepticism doesn't necessarily offer the second half of that equation.

S: Yeah, because it's not everything, but it is still useful to say, regardless of what's good or bad, what is true? I mean, just-

C: It is absolutely useful. I just think sometimes we make the mistake of thinking it's literally everything when it can't be everything. It's the same thing with science. It's the same thing with religion. It's like if we see it as more of a gestalt, I think that our humanity comes through a little bit better. And it helps us when we engage with people with whom we vehemently disagree. And that is the point, right? It's to get away from this reductionist approach and to start backing away and seeing the forest for the trees.

S: I think they're complementary, I wouldn't say it's instead of doing that. Because reductionism is good. Reductionism is great. Just not the only way to look at things.

C: Right. And the problem is, I think, baked into the reductionist approach very often, or we could say the logical positivist approach very often, is this kind of tendency to only go there. It's sort of fundamental to how we do reduction. Once you've been reductionistic, now you're down in the reduced parts.

S: Right.

C: So you do have to step back as well. But I'm with you. It's both.

S: Yeah. Yeah, yeah.

Ashwagandha (39:08)[edit]

S: Have you guys heard of Ashwagandha?

C: Oh, yes. I'm from LA, Steve. Of course I've heard of Ashwagandha.

S: So it's the latest herbal sensation. These things have a sort of marketing cycle to them.

E: Oh, does it cure everything? I love it.

S: I mean, pretty much. Yeah. So it does derive from Indian Ayurvedic herbal medicine. So it's alleged to have thousands of years of use. It's been having its moment on TikTok and social media, and it's becoming very popular. So I did a deep dive, as we like to do, into this herb. And it basically has all the characteristics that drive me nuts about the supplement industry and its promotion. I broke it down when I wrote about it for Science-Based Medicine. I wanted to review it. It's a good review of, what's the problem with herbal supplements? The first and biggest problem is marketing herbs as if they're something other than drugs. The fact is, they're drugs.

C: They're just unregulated drugs.

S: That's all. They're poorly regulated drugs. That's all that they are. There is nothing else about them other than the fact that it's a bunch of drugs together in one plant or one part of a plant or whatever.

C: But you're right. You'll never hear them called drugs. They're always called herbs.

S: Yeah.

C: Or supplements.

S: Herbs or supplements rather than herbal drugs. I mean, I think they should be called herbal drugs, right? Because they're pharmacological agents. That's what they are. They're just not purified. So what does that mean? It means we're not really sure what all the active ingredients are. We don't know how much is in there. They vary from plant to plant, crop to crop. Was it a wet season or a dry season, or whatever? There are all kinds of things that could make lots of difference in terms of the amount of active ingredients. So you really don't know what you're getting. You don't know how much you're getting. And people have a tendency to think, well, this is natural, so therefore it's somehow magically safe and effective. And the other aspect of it, which is not always there, but it's usually there in the background, sometimes it's explicit, is this notion that, and again, most of the literature that I saw on ashwagandha had this angle to it, that it has to be part of other treatments. By itself, it may not do anything. It really needs to be given with other herbs or as part of some kind of chakra evaluation or whatever. So there's this explicitly magical angle to it. And, to me, it's a bad thing that it's mixed with a lot of other drugs in one product. That's usually not a good way to practice medicine. Here's 10 drugs in random dosages.

E: Good luck figuring out what's working, what's not working.

S: Yeah, exactly. But they say, no, there's just something magical about this combination, like there's some synergistic effect. Sometimes it's explicitly magical. They say that God made the plants to treat us. You have to believe that sort of thing, because there's no evolutionary pressure for there to be any synergistic effect in a plant to make people better. Plants evolve these chemicals as poisons to keep animals from eating them. They're designed to be poison.

E: Just the opposite.

S: Well, it's also super dangerous as a physician, right?

C: And I see this just when I'm doing intakes in psych, where I'm like, okay, what meds do you take? And I always have to say, are you taking any herbs or anything over the counter?

S: You have to ask specifically.

C: And sometimes there are dozens of things that people take that are directly contraindicated with their psych meds. It's so dangerous.

S: Yeah, so because they don't recognize that they're drugs, they don't think about drug-drug interactions, right? Or common drug toxicity. So for example, there are case reports and case series of liver damage from ashwagandha. That's a very common drug toxicity. Liver and kidney are like the two big ones, right? Because those are the organs that are going to metabolize and get rid of the drug. Mostly it's going to be one or the other. So it's very, very common. So not surprising. If you end up taking enough that you're actually getting a pharmacological dose in your system, there's a risk of liver damage. That's the kind of thing that would keep like a drug from getting FDA approval. Or maybe you would get a black box warning if it was not that common. But still, herbs get a free pass. It's right there in the literature. It can cause liver damage, but whatever.

E: Thank you, Orrin Hatch, among others.

S: Exactly. All right, another big aspect of it is the hyping of preliminary evidence. And in fact, if you look at the herbal supplement literature, it's all preliminary, pretty much. It's like 99% preliminary evidence. It's usually small series or small numbers of people, or they're open label, or it's not really properly blinded, or they didn't assess the blinding, or they didn't use an active placebo, so it's very easy for people to know if they're really getting the drug or if they're getting a placebo. So they're always just these weak studies. And sometimes the outcome measures are subjective, which makes it even more difficult. The only time you get really fairly definitive, double-blind, placebo-controlled trials is when the NIH is funding a study. That's one of the things they like to do. And historically, when you get the really definitive drug-like trials of these herbs, they tend to be negative. The results are usually very disappointing. So to me, this is one of those situations where it's a feature, not a bug. They're deliberately living in this preliminary evidence world, because first of all, it's cheaper to do those studies. And you can make a reasonable argument that they're not profitable enough to justify the millions of dollars it would take to do really big studies, although it's a multi-billion dollar industry, so not so much. And then also, these studies are often designed not to really show whether or not they work. But it is just enough to market these products. Because they're "evidence-based". Look, there's some preliminary evidence that it may work. And so they end up looking like they work for everything. Nothing is definitive. And as we've discussed at length here and at Science-Based Medicine, this preliminary evidence is not very predictive, right? What's the probability of something working based upon encouraging preliminary evidence? It's pretty damn low. It's mostly going to be false positive.

C: Especially when you look at how they market them. Because so often you see things where you're like, I don't know, what's ashwagandha supposed to do? Because I feel like half of these herbs are supposed to increase heart health and also foot function and also hearing and also your smell is improved. And you're like, what?

S: What else is it good for? There are so many different things that it could treat. So a good red flag: if it seems to work for everything, it probably works for nothing. It's just telling you that whatever they're using to determine whether or not it works is not working, right? It's not predictive.

C: That they're basically giving you the placebo effect at that point.

S: Yeah, basically.

C: Yeah, sadly.

S: But that also bleeds into the next big point, which is extrapolating from basic science. So when you say, well, what does it do? How does it work? Here, either they don't do research, or when they do a study, it's on animals or in a petri dish. This is the kind of research that I characterize as, we exposed whatever, the animal or the cells in a petri dish, to our product, and stuff happened, right? Stuff happened.

E: Something moved.

C: Which is like, of course, because it's a drug.

S: Right. Yeah. If you dribble it onto cells, yes, stuff is going to happen. So very commonly, the stuff that happens is changes in a marker of immune activity. First of all, the immune system is large and complicated, and it is basically designed to react to stimuli. Almost anything you give to an organism is going to increase or decrease some marker of immune activity. This is the Texas sharpshooter fallacy, right? No matter what happens, they draw a bullseye around it. Doesn't matter what it is.

C: Because they can market whatever it is to be good.

S: So listen, so anything that decreases any marker of immune activity is anti-inflammatory.

C: Right. Yeah.

S: And anything that increases any marker of immune activity boosts the immune system.

C: Which is not a thing. That's not a thing.

S: I know, but that's the way you go.

E: Slap a label on it, sell it.

S: No matter what happens, it's a good thing. But the chances are it's inconsequential. It's just the background noise of immune activity.

C: Doesn't it blow your mind though, Steve, or make you insane when people are like, oh, it boosts immunity. I'm like, so does infection. I don't want that.

S: Right. Right. Right. Right. It's all just narrative, right? Because you could say that decreasing the marker is suppressing the immune system. Right?

C: Of course.

S: And increasing the immune marker is pro-inflammatory. You could also make them sound negative if you just state them that way. And I love it when products do both at the same time.

C: I can't, then they're extra confused.

S: Because look, they look at 20 markers, some go up, some go down. Look at that, it boosts the immune system and it's anti-inflammatory at the same time. They literally do that for the same product.

E: No wonder you have to invoke magic to tell people how this works.

S: So there's a lot of homeostatic systems in the body. And again, all you're looking at is just that some marker of these homeostatic systems changes and then you just spin that narratively into a good thing. So another one, another common one is the oxidative stress system.

C: Oh, I hate that one.

S: This is the antioxidant system, where anything that increases oxygen, well, oxygen's good, it's going to be nourishing your tissues. If it decreases it, then it's an antioxidant. And again, sometimes it's both at the same time. So again, it's this heads I win, tails you lose kind of approach. Whatever happens at a basic science level, they have their narrative where they could spin it into a positive thing. But it's just as likely to be harmful, and it's probably most likely to be worthless. Because some chemicals are just very reactive. They just sort of test positive. They're going to cause a lot of things to react in a lot of assays. It doesn't mean that anything meaningful is happening in your body when you take them.

C: And the truth of the matter is, yes, it's true, we talk a lot about supplements and herbs as, like you said, inert or worthless. But ultimately I want to err on the side of caution. This is the time when the precautionary principle makes sense. It's the reason we have to do safety trials of drugs. I am not going to assume that if I take an active ingredient that binds to receptors somewhere in my body, it is going to do nothing to me. I'm going to assume that it's going to be harmful to me until proven otherwise.

S: Right, because chances are it is.

C: Right, because it's a drug.

B: And also, if I never hear the word antioxidant again, it'll be too soon.

S: Bob, here's another one for you.

E: Still everywhere.

S: Here's another for you. So now they're saying that ashwagandha is an adaptogen.

C: What does that mean?

S: That's a new one on you, huh?

E: Write that one down.

B: Cool.

C: I mean, I've heard it, but like, I don't, what are they trying to say?

S: It helps your body adapt to stress.

C: Stress?

S: So how do we know that? Again, this is another homeostatic system where stuff happens and they go and they just interpret it in a positive way. Oh, look, it's decreasing corticosteroids. Therefore it's reducing your stress response. It's an adaptogen. It's helping you adapt to stress. No, it's just changing one of the many markers of this homeostatic system that you probably shouldn't mess with. It's just ridiculous. It's all marketing bullshit, complete nonsense.

C: We don't even have any like proven drugs that help you "adapt" to stress. That's not a thing.

S: No. They just make it up.

C: It's all behavioral.

E: It is now, Cara. That's the beautiful thing about this.

C: It's so insane.

S: An adaptogen. Yeah, it's nonsense. But what they don't have are clinical studies which show a net positive health outcome.

B: They're overrated.

E: Yeah, who needs that?

C: Who needs science?

E: That would get in the way of sales. Cha-ching.

S: Right. No, exactly. Because they don't want to do the kind of study which can show it doesn't work. So they do only the kind of studies that are heads I win, tails you lose. No matter what happens, we'll market it and it'll be great.

E: Falsification? What does that mean?

S: Right. But they never study if it works. They just want to show how it works. It's a multi-billion dollar scam on the public.

E: Didn't you try telling this to the NIH years ago, Steve, when you went to Washington for a meeting with them?

S: Yeah. Yeah, yeah.

E: And nothing. Right?

S: I also tried to tell them recently that acupuncture doesn't work. But of course, the head of the NIH now is an acupuncturist.

C: Yeah, now is not the time to.

E: Oh, no.

C: Yeah.

J: That sentence that you just said, Steve, makes me want to set my face on fire and put it out with a fork.

C: And it's not just here. I mean, look at the World Health Organization.

S: It's the head of the NIH Center for Complementary and Alternative Medicine, not the whole overall NIH.

E: Okay.

S: Sorry, just to clarify that.

C: Wait, why is there an NIH Center for Alternative and Complementary?

S: This goes way back. This goes to the 90s. First, there was the Office of Alternative Medicine, then the National Center for Complementary and Alternative Medicine. Now, it's the National Center for Complementary and Integrative Health. So they just keep changing the branding.

E: Sounds fancy.

S: But it's the same shit, right? It's the same exact things that they were promoting 20, 30, 40 years ago. Just it was first, it was alternative. Then it was complementary. Now it's integrative. But it's still homeopathy and acupuncture and all this crap. It's still the same exact stuff being promoted in the exact same way.

J: Yeah, but Steve, what about wellness? What about wellness? I mean, if it says wellness, it's good, right?

E: Well...

S: Well...

The Evolution of Eukaryotes (55:09)[edit]

S: All right, Bob, tell us about the evolution of eukaryotes.

B: All right.

E: I'm a eukaryote.

B: So, yeah, a new and fascinating study explores the details of endosymbiosis. It's a process that forever changed Earth's primitive cells and gave rise to the cells that allowed multicellular life to proliferate and spread throughout the entire biosphere of the Earth. Okay, so this new study was recently published in PNAS. It's called... It's called... Did you do that? It's called Metabolic Compatibility and the Rarity of Prokaryote Endosymbioses. Yeah, that's a tough title, but the stuff underneath is nice and juicy and delicious. It was written by Eric Libby, associate professor at the Integrated Science Lab at Umeå University in Sweden, and his colleagues. Are you ready for the story that I call The Joining? All right. So, we believe that the first modern type of cells that proliferated on early Earth that still exist today were prokaryotic cells. You may have heard of them. They still exist today.

C: Bacteria.

B: Yeah. They're distinguished by the fact that they have no membrane-bound structures inside. The ribosomes that make the proteins are loose inside, basically. The real defining characteristic here is that there's no true nucleus in the cell, as we would think of in our types of cells. These earliest cells would be similar to what we call bacteria or archaea today. So around two billion years ago, give or take a lot, it appears, because I got lots of different numbers on this, around two billion years ago, there arose new types of cells that had a membrane-bound nucleus and also the amazing little machines inside that we call organelles bound within their own membranes that can perform these very complex tasks. Now, these are the eukaryotic cells. I remember that word, eukaryotic, as opposed to prokaryotic, by you, the eukaryotic you, your friend, your brother. They have eukaryotic cells. That's how I always, in my mind, describe them.

C: That's funny. That's how you thought about it?

B: Yeah. I did that decades ago and it still works. So now these are all over the earth, comprising not only many unicellular organisms, but also fungi, plants, and of course, people. So how did that happen? What's the deal here? So the most popular theory, with some decent, solid evidence backing it up, actually, involves something called endosymbiosis, which I've been fascinated with for years. So the earliest eukaryotic cell probably arose from an archaeal prokaryotic cell. So Archaea, as I mentioned before, are similar in many ways to bacteria, if you haven't heard of them. There are some interesting differences in the cell wall and other things, but they're kind of like bacteria. So this early cell engulfed, or was perhaps parasitized by, a prokaryotic bacterial cell. It just kind of sucked it in. Somehow it ate it, or maybe something else, not sure, but it got in there. And somehow the cell survived inside, becoming what's called an endosymbiont. Basically a cell living within another cell, right? A nested matryoshka doll, if you will. So eventually this rogue cell stopped lounging on the couch in the parent cell and actually became useful. It hooked up cable TV. It set up Netflix and Hulu accounts, but most importantly it had Instacart and Grubhub. It had the food delivery. Now that's because it wasn't smoking weed in the basement, it was smoking oxygen, in a sense. This was an aerobic cell that was consumed. It used oxygen to create energy. In this case it made energy from the half-digested food molecules that were floating around inside the parent cell. It created so much energy in the form of ATP, adenosine triphosphate. It made so much ATP that it leaked into the rest of the parent cell, which the parent cell could of course utilize as well. Now at this point in Earth's history, oxygen just happened to be becoming more and more prevalent in the Earth's atmosphere, created primarily by cyanobacteria, I believe. So this new cellular tool became increasingly important. The old school prokaryotic cells that didn't have it were at a huge disadvantage. I'm sure many of them just became extinct because they couldn't compete with cells that were producing so much energy like that. So what do you think this endosymbiont inside the cell became? What do you think it eventually became?

S: The mitochondria?

B: Mitochondria, right. It became the critical engine of our cells, what we now call mitochondria. But they're also not just endosymbionts, they're obligate endosymbionts. That means they became so tightly integrated into our eukaryotic cells that you can't pull them out. They can't live outside of them. That's it. They can only live within that cell.

C: Well, chloroplasts too.

B: Exactly. The same thing happened to plants with their photosynthetic chloroplasts. They were also formerly lone bacterial prokaryotic cells that basically hooked up awesome solar panels all over the place, on the roof, on the grass, over the pool, everywhere. And so that's just another amazing example of what this endosymbiosis was able to accomplish. So the rest, as they say, is history, right? But scientists have been wondering, if this type of endosymbiosis is so awesome, if it's such an advantage, why aren't we seeing it happening today with the many prokaryotic cells that still exist, right? You would think we'd be seeing them all over the place, and we're not. The lead author, Eric Libby, believes that this probably has to do with metabolism. He said that metabolism is a fundamental challenge. If one cell swallows another, can both grow? Can they compete in the population with others that do not have to sustain two cells? So to test this, the researchers created complex models of complete genomes of various kinds of prokaryotic cells. They called these genome-scale metabolic flux models. And I'd like to see these, because they sound pretty wicked. They were able to essentially run these models to see how these prokaryotic cells would do if they were to virtually engulf another prokaryotic cell and make it into an endosymbiont inside of them. What happens when that happens, programmatically at least? So they looked at three areas of survivability. Viability: can this new cell even grow and reproduce? Because if you introduce this endosymbiont and you don't even grow and reproduce, then you're not viable. That's it, you're done. They also looked at persistence: could this new cell survive in a changing environment, and can it compete with its ancestors and not go extinct? And then the third thing they looked at was evolvability: could this new cell garner enough favorable mutations to adapt? So what do you guys think when they looked at the viability of these cells? What do you think they found? Do you think that they were viable? Could they grow and reproduce? What do you think?

S: No.

E: No. No.

J: Yes.

E: I say yes.

C: Which cells specifically?

B: So these were the virtual cells that they made into endosymbionts. They introduced the endosymbionts inside. So basically you've got two genomes and you've got two prokaryotes living together. So when they did that in their models-

C: So they've got all the machinery that they would need. So then yeah, they're probably viable.

B: So well, they actually found that greater than 50% of them were viable, which was a surprise. More than half of them survived, which was actually surprising to them. Because you know, you're throwing these two cells together, who knows what's going to happen? But they were pleasantly surprised that over 50% survived, and that was pretty good. Now, regarding the other two things that they looked at, persistence and evolvability, the models did less well. They showed that the merged cells were in fact less fit and less evolvable than their normal ancestor cells most of the time. That's a key caveat right there: most of the time, they were less fit and less evolvable. This is what they said regarding this in the paper. They said: "We find that while more than half of host endosymbiont pairings are metabolically viable, the resulting endosymbioses have reduced growth rates compared to their ancestral metabolisms and are unlikely to gain mutations to overcome these fitness differences." So yeah, they didn't do well for two of these important measures of survivability, persistence and evolvability. So here's Libby describing some of their reactions to these findings. He said, in some sense, it's surprising how over half the possible endosymbioses between prokaryotes might actually survive. It was also surprising that given two genomes in endosymbioses, they are less able to adapt than their single genome ancestors. Both of these results went against our initial expectations. So they were surprised pretty much all around. Researcher Jordan Okie from Arizona State University said this: "This means they have a lower potential for diversifying and radiating across the planet and may help explain why with the exception of eukaryotes, there are relatively few prokaryote endosymbioses today." So this kind of makes sense, right? Maybe this is just another example of how life on earth got lucky, and we were just fortunate that our specific type of endosymbiosis happened to work out. These two cells got together and these two wacky kids worked it out. We got lucky, because if you roll that dice, most of the time it's not going to work out, and we just got lucky. So maybe that's it. Despite this poor showing, though, many of their models did show that when the resources in the environment were scarce, the endosymbiotic cells had an advantage. So there are scenarios where this actually was an advantage over the ancestor cells that weren't thrown together. So this bit of information could actually lead us to specific environments on the earth, right? Now that we know this, we could look at environments where the resources are scarce, and maybe it's there, if we focus on there, that we will find new eukaryotic-type cells that only recently evolved through endosymbiosis. So that would be cool if we could find it. Imagine if we found a new type of eukaryotic cell that, oh my god, look at this thing, joined with another prokaryote relatively recently and look what's going on. Look at these organelles in these cells. That would be fascinating to find something like that. Okay. So now regarding the future, it's no surprise that the researchers call for more research. Yes, more research like this, but yeah, I agree. Fellow researcher Christopher Kempes, also from Arizona State University, said, how hard of a challenge is eukaryogenesis? That means the creation of eukaryotes.
"We need a common scale, both for understanding the past and as a baseline for synthetic biologists who want to build new organelles to increase cellular efficiency." So I agree. It seems that if we can understand endosymbiosis more fully, we can then discover not only how earth life evolved, right, obvious, but also maybe how extraterrestrial life could potentially evolve, and even more tantalizing, and I'll end with this, how we may benefit from the artificial biological life we will almost certainly be creating in the future. I've been waiting for that for a long time. So that's it.

S: Cool.

Blue Holes (1:06:38)[edit]

S: All right Evan, what are blue holes?

E: Yeah, blue holes. Well, hey Bob, you ever heard of a black hole? I'm not sure if you've ever talked about black holes before on the show.

B: They sound cool though.

E: Yeah, yeah. But here we have a blue hole. Have we ever talked about a blue hole before on the show? I don't think so. Blue holes are underwater sinkholes, not dissimilar to sinkholes on land. Underwater sinkholes: they are calcium carbonate rock features, and they vary in size, shape and depth. And most are ecological hotspots with a high diversity and abundance of plants and animals. Thanks to our friends at NOAA for that definition. And that's today's news item, because scientists have found the second deepest blue hole in the world, in what they're calling an unlikely place. It's Chetumal Bay, located on the southeastern side of the Yucatan Peninsula. And this bay is shallow; there are parts of it that are only about six feet deep. It ranges maybe about six to 16 feet deep. So a real shallow place. The local fishermen there discovered it and reported it to some scientists, and the scientists went out and investigated it. This was back in 2021. They're calling this blue hole Taam Ja', which means deep water in Mayan. And this is, again, the second deepest one yet discovered. The deepest one is called Dragon Hole, in the South China Sea. That one was discovered in 2016. That one measures 987 feet deep, but Taam Ja' is about 900 feet deep, which means you could submerge most of the Eiffel Tower into this blue hole and have it practically disappear. Not quite, but almost.

J: How big around is it?

E: Yeah, so let's get that measurement. It encompasses an area at the surface of, they're saying, approximately 13,690 square meters, which is significant. And then as you go down, it slopes, and it slopes pretty significantly, 80 degrees or more in some areas. So picture this hole, but cone-shaped, going down to that 900 foot depth, at least that's how far they've been able to measure it. Sampling and surveying of the blue hole were conducted using scuba and echo sounders and CTD profilers. And they collected a bunch of water samples. Oh, CTD is conductivity, temperature, and depth, by the way. And the profile showed a stratified water column inside the blue hole consisting of a hypoxic layer, a chemocline layer, and an anoxic layer.

B: Damn, man.

E: A-N-O-X-I-C layer. Yeah. So not much oxygen going on down there.

B: So not much fish down there?

E: Yeah, that's right. But guess what is down there? A lot of-

B: A megalodon. A megalodon.

E: Yeah, the Meg. No, not quite, not quite. But they are home to some, I guess they would be called extremophiles, right?

B: Oh, yeah.

E: It would be, yeah, because you can't-

B: I love me some extremophiles.

E: Without oxygen, and with light that can't penetrate down there, you're going to get things that you can't readily find in other places closer to the surface, or on the surface itself.

B: So chemosynthesis, they like getting all their energy from minerals?

E: Yeah, what's down there is hydrogen sulfide, which is a deadly gas.

S: Yummy.

E: Yeah, don't soak that in, right? So you can only do so much exploring, at least people can only do so much exploring, in these things. But obviously there are things down there that frolic and thrive in the hydrogen sulfide instead of the oxygen. The nice thing is, well, it's a couple of different things, because it's limestone down there. That means you're going to have fossils down there, fossils that may show you what things were like, they think, about 11,000 years ago, which is generally when these were created, during the last ice age. And then the sea levels eventually rose and these caverns became filled in. So you have some probably very well-preserved fossils down there that you can grab. And you can also perhaps learn some more about what life could potentially be like on other planets. They say there's been some precedent for this. In 2012, researchers peering into blue holes in the Bahamas found bacteria in the caverns where no other life forms dwelled. So yeah, it could reveal some things, or some things we should be looking for, as we go and explore the moons of the solar system, among other places.

B: Yeah, baby.

E: That'd be good. But the other problem, and sort of a problem with these things, is that they're not insulated from human pollution, unfortunately. So these blue holes have also been found, not this one in particular, but other ones, with things like garbage down there. You've got plastic. You've got other things that have died and fallen in there. And they attribute that to the pollution of the water. So how much contamination has gone on now into these blue holes, that's something they're still trying to figure out as we learn more about these. But it's interesting, because you can really explore other worlds on our own planet, and you don't have to go two miles down to the bottom of the ocean either. You can just be a few hundred feet down in these blue holes and find amazing things. So very cool stuff.

B: Yeah, they are beautiful.

E: They are beautiful. Oh yeah. And just looking at them and how they stand out among the rest of the ocean or sea or wherever they lie, they really do. I mean, it's that deep blue that just absolutely stands out. So yeah, they are beautiful as well.

S: All right. Thanks, Evan.

E: Yep.

Who's That Noisy? (1:12:52)[edit]

S: Jay, it's Who's That Noisy time.

J: All right, guys, last week I played this noisy:

[background hissing with bird calls and a strong plunking in the foreground]

So what do you guys think?

E: A bird dropping marbles into a pond. Is that that kerplunk almost kind of sound?

J: It does have that kind of a drop noise to it, doesn't it?

S: The bird almost sounded fake to me. Remember those whistles you put a little water in it shaped like a bird?

C: Yeah, it sounds just like one of those.

E: Right?

C: That's so funny. I love those whistles. Those are like bird call whistles.

E: Yeah, those are fun.

J: Yeah, you say they sound fake, but they actually don't sound fake because they sound like real birds.

E: I'll assume they're real birds.

J: Before I get into the answers, I have a correction from a previous Who's That Noisy. Do you guys remember I was talking about the Vanguard satellites?

S: Mhm.

E: Yes.

J: Well, a listener named Craig wrote in and said, "Hey, Jay, hope you're doing well. I just listened to your Who's That Noisy segment on the last podcast where you described the Vanguard satellites transmitter power as 10 megawatts." And he says, "I think this was unlikely. The batteries required to output such a signal would mean the satellite could never be launched, especially back in the 50s. I thought perhaps you'd mistaken the abbreviation 'm', so lowercase 'm', capital 'W', which would be milliwatts." So he said 10 milliwatts seems more reasonable, and he looked it up and he is correct. So I made a mistake from megawatts to milliwatts. He said I was only eight orders of magnitude over. So I appreciate the correction. I have absolutely no problem putting corrections up, so thanks for that. Listener named Myron Getman wrote in to Who's That Noisy. He said, "Jay, I'm pretty sure this week's Noisy is a prairie chicken booming to attract females." That is not correct. When I read this email, I just instantly pictured a chicken with a boombox. Remember those guys?

B: Yeah.

J: Most people don't even know what the hell they are anymore.

E: The boombox? I remember that.

J: Another listener named Marie Terrill, she says, "Hi, Steve. I love you guys so much and listen every week. It's the highlight of my week. I think this week's What's That Noisy is the sound of an American dipper dipping into the water of a river. The other sounds support this, the river and the bird song of the dipper." So I listened to the sound of an American dipper and there is a little bit of a similarity. It is not an American dipper dipping into the water, but that is definitely not a bad guess. Another listener named Tim Welsh wrote in and said, "Hey, Jay, I'm pretty sure that noisy is a catbird, which is a type of mockingbird. They sound a lot like R2-D2. Love the show and I hope I can make it to Notacon." Tim, that is not correct, but because you mentioned R2-D2, I will give you a 1/8th correct answer just because I love R2-D2. But we have a winner this week. A listener named Lydia Parsons wrote in and said, "Hello, Jay, my guess for this week's Who's That Noisy is the call of the greater sage-grouse."

E: Ooh, grouse.

J: "Hopefully that is correct because I recognized it almost immediately from my years of animal show binging as a kid." So Lydia, you got it right. That is a greater sage-grouse. It's also known as the sage hen. It's the largest grouse, which is a type of bird in North America. Its range is sagebrush country in the Western United States and Southern Alberta and Saskatchewan, Canada. So I mean, yeah, it's a bird, of course. I'm sure most people knew it was a bird, but this is a specific one, the greater sage-grouse. [plays Noisy]

E: What's that plunk?

S: That's the bird.

E: The bird makes the plunk sound?

S: Yeah.

J: Yeah, that is the sound.

S: It's not unlike a brown-headed cowbird. They also kind of make that little plinking. It's like a little bit of an electronic sound. Really weird when you first hear that coming from a bird.

New Noisy (1:1:56)[edit]

J: All right, I got a new noisy this week sent in by a listener named Johnny Noble, and here it is.

[_short_vague_description_of_Noisy]

So my hint for this week is that it's not a bird and it's not any kind of sea mammal.

C: Thank you.

J: I deliberately went with something that is not either of those two things. So guys, if you heard something cool this week or if you think you know what this week's noisy is, just email me at WTN@theskepticsguide.org. Don't bother emailing me through the skeptics guide website because you cannot put attachments on there. Just use WTN@theskepticsguide.org. I'll get it. I'm the only one that'll get it and it's nice and easy.

Announcements (1:17:54)[edit]

J: So we have things going on, guys. The SGU has things on the calendar. We got May 20th, which is not far now. What are we, as we sit here, we're about just over three weeks away.

E: Roughly three weeks, yeah.

J: Yeah. So that show is going to start at 11 a.m. Eastern time. The first hour will be for patrons, and then the remaining five hours will be open to the public. We invite everyone to come check us out. We will have a link on our website as soon as that link is created, probably within a few days before the event, so it's not going to be up very soon, but it'll be up there. And we're just going to be doing a lot of different things for fun, having conversations about stuff that we normally don't talk about, and doing stuff. So join us, live stream, six hours if you're a patron, five hours if you're not a patron. That's Saturday, May 20th. In case anybody's interested, we will be at Dragon Con this year.

E: Atlanta, Georgia.

J: Yep. Just letting you know. Just letting you know we'll be there. If you're there, come up and say hi. And then the other big thing is there is a conference that we are having November 3rd and 4th of this year. It's called Notacon. Why is it called that, you might ask, Cara? Because-

C: I know.

J: Yes, well now you do. That's all I talk about. So this is a conference that is not going to be like any conference you've ever been to, because we are not going to have typical conference-like things happening. This conference is about socializing, because that is what an amazing number of people have been emailing us about, saying, when are we going to get NECSS back in person, because we miss all of our friends and we want to socialize. And after a lot of consideration, we realized maybe we should just do a con where people have the time to socialize and have fun and enjoy the skeptical community that the SGU has built. And that's exactly what we're doing. So there will be entertainment at this conference, but that's basically what it's going to be. We're going to be providing entertainment. George Hrab, Andrea Jones-Rooy and Brian Wecht will be joining the entire SGU, and we're going to be doing a bunch of fun things over the course of that Friday and Saturday to entertain you. But it's not going to be lectures. It's not going to be people standing up there talking for 45 minutes. It's going to be fun stuff. There's going to be a lot of audience interaction. And most importantly, there will be plenty of time to hang out, to talk, to have meals, and to just spend time with the other people that are attending the conference. So we're getting a lot of people that are very interested, emailing us questions. You don't have to ask questions. It's exactly what I said. It's going to be in White Plains, New York. The hotel is great and it has a pool. There will be a shuttle from Westchester Airport to the hotel. If you fly into the New York City airports, you can simply take other ground transportation, like an Uber or something like that. Probably what you should do is coordinate sharing transportation with other people that you know will be flying into whatever airport you choose to fly into. And also, just split a room with someone. You know what I mean? There's no reason why you need to have a whole room to yourself. If you want to save money, just bunk in a room with someone, or more than one person. But please do come to the conference, because it's going to be a ton of fun. It'll be unlike anything else you've ever done. That's what I'm putting right in front of you. So go to our website. The signup link is there. Buy tickets. Right now, everything is happening. So if you did pre-register with us, now's the time to buy your tickets. Now, there are rooms being held for the conference, but I can only guarantee that the first hundred rooms will get the special rate. As soon as we get close to booking a hundred rooms at the hotel, I will try to get more. But I'm just putting it out there. If you want to stay in the hotel where this is happening, be one of the first hundred people to sign up, and that'll guarantee you'll get a room in that hotel.

S: All right. Thanks, Jay.

Questions/Emails/Corrections/Follow-ups (1:21:50)[edit]

Question #1: P-Values[edit]

When you all where talking about the full moon and suicide study last week kara said that “p-values as we know are pretty mean goes as they tell us a little bit more about the analysis than the actual (pause). That’s why effect sizes matter”. Could you please elaborate on this and the sentence Cara stop herself from finishing accidentally? How are p values better for understanding the analysis and what are then effect sizes better for? This seems like a really important statistical concept to grasp for us skeptics so I wanted to ask this.
–Anthony

S: All right, guys, we're going to do an email. This one comes from Anthony. And Anthony writes, "When you all were talking about the full moon and suicide study last week, Cara said that P values, as we know, are pretty mean. Goes as they tell us a little bit more about the analysis than the actual pause." That didn't make sense.

C: Didn't you say pretty meaningless?

S: Well, he said, "they are pretty mean goes as they tell us a little bit more about the analysis than the actual", and then you paused. "That's why effect sizes matter."

C: Yeah.

S: Yeah. I don't know.

C: I don't think that's exactly what I said, but OK.

S: I think we lost something in the translation there. "Could you please elaborate on this, and the sentence Cara stopped herself from finishing accidentally? How are P values better for understanding the analysis, and what are then effect sizes better for?"

C: I can elaborate. Can I elaborate, Steve?

S: Yeah, go ahead.

C: And then you can elaborate?

S: Yeah.

C: All right. So the effect size is really the main thing you're looking for in a study. So here's the difference between the effect size and a p-value. A p-value specifically tells you whether or not something reaches a level of significance. So basically you have a population and you're taking a sample of that population. And based on the normal curve, if you look at sufficiently large samples of data, you're always going to find some significant contrast. Let's say you're doing t-tests, ANOVAs, correlational studies, whatever your statistical analysis is; when you compare enough things within that dataset, some of them are going to come up as related to one another in whatever way you're studying. And what you're asking the analysis is, is this due to chance, or is this an actual effect? And all a p-value tells you, whether your cutoff is 0.05, 0.1, whatever, is if it's greater or less than that cutoff, if it's significant or not, whether we can say that we think this is a real effect and not that it's due to chance. But whether that number is 0.001 or 0.0001 or 0.00001, that doesn't tell you anything more. That's when you have to look at the effect size. The effect size tells you the magnitude of the relationship. Is it a strong relationship or is it a weak relationship? Does this variable affect this other variable a lot or a little bit? A p-value is kind of an all or nothing response. Either it is significant or it's not, and you can easily hack that number, even unintentionally. It's a good statistic. It's important. It tells you, based on the way that you're doing your analysis, whether it's likely that these things are related or that they happened by chance alone. But it doesn't tell you how strong the relationship is. That's what the effect size is for. That's why we're seeing more and more journals that are requiring effect sizes to be published. Does that make sense?
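To make Cara's distinction concrete, here is a minimal sketch with made-up data of how a large sample can produce a "significant" p-value for a negligible effect. The group names and numbers are purely illustrative:

```python
# Contrast a p-value with an effect size (Cohen's d) on simulated data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Two hypothetical groups with a tiny true difference but large samples.
control = rng.normal(loc=100.0, scale=15.0, size=5000)
treated = rng.normal(loc=101.0, scale=15.0, size=5000)

# The p-value: is a difference this big unlikely if the null is true?
t_stat, p_value = stats.ttest_ind(treated, control)

# Cohen's d: how big is the difference, in pooled standard deviations?
pooled_sd = np.sqrt((control.var(ddof=1) + treated.var(ddof=1)) / 2)
cohens_d = (treated.mean() - control.mean()) / pooled_sd

print(f"p = {p_value:.4g}, Cohen's d = {cohens_d:.3f}")
# With 5,000 per group, p typically comes out "significant" (< 0.05)
# even though d is only ~0.07, a negligible effect. The p-value answers
# "is there an effect at all?"; the effect size answers "how much?"
```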

S: Yeah, it does. But let me give you a couple other ways of looking at it, because it's more complicated than that, obviously. I know you know that.

C: Obviously.

S: There's multiple different statistical ways we could look at a study to say: is this significant? Is it statistically significant? Is it likely to be true? And how much of an effect is there? How much does this change the probability that this is true or not? So the technical definition of a p-value is actually the probability that the results would be what they were or greater, given the null hypothesis, which is kind of a backwards way of looking at it.
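In symbols (our notation, not from the show), that definition reads:

 p = P(T \geq t_{\mathrm{obs}} \mid H_0)

where T is the test statistic, t_obs is the value actually observed, and H_0 is the null hypothesis. The conditioning bar carries the whole meaning: it is the probability of data at least this extreme given the null, not the probability of any hypothesis given the data.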

C: But sadly, that's how we do all of our statistics is based on null hypothesis testing.

S: It doesn't really mean the chance that it's real or not. That's a big misconception to make sure we don't walk away with: the p-value is not the probability that the effect is real.

C: The probability is not the probability that it's real. Yes, you're right. That's a very important point to make.

S: That's actually more of a Bayesian analysis. The Bayesian analysis asks: what is the pre-test probability, and what's the post-test probability? So how much does this data change the probability that the hypothesis is actually true? That's actually a really good way of analyzing the data. For clinical studies, effect size is critical. And it's not just what's the effect size, but is it clinically significant? Like, if it's a reduction in pain, is it an amount of reduction that an individual person would notice, or is it just a statistical phenomenon? If it reduced your cold by, on average, one hour, that's clinically irrelevant, even if it's statistically significant. You can also slice that data up differently to make it more intuitive, or to get a better perspective on whether it's meaningful. I like the number needed to treat way of looking at it. So how many people would you need to treat with this treatment before one person is likely to have benefited? That's another way of looking at the effect size. Maybe you need to treat 100 people just to help one person, and you can compare that against how many people are harmed for how many people you treat. So the p-value is just one way of looking at the data statistically. It's not a terribly good way.
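To illustrate the two ideas Steve raises, here is a short Python sketch (ours; the trial numbers, the prior, and the likelihood ratio are all assumed values, not real data) computing a number needed to treat and a Bayesian pre-test to post-test update:

 # Number needed to treat (NNT), with invented trial numbers:
 improved_on_treatment = 30 / 200   # response rate in the treatment arm
 improved_on_placebo = 20 / 200     # response rate in the placebo arm
 arr = improved_on_treatment - improved_on_placebo  # absolute risk reduction
 nnt = 1 / arr
 print(f"NNT: {nnt:.0f}")  # ~20: treat twenty people for one extra responder
 
 # Bayesian updating: how much should this result move our belief?
 prior_prob = 0.10        # assumed pre-test probability the treatment works
 likelihood_ratio = 5.0   # assumed strength of the evidence
 prior_odds = prior_prob / (1 - prior_prob)
 post_odds = prior_odds * likelihood_ratio
 post_prob = post_odds / (1 + post_odds)
 print(f"Post-test probability: {post_prob:.2f}")  # ~0.36: more plausible, far from proven

The design point: the same data can look impressive as a p-value yet barely move a low prior, which is why the pre-test probability matters.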

C: It's not actually a very good way at all.

S: It's way overused and people interpret it incorrectly. You're far better looking at several different ways of looking at the statistics. And just please don't confuse a p-value for the probability that this is a real phenomenon because that's not what it is.

C: Well, and let's actually, if you don't mind, let's break that down for just a second, because I think it'll be helpful. I don't know if everybody knows what the null hypothesis is. But when we do testing, we might say: what is the likelihood that if I give this plant coffee grounds, or, I don't know, let's use something that we know is going to work: if I give this plant plant food, it will grow taller than the plants that I don't give plant food to? That is your hypothesis. That's hypothesis one. The null hypothesis is: if I give this plant plant food, it will not grow any taller.

S: That is, the hypothesis is not true?

C: Right. It's saying that the hypothesis is not true. And what we do in science is we try to disprove the null. We cannot prove the hypothesis. We try to disprove the null. And so what we're saying with a p-value is what is the probability that if the null is true, meaning there is no relationship between these variables, that the plant food and the growth of the plant are in no way related. What is the probability that we will get a chance result?

S: Yeah, this data that we're looking at.

C: Yeah, the data that we're looking at will say, by chance, there's a relationship here, when we know there's no relationship because the null hypothesis is true. So when you said earlier, how did you phrase it? It's not the probability that the thing works. It's the probability that the not-thing doesn't work. And I know that that sounds crazy, but that is a really important way that we do science. So really, for a lot of people, a p-value is just a cutoff. It's a threshold. That's all it is.
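One way to make this null-hypothesis logic tangible is to simulate it. This sketch (ours, not from the show) runs thousands of fake experiments in which the null is true by construction, plant food does nothing, and counts how often a t-test still comes up "significant":

 # Simulate experiments where the null hypothesis is TRUE by construction:
 # both groups come from the same distribution (plant food does nothing).
 import numpy as np
 from scipy import stats
 
 rng = np.random.default_rng(0)
 n_experiments = 10_000
 false_positives = 0
 for _ in range(n_experiments):
     fed = rng.normal(loc=10.0, scale=2.0, size=30)    # plants with plant food
     unfed = rng.normal(loc=10.0, scale=2.0, size=30)  # plants without
     _, p = stats.ttest_ind(fed, unfed)
     if p < 0.05:
         false_positives += 1
 print(f"'Significant' under a true null: {false_positives / n_experiments:.1%}")

The result hovers around 5%, which is all the 0.05 threshold promises: a cap on false positives when the null is true, not the probability that any particular effect is real.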

S: Is the data interesting? It doesn't mean it's real or not.

C: Is the data even worth continuing to talk about?

S: A good way to think about it, I think, bottom line, because I know people are probably confused at this point, is that if something is not statistically significant, then there's probably no real effect there. If it is statistically significant, then there may be. It doesn't mean that there is.

C: Yeah, then there still might not be an effect.

S: It still might not be an effect. At least you're in the ballgame now. If you're not even statistically significant, you're not even in the ballgame. There's definitely nothing going on here.

C: Yeah, I almost think about it as the way that we often will talk about, and this is a non-statistical thing that we do, it's more of a critical thinking, reasoning thing. But Bob, I hear you say this a lot, and I think it's important: is there plausibility? Is there face validity to this claim? And that's sort of, in some ways, the way we should be looking at a p-value.

S: Yeah, now you're getting Bayesian.

C: Now it seems like there's something.

S: Yeah, we're talking about prior probability and post probability. All right. Let's go on. Let's go on with science or fiction.


Science or Fiction (1:29:53)[edit]

Theme: Artificial intelligence

Item #1: Chat GPT-4 was able to pass the Uniform Bar Exam, scoring in the 90th percentile.[6]
Item #2: The US Copyright Office has issued guidance that registrants must disclose any AI-generated material in their work and it will not issue copyrights for content created using artificial intelligence software.[7]
Item #3: An amateur Go player, without any computer assistance, beat the best Go-playing AI in 14 out of 15 matches.[8]

Answer Item
Fiction: Amateur Go player vs AI
Science: Chat GPT-4's Bar Exam
Science: Copyrights for AI content

Host Result
Steve: win

Rogue Guess
Evan: Copyrights for AI content
Cara: Amateur Go player vs AI
Bob: Amateur Go player vs AI
Jay: Copyrights for AI content


Voice-over: It's time for Science or Fiction.

S: Each week I come up with three science news items or facts. Two real, one fake. And I challenge my panel of skeptics to tell me which one is the fake. We have a theme this week.

J: Uh-oh.

S: It's a topical theme.

E: Skin.

S: These are recent news items. They all deal with artificial intelligence.

J: Yeah, baby.

C: Oh, shit.

S: Let's see how much you've been paying attention.

J: Let's do it.

E: Oh, gosh.

S: Okay. Item number one. Chat GPT-4 was able to pass the uniform bar exam scoring in the 90th percentile. Item number two. The US Copyright Office has issued guidance that registrants must disclose any AI-generated material in their work and it will not issue copyrights for content created using artificial intelligence software.

C: What?

S: Item number three. An amateur Go player, without any computer assistance, beat the best Go-playing AI in 14 out of 15 matches. All right, Evan, go first.

Evan's Response[edit]

E: Okay, Chat GPT-4. Is that the one that's accessible?

S: Yeah.

E: Isn't there one that's not accessible yet? Is that five? It must be five.

B: Five's not out.

E: Five's not out. That's what I was thinking. All right. So, Chat GPT-4 passed the uniform bar exam scoring in the 90th percentile.

S: So it could be a lawyer, basically.

E: There's not much to analyze here. It just either did or it didn't. I can't see anything in here that hints one way or the other as to whether it's correct or not correct. And the second one. The US Copyright Office has issued guidance that registrants must disclose any AI-generated material in their work. The Copyright Office? And it will not issue copyrights for content created using artificial intelligence software?

S: Yeah, you can't copyright anything that you make if you used artificial intelligence to make it. Right? So that artwork you made using Midjourney can't be copyrighted.

E: Would the Copyright Office put that kind of restriction on there? I mean, do they care that much?

B: Are you talking about the thing being patented or the actual patent form itself?

S: Copyrighted. Intellectual property.

E: I mean, it sounds like I don't know. Maybe they haven't issued that guidance yet. I don't know. And do they care that much? They've been lax in so many other things. They give you a copyright on the freaking what? The peanut butter and jelly sandwich? I mean, right? But this they're going to go full blown, like, hardcore over? Maybe not. Maybe not. And then this amateur Go player without any computer assistance beat the best Go playing AI 14 out of 15 matches. I'm trying to think about, oh, the computers. We spoke about this years ago. There was a test with computers. And was it the IBM computer? The one that did the Jeopardy one? It was a different one, right? But it definitely was Go. And I remember it.

S: Bob, contain yourself.

E: I remember it performed well. Steve, don't interrupt his assistance in my guess here. But in any case, so, okay. Yeah, but I think that computer was, what, designed to play Go? Or is Chat GPT? I'm sorry, what is this? Best Go AI. Go-playing AI. This is specifically Go-playing AI. Hmm. Maybe the human still has advantages there. I'm going to say the Copyright Office one is the fiction. I just don't know that they've been this tight with what gets copyrighted and what doesn't, because I've seen all kinds of weird stuff kind of get through. So I think that one's weak. Copyright, fiction.

S: Okay, Cara.

Cara's Response[edit]

C: Sure, it passed the bar. 90th percentile. Details matter. Maybe it didn't pass that high. U.S. Copyright Office has issued guidance. I like that wording. Makes me think it doesn't have to be set in stone yet. It's just guidance. You just have to disclose it. And if you disclose it, we're not going to give you a copyright. I don't know if there's laws yet.

S: I mean, I'll just tell you, that means it's their policy.

C: It's their policy, right? But I don't know if there's actual legislation yet.

S: Well, they're a regulatory body. They don't make legislation. They just carry it out. So that's basically as good as it gets. It's like the FDA making a decision about a drug. It's the same thing. It's not a law. They're just doing their regulatory thing. This is what they're doing. They're not giving copyright to intellectual property created with AI.

C: I think that makes sense. I think that it makes sense that very often when there's something new or something worrisome from a regulatory perspective, there's a sweeping response in like a severe direction. And then it iterates over time. And I think probably the safe response in this case might be a little more draconian as opposed to just being completely open. I don't think we've had time yet to get into the nuance. So in order to prevent any problems, I could see why they would just say, no, no AI in anything you're trying to copyright right now. And then we'll figure it out later. I think the one that bugs me is the Go one because not only are you saying that a Go player beat, like a real person beat the AI 14 out of 15 times. You're saying that an amateur, and I'm not saying an amateur is bad, an amateur is great, but you're not even saying that a professional did it. I don't think that's true. I think that the AI probably was matching tit for tat with this Go player. So I don't know. That's the one that bugs me. I'm going to say that one's the fiction. I think it's easier to pass a test than it is to play a complex game that has a lot of like chaos theory built into it. And even still, I think it still probably did it.

S: Okay, Bob.

Bob's Response[edit]

B: Oh, man. I'm going to be so pissed at you over this. Yeah, the bar exam. I remember reading about that back when 4 was released. 90th percentile seems a little high. I don't know if this is a new test and they hit 90. For some reason, I think it wasn't quite up to 90th percentile. But yeah, that's within the realm of reasonableness. The Copyright Office kind of makes sense. It's so crazy right now. I'm just kind of like, yeah, we're not going to get mired in this right now. And I could see them potentially changing their mind in the future. But they just don't want to even deal with it. I can kind of see that as well. The one that's like, you've got to be shitting me, is this third one with the Go. AlphaGo. AlphaGo, deep learning, big news item, defeated the champ. I don't remember how brutal of a beating it was, or even if it was brutal. I know that AlphaGo won. It was huge because Go is much more complex than chess. Much harder to do. But they used their new system which basically didn't feed any human play into it at all. It was just like: here's the game, here's the rules, figure it out. And when they did that with chess, that's when they created the most amazing, superhuman chess program ever, that no person will ever beat. And if you said that this was chess, then that would absolutely be the fiction. There's no way anyone's beating that AlphaZero chess. But AlphaGo, I just don't, could it have been he played so counterintuitively, like Kirk playing Spock, that it just blew out the algorithms? I doubt it. I mean, I really can't imagine he's going to beat this. But you know that I knew that you know that I know that you know that I know that that's bullshit.

C: He's so mad. I love it. Bob, what are you going to say? I got to know if I'm right or wrong.

B: I mean, my gut feeling was like he knows I'm going to, Steve knows I'm going to go for this and call it fiction.

E: He-he. Go.

B: He was trying to catch us out on this, specifically me.

C: You know what a rabbit hole that is. You know how dangerous it is.

B: Yes, I know. I know the rabbit hole. Everything else seems real. This is the one that's like, really? And I just know it's going to be like, oh yeah, he did this stupid trick. He did the Data trick, like Data on Next Gen, where he played the champ and he didn't play to win. He played not to lose, and he frustrated the guy into quitting. Is that what happened? Did he do the Data maneuver? I don't know.

S: So that's your answer?

B: I'm going to put my trust in deep learning and AlphaGo and just say no, the amateur did not beat it. Fiction.

S: Okay.

E: There you go, Bob.

S: Ok, and Jay.

Jay's Response[edit]

J: The first one, I do think that Chat GPT-4 passed the uniform bar exam in the 90th percentile. Yeah, I think that happened. Absolutely. The second one, about the Copyright Office. So this is a little tricky. I'm going to read this correctly. They issued guidance that registrants must disclose any AI-generated material in their work, and it will not issue copyrights for content created using artificial intelligence software. Well, I agree with Evan that I don't think they care. I think that they're going to let you copyright something. It's a timestamp. It just says: I made this on this date. And if there's ever a conflict, they will check dates to see who created it first. I don't think it matters if AI helped in any way. I think it is the right thing to do to disclose it. And that's a whole other conversation, like what level of disclosure should you give, and all that stuff. As life goes by, we will figure all that out. But no, I really don't think that the Copyright Office is paying attention to details on that level. They couldn't administer that. So I agree with Evan. That one is definitely the fiction.

B: Yeah, but that means AlphaGo is science.

E: Well, tune in next week, folks. We'll give you the answer to this week's... Steve, you don't have to play it up.

Steve Explains Item #1[edit]

S: You all agree on the first one, so we'll start there. Chat GPT-4 was able to pass the uniform bar exam, scoring in the 90th percentile. You all think that one is science. And that one is science. Yeah. Now, Chat GPT isn't passing all of the exams that it's being given. A lot of people are giving it exams. This is the one, I think, where it did the best. And it kind of makes sense, because the law is very language-based. And it blew away the test. It would have passed in every state in the US. It got a combined score of 297, which is greater than the highest threshold in any state, better than basically 90 percent of human test takers. It did do a little bit better on the multiple choice than on the essay part. But it did well on the essay part too, having to actually write out answers. So yeah, it did really, really well. It's not that surprising. I know it's passed medical exams, although not every one of the specialty exams it was given. It hasn't done better than humans on every professional exam it's been given. But it's kicking butt so far. Okay.

B: Yeah, pretty impressive.

S: I guess we'll go to, hmm, should we go to two or three? Two or three? Let's go to number three.

Steve Explains Item #3[edit]

S: An amateur Go player without any computer assistance beat the best Go playing AI in 14 out of 15 matches. Bob and Cara think this is the fiction. Jay and Evan think this is science. So let me ask you a question. Did any of you guys see the email today where somebody sent us this news item?

C: No.

E: Wait, what?

C: Are you kidding me?

S: Because-

C: No Steve, we have jobs.

S: -an amateur Go player did beat the best Go playing AI in 14 out of 15 matches. But not without computer assistance. So this is the fiction. So this has been all over the news. I thought I was going to get you on the without any computer assistance thing because you got to read the details.

B: Oh, nice. Nice try. Nice try.

S: What happened was they used a computer to figure out the Go playing AI's weaknesses.

B: Oh, interesting.

S: And it found a hole in its strategy. And then, without further assistance, an amateur Go player was able to learn the technique from the computer and then use it against KataGo, K-A-T-A Go, the current, I guess, best Go program, and beat it 14 out of 15 times. It has to do with encircling a group of your opponent's stones with your stones. That's the strategy. It is kind of a weird strategy. And the other thing is, a human would see it coming a mile away. But it was just never part of the training data, because it's not something a professional would do.

B: I don't think it had training data. So what do you mean by training data?

S: Well, I mean, whoever it was playing against to learn how to be good at playing Go.

B: No, I think the point of the latest deep learning models was that there was no training data.

S: It wasn't pre-trained.

B: It trained against itself. It played itself. And that's how it learned what worked.

S: Well, it never encountered this strategy.

B: Wow.

S: So it was a hole. It's a good example of how powerful and brittle AI can be at the same time, right? Because, yes, it can blow away Go masters, no problem. It's really, really good. But if you find something that wasn't part of its pre-existing knowledge base, it doesn't have the real deep understanding to innovate, or to see something and say, oh, this is a pattern I haven't detected before. What does this mean?

B: Yeah.

S: It doesn't know. It can't figure it out from first principles because it doesn't know the first principles. It just knows the patterns it needs to do in order to win. You know what I mean? So-

B: Yeah. It probably played against itself millions or billions of times to learn how to be a good Go player. And somehow this, I don't know.

E: It never showed up.

B: Maybe it needs to do, it needs to play against itself a hundred billion times. I don't know. But I mean, I can't wait to read about this.

S: Yeah. Fascinating. All right.

Steve Explains Item #2[edit]

S: Which means that the US Copyright Office has issued guidance that registrants must disclose any AI-generated material in their work, and it will not issue copyrights for content created using artificial intelligence software, is science. It kind of had to make a decision, because somebody applied for a copyright for artwork they generated using Midjourney, and the Copyright Office said, no, we're not going to let you copyright that, because you didn't create it. The software created it. And it's being widely criticized as being stupid, because it is being overly cautious. Maybe that's one way to justify it. But the idea is that there isn't sufficient human creativity in the process to say it's your intellectual work, which really isn't true. If you're an artist using it as a tool, there's a lot, and it really-

C: But what if you're not?

S: -and it really stems from, well, the thing is, it's got to be case by case, but they're making a blanket statement: if you used AI, you don't get credit for it. And that's the problem, that it's such a blanket statement that they're making.

E: You have to disclose it is what they're saying. You can do it, but you have to disclose it.

S: Yeah, but then they won't copyright it.

C: No, they're saying once you disclose it, they won't.

S: Yeah, you have to disclose it.

J: I'm surprised, Steve, because that's a lot of work for them.

S: Yeah. Well, it just means that if you say, I used AI to make this, they'll say, OK, you're not eligible for copyright. So it basically downplays the actual input of the human user, the prompt maker, if you will. Like the artwork that prompted this decision: again, it was an artist using, I think it was Midjourney, who went through hundreds of iterations, and there was a lot of work involved in getting the picture that they wanted. And this is the discussion we've been having. How much of that output is the AI? How much is the user? Is it art? Is it intellectual property? But now a bureaucratic office had to make a decision that was very specific and practical, and they just said, no, we're not copyrighting it. So it's a very interesting decision. I'm not sure that I agree with it. I do think it's erring way in one direction. And again, a lot of the commentary that I was reading is like: the purpose of copyright is to promote creativity, right? It's to give people credit for the work that they do. And this is not going to accomplish that, right? Because why would somebody put in the hundreds of hours to create something if they're not going to get copyright on it, because of the tool that they were using? Again, you could use the photograph analogy. It's like saying, well, you didn't create that. You just took the picture, you know?

E: Oh, yeah, the monkey who took the selfie.

S: Yeah, whatever. Yeah, who gets credit for the monkey who took a selfie? So I guess we're all monkeys now.

E: Yeah.

S: I've been using Midjourney the whole time we've been recording this show, by the way. (laughter)

E: Well, don't try to get a copyright on it.

S: I'm not. This is all for my personal use, but it's just running in the background, because you can put your prompt in and forget about it for five minutes. All right. Well, good job, Bob and Cara. You backed your way into a victory this week, but it still counts.

B: Yeah.

J: I was absolutely sure Evan and I were correct.

S: Yeah. Jay, I got to say, when you say I'm absolutely sure, you're almost definitely wrong.

J: I know, right?

E: I was not so sure.

Skeptical Quote of the Week (1:48:40)[edit]


It would be useful if the concept of the umwelt were embedded in the public lexicon. It neatly captures that idea of limited knowledge, of unobtainable information, of unimagined possibilities.

 – David Eagleman (1971-present), American neuroscientist, Baylor College of Medicine


S: All right, Evan, give us a quote.

E: "It would be useful if the concept of the umwelt were embedded in the public lexicon. It neatly captures that idea of limited knowledge, of unobtainable information, of unimagined possibilities." David Eagleman, who's a neuroscientist at Baylor College of Medicine.

S: So what is umwelt?

C: Umwelt.

E: Umwelt.

S: Umwelt?

C: Umwelt is sort of your...

S: Give us the gestalt of umwelt.

C: Well, that's like a good way to put the gestalt, actually, because it's sort of like your perspective, your experience as an individual.

E: Right. A mosquito has a certain perspective of the world. A human has a very different perspective of the world. So the umwelt of a mosquito is different than the umwelt of a human.

C: And individual humans have different umwelts, depending on their culture. Yeah, yeah, yeah.

S: Yeah. That's interesting.

E: So it's interesting.

S: Yeah.

C: And you know, David Eagleman famously, he studies creativity. He also famously studied synesthesia.

S: Oh, yeah? And so he's very interested in the relationship between the individual and their experience, how they perceive the world. I worked with David on a TV show once, where he was doing this fascinating thing: he developed a vest using little cell phone motors that vibrate. And he basically mapped out almost a version of the cochlea on the vest, so that people who can't hear could perceive sound through tactile stimulation.

E: Neat.

C: Yeah, based on the way that the vibrations would be up or down. And it was funny, because I was like, how on earth do they make sense of this? Same as the people you've seen who are blind, but they have the thing on their tongue that makes little prickles on their tongue. Like, how on earth do they? And it's like, the brain just maps it eventually. It just does it. It's not even conscious. It's very cool.

S: Yeah, just the idea that the brain maps to the world and creates the illusion of reality inside our brains is a cool one and I think a necessary one for skepticism.

C: Yeah, for sure.

S: All right. Well, thank you all for joining me this week.

B: Sure, man.

C: Thanks Steve.

E: Thank you, Steve.

Signoff[edit]

S: —and until next week, this is your Skeptics' Guide to the Universe.

S: Skeptics' Guide to the Universe is produced by SGU Productions, dedicated to promoting science and critical thinking. For more information, visit us at theskepticsguide.org. Send your questions to info@theskepticsguide.org. And, if you would like to support the show and all the work that we do, go to patreon.com/SkepticsGuide and consider becoming a patron and becoming part of the SGU community. Our listeners and supporters are what make SGU possible.



References[edit]
