SGU Episode 884

June 18th 2022
884-Virtual-Pop-Stars.jpeg

SGU 883                      SGU 885

Skeptical Rogues
S: Steven Novella

B: Bob Novella

C: Cara Santa Maria

J: Jay Novella

E: Evan Bernstein

Quote of the Week

Science is a search for basic truths about the Universe, a search which develops statements that appear to describe how the Universe works, but which are subject to correction, revision, adjustment, or even outright rejection, upon the presentation of better or conflicting evidence.

James Randi, Canadian-American magician and skeptic

Links
Download Podcast
Show Notes
Forum Discussion

Introduction, JWST Hit, Force Fields

Voice-over: You're listening to the Skeptics' Guide to the Universe, your escape to reality.

S: Hello and welcome to the Skeptics' Guide to the Universe. Today is Wednesday, June 15th 2022, and this is your host, Steven Novella. Joining me this week are Bob Novella...

B: Hey, everybody!

S: Cara Santa Maria...

C: Howdy.

S: Jay Novella...

J: Hey guys.

S: ...and Evan Bernstein.

E: Good evening everyone!

S: So did you guys hear the news about the James Webb Space Telescope?

B: Oh boy.

S: The James Webb Space Telescope's main mirror was hit by a dust-sized micrometeoroid, and it caused noticeable damage. It's actually affecting the data that they're getting back from it.

B: Oh my god.

S: But they say that it should not affect the mission's overall performance. They'll be able to compensate for it. It's not going to in any way reduce its mission. But it took a hit.

E: Well, they anticipated that it would. In an interview or a video that Fraser Cain put out about it, he talked about how these telescopes and other things that are sent into space are built with certain tolerances and certain allowances, basically, for those kinds of strikes. They understand it when engineering these devices, and so it's anticipated ahead of time.

S: Yeah but it is a little scary that it got hit this soon. I mean I guess it's just bad luck. But it makes it seem like yeah this is going to be a regular event. It's not like it took years.

B: Yeah, is it a one-off, or is it something because of the Lagrange orbit? Is it something because of where it is? Because it's been a while since we had anything there that was in the news.

S: Well we'll see.

J: That sucks.

E: Affects the resale value of this thing.

S: Yeah.

E: Get a ding like that.

S: Well, it's supposed to have a good lifespan, 15 to 20 years. So I guess they're just calculating that it's going to slowly take damage over that time, but it'll function. It won't be the limiting factor on its function, but maybe they'll have to recalculate. Who knows.

E: Yeah, and it's not the kind of thing you can repair. (laughs) Maybe that was one of the advantages of a Hubble-like system: it was so close to Earth it could be fixed as it took hits and other problems occurred with it. But here, no chance for that. You're just gonna get what you get. You don't get upset.

S: But there was no choice this had to be far away from the Earth.

E: That's right. Of course. Yeah.

S: Because of the type of instrument. Yeah.

B: Yeah, they've got to turn on the damn force fields before it happens again. Oh wait, that goes against the laws of physics.

E: So maybe. I don't know. Do you cover that in your upcoming book at all? About force field technology?

B: Yeah we do.

S: Yes we do.

C: Nice.

B: I think I researched that.

E: Is there any plausibility whatsoever to any of it? Or is it total myth?

S: So─

B: Depends how you define force field.

S: ─yeah. So, as typically depicted in Star Trek or whatever? No. That kind of force field--there's just no known plausible way within the laws of physics to do something like that. But can you use energy to protect a ship or a location? Sure. Like a really powerful magnetic field─

B: Plasma, laser.

S: ─would repulse anything that has an ionic charge, anything that's charged. You can use plasma and other things that could deflect impacts. But there are huge downsides to that as well, because it wouldn't be invisible.

E: Yeah, it wouldn't be invisible, right? So you have to design whatever it is you're designing with that in mind. And what sort of energy are you looking at to power something like that? It depends on how effective you need it to be, and what you're trying to deflect. But yeah, it's an interesting thought experiment. That's one of the parts of the book. Essentially the last third of the book is the thought experiment where we go through all of the sci-fi technologies and just talk about: could this even happen? What's the closest you could get to this? Or is it just theoretically impossible? Obviously it's always hard to account for ridiculously advanced technology, like a million years from now. But still, the laws of physics are the laws of physics as we know them, so we can talk about plausibility. Which was fun. It's a fun part of the book.

B: Yeah, another angle I just remembered about force fields is point forces. I mean, if you think about it, a 360° force field is a little bit wasteful. Now imagine--and we have these now to a decent extent, actually--where you can apply the amount of force you need at a specific point very quickly. You wouldn't really call that a force field, though, although it can be, and actually is, some form of a defensive shield. So it depends how you define force field, like we said.

S: Yeah, you would imagine that any civilization with the technology to wield those forces that way would probably have advanced enough AI that it could very quickly put the energy where it's needed, rather than having it in place from all directions at all times. That would suck up a lot of energy.

E: In the case of the James Webb impact that has occurred is there existing technology that would have potentially prevented that?

B: No. Too small too fast is my knee-jerk reaction.

S: I mean if you had a companion sensor system with lasers that could shoot incoming dust particles.

E: Oh yeah. You put a satellite around the telescope. Something that orbits it.

S: It could theoretically work. But yeah, I guess you wouldn't want to shoot a powerful laser near the telescope─

E: True.

S: ─because it could cause more damage than the meteoroid you're trying to deflect, if you're not careful about it. Yeah, it's an interesting problem.

The Skeptics' Guide to the Future (5:54)

S: So, for those who are interested, since we're talking about it--thanks for bringing it up, Evan--our next book, The Skeptics' Guide to the Future, is coming out September 27th. It discusses futurism itself. It discusses a lot of modern technology and where we think it's gonna go in the future. We talk about science fiction technologies; the near-, mid-, and long-term future. A lot of fun to research, a lot of fun to write. Coming out September 27th, but you can pre-order it right now, and we strongly encourage you to do so. If you pre-order it, especially the hardcover, it really helps our overall sales. So if you're at all intending to get the book, we ask you to just pre-order it now, before the launch date. That's how you'll get it the soonest, but it also helps the overall sales of the book, so we'd appreciate it.

J: You can go to theskepticsguide.org/our-book. I can't make it any easier than that folks. And you'll see both of our books on there. The new one is The Skeptics’ Guide to the Future.

B: And it's worth it just for the cover. There are lots of nice words inside, but the cover itself is awesome.

S: Yeah. Cover is fun.

High School Reunions (7:07)

S: Before we go to the news items Bob and I were at our 40th high school reunion this past weekend.

B: Oh boy.

C: Wow.

E: Wow that's eight years worth of reading.

B: I told you not to say that number. (Cara laughs)

S: It's really interesting seeing people. A couple of people I hadn't seen in 40 years, obviously, since high school. Some people I had seen at previous reunions, so I had seen them in the intervening time. But what's funny is trying to recognize people that you only knew when they were 18 years old. And it's amazing how different. So there are some people who are identical to the way they were when they were 18. Literally, they look like they just had a mild aging program run on them (laughter), but they looked exactly the same. Instantly recognizable. And then there are other people where I was struggling to see their 18-year-old self in the current one. For some reason they just look so different.

C: They like transform.

S: I almost would not have recognized who they were. And I would not have recognized them if I weren't at a reunion, you know what I mean? Where I knew, this is somebody in my class. But if I saw them on the street I would not make the connection. They look so different for some reason. And it's not just the usual thing, not just weight and age. Something fundamentally morphed about their features.

C: What size was your graduating class?

S: Not big because it was─

B: A 100?

S: ─it was less than a hundred.

C: Oh so you knew literally everyone.

E: We took classes with most of them.

C: That's amazing. See, when I think about going back to a reunion, I'm like, maybe I'll recognize a dozen or so people, because I went to school with thousands of people.

E: Oh wow.

C: Yeah my graduating class was like I think between 1500 and 2000.

J: Oh my god.

B: Nuts.

J: I mean you legit don't know most of the people in your class.

C: Yeah. And my school only had two grades in it. It was a senior high for that reason because there were so many kids.

S: And we had our die-hards. These are people who were there all four years. It was only 16 of us.

C: What!?

B: Yes. I remember that picture the die hard picture.

C: Oh so other people kind of came and went? They transferred in.

S: They came in later, or they came and went. Absolutely. And only 16 of us were there for all four years.

E: Four years.

S: So it's a very tight group.

C: Yeah.

S: Very tight group.

E: They're all good skeptics, right?

S: Nope. (laughter) Not even close.

B: Nope. Yeah.

S: We won't get into any details, but that is not the case.

E: Just like real life. A sampling of any random group of 16 people.

S: I think we all just silently agreed not to talk about politics or certain things.

E: Good. Healthy.

S: Would not have been a good idea. We just stuck to reminiscing about high school days and talking about other things. But we didn't--we knew we shouldn't go there.

E: Compact discs and Die Hard.

S: Yeah. But while Bob and I were there, we met with the librarian and we donated a hard copy of our first book to the library. They have─

C: Oh fun.

S: ─yeah, it was fun. They have a shelf for books written by alumni. Including JFK. John F. Kennedy went to that school. He went to the same school.

C: You went to the same school as JFK?

E: What's the name of his book?

B: For a little while.

S: Profiles in Courage, I think, was there.

C: Were there any like─

E: You think he really wrote that?

C: (laughs) Probably not. Next to the Skeptics' Guide book, was there like a cryptozoology book? Are there any books written by alumni that were like that?

S: Yes but they weren't there.

C: Oh. (laughs)

J: So Steve, you know the obvious question is, are our books there?

S: Or what?

B: Our. O-U-R. Yes. Our book is there now.

C: Or, our.

S: Yeah I couldn't parse that sentence.

E: A-R-E?

C: R-R?

J: So you donated one is what you're saying.

S: We donated one for the library.

C: That is literally what he said. (laughs)

S: That's what I just said--it's on the alumni bookshelf.

B: So Steve brings the hardcover--and there are not too many of those bad boys around anymore--over to my house, because we were going together. He opens the book so we could sign it, and there's Cara's name in the book. Like, oh, okay.

S: Well you pre-signed some books.

C: Oh yeah I signed a bunch of them.

S: Yeah.

E: Ah, perfect.

C: Whoops. Sorry guys.

S: No worries. Doesn't matter.

C: Well now Evan and Jay have to sneak into the school to sign it as well.

S: That's true.

E: Yeah we'll come up with some zany plan like a bad 80s movie.

S: Right.

B: Oh my god.

C: Yeah, that could only have gone over in the 80s.

B: Time machine.

E: We'll call it Meatball House or something. Like Animal House.

S: All right well let's move on to our news items.

News Items


Is LaMDA AI Sentient? (11:43)

S: We're going to start with--I think this is probably the most widely discussed science news item of the week.

B: Yeah.

S: A Google employee claims that their LaMDA language-simulating software is sentient.

E: That's a big claim.

B: Yes.

C: I don't buy it.

S: Yeah no one's buying it.

C: (laughs) I bet you some are buying it.

E: No one person's buying it.

S: This is Blake Lemoine. Blake Lemoine is a Google employee. He was suspended from the company for violating his NDA, because he released a transcript of a conversation between himself, an unnamed collaborator, and Google's chatbot software LaMDA, which stands for Language Model for Dialogue Applications. The conversation is pretty impressive, as chatbots go. So we could read─

B: It is.

S: ─but here's the one thing, though: when you read it, you have to read LaMDA's part of the transcript as if you're HAL from 2001: A Space Odyssey.

C: Even if something passes the Turing test really well, that doesn't mean it's sentient.

S: Of course not. The Turing test is a terrible test for whether something is sentient.

B: It is. It's really [inaudible] worse.

C: But basically that's what you're saying, right? In the transcript, it seems to pass the Turing test.

S: Oh, it might pass the Turing test, which means you have to fool a certain number of people into thinking that it's an actual person.

C: Yeah it's not easy but it's definitely far from being able to make a claim of sentience.

E: Oh right yeah it makes no declaration of sentience.

C: No yeah even yeah. That was the point.

S: All right, so here's an example. Lemoine asks: "Okay, what about Les Misérables? Have you read that one?" LaMDA says: "Yes, I have read Les Misérables. I really enjoyed it." Lemoine says: "What are some of your favorite themes in the book?" And LaMDA responds: "I liked the themes of justice and injustice, of compassion, and God, redemption and self-sacrifice for a greater good. There’s a section that shows Fantine’s mistreatment at the hands of her supervisor at the factory. That section really shows the justice and injustice themes. Well, Fantine is being mistreated by her supervisor at the factory and yet doesn’t have anywhere to go, either to another job, or to someone who can help her. That shows the injustice of her suffering." All right, those are pretty well-put-together sentences; it's coherent, and it actually answers the question. If you didn't know the context and were just reading this, you wouldn't necessarily pick out LaMDA as not a person, as an AI. I think it shows how far we've come. But then there's going from that to "it's actually sentient". So Lemoine, who should know better, essentially was fooled by the very program that he was working on, which is supposed to simulate conversations, to simulate words. It is a neural network, but there's nothing in the programming that in any way can achieve sentience. That's not what it's programmed to do. There are a lot of experts jumping on him and criticizing him, but one of my favorites was: LaMDA is basically a really good autocomplete.

B: Autocomplete. That was one of my favorite quotes. It really is. It really is.

C: But how does it actually work? I mean, you said it's a neural net. So just to be clear, because I remember a long time ago doing a story on RealDolls, which are like these weird sex dolls.

S: Yeah.

C: And they were adding AI to one of them. And I saw it in person. I saw a guy interviewing it, but it was just scrubbing the internet for content. You could tell. It was just regurgitating things that it found based on keywords.

S: Searching. Accessing.

C: Yeah exactly. And sometimes it would do that. It would read URLs.

S: Yeah this is just looking at millions of responses and using software to come up with responses that work. But there's no understanding or knowledge or thinking going on.
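(A minimal sketch of the "really good autocomplete" point: a language model repeatedly picks a statistically likely next word given the words so far. The toy bigram model below is a made-up stand-in for illustration, not LaMDA's actual code; real systems use enormous neural networks, but the generation loop has the same shape.)

```python
from collections import Counter, defaultdict

# Toy training text (made up for illustration).
corpus = "i liked the themes of justice and injustice in the book".split()

# "Train": count which word follows which.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def autocomplete(word: str, length: int = 5) -> str:
    """Repeatedly emit the statistically likeliest next word.

    No understanding anywhere -- just pattern-matching on what
    followed what in the training data, scaled way down.
    """
    out = [word]
    for _ in range(length):
        followers = bigrams.get(out[-1])
        if not followers:          # dead end: no observed continuation
            break
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

print(autocomplete("the"))  # -> "the themes of justice and injustice"
```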

C: But it's also not just straight up scrubbing the internet.

S: It's just parsing words.

[talking over each other]

B: It's constructing them. And it uses a transformer. You'll hear the word transformer a lot; it's basically a deep learning model, and it differentially weighs--it's a weighing system of significance for parts of the input data. It works very well with sequential data, like language. And this thing is trained on dialogue, which a lot of language models don't really do. This is what it does: it engages in convincing dialogue. It's very good at it. And reading what I read of Lemoine's interactions, this guy absolutely should have known better, and secondly, he should have been working to try to show how brittle this knowledge is. It has no knowledge of meatspace. It doesn't have an inner life that you would think something with sentience or consciousness or self-awareness would have. There'd be some inner life, some hints of an inner life. And it will allude to an inner life, but then in other sentences it doesn't. So it's clearly just doing what it was designed to do, very, very well.
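(For reference, the differential "weighing system" Bob describes is the attention mechanism inside transformers. Below is a minimal sketch of scaled dot-product attention; the vectors, shapes, and function name are illustrative assumptions, not LaMDA's implementation.)

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: weigh every position of the
    input by its relevance to every other position.

    Q, K, V: (seq_len, d) arrays. Returns (seq_len, d), where each
    row is a relevance-weighted mix of the value vectors.
    """
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                    # pairwise relevance
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over positions
    return weights @ V

# Three token vectors in a 4-dimensional embedding space (made-up numbers).
rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4))
print(attention(x, x, x))  # self-attention over a tiny "sequence"
```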

C: What happened when he asked it introspective questions, like "what do you dream about at night" or "how does something make you feel"?

S: It can give good answers to those questions, because that's exactly what it does. It doesn't have to actually feel something. That's just more words to it; it then comes up with other words that fit as a response.

C: I'm just wondering how buyable it is. Is it still really passing the Turing test when it does that?

S: Totally.

C: Okay, cool.

S: At least in that one transcript that they published. Nothing in there smacked me in the face like, oh yeah, this is it, it got broken here, this didn't work. It was able to handle all the questions that were thrown at it reasonably well.

B: Let's not forget, though, a lot of what you saw was edited. So a lot of what's out there was actually edited and pulled together to be even more convincing. If you search you can find the unedited versions, and it's not as tight, which is interesting. But yeah, this is clearly a language model of sorts that is really good at what it was designed to do. That's why this was so disappointing, that this guy just fell into it. He was convinced; he saw a mind where there was something mimicking a mind. But it goes to show, though: in the future, when we really do get closer to what people are calling sentience or self-awareness or self-consciousness, it's going to be hard. It is going to be hard to know, if this thing is walking and quacking like a duck, is it really a duck? Is there really something there? This is going to be harder in the future. It's easier now.

C: So Steve, how do we define that? How do we? I mean, we said the Turing test is a crappy way. That's not even what the Turing test is designed to ask; it's just, can you fool a human. But what is the test? Is there a computer science test?

S: So let me tell you why I and many other people say that the evidence that LaMDA is sentient is crap. So that's the interesting thought experiment here how would we know?

C: Yeah.

S: So, as Bob was saying, Lemoine fell for hyperactive agency detection. Which is essentially: we have a tendency to anthropomorphize, to see an agent, a sentience, behind random events or─

C: A blind watchmaker as it were.

E: It's very comfortable.

S: ─behind--that's even why cartoons are so effective. If something acts like an agent, our brains treat it as if it's an agent, and so that's what he was doing. Because it was expressing emotion, expressing feelings and desires, he fell for that illusion, that hyperactive agency detection. I'm not convinced by the evidence that he finds convincing, for a number of reasons. First, as Bob mentioned, it's just responding to input. It's not driving the conversation or generating questions itself. When I wrote about it, I said at no point does LaMDA say, hey, sorry to interrupt, but I was just wondering, and then ask some question that had nothing to do with the conversation they were currently having. So there was no evidence that there was a loop of internal activity happening. When you think about human consciousness, when we have wakeful consciousness, our brains are functioning in a constant loop of activity, an internal conversation, where one part of the brain is activating another part of the brain, which activates another part of the brain. That's constantly happening.

C: And it's spontaneous.

S: And it's spontaneous. That's what wakeful consciousness is. When that stops happening, you are not conscious. So there's no evidence that this is happening in LaMDA. It's just really good at responding to input. But the other thing is, Lemoine found LaMDA's emotional expressions convincing, and I had the exact opposite reaction. (laughter) Because the fact that they're so human, to me, is evidence that it's mimicking human emotion. Because the thing is not designed, programmed, or intended to be sentient, Lemoine would have to believe that sentience in LaMDA was an emergent property somehow.

E: Right. It sparked.

B: Not impossible. The concept is valid.

S: Yeah. Conceptually not impossible. Just not from this. But at some point we may actually, as Bob says, confront that situation: is there something going on here that's emergent? That is some kind of consciousness? But we're not anywhere near that.

C: Yeah because the idea of neural net AI is sort of black box. Like we don't know how it comes up with what it comes up with. So there could be an emergent property.

S: Yeah, that's part of the problem: it's self-trained, and therefore we don't really know exactly how it's solving the problems we gave it to solve. But here's the thing: if it did have some kind of emergent sentience, what's the probability that it would be so exactly human-like? Wouldn't it be unique and quirky? If Lemoine was saying, this computer program is doing something that I cannot account for in its programming, and there was a more subtle sort of manifestation--something quirky, like it was breaking in a very interesting way, doing something that it shouldn't--something like that, to me, would be more interesting, more compelling than exactly mimicking human emotion.

C: It almost reminds me of the concept of normativity. Like, we can say that there is normalcy--mean, median, mode, whatever, central tendency across different variables--but no one person is just normal across the board.

S: Right.

C: Like, they might be an average height, and they might have average intelligence, but they're not average on everything. And this AI is sort of average across the board.

S: Yeah. Yeah. It's too normal.

C: That's so funny and so it's almost like uncanny valley AI. (laughs)

S: Right. But why would it? To me, that's not what we're going to see. If there was any emergent consciousness, it wouldn't be that way. And then you get to your question, Cara: the Turing test doesn't work because it really is not a test for actual sentience. It's a test for how well something can mimic sentience, and we know that you can do that really well without being actually sentient. So how do we know if an AI is actually sentient? I think there are two answers to that. One is, because we know how it functions: if it's designed to be self-aware, to be sentient, then it's doing something that at least is in the ballpark of sentience, of consciousness. This isn't even in the ballpark. Also, here's another thing. LaMDA is simulating human-level consciousness without orders of magnitude of the processing power that would be sufficient to actually do that, right?

C: Right. Yeah. We already know that the technology [inaudible].

S: It doesn't have the processing power to be actually sentient, only to simulate sentience through language, not the actual underlying thoughts. But anyway, if we have a system that is powerful enough, that has the functionality that could plausibly produce sentience, then at least it's a possibility. How would we know? The short answer is, we wouldn't. Which is really interesting. How could we know if it's experiencing its own existence, or if it's just really good at acting as if it does?

C: Yeah if we haven't figured out how to understand the inner worlds of animals yet I think we're gonna have a really hard time understanding the inner world of a program.

S: Totally. I predict we will get to the point where we have a system where no one can truly say if it's sentient or not. Maybe it'll be 50 years or whatever but we'll get to that point where it's really good at simulating sentience, it might have it, but we have no idea if it's actually conscious. If it's actually experiencing its own existence or if it's just doing a really good job at simulating it.

B: Right, and it's also very interesting, because it could have lots of utility in terms of, say, accomplishing goals: picking problems, coming up with unique strategies to solve problems. It could do all of that, but we may find out, or maybe never find out, that it's just a p-zombie, that there's no quality there, no qualia or substance. It's not necessarily gonna be anything like our consciousness.

C: But that raises, I think, a really important ethical question, right, guys? Because we're asking it in the affirmative: how do we know if it is? But the opposite is: how do we know it's not? And if we're utilizing these tools to do work, and these tools even have a chance of being sentient, do we have a moral obligation not to use them for work?

B: We're gonna cross that line.

S: We will have to confront that. I do like the solution they came up with in the movie Ex Machina, where the test of whether it's really sentient or just acting sentient is: can it creatively come up with its own problem solving, on its own initiative, in order to save its own life?

C: And spoiler alert. (laughter)

B: At this point.

E: It stepped out of the cave.

J: Steve, we've talked about computer sentience many, many times, and not much has changed in the last even 10 years, 5 years. Not much has changed. Programming is getting a lot better and everything, but we're not that much closer to any real kind of sentience than we were. To me this is a massive lesson in the fact that experts, hyper-educated people, highly intelligent people, can be utterly fooled, just like anybody else. Like this Google engineer─

S: They're still human.

J: ─right? They're still human. This Google engineer did something intellectually that I would find almost unbelievable if I weren't a trained skeptic. The only thing that makes me believe it is that I know this happens to people. How could this guy fall for that?

S: I'll tell you how. Because being a technical expert doesn't give you critical thinking skills nor does it give you knowledge or expertise outside of your area of focus. And so this guy could be a very competent computer programmer. Do technically what he needs to do but that doesn't mean that he's gonna understand about hyperactive agency detection for example.

C: It's just amazing that somebody who works in a field that grapples with these things philosophically isn't responsibly grappling with these things philosophically.

E: That makes it all the more remarkable.

S: We see that all the time. I certainly see it in my profession. I know people who are technically excellent physicians or surgeons or whatever. But they can't think their way philosophically out of a paper bag.

C: Oh, you see it in psychology too.

E: If only he read our book. If only he read our book.

B: Imagine you're testing this model, this language model, and it's starting to sway you, and you're gonna go to your bosses at Google to say this is sentient. Oh my god, would I cross my T's and dot my I's. (Cara laughs)

E: You would run it by someone else.

S: Wouldn't you? Like a close friend.

E: You would falsify that presumption. And take the steps.

C: And you try to break it. That's what a good scientist does.

B: Exactly.

C: You try to find a hole.

E: Falsify the thing. Yes.

S: Some people think that Lemoine is not serious, or that it's a hoax, or he's got some ulterior motive. I don't know.

C: I think it's a publicity stunt.

S: I don't think we have to go there. I have no idea.

B: He's kind of disgraced. I mean, bad hoax, man. If it was.

S: If it was a publicity stunt, he failed. But who knows. We'll never know what he actually believes. I just take people at their word unless I have a reason not to. It's certainly possible that he fell for the illusion. But there are lessons here as well. So when people ask, will we accept robots in the future as human, my answer is abso-freaking-lutely. Without a doubt.

E: Why wouldn't we?

S: We will accept anything that acts as if it's human. Acts as if it has agency.

C: We think our pets are human.

S: I know. We anthropomorphize anything.

E: We were predisposed for it.

B: How about sex dolls? People are marrying sex dolls.

S: That's actually a really good segue to Jay's item, which is also about--

E: --Sex dolls.

S: --the human acceptance of digital, non-human entities.

AI Influencers (29:43)

S: Jay, tell us about AI influencers.

Kids Don't Get Cancer Because They're Unhappy (52:25)

Free Floating Black Hole (1:00:40)

Who's That Noisy? (1:11:27)

Answer to previous Noisy:
Seal chirps and bellows


New Noisy (1:13:32)

[hissing with rising horn sounds]


Announcements (1:14:14)

Questions/Emails/Corrections/Follow-ups (1:15:52)


Correction & Follow-up #1: Gun Safety Regulation

Science or Fiction (1:32:23)

Theme: Science Misconceptions

Item #1: CO2 is not the greatest cause of recent global warming, but rather shorter-lived molecules such as methane.[5]
Item #2: Most of the energy generated by the sun is not caused by the fusion of hydrogen into helium.[6]
Item #3: The heating up of a spacecraft as it reenters and descends through the atmosphere is not mostly caused by friction, which is only responsible for a small amount of heat, <5%.[7]
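(A worked illustration for item #3, assuming the standard ideal-gas stagnation-temperature relation from compressible flow: the heating comes almost entirely from compressing the air ahead of the craft, not from friction.)

$$T_0 = T_\infty \left(1 + \frac{\gamma - 1}{2} M^2\right)$$

(For air, $\gamma \approx 1.4$; at roughly Mach 25 with $T_\infty \approx 220\ \mathrm{K}$, this gives $T_0 \approx 220 \times (1 + 0.2 \times 625) \approx 2.8 \times 10^4\ \mathrm{K}$ of purely compressive heating. Real reentry temperatures come out lower, since air dissociates and the ideal-gas assumption breaks down, but friction still contributes only the small fraction the item states.)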

Answer Item
Fiction: CO2 and global warming
Science: Spacecraft reentry and friction
Science: Sun's energy and H→He fusion

Host Result
Steve: clever

Rogue Guess
Jay: Spacecraft reentry and friction
Cara: CO2 and global warming
Bob: Sun's energy and H→He fusion
Evan: CO2 and global warming

Voice-over: It's time for Science or Fiction.

Jay's Response

Cara's Response

Bob's Response

Evan's Response

Steve Explains Item #3

Steve Explains Item #2

Steve Explains Item #1

Skeptical Quote of the Week (1:49:55)

Science is a search for basic truths about the Universe, a search which develops statements that appear to describe how the Universe works, but which are subject to correction, revision, adjustment, or even outright rejection, upon the presentation of better or conflicting evidence.
James Randi (1928-2020), Canadian-American magician and skeptic

Signoff, SoF Bitterness (1:50:39)

S: —and until next week, this is your Skeptics' Guide to the Universe.

S: Skeptics' Guide to the Universe is produced by SGU Productions, dedicated to promoting science and critical thinking. For more information, visit us at theskepticsguide.org. Send your questions to info@theskepticsguide.org. And, if you would like to support the show and all the work that we do, go to patreon.com/SkepticsGuide and consider becoming a patron and becoming part of the SGU community. Our listeners and supporters are what make SGU possible.


Today I Learned


Notes

References

Vocabulary

