SGU Episode 910
December 17th 2022
Skeptical Rogues |
S: Steven Novella |
B: Bob Novella |
C: Cara Santa Maria |
J: Jay Novella |
E: Evan Bernstein |
Guest |
MH: Mark Ho |
Quote of the Week |
QUOTE |
AUTHOR, _short_description_ |
Links |
Download Podcast |
Show Notes |
[https://sguforums.org/index.php?BOARD=1.0 Forum Discussion]
Introduction
Voice-over: You're listening to the Skeptics' Guide to the Universe, your escape to reality.
S: Hello and welcome to the Skeptics' Guide to the Universe. Today is Saturday, September 24th, 2022, and this is your host, Steven Novella. Joining me this week are Bob Novella...
B: Hey, everybody!
S: Cara Santa Maria...
C: Howdy.
S: Jay Novella...
J: Hey guys.
S: ...and Evan Bernstein.
E: Good evening folks!
S: So no, you are not listening to the episode that aired at the end of September. We are recording two episodes on this day. This episode is coming out in December, when we're on our trip to Arizona for our live shows. This is part two of a six-hour live streaming show that we did. We recorded two SGU episodes, and this is the second one. So this is the episode for sometime in the middle of December. I forget exactly what day it will come out. So we're going to get right to some bits for you guys. We have an interview coming up very quickly with an AI expert. But Cara, you are going to start us off with What's the Word?
What's the Word? (1:06)
- Word_Topic_Concept[v 1]
C: So I came across something really fun that I think you guys will enjoy. It is a website that was started by a man named Jesse Sheidlower. I think I'm pronouncing that correctly. He created the Historical Dictionary of Science Fiction. It went live during the pandemic because he was home a lot and he was bored. I think he used to work for like the Oxford English Dictionary. It's got 1800 entries. I think it's always growing. And it has information about where these terms, these science fiction terms were first coined. He has the passage from where they were used and a little bit of background about the author. So I thought it would be fun to go into the Historical Dictionary of Science Fiction and talk about some common words that of course, you may or may not know were developed by science fiction writers, but are used all the time now in common science parlance. So the very first one, and it's probably the most famous example of this pretty much across all the coverage that I see online, everybody cites this one first, is the word robot.
E: Oh, no. R.U.R.
C: Did you guys know? Yeah. Right. So robot. So the word robot was first used, I think it was, well, gosh, it's been used by so many different writers. A lot of people will remember some of the more recent uses, but before anything, it was actually used in a play by a Czech writer, and I probably can't pronounce their name. Maybe it's Čapek.
B: Yeah.
C: Does anybody know if in Czech with the little thing over the C, is that a ch sound? I'm not sure. But this Czech author wrote a play called Rossum’s Universal Robots, that's the translated title, in which he used the word robot for the first time. And robot came from, I think, the Latin for forced labor. And that's where the word robot really came into play. And so then it's been iterated multiple times since then. But the idea really early on, this was back in 1920, and the idea since then has often come from this idea of forced labor, use of labor in factories, use of labor in armies, cheap labor. That's where robots come from. And today, they still kind of carry that vibe, I guess. But obviously, it's grown to mean so much more than that, just like a non-human technological thing that does something, that does work.
S: Before you go off the robot, though, the idea of a robot kind of goes back to the ancient Greeks. There was this idea that–
C: Not with the word robot.
S: Not with the word robot. But, well, you think about mechanical things displacing the labor of humans, right? That's the basic idea of a robot.
C: Yeah. Conceptually, this is super old. But the first time the word robot was used was by this Czech playwright. And then, of course, a lot of people think of it from 1940 when Asimov wrote about the actual field of robotics. And he had a character who was a roboticist. And so that's where it really did explode. So first use in the 20s, but then in the 40s, it exploded into our lexicon, and it was used all the time after that. Okay. So how about another one? Did you guys know that the word genetic engineering came from science fiction?
B: Cool. Which, where?
E: Well, the–
C: Right. So this was–
E: Not the word thing. They took the two words and put them together.
C: I have all my references. This was from Jack Williamson's novel, Dragon's Island.
E: I don't know that.
C: So it was an occupation within the novel of a genetic engineer. Or no, genetic engineering started in that novel, and then it took several years before genetic engineer, the occupation, was named by somebody named Poul Anderson. And Asimov used it also in the 70s, but 1951 was when Williamson used it in Dragon's Island: "I was expecting to find that mutation lab filled with some sort of apparatus for genetic engineering."
B: Cara, I just finished a series of books and they kept saying throughout, geneering. Geneering.
C: Geneering. Oh, love it. And that was a modern sci-fi series?
B: Well, within 20 years.
C: Okay. Yeah, yeah, yeah.
B: It was '95 actually, so it's not recent, but-
C: Here's another one that you guys might think, maybe you know this, maybe you don't. Zero gravity or zero G.
S: I heard that one. Yeah.
C: This started in sci-fi and this one's really fascinating because it was all the way back in 1938. The author, Jack Binder, was actually a comic book artist, and he created Daredevil. He used this in his essay, If Science Reached the Earth's Core, and he wasn't talking about zero gravity in space. He was talking about zero gravity in the core of the Earth.
B: You would float at the center of the Earth because you're being pulled from-
E: Every direction equally.
B: -from the gravity of the mass of the Earth, so yeah.
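(A quick sketch of why: by the shell theorem, assuming for illustration a uniform-density Earth, an assumption that is ours rather than the show's, only the mass inside your current radius pulls on you, so gravity falls off linearly toward the center:

$$g(r) = \frac{G\,M(r)}{r^{2}} = \frac{G}{r^{2}}\cdot\frac{4}{3}\pi r^{3}\rho = \frac{4}{3}\pi G \rho\, r \;\longrightarrow\; 0 \ \text{as}\ r \to 0.$$

At the very center, the pull from every direction cancels exactly.)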
C: Then later, in 1952, Arthur C. Clarke abbreviated the term and made it zero G in his novel Islands in the Sky, and that's when it started to be applied to space.
S: Although now it's been replaced by microgravity.
B: Yeah, right.
E: Because it's not the actual zero.
C: They were like, let's be scientific about it. Let me see [inaudible].
S: Yeah, it's technically a little bit more accurate.
C: Then of course, alien, the word alien, where the modern usage has kind of gotten away from the historical usage. That was a person from another country or from another place. So alien from a location other than one's own. But now we don't tend to use words like illegal aliens anymore. That's quite offensive. And we've kind of advanced our labels, but that's where the word really started. And ultimately that's how it kind of translated into this idea of beings from other planets. So it's long been used to talk about something being foreign or something being from somewhere else. But let me see. The first person to use it in the way of somebody from another planet was a Victorian historian and essayist named Thomas Carlyle. And then apparently in science fiction, we didn't really start seeing the use of alien regularly as a catchall for ETs, for extraterrestrials, until 1929, when Jack Williamson's story, The Alien Intelligence, was published in a Science Wonder Stories collection. And then finally, I found some cool stuff with computer terms. So the word worm, you remember computer worms?
B: Yeah.
E: Sure.
C: So this was not developed by computer scientists. This actually came out in a story by Brunner, John Brunner, in 1975. His novel was called The Shockwave Rider. There are two citations in it, but the earliest in the book is: "Fluckner had resorted to one of the oldest tricks in the store and turned loose in the continental net, a self-perpetuating tapeworm, probably headed by a denunciation group borrowed from a major corporation, which would shunt itself from one nexus to another every time his credit code was punched into a keyboard. It could take days to kill a worm like that and sometimes weeks." So this is our first usage of a computer worm.
B: That's cool.
C: Very cool.
B: Don't hear that word very often anymore, but.
C: No, you don't. But it's pretty cool when these kinds of things are first dreamed up. And we hear about this with Star Trek all the time. There's a million examples we can pull from Star Trek. But it's so cool that this one individual, and again, I want to give him huge props here, he's called Jesse Sheidlower, and he was already a word nerd. And he said that because he was kind of home all the time and had the time to do it, he got this site up and running during the pandemic. And it's called sfdictionary.com, the science fiction dictionary. So look it up. You can have fun on there.
J: Neat.
S: That sounds cool. All right. Thanks, Cara.
News Items
Electric Planes (9:06)
S: Evan, you're going to start off the news items telling us about an electric plane.
E: Yeah. Electric airplane in the news this week. Out of Sweden, a company called Heart Aerospace. Their mission, right from their website, "is to create the world's greenest, most affordable and most accessible form of transport grounded in the outlook that electric air travel will become the new normal for regional flights and can be transformational in addressing the industry's key sustainability challenges." So on September 15th, they had something called Hangar Day, in which all their employees, everybody in the company, and invited guests come out to their big, big, big hangar. And they made major announcements there. Their biggest announcement was that they have been working on an all-electric airplane called the ES-19. It's designed to be a 19-passenger airplane powered entirely by batteries. Now, they got that to the point at which they made a scale model, and that actually did fly, as of this past summer. But their announcement today is that they're stepping it up. It's now the ES-30, a 30-passenger plane. And all of the company's efforts are now going to go into making this design. The other part of this announcement that is significant is that they've got orders for this thing, and they have orders from some pretty big hitters in the industry, including Air Canada, Mesa (M-E-S-A), United Airlines here in the United States, and Air New Zealand, who have either put in actual purchase orders or have basically said, yeah, we're very interested in getting these airplanes, to the tune of hundreds of these things that they're putting in orders for now. Let's talk a little bit about the plane itself. It's not built yet, first of all. However, they did, right there-
B: It's right there man.
S: But they have the specs.
E: Yeah, they have the specs for it. And then they have the test fuselage all built out inside one of the hangars, hooked up to all the computers and all the simulators and everything. And they say that everything in that simulation is working as it's supposed to.
S: What kind of battery does it have?
E: It's going to be battery powered, primarily five tons of lithium-ion batteries right now.
S: Yeah.
E: So that obviously comes-
S: It's a lot.
E: It's a lot.
B: How many tons?
S: Five tons.
E: Five tons. Yeah, that's what it is.
J: It's kind of weird. The fuselage is kind of weird. It almost looks like a seaplane.
S: That's probably the battery.
E: Yeah, the batteries are loaded down there in its belly, as it were.
B: That's a lot of batteries.
J: I'd like to see it with the landing gear down.
E: Yeah, yeah, that would be neat to see.
S: I wonder how hot it gets.
E: Yeah. So hot and fire.
B: Use the heat to heat the cabin.
E: How does that exactly work? But they must have it figured out. The range: 200 kilometers right now if you're going to use the batteries. OK. However, there is also, in the tail section, right below where the tail is at the very end of the plane, a liquid fuel reserve, essentially. So you can double the range with that. And you would have that built into these planes because, when you're in flight, you may have to get suddenly diverted to other airports or other routes. So it's there strictly as a contingency for those kinds of emergencies.
S: But 100 kilometers just off the batteries.
E: 200 kilometers just off the batteries, from takeoff to landing. If you kick in that hybrid system, though, yeah, 400.
S: 200 or 400.
E: It's 200 or 400.
S: That's probably a lot of small city to small city routes.
E: That's right. And this was a particular goal, a threshold that they had to reach, because before this, the ES-19 had, I believe, something like 140 or 150. And it wasn't quite enough.
S: Yeah.
E: From the perspective of the airlines-
S: Not enough routes.
E: -not for themselves. Right. They couldn't make the routes. But getting to that 200 kilometers ticks boxes and gets you from real destinations to destinations that you need to get to.
B: How about in air recharging?
E: Yeah. Wouldn't that be?
B: But how long would that take? How long would it take to fully charge?
J: Forget it.
E: But the thing is, this is filling a niche for a part of the airline industry, obviously, because you are dealing with short routes. So, right, what do you have right now for refueling on short routes? Nothing, because you don't need it. So basically it's the same premise. You wouldn't necessarily have to design this thing with a need to recharge mid-flight. It's not like you're going across the ocean or something.
B: What's the recharge time after it lands?
E: Thirty minutes, I think, is what they say.
B: Thirty?
E: Yep. Turnaround time. Thirty-minute fast charge. And maybe there's some other, longer charge. I don't know what that does to the battery life or the life of the airplane, but that's what they're saying. Thirty-minute turnaround time. Their maximum altitude: 20,000 feet, which is apparently where you need to be for these routes. It basically ticks all the boxes that the propeller planes on these routes are filling right now, and it meets them price-wise also. So these are all the points that they made with this announcement: it's here, we've got it, the specs are here, we're going to build this thing out. We'll get this thing tested and in the air within a couple of years. And we're going to enter these things into service by 2028.
S: 2028?
E: 2028 is the goal.
S: That's a long time. Between now and then I bet you the batteries are going to be better.
E: Well, that's the other thing: they said we're just dealing with the technology we've got now, and we're counting on things to get better with the battery technology.
S: Yeah. Also, you could slap some organic solar cells on top of those wings. You can't put silicon panels on there; they'd probably be heavier than they were worth. But organic ones are thin and light and very easy. They're not that efficient. But if we get the efficiency up above 20%, I bet that could add 40, 50 kilometers to the range.
E: It probably could. Someone's asking in the chat whether they're flying right now. Obviously, this model is not flying right now. They still have to build it. The models that are flying that are all-electric seem to be the single- or two-passenger planes in the Cessna kind of model. So those are out there to be had. I've seen videos on that. I've read news items about it in the last couple of years. Those have been out and are being tested. The military is definitely looking into them as options. But what we're talking about here is commercial, the commercial airline industry. Now, I always thought it was going to be a problem with takeoff and getting enough thrust, getting those fans in the engines to turn fast enough to get the thrust.
S: Batteries have a lot of power.
E: But yeah, no, that is not an issue. They said, if you're going to do it with fuel, with batteries, or with hamsters on a wheel, it doesn't matter. You just have to be able to generate enough power. Now, comparing the energy density of the batteries to the fuel that runs the airplanes, the energy is much denser with the fuel. But the batteries are catching up. And like you said, Steven, within five years, next-generation batteries that are coming out, they can only be better.
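(A rough back-of-the-envelope check on that comparison; a sketch using typical published figures rather than numbers from the episode: lithium-ion packs store roughly 250 Wh/kg, or about 0.9 MJ/kg, versus roughly 43 MJ/kg for jet fuel.

$$E_{\text{pack}} \approx 5000\ \text{kg} \times 250\ \text{Wh/kg} \approx 1.25\ \text{MWh}, \qquad \frac{43\ \text{MJ/kg}}{0.9\ \text{MJ/kg}} \approx 50\times$$

So fuel carries roughly fifty times more energy per kilogram, though electric drivetrains convert a much larger fraction of stored energy into thrust, which narrows the effective gap.)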
S: So they cross that threshold, then that's it. Then just get incrementally better from there. Have you ever been on one of those like a prop plane for a short flight?
E: Absolutely.
S: Yeah, they suck.
E: Yeah, they're uncomfortable.
B: They're scary man.
S: The worst is they're loud and they vibrate but these are supposed to be a lot quieter.
E: A lot quieter, practically silent.
S: Probably a much more enjoyable experience certainly than-
E: Certainly from the noise perspective.
S: -what's currently filling those routes.
E: So yeah, commercial battery powered flight. Here we go. A couple years.
S: And then there's already solid-state lithium-ion batteries. They haven't quite gone into mass production. I think Japan has one that's actually being used commercially. But when that hits, those have about twice the energy density as the regular ones.
E: That's a nice game changer.
S: So either it's half the weight or twice the range or some combination of those two things.
B: That's a near term upgrade?
S: I mean, that's something that we could see in widespread commercial use by the end of the decade, definitely. I mean, there's already some versions of them in use. But that could be a little jump to twice the energy density.
B: But why eight years from now?
S: Yeah, I don't know. I mean, the production, it's always the commercialization, ramping up the industrialization of it. Doing it on a small scale is just different. So it could be quicker. We'll see. It could be a few years. But for this kind of thing, that's really what it's waiting for: the batteries. Just cross that threshold and it's usable. But yeah, it's good to hear that-
E: It's on the way.
S: -we're there. All right.
Zettawatt Laser (18:10)
S: So this is going to be a quickie. This will be a good one to just fill in before the interview; we have an AI interview coming up in about 15 minutes with an AI expert. And so I just want to talk about the upcoming strongest laser in the United States. So this is not the strongest laser in the world, but it puts it up there with the strongest lasers that exist.
E: And the strength of lasers?
B: This is zettawatt?
S: Zettawatt.
E: Zettawatt, yeah. Oh, OK.
S: But Bob, it's the zettawatt equivalent.
E: What does that mean?
B: Well, it's a super short pulse. We're talking femto-attoseconds.
S: That's not what makes it an equivalent.
E: Multiple lasers ganging up to make the zettawatt?
S: Nope.
B: It's pulling in laser power from an alternate dimension, alternate universe?
S: So it's the Zeus. Have you heard about that?
B: Yeah.
S: So the laser part of it itself gets up to 300 petawatts.
B: OK, yeah. Respectable.
S: What they do is they feed supercharged electrons into it, and that gets the effective power up to what a zettawatt laser would produce.
B: No shit.
S: But the laser part of it itself isn't a zettawatt laser. It's a 300 petawatt laser.
B: Wow. I didn't see that yet.
S: So that's why they had to use the term zettawatt equivalent in terms of the power that it produces. So it's basically like having a zettawatt laser.
B: It's like those projectors that have lumens in it. It's not really lumens. It's a lumix or some equivalent.
S: Yeah. Lumen equivalent.
B: It's crap. But damn, man. OK. That's interesting and upsetting at the same time.
S: But at the end of the day, it's effectively a super powerful laser. I mean, zettawatt is...
B: It's 10^21, I think.
S: Yeah. It's incredibly powerful.
B: That's a whole lot of watts.
S: But you're right. It's very, very brief, because obviously, they don't have the energy to have that thing going for any length of time. They're going to get it up in stages. In series. So they're first going to shoot it at only one, not even petawatt. What's before petawatt?
B: Peta... Exa? Exawatt?
S: No, no, no.
J: Gigawatt?
S: No.
B: Wait. Tera? Wait.
S: Tera. One terawatt. It's going to start at like one terawatt. Then they're going to go up by orders of magnitude until they get up to the maximum strength of the 300 petawatts. And then they're going to get it up to the equivalent of the one zettawatt. All right. So what's this really powerful laser for? What's it going to do? Primarily it's for research. With this, you could create super hot plasmas, for example. How hot, you might ask? So hot that we can actually do experiments-
B: Big bang?
S: -on the physics, they said, near black holes, where you have this super, super hot plasma. They always make general statements like, this will help us research the quantum nature of the universe, without getting into a lot of details, because they're not designing experiments yet, or at least not in the reporting that I'm seeing. But theoretically, you could use this laser to create super high energy physics, which will get you into the-
B: But what about a quark-gluon plasma? Does it get to that level?
S: It might be able to, I don't know. They didn't comment on that specifically, but I think that's the kind of thing that they're talking about. So it just gives us access to new physics in terms of experiments, because the energy is so incredibly intense. They also said they could use it for X-raying very small things. This, of course, would be the extremely brief pulse, but at high energy, it allows you to penetrate things that you otherwise wouldn't be able to see the interior of, like metals and stones and things like that. So it could also be used in research that way. Again, it's not exactly a portable or-
B: Record player?
S: Yeah, it's not a portable laser. So I think everyone, when you hear about the most powerful laser that we have or ever or whatever, your mind pretty quickly goes to, could this be a doomsday weapon?
E: Yeah, right. Are we going to blow up Alderaan with this?
S: What are the military applications of this thing?
B: Or hand-held laser pistol.
S: But this wouldn't be useful for that sort of thing. It's not portable enough, not sustainable enough. Bob, a little bit later in this episode, you're going to be talking about laser sails, light sails basically. And I tried to find any mention of using this kind of laser for that application and-
B: Oh, thank you for doing that.
S: -nobody brought it up. But I don't know if that's just because it's not the first thing you think of or that it's just not really useful for this. Probably because it's too short again.
B: Oh, absolutely. Absolutely.
S: You need sustained lasers. You also-
E: And I'd think multiple lasers too.
B: The resources.
S: But also, you probably don't want it to be that hot. You don't want to burn up your solar sail.
E: Yeah, you don't want to destroy what you're trying to ship around.
S: Yeah, there's got to be a sweet spot in terms of how energy intense you want that laser to be.
E: Depending on the material you're making your sail out of. But foreshadowing, I like it.
S: Yeah, yeah, yeah. So we'll be talking about that more. What kind of lasers would we want? What we need for light sails? Because I think the laser-driven light sails, as we're going to talk about, are going to be important to the future of space travel.
B: Maybe, probably.
S: But there are other countries out there that have more powerful lasers already. This won't even at maximum power be the most powerful laser in the world.
E: And are they using those lasers for the same purposes?
S: Yeah, basically. It's basically a research tool.
B: Yeah, I was actually doing a search recently for the most powerful, and I came across Zeus here. But they said United States. I'm like, oh, wait, no, I'm talking the world. And it didn't say. I had to look away.
E: Ooh, classified.
B: I wonder if they're equivalents. The real number one right now, I wonder if it's an equivalent too. I would think probably.
S: Yeah, yeah, right. I think so. I think so.
B: I mean, that seems like an interesting and cheap way to really upscale your super powerful laser.
S: Yeah, it's a good example of the fact that humans are clever. That even when we run into theoretical limitations, and we've seen this all the time, this is the theoretical limit for whatever. There's the diffraction limit: we'll never be able to image something smaller than this. And then we find metamaterials that get around it.
B: Trixie.
S: Yeah, that get around the diffraction limit. Oh, we're just going to cheat. And we're really good at figuring out how to cheat the system. It's like, this is the most powerful laser that we could make with the equipment that we have. Then they figure out a way to cheat: what if we feed super high energy electrons into it? And then you get the equivalent of a more powerful laser than should be able to exist with the materials that we have.
B: It's a fascinating idea. I can't wait to read more about that.
S: So this is just very, very early reporting. This is sort of a quickie news item because the reporting is very early. Clearly it hasn't been turned on yet, and it's going to take years to get it up to full power. So probably in a few years, we'll be reading about the research that's being done with this laser. But the zettawatt equivalent is a good threshold that I thought was worth mentioning.
B: Yeah, zetta is huge. I mean, 10^21. That's an immensely large number.
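(To put the power-versus-energy distinction in perspective, a sketch; the 25-femtosecond pulse duration here is an assumed, typical ultrashort-pulse figure, not a number from the episode. Power is energy per unit time, so even an enormous peak power carries a modest total energy when the pulse is that short:

$$E = P \cdot t \approx (3\times10^{17}\ \text{W}) \times (2.5\times10^{-14}\ \text{s}) \approx 7.5\ \text{kJ}$$

That is, the quoted 300 petawatts delivered for tens of femtoseconds amounts to only kilojoules per shot, which is why a laser like this is a research instrument rather than a sustained beam.)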
S: All right, I hear Ian talking to our AI expert right now.
Interview with Mark Ho (25:40)
- Artificial Intelligence
S: Hi Mark, how are you?
MH: Hi.
B: Hello.
S: Welcome to the Skeptics Guide. Thank you for joining us. So can you tell our audience a little bit about yourself and your expertise?
MH: Yeah. So my background is in cognitive science and artificial intelligence. And I work on kind of the intersection of computer science and psychology, studying how human cognition works, how people solve problems, how that compares to how machines solve problems, and trying to understand general principles of problem solving and intelligence.
S: So what I'm hearing is that when we finally get an artificial general intelligence, you're going to be their first psychotherapist.
MH: Exactly. (laughter)
B: Cool.
S: But let's back up a little bit to the world of narrow AI where we are now. And so tell us what kind of work you do. Are you trying to model narrow AI after how the human brain works? Is that kind of what you're studying?
MH: So no, I kind of, I'm approaching things from a much more kind of psychological cognitive perspective than a neuro perspective.
S: Okay. So yeah, more basic principle about how thinking works, not necessarily how the human brain works.
MH: Yeah, exactly. And trying to understand, how can we understand kind of people as thinkers and reasoners in a more general sense, and how does that compare to how machines are, current machines and AI systems are reasoning, and to the extent they are reasoning and solving problems.
J: Mark, is the goal to make the AI software think more like a human, or is it just useful information to have when you're figuring out how to program it?
MH: Yeah, so there are really two goals I think of the research kind of field that I'm in, computational cognitive science. One is to use general principles developed in AI and tools and formal methods from AI to model human cognition in a way that we can better understand how people are solving problems and understand things like perception and memory in terms that are precise enough to make quantitative predictions and stuff like that. And then the other side of it is developing better models and predictive general models of how people think and solve problems and perceive and remember things so that we can use that to design AI systems that understand how humans think and work and design better interfaces and stuff like that.
S: Cool. Some of the research in this area that I've read, it sounds like a lot of it is going from the AI to the cognitive theory. Narrow AIs are doing things, we don't know how they're doing it. You're trying to figure out how they do it so we can better understand just cognitive science itself. Is that kind of what you're doing?
MH: Yeah. So a lot of the research is going from the AI formalisms and ideas. Thinking about how to even build, how you would engineer an AI system gives you a lot of insight into how you could reverse engineer the human mind and human cognition and intelligence. So there's been a lot of direction that way recently, but there's actually a long history the other way too, thinking in psychology more formally. A lot of core ideas in AI kind of originated in mathematical psychology and computational cognitive science. Things like connectionist theories are the foundation of deep neural networks today. And a lot of these kind of reinforcement learning algorithms, the ones that were able to solve chess and Go and beat humans in these games, a lot of the basic principles from that were developed in studying associative learning in rats and stuff. So there's direction that way. And so I do a little bit of both. My research is trying to develop new formalisms, new theories of how humans are solving problems, and use that in the AI direction. So a big thing that I've been focused on lately is thinking about how people approach problems and model problems and construct a mental model of problems in a way that's very flexible and general, in a way that a lot of AI systems can't right now. You were saying, there's narrow AI; it's very focused on a single task. It solves a single task. What's kind of cool about humans is that we can do a whole bunch of things. I can jump on this podcast and start talking to you about stuff even though I've never done it before. I can go cook a cake, bake a cake, whatever. So yeah.
S: How much in your research do you get involved with the question of what are the things that AI is better at than people? And what are the things that people are better at than AI? Even as powerful as AI is getting, it sounds like there are still things that we do that they can't do.
MH: Yeah. So that's a very big active question right now, especially since we're seeing all these AI systems solve problems that for a long time we thought only humans could do. As I was saying, playing chess really well and identifying objects and classifying things, large language models and so on. Currently a lot of the big questions are around things like being able to generate new concepts and generate new combinations of concepts, which is something that people are really good at, but current AI isn't. The other thing is that what counts as AI kind of shifts over time. Twenty years ago, what Google Maps is doing every day, a routing algorithm, would have been considered AI, artificial intelligence, but now it's just an algorithm that you use on your phone. So what counts as AI is always shifting. But right now, especially because a lot of the systems are these statistical machine learning systems that don't have a lot of internal structure to them, they're not very good at the compositional reasoning that humans are good at. Things like, if you understand what a cat is, you can imagine a red cat or a purple cat or something like that, even though you've never seen it. Those kinds of generalizations are much harder for neural networks and standard machine learning systems.
S: So we talk a lot about AI on the show. And so it's good to have an expert on. And one of the things I would love is if you wouldn't mind giving us a really brief synopsis of what a neural network is versus machine learning versus deep learning. The big major concepts, because I know we throw them around a lot and probably a huge chunk of our audience doesn't really understand what they are.
MH: Yeah, yeah. So let's start with the neural network. So a neural network is basically a big matrix of numbers. You can kind of think of it as if you had a machine with knobs on it; you could turn them, and there were an image coming through, and you could tune it to translate that image in some way. It would filter things in a different way. A neural network is kind of an abstraction of that general idea. There's an array of signals coming in, and they're passing through a bunch of filters, and you can tune what each filter is doing. Big neural networks have a lot of tunable parameters, basically. A lot of knobs that you can turn. And a lot of deep learning is just a deep version of that, where you have a lot of what are called layers of neurons that you can tune, and that gives it the flexibility to transform an input in a very complicated way to give an output, taking an image of an animal and classifying it as a dog or a cat. And basically deep learning is this relatively simple algorithm: as you feed in an input and get an output, if you know what the output should be, you can give it a thumbs up or a thumbs down. And the system is designed to back propagate that information to adjust the parameters a little bit, so it does a little bit better next time. You just do this a lot. And eventually the whole thing moves through this big parameter space and learns how to basically classify things that way, learns the right set of tuned parameters.
B: Is that where training comes in?
MH: Yeah. So that's the whole idea of training.
S: So trial and error a billion times until it tweaks it perfectly.
MH: Exactly. It's like trial and error driven learning. Actually, the technical term is error-driven learning. And then machine learning is actually a broader category, which refers to not just neural networks but a whole range of methods. But I think what unifies them all is that they're all based on statistical ideas: there's uncertainty about the world, and you're trying to estimate the best model of the world, or the best set of parameters, to explain something or fit some data.
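(A minimal sketch of the picture Mark describes, in Python; the network size, learning rate, and XOR task are illustrative choices, not anything from the episode. A small set of tunable "knobs" (weights) filters the input, and an error-driven update nudges every knob so the output is a little less wrong on the next trial:

```python
# Tiny one-hidden-layer neural network trained by error-driven learning
# (backpropagation) on XOR. Illustrative only; real systems have
# millions or billions of these tunable parameters.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # desired outputs

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)               # the "knobs"
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # forward pass: signals flow through the tunable filters
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # "thumbs up / thumbs down": how wrong was each output?
    err = out - y
    # backpropagate the error and nudge every knob a little
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)

print(out.round(2))  # approaches [0, 1, 1, 0] after many trials
```

Each pass through the loop is one round of the "trial and error a billion times" Steve mentions, just at toy scale.)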
C: So if you had to do a visual description with sort of umbrellas, is AI the largest umbrella or is machine learning the largest umbrella? Which category subsumes each other category?
MH: Yeah, they're kind of partially overlapping umbrellas. Like I said, AI is always shifting. These days AI tends to refer more to these statistical machine learning approaches. But classically AI also referred to what's called good old-fashioned AI, or GOFAI, which is this idea of what's called symbolic reasoning. So things like planning, problem solving, reasoning in a more structured way. Whereas the domains that statistical machine learning works really well in are domains like perception, classifying images, and settings where you just have a lot of data, you can chug through that data, and there's some underlying pattern in the data that no single person could write a set of rules to describe, but it's there, and it can be learned in this flexible tabula rasa way. And so those symbolic approaches tend to be less data driven, classically. They're much more: you have a big complicated problem that you know what it is, but it's just really complicated, and you need to be smart about how you solve that problem, as opposed to learning through trial and error how to perceive something, or pattern match, essentially. So people often make a distinction between pattern recognition, which is the more statistical machine learning approach, and symbolic reasoning, which is this more deductive thinking and structured reasoning. But I think the big thing right now is how do we bridge these two approaches? Because obviously, people are doing both of these things, doing a lot of both reasoning and complex perception and action.
C: And then so neural nets, which are a sort of subsumed underneath machine learning, that's the engine by which some machine learning takes place? Is it, would you say that neural nets are sort of it now? Is that what most people are putting their chips down on? Or is it just one of many equally effective approaches to machine learning?
MH: Yeah, it's one of many approaches. And right now it is, it's probably the most effective when you have a lot of data, when you have a lot of-
C: When you can scrub the internet.
MH: Exactly.
S: Yeah.
C: Okay.
J: Mark, what would you say is the most complicated thing that some artificial intelligence software is doing today?
MH: Oh, the most complicated. I mean, it's-
J: Or give an example of it hitting really hard. What is AI doing today that's really impressive or considered top of the game?
MH: One of the most impressive things that's going on right now is these game playing algorithms that beat people at games that require very long term planning and lookahead and thinking about what the other person is going to do. So these competitive, so I think basically-
B: Chess and Go.
MH: Chess and Go, essentially. They're competitive, well-defined games where we have really good methods for solving them. And it's not just a pattern recognition thing. It's a combination of pattern recognition and symbolic reasoning. So these systems that solve chess and Go, they're combining pattern recognition, recognizing patterns on the board, with planning, with what's called heuristic search, searching through a tree of possibilities. And they're using the patterns to guide the search through the tree. It's very impressive because I think we think of game playing as a very human activity. No other animals play games like this. Other animals perceive things and can move through the world and stuff, but only humans seem to do this symbolic reasoning in very large state spaces.
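(A minimal sketch of the planning half of that combination: depth-limited minimax search through a tree of possibilities, with a heuristic evaluation standing in for the learned pattern recognition that guides and truncates the search. The `moves`, `apply`, and `heuristic` callables are hypothetical placeholders for a real game's rules and evaluator, not any particular system's API:

```python
def minimax(state, depth, maximizing, moves, apply, heuristic):
    """Depth-limited minimax: search the game tree, scoring leaves
    with a pattern-based heuristic instead of playing games out fully."""
    options = moves(state)
    if depth == 0 or not options:        # leaf: fall back on the heuristic
        return heuristic(state), None
    best_value = float('-inf') if maximizing else float('inf')
    best_move = None
    for move in options:
        value, _ = minimax(apply(state, move), depth - 1,
                           not maximizing, moves, apply, heuristic)
        # the maximizer picks the highest score, the minimizer the lowest,
        # modeling "thinking about what the other person is going to do"
        if (maximizing and value > best_value) or (not maximizing and value < best_value):
            best_value, best_move = value, move
    return best_value, best_move
```

Modern systems like AlphaZero use a different search (Monte Carlo tree search) and a learned network as the evaluator, but the core idea of pattern-guided lookahead is the same.)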
S: So it does seem like the AI applications are getting really powerful over the last five to 10 years. They hit their stride and we're seeing all these applications now, beating the world champion in Go and the new art generating software, that kind of stuff.
J: Folding proteins.
S: Folding proteins. All this stuff that is happening. And I've been trying to find out, I've asked multiple people experts in different ways, what they think about it. One of them told me like the AI itself is not necessarily getting better. It's just that we're getting better at training them and we have better data sets to train them with. Do you agree with that? Or do you think that we're getting better at the underlying hardware, the underlying programs themselves? Or is it just that we're getting better at training them?
MH: So I think most of the current progress is due to a lot of improvements in the engineering, being better at training them. I think the hardware is part of that. A big thing that's happened in the last 10 years is they figured out how to take what are called GPUs, graphical processing units, that are typically used in computers to render graphics and stuff. And the types of computations that those things do are basically what neural networks have to do. And they do them really quickly.
B: They're great.
MH: And so they've been able to kind of build on that technology to get many, many, many orders of magnitude speedups in how you can train these systems. And so that's a hardware, it's kind of a lucky hardware result. And then, yeah, I think another big thing is the availability of data. Having the internet, lots of text and images out there, things like DALL-E and the large language models wouldn't be possible without that kind of data. The underlying principles of these statistical machine learning models are actually pretty simple and have been known for decades, or arguably centuries. Some people just think it's calculus. It's more than that. But yeah, it is essentially just calculus.
C: But there's no new math involved.
MH: There's new math to do the engineering. But I think the fundamental ideas are not going to be new to people, or not new. Someone who's familiar with physics, or kind of learned classical physics or something like that can pick up the math relatively quickly, because it's not a fundamental difference. But I do think, yeah, it's hard to say, sometimes a lot of progress in AI is made by putting the right pieces together. And some of those pieces were out there decades ago, but people didn't know what to do with it. And then suddenly, it all clicks into place and things work.
S: Okay, so just so I understand: obviously, the hardware is getting better, faster and faster computers. And the availability of training data is much greater than it used to be, just because of the internet and all that. But the underlying conceptual basis of AI software is not really fundamentally different than it was even decades ago, although we're learning to use it in new ways. Is that fair?
MH: Yeah, yeah. And you learn new things about the framework as you use it. But yeah, I think a lot of it is pretty similar. And people are rediscovering things all the time that were proposed decades ago.
S: Yeah, yeah, yeah.
J: So the reason why we're doing this live stream is we have a book coming out in three days. And we discuss artificial intelligence in the book. And one thing that we try to do is talk about, where's it going to be in 10 years to 50 years? And I'm curious to hear what you think, what's the short term and when I say short term, five to 10 years, and then longer term, say 50 years, where do you see it going?
MH: Yeah, I mean, it's really hard to predict. But I guess in the next five to 10 years, I think a lot of progress in AI is being made because people are able to take a problem that's out there and fit it into the square hole that is the deep neural network train-and-test paradigm. And so people are constantly figuring out creative new ways to do that. And so I think that's going to continue to develop, and that's advances in narrow AI, I guess, as you were calling it before, more specialized AI systems. And yeah, I think, over the longer term, there do need to be conceptual breakthroughs in terms of how we think about intelligent systems and how to design them. Because the way that humans learn and reason is pretty fundamentally different from the way that statistical machine learning systems learn and solve problems. And so I think it's going to depend on whether those breakthroughs happen. And those are very hard to predict, obviously. But a lot of people are working on these problems. And things are moving very quickly. In the short term, you'll probably see more things like DALL-E and the large language models, because those work really well, and there's a lot of incentive to build those. And it's the type of thing that you can scale up very easily, if you have the resources. And so I think a lot of research is going to go into that, for better or worse. Is that the globally optimal thing to do? Who knows? But I think a lot more funding is going to go into that. And at the same time, there are going to need to be conceptual breakthroughs to move beyond that. And also, I think it's an open question whether those hyper scaled up statistical approaches are sufficient. I don't think they are in the long run. I think they can be used to solve a lot of problems. But I don't think it's going to give us the answer to general intelligence.
J: So do you think we're going to see the proliferation of AI, though, meaning in five to 10 years, is everything going to have some type of AI component to it that would help us do things? Is that what's in the future?
MH: I think, yeah, there's so many factors besides just the technology that play into that.
C: But where it can be used, won't we see it used? I think back to when I was a kid and I was of the era where my toys went from maybe having, I remember I had a Teddy Ruxpin that had a tape recorder in it, but I didn't have anything with a computer chip in it. I just didn't. And now most children's toys have a computer chip in them. It's just, if you can use it, it's going to be used. Do you think we're going to see that across the board?
MH: Yeah, I think you will see that. Yeah, it'll really depend. On the internet, we're using AI a lot already; AI is already in a lot of systems we use. Anytime you use a search engine, or translate something, or take a picture and it finds faces, that's AI. So it's going to be in all those systems, any kind of digital system where there's a very clear task to do, like facial recognition or something like that. Well-defined tasks, you'll probably see it there. It's already like that in a lot of ways. I think we'll probably see more of it in applications. I think what the large language models have really opened up is the possibility of a general interface for people who aren't experts to give prompts to a system to create things and complete things. And so we'll probably see more of that.
J: Like Midjourney. We've been using Midjourney a lot. It's an art program. And we're pretty damn blown away by how incredible the images can be. And we were having a discussion about how this affects society. There are so many elements to this, especially what it's going to be like in the future. Imagine, I don't know, I don't want to get back into it, Cara, because Cara and I-
C: I know, careful Jay.
J: I'm picking my words carefully.
C: What is the value of art?
J: But it is amazing to see a software program where I give it five words 'sunset in Florence, Italy' and it creates, in my opinion, a profoundly beautiful painting.
MH: Yeah, it is pretty remarkable, pretty amazing that we have these systems that can do that. I mean, yeah, I don't know. It's hard to tell, again, what the social impact is going to be, because we do have technologies that can reproduce things pretty well on a large scale already. The printing press did a lot of the work, I think, in a way, and maybe just the internet did a lot of the work of changing society. A part of me does think that if it becomes so easy to create content and make things up, you might actually see people just automatically not believe things that they see on the internet. Maybe people will be more skeptical.
S: Anything could be a deepfake.
MH: Yeah, exactly. And people will kind of rely more on their judgment and on trusting the source of the information. I feel that happened throughout the evolution of the internet: there was a period where you believed a lot of stuff on the internet, and then you stopped believing most of it, except for the sources you really trusted.
C: Yeah, and how you come to those realizations is where the human psychology is so important. You can't ignore that component of this.
MH: Right.
S: I'm interested in your thoughts, especially since you're at the cognitive end of things, on the relationship between the kind of narrow, or whatever you want to call it, AI that we have today and an AGI, an artificial general intelligence that's actually a self-aware thinking entity. My sense, which has been strengthened by what you've been saying, is that the current AI algorithms, the deep neural nets, whatever, are not on a path to general AI at all. They're really good at solving specific problems, but we would need, say, conceptual breakthroughs to get to general AI. So do you agree with that? If so, how are we going to get to general AI? Or another question that comes up is, do we even need to get to general AI? Can narrow AI just do everything we need it to do without ever having to worry about whether it's aware or not?
MH: I don't think it's necessary for us to develop general AI. It's personally not my priority. I'm mainly interested in human cognition and how we can improve people's lives and understand how people work better, using tools from AI and building with AI and stuff. But in terms of whether the current paradigm of statistical machine learning is sufficient to get to AGI: qualitatively, it doesn't seem like it would be able to do that, because of just how it works. These are systems that are extremely good at pattern recognition, picking out patterns in a well-defined problem that you give them and thinking about things within the parameters of the problem you've given them. What I think the large language models have made really evident is that if you have a big enough problem or a big enough dataset and throw that at one of these systems, you basically mirror all of human text on the internet, mirror that distribution, then it comes to resemble intelligence very well, at least within, again, the parameters of that problem. And so it might be possible that there's a qualitative jump as you scale these things up, which is hard to predict from a basic understanding of it. Like a, what's the word? A phase shift of some kind.
C: But do you think that could happen by virtue of the iterative process of the AI itself? Or do you think that's going to require... I think the big fear and concern, when people start to get really dystopian about the singularity and things, is: at what point can we no longer control the AI? At what point does it become self-aware enough that it says, no, I'm not going to do what you just asked me to do, I'm going to make these decisions on my own? And that's the part that I think is the stuff of film, but it's the realistic fear that a lot of people have. Do you think that we are ever going to be there?
MH: Yeah. I mean, so I don't think that an AI has to be self-aware for it to be dangerous or cause a problem.
S: Yeah, there's the paperclip problem. It doesn't need to be aware that it's producing paperclips. It just has to be obsessed with producing paperclips, at all costs.
C: But could you tell it then, please stop producing these paperclips? I guess that's the kind of concern, right?
MH: Yeah, yeah. Well, it might say, well, you told me to produce them before. Why should I listen to you now?
C: Right.
MH: I think any complex technological system that has these effects that are very hard to predict or get people to agree on how to use them can lead to bad things potentially. And so I think a lot of it, I think the potential for things to go awry is already there in a way, just because these are very big, complex, often uninterpretable systems.
C: And it's already happening. These may not be eschatological outcomes, but there are already massive social justice outcomes that we're contending with.
MH: Yeah. Yeah. And I mean, a concrete example is even the YouTube algorithm for presenting people with new content. It's not paperclips, it's clicks.
C: It's not paperclips, it's white supremacy. (laughter)
S: Is the YouTube algorithm destroying our democracy, basically?
C: Exactly.
S: It's a plausible question.
B: I'd rather have paperclips.
J: But I often think about this concept. What we have today with artificial intelligence, everything, it is so unbelievably far away from an actual conscious intelligence. This isn't going to happen by accident, right?
S: Skynet's not going to wake up spontaneously without us intending to create something that is capable of being conscious. Right? You agree with that?
C: But the question is, does that matter?
S: And that's the separate question, does it matter? Will it act enough like it is sentient that it might as well be in terms of its ultimate behavior?
C: And in terms of its impact on society and human beings, there are human beings who I wouldn't consider sentient. And then there are other human beings who will be very easily duped by something that isn't even remotely passing the Turing test.
J: I guess the point is artificial intelligence will get more complicated. It'll be able to do more stuff. It'll be able to do things better and faster. But I think from everything that I've read, and I'm just seeing where you're at, Mark, this whole idea of a computer becoming conscious, or a supercomputer having some type of consciousness that we as human beings could say, yeah, it knows it's alive, that could be a hundred years away. When does that happen?
S: We have no idea.
MH: Yeah. I mean, we don't even understand it, how it's possible in humans or animals. So how would we even know if it's there in an artificial agent or how to even build it?
J: That's a good point.
MH: Hard problem of consciousness, I guess.
S: So, all right. So let me frame the question to you this way, because this is a separate discussion that we've had, separate from AI: what is human consciousness? I tend to follow Daniel Dennett's idea that there is no hard problem, that human consciousness is just what you get when you solve all the small problems all at the same time, all talking to each other in real time in a continuous loop. That's consciousness. There is no hard problem. So if that's true, then you could think, well, maybe Skynet will wake up. Maybe if we have enough AIs linked together so that there's a constant self-perpetuating input and output, maybe that is a general AI. What do you think about that? Or do you think there has to be some special sauce in there, not just a bunch of narrow AIs talking to each other?
MH: Yeah. I do think there has to be a special sauce, but I don't think it's like magical or anything like that. I think it could be understood within kind of the functionalist, computational cognitive science, cognitive science framework. Also taking into account being embodied and being cultured and stuff like that.
S: Although those are also, I'm a neuroscientist, so all of those things are little circuits in your brain. There's a circuit in your brain that makes you feel like you're in your body, that makes you feel like you're in control of your body, that makes you feel like you exist. All of these things are just circuits in the brain that can be turned off. And when you think about it that way-
C: And to look at sort of a microcosm of what you were talking about, Steve, we've discussed on the show before the idea of developing organoids in order to test drugs or to test different surgical techniques. An organoid which has the neural, and when I say the neural networks, the neuronal networks, that are required for self-organization, developing eye spots, developing circuits, for example. At what point does that organoid have enough of that circuitry? Or at what point is that circuitry organized enough? Or at what point, and we're talking about the hard problem again, does the consciousness emerge, even if it really is just an emergent property? And how does that relate to AI? How organized does it need to be? How many inputs does it need to have? How much programming is required?
J: Mark, if you can-
S: Go! Solve this problem! (laughter)
C: Or is that, are we asking the wrong question?
MH: Honestly, I really don't know. I take the perspective that we're still pretty early in our understanding of the brain and the mind and the general principles underlying these things, and we don't even have the right concepts yet. The first step is to define things and describe what's going on, and in a lot of ways we're still there. Cognitive scientists and AI people and people who think about intelligence are constantly arguing about what you even need for intelligent behavior, let alone intelligent behavior plus this feeling of awareness that we have. So I think the framework of computation that AI is built on is the best working model we have for how these things work. But I'm sympathetic to critics of Dennett who say that there's something it's like to be a bat, something it's like to be a person, that's not just inputs and outputs. I think we need to exhaust the input-output way of thinking about things before we can get there, probably. And so that's what I'm focused on.
S: So here's another thought that I had about this, and I'd be interested in your thoughts on the Google employee who was convinced that his chatbot was sentient. From what I hear, most people agree it couldn't have been, although, interestingly, some smart people that I know pushed back on the idea that it couldn't possibly have been sentient. I think it couldn't possibly have been sentient, but I'll let you tell me what you think.
C: But like I was saying before about people.
S: But what that reminds me of is this sense that we may not know when we get to general AI. Even more so, if we move in this direction, if we try to put together a bunch of narrow AI algorithms so that a robot can exist like a person in the real world, we may get to the point where we can't tell the difference. Just like for this Google employee, it would be indistinguishable, in terms of the end output, from a sentient being. So how do we know it isn't sentient? Maybe we'll get to the point where we'll have an entity that does all the things that people do, even though we know they're all circuits: that's just machine learning and neural nets, it's all brittle, narrow AI, whatever. Yeah, but put it all together, and it's indistinguishable from how a person behaves.
J: Steve, I think we'll know.
S: Then we don't know. We won't know.
C: No, because what is the parameter? That's an operational definition that we set. At what point does it go past what threshold? That's arbitrary anyway.
S: So we'll all just be a bigger version of this Google employee who thought his chatbot was sentient.
J: Steve, we'll know when AI becomes truly conscious when it becomes lazy.
S: No, but that may just be part of the behavioral algorithm.
B: The lazy circuit. Come on.
C: It's already lazy. I mean, isn't laziness just increasing efficiency?
S: But you could say that's just efficiency, saving energy.
C: We want to be as lazy as possible.
S: Don't want to wear my batteries down.
C: We want to work smart, not hard.
S: But Mark, we threw a lot of stuff out there. I just wonder if you have any thoughts about that.
MH: Yeah, yeah. I mean, this question of sentience, I don't know what the definition of sentience is. I'm not a big fan of obsessing over definitions, but I don't even know how you would test for sentience, where you would even begin testing for sentience independent of-
C: Mirror test?
MH: -well, but that's self-awareness. Is that sentience? Is that what we mean, some ability to recognize a pattern out there as being caused by yourself, or as being you, or something?
C: Also we might actually be talking about sapience, right Bob?
B: Yeah. I mean, I think it's probably a better word than sentience.
C: It's the ability to feel, right?
MH: Okay, okay. Sapience feels a little more well-defined to me. It's closer to things like intelligence, which is also not the most well-defined thing, but lately I've been thinking about it in terms of this idea of agency, and what we consider an agent. There's a fundamental distinction we make, and there's evidence that we make this distinction as children, or even as early as infancy: we parse the world into things that are agents and things that aren't agents. Things that seem to be self-propelled, seeking things out in the world and reacting to the world in a smart way, not just mechanistically, versus things that are just more like mechanism. What's challenging about AI systems is that, depending on who you are, you understand the mechanism well or not well enough. It's related to Dennett's intentional stance ideas. But thinking about the Google employee who thought the system had a personality and personhood or agency that should be respected, I don't think just getting a system to print out "I am a person" is sufficient. Because, for one, if you ask it what it's like to be a squirrel, it'll give you a long monologue about how incredible it is to be a squirrel and how it loves nuts and stuff. And so it's-
E: Brittle.
MH: -well, it's actually very flexible. It's just a lot of false positives.
S: It does too many things.
C: It's masquerading.
S: It shouldn't know what it's like to be a squirrel.
MH: Yeah. But another perspective is thinking about these systems, these large language models, as components of a larger sociotechnical organism: how are engineers fine-tuning the system? How are prompt engineers developing new ways to extract meaningful outputs from the system, and learning as they go? I think the agency perspective highlights that we come to see things as more person-like to the extent that they react adaptively to us and to other things in their environment, pursue their goals, and aren't brittle but actually robust. And these models are kind of adapting over time through the engineering process, which includes the actual system, but also the data it's getting and the people fine-tuning the parameters and selecting different ways of doing it. So I don't know if that helps. I think it's hard to draw a very clear boundary around a thing that is an agent or sentient or intelligent. Because even in humans, there's no part of the brain that we know is sentient. It's kind of all of it.
S: We had to give up that idea.
B: It's emergent.
S: There's no global workspace. There's no seat of consciousness. We can't find it.
B: Pineal gland.
C: It's not the pineal gland.
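To illustrate Mark's point about false positives: a language model only continues text, so it will fluently affirm whatever a prompt implies, true or not. Here is a minimal sketch in Python, assuming the Hugging Face transformers package and the small gpt2 model, both chosen purely for illustration:

from transformers import pipeline

# A small text-generation model; it predicts plausible continuations,
# so it will "describe" squirrel life as readily as it claims personhood.
generator = pipeline("text-generation", model="gpt2")

prompts = [
    "As a squirrel, the best part of my day is",
    "I am a person, and I know I am conscious because",
]
for prompt in prompts:
    result = generator(prompt, max_new_tokens=40, do_sample=True)
    print(result[0]["generated_text"])

Fluent output for both prompts is the expected result, which is why a printed "I am a person" carries no evidential weight about sentience.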
S: Well, Mark, thank you so much for joining us. Sorry to lob you so many softballs during this interview.
MH: Yeah, consciousness.
E: We'll go harder next time.
S: Nature of reality kind of easy, easy questions. Maybe we'll get really down to the hard questions next time when we get you back on the show. Thanks again. It was a lot of fun.
J: Thank you, Mark.
MH: All right.
J: Take care, man.
C: Thanks, Mark.
Who's That Noisy? ()
New Noisy ()
[_short_vague_description_of_Noisy]
Announcements ()
Dumbest Thing of the Week ()
- [url_from_show_notes _article_title_] [3]
Name That Logical Fallacy ()
- _Fallacy_Topic_Event_
_consider_using_block_quotes_for_emails_read_aloud_in_this_segment_
with_reduced_spacing_for_long_chunks –
Questions/Emails/Corrections/Follow-ups ()
_consider_using_block_quotes_for_emails_read_aloud_in_this_segment_
with_reduced_spacing_for_long_chunks –
Question_Email_Correction #1: _brief_description_ ()
Question_Email_Correction #2: _brief_description_ ()
Science or Fiction (h:mm:ss)
Answer | Item |
---|---|
Fiction | |
Science |
Host | Result |
---|---|
Steve |
Rogue | Guess |
---|
Voice-over: It's time for Science or Fiction.
_Rogue_ Response
_Rogue_ Response
_Rogue_ Response
_Rogue_ Response
Steve Explains Item #_n_
Steve Explains Item #_n_
Steve Explains Item #_n_
Steve Explains Item #_n_
Skeptical Quote of the Week ()
You don’t need to predict the future. Just choose a future — a good future, a useful future — and make the kind of prediction that will alter human emotions and reactions in such a way that the future you predicted will be brought about. Better to make a good future than predict a bad one.
– Isaac Asimov, American science fiction writer and professor of biochemistry
Signoff/Announcements ()
S: —and until next week, this is your Skeptics' Guide to the Universe.
S: Skeptics' Guide to the Universe is produced by SGU Productions, dedicated to promoting science and critical thinking. For more information, visit us at theskepticsguide.org. Send your questions to info@theskepticsguide.org. And, if you would like to support the show and all the work that we do, go to patreon.com/SkepticsGuide and consider becoming a patron and becoming part of the SGU community. Our listeners and supporters are what make SGU possible.
Today I Learned
- Fact/Description, possibly with an article reference[8]
- Fact/Description
- Fact/Description
Notes
References
- ↑ The Washington Post: These planes are battery operated. Will that fly?
- ↑ Michigan Engineering: First light at the most powerful laser in the US
- ↑ [url_from_show_notes _publication_: _article_title_]
- ↑ [url_from_SoF_show_notes PUBLICATION: TITLE]
- ↑ [url_from_SoF_show_notes PUBLICATION: TITLE]
- ↑ [url_from_SoF_show_notes PUBLICATION: TITLE]
- ↑ [url_from_SoF_show_notes PUBLICATION: TITLE]
- ↑ [url_for_TIL publication: title]