SGU Episode 951


== News Items ==
{{anchor|news#}} <!-- leave this news item anchor directly above the news item section that follows -->
|publication = PLoS
}}
'''S:''' Speaking of manipulating people online, Jay, tell us about Zoom backgrounds.
'''J:''' Yeah, this is one of those things where, when you hear it, you're like, OK, that definitely makes sense. So researchers decided that they wanted to study the influence of video backgrounds and how they can have an effect on people's first impressions over a video conference. Bottom line is, it turns out that the background that you have when you're doing an online video call can have an influence on what people think about you, particularly first impressions. And to state the obvious, we live in a world where video conferencing has totally exploded, during and after COVID. The world straight up changed after COVID, with more people than ever now working from home. A lot of first impressions are happening this way. If you're working from home and you have to be in meetings, you're meeting people over video. We definitely use video way more now than we did. I mean, look, we still do a Friday live stream. That came from the pandemic and it stuck. It's all part of that whole stay-at-home, don't-go-to-the-movies shift; the culture has changed quite a bit. So I think this research is really useful, and it's very provocative. The research was directed by Paddy Ross, Abby Cook and Meg Thompson, who are at Durham University in the UK. What they found was that the participants of the study judged people as more trustworthy and more competent, get this, if they had plants or bookcases in the background.
'''S:''' I have both.
'''C:''' Yeah, same.
'''E:''' And was that by mistake, Steve? Just a happy coincidence?
'''S:''' Intuitive.
'''E:''' Right. Right. You know, you're paying some attention.
'''S:''' I mean, honestly, I do telehealth. Patients see my background and I wanted a "professional background". And those are elements that are very common in that kind of setting.
'''C:''' I think I just have plants and bookcases or plants and books on the bookcase behind my desk. I think I just naturally have those things in my background.
'''J:''' And the other interesting part about this is there were backgrounds that were bad that you might not expect. Like a kind of-
'''E:''' Zombies.
'''J:''' Just a regular living space.
'''B:''' Zombies.
'''J:''' That was kind of empty, say, or a fake background, which was bad. If you're obscuring your background-
'''E:''' Those artificial ones, yeah.
'''J:''' So other factors that had an influence on first impressions were the person's gender and their facial expressions. The researchers asked one hundred sixty-seven adults to look at still images that seemingly were taken from a video conference, but they constructed them. They were like, OK, in this picture we're going to show a woman smiling with a very non-decorated background, and so on through all the variations of the different types of backgrounds, between men and women and different things in the background. And the images were of a white man or a white woman. The study was done in the UK. There was no cultural diversity here; they were just doing a very basic initial study. They weren't trying to factor in other variables.
'''S:''' Yeah, you can only do so many variables in one.
'''J:''' Yeah, I mean, of course. And they varied the images, like I said: some people had neutral faces, some people had smiling faces. And then here are the different backgrounds: they had a plain living space, a blurred living space, houseplants, a bookcase, a blank wall, or a fake image, right, which is an imposed image of something like a walrus or an iceberg. And you'd think, oh, a walrus is nice, it's cute. But the study participants were asked to rate how competent or trustworthy they felt each person in the images was, and here are the results. Images that had bookcases and houseplants in the background, like I said, were rated as being both more trustworthy and more competent than images with all the other backgrounds. And interestingly, the images that had a living space or a fake background rated the worst. Now, this is not an insignificant finding. Of course, happy faces also rated higher than neutral faces, and female faces rated higher than male faces. I was surprised that men and women didn't rate equally, but definitely happy faces, yeah, of course, if you're smiling and you're welcoming, that would leave a better impression. So the ultimate combination that they found was a smiling female face in front of either a bookcase or plants; that rated the highest, the most competent and the most trustworthy. The researchers concluded that backgrounds absolutely do affect first impressions, and they also suggest not using the low-rated backgrounds, just avoid them, especially the fake backgrounds. They concluded that visual context has a strong influence, meaning, like a gestalt, very quickly, in the briefest of moments when you first get on a video call, judgements are being made, and those judgements can affect all your communication with that person.
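The factorial design Jay describes (faces varying in expression, crossed with six background types, each image rated for trustworthiness and competence) can be sketched as a toy simulation. The ratings below are entirely made up to mimic the reported pattern; this is not the study's data or analysis code:

```python
import random
from statistics import mean

random.seed(0)

# The six background conditions described in the segment.
backgrounds = ["bookcase", "houseplants", "blank wall",
               "blurred living space", "plain living space", "fake image"]

# Synthetic rating biases on a 1-7 scale, chosen to echo the reported
# result: bookcases/plants highest, living spaces and fakes lowest.
bias = {"bookcase": 1.0, "houseplants": 1.0, "blank wall": 0.3,
        "blurred living space": 0.3, "plain living space": -0.5,
        "fake image": -0.5}

ratings = {bg: [] for bg in backgrounds}
for bg in backgrounds:
    for _ in range(100):  # 100 simulated ratings per background
        ratings[bg].append(4 + bias[bg] + random.gauss(0, 1))

for bg in sorted(backgrounds, key=lambda b: -mean(ratings[b])):
    print(f"{bg:22s} mean trustworthiness = {mean(ratings[bg]):.2f}")
```

Sorting by the mean rating puts bookcases and houseplants at the top and the living-space and fake backgrounds at the bottom, which is the ordering the study reported.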
'''B:''' Look, he has a cat face.
'''J:''' So I was looking at my room, right? I have a very cluttered office with lots of different random stuff; you can see parts of my life back there. Guitars, masks on the wall and stuff like that. I wish that they had tested that, like, what's a cluttered room going to do? Is it going to be a real turn-off, or is it provocative because there are different things to look at? I don't know. I wanted to hear that. The information is impactful because by 2024, which is coming right up, 75% of business calls are predicted to be over video. And if, for example, someone is being remotely interviewed on a video conference, which is common now, perceived competence is a strong predictor of hireability. If you're interviewing over video, put plants in there or get in front of a bookcase. I think you might as well. Why not? Why wouldn't you do it, especially if you're looking for a new job?
'''E:''' Well, yeah, and I don't think this is necessarily new. I mean, Steve, as you were alluding to, when you go into a professional environment, a doctor's office, a lawyer's office, even an accountant's office, I've seen it. Guess what's in the background? Books. Among some other things and trappings. But I think that's the expectation that people have. And they want your video call to live up to that expectation.
'''J:''' You know, my personal experience, I absolutely don't like the fake backgrounds.
'''C:''' I hate them, too. They're so obvious.
'''J:''' Yeah, they make me uncomfortable. There's something about it that makes me uncomfortable.
'''C:''' Yeah, it's like, what are you hiding?
'''J:''' Yeah. Yeah, you're missing context, which I don't like. And of course, there's an endless amount of variation they could study here, and every different country, I'm sure, would have its own parameters that would factor in. But something natural like having plants in there or having a bookcase is pretty common. Books definitely represent a certain type of personality or mindset. Books have a vibe to them that conveys something, and so do plants. So interesting, right, guys?
'''C:''' Yeah.
'''S:''' What influence did having a corpse in the background have? ''(Cara laughs)''
'''B:''' The Halloween people loved it.


=== Manifesting Fails <small>(24:38)</small> ===
}}


'''S:''' All right, Cara, tell us how to get rich through manifesting or maybe not.
 
'''C:''' Or not. So obviously, we talk about this sometimes on the show. I've referenced before a really incredible book by Barbara Ehrenreich, who sadly recently passed away, I don't know if you guys remember that. She's an incredible author, and the book is ''Bright-sided: How the Relentless Promotion of Positive Thinking Has Undermined America''. I cite this book all the time because I think that she does a really good job of digging deep into the evidence behind the dark underbelly of the positive thinking movement. And so whenever news articles come out within this realm, which is not very often, I'm always excited to dig into them. So, anyway, there's a new study that was published in ''Personality and Social Psychology Bulletin'' called The "Secret", in quotes, you like that?, to Success: The Psychology of Belief in Manifestation. This is something that I feel like we deal with all the time as skeptics, but we don't often really get into the weeds on it. We sort of pooh-pooh it, or we go, yeah, obviously, this is dumb. But we don't really talk about how or why we know that manifestation doesn't work. Have you guys ever found that you get into, I don't want to say arguments or debates, but heated conversations, or even not-so-heated conversations, with people in your life, or even just random people that you run into in your daily dealings, about, no, no, but if you put positive vibes into the world, you're going to get something back out of it? It's sometimes kind of hard to counter. Basically, the counter is: there's no evidence to support that.
 
'''S:''' People think that it's a virtue to believe that; it's kind of like the assumption that having faith is a virtue. And I think they also just want to believe that that's the way the world works, because it basically gives you magical control over it. And again, you're probably going to get into this, but there are two layers here, right? There's the magic layer and the psychological layer. They're both wrong.
 
'''C:''' They're both wrong. But you're right, the magic layer is more obvious to those of us who have existed in the skeptical movement for a long time. The psychological layer, which feels like it could be more complicated, is actually not that complicated. And so I really like this study, because it reinforces a handful of interesting things, and while doing it, it has some fun psychometric stuff, and I love psychometrics. So basically, it's a group of researchers in Australia who looked at American participants, and in doing so, they developed and then validated a psychometric measure called the manifestation scale. They wanted to be able to say, OK, empirically, we want to see how much people buy into the law of attraction, or the power of positive thinking, or the idea that manifesting some sort of outcome works. So they define that as the ability to cosmically attract success in life through positive self-talk, visualization and symbolic actions, or acting as if something is true and just hoping that it happens. They did three different studies overall, with over a thousand participants. Over the course of these studies, they first developed this measure, the manifestation scale, then they validated it, and they tried to understand a little bit more about individuals who score higher on it. It's got two subscales, the personal power subscale and the cosmic collaboration subscale. So we've got different types of items on the scale, like: visualizing a successful outcome causes it to be drawn closer to me; I'm more likely to attract success if I believe success is already on its way; the universe or a higher power sends me people and events to aid in my success; to attract success, I align myself with cosmic forces or energies. And they looked into all the things that you have to look into when you're first developing a psychometrically sound scale like this. Does it have a normal distribution of scores? What are the different demographics of the individuals we're testing, their age and their gender and their income and their education, and how are they netting out on this? So they looked across all of those things in an effort to ensure that this scale had, we've talked about this before, validity and reliability. And they found that their manifestation scale was psychometrically sound. It was internally consistent, it was stable over time, and it was normally distributed. And they found evidence that endorsement of manifestation beliefs is sadly pretty high. In this very first study, they found over one third of participants endorsed manifestation beliefs. Does that surprise you guys, or is that what you would expect?
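The internal consistency Cara mentions is conventionally quantified with Cronbach's alpha. This is an illustrative calculation on made-up Likert responses to four hypothetical scale items, not the study's actual data:

```python
from statistics import variance

def cronbach_alpha(items):
    """Cronbach's alpha. items: one list of scores per scale item,
    all the same length (one score per respondent)."""
    k = len(items)
    item_vars = sum(variance(col) for col in items)      # sum of item variances
    totals = [sum(scores) for scores in zip(*items)]     # per-respondent totals
    return (k / (k - 1)) * (1 - item_vars / variance(totals))

# Four hypothetical manifestation-scale items, five respondents (1-7 Likert).
responses = [
    [7, 6, 2, 1, 4],  # "Visualizing success draws it closer to me"
    [6, 7, 1, 2, 5],  # "Believing success is coming attracts it"
    [7, 5, 2, 1, 4],  # "The universe sends people to aid my success"
    [6, 6, 1, 2, 5],  # "I align myself with cosmic forces"
]
print(f"alpha = {cronbach_alpha(responses):.2f}")  # alpha = 0.98
```

Because these fake items rise and fall together across respondents, alpha comes out very high; values around 0.7 or above are usually read as acceptable internal consistency.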
 
'''S:''' About a third seems about right.
 
'''E:''' One third? Sure, yeah. Seems about right.
 
'''C:''' Right. OK. So they then moved on to study two, in which they continued to work towards the validity of the scale, with specific things like criterion and construct validity, which we don't really have to get into. And they wanted to see if manifestation beliefs are associated with perceptions of current and future success. And then they moved on to study three. I want to combine them so I can give you the results together. They continued to confirm the psychometric properties, and they also were super curious about how manifestation beliefs might actually affect somebody's decision making, especially when it comes to business and financial ventures and judgments about future success. So here is what they found, in a nutshell. They found that not only was their scale valid and reliable, not only did over a third of participants endorse manifestation beliefs, but those who had higher scores on the manifestation scale thought of themselves as more successful than their counterparts. They thought of themselves as having stronger aspirations for success, and they believed that they were more likely to achieve future success. All of this seems like what you would expect. But here is the dark underbelly, as we mentioned, and I think it's good to have hard data on this, because it's got good face validity, but it's important that we see how this nets out: they're more likely to be drawn to risky investments, to have had a bankruptcy, and to believe that they could gain an unlikely amount of money or success more quickly.
 
'''E:''' Interesting.
 
'''S:''' Yeah, they believe in magic, and so magical thinking is not good in the marketplace.
 
'''C:''' Exactly.
 
'''E:''' Well, yeah, that's for sure.
 
'''C:''' And just like you mentioned before, Steve, there's the magic side of this and there's the psychological side of this. Manifesting doesn't work. We know that, right? Manifesting, really just saying, I'm putting out good vibes and I'm going to be paid in outcomes, we know that that doesn't work, and we know that it's a scam. It's what charlatans do. It's what get-rich-quick charlatans do to sell books and to sell conferences. It's ''The Secret'' packaged a million different ways. But we often talk about this when it comes to skepticism, the sort of precautionary principle, or the what's-the-harm component of this. There is actual harm, because if you think you are more likely to have these positive outcomes, you are also apparently more likely to do dumb stuff, to make bad decisions, especially when it comes to your finances, because you think that you're going to have a positive net benefit from it.
 
'''S:''' But we have to say that this study did not establish cause and effect. It's also possible that people who make bad decisions about finances also make bad decisions about what to believe when it comes to manifestation.
 
'''C:''' A hundred percent that's possible.
 
'''S:''' Yeah, so maybe a combination of the two things.
 
'''C:''' Yeah, this is obviously correlative, but I think the interesting component here, which complicates things a little bit, is that the sort of ''Secret''-style positive thinking manifestation movement is not some sort of background belief structure. It's an architected, intentional belief structure. So although you're right, this doesn't prove cause and effect, I do think it's safe to assume that there may be people out there who are more likely to buy into this. But I don't think that anybody "naturally" believes these things. I think that this is something that is taught, that there's an actual movement to teach individuals to think this way. And that's an important point.
 
'''E:''' Is it Western culture only or?
 
'''C:''' Not necessarily. No, I don't think that this is only a Western thing, but I don't think it's universal by any stretch of the imagination. I think you do see ideas like this cropping up in a lot of more religiously influenced cultures as well. I mean, if you think about it, like sacrificing to the gods or making offerings to the gods in order to get certain types of positive benefits in return, it's not the exact same thing, but I think there's probably a lot of crossover there, don't you think?
 
'''S:''' Well, certainly people use people's religious beliefs to make them vulnerable to scams and cons.
 
'''C:''' A hundred percent. And the real question there, right, I mean, it's not the real question, but we talk about this a lot, is the intentionality of it. Do the perpetrators of this truly believe what they're selling or not? Ultimately, it doesn't really matter; the outcome is the same. But the scams and the cons, especially when it comes to the power of positive thinking movement, are quite egregious and obvious. This capitalist conference-room version of it is quite clearly con artistry at its finest. But I think you see shadows or versions of this dating back probably thousands of years that are really religiously influenced, where the individuals promoting this type of thinking fully believe what they're promoting. They are equally duped by the rhetoric that if I put out these positive thoughts or vibes... It's no different from prayer, really no different from a lot of these other iterations of it. It's just a new, glossy, capitalist take on magical thinking.
 
'''S:''' Yeah. And on the psychological one, I like to harken back to something that Richard Wiseman said in his book, ''59 Seconds'': what the research shows is that imagining yourself at your goal is counterproductive. What is productive is imagining the steps you have to take to get there.
 
'''C:''' Right. Right.
 
'''S:''' So just saying, I'm going to be wealthy and happy and whatever, is similarly ineffective, because again, it's just magical thinking. You think that's somehow going to magically get you to your goal, as opposed to saying, first I need to go to school and get my degree, and then whatever; you have a process that you're going to go through to get to your goal. That's practical. That is actually helpful.
 
'''C:''' Right.
 
'''B:''' Which makes that [https://harrypotter.fandom.com/wiki/Mirror_of_Erised Mirror in ''Harry Potter''] so evil. You're seeing yourself at the goal and that's all you care about.
 
'''C:''' And then what ends up happening is that you make decisions that are risky, that are short-term, that don't have this sort of long process. You're not making decisions that require hard work; you're more likely to fall into get-rich-quick schemes. And they backfire. Those things backfire. They're very dangerous. It's high risk with a low probability of high reward.
 
'''S:''' I'll end with a quote that I love that I read. Economist Paul Krugman said, when people believe in magic, it's springtime for charlatans. He was talking about economic charlatans, but it applies across the board. Belief in magic makes you vulnerable.
 
'''C:''' Yes.
 
'''S:''' Absolutely.


=== Tong Test for AI <small>(37:51)</small> ===
|publication = Engineering
}}
'''S:''' All right. So-
'''B:''' AI! AI!
'''S:''' Yeah, this is an interesting item about artificial intelligence, which we've been talking a lot about, obviously, because it's massively in the news. And we actually mentioned a couple of times in the last few months that these new large language models like ChatGPT would probably blow through an old-style Turing test.
'''B:''' Oh, yeah. So quaint.
'''S:''' Yeah, we got some feedback on that pointing out that, I think, people misunderstood what we were saying and didn't put it in the context of previous conversations we'd had about it. We know that the Turing test is a formal test, right? The concept was put forward by Alan Turing, and it was developed into a formalized test, which different institutions ran with their own specific details and thresholds. But the basic idea is an artificial intelligence that can fool a certain percentage of subjects into thinking that it's a person, or at the very least make it so they can't distinguish it from a person. So you're talking to either a person or an AI and you have to decide which it is, and if the AI can fool 30 or 40 or whatever percent of the people, it's considered to have passed that instance of the Turing test. These large language model chatbots are basically a leap forward in that kind of AI, and I think they render those kinds of Turing tests pretty obsolete at this point. But we have a paper that was recently published proposing a new test, which they're calling the Tong test, T-O-N-G, not after a person, but after the Chinese word for general, because this is supposed to be a test for artificial general intelligence.
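The pass criterion Steve describes reduces to a one-line check: did the machine fool at least some threshold fraction of judges? The judge verdicts below are made up for illustration, and the 30% default is just one of the thresholds different runs of the test have used:

```python
def passes_turing_test(judgements, threshold=0.30):
    """judgements: one boolean per judge, True if the judge
    mistook the machine for (or couldn't distinguish it from) a human."""
    fooled = sum(judgements) / len(judgements)
    return fooled >= threshold

# Ten hypothetical judges; four were fooled.
verdicts = [True, False, True, False, False,
            True, False, False, True, False]
print(passes_turing_test(verdicts))  # 4/10 fooled -> True at a 30% threshold
```

The point of the segment is that this criterion measures only conversational indistinguishability, which a narrow language model can satisfy, not general intelligence.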
'''B:''' Oh, cool.
'''S:''' Yeah. And this is a good follow-up to our previous discussions, because we've talked about the fact that the Turing test actually isn't a good test for whether or not you have achieved an artificial general intelligence; we've always said a really good chatbot should be able to pass the Turing test without having general intelligence. So again, for a quick review, this is the difference between what we call artificial narrow intelligence, which can be very good at specific things but doesn't have human-like intelligence, and general intelligence.
'''B:''' Brittle.
'''S:''' And ChatGPT, as impressive as it is, is a narrow AI. It's very brittle. That's why it can make stuff up and it can be easily confused: because it doesn't really understand anything. It's just predicting the next word chunk. That's it. It's very narrow.
'''B:''' But still, though, it's a language model.
'''S:''' It's a language model, not an understanding model.
'''B:''' That's what makes it exceptional at conversation, so much better conversation than old style chatbots.
'''S:''' The thing is, we use language as a proxy or a marker for intelligence. So a really good language model seems intelligent to us, but it isn't. It's just really good at this one specific thing. So what would the test be? They go into a lot of detail here; I'm going to try to skip to what I consider to be the big picture.
'''B:''' It was kind of hard to figure out what exactly this test is. One thing I like, Steve, is that they say they want to evaluate different aspects of the AGI, which I like, because it's like an IQ test, right? You just can't really quantify intelligence like that. But hitting it from different angles, I think, is a much better way to get a feel for what you're dealing with.
'''S:''' And it's not a specific test that they're proposing. They're proposing the concept of the test and examples of how it could work. OK, so here's one of the criteria they propose an artificial general intelligence should have, one of the things we should test for in an AGI system. They call it infinite tasks. Human beings don't have a finite number of tasks that they can perform, with zero ability to address anything outside of that finite, predetermined list. So an AGI should be able to apply its abilities to a theoretically unlimited number of tasks. And again, that gets to, I think, the fact that a general AI is not just following an algorithm or following rules, or just basing everything on what it's been pre-trained on. It has a deeper level of understanding that it can apply to novel conditions or novel tasks or novel situations. So one of the markers for that is a theoretically unlimited number of different tasks it can perform. Another one is self-driven task generation.
'''B:''' Yeah, it sounds like a tricky one.
'''S:''' It's a tricky one, but I think it's interesting. It actually reminded me of a conversation that we had recently with Christian Hubicki, I think on the live show from DragonCon, talking about a news item about training drones to fly a course. And he said you can't tell the drone, complete the course in the shortest amount of time, because that's too general. You have to break it down to a very specific component, and that is: make it through the next hoop in the smallest amount of time. And it could do that. So you give it a specific task. You don't tell your Roomba, clean the floors; it's following a very specific, narrow task: go in a direction until you bump into something, then rotate 20 degrees and try again. You know what I mean? It's following very, very specific things that add up to the ultimate task, but it doesn't understand the ultimate task, and you can't just give it the ultimate task. But an artificial general intelligence, you should be able to tell, clean up this room, and not give it any further details, and it should be able to figure out what that means and how to do it. And they give lots of other examples. For example, if its task is taking care of a four-year-old, that may involve doing a number of things. But if the four-year-old does something completely unexpected, like ask to play with a sharp pair of scissors, would the AI slavishly follow "I'm supposed to make the child happy" and give it what it asks for? Or would it be able to say, oh, wait a minute, I'm supposed to keep it safe, that's potentially dangerous and harmful, this small child lacks the judgment to use it properly, I'm going to say no, I'm not going to obey its request, because it's dangerous, without having previously been specifically told, don't give it any sharp objects. You know what I mean? It can infer that conclusion from basic principles. Another example they gave was very interesting: clean up the garbage on the floor here. What if there's a hundred dollar bill on the floor? Will it just throw it away as garbage, or will it recognize that this is something of particular value, it's not garbage, and then, on the fly, recategorize it, even though it was never trained specifically to do that? You know what I mean?
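The bump-and-turn loop Steve attributes to a Roomba-style robot can be written out directly. This is a toy simulation (the room size, step length, and starting position are all made up): the robot has no concept of "clean the floor", only the rule "drive forward until you hit something, then rotate 20 degrees and try again":

```python
import math

def bump_and_turn(steps, room=10.0, turn_deg=20.0):
    """Simulate a robot in a square room using only the narrow rule:
    drive straight; on hitting a wall, rotate turn_deg and try again."""
    x = y = room / 2          # start in the middle of the room
    heading = 0.0             # radians
    visited = set()
    for _ in range(steps):
        nx = x + math.cos(heading)
        ny = y + math.sin(heading)
        if 0 <= nx <= room and 0 <= ny <= room:
            x, y = nx, ny                          # clear ahead: keep driving
        else:
            heading += math.radians(turn_deg)      # bumped: rotate and retry
        visited.add((round(x), round(y)))          # coarse floor coverage
    return visited

cells = bump_and_turn(2000)
print(f"covered {len(cells)} distinct cells with no model of the room")
```

The loop covers a lot of floor without ever representing the goal, which is exactly the contrast with an AGI that could be told "clean up this room" and decompose the goal itself.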
'''B:''' Right, right.
'''S:''' So it can self-generate the specific tasks from more general goals and instructions. All right, the third one is value alignment. It has to have some ability to understand the values behind its self-driven behaviours, and it should also, they say, be able to align those values with humans', because if they were not aligned with humanity, that could be a problem. And it should be able to infer those values through interactions with humans. So the one above, self-driven task generation, is going from the general to the specific. This one is about going from the specific to the general, where you see specific instances and then you generalize a value based upon those individual interactions. Does that make sense? That's three. Four is causal understanding. You have to understand cause and effect, and that is also a key component of problem solving, right? Because you need causal understanding in order to problem-solve. Say it understands this creature is hungry, and can go from that to: it needs access to food. Take the case of a monkey and a banana high up in a tree, although that's a bad example because banana plants are not that tall.
'''B:''' How do you know? You never actually grew bananas.
'''S:''' Stop it. So let's say some other fruit, a fig or something, and the monkey would have to climb the tree in order to get access to it. You have to understand that piece, you have to understand the cause and effect, in order to be able to problem-solve even a simple problem like that, again, without basing it on any previous pre-learning. And then the fifth one, which I'm not sure translates to a test so much as a component of an AGI, is embodiment. It has to be embodied in some way, which doesn't necessarily even mean physically. It can be embodied in a physical space, a physical object or a virtual environment. It's got to be able to relate to something physical, even if it's virtual, where it is separate from the rest of the universe: it is embodied in a thing, and it can interact with other things in the universe. Anyway, I thought that was very provocative, to think at a deeper level about what the components of a genuine test for artificial general intelligence would be. Because the Turing test, which I've always thought was terrible, really only answers a very narrow question: when are chatbots good enough to be indistinguishable from a person? It's not really a good test of, is this a general intelligence? And it certainly isn't a test of, is this sentient or sapient? Which I don't even think the Tong test is, but at least it gets closer to it. Because this gets us to, I think, the hardest problem: even if we developed an artificial general intelligence operating at a human level that was able to demonstrate all five features the Tong test is talking about, that wouldn't tell us that it was aware of its own existence, that it was experiencing its own existence. And that gets us to the P-zombie problem. How do we know we're not making an artificial P-zombie?
A P-zombie is a philosophical zombie, something that acts like a sentient entity but doesn't have any qualia; it doesn't have any subjective experience of its own existence, doesn't feel anything. How do we know? How do we know other people actually feel things? It's easy to infer that, because we do, right? And there's no reason to think that you're unique or special in the universe. So if you feel things, other people probably are having the same kind of experience you are. But if we make an artificial intelligence, it's not anything like we are. We don't know that it's actually feeling the things that it says it's feeling, as opposed to just doing problem solving. It may have cognition, but that's not the same thing as qualia. Or is it? That's a fundamental philosophical question that I have never seen a satisfactory answer to. Just some speculations, which may even be compelling, but not the same as, oh yeah, we would absolutely know when an AGI has crossed that threshold to being self-aware, as opposed to just acting self-aware. But is there a limit? Is there a point beyond which, if you're able to act self-aware to this degree, you have to actually be self-aware? And then there's the other answer of, well, it doesn't matter. We have to treat it as if it is, because we can't know that it isn't. Once it's acting self-aware, we basically have to assume it is. But that's kind of kicking the can down the road, right? That's sort of punting the question and saying, well, we're not going to answer that question. We're just going to substitute it with a moral question: we should treat them as if they're self-aware, even if we can't prove it one way or the other. Fascinating, all extremely fascinating. This is, again, I think, an interesting contribution to this thought experiment, which at some point in the future is going to be a practical experiment, not just a thought experiment.
It isn't right now because we have nothing approaching AGI, artificial general intelligence.
'''B:''' We need a new, more nuanced way to assess an AGI. Is it really an AGI? So many people are talking about it. So many people think not only that we're going to have it soon, some people are saying we already have it. I don't believe it. I think we eventually will, but we don't have it.
'''S:''' No, not even close.
'''B:''' Yeah, but hard to say exactly when. We may really need a real good tool to assess it. It could be a while, but it's good to get ready for that kind of stuff right now, it seems. This seems very promising.
'''S:''' Yeah, I would certainly take notice if an AGI was able to pass these Tong test features. If it was able to, again, like, you give it a general instruction that it's never had before specifically, and it was able to figure out what it had to do to accomplish that deeper goal, that kind of thing would be very impressive. It always reminds me of the movie Ex Machina, you guys remember that movie? The whole movie is basically a Tong test, right? The whole movie is testing an artificial intelligence that the eccentric billionaire tech genius made. He wanted to figure out if it was really self-aware, and he created a situation where she would have to basically trick him to escape from her prison. And in order to do that, it would have to have a theory of mind, and it would have to problem solve. It would have to know how to trick him, and it couldn't do that unless it was an artificial general intelligence, unless it was truly self-aware. I thought that was really provocative and a great idea.
'''J:''' Do you agree with that concept though, Steve?
'''S:''' What concept?
'''J:''' Would that play in reality the same way it plays in the movie?
'''S:''' I mean, I don't know, I'm not sure what you mean by that. Do I think that the test-
'''B:''' The strategy seems sound.
'''S:''' Yeah, as a strategy for determining whether or not the robot was truly self-aware, I think it was a valid idea.
'''E:''' Yeah, morally questionable.
'''S:''' Oh yeah, the morals aside. But the idea that it basically had to problem solve in a way that required an understanding of the theory of mind, that other people have thoughts and ideas and feelings that could then be manipulated, doing something that it was never specifically taught to do.
'''E:''' Right, how good could this robot lie basically to a human being and trick it?
'''S:''' Yeah, and manipulate it, like emotionally manipulate a person. Like that would, again, I would take notice. That would be impressive. Does that prove itself aware? Who knows, but that is definitely a general intelligence at that level, right?
'''B:''' Oh yeah, walks like a duck, quacks like a duck.
'''S:''' But we're nowhere close to that now.


=== Looking for Service Worlds <small>(54:42)</small> ===
|publication = sca
}}
'''S:''' All right, Bob, what are service worlds?
'''B:''' Ever since that weirdly dim Tabby's Star, remember, that hit the news a few years ago? People really thought that there was something like a Dyson sphere around it.
'''E:''' That's right, the Dyson Sphere.
'''B:''' And ever since then, such types of megastructures have been more in the public consciousness, which is a good thing, I think. But now some researchers say that it probably makes less sense to search for Dyson sphere type megastructures, and we should be looking for something related to what's being called service worlds. What the hell is that? This is from the preprint "Making Habitable Worlds: Planets versus Megastructures", which is being reviewed for publication in Astrophysics and Space Science. Lead authors are Raghav Narasimha, a physics graduate student at Christ University in Bangalore, India, and Margarita Safonova from the Department of Science and Technology. Woman scientist, is that a position, is that a fellowship? It's called woman scientist. And then also Chandra Sivaram, who is a professor of astrophysics at the Indian Institute of Astrophysics, in Bangalore, India as well. So, OK, you've probably heard of the theoretical physicist and mathematician Freeman Dyson. He famously proposed what would be called the Dyson sphere, which is a megastructure around a star that collects as much solar energy as possible. He introduced that idea in his seminal paper, "Search for Artificial Stellar Sources of Infrared Radiation". He called it an artificial biosphere; he didn't call it a Dyson sphere. It was called the Dyson sphere by Nikolai Kardashev, who you may know from the Kardashev civilization levels, those levels one to three.
'''E:''' Oh, yeah, being able to harness the power of the sun, something like that.
'''B:''' Yeah, the planet, the sun, and then like the galaxy or whatever. He called it a Dyson sphere and that stuck. So the idea is that it would not only harness energy, but apparently greatly multiply the available habitable space that you have. So this is what many people thought we had actually found around that red dwarf called Tabby's Star a few years ago. Turns out it was probably just dust clouds around the star. But these scientists see problems with Dyson's assumptions. One assumption he made was that the planet Jupiter could be used to build this megastructure. And it actually could work, in a sense, if you could use it as a building material. If you spread out Jupiter into a shell at one AU from the sun, that shell would be two to three meters thick, and it could be used to live on, in a sense, and gather energy from the sun, if you had that much. But they say in the paper only about 13% of Jupiter's mass is practically usable for construction, since Jupiter is mostly composed of hydrogen and helium. So you couldn't use all of it; that wouldn't work. And that's a problem, because Jupiter is a big boy, and without using that as a building material, it's problematic. We would probably have to use all the rocky material in the solar system, including all the gas giant cores, just to get to an eight centimeter thickness for a Dyson sphere at one AU. That's quite thin. It's unclear in the paper, I really tried to figure out what they were getting at, but the researchers also seem to claim that even creating a much smaller and more practical ring structure instead of a sphere would also deplete much of the solar system's building material, including Earth, all of Earth, which, of course, is a nonstarter. They also correctly note that Dyson did not envisage a sphere; he was thinking more of a swarm of objects, a Dyson swarm.
You hear that a lot these days, each with their independent orbit, which to me sounds like it would be much less likely to be used as a habitable space, assuming, of course, we're not all just data at that point. They also argue that enclosing the sun in a Dyson sphere would not only be far too unstable, which most people agree on, too unstable, but it would also impact the sun's heliosphere. The heliosphere is the sun's solar wind, the charged particles that go out and create a bubble around our entire solar system, kind of protecting us from cosmic rays and the interstellar medium. So a sphere would kind of do away with the heliosphere and potentially negatively impact what life there is in the inner solar system. That argument I hadn't heard before. So at this point in the paper, the researchers explore their idea that a better option than a megastructure for resources and living space would be to use planets, basically use planets wherever you can get them, but not in the way you might be thinking. They say in their paper, if we convert Jupiter's hydrogen content to energy by thermonuclear reactions, the energy release is 10 to the 42 joules. That's a lot. Even if we consume energy at the rate of solar luminosity, we can manage for more than 100 million years. And that quote was based on a paper by Shklovskii and Sagan, 1966, which was kind of cool. So they're basically saying here that we could just essentially take apart Jupiter and use all of that hydrogen and helium for fusion or whatever, and release huge amounts of energy, the equivalent of what the sun is putting out right now, for 100 million years. So we could use that and just live off of that for quite a long time. And then they say that, for more living space, basically, let's just grab some habitable planets and move them into the sun's habitable zone.
That's the bottom line right there: move planets closer to the sun so that they can be used. Now, they start with Mars and Pluto. They talk about Mars and Pluto, and they say that, of course, those planets, or dwarf planets, would undergo dramatic changes. You move it into the inner solar system, close to the sun, it's going to change the atmosphere; it's going to make a lot of changes. It will also make terraforming a lot easier once it was closer. They then segue to rogue planets, which have escaped their parent stars and roam free within and without the galaxy, all over the place. Now, we know these rogue planets exist in vast numbers. We've talked about it on the show. They theorize that there are 20 times more rogue planets than stars, trillions of them, by some estimates, within the Milky Way. There are far more rogue planets than regular planets. Amazing. It's an amazing idea. So they say that we, or an extraterrestrial intelligence, could move those planets into our local habitable zone and eventually use them to live on, or, as they put it, turn them into a service world, which was an interesting idea. So from their paper, they say: "An extraterrestrial intelligence can intentionally relocate the planets into their system for industrial resource exploitation, energy generation, waste processing and many more enigmatic purposes beyond our comprehension. The uninhabitable planets used entirely for industrial or technological purposes are called service worlds. They could span diverse categories based on specific requisites." So they say gas giants could be shifted closer to serve as an energy source, harnessing their hydrogen and helium; icy or water-rich planets like Pluto could be used for aquatic life on a planetary scale, for aquaculture; rocky planets could be used for planetary-scale agriculture.
Yeah, basically turning them into these service planets, as they say, service worlds, to use for the resources. Sure, I guess. I mean, if you've got a lot of time and you've got a ton of energy to throw around, it's doable. But how would they even move the planets? They talk about that. They say that laser arrays can be used to slowly change the orbit of the local or the rogue planets, which could then be brought into the local habitable zone. Of course, they'd need to be immensely powerful. They say they'd have to be in the zettawatt or yottawatt range. You like that one, Evan, right? Yottawatts, 10 to the 24 watts, huge, huge amounts. We actually have zettawatt lasers. We do have them, but we're firing those for what, picoseconds, femtoseconds? This would have to be fired, I guess, for years, decades, far longer than that. So that's some serious energy being thrown around there.
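The round numbers quoted here, a two-to-three-meter shell from Jupiter's mass, about 10 to the 42 joules from fusing its hydrogen, lasting roughly 100 million years at solar luminosity, can be sanity-checked with a quick calculation. A rough sketch only; the 75% hydrogen fraction, the 0.7% fusion mass-to-energy efficiency, and the 3,000 kg/m³ building-material density are assumed round values, not taken from the paper:

```python
import math

AU = 1.496e11            # meters
M_JUPITER = 1.898e27     # kg
RHO_ROCK = 3000.0        # kg/m^3, assumed density of compacted building material
L_SUN = 3.828e26         # watts, solar luminosity
SECONDS_PER_YEAR = 3.156e7

# 1) Spread Jupiter's mass into a shell of that material at 1 AU: how thick?
shell_area = 4 * math.pi * AU**2                 # m^2
thickness = M_JUPITER / (RHO_ROCK * shell_area)  # ~2 m, in the "two to three meters" range
print(f"Shell thickness: {thickness:.1f} m")

# 2) Fuse Jupiter's hydrogen: E = efficiency * m * c^2
hydrogen_mass = 0.75 * M_JUPITER
energy = 0.007 * hydrogen_mass * (3.0e8)**2      # ~1e42 J
print(f"Energy released: {energy:.1e} J")

# 3) How long does that last if consumed at solar luminosity?
years = energy / L_SUN / SECONDS_PER_YEAR        # ~1e8, i.e. on the order of 100 million years
print(f"Duration at solar luminosity: {years:.1e} years")
```

All three outputs land within a factor of a few of the figures Bob cites, which is as close as a back-of-the-envelope check like this can be expected to get.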
'''S:''' It would seem that you would expend more energy than the resources you would get out of a planet.
'''B:''' Yeah, it does seem that way. But they do claim that it would take less energy than to tear apart an entire solar system and create a Dyson sphere. I don't know, that's what they say, it would take less energy. But I don't know, you're throwing around yottawatt lasers for centuries? I mean, that's a lot. But we've got to remember, this is so far in the future. This is technology just basically beyond our comprehension. And I think even speculating at this level is a little silly, because there are so many more options that might be available, but it's still fun, as we know. But now, finally, we're approaching the point of the paper. The scientists argue that if it makes sense for ultra-advanced civilizations to not use Dyson-type megastructures for more living space and resources, but to instead use this directed energy idea to move planets for such things, then we should be looking for the specific technosignatures of those technologies, right? We've got to look for those signatures to see if they're out there now, or if that information has reached us now. So then what would we look for? How do we find that stuff? Well, they say we could look for high-power laser technosignatures, for one. I guess, yeah, all right, that's fine. There would have to be some radiation spillage, right? That would be detectable, especially if such a powerful laser was running for so long. They claim, and they reference some papers on this, that such laser beams would be detectable by modern telescopes over a kiloparsec away. That's over thirty-two hundred light years. That's interesting.
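The unit figures in this exchange check out as well. A minimal sketch; the parsec-to-light-year factor and the solar luminosity value are standard reference numbers, and comparing a yottawatt beam to the sun's output is this editor's illustration, not a claim from the paper:

```python
# Kiloparsec to light years: 1 parsec is about 3.2616 light years
LY_PER_PARSEC = 3.2616
kpc_in_ly = 1000 * LY_PER_PARSEC
print(f"1 kiloparsec = {kpc_in_ly:.0f} light years")  # a bit over 3,200, as stated

# SI prefixes: zetta = 10^21, yotta = 10^24
L_SUN = 3.828e26                  # watts, solar luminosity
yotta_ratio = 1e24 / L_SUN        # a yottawatt beam vs. the whole sun's output
print(f"Yottawatt as a fraction of solar luminosity: {yotta_ratio:.4f}")
```

So a sustained yottawatt laser would radiate a few tenths of a percent of the sun's total luminosity, which gives a sense of why such a beam could plausibly be conspicuous at kiloparsec distances.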
'''E:''' And we've detected none of them.
'''B:''' I'm not sure how much we're specifically looking for that kind of radiation. Yeah, if it were fairly close, I think it would be obvious. But as we know, the farther away it got, the more subtle and more difficult it would be to find. But I like the idea of listing these technosignatures that we could start looking for. The other alien technosignatures we can focus on are basically planetary alignments that don't make sense, right? With our current understanding of stellar system evolution. If we find crazy planets, planets that just don't make sense. For example, if we see a gas giant next to a rocky planet, then another rocky planet, and then another gas giant, perhaps we might need to consider at that point that that arrangement was created specifically by an extraterrestrial intelligence, on purpose. So that's what they're arguing here.
'''E:''' Artificial alignments.
'''B:''' Yeah, and they argue that other planetary arrangements could also be red flags. This is another quote from the paper: "Planetary systems like Kepler-20 and TRAPPIST-1", which we've talked about, "where many Earth-like low-mass rocky planets are arranged close to their star at a distance less than Mercury's orbit is another possible indication of advanced ET astroengineering." So, yeah, if we find a solar system with lots of Earth-like planets well within the orbit of Mercury, that would be unusual. That would be an unusual thing to find in such a narrow zone. And sure, there could be a natural explanation. But the fact that it's far outside of our current conception of planetary formation, it'd be worth a little extra look. Perhaps they were deliberately moved into the habitable zone of their parent stars. I love at the end, they say, "In short, we should keep our eyes open for any Firefly-verses", which was an awesome quote. That was great, and I laughed at that quote, but it was very apt in this context. If you know anything about Firefly, the stellar system in Firefly is basically one big, huge stellar system with multiple stars, including brown dwarfs and protostars, and 20 planets. Now, if we found something like that, we'd be like, whoa, how is this natural? So it makes sense to say, look for Firefly-verses. So, yeah, I think this is an interesting idea. It's a good idea to look for these types of new technosignatures that we're not specifically looking for, especially since we've already found other systems that seem very anomalous, like the Kepler-20 and TRAPPIST-1 systems. Maybe they were artificially engineered, so it would be interesting to look at them from this technosignature point of view. Not a bad idea. But I don't think we should use this paper to conclude that we should be ruling out megastructures in general.
There are lots of different types of megastructures, or potential megastructures, out there besides a Dyson sphere. There are orbital rings, potentially, halos, topopolises, stellar engines, matryoshka brains, lots of megastructures that make sense. We know they make sense with physics, and they also might make a lot of sense for super advanced civilizations with a lot of power, a lot of resources, and a lot of time on their hands. So we should also remember to look for those technosignatures as well. What are those technosignatures? I don't know. We should start looking for them, too, maybe. But don't rule them out. So an interesting paper. It was a fun read. And that's all I got right now. ''(laughter)''
'''S:''' I love the idea of technosignatures.
'''B:''' Oh, me too. I love it. Even the name technosignature.
'''S:''' I think, in a way, that might be our best chance, just statistically, of detecting the existence of alien life somewhere else in the universe. Because it's something we could see very far away. It's something that could be unambiguous, unambiguously technological.
'''B:''' Yeah, I love the idea, the technosignatures, more so than just, like, oh, let's find the Encyclopedia Galactica being beamed at us. It's like, yeah, I know, we've been looking for a long time and we still-
'''S:''' Could be nice.
'''B:''' We should continue. It would be wonderful to find it. Continue to look, but also look for these other signatures as well.
'''S:''' Thanks, Bob.


=== NASA Recovers Asteroid Sample <small>(1:09:47)</small> ===
}}


'''S:''' All right, Evan, tell us about NASA's recovery of an asteroid sample.
 
'''E:''' Yes, yes. OSIRIS-REx. Now, you might think by that name it's the Egyptian god of dinosaurs, but it's the name of a space mission launched by NASA engineers back in 2016. We'd actually covered this news item once before, back in 2020. Went back, checked our notes. And this is the latest and greatest update on OSIRIS-REx. So first of all, OSIRIS-REx stands for the Origins, Spectral Interpretation, Resource Identification, Security, Regolith Explorer. That's how you get OSIRIS-REx. And I heard it took NASA 18 months just to formulate that clever name. But here's the very quick backstory. September 2017: OSIRIS-REx used Earth's gravitational field for an assist on the way to asteroid Bennu. December 2018: OSIRIS-REx used its rocket thrusters to match the velocity of the asteroid, getting ready for the rendezvous. But first, it had to do a detailed survey of the asteroid, and that took over a year to complete. It was looking for the perfect place to make contact and collect its samples. And that's when it happened, in October of 2020, and that's when we last reported on it{{link needed}}. So it selected its final site. It briefly touched the surface of Bennu to retrieve its sample. The sampling arm made contact with the surface of Bennu for about five seconds, during which it released a burst of nitrogen gas, and that caused the rocks and surface materials to be stirred up and captured in the sampler head. At the time, when we talked about this, the news was, yes, all that happened, but they weren't 100 percent sure at that point that they necessarily captured material at all. I mean, in theory, yes, that's what was supposed to have happened, but they couldn't guarantee that it actually got anything, though likely it did. So then it departed in March of 2021 and began its return journey to Earth. Took about two and a half years. September 2023, here we are.
The capsule touched down exactly according to plan in the Utah desert, with its precious cargo of asteroid samples on board. The samples are there. And that took place on September 24th, so just a few days ago. Awesome. These pebbles and dirt that it collected are older than Earth, the undisturbed remnants of the solar system's early days of planet formation. There are such valuable chunks and dusty data that we're going to get out of this stuff. It's amazing. So, yep, it was collected. They took the capsule, they put it into a cloak of nitrogen gas to protect it from Earth's atmosphere immediately, and transported it to NASA's Johnson Space Center in Houston. And they were able to determine they got about 250 grams of asteroid rock and dust. So, by comparison, if you recall, Japan had a couple of missions in which they also collected asteroid samples. Those were the Japanese space agency's, the J-A-X-A, JAXA, I guess, Hayabusa and Hayabusa2 missions. The original Hayabusa only got a little bit of material, a very tiny amount. The second one got five grams of material, but now we have 250 grams of asteroid material. So collection very, very successful.
 
'''B:''' More grams, more science.
 
'''E:''' Yep. Yep. NASA designed a new laboratory specifically for this mission so that they had it all ready to go to receive the canister. So a very specially designed laboratory just for this purpose. It's there now. The latest news is that they have opened, they've started to basically open the outer container, remove the lid. So that process is still ongoing. And there's going to be an update again; they're going to do a live broadcast on October 11th, in which they're going to give everybody an update with more details about what's going on with the samples. Also, they announced today the first three museums, or at least the first three, that are going to ultimately receive the samples for display, so that the world can see: the Smithsonian National Museum of Natural History, Space Center Houston in Texas, and the University of Arizona's Alfie Norville Gem and Mineral Museum in Tucson. This is amazing stuff. I mean, as one person from NASA was quoted as saying, it's our origin story. We're collecting actual material that will hopefully help us better understand, well, ourselves. Trace organic molecular chemistry is really what it's all about. This is Dante Lauretta, who is the principal investigator for OSIRIS-REx: "We really want to understand the things that are used in biology today, like amino acids that make proteins and nucleic acids that make up our genes. Were they formed in ancient asteroid bodies and delivered to the earth from outer space?" Yep. And hopefully we'll be able to come closer to figuring out if that is in fact true. So really just an incredible story. If you've been following it for all these years, I mean, 2016 to now, you have to be so happy for everybody involved in this mission and everything that was done. Oh, and by the way, the mission is not 100 percent over yet.
Yes, the collection and the retrieval of the debris and all that is great. However, the spacecraft itself is now going to become the OSIRIS-APEX mission. It doesn't have a collector on it anymore, but it's going on to study a new target, the asteroid Apophis, which I think we've talked about before. That's the 2029 asteroid that's supposedly going to come pretty close to Earth, fly within 30,000 kilometers of the surface of the Earth is what they estimate. And of course, a lot of people have all sorts of doomsday scenarios: this is the one that's going to actually clip Earth and destroy us; it's big enough that it's really going to be a destructive event. And there are going to be all kinds of crazy, wacky people saying all kinds of stuff about that, although it's likely not going to happen that way. But yeah, it just so happens that the mission can continue, and it will make that rendezvous. It's going to study that asteroid, and that'll be neat. The mission continues. Good bang for our buck on this one, I think.
 
'''S:''' Yeah, definitely a successful mission. And it's interesting that the pictures of the return capsule look like it's just sitting on the ground.
 
'''B:''' Yeah, right?


'''E:''' Yeah. And NASA [https://svs.gsfc.nasa.gov/gallery/osirisrex/ did a really nice graphic] before it landed. They made a video of how it would work and what they expected, really every stage of it: the parachutes deploying, how it would kind of thump down to the Earth at about 11 miles an hour. And that's exactly how they anticipated it was going to work. So the simulation really told the story well.


'''S:''' Yeah, cool.


{{anchor|futureWTN}} <!-- keep right above the following sub-section. this is the anchor used by the "wtnAnswer" template, which links the previous "new noisy" segment to its future WTN, here. -->



SGU Episode 951
September 30th 2023
[[File:951 Asteroid Recovery.jpg|thumb|The sample return capsule from NASA's OSIRIS-REx mission is seen shortly after touching down in the desert. Credit: Keegan Barber/NASA]]


'''Skeptical Rogues'''

'''S:''' Steven Novella

'''B:''' Bob Novella

'''C:''' Cara Santa Maria

'''J:''' Jay Novella

'''E:''' Evan Bernstein

'''Quote of the Week'''

''I honestly believe it is better to know nothing than to know what ain't so.''

Josh Billings, American humorist


== Introduction, Bob gets Covid ==

''Voice-over:'' You're listening to the Skeptics' Guide to the Universe, your escape to reality.

'''S:''' Hello and welcome to the Skeptics' Guide to the Universe. Today is Wednesday, September 27th, 2023, and this is your host, Steven Novella. Joining me this week are Bob Novella...

'''B:''' Hey, everybody!

'''S:''' Cara Santa Maria...

'''C:''' Howdy.

'''S:''' Jay Novella...

'''J:''' Hey guys.

'''S:''' ...and Evan Bernstein.

'''E:''' Good evening everyone.

'''S:''' So, Bob, I understand your COVID cherry has popped.

'''B:''' Yes, I'm so disappointed in my immune system.

'''S:''' Your superior genes and immune system have utterly failed.

'''C:''' This is your first time?

'''B:''' First, yeah. First timer.

'''C:''' Oh, no.

'''B:''' It had to have happened in Disney. We were with thousands of people. We were there for like five days. And we did wear our masks quite often when we felt uncomfortable, when you're like shoulder to shoulder with people. It just doesn't feel normal anymore, right? So we would still put on our masks, but often we would be with a lot of people, and we were outside, and you usually feel a lot safer outside. But yeah, whatever, something happened, we got it. But we got it the day we got back. So, like, all right, if we had to have it, this is the time.

'''J:''' Yeah, right.

'''S:''' You made it through your wedding reception.

'''B:''' We had a huge September, multiple big events that we were very paranoid about getting sick for. So getting sick after, all right, that kind of works.

'''E:''' And before Halloween. Let's not forget that.

'''B:''' Yes. Oh yeah, baby. I haven't been this sick in, I can't even remember. Not that it wasn't bad, it's just that I so rarely get sick that it was weird to have a fever and to feel so achy. Oh my God. But that's all gone now. Just a little congested and coughing at this point. I'm on-

'''C:''' Paxlovid? Did you take Paxlovid?

'''B:''' Yeah, I'm on it.

'''C:''' Okay. Did you have the burnt hair thing? Like, how does the back of your mouth taste?

'''B:''' Yeah, my taste is a little off, in terms of like, what is that back there? What's going on in my mind? You know, it's like you have a weird taste in your mouth, but I barely notice it, just once in a while. Like, oh, that's nasty. Not that bad.

'''C:''' Really? It was debilitating for me.

'''B:''' Oh, not even close.

'''E:''' Is that one of the listed side effects?

'''B:''' I've only had nine pills, that's three doses.

'''C:''' Oh, no. For me it was from dose one, immediate and nonstop for five full days.

'''E:''' Do you think that's because you're a supertaster, Cara?

'''C:''' Maybe. Steve, did you take it?

'''S:''' No, I've never taken it.

'''C:''' Okay. So if you get it, I'm not wishing this on you, but if you get COVID and if you take Paxlovid, please report back to us on what the back of your mouth tastes like.

'''S:''' It's complicated, because supertasting is not a single thing. It's multiple things.

'''C:''' Yeah, it's lots of different genes. I know for sure, though, that when I used to teach the simple dominance lectures and labs, I think we had like four or five different tasting papers that we would use, and I was a supertaster on all of them.

'''S:''' Yeah. I only remember doing that for the quinine one. I'm definitely positive for that.

'''C:''' Anyway, back to Bob. Bob, how you feeling?

'''B:''' I'll say, I highly recommend the Magic Kingdom Mickey's spooky Halloween party. So good. It was so good. They do projection mapping on that castle that just blows you away. It's basically projectors that are designed to project just onto the castle, the entirety of the facade of the castle. They can change its look and mood in a second. And they've been doing it for a while, but for Halloween it was especially awesome. And of course, the fireworks and the parade, the Halloween parade, and the stage show, the Hocus Pocus Disney villain stage show, they were all wonderful. You're in the park till midnight. It was the best day. So very nice. And thank you, COVID.

'''S:''' I'm actually amazed that we all went to Dragon Con and none of us got COVID at Dragon Con. We definitely wore our masks, wore a mask on the plane, wore a mask whenever we were basically inside with crowds. And you can't say it worked, because we don't know what would have happened if we didn't, but none of us got sick, for what it's worth, even though that's anecdotal. But interestingly, we were posting pictures on social media, and on one of the pictures of us on Facebook we were wearing our masks, because we were, wherever, I think we were in the airport. And people started hating on our masks. Non-skeptical people on Facebook.

'''E:''' Well, yeah, some people hate masks.

'''S:''' They took it really personally because we were wearing masks.

== Discussion: Misinformation <small>(4:42)</small> ==

'''J:''' I mean, I don't get it. When I read the responses, I guess this has a lot to do with politics, right? But there are people that fully, absolutely believe that the whole effort to get people to wear masks was all fake, and the science behind it is all fake. And I don't know, I guess I kind of missed it during COVID or whatever. Maybe this developed even stronger in the last couple of years. I don't know.

'''E:''' But like, you were busy protecting yourself and your family, Jay.

'''J:''' I know. But I just feel like I missed something cultural. Like, wow, there are people out there that really don't like masks. Like, damn, I had no idea.

'''S:''' It's one thing to say you don't like mask mandates, but to shame people for wearing a mask? That's ridiculous. It's absolutely ridiculous.

'''E:''' Come on.

'''S:''' It's also wrong, because masks work. You know what I mean? They are effective.

'''E:''' That's like making fun of people for wearing seat belts.

'''S:''' Or a helmet on a motorcycle.

'''B:''' Yeah.

'''E:''' Right.

'''S:''' Absolutely.

'''E:''' Or a bicycle.

'''J:''' Yeah, I mean, it makes no sense. It's all political.

'''B:''' Just imagine the misinformation ecosystem that they swim in. And there you go. That's what they're exposed to. That's the "evidence" that they trust, that they hear all the time. So that's what they think.

'''J:''' Well, listen, talking about misinformation, right? It's a huge thing. We talk about it all the time here on the show. The Internet is littered with misinformation. So I take it upon myself to try to listen to news stations that come from different sides. And if you ever do it, it's pretty damn remarkable how different the news is.

'''B:''' Remarkable.

'''J:''' It is very different.

'''S:''' It's a different universe.

'''J:''' It is a different universe. And I find it really compelling if you just sit back and think about what's actually happening: you really do have people that are deciding to legitimately skew the information. It's not like I don't believe it because of my perspective. There are people sitting around a desk deciding to what degree they're going to lean into one thing or another, and the news comes off so different. It's a really good thing, though, I think, as a skeptic, to experiment with. It doesn't matter where you are politically. Just listen to the hardest right-leaning and the hardest left-leaning news you can find, and just marvel at how unbelievably differently they cover the news.

'''S:''' Because they're not covering the news, they're curating the news. They are creating a narrative by cherry picking from the news and spinning the news.

'''E:''' Right. Because it's a business, and you have customers who have wants and needs, and you have to cater to them if you want their money, if you want the advertisers. Let's not forget what news is. News is business.

'''C:''' It used to be.

'''S:''' Yeah, that was because they got rid of the Fairness Doctrine, and the whole infotainment industry.

'''B:''' What a colossal mistake.

'''C:''' Right?

'''S:''' It wasn't so much a mistake as by design.

'''B:''' What do you mean?

'''C:''' Whatever do you mean?

'''S:''' I mean-

'''C:''' People lobbied for that. That wasn't just like-

'''E:''' Oh gosh, sure.

'''B:''' Yeah, but it was still a mistake, a huge... I mean...

'''E:''' One of many.

'''C:''' But the people lobbying for it, do you think that they feel that way? That's the question.

'''S:''' Again, Bob, the point we're making is that what happened was a feature, not a bug.

'''C:''' Yeah, we know objectively, you're right, Bob, that as a society it was a mistake to allow that to happen. But the people who were pushing for it to happen, they don't see it that way.

'''B:''' Yeah. And in the future, I think it will be even more painfully obvious that it was a historically immense mistake.

'''J:''' But this is why, guys, as skeptics, it's important for us to communicate this type of information very carefully, right? Especially when it comes to people that have differing political views. Science has now completely butted up against political views. That's the world that we live in. But as a skeptic, it's really important to understand just how distorted the information we are getting is. And Steve, you're right, it's curated that way. It is fashioned that way. It's not just, well, we'll only talk about these three news items. They're changing the news items themselves, the intensity of them, and what points they want to make. And, just as powerful, what points they don't make.

'''S:''' Yeah, that's true, Jay, although you shouldn't underestimate the power of just curating the individual news items, because there is so much happening in the world. You could make whatever narrative you want out of news stories without really having to distort them. But then if you do distort them, that adds yet another layer of influencing the narrative. You know what I mean? I can't get into too many details, but there are people in my life who bring me stories like, did you hear this one? And they tell me a story which I'm sure is basically true, because the details of the story itself are completely unremarkable. But they take it as evidence of a specific narrative.

'''J:''' Yeah, yeah, of course. Right.

'''S:''' Oh, like this guy from Ukraine ran into these other people. And to them, this is a deliberate act of terror that feeds into a certain narrative about Ukraine and everything, when it's probably just one of a million accidents that happened over the last week, that has nothing to do with anything. It's just amazing how, once you have the framework, you can fill it with the noise, the background noise that's happening in the world. And it's very easy to make it seem sinister. And again, I've watched a lot of these extreme news outlets as well, and they do a ton of that. There's a lot of: you say a fact, which probably is true, and then they give you that quizzical look, like, what does that mean? It's the "just asking questions" thing. It's like, oh, isn't that odd? You say something like that, and then you make it into a sinister sort of question. And that's the fundamental raw material of conspiracy thinking. But you can see it happening on these extreme news channels. That's one of the primary methods that they use. And unfortunately, it resonates with a lot of people.

'''E:''' Yes, it gets clicks, it gets ears.

'''S:''' It's better than making a claim, because if you make a truth claim, that makes people skeptical. But if you just sort of throw something out there... Lawyers do this all the time as well. I've had enough interface with the legal system to know that one of the things lawyers do is they'll throw out a fact or ask a question in order to establish something, and they never connect the dots. They don't bring you to the conclusion, because then they'd be making a claim that could be wrong. They just sort of let it sit there, and they let the jury connect the dots. It's so much more effective to let people connect their own dots than for you to connect the dots for them. It's a very powerful, deceptive method. And you see it used all the time, especially on these extreme curated news outlets that are primarily about promoting a certain narrative and perspective and ideology, not objectively presenting the news and analysis. It's very disturbing. I can only take so much of it at once. It really is very, very disturbing to watch such an assault on reality, especially when you realize how effective it is.

'''B:''' Yeah.

'''J:''' So this summer, guys, I didn't read the news for over two months. And I've got to tell you, it was good. It really did help me.

'''E:''' Yep. Did you limit your Facebook activity?

'''J:''' Oh, I don't think I did any Facebook. When I wasn't feeling well this summer, I just cut out anything that had Internet noise: news, social media. I just didn't want to have anything to do with it. I mean, the mental health component of it is that we're getting mostly negative news, right? You go to the major news outlets, and they're not talking about the cute giraffe, they're talking about all the bad stuff, all the shootings and everything. I don't think, as human beings, we're supposed to be hearing negativity every day. Dozens of examples of it, every day, all the time. It is bad. It's bad for us.

'''S:''' Before we get to the news items, one more thing. Have you guys watched the latest season of ''Black Mirror''?

'''J:''' No.

'''C:''' No.

'''B:''' No.

'''S:''' Got to watch it. It's a great season.

'''C:''' I know, I haven't been. I feel like my mental health hasn't been-

'''S:''' I don't want to spoil anything.

'''E:''' Give us one spoiler.

'''S:''' One of the best episodes... Oh, it's a mild spoiler. You'll learn this very, very early on. I won't give you the ending of it, but basically it's about making reality TV that mimics somebody's real life in almost real time.

'''B:''' Oh, wait, I did see that. ''(laughter)'' OK, that was fun.

'''S:''' Yeah, there's just one scene with the executive of the network that's doing this, and they're just flat out saying, because they're basically taking your life and making you seem like a horrible person, and they said that that tested better. It got more people, it got more clicks, got more views. It tested better than making you seem like a hero. Like, we make you seem like a complete ass; that was good for viewers. So that's what they're doing.

'''C:''' Oh, my God, it's so...

'''S:''' It was a really clever worst case scenario, a nightmare of social media. It really was well, well done.

'''E:''' And I get how powerful all this stuff is psychologically and everything. People do need to learn to be better consumers of lots of things, and news is one of those. You have to learn, and you have to control it to a certain degree.

'''C:''' I think we also have to remember the social kind of world, the digital world. I think we've come to a place where there's this idea that if you aren't on social, or if you take these big breaks, you're like some sort of Luddite. But it's important to remember that, for some people, yes, their lives exist in this sort of digital social sphere. But generally, that doesn't have to be real life. And it's not real life. For a lot of people, there's this twisted perception that comes from treating social as if it were some sort of perfect sample, like a scientific sampling of the population, of reality. It's just not. It's very, very biased. And it's a handful of really loud, really aggressive people that have a lot of verbal pull. But if you extricate yourself like you did, Jay, and you just sort of exist without it, you can find that it does wonders for your mental health.

'''S:''' Yeah, you definitely need to find a balance with interfacing with the real physical world. I cannot completely extricate online-

'''C:''' No, you can't.

'''S:''' -reality from my world. With both of my jobs I can't do that. But yeah, we do have to make sure that we carve out enough time for, like, boots on the ground, right? Actually being out there in the world, interfacing with physical reality.

'''E:''' Board games.

'''S:''' Yeah, but definitely in social media, the monkeys are running the zoo.

'''C:''' That's the point. There's a difference between needing to be online for life, for work, for all the things, and living your life as if social media is representative of reality.

== News Items ==

=== Zoom Backgrounds <small>(17:12)</small> ===

'''S:''' Speaking of manipulating people online, Jay, tell us about Zoom backgrounds.

'''J:''' Yeah, this is one of those things where, when you hear it, you're like, OK, that definitely makes sense. Researchers decided that they wanted to study the influence of video backgrounds and the effect they can have on people's first impressions over a video conference. Bottom line: it turns out that the background that you have when you're doing an online video call can have an influence on what people think about you, particularly first impressions. And to state the obvious, we live in a world where video conferencing has totally exploded, during and after COVID. The world straight up changed after COVID, with more people than ever now working from home. A lot of first impressions are happening this way. If you're working from home and you have to be in meetings, you're meeting people over video. We definitely use video way more now than we did. I mean, we still do a Friday live stream; that came from the pandemic and it stuck. It's all part of that whole stay-at-home, don't-go-to-the-movies thing; the culture has changed quite a bit. So I think this research is really useful, and it's very provocative. The research was directed by Paddy Ross, Abi Cook and Meg Thompson, at Durham University in the UK. What they found was that the participants of the study judged people as more trustworthy and more competent, get this, if they had plants or bookcases in the background.

'''S:''' I have both.

'''C:''' Yeah, same.

'''E:''' And was that by mistake, Steve? Just a happy coincidence?

'''S:''' Intuitive.

'''E:''' Right. Right. You know, you're paying some attention.

'''S:''' I mean, honestly, I do telehealth. Patients see my background, and I wanted a "professional background". And those are elements that are very common in that kind of setting.

'''C:''' I think I just have plants and bookcases, or plants and books on the bookcase, behind my desk. I think I just naturally have those things in my background.

'''J:''' And the other interesting part about this is there were backgrounds that were bad that you might not think. Like a kind of-

'''E:''' Zombies.

'''J:''' Just a regular living space.

'''B:''' Zombies.

'''J:''' One that was kind of empty, say, or a fake background, which was bad. If you're obscuring your background-

'''E:''' Those artificial ones, yeah.

'''J:''' So other factors that had an influence on first impressions were the person's gender and their facial expressions. The researchers asked 167 adults to look at still images that seemingly were taken from a video conference, but they constructed them. They were like, OK, in this picture, we're going to show a woman smiling with a very non-decorated background; all these variations of different types of backgrounds, between men and women, with different things in the background. And the images were of a white man or a white woman. The study was done in the UK. There was no cultural diversity here; they were just doing a very basic initial study. They weren't trying to factor in other variables.

'''S:''' Yeah, you can only do so many variables in one study.

'''J:''' Yeah, of course. And they varied the images. Like I said, some people had neutral faces, some people had smiling faces. And then here are the different backgrounds: a plain living space, a blurred living space, houseplants, a bookcase, a blank wall, or a fake image, right, an imposed image of something like a walrus or an iceberg. And you'd think, oh, a walrus is nice, it's cute. But the study participants were asked to rate how competent or trustworthy they felt each person in the images was. And here are the results. Images that had bookcases and houseplants in the background, like I said, were rated as both more trustworthy and more competent than images with all the other backgrounds. And interestingly, the images that had a plain living space or a fake background rated the worst. Now, this is not an insignificant finding. Of course, happy faces and female faces also rated higher than neutral faces or men. I was surprised that men and women didn't rate equally, but definitely happy faces. Yeah, of course, if you're smiling and you're welcoming, that would leave a better impression. So the ultimate situation that they found was a smiling female face in front of either a bookcase or plants; that rated the highest, the most competent and the most trustworthy. The researchers concluded that backgrounds absolutely do affect first impressions, and they also suggest not using the low-rated backgrounds, especially the fake backgrounds; just avoid them. They concluded that visual context has a strong influence, meaning, like a gestalt, very quickly, in the briefest of moments when you first get on a video call, judgements are being made, and those judgements can affect all your communication with that person.

'''B:''' Look, he has a cat face.

'''J:''' So I was looking at my room, right? I have a very cluttered office with lots of different random stuff; you can see parts of my life back there. Guitars, masks on the wall, and stuff like that. I wish that they did, like, the attic look. What's a cluttered room going to do? Is it going to be a real turn-off, or is it provocative because there's different things to look at? I don't know. I wanted to hear that. The information is impactful, because by 2024, which is coming right up, 75% of business calls are predicted to be over video. And if, for example, someone is being remotely interviewed on a video conference, which is common now, perceived competence is a strong predictor of hireability. If you're interviewing over video, put plants in there or get in front of a bookcase. I think you might as well. Why not? Why wouldn't you do it, especially if you're looking for a new job?

'''E:''' Well, yeah, and I don't think this is necessarily new. I mean, Steve, as you were alluding to, when you go into a professional environment, a doctor's office, a lawyer's office, even an accountant's office, I've seen it. Guess what's in the background? Books, among some other things and trappings. I think that's the expectation that people have, and they want your video call to meet that expectation.

'''J:''' You know, in my personal experience, I absolutely don't like the fake backgrounds.

'''C:''' I hate them, too. They're so obvious.

'''J:''' Yeah, they make me uncomfortable. There's something about it that makes me uncomfortable.

'''C:''' Yeah, it's like, what are you hiding?

'''J:''' Yeah. You're missing context, which I don't like. And of course there's an endless amount of study that they could do in variation here, and every different country, I'm sure, would have different parameters that would factor in. But I think it's pretty common sense: something natural, like having plants in there, or having a bookcase. Books definitely represent a certain type of personality or mindset. Books have a vibe to them that conveys something, and so do plants. So interesting, right, guys?

'''C:''' Yeah.

'''S:''' What influence did having a corpse in the background have? ''(Cara laughs)''

'''B:''' The Halloween people loved it.

=== Manifesting Fails <small>(24:38)</small> ===

'''S:''' All right, Cara, tell us how to get rich through manifesting. Or maybe not.

'''C:''' Or not. So obviously, we talk about this sometimes on the show. I've referenced before a really incredible book by Barbara Ehrenreich, who sadly recently passed away. I don't know if you guys remember that, but she was an incredible author. The book is called ''Bright-Sided: How the Relentless Promotion of Positive Thinking Has Undermined America''. I cite this book all the time, because I think that she does a really good job of digging deep into the evidence behind the dark underbelly of the positive thinking movement. And so whenever news articles come out within this realm, which is not very often, I'm always excited to dig deep into them. Anyway, there's a new study that was published in ''Personality and Social Psychology Bulletin'' called "The 'Secret' to Success? The Psychology of Belief in Manifestation". Secret in quotes. You like that? This is something that I feel like we deal with all the time as skeptics, but we don't often really get into the weeds on it. We pooh-pooh it, or we go, yeah, obviously this is dumb. But we don't really talk about how or why we know that manifestation doesn't work. Have you guys ever found that you get into, I don't want to say arguments or debates, but heated conversations, or even not-so-heated conversations, with people in your life, or even just random people that you run into in your daily dealings, about: no, no, but if you put positive vibes into the world, you're going to get something back out of it? And it's sometimes kind of hard to counter. Basically the counter is: there's no evidence to support that.

'''S:''' People think that it's a virtue to believe that, kind of like the assumption that having faith is a virtue. And I think they also just want to believe that that's the way the world works; it basically gives you magical control over it. And again, you're probably going to get into this, but there are two layers here, right? There's the magic layer and the psychological layer. They're both wrong.

'''C:''' They're both wrong. But you're right, the magic layer is more obvious to those of us who have existed in the skeptical movement for a long time. The psychological layer, which feels like it could be more complicated, is actually not that complicated. And so I really like this study, because it reinforces a handful of interesting things, and while doing it, it has some fun psychometric stuff, and I love psychometrics. Basically, it's a group of researchers in Australia who looked at American participants, and in doing so, they developed a scale, and then they validated this measure, this psychometric scale, called the manifestation scale. They wanted to be able to say, OK, empirically, we want to see how much people buy into the law of attraction, or the power of positive thinking, or the idea that manifesting some sort of outcome works. They define that as the ability to cosmically attract success in life through positive self-talk, visualization, and symbolic actions, or acting as if something is true and just hoping that it happens. So they did three different studies overall, with over a thousand participants. Over the course of these studies, they first developed this measure, the manifestation scale, and then they validated it, and they tried to understand a little bit more about individuals who score higher on it. It's got two subscales, the personal power subscale and the cosmic collaboration subscale. So we've got different types of items on the scale, like "visualizing a successful outcome causes it to be drawn closer to me", "I am more likely to attract success if I believe success is already on its way", "the universe or a higher power sends me people and events to aid in my success", and "to attract success, I align myself with cosmic forces or energies". And so they looked into all the things that you have to look into when you're first developing a psychometrically sound scale like this. Does it have a normal distribution of scores? What are the different demographics of the individuals that we're testing, their age and their gender and their income and their education, and how are they netting out on this? So they looked across all of those things in an effort to ensure that this scale had, we've talked about this before, validity and reliability. And they found that their manifestation scale was sound psychometrically. It was internally consistent, it was stable over time, and it was normally distributed. And they found evidence that endorsement of manifestation beliefs is, sadly, pretty high. In this very first study, they found over one third of participants endorsed manifestation beliefs. Does that surprise you guys, or is that what you would expect?

'''S:''' About a third seems about right.

'''E:''' One third? Sure, yeah. Seems about right.

'''C:''' Right. OK. And then they moved on to study two, in which they continued to work towards the validity of the scale, with specific things like criterion and construct validity, which we don't really have to get into. And then they wanted to see if manifestation beliefs are associated with perceptions of current and future success. And then they moved on to study three. I want to combine them so I can give you the results. They continued to confirm these psychometric properties, and they also were super curious about how manifestation beliefs might actually affect somebody's decision making, especially when it comes to business and financial ventures and judgments about future success. So here is what they found, in a nutshell. Not only was their scale valid and reliable, not only did over a third of participants endorse manifestation beliefs, but those who had higher scores on the manifestation scale thought of themselves as more successful than their counterparts, thought of themselves as having stronger aspirations for success, and believed that they were more likely to achieve future success. All of this seems like what you would expect. But here is the dark underbelly, as we mentioned, and I think it's good to have hard data on this, because it's got good face validity, but it's important that we see that this nets out: they're more likely to be drawn to risky investments, to have had a bankruptcy, and to believe that they could gain an unlikely amount of money or success more quickly.

'''E:''' Interesting.

'''S:''' Yeah, they believe in magic, and magical thinking is not good in the marketplace.

'''C:''' Exactly.

'''E:''' Well, yeah, that's for sure.

'''C:''' And just like you mentioned before, Steve, there's the magic side of this and there's the psychological side of this. Manifesting doesn't work. We know that, right? Manifesting, really just saying, I'm putting out good vibes and I'm going to be paid in outcome; we know that that doesn't work, and we know that it's a scam. It's what get-rich-quick charlatans do to sell books and to sell conferences. It's ''The Secret'' packaged a million different ways. But we often talk about this when it comes to skepticism, the precautionary principle, or the "what's the harm?" component of this. There is actual harm, because if you think you are more likely to have these positive outcomes, you are also apparently more likely to do dumb stuff, to make bad decisions, especially when it comes to your finances, because you think that you're going to have a positive net benefit from it.

'''S:''' But we have to say that this study did not establish cause and effect. It's also possible that people who make bad decisions about finances also make bad decisions about what to believe when it comes to manifestation.

'''C:''' A hundred percent, that's possible.

'''S:''' Yeah, so maybe it's a combination of the two things.

'''C:''' Yeah, this is obviously correlative. But I think the interesting component here, which complicates things a little bit, is that the Secret-power-of-positive-thinking manifestation movement is not some sort of background belief structure. It's an architected, intentional belief structure. So although you're right, this doesn't prove cause and effect, and I do think it's safe to assume that there may be people out there who are more likely to buy into this, who are also more likely to believe such things, I don't think that anybody "naturally" believes these things. I think that this is something that is taught, that there's an actual movement to teach individuals to think this way. And that's an important point.

'''E:''' Is it Western culture only, or?

'''C:''' Not necessarily. No, I don't think that this is only a Western thing. I don't think it's universal by any stretch of the imagination, but I think you do see ideas like this cropping up in a lot of more religiously influenced cultures as well. I mean, if you think about it, like sacrificing to the gods, or making offerings to the gods in order to get certain types of positive benefits; it's not the exact same thing, but I think there's probably a lot of crossover there, don't you think?

'''S:''' Well, certainly people use people's religious beliefs to make them vulnerable to scams and cons.

'''C:''' A hundred percent. And the real question there, right, I mean, it's not the real question, but we talk about this a lot, is the intentionality of it. Do the perpetrators of this truly believe what they're selling or not? Ultimately, it doesn't really matter; the outcome is the same. But the scams and the cons, especially when it comes to the power-of-positive-thinking movement, are quite egregious and obvious. This capitalist conference room version of it is quite clearly con artistry at its finest. But I think that you see shadows or versions of this dating back probably thousands of years that are really religiously influenced, where the individuals who are promoting this type of thinking fully believe what they're promoting. They are equally duped by the rhetoric that if I put out these positive thoughts or vibes, I'll get something back. It's no different from prayer. It's really no different from a lot of these different iterations of it. It's just a new, glossy, capitalist take on magical thinking.

S: Yeah. And the psychological one, I like to do harken back to something that Richard Wiseman said. It's in his book, Fifty Nine Seconds, where what the research shows is that like imagining yourself in your goal is counterproductive. What is is productive is imagining the steps you have to take to get there.

C: Right. Right.

S: So just saying I'm going to be wealthy and happy and whatever is actually is similarly ineffective because it's again, it's just magical thinking. You think that that's somehow going to magically get you to your goal as opposed to saying, first, I need to go to school and get my degree. And then whatever you like, you have a process that you're going to go through to get to your goal. That's practical. That is actually helpful.

C: Right.

B: Which makes that Mirror in Harry Potter so evil. You're seeing yourself at the goal and that's all you care about.

C: And then what ends up happening is that you make decisions that like are risky, that are short term, that don't have this sort of long process. You're not making decisions that require hard work. You're making decisions that require it's like you're more likely to fall into like get rich quick schemes. And they backfire. Those things backfire. They're very dangerous. It's high risk with a low probability of high reward.

S: I'll end with a quote that I love. Economist Paul Krugman said, when people believe in magic, it's springtime for charlatans. And he was talking about economic charlatans, but it applies across the board. Belief in magic makes you vulnerable.

C: Yes.

S: Absolutely.

=== Tong Test for AI <small>(37:51)</small> ===

S: All right. So-

B: AI! AI!

S: Yeah, this is an interesting item about artificial intelligence, which we've been talking a lot about, obviously, because it's massively in the news. And we actually mentioned a couple of times in the last few months that these new large language models like ChatGPT would probably blow through an old style Turing test.

B: Oh, yeah. So quaint.

S: Yeah, we got some feedback on that pointing out that I think people misunderstood what we were saying and didn't put it in the context of previous conversations we had about it. We know that the Turing test is a formal test, right? The concept was put forward by Alan Turing, and it was developed into a formalized test, which different institutions ran with their own specific details and thresholds. But the basic idea is that an artificial intelligence passes if it can fool a certain percentage of subjects into thinking that it's a person, or at the very least make it so they can't distinguish it from a person. So you're talking to either a person or an AI and you have to decide which it is. And if the AI can fool 30 or 40 or whatever percent of the people, it's considered to have passed that instance of the Turing test. These large language model chatbots are basically a leap forward in that kind of AI, and I think they render those kinds of Turing tests pretty obsolete at this point. But we have a paper that was recently published proposing a new test, which they're calling the Tong test, T-O-N-G, not after a person, but after the Chinese word for general, because this is supposed to be a test for artificial general intelligence.

B: Oh, cool.

S: Yeah. And this is a good follow up to our previous discussions, because we've talked about the fact that the Turing test actually isn't a good test for whether or not you have achieved an artificial general intelligence; we've always said a really good chatbot should be able to pass the Turing test without having general intelligence. So again, for a quick review, this is the difference between what we call artificial narrow intelligence, which could be very good at specific things, and general intelligence, which is human-like intelligence.

B: Brittle.

S: And ChatGPT, as impressive as it is, is a narrow AI. It's very brittle. That's why it can make stuff up and it can be easily confused: because it doesn't really understand anything. It's just predicting the next word chunk. That's it. It's very narrow.

B: But also, it's still, though, it's a language model.

S: It's a language model, not an understanding model.

B: That's what makes it exceptional at conversation, so much better at conversation than old style chatbots.

S: The thing is, we use language as a proxy or a marker for intelligence. So a really good language model seems intelligent to us, but it isn't. It's just really good at this one specific thing. So what would the test be? They go into a lot of detail here. I'm going to try to skip to what I consider to be the big picture.

B: It was kind of hard to figure out what exactly this test is. One thing I like, Steve, is that they say they want to evaluate different aspects of the AGI, which I like, because it's like an IQ test, right? You just can't really quantify intelligence like that. But hitting it from different angles, I think, is a much better way to get a feel for what you're dealing with.

S: And it's not a specific test that they're proposing. They're proposing the concept of the test and examples of how that could work. OK, so here's one of the criteria that they propose an artificial general intelligence should have, one of the things that we should test for in an AGI system. They call it infinite tasks. Human beings don't have a finite number of tasks that they can perform, with zero ability to address anything outside of that finite predetermined list. So an AGI should be able to apply its abilities to a theoretically unlimited number of tasks. And again, that gets to, I think, the fact that a general AI is not just following an algorithm or following rules or just basing it off of what it's been pre-trained on. It has a deeper level of understanding that it can apply to novel conditions or novel tasks or novel situations. So one of the markers for that is a theoretically unlimited number of different tasks that it could perform. Another one is self-driven task generation.

B: Yeah, it sounds like a tricky one.

S: It's a tricky one, but I think it's interesting. It actually reminded me of a conversation that we had recently with Christian Hubicki; I think it was on the live show from DragonCon, talking about a news item about training drones to beat a course, right? To fly a course. And he said you can't tell the drone, complete the course in the shortest amount of time, because that's too general. You have to break it down to a very specific component, and that is, make it through the next hoop in the smallest amount of time. And it could do that. So you give it a specific task. You don't tell your Roomba, clean the floors. It's following a very specific narrow task: go in a direction until you bump into something, then rotate 20 degrees and try again. You know what I mean? It's following very, very specific things that add up to the ultimate task. But it doesn't understand the ultimate task and you can't just give it the ultimate task. But with an artificial general intelligence, you should be able to say, clean up this room, and not give it any further details. And it should be able to figure out what that means and how to do it. And they give lots of other examples. For example, if its task is taking care of a four year old, that may involve doing a number of things. But if the four year old does something completely unexpected, like ask to play with a sharp pair of scissors, would the AI slavishly follow, I'm supposed to make the child happy and give it what it asks for? Or would it be able to say, oh, wait a minute, I'm supposed to keep it safe. That's potentially dangerous and harmful. This small child lacks the judgment to use it properly. I'm going to say no, I'm not going to obey its request, because it's dangerous, without having previously been specifically told, don't give it any sharp objects. You know what I mean? It can infer that conclusion from basic principles.
Another example they gave was very interesting: clean up the garbage on the floor here. And what if there's a hundred dollar bill on the floor? Will it just throw it away as garbage, or will it recognize that this is something of particular value? It's not garbage. And then, on the fly, recategorize it, even though it was never trained specifically to do that. You know what I mean?

B: Right, right.

S: So it can self-generate the specific tasks from more general goals and/or instructions. All right. The third one is value alignment. It has to have some ability to understand the values behind the self-driven behaviours. And it should also, they say, be able to align those values with humans, ideally, because if they were not aligned with humanity, that could be a problem. And it should be able to infer those values through interactions with humans. So the one above, self-driven task generation, is going from the general to the specific. This is now about going from the specific to the general, where you see specific instances and then you generalize a value based upon those individual interactions. Does that make sense? That's three. Four is causal understanding. You have to understand cause and effect. And that also is a key component of problem solving, right? Because you need causal understanding in order to problem solve. If it understands this creature is hungry, it can go from that to, it needs access to food. But take the case of a monkey and a banana high up in a tree, although that's a bad example because banana plants are not that tall.

B: How do you know, you never actually grew bananas.

S: Stop it. So let's say some other fruit, a fig or something, and the monkey would have to climb the tree in order to get access to it. You have to understand that piece. You have to understand the cause and effect in order to be able to problem solve even a simple problem like that, again, without basing it on any previous pre-learning. And then the fifth one, which is more of a component of an AGI than something I'm sure translates to a test, and that is embodiment. It has to be embodied in some way, and that doesn't necessarily even mean physically. It can either be embodied in a physical space, a physical object, or a virtual environment or something. It's got to be able to relate to something physical, even if it's virtual, where it is separate from the rest of the universe. It is embodied in a thing and it can interact with other things in the universe. Anyway, I thought that was very provocative, to think at a deeper level about what would be the components of a genuine test for artificial general intelligence. Because, again, the Turing test, which I've always thought was terrible, really only answers a very narrow question: when are chatbots good enough to be indistinguishable from a person? It's not really a good test of, is this a general intelligence? And it certainly isn't a test of, is this sentient or sapient? Which I don't even think the Tong test is, but at least it gets closer to it. Because, again, this gets us to what I think is the hardest problem: even if we developed an artificial general intelligence operating at a human level that was able to demonstrate all five of these features that the Tong test is talking about, that wouldn't tell us that it was aware of its own existence, that it was experiencing its own existence. And that gets us to the P-zombie problem. How do we know we're not making an artificial P-zombie?
A P-zombie is a philosophical zombie, something that acts like a sentient entity but doesn't have any qualia. It doesn't have any subjective experience of its own existence; it doesn't feel anything. How do we know? How do we know other people actually feel things? It's easy to infer that because we do, right? And there's no reason to think that you're unique or special in the universe. So if you feel things, other people are probably having the same kind of experience you are. But if we make an artificial intelligence, it's not anything like we are. We don't know that it's actually feeling the things that it says it's feeling, as opposed to just doing problem solving. It may have cognition, but that's not the same thing as qualia. Or is it? That's a fundamental philosophical question that I have never seen a satisfactory answer to. Just some speculations, which may even be compelling, but not the same as, oh yeah, we would absolutely know when an AGI has crossed that threshold to being self-aware, as opposed to just acting self-aware. But is there a limit? Is there a point beyond which, if you're able to act self-aware to this degree, you have to actually be self-aware? And then there's the other answer of, well, it doesn't matter. We have to treat it as if it is because we can't know that it isn't. Once it's acting self-aware, we basically have to assume it is. But that's kind of kicking the can down the road, right? That's sort of punting the question and saying, well, we're not going to answer it. We're just going to substitute it with a moral question: we should treat them as if they're self-aware, even if we can't prove it one way or the other. Fascinating, all extremely fascinating. This is, again, I think, an interesting contribution to this thought experiment, which at some point in the future is going to be a practical experiment, not just a thought experiment.
It isn't right now, because we have nothing approaching AGI, artificial general intelligence.

B: We need a new, more nuanced way to assess an AGI. Is it really an AGI? So many people are talking about it. So many people think not only that we're going to have it soon, some people are saying we already have it. I don't believe it. I think we eventually will, but we don't have it.

S: No, not even close.

B: Yeah, but it's hard to say exactly when we may really need a good tool to assess it. It could be a while, but it's good to get ready for that kind of stuff right now, it seems. This seems very promising.

S: Yeah, I would certainly take notice if an AGI was able to pass these Tong test features. If it was able to, again, take a general instruction that it's never had before specifically and figure out what it had to do to accomplish that deeper goal, that kind of thing would be very impressive. It always reminds me of the movie Ex Machina. You guys remember that movie? The whole movie is basically a Tong test, right? The whole movie is testing an artificial intelligence that the eccentric billionaire tech genius made. He wanted to figure out if it was really self-aware, and he created a situation where it would have to basically trick him to escape from its prison. And in order to do that, it would have to have a theory of mind and it would have to problem solve. It would have to know how to trick him, and it couldn't do that unless it was an artificial general intelligence, unless it was truly self-aware. I thought that was really provocative and a great idea.

J: Do you agree with that concept though, Steve?

S: What concept?

J: Would that play in reality the same way it plays in the movie?

S: I mean, I don't know, I'm not sure what you mean by that. Do I think that the test-

B: The strategy seems sound.

S: Yeah, as a strategy for determining whether or not that the robot was truly self-aware, I think was a valid idea.

E: Yeah, morally questionable.

S: Oh yeah, the morals aside. But the idea that it basically had to problem solve in a way that required an understanding of the theory of mind, that other people have thoughts and ideas and feelings that could then be manipulated, doing something that it was never specifically taught to do.

E: Right, how good could this robot lie basically to a human being and trick it?

S: Yeah, and manipulate it, like emotionally manipulate a person. Again, I would take notice. That would be impressive. Does that prove it's self-aware? Who knows, but that is definitely a general intelligence at that level, right?

B: Oh yeah, walks like a duck, quacks like a duck.

S: But we're nowhere close to that now.

=== Looking for Service Worlds <small>(54:42)</small> ===

S: All right, Bob, what are service worlds?

B: Ever since that weirdly dim Tabby's Star, remember, that hit the news a few years ago? People really thought that there was something like a Dyson sphere around it.

E: That's right, the Dyson Sphere.

B: And ever since then, such types of megastructures have been more in the public consciousness, which is a good thing, I think. But now some researchers say that it probably makes less sense to search for Dyson sphere type megastructures, and we should be looking for something related to what's being called service worlds. What the hell is that? This is from the preprint Making Habitable Worlds: Planets versus Megastructures, which is being reviewed for publication in Astrophysics and Space Science. Lead authors are Raghav Narasimha, a physics graduate student at Christ University in Bangalore, India, and Margarita Safonova from the Department of Science and Technology. Woman scientist, is that a position, is that a fellowship? It's called woman scientist. And then also Chandra Sivaram, who is a professor of astrophysics at the Indian Institute of Astrophysics in Bangalore, India as well. So, OK, you've probably heard of the theoretical physicist and mathematician Freeman Dyson. He famously proposed what would be called the Dyson sphere, a megastructure around a star that collects as much solar energy as possible. He introduced that idea in his seminal paper, Search for Artificial Stellar Sources of Infrared Radiation. He called it an artificial biosphere. He didn't call it a Dyson sphere. It was called the Dyson sphere by Nikolai Kardashev, whom you may know from the Kardashev civilization levels, those levels one to three.

E: Oh, yeah, being able to harness the power of the sun, something like that.

B: Yeah, your planet, then the sun, and then the galaxy or whatever. He called it a Dyson sphere and that stuck. So the idea is that it would not only harness energy, but apparently greatly multiply the available habitable space that you have. This is what many people thought we had actually found around Tabby's Star a few years ago. Turns out it was probably just dust clouds around the star. But these scientists see problems with Dyson's assumptions. One assumption he made was that the planet Jupiter could be used to build this megastructure. And it actually could work in a sense, if you could use it as a building material. If you spread out Jupiter into a shell at one AU from the sun, that shell would be two to three meters thick. And it could be used to live on, in a sense, and gather energy from the sun, if you had that much. But they say in the paper only about 13% of Jupiter's mass is practically usable for construction, since Jupiter is mostly composed of hydrogen and helium. So you couldn't use all of it, so that wouldn't work. And that's a problem, because Jupiter is a big boy, and without using it as a building material, it's problematic. We would probably have to use all the rocky material in the solar system, including all the gas giant cores, just to get to an eight centimeter thickness of a Dyson sphere at one AU. That's quite thin. It's unclear in the paper, I really tried to figure out what they were getting at, but the researchers also seem to claim that even creating a much smaller and more practical ring structure instead of a sphere would also deplete much of the solar system's building material, including Earth, all of Earth, which, of course, is a nonstarter. They also correctly note that Dyson did not envisage a sphere; he was thinking more of a swarm of objects, a Dyson swarm.
You hear that a lot these days, each object with its own independent orbit, which to me sounds like it would be much less likely to be used as habitable space, assuming, of course, we're not just all data at that point. They also argue that enclosing the sun in a Dyson sphere would not only be far too unstable, which most people agree on, but it would also impact the sun's heliosphere. The heliosphere is the bubble created by the sun's solar wind, the charged particles that go out and surround our entire solar system, kind of protecting us from cosmic rays and the interstellar medium. So a sphere would kind of do away with the heliosphere and potentially negatively impact what life there is in the inner solar system. That argument I hadn't heard before. So at this point in the paper, the researchers explore their idea that a better option than a megastructure for resources and living space would be to use planets, basically use planets wherever you can get them, but not in the way you might be thinking. They say in their paper, if we convert Jupiter's hydrogen content to energy by thermonuclear reactions, the energy release is 10 to the 42 joules. That's a lot. Even if we consume energy at the rate of solar luminosity, we can manage for more than 100 million years. That quote was based on a paper by Shklovskii and Sagan from 1966, which was kind of cool. So they're basically saying here that we could just essentially take apart Jupiter, use all of that hydrogen and helium for fusion or whatever, and release huge amounts of energy, the equivalent of what the sun is putting out right now, for 100 million years. So we could just live off of that for quite a long time. And then, for more living space, they say basically, let's just grab some habitable planets and move them into the sun's habitable zone.
That's the bottom line right there: move planets closer to the sun so that they can be used. Now, they start with Mars and Pluto. And they say that, of course, those planets, or dwarf planets, would undergo dramatic changes. You move one into the inner solar system close to the sun, it's going to change the atmosphere, it's going to make a lot of changes. It will also make terraforming a lot easier once it was closer. They then segue to rogue planets, which have escaped their parent stars and roam free, within and outside the galaxy, all over the place. Now, we know these rogue planets exist in vast numbers. We've talked about it on the show. They theorize that there are 20 times more rogue planets than stars, trillions of them by some estimates within the Milky Way. There are far more rogue planets than regular planets. Amazing. It's an amazing idea. So they say that we, or an extraterrestrial intelligence, could move those planets into our local habitable zone and eventually use them to live on, or, as they put it, turn them into a service world, which was an interesting idea. From their paper, they say an extraterrestrial intelligence can intentionally relocate the planets into their system for industrial resource exploitation, energy generation, waste processing and many more enigmatic purposes beyond our comprehension. The uninhabitable planets used entirely for industrial or technological purposes are called service worlds. They could span diverse categories based on specific requisites. So they say gas giants could be shifted closer to serve as an energy source, harnessing their hydrogen and helium; icy or water-rich planets like Pluto could be used for aquatic life on a planetary scale, for aquaculture; and rocky planets could be used for planetary scale agriculture.
Yeah, basically turning them into these service planets, as they say, service worlds, to use for the resources. Sure, I guess. I mean, if you've got a lot of time and you've got a ton of energy to throw around, it's doable. But how would they even move the planets? They talk about that. They say that laser arrays could be used to slowly change the orbit of the local or rogue planets, which could then be brought into the local habitable zone. Of course, they'd need to be immensely powerful. They say they'd have to be in the zettawatt or yottawatt range. You like that one, Evan, right? Yottawatts, 10 to the 24 watts, huge, huge amounts. We actually have zettawatt lasers. We do have them, but we're firing those for what, picoseconds, femtoseconds? This would have to be fired, I guess, for years, decades, far longer than that. So that's some serious energy being thrown around there.
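The back-of-envelope numbers above (the two-to-three-meter Jupiter shell, the roughly 10 to the 42 joules from fusing Jupiter's hydrogen, and the yottawatt-class lasers) can be sanity-checked in a few lines of Python. This is only an illustrative sketch, not a calculation from the paper: the rocky bulk density, the hydrogen mass fraction and fusion efficiency, and the Mars-moving example at the end are all assumptions chosen for the estimate.

```python
import math

# Physical constants (SI units)
AU = 1.496e11          # meters
M_JUPITER = 1.898e27   # kg
L_SUN = 3.828e26       # watts, the sun's luminosity
GM_SUN = 1.327e20      # m^3/s^2, the sun's gravitational parameter
C = 2.998e8            # m/s, speed of light
YEAR = 3.156e7         # seconds

# 1) Spread Jupiter into a shell at 1 AU: how thick would it be?
#    Assumes a bulk density of ~3000 kg/m^3 for the compacted material.
shell_area = 4 * math.pi * AU**2
thickness_m = (M_JUPITER / 3000) / shell_area  # roughly 2-3 m, as quoted

# 2) Fuse Jupiter's hydrogen: total energy, and how long it matches the sun.
#    Assumes ~74% hydrogen by mass and 0.7% mass-to-energy fusion efficiency.
fusion_energy_j = M_JUPITER * 0.74 * 0.007 * C**2  # on the order of 1e42 J
years_at_solar_rate = fusion_energy_j / L_SUN / YEAR

# 3) Hypothetical example: energy to drag a Mars-mass planet from 1.52 AU
#    to 1.1 AU, using the orbital energy E = -GM*m/(2a).
m_mars = 6.417e23
delta_e = GM_SUN * m_mars / 2 * (1 / (1.1 * AU) - 1 / (1.524 * AU))
years_at_one_yottawatt = delta_e / 1e24 / YEAR  # assumes perfect coupling

print(f"shell thickness: {thickness_m:.1f} m")
print(f"fusion energy: {fusion_energy_j:.2e} J")
print(f"runs at solar luminosity for ~{years_at_solar_rate/1e6:.0f} Myr")
print(f"move Mars at 1 YW: ~{years_at_one_yottawatt:.1f} years")
```

With these assumed inputs the shell comes out near 2 meters and the fusion budget lands in the tens of millions of years at solar luminosity, the same ballpark as the quoted figures; the exact "more than 100 million years" depends on the hydrogen fraction and efficiency assumed.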

S: It would seem that you would expend more energy than the resources you would get out of a planet.

B: Yeah, it does seem that way. But they do claim that it would take less energy than tearing apart an entire solar system to create a Dyson sphere. I don't know. That's what they say, it would take less energy. But I don't know. You're throwing around yottawatt lasers for centuries? I mean, that's a lot. But we've got to remember, this is so far in the future. This is technology basically beyond our comprehension. And I think even speculating at this level is a little silly, because there are so many more options that might be available, but it's still fun, as we know. But now, finally, we're approaching the point of the paper. The scientists argue that if it makes sense for ultra advanced civilizations not to use Dyson type megastructures for more living space and resources, but to instead use this directed energy idea to move planets for such things, then we should be looking for the specific technosignatures of those technologies, right? We've got to look for those signatures to see if they're out there now, or if that information has reached us now. So then what would we look for? How do we find that stuff? Well, they say we could look for high power laser technosignatures, for one. I guess. Yeah, all right. That's fine. There would have to be some radiation spillage, right? That would be detectable, especially if such a powerful laser was running for so long. They claim, and they reference some papers on this, that such laser beams would be detectable by modern telescopes over a kiloparsec away. That's over thirty-two hundred light years. That's interesting.

E: And we've detected none of them.

B: I'm not sure how much we're specifically looking for that kind of radiation. Yeah, if it were fairly close, I think it would be obvious. But the farther away it got, the more subtle and difficult to find it would be. But I like the idea of listing these technosignatures that we could start looking for. The other alien technosignatures we can focus on are basically planetary alignments that don't make sense with our current understanding of stellar system evolution, right? If we find crazy planets, planets that just don't make sense. For example, if we see a gas giant next to a rocky planet, then another rocky planet, and then another gas giant, perhaps we might need to consider at that point that the arrangement was created by an extraterrestrial intelligence on purpose. So that's what they're arguing here.

E: Artificial alignments.

B: Yeah. And they argue that other planetary arrangements could also be red flags. This is another quote from the paper: "Planetary systems like Kepler-20 and TRAPPIST-1", which we've talked about, "where many Earth-like low mass rocky planets are arranged close to their star at a distance less than Mercury's orbit is another possible indication of advanced ET astroengineering." So, yeah, if we find a solar system with lots of Earth-like planets well within the orbit of Mercury, that would be unusual. That would be an unusual thing to find in such a narrow zone. And sure, there could be a natural explanation. But the fact that it's far outside of our current conception of planetary formation means it'd be worth a little extra look. Perhaps they were deliberately moved into the habitable zone of their parent stars. I love that at the end they say, "In short, we should keep our eyes open for any Firefly-verses", which was an awesome quote. I laughed at that quote, but it was very apt in this context. If you know anything about Firefly, the stellar system in Firefly is basically one big, huge stellar system with multiple stars, including brown dwarf protostars, and 20 planets. Now, if we found something like that, we'd be like, whoa, how is this natural? So it makes sense to say, look for Firefly-verses. So, yeah, I think this is an interesting idea. It's a good idea to look for these types of new technosignatures that we're not specifically looking for, especially since we've already found other systems that seem very anomalous, like the Kepler-20 and TRAPPIST-1 systems. Maybe they were artificially engineered. So it would be interesting to look at them from this technosignature point of view. Not a bad idea. But I don't think we should use this paper to conclude that we should be ruling out megastructures in general.
There are lots of different types of megastructures, or potential megastructures, out there besides a Dyson sphere. There are orbital rings, potentially, halos, topopolises, stellar engines, matryoshka brains, lots of megastructures that make sense. We know they make sense with physics, and they also might make a lot of sense for super advanced civilizations with a lot of power, a lot of resources and a lot of time on their hands. So we should remember to look for those technosignatures as well. What are those technosignatures? I don't know. We should start looking for them, too, maybe. But don't rule them out. So an interesting paper. It was a fun read. And that's all I got right now. ''(laughter)''

S: I love the idea of technosignatures.

B: Oh, me too. I love it. Even the name technosignature.

S: I think in a way that might be our best chance, just statistically, of detecting the existence of alien life somewhere else in the universe. Because it's something we could see very far away, and it's something that could be unambiguously technological.

B: Yeah, I love the idea of technosignatures, even more so than just, oh, let's find the Encyclopedia Galactica being beamed at us. It's like, yeah, I know, we've been looking for a long time and we still-

S: Could be nice.

B: We should continue. It would be wonderful to find it. Continue to look, but also look for these other signatures as well.

S: Thanks, Bob.

=== NASA Recovers Asteroid Sample <small>(1:09:47)</small> ===

S: All right, Evan, tell us about NASA's recovery of an asteroid sample.

E: Yes, yes. OSIRIS-REx. Now, you might think by that name, it's the Egyptian god of dinosaurs. But it's the name of a space mission launched by NASA engineers back in 2016. We'd actually covered this news item once before back in 2020. Went back, checked our notes. And this is the latest and greatest update on OSIRIS-REx. So first of all, OSIRIS-REx stands for the Origins Spectral Interpretation Resource Identification and Security Regolith Explorer. That's how you get OSIRIS-REx. And I heard it took NASA 18 months just to formulate that clever name. But here's the very quick backstory. September 2017, OSIRIS-REx used Earth's gravitational field on its assist on the way to asteroid Bennu. December 2018, OSIRIS-REx used its rocket thrusters to match the velocity of the asteroid and getting ready for the rendezvous. But first, it had to do a detailed survey of the asteroid. And that took over a year to complete. It was looking for the perfect place to make contact and collection of its samples. And that's when it happened in October of 2020. And that's when we last reported on it[link needed]. So it selected its final site. It briefly touched the surface of Bennu to retrieve its sample. The sampling arm made contact with the surface of Bennu for about five seconds, during which it released a burst of nitrogen gas. And that caused the rocks and surface materials to be stirred up and captured in the sampler head. At the time, when we talked about this, the news was, yes, all that happened. But they weren't 100 percent sure at that point that they necessarily captured material at all. I mean, it was in theory, yes, that's what was supposed to have happened. But they couldn't guarantee that it actually got anything, but likely it did. So then it departed in March of 2021, began its return journey to Earth. Took about two and a half years. September 2023. Here we are. 
The capsule touched down exactly according to plan in the Utah desert with its precious cargo of asteroid samples on board. The samples are there. And that took place on September 24th, so just a few days ago. Awesome. These pebbles and dirt that it collected are older than Earth, the undisturbed remnants of the solar system's early days of planet formation. Such valuable chunks and dusty data that we're going to get out of this stuff. It's amazing. So, yep, it was collected. They took the capsule, immediately put it into a cloak of nitrogen gas to protect it from Earth's atmosphere, and transported it to NASA's Johnson Space Center in Houston. And they were able to determine they got about 250 grams of asteroid rock and dust. So by comparison, if you recall, Japan had a couple of missions in which they also collected asteroid samples. Those were the Japanese space agency JAXA's Hayabusa and Hayabusa2 missions. The original Hayabusa only got a very tiny amount of material. The second one got five grams of material. But now we have 250 grams of asteroid material. So the collection was very, very successful.

B: More grams, more science.

E: Yep. Yep. NASA designed a new laboratory specifically for this mission so that they had it all ready to go to receive the canister. So a specially designed laboratory just for this purpose. It's there now. The latest news is that they've started to open the outer container and remove the lid, so that process is still underway. And there's going to be another update: they're going to do a live broadcast on October 11th, in which they're going to give everybody more details about what's going on with the samples. Also, they announced today the first three museums, at least, that are going to ultimately receive the samples for display, so that the world can see: the Smithsonian National Museum of Natural History, Space Center Houston in Texas, and the University of Arizona's Alfie Norville Gem and Mineral Museum in Tucson. This is amazing stuff. As one person from NASA was quoted as saying, it's our origin story. We're collecting actual material that will hopefully help us better understand, well, ourselves. Tracing organic molecular chemistry is really what it's all about. This is Dante Lauretta, who was the principal investigator for OSIRIS-REx: "We really want to understand the things that are used in biology today, like amino acids that make proteins and nucleic acids that make up our genes. Were they formed in ancient asteroid bodies and delivered to the earth from outer space?" Yep. And hopefully we'll be able to come closer to figuring out if that is in fact true. So really just an incredible story. If you've been following it for all these years, 2016 to now, you have to be so happy for everybody involved in this mission and everything that's been done. Oh, and by the way, the mission is not 100 percent over yet. Yes, the collection and the retrieval of the debris and all that is great.
However, the spacecraft itself is now going to become the OSIRIS-APEX mission. It doesn't have a collector on it anymore, but it's going on to study a new target, the asteroid Apophis, which I think we've talked about before. That's the 2029 asteroid that's supposedly going to come pretty close to Earth; they estimate it will fly within 30,000 kilometers of the surface of the Earth. And of course, a lot of people have all sorts of doomsday scenarios: this is the one that's going to actually clip Earth and destroy us, and it's big enough that it would really be a destructive event. There are going to be all kinds of crazy, wacky people saying all kinds of stuff about that, although it's likely not going to happen that way. But it just so happens that the mission can continue, and it will make that rendezvous and study that asteroid. And that'll be neat. The mission continues. Good bang for our buck on this one, I think.

S: Yeah, definitely a successful mission. And it's interesting that the pictures of the return capsule look like it's just sitting on the ground.

B: Yeah, right?

E: Yeah. And NASA did a really nice graphic before it landed. They made a video of how it would work and what they expected, really every stage of it: the parachutes deploying, how it would kind of thump down to the Earth at about 11 miles an hour. And that's exactly how they anticipated it was going to work. So the simulation really told the story well.

S: Yeah, cool.

Who's That Noisy? (1:17:16)

Answer to previous Noisy:
two glass marbles being knocked together

New Noisy (1:20:55)

[evangelist's screaming speech]

... who this week's Noisy is

Announcements (1:22:07)

Questions/Emails/Corrections/Follow-ups (1:25:14)

Email #1: Natural Gas vs. Coal


Science or Fiction (1:35:38)

Item #1: A series of cognitive studies finds that people tend to make worse decisions when given more information.[6]
Item #2: In the first study of its kind, researchers find that antihydrogen atoms respond the same to gravity as normal matter, ruling out the existence of repulsive antigravity.[7]
Item #3: Engineers have published a method for making thin crystalline silicon solar cells that are one-eighth the thickness of existing commercial solar cells with record-breaking efficiencies of 29%.[8]

Answer    Item
Fiction   Thin crystalline solar cells
Science   Worse decision w/ more info
Science   Antihydrogen atoms & gravity

Host      Result
Steve     clever

Rogue     Guess
Cara      Antihydrogen atoms & gravity
Jay       Worse decision w/ more info
Bob       Worse decision w/ more info
Evan      Thin crystalline solar cells

Voice-over: It's time for Science or Fiction.

Cara's Response

Jay's Response

Bob's Response

Evan's Response

Steve Explains Item #1

Steve Explains Item #2

Steve Explains Item #3

Skeptical Quote of the Week (1:50:30)


I honestly believe it is better to know nothing than to know what ain't so.

 – Josh Billings (1818-1885), pen name of Henry Wheeler Shaw, American humorist


Signoff

S: —and until next week, this is your Skeptics' Guide to the Universe.

S: Skeptics' Guide to the Universe is produced by SGU Productions, dedicated to promoting science and critical thinking. For more information, visit us at theskepticsguide.org. Send your questions to info@theskepticsguide.org. And, if you would like to support the show and all the work that we do, go to patreon.com/SkepticsGuide and consider becoming a patron and becoming part of the SGU community. Our listeners and supporters are what make SGU possible.


Today I Learned

  • Fact/Description, possibly with an article reference[9]
  • Fact/Description
  • Fact/Description

References
