SGU Episode 123

From SGUTranscripts


SGU Episode 123
28th November 2007
Neocorticalcolumn.jpg

← SGU 122 | SGU 124 →

Skeptical Rogues
S: Steven Novella

B: Bob Novella

R: Rebecca Watson

J: Jay Novella

E: Evan Bernstein

Quote of the Week

The effort to understand the universe is one of the very few things that lifts human life a little above the level of farce, and gives it some of the grace of tragedy.

Steven Weinberg

Links
Download Podcast
Show Notes
Forum Discussion


Introduction

S: Hello and welcome to the Skeptics' Guide to the Universe. Today is Wednesday, November 28th 2007, and this is your host, Steven Novella, president of the New England Skeptical Society. Joining me this evening are Bob Novella...

B: Hey, everybody.

S: Rebecca Watson...

R: Hello, everyone.

S: Jay Novella...

J: Hey, guys.

S: And Evan Bernstein.

E: Good evening, everybody.

S: How is everyone tonight?

B: Good, Steve.

E: Quite fine.

R: Couldn't be better.

J: Very good but apparently, you're not good, Steve?

S: Oh, we'll get to that a bit later. (laughter) Jay's referring to a recent Skeptiko podcast, which talks about the Skeptic's Guide specifically. We'll be getting into that in the beginning of the e-mail section of our show. But first, let's do some news items.

Science and Faith (00:57)

S: Several of our listeners referred us to this New York Times editorial by Paul Davies—this past Saturday's edition of the New York Times, in which he claims that science is based upon faith. Have you guys had a chance to read this?

E: Yep.

B: Yes.

S: This is a claim that we hear frequently. Davies, for example, writes:

The problem with this neat separation into non-overlapping magisteria—as Stephen Jay Gould describes science and religion—is that science has its own faith-based belief system. All science proceeds on the assumption that nature is ordered in a rational and intelligible way. You couldn't be a scientist if you thought the universe was a meaningless jumble of odds and ends haphazardly juxtaposed. When physicists probe to a deeper level of subatomic structure or astronomers extend the reach of their instruments, they expect to encounter additional elegant mathematical order. And so far, this faith has been justified.

So, you know, this is a claim that we hear frequently and I think Davies is making the really common mistake of confusing methodological naturalism with philosophical naturalism. What he's saying is that science assumes that the laws of the universe are stable, and that they make sense. And he says that science requires faith in that. And that is absolutely not correct. That is a complete misunderstanding of science. Science doesn't really require anything, because science is just a system of methodology. It assumes methodological naturalism, the idea that effects have causes, that the system internally functions together and makes sense—the system meaning nature—because it has to assume that. It takes that as a premise only because the methods of science only work within that framework. So it's actually not an assumption about reality; it's not faith in any particular metaphysical ultimate reality. It's just saying, "these are the methods that work, and therefore these are the methods that science is going to use", because they're the only ones with which you can proceed with empirical hypothesis testing. It actually is agnostic towards the ultimate metaphysical realities of the universe. So his entire premise is false.

J: So, Steve, would you say that the following statement is wrong: "I have faith in the scientific method"?

S: Well, it depends what you mean by that. I think we use the term "faith" differently. If that means that it has worked so far and therefore I think it's highly probable that it will continue to work in the future, then I think that that's a legitimate statement. But if you're saying that you believe something that's a choice without justification, then I think that that doesn't apply. The term "faith" doesn't apply.

J: OK, because I say that. I say, "I have faith in the scientific method" because from my perspective, I'm saying that I'm banking on the proof upon proof that science has delivered over the years.

S: Right. So we hear this a lot from the intelligent design crowd and I'm sure they love this kind of editorial, because this is their mantra: the notion that you have to have faith in science or faith in evolution, which they've been complaining about endlessly. And this is... Phillip Johnson, who basically started the Intelligent Design movement, this was his core premise: that science should not be based upon the assumption of naturalism, because that's rigging the game. It's rigging the game against supernatural or spiritual explanations. And they're continuing to make that case. In fact, in preparation for our show tonight, I was listening to an episode of Skeptiko—the podcast Skeptiko—from a few episodes ago, where he interviewed an Intelligent Design proponent, and that's what it was all about. It was all about "scientists are assuming philosophical naturalism and they're not following the evidence where it goes; they're restricting their inquiry to naturalistic explanations and that's not fair; that's rigging the game". What that misses is that methodological naturalism is not a choice; it's a necessity. We're not limiting the answers that we're willing to consider to the ones that fit our paradigm. We're limiting the questions to ones that can be answered scientifically. If you can't formulate your hypothesis in a way that can be tested, that can be falsified, then it doesn't meet the minimum criteria for being considered as science. They totally do not get that at all, and that's true at the spiritual end, like the Intelligent Design proponents, and it's true on the New Age end, like Alex from Skeptiko. Because they were in complete agreement on this point, that skeptics and scientists are feeding into their own assumption of philosophical naturalism—and it's completely untrue.

J: What I don't understand is... going back to what Carl Sagan said so eloquently: "science delivers the goods". Science in and of itself is a system that has been proven over and over and over again to work.

S: That's a good point, Jay, and I often refer to that as the meta-experiment of science. If methodological naturalism didn't work because our universe was hopelessly not rational, or not naturalistic...

B: Acausal

S: ...it was acausal or retro-causal, or the rules, the laws of the universe changed frequently or could be suspended at random, or by the whim of some being. If these things were true, or if our universe were part of a larger universe that we could not access but that influenced our universe... whatever. If any of those situations were true, then science wouldn't work very well. We would be constantly running up against enduring anomalies that we could never resolve, we couldn't make sense out of. Things that we thought were well-established would be overturned chaotically and at random, and that's just not the case. Science has been working quite well over the last few hundred years: slowly, methodically building an ever-improving model that is very, very powerful in its ability to predict the future, to predict what's going to happen. That is the only criterion by which science is really judged: how well does it predict the outcome of future observations. That doesn't prove philosophical naturalism, just like you can't really "prove" anything in science, right? Nothing is proven metaphysically in science. All we can say is that so far, all the evidence is pointing in that direction.

Computer Brain (7:53)

S: Bob, you sent me the next news item on a new computer brain. Why don't you tell us about that?

B: Yeah, pretty interesting. One of the holy grails of neuroscience, I think, since its inception is the creation of a simulation of a human brain. Of course with the advent of computers, it's obvious that the best way to embody that simulation would be in a digital computer. Scientists in Switzerland working with IBM researchers have shown that a computer simulation of a part of a rat brain called the neo-cortical column, which is arguably one of the most complex parts of the mammalian brain, appears to behave like its biological counterpart, which they're calling a pretty big milestone. Now, up until yesterday I'd never even heard of neo-cortical columns, not specifically. And apparently, they are the basic building blocks of the cortex, the outer part of the brain, specifically the neo-cortex, which is the most recently evolved outer folded part of the brain. They consist of, in humans, about 60,000 neurons and they're pretty small. They're about half a millimetre wide and about two millimetres long, so they're pretty tiny. But they are the fundamental functional unit of the brain, and they're extremely complex. They figured that if you have a goal of duplicating or simulating the brain, that's the one thing you really need to nail, so that's what they've been trying to do. Also, another thing about the neo-cortical columns: apparently that was a milestone in mammalian brain development about two hundred million years ago, Steve, when mammals split from reptiles and the neo-cortex started to grow at that point.

S: Yeah, that's true; the reptile brain is what in mammals you actually call the "reptilian brain"—the deep, primitive brain structures. And reptiles have only a very minimal cortical ribbon on top of that. In mammals, that became the bulk of the brain.

B: Right, I think it's about 80% of the brain now; the neo-cortex is about 80% of the brain.

S: Yeah, and it's the more primitive, reptilian part of it. It functions for more automatic type of reflexes, as well as basic emotions, things like that. But the thinking part of the brain is the cortex, is the mammalian—

B: The so-called higher cognitive functions. Well, it's these columns that have been multiplying for millions of years. And as they've multiplied, our brains became more and more sophisticated. And they're actually responsible for the folds you see in the brain because there's obviously selective pressure to have more of these columns in the cortex. In order to make room, they kind of just expanded any way they could, and that's actually why we have all the folds in the outer part of the brain, because it's the classic—

S: It's not just the volume, it's the surface area, because of this vertical organisation—the bulk of the connections in the cortex run vertically. Of course there are also horizontal cross-connections, but the primary processing unit, if you will, is this vertical column. So to get more vertical columns in, you need more surface area, and therefore you need the wrinkled folds of the brain, not just an expanding balloon, for example. So, this is really cool. I like that in the title it says "a computer simulation could eventually allow neuroscience to be carried out in silico". I like that term "in silico".

B: Yeah, right.

S: I've been thinking about this a lot, actually, because as we reverse-engineer the brain, we start to actually see that this piece of the brain is doing this, and connects to this other piece of the brain, which has this other specific function; this is how they interact. You know, we're definitely going to be moving in the direction where this line of technological development—actually computer modeling the brain—is then going to start working alongside of the neuroscience which is modeling the different pieces of the brain. So, imagine in five, ten, twenty years, thirty years, when these computer models are actually not just duplicating the raw structure of the brain, but actually in greater and greater detail, and we can actually test our models of how the brain works, by simulating them in a computer. And then do things like say, "well what happens if we turn off this nucleus or this part of the brain" and then see what the net result is. I think that these two parallel lines of research are going to play off of each other in very, very interesting ways, over the next couple of decades.

B: Now, Henry Markram, the co-director of the Brain-Mind Institute at the Ecole Polytechnique Fédérale de Lausanne in Switzerland—I probably mangled that pronunciation—he said what you said, Steve: "what we're doing is reverse-engineering the brain".

S: Yeah.

B: But he had another quote. He said "we're not trying to build a copy of the human brain or some magical, artificial intelligence device". I wonder why he threw—I don't like how he put "magical" in there, like you need magic to get AI. "This is really a discovering of how the brain works" is how he sees it. And I agree with him, that this is what they're doing, but I don't think he can ignore the potential applications because they're at the point right now, this milestone they've passed, where the output of this simulation matches what they're seeing biologically. Now they're taking the next step and they want to start actually—extend it beyond just this one neo-cortical column and extend it into many others, and eventually the whole brain of at least the mice that they're trying to simulate here. And eventually after that, a human, obviously. But their time scale is three years for a rat brain, and actually, ten years for a human brain. But of course, some scientists scoff at that idea. They think that ten years is way too soon to even be talking about that.

S: We'll see.

B: Well, we'll see, and whether it's ten or twenty or thirty years, eventually—I don't think it's inherently impossible to have this type of simulation. And imagine what can come from that. He's kind of pooh-poohing that, trying to stay away from the whole AI thing, and the whole "copy the human brain" thing, but I think it's definitely—those are extensions of his research that will inevitably come.

S: Yeah, I agree. I think there's no question about that.

J: So, when you say they make a computer model of the brain... The first thing I think of is, is it possible that if they make a complex computer model and if it's complete enough, could it obtain any kind of consciousness, just on its own? And I know you just brought up the whole AI thing, but you would imagine if they could simulate the functionality of the human brain, why wouldn't it become conscious?

B: I agree. I think if they have a sufficient level of detail and I'm sure many other factors. I mean, I don't think neurons are some magical substrate that allows consciousness and nothing else can. I think if you have many other different types of substrates, as long as they're connected properly and lots of other things, I think there's no reason why you can't have this in software.

S: Right. The interesting thing will be, in whatever timeframe—twenty, thirty, forty years—we have powerful enough computers and we have adequately modeled the brain. We could create a virtual human brain in silicon, or whatever computers are made of. And the result of that behaves as if it is conscious and aware and artificially intelligent. Of course, I predict that will lead to the philosophical debate about whether or not it is really conscious and self-aware, or whether it just acts like it does. I don't know how we would be able to resolve that. That's interesting in terms of the dualism versus materialism debate. Of course, the dualists think the physical brain is not adequate to explain the phenomenon of mind, of consciousness. And I maintain, and a lot of neuroscientists maintain that the materialist model's doing just fine, thank you. And we do not need dualism to explain any neurological phenomena. But if the dualists are right, then we should run into problems. If we make a purely materialistic model of the brain, it should not result in something that acts like consciousness. There should be some mysteries, something spiritual, something missing.

B: Missing, right; a missing ingredient

S: Missing from our attempts to simulate the brain. I'm sure, you know, the dualists will have some rationalisation if and when that occurs. And if it does turn out that there is something missing, then we'll definitely have to reconsider our materialistic assumptions.

J: Steve, does the human brain have software-like programming?

S: Um, sure, I mean, it's more of an analog type of programming in the way our neurons are connected together. Sure.

R: My brain runs UNIX.

J: I mean, a child is born, and genetically that brain is created with base information. There's base information in the brain. Understanding of living in a 3-D world and whatnot, right? There's all sorts of things that are built into it.

S: Well, there are certain universals that all humanity shares, and the assumption is that those universals are universal because they're built into how the brain is designed and functions.

B: It's hard-wired.

S: Yeah, it's hardwired in the structure of the brain itself. This gets a little bit into the nature-versus-nurture debate, but I think the evidence is pretty clear that ultimately behaviour and whatnot is a complex interaction of the two things. But we're not born as blank slates, just processing machines. We're born with certain tendencies, emotions, desires, motivations and behaviours. And those interact with the environment as we develop. So that's something that's encoded in the genes and reflected in the hard-wiring of the brain. Absolutely. Well, this is cool research and we'll definitely be interested in following it as it develops.

Psychic Ripoff (18:24)

S: And Rebecca, the third item comes from your blog this week about another fairly dramatic psychic rip-off.

R: Yeah, yet another crazy psychic makes off with hundreds of thousands of dollars of someone's money. Her name is Lola Miller. She's a real looker. Which you can see on the blog if you head over there.

J: (laughs) That picture looks like there's... it's super low-res but maybe it isn't. (laughs)

R: Yeah, I don't know. Maybe she just actually looks like that. I don't know.

B: What's that skin disease—not impetigo—where your... some problem with the melanin so that she's got this weird funky, uneven skin colouring all over her face or is it just a bad picture?

S: You're thinking of vitiligo, but I think it could just be wash-out in the picture.

R: I think it's called "evil psychic disease", where the ugliness just sort of seeps out the skin. Like, if you look at Sylvia Browne, there's a similar sort of thing going on there. But enough of the fun ad-homs. (laughs) She actually is an evil person. She convinced a woman in San Jose that the woman was cursed and that her entire family was cursed and that the only way to remove this curse was by—ta-da—giving her $350,000 in cash and another hundred grand or so in furniture and gift cards and all sorts of—like a limo, and a hotel room.

S: Yeah.

B: Three hundred and fifty thousand! She was liquid!

R: Yeah, there's quite a bit of cash there.

S: Right.

B: Wow.

R: Right, so of course she made off with the money. But the nice thing is that cops did get on the case and track her down to New Jersey, and brought her back to California, and now she is due in court December 5th, apparently. So, we'll see how it goes.

S: Did the mark get her money back?

R: Um, not as of yet, I don't think. I mean, especially because a lot of it was in services as well as goods.

S: But... $350,000 cash and $95,000 in goods and services. So it's still mostly cash.

R: Yeah, I'm not sure... I couldn't dig up that info.

J: So it's against the law to convince someone to hand over a bunch of money.

S: Under false pretenses. That's right.

R: It's such a fine line that they draw, because in this case, the woman lied to someone in order to get the money off of them. [S: Mmmm] But Sylvia Browne does that every day of her life. But she's not being tracked down and dragged to court, unfortunately. Um, yet.

S: You made a good point in your blog where you say that "if you pretend to be a psychic to steal half a million dollars from one person, that's illegal. But if you're stealing millions of dollars collectively from hundreds or thousands of individuals, then that's OK".

R: Exactly. And if you look at someone like Sylvia Browne, she's couched the whole thing as a religion that people buy into. It's kind of amazing that we allow some people to build up a whole industry around it and other people we track down and sue. But, to be optimistic about it, at least they got this one. It's always good press on our end when you get this sort of thing out there, that "hey, look, curses aren't a real thing. This woman was a scam artist and don't do that". (laughs) So, I think the more stories like this we have the better.

S: Yeah, although the defenders will always say that "just because someone's abusing it doesn't mean there aren't any real psychics out there".

R: Oh, yeah.

J: She's just a bad apple, you know.

S: Yeah.

R: As soon as we see a real psychic, you know.

S: (laughs) Right. Which is true, the logic of that is true, but yeah, show us the real quote-unquote "real psychic".

R: "Trot one out", as Houdini said.

S: Yeah. until then, they're all frauds.

Wi-Fi and Autism (22:30)

S: One more news item before we go onto e-mail. And again, many of our listeners sent this in to us, so thanks for sending in these items; it does help us. This one is on wireless technology and autism. Autism seems to be a favourite target these days for quacks and charlatans. This is a press release that was sent out and dutifully repeated on many news sites, especially many techie news sites; computerweekly.com, for example, printed the press release without any real independent journalism or skepticism. And the press release claims that electromagnetic radiation from Wi-Fi devices causes autism. For example, the article is quoted as saying, "the authors say that the rise in cases of autism is paralleled by the huge growth in mobile phone and Wi-Fi usage since the late 1990s, with world-wide wireless usage having reached nearly 4 billion people." Of course as I've stated before, there is no increase in the true incidence of autism. The research clearly shows that autism diagnostic rates are increasing because the definition was broadened, and because surveillance has increased.

R: Thank god, because it seems like everything causes autism these days.

J: There's people out there that just love autism, like—

S: Anything that's been happening over the last ten, fifteen or twenty years you could say correlates with an increase in diagnostic rates of autism. So, it's just looking for correlations and then assuming causation. Although, this study that the press release was referring to wasn't an epidemiological study. It wasn't actually showing a correlation between Wi-Fi usage and autism. It was based on the assumption that heavy metals cause autism, like mercury, so it's buying into the whole mercury—

R: AC/DC.

S: —yeah, causes autism thing. And then, that Wi-Fi electromagnetic radiation impairs the brain cells' ability to clear out these heavy metals. It further assumes that treatments designed to treat heavy metal poisoning actually do help autism, which has not been demonstrated to be true, and that Wi-Fi devices keep these treatments from working. So, this is the study that they did: they looked at children with autism, and then they put the children into a zero-Wi-Fi environment, EMR-free environment. And they treated them with some treatment. The methods in the study don't actually give a lot of details. But, some treatment designed to get the heavy metals out of their system. And they claim that the kids improved. Of course this is based upon just subjective, you know, assessment by the parents. And they measured the heavy metals in hair and urine and faeces, and towards the end of the study period, there was an increase in the amount of heavy metals that were coming out as a result of the treatments. They concluded that this means that the longer they were in the EMR-free environment, the more the cells were able to mobilise the heavy metals, and that's why it was increasing throughout the course of the treatment.

E: Sounds like a logical fallacy.

S: Yeah, I mean, the whole study was horrendously bogus, for multiple reasons. This is a really interesting thing; I've never seen this done before. This is all open label; it's not blinded, which is why it's all totally worthless. But what they did was, they did this study on one case, which they called the "sentinel case". And then, whatever the result was, that was the outcome that was considered positive. And then they did the other people and they had a similar result. That's cheating! That's peeking at the results and then declaring that a positive outcome.

J: As if that's what they were looking for.

S: Right! Whatever we find—OK, we'll call that a positive result. There were no controls; they didn't compare an EMR-free environment with a non-free one, or autism with no autism; they had no real a priori reason to predict that whatever pattern they were seeing meant anything or confirmed their conclusions. But that's actually the least of it. Everything that I just described is why the results are completely worthless. But, when you read the study, it's chock full of pseudo-science. One of the methods that they used to assess the subjects was acupuncture.

[knowing laughs]

J: [sarcastic] Of course. That makes perfect sense.

S: Here's some of their methods: subjects were given intervention in a sequential protocol that included a series of non-chelation provocations—we're not told what those are—and nutritional formularies—we're not told what those are—focused on mitochondrial resuscitation. That's utter pseudo—

E: Gobblede-gook.

S: Techno-babble. Depending on the clinical profile of the client, they divided them into two clinical profiles. Get this: "Two general categories of subjects were defined for clinical purposes: those with liver clearance as an indicated vulnerability, and those with kidney function weakness. These determinations are critical for precision of intervention for each subject and were based on a priori laboratory analyses, acupuncture meridian tests, medical history consultations with subjects' parents and clinical observations." So, they decided which kids had problems with liver function and which kids had problems with kidney function, based upon pseudo-scientific nonsense. And get this: in order to put the kids into an EMR-free environment, they tell us in their methods, "applications of body-worn sympathetic resonance technology, energy resonance technology and molecular resonance effect technology were introduced as appropriate".

E: [disgusted] Oh my god...

S: Look those up, that's all pseudo-scientific nonsense. These are not established devices. This is just using magic to prove magic.

E: It's made up!

S: Right.

E: It's fiction.

J: Well, I mean, what better to prove magic with than with magic, you know?

S: So, it was a terrible study. It was done by people who totally buy into this kind of... their credentials are all in alternative medicine crap. They're not really legitimate researchers. And the whole premise of the study was based upon... the multiple premises of the study are all not true; they're all false premises. The methods are terrible, and they are sprinkled throughout with just pseudo-scientific nonsense. And yet, the press release really doesn't give you that impression. The press release is, you know, "we've shown that Wi-Fi causes autism". And it was really just credulously reproduced by many, many sites. There were a couple of sites which did ask some really basic questions, like "what is the journal in which it was allegedly published". It was published in the Australasian Journal of Clinical and Environmental Medicine. It claimed to be peer reviewed. But it's not listed on any of the peer-reviewed listing sites like the National Library of Medicine. It has nothing that indicates it's actually a legitimate journal. They link only to a website which is under construction, so there's no way to actually get your hands on it.

E: (laughs) How convenient...

S: I got my hands on the PDF because a listener in Sweden sent it to me, otherwise I would have had no way of digging it up. It wasn't on any—and I have access to everything through Yale. So if it exists in the medical literature and it's legitimate, I can get access to it. You know, a couple of people did ask these basic questions like "what's this journal? Who are these guys?" and it turned out that—

E: Who funded this?

S: It's all—again, the journal is the official publication of an organisation dedicated to this stuff. You know, it's not like an independent scientific organisation or a university or anything legitimate. So it's just another example of this alternate universe that's being created within so-called complementary and alternative medicine. They're just creating their own pseudo-scientific infrastructure, and completely bypassing legitimate science, and this is the result. And if you're not savvy, even if you're like Computer Weekly, you can get totally snookered by this superficially legitimate-looking and sounding press release. But you gotta do the basic journalism and dig a little deeper to see that this is all just crap.

Well, let's go onto your e-mails.

Questions and E-mails (30:58)

Skeptiko Podcast and the SGU

S: The first e-mail comes from David Cano, who gives his location as "the US", and he writes,

Dear skeptics,

I'm a longtime listener, first-time e-mailer. I decided to e-mail you after listening to the last episode of the Skeptiko podcast. I like to listen to that show mostly because it's very interesting to listen to some famous skeptics being interviewed by a guy who claimed to be a "skeptiko" but it's obvious that he is a profound believer. Anyway, I was a little bit annoyed by their last episode. The whole episode was a clear attack on the skeptic community. The episode is a good source for almost any logical fallacy you can think of. More specifically, he directed his disappointment to Steve and the SGU as this was the main focus of the episode. I think you guys should listen to the episode and maybe address some of his claims, particularly when he accused Steve of not doing research in psychic detectives.

Keep up the good work,

David

Well, thank you, David. There were several other listeners who also alerted us to this. There is a thread on the SGU forums. We certainly did take a listen to Alex's—this is Alex Tsakiris who does the Skeptiko podcast. I was interviewed on his podcast a few months ago. And his last episode is really just him saying, "all right; I give up" and just ranting against skeptics and the skeptical community for a half an hour or so. I have to say, it really was a terrible episode. He makes three basic claims against skeptics: 1) that we do not do research into the paranormal, 2) that we don't read the research, and 3) that we don't listen to the paranormal research—that we don't abide by its findings. And he used quotes from myself and from James Randi and Richard Wiseman, primarily, to sort of make his points.

Let's take things one at a time. First, that we don't do research into the paranormal. His primary piece of evidence for this was that when we interviewed Jan Helen McGee at the end of 2005, we agreed with her that we would test her ability as a psychic detective and that when a suitable case, local to us here in Connecticut, cropped up, we would get her to give a psychic reading on that. And we haven't had an opportunity to do that since we agreed to do the test. This is due to a couple of reasons: one is because I think in retrospect, our criteria for a suitable case are probably too difficult. I wanted a case that was local to us that she would not have direct access to, that was still unsolved, but fresh enough that we would be getting an outcome within a reasonable period of time. So I didn't want cold cases or cases that are unsolved and who knows how long they would take. They can't be cases that we just find through the popular media because then she could find out all the information that we could find out. I wanted to control this as much as possible. But also, the fact is, we're extremely busy and things like that tend to get back-burnered. It's hard for us to really keep on top of everything and follow through with everything that we want to follow through on. But he was concluding from that that we're unwilling to do the research, that we're unwilling to even do personal explorations into this kind of stuff. And that is a completely unfair, completely, patently, factually false claim, and either he really hasn't been paying attention or listening to our show, or he just has not—the facts are just not penetrating, which is what I really think.

For example, at the end, he gives a challenge to skeptics to get a psychic reading; find a "good psychic" and don't give them any feedback and then score their results. And Jay and I were chatting about this before the show. It's just one of those happy coincidences of quirkinesses of randomness and fate that pretty much exactly what he said skeptics should do to convince themselves is exactly what we did last weekend—or two weekends ago and reported on last week's podcast[link needed], just by coincidence. We went to a psychic fair; we had a reading with three different psychics; these were psychics that had reputations that were being promoted by the organisation. They were not obscure or anything; they were as good as anybody out there. And we recorded the entire process, and guess what, Alex? They scored zippo.

E: Zero!

S: They didn't even come close. It was pathetic. And just to run down some of the other things we've done—we've never turned away an offer to do research. We were contacted by some local investigators who thought they had good evidence for electronic voice phenomenon. We went on a reading with them; we went into the field with them to look at their methods; we looked at all their evidence. We said, "give us your best piece of evidence"; they did; we looked at it. We investigated the Warrens; again, give us your best evidence; we looked at anything they would give us.

J: And that was a long-term investigation; that was not—

S: That was long term. We've done preliminary investigations for the Randi psychic challenge looking at the Ouija board guys, the psychic guy, people who claimed to have ESP. We've investigated multiple local ghost-hunting organisations and allegedly haunted houses. So, we do this kind of basic investigatory research, not lab research. We're not lab researchers in parapsychology. But we'll investigate anything anybody wants to show us, so.

J: And if anything, to be honest, if we had more time, if we weren't all working 40+ hours a week, I would love to do this every day. If this could be my full-time job, I would do it, but it just doesn't work out that way.

E: Plus, Alex cites us as the example, like somehow this is very common no matter what skeptics you're talking to, whether you're talking to James Randi, whether you're also talking to the people at CSI. I mean, forget it. Between Joe Nickell and James Randi, do you know how many investigations and research that they've done into these things over the course of the years? It's plentiful. It's bountiful. And Alex brings up the Jan Helen McGee case as "that's typical of what skeptics are all about". It's so wrong.

S: Yeah; Richard Wiseman, whom he brings up, does investigations and collaborates with paranormal researchers. Ray Hyman has done research and has collaborated with others on this. So, it's just a patently absurd charge. In fact, the skeptics are the only ones who are doing this kind of research and investigation. Mainstream scientists are generally not doing it because they know it's all bunk and they're not interested in it. So, he's... and I told him this during—when I was interviewed for his show, that skeptics are scientists or scientifically minded individuals who are showing interest in these areas because of the great public interest in them. And we're doing them a favor. If it weren't for us, nobody would care about this. I mean, nobody in the scientific community. Obviously, the public has an interest in it.

J: Steve. You know, I think this would be a good opportunity for us to mention Richard from The Tank. He does research, right, Rebecca?

S: Richard Saunders?

R: Richard Saunders. And also, today is his birthday. F.Y.I.

S: Happy birthday, Richard.

R: Happy birthday, Richard.

J: Remember Skeptics West? We met them at TAM 5. Didn't they do research? I don't remember when we talked to them.

E/S: CFI West. Center for Inquiry West.

S: Absolutely.

E: That's Jim Underdown—

R: Jim Underdown.

E: —runs the show out there.

R: Ben Radford. Yeah.

S: It's also a complete non sequitur. You know? And we hear this from the UFO people, Bigfoot people, not just the ESP enthusiasts. Everyone who does not like skeptical attention to their claims and their methods tries to pull this one on us. "Well, you're not really doing research yourself; you're just nay-sayers". It's irrelevant. Everyone... even if you do research full time and that's your job, whatever research you do is only a tiny, tiny slice of all the research that gets done. Even in your area; forget about the broader areas of science. Because of the consilience of science, because of interdisciplinary science, you have to rely upon the research of people in areas adjacent or tangential to your own. You have to develop the ability and the skill to interpret the literature, even if you're not doing research in that area. And what scientific skeptics are trying to do is provide the kind of peer review and critical analysis that typically happens in mainstream science and apply that to more of these fringe areas, because the mainstream scientists are ignoring it out of hand, usually. It really is incredible that Alex has chosen to attack the skeptics because he doesn't like the conclusions that we come to.

J: Well, he's—one of his premises was that he was—he sounded to me, actually, to be very disappointed that his podcast didn't actually fuel a lot of this heated debate between fringe research and skeptics. I think that's what he wanted Skeptiko to do. And what he ended up getting was much more of a dismissal from the skeptics that he was interviewing as "listen! There's nothing here to chew on. It's worthless. It's a dead end and it's a waste of time".

S: But you know, Jay, that was a totally self-fulfilling prophecy on his part.

E: Yep.

S: You know, he—if you listen to these interviews, there was no learning curve for Alex over the 30 interviews—shows that he has done, over the interviews that he has done. He went into this with certain biases and assumptions clearly demonstrated in his interview styles and the questions that he asked. And he showed no learning curve, meaning that the kind of things that people—that skeptics were telling him along the way, he never incorporated into his thinking. The things that he was ranting about on this most recent episode are things that I addressed when I was interviewed on his show. It's as if it never happened. Like the discussion was never even taking place in his mind. He was just cherry-picking quotes from the skeptics in order to support his a priori assumptions. The big assumption that he makes is that—and he says he discovered this as he started to explore that, and that's fine. Let's say, early on, he said, "wow! There's a lot of evidence for psi. How come scientists and skeptics are not as compelled and impressed by this as I am?" But he very quickly, if not initially, very quickly got to the conclusion that there was compelling evidence for psi. And he could not understand, even though I laid it out for him, and others have laid it out for him in detail, he could not understand or accept that scientists and skeptics have legitimate reasons for not being compelled by that evidence. And in fact, on my interview with him, I told him, "that is the key—you're trying to understand the gulf between skeptics and believers? That's it. It's why are you compelled by the evidence and why am I not compelled by the evidence? That's where we should focus our discussion." He never went there. He never wanted to go there. He only—

E: He had pre-conceived notions.

S: Yeah, I mean obviously. He only went to... he proceeded from the conclusion that it was compelling. And then he... his reasoning is, "well, if it's compelling, and skeptics don't accept it, then they're just dismissing it and ignoring it". Or they're... then that leads him to the conclusion that, "well, we must be afraid". We're afraid of the implications of this research because we have to protect our precious materialist paradigm.

J: Yeah, or he... he also claims that because it doesn't fit into our ideology, we ignore it and reject any evidence that has come up that proves that, for example, like Dean Radin.

S: Yeah.

J: His research supposedly proves that there's precognition.

B: Well, he would... Alex would respond, I think, saying that—didn't he mention in his podcast that Radin's research has been duplicated? Is that true?

S: Replicated. Let me get to that, because he does focus on—his next premise is that skeptics don't read the literature. And again, he arrives at that conclusion based upon his assumption, his premise, that there's compelling evidence. And therefore, we must not be reading the literature, 'cause we would then see the same compelling evidence that he does. And he uses Dean Radin's experiments as an example, and he uses the dogs anticipating when their owners are coming home as an example. And let me address those two that he discussed in his last podcast.

So, Dean Radin's experiments. Dean Radin did a meta-analysis of studies looking at the ability for people to know, psychically, when they're being stared at. So, they show some evidence that they know that they're being stared at, even when they're not getting any sensory cues to that effect. And he took Ray Hyman to task and through that, took me to task for not being familiar with the latest meta-analysis that Dean Radin did about this. And he still is clinging to that; he's really clinging to this—"See? They're not looking at this research, which is compelling". Well, I did review Dean Radin's meta-analysis of this literature. And this is a perfect example of why I'm not compelled by the evidence and why Alex and Dean Radin and Rupert Sheldrake and Marilyn Schlitz and that crowd are compelled by it: Because they're not good scientists. And they don't get it. In order for a science to be compelling enough to establish a new phenomenon in science, we need to see a few things: We need to see science that has good methodology, where any artifacts are weeded out. We need to see results that are statistically significant. We need to see replication, so we know it's not just one lab or one guy. And we need to see an effect size that's above noise. And we need to see all of those things at the same time. And that's what—

B: Throw in a mechanism. That would be nice.

S: I'm not even going there, Bob.

B: Yeah, I know, but it would be nice. It would be nice.

S: Mechanism; that'd be the icing on the cake.

B: Right.

S: But let's just say we don't understand enough to know what the mechanism would be. Let's just decide if it exists as a phenomenon. Then we could worry later about the mechanism.

B: Right.

S: The fact that we lack these things in those areas for which there is no plausible mechanism, I think is not a coincidence. But let's just look at those four criteria. So what we'll get from people like Alex is, "well, look at this study has good statistical significance, and this study has good methodology". And maybe you have two or even three of those things, but never all four at the same time. You have studies which show large effect sizes, but then as you improve the methods, the effect sizes shrink. Or you have small studies that have no statistical significance. Or whatever. Now, Dean Radin's meta-analysis showed that—looking at what he considered all of the quality studies—the effect size, with 50-50 being chance, was about 55%. A 5% variance. Now it was statistically significant because there were tons of trials run, and this was after the methodology evolved, so that the obvious methodological flaws were removed. So you have... Let's grant good methodology. Let's grant replication. Let's grant statistical significance. Effect size teeny-tiny. Why is it that the effect size is so small? So what Alex doesn't understand is that we're not compelled by tiny effect sizes because, to be compelled by it, the unstated major premise is that perfect methodology should produce an effect size of zero. Meaning that there's no noise in the system. Meaning that we're able to conduct trials with people and get everything perfect. And that's just not the case. That's not what we see in science. When you get down to single-digit effect sizes, we assume that's the noise in the system; that that's a null effect. Negative. So I look at that data, and I see "oh! As the methodology improved, the effect size shrank". And when you pool everything together, you get down to an effect size of about 4 or 5%. 
Dean Radin himself, in his own analysis admits that there's a publication bias, and if you take that into account, the effect size shrinks further. But it doesn't get all the way to zero. So maybe we're down to 3 or 4%. That's negative, folks! That's negative. Alex finds that absolutely compelling enough to say that the materialistic paradigm is dead based upon a 4% variance.
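The statistical point Steve is making here can be shown with a quick numerical sketch (these are illustrative numbers, not Radin's actual data): a tiny effect size can still be "very statistically significant" simply because a huge number of trials were run. A normal-approximation z-test on a binomial hit rate makes this concrete:

```python
import math

def two_sided_p(hits, trials, chance=0.5):
    """Two-sided p-value for a binomial hit rate, using the
    normal approximation to the binomial distribution."""
    p_hat = hits / trials
    se = math.sqrt(chance * (1 - chance) / trials)  # standard error under chance
    z = (p_hat - chance) / se
    return math.erfc(abs(z) / math.sqrt(2))         # P(|Z| >= z) for a standard normal

# The same 55% hit rate (a 5% effect above chance) at two sample sizes:
print(two_sided_p(55, 100))      # ~0.32: not significant at all
print(two_sided_p(5500, 10000))  # ~1.5e-23: overwhelmingly "significant"
```

Pooling enough trials makes any nonzero deviation from 50% significant, which is why significance alone says nothing about whether a 4-5% effect is real or just residual bias and noise.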

J: Steve, what would the number have to be in order for the light to go on?

S: It depends on what you're doing research in, but when you're dealing with people anywhere in the system, we like to see effect sizes of—to say that this is compelling, you like to see effect sizes of 30% or so. To say that it's interesting, this is something we need to look at, I would like to see at least a 20% effect size. And then if you have some really objective outcome measures and minimal potential for unforeseen biases in the way the data is being collected, maybe like 10 to 20% is like—that's borderline. You know, that's where it's like, "eh, that's kind of small. Not really convinced by it, but maybe there's something there". Less than 10% doesn't even deserve a pass. I mean, that's just—single digit is noise.

J: So, Steve, let's say that you did have 10%, and then you conducted the next study. There was enough interest there. And then you tighten up your science; you're double blind; you're doing it better next time; you're really digging in now. And then... OK, and this is, supposedly, the way that scientific research is supposed to work—then you would have a much more clear picture of what's happening. So that's my question. OK, so there's 4%. So they do another study. So they get 4%. So they do another study, and they get 4%. The number never goes up. They never get anywhere with it. It just remains the same.

S: Yeah. And the same is true of the Ganzfeld experiments, where initially they were saying, "oh, 20, 30% effect size. Let's clean up the methodology. Oh, it's 10%. Oh, let's fix this. Oh, it's 4%."

E: It disappears.

S: "But look, it's very statistically significant". No. It's shrunk to noise. And listen—I do do research. You know, not in this area, but I do do research, and I realize how easy it is to throw a little bias into the numbers. Just to give you an example—I'm not saying that any of this actually happened, but just to give you an example of a really easy way in which subtle bias can creep into this kind of study. Let's say you run a series and the results are not looking good and then you think to yourself, "huh, did I calibrate my equipment properly? You know, let's start over and calibrate the equipment and then go forward and just not count this trial". You know, how do I know that that kind of stuff's not happening in any of these studies? That's not necessarily even conscious fraud; it may be completely legitimate to think that you gotta, whatever, run some controls before... But if... you might be more likely to think to do that if the result—if you just happen to be on a negative streak than if you had a few positive in there. Maybe you would not think or do anything to sacrifice those. And again, this is just a hypothetical example. But there's a hundred things like that, where really subtle bias can creep into studies like this. So, you just can't believe effect sizes that are that tiny. But Alex does. And even... The thing is, at least admit that that's the difference. At least—I tried to explain to him, this is what the difference is; this is why you believe one thing and I believe something else. Instead, he's talking about paradigms and being afraid of changing our world view and being dismissive and not knowing the literature. It really was a whiny, insulting, really childish approach that he took to the whole thing.

Just for the record, I did e-mail Alex and invite him on our show so that he could defend himself directly. I did not give him much leeway, so it's not really—I'm not saying that he refused to come on, or he might not have even gotten my e-mail yet, but just wanted to let the audience know. I did invite him on, and I'm willing to have him on the show at a later date if he decides to come on. Be happy to have this conversation with him directly. This all happened very quickly, so he didn't really get much notice for tonight's show.

J: When he started the show, I gave a listen to it and my initial impression was, "well, you know, this is interesting. He has a different attitude; he kind of believes both in skepticism and in the supernatural in some degree. Let it play itself out." We didn't criticize his podcast—I don't think we've ever criticized any podcast on this show. After he interviewed you, Steve, I sent him an e-mail, and I told him I thought he did a great job with the interview and I thought it was a very good show. I think we were very civil with him, even though we saw the podcast slipping into much more of a pseudo-scientific direction. We never talked bad about him in any way. Out of the blue he decides that he's going to rabbit punch us. Didn't—

E: Make an example of us.

J: Yeah. Didn't initiate the conversation.

S: I don't think this is out of the blue if you were paying attention. And also, Jay, this is exactly what we experience every time. We had our long-term relationship with Ed and Lorraine Warren; it was the same thing. Very civil; very nice; we're just interested in doing observations, but as the results of our analysis were going against them, they turned on us. At one point it dawned on Ed Warren that we were not going to endorse his crappy evidence. And then, the switch flips and then he's against us. And the same thing with Alex. I mean, Alex just didn't realize—and he still doesn't get—he doesn't understand the difference between his position and the skeptical position. That's because he never really made an honest attempt to understand the skeptical position. Clearly, 'cause he clearly doesn't understand it. And you know, eventually just realized that "wait, these people aren't all coming over to my way of seeing things. The problem must be with them. So, I'm gonna attack them." That was the process.

Very quickly, since he brought up Richard Wiseman's experiment, let me go over that. Very, very quickly. He said that Rupert Sheldrake did an experiment where he looked at the ability for a dog to know when their master is coming home. So, he had one crew watching the dog at home, another crew out with the owner. When the owner decides to come home, the dog goes out to the porch to wait for him. So as if there was some kind of psychic connection. Richard Wiseman replicated the research, but instead of using more open-ended criteria, he said, "OK, you have to have some specific criterion for what we're going to count as a hit". So he said "OK, if the dog goes out to the porch the first time, that's what we'll count as the dog anticipating the owner coming home". And the result was negative. Based upon some feedback, he said "OK, maybe there's too much noise in there, so let's say the first time the dog goes out to the porch for more than two minutes, we'll consider that a positive outcome". They did the study; it was negative. So, Wiseman says, "I replicated the research; it was negative". Sheldrake then says, "well, no, you have to look at all of the doggie behaviour and you do an analysis of the patterns, and the dog is more likely to go out to the porch to spend more time there at the time that the master is returning home". And if you go back and look at Wiseman's video, the dogs follow that same pattern. And then, Alex, thinking he had Wiseman—you know, he has that "gotcha" moment; it's like, "I got Wiseman. 'Cause Wiseman admitted that the pattern was the same and that he replicated Sheldrake's research and now he still won't admit that that's proof of psi, so I got him. He's a hypocrite". But Alex completely... on the show, he didn't even acknowledge Wiseman's response to that. He's accusing us of not knowing the literature and being up on the latest thing. 
He dinged Wiseman for that, whereas Wiseman spelled out in detail why he thinks that the studies still don't show evidence for psi. One is, because we have no idea that Sheldrake wasn't just retro-dicting, looking for patterns and then declaring that a positive. Wiseman set the criteria up ahead of time, and even revised them to try to make it more fair. And it still was negative. The other thing is that Wiseman brought up—very, very good point—that if you hypothesize that when the owner leaves, the longer the owner is away, the more anxious the dog is going to get for the owner's return. So the dog will go to the porch more and more frequently and spend more and more time there until the owner returns. And of course, the owner returning ends the cycle. So the dog will have spent the most time at the porch right before the owner comes home. So that explains all the data without hypothesizing a psychic ESP connection between the dog and the owner. That's why Wiseman didn't think it was evidence for psi. It's a completely, perfectly reasonable interpretation of that research.

B: Occam's razor.

S: And Alex missed it. Completely missed it, and he was criticizing Wiseman for being a hypocrite.

E: So was Sheldrake employing the Texas sharpshooter fallacy?

S: We don't know, because he didn't really publish his methods in a way that we could know, you know?

E: Hmm.

S: The reason why we're spending so much time talking about this—it's partly because he directly challenged us, but... everything that Alex said, this is the standard line of the psi community. This is what they all say. All the points that Alex makes; this is the party line of the paranormal believers.

J: I think what we need to do is, first off, we need to take Alex up on his challenge; we will go find three more—I think he asked for the skeptic to go to three mediums.

S: Yeah.

J: We'll go to three mediums; we will—

S: Again.

J: Again. We will do it exactly the way he wants; whatever criteria that he wants, we'll do.

S: Hey Alex, you can even choose the mediums. Choose the three best psychics, mediums you want.

J: And we will do this and we will report on this and, you know, we should pick a date—a time limit, so we make sure that we do complete it in a reasonable amount of time.

S: Well, you know, if he picks 'em, we'll make a weekend and we'll go. We'll do it. We'll pick subjects that—not us. People that we know that the psychics will not know and cannot get information on. We'll record it and we'll score it. But... I do have to say that Alex again is proposing this protocol and it's really naive; it's just incredibly naive because it's all in the scoring. It's all in how you score the hits and misses. And he completely does not consider the phenomenon of looking for specifics and generalities. That's how psychics make it seem as if they're being specific when they're not. Like when Rebecca and I sat with that one woman—she said, "I see a uniform". First she said, "policeman or..."

R: Fireman.

S: Fireman. "Policeman or fireman". Yeah, when we didn't endorse that, she said, "some kind of uniform". That's something that sounds specific, but I mean, come on. Think of... who doesn't have somebody in their life that doesn't have some kind of uniform?

E: I wore a uniform when I was eight years old!

S: It's something that's designed to be—to sound specific when it really isn't. So, we'd have to decide how we were really going to score the reading. But you know what? I'm convinced enough that if we trained the subjects not to give positive feedback that they'll perform as terribly as the three psychics we sat with a couple weeks ago.

E: Zero.

S: And they'll get only the random, occasional vague hit. One final point is that Alex accused us of having basically a double standard, of changing the rules, changing the nature of the game for psi research. And again, it's just incredibly naive on his part. You know, I interpret medical literature every day and apply it to my practice. And I apply the same kind of standards to medical research. If a drug company was trying to get me to prescribe their drug because it had a 5% effect size, I would laugh at them. I mean, again, that's just noise in the system. So, he's making that accusation without any factual basis whatsoever. And if he actually knew how science functioned, he would know—or even just me personally—he would know that we're applying the same rules to psi as we do to other research, not different rules; he just doesn't know what those rules are.

J: That being said...

S: (deep voice) With that...

E: (chuckles) That horse has been clubbed sufficiently.

Science or Fiction (1:00:01)

S: Each week, I come up with three science news items or facts, two genuine and one fictitious. And then I challenge my panel of skeptics to tell me which one is the fake. And here are the three items for this week. Item #1: Astronomers have discovered organic building blocks in the upper atmosphere of Saturn's moon Titan. Item #2: New research suggests that pedophiles consume more meat than normal controls. And item #3: Psychologists have demonstrated that a simple written test significantly increases the accuracy of a lie-detector test. Evan, go first.

E: Item one: Astronomers discovered organic building blocks... upper atmosphere of Saturn's moon Titan. Er. Dubious. Number two is: research suggests pedophiles consume more meat than normal controls. Totally plausible. Number three: Psychologists demonstrated written test significantly increases the accuracy of a lie-detector test. Boy. I'd like to say one... is fiction, but... I don't know; I think that's a curve ball. I'm usually pretty good at reading the stitches on the curve ball. So I'll say... I'm going to say that the pedophiles consuming more meat. That's the fiction.

S: OK. Rebecca?

R: I agree. That's bizarre. I'm going to go with that one as well.

S: All right-y. Bob?

B: (exhales) Number one: the organic building blocks in the upper atmosphere of Titan; I guess I could see that. We found organic building blocks in lots of other places. It'd be interesting, though, to have it so close to home. Pedophiles and more meat; I don't know. Now three. This one's interesting. I don't know if I'm interpreting this correctly, Steve. In what way—do you mean a written test instead of the verbal test?

S: As an added procedure.

B: As an added procedure. Oh, I gotta think about that one.

R: Oh, God.

B: Cue the music. I would think that the fact that you've actually written it would make you more confident about your lies in a way and maybe even stifle the feedback that the lie-detector test relies on, in a way. It's kind of like rehearsing your lie. You rehearse it so much that it becomes harder to detect that you're actually lying. All right. So for that reason, I'm going to say three is fiction.

S: All right-y. Jay?

J: Wow. I didn't expect Bob to pick that one as the fiction. I'm actually leaning towards the astronomers discovering organic building blocks in the upper atmosphere, but there's something that's scratching me about pedophiles eating more meat. That just sounds so incredibly weird to me.

E: On many levels.

J: And my mind just went somewhere really dirty and I'm not going there again.

B: Jeez.

E: I hear cannibals eat more meat than the average person.

J: That's true.

S: They eat more meat of the average person.

(laughter)

J: I like to be... I'm gonna go on my own and I'm gonna say that the first one is the fake. About the...

S: Titan?

B: All over the board!

S: OK.

E: Ugh.

S: Well, let's take 'em in order.

(laughter)

J: I knew it... I'm so stupid.

S: Astronomers have discovered organic building blocks in the upper atmosphere of Saturn's moon Titan. This story is science. Sorry, Jay.

J: I knew that.

S: Very interesting. So Titan, which is the second largest moon in the solar system. Anyone know what the first largest moon is?

E: Uh, Earth's moon.

S: Nope. Not even close.

B: Wait, wait... oh, crap. It's not Titan...

S: We used to think it was Titan, because we were measuring it to the edge of the atmosphere, but if you measure it to the edge of the surface—

R: Is it Saturn's...

E: Isn't it... maybe Pluto's moon?

S: No.

B: Charon? No.

R: Ganymede?

E: Cerberus?

B: Gotta be one of Jupiter's moons. Is it Ganymede?

S: Ganymede, right. It's Ganymede.

R: Oh, I thought that was Saturn's.

S: Yep, Ganymede of Jupiter is one; Titan of Saturn is two. They're both larger than Mercury. The planet Mercury. But Titan's been very interesting for a while, because it has an atmosphere. It's a moon with an atmosphere. It also has—

B: Dense; isn't it dense?

S: It's a very dense atmosphere. Also has lakes, although it's lakes of liquid methane and it rains methane. And they thought that it would probably be too cold for this kind of thing to exist on Titan. So they were a bit surprised by this. But what they found with spectroscopic analysis, is that there are very heavy hydrocarbon-based negative ions in the upper atmosphere of Titan. And these could serve as building blocks for more complex organic molecules.

B: Kinda like—what's that, nucleation sites, or...

S: Yeah. Right. Very interesting; makes Titan even still more intriguing. Definitely have to send more probes there. Item number two: New research suggests that pedophiles consume more meat than normal controls. And this one is... fiction. That is the fiction.

E: Woo-hoo! High-five, Rebecca!

S: Congratulations, Evan and Rebecca.

R: (laughs)

S: Yeah, it's a little bizarre. But I did base this on a real item.

J: (quietly) I hate all you people.

S: So, new research that shows that pedophilia may be the result of faulty brain wiring. This particular study, which is based on research conducted by the Center for Addiction and Mental Health, found that pedophiles have less white matter in their brains than normal controls. There's other research which also shows other neurological differences between pedophiles and controls. Pedophiles have lower IQs, are three times more likely to be left-handed—

E: Sinister.

S: And even tend to be physically shorter than non-pedophiles. So these are all inferential; they're suggestions that maybe there's something genetically or developmentally different about pedophiles. That maybe their brain did not develop fully or in the way that brains usually develop. And that this somehow correlates with their pedophilia. Again, we're far away from establishing any cause and effect. The researchers are clear to say that this in no way should be taken as implying that pedophiles should not be held responsible for their actions. Just that this suggests that there may be an underlying neurological reason for their behaviour. This does tend to contradict earlier—I mean, going back decades—assumptions that pedophiles were created, psychologically, by their environment.

Which means... "Psychologists have demonstrated that a simple written test significantly increases the accuracy of a lie-detector test" is science. That's an interesting thought you had, Bob; I actually hadn't considered that. But what they're doing is supplementing the lie-detector test with a written test and they're using what's called Symptom Validity Testing in conjunction with polygraph testing. And it improves the accuracy of the results.

B: Significantly?

S: Yes. What this does—they look for patterns in word usage when they ask them to write about whatever it is—the information that they may be trying to conceal or be untruthful about. And you could use a statistical method to analyse the word choices that people will make. For example, it says, "we showed that the accuracy of the concealed information test can be increased by adding a simple pencil and paper test. When 'guilty' participants were forced to choose one answer for each question, a substantial proportion did not succeed in producing the random pattern that can be expected from 'innocent' participants." So if you're asking somebody if they have any knowledge about something, and they do but they're trying to pretend that they don't, and then you analyse their answers, if they really don't have any knowledge about it, and you give them 20 yes or no questions, they should have a random distribution of correct and incorrect and yes or no answers. If they do have knowledge about it and they're concealing it, they'll give a non-random result. They won't be able to fake generating a random result. It's a really interesting approach. And I bet you we could use that in a lot of other ways, too, that same basic approach, 'cause I think that it is very difficult for people to consciously fake random patterns or normal patterns. They will tend to betray themselves in the details of how they report things and the words that they use, etc. Very interesting approach. So, congratulations Evan and Rebecca.
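(Transcriber's note: the statistical idea Steve describes, that a respondent concealing knowledge cannot convincingly fake a chance-level pattern of answers, can be sketched in a few lines of Python. This is an illustrative toy, not the researchers' actual Symptom Validity Testing procedure; the function names and the 5% significance threshold are invented for the example.)

```python
import math

def binomial_two_tailed(hits, n, p=0.5):
    """Exact two-tailed binomial test: probability of a hit count as
    extreme as, or more extreme than, the one observed, assuming the
    respondent is answering at chance (p = 0.5)."""
    prob = lambda k: math.comb(n, k) * p**k * (1 - p)**(n - k)
    observed = prob(hits)
    # Sum every outcome no more likely than the observed one.
    return sum(prob(k) for k in range(n + 1) if prob(k) <= observed + 1e-12)

def svt_flag(answers, correct, alpha=0.05):
    """Score a forced-choice answer sheet against the key.
    A truly naive respondent should land near chance; a score that
    deviates significantly (often *below* chance, because the person
    avoids the right answers) suggests concealed knowledge."""
    hits = sum(a == c for a, c in zip(answers, correct))
    p_value = binomial_two_tailed(hits, len(answers))
    return hits, p_value, p_value < alpha
```

With 20 yes/no questions, a respondent with no knowledge who answers at random lands near 10 hits and is not flagged, while one who systematically dodges the correct answers (say, 2 hits out of 20) produces a distribution too far from chance to be accidental, and is flagged.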

R: Thank you, Steve.

E: Ah, thanks. Thanks.

J: What about me and Bob, Steve?

S: You guys get honorable mention.

E: Yeah, congratulations to you guys, too.

R: Which means you lost.

J: Bob, good work, man.

B: Yeah.

S: It's good; you guys thought your way through it. It's fine.

R: You thought your way through to the wrong answer.

J: Bob, you and I were represented, you know?

B: Well, at least I didn't pick number one.

E: It's OK; Alex would have guessed one or three.

(laughter)

E: Don't worry, guys.

Skeptical Puzzle (1:09:32)

S: Evan, you got a puzzle for us this week? Actually, can you tell us the answer to last week's puzzle?

E: Yeah, we gotta go back to last week's puzzle. Talk about that one first. "110 of them are gone, 1 is here now, and 1 has yet to come. What is it?" And the answer is... the prophecy of Saint Malachy, also known as the Prophecy of the Popes. Have you guys heard of this?

B: No.

R: No.

J: No.

E: Saint Mal—(chuckles) Saint Malachy...

S: Is it Malachi?

E: He—Malachy. M-A-L-A-C-H-Y. 12th-century bishop—

R: Isn't that Malachi?

E: No. It's Malachy. That's all.

S: Whatever.

R: OK. Sorry.

E: 12th-century bishop of Armagh in Ireland, and in 1139... he came up with a prophecy having to do with the popes, in which he attributed 112 little phrases in Latin that would describe the next 112 popes, and something apparently quite grand would happen at the end of the 112th pope. The numbers 167 through 268 that I mentioned in the puzzle represent the numbers, in sequential order, of the popes starting in 1143, and of course, running right up to 2007 currently. Benedict XVI, who is number 111 currently. So, 110 are gone; 110 popes have come and gone; 1 is currently here; 1 is yet to come, so. Number 268 represents the one that's yet to come, and who knows what's going to happen after that's done, because we're going to be out of Latin phrases to attribute to the popes. So, who knows. That must be Judgment Day or something; you never know with these religious sorts.

S: Right, right. And I hate to ask you who got it right first, 'cause I think I know the answer.

E: Yeah, I guess it didn't take a prophet to see this one coming, but Ole!

All: (cheering)

E: —Eivand again. Olé! [pronounced as such]

J: That guy's on a roll!

S: Continuing his domination of the puzzles.

R: Does he just get them all right?

E: Got it pretty quick, so he obviously was familiar with this from the get-go and recognized it, so. But congratulations, Olé.

J: I think Olé is a computer program. He's not real.

E: Well, there's a rumor going around that Olé's actually Steve under another name; you know, it's his assumed name.

R: Steve does have a lot of time on his hands and tends to do things like this.

E: Oh yeah. Oh yeah; he's all over these puzzles like—

S: I'm looking for stuff to do; I just got all this time on my hands.

(laughter)

R: And I don't know if anybody knows this, but Steve is actually Evan.

S: Mmm-hm.

R: Just doing a different voice; it's really impressive.

J: He's quick.

E: Yes. Yes. What? Who?

J: (laughing)

S: You'll notice we never talk at the same time.

J: Rebecca, you know I'm tired when what you just said confused me a little bit.

(laughter)

J: Oh, my God; I need help.

S: Evan, what's this week's puzzle?

E: OK. This week's puzzle is as such. It's a poem—a short poem.

An enemy of Scientology

And a used car salesman to boot
He developed a strange philosophy
Of which Zen mastery was the root

Self-taught with no formal degree
He changed his name, and ran from home
Gullible masses made him rich, you see

He fled the country with money to roam

Who am I talking about? So think about it, and good luck, everyone. Except you, Olé. I'm done giving you good luck; you don't need it.

R: Good luck, Olé; I hope you win again.

(laughter)

E: Stop that! Don't give him—

R: Just to be contrarian.

S: Rebecca, you're a sap.

R: We don't believe in luck; we're skeptics.

S: That's right.

R: Luck is probability taken personally.

J: I said Olé could be a woman.

R: Yeah, maybe Olé's a girl.

E: Olé could be anybody, anything; could be Alex for all we know. Who knows? Maybe Alex has Skeptiko up as a ruse to make people think he's really not with it, or something. I don't know.

R: I like to think Olé is a beautiful fairy. No, unicorn.

S: The puzzle fairy? Olé is the puzzle fairy. Who rides a unicorn.

R: Who rides in on rainbows.

S: Who rides unicorns that fart rainbows. All right.

(laughter)

Quote of the Week (1:13:56)

S: Jay, give us a quote to end our show this week.

R: She's beautiful. The rainbow.

S: Olé...

J: This is a quote from Olé: "I won, bitches!"

(laughter)

J: This is a quote from Steven Weinberg, who is an American physicist. And Mr. Weinberg says:

The effort to understand the universe is one of the very few things that lifts human life a little above the level of farce, and gives it some of the grace of tragedy.

Steven Weinberg!

S: That's a good quote; I like that.

E: Yeah.

J: I like it.

R: That is a good quote.

E: Very nice.

S: Very poetic.

R: Is he related to you, Steve?

S: A lot of Steves around.

R: I guess so. You can't all be related. OK.

S: But maybe he's one of the Steves.

B: Yeah.

R: He's in the family of Steves.

B: Sign the... papers.

R: Or is that like a genus? Is that a genus?

E: There's a prophecy of the Steves, but that's a whole 'nother story.

Announcements (1:14:53)

S: Couple quick announcements. Just a reminder that I'll be giving a lecture for the New York City Skeptics on December 8th from 1 to 4 PM at the New York Public Library, 425 Avenue of the Americas. And also, quick plug for TAM 5.5, which is coming up January 26th through 27th in Ft. Lauderdale, Florida. You can—

R: Where I will be speaking.

S: Yes, where Rebecca, the lovely Rebecca will be speaking, representing the SGU while we're preparing to all of us go to TAM 6 in June.

E: Woo-hoo.

S: But she'll be holding down the fort for us at TAM 5.5 in January. And I understand there is definitely still room, so go on to the James Randi Educational Foundation website and get your tickets.

R: And, as an added bonus, they might even be doing a test for the Million Dollar Challenge at TAM 5.5.

S: Wow.

E: Nice.

R: It's not set in stone yet, but it could be very fun.

S: Not making any promises—

R: No promises.

S: —but make your travel plans accordingly.

R: But we might have found a real psychic. No promises.

S: This could be the one. Could be the one after all these years.

R: This could be the one.

E: The psychic.

R: You need to be there.

S: Well, thank you everyone for joining me again.

R: Thank you, Steve.

B: Anytime!

E: Thanks, Steve; very interesting stuff tonight.

S: Always a pleasure. And until next week, this is your Skeptics' Guide to the Universe.

S: The Skeptics' Guide to the Universe is produced by the New England Skeptical Society in association with the James Randi Educational Foundation. For more information on this and other episodes, please visit our website at www.theskepticsguide.org. Please send us your questions, suggestions, and other feedback; you can use the "Contact Us" page on our website, or you can send us an email to info@theskepticsguide.org. 'Theorem' is produced by Kineto and is used with permission.

