SGU Episode 339

Introduction

You're listening to the Skeptics' Guide to the Universe, your escape to reality.

S: Hello and welcome to the Skeptics' Guide to the Universe, today is Wednesday, January 11th, 2012 and this is your host, Steven Novella. Joining me this week are Rebecca Watson.

R: Hello everyone.

S: Jay Novella.

J: Hey guys.

S: And Evan Bernstein.

E: Did we miss someone?

J: Where's Bob?

S: Yeah, Bob is on vacation this week.

E: Vacation? I didn't know we get vacations!

(laughter)

E: Nobody told me, six and a half years.

S: You guys do. Occasionally.

E: Lucky guy.

J: I had a pretty cool day today because Bob was on vacation. I got about thirty pictures and one video sent to me from Bob from Disney World.

S: Is that where he is?

R: Aaah, he's in Disney World. I want to go to Disney World.

E: Isn't that the theme?

R: This show sucks.

(laughter)

This Day in Skepticism (0:57)

January 12, 1990 is the death date of Dr. Laurence Peter, creator of the Peter Principle, the observation that employees will rise to the level of their incompetence. http://en.wikipedia.org/wiki/Peter_Principle

S: Well Rebecca, you have an interesting This Day in Skepticism this week.

R: You'd better hope I do, Steve.

S: Yes.

R: And I do! Yes, this podcast is going out on January 14th which is, first of all I want to mention, it's an important Christian holiday, the feast of the ass, so I just want to put that out there.

J: What?

R: It's uh, the feast of the ass...

(laughter)

R: It was a medieval feast that was observed on January 14th, mostly in France apparently and it celebrated a lot of the biblical stories that involve donkeys. So feast of the ass, look it up, real thing.

S: Real but obscure.

R: However, the event that I would actually like to talk about happened on January 12th, and I really want to talk about the person who unfortunately died. On January 12th, 1990, the world lost a great scientific mind: that of Dr. Lawrence Johnson Peter, who gave his name to what we call the Peter Principle. For those of you who are unaware, the Peter Principle states that members of a hierarchical organisation will climb the hierarchy until they reach their level of maximum incompetence. And it is the defining principle of most office environments where middle managers flourish despite being completely unable to do their jobs. Now the idea behind the Peter Principle is that the best members of the hierarchy in question are rewarded with promotions. And these rewards take into account the member's competence at his or her current level, but not the level to which he or she is being promoted. So in 2009, Italian scientists published a computational study that showed that when a hierarchy utilises the promotional system that I've just described, the Peter Principle is inevitable. They also showed that this results in a significant reduction in the efficiency of the organisation. Of course, because you have completely incompetent people at every level. Using game theory, the researchers found that the ideal promotional technique was actually either to promote people at random or to randomly promote the very best and the very worst members in terms of competence.

J: Oh my god.

R: Isn't that crazy? So seeing as that system might actually cause complete chaos in an organisation of real humans, you could also just train people for a position prior to promoting them, which would ensure that they have the required level of competency at the start. So that's the Peter Principle. Unfortunately, Lawrence Peter, Dr. Lawrence Peter died January 12th, 1990.

J: Now, this guy's middle name was Johnson and his last name was Peter.

R: Yes, yes.

J: Just checking.
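
For anyone who wants to see how that result falls out, here is a minimal Python sketch of the kind of simulation Rebecca describes. It is not the Italian team's actual model; it simply assumes the "Peter hypothesis" that competence at a new level is drawn independently of competence at the old one, and compares promoting the most competent person against promoting someone at random. All the parameters (levels, head counts, rounds) are arbitrary.

```python
# Editor's sketch of a Peter Principle simulation, not the 2009 paper's model.
# Assumes the "Peter hypothesis": competence at a new level is independent
# of competence at the old one.
import random

LEVELS = 5        # hierarchy levels, index 0 = bottom
PER_LEVEL = 40    # agents per level
ROUNDS = 200      # promotion rounds

def new_org():
    # competence scores drawn uniformly from 0-10 at every level
    return [[random.uniform(0, 10) for _ in range(PER_LEVEL)] for _ in range(LEVELS)]

def efficiency(org):
    # weight the upper levels more heavily
    return sum((lvl + 1) * sum(members) / PER_LEVEL for lvl, members in enumerate(org))

def run(strategy):
    org = new_org()
    for _ in range(ROUNDS):
        for lvl in range(LEVELS - 1):
            # choose who gets promoted out of this level
            if strategy == "best":
                idx = max(range(PER_LEVEL), key=lambda i: org[lvl][i])
            else:
                idx = random.randrange(PER_LEVEL)
            # the promoted agent takes a slot above with re-rolled competence
            org[lvl + 1][random.randrange(PER_LEVEL)] = random.uniform(0, 10)
            # their old slot is filled by a new hire with random competence
            org[lvl][idx] = random.uniform(0, 10)
    return efficiency(org)

for strategy in ("best", "random"):
    average = sum(run(strategy) for _ in range(20)) / 20
    print(strategy, round(average, 1))
```

Under these assumptions the "promote the best" rule scores worse, because the lower levels keep losing their most competent members while the people arriving upstairs are no better than average.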

S: That reminds me of the Dunning Kruger effect, which is a distinct effect described by the psychologists Dunning and Kruger. No relation to uh...

R: Brian.

S: Brian Dunning.

R: Or Freddy.

S: Yeah, to Brian. Yeah, or Freddy Krueger.

(laughter)

E: Freddy Dunning.

S: It essentially says that people's incompetence makes them incompetent to detect their own incompetence.

R: Indeed. Yeah, I believe that these two principles work in tandem to create the worst possible office environment.

E: Oh it's like when people have body odour and they don't smell it on themselves but everybody else smells it.

S: (laughing) kind of.

R: Yeah, yeah.

E: That happens.

R: Which also combines with these two principles to make the worst office environment ever.

S: It is also in effect at Dragon Con.

E: (laughs)

J: Most people that have something wrong with them, we're kind of saying now they can't detect it.

R: Well, the Dunning Kruger effect is specifically about incompetence. As is the Peter Principle, is about incompetence.

J: Because I'd put bad breath on that list as well.

R: And with the Peter Principle though, it's not necessary that the person who is incompetent not know that they're incompetent. I'm sure that there are plenty of people out there who know that they're completely incompetent at their jobs but they soldier on regardless. So it's only when combined with the Dunning Kruger effect that you get the rare, unfortunately not very rare, manager who is completely incompetent and yet is absolutely sure that he or she knows what he or she is doing.

J: It sounds like the TV show The Office.

S: Yeah.

R: And every office I've ever worked in. I'll also mention that Dr. Peter is known for the quip, the noblest of all dogs is the hot dog. It feeds the hand that bites it. That is all.

E: Wow.

R: This day in science.

J: Why thank you, Rebecca.

S: I would point out though that I think the Dunning Kruger effect, this is my speculation, is there really a fundamental difference at the low end of the competency range, or essentially can we generalise from that to say that we're all incapable of truly assessing ourselves? Because we can only assess our own level of knowledge and competence from the perspective of our own knowledge and competence. It's just that it's really obvious in people who are less knowledgeable and competent than you are. But somebody who is more competent and knowledgeable than you would think that the Dunning Kruger effect applies to you. Do you think that's true, or is it really something that is unique to the low end of the competency scale?

R: I don't know, consider that I don't have any idea how to play the violin, and I have knowledge that I can't play it even though I'm completely incompetent at it.

J: (laughs) I'm so incompetent that I can't even follow this conversation.

S: I'm going to answer my own question.

R: You always do.

S: It's an interesting question. I think if I had to hazard a guess, I would say that, although I don't think this is necessarily true, I do think there's a tendency for greater humility in a way as you learn more, because as you begin to gain knowledge in any area, the first thing you realize is how little you know. And then you progress to the point where you start to feel confident in your knowledge. So I think the Dunning Kruger effect applies to people who haven't even got to the phase yet where they know how much they don't know.

News Items

Tricorder X-Prize (7:04)

http://nextbigfuture.com/2012/01/ten-million-dollar-qualcomm-medical.html

S: Well, let's move on from that to the Tricorder X-Prize. What's better than this, Jay? 10 million dollars for inventing a Star Trek-esque tricorder.

J: Yeah, that's what caught my eye originally and when Steve and I were talking about this news item, he said, well, he goes you know you and I know about the X-Prize thing but why don't you write a quick summary about what is the X-Prize and you know what Steve, I found out that I didn't know quite as much about it as I thought. So I'll get to the tricorder thing towards the end, but let me give you a little info on the X-Prize and some of the past and current X-Prizes and talk to you about why X-Prize exists, which was the actual part of this that I didn't really know all the details on. So the X-Prize Foundation conducts incentivised competitions and they typically have a substantial payout in the millions of dollars. And I've read of X-Prizes going from 10 to 30 million dollar payouts. There are four prize groups: education and global development, energy and environment, life sciences, and the last one is exploration which both covers ocean and deep space. Deep ocean and deep space, I guess. The point of the prize is to inspire research and development in technology that will have a far-reaching potential for humanity. And that's a big thing to say but the real idea here is that they develop these prize concepts. So you know, they don't just throw around a few ideas and pump them out. They really put a lot of time and energy into coming up with basically where are the holes in current technology? What are some things that humanity perceives are too costly or too impossible to actually achieve, and what of these goals, which ones would benefit humanity the most? So then what they do is they put a prize together, usually with a number of sponsors, but a lot of times you will see a company's name attached to the actual prize because typically they're involved with it, mostly financially I'm guessing, but maybe they actually have a hand in helping them shape it as well. But the prizes then end up being so minute compared to the amount of time and energy and money that companies would put into doing the research to become a part of the people who are competing in that prize, so like let's say that there was a 10 million dollar prize, a lot of companies were spending 50 million, 100 million dollars to develop the technology. And the prize is really more of an inspiration at that point, and not really you know, if we win that prize money it's going to pay us back all the money we put into it, that's typically not the case. But it's a good thing because when companies latch onto these prizes, they're also acknowledging that they agree that first of all, they're achievable, and second of all that there is a payout for them eventually, because very good well-thought-out technology that can do certain things typically does have a payout.

S: So I mean the 10 million dollars is nice, it probably doesn't pay back the investment, but it helps you, if you do win the prize, it helps recoup some of that cost, but the real payout is in the technology itself.

J: Right. These prizes are not rewarding good research, they're actually rewarding success. They're rewarding a finished product, they're rewarding the achievement of the goal, OK, so like...

S: And they don't care how you get there.

J: Absol... well they have parameters, there's always parameters. And those parameters vary greatly depending on what the actual prize is, like there's usually goals that have to be met and there are time-frames that have to be met. But let me give you some examples, you know you might know some of these, you might not. The Progressive Insurance Automobile X-Prize. That was a global competition, it awarded 10 million to three teams that built cars that achieved at least 100 miles per gallon in real-world driving. And that real-world driving means four wheels, four passengers, the vehicle had to weigh X amount and blah blah blah, right?

S: Right.

J: So the cars had to be safe, affordable, with the ultimate goal of appealing to consumers.

S: Mass-producible.

J: Yep. Three people got awarded this, and you go on the website, you could take a look at the winning cars and read the stories behind it, but that is of course so obvious in its utility and it's helping humanity in so many ways, just that one idea. I was very much following that. Uh, the next one, which was one of my favourites, the Ansari X-Prize, this one also awarded 10 million dollars. The two guys that won that prize, Burt Rutan and financier Paul Allen led the first private team to build and launch a spacecraft capable of carrying three people to 100 km above the Earth's surface twice within two weeks. And I was watching that, I was actually watching it as it unfolded and it was really exciting and it was really cool. And that technology absolutely has moved on to bigger and better things. So a couple of more quick ones. These are ones that are in progress right now. The Archon Genomics X-Prize which is, it's an advanced genome sequencer that can do 100 people and they want to pick 100 people that are above 100 years old. And the trick with this one is, is that it's not, it's doing the reading very accurately. Much more accurately than we're doing today. So basically they're one-upping the whole reading a person's genome and getting useful information out of it.

S: So hang on, they want to sequence the genes of 100 people over 100?

J: Right.

S: Thinking that they'll find some longevity genes in there.

J: Yeah, they want to see what the difference is, and they want to actually find specifics in there, that was part of the contest. And then the other one which I'm really looking forward to seeing some footage on and seeing them actually attempt it is the Google Lunar X-Prize. This one pays out 30 million and this is paid out to privately funded teams that safely land a robot on the surface of the moon, there's more details in there of course but that's a really cool one and I recommend that you read some of these to see what's happening. But now we come to this new one that was announced recently. And this is the Qualcomm Tricorder X-Prize. This one is paying out 10 million, it's a global competition, and this is in hopes to stimulate innovation with precision diagnostic technologies. What Steve said is correct, they want to make a tricorder-like device, so obviously it's not going to be like the Star Trek tricorder that can give any information. The machine has to be 5 pounds or less, I mean I keep seeing people talk about it like it would be like you're holding an iPhone type of device but it could be bigger than that and heavier than that. And it also doesn't necessarily need to be limited to just one single device, it could be like that's the home-base device and then there could be things that attach to a human here or there, it could have wires or whatnot, I mean they're really leaving that open-ended. They want the device to basically be able to pick up on a lot of body information and be able to diagnose 15 diseases. So they were saying of course it's going to pick up metrics like blood pressure and respiratory rate and body temperature, but of course it would need to collect a lot more than that. It would need to probably be taking blood samples and things of that nature. And also discovering things that we as of today are not discovering. It's not just collecting processes that we're able to recognise today, you know you have a protein spike and that means this or that or whatever, they want to come up with new stuff, but it's a pretty hefty goal. But the payout here is, and this is actually the part I found most interesting, was that they wanted this machine to be more of a daily recorder of what your thresholds are, and gathering of information in such a way where they can watch you over time and then they can extrapolate data from that. And, the one big thing that they wrote on the website that I found really surprising was they wanted to take health care, not away from doctors, but out of their total control, meaning that you don't have to go to a doctor to get a readout on what your current state of health is. And obviously with the explosion in population and everything, this would be a much less expensive way, if people were able to have these devices or at least have access to them when they needed them, it would be a cheaper way for people to maintain and monitor their current status, and then if they needed to go to a doctor then they could bring that information with them to the doctor, or of course by the time this comes out there would be software in place where it would be automatically uploaded to your doctor and all that. But it's very cool, and I'd like to add to this, which I find very interesting, when they fashion these contests, they truly do believe that the technology is possible. 
So they're really not saying, you know maybe people will do this, they have a very strong inclination that people are going to be able to do this, and I'm really excited about this one. Anything that has to do with healthcare and lowering costs of health care I think is a great thing for the general population.

S: Yeah absolutely. I thought though, that the description of how this technology would be used was a little naive. Saying that they want to put the diagnostic tool in the hands of the consumer. And it was sort of taking it away from the health care professional. So from my perspective, obviously I have a certain perspective on this as I'm a physician. But you know, information, especially health care information, we're used to thinking of the general principle that more information is always better. But it's not always better because information, especially if it's all the time, low threshold, just gathering tons of monitoring-type of information actually could be extremely counter-productive. If you look for things and you monitor parameters, you're going to see a lot of normal fluctuation that will freak people out and cause them to seek health care that they otherwise would not have sought out and that they don't need, and that's going to lead to a lot of downstream negative consequences. This has come up even when physicians are doing the monitoring. Here's an example. Research looking at foetal monitoring, you would think oh that's great, you're monitoring the status of the foetus. But there was definitely a time when the research was showing that more foetal monitoring was actually a net negative because it was causing physicians to do things and they were overreacting to just fluctuations that weren't clinically significant. Imagine now that same kind of monitoring in the hands of a lay-person. Especially, who are the people who are going to do this? People who are generally neurotic, right? They're going to be obsessing over tiny fluctuations in their own parameters, whether it's blood pressure or whatever, and that could actually drive a lot of anxiety, it could drive a lot of unnecessary consumption of further testing and of health care and I wouldn't assume that that's going to be a net positive. Having said all that, I think this kind of technology can be very powerful and very useful. I just don't think that their vision, at least as they outlined here, of how it's going to be used or how it would be used when we have this kind of thing, matches the way it really is going to be used. I think it's one of those things that, once it's available and the marketplace, both professional and non-professional, starts to tinker with it, then we'll see what it's really useful for. It's like a lot of new technologies, it's hard to imagine how they're going to be used and what people naively think does not turn out to be the case but then people find really awesome uses for the new technology and that's what takes off.
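
A quick back-of-the-envelope calculation makes Steve's worry concrete. The numbers below are invented for illustration (they are not from the show or from the X-Prize rules), but they show the base-rate problem: if most users checking themselves on any given day are healthy, even a fairly specific device generates far more false alarms than true ones.

```python
# Hypothetical numbers, for illustration only.
prevalence = 0.001      # 1 in 1,000 users actually has the condition when they test
sensitivity = 0.99      # the device flags 99% of true cases
specificity = 0.95      # and correctly clears 95% of healthy users

true_alarms = prevalence * sensitivity
false_alarms = (1 - prevalence) * (1 - specificity)

ppv = true_alarms / (true_alarms + false_alarms)
print(f"Chance that a given alarm is real: {ppv:.1%}")                   # about 2%
print(f"False alarms per 100,000 checks: {false_alarms * 100_000:.0f}")  # about 5,000
```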

J: Well Steve, the thing is, I understand what you're saying but I would imagine that, as these technologies progress and they become more common, won't the software and the actual device know that there's fluctuations that happen all the time, and it won't be so alarming? I would imagine that people could have a readout of what's happening on their device and read it but at the same time, if there's a problem I bet you the software would say you should go see your physician now.

S: Sure, but again that's not contradicting what I'm saying. Now we're going to have a bunch of people calling their doctor saying oh this device said I need to call you, when in fact it's not necessary. Right? So you see how that would drive up unnecessary access or utilisation of health care services?

J: Sure, I could see that, and that's probably one of the challenges is to make it all work, to make it so that it is less of a drain and...

S: Right, so it's not, so in other words, it's not a no-brainer. How this technology would be ultimately applied is actually fairly complicated. If we had it today, we would have to think about how really to incorporate it best into health care so that it doesn't become a way of driving unnecessary tests and treatments.

E: Huh. Are there any privacy issues also involved with this technology that might prove to be problematic, sort of have unintended consequences?

S: I guess it depends on range and things like that, and maybe you might need to have some kind of receiver that's like actually on your person, you know what I mean, like attached to your skin or something. Right now, I mean the other thing I was thinking about is like, is this really doable? You can't, while I love the concept of the X-Prize, the X-Prize has a sweet spot for technology that is possible and right on the cusp and bringing that technology to fruition. I don't think however, it would enable us to level-jump, you know to create technology that we're just not ready for.

J: When you say not ready for, what do you mean?

S: In other words, you might need some fundamental breakthroughs before we could even approach this kind of technology.

J: Well, what about the smart phone, Steve? I mean maybe it's more of a collection of technology that's achievable.

S: Yeah, so is it achievable with an application or extension or baby steps taken from existing technology?

J: Steve, didn't you ever watch the Six Million Dollar Man? We will rebuild him, you know.

Sheldrake on Presentiment (21:34)

http://www.dailymail.co.uk/news/article-2083279/Psychic-powers-How-thought-premonitions-telepathy-common-think.html

S: Well Evan, tell us about Rupert Sheldrake's latest shenanigans.

E: Yeah, latest shenanigans, more like a new gig he has.

S: Yeah.

E: If you can believe it. Well, but not exactly. So, he has a new book out that he is self-promoting. The Science Delusion: Freeing the Spirit of Inquiry by Rupert Sheldrake. Which, you know, his new book. And what he's doing is he's teamed up with the folks over at, what is it the Mail Online, the Daily Mail, and he is producing a series of articles, you know glorified blog posts, otherwise, in promotion of this book; some of the topics that he talks about in his book he's turned into these little articles so that the Mail gets articles out of it and Sheldrake gets to promote his book out of it, so for them it's a win-win. For the rest of us though, we have to suffer through it. In his début, he titled it Why We All Have Psychic Powers, and How Thought, Premonitions and Telepathy are More Common Than We Think. For those of us who are familiar with Rupert Sheldrake, he's well known for his research into parapsychology and for having proposed a scientifically, we'll call it unorthodox account of morphogenesis. His books and papers stem from the theory of morphic resonance.

S: Mmhmm.

E: If you were to read his own definition of morphic resonance, you'd have to read it maybe about four or five times to sort of maybe get a clue about what he's trying to say, but I think Susan Blackmore puts it best when she says that the idea behind morphic resonance is that memory is inherent in nature, and so when a certain form or structure has occurred many times, it's more likely to occur again.

J: Oh, that makes perfect sense.

E: She says, if this were true, if this were to be able to play out, if we could actually test this and if it were actually happening, then things like newly synthesized chemicals would become easier to make, puzzles would become easier to solve just from the fact that people have been solving the puzzle more and more from all over the world, that would kind of build on itself, and video games would also become easier to play as more people played them. And it would also offer an explanation for such things as psychokinesis and telepathy and other things that Sheldrake champions all due to his morphic resonance theory which is solely his, I must say.

S: Mmhmm. It's essentially made up mystical BS.

E: It is.

S: There's no, yeah there's no actual scientific basis to it.

E: So in this article, in particular, he describes some stories from his book which are anecdotes from the past in which these people have pre-sentiments, which are feelings that something was going to happen without knowing exactly what it would be. You know, a little bit different from premonitions in which you have a little bit of insight into what lies ahead, but these are pre-sentiments in which something just doesn't feel right, so you should kind of, sort of stop what you're doing, let whatever bad events were about to happen to you in the very near future occur without you, and therefore you have successfully avoided the bad consequences of whatever those events were.

R: It's like spidey-sense, but useless.

(laughter)

E: Yes...

S: And non-existent.

E: And non-existent, yeah. You know, he's very good at describing stories and anecdotes about, and I'm sure he has hundreds if not thousands, he only offers a few in this article, about how people had this sense. Oh you know, I knew something was wrong, I decided to stop my car and get out and it turns out that down the road a tree fell, and what if that tree had hit me? That I had no idea.

S: Right, they're anecdotes. He presents them breathlessly and uncritically, without any discussion about why scientists are not compelled by these stories. And then he wonders why science doesn't take him seriously, and he feels like he has to write a whole book about what the problem is with science.

E: That's right, he bashes science in the wake of his lack of being accepted for his unique notions about these sorts of things, right. And has it ever occurred to him that for every person who has had some sort of presentiment, that turned out to be favourable, I mean how many presentiments did they have that turned out to be absolutely nothing? Right? Had no consequences, good, bad or whatever?

S: Yeah of course, unless you have some kind of statistical analysis or controlled situation, there could be a thousand times where people thought they had a premonition and then nothing happened so they forgot about it. People remember the remarkable coincidences and then that gets spread...

J: The hits.

S: Yeah, they remember the hits, forget the misses. That's basic, basic critical thinking type stuff that Sheldrake seems to be unaware of.

R: But you know, that one anecdote was pretty convincing, the one about the kid who was about to get on a flight with his classmates and then he had a premonition of the plane crashing so he didn't get on and he got some of his friends off and then the plane crashed. Wait, wait no that was a movie. That was Final Destination one, sorry.

S: Yeah, yeah.

E: Right? I mean...

J: (laughs) And I always think the plane is going to crash, I mean every time I'm on a plane I'm like this is it, this is the statistic right here. Evan, you get the feeling with people like this that really want to believe, you know.

S: Oh yeah.

J: They're not following...

E: That's the main point I think Jay, is that you're right, people do want to believe in this, you know it's also good that you brought up the example you did because you know, Sheldrake, those are exactly the kind of examples that he talks about also in his research. For example, he says that over time, so many people have told him that for no apparent reason they were just going along in their lives and then the phone rang, right? And they knew who was calling before they even picked it up. And he did an experiment basically to test this. You know, it involved asking subjects for the names and phone numbers of four friends or family members before placing them alone in a room with a land line telephone and no caller ID. He then selected one of the four callers at random, asking them to phone the subject, who had to say who was on the line before answering, and you would think, he says, by guessing at random, it would be right one in four times, 25%. But his research shows that it happens 45% of the time.

S: Mmhmm.

E: The problem is, is that when you look deeper into his methodology...

S: Mmhmm.

E: ...there's problems. There's problems with making sure that these things are properly set up, properly blinded...

R: No shit.

E: ...properly randomised. Right? Devil in the detail, Steve? We've talked about that before.
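
As a point of reference for those numbers: with four possible callers, chance guessing gives 25%, and whether a 45% hit rate is impressive depends on how many trials were run and how clean the protocol was. The sketch below uses made-up trial counts (not Sheldrake's actual data) to show that with enough trials a 45% hit rate would be very unlikely from guessing alone, which is exactly why the blinding and randomisation problems Evan just mentioned are where the argument really lives.

```python
# Exact binomial tail: probability of at least k hits in n trials when the
# chance of a correct guess is p = 0.25. Trial counts here are hypothetical.
from math import comb

def p_at_least(k, n, p=0.25):
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

for n in (20, 100, 400):
    k = round(0.45 * n)   # number of hits corresponding to a 45% hit rate
    print(f"n = {n:3d}: P(45% or more hits by pure guessing) = {p_at_least(k, n):.2e}")
```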

S: Evan, let me go on a little tangent here though. Sheldrake also presents a lot of his own research as if it's accepted. It's like oh really? We proved that ESP exists 100 years ago and then nobody noticed? You know so, quoting meta-analyses of studies from like the 1870s to the 1930s, you know back before they knew how to even do good experimental protocol. Here's the thing. You could think about these studies in a couple of ways. One is that, if you read the protocol, you can find blatant flaws or sometimes subtle flaws that could account for what's often a small statistical effect. Obviously the crappier the methodology, the bigger the effect. He describes another paper where chance performance would be 33.3% and people performed at 37%. Whoopdiedoo. You know, 37% vs 33%. And you think, yeah but it's statistically significant, but it's actually not compelling. And here's why. Even when a study looks iron-clad on paper, it's possible that the researchers made a lot of choices in how the study was conducted, maybe even throughout the data collection, that could have influenced the outcome. I wrote a blog post last week called Publishing False Positives, which was a discussion of a recently published research paper where the researchers did a very interesting thing. They collected data in a completely iron-clad way. In other words, at the end, when you describe the methodology, you can't find any flaws. And they looked at something that they knew was impossible. They looked at whether or not listening to music about old age, like the song When I'm 64, actually made people younger.

J: What!?

S: Made them physically younger. Not feel younger. Yeah, I mean the whole point, Jay was to collect data on a question that is blatantly impossible.

J: OK.

S: So that was the point. And then they showed that they could make the data statistically significantly positive just by exploiting what they called the researcher degrees of freedom. Which means the choices that researchers make about which variables to include, when to stop collecting data, which comparisons to make and which statistical analyses to use. You can justify each individual decision, you can make it all sound reasonable, but if you didn't make all of those choices before you started collecting data, you can bias each of those choices in a way that pushes the data in the positive direction. So essentially what they're saying is that, even when the methodology looks good on paper, researchers can make those kinds of subtle, non-transparent decisions, such that you could manufacture a false positive out of almost any data. And that's what people like Sheldrake, in my opinion, are doing. Or at the very least, that's why, when they publish a result that's like, oh 37% vs 33%, that's within the noise that you can manufacture by exploiting these researcher degrees of freedom. And it may not be something that you can see on the published paper. You would have to do a precise replication, you would have to make all the same choices, collect a fresh set of data and see if it still comes out positive. And that's why exact replications are so important. But the researchers also point out that prestigious journals don't like to publish exact replications because they're boring. If you remember, Richard Wiseman just had this problem trying to get an exact replication of Daryl Bem's precognition research published. And even the journal that published Bem's original research was not interested in an exact replication because they said we don't do that, because it doesn't get them press releases and headlines, it's boring. But that's a problem because that's one of the best ways to...

R: Reproducibility is like a cornerstone of good science, and it's sad that actual academic papers are using the same excuses that you hear from TV producers for why they would rather have a show about magical prayer healing than something about a scientific investigation of prayer healing. Just, you know, they find that the sad reality is boring to them. Which, OK for TV producers I'm willing to at least shake my head and say well that's life, but for an academic journal? That's terrible.

S: Well, they just use the term impact factor, that's not going to help our impact factor.

R: Right, ratings, let's just call it ratings.

S: Yeah, it's ratings, yeah.

E: Exactly.
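
The "researcher degrees of freedom" effect Steve describes (the simulated demonstration comes from the 2011 "false-positive psychology" paper his blog post discusses) is easy to reproduce with pure noise. The sketch below is an editor's illustration, not that paper's code: both groups are drawn from the same distribution, so there is no real effect, but the "researcher" gets to test two correlated outcome measures plus their average, and to add more subjects and look again if the first peek isn't significant. The false-positive rate comes out well above the nominal 5%.

```python
# Editor's sketch of how "researcher degrees of freedom" inflate false positives.
# Both groups are drawn from the SAME distribution, so any "effect" is pure noise.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
COV = [[1, 0.5], [0.5, 1]]   # two correlated outcome measures per subject

def draw(n):
    # one null sample per group: no true difference exists
    return (rng.multivariate_normal([0, 0], COV, n),
            rng.multivariate_normal([0, 0], COV, n))

def any_dv_significant(a, b, alpha=0.05):
    # flexible choice of dependent variable: DV1, DV2, or their average
    dvs = ((a[:, 0], b[:, 0]), (a[:, 1], b[:, 1]), (a.mean(1), b.mean(1)))
    return any(ttest_ind(x, y).pvalue < alpha for x, y in dvs)

def one_flexible_study(n_start=20, n_extra=10):
    a, b = draw(n_start)
    if any_dv_significant(a, b):
        return True                      # stop early and report the "effect"
    extra_a, extra_b = draw(n_extra)     # otherwise add more subjects...
    a, b = np.vstack([a, extra_a]), np.vstack([b, extra_b])
    return any_dv_significant(a, b)      # ...and look again

trials = 2000
rate = sum(one_flexible_study() for _ in range(trials)) / trials
print(f"False-positive rate with flexible analysis: {rate:.1%} (nominal 5%)")
```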

Physics Cranks (33:32)

http://theness.com/neurologicablog/index.php/cranks-and-physics/

S: Exactly. This actually dovetails with the next news item, which I also wrote about, and that was an article published in Slate recently. It was actually republished from New Scientist, by Margaret Wertheim, and it discusses the phenomenon of physics cranks.

Witch Hunter comes to US ( )

http://blog.newhumanist.org.uk/2012/01/notorious-nigerian-witch-hunter-to.html

Who's That Noisy? ( )

Answer to last week: Antivaxers

Name That Logical Fallacy ( )

Professor Iain Graham from Southern Cross University’s School of Health yesterday defended his university, saying the use of alternative therapies, such as homeopathy, can be traced as far back as ancient Greece. “Eighty per cent of Australians seek alternative therapies,” Prof Graham said. “Obviously orthodox medicine is not working for everyone,” he said. http://theness.com/neurologicablog/index.php/fighting-cam-in-australia/

Science or Fiction ( )

Item number one. The Hubble telescope has identified the furthest galaxy protocluster ever discovered, about 13 billion light years away. Item number two. Researchers have designed a nanoparticle material that can automatically repair glass materials, such as the surface of an electronic device. And item number three. New research finds that surgeons generally continue to improve in skill and performance into their 60s.

Skeptical Quote of the Week ( )

"Where there is shouting there is no true knowledge." - Leonardo da Vinci

Announcements ( )

