Science


In my recent adventures in the social media universe, I’ve started to see, more and more prevalently, a certain riposte to arguments that champion science. It comes in the form of “…but science can’t be a hundred percent sure of it, right?” You’ll have seen the same thing, I’m sure: you proffer that global warming is a serious problem, with over 95% of scientists working in climate science attesting to its seriousness, and someone chimes in with the argument that because there’s that 5% for whom the jury is out, ((And that’s an important thing to remember here: the 95% figure that’s often quoted is the scientists who are certain, but that does not imply in any way that the other 5% are just as certain global warming is not happening or not of concern. Some of that 5% just don’t think the data is in. That’s a very different prospect to having an unequivocal position against.)) there is some question of the validity of the great weight of the argument in favour.

It’s difficult to get most non-scientific people to understand the philosophical cornerstones on which science is built, but the one that causes the most trouble is, perhaps, the scientific idea of falsifiability. Simply put, it works like this: a scientific claim must be posed in such a way that it can, in principle, be tested and shown to be wrong.

Let me give you a very basic example. Let’s suppose that one day I leave an apple out on the bench in my back yard. The next day, I notice that the apple has been knocked to the ground and there are bites out of it. I advance to you an hypothesis: fairies at the bottom of the garden have a love of apples and they are the culprits. If you chose to disagree with this interpretation of the situation, and were to approach this scientifically, you might question my hypothesis and devise ways to show me that my suggestion ((For that’s really what an hypothesis is; a fancy kind of ‘suggestion’)) is not the best explanation for the facts. You might, for example, decide to leave out a new decoy apple, stay up all night and, from a hidden spot, observe what happens to it. You might rig up a camera to photograph the apple if it is moved. You might put out a plastic apple and see whether it gets eaten or moved. There are numerous things you might do to chip away at my hypothesis.

Together – pending the evidence you gathered – we would establish the likelihood of my hypothesis being correct, and in the event that it started to seem unlikely, gather additional evidence that might set us on our way to a new hypothesis involving another explanation. Possums, maybe.

You might think that this is a simplistic, and perhaps even patronising, illustration. But consider this: you can never, ever, prove to me definitively that fairies weren’t responsible for that first apple incident, or any subsequent incidents that we weren’t actively observing. This is because we have no explicit data for those times.

The philosophy of scientific process unequivocally requires that it be like this. It seems like a bizarre Catch 22, but the very idea is a sort of axiom built into the deepest foundations of science, and an extremely valuable one, because it allows everything to be re-examined by the scientific process should additional persuasive data appear. It’s a kind of a ‘don’t get cocky, kid’ reminder. It’s a way for the scientific process to be flexible enough to cope with the possibility of new information. If we didn’t have it, science would deteriorate rapidly into dogma.

The problem is that people who don’t understand science very well tend to think rather too literally about this ‘loophole’ of falsifiability. They take it to mean that, if we did a thousand nights of experimental process in my backyard, and 999 of those nights we got photos of the possums chewing on the apple, then the one night where the camera malfunctioned it’s possible that the apple actually could have been eaten by fairies. Worse than that, they mistakenly go on to extrapolate that the Fairies Hypothesis therefore has equal weight with the Possum Hypothesis.

Even worse still, this commitment of science not to make assessments on the data it does not have is frequently wheeled out by an increasing number of people as if it’s a profound failing – a demonstration that ‘science is not perfect’.

But here, I will argue to the contrary. At least, I will say that science may not be perfect, but it does its very best to strive to understand where the flaws in its process might arise, and take them into account.

This should not be taken to mean, however, that nothing in science has any certainty, and everything is up for grabs. Why? Because science is all about probabilities. If you are not comfortable with talking in the language of probabilities, then you should really butt right out of any scientific discussion. ((If you can’t think in probabilities, you almost certainly have a heck of a time living your life too, because – hear me – nothing is certain.))

Of course, in the fairies vs possums scenario, we’ve already taken the probabilities into account: our brains can’t help but favour the hypothesis that we think is the most likely, given the observations that have accumulated over our lives: we know that possums like fruit; we know that they are active at night; we have seen possums. On the other hand, we have little evidence for the predilection of fairies for apples, or even for the existence of such beings. Taking into account all the things we know, it’s much more likely to be possums eating the fruit than it is to be fairies. But I will reiterate – because it’s important – that there is no way anyone can ever scientifically prove to you that, on the one time out of a thousand when you weren’t looking, it wasn’t the fairies who took a chomp on the apple.
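The reasoning above can be made concrete with a toy Bayesian calculation. Everything here is illustrative: the prior and likelihood numbers are invented, and only the two hypotheses under discussion are considered.

```python
# Toy Bayesian comparison of the Possum vs Fairy hypotheses.
# All probabilities below are invented for illustration.
priors = {"possum": 0.999999, "fairy": 0.000001}  # fairies start out wildly implausible

# How well does each hypothesis predict a bitten, knocked-down apple?
# Suppose, very generously, that both "explain" the observation equally well.
likelihoods = {"possum": 0.8, "fairy": 0.8}

# Bayes' theorem: posterior = prior * likelihood / total evidence
evidence = sum(priors[h] * likelihoods[h] for h in priors)
posteriors = {h: priors[h] * likelihoods[h] / evidence for h in priors}

print(posteriors["possum"])  # still overwhelmingly close to 1
print(posteriors["fairy"])   # still vanishingly small
```

Even granting the fairies equal explanatory power, the posterior barely moves: no number of bitten apples can rescue a hypothesis that starts out that implausible.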

But you still know it wasn’t, right?

This is the point where it gets frustrating for real scientists doing real science. Fairies vs possums is a reasonably trivial scientific case, and most ((But not all, trust me…)) people have the educational tools to make a proper and rational assessment of the situation. However, in the case of a non-scientific person arguing that because 5% of scientists don’t agree with the rest on global warming there’s a cause for doubt on the whole thing, this looks to scientists – the people in possession of the greater number of facts and understanding of those facts – like someone arguing that the fairies ate the apple.

It’s not just the Climate Change discussion that suffers from this problem. A large part of the reason we now get into these kinds of debates is that our scientific understanding of the world has, in this age, become so intricate and detailed that it’s very difficult for non-specialists to properly grasp the highly complex nature of certain subjects. Climate science is one of those areas. Evolution is another, and vaccination one more. Because most of us don’t have a lifetime’s worth of education in any of these highly complex fields, and our brains don’t have the tools we need to assess the required data in any meaningful way, we tend to fall back on thinking patterns that are more attuned to the solving of simple, easily defined problems. We then superimpose those simple-to-understand patterns on subjects we don’t understand. Everyone does this, whether it’s in an effort to understand economics, or politics, or even our phone’s data plan. We just can’t help it.

What’s truly sad and frustrating is that when scientists tell us things that are hard to understand, don’t fit with what we know, and are not things we want to hear, many people (including, it has to be said, far too many of the politicians who make the decisions that rule our lives) start to try to find reasons why the scientists MUST be wrong. I’m sure you’ve heard all the variations: scientists are in it for their own agendas (the Frankenstein scenario); they’re being paid to give false results by Big Pharma/Agriculture/Data/Tech/Whatever; or, as we’ve discussed, because they don’t know everything.

Science doesn’t know everything. The thing is, contrary to what a lot of people seem to believe, it knows that it doesn’t know everything, and this understanding of its limits is built into its very structure. As such, it is not a weakness, but a very great strength.

___________________________________________________________________________

PS: This is the very first time on TCA that I’ve deployed a clickbait headline… and I’m not sorry.

I know I said we were going to be looking at CieAura’s business practices today, but I thought instead that I might take a little detour, and think a bit about the central concepts behind what the product is offering. Specifically, we’re going to look at holograms, what they are, how they work and their relevance to any kind of biological or medical efficacy.

The first thing I’m going to assert is that CieAura doesn’t use true holograms. I’ve never seen a CieAura ‘chip’ in reality, so I’m going off web images, but to me these look like ‘stacked’ or ‘2D/3D’ holos, which are found extensively in toys, credit and ID cards, and product design. These are just 2D layers which give the illusion of depth. They are stupidly easy to manufacture, and incredibly cheap, as we have seen. You can easily have them made to your own design.

It is vaguely possible that the CieAura holograms are what is known as Dot Matrix holos, and they are pretty much what they sound like: holograms made by specialized machines which stamp images into foil masters using a dot pattern. The process is somewhat similar to the way old-fashioned desktop printers worked. These kinds of holos are generally used when high levels of security are required, as they can encode what are called ‘shape scattered’ patterns. Electron-beam lithography makes even higher quality holograms still, and due to their very high resolution (up to a quite impressive 254,000 dots per inch) can encode all kinds of hard-to-copy detail. These last two are rather more expensive than stacked holos, but once you’ve made a master, it’s still relatively cheap to manufacture millions of clones.
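To get a feel for the scale involved, the resolution quoted above converts to a dot pitch of about 100 nanometres – far below what any ordinary printing process can reproduce, which is exactly the point of a security hologram. A quick back-of-the-envelope check (the 254,000 dpi figure comes from the text; the rest is just unit conversion):

```python
# Convert the quoted e-beam lithography resolution into a physical dot pitch.
dots_per_inch = 254_000
inch_in_nm = 25.4e6          # one inch is 25.4 mm, i.e. 25.4 million nanometres
pitch_nm = inch_in_nm / dots_per_inch

print(pitch_nm)  # 100.0 — each dot is about 100 nm across
```

For comparison, visible light has wavelengths of roughly 400–700 nm, so these features are smaller than the light waves they manipulate.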

Whatever the case, you should understand that what’s happening with all of these methods is that a machine is simply etching finely detailed patterns into a piece of metal, which is then used as a master to print the actual holograms onto plastic or metal foil.

Without wanting to get too technical about what a proper hologram is, and how it works, I’ll attempt a little explanation: even though light travels very fast (299,792,458 metres per second, in fact) it can be slowed down by materials it passes through, such as water or glass.

Here, the light bouncing off the pencil and reaching your eyes is slowed very slightly as it goes through the water in the glass, and when you compare it to the light coming off the pencil above the water, you can clearly see a discontinuity (and you can see that there is a depth-perception illusion in play – the pencil looks more magnified, and appears to be ‘elsewhere’ from where you know it to be). You will have seen this kind of effect countless times in your life: distortions in windows, raindrops on glass, the brilliance of cut gems like diamonds. If you wear spectacles, the warping of light by changing its speed is what helps correct your vision. Any transparent or semi-transparent medium can, and almost always does, change the speed of light.
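The bent-pencil effect can be quantified with Snell’s law, which relates the angles of a light ray on either side of a boundary to the refractive indices – the factor by which each medium slows light down. A minimal sketch, using standard textbook values for air and water:

```python
import math

# Snell's law: n1 * sin(theta1) = n2 * sin(theta2)
n_air, n_water = 1.000, 1.333   # refractive indices (speed in vacuum / speed in medium)
theta_air_deg = 30.0            # ray hits the water surface 30 degrees off the normal

theta_water = math.degrees(
    math.asin((n_air / n_water) * math.sin(math.radians(theta_air_deg)))
)
print(round(theta_water, 1))    # about 22.0 — the ray bends toward the normal

# The same index also gives the speed of light in water:
c = 299_792_458                 # metres per second, in vacuum
print(round(c / n_water))       # roughly 225 million m/s
```

That few degrees of bending at the surface is all it takes to make the submerged half of the pencil appear displaced from where you know it to be.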

A lesser known example of the speed of light being altered is when you see an oily puddle on the road.

In this case, the rainbow effect is caused by interference: light reflecting off the top of the thin oil film and light reflecting off the water just beneath it travel minutely different distances, and the white light of the sun is separated into colours by the tiny optical delays this introduces.

This changing of the speed of light as it passes through different materials is called refraction. (A closely related effect, in which light is bent and scattered by very fine structures – like the grooves on a hologram – is called diffraction.) I’m sure you’ll already have made the link between oily rainbows and the holograms you see on credit cards, and indeed, you’re seeing exactly the same principle at work. The very cool thing about refraction and diffraction is that if you can slow light down controllably, and in just the right way, you can fool your eyes into thinking that the delay caused by what we call the refractive index of a material is not simply a colourful effect, but a function of distance. In other words, under certain conditions, and in just the right light, we can trick our eyes into seeing refractive changes as depth.

And this is exactly what a hologram does. The very small and highly organised grooves and pits on a holographic film diffract the light in such a way as to give an illusion of depth – that’s what creates the hologram’s 3D effect. You will know from experience that these little holograms work best when you have a very defined, single point light source, and when you view them from one angle. That’s simply because the diffraction effect is most effective when it’s lined up exactly with a light source and your eye.
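That ‘lined up exactly’ behaviour falls straight out of the grating equation, d·sin(θ) = m·λ, which says that grooves spaced a distance d apart throw light of wavelength λ into sharp, colour-dependent angles. A rough sketch (the 1 micrometre groove spacing is an assumed but plausible value for an embossed hologram):

```python
import math

# Grating equation: d * sin(theta) = m * lam, where m is the diffraction order.
d = 1.0e-6   # groove spacing: 1 micrometre (assumed, typical for embossed holos)
m = 1        # first-order diffraction

for lam, colour in [(450e-9, "blue"), (550e-9, "green"), (650e-9, "red")]:
    theta = math.degrees(math.asin(m * lam / d))
    print(f"{colour}: {theta:.1f} degrees")
```

Each colour exits at its own angle, tens of degrees apart, which is why the image shifts and shimmers as you tilt the card relative to the light – and why it washes out under diffuse lighting.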

What I’m getting at here, of course, is that there is really nothing at all mystical about a hologram. Holograms are exploiting simple and extremely well understood properties of optics, and have no more magic in them than the magnetic strips on your credit card.

On the CieAura site we read that:

The holographic chips are actually small skin colored patches that are infused with specific formulas designed to balance the body when placed along energy sensitive points of the body called meridians. Some call the holographic chips and the results like “acupuncture without the needles”.

And…

The CieAura Chip technology communicates with the body through the human electromagnetic field. This is known as bio-magnetic transfer. It works similar to acupuncture.

And…

CieAura’s products operate from the infusion of Intrinsic Energy into a holographic chip. Intrinsic Energy is synonymous with subtle energy as used in other texts. Once the holographic chip is placed within an inch or so of the body, these energies communicate externally with the body’s energetic fields. The chip aids the body to move itself toward its optimum energetic state. The chips use physics as opposed to chemicals to externally communicate with the body’s intrinsic energy fields.

…Nothing enters the body. Intrinsic energy operates in the quantum physics area (smaller than an atom). As a result, there is currently no device capable of measuring the signal.

Let’s think carefully about what’s being claimed here: information recorded holographically (that is, by altering the refractive index of plastic to vary the frequencies of light travelling through it) is somehow ‘infused’ with ‘formulas’ that ‘communicate’ via ‘bio magnetic transfer’ and ‘intrinsic energy’ with the body’s ‘energetic field’. And this effect is not currently measurable with any known technology (how wonderfully convenient).

As we have seen before with ShooTag, this is nothing more than a collection of absurd and diffuse terms combined in a melange of completely meaningless waffle. Not one thing in the sentences above has even an ounce of scientific credibility. You can’t ‘infuse’ formulas into holograms like you would steep some herbs in hot water – that makes absolutely no sense. The term ‘biomagnetic transfer’ occurs nowhere in scientific literature because it’s bunk. ‘Intrinsic energy’ is a made-up term that means nothing at all. The human body has no ‘energetic field’ – that’s complete bullshit. And all this tied into acupuncture, which is a folk remedy that has virtually no credibility outside of a minute chance that it might have a barely discernible effect on pain. ((Acupuncture is difficult to test scientifically for a number of reasons, not the least of which is that it’s pretty easy to tell if someone’s sticking needles into you. Nevertheless, the best science we have on it indicates that it’s ineffectual.))

It’s more than clear that all the sciencey-sounding verbiage you encounter on the CieAura site is abject gibberish. It may be that Melissa Rogers is so badly educated that she really believes this baloney… but I don’t really think so. I believe that all this pseudo-mystical-sciencey stuff is smoke-and-mirrors distraction designed to deflect anyone from too-readily discerning the real purpose of CieAura.

And that purpose is what we’ll home in on in the next instalment…

The person who has just been appointed to the head of Australia’s once ((I say ‘once’ because, like everything else in this country lately, it seems that the idiotic buffoons who aspire to be some kind of ‘government’ here, are hell bent on making it the laughingstock of the educated world.)) world-admired science organisation, the CSIRO, ((You know WiFi? The CSIRO invented that. Yeah, WIFI!)) believes in magic.

Yes dear Cowpokes, Dr Larry Marshall, a man whose scientific credentials cast little more than a dim glow from within the deep shadow of his business escapades, and whose tumbling grammatical trainwreck of a biography uses expressions like ‘leverage’ and ‘serial entrepreneur’, wants to create water dowsing machines.

Larry says he would…

…like to see the development of technology that would make it easier for farmers to dowse or divine for water on their properties.

“I’ve seen people do this with close to 80 per cent accuracy and I’ve no idea how they do it,” he said. “When I see that as a scientist, it makes me question, ‘is there instrumentality that we could create that would enable a machine to find that water?’

You know what, Larry? When you see that – as a scientist – you should actually ask yourself why no real scientists believe, for even a nano-second, that dowsing works.

You have no idea how they do it? My suggestion is that you look up the ideomotor effect and watch this video. Several times, if you don’t get it on the first run through.


Image: Bill Brooks Creative Commons; Some Rights Reserved

Or: You Keep Using That Word. I Do Not Think It Means What You Think It Means

Homeopathy is crap. Serious, unmitigated, archaic, superstitious hogwash-laden crap. There is no defensible argument for why it might have the magical qualities with which it is imbued by some. On that, Faithful Acowlytes, I think you and I are agreed. I’ve noticed in recent times, however, a growing tendency from the dozen or so remaining supporters of homeopathy, to wheel out the justification that its validity might lie in the Placebo Effect. ((For numerous reasons that I won’t even bother to go into here, that’s seriously clutching at straws, in any case.)) The Placebo Effect is also cited by supporters of various other dubious unscientific medical practices (yes, I’m looking at you Mr Acupuncture) as a possible legitimate modus explainii. ((Yes, I know that I just made that term up, and it bears not even the faintest resemblance to correct Latin.))

The problem is that the concept of the Placebo Effect has become eroded over the decades into a magical-thinking term of its own, specifically, a notion that a placebo invokes some kind of Mysterious Ability Unknown to Science for the human body to heal itself, based on a sort of ‘mind-over-matter’ mechanism that remains to this day entirely unexplained. As someone who understands what the Placebo Effect actually is, this really annoys me. And when I’m annoyed, I dust off the soapbox.

Today on TCA, we’re going to look at the exact meaning and intention of running placebo-controlled trials in medicine, and why the explanation for the Placebo Effect is most likely dull and unexciting. Prepare to have your illusions shattered.

To help illustrate things, I’m going to give you a very basic example of how a clinical trial involving placebos might work – it’s not the definitive way of conducting a placebo-based experiment, but for the sake of simplicity it covers all the issues that we need to examine.

Imagine that you have invented a new drug for the relief of nausea. All your theory says that this drug is the bees knees, but to meet the many requirements of getting a modern pharmaceutical legally to market, you must demonstrate this to the satisfaction of the various bodies that regulate this kind of thing (and, as unbelievable as a lot of people seem to find it, this is actually quite tough). What you are obliged to do is to set up a blind – or double-blind – trial (we’ve talked about blind trials before on the Cow, but click on that link if you want a refresher) which takes into consideration numerous factors that might influence your potential outcome. Understand: you do this in order to rule out as much external influence as possible that might offer alternative explanations for the results of your experiment. In other words, you’re trying to demonstrate that your drug, and your drug alone, is responsible for any observed lessening of nausea for your patients.

The problem is this: in many areas of medicine, the results of interventions are not totally clear cut. Experience of nausea, for example, is partially subjective, and what you’re trying to do with your experiment is to get an objective overview of how your drug influences a patient’s assessment of nausea. It is very important, therefore, to iron out any irregularities that might be caused by, for example, a subject’s expectation of what a treatment might do.

If you have a hundred patients, and you give fifty of those patients a pill and fifty nothing at all, then half your study knows with certainty that they didn’t get the ‘anti-nausea pill’. This might influence what they report in regard to their nausea. Maybe it won’t, but you have to consider the possibility that it will, and set about ruling it out. The obvious thing to do, then, is to split your group into three parts instead of two: give one third the new drug, one third a capsule identical to the one containing the anti-nausea drug – but with no active ingredient – and one third nothing at all. If the drug has any merit, then what you would expect to see here is positive results from the drug, and then equally neutral results from both the empty pill (the placebo) and those who got nothing at all.
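The three-arm design above can be sketched as a quick simulation. All the numbers are invented for illustration: I’ve assumed the drug genuinely reduces a 0–10 nausea score by 2 points, and that the two pill-taking arms also report a small spurious improvement of the reporting-bias kind, with no pharmacology involved.

```python
import random

random.seed(42)
N = 5000  # patients per arm (implausibly large, to keep the averages stable)

def arm(true_effect, reporting_bias):
    # Each patient's reported nausea: baseline 5, plus random noise, plus effects.
    scores = [5.0 + random.gauss(0, 1) + true_effect + reporting_bias
              for _ in range(N)]
    return sum(scores) / N

control = arm(true_effect=0.0, reporting_bias=0.0)    # got nothing at all
placebo = arm(true_effect=0.0, reporting_bias=-0.4)   # inert pill, slightly rosy reports
drug    = arm(true_effect=-2.0, reporting_bias=-0.4)  # the active pill

print(f"control ~ {control:.1f}, placebo ~ {placebo:.1f}, drug ~ {drug:.1f}")
```

With these made-up assumptions, the reported means come out at roughly 5.0, 4.6 and 2.6: the placebo arm sits between baseline and the active drug, purely as an artefact of how people report.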

Are you with me? Does this sound reasonable?

Well that’s exactly what scientists do in blind test trials with placebo controls. Only… pretty much every time this kind of experiment is run, the results inevitably look funny. If the drug is efficacious, the patients who get an active ingredient post a positive assessment of their nausea relief, as you would indeed hope. The patients who got zip (representing what is called the baseline) report a neutral assessment of change in nausea levels. The placebo arm in this kind of experiment, however, almost invariably returns a result of marginal improvement. Better than baseline, but not as good as the active drug. In other words, it seems that the patients who think they might be getting some kind of medicine appear to get an actual physiological benefit from simply popping a pill.

How utterly weird is that? Imagine the puzzlement among experimenters the first few times these kinds of results came back!

Now we get to the real problem of the misunderstanding of the Placebo Effect. Over the years, this result, which is a very real result and is seen almost without fail in a great number of clinical trials, has been taken to mean that the ‘idea’ of taking a pill (or indulging in some other kind of intervention) can have an actual physiological effect on a patient. To put it another way, it appears that if someone thinks they’re being treated, then somehow they seem to physically benefit from being under that illusion.

Only, that’s not exactly what the Placebo Effect is showing us.

In science, a placebo trial has a specific and clearly defined purpose: to account for all the variables in the experiment that can’t be explained by the intervention under test itself. This would indeed include any strange psychological influence on physiology should such a thing exist, ((And it should be noted that in the special case of pain – and a few other stress-related illnesses – it has been shown that a psychological element can come into play depending on the subject’s mental state. It is well to understand clearly, though, that it’s rare for such a psychological element to come even close to matching the level of pharmacological effects.)) but it need not necessarily be constrained to only this. What most people fail to understand is that the Placebo Effect may also include numerous other factors. Some of these are: patient reporting bias; risk justification; confirmation bias; and even just the kind of bias that might be inherent in being involved in a clinical trial in the first place. What do I mean by some of these? Well, let’s say you’re a patient in a study such as the one I suggested above. You are given a pill twice a day for a period of two weeks. You’ve given up some of your time to be on this trial (recording and reporting results and so forth) and you like the doctor who is treating you. This might very well influence what kind of modification to your results you record – only a little bit, perhaps, but ‘only a little bit’ is the scale of the typical observed Placebo Effect. ((Placebo Effects are never profound.)) Note that you might not necessarily be really feeling any difference in your nausea levels, but you are being ‘kinder’ in reporting them to the nice doctor (you would not even be aware of this – you are being given a pill and, in your mind, hey, it might be the anti-nausea drug… maybe you should be feeling a little better…). In addition to this kind of scenario, people involved in clinical trials behave differently to people in their actual usual lives. There is a tendency, for example, for them to be more aware of their day-to-day health and to take a little more care than usual with it. This of course can produce real physiological results that can easily colour their experience in the trial.

These things are very difficult to iron out of an experiment, and that’s EXACTLY what the Placebo Effect is all about – it is a generic container for the strange and uncatchable inconsistencies that occur when attempting to run an experiment where there are a lot of variables.

To boil all this down, it may well be that the Placebo Effect in any given clinical trial – and perhaps in all clinical trials – is down to nothing more than erroneous reporting; not any kind of physiological outcome at all, but just a noise phenomenon in the experiment that produces illusory effects simply because it is an experiment and not reality. In the actual real world, the thing we think of as the Placebo Effect may not even exist, and it’s impossible to verify such a speculation since trying to do so would necessitate the undertaking of an experiment and thus risk producing a horrible spiral of nausea-inducing recursion.

So the next time you hear someone justifying some kind of pseudoscientific ‘alternative’ remedy or other by invoking the Placebo Effect, I suggest you do the following: look them squarely in the eye and say, with a lisp… “Inconceivable!”

No-one else is running with that headline, so I’m just doing my duty…

Yes, it’s actually science.
