The CDC has issued a press release on an increase in the number of measles cases in the U.S. this year, with 131 cases reported so far. Of those measles patients, 95 were eligible for vaccination but 63 of them were not vaccinated because of their--or most likely their parents'--belief that vaccines are "dangerous."
I'm not sure how much evidence there needs to be to convince parents that neither the MMR vaccine nor the preservative in it causes autism. Obviously, piles of evidence accrued from gigantic longitudinal studies of thousands of children over many years are not enough (see here, here, here, here, and here, for starters). Will an epidemic of preventable, deadly infections be the only thing that will change some people's minds?
If people want to put themselves in harm's way because of their willingness to believe a bunch of pseudoscientific wackaloons, that's their problem. It's a terrible shame that they're putting their kids at risk too.
Thursday, September 11, 2008
The Darwin Particle
My father works with the particle accelerator at Fermilab in Illinois. As a child, when I would visit him at work and we would walk through the underground tunnels where the beam-control equipment was kept, the towering machines on each side of the walkway would terrify me: they thundered, blinked, screamed "Warning: Radiation" to me at every turn.
Now CERN has built an even bigger, scarier accelerator. And all I have in my lab is jars of termites.
Fortunately, we biologists now have our own accelerator to terrify our progeny with. At last, the final secrets of evolution are within our reach.
The Black Widow Spins Her Deadly Web... Well, Sometimes.
I always get disproportionately excited when I read about a new discovery showing behavioral complexity in invertebrates. Of course, now that we understand the incredible intricacy of honeybees' language, these sorts of things shouldn’t surprise me too much. But I still love this stuff, wherever it pops up.
Black widows alter their web architecture to be better insect traps as they get hungrier. A neat little paper in this month’s issue of Animal Behaviour [76(3):823-829] describes how this happens.
When a black widow spider is well-fed, it uses that energy with gusto—spinning out lots of thread, but making a chaotic, disordered cloud of a web near the opening to its hideout. This kind of web is called a ‘tangle-based’ web, and for all that thread, isn’t really that sticky.
However, take a spider who hasn’t been fed for a week, and watch her spin a web. That jumble becomes a well-engineered, efficient killing machine. Instead of making a jumble of undifferentiated, generic thread, the spider uses three different kinds of silk structures to spin a very specific trap (using less thread overall):
(This image is taken from the original paper)

SH is the silk sheet, a flat plane on which the spider can easily maneuver, supported by a network of threads (ST). Anchoring the web to the substrate are the very sticky gumfooted threads (GF), which are kept under constant tension.
The researchers set up homes for 112 juvenile black widow females, fed half of them every day for a week, and fed the other half nothing for a week. After the members of each group spun their characteristic webs, the webs were imaged to quantify the structural differences between them. The researchers then moved half of each group onto the other group’s webs, fed them, and watched what happened.
Under most metrics used, the researchers found that it was indeed much easier for black widows of both types to catch prey when the spider was using a hungry spider’s three-thread design, rather than the chaotic mass of thread of a fed spider. It seemed that the silk sheet made it easier for the spider to sprint out towards its prey. The prey may have alerted the spider to its presence by touching the gumfooted threads and sending out vibrations, and perhaps was slowed down by those threads’ especial stickiness.
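To make the crossover comparison concrete, here is a minimal sketch in Python of how such data might be tabulated. The column names and the numbers are hypothetical, not taken from the paper; the point is simply that capture success is grouped by the web the spider is on, not by how hungry that particular spider happens to be.

```python
# Minimal sketch of the crossover comparison (hypothetical numbers, not the
# paper's data). Each row is one prey-capture trial: which web the spider was
# on, that spider's own feeding state, and how long the capture took.
import statistics

trials = [
    # (web_type, spider_state, capture_time_s)
    ("three-thread", "hungry", 4.1), ("three-thread", "fed", 5.0),
    ("three-thread", "hungry", 3.6), ("three-thread", "fed", 4.4),
    ("tangle",       "hungry", 9.8), ("tangle",       "fed", 11.2),
    ("tangle",       "hungry", 8.9), ("tangle",       "fed", 10.5),
]

# Group capture times by the web itself, ignoring the spider's own state:
# the finding is that web architecture, not the spider's hunger, drives
# capture success.
by_web = {}
for web, _state, t in trials:
    by_web.setdefault(web, []).append(t)

for web, times in by_web.items():
    print(f"{web:12s} mean capture time: {statistics.mean(times):.1f} s")
```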
Why would black widows switch between a poor web and a good web, though? Wouldn’t it be better to just maintain the efficient functionality of the three-thread design whether the spider was hungry or satiated? This is especially head-scratching when you consider that the jumbled webs of well-fed spiders use up more silk. The spider would be spending energy both to modify the web and to make the loads of silk needed for the tangle-based web.
Consider this, though: the authors point out that many spiders are prone to killing more prey than they can eat, and to just leaving the extra prey rolled up in silk on their webs. It’s also been documented that some spiders may eat themselves to death if they catch too many prey. Perhaps the switch to the poorly-trapping tangle-based web is a smart move by the black widows, at the very least saving them the energy of pursuing and killing more prey than they can use, at the most sparing them a fate like Monty Python’s Mr. Creosote. The authors also suggest that the tangled web of a fed spider might serve as a predator defense during the times when it doesn’t need the web to serve as a food trap.
I think the behavioral plasticity that can be built into such a small animal is really fascinating, and the adaptability of the webs of these spiders is just another example of that. So three cheers for the black widow (one for each part of her web)!
And one more that we aren’t one of the flies who get caught by her web when she’s hungry…
Saturday, August 23, 2008
Valley Girls Know What They Want: Sexual Selection and Elevation in Finches
Each year as spring comes, opening your window will invite in a cacophony of ebullience from the male songbirds in your neighborhood. They find a good spot, puff out their chests, and sing their hearts out to any ladies of their species who might be listening. The females then choose the male with the best song, because he is more likely to be healthy and able to pass on good genes to his offspring. But odds are you live in an area that’s relatively low in elevation, unless somehow this blog’s gotten picked up by an Alpine sheepherder with a satellite internet connection. If you were, however, to travel up to very high elevations one spring to hike in alpine meadows or pine forests, you may notice that you’re not getting nearly as much of a show from the local bird populations. They don’t sing with as much gusto or showiness, even though they may look very much like the birds from back home. So what gives?
Emilie Snell-Rood, from the University of Arizona, and Alexander Badyaev, from Indiana University, did notice this. In a new paper in Oecologia [157(3):545-551] they show that this variation is a function of elevation. As their study group they chose a large group of finches, the cardueline finches, which contains 126 species and has the widest elevational range of any living subfamily of birds in the world. Many of the finches that live at lower elevations have very close relatives at higher elevations, which sets up a perfect study system to test for elevation-based differences between species.
In their study Snell-Rood and Badyaev used sonographs to get a “picture” of each species’ song. With these they were able to quantify different aspects of the songs’ showiness: number of notes, length, pitch range, and so on. They then analyzed the relationship between these variables and the maximum elevation of the birds’ range, controlling for other factors that are known to vary with song complexity and might otherwise have confounded the results, such as body size, bill size, and habitat type. Once those confounding factors were controlled for, any remaining differences between species’ songs would be more likely to be due to elevation alone. They did find, in fact, that between “sister species,” the species with the lower breeding territory was more likely to sing elaborate, loud courtship songs than its counterpart higher up the mountain. Lower-elevation species also had longer songs with more notes. For an example of the kind of showiness in these carduelines, check out the song of the pine grosbeak, one of the lower-elevation species. Although I can’t find a file online of the song of its sister species—the crimson-browed finch—to use as a comparison, you can clearly hear the complexity and length of the pine grosbeak’s elaborate song.
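As a rough illustration of that kind of analysis (and only an illustration), here is a minimal sketch in Python of regressing a song-showiness measure on maximum breeding elevation while holding possible confounds constant. The column names and numbers are hypothetical stand-ins, and a real comparative analysis would also have to account for the species' shared evolutionary history, which a plain regression like this ignores.

```python
# Minimal sketch: regress a song-showiness measure on breeding elevation while
# controlling for possible confounds. Column names and numbers are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

songs = pd.DataFrame({
    "song_length_s":   [4.2, 3.8, 2.1, 1.9, 3.5, 2.4, 4.0, 1.6],
    "max_elevation_m": [500, 800, 3200, 3600, 1200, 2800, 600, 4000],
    "body_mass_g":     [21.0, 24.0, 23.0, 25.0, 20.0, 22.0, 26.0, 24.0],
    "bill_length_mm":  [9.1, 9.8, 9.5, 10.2, 8.9, 9.4, 10.0, 9.7],
})

# The elevation coefficient asks: once body size and bill size are held
# constant, do songs get shorter as species breed higher up?
model = smf.ols(
    "song_length_s ~ max_elevation_m + body_mass_g + bill_length_mm",
    data=songs,
).fit()
print(model.params)
```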
The interesting thing about this variation in song complexity with elevation is that it correlates with differences in the breeding behavior and family lifestyles of these birds. The cardueline finches at lower elevation need more elaborate songs because there is more sexual selection in these species—the females spend more time evaluating and choosing mates, who are showier in song and in plumage than the females. This is because, for these lower-elevation birds, the males often don’t do much to help raise the offspring. Any contribution a male makes to the next generation will be through good genes, which must be evaluated at the outset by the female through some proxy like song or plumage brightness. At higher elevations, a far more important predictor of offspring success is the amount of help the young receive from their parents. Since food is much scarcer farther up in the mountains, both parents are required to forage to feed the baby birds. The male’s success lies in his ability to care for his young, not in convincing a female that he’s worthy of a copulation. He therefore does not need to invest as much in elaborate, difficult songs as his polygamous counterparts down in the valleys.
Next time you hear an effusion of birdsong through your window, or up on a hike in the mountains, try to pick the songs apart as you listen. As with most things in nature, there is more going on than it seems.
Monday, August 4, 2008
A Simple Question
So here’s a statistical question for you. Now, mind you, this is a question that ground my graduate-level experimental design class to a halt for at least half an hour, and that also brought my boyfriend and me into something resembling not a discussion, but an actual argument. Pretty heavy stuff, for statistics.
Here’s the situation: You are a botanist and you want to study the effect of two different light regimes on petunia growth—let’s say 12 h light and 12 h dark versus 18 h light and 6 h dark. You have 40 little petunia seeds planted in pots waiting for you, and your university has 2 environmental chambers for you to use. You put 20 pots in each chamber, set the light timers, and start the experiment. After the prescribed number of days on this program, you measure all your petunias and begin to analyze the data.
Now here’s the question, and I’ll phrase it a couple different ways: How many experimental units do you have? In other words, how many independent data points? How many degrees of freedom will you get in an analysis of these data?*
Answer: 2 experimental units, 2 independent data points. 0 degrees of freedom.
“What???!!” you may splutter. “But there were 40 petunias!!” You thought there were going to be 40 experimental units and 38 degrees of freedom (40 data points minus the 2 treatment means), didn’t you?
Well, what did happen to those pots of petunias? The problem stems from when they were all put into only 2 environmental chambers. Once in an environmental chamber, the light turned on and shone into the entire room. The light was applied to all the pots together, as a group. If the lightbulb, say, started flickering and going out in one room, it would be flickering over all those plants together. In other words, the treatment (the light) was applied to one unit, the room. Therefore the environmental chamber as a whole becomes the unit of experimentation, not the individual plants. If the experimenter were to ignore this, he would be committing the mortal, yet frighteningly common sin of pseudoreplication.
Pseudoreplication occurs when there is a lack of independence between supposed experimental units, when the treatment is applied collectively, not individually. What this means, practically, is that each of your little units you are assuming are independent are actually irrevocably linked to each other, in a way that can mask the effect that you’re actually trying to see.
Now let’s suppose that there’s a problem with one of the lights, the one in the chamber on the 12/12 regime. That light tends to flicker when the MRI machine next door gets turned on. It’s new, and people don’t generally tend to hang out in environmental chambers and read the Times and have a coffee, so it hasn’t been noticed yet. But every single petunia in that chamber collectively feels all of those light flickers. In fact it happens frequently enough that it negatively affects their growth—all of them, together. So when the data are collected, the plants in the 12/12 room are just a little bit shorter than they might have been otherwise. Their growth was stunted just enough that those plants’ heights are less than those of the plants in the 18/6 room. When the experimenter analyzes those data (not realizing yet that the experiment is pseudoreplicated), he finds a significant difference between the two and concludes that an 18/6 light regime for petunias helps them grow taller. What he doesn’t realize is that he hasn’t detected a difference due to light regime, he’s detected a difference due to faulty wiring—not at all helpful. Incidentally, even if he did finally recognize the pseudoreplication, he wouldn’t be able to analyze the results. With only one (true) experimental unit in each light regime, he wouldn’t be able to take an average and compute the variation around that average—there’s no variation because with only one data point, there’s nothing to vary. Without that, he can’t figure out if his two treatments truly are different from each other outside of the range of normal background variation. No conclusions can be made, and the entire study is wasted.
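A quick simulation makes this concrete. Here is a minimal sketch in Python, a toy example of my own rather than anything from a real study: the light regime has no true effect at all, but the 12/12 chamber carries a small chamber-wide glitch, and a naive per-plant t-test will often declare the difference "significant" anyway.

```python
# Toy simulation of the pseudoreplicated petunia experiment.
# The light regime has NO true effect; the only difference is a chamber-wide
# glitch (the flickering light) that stunts every plant in the 12/12 room.
import math
import random
import statistics

random.seed(42)

# 20 plants per chamber; true mean height 30 cm, plant-to-plant SD 2 cm.
chamber_glitch = -1.5   # cm lost by EVERY plant in the 12/12 chamber
heights_12_12 = [30 + chamber_glitch + random.gauss(0, 2) for _ in range(20)]
heights_18_6  = [30 + random.gauss(0, 2) for _ in range(20)]

# Naive two-sample t statistic treating each plant as independent (38 df).
se = math.sqrt(statistics.variance(heights_12_12) / 20 +
               statistics.variance(heights_18_6) / 20)
t = (statistics.mean(heights_18_6) - statistics.mean(heights_12_12)) / se
print(f"naive t = {t:.2f}  (beyond about +/-2.02 looks 'significant' at 38 df)")

# The 'significant' difference is the faulty wiring, not the light regime.
# Analyzed correctly, there are only 2 experimental units (the chambers),
# leaving no error variance with which to test anything.
```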
It’s easy to imagine other situations in which the pseudoreplicated nature of this study could screw up the results: a careless undergraduate props the door to one of the chambers open for a minute, forgets about it when his girlfriend calls, and then goes to lunch. In the meantime that room loses all its humidity through the open door. Or one of the lightbulbs burns out and nobody notices it for 8 hours. Et cetera.
Experimental units are independent when treatments are applied to each one individually. If this study used a little lamp over each plant, the plants would truly be independent experimental units, because each one would be receiving its own treatment. If one of the bulbs flickered and screwed up the growth of that one plant, the overall results might not be affected much, because there are still 19 other independent data points in each group to average with the screwed-up one. Not ideal, but not the end of the world. Doing it this way sounds like a lot more work, but sometimes correct experimental design calls for a little more creativity and effort in order to get it right and get valid results.
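If a lamp per plant isn't feasible, the other standard fix is to replicate the chambers themselves (several chambers per light regime) and analyze one number per chamber. Here is a minimal sketch, assuming a hypothetical setup with three chambers per regime and made-up numbers:

```python
# Sketch of the chamber-replicated design: the chamber, not the plant,
# is the experimental unit, so the analysis uses one mean height per chamber.
# All numbers are made up for illustration.
import statistics

# Mean plant height (cm) in each chamber, 3 chambers per light regime.
chamber_means_12_12 = [28.9, 30.4, 29.6]
chamber_means_18_6  = [30.8, 31.5, 30.1]

diff = statistics.mean(chamber_means_18_6) - statistics.mean(chamber_means_12_12)
df = len(chamber_means_12_12) + len(chamber_means_18_6) - 2   # 4 df: few, but honest
print(f"mean difference: {diff:.2f} cm, error degrees of freedom: {df}")

# A flickering bulb now only affects one of the six data points, and its
# effect shows up as chamber-to-chamber variation instead of masquerading
# as a treatment effect.
```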
You roll your eyes and tell me that I’m being entirely impractical and unrealistic. “OK let’s assume that this is a well-funded university that can afford a decent electrician. Everything in the rooms has been tested and checked out. They’re fine. They’re completely monitored in every way so that if something goes wrong it’ll be noticed immediately and fixed. Stop being such a curmudgeon.” Yes, probably everything will be fine. But what if there is some variation that you don’t know about yet? You can’t monitor something you don’t know of. You have to design your study well enough, and with all precautions in place, to take care even of the most unforeseen circumstances. Only then can you get results that prove what you say they prove, with as much confidence as you think.
Pseudoreplication is everywhere. For example, a major study in my thesis area is pseudoreplicated, and sometimes I wonder if I’m the only one who’s noticed. (A developmental hormone was applied to some insects. The experimenters squirted hormone onto filter paper in the bottom of a Petri dish and let groups of insects walk around on it and absorb it. Thus, the experimental unit here was not the insect; it was the Petri dish. But you can bet that each insect was treated as independent in the statistical analysis.) I know that sometimes I tend to lazily skim over the methods sections in papers to get to the conclusions. It’s a temptation, and a strong one, too, when there’s so much to read and so much else to do. But so much can go wrong in those dry methods sections. If we biologists can’t be trusted to always remember the lessons of our statistics classes way back in grad school, then all of us have to be on guard to catch our colleagues’ mistakes, before those unnoticed mistakes become accepted and cited in future research, even though they may well be completely erroneous.
*“data” is plural. “These data.” Not “this data.” Really. Don’t be That Guy**
**In normal situations “That Guy” might refer to the dude at the bar with his shirt tucked into his underwear who can’t figure out why all the girls are shooting him down. In nerd circles, it refers to the person who uses “data” as a singular noun. Hopefully it’s not the same person who also has his shirt tucked into his underwear, or he’ll never get a date.
Monday, July 28, 2008
Score One More for Evolution--Magical Disappearing Fish Eyes
OK PEOPLE. Evolution is a fact. The details are up for grabs--that's how science goes. The book of Genesis, the seven days, the whole shebang, is an allegory, ok?
Now, can we just get over it and stop being stupid jerks and ruining kids' science educations? Really.
So we all know that deep down in dark caves live oodles of weird animals without eyes. You've got your blind shrimp, your blind isopod, your blind salamander, and your blind fishy friend, the charismatic Astyanax jordani, the subject of a big huffy fit courtesy of Casey Luskin at the Discovery Institute.
Luskin's argument is that loss-of-function mutations--i.e. mutations which occur randomly, screwing up a functioning gene so it "breaks"--are not contrary to the idea of intelligent design. The idea seems to be that God created something a certain way, and then something in our imperfect, corrupted world caused some oops to happen in the DNA that messed up God's perfect design. Ignoring the flaws in that argument for a moment (If that were so, and loss-of-function mutations were just unfortunate mistakes, how would they get fixed in a population? Could it be--gasp! natural selection??), let's examine Luskin's claim briefly.
He claims that the only thing that "Darwinism" (what a bad, incorrect term) cannot explain is gain-of-function, when an organism actually acquires a new trait.
Unfortunately for him, this loss-of-function in Astyanax jordani is actually a GAIN-of-function: a developmental gene called sonic hedgehog is actually upregulated, which increases skin sensitivity (beneficial to cave life, no?) while having the side-effect of disrupting eye development. PZ Myers has a great discussion of the flaws in Luskin's "argument" and the details of how A. jordani lost its eyes over at the Panda's Thumb.
Take that, Discovery Institute.
Stats 101 for Journalists--Correlation vs. Causation
While perusing your favorite newspaper, you may have run across an all-caps, bold-print headline something like this: EATING SPINACH EVERY DAY WILL PREVENT CANCER, DOCS SAY. Generally, these articles will include speculation from researchers on how exactly this miracle food will keep you cancer-free; perhaps it’s those antioxidants, perhaps it’s the high fiber content. Whatever the reasoning, the implication is that you should run out immediately to the grocery store and commence a daily diet reminiscent of a rabbit’s.
Not that there’s anything wrong with spinach. Your mom and Popeye were right: spinach is very good for you. And hey, maybe it is a key part of a diet that will aid in preventing cancer.
The problem is, in fact, a statistical one: a common error that perennially causes stats profs to tear out their hair in frustration, or perhaps, if they’re old and jaded, to merely roll their eyes and shrug.
The problem is the inference of causality from correlation.
Most likely, the study had a design something like this: hundreds of people were followed over a number of years, and periodic surveys were sent to them asking about their diets. They filled out the form stating how many times a week they ate certain foods, and sent it in to the study center. Or maybe they got phone calls from research assistants asking the same questions. But regardless, it was not an experimental study—that is, nobody put these hundreds of people in cages and gave them different kinds of diets, each with different amounts of certain foods. It was observational, meaning that the researchers worked with what they could get: the pre-existing diets of their study volunteers, over which they had no control. Instead of creating and administering different conditions, the scientists just watched the subjects and saw what happened. Their data allowed them to correlate a factor with an outcome, but not to prove causation.
This may seem like an academic distinction, but it has far-reaching implications. In experimental conditions—say, working with mice in a laboratory—all of the conditions are carefully controlled, so that any effects can be attributed exactly to a cause. Say that ethics regulations allowed scientists to put people in cages and experiment on them to see the effects of spinach on cancer. Every person would receive the exact same cage conditions: the same lighting, medical treatment, air temperature, amount and type of exercise, and so on. And they would receive the exact same diet—except for one key difference: half of the caged experimental humans would receive a diet with more spinach than the other group’s. Then, after many years of monitoring under these same conditions, any difference in cancer rates between the two groups could be attributed exactly to the one difference that existed between the groups—that of spinach consumption. The only way to infer causality is through experimentation: manipulating conditions in a controlled manner and seeing what effects those manipulations have.
Obviously, because of ethical and monetary restrictions, this kind of study design with humans is impossible. So why can’t you infer causality from observational studies—the type of survey study that was carried out to create the flashy newspaper headline? The problem is that nothing is controlled in the research subjects: you don’t know if they have the same conditions at home, the same income level, the same amount of exercise, the same anything. What if it is not the spinach that is causing some people to have lower rates of cancer, but something else that happens to be associated, coincidentally, with spinach consumption? Fresh vegetables are expensive. They also generally require more time to prepare—to wash, to cut, and so on. What if it’s not that the cancer-free people are eating spinach, but that they can afford more fresh vegetables because they are wealthier, and their extra wealth also allows them to see the doctor more frequently? Or what if the extra little bit of time in their day that lets them prepare fresh vegetables like spinach also happens to be enough extra time to go jogging? Any number of other, hidden things could be the actual cause, or one of many causes, of the lowered cancer rates in these people. The spinach may have nothing to do with it; it may just have been associated somehow with the actual, unrecorded cause.
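To see how a hidden factor can manufacture a correlation out of nothing, here is a minimal sketch in Python of a toy model of my own (not based on any real study): wealth drives both spinach consumption and access to checkups, spinach itself does nothing, and yet the spinach eaters still end up with less cancer in the data.

```python
# Toy simulation: a confounder (wealth) creates a spinach-cancer correlation
# even though spinach has zero causal effect in this model.
import random

random.seed(0)

population = []
for _ in range(10_000):
    wealth = random.random()                  # 0 = poor, 1 = wealthy
    eats_spinach = random.random() < wealth   # wealthier -> more spinach
    sees_doctor = random.random() < wealth    # wealthier -> more checkups
    # Baseline cancer risk 10%, halved by regular checkups; spinach irrelevant.
    risk = 0.10 * (0.5 if sees_doctor else 1.0)
    cancer = random.random() < risk
    population.append((eats_spinach, cancer))

def cancer_rate(group):
    return sum(c for _, c in group) / len(group)

spinach_eaters = [p for p in population if p[0]]
non_eaters = [p for p in population if not p[0]]
print(f"cancer rate, spinach eaters: {cancer_rate(spinach_eaters):.3f}")
print(f"cancer rate, non-eaters:     {cancer_rate(non_eaters):.3f}")
# The eaters look healthier, but only because wealth lurks behind both columns.
```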
Experimental studies, which allow true inference of causality, are impossible in many cases when the study animals are human beings. The best correlational studies looking at human habits and disease outcomes over many years enroll huge numbers of people and try to get as much information about their participants as possible (background health info, income, marital status, exercise habits, and so on) in order to take all these factors into account. And they often find very interesting and useful results, linking certain types of diets, lifestyles, or exercise habits to long-term rates of disease. But no matter how well these studies are designed and carried out, no newspaper can accurately report on their findings using the word “cause.” Even if the researchers record as many different variables as they can think of (exercise, religious beliefs, genealogy, the length of their participants’ little toes), it’s impossible to know whether they recorded any information about the factor that is truly causing the differences seen in the study. The conditions and the participants themselves are just too variable. To talk about causation in this context is simply incorrect, and potentially misleading.
What, then, is the use of these large-scale observational survey studies? They are useful for finding links to diseases, which can then be studied directly in a controlled experiment using mice—which are 80-some-percent genetically related to us. Once the same connection is found in a controlled, experimental environment, one can finally come to some conclusion about causation.
I had a stats prof who had written his master’s thesis on biologists’ understanding of statistics. He found that over 70% of research published over several years in a peer-reviewed biological journal had statistical errors. It’s no surprise, then, that newspaper writers are prone to the same kinds of statistical mistakes. It’s up to the discerning reader to look beyond the headline, dig a little deeper, and figure out whether the research was carried out in such a way as to merit the flashy headline. Dramatic words like “causes” and “leads to” and even just “will” sell newspapers. But they may not be statistically and scientifically accurate—be smart and judge for yourself.