Studies show...
Note: In this article, I will refer to “debates” and “arguments” a lot. I am not referring to arguments or debates here on the forum, of course, but rather to these practices in general.
Let’s start with a question.
Has anyone here ever actually read a study?
I don’t mean an op-ed or an article citing a study. I mean read the study itself: the actual contents?
Why do I ask?
Consider the ubiquity of the following words:
Studies show…
This is the central topic of the piece that follows, and hopefully of the discussion after it. It is intended as a harsh criticism of the way these two words are used so offhandedly, and with such a total lack of rigor, in opinion pieces, articles, and non-scientific publications. This article is not a criticism of scientific studies, the people who author them, or the general methodology they employ. It would be rather foolish of me to criticize those, since I rely on them so heavily.
Our question of immediate concern is: What is a study? I provide my own definition:
A study is a scientific undertaking whose ultimate aim is to establish whether a correlation exists between two variables. The general methodology is very broad in its applications. The two variables might be, say, IQ and obesity rates, ethnicity and high school grades, or cancer rates versus hours of TV watched. In general, the variables can be characterized as independent (the input variables, upon which the selection parameters are generated) or dependent (the output variables, whose values depend on the independent variables).
Some studies merely gather pre-existing data; some assemble and analyze existing studies, and these are called meta-studies. Other studies collect new data. In these cases, based on the parameters of the study, the researchers will follow a group of volunteers called the sample.
Beyond establishing a correlation, a study often aims to establish causality, or the lack of it, between the variables under investigation. With this in mind, another methodological principle is to take into account variables other than the independent variable that might be affecting the dependent variable. This is called variable control, and the parameters of the sample will generally reflect it. If we are examining the effectiveness of a certain drug in alleviating hypertension, and our sample includes only men of a certain ethnicity, then it would be invalid to argue that the results apply to women or to men of a different ethnicity, since both sex and ethnicity play a role in physiological makeup and response to drugs.
Alternatively, if we were examining gender disparities in math results, we would have an invalid study if the results for one gender were drawn from a poverty-stricken inner-city school and those for the other from an exclusive private school. In both hypothetical cases, there was a failure to take external variables into account.
In a drug trial, one way of eliminating such biases is to employ a control group, so that any improvement observed can be attributed to the drug rather than to external factors: the only systematic difference between the two groups is that one received the drug and the other did not. Another is the elimination of selection bias and psychological effects through the randomized, double-blinded administration of a placebo. In pharmaceutical trials, to state that a certain drug does no better than the control, or no better than a placebo, is to state that it is not effective.
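To make the randomization idea concrete, here is a minimal illustrative sketch in Python (the function name and group sizes are my own invention, not from any particular trial): it shuffles the subject pool and splits it in half, so that group membership cannot be correlated with any external variable.

```python
import random

def randomize(subjects, seed=0):
    """Randomly split subjects into equal-sized treatment and control groups.

    A fixed seed makes the assignment reproducible for this illustration;
    a real trial would use a properly concealed allocation sequence.
    """
    rng = random.Random(seed)
    pool = list(subjects)
    rng.shuffle(pool)           # random order removes selection bias
    half = len(pool) // 2
    return pool[:half], pool[half:]   # (treatment, control)

# 100 hypothetical volunteers, identified here by number:
treatment, control = randomize(range(100))
```

The point of the shuffle is that, with a large enough sample, confounding variables (age, diet, activity level) end up distributed roughly evenly across both groups without the experimenter choosing anything.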
It may be a good time to point out that studies involve concepts that require a fairly good knowledge of statistics to fully grasp. Some of these, like a double-blind or a randomized trial, are quite simple, but what about a discrete probability distribution, a p-value, the bell-shaped Gaussian distribution, or the chi-square and inverse chi-square distributions? Indeed, if it were up to me, statistics would be a mandatory part of high school. Data gathered by other people hugely affects our lives, so people should have a reasonable ability to interpret and understand statistics.
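Of the concepts above, the p-value is the one most often garbled in popular summaries. A hedged sketch of what it actually means, using a simple permutation test in pure Python (the function and the sample numbers are my own illustration): we ask how often random relabeling of the two groups produces a difference in means at least as large as the one observed.

```python
import random

def permutation_p_value(a, b, trials=10_000, seed=0):
    """Two-sided permutation test: the fraction of random relabelings of the
    pooled data whose difference in group means is at least as extreme as
    the observed difference. Small values suggest the difference is real."""
    rng = random.Random(seed)
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = list(a) + list(b)
    hits = 0
    for _ in range(trials):
        rng.shuffle(pooled)                      # pretend labels are arbitrary
        x, y = pooled[:len(a)], pooled[len(a):]
        if abs(sum(x) / len(x) - sum(y) / len(y)) >= observed:
            hits += 1
    return hits / trials

# Hypothetical outcome scores for a treated group vs. a placebo group:
p = permutation_p_value([1, 2, 3, 4, 5], [10, 11, 12, 13, 14])
```

Note what the p-value is not: it is not the probability that the hypothesis is true, which is exactly the kind of distinction a one-sentence "studies show" summary flattens.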
Here I should point out my contention: a properly done study, with controlled variables and randomized, blinded trials following a correct methodology, is the most empirically valid and accurate way of assessing a correlation. Disputing this statement is not the purpose of this article.
If you read an op-ed, a journalistic piece, an argumentative article, and so on, there is a reasonable possibility that you will come across a study citation. To prove a certain point, the author might state, “A study carried out by…shows that”. My principal argument is that in this highly non-rigorous and non-scientific context, such a citation is not necessarily trustworthy, and there is no substitute for reading the study yourself, armed with a reasonable knowledge of statistics. One of the beauties of a study is that reading it does not necessarily require expertise in its subject. Examining a drug or pharmaceutical study does not necessarily require an intimate knowledge of molecular biology or pharmacology. What it does require is a good deal of skill in data interpretation and statistics.
Returning to the previous point: the reason this non-rigorous but ubiquitous “studies show” should not be trusted is that you are contending not only with the data from a study but also with the opinion of the author, which is usually a third- or fourth-hand opinion. By “fourth-hand” I mean that the author has probably not read the study (there are exceptions: if the writer is an MD and the study pertains to drug trials, he or she will likely have read it before citing it). Instead, the author will have read a simplified analysis of the study, and then presented that analysis to you, the reader, in a one-sentence summary: “this study shows that…”. Like a disc that degrades after being copied too many times, or a game of Chinese whispers in which the phrase drifts further and further from the original, you, the reader, will probably receive the most convoluted and oversimplified account of what the study actually entailed, because to say that a study “shows a link between…” or “shows no link between…” is a gross oversimplification.
In other cases, there may be no methodological grounds for inferring what the author presents to the reader from the actual data of the study. Suppose a hypothetical study follows the link between exposure to a certain chemical and cancer rates, and finds no such correlation. An opinion piece in favor of that chemical might present a similar one-line summary. But if the study only followed its subjects for 5 years, then a blanket “no link to cancer” would not be a valid conclusion to draw. Opinion pieces really do make errors of this kind. There are articles from fat-rights groups citing studies which apparently demonstrate that there is no link between overeating and obesity (a claim which can be shown false on the grounds of simple energy balance: when the energy taken in as calories exceeds the energy expended, the remainder is stored as lipid). There are also articles by those who argue that saturated fat does not cause atherosclerosis, citing studies of the heart health of the Masai, who eat a very-high-saturated-fat diet of beef blood and meat (but the Masai also eat a low-calorie diet and have a very high activity level).
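The energy-balance point can be made concrete with a rough back-of-the-envelope sketch. The ~7,700 kcal-per-kilogram-of-fat figure is a common approximation, and the function below is my own illustration, not taken from any study:

```python
def fat_gained_kg(daily_surplus_kcal, days, kcal_per_kg_fat=7700):
    """Rough estimate of body fat stored from a sustained calorie surplus,
    using the common ~7,700 kcal per kilogram of fat approximation."""
    return daily_surplus_kcal * days / kcal_per_kg_fat

# A sustained 500 kcal/day surplus over 30 days stores roughly 1.9 kg of fat.
surplus_month = fat_gained_kg(500, 30)
```

Real physiology is messier (metabolic rate adapts, and not all surplus becomes fat), but the arithmetic direction of the claim is not in dispute: a sustained surplus is stored.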
These are facts that would be recognized from an actual reading of the study in question, and of the conclusions its data can support, but rarely from the manner in which studies are presented in arguments and opinion pieces. It is for this reason that, in a discussion, it is a pet peeve of mine when an interlocutor says, “well, this study shows X, Y and Z”. Chances are that person read about the study in a non-scientific article. Unless the writer of that article is a professional, chances are that the writer read a simplification of the study and compressed it from pages to a sentence or two. You don’t need to be a brilliant psychologist or epistemologist to see that massive bias will creep in, and that taking this at face value is untenable. By the time the interpretation reaches you, it may no longer resemble the correlation actually drawn in the data. Of course, I’m not suggesting this is always the case, nor that every or even most study citations in a non-scientific context are wrong. What I am suggesting is that, since they are presented in such a non-rigorous and over-summarized manner, there is no way of telling a solid interpretation from a ridiculous one.
The only way to spot the difference is to read the study itself. The authors of a study will usually put forth a hypothesis explaining their data. The next time someone says “I’ve got a study that shows…”, don’t treat it like an ace in the hole. Read the actual study, and be prepared to argue about interpretations of the data. To do otherwise is anathema to critical thinking.
I think it is reasonable to conclude that it is simply not epistemologically tenable for an article to cite a study, summarize it heavily, draw a conclusion in support of a particular opinion, and present that to the reader as established. If you want to believe that a certain conclusion can be drawn from the data in a study, you’ll have to read it yourself. Many debates and argumentative pieces today involve one or both sides throwing studies at each other: studies which, usually, neither side has actually read, and whose interpretation neither properly understands. Hopefully, future debates will revolve not around this practice but around both sides using the primary material (the study itself) and arguing for interpretations on that basis, instead of taking an interpretation that has been simplified, cut down, and re-simplified, presenting it as fact, and summarizing it in a sentence for the reader. Otherwise, the potential for misleading (sometimes deliberately) is serious.
Discuss.
"Physical reality” isn’t some arbitrary demarcation. It is defined in terms of what we can systematically investigate, directly or not, by means of our senses. It is preposterous to assert that the process of systematic scientific reasoning arbitrarily excludes “non-physical explanations” because the very notion of “non-physical explanation” is contradictory.
-Me
My only gripe with your writing, DG, is that you hide crucial points in the middle of your essays. I've long since learned that the hyperbolic use of BIG BOLD ITALIC LETTERS and frequent repetition are necessary when trying to convey points to people who get bored reading articles in Cosmo.
Again, let me reiterate for those just coming over from Cosmo.... LOTS OF REALLY BIG BOLD ITALICS... LOTS OF REALLY BIG BOLD ITALICS.... LOTS OF REALLY BIG BOLD ITALICS.
Now, lest I lose track of what I'm saying, there are very few things that aggravate me more than someone talking about a study that I have read when it's obvious that they haven't read it. Here are a few tips. If you're interested in the study, just follow the links in the article. Often, you can find the real study in PDF form. If you can't find it online, look up the author. Most college professors have public email addresses, and many are happy to forward a copy of the study to you. If there's a university nearby, just go to its library. Most colleges have hard copies of journals, and many have access to online periodicals.
Do your freakin homework, people.
Not to mention that this same study might not have even addressed the correlation between this chemical and say, infertility. Just because it doesn't cause cancer, it doesn't mean it's perfectly safe. Most studies are not trying to determine if X chemical is safe. They are trying to determine if there's a correlation with one very specific variable. The best way to say that a chemical is safe is through metastudies, where the results of multiple tests for multiple variables all came out looking good.
It's a bit of a tangent, but I get really aggravated when I'm presenting evidence for a scientific principle, and people say things like, "You can't trust that. You know scientists are biased, and for every study that says one thing, there's a study that says something else." Usually, what they mean is that non-scientist writers have very different opinions of the matter, and have displayed their bias in Op-Ed pieces. Since I have read the study, I know which side to believe, if any, and I know exactly how certain the results actually are. Yet, for some reason, people who want to believe what they believe trust the argument that everybody's guessing. Nobody reads the damn thing for themselves.
By the way, if you can't tell, this is a pet peeve of mine.
Atheism isn't a lot like religion at all. Unless by "religion" you mean "not religion". --Ciarin
http://hambydammit.wordpress.com/
Books about atheism
While I'm griping... most scientists do NOT have agendas. They are observing protocols and making sure that their tests are acceptable by scientific standards. Almost everyone I've seen who was agenda driven was NOT spending their time in the lab.
And obviously one of mine as well, or I wouldn't have written this.
"Physical reality” isn’t some arbitrary demarcation. It is defined in terms of what we can systematically investigate, directly or not, by means of our senses. It is preposterous to assert that the process of systematic scientific reasoning arbitrarily excludes “non-physical explanations” because the very notion of “non-physical explanation” is contradictory.
-Me
Books about atheism
This is certainly true. Most critics of modern science, and those who accuse scientists of being hopelessly biased, have (a) never spent time in a lab, or indeed on any scientific research project (a lot of research, particularly in glaciology, oceanography, climatology, zoology, evolutionary biology, astronomy, etc., is done outside the lab), and (b) never read in its entirety, and fully grasped, a write-up of experimentally gathered data in a technical journal.
Funny, these are the same people who are likely to make precisely the error I detailed in the piece above. In general, I adhere to the principle that the world would be a vastly better place if people limited their strong opinions to the things they actually know something about.
"Physical reality” isn’t some arbitrary demarcation. It is defined in terms of what we can systematically investigate, directly or not, by means of our senses. It is preposterous to assert that the process of systematic scientific reasoning arbitrarily excludes “non-physical explanations” because the very notion of “non-physical explanation” is contradictory.
-Me
Books about atheism
My first thoughts without reading further...
Which studies? I want to read them.
I really want to respond to this, but flippant study citation drives me so incredibly mental that I can't find the words to express the rational part of my opinion. All I can think of is St. Lawrence grilling himself.
I'll just walk it off and be back.
Saint Will: no gyration without funkstification.
fabulae! nil satis firmi video quam ob rem accipere hunc mi expediat metum. - Terence