From ‘I smell a rat’ to ‘when pigs fly’, bad science makes its rounds

From ‘I smell a rat’ to ‘when pigs fly’, bad science has been making the rounds of late. The multi-authored article “A long-term toxicology study on pigs fed a combined genetically modified (GM) soy and GM maize diet” reports that pigs fed a diet of only genetically modified grain show a markedly higher incidence of stomach inflammation than pigs that ate conventional feed.

This paper is fresh off the press and ready for ravenous consumption by anti-GMO enthusiasts. However, post-publication, the paper and its evidence are failing independent scrutiny on many fronts:

The Evidence: David Tribe reviews the paper. He says, “It’s what some call a fishing expedition in search of a finding, and a known pitfall of animal feeding trials on whole foods…” Tribe points out (among other things) that some of the study’s observations might be attributed to compositional differences in the varieties of soybean or corn fed to the pigs: “…there is relatively little information in the paper about nutritional formulation, methods used for producing the pig diets, storage time for the grain and which particular varieties of grain were used in the diets.”

Update (June 14): Anastasia Bodnar expands upon this further in her Biofortified post, “Lack of care when choosing grains invalidates pig feeding study”: “The authors aimed to do a real world study, with pig feed that can be found in real life. It intuitively seems right to just go get some grain from some farms. After all, that is what pigs eat, right? Unfortunately, it’s just not that simple…To hone in on any differences that may be caused by the GM traits, they would have to use feed with one or more GM traits and feed that doesn’t have the GM traits but that is otherwise as similar as possible. If the feeds aren’t very similar, then we can’t know if any differences in the animals is due to the GM traits or due to something else.”

Update (June 14): Dr. Robert Friendship (via Terry Daynard), a swine expert at the University of Guelph, points to methodological problems with “visual scoring” and the assessment of ‘inflammation’: “…it was incorrect for the researchers to conclude that one group had more stomach inflammation than the other group because the researchers did not examine stomach inflammation. They did a visual scoring of the colour of the lining of the stomach of pigs at the abattoir and misinterpreted redness to indicate evidence of inflammation. It does not. They would have had to take a tissue sample and prepare histological slides and examine these samples for evidence of inflammatory response such as white blood cell infiltration and other changes to determine if there was inflammation.”

Andrew Kniss clearly demonstrates the failings of the statistical analysis, poking holes in the study’s evidence. He states, “If I were to have analyzed these data, using the statistical techniques that I was taught were appropriate for the type of data, I would have concluded there was no statistical difference in stomach inflammation between the pigs fed the two different diets. To analyze these data the way the authors did makes it seem like they’re trying to find a difference, where none really exist.”
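Kniss’s underlying point – that ordered severity categories should be tested jointly rather than with a separate comparison per category – can be sketched roughly. The counts below are entirely hypothetical (not the study’s data), and the chi-square test shown is just one standard joint test for a diet-by-severity contingency table, not a reconstruction of Kniss’s own analysis:

```python
# Hypothetical counts for illustration only -- NOT the study's data.
# Rows: diets (non-GM, GM); columns: inflammation severity
# (nil, mild, moderate, severe).
table = [
    [4, 31, 29, 9],
    [8, 23, 18, 23],
]

# Expected count for each cell under independence of diet and
# severity: row_total * column_total / grand_total.
row_totals = [sum(row) for row in table]
col_totals = [sum(col) for col in zip(*table)]
grand = sum(row_totals)

# Pearson chi-square statistic over the whole table at once.
chi2 = 0.0
for i, row in enumerate(table):
    for j, observed in enumerate(row):
        expected = row_totals[i] * col_totals[j] / grand
        chi2 += (observed - expected) ** 2 / expected

dof = (len(table) - 1) * (len(table[0]) - 1)  # (rows-1)*(cols-1) = 3
print(f"chi-square = {chi2:.2f} on {dof} degrees of freedom")
```

A single joint test like this controls the overall error rate; running a separate significance test on each severity category multiplies the chances of finding a spurious “difference” somewhere, which is essentially the fishing-expedition problem Kniss and Tribe describe.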

Another matter worth mentioning: in the experiment, half of the pigs were sick with pneumonia. [update: I originally wrote that 50% of the pigs *died* of pneumonia – they did not; they were ill with it – my error] This is an indication of poor animal stewardship. In cases like this, the only appropriate course is to throw out the results and do the study over – next time with a better methodological approach, and better care of the pigs.

Credibility: This was the first time I had ever heard of the Journal of Organic Systems. As Mark Lynas observes (in GMO pigs study: more junk science), “The journal does not appear in PubMed, suggesting it is not taken very seriously in the scientific community.” In the world of science, publishing a sound piece of work in a good journal is an indicator of quality and credibility. I mean, think about it: if this study were a ground-breaking piece of ‘all that,’ wouldn’t it have been published by Nature or Science? At the very least, the paper would have been picked up by a journal within the study’s subject area.

Bias: You need only glance at the acknowledgements list at the end of the paper to see that it is a ‘who’s who’ of the anti-GMO world. That makes the statement “The authors declare that there are no conflicts of interest” ring rather hollow. One author, Howard Vlieger, is the President of Verity Farms, Iowa, an organization that markets itself as non-GM. Judy Carman (lead author) is widely known as a long-time anti-biotech campaigner. She even has a website called ‘GMOJudyCarman’ (launched in late May – timely, no?).

Other interesting bits? In an April 2008 interview, Dr. Carman stated that her work received funding from Jeffrey Smith and the Institute for Responsible Technology. Jon Fagan, listed in the acknowledgements, is the head of Genetic-ID, the company that conducted the DNA analysis for the study, confirming that the GM corn used contained a combination of NK603, MON863 and MON810 genes (page 40). Based in Fairfield, Iowa, with satellites the world over, Genetic-ID is a GMO testing company and part of a convoluted network of actors with vested anti-GM interests, weird politics, Vedic-science-y stuff, and a long list of celebrities (see here).

It would seem that Carman et al. have taken some pages from Seralini’s ‘playbook’ – but there are no ‘silver linings’ here. This is just another exercise to “prove” that GMOs are dangerous rather than to objectively investigate them. Given the conflicts of interest of the authors and affiliates involved, what other conclusion could they come to? The science, however, doesn’t pass the sniff test. It’s a case of faulty methodology and poorly interpreted data magically making it through the peer-review process. Throw in some colorful (scary) pictures of pig uteri for good measure, add a bit of bias and some credibility issues, and you have the makings of some really ‘shoddy science’.

– – –

  • Check out Fourat Janabi’s post @Fouratj, “Pigs, GMOs and Bullshit”: Fourat provides a point-by-point critique of the Carman et al. article – easy to consume with none of the BS. :O)
  • Then there is this post from Julee @sleuth4health who quips, “At this point, anybody who’s ever judged a High School Science Fair has got to be thinking “F.”” 
  • Catalyzing Illinois writes, in Something Smells and its not the Pigs: “We are not dealing with ‘disinterested and objective science’ here.”
  • Contrary to Popular Belief: Latest anti-GMO study: more bullshit

33 thoughts on “From ‘I smell a rat’ to ‘when pigs fly’, bad science makes its rounds”

  1. Pingback: Carman-Vlieger “Pig” Study Links | Vegan GMO

  2. Nice roundup of the paper reviews. In the paper, I think they state that >50% of the animals had pneumonia, not that they had died. The mortality was still high at 13-15%, though. Bad animal care, no matter how you look at it 🙁

    • Can you blame them? It always looks legit. Bad Science + PR = Public Buy-In. Let’s face it, the ‘real’ scientists take a more understated approach in presenting their work. None of this doomsday, apocalyptic-type media manipulation stuff.

      I am a social scientist (not a natural or engineering scientist). It has taken me YEARS to understand this stuff (and I still have so much to learn). I have been working with scientists for decades, which has been extraordinarily helpful. So, even if I don’t completely understand the science, I can instinctively smell a bad study a mile away.

      The problem is that most people are so far removed from the Ivory Tower (a Tower that, I suggest, is quickly turning ‘beige’). Scientists need to engage more… for science’s sake.

  3. Pingback: Pigs, GMOs & Bullshit « Random Rationality

  4. I looked up the Journal of Organic Systems’ impact factor, which is a measure of how many times a year the average article in the journal is cited by other articles. For example, one of the highest-impact science journals is Nature, with an impact factor of 36.2. This means that, on average, every Nature article is cited in other peer-reviewed publications about 36 times a year. Science and PNAS have similar numbers.

    This journal had an impact factor that wasn’t measurable. Why? Because it was so infrequently cited that no rating could be established, meaning it was close to 0.

    In most top-tier research institutions in the USA, publications by faculty in journals with less than 10 impact factor won’t count towards tenure.

    In general, you shouldn’t use impact factor to dismiss an individual article, but in this case, it adds to the other information here. It’s junk science.
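The figure the comment cites is, more precisely, the standard two-year impact factor: citations received in a given year to items the journal published in the previous two years, divided by the number of citable items it published in those two years. A minimal sketch of that calculation (the figures below are hypothetical, chosen only to produce a Nature-like value):

```python
def two_year_impact_factor(citations: int, citable_items: int) -> float:
    """Two-year journal impact factor: citations received this year
    to articles the journal published in the previous two years,
    divided by the number of citable items published in those years."""
    return citations / citable_items

# Hypothetical figures for illustration only:
print(two_year_impact_factor(3620, 100))  # -> 36.2
```

A journal that is almost never cited has a numerator near zero, which is why no measurable rating could be established in this case.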

    • Michael, this may have some merit (see below), but the way you wrote it makes me really pull my hair out.

      cami, please note that judging a paper’s contents by its cover (i.e. by its JIF) is regarded as bad practice. The JIF is a tool for librarians, and not useful for judging the quality of papers. The JIF does not correlate, or correlates only very weakly, with the quality, novelty or importance of the work. If there are any correlations, they are negative: the higher the JIF, the lower the quality. One thing predicts the JIF very well: the rate of retractions. High JIF generally means flashy stuff, little more. Judging scientists by their number of papers in high-ranked journals is hence the worst practice one can imagine… the reason may be that physicians, biologists and other life scientists are pretty much math & statistics illiterates (me included) and are used to using methods they do not fully understand. E.g. in chemistry or mathematics (IIRC) nobody cares about the JIF.

      Please check out the work by Ferric Fang or Björn Brembs regarding the bad effects of over-use of impact factors.

      That said, this still seems like a shoddy journal. If the papers are not cited at all, this certainly does mean something. There are a lot of journals out there that have only “pro forma” peer review.

      • I agree on the judge/cover thing. But this paper was judged long before I was aware of the journal’s JIF. The (poor) peer-review process of the journal is an even MORE important factor in all this. A factor that feeds into low JIF? Whatever. Many aspects of the study are poor. Period.

  5. Pingback: Richard Dawkins talks about GMO crops

  6. Pingback: Fear Mongering: Bad Science/Pseudo Science « Catalyzing Illinois

  7. Pingback: The New Pig Study Gets An “F” In Science | SLEUTH 4 HEALTH

  8. Yep Cami. Nailed it! Hardly proof that “GMOs turn pig stomachs to mush” as the health ranger so graphically stated in his recent headline. Send this rubbish to the National Enquirer where it belongs!

  9. Yes, definitely, and I’m sorry if it looked like I wanted to give the poor paper any support. Just had to say something against the JIF thing. Sorry for that.

    • Please don’t apologize! Sorry if I came off “sharp” – that wasn’t my intention. The JIF thing is an academic thing that DOES matter, on a number of levels. To be honest, I don’t really follow JIF. If I find something peer-reviewed, relevant, interesting and related to my area of expertise, I read it. And, if applicable, reference it. It doesn’t really matter what the JIF is. Some academic departments really emphasize JIF, though, when they evaluate for tenure. Not an uncommon approach. But as an interdisciplinary academic doing interdisciplinary research, I find it hard to target high-JIF journals, as they are very uni-disciplinary in their approaches. So, my work is not often published in high-ranking journals. I don’t think that I am alone (as an academic researcher) in this.

      I think that when we begin to critically analyze scientific articles, as in this case, we deconstruct ALL aspects of the paper as well as the publication. The JIF of this journal is non-existent (so it seems) but, as I said, more problematic was the quality of the peer review on the piece.

  10. Note that they didn’t get mammary tumors like the Seralini rats. At the very least it says that pigs don’t work like rats and that humans likely don’t either. It also could mean that the rat study, the pig study, or both might be goofy. Hmm.

  11. So some of the rebuttals are ad hominems, attacking the journal rather than the science, or critiques of the study’s methodology despite the same methodology having been used in similar studies which GMO manufacturers point to in order to show their safety?

    Pretty weak rebuttals indeed. The examination of the pigs’ stomachs wasn’t done by the study’s authors; it was done by a third party through double-blind means. So calling bias is rather pathetic.

    It’s a weak criticism to say that since the strains of non-GMO feed weren’t identified the study is invalid. The point of the GMO companies is that there is absolutely no difference between GMO and conventional. If the author of that blog is saying such then, by law, all GMOs should be taken off the market since their approval is based upon there being no difference. It’s essentially a weak rebuttal anyway and just reveals the desperation of the science deniers. But if the critics think this a valid criticism then present some science rather than whining on a blog. It’s always amusing how the science-denying pro-GMO crowd thinks a personal blog is more valid than a peer-reviewed journal, but that’s what happens when one side is desperate.

    The most amusing thing is this study used a large sample size while the pro-gmo studies used extremely small sample sizes (maybe someone can point me to those criticisms on this blog, I’m guessing they don’t exist). If they had used only 12 pigs then the first criticism would have been the small sample size, yet that wasn’t a criticism with the corporate funded studies. Go figure.

    The criticisms of the methods of raising the pigs are extremely weak. The pigs were raised in a commercial environment, not a lab setting, because it’s testing real market conditions. Any scientist worth their salt knows a different response is derived from field vs. lab settings. Why this was even a criticism is beyond me.

    • Thanks Doug for contributing. My response…

      In this ongoing debate around GMOs, inevitably the loudest voices are often those hanging off at the fringes of the debate – on either side. Ad hominem-type rebuttals (to me) appear to be part of the debating strategy. I see it on both sides of the issue.

      That aside, let’s look at the ‘science’. Journal quality and reputation means “a lot” when one looks at the science. It is an indicator of the quality and efficacy of the peer review process. The fact that this study was published in a journal with a subject focus that lies outside of the subject matter of the study is problematic to say the least. Thus, one has to consider the study and its results in the context of those publication limitations.

      Re: examination of pigs’ stomachs. Evaluating based on “visual scoring”? Really? It matters little if it was “blind” or conducted by a “third party” when it is an inappropriate and scientifically inaccurate methodological approach. What about histological testing? It is my understanding that when attempting to measure or determine the level of inflammation, including these types of tests in the methodology is a must.

      Re: feed. I think that I will just leave you to read Anastasia’s most excellent rebuttal/report on the study, “Lack of care when choosing grains invalidates pig feeding study”. I think that she covers everything from the perspective of varieties/strains of feed and the limitations of the study from that perspective.

      Can you please provide me with the references of those so-called “pro GMO”, “corporate funded” studies that use extremely small sample sizes?

      ‘Real world’ conditions clearly lead to issues for ‘control’ in terms of experimentation (and those ‘real world’ conditions weren’t really clearly outlined by authors). What factors in this ‘real world’ context account for the relatively higher mortality rates for the pigs? As a reviewer, I would want to know this.

      You are hard pressed to convince me of the virtues of this study on many fronts. It is NOT an example of ‘good’ science. Lots of holes. That’s problematic in my mind. We can and should do better – I don’t care who funds it or what biases are in the ‘mix’. Period.

  12. Responding to a scientific article by criticizing the journal isn’t science, it’s a cheap cop-out. Andrew Wakefield published in the Lancet; that didn’t elevate his crap study above refutation. What did was the poor science, and the poor science was what was argued. So a journal’s prestige, or in the critic’s case, his lack of knowledge about the journal, isn’t a valid criticism of the science.

    How was the methodology of examining the inflammation of the stomachs different from the other studies? Also, do you conclude that the studies that show that feeding bt grain to pigs doesn’t produce inflammation to be invalid because they didn’t list the brand of non-gm feed? I just wonder what leads you to blindly follow the results from the shorter term, smaller sample size studies that have some of the same problems that you listed. I assume you know about them since the study does include references and you’ve already concluded that the methodology is wrong therefore you must have known if the methods differed from previous studies. If you don’t bother reading the studies then how do you get off criticizing them and automatically conclude the studies you haven’t read are valid? That’s evidence of obvious bias on your part.

    As for the real-world method: the pigs were grown as typical commercial pigs are grown, on a pig farm from piglet to death, where they could experience the conditions that pigs for market typically experience.

    • Doug:

      I don’t recall implying that critiquing the quality/subject matter of the journal was what you refer to as “science”. What I said is that it is certainly a factor to consider. In academia, the relevance, quality and citation rates (within the relevant subject matter) of a journal publication DO matter. They are indicators (among many) not only of the quality of the peer-review process attached to a journal, but they can also say quite a bit about the article itself. I raised some good points in this blog post for consideration when evaluating the quality of a journal article. I stand by them. These are factors that I look at when examining all scientific articles.

      Again, can you please provide specific references for these shorter term, smaller sample size studies? I am not in a position to comment on this until I can actually review these articles that you suggest that I apparently so “blindly” follow.

      This blog is a forum for debate. I make my observations and I articulate them. I welcome your comments and your challenging questions. My goal is to provide some balance when there is (what I view to be) overwhelming evidence of misinformation circulating in the media, questionable scientific practice and/or poor or inappropriate interpretation of results. Bias? If by ‘bias’ you mean ensuring that others have new insights on a science issue (or interpretations of scientific output); something that enables others to think more critically about the information that is presented to them – then, guilty as charged.

      Real world: Yes, but those real world conditions need to be outlined in the study. If not, parameters of the study (methodology, analysis and results) are incomplete.

      • Using your method of argumentation I can conclude that since this isn’t a widely read blog, all the comments you make are not worthy of taking seriously. That’s essentially your argument for dismissing the science in a journal: you make the subjective conclusion about its value based upon alleged readership. That sets up bias point number one.

        Bias point number two is that you didn’t read the study you are criticizing. It makes mention of the only other studies done regarding pig inflammation and GMO consumption, yet you plead ignorance of those studies – and you formed a conclusion about the methodology of a study whose conclusion you didn’t like while ignoring whether or not the authors were attempting to replicate the methods of the prior studies.

        Alas, here are the other studies:

        The full studies are available elsewhere online but I don’t care to search for them at the moment.

        As for the comments by one of the critics regarding the need to outline the different feeds because of differing nutrition content, the science says that is unnecessary since there is no difference. Therefore, Bodnar’s criticism is irrelevant and based merely upon her biased assumptions.

        As for the method of the pigs being raised, it was mentioned in the study that the pigs were raised using typical commercial means. The pigs were raised by pig farmers who weren’t aware of which pigs were in which test groups. That was all mentioned in the study you didn’t read.

      • Awesome Doug. Further to your comment, “…since this isn’t a widely read blog [“mine”], all the comments you say [“me”] are not worthy of taking seriously.”

        Hey, let’s not waste your time or my time.
        Oh wait. This is *my* blog. Apparently it’s *your* time that has been wasted on this blog that “is not widely read”. Not a good use of your time, Doug. Just sayin’.

        Enjoy what’s left of your weekend. 🙂

  13. Pingback: GMO study is pseudoscience | Violent metaphors

  14. Pingback: There’s no room in science for provocateurs | Cami Ryan

  15. Pingback: Verdict: promise not YET met #GMOs | Cami Ryan

  16. Pingback: Debunking “10 scientific studies proving GMOs can be harmful to human health” | The Logic of Science

  17. Pingback: On GMOs and effective environmentalism | Stijn Bruers, the rational ethicist

  18. Great blog, great responses to comments and… I admire your patience. I wonder why the belligerent commenters are always the ones with the least understanding of the information they’re discussing? Maybe they think that’s how arguments are won?

    • Thanks. It’s an older post but it is a topic (or related to other topics) that seems to continually re-emerge.

      We humans are interesting creatures. We are very attached to our beliefs, and we will go to great lengths to protect them, even when presented with information that wholly discounts those beliefs. I guess one way to win the war against our fundamental human cognitive habits is not to get attached to any of our own hypotheses. That can be a constant struggle.
