GMOs, toxins and unborn babies… a deeper examination of the study.

May 20, 2011

“GM food toxins found in the blood of 93% of unborn babies” (see: http://www.independent.ie/lifestyle/parenting/toxic-pesticides-from-gm-food-crops-found-in-unborn-babies-2652995.html)

This headline (or some version of it) has been making the rounds in the media these days.  It refers to a study done in Quebec. Aziz Aris and Samuel Leblanc claim to have detected herbicides and/or the insecticidal protein Cry1Ab in the blood of Canadian women, pregnant or not, and in umbilical cord blood.  Their study was recently published in the journal Reproductive Toxicology under the title “Maternal and fetal exposure to pesticides associated to Genetically Modified Foods in Eastern Townships of Quebec, Canada”.

In April, I received an anonymous email from someone who challenged me on the results of this study (amongst other things…):

“While I can see the potential benefits of GMOs, I am uncomfortable with how readily pro-GMO scientists dismiss the gathering evidence of potential harmful impacts (such as the very recent study finding the BT toxin in mother’s breast milk).”

My response, which points to some problems with the study’s methodological approach, was as follows:

“I think that you are referring to the article by Aris et al. and their study on sera (blood), as opposed to breast milk, published in a recent issue of Reproductive Toxicology (2011).  I read the article and, quite frankly, have some questions regarding the methodology.  First, there seems to be a lack of controls in the experimental approach.  What are the serum levels of female organic farmers who spray Bt versus those of conventional female farmers who plant Bt soy, corn and cotton? Bt is one of the most effective pesticides used in the organic industry and, generally, the number of applications is even higher on organic crops than on conventional/GE ones.  What are the serum levels of women who eat no corn or soy products and do not buy organic (and so have no exposure)?  The lack of controls in this study is alarming and could account for false positives in the results (I refer you to the paper in J. Agric. Food Chem. 2005, 53, 1453-1456:  “To avoid misinterpretation, samples tested positive for Cry1Ab protein by ELISA should be reassessed by another technique”).  In my opinion, the Aris et al. study is only moderately interesting and very, very incomplete.”
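To make the false-positive point concrete, here is a minimal back-of-the-envelope sketch in Python. The numbers (true exposure rate, assay sensitivity and specificity) are entirely hypothetical and are not taken from the Aris et al. paper; the sketch only illustrates how an imperfect ELISA, run without negative controls or a confirmatory technique, can produce a high apparent detection rate even when true exposure is low.

# Hypothetical illustration only: none of these numbers come from the Aris et al. study.
# It shows how assay imperfection alone can inflate the apparent "detection" rate
# when there are no negative controls and no confirmation by a second technique.

def apparent_positive_rate(true_exposure, sensitivity, specificity):
    """Fraction of samples testing positive, given the true exposure prevalence
    and the assay's sensitivity and specificity."""
    true_positives = true_exposure * sensitivity
    false_positives = (1 - true_exposure) * (1 - specificity)
    return true_positives + false_positives

# Assumed (made-up) values:
true_exposure = 0.05   # 5% of samples genuinely contain the protein
sensitivity = 0.95     # the assay detects 95% of genuinely positive samples
specificity = 0.70     # the assay wrongly flags 30% of genuinely negative samples

rate = apparent_positive_rate(true_exposure, sensitivity, specificity)
print(f"Apparent positive rate: {rate:.0%}")  # roughly 33%, despite only 5% true exposure

Again, these numbers say nothing about the actual study; the point is simply that, without proper controls and a second, independent technique, an apparent positive rate tells you little about true exposure.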

As far as I can tell, there is a real credibility problem here, and I question the peer review process.  This concern is echoed in another response to the publication:

http://www.marcel-kuntz-ogm.fr/article-aris-72793155.html

So, how do we strike a balance between “expedited publication” (which, after a long and laborious research process, is the “reward” the researcher is after) and “thorough, competent review”? (I cover this a bit further in my blog entry “Peer review? Peer rejected?”)

Peer review, improperly executed, can lead to devastating results.  Take, for example, the fallout from an article published in The Lancet in 1998 (and later retracted) that claimed a connection between the MMR vaccine and autism.  Those claims (based on a study that was improperly reviewed) rippled through the media and caused an uproar (fuelled by the celeb-fluence of Jenny McCarthy, I might add), which ultimately led to a drop in childhood vaccination rates (bringing with it a whole other set of problems).

Science is a good thing.  But key to good science is a set of checks and balances that monitors and challenges results and ensures accountability in the process.

The peer review process…  Maybe it needs to be ‘peer reviewed’?

Peer review? Peer rejected?

Hey academics! Ever had a paper rejected? Hell, ya. We all have. If so, you might want to check out the latest issue of “The Scientist”. It features articles on the failings of the peer review process and how some journals are trying to address them…

In “I Hate Your Paper”, journalist Jeff Akst outlines three problems (and some solutions):

1. Reviewers are biased by personal motives
Resolutions: eliminate anonymous peer review, run open peer review alongside traditional peer review

2. Peer review is too slow, affecting public health, grants and credit for ideas
Resolutions: shorten publication time to a few days, bypass subsequent reviews and publish first drafts

3. Too many papers to review
Resolutions: recycle reviews from journals that have rejected the manuscript, wait for volunteers and reward reviewer efforts

If a paper is rejected simply because it doesn’t belong in that journal, aren’t the reviews still valid? Good point, I say.
How do we strike a balance between “expedited publication” and “thorough, competent review”? Hmmm…

http://www.the-scientist.com/2010/8/1/36/1/

Facilitating the Public Peer Review Process

Peer-to-Patent
“community patent review”
“…Peer-to-Patent opens the patent examination process to public participation for the first time. Become part of this historic program. Help the USPTO find the information relevant to assessing the claims of pending patent applications. Become a community reviewer and improve the quality of patents.”

http://www.peertopatent.org/