How to Know What and Who to Trust on the ‘Net

Part 1 of this article introduced “ways of knowing” as a framework for understanding how we absorb health, fitness, and strength and conditioning-related information. In this final segment, I’ll tackle head-on the notions of trusting authority and the scientific method as ways of knowing.

Ways of Knowing: Authority

As an authority myself (yes, I said it!), I believe that unchecked acceptance of authority-based information can be one of the most dangerous ways bodybuilders and strength athletes acquire information. I say this for two primary reasons: I often see “authorities” make assertions I disagree with without offering any support or rationale, and the logical fallacy of argumentum verbosium (proof by intimidation) often seems to be at work. This is a fallacy whereby an individual’s propositions aren't questioned because of how they are made or who is making them.

The fictitious description of muscle fibers I concocted (see the introduction of part 1) exposes the issue of trusting in authority. The facts presented and the references provided in that fictitious description of muscle fiber development and morphology were ludicrous to most of you, but the method of delivery was authoritative and filled with scientific terminology.
I am often asked about sources for the information I present in my articles for elitefts™, when those sources are already present in the citations and associated bibliographic references. This makes me wonder to what extent readers are critically evaluating my work.

Bottom line: If you aren't willing to take the time to fact-check your sources of authority, at least on occasion, the possibility of having the wool pulled over your eyes may be a very real one.

Authority: The Lay Press

In the United States, medical developments first come to many of us via the press [news broadcasting and publications (30)]. If you’re reading this, chances are you may very well be an expert or authority (or damn close) when it comes to your occupation and/or your passion. Ask yourself, “In my area of knowledge, how accurate is the press in reporting information?” Why would you believe it might be different in an area you are less familiar with?

The press has been criticized as a source of scientific and health news for the following reasons (33):

Sensationalism: This can create false hope or misrepresent findings.

Biases and conflicts of interest: Any news story could be considered a de facto conflict of interest, as news stories are, in a sense, the product “sold” by the press.

Lack of follow-up: Scientists themselves have a hard time keeping up with the scientific literature, not to mention reporters who aren't trained or specialized in a given area.

Stories that aren't covered: The lay press bears no onus of responsibility to cover particular stories, regardless of relevance.

Bottom line: The lay press is a poor source of scientific information.

Authority: The Internet in General

In the Internet’s infancy, high variability in the quality and accuracy of healthcare-related information was perhaps more of a concern (19) than it is currently, when [some have suggested (26)] consumers may have grown a bit more savvy. However, there are minimal (if any) checks on the assertions and air of authority that permeate blogs, message boards, and other forms of social media. The Internet is the epitome of a free-for-all when it comes to unchecked dissemination of information (21).

Perhaps the worst case of this is intellectual theft: plagiarism. Even over a decade ago, I noticed this with some regularity as a college professor. The data support that I wasn't alone in finding cheaters who had plagiarized their writings from the Internet (32). Just this past week (on June 3, 2014), I suspected that an article on fat loss (14) I read on a major bodybuilding site had been recycled, as it was supported by citations [e.g., this one (8)] for which more up-to-date information (31, 35, 36, 43) exists. A Google search for the text comprising the first paragraph of said copyrighted article revealed it had landed (unreferenced) in a blog (listing just the first five references) and a Facebook post (completely omitting the bibliography). Plagiarism can run the other way, too. A recently published peer-reviewed article in Nutrition and Metabolism (3) contains a clearly plagiarized section from a well-known online site’s discussion (10) of the same topic (alcohol and muscle metabolism).

Bottom line: The Internet is filled with unchecked information, filtered through others’ interpretations, and sometimes plagiarized.

Authority: Scientists

In an ideal world, scientists would conduct their research with Vulcan-like objectivity. However, the scientists currently authoring today’s body of research literature are all humans (as far as I know). Unfortunately, I have been privy to “how to lie with statistics (15)” via research performed in the consumer products industry as well as in academia. Sometimes scientists can slip up a bit, even if not entirely intentionally.
For instance, an abstract may report statistically significant results [e.g., when testing creatine supplementation’s effects on muscle growth (42)] whereas the data presented within the paper itself (see Table 6) do not (9). Sometimes a string of research findings leaves an “interesting” paper trail as well. In the early 70s, two studies suggested that creatine was a potent stimulus for contractile protein synthesis in growing embryonic muscle cells (16, 17). Apparently, after two of the colleagues working together on these publications had split ways (and were working at other educational institutions), one of them (Morales) published a “reexamination” of the data (12) [which had already been published in at least three locations (16, 17, 25)], refuting and contradicting the first set of findings.

Bottom line: Scientists are people who make mistakes and are subject to external influences that may color how their research results are presented.

Authority: Experts/Authors

In my opinion, (expert) authors should face greater scrutiny of their claims, not less. One particular senior science editor of a major bodybuilding magazine has earned a scientific credential (a doctorate) but rarely cites scientific references to substantiate his writings. This is contrary to the expectations of such academic training (I have a doctorate myself) and, for me, quite a letdown. I would love to follow up on many of the notions he puts forward, but I'm left out in the cold because his scientific assertions aren't backed up with scientific references. (I am intentionally not naming this individual for political reasons and to create a bit of irony, as many of you would likely love to know exactly whom I'm referring to. Frustrating, isn’t it?)

A particular form of the logical fallacy of “argument from authority” is argumentum ad verecundiam, which could also be considered an argument “of” authority (40).
More specifically, a person’s notoriety as an expert does not necessarily mean they have expertise in other, albeit related, areas. When falling prey to this logical fallacy, one is essentially trusting the opinion of an authority in a general manner, outside that authority’s area of expertise.

An unfortunate example of the above presents itself in medical settings, where some, but not all, physicians (22) provide nutritional advice despite admittedly lacking appropriate training in this area (20). As a specific example from my personal interaction with allopathic physicians, I have known MD-credentialed doctors to discount the possibility of false positive liver function tests due to recent resistance exercise (my area of expertise). They have done this even though I have explained the biological mechanism to them [leakage of transaminase enzymes from damaged skeletal muscle (24)] and even provided scientific literature to substantiate this effect (29).

Bottom line: Expertise within a specific realm of knowledge does not necessarily warrant authority outside that area of competence.

Ways of Knowing: The Scientific Method

Obviously, I am greatly influenced in my thinking by the findings of (modern) western science. Unfortunately, a deep understanding of the scientific method is not typical of persons not trained in or regularly exposed to this way of knowing. In my opinion, two aspects of the scientific method are especially worth considering for the “lay person”:

Practical versus statistical significance

External validity

In western science, it has become accepted practice to use an essentially arbitrary (37) probability level of 5 percent (about a one in twenty chance) as the criterion for statistical significance. That is, given certain assumptions, if an observed result would occur by random chance less than 5 percent of the time, one concludes it is not due to chance. This gold-standard 5 percent value can be traced back to a textbook published nearly ninety years ago (11). Even before this, however, it was warned that experimental tests may reveal practical (meaningful) group differences even when this (or some other arbitrary) level of statistical significance is not met (4).

As an example, for a bodybuilding judge looking at “conditioning” (leanness or muscularity), statistical significance may be irrelevant. Below are some fabricated data comparing the body fat estimates of “Team Scott” (competitors prepped by me) versus those on the fictional “Team Lean.” An analysis of variance [ANOVA (1)] reveals that the mean difference in body fat estimates of nearly 1 percent (6.57 percent versus 5.62 percent) between Team Scott and Team Lean members isn't statistically significant (the probability statistic ‘p’ is greater than 0.05, or 5 percent). However, I can assure you that any good bodybuilding judge, and the athletes themselves, would easily see a 1 percent reduction in body fat as having practical value (and as noticeable on stage).

“External validity” is a fancy way of referring to how well a study’s findings can be generalized outside the confines of an experiment (34).
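The practical-versus-statistical distinction above can be simulated in a few lines of Python. The body fat values below are fabricated stand-ins chosen so the group means land near the article's 6.57 versus 5.62 percent example (the article's own table is not reproduced here), and a simple permutation test is used in place of the ANOVA; the point is only that a roughly 1 percentage point mean difference can easily fail to reach p < 0.05 with small, variable samples.

```python
import random
import statistics

# Fabricated body fat estimates (%) -- illustrative stand-ins whose group
# means (~6.57 vs. ~5.62) echo the article's example, NOT its actual data.
team_scott = [8.5, 5.0, 7.4, 5.2, 8.1, 5.2]
team_lean = [7.3, 4.2, 6.8, 4.5, 7.0, 3.9]

observed = statistics.mean(team_scott) - statistics.mean(team_lean)


def permutation_p_value(a, b, trials=10_000, seed=42):
    """Two-sided permutation test: the fraction of random relabelings that
    produce a mean difference at least as large as the one observed."""
    rng = random.Random(seed)
    pooled = a + b
    observed_diff = abs(statistics.mean(a) - statistics.mean(b))
    hits = 0
    for _ in range(trials):
        rng.shuffle(pooled)
        diff = abs(statistics.mean(pooled[:len(a)])
                   - statistics.mean(pooled[len(a):]))
        if diff >= observed_diff:
            hits += 1
    return hits / trials


p = permutation_p_value(team_scott, team_lean)
print(f"Mean difference: {observed:.2f} percentage points (p = {p:.2f})")
```

With these numbers, the roughly 0.95 point mean difference comes back with p well above 0.05, i.e., "not statistically significant," even though a bodybuilding judge would plainly see that difference on stage.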
Scientists may be guilty of focusing on “internal validity” by creating strict experimental criteria (subject age, strength, experience level, and gender) that strengthen the (internal) validity of scientific conclusions (13) but limit application of their findings out in the “real world.” I have known bodybuilders to scoff at scientific studies in general, rejecting research findings outright (without having read the studies themselves) because “they [scientists] don’t study guys like us.” There’s a fine line between taking from research what is relevant and disposing of what is not. In my opinion, a minimal effort of reading a given study, in its original form, is needed to discern that line well enough to make an informed decision about the study’s relevance.

On the other end of the spectrum, the lay press in particular has perpetuated the notion of “scientific proof,” whereby science can “prove” a fact (18), as if a scientific study could reveal a particular immutable, undeniable, and omnipresent cause and effect relationship in the universe. Epistemological concerns aside, this sort of “proof” falls within the realm of mathematics and logic, not natural science (27). For example, despite the strong association of smoking with lung cancer, smoking is not a 100 percent guarantee of lung cancer, and non-smokers do get this disease (23). Research can't “prove” that smoking causes lung cancer, but it can provide evidence that smoking increases one’s risk for lung cancer.

Biological responses are also subject to inter-individuality when it comes to processing foods (41), body composition adaptations to dietary perturbations (6, 7), and the metabolism of foreign chemicals [toxins, drugs (44)] and even over-the-counter supplements (38). Muscle growth adaptations to resistance training vary as well (2, 5, 28), and it’s even possible for some trainees to lose muscle fiber size when all other subjects in a given study are growing (39).
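The inter-individual variability just described can be made concrete with a toy data set. The percent changes below are hypothetical, invented for illustration (not taken from the cited studies); they simply show how a group mean can indicate "growth" even while an individual subject loses fiber size, which is why group-level statistics don't guarantee individual outcomes.

```python
import statistics

# Hypothetical percent changes in muscle fiber cross-sectional area after a
# training program. Invented numbers for illustration only -- not data from
# any cited study.
responses = [12.0, 8.5, 15.2, 3.1, 9.8, -2.4, 6.7, 11.3]

mean_change = statistics.mean(responses)
shrinkers = [r for r in responses if r < 0]

print(f"Group mean change: {mean_change:+.1f}%")      # positive on average
print(f"Subjects losing fiber size: {len(shrinkers)}")  # yet one subject shrank
```

A reader shown only the (positive) group mean would never suspect that one "subject" in this toy cohort actually lost fiber size.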
Much to my chagrin, science has not “proven” that resistance training increases muscle size.

Bottom line: Science delineates phenomena of the natural world but doesn't “prove” how it works.

Final Thoughts on Who to Trust

From a practical standpoint, I doubt many of you reading this will have the time or inclination to perform a full background check on each and every author/expert you come across. However, if you’re formulating your philosophy or founding a good bit of your knowledge on what someone else tells you, you might look for the following in an expert or authority you trust:

Honesty: Does the expert ever say, “I don’t know”? (Each expert’s knowledge is limited.)

Creativity: Is the expert creative? Does he/she have enough knowledge and intelligence to speculate (and acknowledge doing so) in a productive manner?

Credentials: What are his/her credentials, really? Who has the person worked with, trained under, and/or coached? Where and what has he/she studied, and, if applicable, is the academic degree relevant and from a real, appropriately accredited institution?

Consistency: Does what an expert professes, taken as a whole, paint a consistent picture? Or are his/her writings a collection of concepts that, overall, are confusing and inconsistent?

Personally, as a consumer of information, it has served me well to consistently remind myself to be open-minded, despite what I think I know. When in doubt, I first attempt to understand and only then to criticize. Lastly, I hope I’ve provided ample citations for you to do just that when it comes to this article, and I hope I’ve broadened your perspective such that you consider doing so as you continue to peruse the Internet.








