* * * * * * * * * *
Of course, despite life's busyness, I have been keeping up with various bits and pieces of the scientific literature. As much as is possible, anyway, what with school and work and my recent engagement - I asked, he said yes! In any event, allow me to pander to something other than my rather uninteresting personal life, like... fat loss from a pill? One could only hope.
Salehpour et al. (2012) purport to show that, after 12 weeks of vitamin D supplementation, 39 "healthy" overweight, non-pregnant/non-lactating females lost more body fat than did a parallel cohort of 38 comparable overweight females taking a placebo.
First, everything was free-living and self-reported, but what else is new in diet-related research? So, naturally, all the same standard limitations apply here. Second, although this was a supplement trial, they collected food-frequency questionnaires (FFQ) and 24-hour dietary records to try to ensure standardization across the board, so that nobody was getting away with significantly lower food intakes and skewing the statistics in favor of one arm or another, for instance. (They tried to standardize physical activity, as well.) Per usual, these techniques are quite poor, however "validated" nutrition scientists claim they are. If you don't already understand why this is, Schoeller, et al. (2013) provide a good explanation. Luckily, we don't need them to be great for this particular study, and I'm glad they tried to do something to standardize the groups - sometimes, a little something is better than nothing. They also counted, at weeks 4 and 8, how many pills each participant in both the intervention and placebo arms had consumed, and adherence was estimated at roughly 87%. You might be tempted to complain that this number isn't higher, but it might as well be 90%, and a score of 9/10 (essentially an A-) is pretty darn good. What little added benefit one might have achieved, hypothetically, with one extra dose that was accidentally skipped when rushing out the door in the morning for work, is probably small enough for our purposes as not to be worth considering. Asking people to give A+ effort at all times simply doesn't happen in the general populace. We are interested in real life, after all. But there are other reasons to be skeptical of their resultant data, which I will cover momentarily.
Participants were randomly allocated from an 85-person list to receive either 25 mcg/day of cholecalciferol (vitamin D3) from seal oil or 25 mcg/day of lactose (placebo), although they do not mention how this randomization was performed (e.g. whether it was done by random number generation or some other such thing). This is a minor bellyache, but I still prefer to see all the data, and since the Nutrition Journal is open access and there are no page number limitations that I am aware of, there's really no excuse to publish papers without the full sequence of methods, laid out plainly for all to see and validate, or even replicate independently if they chose.
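Since the paper never describes its allocation method, here's a minimal, purely hypothetical Python sketch of what simple 1:1 randomization by random number generation might look like for an 85-person list (the function name and seed are my inventions, not anything from the study):

```python
import random

def randomize(n_total=85, seed=None):
    # Purely illustrative: shuffle participant IDs and split them 1:1.
    # (The paper never says how its own sequence was generated.)
    rng = random.Random(seed)
    ids = list(range(1, n_total + 1))
    rng.shuffle(ids)
    half = n_total // 2
    return {"vitamin_d": ids[:half], "placebo": ids[half:]}

arms = randomize(seed=42)
print(len(arms["vitamin_d"]), len(arms["placebo"]))  # 42 43
```

With an odd list, one arm ends up a person larger, which is presumably how the study arrived at 39 vs. 38 completers.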
The study went on for 12 weeks. Subjects' food frequency questionnaires were reviewed once per month (the authors never mention coaching participants on their 24-h diet records to improve adherence there), so presumably three times throughout the course of the study, which means only the first two really mattered much for the purposes of keeping them on course, assuming such reviews do so at all.
At baseline, the randomized groups were comparable across all measures, with the exception of serum calcium and fat-free mass (2.2 vs. 2.3 mmol/L, and 44 vs. 46 kg, respectively). These figures are even enough that I'm not sure the differences matter. One rather important measurement these authors did not take was resting metabolic rate (RMR), which they admit in the discussion.
Over the life of this study, eight participants dropped out for various reasons, bringing the sample down from 85 to 77 persons. No big deal. But then, instead of coping with these dropouts - the noise in the system, as it were - by following through with their originally proposed intention-to-treat analysis, the authors replaced it with a per-protocol analysis, which means they only incorporated and analyzed data from those who actually completed the intervention and adhered to the protocol as asked. Ranganathan, Pramesh, & Aggarwal (2016) do a good job of explaining why this approach can be problematic, but, essentially, doing only the per-protocol analysis undermines the randomization and biases subsequent interpretations of the data.
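To see why this matters, here's a toy Python sketch with made-up numbers (not the study's data) showing how analyzing completers only can inflate a between-arm effect relative to a conservative intention-to-treat approach that counts dropouts as "no change":

```python
# Hypothetical fat-mass changes (kg) for ten randomized participants per
# arm; None marks a dropout. These numbers are invented for illustration.
treatment = [-3.0, -2.5, -2.8, -3.1, -2.6, None, None, -2.9, -2.7, None]
placebo = [-0.5, -0.3, -0.6, -0.4, -0.2, -0.5, -0.4, -0.3, -0.6, -0.5]

def mean(xs):
    return sum(xs) / len(xs)

# Per-protocol: completers only.
pp_effect = mean([x for x in treatment if x is not None]) - mean(placebo)

# Conservative intention-to-treat: impute each dropout as "no change" (0 kg).
itt_effect = mean([0.0 if x is None else x for x in treatment]) - mean(placebo)

print(round(pp_effect, 2), round(itt_effect, 2))  # -2.37 -1.53
```

The per-protocol estimate looks more impressive precisely because the people who didn't (or couldn't) stick with the protocol have been quietly dropped from the denominator.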
Common though it is, unfortunately, the authors did not report much in terms of their statistical analyses: they merely posited an alpha (significance threshold) of p < 0.05, utilized analyses of covariance (ANCOVA) for biochemical variables**, and then computed Pearson correlation coefficients to try to show some kind of relationship between 25(OH)D/iPTH and body fat mass. I'd like to have known explicitly what their 1 − β (statistical power) was, but I can only assume it was set at 0.8 (80%), as most of these trials tend to be. Working off of this assumption, an alpha of 0.05 and a power of 0.8 would make the Cohen's d (effect size) approximately 0.32, which translates to a Pearson correlation coefficient (r) of 0.16, a very small linear relationship at best. (For those interested, a d of 0.32 and an r of 0.16 would equal a number needed to treat (NNT) of approximately 11, meaning 11 people would have to be treated with this supplement in order for 1 person to glean whatever benefits there might be, assuming there are any.)
**I must say, I am happy to see that the authors knew the importance of quantifying biochemical variables in the serum (vitamin D, PTH, etc.) and of correlating them back to the outcome measures of interest. It seems self-evident that this ought to be done, but you might be surprised at how many studies purport to show that a supplement, drug, or substance does or does not produce some benefit or harm when the authors never actually measured its concentrations in the serum, in which case we have no idea what they're "measuring" at all.
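For what it's worth, the d-to-r translation above can be checked with the standard equal-group-size conversion formula, r = d / √(d² + 4). A quick sketch:

```python
import math

def d_to_r(d):
    # Standard conversion for equal group sizes: r = d / sqrt(d^2 + 4)
    return d / math.sqrt(d**2 + 4)

r = d_to_r(0.32)
print(round(r, 2))           # 0.16, matching the figure above
print(round(r**2 * 100, 1))  # ~2.5 (% of variance explained)
```

Put another way, a relationship of that size would explain roughly 2.5% of the variance in the outcome, which is why I call it very small.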
So what were their results?
Serum 25(OH)D levels increased in the intervention arm, as would be expected, since 25 mcg/day is about 1,000 IU, which brought their serum values from roughly 15 ng/mL at baseline to approximately 30 ng/mL at the end of the study. 1,000 IU isn't an awfully large amount by common standards, but was probably a reasonable dose. It also strikes me that the subjects' initial values were very low (< 20 ng/mL). Should we really be looking at two separate trials, here? One to demonstrate the efficacy of this kind of intervention in people with normal 25(OH)D levels, and another to demonstrate whether it is efficacious in those with abnormally low levels of 25(OH)D, such as the participants in this trial? Something to ponder, anyway.
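(For reference, the dose conversion above is just the standard 1 mcg of vitamin D3 = 40 IU:)

```python
# Standard conversion: 1 mcg of vitamin D3 = 40 IU.
IU_PER_MCG = 40
print(25 * IU_PER_MCG)  # 1000 IU/day
```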
Whereas serum iPTH values decreased slightly in the intervention arm (-0.26 pmol/L), they increased slightly in the placebo arm (0.27 pmol/L), p < 0.001.
Body weight change was minuscule and non-significantly different between groups: the intervention arm lost 0.3 kg (0.7 lb.) and the placebo arm lost 0.1 kg (0.2 lb.).
They seemed to be suggesting that waist circumference was somehow meaningfully different between groups, where the intervention arm lost 0.3 cm around the waist, while the placebo arm gained 0.4 cm, but this was a non-significant change (p = 0.38). Besides, even if it were statistically meaningful - and it's not - after 12 weeks of fairly religious pill popping, we're talking about a difference of less than one centimeter, here!
Hip circumference decreased in both groups, although non-significantly (-0.39 cm vs. -0.9 cm for the intervention and placebo arms, respectively; p = 0.36).
Body fat mass supposedly decreased in both groups, where the vitamin D group (intervention arm) supposedly lost 2.7 kg (~6 lb.), and the placebo arm supposedly lost 0.4 kg (~1 lb.). So, let me get this straight: there's a 0.5 lb. difference in body weight between arms, but a 5.0 lb. difference in fat mass between arms? How on earth is that possible? Did the average subject in the intervention arm both lose 6 lb. of fat and gain 5+ lb. of muscle in 12 weeks? Give me a break. How much of this purported change could easily be explained away by the fact that the researchers used bioelectrical impedance to estimate body fat percentage in order to calculate these fat mass values? That would be my primary contention. So, no. I don't buy it. Perhaps I'd reconsider if their body weights were also at least somewhat reflective of this kind of change. (But, no cigar.)
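The arithmetic behind that objection is simple: the implied change in fat-free mass is just the body-weight change minus the fat-mass change. A quick sketch, using the study's reported group means:

```python
# Implied change in fat-free mass = body-weight change minus fat-mass change.
def implied_ffm_change(weight_kg, fat_kg):
    return weight_kg - fat_kg

vit_d_ffm = implied_ffm_change(-0.3, -2.7)    # kg of lean mass gained
placebo_ffm = implied_ffm_change(-0.1, -0.4)

print(round(vit_d_ffm, 1), round(vit_d_ffm * 2.20462, 1))  # 2.4 kg, ~5.3 lb
print(round(placebo_ffm, 1))                               # 0.3 kg
```

A gain of roughly 2.4 kg of fat-free mass in 12 weeks, in untrained women taking nothing but a vitamin pill, is the kind of number that should make anyone squint at the bioelectrical impedance estimates.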
They claim to have demonstrated statistically significant inverse Pearson correlation coefficients (r) between the changes in 25(OH)D in serum and body fat mass (r = -0.319) from baseline - which was only significant in that it achieved a p < 0.005, in my view - and changes in iPTH in serum and body fat mass (r = -0.318) from baseline - which also achieved a p < 0.005. On the flip side, they also claim to have demonstrated a positive r between the changes in serum iPTH concentrations and body fat mass from baseline (r = 0.32, p < 0.004). However, the fact that these values were statistically significant doesn't change the fact that the correlation coefficients were small, as can be seen in the scatter plots provided below.
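As a quick sense-check on magnitude, squaring these coefficients tells you how much of the variance in fat-mass change they actually account for:

```python
# r^2 = proportion of variance in fat-mass change accounted for.
for label, r in [("Δ25(OH)D vs Δfat mass", -0.319), ("ΔiPTH vs Δfat mass", 0.32)]:
    print(f"{label}: r^2 = {r**2:.1%}")  # ~10.2% in both cases
```

Statistically significant, sure, but roughly 90% of the variation in fat-mass change is left unexplained by either marker.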
Lastly, they say they've shown that changes in these values correlate linearly with the outcomes posited above, yet their last statement in the results section says the opposite: that serum 25(OH)D and iPTH concentrations were not correlated with fat mass.
How in the world could it be that changes in 25(OH)D and iPTH were linearly correlated with fat mass, while serum 25(OH)D and iPTH concentrations were simultaneously not correlated with fat mass?
It took me a while to realize that, although there was apparently some kind of a linear relationship between the serum changes (from baseline) of these values to the outcomes they've posited above, the actual serum concentrations of 25(OH)D and iPTH at any given moment were not linearly correlated to these same outcomes. (And notice how they didn't give a value for r or an alpha for this last measure. Bit sneaky, if you ask me.)
And, ultimately, what do you think these scatter plots and correlation coefficients would look like, if the authors had kept their analyses true to the original intention-to-treat?
I'll say it again: I don't buy it. Do you? Until next time.
~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~
Ranganathan, Pramesh, & Aggarwal. (2016). Common pitfalls in statistical analysis: Intention-to-treat versus per-protocol analysis. Perspectives in Clinical Research, 7(3), 144-146.
Salehpour, et al. (2012). A 12-week double-blind randomized clinical trial of vitamin D3 supplementation on body fat mass in healthy overweight and obese women. Nutrition Journal, 11, 78.
Schoeller, et al. (2013). Self-report-based estimates of energy intake offer an inadequate basis for scientific conclusions. The American Journal of Clinical Nutrition, 97(6), 1413-1415.