
Medical Research: Dual Adaptation in Deaf Brains

The Scientist: Dual Adaptation in Deaf Brains.



The brains of people who cannot hear adapt to process vision-based language, in addition to brain changes associated with the loss of auditory input.



The brains of Deaf people reorganize not only to compensate for the loss of hearing, but also to process language from visual stimuli, namely sign language, according to a study published today (February 12) in Nature Communications. Despite this reorganization for interpreting visual language, however, language processing is still completed in the same brain region.



“The new paper really dissected the difference between hand movements being a visual stimulus, and cognitive components of language,” said Alex Meredith, a neurobiologist at Virginia Commonwealth University, who was not involved in the study.



The brain devotes different areas to interpreting various sensory stimuli, such as visual or auditory. When one sense is lost, the brain compensates by adapting to other stimuli, explained study author Velia Cardin of University College London and Linköping University in Sweden. In Deaf people, for example, “the part of the brain that before was doing audition adapts to be doing something else, which is vision and somatosensation,” she said. However, Deaf humans “don’t just have sensory deprivation,” she added; they also have to learn to process a visual, rather than oral, language.



To untangle brain changes due to loss of auditory input from adaptations prompted by vision-based language, the researchers used functional MRI to look at brain activation in three groups of people: Deaf people who communicate through sign language, Deaf people who read lips but don’t understand sign language, and hearing people with no sign language experience.



The researchers showed the three groups videos of sign language and videos that held no linguistic content. The signing videos were designed to allow Cardin’s team to pinpoint which areas had reorganized to process vision-based language, as these areas would only activate in Deaf signers. In contrast, the language-free videos would allow the researchers to identify areas in Deaf brains that had adapted to the loss of auditory input, as these brain areas would activate in both Deaf groups, but not in the brains of hearing volunteers. ... Read more: http://www.the-scientist.com/?articles.view/articleNo/34363/title/Dual-Adaptation-in-Deaf-Brains/

io9 - A Drug That Restores Hearing In Deaf Mice

A drug that restores hearing in Deaf mice.



Advances in regenerative medicine are coming in fast and furious these days, and a remarkable new breakthrough can be added to the list. Scientists at Massachusetts Eye and Ear and Harvard Medical School have restored partial hearing in mice suffering from sensorineural hearing loss, a condition that happens after prolonged exposure to noise.



Given the rise of an aging population, not to mention a preponderance of people who blast their ears with portable MP3 players, it's an important bit of scientific insight that could someday help millions of people get their hearing back.



To learn more about this important breakthrough, we contacted lead researcher Dr. Albert Edge, whose study appears in the January 10 issue of Neuron.



Edge agreed that sensorineural hearing loss is a growing concern.



"The National Institute of Deafness and Communications Disorders of the NIH estimates that approximately 15 percent of Americans between the ages of 20 and 69 have hearing loss due to exposure to loud sounds or noise at work or in leisure activities," he told io9. "So this is a very serious problem with little that can be done to treat it."



No doubt, it's a problem that currently affects 250 million people worldwide.



Edge says that hearing aids can help, but his team is hoping to develop a treatment that goes all the way: one that can actually replace the lost cells.... Read more: http://io9.com/5974633/a-drug-that-restores-hearing-in-deaf-mice



Related: Deaf Gerbils 'Hear Again' After Stem Cell Cure

Rochester's Deaf Population Among Largest Per Capita in U.S.





ROCHESTER, NY. - Rochester has more Deaf and Hard of Hearing residents per capita than the national average and a larger Deaf population than many other similarly sized cities, a new report out of Rochester Institute of Technology has found.



Rochester’s sizable Deaf community has often been assumed but was never quantified until the report, written by Gerard Walter and Richard Dirmyer from RIT’s National Technical Institute for the Deaf.



The study found other cities have more total Deaf residents per capita, but among college- and working-age people, Rochester has one of the largest populations in the country. In particular, the study found Rochester has far and away the highest percentage of Deaf residents enrolled in postsecondary education, likely driven by NTID.



“Often times it’s difficult to understand how many people are really in the community,” said Thomas Pearson, director of the National Center for Deaf Health Research at the University of Rochester. “This has been a real challenge for anyone interested in the field.”



Using American Community Survey data, Walter and Dirmyer found there are 43,000 Deaf or Hard of Hearing residents in the Rochester metro area, about 3.7 percent of the population. That’s higher than the national average, which is 3.5 percent.



Rochester doesn’t have the highest population per capita as is often suggested, however. The report only looked at a handful of cities, but found 3.9 percent of Pittsburgh’s population is Deaf or Hard of Hearing. The authors of the report attribute that to more elderly residents living in Pittsburgh than in Rochester, and the onset of age-related deafness.



Read more: http://www.democratandchronicle.com/article/20120925/NEWS01/309250048/Rochester-Institute-of-Technology-deaf?odyssey=nav|head



Related News - Wham ABC: http://www.13wham.com/news/local/story/Study-Rochester-Has-Largest-Deaf-Community/NIohNf_5HkSnV30z7zDwdg.cspx



RIT-NTID News: http://www.ntid.rit.edu/news/rochester-areas-deaf-population-better-defined



RocWiki blog: http://rocwiki.org/Deaf_Community

The China Study II: Wheat’s total effect on mortality is significant, complex, and highlights the negative effects of low animal fat diets

The graph below shows the results of a multivariate nonlinear WarpPLS analysis including the variables listed below. Each row in the dataset refers to a county in China, from the publicly available China Study II dataset. As always, I thank Dr. Campbell and his collaborators for making the data publicly available. Other analyses based on the same dataset are also available.
    - Wheat: wheat flour consumption in g/d.
    - Aprot: animal protein consumption in g/d.
    - PProt: plant protein consumption in g/d.
    - %FatCal: percentage of calories coming from fat.
    - Mor35_69: number of deaths per 1,000 people in the 35-69 age range.
    - Mor70_79: number of deaths per 1,000 people in the 70-79 age range.


Below are the total effects of wheat flour consumption, along with the number of paths used to calculate them, and the respective P values (i.e., probabilities that the effects are due to chance). Total effects are calculated by considering all of the paths connecting two variables. Identifying each path is a bit like solving a maze puzzle; you have to follow the arrows connecting the two variables. Version 3.0 of WarpPLS (soon to be released) does that automatically, and also calculates the corresponding P values.
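For readers who want to see the arithmetic behind a total effect, here is a minimal sketch in Python (not WarpPLS itself). The path coefficients below are made-up placeholders rather than the ones estimated from this model; the point is only that a total effect is the sum, over all paths connecting two variables, of the products of the coefficients along each path.

```python
# Minimal sketch of a total-effect calculation. Coefficients are hypothetical
# placeholders, NOT the values estimated from the China Study II data.
from math import prod

# Each path from Wheat to Mor70_79 is written as the sequence of path
# coefficients encountered while following the arrows in the model.
paths_wheat_to_mor70_79 = [
    [0.25],                 # direct path: Wheat -> Mor70_79
    [0.60, -0.30],          # Wheat -> PProt -> Mor70_79
    [0.60, -0.45, -0.20],   # Wheat -> PProt -> AProt -> Mor70_79
    # ...additional paths (e.g., through %FatCal) would be listed the same way
]

total_effect = sum(prod(path) for path in paths_wheat_to_mor70_79)
print(f"Total effect of Wheat on Mor70_79 (toy numbers): {total_effect:.3f}")
```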


To the best of my knowledge, this is the first time that total effects are calculated for this dataset. As you can see, the total effects of wheat flour consumption on mortality in the 35-69 and 70-79 age ranges are both significant, and fairly complex in this model, each relying on 7 paths. The P value for mortality in the 35-69 age range is 0.038; in other words, the probability that the effect is “real”, and thus not due to chance, is 96.2 percent (100-3.8=96.2). The P value for mortality in the 70-79 age range is 0.024; a 97.6 percent probability that the effect is “real”.

Note that in the model the effects of wheat flour consumption on mortality in both age ranges are hypothesized to be mediated by animal protein consumption, plant protein consumption, and fat consumption. These mediating effects have been suggested by previous analyses discussed on this blog. The strongest individual paths are between wheat flour consumption and plant protein consumption, plant protein consumption and animal protein consumption, as well as animal protein consumption and fat consumption.

So wheat flour consumption contributes to plant protein consumption, probably by being a main source of plant protein (through gluten). Plant protein consumption in turn decreases animal protein consumption, which significantly decreases fat consumption. From this latter connection we can tell that most of the fat consumed likely came from animal sources.

How much fat and protein are we talking about? The graphs below tell us how much, and these graphs are quite interesting. They suggest that, in this dataset, daily protein consumption averaged about 60 g, whatever the source. If more protein came from plant foods, the proportion from animal foods went down, and vice versa.


The more animal protein consumed, the more fat is also consumed in this dataset. And that is animal fat, which comes mostly in the form of saturated and monounsaturated fats, in roughly equal amounts. How do I know that it is animal fat? Because of the strong association with animal protein. By the way, with a few exceptions (e.g., some species of fatty fish) animal foods in general provide only small amounts of polyunsaturated fats – omega-3 and omega-6.

Individually, animal protein and wheat flour consumption have the strongest direct effects on mortality in both age ranges. Animal protein consumption is protective, and wheat flour consumption detrimental.

Does the connection between animal protein, animal fat, and longevity mean that a diet high in saturated and monounsaturated fats is healthy for most people? Not necessarily, at least without extrapolation, although the results do not suggest otherwise. Look at the amounts of fat consumed per day. They range from a little less than 20 g/d to a little over 90 g/d. By comparison, one steak of top sirloin (about 380 g of meat, cooked) trimmed to almost no visible fat gives you about 37 g of fat.

These results do suggest that consumption of animal fats, primarily saturated and monounsaturated fats, is likely to be particularly healthy in the context of a low fat diet. Or, said in a different way, these results suggest that longevity is decreased by diets that are low in animal fats.

How much fat should one eat? In this dataset, the more fat was consumed together with animal protein (i.e., the more animal fat was consumed), the better in terms of longevity. In other words, in this dataset the lowest levels of mortality were associated with the highest levels of animal fat consumption. The highest level of fat consumption in the dataset was a little over 90 g/d.

What about higher fat intake contexts? Well, we know that men on a high fat diet, such as a variation of the Optimal Diet, can consume on average a little over 170 g/d of animal fat (130 g/d for women), and their health markers remain generally good.

One of the critical limiting factors, in terms of health, seems to be the amount of animal fat that one can eat and still remain relatively lean. Dietary saturated and monounsaturated fats are healthy. But when accumulated as excess body fat, beyond a certain level, they become pro-inflammatory.

Want to make coffee less acidic? Add cream to it

The table below is from a 2008 article by Ehlen and colleagues, showing the amount of erosion caused by various types of beverages, when teeth were exposed to them for 25 h in vitro. Erosion depth is measured in microns. The third row shows the chance probabilities (i.e., P values) associated with the differences in erosion of enamel and root.


As you can see, even diet drinks may cause tooth erosion. That is not to say that if you drink a diet soda occasionally you will destroy your teeth, but regular drinking may be a problem. I discussed this study in a previous post. After that post was published here, some folks asked me about coffee, so I decided to do some research.

Unfortunately coffee by itself can also cause some erosion, primarily because of its acidity. Generally speaking, you want a liquid that you are interested in drinking to have a pH as close to 7 as possible, as that pH is neutral. Tap and mineral water have a pH that is very close to 7. Black coffee seems to have a pH of about 4.8.
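To put those numbers in perspective, pH is a base-10 logarithmic scale, so a drink at pH 4.8 has roughly 10^(7-4.8), or about 160, times the hydrogen ion concentration of neutral water. A quick sketch of that arithmetic, using the approximate pH values mentioned above:

```python
# Hydrogen ion concentration implied by a given pH (pH = -log10 of [H+]).
def h_ion_concentration(ph: float) -> float:
    return 10 ** (-ph)

neutral_water = h_ion_concentration(7.0)   # tap/mineral water, roughly
black_coffee = h_ion_concentration(4.8)    # approximate pH cited above

print(f"Black coffee has ~{black_coffee / neutral_water:.0f}x the [H+] of neutral water")
```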

Also problematic are drinks containing fermentable carbohydrates, such as sucrose, fructose, glucose, and lactose. These are fermented by acid-producing bacteria. Interestingly, when fermentable carbohydrates are consumed as part of foods that require chewing, such as fruits, acidity is either neutralized or significantly reduced by large amounts of saliva being secreted as a result of the chewing process.

So what to do about coffee?

One possible solution is to add heavy cream to it. A small amount, such as a teaspoon, appears to bring the pH in a cup of coffee to a little over 6. Another advantage of heavy cream is that it has no fermentable carbohydrates; it has no carbohydrates, period. You will have to get over the habit of drinking sweet beverages, including sweet coffee, if you were unfortunate enough to develop that habit (like so many people living in cities today).

It is not easy to find reliable pH values for various foods. I guess dentistry researchers are more interested in ways of repairing damage already done, and there doesn't seem to be much funding available for preventive dentistry research. Some pH testing results from a University of Cincinnati college biology page were available at the time of this writing; they appeared to be reasonably reliable the last time I checked them.

The China Study II: How gender takes us to the elusive and deadly factor X

The graph below shows the mortality in the 35-69 and 70-79 age ranges for men and women for the China Study II dataset. I discussed other results in my two previous posts, all taking us to this post. The full data for the China Study II is publicly available. The mortality numbers are actually averages of male and female deaths per 1,000 people in each of several counties, in each of the two age ranges.


Men do tend to die earlier than women, but the difference above is too large.

Generally speaking, when you look at a set time period that is long enough for a good number of deaths (not to be confused with “a number of good deaths”) to be observed, you tend to see around 5-10 percent more deaths among men than among women. This is when other variables are controlled for, or when men and women do not adopt dramatically different diets and lifestyles. One of many examples is a study in Finland; you have to go beyond the abstract on this one.

As you can see from the graph above, in the China Study II dataset this difference in deaths is around 50 percent!

This huge difference could be caused by there being significantly more men than women per county included in the dataset. But if you take a careful look at the description of the data collection methods employed, this does not seem to be the case. In fact, the methodology descriptions suggest that the researchers tried to have approximately the same number of women and men studied in each county. The numbers reported also support this assumption.

As I said before, this is a well executed research project, for which Dr. Campbell and his collaborators should be commended. I may not agree with all of their conclusions, but this does not detract even a bit from the quality of the data they have compiled and made available to us all.

So there must be another factor X causing this enormous difference in mortality (and thus longevity) among men and women in the China Study II dataset.

What could be this factor X?

This situation helps me illustrate a point that I have made here before, mostly in the comments under other posts. Sometimes a variable, and its effects on other variables, are mostly a reflection of another unmeasured variable. Gender is a variable that is often involved in this type of situation. Frequently men and women do things very differently in a given population due to cultural reasons (as opposed to biological reasons), and those things can have a major effect on their health.

So, the search for our factor X is essentially a search for a health-relevant variable that is reflected by gender but that is not strictly due to the biological aspects that make men and women different (these can explain only a 5-10 percent difference in mortality). That is, we are looking for a variable that shows a lot of variation between men and women, that is behavioral, and that has a clear impact on health. Moreover, as it should be clear from my last post, we are looking for a variable that is unrelated to wheat flour and animal protein consumption.

As it turns out, the best candidate for the factor X is smoking, particularly cigarette smoking.

The second best candidate for factor X is alcohol abuse. Alcohol abuse can be just as bad for one’s health as smoking is, if not worse, but it may not be as good a candidate for factor X because the difference in prevalence between men and women does not appear to be as large in China. But it is still large enough for us to consider it a close second as a candidate for factor X, or a component of a more complex factor X – a composite of smoking, alcohol abuse and a few other coexisting factors that may be reflected by gender.

I have had some discussions about this with a few colleagues and doctoral students who are Chinese (thanks William and Wei), and they mentioned stress to me, based on anecdotal evidence. Moreover, they pointed out that stressful lifestyles, smoking, and alcohol abuse tend to happen together - with a much higher prevalence among men than women.

What an anti-climax for this series of posts, eh?

With all the talk on the Internetz about safe and unsafe starches, animal protein, wheat bellies, and whatnot! C’mon Ned, give me a break! What about insulin!? What about leucine deficiency … or iron overload!? What about choline!? What about something truly mysterious, related to an obscure or emerging biochemistry topic; a hormone du jour like leptin perhaps? Whatever, something cool!

Smoking and alcohol abuse!? These are way too obvious. This is NOT cool at all!

Well, reality is often less mysterious than we want to believe it is.

Let me focus on smoking from here on, since it is the top candidate for factor X, although much of the following applies to alcohol abuse and a combination of the two as well.

One gets different statistics on cigarette smoking in China depending on the time period studied, but one thing seems to be a common denominator in these statistics. Men tend to smoke in much, much higher numbers than women in China. And this is not a recent phenomenon.

For example, a study conducted in 1996 states that “smoking continues to be prevalent among more men (63%) than women (3.8%)”, and notes that these results are very similar to those in 1984, around the time when the China Study II data was collected.

A 1995 study reports similar percentages: “A total of 2279 males (67%) but only 72 females (2%) smoke”. Another study notes that in 1976 “56% of the men and 12% of the women were ever-smokers”, which, together with other results, suggests that the gap increased significantly in the 1980s, with many more men than women smoking. And, most importantly, smoking industrial cigarettes.

So we are possibly talking about a gigantic difference here; the prevalence of industrial cigarette smoking among men may have been over 30 times the prevalence among women in the China Study II dataset.

Given the above, it is reasonable to conclude that the variable “SexM1F2” reflects very strongly the variable “Smoking”, related to industrial cigarette smoking, and in an inverse way. I did something that, grossly speaking, made the mysterious factor X explicit in the WarpPLS model discussed in my previous post. I replaced the variable “SexM1F2” in the model with the variable “Smoking” by using a reverse scale (i.e., 1 and 2, but reversing the codes used for “SexM1F2”). The results of the new WarpPLS analysis are shown on the graph below. This is of course far from ideal, but gives a better picture to readers of what is going on than sticking with the variable “SexM1F2”.
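For anyone curious about what that recoding step looks like in practice, here is a minimal sketch using pandas. The data frame and values are illustrative only; the column names simply mirror the variable names used in the text.

```python
import pandas as pd

# Toy county-level rows; SexM1F2 is coded 1 = male, 2 = female.
df = pd.DataFrame({"SexM1F2": [1, 2, 1, 2]})

# Reverse the scale so the new variable tracks (county-level) smoking prevalence:
# males, the heavy smokers, get the higher code and females the lower one.
df["Smoking"] = 3 - df["SexM1F2"]   # 1 -> 2, 2 -> 1

print(df)
```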


With this revised model, the associations of smoking with mortality in the 35-69 and 70-79 age ranges are a lot stronger than those of animal protein and wheat flour consumption. The R-squared coefficients for mortality in both ranges are higher than 20 percent, which is a sign that this model has decent explanatory power. Animal protein and wheat flour consumption are still significantly associated with mortality, even after we control for smoking; animal protein seems protective and wheat flour detrimental. And smoking’s association with the amount of animal protein and wheat flour consumed is practically zero.

Replacing “SexM1F2” with “Smoking” would be particularly far from ideal if we were analyzing this data at the individual level. It could lead to some outlier-induced errors; for example, due to the possible existence of a minority of female chain smokers. But this variable replacement is not as harmful when we look at county-level data, as we are doing here.

In fact, this is as good and parsimonious a model of mortality as I’ve ever seen based on the China Study II county-level data.

Now, here is an interesting thing. Does the original China Study II analysis of univariate correlations show smoking as a major problem in terms of mortality? Not really.

The table below, from the China Study II report, shows ALL of the statistically significant (P<0.05) univariate correlations with mortality in the 70-79 age range. I highlighted the only measure that is directly related to smoking; that is “dSMOKAGEm”, listed as “questionnaire AGE MALE SMOKERS STARTED SMOKING (years)”.


The high positive correlation with “dSMOKAGEm” does not even make a lot of sense, as one would expect a negative correlation here – i.e., the earlier in life folks start smoking, the higher should be the mortality. But this reverse-signed correlation may be due to smokers who get an early start dying in disproportionally high numbers before they reach age 70, and thus being captured by another age range mortality variable. The fact that other smoking-related variables are not showing up on the table above is likely due to distortions caused by inter-correlations, as well as measurement problems like the one just mentioned.

As one looks at these univariate correlations, most of them make sense, although several can be and probably are distorted by correlations with other variables, even unmeasured variables. And some unmeasured variables may turn out to be critical. Remember what I said in my previous post – the variable “SexM1F2” was introduced by me; it was not in the original dataset. “Smoking” is this variable, but reversed, to account for the fact that men are heavy smokers and women are not.

Univariate correlations are calculated without adjustments or control. To correct this problem one can adjust a variable based on other variables, as in “adjusting for age”. This is not such a good technique, in my opinion; it tends to be time-consuming to implement, and prone to errors. One can alternatively control for the effects of other variables, a better technique, employed in multivariate statistical analyses. This latter technique is the one employed in WarpPLS analyses.
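As a rough illustration of the difference between a univariate correlation and a coefficient estimated while controlling for another variable, here is a sketch using ordinary least squares from statsmodels. It is linear, so it only approximates what WarpPLS does, and the data is simulated rather than taken from the China Study; the variable names are placeholders.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 65  # roughly the number of counties

# Simulated data: smoking drives mortality; wheat is correlated with smoking
# by construction, to show how an unadjusted correlation can mislead.
smoking = rng.normal(size=n)
wheat = 0.5 * smoking + rng.normal(size=n)
mortality = 2.0 * smoking + rng.normal(size=n)
df = pd.DataFrame({"smoking": smoking, "wheat": wheat, "mortality": mortality})

# Univariate (unadjusted) correlation between wheat and mortality.
print("Unadjusted r:", df["wheat"].corr(df["mortality"]))

# Coefficient for wheat after controlling for smoking.
X = sm.add_constant(df[["wheat", "smoking"]])
print(sm.OLS(df["mortality"], X).fit().params)
```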

Why don’t more smoking-related variables show up on the univariate correlations table above? The reason is that the table summarizes associations calculated based on data for both sexes. Since the women in the dataset smoked very little, including them in the analysis together with men lowers the strength of smoking-related associations, which would probably be much stronger if only men were included. It lowers the strength of the associations to the point that their P values become higher than 0.05, leading to their exclusion from tables like the one above. This is where the aggregation process that may lead to ecological fallacy shows its ugly head.

No one can blame Dr. Campbell for not issuing warnings about smoking; he did issue them, even if they came mixed with warnings about animal food consumption. The former warnings, about smoking, make a lot of sense based on the results of the analyses in this and the last two posts.

The latter warnings, about animal food consumption, seem increasingly ill-advised. Animal food consumption may actually be protective with regard to the factor X, just as it seems to be protective with regard to the effects of wheat flour consumption.

The China Study II: Gender, mortality, and the mysterious factor X

WarpPLS and HealthCorrelator for Excel were used to do the analyses below. For other China Study analyses, many using WarpPLS as well as HealthCorrelator for Excel, click here. For the dataset used, visit the HealthCorrelator for Excel site and check under the sample datasets area. As always, I thank Dr. T. Colin Campbell and his collaborators for making the data publicly available for independent analyses.

In my previous post I mentioned some odd results that led me to additional analyses. Below is a screen snapshot summarizing one such analysis, of the ordered associations between mortality in the 35-69 and 70-79 age ranges and all of the other variables in the dataset. As I said before, this is a subset of the China Study II dataset, which does not include all of the variables for which data was collected. The associations shown below were generated by HealthCorrelator for Excel.


The top associations are positive and with mortality in the other range (the “M006 …” and “M005 …” variables). This is to be expected if ecological fallacy is not a big problem in terms of conclusions drawn from this dataset. In other words, the same things cause mortality to go up in the two age ranges, uniformly across counties. This is reassuring from a quantitative analysis perspective.

The second highest association in both age ranges is with the variable “SexM1F2”. This variable is a “dummy” variable coded as 1 for male sex and 2 for female, which I added to the dataset myself – it did not exist in the original dataset. The association in both age ranges is negative, meaning that being female is protective. These associations reflect in part the role of gender in mortality, more specifically the biological aspects of being female, since we have seen before in previous analyses that being female is generally health-protective.

I was able to add a gender-related variable to the model because the data was originally provided for each county separately for males and females, as well as through “totals” that were calculated by aggregating data from both males and females. So I essentially de-aggregated the data by using data from males and females separately, in which case the totals were not used (otherwise I would have artificially reduced the variance in all variables, also possibly adding uniformity where it did not belong). Using data from males and females separately is the reverse of the aggregation process that can lead to ecological fallacy problems.
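Here is a minimal sketch of what that de-aggregation step can look like in pandas. The column names and numbers are hypothetical stand-ins for the male, female, and total columns in the original file; the point is that each county contributes one male row and one female row, and the precomputed totals are simply not used.

```python
import pandas as pd

# Hypothetical wide-format county data: one row per county, with separate
# male/female mortality columns plus a precomputed total (which is NOT used).
wide = pd.DataFrame({
    "county": ["A", "B"],
    "mort_male": [12.0, 15.0],
    "mort_female": [8.0, 10.0],
    "mort_total": [10.0, 12.5],
})

# De-aggregate: one row per county-sex combination, plus the SexM1F2 dummy.
long = pd.concat([
    wide[["county", "mort_male"]].rename(columns={"mort_male": "mortality"}).assign(SexM1F2=1),
    wide[["county", "mort_female"]].rename(columns={"mort_female": "mortality"}).assign(SexM1F2=2),
], ignore_index=True)

print(long)
```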

Anyway, the associations with the variable “SexM1F2” got me thinking about a possibility. What if females consumed significantly less wheat flour and more animal protein in this dataset? This could be one of the reasons behind these strong associations between being female and living longer. So I built a more complex WarpPLS model than the one in my previous post, and ran a linear multivariate analysis on it. The results are shown below.


What do these results suggest? They suggest no strong associations between gender and wheat flour or animal protein consumption. That is, when you look at county averages, men and women consumed about the same amounts of wheat flour and animal protein. Also, the results suggest that animal protein is protective and wheat flour is detrimental, in terms of longevity, regardless of gender. The associations between animal protein and wheat flour are essentially the same as the ones in my previous post. The beta coefficients are a bit lower, but some P values improved (i.e., decreased); the latter most likely due to better resample set stability after including the gender-related variable.

Most importantly, there is a very strong protective effect associated with being female, and this effect is independent of what the participants ate.

Now, if you are a man, don’t rush to take hormones to become a woman with the goal of living longer just yet. This advice is not only due to the likely health problems related to becoming a transgender person; it is also due to a little problem with these associations. The problem is that the protective effect suggested by the coefficients of association between gender and mortality seems too strong to be due to men "being women with a few design flaws".

There is a mysterious factor X somewhere in there, and it is not gender per se. We need to find a better candidate.

One interesting thing to point out here is that the above model has good explanatory power in regards to mortality. I'd say unusually good explanatory power given that people die for a variety of reasons, and here we have a model explaining a lot of that variation. The model  explains 45 percent of the variance in mortality in the 35-69 age range, and 28 percent of the variance in the 70-79 age range.

In other words, the model above explains nearly half of the variance in mortality in the 35-69 age range. It could form the basis of a doctoral dissertation in nutrition or epidemiology with important  implications for public health policy in China. But first the factor X must be identified, and it must be somehow related to gender.

Next post coming up soon ...

The China Study II: Animal protein, wheat, and mortality … there is something odd here!

WarpPLS and HealthCorrelator for Excel were used in the analyses below. For other China Study analyses, many using WarpPLS and HealthCorrelator for Excel, click here. For the dataset used, visit the HealthCorrelator for Excel site and check under the sample datasets area. I thank Dr. T. Colin Campbell and his collaborators at the University of Oxford for making the data publicly available for independent analyses.

The graph below shows the results of a multivariate linear WarpPLS analysis including the following variables: Wheat (wheat flour consumption in g/d), Aprot (animal protein consumption in g/d), Mor35_69 (number of deaths per 1,000 people in the 35-69 age range), and Mor70_79 (number of deaths per 1,000 people in the 70-79 age range).


Just a technical comment here, regarding the possibility of ecological fallacy. I am not going to get into this in any depth now, but let me say that the patterns in the data suggest that, with the possible exception of some variables (e.g., blood glucose, gender; the latter will get us going in the next few posts), ecological fallacy due to county aggregation is not a big problem. The threat of ecological fallacy exists, here and in many other datasets, but it is generally overstated (often by those whose previous findings are contradicted by aggregated results).

I have not included plant protein consumption in the analysis because plant protein consumption is very strongly and positively associated with wheat flour consumption. The reason is simple. Almost all of the plant protein consumed by the participants in this study was probably gluten, from wheat products. Fruits and vegetables have very small amounts of protein. Keeping that in mind, what the graph above tells us is that:

- Wheat flour consumption is significantly and negatively associated with animal protein consumption. This probably reflects wheat products displacing animal foods in people’s diets.

- Wheat flour consumption is positively associated with mortality in the 35-69 age range. The P value (P=0.06) just misses the 5 percent threshold (i.e., P=0.05) that most researchers consider the cutoff for statistical significance. More consumption of wheat in a county, more deaths in this age range.

- Wheat flour consumption is significantly and positively associated with mortality in the 70-79 age range. More consumption of wheat in a county, more deaths in this age range.

- Animal protein consumption is not significantly associated with mortality in the 35-69 age range.

- Animal protein consumption is significantly and negatively associated with mortality in the 70-79 age range. More consumption of animal protein in a county, fewer deaths in this age range.

Let me tell you, from my past experience analyzing health data (as well as other types of data, from different fields), that these coefficients of association do not suggest super-strong associations. Actually this is also indicated by the R-squared coefficients, which vary from 3 to 7 percent. These are the proportions of variance explained by the model for the variables shown above the R-squared coefficients in the graph. They are low, which means that the model has weak explanatory power.

R-squared coefficients of 20 percent and above would be more promising. I hate to disappoint hardcore carnivores and the fans of the “wheat is murder” theory, but these coefficients of association and variance explained are probably way less than what we would expect to see if animal protein was humanity's salvation and wheat its demise.

Moreover, the lack of association between animal protein consumption and mortality in the 35-69 age range is a bit strange, given that there is an association suggestive of a protective effect in the 70-79 age range.

Of course death happens for all kinds of reasons, not only what we eat. Still, let us take a look at some other graphs involving these foodstuffs to see if we can form a better picture of what is going on here. Below is a graph showing mortality at the two age ranges for different levels of animal protein consumption. The results are organized in quintiles.


As you can see, the participants in this study consumed relatively little animal protein. The lowest mortality in the 70-79 age range, arguably the range of higher vulnerability, was for the 28 to 35 g/d quintile of consumption. That was the highest consumption quintile. About a quarter to a third of 1 lb/d of beef, and less of seafood (in general), would give you that much animal protein.
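If you want to build this kind of quintile summary yourself from the publicly available data, a sketch with pandas would look like the following. The file name and column names are placeholders for whatever your copy of the dataset uses.

```python
import pandas as pd

# One row per county, with animal protein intake (g/d) and mortality per
# 1,000 people in the 70-79 age range. File/column names are hypothetical.
df = pd.read_csv("china_study_ii_counties.csv")

# Split counties into five consumption quintiles and average mortality within each.
df["aprot_quintile"] = pd.qcut(df["Aprot"], q=5)
print(df.groupby("aprot_quintile", observed=True)["Mor70_79"].mean())
```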

Keep in mind that the unit of analysis here is the county, and that these results are based on county averages. I wish I had access to data on individual participants! Still I stand by my comment earlier on ecological fallacy. Don't worry too much about it just yet.

Clearly the above results and graphs contradict claims that animal protein consumption makes people die earlier, and go somewhat against the notion that animal protein consumption causes things that make people die earlier, such as cancer. But they do so in a messy way - that spike in mortality in the 70-79 age range for 21-28 g/d of animal protein is a bit strange.

Below is a graph showing mortality at the two age ranges (i.e., 35-69 and 70-79) for different levels of wheat flour consumption. Again, the results are shown in quintiles.


Without a doubt the participants in this study consumed a lot of wheat flour. The lowest mortality in the 70-79 age range, which is the range of higher vulnerability, was for the 300 to 450 g/d quintile of wheat flour consumption. The high end of this range is about 1 lb/d of wheat flour! How many slices of bread would this be equivalent to? I don’t know, but my guess is that it would be many.

Well, this is not exactly the smoking gun linking wheat with early death, a connection that has been reaching near mythical proportions on the Internetz lately. Overall, the linear trend seems to be one of decreased longevity associated with wheat flour consumption, as suggested by the WarpPLS results, but the relationship between these two variables is messy and somewhat weak. It is not even clearly nonlinear, at least in terms of the ubiquitous J-curve relationship.

Frankly, there is something odd about these results.

This oddity led me to explore, using HealthCorrelator for Excel, all ordered associations between mortality in the 35-69 and 70-79 age ranges and all of the other variables in the dataset. That in turn led me to a more complex WarpPLS analysis, which I’ll talk about in my next post, which is still being written.

I can tell you right now that there will be more oddities there, which will eventually take us to what I refer to as the mysterious factor X. Ah, by the way, that factor X is not gender - but gender leads us to it.

Great evolution thinkers you should know about

If you follow a paleo diet, you follow a diet that aims to be consistent with evolution. This is a theory that has undergone major changes and additions since Alfred Russel Wallace and Charles Darwin proposed it in the 1800s. Wallace proposed it first, by the way, even though Darwin’s proposal was much more elaborate and supported by evidence. Darwin acknowledged Wallace's precedence, but received most of the credit for the theory anyway.

(Alfred Russel Wallace; source: Wikipedia)

What many people who describe themselves as paleo do not seem to know is how the theory found its footing. The original Wallace-Darwin theory (a.k.a. Darwin’s theory) had some major problems, notably the idea of blending inheritance (e.g., blue eye + brown eye = somewhere in between), which led it to be largely dismissed until the early 1900s. Ironically, it was the work of a Catholic priest that provided the foundation on which the theory of evolution would find its footing, and evolve into the grand theory that it is today. We are talking about Gregor Johann Mendel.

Much of the subsequent work that led to our current understanding of evolution sought to unify the theory of genetics, pioneered by Mendel, with the basic principles proposed as part of the Wallace-Darwin theory of evolution. That is where major progress was made. The evolution thinkers below are some of the major contributors to that progress.

Ronald A. Fisher. English statistician who proposed key elements of a genetic theory of natural selection in the 1910s, 1920s and 1930s. Fisher showed that the inheritance of discrete traits (e.g., flower color) described by Gregor Mendel has the same basis as the inheritance of continuous traits (e.g., human height) described by Francis Galton. He is credited, together with John B.S. Haldane and Sewall G. Wright, with setting the foundations for the development of the field of population genetics. In population genetics the concepts and principles of the theories of evolution (e.g., inheritance and natural selection of traits) and genetics (e.g., genes and alleles) have been integrated and mathematically formalized.

John B.S. Haldane. English geneticist who, together with Ronald A. Fisher and Sewall G. Wright, is credited with setting the foundations for the development of the field of population genetics. Much of his research was conducted in the 1920s and 1930s. Particularly noteworthy is the work by Haldane through which he mathematically modeled and explained the interactions between natural selection, mutation, and migration. He is also known for what is often referred to as Haldane’s principle, which explains the direction of the evolution of many species’ traits based on the body size of the organisms of the species. Haldane’s mathematical formulations also explained the rapid spread of traits observed in some actual populations of organisms, such as the increase in frequency of dark-colored moths from 2% to 95% in a little less than 50 years as a response to the spread of industrial soot in England in the late 1800s.

Sewall G. Wright. American geneticist and statistician who, together with Ronald A. Fisher and John B.S. Haldane, is credited with setting the foundations for the development of the field of population genetics. As with Fisher and Haldane, much of his original and most influential research was conducted in the 1920s and 1930s. He is believed to have discovered the inbreeding coefficient, related to the occurrence of identical genes in different individuals, and to have pioneered methods for the calculation of gene frequencies among populations of organisms. The development of the notion of genetic drift, where some of a population’s traits result from random genetic changes instead of selection, is often associated with him. Wright is also considered to be one of pioneers of the development of the statistical method known as path analysis.

Theodosius G. Dobzhansky. Ukrainian-American geneticist and evolutionary biologist who migrated to the United States in the late 1920s, and is believed to have been one of the main architects of the modern evolutionary synthesis. Much of his original research was conducted in the 1930s and 1940s. In the 1930s he published one of the pillars of the modern synthesis, a book titled Genetics and the Origin of Species. The modern evolutionary synthesis is closely linked with the emergence of the field of population genetics, and is associated with the integration of various ideas and predictions from the fields of evolution and genetics. In spite of Dobzhansky’s devotion to religious principles, he strongly defended Darwinian evolution against modern creationism. The title of a famous essay written by him is often cited in modern debates between evolutionists and creationists regarding the teaching of evolution in high schools: Nothing in Biology Makes Sense Except in the Light of Evolution.

Ernst W. Mayr. German taxonomist and ornithologist who spent most of his life in the United States, and is believed, like Theodosius G. Dobzhansky, to have been one of the main architects of the modern evolutionary synthesis. Mayr is credited with the development in the 1940s of the most widely accepted definition of species today, that of a group of organisms that are capable of interbreeding and producing fertile offspring. At that time organisms that looked alike were generally categorized as being part of the same species. Mayr served as a faculty member at Harvard University for many years, where he also served as the director of the Museum of Comparative Zoology. He lived to the age of 100 years, and was one of the most prolific scholars ever in the field of evolutionary biology. Unlike many evolution theorists, he was very critical of the use of mathematical approaches to the understanding of evolutionary phenomena.

William D. Hamilton. English evolutionary biologist (born in Egypt) widely considered one of the greatest evolution theorists of the 20th Century. Hamilton conducted pioneering research based on the gene-centric view of evolution, also known as the “selfish gene” perspective, which is based on the notion that the unit of natural selection is the gene and not the organism that carries the gene. His research conducted in the 1960s set the foundations for using this notion to understand social behavior among animals. The notion that the unit of natural selection is the gene forms the basis of the theory of kin selection, which explains why organisms often will instinctively behave in ways that will maximize the reproductive success of relatives, sometimes to the detriment of their own reproductive success (e.g., worker ants in an ant colony).

George C. Williams. American evolutionary biologist believed to have been a co-developer in the 1960s, together with William D. Hamilton, of the gene-centric view of evolution. This view is based on the notion that the unit of natural selection is the gene, and not the organism that carries the gene or a group of organisms that happens to share the gene. Williams is also known for his pioneering work on the evolution of sex as a driver of genetic variation, without which a species would adapt more slowly in response to environmental pressures, in many cases becoming extinct. He is also known for suggesting possible uses of human evolution knowledge in the field of medicine.

Motoo Kimura. Japanese evolutionary biologist known for proposing the neutral theory of molecular evolution in the 1960s. In this theory Kimura argued that one of the main forces in evolution is genetic drift, a stochastic process that alters the frequency of genotypes in a population in a non-deterministic way. Kimura is widely known for his innovative use of a class of partial differential equations, namely diffusion equations, to calculate the effect of natural selection and genetic drift on the fixation of genotypes. He has developed widely used equations to calculate the probability of fixation of genotypes that code for certain phenotypic traits due to genetic drift and natural selection.

George R. Price. American geneticist known for refining in the 1970s the mathematical formalizations developed by Ronald A. Fisher and William D. Hamilton, and thus making significant contributions to the development of the field of population genetics. He developed the famous Price Equation, which has found widespread use in evolutionary theorizing. Price is also known for introducing, together with John Maynard Smith, the concept of the evolutionarily stable strategy (ESS). The ESS notion itself builds on the Nash Equilibrium, named after its developer John Forbes Nash (portrayed in the popular Hollywood film A Beautiful Mind). The concept of the ESS explains why certain evolved traits spread and become fixed in a population.

John Maynard Smith. English evolutionary biologist and geneticist credited with several innovative applications of game theory (which is not actually a theory, but an applied branch of mathematics) in the 1970s to the understanding of biological evolution. Maynard Smith is also known for introducing, together with George R. Price, the concept of the evolutionarily stable strategy (ESS). As noted above, the ESS notion builds on the Nash Equilibrium, and explains why certain evolved traits spread and become fixed in a population. The pioneering work by John Maynard Smith has led to the emergence of a new field of research within evolutionary biology known as evolutionary game theory.

Edward O. Wilson. American evolutionary biologist and naturalist who coined the term “sociobiology” in the 1970s to refer to the systematic study of the biological foundations of social behavior of animals, including humans. Wilson was one of the first evolutionary biologists to convincingly argue that human mental mechanisms are shaped as much by our genes as they are by the environment that surrounds us, setting the stage for the emergence of the field of evolutionary psychology. Many of Wilson’s theoretical contributions in the area of sociobiology are very general, and apply not only to humans but also to other species. Wilson has been acknowledged as one of the foremost experts in the study of ants’ and other insects’ social organizations. He is also known for his efforts to preserve earth’s environment.

Amotz Zahavi. Israeli evolutionary biologist best known for his widely cited handicap principle, proposed in the 1970s, which explains the evolution of fitness signaling traits that appear to be detrimental to the reproductive fitness of an organism. Zahavi argued that traits evolved to signal the fitness status of an organism must be costly in order to be reliable. An example is the large and brightly colored train evolved by the males of the peacock species, which signals good health to the females of the species. The male peacock’s train makes it more vulnerable to predators, and as such is a costly indicator of survival success. Traits used for this type of signaling are often referred to as Zahavian traits.

Robert L. Trivers. American evolutionary biologist and anthropologist who proposed several influential theories in the 1970s, including the theories of reciprocal altruism, parental investment, and parent-offspring conflict. Trivers is considered to be one of the most influential living evolutionary theorists, and is a very active researcher and speaker. His most recent focus is on the study of body symmetry and its relationship with various traits that are hypothesized to have evolved in our ancestral past. Trivers’s theories often explain phenomena that are observed in nature but are not easily understood based on traditional evolutionary thinking, and in some cases appear to contradict that thinking. Reciprocal altruism, for example, is a phenomenon that is widely observed in nature and involves one organism benefiting another, genetically unrelated organism without any immediate gain to the benefactor (e.g., vampire bats regurgitating blood to feed non-kin).

There are many other more recent contributors who could arguably be included in the list above. Much recent progress has been made in interdisciplinary fields that could be seen as new fields of research inspired by evolutionary ideas. One such field is evolutionary psychology, which emerged in the 1980s. New theoretical contributions tend to take some time to be recognized though, as will be the case with ideas coming out of these new fields, because new theoretical contributions are invariably somewhat flawed and/or incomplete when they are originally proposed.

Calling self-experimentation N=1 is incorrect and misleading

This is not a post about semantics. Using “N=1” to refer to self-experimentation is okay, as long as one understands that self-experimentation is one of the most powerful ways to improve one’s health. Typically the term “N=1” is used in a demeaning way, as in: “It is just my N=1 experience, so it’s not worth much, but …” This is the reason behind this post. Using the “N=1” term to refer to self-experimentation in this way is both incorrect and misleading.

Calling self-experimentation N=1 is incorrect

The table below shows a dataset that is discussed in this YouTube video on HealthCorrelator for Excel (HCE). It refers to one single individual. Nearly all health-related datasets will look somewhat like this, with columns referring to health variables and rows referring to multiple measurements for the health variables. (This actually applies to datasets in general, including datasets about non-health-related phenomena.)


Often each individual measurement, or row, will be associated with a particular point in time, such as a date. This will characterize the measurement approach used as longitudinal, as opposed to cross-sectional. One example of the latter would be a dataset where each row referred to a different individual, with the data on all rows collected at the same point in time. Longitudinal health-related measurement is frequently considered superior to cross-sectional measurement in terms of the insights that it can provide.

As you can see, the dataset has 10 rows, with the top row containing the names of the variables. So this dataset contains nine rows of data, which means that in this dataset “N=9”, even though the data is for one single individual. To call this an “N=1” experiment is incorrect.

As a side note, an empty cell, like that on the top row for HDL cholesterol, essentially means that a measurement for that variable was not taken on that date, or that it was left out because of obvious measurement error (e.g., the value received from the lab was “-10”, which would be a mistake since nobody has a negative HDL cholesterol level). The N of the dataset as a whole would still be technically 9 in a situation like this, with only one missing cell on the row in question. But the software would typically calculate associations for that variable (HDL cholesterol) based on a sample of 8.
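A short sketch of how that plays out with pandas, using made-up numbers rather than the actual HCE sample dataset:

```python
import numpy as np
import pandas as pd

# Nine measurement rows for one individual; one HDL value is missing.
df = pd.DataFrame({
    "HDL":  [np.nan, 52, 55, 58, 54, 60, 57, 59, 61],
    "TRIG": [120, 110, 95, 90, 100, 85, 88, 80, 78],
})

print(len(df))                      # 9 -> the N of the dataset
print(df["HDL"].count())            # 8 -> non-missing HDL measurements
print(df["HDL"].corr(df["TRIG"]))   # correlation computed on the 8 complete pairs
```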

Calling self-experimentation N=1 is misleading

Calling self-experimentation “N=1”, meaning that the results of self-experimentation are not a good basis for generalization, is very misleading. But there is a twist. Those results may indeed not be a good basis for generalization to other people, but they provide a particularly good basis for generalization for you. It is often much safer to generalize based on self-experimentation, even with small samples (e.g., N=9).

The reason, as I pointed out in this interview with Jimmy Moore, is that data about a single individual tends to be much more uniform than data about a sample of individuals. When multiple individuals are included in an analysis, the number of sources of error (e.g., confounding variables, measurement problems) is much higher than when the analysis is based on one single individual. Thus analyses based on data from one single individual yield results that are more uniform and stable across the sample.

Moreover, analyses of data about a sample of individuals are typically summarized through averages, and those averages tend to be biased by outliers. There are always outliers in any dataset; you might possibly be one of them if you were part of a dataset, which would render the average results at best misleading, and at worst meaningless, to you. This is a point that has also been made by Richard Nikoley, who has been discussing self-experimentation for quite some time, in this very interesting video.

Another person who has been talking about self-experimentation, and showing how it can be useful in personal health management, is Seth Roberts. He and the idea of self-experimentation were prominently portrayed in this article in the New York Times. Check this video where Dr. Roberts talks about how he found out through self-experimentation that, among other things, consuming butter reduced his arterial plaque deposits. Plaque reduction is something that only rarely happens, at least in folks who follow the traditional American diet.

HCE generates coefficients of association and graphs at the click of a button, making it relatively easy for anybody to understand how his or her health variables are associated with one another, and thus what modifiable health factors (e.g., consumption of certain foods) could be causing health effects (e.g., body fat accumulation). It may also help you identify other, more counterintuitive links, such as between certain thought and behavior patterns (e.g., wealth accumulation thoughts, looking at the mirror multiple times a day) and undesirable mental states (e.g., depression, panic attacks).

Just keep in mind that you need to have at least some variation in all the variables involved. Without variation there is no correlation, and thus causation may remain hidden from view.

Being glucose intolerant may make you live only to be 96, if you would otherwise live to be 100

This comes also from the widely cited Brunner and colleagues study, published in Diabetes Care in 2006. They defined a person as glucose intolerant if he or she had a blood glucose level of 5.3-11 mmol/l two hours after a 50-g oral glucose tolerance test. For those using the other measurement system, like us here in the USA, that is a blood glucose level of approximately 95-198 mg/dl.
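The conversion between the two unit systems is just a multiplication by roughly 18, which is glucose's molar mass (about 180 g/mol) divided by ten. A quick sketch:

```python
# Convert blood glucose from mmol/l to mg/dl. The factor ~18.02 is glucose's
# molar mass (~180.16 g/mol) divided by 10.
def mmol_to_mgdl(mmol: float) -> float:
    return mmol * 18.016

print(round(mmol_to_mgdl(5.3)))   # ~95 mg/dl
print(round(mmol_to_mgdl(11.0)))  # ~198 mg/dl
```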

Quite a range, eh!? This covers the high end of normoglycemia, as well as pre- to full-blown type 2 diabetes.

In this investigation, called the Whitehall Study, 18,403 nonindustrial London-based male civil servants aged 40 to 64 years were examined between September 1967 and January 1970. These folks were then followed for over 30 years, based on the National Health Service Central Registry; essentially to find out whether they had died, and of what. During this period, there were 11,426 deaths from all causes; with 5,497 due to cardiovascular disease (48.1%) and 3,240 due to cancer (28.4%).

The graph below shows the age-adjusted survival rates against time after diagnosis. Presumably the N values refer to the individuals in the glucose intolerant (GI) and type 2 diabetic (T2DM) groups that were alive at the end of the monitoring period. This does not apply to the normoglycemic N value; this value seems to refer to the number of normoglycemic folks alive after the divergence point (5-10 years from diagnosis).


Note by the authors: “Survival by baseline glucose tolerance status diverged after 5-10 years of follow-up. Median survival differed by 4 years between the normoglycemic and glucose intolerant groups and was 10 years less in the diabetic compared with the glucose intolerant group.”

That is, it took between 5 and 10 years of high blood glucose levels for any effect on mortality to be noticed. One would expect at least some of the diagnosed folks to have done something about their blood glucose levels; a confounder that was not properly controlled for in this study, as far as I can tell. The glucose intolerant folks ended up living 4 years less than the normoglycemics, and 10 years more than the diabetics.

One implication of this article is that perhaps you should not worry too much if you experience a temporary increase in blood glucose levels due to compensatory adaptation (e.g., elevated growth hormone levels) to healthy changes in diet and lifestyle. It seems unlikely that such a temporary increase in blood glucose levels, even if it lasts as long as a year, will lead to permanent damage to the cells involved in glucose metabolism, such as the beta cells of the pancreas.

Another implication is that being diagnosed as pre-diabetic or diabetic is not a death sentence, even though some people seem to take such diagnoses that way at first. Many of the folks in this study who decided to do something about their health following an adverse diagnosis probably followed the traditional advice for the treatment of pre-diabetes and diabetes, which likely made their health worse. (See Jeff O’Connell’s book Sugar Nation for a detailed discussion of what that advice entails.) And still, not everyone progressed from pre-diabetes to full-blown diabetes. The smaller availability of refined foods at the time probably helped, but it does not fully explain the lack of progression to full-blown diabetes.

It is important to note that this study was conducted in the late 1960s. Biosynthetic insulin was developed in the 1970s using recombinant DNA techniques, and was thus largely unavailable to the participants of this study. Other treatment options were also largely unavailable. Arguably the most influential book on low carbohydrate dieting, by Dr. Atkins, was published in the early 1970s. The targeted use of low carbohydrate dieting for blood glucose control in diabetics was not widely promoted until the 1980s, and even today it is not adopted by mainstream diabetes doctors. To this I should add that, at least anecdotally and from living in an area where diabetes is an epidemic (South Texas), those people who carefully control their blood sugars after type 2 diabetes diagnoses, in many cases with the help of drugs, seem to see marked and sustained health improvements.

Finally, an interesting implication of this study is that glucose intolerance, as defined in the article, would probably not do much to change an outside observer’s perception of a long-living population. That is, if you take a population whose individuals are predisposed to live long lives, with many naturally becoming centenarians, they will likely still be living long lives even if glucose intolerance is rampant. Without carefully conducted glucose tolerance tests, an outside observer may conclude that a damaging diet is actually healthy by still finding many long-living individuals in a population consuming that diet.

Fasting blood glucose of 83 mg/dl and heart disease: Fact and fiction

If you are interested in the connection between blood glucose control and heart disease, you have probably done your homework. This is a scary connection, and sometimes the information on the Internetz makes people even more scared. You have probably seen something to this effect mentioned:
Heart disease risk increases in a linear fashion as fasting blood glucose rises beyond 83 mg/dl.
In fact, I have seen this many times, including on some very respectable blogs. I suspect it started with one blogger, and then got repeated over and over again by others; sometimes things become “true” through repetition. Frequently the reference cited is a study by Brunner and colleagues, published in Diabetes Care in 2006. I doubt very much the bloggers in question actually read this article. Sometimes a study by Coutinho and colleagues is also cited, but this latter study is actually a meta-analysis.

So I decided to take a look at the Brunner and colleagues study. It covers, among other things, the relationship between cardiovascular disease (they use the acronym CHD for this) and 2-hour blood glucose levels after a 50-g oral glucose tolerance test (OGTT). They tested thousands of men at one point in time, and then followed them for over 30 years, which is really impressive. The graph below shows the relationship between CHD and blood glucose in mmol/l. Here is a calculator to convert the values to mg/dl.


The authors note in the limitations section that: “Fasting glucose was not measured.” So these results have nothing to do with fasting glucose, as we are led to believe when we see this study cited on the web. Also, in the abstract the authors say that there is “no evidence of nonlinearity”, but in the results section they say that the data provides “evidence of a nonlinear relationship”. The relationship sure looks nonlinear to me. I tried to approximate it manually below.


Note that CHD mortality really goes up more clearly after a glucose level of 5.5 mmol/l (100 mg/dl). But it also varies significantly more widely after that level; the magnitudes of the error bars reflect that. Also, you can see that at around 6.7 mmol/l (121 mg/dl), CHD mortality is on average about the same as at 5.5 mmol/l (100 mg/dl) and 3.5 mmol/l (63 mg/dl). This last level suggests an abnormally high insulin response, bringing blood glucose levels down too much at the 2-hour mark – i.e., reactive hypoglycemia, which the study completely ignores.
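
As an aside, a simple and generic way to check whether a relationship like this is better described as nonlinear is to compare a straight-line fit with a fit that includes a curvature term. The sketch below uses made-up placeholder points, not the study's data, purely to illustrate the method:

import numpy as np

# Placeholder (x, y) points standing in for 2-h glucose (mmol/l) vs. CHD
# mortality; these values are made up and only illustrate the method.
x = np.array([3.5, 4.5, 5.5, 6.5, 7.5, 9.0, 11.0])
y = np.array([1.0, 0.9, 0.9, 1.1, 1.4, 2.0, 3.0])

for degree in (1, 2):                      # linear fit vs. quadratic fit
    coeffs = np.polyfit(x, y, degree)
    residuals = y - np.polyval(coeffs, x)
    print(degree, round(float(np.sum(residuals ** 2)), 3))

# A much smaller residual sum of squares for the quadratic fit is one
# informal sign that the relationship is nonlinear.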

These findings are consistent with the somewhat chaotic nature of blood glucose variations in normoglycemic individuals, and also with evidence suggesting that average blood glucose levels go up with age in a J-curve fashion even in long-lived individuals.

We also know that traits vary along a bell curve for any population of individuals. Research results are often reported as averages, but the average individual does not exist. The average individual is an abstraction, and you are not it. Glucose metabolism is a complex trait, which is influenced by many factors. This is why there is so much variation in mortality for different glucose levels, as indicated by the magnitudes of the error bars.

In any event, these findings are clearly inconsistent with the statement that "heart disease risk increases in a linear fashion as fasting blood glucose rises beyond 83 mg/dl". The authors even state early in the article that another study based on the same dataset, to which theirs was a follow-up, suggested that:
…. [CHD was associated with levels above] a postload glucose of 5.3 mmol/l [95 mg/dl], but below this level the degree of glycemia was not associated with coronary risk.
Now, exaggerating the facts, to the point of creating fictitious results, may have a positive effect. It may scare people enough that they will actually check their blood glucose levels. Perhaps people will remove certain foods like doughnuts and jelly beans from their diets, or at least reduce their consumption dramatically. However, many people may find themselves with higher fasting blood glucose levels even after removing those foods from their diets, as their bodies try to adapt to lower circulating insulin levels. Some may see higher levels after doing other things that are likely to improve their health in the long term. Others may see higher levels as they get older.

Many of the complications from diabetes, including heart disease, stem from poor glucose control. But it seems increasingly clear that blood glucose control does not have to be perfect to keep those complications at bay. For most people, blood glucose levels can be maintained within a certain range with the proper diet and lifestyle. You may be looking at a long life if you catch the problem early, even if your blood glucose is not always at 83 mg/dl (4.6 mmol/l). More on this on my next post.

Nonlinearity and the industrial seed oils paradox

Most relationships among variables in nature are nonlinear, frequently taking the form of a J curve. The figure below illustrates this type of curve. In this illustration, the horizontal axis measures the amount of time an individual spends consuming a given (high) dose of a substance daily. The vertical axis measures a certain disease marker – e.g., a marker of systemic inflammation, such as levels of circulating tumor necrosis factor (TNF). This is just one of many measurement schemes that may lead to a J curve.


J-curve relationships and variants such as U-curve and inverted J-curve relationships are ubiquitous, and may occur due to many reasons. For example, a J curve like the one above may be due to the substance being consumed having at least one health-promoting attribute, and at least one health-impairing attribute. The latter has a delayed effect, and ends up overcoming the benefits of the former over time. In this sense, there is no “sweet spot”. People are better off not consuming the substance at all. They should look for other sources of the health-promoting factors.
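
To make that reasoning concrete, here is a toy numerical model, with parameter values made up purely for illustration: a benefit that saturates quickly, combined with a harm that accumulates slowly but steadily, produces exactly this kind of J curve.

import math

def disease_marker(t, baseline=10.0, benefit=2.0, harm_rate=0.5):
    # Toy J-curve model: a quickly saturating benefit plus a slowly
    # accumulating harm; all parameter values are illustrative only.
    return baseline - benefit * (1 - math.exp(-t)) + harm_rate * t

for t in [0, 1, 2, 4, 8, 12]:
    print(t, round(disease_marker(t), 2))

# The marker dips below the baseline at first, then climbs well above it,
# tracing the J shape described above.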

So what does this have to do with industrial seed oils, like safflower and corn oil?

If you take a look at the research literature on the effects of industrial seed oils, you’ll find something interesting and rather paradoxical. Several studies show benefits, whereas several others hint at serious problems. The problems seem to be generally related to long-term consumption, and to be associated with a significant increase in the ratio of dietary omega-6 to omega-3 fats; this increase appears to lead to systemic inflammation. The benefits seem to be generally related to short-term consumption.

But what leads to the left side of the J curve, the health-promoting effects of industrial seed oils, usually seen in short-term studies?

It is very likely vitamin E, which is considered, apparently correctly, to be one of the most powerful antioxidants in nature. Oxidative stress is strongly associated with systemic inflammation. Seed oils are by far the richest sources of vitamin E around, in the form of both γ-tocopherol and α-tocopherol. Other good sources, with much less gram-adjusted omega-6 content, are what we generally refer to as “nuts”. And there are many, many substances other than vitamin E that have powerful antioxidant properties.

Chris Masterjohn has talked about seed oils and vitamin E before, making a similar point (see here, and here). I acknowledged this contribution by Chris before; for example, in my June 2011 interview with Jimmy Moore. In fact, Chris has gone further and also argued that the vitamin E requirement goes up as body fat omega-6 content increases over time (see comments under this post, in addition to the links provided above).

If this is correct, I would speculate that it may create a vicious cycle, as the increased vitamin E requirement may lead to increased hunger for foods rich in vitamin E. For someone already consuming a diet rich in seed oils, this may drive a subconscious compulsion to add more seed oils to dishes. Not good!

Men who are skinny-fat: There are quite a few of them

The graph below (from Wikipedia) plots body fat percentage (BF) against body mass index (BMI) for men. The data is a bit old: 1994. The top-left quadrant refers to men with BF greater than 25 percent and BMI lower than 25. A man with a BF greater than 25 percent has crossed into obese territory, even though a BMI lower than 25 would suggest that he is not even overweight. These folks are what we could call skinny-fat men.


The data is from the National Health and Nutrition Examination Survey (NHANES), so it is from the USA only. It is interesting that even though this data is from 1994, we could already find quite a few men with more than 25 percent BF and a BMI of around 20. One example of this would be a man who is 5’11’’, weighs 145 lbs, and who would be technically obese!
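
For anyone who wants to check that example, the standard BMI formula is weight in kilograms divided by height in meters squared, or equivalently 703 times weight in pounds divided by height in inches squared. A quick sketch:

def bmi_us(weight_lb, height_in):
    # Body mass index from US units: 703 * lb / in^2.
    return 703.0 * weight_lb / height_in ** 2

# The example above: 5'11'' (71 inches), 145 lbs.
print(round(bmi_us(145, 71), 1))   # ~20.2, well under the 25 "overweight" cutoff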

About 8 percent of the entire sample of men used as a basis for the plot fell into the area defined by the top-left quadrant – the skinny-fat men. (That quadrant is one in which the BMI measure is quite deceiving; another is the bottom-right quadrant.) Most of us would be tempted to conclude that all of these men were sick or on the path to becoming so. But we do not know this for sure. On the standard American diet, I think it is a reasonably good guess that these skinny-fat men would not fare very well.

What is most interesting for me regarding this data, which definitely has some measurement error built in (e.g., zero BF), is that it suggests that the percentage of skinny-fat men in the general population is surprisingly high. (And this seems to be the case for women as well.) Almost too high to characterize being skinny-fat as a disease per se, much less a genetic disease. Genetic diseases tend to be rarer.

In populations under significant natural selection pressure, which does not include modern humans living in developed countries, genetic diseases tend to be wiped out by evolution. (The unfortunate reality is that modern medicine helps these diseases spread, although quite slowly.)  Moreover, the prevalence of diabetes in the population was not as high as 8 percent in 1994, and is not that high today either; although it tends to be concentrated in some areas and cluster with obesity as defined based on both BF and BMI.

And again, who knows, maybe these folks (the skinny-fat men) were not even the least healthy in the whole sample, as one may be tempted to conclude.

Maybe being skinny-fat is a trait, passed on across generations, not a disease. Maybe such a trait was useful at some point in the not so distant past to some of our ancestors, but leads to degenerative diseases in the context of a typical Western diet. Long-living Asians with low BMI tend to gravitate more toward the skinny-fat quadrant than many of their non-Asian counterparts. That is, long-living Asians generally tend to have a higher BF percentage at the same BMI (see a discussion about the Okinawans in this post).

Evolution is a deceptively simple process, which can lead to very odd results.

This “trait-not-disease” idea may sound like semantics, but it has major implications. It would mean that many of the folks who are currently seen as diseased or disease-prone, are in fact simply “different”. At a point in time in our past, under a unique set of circumstances, they might have been the ones who would have survived. The ones who would have been perceived as healthier than average.

Refined carbohydrate-rich foods, palatability, glycemic load, and the Paleo movement

A great deal of discussion has been going on recently revolving around the so-called “carbohydrate hypothesis of obesity”. I will use the acronym CHO to refer to this hypothesis. This acronym is often used to refer to carbohydrates in nutrition research; I hope this will not cause confusion.

The CHO could be summarized as this: a person consumes foods with “easily digestible” carbohydrates, those carbohydrates raise insulin levels abnormally, the abnormally high insulin levels drive too much fat into body fat cells and keep it there, this causes hunger as not enough fat is released from fat cells for use as energy, this hunger drives the consumption of more foods with “easily digestible” carbohydrates, and so on.

It is posited as a feedback-loop process that causes serious problems over a period of years. The term “easily digestible” is within quotes for emphasis. If it is taken to mean “refined”, which is still a bit vague, there is a good amount of epidemiological evidence in support of the CHO. If it is taken to mean simply “easily digestible”, as in potatoes and rice (which is technically a refined food, but a rather benign one), there is a lot of evidence against it. Even from an unbiased (hopefully) look at county-level data in the China Study.

Another hypothesis that has been around for a long time and that has been revived recently, which we could call the “palatability hypothesis”, is a competing hypothesis. It is an interesting and intriguing hypothesis, at least at first glance. There seems to be some truth to this hypothesis. The idea here is that we have not evolved mechanisms to deal with highly palatable foods, and thus end up overeating them.  Therefore we should go in the opposite direction, and place emphasis on foods that are not very palatable to reach our optimal weight. You might think that to test this hypothesis it would be enough to find out if this diet works: “Eat something … if it tastes good, spit it out!”

But it is not so simple. To test this palatability hypothesis one could try to measure the palatability of foods, and see if it is correlated with consumption. The problem is that the formulations I have seen of the palatability hypothesis treat the palatability construct as static, when in fact it is dynamic – very dynamic. The perception of the reward associated with a specific food changes depending on a number of factors.

For example, we cannot assign a palatability score to a food without considering the particular state in which the individual who eats the food is. That state is defined by a number of factors, including physiological and psychological ones, which vary a lot across individuals and even across different points in time for the same individual. For someone who is hungry after a 20 h fast, for instance, the perceived reward associated with a food will go up significantly compared to the same person in the fed state.

Regarding the CHO, it seems very clear that refined carbohydrate-rich foods in general, particularly the highly modified ones, disrupt normal biological mechanisms that regulate hunger. Perceived food reward, or palatability, is a function of hunger. Abnormal glucose and insulin responses appear to be at the core of this phenomenon. There are undoubtedly many other factors at play as well. But, as you can see, there is a major overlap between the CHO and the palatability hypothesis. Refined carbohydrate-rich foods generally have higher palatability than natural foods. Humans are good engineers.

One meme that seems to be forming recently on the Internetz is that the CHO is incompatible with data from healthy isolated groups that consume a lot of carbohydrates, which are sometimes presented as alternative models of life in the Paleolithic. But in fact among the influential proponents of the CHO are the intellectual founders of the Paleolithic dieting movement, including folks who studied native diets high in carbohydrates and found their users to be very healthy (e.g., the Kitavans). One thing that these intellectual founders did, though, was to clearly frame the CHO in terms of refined carbohydrate-rich foods.

Natural carbohydrate-rich foods are clearly distinguished from refined ones based on one key attribute; not the only one, but a very important one nonetheless. That attribute is their glycemic load (GL). I am using the term “natural” here as roughly synonymous with “unrefined” or “whole”. Although they are often confused, the GL is not the same as the glycemic index (GI). The GI is a measure of the effect of carbohydrate intake on blood sugar levels. Glucose is the reference; it has a GI of 100.

The GL provides a better way of predicting total blood sugar response, in terms of “area under the curve”, based on both the type and quantity of carbohydrate in a specific food. Area under the curve is ultimately what really matters; a sharp but brief spike may not have much of a metabolic effect. Insulin response is highly correlated with blood sugar response in terms of area under the curve. The GL is calculated through the following formula:

GL = (GI x the amount of available carbohydrate in grams) / 100
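
As a quick illustration of this formula, the sketch below computes the GL for a couple of foods; the GI and carbohydrate values used are rough, illustrative numbers rather than figures taken from the table below.

def glycemic_load(gi, available_carb_g):
    # GL = (GI x available carbohydrate in grams) / 100
    return gi * available_carb_g / 100.0

# Illustrative values only; published GI figures vary by preparation:
print(glycemic_load(gi=100, available_carb_g=10))   # 10 g of pure glucose -> GL 10
print(glycemic_load(gi=70, available_carb_g=50))    # ~50 g of carbs from white bread -> GL 35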

The GL of a food is also dynamic, but its range of variation is small enough in normoglycemic individuals that it can be treated as a relatively static number. (Still, the reference is normoglycemic individuals.) One of the main differences between refined and natural carbohydrate-rich foods is the much higher GL of industrial carbohydrate-rich foods, and this is not affected by slight variations in GL and GI depending on an individual’s state. The table below illustrates this difference.


Looking back at the environment of our evolutionary adaptation (EEA), which was not static either, this situation becomes analogous to that of vitamin D deficiency today. A few minutes of sun exposure stimulate the production of 10,000 IU of vitamin D, whereas food fortification in the standard American diet normally provides less than 500 IU. The difference is large. So is the difference in GL of natural and refined carbohydrate-rich foods.

And what are the immediate consequences of that difference in GL values? They are abnormally elevated blood sugar and insulin levels after meals containing refined carbohydrate-rich foods. (Incidentally, the GL happens to be relatively low for the rice preparations consumed by Asian populations who seem to do well on rice-based diets.) Abnormal levels of other hormones, in a chronic fashion, come later, after many years consuming those foods. These hormones include adiponectin, leptin, and tumor necrosis factor. The authors of the article from which the table above was taken note that:

Within the past 20 y, substantial evidence has accumulated showing that long term consumption of high glycemic load carbohydrates can adversely affect metabolism and health. Specifically, chronic hyperglycemia and hyperinsulinemia induced by high glycemic load carbohydrates may elicit a number of hormonal and physiologic changes that promote insulin resistance. Chronic hyperinsulinemia represents the primary metabolic defect in the metabolic syndrome.

Who are the authors of this article? They are Loren Cordain, S. Boyd Eaton, Anthony Sebastian, Neil Mann, Staffan Lindeberg, Bruce A. Watkins, James H O’Keefe, and Janette Brand-Miller. The paper is titled “Origins and evolution of the Western diet: Health implications for the 21st century”. A full-text PDF is available here. For most of these authors, this article is their most widely cited publication so far, and it is piling up citations as I write. This means that not only members of the general public have been reading it, but that professional researchers have been reading it as well, and citing it in their own research publications.

In summary, the CHO and the palatability hypothesis overlap, and the overlap is not trivial. But the palatability hypothesis is more difficult to test. As Karl Popper noted, a good hypothesis is a testable hypothesis. Eating natural foods will make an enormous difference for the better in your health if you are coming from the standard American diet, and you can justify this statement based on the CHO, the palatability hypothesis, or even a few others – e.g., a nutrient density hypothesis, which would be closer to Weston Price's views. Even if you eat only plant-based natural foods, which I cannot fully recommend based on data I’ve reviewed on this blog, you will be better off.