A Veneer of Legitimate Science Crafted To Advance Questionable Claims

I was researching the effects of a chemical in broad industrial use for an article when I stumbled on a piece at CNN.com. The piece was titled “A study estimates 15,000 cancer cases could stem from chemicals in California tap water,” reported by Nadia Kounang of CNN and updated at 9:23 AM ET, Tuesday, April 30, 2019. (Why was it necessary to update this article?)

I do a fair amount of work that involves scientific journals, and I have a background in statistics, so this piece struck me as odd. The article claimed that California could see an increase of 15,000 cancers statewide over the period of a lifetime. That bothered me: in a state with a population of 45,000,000, over a period of roughly 75 years, that incidence is essentially insignificant and falls below the detection threshold of even a research project with careful controls.
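The scale of the claim is easy to check with back-of-envelope arithmetic. The sketch below uses only the figures quoted above (15,000 excess cases, a population of 45,000,000, a 75-year lifetime); none of it comes from the study itself:

```python
# Back-of-envelope check using the figures quoted above (assumptions, not study data).
excess_cases = 15_000        # projected excess cancer cases over a lifetime
population = 45_000_000      # population figure used above
lifetime_years = 75          # rough lifetime horizon used above

lifetime_risk = excess_cases / population     # excess lifetime risk per person
annual_cases = excess_cases / lifetime_years  # excess cases per year statewide

print(f"Excess lifetime risk per person: {lifetime_risk:.2%}")
print(f"Excess cases per year statewide: {annual_cases:.0f}")  # roughly 200
```

Roughly 200 extra cases per year, in a state that records on the order of a hundred thousand new cancer diagnoses annually, is a signal far too small to detect against normal variation.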

The piece came from researchers at the Environmental Working Group, an advocacy group based in Washington, D.C., and was published in the journal Environmental Health. The first difficulty I had was locating the publication and the article. Eventually I found the journal, one of some two to three hundred scientific journals published by BMC, a part of Springer Nature. The mission statement on their website reads:

A pioneer of open access publishing, BMC has an evolving portfolio of high quality peer-reviewed journals including broad interest titles such as BMC Biology and BMC Medicine, specialist journals such as Malaria Journal and Microbiome, and the BMC Series.

The article in question was published in April 2019 and was an attempt to apply methods used in air quality and risks to drinking water. The paper is reasonably short and includes the two following statements.

“Cumulative risk assessment for drinking water has lagged behind similar methodologies already standard in air quality evaluations. In our estimate, the slow adoption of cumulative methods in drinking water assessments is at least partly due to the variety of health outcomes caused by drinking water contaminants. While acknowledging the scientific challenges of assessing the impacts of co-occurring chemicals on multiple body systems, we believe that the drinking water field can start with the application of existing cumulative risk methodologies established for air quality.”

“The EPA’s technical support materials for the National Air Toxics Assessment note that the true value of the cumulative risk is not known and that the actual risks could be lower than predicted [2]. “

So it seems they are applying air-quality evaluation methods to drinking water while admitting that the true cumulative risk of substances in air is itself unknown?

What they did come up with is one hell of a headline and a remarkable amount of media coverage. It seems they got mentioned in The Washington Post and The New York Times and probably a number of other news outlets.

In browsing other of BMC’s “scientific” journals I found a number of other shocking headlines, with one organization repeatedly getting published: the Ramazzini Institute of Italy. It has announced findings like “Splenda Causes Cancer,” “Cellphone Radiation Causes Brain Tumors,” and “New Results on Glyphosate Danger.” The Ramazzini Institute’s research is widely distrusted by the Food and Drug Administration, the European Food Safety Authority, and the Environmental Protection Agency, as well as a virtual who’s who of modern science.

I am saddened by this afternoon’s walk through borderline research, questionable institutions and dubious scientific publications. They form a large network of radical organizations, lobbying groups, litigation law firms and pseudo-scientific institutes financed by millions in donations, litigation proceeds and well-intentioned private and government grants. They now represent a vicious conspiracy that plants misleading information in the media to stir up public fears and gain support, then uses the fruits of this misrepresentation to legally attack businesses and win or extort hundreds of millions of dollars, making a number of participants rich along the way.

I have always been puzzled by why bad scientific arguments seem to prevail in areas from breast implants to talcum powder to Roundup, but I am beginning to get the picture.

Selective Blank Slatism

Bo Winegard & Cory Clark

Published on May 3, 2019 

Selective Blank Slatism and Ideologically Motivated Misunderstandings

Give me a dozen healthy infants, well-formed, and my own specified world to bring them up in and I’ll guarantee to take any one at random and train him to become any type of specialist I might select—doctor, lawyer, artist, merchant-chief and, yes, even beggar-man and thief, regardless of his talents, penchants, tendencies, abilities, vocations, and race of his ancestors. ~John B. Watson

Blank slatism is the view, exemplified here with John B. Watson’s characteristic arrogance, that human nature is highly flexible and largely determined by environmental forces. Because almost all the available evidence suggests that blank slatism is incorrect, many scholars are puzzled that versions of this philosophy appear to remain popular in certain university departments and among the intelligentsia more broadly. Some critics of progressivism, such as the economist Thomas Sowell, have contended that political progressives are particularly likely to hold blank slate beliefs as a result of their tendency to attribute many social disparities to environmental and social causes and to de-emphasize genetic ones.

Others—usually those favorably inclined to progressivism, like the Guardian’s Ed Rooksby—have argued that this is a misrepresentation, a lazy straw man argument, and that, properly understood, most progressives are not blank slatists at all. Rather, they are simply sensitive to the effects of social forces and injustices, and this sensitivity is often mischaracterized by ideological opponents as naive environmentalism. Many of these people argue that, in fact, progressives are more likely than conservatives to accept that genetics contribute to human behavior. After all, conservatives still appear reluctant to believe that sexuality is caused by genes or constrained by one’s nature. Rather, they believe it is a choice, an exercise of a person’s free will. Similarly, they are equally unlikely to attribute social failure to a person’s genes, but instead blame a person’s attitudes to work and commitment, contending that the destitute are often lazy and undisciplined.

Those who defend progressivism are partially correct; progressives aren’t blank slatists in general and indeed they appear more likely than conservatives to accept genetic causes for many human behaviors and life outcomes. However, progressives are selective or ideological blank slatists. That is, they generally accept that there is some kind of nature that constrains individuals. Richard Dawkins could never be LeBron James nor vice versa, no matter their respective diets, upbringings, or effort exerted.

However, they are selectively skeptical that an appeal to this nature (genetics) can explain certain kinds of differences between humans, between sexes, and among ethnic populations. Specifically, they are skeptical of genetic explanations if they appear to suggest that social inequalities are “natural” or caused by genetic differences between groups, and especially when those differences appear to favor the higher status group (for instance, that men are better than women at something on average because of genetic differences between men and women).

The notion that progressives are selective blank slatists is congruent with theory, observational evidence, and systematic survey evidence.

Selective Blank Slatism: Theory and Evidence

One of the chief psychological differences between conservatives and progressives is that progressives are more averse to inequality. Both, of course, see disparities in the world—they see that a professor has more status than a construction worker or that a lawyer makes more money than a social worker and so on. But progressives find these disparities more disconcerting.

Where such disparities exist, there are at least two possible explanations. The first is that genes have endowed certain individuals and groups with natures that lead to better life outcomes than others (for instance, some have higher intelligence, superior athletic ability, greater musical talent, exceptional beauty, and captivating charisma). The second is that all individuals and groups are born genetically equal in their capacities to develop desirable traits and abilities, but then these natural equalities are distorted by environmental and social forces, which thwart certain individuals and groups trying to achieve their full potential.

Those who particularly abhor inequality appear to prefer the latter explanation for two reasons. First, it suggests that groups and individuals are naturally equal. Second, it suggests that equality in life outcomes can be achieved in a genuinely free and meritocratic society. The pressing political project at hand, then, is to create such a society. Accepting the first explanation (that individuals and groups naturally differ) is morally unpleasant for progressives simply because it violates their preference for equality; but it is also unpleasant because it means that society can only make individuals and groups equal by violating meritocratic principles with interventionist policies that favor certain groups.

The view that most humans and all groups are basically equal is a kind of cosmic egalitarianism that suggests that the universe is just and fair, but that people are not. This view ineluctably leads to selective blank slatism because if humans are, in fact, naturally equal, then the only thing that could explain social disparities is environmental forces.

So, selective blank slatism is theoretically consistent with progressives’ psychological inclinations and preferences. It also conforms to informal inferences we can draw from what we see in the world. For example, when James Damore’s “Google memo” was released, progressives immediately assailed him, accusing him of perpetuating sexism in the tech industry. Despite how scurrilous many of the attacks on Damore were, his actual memo was a generally judicious and cautious document. He simply asserted that some of Google’s diversity policies were unfair and likely doomed to failure because they failed to consider biological (read, natural or genetically caused) differences between men and women.

The consternation and outrage the memo provoked among progressives is readily explicable if we accept that progressives are cosmic egalitarians. Women are under-represented in the tech industry and, because a cosmic egalitarian cannot countenance genetic differences between men and women, this disparity must necessarily be attributed to sexism. Furthermore, anyone who claims otherwise is wilfully defending an intolerable status quo.

We now have strong, systematic evidence that supports the theory and the informal observations that progressives are cosmic egalitarians and selective blank slatists. We collected survey data from 3,274 people. We first asked traditional demographic questions, including political ideology on a 7-point scale (from 1 = very conservative to 7 = very liberal), and then asked many questions about sex and ethnic differences and the causes of social disparities. For analytical purposes, we divided participants into extreme conservatives (those who answered 1 on political ideology), conservatives (answered 2-3), moderates (answered 4), liberals (answered 5-6), and progressives (answered 7). It’s important to note that our scale did not use the label “progressive.” The term is ours to describe extreme liberals. Overall, 488 participants, or roughly 15 percent, were progressives as we defined it. Although we asked a variety of questions, we will only report seven of the most directly germane here (curious readers can examine this, which reports all of the data).
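As a concrete illustration, the binning described above can be sketched in a few lines (the cut points and labels are taken from the paragraph; the function name is ours):

```python
# A minimal sketch of the ideological binning described above.
def ideology_group(score: int) -> str:
    """Map a 1-7 self-rating (1 = very conservative, 7 = very liberal) to a group label."""
    if score == 1:
        return "extreme conservative"
    elif score in (2, 3):
        return "conservative"
    elif score == 4:
        return "moderate"
    elif score in (5, 6):
        return "liberal"
    elif score == 7:
        return "progressive"
    raise ValueError("score must be an integer from 1 to 7")

# Share of the sample classified as progressive, per the counts reported above.
print(f"Progressive share: {488 / 3274:.1%}")  # roughly 15 percent
```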

First, consistent with selective blank slatism, progressives more than others reported that men and women have equal abilities on all tasks. (Questions were on a 7-point scale, from 1 = do not agree at all to 7 = agree completely.)

They also reported, more than other groups did, that all ethnic groups have equal abilities on all tasks.

Consistent with these answers, they also reported that differences between the sexes (and between ethnic groups) were more likely to be caused by discrimination than others did. (Notice that the question/statement claims that the only reason there are differences is because of sexism. A full 130 progressives, or 26 percent, endorsed this at 7, indicating that they agree completely.)


Predictably, they also reported more than other groups that people use science to justify existing inequalities. These findings are consistent with progressives’ response to the Damore Google memo. Progressives are likely to impute nefarious motives to anyone who asserts that men and women differ biologically—even when such assertions are supported by science.

Conclusion

Progressives do accept a genetically caused human nature; but, consistent with the claims of their critics, they accept this much less when it is ideologically inconvenient. In other words, progressives are selective or ideological blank slatists. They accept genetic explanations for things such as homosexuality, transsexuality, obesity, addiction, and a variety of mental illnesses, but not for sex or group differences, and especially not when those sex or group differences could explain (and thus potentially justify) existing inequalities between sexes and groups.

Our data, although limited, provide compelling support for the contention that progressives are selective blank slatists. Progressives agreed more strongly than any other ideological group with statements that convey blank slate attitudes about sex and ethnic differences (precisely the kind of blank slatism a priori theory would predict progressives would hold). Most supportive and perhaps most surprising, a full 26 percent of progressives fully endorsed the statement that “the only reason there are sex differences is because society is sexist,” which is, to put it mildly, a wildly implausible claim.

It should not surprise us that progressives have an ideologically saturated view of human nature. On all sides, concerns about human nature are intense and passionate because today’s competing ideologies are premised on different conceptions of human variation and its relation to the ideal social order. With so much at stake, few are capable of approaching the evidence with an open mind. Conservatives too, as noted in the introduction, are almost certainly selective blank slatists. They appear, for example, to be more skeptical that mental illnesses, drug addiction, and sexual orientation are caused by genetics. And although conservatives do appear to accept a more constrained view of humans than do progressives, they often argue that all (or almost all) people, if they just work hard, can succeed. Furthermore, they often blame social pathologies exclusively on cultural deficits and decadence.

So, it is unlikely that either the Left or the Right has a monopoly on bias; and it is unlikely that either is absolutely correct about human nature (although, it is possible that one is more correct than the other). If we begin to understand these biases and the errors into which they lead us, then we can begin to adjust for them. We can, as it were, correct our distorted vision with the spectacles of self-conscious and disciplined reflection. The first step might be to ask ourselves a simple question: How likely is it that what we want to be true of human nature is true of human nature? In other words, if all of the “facts” about humans conform to our desires then that is strong evidence not that we are lucky, but that we are wrong.

 

Bo Winegard is an essayist and an assistant professor at Marietta College. You can follow him on Twitter @EPoe187

Cory Clark is an assistant professor of psychology at Durham University who studies moral psychology and free will. You can follow her on Twitter @ImHardcory

Second-Hand Smoke Again?

From the “sometimes the fraud is so obvious only really smart people can’t see it” department.

The EPA Particulate Matter studies greatly exaggerated the health risks of exposure to PM 2.5 (Particulate Matter as small as 2.5 microns) to justify proposed regulations to control second-hand smoke.

A new study indicates you can smoke (each cigarette is a massive PM2.5 exposure) until you are age 35, quit, and then by age 50 statistically have a normal life expectancy — despite inhaling over 4 pounds of PM2.5 as a smoker. For comparison, a non-smoker inhales about 2 ounces of PM2.5 over the course of an 80-year lifespan.
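The 4-pound and 2-ounce figures are the post’s claims, but the ratio they imply is simple arithmetic to verify:

```python
# Sanity check of the exposure comparison above (the lb/oz figures are the post's claims).
LB_TO_G = 453.592   # grams per pound
OZ_TO_G = 28.3495   # grams per ounce

smoker_pm25_g = 4 * LB_TO_G      # claimed PM2.5 inhaled by a smoker to age 35
nonsmoker_pm25_g = 2 * OZ_TO_G   # claimed PM2.5 inhaled by a non-smoker over ~80 years

ratio = smoker_pm25_g / nonsmoker_pm25_g
print(f"Smoker inhales about {ratio:.0f}x the PM2.5 of a lifelong non-smoker")  # 32x
```

A 32-fold difference in cumulative exposure with no detectable difference in life expectancy is the tension the post is pointing at.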

I Ask Only Because I Can

I raise this only because I am here. If I weren’t here I certainly would not raise this issue with you.

Just as Goldilocks found the porridge that was just right, the Earth seems to be just right for living creatures. The Earth seems to be the perfect distance from the sun with lots of water.

The fine-tuned Universe is the proposition that the conditions that allow life in the Universe can occur only when certain universal dimensionless physical constants lie within a very narrow range of values, so that if any of several fundamental constants were only slightly different, the Universe would be unlikely to be conducive to the establishment and development of matter, astronomical structures, elemental diversity, or life as it is understood.

As Stephen Hawking has noted, “The laws of science, as we know them at present, contain many fundamental numbers, like the size of the electric charge of the electron and the ratio of the masses of the proton and the electron. … The remarkable fact is that the values of these numbers seem to have been very finely adjusted to make possible the development of life.”

Martin Rees formulates the fine-tuning of the Universe in terms of the following six dimensionless physical constants.

N, the ratio of the strength of electromagnetism to the strength of gravity for a pair of protons, is approximately 10³⁶. According to Rees, if it were significantly smaller, only a small and short-lived universe could exist.

Epsilon (ε), a measure of the nuclear efficiency of fusion from hydrogen to helium, is 0.007: when four nucleons fuse into helium, 0.007 (0.7%) of their mass is converted to energy. The value of ε is in part determined by the strength of the strong nuclear force.[13] If ε were 0.006, only hydrogen could exist, and complex chemistry would be impossible. According to Rees, if it were above 0.008, no hydrogen would exist, as all the hydrogen would have been fused shortly after the big bang. Other physicists disagree, calculating that substantial hydrogen remains as long as the strong force coupling constant increases by less than about 50%.

Omega (Ω), commonly known as the density parameter, is the relative importance of gravity and expansion energy in the Universe. It is the ratio of the mass density of the Universe to the “critical density” and is approximately 1. If gravity were too strong compared with dark energy and the initial metric expansion, the universe would have collapsed before life could have evolved. On the other side, if gravity were too weak, no stars would have formed.

Lambda (λ), commonly known as the cosmological constant, describes the ratio of the density of dark energy to the critical energy density of the universe, given certain reasonable assumptions such as positing that dark energy density is a constant. In terms of Planck units, and as a natural dimensionless value, the cosmological constant, λ, is on the order of 10⁻¹²². This is so small that it has no significant effect on cosmic structures that are smaller than a billion light-years across. If the cosmological constant were not extremely small, stars and other astronomical structures would not be able to form.[12]

Q, the ratio of the gravitational energy required to pull a large galaxy apart to the energy equivalent of its mass, is around 10⁻⁵. If it is too small, no stars can form. If it is too large, no stars can survive because the universe is too violent, according to Rees.

D, the number of spatial dimensions in spacetime, is 3. Rees claims that life could not exist if there were 2 or 4 dimensions of spacetime nor if any other than 1 time dimension existed in spacetime.
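Two of these constants are easy to reproduce from standard physical data. The sketch below uses CODATA values for the constants and tabulated atomic masses; the exact digits are incidental, only the orders of magnitude matter:

```python
# N: ratio of electrostatic to gravitational attraction between two protons.
# The separation r cancels, since both forces scale as 1/r^2.
k_e = 8.9875517e9        # Coulomb constant, N*m^2/C^2
G = 6.67430e-11          # gravitational constant, N*m^2/kg^2
e = 1.602176634e-19      # elementary charge, C
m_p = 1.67262192e-27     # proton mass, kg

N = (k_e * e**2) / (G * m_p**2)
print(f"N ~ {N:.2e}")    # on the order of 10^36, as Rees states

# Epsilon: fraction of mass released when four hydrogen nuclei fuse to helium-4
# (approximated here from atomic masses in unified atomic mass units).
m_H = 1.007825
m_He4 = 4.002602
epsilon = (4 * m_H - m_He4) / (4 * m_H)
print(f"epsilon ~ {epsilon:.4f}")   # roughly 0.007
```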

Various possible explanations of ostensible fine-tuning are discussed among philosophers, scientists, theologians, and proponents and detractors of creationism. The fine-tuned Universe observation is closely related to, but is not exactly synonymous with, the anthropic principle, which is often used as an explanation of apparent fine-tuning.

The anthropic principle is a philosophical consideration that observations of the universe must be compatible with the conscious and sapient life that observes it. Some proponents of the anthropic principle reason that it explains why this universe has the age and the fundamental physical constants necessary to accommodate conscious life. As a result, they believe it is unremarkable that this universe has fundamental constants that happen to fall within the narrow range thought to be compatible with life. The strong anthropic principle (SAP) as explained by John D. Barrow and Frank Tipler states that this is all the case because the universe is in some sense compelled to eventually have conscious and sapient life emerge within it. Some critics of the SAP argue in favor of a weak anthropic principle (WAP) similar to the one defined by Brandon Carter, which states that the universe’s ostensible fine tuning is the result of selection bias (specifically survivor bias): i.e., only in a universe capable of eventually supporting life will there be living beings capable of observing and reflecting on the matter. Often such arguments draw upon some notion of the multiverse for there to be a statistical population of universes to select from and from which selection bias (our observance of only this universe, compatible with our life) could occur.

The First Americans

I have long been interested in the settling of our continent by people from somewhere else. My interest intensified when I found a Clovis spear point on our property in Georgia. When I looked into Clovis sites I became even more puzzled by what I found. Most Clovis sites are dated between 8,000 and 11,500 years old, and surprisingly many more are found in the Southeastern United States than anywhere else. That seemed odd to me for a migration that was supposed to have come through Alaska. Research into how humans came to the American continents has been going on for a long time, and there are as many inconsistencies and gaps as there are theories, but it is looking like genetics may be the path to an answer.

I found the following article this morning and thought it worth a read…

A BBC Report By Melissa Hogenboom

30 March 2017

Many thousands of years ago, not a single human being lived in the Americas.

This only changed during the last Ice Age. It was a time when most of North America was covered with a thick sheet of ice, which made the Americas difficult to inhabit.

But at some point during this time, adventurous humans started their journey into a new world.

They probably came on foot from Siberia across the Bering Land Bridge, which connected Alaska and Eurasia until the end of the last Ice Age, about 10,000 years ago. The area is now submerged under water.

There is still debate about when these first Americans actually arrived and where they came from. But we are now getting closer to uncovering the original narrative, and finding out who these first Americans really were.

During the last Ice Age lower sea levels exposed a land bridge across the Bering Sea (Credit: Tom Thulen/Alamy)

During the peak of the last Ice Age about 20,000 years ago, a journey from Asia into the Americas would not have been particularly desirable. North America was covered in icy permafrost and tall glaciers. But, paradoxically, the presence of so much ice meant that the journey was, in a way, easier than it would be today.

The abundance of ice meant that sea levels were much lower than they are now, and a stretch of land emerged between Siberia and Alaska. Humans and animals could simply walk from Asia to North America. The land bridge was called Beringia.

At some point around this time – known as the Last Glacial Maximum – groups of hunter-gatherers moved east from what is now Siberia to set up camp there.

“The first people who arrived in Beringia were probably small, highly mobile groups evolving in a large landscape, probably depending on the availability of seasonal resources,” says Lauriane Bourgeon of the University of Montreal, Canada.

These people did well to seek refuge there. Central Beringia was a much more desirable environment than the icy lands they had left behind. The climate was a bit damper. Vegetation, in the form of woody shrubs, would have given them access to wood that they could burn to keep warm.

Beringia was also an ideal environment for large grazing mammals, giving early hunter-gatherers something to hunt, says Scott Elias of Royal Holloway, University of London in the UK, who reconstructs past climates.

During the last Ice Age humans could walk from Siberia into the Americas (Credit: Gary Hinks/SPL)

“Our hypothesis is that people were using the woody shrubs from the land bridge to ignite bones on the landscape. The bones of big animals contain lots of fatty deposits of marrow, and they will burn.”

When humans got to Beringia, they would have had little choice but to set up camp there. The vast Laurentide and Cordilleran ice sheets further east cut them off from North America.

It is now becoming clear that they made Beringia their home, staying put for several thousand years. This idea is called the Beringian Standstill Hypothesis. This standstill helped these isolated groups of people to become genetically distinct from those they had left behind, according to a 2007 study.

This long standstill therefore meant that the people who arrived in the Americas – when the ice finally retreated and allowed entry – were genetically different to the individuals who had left Siberia thousands of years earlier. “Arguably one of the most important parts of the process is what happened in Beringia. That’s when they differentiated from Asians and started becoming Native Americans,” says Connie Mulligan of the University of Florida in Gainesville, US, who took part in this early analysis.

Since then, other genetic insights have further supported the standstill hypothesis. Elias and colleagues even propose that people stayed in Beringia for as long as 10,000 years.

When the ice finally started to retreat, groups of people then travelled to different pockets of the Americas.

There has long been debate over whether these early settlers arrived from several migrations from different areas, or just one.

Over 20 years ago, Mulligan proposed that there was just one migration from Beringia into the “New World”. She came to this conclusion by analysing the genetic variation in the DNA of modern-day Native Americans and comparing it with the variation in Asia. The same rare pattern appeared in all the Native Americans she studied, but very rarely appeared in modern-day Asians. This meant Native Americans likely arose from a single population of people who had lived in Beringia, isolated for many years.

In 2015, a study using more advanced genetic techniques came to a similar conclusion. Rasmus Nielsen of the University of California, Berkeley, US, and colleagues found that the “vast majority” of Native Americans must have originated from just one colonisation event.

“There’s been no turnover or change in the population group as some people had previously hypothesised,” says Nielsen. In fact, about 80% of Native Americans today are direct descendants of the Clovis people, who lived across North America about 13,000 years ago. This discovery came from a 2014 genetic study of a one-year-old Clovis boy who died about 12,700 years ago.

But we now know there must have been staggered migrations from Beringia.

Many Native Americans today are direct descendants of the Clovis people (Credit: William Scott/Alamy)

That is because there are small groups of people in the Amazonian region of South America – such as the Suruí and Karitiana – with additional mysterious “arctic gene flow”, unrelated to the Clovis boy. Another 2015 study therefore proposed there was more than one “founding population of the Americas”.

The indigenous populations of the Americas, the team found, have distant genetic links in common with people of Australia, Papua New Guinea and the Andaman Islands.

This means, says Pontus Skoglund of Harvard University in Boston, Massachusetts, that people came into Beringia over different times during the “standstill” and went on to populate different parts of the Americas. Those early dispersals are still reflected by differences in the genomes of people living today.

“It wasn’t simply a single homogenous founding population. There must have been some type of patchwork of people, and maybe there were multiple pulses,” says Skoglund.

In other words, the Beringian inhabitants did not all arrive or leave at the same time.

This makes sense when you consider that Beringia was not a narrow land bridge with ocean on either side. “It was a huge region about twice the size of Texas,” says Elias. The people living there would have had no idea that it was a land bridge at all. “There were no sign posts saying they were leaving Siberia.”

This makes it highly likely that there were different groups of Beringian inhabitants that never met.

A study published in February 2017 strengthens this idea further. After examining the shapes of 800- to 500-year-old skulls from Mexico, researchers found they were so distinct, the people the skulls belonged to must have remained genetically isolated for at least 20,000 years.

There is evidence humans were present in Oregon 14,500 years ago (Credit: John R. Foster/SPL)

To understand who the first Americans really were, we have to consider when they arrived. While the exact timing is hard to pin down, Nielsen’s work gives some insight. By sequencing the genomes of people from the Americas, Siberia, and Oceania, he and colleagues could estimate when these populations diverged. The team concludes that the ancestors of the first Americans came to Beringia at some point between 23,000 and 13,000 years ago.

We now have archaeological evidence to suggest that the people who left Siberia – and then Beringia – did so even earlier than the 23,000-year-limit proposed by Nielsen and colleagues. In January 2017, Lauriane Bourgeon and her team found evidence of people living in a cave system in the northern Yukon Territory of western Canada, called the Bluefish Caves, that dates to as early as 24,000 years ago. It was previously believed that people had only arrived in this area 10,000 years later.

“They reached Beringia as early as 24,000 years ago, and they remained genetically and geographically isolated until about 16-15,000 years ago, before dispersing south of the ice sheets that covered most of North America during this period,” says Bourgeon.

The caves “were only used on brief occasions for hunting activities”, she says. “We found cut marks on bones from horse, caribou and wapiti, so we know that humans were relying on those species.”

Human cut marks were discovered on this 24,000-year-old horse mandible (Credit: Lauriane Bourgeon)

This work provides further evidence that people were in the Beringia area at this early date. But it does not reveal the exact dates these people first ventured further south.

For that, we can turn to archaeological evidence. For decades, stone tools left by the Clovis people have been found throughout North America. Some date to as early as 13,000 years ago. This might suggest that humans moved south very late. But in recent years evidence has begun to emerge that questions this idea.

Most preserved remains are stone tools and sometimes bones of animals

For instance, at a site called Monte Verde in southern Chile, there is evidence of human occupation that dates to between 14,500 and 18,500 years ago. We know these people built fires, ate seafood and used stone tools – but because they did not leave any human remains behind, much about this early group remains mysterious.

“We really know little about them, because most preserved remains are stone tools and sometimes bones of animals, thus technology and diet,” explains Tom Dillehay of Vanderbilt University in Tennessee, US, who is studying these people. “Monte Verde in south-central Chile, where I am at present, has several organic remains – animal hide, meat, plant remains that reveal a wider diet, wood technology – but these types of sites are rare to find.”

Another conundrum remains. Ice sheets still covered North America 18,500 years ago, making journeying south difficult. How did people arrive in southern Chile so early?

Animal remains were discovered in the Bluefish Caves site in northern Yukon (Credit: Lauriane Bourgeon/Canadian Museum of History)

A leading idea had been that an ice-free corridor opened up, which allowed humans to travel south. However, the latest evidence suggests this corridor only opened about 12,600 years ago, long after these early Chileans arrived.

Elias also points out how difficult this journey would have been. “Even if there was a small gap in between these enormous ice sheets, the environment left in that gap would have been so horrible, with mud, ice, meltwater and slush. It would not have been a habitable place for people or the animals they would have wanted to follow,” he says.

These early people could have travelled by boat

There is an alternative. These early people could have travelled by boat, taking a route along the Pacific coast. There is no archaeological evidence to support this idea, but that is not entirely unexpected: wooden boats are rarely preserved in the archaeological record.

There are still many unanswered questions, but Mulligan says that studying how and when early hunter-gatherers spread across the Americas can help us to understand the process of migration itself. That is, how population sizes change and which genetic traits persist.

In many ways, the peopling of America presents scientists with a golden opportunity to study these processes. There have been multiple migrations both into and out of other regions of the world – Africa, Europe and Asia, for instance. But the people who moved into the Americas were on a one-way journey. “We know the original inhabitants came from Asia into the New World with no other people there, and no major back migrations, so it’s the simplest model you can conceive of.”

That it was a one-way journey, coupled with the increased interest in studying the genetics of these ancient people, means we should soon understand even more about who these first Americans really were, and exactly when they arrived.

Melissa Hogenboom is BBC Earth’s associate editor. She is @melissasuzanneh on Twitter.

The Case Is Closed. Period.

Another post from my favorite soldier of fortune.

Darwin’s Vigilantes, Richard Sternberg, and Conventional Pseudoscience

Posted on September 22, 2018 by Fred Reed

I am sorry. I admit it: I am a bad person. I promise I will never write about this again. Well, sort of never. It’s just too much fun. Anyway, it’s not my fault. My childhood makes me do it. Maybe I ate lead paint.

Science is supposed to be the objective study of nature, impelled by a willingness to follow the evidence impartially wherever it leads. For the most part it works this way. In the case of emotionally charged topics, it does not. For example, racial intelligence, cognitive differences between the sexes, and weaknesses in Darwinian evolution. Scientists who do perfectly good research in these fields, but arrive at forbidden conclusions, will be hounded out of their fields, fired from academic and research positions, blackballed from employment, and have their careers destroyed.

A prime example is Richard Sternberg, a Ph.D. in biology (Molecular Evolution) from Florida International University and a Ph.D. in Systems Science (Theoretical Biology) from Binghamton University. He is not a lightweight. From 2001 to 2007 he was a staff scientist at the National Center for Biotechnology Information and, over the same period, a Research Associate at the Smithsonian’s National Museum of Natural History.

Hell broke loose in 2004 when he authorized the publication, in the Proceedings of the Biological Society of Washington, an organ of the Smithsonian Institution, of a peer-reviewed article, The Origin of Biological Information and the Higher Taxonomic Categories by Stephen Meyer. It dealt with the possibility of intelligent design as an explanation for aspects of biology not explainable by conventional Darwinian theory. This is a serious no-no among the guardians of conventional Darwinism, the political correctness of science.

At the Smithsonian, he was demoted, denied access to specimens he needed in his work, transferred to work under a hostile supervisor, and lost his office space. In the ensuing storm of hatred, two separate federal investigations concluded that he had been made the target of malicious treatment.

Predictably, the establishment dismisses Meyer’s idea as “pseudoscience”:

Wikipedia: The Sternberg peer review controversy concerns the conflict arising from the publication of an article supporting the pseudo-scientific concept of intelligent design in a scientific journal, and the subsequent questions of whether proper editorial procedures had been followed and whether it was properly peer reviewed.

Pseudoscience? Does not Darwinism itself qualify as pseudoscience? It is firmly based on no evidence. For most readers this assertion will seem as delusional as saying that the sun revolves around the earth. This is because we have been indoctrinated since birth in the Darwinian myth. But look at the facts.


We are told that life arose by chance in the primeval oceans. Do we know of what those oceans consisted? (Know, not speculate, hope, it stands to reason, must have been, everybody says so). No, we do not. Do we know of what those oceans would have had to consist to bring about life? No. Do we even know what we think evolved? No. Has the chance appearance of life been replicated in the laboratory? No. Has a metabolizing, reproducing chemical complex been constructed in the laboratory, showing that it might be possible? No. Can the chance appearance be shown to be mathematically probable? No. Can Darwinism explain the existence of irreducibly complex structures? No. Does the fossil record, particularly of the Ediacaran and Cambrian, support Darwin? No.

Darwinism was a clever metaphysical idea formed when almost nothing was known about the matter, and imposed by impassioned supporters on a near-total lack of evidence. Should not intensely believing in something that you cannot support by observation or experiment be called pseudoscience?

The ardent of evolution, like Christians, base their creation myth on a sacred book, The Origin of Species, both resting on about as much evidence. Thereafter they assume what is to be proved. Since Darwinists posit the unchallengeable truth of their version of creation, no reason exists for questioning it. If you know it happened, then obviously it was mathematically possible. The math can be discovered later. If you know that life originated in ancient seas, then how it originated becomes a mere detail. If you know the theory is correct, then anyone who doubts must necessarily be at least wrong, and thus ignorable, and perhaps a crank or fool or lunatic.

A classic example of starting from certainty is Darwinism’s reaction to the apparent irreducible complexity of the bacterial flagellum, though hundreds of others could be adduced. This is an immensely complex cellular organelle which would cease to function if any of its parts were removed. It could not have evolved by Darwin’s gradual changes. The Darwinians say, “Well, we aren’t sure just at the moment, but it is possible that we will figure out later how it could have happened.” Yes, and it is possible that I will win three Irish Sweepstakes in a row. They are, again, saying that they know that Darwinism is correct, and therefore the evidence will be forthcoming. This is called “faith,” the belief in the unestablishable.

As a friend has written in another context, “When utterly astonishing claims of an extremely controversial nature are made over a period of many years by numerous seemingly reputable academics and other experts, and they are entirely ignored or suppressed but never effectively refuted, reasonable conclusions seem to point in an obvious direction.”

Just so. A lot of highly credentialed researchers express doubts about doctrinaire Darwinism, asserting that it cannot explain many aspects of nature. What does explain them is a separate question. Why is wondering about this a firing offense?

A difficulty in conveying doubts about Neo-Darwinism (the correct name of the current theory) is that very few people, including the highly intelligent, know anything about the issue. The world is full of esoteric specialities from the decipherment of ancient Sumerian inscriptions to the neural anatomy of squids. Few will have chosen Darwin’s defects for careful study.

This is convenient for Darwinists as the dim will believe whatever they hear on television and the bright usually have other things to do with their brains. As the case of Mr. Sternberg shows, scientists who doubt Darwin–again, there are many–know better than to say anything.

The fury is telling. If the Darwinists could prove the many highly credentialed proponents of ID wrong, they would do so, and that would be that. If they could prove their own propositions correct, they would, and that would also be that. But they can’t (or they would have).

If you follow the controversy, you quickly see patterns. One is that the Darwinists are fiercely defensive, suggesting doubt of their own position. People seldom become infuriated at doubts of something that they believe with genuine certainty. If a physicist at CalTech expressed doubts about general relativity, he would certainly be challenged to prove his theory. He would not be hounded, belittled, forced to resign, charged with pseudoscience, and banned from publication.

Unfortunately for Neo-Darwinism, the likelihood of confirmation diminishes with time. Year by year, the fossil record becomes less incomplete, and still the intermediates are not found. As molecular biology rapidly advances, the failure to find a chemically possible chain of events that might produce life leads ever more convincingly to a simple conclusion: There isn’t one.

Publications by Richard Sternberg.


Science and Government for Our Own Good?

We Have Been Manipulated and We are Paying for It

From condo boards to Capitol Hill we are generally managed by a group of people who consider themselves our benefactors. More often than not these people consider themselves highly intelligent and possess the conviction that everyone would be happier, healthier and generally much better off if they would follow their rules and recommendations. Another characteristic of these leaders is that they seem to gravitate towards academics or politics. Over decades the relationship between like-minded academics and politicians has reinforced the notion that they are capable of finding fundamental solutions to most of our problems.

One example of this marriage is seen in government officials’ concern about the American diet and its effect on the “nation’s health”. While we may ask what business the mayor, the statehouse or Washington has in trying to tell any of us what we should eat, we need to understand what actions they have taken in controlling our choices over the years and the consequences of those actions.

The truth is that our political class has often decided that they and their educated advisors are much smarter than average citizens, and they have acted hundreds of times over the past several decades to control our lives in regard to what we eat.

In addition to instituting policies designed to improve our eating habits they have also managed (manipulated) the system that produces our food for any number of other “worthy” purposes.

At the end of World War I, the destructive effects of the war bankrupted much of Europe, closing major export markets for the United States and beginning a series of events that would lead to the development of agricultural price and income support policies. United States price and income support, known otherwise as agricultural subsidy, grew out of a serious farm income and financial crisis, which was understood to jeopardize the future ability of American agriculture to meet the food needs of the American people. This led to the widespread political belief that the free market system was not adequately rewarding farm people for their agricultural commodities.

Beginning with the 1921 Packers and Stockyards Act and 1922 Capper-Volstead Act, which regulated livestock and protected farmer cooperatives against anti-trust suits, United States agricultural policy began to become more and more sweeping in its scope. In reaction to falling grain prices and the widespread economic turmoil of the Dust Bowl and Great Depression, three bills established price subsidies for farmers in the United States: the 1922 Grain Futures Act, the 1929 Agricultural Marketing Act, and finally the 1933 Agricultural Adjustment Act.

Out of these bills grew a system of government-controlled agricultural commodity prices and government supply control (farmers being paid to leave land unused). Supply control would continue to be used to decrease overproduction, leading to over 50,000,000 acres being set aside during times of low commodity prices (1955–1973, 1984–1995). The practice wasn’t curtailed until the Federal Agriculture Improvement and Reform Act of 1996. While the 1996 reform act was supposed to be a first step towards returning the agricultural economy to free enterprise, it left in place hundreds of specific supports and direct programs for farmers, so the result was at best a series of half measures. Even this reform act was a victim of the power of special interest lobbies over the general welfare of the public. Sugar is a classic example of how this system corrupts the political process.

SUGAR

In 1981 the U.S. government passed laws and implemented policies that were designed to support the price of domestically produced sugar (beet and cane) by using tariffs and purchase programs.

In 2001 sugar imported into the U.S. outside of the Tariff Rate Quotas (TRQ) paid a duty of over 15¢ per pound. At that time world sugar prices were averaging 7¢ per pound while U.S. sugar was selling at 22¢ per pound.

The cost in tax dollars flowing to support the sugar industry reached $1.4 million per month in 2000 for storage of surplus sugar alone, with an overall estimated $200,000,000 per year currently in direct purchase costs.

One private U.S. company, US Sugar of Clewiston, Florida, produces over 18% of all U.S. sugar, and this government policy has made Clewiston one of the wealthiest per capita towns in the country.

In 1996, an agriculture reform bill year, sugar companies paid directly and indirectly $13 million to the 49 members of the House Agriculture Committee and, remarkably, managed to leave sugar price supports exactly as they were.

In a claimed effort to protect an economic segment that probably employs under 17,000 people in total (with estimated job losses more like 3,000 to 4,000 should price supports be removed), the U.S. government has spent hundreds of millions and caused the American people to suffer multiple negative unintended consequences.

As a result of this sugar policy, food companies have switched from refined sugar to fructose in the manufacture of most processed foods. In addition to processed foods, virtually all U.S. soft drinks and a majority of juices are now formulated with high fructose corn syrup. This has been done for one very simple reason… cost.

While there is an ongoing debate about the many health problems associated with fructose versus sugar, there are troubling indications in some areas. Results published in the journal Pharmacology, Biochemistry and Behavior in 2010 by researchers from the Department of Psychology and the Princeton Neuroscience Institute reported on two experiments investigating the link between the consumption of high-fructose corn syrup (HFCS) and obesity. In lab animal experiments the researchers found a high correlation between HFCS and obesity that was not found in a diet using equal amounts of table sugar.

In 2002, the Department of Commerce’s International Trade Administration issued an official report on the sugar law’s impact on the confection industry in the U.S., which characterized the results as follows:

  1. Employment in sugar-containing products (SCPs) industries decreased by more than 10,000 jobs between 1997 and 2002, according to the Bureau of Labor Statistics.
  2. For each sugar growing and harvesting job saved through high U.S. sugar prices, nearly three confectionery manufacturing jobs are lost.
  3. For the confectionery industry in particular, evidence suggests that sugar costs are a major factor in relocation decisions because high U.S. sugar prices represent a larger share of total production costs than labor. In 2004, the price of U.S. refined sugar was 23.5 cents per pound compared to the world price of 10.9 cents.
  4. Many U.S. SCP manufacturers have closed or relocated to Canada, where sugar prices are less than half of U.S. prices, and to Mexico, where sugar prices are about two-thirds of U.S. prices.
  5. Imports of SCPs have grown rapidly from $6.7 billion in 1990, to $10.2 billion in 1997, up to an estimated $18.7 billion in 2004 based on 2002 trends.

The fundamental problem with price supports is that no matter how justified they were at some point, as with most laws and government programs, they live on way past their usefulness.


Want to understand why Americans are getting fat? Look at the unintended consequence of trying to protect sugar growing jobs.


Another area where the anointed have taken action to protect us from ourselves is direct government intervention in controlling the American diet. While the changes worked on the American diet are significant, there is also the quantity of our money (taxes) expended on the task to consider.

FAT, SALT AND OTHER BAD THINGS


It seems to be almost impossible to keep up with dietary research into which things in our diet are good for us and which are bad. Salt is bad (recent research has now reversed that), eggs are bad (it seems eggs switch back and forth regularly), trans-fatty acids are bad (they were originally introduced as a healthy substitute for animal fat in our diet). For almost sixty years the recommendations regarding a healthy diet have emphasized that animal fat is a major contributor to coronary disease.

As a recent article in the New York Times notes, “The new findings are part of a growing body of research that has challenged the accepted wisdom that saturated fat is inherently bad for you and will continue the debate about what foods are best to eat.

For decades, health officials have urged the public to avoid saturated fat as much as possible, saying it should be replaced with the unsaturated fats in foods like nuts, fish, seeds and vegetable oils.

But the new research, published on Monday in the journal Annals of Internal Medicine, did not find that people who ate higher levels of saturated fat had more heart disease than those who ate less.”

Since the 1960s we have been pressured by our government and academics to avoid consuming fat in our diet. And while some will argue that the American people have not taken this government advice seriously, the facts actually tell a very different story. So why did they urge this, and what have been the results?

Historically, in the late 1930s medical professionals began to notice an alarming increase in coronary disease. It was first made public in 1938 with an article in the Journal of the American Medical Association. By the 1950s it was declared an epidemic.

Along comes Ancel Benjamin Keys, a scientist at the University of Minnesota, with the explanation. His research pointed to high dietary fat as the culprit. He published a seven-country study “proving” that the level of coronary disease was a result of the fat content of various national diets. Other researchers and government officials embraced the findings, and it didn’t hurt that he was a charismatic person with solid academic credentials. From that point on, funding and emphasis in additional research, dietary recommendations and government policy changed.

The government so embraced his findings that within a decade whole new departments were created to inform and help the public make healthy eating choices. There were departments to educate, others to provide packaging requirements so the public could better identify what they were eating, and others to do research on better nutrition.


In 2013 there was an $8.7 million budget just for the Center for Nutrition Policy and Promotion within the Department of Agriculture. While the politicians are always quick to point out the need for the USDA to protect our food supply, inspection is actually one of the smallest parts of their budget. Consider 2010’s expenditures:


USDA 2010 budget (partial)

Farm and Foreign Agricultural Services $25 billion (not subsidies)

Food Safety $1 billion

Research, Education, and Economics $2.89 billion

Marketing and Regulatory Programs $2.75 billion

Food, Nutrition, and Consumer Services $97.95 billion*

  • (includes Supplemental Nutrition Program for Women, Infants and Children [$7 billion] and food stamps [$41.2 billion]). What is the remaining $49.75 billion used for?

That is $78.64 billion in non-food expenses at the USDA, which represents the total average annual tax collected from over 17 million Americans.

At the government’s urging, butter was replaced by margarine, lard by vegetable shortening (Crisco, which was a major funder of the American Heart Association) and other vegetable oils, and most importantly of all, eggs, cheese and meat were replaced to a great extent by pasta and grain. As a result, the actual American diet switched from animal fat to vegetable oil and from beef and pork to poultry, along with a twenty-five percent increase in carbohydrates in the average American’s diet since the early 1970s.

Dr. Keys sealed the saturated fat argument in 1961 by taking a position on the nutrition committee of the American Heart Association, whose dietary guidelines are considered the definitive statement on the issue.


What were some of the results of this government-led diet change in America? Increased consumption of carbohydrates could be one of the most significant negative changes. One of the problems is that carbohydrates break down into glucose, which causes the body to release insulin, a hormone that is extremely efficient at helping the body store fat. As if carbohydrates weren’t enough, early clinical studies showed diets high in vegetable oil resulted in higher rates of cancer and gallstones.

In probability and statistics there is an adage that correlation is not causation. Without controls and consideration of multiple variables, there can be no high confidence, let alone certainty, in statistical results. It has also become common for nontechnical people (media, government) to treat study results as significant when the statistics actually conclude that the data are not outside normal variation and probability. Oddly, in recent years an alarming number of researchers make no attempt to correct the record when the media seriously misstate the nature of their results, though in the past this was common. Perhaps they do not think it is their job to educate anyone in probability and statistical hypothesis testing, but the result is usually an uneducated media and politicians overstating the significance of the results.


During all this, nobody thought to question the type of diet Americans ate at the turn of the twentieth century and before, and what negative health results it should have produced. While there does not seem to be much research documenting that historical diet, it is highly unlikely that the American diet of that era was lower in fat or salt than the diet of the first third of the twentieth century. Butter, bacon, beef, biscuits, fatback and lard were all staples of most nineteenth-century Americans’ diets. The Southern diet as recently as the mid twentieth century was characterized as a heart attack waiting to happen. With that history, why did fat suddenly become the primary suspect in the cause of coronary disease?

What is now emerging is the very real possibility that Americans have borne the high cost of a misguided information campaign sponsored by our elected officials and their bureaucrats. This cost was not just in tax dollars wasted but in the negative impact on Americans’ health. And now our government is very concerned with our diet and the obesity problem that they are partly responsible for creating.

In attempting to defend the government position, an argument has been put forward that the crisis was real and existed long before it was recognized, becoming apparent only after medical procedures for identifying heart attacks improved in the twentieth century. While death certificates may have missed actual causes of death in those early years, examinations of detailed autopsy records from specific large institutions have put that argument to rest. It seems that there was a very real and dramatic increase in coronary disease from the 1930s to the ’50s and beyond. The question remains: what was the cause? If the American diet had not changed significantly in the early twentieth century, what was causing the major increase in coronary disease?

Actually, there have been a few papers that have offered a credible explanation, one published as early as 1962. It seems that tobacco smoking, while common, was actually light until the late nineteenth century. Around 1885, American companies began to manufacture and market machine-made cigarettes. Public consumption went up dramatically, from a pre-machine rate of 40 cigarettes per year to 40 per month by 1900 and 80 per week by 1925. Allow a 20- to 30-year lag time for symptoms to occur and you have a credible model to explain the 1930s spike in heart problems and the continuing increases through the 1950s.
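Putting the consumption figures quoted above (presumably per person; the text does not say) on a common annual basis makes the scale of the change plain. A minimal arithmetic sketch:

```python
# Cigarette consumption rates quoted above, converted to a common annual
# basis. The dates and rates are those given in the text.
annual_consumption = {
    "pre-1885": 40,        # 40 per year
    "1900": 40 * 12,       # 40 per month -> 480 per year
    "1925": 80 * 52,       # 80 per week  -> 4,160 per year
}
growth = annual_consumption["1925"] / annual_consumption["pre-1885"]
print(annual_consumption)  # {'pre-1885': 40, '1900': 480, '1925': 4160}
print(f"about a {growth:.0f}-fold increase in roughly 40 years")
```

A roughly hundredfold rise in exposure over four decades, followed by a 20- to 30-year lag, is the shape of the model the paragraph describes.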


What should be taken away from all this? One thing would be to question the wisdom of our political leaders in areas beyond the basic needs our government was originally intended to provide. Many research results and scientific opinions are proven wrong over time.

Another is to question the ever-growing influence of government funds in scientific research. It seems that our political leaders have developed a preference for funding scientists whose research leans in the direction of confirming popular and already-held beliefs. And there is the possibility that our lives might be better, healthier and more productive if government stopped being so concerned about protecting us from ourselves.