
Creating a working scientific theory is hard work. The observations and experiments must support the theory, and new predictions must be made and verified. Once established, more and more is expected of it. On the positive side, the theory might feed technological developments. However, it also might require re-thinking dearly-held philosophical, economic, or political views.

It appears to be much easier to attack a dominant paradigm. One needs to do only two things: sow doubt about the dominant paradigm, and establish oneself as the sole trustworthy authority. Here is how it works, as a list of strategies ordered by increasing nihilism:

(1) Amass information that supports one’s view.

Lists of titles of articles and web pages can be constructed to convey a sense that there is a wide body of support for one side of the argument. Do not present the data supporting the opposing view in a similar manner; the relative numbers will probably undermine one’s case. However, journal articles and abstracts often contain impenetrable technical jargon or provocative questions. These are particularly useful, because their language will be read to support a hypothesis, even if their results do not.

(2) Search out examples of poor science in support of the dominant paradigm.

Lists of titles and abstracts that support a view can be nicely complemented with catalogs of biased or erroneous articles that were written in support of the opposing view. There should be plenty of these, because poor-quality yet uncontroversial results receive less scrutiny than ones that are obviously wrong. Their existence undermines the other side’s credibility. Finding them also makes it appear that one has conducted an unbiased and exhaustive search of the literature, and found it lacking.

(3) Present all information as equal.

The quality and reliability of the thousands of scientific papers that are published each year varies widely. Many studies are designed poorly, with samples that are too small or experiments that are dominated by noise. They are published anyway, because scientists must publish in order to receive further funding. This can be used to one’s advantage. If data can be separated from its reliability in a presentation, then all results can be portrayed as equivalent, and any conclusion can be molded from it.

(4) Emphasize any doubt in the opposing paradigm.

Scientific researchers always have to qualify their conclusions with statistical statements about the relative certainty of a result. Scientists also have a tendency to follow a description of a result with truisms about how much is left to be learned. This can be used to one’s advantage. The restrained language of many scientists is rhetorically underwhelming when contrasted with bombastic certitude.

(5) Remind everyone that science is not a democracy.

Sure, science is based on a shared reality, in which experiments and observations must be reproduced by many people. However, scientists will readily admit that if information is faulty or incomplete, significant theories might be found to be wrong. Therefore, one can ignore the observations and experiments that underlie the dominant paradigm, and simply point out that it is possible even for large numbers of scientists to be wrong.

(6) Appeal to history’s paradigm shifts.

History abounds with stories of dominant paradigms that were overturned, ushering in new eras of understanding. These can be detached from their historical context, and turned into anecdotes that confirm an eccentric viewpoint. Simply avoid any explanation of why the old paradigm was held to be true, how evidence emerged that was contrary to the fading views, and what ideas motivated those who developed the new paradigm. The important thing is that ideas change, so there is no reason to trust our current knowledge.

(7) Demonize the opponent.

Describe those who hold opposing views in terms that will preclude people from listening to them. If one’s opponents can be described in emotionally laden language, many people will be less inclined to think critically about the debate at hand. A slur should be matched to the audience: the religious find materialistic atheists repugnant; conservatives find socialists (or even liberals) threatening; liberals despise the self-interest of corporations, particularly big chemical and pharmaceutical companies.

(8) Reveal a conspiracy.

This serves two purposes. First, it counters information that does not serve one’s rhetorical goal: any contrary information can be dismissed as a product of the conspiracy. By extension, it is only possible to trust the information presented by people who oppose the dominant paradigm. Second, by revealing the knowledge of the conspiracy to one’s followers, one compliments their intelligence, because those in on the secret feel that they are exceptionally perceptive.

With these steps, one can generate a raging controversy that will entrance the media and enliven the Internet. Simple, right?

OK, that was just letting off steam. I don’t, in fact, believe that conspiracy theorists and true believers actually check off an eight-point list when they wish to engage in debate. I suspect that these strategies emerge on their own, as people search for ways to convince themselves and others of their ideas.

Indeed, some of the above strategies resemble components of a genuine scientific debate. For instance, I implement a form of the first strategy when I use approximations in my work. I do so without compromising my scientific integrity by performing calculations that show that the things I left out won’t change my conclusions, at least to the accuracy that is required for the problem at hand. Unfortunately, the process of making approximations (or setting aside information that is not directly relevant) requires careful justification, and can invite controversy.

Therefore, I find the parallels between a scientific debate and someone trying to push a pet theory to be vexing. I wonder, how straightforward is it for someone to tell the relative quality of Ned Wright’s cosmology tutorial and the Iron Sun hypothesis? How does one convince non-experts that the risk from vaccines is tiny compared to their enormous benefit, when a Kennedy lends his pen to the other side? How can a lay person know whether to trust New Scientist or Dr. Roy Spencer on global warming?*

Scientific debate has always been difficult. I wanted to blame the Internet, but then I realized that the good old days weren’t much better. In my freshman year of high school, I tried to use a book I found in the library to write a report on the lost city of Atlantis. The book claimed that Atlantis was once a seat of technology, with radios, flying cars, and nuclear power, and I nearly ran with that. Fortunately, Mr. Winters guided me to more sane literature describing the eruption of Mt. Santorini, and I got to learn something useful about volcanism and the early demise of the Minoan civilization.

So what is there to do? I hope that teachers, scientists who write for the public, publications that cover diverse scientific issues, and scholarly organizations will win the debate… Otherwise, I won’t get much joy if nature resolves things the hard way.

(*If you were wondering, I trust Ned Wright, Wired, and New Scientist on the above issues, respectively.)

I wanted to get an idea of how much it would cost to undo global warming by engineering the atmosphere. A friend stated emphatically that it was cheap, but I wasn’t so sure. So, I decided to do a back-of-the-envelope calculation. You can find more reliable numbers elsewhere, but I’m posting this anyway, in case someone might find it fun to see what one can do with some numbers found with Google, some arithmetic, and an hour or two of spare time.

The simplest thing one could do to change the climate (aside from burning fossil fuels) is to eject a chemical into the atmosphere that reflects sunlight back into space. The obvious candidate chemical is sulfur dioxide. In 1991, Mount Pinatubo erupted, and put somewhere between 17 million and 20 million tons of sulfur dioxide into the stratosphere. As a result, the Earth cooled by about 0.5 degree Celsius for a few years. To temporarily alleviate global warming of a few degrees, we would need to put a comparable amount of sulfur dioxide (within factors of a few) into the atmosphere each year.

How much would it cost to inject this much stuff into the upper atmosphere? As a rough estimate, I assumed that it would cost about the same as getting things into the air by airplane. A 747 freighter uses about 10,000 kg of fuel to climb to a height of 10,000 feet, for a gross take-off weight of about 360 tons, some 140 tons of which are cargo. Fuel costs $0.70 per kg ($2/gallon, at a density of 2.8 kg/gallon). So, I get a cost of $50 per ton to get things into the air, based on the fuel alone. That estimate doesn’t get the material into the stratosphere, but most of the air resistance is near the ground, so this shouldn’t be too far off (feel free to correct me if I’m wrong).

So, I estimate that getting 20 million tons of sulfur dioxide into the stratosphere will cost at least $1 billion a year. My number should be accurate to within an order of magnitude. I don’t know whether it is expensive to make sulfur dioxide (probably not), and the distribution system might have different costs than a simple airliner (these don’t typically fly in the stratosphere, after all).
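The whole estimate fits in a few lines of Python. This is just a sketch of the arithmetic above, under the same assumptions: takeoff fuel stands in for the total lift cost, and fuel is the only expense.

```python
# Back-of-the-envelope: cost to loft SO2, using airliner fuel burn as a proxy.
# All figures are the rough values quoted in the text, not precise data.

fuel_per_takeoff_kg = 10_000   # 747 freighter fuel burn to climb to ~10,000 ft
cargo_tons = 140               # payload per flight
fuel_cost_per_kg = 0.70        # $2/gallon at ~2.8 kg/gallon

cost_per_ton = fuel_per_takeoff_kg * fuel_cost_per_kg / cargo_tons
print(f"Lift cost: ${cost_per_ton:.0f} per ton")            # ~$50/ton

so2_tons_per_year = 20e6       # roughly one Pinatubo per year
annual_cost = so2_tons_per_year * cost_per_ton
print(f"Annual cost: ${annual_cost / 1e9:.0f} billion")     # ~$1 billion/year
```

Changing any input by a factor of a few moves the answer by the same factor, which is why the result is only good to an order of magnitude.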

Nonetheless, my estimate isn’t too different from one reported in Ars Technica (quoting an article in Geophysical Research Letters by Alan Robock and collaborators): they estimate $4 billion if one uses F-15Cs, and $375 million if one uses KC-135 Stratotankers. However, from the description at Ars Technica (I’d have to use a computer at work to get the actual paper), the study’s authors seem to think that we only need to add enough sulfur dioxide to counter the warming trend, so that Mount Pinatubo only needs to be reproduced once every four to eight years. This sounds to me like an underestimate, but either way, the numbers are within my order-of-magnitude tolerance.

My friend suggested that a single developing country might undertake geoengineering on its own. Setting aside the political problems, I also wanted to know: is this amount of money reasonable? I think so. For comparison, the budget of India’s space agency is about $750 million a year (assuming an exchange rate of 46.7 rupees to the dollar, where one crore is 10 million rupees). Space agencies are a bit of a luxury, so $1 billion does not seem out of reach.

This scheme is also inexpensive compared to the cap-and-trade legislation that is working its way through Congress, which may cost on the order of $150 billion per year, and that’s for the U.S. alone.

So, in terms of cost, my friend is probably right: putting sulfur in the stratosphere is relatively cheap and accessible, especially when compared to changing our economy to use less carbon.

However, I still don’t think that anyone will do it, because fears of the potential side effects will probably dissuade us. The sulfur dioxide from Mount Pinatubo exacerbated the hole in the ozone layer, for instance. However, who am I to divine the heart of man? If you want politics, FiveThirtyEight probably has a better discussion of those issues.

I was listening to NPR’s Living on Earth this morning, and my curiosity was piqued by a piece on endocrine disrupting compounds. Over the past 12 years or so, scientists have been suggesting that some chemicals that are found to be harmless at high doses might have important biological effects at low doses. This idea is important, because widely-used chemicals such as bisphenol-A (used in food containers) are thought to be harmful to human metabolism. Bisphenol-A has been banned in Europe and Canada, but is still widely used in the U.S. It is a serious issue, because some scientists believe the subtle effects of these compounds could be contributing to the rise in allergies and some cancers in the U.S.

The piece was an interview with Dr. J. Peterson Myers. It answered the question of how endocrine disrupting compounds act: they mimic hormones and interfere with the ability of genes to produce proteins. The part that got me curious was the following claim about their effects:

I should emphasize that it’s not even close to brand spanking new. It’s solid in endocrinology. This is something that physicians have to structure their drug deliveries around. They know that at low doses you can cause effects that don’t happen at high doses. In fact, you can cause the opposite effect.

This seems to imply that the low-dose effect disappears at high doses.

With most toxins, there is a threshold level at which they begin to have harmful effects. For instance, at low doses, botulinum toxin merely paralyzes muscles locally in the body. As Botox, the toxin is used in cosmetic treatments to smooth wrinkles and prevent the facial expression of emotion (okay, the second effect may not be the intended one). However, at high doses, the toxin causes botulism, a potentially fatal form of food poisoning. Examples like this inspired the aphorism, “The dose makes the poison.”

The example given in the radio piece of an endocrine disruptor that acts differently at low and high doses was Tamoxifen. Tamoxifen is used to prevent the recurrence of breast cancer. At high doses, the drug, and by-products of the drug made by the body, bind to estrogen receptors in the breast. Breast cancer tumor cells require estrogen to grow, so Tamoxifen prevents the growth of tumor cells. However, it turns out that in the uterus, Tamoxifen acts more like estrogen, and can encourage the growth of uterine cancer.

Unfortunately, the effects of Tamoxifen that I have been able to find (note that I am not a biologist, and don’t really know how to search the relevant literature) don’t seem to address the original question I had. Tamoxifen, at the same dose, acts differently on estrogen receptors in different parts of the body. It also acts differently on various hormonal receptors in fish. However, I was not able to find an explanation on how it might be harmful at low doses, but beneficial or benign at high doses.

I can construct a thought experiment, in which a chemical interacts badly with a receptor at low doses, but has a beneficial reaction at a higher dose. The bad reaction at low doses would be a side-effect of its use at higher doses. However, I can’t figure out how to get rid of the bad interaction from the low dose. Shouldn’t it always be identifiable?

Perhaps I am missing the point. It is possible that the problem is that the EPA doesn’t look for the kind of detrimental effects that are characteristic of endocrine disrupting compounds, including elevated long-term risks of cancer. In that case, complaining about the doses tested misses the point. The real problem could be that the testing methods aren’t designed to catch subtle-enough effects.

So, the cumulative effects on an individual of exposure to many different endocrine disrupting compounds might be important. I am going to try to find an expert to explain this better to me. Take this post as the start of a teaching moment: one way or another, it is an example of a “near-miss.”

The recent Supreme Court decision in Ricci v. DeStefano has sparked a lot of talk about affirmative action and “reverse discrimination.” I tend to agree with suggestions that affirmative action should be based on things other than race, such as income level, the educational opportunities available where one grew up, and whether one’s parents went to college.

However, I have been bothered by all the emphasis on “reverse discrimination.” My hunch has been that this doesn’t do justice to the magnitude of the discrimination that some minorities face. So, I tried to put some numbers on discrimination. The place I knew I could start was affirmative action in college admissions.

I attempted to make a crude estimate of how race-based admissions might decrease the chances that a member of a well-represented majority would be accepted to an elite university. To do this, I needed to compare the admission rates of under-represented minorities with those of the general population. Say that a fraction f of students are admitted to a school. That means that of M students that apply, N = fM are admitted. Now say that I know the fraction of those admitted students that are under-represented minorities, g. To estimate the effect of racial preferences, I will assume that under-represented minority students were admitted at some factor x times the rate of non-minority students. I happened to be able to find estimates for these numbers relatively easily, which is why I took this approach.

I then wanted to estimate the fraction of non-minorities admitted as a function of x, the effect of racial preferences. The number of non-minorities admitted is n = (1-g)N. The number of non-minority applicants is m = (1-g/x)M: if minorities are admitted at a rate xf, then the gN minority admits came from gN/(xf) = (g/x)M applicants. The admissions rate for non-minorities is then f’ = n/m = [(1-g)/(1-g/x)](N/M) = (1-g)/(1-g/x) f. By changing x, I can tell how the fraction of admitted non-minorities would be affected by affirmative action.

So, I found some admissions numbers for Harvard in the Boston Globe. In 2009, Harvard admitted f=7% of applicants. Other Ivy League schools admitted about 10%, so this seems like a good number to work with. Of the students accepted to Harvard, g=22% were from under-represented minorities. This seems to hold true generally at the Ivy League and University of California Schools, so I think this is also a good number to work with.

I was not able to find numbers for x for Harvard, but I needed to make some assumptions to construct any sort of argument. So, I tried to find numbers from other elite schools. From an article in UCLA’s student newspaper, it would appear that minorities are admitted at a rate up to x=2 times higher than the rates of other students (for MIT). This is roughly supported by the factor-of-two decline in under-represented minorities attending UCLA and UC Berkeley after California ended race-based admissions.

So, if I assume that under-represented minorities are equally qualified as other students but are given favorable treatment, so that x=2, I get f’ = (1-0.22)/(1-0.22/2) × 7% ≈ 6%. So, in this scenario, affirmative action lowers a non-minority applicant’s chance of admission by roughly one percentage point.

If I wanted to take a worst-case scenario and assume that minorities are not in fact equally qualified, then I need to do a different calculation. I must emphasize that this is not at all what I believe — this is simply to get a mathematical bound. Let’s pretend that an elite school suddenly decides to admit no minorities. The number of non-minorities admitted could then go up to n = N, while the applicant pool still shrinks by the fraction who are minorities, so that m = (1-g/x)M. Therefore, f” = f/(1-g/x) = 7%/(1-0.22/2) ≈ 8%. So, by changing the rates of admissions for minority students, I could conceivably change the rate of admission for non-minority students to an elite school like Harvard by as little as one percentage point, and at most two.
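The two scenarios can be checked numerically. This is only a sketch of my own crude model, using the rough inputs quoted above (f = 7%, g = 22%, x = 2); the variable names mirror the text and are not drawn from any real admissions data.

```python
# Admission-rate estimates from the text, evaluated numerically.
f = 0.07   # overall admission rate (Harvard, 2009)
g = 0.22   # fraction of admitted students from under-represented minorities
x = 2.0    # assumed ratio of minority to non-minority admission rates

# Scenario 1: minorities admitted at x times the non-minority rate.
f_prime = (1 - g) / (1 - g / x) * f
print(f"Non-minority admission rate with preferences: {f_prime:.1%}")   # ~6.1%

# Scenario 2 (worst-case bound): all N slots go to non-minorities,
# while the applicant pool still excludes the (g/x)M minority applicants.
f_double_prime = f / (1 - g / x)
print(f"Worst-case rate with no minority admits: {f_double_prime:.1%}") # ~7.9%
```

Varying x between 1 and 2 in the first formula shows how the non-minority rate moves between 7% and 6%, which is where the "one percentage point" figure comes from.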

Fractionally, this is a significant change in the chance that an individual in the majority would get admitted. However, because the absolute chances of getting into Harvard are small, the actual chance that any individual non-minority would be affected by affirmative action is slight — a couple percent at most.

To put this in perspective, I looked into other forms of racial discrimination. A pair of economists did an experiment in 2004, in which they tried to judge the effects of discrimination by sending resumes to employers in the Chicago and Boston areas. The resumes were made to be identical, although half had stereotypically “white” names (such as Emily Walsh), and the other half had stereotypically African-American names (such as Lakisha Washington). The resumes with white names got callbacks in 10% of the cases, while those with African-American names got callbacks in only 6% of cases. So, the effect is about twice as large as the worst-case scenario for “reverse discrimination” under affirmative action.

What about looking at the most dramatic racial disparity in American life — prison populations? In 2008, a black man was 6.6 times more likely to be in prison than a white man, and a Hispanic man was 2.4 times more likely to be in prison than a white man. Some of this has to do with the violence of the inner cities. However, a big part of the problem is that, although drug use rates are very similar for all racial groups, black men are sentenced for drug offenses at 13 times the rate of white men.

I have to mistrust the motives of anyone who decries “reverse discrimination” but won’t spend equal breath bemoaning the remnants of racial discrimination in the work force. And if one wants to address those issues, it should be in small breaths taken between shouting about the big issue: that unequal law enforcement policies undermine black and Hispanic communities, and deprive their children of opportunity.