science


Creating a working scientific theory is hard work. The observations and experiments must support the theory, and new predictions must be made and verified. Once established, more and more is expected of it. On the positive side, the theory might feed technological developments. However, it also might require re-thinking dearly-held philosophical, economic, or political views.

It appears to be much easier to attack a dominant paradigm. One needs to do only two things: to sow doubt about the dominant paradigm, and to establish oneself as the only truthful authority. Here is how it works, ordered in a list of strategies that are increasingly nihilistic:

(1) Amass information that supports one’s view.

Lists of titles of articles and web pages can be constructed to convey a sense that there is a wide body of support for one side of the argument. Do not present the data supporting the opposing view in a similar manner; the relative numbers will probably undermine one’s case. However, journal articles and abstracts often contain impenetrable technical jargon or provocative questions. These are particularly useful, because their language will be read to support a hypothesis, even if their results do not.

(2) Search out examples of poor science in support of the dominant paradigm.

Lists of titles and abstracts that support a view can be nicely complemented with catalogs of biased or erroneous articles that were written in support of the opposing view. There should be plenty of these, because poor-quality yet uncontroversial results receive less scrutiny than ones that are obviously wrong. Their existence undermines the other side’s credibility. Finding them also makes it appear that one has conducted an unbiased and exhaustive search of the literature, and found it lacking.

(3) Present all information as equal.

The quality and reliability of the thousands of scientific papers that are published each year varies widely. Many studies are designed poorly, with samples that are too small or experiments that are dominated by noise. They are published anyway, because scientists must publish in order to receive further funding. This can be used to one’s advantage. If data can be separated from its reliability in a presentation, then all results can be portrayed as equivalent, and any conclusion can be molded from it.

(4) Emphasize any doubt in the opposing paradigm.

Scientific researchers always have to qualify their conclusions with statistical statements about the relative certainty of a result. Scientists also have a tendency to follow a description of a result with truisms about how much is left to be learned. This can be used to one’s advantage. The restrained language of many scientists is rhetorically underwhelming when contrasted with bombastic certitude.

(5) Remind everyone that science is not a democracy.

Sure, science is based on a shared reality, in which experiments and observations must be reproduced by many people. However, scientists will readily admit that if information is faulty or incomplete, significant theories might be found to be wrong. Therefore, one can ignore the observations and experiments that underlie the dominant paradigm, and simply point out that it is possible even for large numbers of scientists to be wrong.

(6) Appeal to history’s paradigm shifts.

History abounds with stories of dominant paradigms that were overturned, ushering in new eras of understanding. These can be detached from their historical context, and turned into anecdotes that confirm an eccentric viewpoint. Simply avoid any explanation of why the old paradigm was held to be true, how evidence emerged that was contrary to the fading views, and what ideas motivated those who developed the new paradigm. The important thing is that ideas change, so there is no reason to trust our current knowledge.

(7) Demonize the opponent.

Describe those who hold opposing views in terms that will preclude people from listening to them. If one’s opponents can be described in emotionally-laden language, many people will be less inclined to think critically about the debate at hand. A slur should be matched to the audience at hand: the religious find materialistic atheists repugnant; conservatives find socialists (or even liberals) threatening; liberals despise the self-interest of corporations, particularly big chemical and pharmaceutical companies.

(8) Reveal a conspiracy.

This serves two purposes. First, it serves as a counter to information that does not serve one’s rhetorical goal. Any contrary information can be dismissed as a product of the conspiracy. By extension, it is only possible to trust the information presented by people who oppose the dominant paradigm. Second, by revealing the knowledge of the conspiracy to one’s followers, one compliments their intelligence, because those in on the secret feel that they are exceptionally perceptive.

With these steps, one can generate a raging controversy that will entrance the media and enliven the Internet. Simple, right?

OK, that was just letting off steam. I don’t, in fact, believe that conspiracy theorists and true believers actually check off an eight point list when they wish to engage in debate. I suspect that these strategies emerge on their own, as people search for ways to convince themselves and others of their ideas.

Indeed, some of the above strategies resemble components of a genuine scientific debate. For instance, I implement a form of the first strategy when I use approximations in my work. I do so without compromising my scientific integrity by performing calculations that show that the things I left out won’t change my conclusions, at least to the accuracy that is required for the problem at hand. Unfortunately, the process of making approximations (or setting aside information that is not directly relevant) requires careful justification, and can invite controversy.

Therefore, I find the parallels between a scientific debate and someone trying to push a pet theory to be vexing. I wonder, how straightforward is it for someone to tell the relative quality of Ned Wright’s cosmology tutorial and the Iron Sun hypothesis? How does one convince non-experts that the risk from vaccines is tiny compared to their enormous benefit, when a Kennedy lends his pen to the other side? How can a lay person know whether to trust New Scientist or Dr. Roy Spencer on global warming?*

Scientific debate has always been difficult. I wanted to blame the Internet, but then I realized that the good old days weren’t much better. In my freshman year in high school, I tried to use a book I found in the library to write a report on the lost city of Atlantis. The book claimed that Atlantis was once a seat of technology, with radios, flying cars, and nuclear power, and I nearly ran with that. Fortunately, Mr. Winters guided me to more sane literature describing the eruption of Mt. Santorini, and I got to learn something useful about vulcanism and the early demise of the Minoan civilization.

So what is there to do? I hope that teachers, scientists who write for the public, publications that cover diverse scientific issues, and scholarly organizations will win the debate… Otherwise, I won’t get much joy if nature resolves things the hard way.

(*If you were wondering, I trust Ned Wright, Wired, and New Scientist on the above issues, respectively.)

I have just been arguing with myself over an Op-Ed in the New York Times yesterday, which partly described a controversy surrounding a recent trial of an AIDS vaccine in Thailand. The controversy centers on whether the results of the study were statistically significant — that is, whether the authors truly found that the vaccine prevented infections, or whether the apparently-favorable result was a product of chance. Apparently, some outside observers felt that, when the research team first announced their results, they had cherry-picked the statistical test that cast their results in the most favorable light.

At first, I was inclined to support the author of the Op-Ed, Seth Berkley, in defending the study. I think that it goes without saying that finding an AIDS vaccine is worth a significant monetary investment, because it would save many lives.

However, one sentence of the Op-Ed left me taken aback:

This illustrates why the controversy over statistical significance is exaggerated. Whether you consider the first or second analysis, the observed effect of the Thai candidates was either just above or below the level of statistical significance. Statisticians will tell you it is possible to observe an effect and have reason to think it’s real even if it’s not statistically significant. And if you think it’s real, you ought to examine it carefully.

I read that to my wife, and she put words to my own thoughts, saying, “Yes, there is a word for thinking that something is real when it is not statistically significant: bias.”

Statistics is useful because it allows us to quantify how certain we are that something is real, independent of our desire for it to be real. When the result of an experiment falls near the boundary of statistical significance or insignificance, all that one can infer is that one doesn’t have enough information to securely confirm or refute the original hypothesis. If the hypothesis is about something critical, then one should get more data.

The attitude expressed in the above paragraph concerns me, because when faced with a result that lies on a statistical margin, there is a tendency for some researchers to try a number of statistical tests, and only choose to report the test in which the result is significant. The problem with trying a number of statistical tests is that it increases the chance that the randomness of nature will produce a spurious positive result. The order in which the data from the Thailand AIDS vaccine was released apparently raised exactly this concern among some scientists.
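To put a rough number on that worry, here is a minimal sketch in Python, assuming a conventional 5% per-test threshold and (unrealistically) independent tests, of how quickly the odds of a spurious “significant” result grow when one shops among several tests:

```python
# If each test has a 5% chance of yielding a false positive, trying several
# tests and reporting only the most favorable one inflates the chance of
# claiming a spurious "significant" result.
alpha = 0.05  # assumed per-test significance threshold

for n_tests in (1, 2, 3, 5, 10):
    p_spurious = 1.0 - (1.0 - alpha) ** n_tests
    print(f"{n_tests:2d} independent tests: "
          f"{100 * p_spurious:.0f}% chance of at least one false positive")
```

In practice the tests applied to a single data set are correlated rather than independent, so the inflation is smaller than this, but the direction of the effect is the same.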

So, I took a deep breath, and checked the numbers. The initial press release claimed that there was only a 4% probability that the reduced infection rate in the vaccinated group was a product of chance, whereas the later results gave chances of 8% and 16% that the result was a product of chance.

Apparently, according to the Wall Street Journal, biologists consider a <5% probability that a result arose by chance to be the key level of significance for judging that a result is probably real. This seems reasonable. For that matter, the 16% probability that the result is spurious sounds pretty good to me, especially given that AIDS is a life-or-death situation, and anything that would help save those lives is important to pursue.

I have used a range of probability cutoffs in my studies. I have reported signals that have had a 10% probability of being caused by chance when the existence of the signal was mundane. For example, I called any periodic signal with a <10% chance probability a "detection" when I was writing my thesis on X-ray bursts from neutron stars, because it was already well-established that the phenomena occurred, and I was simply trying to build a sample. However, for a surprising result, my colleagues and I would demand a higher statistical significance. When I discovered a neutron star in a Chandra observation of the young star cluster Westerlund 1, I had to show that there was a <0.1% chance that the neutron star was there by accident before the referees would agree that the neutron star was actually in Westerlund 1 (and even then, I had to explain the statistics to the referees twice).

However, that is enough about me. The point is, there is no "rule" as to what statistical significance level is reliable. One could be fooled by a one-in-a-million result if one is unlucky. Rather, it is a matter of what will convince one's audience given the importance of the result (OK, I suppose that is the bias that upset me earlier), and whether the presentation of the result is faithful to any lingering uncertainties.

Unfortunately, the AIDS result will continue to be controversial, because it is marginal. To quote the Wall Street Journal article,

Observers noted that the result was derived from a small number of actual HIV cases. New infections occurred in 51 of the 8,197 people who got the vaccine, compared with 74 of the 8,198 volunteers who got placebo shots.
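For what it is worth, a number in this ballpark can be reproduced from the quoted counts alone. The sketch below is a naive two-proportion z-test in Python; it ignores the choices made in the actual analyses (intent-to-treat versus per-protocol, exclusions, and so on), which is exactly where the controversy lies, so treat it only as an illustration of how marginal the raw numbers are.

```python
import math

# Infection counts quoted by the Wall Street Journal.
infected_vaccine, n_vaccine = 51, 8197
infected_placebo, n_placebo = 74, 8198

rate_vaccine = infected_vaccine / n_vaccine
rate_placebo = infected_placebo / n_placebo

# Pooled-proportion standard error for the difference between the two rates.
rate_pooled = (infected_vaccine + infected_placebo) / (n_vaccine + n_placebo)
se = math.sqrt(rate_pooled * (1 - rate_pooled) * (1 / n_vaccine + 1 / n_placebo))

z = (rate_placebo - rate_vaccine) / se
# Two-sided probability that a difference this large arises by chance.
p_chance = math.erfc(z / math.sqrt(2))

print(f"infection rate: {100 * rate_vaccine:.2f}% (vaccine) "
      f"vs {100 * rate_placebo:.2f}% (placebo)")
print(f"z = {z:.2f}; chance probability of about {100 * p_chance:.0f}%")
```

This crude estimate lands near the 4% figure from the initial press release; presumably the later 8% and 16% figures come from analyses that count a somewhat different set of volunteers and infections.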

These numbers dampen my initial enthusiasm, especially given the spectacular successes of past vaccines. I was recently reading about Louis Pasteur’s vaccine work in a collection of biographies of 19th century scientists (The Golden Age of Science, edited by Bessie Zaban Jones). Pasteur developed a number of very successful vaccines, including one for anthrax that reduced the mortality rate of oxen and sheep from the disease from 10% to 1%, and one for rabies that decreased the mortality rate in humans from over 15% to about 1%. These are huge, statistically significant improvements. The initial samples that established the efficacy of these vaccines didn’t need to be large. The first anthrax study was of 50 sheep, half of which were vaccinated, and nearly all of which survived exposure to anthrax. It is somewhat disheartening that what gets hyped as progress in medicine now can be so slight in comparison.

It’s a shame that the results came out the way that they did, and I don’t think that Seth Berkley’s Op-Ed will help. Both trigger the destructive instincts of people (like me) who want to see justice meted out to scientists who hype up their work by playing with statistics.

However, one also doesn’t want to reject a promising vaccine just because one resents feeling played. Looking into this, two numbers jumped out at me. First, the most conservative estimate is still that there is a >84% chance that this vaccine works. That isn’t bad. Second, even if the AIDS vaccine “only” reduces infection rates by 30%, that could prevent up to 1,000,000 infections a year. That would be huge.

I attended a noontime lecture today given by Felice Frankel, describing her work as a science photographer, and her innovative projects on using drawings as a learning tool for scientific concepts. Her work is a great example of how important good visualizations are in helping to convey ideas. She has a new book coming out, No Small Matter: Science on the Nanoscale (with George Whitesides), that looks like it will be interesting. I am putting it on my list of things I’d like to have. The photos she showed in the talk were gorgeous. It looks to be a good study of how visual representations can inform physical intuition.

Ms. Frankel also talked about some of the NSF-funded work she did on a program to examine how student drawings could be used as a learning tool. The Picturing to Learn site has examples of student drawings that were made in response to questions about basic physical concepts. They reveal what the students are thinking, and can be used to highlight parts of the concepts that students missed. If I were teaching, I probably would include some of these ideas in my classes.

One of my last astronomy projects was a study of a star surrounded by the debris of two planets that suffered a catastrophic collision.

Two planets colliding around the binary suns of BD +20 307.

When asteroids or planets collide, the debris that results ends up orbiting the star. Our own solar system contains a small amount of this sort of dust, which is produced by collisions between small asteroids and by comets evaporating as they approach the Sun. This dust produces the zodiacal light: it shines by reflecting sunlight, producing a glow that can sometimes be seen in the morning sky.

Very rarely, collisions between larger asteroids, or even planets, occur around other stars. This creates immense amounts of dust that can absorb light from its star, and re-radiate it at longer wavelengths. Therefore, astronomers identify stars around which collisions have occurred by looking for stars that are unexpectedly bright at infrared wavelengths.

We don’t really know how often big collisions occur between asteroids and planets around the Sun. Until recently, we only had indirect information from our own Solar System to work with. For instance, we think that when the Earth was less than 50 million years old, a Mars-sized planet collided with it, breaking off material from our planet to form the Moon. The dust kicked up by this collision would definitely have been visible to extraterrestrial astronomers.

I am not sure whether later impacts would have produced much dust. Between an age of about 400 and 700 million years, there is some geological evidence that the Moon was bombarded by a large number of asteroids, during a period referred to as the late heavy bombardment. Presumably, other inner planets would have been bombarded as well, although erosion and volcanism would have erased the signs. This could have produced dust that was visible to astronomers near distant stars.

At the Solar System’s current age, asteroids are believed to hit Earth only every few hundred million years. These isolated events might (or might not) wreak havoc on Earth, but they are minor on cosmic scales. In the past 20 years, we have also seen Jupiter hit by comets not once, but twice. However, Jupiter is so massive that it attracts those sorts of collisions, and in any case, the impactor simply gets absorbed into the planet, without polluting the surroundings with dust.

So, to get a better handle on how often collisions occur around stars of different ages, astronomers get a large catalog of stars (such as the Henry Draper catalog and its extensions, which contain the 359,083 brightest stars), and see whether any are unexpectedly bright in the infrared. We find that most stars around which collisions have occurred are young (10 to 100 million years old), because the orbits of their planets and asteroids haven’t settled yet. Old stars surrounded by the debris of annihilated planets are reassuringly rare.

Nonetheless, we have some good evidence that big collisions do sometimes happen, even around older stars. Over the summer I noticed a couple of papers. Lisse et al. recently described the composition of dust around the 12 million year old star HD 172555. They infer that a large asteroid recently collided with a rocky planet around that star. Moor et al. report the discovery of four stars surrounded by dust. Three of them are less than 200 million years old (phew), and one is probably about 2 billion years old (HD 169666, uh-oh). Moor et al. do not elaborate much on the origin of the dust in their paper (frankly, in the preprint version, they didn’t do a particularly good job of presenting their conclusions).

My collaborators and I, however, do not suffer from similar restraint. I joined a project led by Ben Zuckerman, to study the dustiest Solar-type star known, BD +20 307. The amount of dust around the star led us to believe that two planets had collided recently, in an event of a magnitude similar to that which formed the Moon. We expected to discover the star was young, but we were in for a surprise. . .

I have written a piece about the result, and placed it here. You can also see our press release, if you want the quick version.

I’ve been reading a bit more about endocrine disruptors, because it turns out that I don’t really know any experts that can do the thinking for me. I belatedly decided to read the paper that Dr. John Myers (who I mentioned a couple posts ago) and collaborators wrote on the possible effects of low doses of endocrine disruptors.

That paper contained a general overview of the problem. It had a few interesting examples of responses that were not monotonic with dose. For instance, hormone-mimicking drugs used to treat cancer (Tamoxifen, for example) can cause the cancer to flare briefly, until the concentration of the drug gets high enough to start inhibiting the cancer. This is an interesting result, and seems to be the substance of Dr. Myers’s claim from my last post on this. However, at first I thought that he was saying that a chemical could have no discernible effect at high doses, but a significant one at lower doses. If that were the case, it would severely impact how one carries out toxicological testing.

So, to get more background, I read a review article cited in Dr. Myers’s paper. It provides more detail about other ways in which large effects can be produced by small exposures.

The most scandalous suggestion in the review article is that some toxicology studies extrapolate low dose responses from a single high dose measurement, by assuming that dose and effect follow a linear relationship with zero response at zero dose. That assumption would be invalid if a response saturates. This occurs for many biological systems, because the available chemical receptors get filled as the hormone concentration increases. I find it rather hard to believe that toxicology studies regularly use this assumption, because, frankly, anyone making such an assumption probably shouldn’t have access to a laboratory.

At the very least, I would imagine that toxicology studies would keep testing lower doses until they were sure the response looked linear, and then start extrapolating. That should generally be safe, I would think. The response of a saturating system will get steeper at lower concentrations, so as long as one extrapolates from more than one data point, one will tend to over-estimate the response at lower doses. Therefore, the extrapolation would be conservative. However, at this point, I am reading the review article in a much less worrisome light than the authors meant it to be taken. I am assuming that if an astronomer finds the authors’ point obvious, biologists would already be accounting for it in their work.
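Here is a toy sketch of that argument in Python, assuming a generic saturating (Michaelis-Menten-like) response curve; the numbers are invented purely for illustration:

```python
# Toy saturating dose-response curve: steep at low doses, flat at high doses.
def response(dose, r_max=1.0, k=1.0):
    return r_max * dose / (dose + k)

d1, d2 = 10.0, 20.0          # two measurements deep in the saturated regime
r1, r2 = response(d1), response(d2)

# (a) The review's worry: a line through zero and a single high-dose point.
slope_single = r2 / d2
# (b) The two-point extrapolation described above.
slope_pair = (r2 - r1) / (d2 - d1)

for d in (0.01, 0.1, 1.0):
    one_point = slope_single * d             # underestimates a saturating response
    two_point = r1 + slope_pair * (d - d1)   # overestimates it (conservative)
    print(f"dose={d:5.2f}  true={response(d):.3f}  "
          f"one-point fit={one_point:.4f}  two-point fit={two_point:.3f}")
```

The single-point, through-zero extrapolation badly underestimates the low-dose response of a saturating curve, which is the review’s concern; the two-point extrapolation overestimates it, which is why I would call it conservative.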

Perhaps the concern is that, even if one finds the linear regime of the high-dose response, other responses will have already saturated at doses orders of magnitude lower. In this case, the predictive power of the high-dose studies would be very limited. However, as I mentioned in my last post on this, I would still expect to see the low-dose effect if I test to the point where the high-dose effect is negligible.

Up to now, I have only described saturating response curves, which are monotonic. The review article also suggests that some responses can be non-monotonic. Unfortunately, the first example given in the review article, on how an estrogenic chemical (diethylstilbestrol, or DES) affects prostate development in mice, seems to be a poor choice to make their point about toxicological studies. At low doses, the chemical causes the prostate to grow. At high doses, the prostate ends up smaller, but only because the chemical produces “gross abnormalities in the reproductive organs.” This is a rather trivial example, like saying that getting hit in the head causes headaches at low doses, but cures them at high doses when it kills you. Clearly, at all doses tested, you can tell this chemical (like getting hit in the head) is generally not a good thing.

Likewise, the review article points out that vitamins must be taken at the right dose. When vitamin intake is too low, the deficiencies cause diseases. At high doses they can become toxic. However, this example has no bearing on low-dose toxicology, because there is no regime in which one is concerned about getting too low a dose of synthetic hormone-mimicking chemicals.

There are a lot of valid points in the review article. I agree that one has to be concerned that multiple chemicals might have similar responses, so that, for instance, the concentration of any given estrogen-mimicking substance isn’t nearly as important as the sum of the concentrations of all such chemicals. It also seems plausible that very low doses of hormone-mimicking chemicals could affect the development of fetuses, because the chemical signals that allow cells to differentiate can be small. (The book Mutants: On Genetic Variety and the Human Body, by Armand Marie Leroi, describes how a fetus develops very well.)

However, the articles by Dr. Myers et al. and Dr. Welshons et al. are cluttered with examples that don’t seem relevant to the problem at hand. Mostly, I find it hard to believe that it is commonplace for biologists to extrapolate from single data points. If I were in a bad mood, I could dismiss their work as unnecessarily alarmist.

I am trying to reserve judgment, however, because medicine has a lot of mysteries at the moment. I am constantly hearing about rises in the rates of allergies, autism, and cancers. Some of this is probably because the tests to find these conditions are more effective than previously. However, unless it is demonstrated that the increase in rates is caused by people looking harder, it seems important to consider the possibility that other factors of our modern lives might be making them more common.

Fixing the Solar Model

Over the past few years, I have been following a minor controversy about the current model for the Sun. It seems that advances in computational power and numerical techniques have allowed astrophysicists to improve their estimates of the relative amounts of each element in the Sun, and that this has broken our best model for the structure of the Sun.

Active Region 1002 on an Unusually Quiet Sun.

Our understanding of the structures of stars (like the Sun) is detailed, but contains a lot of necessary approximations. One starts with three relationships that are well-understood: (1) the balance between a star’s gravity and the pressure of its plasma (hydrostatic balance), (2) a relationship between density and mass (a continuity equation), and (3) a relationship between density, pressure, and temperature (for instance, the ideal gas law).
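For reference, in their simplest spherically symmetric form these three relations read:

```latex
\frac{dP}{dr} = -\frac{G\,m(r)\,\rho}{r^{2}} \quad\text{(hydrostatic balance)}, \qquad
\frac{dm}{dr} = 4\pi r^{2}\rho \quad\text{(continuity)}, \qquad
P = \frac{\rho\,k_{B}T}{\mu\,m_{H}} \quad\text{(ideal gas law)}
```

where m(r) is the mass enclosed within radius r and μ is the mean molecular weight of the plasma.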

Next, one needs to know (4) the rate at which energy is generated by thermonuclear fusion within stars. In the Sun, four atoms of hydrogen are built into a helium atom through a process that also produces, for brief periods before they decay, unstable isotopes of beryllium, boron, and lithium (these reactions are important in understanding the Solar neutrino problem). In stars that are more massive and hotter than the Sun, hydrogen is burned into helium through a catalytic cycle involving carbon, nitrogen, and oxygen.

Finally, one needs to model how heat generated in the core of a star reaches its surface. There are two possibilities: (5a) heat is carried by light diffusing outward, or (5b) heat is transported by gas that rises buoyantly, cools, and then descends back into the star (convection). These two effects are important in different regions of each star. The key to determining where they are important is to determine how far light can travel through the star before it interacts with an ion in the plasma (the opacity of the plasma). For many elements, both calculating the opacity from first principles and determining it from experiments turns out to be exceedingly difficult. Moreover, once the calculations are done for each element, one needs to know what fraction of the star is made up of each element. This is also hard to measure with high precision. This is where the controversy starts.
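The opacity enters the model directly. In the regions where radiation carries the energy, the temperature gradient needed to push a luminosity L(r) outward is, in the standard diffusion approximation,

```latex
\frac{dT}{dr} = -\frac{3\,\kappa\,\rho\,L(r)}{16\pi a c\,T^{3}\,r^{2}}
```

where κ is the opacity; convection takes over wherever this radiative gradient would be steeper than the adiabatic gradient (the Schwarzschild criterion). This is why changes in the assumed abundances, which feed into κ, propagate into the predicted structure of the Sun.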

Astrophysicists have a lot of data that they have to explain when they try to model the abundances of elements in the Sun. Looking at the atmosphere of the sun, they have to explain the change in apparent brightness of the solar disk as one moves from the center to the edges (limb darkening). They need to explain the depths of the absorption lines from various ions, which is related to both the temperature in the atmosphere and the abundances of each element. They have to explain how the shapes of hydrogen lines vary across the face of the disk, which is related to how the pressure of the atmosphere changes with height. Finally, they need to explain why the surface of the Sun is speckled. This last feature, in particular, has led astrophysicists to develop three-dimensional models of the Solar atmosphere.

While they were at it, several groups incorporated the most recently calculated and measured values for the opacities of each element, and dropped the assumption that everything would locally be in equilibrium.

The new models seem to do the best job yet of explaining the appearance of the surface of the Sun. The models also imply that the abundances of carbon, nitrogen, oxygen, and neon in the Sun are lower than astrophysicists previously thought. The abundances of many of the elements are more consistent with measurements of elemental abundances in meteors and in interstellar space.

At first glance, this wouldn’t seem like it could cause much of a problem for modeling the Sun, because each of these elements is about 10,000 times less abundant than hydrogen. However, changing the assumed abundances of these elements changes the assumed density and opacity of the Sun, and it turns out that both of these things affect another important set of calculations.

In the 1960s and 1970s, it was realized that the Sun was continually pulsing at a barely-perceptible level. The pulsations are caused by sound waves propagating through the Sun. The set of characteristic frequencies at which the Sun pulses tells us about its interior, much like how seismic waves on Earth can be used to study the Earth’s core. As a result of their analogous usefulness, the Solar pulsations are called helioseismic (although the physical mechanisms causing “seismic” disturbances on the Earth and the Sun are very different).

Constructing a model of helioseismic waves requires knowing the speed of sound throughout the star. This in turn depends upon the density of the plasma, and the locations at which energy is transported by radiation or convection. It turns out that using the new elemental abundances, the models for the pulsations of the Sun no longer work.
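The connection is direct: for an ideal gas the adiabatic sound speed is roughly

```latex
c_{s} = \sqrt{\frac{\Gamma_{1} P}{\rho}} \approx \sqrt{\frac{\Gamma_{1}\,k_{B} T}{\mu\, m_{H}}}
```

so anything that shifts the run of temperature, density, or mean molecular weight with depth, or the location of the boundary between the radiative and convective zones, shifts the predicted pulsation frequencies.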

Currently, the astrophysicists involved are hopeful that other changes can be made to the Solar model so that everything will be consistent again. Over the past few years, refinements of the model have improved its agreement with the helioseismic data. Perhaps this problem will go away with better calculations of the opacities of various elements. However, the changes needed are up to 15% at some temperatures, which might have other observable consequences.

There is even a small chance that the discrepancy could be a piece in a bigger puzzle. Already, the Sun has provided us with a big hint as to what might be found beyond the Standard Model for particle physics. We see fewer neutrinos from the Sun than we expect given the nuclear reactions that are occurring in its core. This is taken as evidence that the electron neutrinos we are looking for are changing into other types of neutrinos, which in turn implies that neutrinos have mass (see the page by the late John Bahcall for more detail). Could the mismatch between helioseismology and the standard solar model that was introduced with the new abundances be another clue? I don’t know. However, the best place to look for progress in science is where there is something significant we can’t yet explain.

I was listening to NPR’s Living on Earth this morning, and my curiosity was piqued by a piece on endocrine disrupting compounds. Over the past 12 years or so, scientists have been suggesting that some chemicals that are found to be harmless at high doses might have important biological effects at low doses. This idea is important, because widely-used chemicals such as bisphenol-A (used in food containers) are thought to be harmful to human metabolism. Bisphenol-A has been banned in Europe and Canada, but is still widely used in the U.S. It is a serious issue, because some scientists believe the subtle effects of these compounds could be contributing to the rise in allergies and some cancers in the U.S.

The piece was an interview with Dr. J. Peterson Myers. The piece answered the question about how endocrine disrupting compounds act: they mimic hormones and interfere with the ability of genes to produce proteins. The part that got me curious was the following claim about their effects:

I should emphasize that it’s not even close to brand spanking new. It’s solid in endocrinology. This is something that physicians have to structure their drug deliveries around. They know that at low doses you can cause effects that don’t happen at high doses. In fact, you can cause the opposite effect.

This seems to imply that the low-dose effect disappears at high doses.

With most toxins, there is a threshold level at which they begin to have harmful effects. For instance, at low doses, botulinum toxin merely paralyzes muscles locally in the body. As Botox, the toxin is used in cosmetic treatments to smooth wrinkles, and prevent the facial expression of emotion (okay, the second effect may not be the intended one). However, at high doses, the toxin causes botulism, a potentially-fatal form of food poisoning. Examples like this inspired the aphorism, “The dose makes the poison.”

The example given in the radio piece of an endocrine disruptor that acts differently at low and high doses was Tamoxifen. Tamoxifen is used to prevent the recurrence of breast cancer. At high doses, the drug, and by-products of the drug made by the body, bind to estrogen receptors in the breast. Breast cancer tumor cells require estrogen to grow, so Tamoxifen prevents the growth of tumor cells. However, it turns out that in the uterus, Tamoxifen acts more like estrogen, and can encourage the growth of uterine cancer.

Unfortunately, the effects of Tamoxifen that I have been able to find (note that I am not a biologist, and don’t really know how to search the relevant literature) don’t seem to address the original question I had. Tamoxifen, at the same dose, acts differently on estrogen receptors in different parts of the body. It also acts differently on various hormonal receptors in fish. However, I was not able to find an explanation on how it might be harmful at low doses, but beneficial or benign at high doses.

I can construct a thought experiment, in which a chemical interacts badly with a receptor at low doses, but has a beneficial reaction at a higher dose. The bad reaction at low doses would be a side-effect of its use at higher doses. However, I can’t figure out how to get rid of the bad interaction from the low dose. Shouldn’t it always be identifiable?

Perhaps I am missing the point. It is possible that the problem is that the EPA doesn’t look for the kind of detrimental effects that are characteristic of endocrine disrupting compounds, including elevated long-term risks of cancer. In that case, complaining about the doses tested misses the point. The real problem could be that the testing methods aren’t designed to catch subtle-enough effects.

So, the cumulative effects on an individual of exposure to many different endocrine disrupting compounds might be important. I am going to try to find an expert to explain this better to me. Take this post as the start of a teaching moment: one way or another, it is an example of a “near-miss.”

I regularly notice people complaining about conformity in science. Generally, this complaint accompanies a narrative about how someone’s pet theory is ignored by scientists, who are inevitably accused of being slaves to government funding. From my experience, I feel that these complaints are unfounded. My thinking on this was influenced about 10 years ago by reading Schroedinger’s book Nature and the Greeks, and a book by Bertrand Russell (I wish I could remember which one). They can do the subject more justice than I. Nonetheless, I feel there is still a place for someone to defend conformity in science.

It is true that the big breakthroughs that we hear about in science often have at their center some giant of intellect and original thinking. When many of us think of scientists, we think of Newton, Darwin, Einstein, Feynman, and (maybe) Watson and Crick. The problem is, the narratives commonly associated with these scientists often ignore the other great minds that surrounded them. Newton corresponded regularly with Leibniz and other mutual friends, which almost certainly influenced their concurrent development of calculus. Einstein was surrounded by other physicists who recognized that Newton’s theories were incompatible with some key observations, and mathematicians who were able to introduce Einstein to the equations that he needed to translate his ideas into predictions.

Most striking is the discovery of the structure of DNA. We all probably know that Watson and Crick got credit, but how many of us know the name of the woman whose experimental work inspired the Nobel Prize winners’ model?

Science certainly has its heroes, but scientific progress is by no means driven by lone geniuses. The people above were geniuses, no doubt, but they were part of a broad and vibrant scientific community. Moreover, their brilliance was matched equally by their ability to convince their peers that they were correct.

The thing is, science is a collective enterprise. A scientific theory must produce predictions that can be verified by independent observers. This requires that other scientists be willing to accept (tentatively) some paradigm so that they can perform experiments.

Indeed, developing a new theory requires that a scientist understands where the old theory fails. Einstein and his peers had to understand that Newton’s theory of gravity worked well in many situations, predicting the trajectories of cannonballs, and the motions of the planets beyond Venus (Mercury, on the other hand, was one of the failures). Therefore, a key part of the success of Einstein’s theory was explaining how, in most cases, bodies could behave in the way predicted by Newton. Given the success that our major scientific theories have had in making predictions and producing technology, I am convinced that future breakthroughs will emerge by using those theories as working models, and continually testing their bounds.

Don’t get me wrong. I love stories of big, dramatic breakthroughs. I would love to overturn preconceived notions, and find my way into the pantheon of the world’s great geniuses. I also would take great pleasure in hearing about the ideas that will arise and take us by surprise.

However, scientists have learned an enormous amount about the natural world in the last five centuries, and currently tens of thousands of people are working in physics, biology, chemistry and engineering. With that in mind, it seems likely that new knowledge will appear in smaller increments than it did in our heroic past. To use a quote also used by Newton, we are standing on the shoulders of giants. Only now, there have been even more giants. Perhaps it is time to let go a bit of the dream that a super-human will come to deliver us our next big breakthrough.

Given the challenges we face as a society, should we put our resources in “big ideas” without good reason to think they will pan out? I think the status quo is working pretty well, because it acknowledges the collective nature of science (and presumably reality). Scientists get funded when they can make it seem plausible to other scientists that their ideas will bear some fruit. If a scientist lacks the perspective to explain why the old work was inadequate, and lacks the skill to convince others their new ideas are worth pursuing, I would claim that giving them money is little better than putting a load of chips on the green slot of a roulette wheel.

In other words, the geniuses have to understand that there are many things to which they should conform.

Apparently, in the circles of conservative commentators and blog trolls, the claim has been going around that the Earth has been cooling in the last decade. Now, there are lots of places that one can go to find correct information on the web, so I’ve tended not to spend much time responding to ridiculous claims. However, I heard yesterday on NPR that the most popular global warming blog was written by a group denying global warming is caused by people. Clearly, more voices are needed to explain why most scientists do believe humans cause global warming.

The suggestion that the Earth is cooling is a misrepresentation of the data (see the figure). Global average temperatures have increased by about 0.13 °C per decade over the past 50 years.

However, one should not expect to see this trend year-to-year, because yearly temperature measurements vary by about 0.1 °C to 0.2 °C. These yearly temperature variations are caused by several things. For instance, El Niño tends to cause surface temperatures to rise, because it redistributes heat from the ocean into the atmosphere. Volcanic eruptions, such as the one from Mount Pinatubo in 1991, lower surface temperatures by putting chemicals that reflect sunlight into the upper atmosphere.

As a result of the yearly variations, one should only expect to see a trend in global temperatures over timescales of a decade or two. Statistically speaking, one can only see a trend in the data when the error in the mean over a long time period is about 3 times smaller than the trend. The error in the mean scales as the yearly errors divided by the square root of the number of measurements. If the yearly errors are equal to the trend, as is the case for global warming, then it would take about 10 years to measure the mean. Measuring a trend would take longer: 20-30 years. Indeed, in 1979, scientists were not sure that a global warming signal had been seen, because they could only look back a few decades. Thirty years later, in 2007, the trend was clear, because almost a hundred years of data was available.
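To make that concrete, here is a small sketch in Python, assuming for illustration a trend of 0.13 °C per decade and a year-to-year scatter of 0.15 °C (in the middle of the range quoted above), of how significantly a straight-line trend could be measured from N years of annual data:

```python
import math

trend = 0.013   # deg C per year, i.e. about 0.13 deg C per decade
sigma = 0.15    # assumed year-to-year scatter in deg C

# Standard error of a least-squares slope fit to n annual points with
# independent scatter sigma: sigma_slope = sigma * sqrt(12 / (n^3 - n)).
def slope_error(n, sigma):
    return sigma * math.sqrt(12.0 / (n**3 - n))

for n in (10, 20, 30):
    significance = trend / slope_error(n, sigma)
    print(f"{n:2d} years of data: trend measured at about {significance:.1f} sigma")
```

With only a decade of data the trend is buried in the noise (well under 1 sigma); only after two or three decades does it begin to stand out clearly, which matches the history sketched above.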

What then of the last 10 years? First, the data looks like the temperature has been roughly constant. As I mentioned above, this is because 10 years is only enough to measure a mean, not a trend. There could not possibly be evidence that the Earth is cooling on a 10 year time scale.

Moreover, cherry-picking 1998 and 2008 to claim a global cooling signal, as I heard the conservative commentator Deroy Murdock do on the Tavis Smiley show, is ignorant at best, and dishonest at worst. In terms of global temperature, 1998 was the hottest year on record, tied with 2005. It is thought that a strong El Niño effect made 1998 so hot. 2008 was “only” the eighth hottest year on record. Some basic familiarity with numbers would have made the conservative commentator realize that 2008 was cooler, so technically he was correct. However, 8 of the 9 hottest years measured were after 2000; 15 or 16 of the 20 hottest years were since 1990; and at most one of the 20 hottest years was before 1980. The last decade was hot!

Therefore, although the claim that the Earth was cooler in 2008 than in 1998 is true, I think that anyone who brings it into the global warming debate is warping the science, and defying logic.

I’ve been thinking for a while about whether scientists are doing a good job of presenting their work to the public. I have done a little of this myself, preparing a couple of press releases a year while I was an astronomer, some of which were picked up by magazines like Sky and Telescope. I even did a radio interview on Kirsten Sanford’s This Week in Science show. That was fun, but with my new job, I need to find other ways to contribute.

So, I came up with this idea of trying to explain why scientists believe their theories. It is certainly possible to find plenty of information on evolution, climate change, general relativity, and particle physics on the Internet (just head to Wikipedia; it’s one of the first places I go). However, what I have not found is a site that summarizes a wide range of scientific theories in a uniform way, so that people can compare them side-by-side.

Therefore, I have decided to start a site of Science Tracts (taking a cue from the religious) that concisely explain a number of scientific theories from different disciplines. Each page will explain what the theory is, what evidence led scientists to develop the theory, what successful predictions the theory has made, what technology relies on the theories (if any), the connections to other theories, and what big areas of uncertainty remain. I think it is illustrative to compare, for instance, General Relativity and evolution for how well they each work as theories.

Unfortunately, the Pew survey I referred to in my last post also revealed that only 13% of the general public visits Internet sites to learn about science. So, I might be talking to myself. However, I plan to leave space for comments, questions, and moderated debate. Start with this blog post if you’d like, because it may take me some time to figure out how Web 2.0 works. . .
