The news media and physics community are abuzz with two reports suggesting that neutrinos might travel faster than the speed of light (from the MINOS and OPERA experiments). The claim is that neutrinos travel faster than light by 1 part in 40,000. The results are significant at the vaunted six-sigma level, barring any currently-unknown error in the experiment.

This has physicists excited, because anything traveling faster than the speed of light would violate Einstein’s theory of Special Relativity, which, along with quantum mechanics, underlies all of modern particle physics. Particle physicists are looking for something like this, because they are deeply dissatisfied with the current theory. The Standard Model contains a number of parameters that have to be made up to match fundamental measurements (although once you do so, it makes a vast array of validated predictions). Something that violates Special Relativity in a way that is barely noticeable would be consistent with all of the tests that have so far said that Einstein was spot-on, but leave room for tweaking the underlying theory.

Of course, the most excited are those who have developed models that predict that neutrinos should travel faster than light, as is evident from some recent edits to the Wikipedia page on neutrinos.

Astronomers, however, are deeply skeptical that neutrinos travel faster than light, because they have already measured the speed at which neutrinos travel to an accuracy 10,000 times higher than either particle physics experiment. In 1987, a supernova occurred 168,000 light years from Earth in a nearby galaxy, the Large Magellanic Cloud. Two neutrino detectors were operating, and together they detected 24 neutrinos above the expected background that appeared nearly simultaneously at Earth with the light (photons) from the supernova. The coincidence between the photons and neutrinos arriving at Earth was used to determine the speed with which the neutrinos traveled, and it was found to match that of light to within 1 part in 450,000,000.
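For scale, here is my own back-of-the-envelope sketch of just how constraining the 1987A timing is, assuming the photons and neutrinos arrived within roughly three hours of each other (the numbers are illustrative, not the published analysis):

```python
# Back-of-the-envelope check of the SN 1987A timing constraint.
SECONDS_PER_YEAR = 365.25 * 24 * 3600
travel_time = 168_000 * SECONDS_PER_YEAR      # light-travel time from the LMC, in seconds

# If neutrinos beat light by 1 part in 40,000 (the size of the OPERA/MINOS claim):
head_start = travel_time / 40_000
print(f"OPERA-sized excess over 168,000 ly: arrive {head_start / SECONDS_PER_YEAR:.1f} years early")

# Instead, the neutrinos and photons arrived within a few hours of each other:
arrival_window = 3 * 3600                     # assumed ~3-hour window, in seconds
print(f"Implied limit: |v - c|/c < 1 part in {travel_time / arrival_window:.1e}")
```

In other words, if an OPERA-sized speed excess applied to the supernova neutrinos, they would have arrived years before the light, not hours.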

Most particle physicists are probably aware of this, but they have several reasons to give credence to the OPERA and MINOS results:

1) All of neutrino astrophysics involves ~10 MeV neutrinos, whereas the particle physicists were studying neutrinos 1000 times more energetic. Perhaps more energetic neutrinos travel faster than less energetic ones.

2) For 1987A, astrophysicists presumed neutrinos would travel at the speed of light, and focused their search around the time of the event. With the couple dozen events they found, they may not have been able to test for time delays or distributions.

3) For their experiment, the particle physicists simply had to understand the emission time of the neutrinos, the distance to the detector, and the physics of detecting the neutrinos (via muons, I believe). Astrophysicists have to understand the interior of the Sun to study neutrino oscillations, or of a collapsing star to understand their travel time. Astrophysicists definitely do not understand all the physics of a supernova – most of the models to date still don’t produce an explosion from a collapsing star.

4) Neutrino oscillations required the biggest tweak to the Standard Model (giving them mass) of any recent result in physics.

5) All of physics points to the need for new underlying theories: our theory of gravity (General Relativity) and quantum mechanics can’t be reconciled; the Standard Model has a lot of free parameters, and we haven’t yet found the Higgs boson; roughly 95% of the energy content of the universe is made up of stuff we haven’t been able to measure directly (dark matter and dark energy), and we only have hypotheses to explain it.

If I were to guess as to what would set off a paradigm shift in physics, at the moment, neutrinos seem the most likely candidate. So, yes, I am cautiously excited.

With several municipalities considering or even implementing bans on disposable grocery bags, I have noticed some claims emerging that reusable bags are actually worse for the environment than disposable ones. The basic argument is that reusable bags not only take more energy to manufacture (they are heavier, so this is doubtless true), but that the need to wash reusable bags negates any energy benefit from re-using them. A similar case was made for using recycled paper napkins rather than cloth in restaurants, and the numbers do favor recycled paper. At least, they do for restaurants, which replace their napkins when they get the slightest stain; at home, using that author’s numbers, cloth is likely to be the more “green” choice.

However, I didn’t find any good analyses for reusable and disposable grocery bags (while I did find some contradictory claims), so I decided to see what I could work out myself. I did a bit of searching one weekend to try to figure out how much energy it took to manufacture canvas and plastic bags, estimated how much energy it took to wash the canvas bags, and made a quick calculation of how many plastic bags I avoid using in a year by carrying around my canvas ones. The details of the references, assumptions, calculations, and rough uncertainties are here, and I am posting a summary below.

For manufacture, the canvas bags are heavier, and therefore it takes a lot more energy to manufacture one canvas bag than it does one disposable plastic one. I weighed the bags that I’ve used, and found that my Trader Joe’s 2003-vintage canvas bags weigh about 180 g each, and the disposable plastic bags I get when I forget the reusable ones weigh on average 6 g each. My canvas bags should have taken about 25 MJ to make [PDF] (growing the cotton, weaving the fabric, and assembling the bag), whereas one plastic bag should take about 0.5 MJ to make.

The energy used in transportation should be roughly proportional to each bag’s weight. I assumed that it took 5 MJ to transport my 180 g canvas bag [PDF], and 0.2 MJ to transport a 6 g plastic bag.

Only the canvas bags need to be washed. The energy used will depend upon the size of the load, the temperature of the wash water, the amount of water used for the load, and whether or not the bags are run through the dryer (mine are). I tend to wash fairly large loads, about 4.5 kg at a time. I would use hot water for the wash, since I wash the bags with our napkins and dish towels, and cold for the rinse. I estimate that each wash takes about 0.6 MJ per bag for our front-loading machine (for a top-loading machine, this would be about 0.9 MJ per bag). Drying the bags takes another 1 MJ per bag (I should get a clothes line! Um, and a backyard. Oh, and a sunny, dry climate). I wash the bags about once a month at most, so the yearly energy budget for washing the bags is 19.2 MJ. Washing the canvas bags nearly doubles their energy footprint.

Summing the numbers, in the first year of purchasing a canvas bag, I estimate that I use 49.2 MJ of energy for that bag, versus 0.7 MJ per disposable plastic bag. The canvas bags carry more, and the plastic ones are always double-bagged, so I estimate that each canvas bag replaces three plastic ones on each trip. I go shopping once a week, so one canvas bag replaces 156 plastic ones in a year. Those 156 plastic bags would take 109 MJ of energy — more than twice the energy used by a new canvas bag. After the first year, I would only need to wash the canvas bag, taking 19.2 MJ of energy, so my reusable bags are 5.7 times more energy-efficient.
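For anyone who wants to fiddle with the assumptions, here is the bookkeeping as a short Python sketch (the constants are just my rough estimates from above):

```python
# Bag-energy bookkeeping, using my rough numbers (all energies in MJ).
CANVAS_MAKE, CANVAS_SHIP = 25.0, 5.0      # manufacture + transport, per canvas bag
PLASTIC_MAKE, PLASTIC_SHIP = 0.5, 0.2     # manufacture + transport, per plastic bag
WASH, DRY = 0.6, 1.0                      # per canvas bag, per wash (front-loader)
WASHES_PER_YEAR = 12

wash_energy = WASHES_PER_YEAR * (WASH + DRY)                   # 19.2 MJ/yr
canvas_first_year = CANVAS_MAKE + CANVAS_SHIP + wash_energy    # 49.2 MJ

plastic_per_bag = PLASTIC_MAKE + PLASTIC_SHIP                  # 0.7 MJ
plastic_per_year = 3 * 52 * plastic_per_bag                    # 156 bags -> 109.2 MJ

print(f"Canvas, first year: {canvas_first_year:.1f} MJ")
print(f"Plastic bags replaced per year: {plastic_per_year:.1f} MJ")
print(f"Later years, canvas advantage: {plastic_per_year / wash_energy:.1f}x")
```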

I might be off on my numbers by a factor of a couple, so perhaps in the first year the canvas bags are about equally energy-intensive as the plastic ones. However, in the long run, it appears that canvas bags are much more efficient than plastic ones. This would be especially true if I were to line-dry them, as someone truly eco-conscious would… But I mainly use them because I hate stuffing my closet with plastic bags. It just turns out that it’s also the more energy-efficient choice.

Creating a working scientific theory is hard work. The observations and experiments must support the theory, and new predictions must be made and verified. Once established, more and more is expected of it. On the positive side, the theory might feed technological developments. However, it also might require re-thinking dearly-held philosophical, economic, or political views.

It appears to be much easier to attack a dominant paradigm. One needs to do only two things: to sow doubt about the dominant paradigm, and to establish oneself as the only truthful authority. Here is how it works, ordered in a list of strategies that are increasingly nihilistic:

(1) Amass information that supports one’s view.

Lists of titles of articles and web pages can be constructed to convey a sense that there is a wide body of support for one side of the argument. Do not present the data supporting the opposing view in a similar manner; the relative numbers will probably undermine one’s case. However, journal articles and abstracts often contain impenetrable technical jargon or provocative questions. These are particularly useful, because their language will be read as supporting a hypothesis, even if their results do not.

(2) Search out examples of poor science in support of the dominant paradigm.

Lists of titles and abstracts that support a view can be nicely complemented with catalogs of biased or erroneous articles that were written in support of the opposing view. There should be plenty of these, because poor-quality yet uncontroversial results receive less scrutiny than ones that are obviously wrong. Their existence undermines the other side’s credibility. Finding them also makes it appear that one has conducted an unbiased and exhaustive search of the literature, and found it lacking.

(3) Present all information as equal.

The quality and reliability of the thousands of scientific papers that are published each year varies widely. Many studies are designed poorly, with samples that are too small or experiments that are dominated by noise. They are published anyway, because scientists must publish in order to receive further funding. This can be used to one’s advantage. If data can be separated from its reliability in a presentation, then all results can be portrayed as equivalent, and any conclusion can be molded from it.

(4) Emphasize any doubt in the opposing paradigm.

Scientific researchers always have to qualify their conclusions with statistical statements about the relative certainty of a result. Scientists also have a tendency to follow a description of a result with truisms about how much is left to be learned. This can be used to one’s advantage. The restrained language of many scientists is rhetorically underwhelming when contrasted with bombastic certitude.

(5) Remind everyone that science is not a democracy.

Sure, science is based on a shared reality, in which experiments and observations must be reproduced by many people. However, scientists will readily admit that if information is faulty or incomplete, significant theories might be found to be wrong. Therefore, one can ignore the observations and experiments that underlie the dominant paradigm, and simply point out that it is possible even for large numbers of scientists to be wrong.

(6) Appeal to history’s paradigm shifts.

History abounds with stories of dominant paradigms that were overturned, ushering in new eras of understanding. These can be detached from their historical context, and turned into anecdotes that confirm an eccentric viewpoint. Simply avoid any explanation of why the old paradigm was held to be true, how evidence emerged that was contrary to the fading views, and what ideas motivated those who developed the new paradigm. The important thing is that ideas change, so there is no reason to trust our current knowledge.

(7) Demonize the opponent.

Describe those who hold opposing views in terms that will preclude people from listening to them. If one’s opponents can be described in emotionally-laden language, many people will be less inclined to think critically about the debate at hand. A slur should be matched to the audience at hand: the religious find materialistic atheists repugnant; conservatives find socialists (or even liberals) threatening; liberals despise the self-interest of corporations, particularly big chemical and pharmaceutical companies.

(8) Reveal a conspiracy.

This serves two purposes. First, it counters any information that does not serve one’s rhetorical goal: contrary information can be dismissed as a product of the conspiracy. By extension, it is only possible to trust the information presented by people who oppose the dominant paradigm. Second, by revealing the knowledge of the conspiracy to one’s followers, one compliments their intelligence, because those in on the secret feel that they are exceptionally perceptive.

With these steps, one can generate a raging controversy that will entrance the media and enliven the Internet. Simple, right?

OK, that was just letting off steam. I don’t, in fact, believe that conspiracy theorists and true believers actually check off an eight-point list when they wish to engage in debate. I suspect that these strategies emerge on their own, as people search for ways to convince themselves and others of their ideas.

Indeed, some of the above strategies resemble components of a genuine scientific debate. For instance, I implement a form of the first strategy when I use approximations in my work. I do so without compromising my scientific integrity by performing calculations that show that the things I left out won’t change my conclusions, at least to the accuracy that is required for the problem at hand. Unfortunately, the process of making approximations (or setting aside information that is not directly relevant) requires careful justification, and can invite controversy.

Therefore, I find the parallels between a scientific debate and someone trying to push a pet theory to be vexing. I wonder, how straightforward is it for someone to tell the relative quality of Ned Wright’s cosmology tutorial and the Iron Sun hypothesis? How does one convince non-experts that the risk from vaccines is tiny compared to their enormous benefit, when a Kennedy lends his pen to the other side? How can a lay person know whether to trust New Scientist or Dr. Roy Spencer on global warming?*

Scientific debate has always been difficult. I wanted to blame the Internet, but then I realized that the good old days weren’t much better. In my freshman year in high school, I tried to use a book I found in the library to write a report on the lost city of Atlantis. The book claimed that Atlantis was once a seat of technology, with radios, flying cars, and nuclear power, and I nearly ran with that. Fortunately, Mr. Winters guided me to more sane literature describing the eruption of Mt. Santorini, and I got to learn something useful about volcanism and the early demise of the Minoan civilization.

So what is there to do? I hope that teachers, scientists who write for the public, publications that cover diverse scientific issues, and scholarly organizations will win the debate… Otherwise, I won’t get much joy if nature resolves things the hard way.

(*If you were wondering, I trust Ned Wright, Wired, and New Scientist on the above issues, respectively.)

This week, I have to stop avoiding a task at work that I’ve been dreading. I need to learn the inner workings of some software that our group has been developing and using for nearly 20 years.

I’ve been dreading this project, not just because I dislike working with Fortran 77, but also because the code is a mess. The code started as a routine that solved a set of differential equations. Loops were written surrounding that routine to handle different initial conditions. Other routines were written to bring in information about the systems we model. If statements were placed haphazardly to handle special cases. More code was written to produce plots (using our own home-built plotting package, because this all started before commercial ones were able to produce nice results), and that was stuck in the main file. Finally, someone wrote a set of C++ routines that used the output of the original routines as its input, but embedded the new routines within the original convoluted structure of the code.

This type of thing seems to happen often when physicists, applied mathematicians, and engineers develop software over many years, without any input from anyone who has taken a computer science course.

Unfortunately, there are only two people left in the group who know how the code works, and everyone seems to think that I am one of them. I started documenting the code today, because eventually I want to re-design it. We have a script to call the code, because the input is so complex. There are about 50 parameters that need to be set, and I only know what 10 of them do. The output from the code is a similar mess. I counted about 40 output files, of which I’ve only used one in my own work.

All these inputs and outputs, and all of the if statements and loops within the code, were needed at one point. Some of the plots were used to verify that different sub-routines were working properly. Once they served their purpose, however, they were simply left in the middle of the code. A lot of these features were probably implemented for one of the several dozen analysis projects that our group has carried out. They are unlikely to be used again, but are kept around “just in case.”

In a way, this code reminds me of some aspects of our genetic code. Biologists only know the function of about 2% of the DNA in any given plant or animal. Some of the remaining 98% or so of the DNA might have functions that we simply haven’t identified. Like the plots produced by our Fortran code, the DNA might only produce useful products under very specific circumstances, such as when embryonic growth must be regulated. However, a lot of it might just be doing nothing.

The way DNA is arranged seems similarly haphazard. The human genome is arranged into 23 pairs of chromosomes. The Adder’s Tongue Fern, on the other hand, has something like 700 pairs of chromosomes. Why would a fern, which doesn’t need to move, hunt prey, or perform elaborate mating rituals, need all those chromosomes?

However, all of this makes sense in the context of evolution. New features are added to the genetic code that give it more functions, yet there is no reason to take away pieces of code that no longer do anything. Something similar happened with our Fortran code. Now both look like a mess.

I realize that this is a poor analogy, but deconstructing bad analogies can help clarify how something really works. In the context of biology, evolution occurs through natural selection, when random mutations improve the survivability of a species. This is not how our code evolved. If it did evolve through random mutations and natural selection, we would have millions of copies of the code, only a small number of which would work. I’ve only been able to find a few versions of this code.

Perhaps I can think of our code evolving in a way more similar to that which Lamarck imagined, 60 years before Darwin and Wallace came up with the modern idea. Lamarck believed species evolved for two reasons. First, there was a natural tendency for life to get more complex. Second, an organ would become exaggerated as it was used more (like the giraffe’s neck). Neither of these is true in nature, but at first I thought it might do a fair job of describing our code, which acquired new characteristics whenever we decided that we needed them.

In the end, though, the biological analogy works poorly because our code didn’t acquire changes through accidents, or through its own striving, but because we, the authors, added features. So, instead, why don’t I turn the analogy on its head, and consider whether the evolution of our code can be used to develop a new concept for how a biological system might evolve? It should be amusing at least.

Now, I’m not suggesting intelligent design here. After all, our code is barely designed, let alone intelligently so. I see no point in pushing that analogy. Likewise, I see no reason to force the traditional Western conception of a deity as an infallible, all-seeing creator into explaining the messy world of genomics. Instead of relying on ancient texts and the traditions of our forerunners as the basis for inquiry, why not consider what our DNA might say about theological questions?

My (strained) analogy between our Fortran code and DNA suggests that one might think of a creator as an author that learned as it went along. At first, it created cells, and that was good. However, it then thought that the cell should have little hairs to help secrete important chemicals, and that seemed better. Then, the creator realized the hairs could be used for propulsion, and flagella appeared. This went on, with experiments in multi-celled life, spinal cords, central nervous systems, and so on. Eventually, we ended up with humans, who are spectacularly good at figuring things out, manipulating their environments, and populating the planet. However, humans are far too often dogged with detached retinas, mental illness, back pain, and really difficult childbirth. The planet also ended up with a myriad of evolutionary dead ends, vestigial appendages, and huge blocks of code that weren’t being used (and haven’t been documented).

If only the creator had taken a good software architecture course in college.

I realize that my analogy is still bad. Theologians probably won’t be happy with the fact that my hypothetical creator is constrained to act — and learn — in time. Time, as we understand it, is connected to space in our theory of gravity, and is therefore a property of this Universe. Can a creator even be contained within its creation? This seems like a paradox to my mind, which is untrained in metaphysics. In any case, I don’t believe that evolution requires a creator to intervene at each step; chance mutations and natural selection seem to be effective on their own. I do think, however, that the logic behind using intelligent design to describe biology is just as bad as my initial analogy above. . .

Good design would have produced code that was modular, that had related information organized into structures, and that output a well-documented data stream (rather than 50 random plots). Fortunately, I get another shot at our group’s code, and I hope to instill a more intelligent design.
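To make that concrete, here is a toy sketch of the kind of structure I have in mind (in Python rather than Fortran, and with made-up names that have nothing to do with our actual code):

```python
# A toy sketch: inputs collected in one documented object, the solver kept free
# of plotting and file I/O, and a single self-describing output stream.
from dataclasses import dataclass

@dataclass
class RunParameters:
    """Every input in one place, with a meaning a newcomer can look up."""
    initial_separation: float      # starting value for the integration
    mass_primary: float            # masses of the two components being modeled
    mass_secondary: float
    n_steps: int = 10_000          # number of integration steps

def integrate(params: RunParameters) -> list:
    """Solve the equations; no plotting or special-case I/O buried in here."""
    state = [params.initial_separation]
    for _ in range(params.n_steps):
        state.append(state[-1])    # placeholder for the real update step
    return state

def write_output(state: list, path: str) -> None:
    """One documented output file instead of 40 mystery files."""
    with open(path, "w") as output:
        output.write("# column 1: separation at each integration step\n")
        output.writelines(f"{value}\n" for value in state)
```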

I wanted to get an idea of how much it would cost to undo global warming by engineering the atmosphere. A friend stated emphatically that it was cheap, but I wasn’t so sure. So, I decided to do a back-of-the-envelope calculation. You can find more reliable numbers elsewhere, but I’m posting this anyway, in case someone might find it fun to see what one can do with some numbers found with Google, some arithmetic, and an hour or two of spare time.

The simplest thing one could do to change the climate (aside from burning fossil fuels) is to eject a chemical into the atmosphere that reflects sunlight back into space. The obvious candidate chemical is sulfur dioxide. In 1991, Mount Pinatubo erupted, and put somewhere between 17 million and 20 million tons of sulfur dioxide into the stratosphere. As a result, the Earth cooled by about 0.5 degree Celsius for a few years. To temporarily alleviate global warming of a few degrees, we would need to put a comparable amount of sulfur dioxide (within factors of a few) into the atmosphere each year.

How much would it cost to inject this much stuff into the upper atmosphere? As a rough estimate of the cost, I assumed that it would cost about the same amount as getting things into the air by airplane. A 747 freighter uses about 10,000 kg of fuel to take off and climb to a height of 10,000 feet, at a gross take-off weight of about 360 tons (roughly 360,000 kg), about 140 tons of which are cargo. Fuel costs $0.70 per kg ($2/gallon, at a density of 2.8 kg/gallon). So, I get a cost of $50 per ton to get things into the air, based on the fuel alone. That estimate doesn’t get the material into the stratosphere, but most of the air resistance is near the ground, so this shouldn’t be too far off (feel free to correct me if I’m wrong).

So, I estimate that getting 20 million tons of sulfur dioxide into the stratosphere will cost at least $1 billion a year. My number should be accurate to within an order of magnitude. I don’t know whether it is expensive to make sulfur dioxide (probably not), and the distribution system might have different costs than a simple airliner (these don’t typically fly in the stratosphere, after all).
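Here is the whole estimate in a few lines of Python, using my airliner numbers from above (they are guesses, so treat the output accordingly):

```python
# Rough cost check for lofting Pinatubo-scale SO2, based on my airliner numbers.
fuel_per_takeoff_kg = 10_000          # 747 freighter, climb to ~10,000 ft (my estimate)
cargo_per_flight_tons = 140           # approximate freighter payload
fuel_cost_per_kg = 2.0 / 2.8          # $2/gallon at ~2.8 kg/gallon -> ~$0.71/kg

cost_per_ton = fuel_per_takeoff_kg * fuel_cost_per_kg / cargo_per_flight_tons
so2_tons_per_year = 20e6              # roughly one Pinatubo per year
print(f"Lift cost: ~${cost_per_ton:.0f} per ton")
print(f"Yearly total: ~${cost_per_ton * so2_tons_per_year / 1e9:.1f} billion")
```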

Nonetheless, my estimate isn’t too different from one reported in ArsTechnica (quoting an article in Geophysical Research Letters by Alan Robock and collaborators): they estimate $4 billion if one uses F-15Cs, and $375 million if one uses KC-135 Stratotankers. However, from the description at ArsTechnica (I’d have to use a computer at work to get the actual paper), the authors of the actual study seem to think that we only need to add enough sulfur dioxide to counter the warming trend, so that Mount Pinatubo only needs to be reproduced once every four to eight years. This sounds to me like an underestimate, but either way, the numbers are within my order-of-magnitude tolerance.

My friend suggested that a single developing country might undertake geoengineering on its own. Setting aside the political problems, I also wanted to know: is this amount of money reasonable? I think so. For comparison, the budget of India’s space agency is about $750 million a year (assuming an exchange rate of $1 to 46.7 rupees; a crore is 10 million rupees). Space agencies are a bit of a luxury, so $1 billion does not seem out of reach.

This scheme is also inexpensive compared to the cap-and-trade legislation that is making its way through Congress, which may cost on the order of $150 billion per year, and that’s for the U.S. alone.

So, in terms of cost, my friend is probably right: putting sulfur in the stratosphere is relatively cheap and accessible, especially when compared to changing our economy to use less carbon.

However, I still don’t think that anyone will do it, because fears of the potential side effects will probably dissuade us. The sulfur dioxide from Mount Pinatubo exacerbated the hole in the ozone layer, for instance. However, who am I to divine the heart of man? If you want politics, FiveThirtyEight probably has a better discussion of those issues.

I have just been arguing with myself over an Op-Ed in the New York Times yesterday, which partly described a controversy surrounding a recent trial of an AIDS vaccine in Thailand. The controversy centers on whether the results of the study were statistically significant — that is, whether the authors truly found that the vaccine prevented infections, or whether the apparently-favorable result was a product of chance. Apparently, some outside observers felt that, when the research team first announced their results, they had cherry-picked the statistical test that cast their results in the most favorable light.

At first, I was inclined to support the author of the Op-Ed, Seth Berkley, in defending the study. I think that it goes without saying that finding an AIDS vaccine is worth a significant monetary investment, because it would save many lives.

However, one sentence of the Op-Ed left me taken aback:

This illustrates why the controversy over statistical significance is exaggerated. Whether you consider the first or second analysis, the observed effect of the Thai candidates was either just above or below the level of statistical significance. Statisticians will tell you it is possible to observe an effect and have reason to think it’s real even if it’s not statistically significant. And if you think it’s real, you ought to examine it carefully.

I read that to my wife, and she put words to my own thoughts, saying, “Yes, there is a word for thinking that something is real when it is not statistically significant: bias.”

Statistics is useful because it allows us to quantify how certain we are that something is real, independent of our desire for it to be real. When the result of an experiment falls near the boundary of statistical significance or insignificance, all that one can infer is that one doesn’t have enough information to securely confirm or refute the original hypothesis. If the hypothesis is about something critical, then one should get more data.

The attitude expressed in the above paragraph concerns me, because when faced with a result that lies on a statistical margin, some researchers are tempted to try a number of statistical tests and report only the one in which the result comes out significant. The problem with trying a number of statistical tests is that it increases the chance that the randomness of nature will produce a spurious positive result. The order in which the data from the Thai AIDS vaccine trial were released apparently raised exactly this concern among some scientists.
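To make the danger concrete, here is the standard back-of-the-envelope for the chance that at least one of k tests comes up “significant” at the 5% level purely by luck (this assumes the tests are independent, which re-analyses of the same data are not, so it is only illustrative):

```python
# Family-wise false-positive rate for k independent tests at the 5% level.
for k in (1, 3, 5, 10):
    false_positive = 1 - 0.95 ** k
    print(f"{k:2d} tests -> {false_positive:.0%} chance of a spurious 'significant' result")
```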

So, I took a deep breath, and checked the numbers. The initial press release claimed that there was only a 4% probability that the reduced infection rate in the vaccinated group was a product of chance, whereas the later results gave chances of 8% and 16% that the result was a product of chance.

Apparently, according to the Wall Street Journal, biologists consider a <5% probability that a result could originate in chance to be the key level of significance for judging that a result is probably real. This seems reasonable. For that matter, the <16% chance probability that the result is spurious sounds pretty good to me, especially given that AIDS is a life-or-death situation, and anything that would help save those lives is important to pursue.

I have used a range of probability cutoffs in my studies. I have reported signals that have had a 10% probability of being caused by chance when the existence of the signal was mundane. For example, I called any periodic signal with a <10% chance probability a "detection" when I was writing my thesis on X-ray bursts from neutron stars, because it was already well-established that the phenomena occurred, and I was simply trying to build a sample. However, for a surprising result, my colleagues and I would demand a higher statistical significance. When I discovered a neutron star in a Chandra observation of the young star cluster Westerlund 1, I had to show that there was a <0.1% chance that the neutron star was there by accident before the referees would agree that the neutron star was actually in Westerlund 1 (and even then, I had to explain the statistics to the referees twice).

However, that is enough about me. The point is, there is no "rule" as to what statistical significance level is reliable. One could be fooled by a one-in-a-million result if one is unlucky. Rather, it is a matter of what will convince one's audience given the importance of the result (OK, I suppose that is the bias that upset me earlier), and whether the presentation of the result is faithful to any lingering uncertainties.

Unfortunately, the AIDS result will continue to be controversial, because it is marginal. To quote the Wall Street Journal article,

Observers noted that the result was derived from a small number of actual HIV cases. New infections occurred in 51 of the 8,197 people who got the vaccine, compared with 74 of the 8,198 volunteers who got placebo shots.
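Out of curiosity, here is a rough two-proportion check on those counts. It ignores the trial’s follow-up times and per-protocol adjustments, so it only approximately reproduces the published significance levels:

```python
# Rough two-proportion z-test on the quoted infection counts.
from math import sqrt, erfc

infected_vaccine, n_vaccine = 51, 8197
infected_placebo, n_placebo = 74, 8198

p_vaccine = infected_vaccine / n_vaccine
p_placebo = infected_placebo / n_placebo
p_pooled = (infected_vaccine + infected_placebo) / (n_vaccine + n_placebo)

z = (p_placebo - p_vaccine) / sqrt(
    p_pooled * (1 - p_pooled) * (1 / n_vaccine + 1 / n_placebo))
p_two_sided = erfc(z / sqrt(2))   # two-sided chance probability

print(f"Apparent efficacy: {1 - p_vaccine / p_placebo:.0%}")   # ~31% fewer infections
print(f"Chance probability: {p_two_sided:.3f}")                # roughly 0.04
```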

These numbers dampen my initial enthusiasm, especially given the spectacular successes of past vaccines. I was recently reading about Louis Pasteur’s vaccine work in a collection of biographies of 19th century scientists (The Golden Age of Science, edited by Bessie Zaban Jones). Pasteur developed a number of very successful vaccines, including one for anthrax that reduced the mortality rate of oxen and sheep from the disease from 10% to 1%, and one for rabies that decreased the mortality rate in humans from over 15% to about 1%. These are huge, statistically significant improvements. The initial samples that established the efficacy of these vaccines didn’t need to be large. The first anthrax study was of 50 sheep, half of which were vaccinated; nearly all of the vaccinated sheep survived exposure to anthrax, while the unvaccinated ones did not. It is somewhat disheartening that what gets hyped as progress in medicine now can be so slight in comparison.

It’s a shame that the results came out the way that they did, and I don’t think that Seth Berkley’s Op-Ed will help. Both trigger the destructive instincts of people (like me) who want to see justice meted out to scientists who hype up their work by playing with statistics.

However, one also doesn’t want to reject a promising vaccine just because one resents feeling played. Looking into this, two numbers jumped out at me. First, even the most conservative analysis still leaves only a 16% chance that the apparent benefit is a fluke. That isn’t bad. Second, even if the AIDS vaccine “only” reduces infection rates by 30%, that could prevent up to 1,000,000 infections a year. That would be huge.

I attended a noontime lecture today given by Felice Frankel, describing her work as a science photographer, and her innovative projects on using drawings as a learning tool for scientific concepts. Her work is a great example of how important good visualizations are in helping to convey ideas. She has a new book coming out, No Small Matter: Science on the Nanoscale (with George Whitesides), that looks like it will be interesting. I am putting it on my list of things I’d like to have. The photos she showed in the talk were gorgeous. It looks to be a good study of how visual representations can inform physical intuition.

Ms. Frankel also talked about some of the NSF-funded work she did on a program to examine how student drawings could be used as a learning tool. The Picturing to Learn site has examples of student drawings that were made in response to questions about basic physical concepts. They reveal what the students are thinking, and can be used to highlight parts of the concepts that students missed. If I were teaching, I would probably include some of these ideas in my classes.

One of my last astronomy projects was a study of a star surrounded by the debris of two planets that suffered a catastrophic collision.

Artist’s conception of two planets colliding around the binary suns of BD +20 307.

When asteroids or planets collide, the resulting debris ends up orbiting the star. Our own solar system contains a small amount of this sort of dust, which is produced by collisions between small asteroids and by comets evaporating as they approach the Sun. The dust shines by reflecting sunlight, producing a glow known as the zodiacal light that can sometimes be seen in the morning sky.

Very rarely, collisions between larger asteroids, or even planets, occur around other stars. This creates immense amounts of dust that can absorb light from its star, and re-radiate it at longer wavelengths. Therefore, astronomers identify stars around which collisions have occurred by looking for stars that are unexpectedly bright at infrared wavelengths.
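To see why the infrared is the place to look, here is a toy comparison of a Sun-like photosphere against warm dust that re-radiates just one percent of the starlight. The temperatures, wavelength, and dust fraction are made up for illustration, not taken from any particular system:

```python
# Compare the spectral luminosity of a star (6000 K) with warm dust (300 K)
# that intercepts and re-radiates 1% of the star's light, at 25 microns.
from math import exp

h, k_B, c = 6.626e-34, 1.381e-23, 3.0e8   # SI constants

def planck(nu, T):
    """Blackbody spectral radiance B_nu(T)."""
    return 2 * h * nu**3 / c**2 / (exp(h * nu / (k_B * T)) - 1)

def spectral_luminosity(L_total, T, nu):
    """L_nu of a blackbody with total luminosity L_total (up to a common constant)."""
    return L_total * planck(nu, T) / T**4

nu_25um = c / 25e-6                                   # 25-micron observing frequency
star = spectral_luminosity(1.00, 6000.0, nu_25um)     # the star: all of the luminosity
dust = spectral_luminosity(0.01, 300.0, nu_25um)      # dust: 1% of it, re-emitted at 300 K
print(f"Dust outshines the star at 25 microns by ~{dust / star:.0f}x")
```

Even a one-percent dust fraction swamps the photosphere at mid-infrared wavelengths, which is what makes these debris systems stand out.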

We don’t really know how often big collisions occur between asteroids and planets around stars like the Sun. Until recently, we only had indirect information from our own Solar System to work with. For instance, we think that when the Earth was less than 50 million years old, a Mars-sized planet collided with it, breaking off material from our planet to form the Moon. The dust kicked up by this collision would definitely have been visible to extraterrestrial astronomers.

I am not sure whether later impacts would have produced much dust. There is some geological evidence that, when the Solar System was between about 400 and 700 million years old, the Moon was bombarded by a large number of asteroids, during a period referred to as the late heavy bombardment. Presumably, the other inner planets would have been bombarded as well, although erosion and volcanism would have erased the signs. This could have produced dust that was visible to astronomers near distant stars.

At the Solar System’s current age, large asteroids are believed to hit the Earth only every few hundred million years. These isolated events might (or might not) wreak havoc on Earth, but they are minor on cosmic scales. In the past 20 years, we have also seen Jupiter hit by comets not once, but twice. However, Jupiter is so massive that it attracts those sorts of collisions, and in any case, the impactor simply gets absorbed into the planet, without polluting the surroundings with dust.

So, to get a better handle on how often collisions occur around stars of different ages, astronomers take a large catalog of stars (such as the Henry Draper catalog and its extensions, which contain the 359,083 brightest stars) and see whether any are unexpectedly bright in the infrared. We find that most stars around which collisions have occurred are young (10 to 100 million years old), because the orbits of their planets and asteroids haven’t settled yet. Old stars surrounded by the debris of annihilated planets are reassuringly rare.

Nonetheless, we have some good evidence that big collisions do sometimes happen, even around older stars. Over the summer I noticed a couple of papers. Lisse et al. recently described the composition of the dust around the 12-million-year-old star HD 172555. They infer that a large asteroid recently collided with a rocky planet around that star. Moor et al. report the discovery of four stars surrounded by dust. Three of them are less than 200 million years old (phew), and one is probably about 2 billion years old (HD 169666, uh-oh). Moor et al. do not elaborate much on the origin of the dust in their paper (frankly, in the preprint version, they didn’t do a particularly good job of presenting their conclusions).

My collaborators and I, however, do not suffer from similar restraint. I joined a project led by Ben Zuckerman, to study the dustiest Solar-type star known, BD +20 307. The amount of dust around the star led us to believe that two planets had collided recently, in an event of a magnitude similar to that which formed the Moon. We expected to discover the star was young, but we were in for a surprise. . .

I have written a piece about the result, and placed it here. You can also see our press release, if you want the quick version.

I’ve been reading a bit more about endocrine disruptors, because it turns out that I don’t really know any experts that can do the thinking for me. I belatedly decided to read the paper that Dr. John Myers (who I mentioned a couple posts ago) and collaborators wrote on the possible effects of low doses of endocrine disruptors.

That paper contained a general overview of the problem. It had a few interesting examples of responses that were not monotonic with dose. For instance, hormone-mimicking drugs used to treat cancer (Tamoxifen, for example) can cause the cancer to flare briefly, until the concentration of the drug gets high enough to start inhibiting the cancer. This is an interesting result, and seems to be the substance of Dr. Myers’s claim from my last post on this. However, at first I thought that he was saying that a chemical could have no discernible effect at high doses, but a significant one at lower doses. If that were the case, it would severely impact how one carries out toxicological testing.

So, to get more background, I read a review article cited in Dr. Myers’s paper. It provides more detail about other ways in which large effects can be produced by small exposures.

The most scandalous suggestion in the review article is that some toxicology studies extrapolate low-dose responses from a single high-dose measurement, by assuming that the relationship between dose and effect is linear, with zero response at zero dose. That assumption would be invalid if a response saturates. This occurs for many biological systems, because the available chemical receptors get filled as the hormone concentration increases. I find it rather hard to believe that toxicology studies regularly use this assumption, because, frankly, anyone making such an assumption probably shouldn’t have access to a laboratory.

At the very least, I would imagine that toxicology studies would keep testing lower doses until they were sure the response looked linear, and then start extrapolating. That should generally be safe, I would think. The response of a saturating system will get steeper at lower concentrations, so as long as one extrapolates from more than one data point, one will tend to over-estimate the response at lower doses. Therefore, the extrapolation would be conservative. However, at this point, I am reading the review article in a much less worrisome light than the authors meant it to be taken. I am assuming that if an astronomer finds the authors’ point obvious, biologists would already be accounting for it in their work.
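Here is a toy check of that claim, with an invented saturating dose-response curve and two made-up measurement doses (nothing here comes from the review article):

```python
# Does a straight-line extrapolation from two higher-dose measurements
# over-estimate the response of a saturating system at lower doses?
def response(dose, r_max=1.0, k=1.0):
    """Simple saturating (Michaelis-Menten-like) dose-response curve."""
    return r_max * dose / (dose + k)

d1, d2 = 2.0, 10.0                       # two higher-dose "measurements"
slope = (response(d2) - response(d1)) / (d2 - d1)

for low_dose in (0.5, 0.1, 0.01):
    extrapolated = response(d1) + slope * (low_dose - d1)
    true = response(low_dose)
    print(f"dose {low_dose:5.2f}: true {true:.3f}, linear extrapolation {extrapolated:.3f}")
```

In this toy example the extrapolated response always exceeds the true one at lower doses, which is why I would call the procedure conservative.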

Perhaps the concern is that, even if one finds the linear regime of the high-dose response, other responses will have already saturated at doses orders of magnitude lower. In this case, the predictive power of the high-dose studies would be very limited. However, as I mentioned in my last post on this, I would still expect to see the low-dose effect if I test to the point where the high-dose effect is negligible.

Up to now, I have only described saturating response curves, which are monotonic. The review article also suggests that some responses can be non-monotonic. Unfortunately, the first example given in the review article, on how an estrogenic chemical (diethylstilbestrol, or DES) affects prostate development in mice, seems to be a poor choice to make their point about toxicological studies. At low doses, the chemical causes the prostate to grow. At high doses, the prostate ends up smaller, but only because the chemical produces “gross abnormalities in the reproductive organs.” This is a rather trivial example, like saying that getting hit in the head causes headaches at low doses, but cures them at high doses when it kills you. Clearly, at all doses tested, you can tell this chemical (like getting hit in the head) is generally not a good thing.

Likewise, the review article points out that vitamins must be taken at the right dose. When too little of a vitamin is present, the deficiency causes disease; at high doses, vitamins can become toxic. However, this example has no bearing on low-dose toxicology, because there is no regime in which one is concerned about getting too low a dose of synthetic hormone-mimicking chemicals.

There are a lot of valid points in the review article. I agree that one has to be concerned that multiple chemicals might produce similar responses, so that, for instance, the concentration of any given estrogen-mimicking substance isn’t nearly as important as the sum of the concentrations of all such chemicals. It also seems plausible that very low doses of hormone-mimicking chemicals could affect the development of fetuses, because the chemical signals that allow cells to differentiate can be small. (The book Mutants: On Genetic Variety and the Human Body, by Armand Marie Leroi, gives a very good description of how a fetus develops.)

However, the articles by Dr. Myers et al. and Dr. Welshons et al. are cluttered with examples that don’t seem relevant to the problem at hand. Mostly, I find it hard to believe that it is commonplace for biologists to extrapolate from single data points. If I were in a bad mood, I could dismiss their work as unnecessarily alarmist.

I am trying to reserve judgment, however, because medicine has a lot of mysteries at the moment. I am constantly hearing about rises in the rates of allergies, autism, and cancers. Some of this is probably because the tests to find these conditions are more effective than previously. However, unless it is demonstrated that the increase in rates is caused by people looking harder, it seems important to consider the possibility that other factors of our modern lives might be making them more common.

Fixing the Solar Model

Over the past few years, I have been following a minor controversy about the current model for the Sun. It seems that advances in computational power and numerical techniques have allowed astrophysicists to improve their estimates of the relative amounts of each element in the Sun, and that this has broken our best model for the structure of the Sun.

Active Region 1002 on an Unusually Quiet Sun.

Our understanding of the structures of stars (like the Sun) is detailed, but contains a lot of necessary approximations. One starts with three relationships that are well-understood: (1) the balance between a star’s gravity and the pressure of its plasma (hydrostatic balance), (2) a relationship between density and mass (a continuity equation), and (3) a relationship between density, pressure, and temperature (for instance, the ideal gas law).

Next, one needs to know (4) the rate at which energy is generated by thermonuclear fusion within stars. In the Sun, four hydrogen atoms are built into a helium atom through a process that also produces, for brief periods before they decay, unstable isotopes of beryllium, boron, and lithium (these reactions are important in understanding the Solar neutrino problem). In stars that are more massive and hotter than the Sun, hydrogen is burned into helium through a catalytic cycle involving carbon, nitrogen, and oxygen.

Finally, one needs to model how heat generated in the core of a star reaches its surface. There are two possibilities: (5a) heat is carried by light diffusing outward, or (5b) heat is transported by gas that rises buoyantly, cools, and then descends back into the star (convection). These two effects are important in different regions of each star. The key to determining where each matters is to determine how far light can travel through the star before it interacts with an ion in the plasma (the opacity of the plasma). For many elements, both calculating the opacity from first principles and determining it from experiments turns out to be exceedingly difficult. Moreover, once the calculations are done for each element, one needs to know what fraction of the star is made up of each element. This is also hard to measure with high precision. This is where the controversy starts.
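For reference, these ingredients take the familiar schematic textbook forms (nothing here is specific to the new solar models):

```latex
\frac{dP}{dr} = -\frac{G\,m(r)\,\rho}{r^{2}} \qquad \text{(1) hydrostatic balance}

\frac{dm}{dr} = 4\pi r^{2}\rho \qquad \text{(2) continuity}

P = \frac{\rho\,k_{B}T}{\mu m_{H}} \qquad \text{(3) ideal-gas equation of state}

\frac{dL}{dr} = 4\pi r^{2}\rho\,\epsilon \qquad \text{(4) energy generation}

\frac{dT}{dr} = -\frac{3\,\kappa\rho}{16\,\sigma T^{3}}\,\frac{L(r)}{4\pi r^{2}} \qquad \text{(5a) radiative diffusion}
```

The opacity κ and the composition (through the mean molecular weight μ) are exactly where the new abundance measurements enter.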

Astrophysicists have a lot of data that they have to explain when they try to model the abundances of elements in the Sun. Looking at the atmosphere of the sun, they have to explain the change in apparent brightness of the solar disk as one moves from the center to the edges (limb darkening). They need to explain the depths of the absorption lines from various ions, which is related to both the temperature in the atmosphere and the abundances of each element. They have to explain how the shapes of hydrogen lines vary across the face of the disk, which is related to how the pressure of the atmosphere changes with height. Finally, they need to explain why the surface of the Sun is speckled. This last feature, in particular, has led astrophysicists to develop three-dimensional models of the Solar atmosphere.

While they were at it, several groups incorporated the most recently calculated and measured values for the opacities of each element, and dropped the assumption that everything would locally be in equilibrium.

The new models seem to do the best job yet of explaining the appearance of the surface of the Sun. The models also imply that the abundances of carbon, nitrogen, oxygen, and neon in the Sun are lower than astrophysicists previously thought. The abundances of many of the elements are now more consistent with measurements of elemental abundances in meteorites and in interstellar space.

At first glance, this wouldn’t seem like it could cause much of a problem for modeling the Sun, because each of these elements is about 10,000 times less abundant than hydrogen. However, changing the assumed abundances of these elements changes the assumed density and opacity of the Sun, and it turns out that both of these things affect another important set of calculations.

In the 1960s and 1970s, it was realized that the Sun was continually pulsing at a barely-perceptible level. The pulsations are caused by sound waves propagating through the Sun. The set of characteristic frequencies at which the Sun pulses tells us about its interior, much like how seismic waves on Earth can be used to study the Earth’s core. As a result of their analogous usefulness, the Solar pulsations are called helioseismic (although the physical mechanisms causing “seismic” disturbances on the Earth and the Sun are very different).

Constructing a model of helioseismic waves requires knowing the speed of sound throughout the star. This in turn depends upon the density of the plasma, and the locations at which energy is transported by radiation or convection. It turns out that using the new elemental abundances, the models for the pulsations of the Sun no longer work.
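The connection to density comes through the adiabatic sound speed (the standard expression),

```latex
c_{s} = \sqrt{\frac{\Gamma_{1}\,P}{\rho}}
```

so changes in composition, which alter the mean molecular weight and the opacity, shift the sound-speed profile that the pulsation frequencies probe.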

Currently, the astrophysicists involved are hopeful that other changes can be made to the Solar model so that everything will be consistent again. Over the past few years, refinements of the model have improved its agreement with the helioseismic data. Perhaps this problem will go away with better calculations of the opacities of various elements. However, the changes needed are up to 15% at some temperatures, which might have other observable consequences.

There is even a small chance that the discrepancy could be a piece in a bigger puzzle. Already, the Sun has provided us with a big hint as to what might be found beyond the Standard Model of particle physics. We see fewer neutrinos from the Sun than we expect given the nuclear reactions that are occurring in its core. This is taken as evidence that the electron neutrinos we are looking for are changing into other types of neutrinos, which in turn implies that neutrinos have mass (see the page by the late John Bahcall for more detail). Could the mismatch between helioseismology and the standard solar model that was introduced with the new abundances be another clue? I don’t know. However, the best place to look for progress in science is where there is something significant we can’t yet explain.
