Philosophy

Creating a working scientific theory is hard work. The observations and experiments must support the theory, and new predictions must be made and verified. Once established, more and more is expected of it. On the positive side, the theory might feed technological developments. However, it also might require re-thinking dearly-held philosophical, economic, or political views.

It appears to be much easier to attack a dominant paradigm. One needs to do only two things: sow doubt about the dominant paradigm, and establish oneself as the only truthful authority. Here is how it works, as a list of strategies ordered from least to most nihilistic:

(1) Amass information that supports one’s view.

Lists of titles of articles and web pages can be constructed to convey a sense that there is a wide body of support for one side of the argument. Do not present the data supporting the opposing view in a similar manner; the relative numbers would probably undermine one’s case. Journal articles and abstracts that contain impenetrable technical jargon or provocative questions are particularly useful, because their language will be read as supporting a hypothesis even when their results do not.

(2) Search out examples of poor science in support of the dominant paradigm.

Lists of titles and abstracts that support a view can be nicely complemented with catalogs of biased or erroneous articles that were written in support of the opposing view. There should be plenty of these, because poor-quality yet uncontroversial results receive less scrutiny than ones that are obviously wrong. Their existence undermines the other side’s credibility. Finding them also makes it appear that one has conducted an unbiased and exhaustive search of the literature, and found it lacking.

(3) Present all information as equal.

The quality and reliability of the thousands of scientific papers that are published each year varies widely. Many studies are designed poorly, with samples that are too small or experiments that are dominated by noise. They are published anyway, because scientists must publish in order to receive further funding. This can be used to one’s advantage. If data can be separated from its reliability in a presentation, then all results can be portrayed as equivalent, and any conclusion can be molded from it.

(4) Emphasize any doubt in the opposing paradigm.

Scientific researchers always have to qualify their conclusions with statistical statements about the relative certainty of a result. Scientists also have a tendency to follow a description of a result with truisms about how much is left to be learned. This can be used to one’s advantage: the restrained language of many scientists is rhetorically underwhelming when contrasted with bombastic certitude.

(5) Remind everyone that science is not a democracy.

Sure, science is based on a shared reality, in which experiments and observations must be reproduced by many people. However, scientists will readily admit that if information is faulty or incomplete, significant theories might be found to be wrong. Therefore, one can ignore the observations and experiments that underlie the dominant paradigm, and simply point out that it is possible even for large numbers of scientists to be wrong.

(6) Appeal to history’s paradigm shifts.

History abounds with stories of dominant paradigms that were overturned, ushering in new eras of understanding. These can be detached from their historical context, and turned into anecdotes that confirm an eccentric viewpoint. Simply avoid any explanation of why the old paradigm was held to be true, how evidence emerged that was contrary to the fading views, and what ideas motivated those who developed the new paradigm. The important thing is that ideas change, so there is no reason to trust our current knowledge.

(7) Demonize the opponent.

Describe those who hold opposing views in terms that will preclude people from listening to them. If one’s opponents can be described in emotionally laden language, many people will be less inclined to think critically about the debate at hand. A slur should be matched to the audience at hand: the religious find materialistic atheists repugnant; conservatives find socialists (or even liberals) threatening; liberals despise the self-interest of corporations, particularly big chemical and pharmaceutical companies.

(8) Reveal a conspiracy.

This serves two purposes. First, it counters information that does not serve one’s rhetorical goal: any contrary information can be dismissed as a product of the conspiracy. By extension, it is only possible to trust the information presented by people who oppose the dominant paradigm. Second, by revealing the knowledge of the conspiracy to one’s followers, one compliments their intelligence, because those in on the secret feel that they are exceptionally perceptive.

With these steps, one can generate a raging controversy that will entrance the media and enliven the Internet. Simple, right?

OK, that was just letting off steam. I don’t, in fact, believe that conspiracy theorists and true believers actually check off an eight-point list when they wish to engage in debate. I suspect that these strategies emerge on their own, as people search for ways to convince themselves and others of their ideas.

Indeed, some of the above strategies resemble components of a genuine scientific debate. For instance, I implement a form of the first strategy when I use approximations in my work. I do so without compromising my scientific integrity, because I perform calculations that show that the things I left out won’t change my conclusions, at least to the accuracy required for the problem at hand. Unfortunately, the process of making approximations (or setting aside information that is not directly relevant) requires careful justification, and can invite controversy.

Therefore, I find the parallels between a scientific debate and someone trying to push a pet theory to be vexing. I wonder, how straightforward is it for someone to tell the relative quality of Ned Wright’s cosmology tutorial and the Iron Sun hypothesis? How does one convince non-experts that the risk from vaccines is tiny compared to their enormous benefit, when a Kennedy lends his pen to the other side? How can a lay person know whether to trust New Scientist or Dr. Roy Spencer on global warming?*

Scientific debate has always been difficult. I wanted to blame the Internet, but then I realized that the good old days weren’t much better. In my freshman year of high school, I tried to use a book I found in the library to write a report on the lost city of Atlantis. The book claimed that Atlantis was once a seat of technology, with radios, flying cars, and nuclear power, and I nearly ran with that. Fortunately, Mr. Winters guided me to more sane literature describing the eruption of Santorini, and I got to learn something useful about volcanism and the early demise of the Minoan civilization.

So what is there to do? I hope that teachers, scientists who write for the public, publications that cover diverse scientific issues, and scholarly organizations will win the debate… Otherwise, I won’t get much joy if nature resolves things the hard way.

(*If you were wondering, I trust Ned Wright, Wired, and New Scientist on the above issues, respectively.)

This week, I have to stop avoiding a task at work that I’ve been dreading. I need to learn the inner workings of some software that our group has been developing and using for nearly 20 years.

I’ve been dreading this project, not just because I dislike working with Fortran 77, but also because the code is a mess. It started as a routine that solved a set of differential equations. Loops were written around that routine to handle different initial conditions. Other routines were written to bring in information about the systems we model. If statements were placed haphazardly to handle special cases. More code was written to produce plots (using our own home-built plotting package, because all this started before commercial packages could produce nice results), and that was stuck in the main file. Finally, someone wrote a set of C++ routines that used the output of the original routines as their input, but embedded the new routines within the original convoluted structure of the code.
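
To make the shape of the problem concrete, here is a minimal sketch of the kind of layout I am describing. Everything in it is invented (it is not our actual code, and the routine names are made up); the point is just that input handling, special cases, the solver, and plotting all share one main program:

```fortran
C     HYPOTHETICAL SKETCH ONLY, NOT OUR ACTUAL CODE: INPUT HANDLING,
C     SOLVER LOOPS, SPECIAL CASES, AND PLOTTING ALL TANGLED TOGETHER
C     IN ONE MAIN PROGRAM.
      PROGRAM LEGACY
      REAL Y(10), P(50)
      INTEGER I, N
C     SET UP THE (MOSTLY UNDOCUMENTED) PARAMETER LIST
      N = 3
      DO 10 I = 1, 50
         P(I) = 1.0
   10 CONTINUE
C     LOOP OVER INITIAL CONDITIONS, BOLTED ON AROUND THE SOLVER
      DO 30 I = 1, N
         CALL GETSYS(I, Y)
C        A SPECIAL CASE PATCHED IN FOR ONE OLD ANALYSIS PROJECT
         IF (I .EQ. 2) Y(1) = 0.0
         CALL SOLVE(Y, P)
C        A DEBUGGING PLOT LEFT IN AFTER IT SERVED ITS PURPOSE
         CALL PLTXY(Y)
C        ONE OF THE MANY OUTPUT STREAMS
         WRITE (*,*) 'SYSTEM', I, ' RESULT', Y(1)
   30 CONTINUE
      END

C     STUBS STANDING IN FOR THE REAL ROUTINES
      SUBROUTINE GETSYS(I, Y)
      INTEGER I, J
      REAL Y(10)
      DO 10 J = 1, 10
         Y(J) = REAL(I)
   10 CONTINUE
      END

      SUBROUTINE SOLVE(Y, P)
      REAL Y(10), P(50)
      INTEGER J
C     STAND-IN FOR THE DIFFERENTIAL EQUATION INTEGRATOR
      DO 10 J = 1, 10
         Y(J) = Y(J)*P(1)
   10 CONTINUE
      END

      SUBROUTINE PLTXY(Y)
      REAL Y(10)
C     THE HOME-BUILT PLOTTING PACKAGE WOULD BE CALLED HERE
      END
```

Every piece of this made sense to whoever added it; the mess is in the accumulation.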

This type of thing seems to happen often when physicists, applied mathematicians, and engineers develop software over many years, without any input from anyone who has taken a computer science course.

Unfortunately, there are only two people left in the group who know how the code works, and everyone seems to think that I am one of them. I started documenting the code today, because eventually I want to re-design it. We have a script to call the code, because the input is so complex. There are about 50 parameters that need to be set, and I only know what 10 of them do. The output from the code is a similar mess. I counted about 40 output files, of which I’ve only used one in my own work.

All these inputs and outputs, and all of the if statements and loops within the code, were needed at one point. Some of the plots were used to verify that different sub-routines were working properly. Once they served their purpose, however, they were simply left in the middle of the code. A lot of these features were probably implemented for one of the several dozen analysis projects that our group has carried out. They are unlikely to be used again, but are kept around “just in case.”

In a way, this code reminds me of some aspects of our genetic code. Biologists only know the function of about 2% of the DNA in any given plant or animal. Some of the remaining 98% or so of the DNA might have functions that we simply haven’t identified. Like the plots produced by our Fortran code, the DNA might only produce useful products under very specific circumstances, such as when embryonic growth must be regulated. However, a lot of it might just be doing nothing.

The way DNA is arranged seems similarly haphazard. The human genome is arranged into 23 pairs of chromosomes. The Adder’s Tongue Fern, on the other hand, has something like 700 pairs of chromosomes. Why would a fern, which doesn’t need to move, hunt prey, or perform elaborate mating rituals, need all those chromosomes?

However, all of this makes sense in the context of evolution. New features are added to the genetic code that give it more functions, yet there is no reason to take away pieces of code that no longer do anything. Something similar happened with our Fortran code. Now both look like a mess.

I realize that this is a poor analogy, but deconstructing bad analogies can help clarify how something really works. In the context of biology, evolution occurs through natural selection, when random mutations improve the survivability of a species. This is not how our code evolved. If it did evolve through random mutations and natural selection, we would have millions of copies of the code, only a small number of which would work. I’ve only been able to find a few versions of this code.
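
To see why, here is a toy sketch of what mutation-and-selection actually entails. Everything in it is invented for illustration and has nothing to do with our group’s software: a population of copies, random changes to each, and survival only for the copies that work best.

```fortran
! A toy of evolution by mutation and selection, for illustration only.
! Each "copy of the code" is a bit string; a copy "works" to the
! extent that its bits match a fixed target.
program toy_selection
  implicit none
  integer, parameter :: npop = 1000, nbit = 32, ngen = 100
  logical :: pop(npop, nbit), goal(nbit)
  real    :: r
  integer :: i, j, gen, best

  call random_seed()
  goal = .true.                  ! the "working" version
  pop  = .false.                 ! every copy starts out broken

  do gen = 1, ngen
     ! random mutation: flip one randomly chosen bit in each copy
     do i = 1, npop
        call random_number(r)
        j = 1 + int(r*nbit)
        pop(i, j) = .not. pop(i, j)
     end do
     ! selection: find the copy that works best ...
     best = 1
     do i = 2, npop
        if (count(pop(i,:) .eqv. goal) > count(pop(best,:) .eqv. goal)) then
           best = i
        end if
     end do
     ! ... and let it replace the entire population
     do i = 1, npop
        pop(i,:) = pop(best,:)
     end do
  end do

  print *, 'working bits after', ngen, 'generations:', &
       count(pop(1,:) .eqv. goal), 'of', nbit
end program toy_selection
```

Nothing like this happened to our code: there was no population of variants and no culling of failures; every change was put in deliberately by an author.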

Perhaps I can think of our code evolving in a way more similar to what Lamarck imagined, 60 years before Darwin and Wallace came up with the modern idea. Lamarck believed species evolved for two reasons. First, there was a natural tendency for life to become more complex. Second, an organ would become exaggerated as it was used more (like the giraffe’s neck). Neither of these is true in nature, but at first I thought the picture might do a fair job of describing our code, which acquired new characteristics whenever we decided that we needed them.

In the end, though, the biological analogy works poorly because our code didn’t acquire changes through accidents, or through its own striving, but because we, the authors, added features. So, instead, why don’t I turn the analogy on its head, and consider whether the evolution of our code can be used to develop a new concept for how a biological system might evolve? It should be amusing at least.

Now, I’m not suggesting intelligent design here. After all, our code is barely designed, let alone intelligently so. I see no point in pushing that analogy. Likewise, I see no reason to force the traditional Western conception of a deity as an infallible, all-seeing creator into explaining the messy world of genomics. Instead of relying on ancient texts and the traditions of our forerunners as the basis for inquiry, why not consider what our DNA might say about theological questions?

My (strained) analogy between our Fortran code and DNA suggests that one might think of a creator as an author that learned as it went along. At first, it created cells, and that was good. However, it then thought that the cells should have little hairs to help secrete important chemicals, and that seemed better. Then, the creator realized the hairs could be used for propulsion, and flagella appeared. This went on, with experiments in multi-celled life, spinal cords, central nervous systems, and so on. Eventually, we ended up with humans, who are spectacularly good at figuring things out, manipulating their environments, and populating the planet. However, humans are far too often dogged by detached retinas, mental illness, back pain, and really difficult childbirth. The planet also ended up with a myriad of evolutionary dead ends, vestigial appendages, and huge blocks of code that aren’t being used (and haven’t been documented).

If only the creator had taken a good software architecture course in college.

I realize that my analogy is still bad. Theologians probably won’t be happy with the fact that my hypothetical creator is constrained to act — and learn — in time. Time, as we understand it, is connected to space in our theory of gravity, and is therefore a property of this Universe. Can a creator even be contained within its creation? This seems like a paradox to my mind, which is untrained in metaphysics. In any case, I don’t believe that evolution requires a creator to intervene at each step; chance mutations and natural selection seem to be effective on their own. I do think, however, that the logic behind using intelligent design to describe biology is just as bad as my initial analogy above. . .

Good design would have produced code that was modular, that had related information organized into structures, and that output a well-documented data stream (rather than 50 random plots). Fortunately, I get another shot at our group’s code, and I hope to instill a more intelligent design.
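
As a sketch of what I am aiming for (with hypothetical names, and in modern Fortran rather than Fortran 77; the real redesign will be considerably larger), the parameters would live in one documented structure, the solver would be isolated from all I/O, and the results would leave through a single well-defined routine:

```fortran
! Hypothetical sketch of the redesign, not the real code: parameters in
! one documented derived type, the solver isolated from I/O, and a
! single, well-defined output routine.
module solver_mod
  implicit none

  type :: run_params
     real    :: dt    = 0.01   ! time step
     integer :: nstep = 1000   ! number of integration steps
  end type run_params

contains

  ! Pure numerical work: no plotting and no file handling in here.
  subroutine solve(p, y)
    type(run_params), intent(in)    :: p
    real,             intent(inout) :: y(:)
    integer :: i
    do i = 1, p%nstep
       y = y - p%dt*y          ! stand-in for the real ODE update
    end do
  end subroutine solve

  ! One documented output stream instead of dozens of ad hoc files.
  subroutine write_results(unit, y)
    integer, intent(in) :: unit
    real,    intent(in) :: y(:)
    write (unit, '(es12.4)') y
  end subroutine write_results

end module solver_mod

program redesigned
  use solver_mod
  implicit none
  type(run_params) :: p
  real :: y(4) = (/ 1.0, 2.0, 3.0, 4.0 /)

  call solve(p, y)
  call write_results(6, y)     ! unit 6 is standard output
end program redesigned
```

The point of the separation is that a debugging plot, or a new analysis project’s output, can then be added, and later removed, without touching the numerics.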

I attended a noontime lecture today given by Felice Frankel, describing her work as a science photographer and her innovative projects on using drawings as a learning tool for scientific concepts. Her work is a great example of how important good visualizations are in helping to convey ideas. She has a new book coming out with George Whitesides, No Small Matter: Science on the Nanoscale, that looks like it will be interesting. I am putting it on my list of things I’d like to have. The photos she showed in the talk were gorgeous. It looks to be a good study of how visual representations can inform physical intuition.

Ms. Frankel also talked about some of the NSF-funded work she did on a program to examine how student drawings could be used as a learning tool. The Picturing to Learn site has examples of student drawings that were made in response to questions about basic physical concepts. They reveal what the students are thinking, and can be used to highlight the parts of a concept that students missed. If I were teaching, I would probably include some of these ideas in my classes.

I regularly notice people complaining about conformity in science. Generally, this complaint accompanies a narrative about how someone’s pet theory is ignored by scientists, who are inevitably accused of being slaves to government funding. From my experience, I feel that these complaints are unfounded. My thinking on this was influenced about 10 years ago by reading Schroedinger’s book Nature and the Greeks, and a book by Bertrand Russell (I wish I could remember which one). They can do the subject more justice than I. Nonetheless, I feel there is still a place for someone to defend conformity in science.

It is true that the big breakthroughs that we hear about in science often have at their center some giant of intellect and original thinking. When many of us think of scientists, we think of Newton, Darwin, Einstein, Feynman, and (maybe) Watson and Crick. The problem is, the narratives commonly associated with these scientists often ignore the other great minds that surrounded them. Newton corresponded regularly with Leibniz and with mutual friends, which almost certainly influenced their concurrent development of calculus. Einstein was surrounded by physicists who recognized that Newton’s theories were incompatible with some key observations, and by mathematicians who were able to introduce him to the equations he needed to translate his ideas into predictions.

Most striking is the discovery of the structure of DNA. We all probably know that Watson and Crick got the credit, but how many of us know the name of the woman whose experimental work inspired the Nobel Prize winners’ model?

Science certainly has its heroes, but scientific progress is by no means driven by lone geniuses. The people above were geniuses, no doubt, but they were part of a broad and vibrant scientific community. Moreover, their brilliance was matched equally by their ability to convince their peers that they were correct.

The thing is, science is a collective enterprise. A scientific theory must produce predictions that can be verified by independent observers. This requires that other scientists be willing to accept (tentatively) some paradigm so that they can perform experiments.

Indeed, developing a new theory requires that a scientist understands where the old theory fails. Einstein and his peers had to understand that Newton’s theory of gravity worked well in many situations, predicting the trajectories of cannonballs, and the motions of the planets beyond Venus (Mercury, on the other hand, was one of the failures). Therefore, a key part of the success of Einstein’s theory was explaining how, in most cases, bodies could behave in the way predicted by Newton. Given the success that our major scientific theories have had in making predictions and producing technology, I am convinced that future breakthroughs will emerge by using those theories as working models, and continually testing their bounds.
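
To give one concrete example of what that “explaining” looks like (my gloss, not part of the original argument): in the limit of weak gravitational fields and slow motion, the equations of general relativity reduce to Newton’s, with the time-time component of the metric carrying the Newtonian potential $\Phi$:

$$
g_{00} \approx -\left(1 + \frac{2\Phi}{c^2}\right),
\qquad
\frac{d^2 x^i}{dt^2} \approx -\frac{\partial \Phi}{\partial x^i} .
$$

Everyday motion therefore looks Newtonian, while the small leftover effects, like the extra precession of Mercury’s orbit, are what the new theory gets to claim.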

Don’t get me wrong. I love stories of big, dramatic breakthroughs. I would love to overturn preconceived notions, and find my way into the pantheon of the world’s great geniuses. I would also take great pleasure in hearing about ideas that arise and take us by surprise.

However, scientists have learned an enormous amount about the natural world in the last five centuries, and currently tens of thousands of people are working in physics, biology, chemistry and engineering. With that in mind, it seems likely that new knowledge will appear in smaller increments than it did in our heroic past. To use a quote also used by Newton, we are standing on the shoulders of giants. Only now, there have been even more giants. Perhaps it is time to let go a bit of the dream that a super-human will come to deliver us our next big breakthrough.

Given the challenges we face as a society, should we put our resources in “big ideas” without good reason to think they will pan out? I think the status quo is working pretty well, because it acknowledges the collective nature of science (and presumably reality). Scientists get funded when they can make it seem plausible to other scientists that their ideas will bear some fruit. If a scientist lacks the perspective to explain why the old work was inadequate, and lacks the skill to convince others their new ideas are worth pursuing, I would claim that giving them money is little better than putting a load of chips on the green slot of a roulette wheel.

In other words, the geniuses have to understand that there are many things to which they should conform.

I’ve been thinking for a while about whether scientists are doing a good job of presenting their work to the public. I have done a little of this myself, preparing a couple of press releases a year while I was an astronomer, some of which were picked up by magazines like Sky & Telescope. I even did a radio interview on Kirsten Sanford’s This Week in Science show. That was fun, but with my new job, I need to find other ways to contribute.

So, I came up with this idea of trying to explain why scientists believe their theories. It is certainly possible to find plenty of information on evolution, climate change, general relativity, and particle physics on the Internet (just head to Wikipedia; it’s one of the first places I go). However, what I have not found is a site that summarizes a wide range of scientific theories in a uniform way, so that people can compare them side-by-side.

Therefore, I have decided to start a site of Science Tracts (taking a cue from the religious) that concisely explain a number of scientific theories from different disciplines. Each page will explain what the theory is, what evidence led scientists to develop it, what successful predictions it has made, what technology relies on it (if any), its connections to other theories, and what big areas of uncertainty remain. I think it is illustrative to compare, for instance, general relativity and evolution, and to see how well each works as a theory.

Unfortunately, the Pew survey I referred to in my last post also revealed that only 13% of the general public visits Internet sites to learn about science. So, I might be talking to myself. However, I plan to leave space for comments, questions, and moderated debate. Start with this blog post if you’d like, because it may take me some time to figure out how Web 2.0 works. . .

Last week, I read that the Pew Research Center had released the results of surveys of scientists and of the general population. The surveys cover a lot of ground. Scientists were asked about the challenges they face in their careers. The broader public was asked whether they follow scientific research through television stories, magazines, or the Internet, and they were even quizzed on science questions. Both groups were asked about their views on the importance of science, and their opinions about such hot-button issues as stem cell research, evolution, and global warming.

I was pleased to see that most Americans surveyed held science and scientists in high regard. On the other hand, The New York Times, in print and on the TierneyLab blog, chose to emphasize that the general public doesn’t share scientists’ beliefs in evolution, or in human-caused global warming (at least the general public does agree that the Earth is warming). I love a good argument, so I felt I had to join the fray.

I think that there are two big reasons for this gap. The first is that the implications of evolution and global warming run up against some dearly-held beliefs. Evolution undermines the notion that man is unique among god’s creations. Global warming suggests that, just by living our lives, we might be damaging our homes. If you were raised to believe that we are blessed, these ideas are rather hard to adjust to.

At the same time, it is clear to everyone that the details of both evolution and global warming are complex, and not terribly well-understood. The details do not dissuade scientists, because the underlying principles of global warming, evolution, and other scientific theories have astounding explanatory power, and have made important predictions that have been verified by observation and experiment. With this in mind, scientists look at the details and see the possibility of solving important puzzles.

However, I suspect that the general public may want their theories to work like their cars and computers: without a bunch of niggling troubles and inconveniences. This leaves the door open for doubt about the entire theory. It becomes possible for people to go to the popular press with sciency-sounding criticisms of how this-or-that is not understood, and distract people from the sound fundamentals of the theories. The fact that the Pew survey reveals that the general public trusts scientists despite all this is reassuring.

Nonetheless, I think that more can be done to explain what scientists are thinking. So, my next post will describe what I would like to do to shrink this gap, if only a little bit.

I found an article on astro-ph last week in which Jean Schneider, at the Paris Observatory, asserts that the question of whether there is intelligent life elsewhere in the Universe was, until recently, only asked in Western cultures. Of course, all cultures have had their own notions of deities and spirits. However, those aren’t what the article is about. Deities and spirits are something apart from us; they are part of another realm. What Dr. Schneider is interested in is the concept that life like us exists in space.

The article asserts that speculation about extraterrestrial life stretches back to the ancient Greeks in Western literature (here, Arabic countries are included), but that Dr. Schneider has found no reference to aliens outside the West from before 1900. If anyone reading this knows of a counter-example (aliens, not deities or spirits), please do leave something in the comments.

Dr. Schneider then goes on to propose an explanation for why only those cultures that were influenced by the Greeks have wondered about aliens. In order to consider aliens, it seems logical that one has to have a concept of space as something fundamentally the same as the Earth. The Greeks had this concept, thanks to Euclidean geometry, in which all points in an abstract space are considered equal. In contrast, non-Western cultures thought of the sky and the stars as being part of a different realm, a heavenly one where deities and spirits lived. They didn’t ask if there were aliens, because in their thinking, everything apart from Earth was different.

It seems like an interesting example of how culture and language can influence science, by defining which subjects are thought about. My own opinion is that science will inevitably correct its own blind spots, but then again. . . how can I know?
