I was listening to NPR’s Living on Earth this morning, and my curiosity was piqued by a piece on endocrine disrupting compounds. Over the past 12 years or so, scientists have been suggesting that some chemicals that are found to be harmless at high doses might have important biological effects at low doses. This idea is important, because widely-used chemicals such as bisphenol-A (used in food containers) are thought to be harmful to human metabolism. Bisphenol-A has been banned in Europe and Canada, but is still widely used in the U.S. It is a serious issue, because some scientists believe the subtle effects of these compounds could be contributing to the rise in allergies and some cancers in the U.S.

The piece was an interview with Dr. J. Peterson Myers, and it answered the question of how endocrine disrupting compounds act: they mimic hormones and interfere with the ability of genes to produce proteins. The part that got me curious was the following claim about their effects:

I should emphasize that it’s not even close to brand spanking new. It’s solid in endocrinology. This is something that physicians have to structure their drug deliveries around. They know that at low doses you can cause effects that don’t happen at high doses. In fact, you can cause the opposite effect.

This seems to imply that the low-dose effect disappears at high doses.

With most toxins, there is a threshold level at which they begin to have harmful effects. For instance, at low doses, botulinum toxin merely paralyzes muscles locally in the body. As Botox, the toxin is used in cosmetic treatments to smooth wrinkles, and prevent the facial expression of emotion (okay, the second effect may not be the intended one). However, at high doses, the toxin causes botulism, a potentially-fatal form of food poisoning. Examples like this inspired the aphorism, “The dose makes the poison.”

The example given in the radio piece of an endocrine disruptor that acts differently at low and high doses was Tamoxifen. Tamoxifen is used to prevent the recurrence of breast cancer. At high doses, the drug, and by-products of the drug made by the body, bind to estrogen receptors in the breast. Breast cancer tumor cells require estrogen to grow, so Tamoxifen prevents the growth of tumor cells. However, it turns out that in the uterus, Tamoxifen acts more like estrogen, and can encourage the growth of uterine cancer.

Unfortunately, the descriptions of Tamoxifen’s effects that I have been able to find (note that I am not a biologist, and don’t really know how to search the relevant literature) don’t seem to address my original question. Tamoxifen, at the same dose, acts differently on estrogen receptors in different parts of the body. It also acts differently on various hormonal receptors in fish. However, I was not able to find an explanation of how it might be harmful at low doses, but beneficial or benign at high doses.

I can construct a thought experiment, in which a chemical interacts badly with a receptor at low doses, but has a beneficial reaction at a higher dose. The bad reaction at low doses would be a side-effect of its use at higher doses. However, I can’t figure out how to get rid of the bad interaction from the low dose. Shouldn’t it always be identifiable?

Perhaps I am missing the point. It is possible that the problem is that the EPA doesn’t look for the kind of detrimental effects that are characteristic of endocrine disrupting compounds, including elevated long-term risks of cancer. In that case, complaining about the doses tested misses the point. The real problem could be that the testing methods aren’t designed to catch subtle-enough effects.

So, the cumulative effects on an individual of exposure to many different endocrine disrupting compounds might be important. I am going to try to find an expert to explain this better to me. Take this post as the start of a teaching moment: one way or another, it is an example of a “near-miss.”

A link on BoingBoing to a page describing how to make a poster for a conference made me realize that my own talks and posters were no longer easily available on-line. I know that other people’s presentations have influenced my own, and have always been glad when people shared their work. So, I have remedied that with a page of talks and posters.

It also got me thinking again about what goes into a good presentation. Reading the advice on posters, I found a lot of good ideas. However, I think that their recommendation of putting at most 800 words on a poster makes a poster that is still too wordy. In the biggest conferences, people will have hundreds of posters to look at per day, and only an hour or so to do so. I realized that I ended up standing next to my poster most of the time, and the people who were interested in it would ask me to explain it. Very few people would stand and read all the text, and when they did, it was awkward to be standing there.

So, I started to reduce the number of words on my posters, keeping just the points advertising my conclusions. I made this poster to look a bit like a tabloid, and I think it worked well. My favorite, though, is an attempt to summarize several papers on X-ray emission from the Galactic Center with haiku.

My best talks were given when I followed some of the advice of Patrick Winston on How to Speak. His big four are: reminding the audience several times about what points you are trying to convey by referring back to an outline; using verbal punctuation (such as enumeration) to provide boundaries between points and allow the listener to get on board; teaching difficult concepts by talking about things that are close, but not quite, what you are trying to describe; and asking answerable questions to engage the audience.
If I follow even two of the four, I tend to get positive responses from the audience.

I’ve also incorporated advice from other researchers, including using the titles of slides to summarize my points, putting lists of collaborators at the ends of talks where they don’t derail the introduction, and, most importantly, making sure that only one or two thoughts are placed on each slide (never, ever split a slide into four quarters containing separate things to discuss).

I am particularly proud of a few talks. One of my first PowerPoint talks summarized my thesis, and I received a number of compliments on it. I made the talk shortly after watching The Shining, so while the talk was mostly black text on a white background (the only image I had to work with was a simulation I made), the transitions were white text on a black background. The dramatic transitions helped people follow a talk that otherwise had a lot of rather dull graphs.

I also enjoyed giving my most recent talks on the Galactic center, because I had a lot of new images to work with that lent themselves to an attractive visual format. I also got to tour the Netherlands giving my talks.

Although I have improved my presentations through trial-and-error, there is still a lot I need to work on. In my new position, the group really emphasizes giving clear, concise, well-rehearsed presentations. I worked harder on the one 15 minute talk I’ve given so far than on any of my astronomy talks. The group’s work pays off when you compare our talks to the disorganized presentations that are usually churned out by the aerospace industry.

Which brings me around to the books I’ve been reading by Edward Tufte. Among other things, he’s picked apart the ways in which bad presentation contributed to both Space Shuttle disasters (see also Visual Explanations). I like them so far, and I’m going to try to put some of his advice into practice as I develop this site.

According to a press release from GM, estimates based on preliminary EPA standards have the Volt getting 230 miles per gallon in city driving. I’m not sure what these “preliminary standards” are, but after running some numbers, I have a hunch (and I’m not the only one): I think the EPA is only counting gasoline burned, and not the electricity the car uses for short city trips. Here’s why.

The LA Times article states that the Volt uses as little as 25 kWh per 100 miles of city driving. This number really impresses me — that’s probably significantly less than my 2003 Prius uses (see below).

However, I have to ask: how much gasoline would it take to produce that 25 kWh of electricity? The energy density of gasoline is about 37 kWh per gallon. So, if the Volt only uses electricity, and if electricity could be produced from gasoline with 100% efficiency, the Volt would get a mileage of at most about 148 miles per gallon.

Of course, it is not possible to convert gasoline to electricity with 100% efficiency. I found this electrical plant that converts fossil fuel (natural gas) to electricity with an efficiency of 58%. A typical efficiency is closer to 40%. Electrical transmission losses dissipate another 5-10% of the power. So, if, optimistically, my electricity is made from fossil fuel at about 45% efficiency, the actual mileage would be closer to 67 miles per gallon.
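
For anyone who wants to check me, here is a minimal sketch of that arithmetic. The 25 kWh and 37 kWh figures are the ones quoted above; the 45% grid efficiency is my own optimistic assumption.

```python
# Back-of-the-envelope mileage for an electric car, counting the
# gasoline-equivalent energy used to generate its electricity.

KWH_PER_100_MILES = 25.0   # Volt city driving, per the LA Times figure
KWH_PER_GALLON = 37.0      # approximate energy content of gasoline
GRID_EFFICIENCY = 0.45     # assumed generation x transmission efficiency

def effective_mpg(kwh_per_100_miles, kwh_per_gallon, efficiency=1.0):
    """Miles per gallon of fuel burned to produce the electricity."""
    gallons_per_100_miles = kwh_per_100_miles / (kwh_per_gallon * efficiency)
    return 100.0 / gallons_per_100_miles

print(effective_mpg(KWH_PER_100_MILES, KWH_PER_GALLON))                   # ~148 mpg, ideal case
print(effective_mpg(KWH_PER_100_MILES, KWH_PER_GALLON, GRID_EFFICIENCY))  # ~67 mpg, more realistic
```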

I do think that if we must use cars, electric cars are the way to go. Hopefully, in the future, more of our electricity will be made from wind and solar farms, nuclear power, and maybe even fusion reactors (I can dream). In that case, the electricity comes with much less pollution, and the Volt is a winner.

Unfortunately, in the near term, when our electricity is made from coal and natural gas, the preliminary EPA mileage estimates for the Volt have little meaning. The 230 miles per gallon number is laughable. The Volt will get better mileage than a hybrid or diesel sedan, but it is not yet a big gain.

Instead, if you really want to make a difference, think in terms of person-miles per gallon (or gallons per mile per person), and carpool!

I regularly notice people complaining about conformity in science. Generally, this complaint accompanies a narrative about how someone’s pet theory is ignored by scientists, who are inevitably accused of being slaves to government funding. From my experience, I feel that these complaints are unfounded. My thinking on this was influenced about 10 years ago by reading Schroedinger’s book Nature and the Greeks, and a book by Bertrand Russell (I wish I could remember which one). They can do the subject more justice than I. Nonetheless, I feel there is still a place for someone to defend conformity in science.

It is true that the big breakthroughs that we hear about in science often have at their center some giant of intellect and original thinking. When many of us think of scientists, we think of Newton, Darwin, Einstein, Feynman, and (maybe) Watson and Crick. The problem is, the narratives commonly associated with these scientists often ignore the other great minds that surrounded them. Newton corresponded regularly with Leibniz and with mutual friends, which almost certainly influenced their concurrent development of calculus. Einstein was surrounded by other physicists who recognized that Newton’s theories were incompatible with some key observations, and by mathematicians who were able to introduce Einstein to the equations that he needed to translate his ideas into predictions.

Most striking is the discovery of DNA. We all probably know that Watson and Crick got the credit, but how many of us know the name of the woman whose experimental work inspired the Nobel Prize winners’ model?

Science certainly has its heroes, but scientific progress is by no means driven by lone geniuses. The people above were geniuses, no doubt, but they were part of a broad and vibrant scientific community. Moreover, their brilliance was matched equally by their ability to convince their peers that they were correct.

The thing is, science is a collective enterprise. A scientific theory must produce predictions that can be verified by independent observers. This requires that other scientists be willing to accept (tentatively) some paradigm so that they can perform experiments.

Indeed, developing a new theory requires that a scientist understands where the old theory fails. Einstein and his peers had to understand that Newton’s theory of gravity worked well in many situations, predicting the trajectories of cannonballs, and the motions of the planets beyond Venus (Mercury, on the other hand, was one of the failures). Therefore, a key part of the success of Einstein’s theory was explaining how, in most cases, bodies could behave in the way predicted by Newton. Given the success that our major scientific theories have had in making predictions and producing technology, I am convinced that future breakthroughs will emerge by using those theories as working models, and continually testing their bounds.

Don’t get me wrong. I love stories of big, dramatic breakthroughs. I would love to overturn preconceived notions, and find my way into the pantheon of the world’s great geniuses. I also would take great pleasure in hearing about the ideas that will arise and take us by surprise.

However, scientists have learned an enormous amount about the natural world in the last five centuries, and currently tens of thousands of people are working in physics, biology, chemistry and engineering. With that in mind, it seems likely that new knowledge will appear in smaller increments than it did in our heroic past. To use a quote also used by Newton, we are standing on the shoulders of giants. Only now, there have been even more giants. Perhaps it is time to let go a bit of the dream that a super-human will come to deliver us our next big breakthrough.

Given the challenges we face as a society, should we put our resources in “big ideas” without good reason to think they will pan out? I think the status quo is working pretty well, because it acknowledges the collective nature of science (and presumably reality). Scientists get funded when they can make it seem plausible to other scientists that their ideas will bear some fruit. If a scientist lacks the perspective to explain why the old work was inadequate, and lacks the skill to convince others their new ideas are worth pursuing, I would claim that giving them money is little better than putting a load of chips on the green slot of a roulette wheel.

In other words, the geniuses have to understand that there are many things to which they should conform.

The recent Supreme Court decision on Ricci v. DeStefano has sparked a lot of talk about affirmative action and “reverse discrimination.” I tend to agree with suggestions that affirmative action should be based on things other than race, such as income level, educational opportunities available where one grew up, and whether one’s parents went to college.

However, I have been bothered by all the emphasis on “reverse discrimination.” My hunch has been that this doesn’t do justice to the magnitude of the discrimination that some minorities face. So, I tried to put some numbers on discrimination. The place I knew I could start was affirmative action in college admissions.

I attempted to make a crude estimate of how race-based admissions might decrease the chances that a member of a well-represented majority would be accepted to an elite university. To do this, I needed to compare the admission rates of under-represented minorities to those of the general population. Say that a fraction f of students are admitted to a school. That means that of M students that apply, N=fM are admitted. Now say that I know the fraction of those admitted students that are under-represented minorities, g. To estimate the effect of racial preferences, I will assume that under-represented minority students were admitted at some factor x times the rate of non-minority students. I happened to find estimates for these numbers relatively easily, which is why I took this approach.

I then wanted to estimate the fraction of non-minorities admitted as a function of x, the effect of racial preferences. The number of non-minorities admitted is n = (1-g)N. The number of non-minority applicants is m = (1-g/x)M. The admission rate for non-minorities is then f’ = n/m = [(1-g)/(1-g/x)](N/M) = [(1-g)/(1-g/x)] f. By changing x, I can tell how the fraction of admitted non-minorities would be affected by affirmative action.

So, I found some admissions numbers for Harvard in the Boston Globe. In 2009, Harvard admitted f=7% of applicants. Other Ivy League schools admitted about 10%, so this seems like a good number to work with. Of the students accepted to Harvard, g=22% were from under-represented minorities. This seems to hold true generally at the Ivy League and University of California Schools, so I think this is also a good number to work with.

I was not able to find numbers for x for Harvard, but I needed to make some assumptions to construct any sort of argument. So, I tried to find numbers from other elite schools. From an article in UCLA’s student newspaper, it would appear that minorities are admitted at a rate up to x=2 times higher than the rates of other students (for MIT). This is roughly supported by the factor-of-two decline in under-represented minorities attending UCLA and UC Berkeley after California ended race-based admissions.

So, if I assume that under-represented minorities are equally qualified as other students, but are given favorable treatment, so that x=2, I get f’ = (1-0.22)/(1-0.22/2) × 7% = 6%. So, in this scenario, the admission rate for non-minorities drops by roughly 1 percentage point, meaning there is roughly a 1% chance that any given non-minority applicant would be negatively impacted by affirmative action.

If I wanted to take a worst-case scenario and assume that minorities are not in fact equally qualified, then I need to do a different calculation. I must emphasize that this is not at all what I believe — this is simply to get a mathematical bound. Let’s pretend that an elite school suddenly decides to admit no minorities. The number of non-minorities admitted could then go up to n=N, while the effective applicant pool is still just the non-minority applicants, m = (1-g/x)M. Therefore, f” = n/m = f/(1-g/x) = 7%/(1-0.22/2) = 8%. So, by changing the rates of admission for minority students, I could conceivably change the rate of admission for non-minority students to an elite school like Harvard by as little as 1 percentage point, and at most 2.
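
For the curious, here is a minimal sketch of these two calculations, using the Harvard-like numbers above (f = 7%, g = 22%) and my assumed preference factor x = 2.

```python
# Crude estimate of how race-conscious admissions change the
# admission rate for non-minority applicants.

f = 0.07   # overall admission rate
g = 0.22   # fraction of admitted students from under-represented minorities
x = 2.0    # assumed ratio of minority to non-minority admission rates

# Non-minority admission rate with racial preferences in place
f_with_preferences = (1 - g) / (1 - g / x) * f

# Worst-case bound: the school admits no minority students at all
f_no_minorities = f / (1 - g / x)

print(round(f_with_preferences, 3))  # ~0.061, i.e. about 6%
print(round(f_no_minorities, 3))     # ~0.079, i.e. about 8%
```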

Fractionally, this is a significant change in the chance that an individual in the majority would get admitted. However, because the absolute chances of getting into Harvard are small, the actual chance that any individual non-minority would be affected by affirmative action is slight — a couple percent at most.

To put this in perspective, I looked into other forms of racial discrimination. A pair of economists did an experiment in 2004, in which they tried to judge the effects of discrimination by sending resumes to employers in the Chicago and Boston areas. The resumes were made to be identical, although half had stereotypically “white” names (such as Emily Walsh), and the other half had stereotypically African-American names (such as Lakisha Washington). The resumes with white names got callbacks in 10% of the cases, while those with African-American names got callbacks in only 6% of cases. So, the effect is about twice as large as the worst-case scenario for “reverse discrimination” under affirmative action.

What about looking at the most dramatic racial disparity in American life — prison populations? In 2008, a black man was 6.6 times more likely to be in prison than a white man, and a Hispanic man was 2.4 times more likely to be in prison than a white man. Some of this has to do with the violence of the inner cities. However, a big part of the problem is that, although drug use rates are very similar for all racial groups, black men are sentenced for drug offenses at 13 times the rate of white men.

I have to mistrust the motives of anyone who decries “reverse discrimination,” but won’t spend equal breath bemoaning the remnants of racial discrimination in the work force. And if one wants to address those issues, it should be in small breaths taken between shouting about the big issue: that unequal law enforcement policies undermine black and Hispanic communities, and deprive their children of opportunity.

Apparently, in the circles of conservative commentators and blog trolls, the claim has been going around that the Earth has been cooling in the last decade. Now, there are lots of places that one can go to find correct information on the web, so I’ve tended not to spend much time responding to ridiculous claims. However, I heard yesterday on NPR that the most popular global warming blog was written by a group denying global warming is caused by people. Clearly, more voices are needed to explain why most scientists do believe humans cause global warming.

The suggestion that the Earth is cooling is a misrepresentation of the data (see the figure). Global average temperatures have increased by about 0.13 °C per decade over the past 50 years.

However, one should not expect to see this trend year-to-year, because yearly temperature measurements vary by about 0.1 °C to 0.2 °C. These yearly temperature variations are caused by several things. For instance, El Niño tends to cause surface temperatures to rise, because it redistributes heat from the ocean into the atmosphere. Volcanic eruptions, such as the one from Mount Pinatubo in 1991, lower surface temperature by putting chemicals that reflect sunlight into the upper atmosphere.

As a result of the yearly variations, one should only expect to see a trend in global temperatures over timescales of a decade or two. Statistically speaking, one can only see a trend in the data when the error in the mean over a long time period is about 3 times smaller than the trend. The error in the mean scales as the yearly errors divided by the square root of the number of measurements. If the yearly variations are comparable in size to a decade’s worth of the trend, as is the case for global warming, then it takes about 10 years just to measure the mean. Measuring a trend takes longer: 20-30 years. Indeed, in 1979, scientists were not sure that a global warming signal had been seen, because they could only look back a few decades. Nearly thirty years later, in 2007, the trend was clear, because almost a hundred years of data were available.
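
As a sanity check on those timescales, here is a minimal sketch of the statistics. It fits a straight line to unit-spaced yearly data and asks how significant a 0.13 °C-per-decade trend would be, assuming 0.15 °C of year-to-year scatter (my pick, between the 0.1 °C and 0.2 °C quoted above).

```python
import numpy as np

trend = 0.013   # deg C per year, i.e. 0.13 deg C per decade
sigma = 0.15    # deg C of assumed year-to-year scatter

for n_years in (10, 20, 30):
    t = np.arange(n_years)
    # Standard error of a least-squares slope fit to unit-spaced points
    slope_err = sigma / np.sqrt(np.sum((t - t.mean()) ** 2))
    print(n_years, round(trend / slope_err, 1))

# Roughly 0.8 sigma after 10 years, 2.2 sigma after 20, and 4.1 sigma
# after 30 -- consistent with needing a couple of decades to see the trend.
```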

What then of the last 10 years? First, the data looks like the temperature has been roughly constant. As I mentioned above, this is because 10 years is only enough to measure a mean, not a trend. There could not possibly be evidence that the Earth is cooling on a 10 year time scale.

Moreover, cherry-picking 1998 and 2008 to claim a global cooling signal, as I heard the conservative commentator Deroy Murdock do on the Tavis Smiley show, is ignorant at best, and dishonest at worst. In terms of global temperature, 1998 was the hottest year on record, tied with 2005. It is thought that a strong El Niño made 1998 so hot. 2008 was “only” the eighth hottest year on record. Anyone with a basic familiarity with the numbers can see that 2008 was cooler than 1998, so technically he was correct. However, 8 of the 9 hottest years measured came after 2000; 15 or 16 of the 20 hottest years have come since 1990; and at most one of the 20 hottest years came before 1980. The last decade was hot!

Therefore, although the claim that the Earth was cooler in 2008 than in 1998 is technically true, I think that anyone who brings it into the global warming debate is warping the science, and defying logic.

I’ve been thinking for a while about whether scientists are doing a good job of presenting their work to the public. I have done a little of that myself, preparing a couple of press releases a year while I was an astronomer, some of which were picked up by magazines like Sky and Telescope. I even did a radio interview on Kirsten Sanford’s This Week in Science show. That was fun, but with my new job, I need to find other ways to contribute.

So, I came up with this idea of trying to explain why scientists believe their theories. It is certainly possible to find plenty of information on evolution, climate change, general relativity, and particle physics on the Internet (just head to Wikipedia; it’s one of the first places I go). However, what I have not found is a site that summarizes a wide range of scientific theories in a uniform way, so that people can compare them side-by-side.

Therefore, I have decided to start a site of Science Tracts (taking a cue from the religious) that concisely explain a number of scientific theories from different disciplines. Each page will explain what the theory is, what evidence led scientists to develop the theory, what successful predictions the theory has made, what technology relies on the theories (if any), the connections to other theories, and what big areas of uncertainty remain. I think it is illustrative to compare, for instance, General Relativity and evolution for how well they each work as theories.

Unfortunately, the Pew survey I referred to in my last post also revealed that only 13% of the general public visits Internet sites to learn about science. So, I might be talking to myself. However, I plan to leave space for comments, questions, and moderated debate. Start with this blog post if you’d like, because it may take me some time to figure out how Web 2.0 works. . .

Last week, I read that the Pew Research Center released the results of surveys of scientists and of the general population. The surveys cover a lot of ground. Scientists were asked about the challenges they face in their careers. The broader public was asked whether they follow scientific research through television stories, magazines, or the internet, and they were even quizzed on science questions. Both groups were asked about their views on the importance of science, and their opinions about such hot-button issues as stem cell research, evolution, and global warming.

I was pleased to see that most Americans surveyed held science and scientists in high regard. On the other hand, The New York Times, in print and on the TierneyLab blog, chose to emphasize that the general public doesn’t share scientists’ belief in evolution, or that humans are causing global warming (at least the general public does agree the Earth is warming). I love a good argument, so I felt I had to join the fray.

I think that there are two big reasons for this. First is that the implications of evolution and global warming run up against some dearly-held beliefs. Evolution undermines the notion that man is unique among god’s creations. Global warming suggests that, just by us living our lives, we might be damaging our homes. If you were raised to believe that we are blessed, these ideas are rather hard to adjust to.

At the same time, it is clear to everyone that the details of both evolution and global warming are complex, and not terribly well-understood. The details do not dissuade scientists, because the underlying principles of global warming, evolution, and other scientific theories have astounding explanatory power, and have made important predictions that have been verified by observation and experiment. With this in mind, scientists look at the details and see the possibility of solving important puzzles.

However, I suspect that the general public may want their theories to work like their cars and computers: without a bunch of niggling troubles and inconveniences. This leaves the door open for doubt about the entire theory. It becomes possible for people to go to the popular press with sciency-sounding criticisms of how this-or-that is not understood, and distract people from the sound fundamentals of the theories. The fact that the Pew survey reveals that the general public trusts scientists despite all this is reassuring.

Nonetheless, I think that more can be done to explain what scientists are thinking. So, my next post will describe what I would like to do to shrink this gap, if only a little bit.

I believe strongly that energy conservation is crucial to securing our energy future, so you might think that I would be happy with last week’s announcement by the Obama administration that they will be implementing standards to make light bulbs more efficient. Instead, though, the way the announcement was framed has bothered me, because the actual impact of the standards is pretty tiny.

The problem is, as Obama stated in his speech, that lighting only consumes about 7% of U.S. energy use. The new standards will not eliminate the energy used for lighting; they will only make lighting more efficient. How much more efficient? We can cut the press conference numbers Obama used down to size.

First, the speech stated that in the most optimistic analysis, the savings over 30 years are equivalent to powering all American homes for 10 months. That means we will be reducing our residential energy use by (10/12)/30 = 2.8%. Moreover, residential uses account for only 21% of total U.S. energy use (according to the Energy Information Administration), so this plan should cut total U.S. energy use by only 0.6%.

Similarly, the speech states that over 30 years, it will be equivalent to taking 166 million cars off the road for one year. Why not phrase this as taking 5 million cars off the road for 30 years, or roughly 2% of the 250 million cars on the road? Well, with this number, the savings seem even smaller. Passenger vehicles account for 17% of U.S. energy use, so the savings may well be only 0.3%.
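
Here is a minimal sketch of both scalings, using only the figures quoted above.

```python
# Put the press-conference numbers in terms of total U.S. energy use.

# Version 1: "powering all American homes for 10 months" over 30 years
residential_share = 0.21          # residential fraction of U.S. energy use
savings_homes = (10 / 12) / 30 * residential_share
print(round(savings_homes * 100, 2))   # ~0.58% of total U.S. energy use

# Version 2: "166 million cars off the road for one year" over 30 years
vehicle_share = 0.17              # passenger-vehicle fraction of U.S. energy use
cars_on_road = 250e6
cars_equivalent = 166e6 / 30      # ~5.5 million cars removed for 30 years
savings_cars = cars_equivalent / cars_on_road * vehicle_share
print(round(savings_cars * 100, 2))    # ~0.38% of total U.S. energy use
```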

I’ve been reading the book, Sustainable Energy – Without the Hot Air by David MacKay, and he wittily explains the logical flaw in arguments like the one made in Obama’s speech. The problem is, Obama’s writers took paltry numbers and multiplied them by big ones, to make the impact seem bigger. The real effect has been put better by Prof. MacKay:

If everyone does a little, we’ll achieve only a little.

The cap-and-trade legislation that Obama has been helping shepherd through Capitol Hill will be much more effective (if Congress doesn’t get in the way too much). Unfortunately, reducing energy use at the consumers’ end will take serious broad-based efforts that are much bigger than changing light bulbs, and that I am only beginning to appreciate. . .

The New York Times ran an editorial last week that caught my attention. The editorial was about a paper comparing the happiness of men and women in the US over the past 50 years or so. I was curious about how one would conduct such a study, and whether it could actually reveal meaningful statistical conclusions. I can’t access the paper from home, but I was able to get it at work (during a break, of course!).

The paper was titled, “The Paradox of Declining Female Happiness” (by Betsey Stevenson and Justin Wolfers). The authors used three resources: the General Social Survey (1972-2006), the Virginia Slims Survey of American Women (about every 5 years between 1972 and 2000), and the Monitoring the Future 12th grade study.

Happiness in the General Social Survey, as reported by Stevenson and Wolfers (2009).

I’ve reproduced the first figure, which seems to be the anchor of their argument. It is based on the General Social Survey. In the top half of the figure, the population was divided into three groups: those who are very happy, pretty happy, and not too happy. The authors notice a negative trend in the fraction of very happy women, from roughly 40% to 33% over the last 35 years. The fraction of very happy men stays about the same, at around 33%. The figures don’t contain any error estimates for the individual points, but by eye the year-to-year variations would seem to imply that they are around 5% for the very happy fraction.

I am not familiar with the statistic that the authors apply, an “ordered probit regression”. A “probit” is the inverse of the cumulative normal distribution, but I don’t know how that is applied to a regression. By my eye, though, any trends are slight, and probably not all that significant. The bottom half of the figure makes no intuitive sense to me. The other figures in the paper are similar, although the trends are less pronounced. In the 12th grade survey (leaving alone whether 18-year-olds can be thought of as “women” in anything but the legal and biological senses), the trend seems to be that boys get a bit happier, and girls stay equally happy.
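
To put a rough number on “probably not all that significant,” here is a crude check of the trend I eyeballed above: a drop of about 7 percentage points over roughly 35 years, with about 5 points of scatter per point. Both numbers are my reading of the figure, not values from the paper, and I treat the series as annual for simplicity.

```python
import numpy as np

n_years = 35
slope = -7.0 / n_years   # percentage points per year, eyeballed from the figure
sigma = 5.0              # assumed scatter of each yearly point, in percentage points

t = np.arange(n_years)
# Standard error of a least-squares slope fit to unit-spaced points
slope_err = sigma / np.sqrt(np.sum((t - t.mean()) ** 2))
print(round(abs(slope) / slope_err, 1))   # ~2.4 "sigma" -- marginal at best
```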

The authors develop these data into a theme. They can split the data finer, and find similar trends in some of their categories of age, race, employment status, and marital status. They also look at European countries, and find consistent trends, although this is really because the uncertainties are so large that no clear trends are evident.

The only plot that showed any clear trend was in suicide rates for men and women: men have 4 times the suicide rate of women. However, that probably has more to do with the fact that men choose methods for suicide that are more likely to succeed.

In the New York Times editorial, Ross Douthat seemed impressed with how measured the prose of the article was in laying out possible reasons for this slight trend. I found myself cynically thinking that a better title would have been “Men and Women are About Equally Happy.”

I am interested in statistics, but my knowledge is limited to astronomical applications. I would love for someone more knowledgeable to comment on the content of this paper. . .
