21 December 2010

Beyond the Annual Climate Confab

My latest column for Bridges is up; in it I provide an assessment of the Cancun climate conference. You can read it here or listen to it here as an mp3. Comments welcome.

As usual the entire issue is worth a read.  In it you can read about the panel discussion with Alexander Ochs and David Goldston at the Austrian embassy in Washington, DC last week (below).
Below is a picture of me explaining to Alexander the reasons why St. Pauli will stay up this season, despite being on the brink halfway through.  He does not look convinced.
And any guesses as to who the two recognizable people in the audience of this photo are (hint: not the lady sleeping in back!)?
Finally, here is me and my favorite science policy blogger.
It was a really fun night, not only with fine drinks and delicious food from Austria, but with an engaging discussion as well.  I am grateful to Ambassador Prosl, Phillip Maxgut, director of the OST, and the Austrian embassy staff for putting it on -- Fröhliche Weihnachten!!

"The Best Climate Book I Have Read"

Peter Kareiva, chief scientist of The Nature Conservancy, reviews The Climate Fix (PDF) and concludes that it is "the best climate book I have read." He writes:
"Most NGO policy and science staff will chafe at Pielke’s analysis — but they should all read his arguments and question their own conventional wisdom. After all, that conventional wisdom has not gotten us very far."
Stuff one into the stocking of your favorite NGO policy and science staff member ;-)

Live Radio Discussion Today on Science Integrity Guidelines

I'll be on the Patt Morrison show on Southern California Public Radio at 1:40 Pacific time this afternoon, appearing with Al Teich of the AAAS, to discuss the guidelines for scientific integrity released by the Obama Administration last week. Here is the KPCC advance billing:
1:41 – 1:58:30
Separating the politics from the science, Obama administration releases new guidelines
The Bush administration was blasted for tainting science with politics, perhaps most notably in 2006, when scientist James Hansen, director of the NASA Goddard Institute, accused White House officials of preventing him from talking about findings that linked carbon emissions to global warming. Now after a long delay, the Obama administration is releasing its guidelines to wall off science from politics. The four-page document prohibits agencies from editing or suppressing reports and says scientists are generally free to speak to journalists and the public about their work. It also instructs agencies to describe both optimistic and pessimistic projections, one guideline experts feel might have helped the administration avoid overly optimistic estimates during this year’s BP oil spill.  But not everyone thinks the wall is high enough—some scientists say the guidelines are too general, give too much discretion to the government agencies and leave open the possibility of another Hansen episode. Reading between the lines, what do the guidelines say and are they strict enough to keep science objective?
You can listen in online from the show's homepage.

Video of Legatum Debate


Last month in London at the Legatum Institute, I debated Benny Peiser of the Global Warming Policy Foundation on the subject of subsidies for energy innovation. A video of that debate appears above. My report from the debate can be found here. Enjoy!

20 December 2010

A Policy Practitioner Deconstructs the Science Integrity Guidelines: The President’s Memo, I.

[THIS IS A GUEST POST FROM A REAL LIVE US GOVERNMENT SCIENTIST, SHARON FRIEDMAN. HER VIEWS EXPRESSED HERE ARE HER OWN. SHARON BLOGS AT A NEW CENTURY OF FOREST PLANNING. NOTE: THE WORD CLOUD ABOVE IS OF THE PRESIDENT'S MARCH, 2009 MEMO, WHICH IS DISCUSSED IN THIS POST.]

As Maria said in The Sound of Music, “let’s start at the very beginning, it’s a very good place to start”. In order to talk about the Guidelines, let’s first deconstruct the President’s memo, sentence by sentence, equipped with a handy online dictionary. I have an earnest belief that unclear concepts cannot constitute a firm foundation for sound public policy.
(1) Science and the scientific process must inform and guide decisions of my Administration on a wide range of issues, including improvement of public health, protection of the environment, increased efficiency in the use of energy and other resources, mitigation of the threat of climate change, and protection of national security.
I would argue that the words “and guide” (as in, guide decisions of my Administration) stray a bit from what we currently think the roles of science and policy should be.

Let’s just use Merriam-Webster online for consistency. GUIDE:
1. to act as a guide to : direct in a way or course
2a : to direct, supervise, or influence usually to a particular end
b : to superintend the training or instruction of
So if we take this sentence literally, the authors are using a word usually defined as including the concept of direction, even “to a particular end.”

Sarewitz describes some of the problems with this concept in his Issues piece here.
(2) The public must be able to trust the science and scientific process informing public policy decisions.
This is a laudable goal, but the fact (as described in my Conveyor Belt post here) is that the people involved in developing policy do not have a voice in framing the research questions, for the most part. And putting together the research snippets developed to inform policy decisions is not, in and of itself, science. Then there is the absence of QA/QC in many research projects putatively designed to inform policy. Finally, if we really cared about peer review for research important to policy, we would monitor who does the reviews, pay people to do it, and make public the review comments and replies.
(3) Political officials should not suppress or alter scientific or technological findings and conclusions.
Hopefully, non-political officials wouldn’t do this either. However, I think this might be a potential minefield – is not doing what the professional wants “suppressing the findings” or just disagreeing with how they should be used in informing policy?
(4) If scientific and technological information is developed and used by the Federal Government, it should ordinarily be made available to the public.
Good on that. I would also say that their papers in journals should be available for free to taxpayers.
(5) To the extent permitted by law, there should be transparency in the preparation, identification, and use of scientific and technological information in policymaking.
This seems generally like a good idea. In my agency, we have been required to document how we use “the best available science” for certain decisions, and it seems to work fairly well.
(6) The selection of scientists and technology professionals for positions in the executive branch should be based on their scientific and technological knowledge, credentials, experience, and integrity.
This seems pretty straightforward, except that I’m not sure we have ever considered “integrity” as a selection criterion, and it seems a bit out of place interjected here, especially since the “integrity” idea seemed originally to be about political employees not listening to scientists. I also wonder how the Office of Personnel Management would feel about “integrity” and the merit promotion concept. Let’s turn to the Merriam-Webster definition:
Definition of INTEGRITY
1: firm adherence to a code of especially moral or artistic values : INCORRUPTIBILITY
2: an unimpaired condition : SOUNDNESS
3: the quality or state of being complete or undivided : COMPLETENESS
I guess I’m kind of lost here... presumably it would be good to have all employees exhibit integrity, but which moral values? How would you measure them? If you are going to call a reference and ask about someone’s “integrity,” it seems to me that you should have a good idea exactly what you mean. Otherwise, you could get colorful stories about their activities following the imbibing of certain beverages.

I turned to Wikipedia here and my neurons just about experienced meltdown. When scientist terminology meets philosophy terminology there’s always a high potential for generalized fuzziness.
What is the difference between general old integrity and “scientific” integrity? And if politicals ignoring the “science” are the problem, why are we going after the science and technology professionals?

We may find out in the next post, when we move on to the principles articulated in the President’s memo.

17 December 2010

Science Integrity Guidelines Soon to Come?

[AFTERNOON UPDATE: The Guidelines are out.]

NPR's Morning Edition had a story today (which quotes my views) on the forthcoming "science integrity" guidelines that are way overdue from the Obama Administration.  One factor helping to shake them loose is undoubtedly a lawsuit filed by Public Employees for Environmental Responsibility (see this PDF). (David Bruggeman has a good set of discussions.)
Last summer John Holdren, the president's science advisor, explained that the task was more difficult than they had anticipated:
I am the first to admit that the process has been more laborious and time-consuming than expected at the outset. Determining how to elaborate on the principles set forth in the Memorandum in enough detail to be of real assistance in their implementation, while at the same time retaining sufficient generality to be applicable across Executive departments and agencies with a wide variety of missions and structures, has been particularly challenging.
My guess is that once the guidelines are released, their content will lead everyone to wonder why they took so long, and that they will represent the start of a process rather than its conclusion. Such speculation has a short shelf-life, as it seems we'll find out soon enough.

15 December 2010

A Guest Post: Science on the Conveyor Belt

[THIS IS A GUEST POST BY SHARON F.:
Sharon F. currently works for the Forest Service in land management.  In the past, she worked in FS Research and Development, and with USDA extramural research at the agency now known as NIFA.
FOR ADDITIONAL READING ON THE SUBJECT OF CONNECTING SCIENCE AND STAKEHOLDERS, SEE THIS SET OF PAPERS AND IN PARTICULAR THIS OVERVIEW (PDF).]

Roger asked me to give my thoughts on the recent GAO report “Forest Service Research and Development: Improvements in Delivery of Research Can Help Ensure that Benefits of Research are Realized.” It can be found here.

The report's main recommendation was that
"the Forest Service assess the effectiveness of recent steps FS R&D has taken to improve science delivery and take steps to ensure that individual performance assessment better balance the various types of science delivery activities." (from "what GAO recommends.")
In their letter to Senator Reid, GAO states,
"FS R&D conducts basic research in a range of biological, physical, and social science fields and applies this knowledge to develop technologies and deliver science to federal and state land managers, industry private landowners and other entities."
Not to be pedantic, but there seems to be a disconnect. If we go by the formal OMB definitions of basic research, applied research, and development:
Basic research is defined as systematic study directed toward fuller knowledge or understanding of the fundamental aspects of phenomena and of observable facts without specific applications towards processes or products in mind. Basic research, however, may include activities with broad applications in mind.

Applied research is defined as systematic study to gain knowledge or understanding necessary to determine the means by which a recognized and specific need may be met.

Development is defined as systematic application of knowledge or understanding, directed toward the production of useful materials, devices, and systems or methods, including design, development, and improvement of prototypes and new processes to meet specific requirements.
By using the terms "basic research" and "science delivery," it seems as if they are using what I call the conveyor belt model for bringing science to management. Under this model, scientists determine what research is needed, design it, fund it, publish papers, and set the results on the conveyor belt; the user's job is to pick it up and use it. According to this model, all that needs to be done is to focus on the conveyor belt, reward scientists for putting research on the belt, and everything will be fine.

Unfortunately, much research designed without the input of users is not framed in a way that is particularly useful to them. Busy practitioners know exactly what is useful and what is not, so they may disregard most of what comes down the belt.

I have been in multiple meetings over multiple years where the same phenomenon occurs. Users are rounded up (the usual suspects) and asked what we want. If people really cared, this would be an ongoing conversation at many organizational levels, approached systematically. There is a section in the GAO report, "Increased Stakeholder Involvement in Setting Research Agendas," but the distinctions among "users" are not clear; for example, universities are also discussed as "stakeholders." If researchers think different things are interesting (or easy to get funded, or more likely to get published) than practitioners do, it is likely that university and FS researchers would tend to be on one side, and practitioners on the other. So mixing in other researchers as "stakeholders" tends to dilute the practitioner vote.

At land grant institutions, the USDA developed a model of research, extension, and education. Under that model, people who wanted research knew whom they should talk to: professors or the Dean of the school, for example. Extension folks had to go out and talk to real users, and communication went both ways (at its best). Students were linked to the real world through extension activities and people.

Through time, less of that funding has been available, due to the ideological hegemony of the "investigator-initiated competitive grant is best" worldview, which can have the result of research "of the scientists, by the scientists, and for the scientists." This may be a good approach for basic science, but not when a problem calls for clear stakeholder involvement in framing and design and a variety of alternative approaches.

If research is funded by other agencies (e.g., NSF, NOAA) with scientist-based criteria, asking scientists to produce research that is relevant and used puts them between a rock and a hard place. Redesigning the science delivery conveyor belt won't help if no users are standing at the other end.

13 December 2010

Political Affiliations of Scientists

Last week my friend and colleague Dan Sarewitz tossed some red meat out on the table in the form of an essay in Slate on the apparent paucity of Republicans among the US scientific establishment.  Sarewitz suggests that it is in the interests of the scientific community both to understand this situation and to seek greater diversity in its ranks, explaining that "the issue here is legitimacy, not literacy."

Sarewitz's essay has been followed by predictable responses (1,243 of them at Slate alone). Writing at MIT's science journalism tracker, Paul Raeburn offers this suggestively sinister critique:
And what is Sarewitz’s political affiliation, I wonder?
Since everyone else knows the answer to this, you'd think a journalist might have ways of figuring it out. Similarly sophomoric is Chris Mooney who, in his characteristic us-vs.-them fashion, asks if Sarewitz will be joining the forces of evil:
Would Sarewitz himself like to become a Republican?
Such responses dodge the real issue raised by Sarewitz.

And what is that real issue? The issue that Sarewitz raises is one of legitimacy. All of us evaluate knowledge claims outside our own expertise (and very few people are in fact experts) based not on a careful consideration of facts and evidence, but on other factors, such as whom we trust and how their values jibe with our own. Thus if expert institutions are going to endure and function in a democratic society, they must attend to their legitimacy. Scientific institutions that come to be associated with one political party risk their legitimacy among those who are not sympathetic to that party's views.

Of course, we don't evaluate knowledge claims based simply on individuals, but usually through institutions, like scientific journals, national academies, professional associations, universities and so on. Sarewitz's Slate article did not get into a discussion of these institutions, but I think they are essential to fully understanding his argument.

Consider that the opinion poll that Sarewitz cited, which found that only 6% of scientists self-identify as Republicans, has some very important fine print -- specifically, that the scientists it surveyed were all members of the AAAS. I do not have detailed demographic information, but based on my experience I would guess that AAAS membership is dominated by university and government scientists. The opinion poll thus does not tell us much about US scientists as a whole, but rather something about one scientific institution -- AAAS. And the poll indicates that AAAS is largely an association that does not include Republicans.

Sarewitz wonders how this situation might have developed. One factor might be seen in a recent action of the American Geophysical Union -- another big US science association: AGU recently appointed Chris Mooney to its Board. I am sure that Chris is a fine fellow, but appointing an English major who has written divisively about the "Republican War on Science" to help AGU oversee "science communication" is more than a little ironic, and unlikely to attract many Republican scientists to the institution -- perhaps even having the opposite effect. To the extent that AAAS and AGU endorse the Democratic policy agenda, or just appear to do so, it reflects their role not as arbiters of knowledge claims, but rather as political actors.

Looking more broadly, I would wager that the partisan affiliation of scientists in the US military and in the energy, pharmaceutical, and finance industries would look starkly different from that of AAAS. If there is a crisis of legitimacy in the scientific community, it is among those institutions which have come to be dominated by people espousing a shared political view, whatever that happens to be. This crisis is shared by AAAS and AGU, viewed with suspicion by those on the Right, and, for instance, by ExxonMobil, which is viewed with similar suspicion by those on the Left. Sarewitz is warning that for many on the Right, institutions like AAAS are viewed with every bit as skeptical an eye as that with which those on the Left view ExxonMobil.

Such views are more than just tribalism; they are expressions of how different people evaluate knowledge claims. To the degree that people substitute affiliations for evaluating knowledge claims, science becomes pathologically politicized. Sarewitz thus offers a warning:
American society has long tended toward pragmatism, with a great deal of respect for the value and legitimacy not just of scientific facts, but of scientists themselves. For example, survey data show that the scientific community enjoys the trust of 90 percent of Americans—more than for any other institution, including the Supreme Court and the military. Yet this exceptional status could well be forfeit in the escalating fervor of national politics, given that most scientists are on one side of the partisan divide. If that public confidence is lost, it would be a huge and perhaps unrecoverable loss for a democratic society.
Many observers are so wrapped up in their own partisan battles that they either don't care that science is being associated with one political party or somehow think that through such politicization they will once and for all win the partisan battles. They won't. Political parties are far more robust than institutions of science. Institutions of science need help to survive partisan political battles intact. The blogosphere and activist scientists and journalists offer little help.

12 December 2010

Post-Cancun Climate Policy Debate: RSVP Now

The Office of Science & Technology presents a bridges Lecture Series Event

Tuesday, December 14, 2010
6:00 - 8:00 PM
RSVP

Embassy of Austria
3524 International Court, NW
Washington, DC 20008

08 December 2010

A Seminar for University of Colorado Graduate Students

I recommend this excellent class to CU grad students for Spring, 2011:
ENVS 5100-002 Science and Technology Policy
Tuesdays 9:30 am- 12 pm
Professor Lisa Dilling (ldilling@colorado.edu)

It is the year 2011 and you are the Chair of the U.S. House Science Committee. On your plate is a decision whether or not to authorize a new program to conduct research in geoengineering, or deliberate climate modification. How would you decide? What kinds of considerations would enter into your decision? And how would you engage the debate over deployment of geoengineering technology in the future?

The field of science and technology policy research seeks to understand how we decide what science and technology is prioritized and funded, how we justify such expenditures in society, how we conduct science and technology for societal benefit, and how we govern the use of scientific and technological results in society.

This course seeks to introduce students to science and technology policy research. We will examine the workings of science policy in the government and private sector, and focus this semester on some key emerging topics in science policy such as the debate on research and governance of geoengineering. Student interests will also guide case study selection.

FIFA's Fantasy: "Perfectly Organised, Perfectly Transparent and Perfectly Under Control"

FIFA -- the Fédération Internationale de Football Association -- is the international organization that governs football (or soccer) and is thus responsible for the quadrennial World Cup competitions.  Last week FIFA's Executive Committee decided to award the 2018 World Cup to Russia and the 2022 World Cup to Qatar.

FIFA's venue decisions and its process for making those decisions have come under intense scrutiny and criticism.  Some of this criticism is of course sour grapes, as those on the losing sides included the US, England and Australia, each of whom submitted very strong bids. At the same time there are allegations of corruption and collusion in the process that led to the results. Not only are the US, England and Australia big sporting nations, they are also countries with high expectations for transparency and accountability in international organizations and the countries that host them. 

FIFA is of course not the UN or WHO, and football is not war and peace, so we should not expect the same degree of scrutiny. But at the same time, FIFA's decision making has placed its processes in a bright spotlight. Consider that today the Swiss government, under which FIFA is incorporated, announced a review of its anti-corruption exemption for international sporting authorities headquartered in Switzerland. The head of the English Football Association has called for reform of FIFA processes. The degree to which these efforts lead to actual change remains to be seen.

However, with the World Cup more global than ever, its growth and success are having predictable consequences for governance, and no one should be surprised to see the current controversies. Specifically, as has been observed in international organizations more generally, the opacity of FIFA's decision making, coupled with allegations of corruption in the process, will lead to greater demands for openness and transparency, as part of an inevitable democratization of the institution.

Writing in the journal International Studies Quarterly in 2007, Alexandru Grigorescu explains that there are three factors driving the democratization of international organizations generally, and each seems relevant to the case of FIFA:
[First] information about an organization’s deliberations, decisions, and actions needs to be made available to determine if government representatives and IO [international organization] officials are acting in the public’s interest. If this information is not public, officials cannot be held accountable for their actions . . .

A second argument for transparency stems from the fact that secrecy gives rise to suspicions regarding the workings of an IO (Stiglitz 2002:229), and it reduces its legitimacy (e.g., Zurn 2004). The eroding legitimacy of IOs may in turn lead to calls (sometimes taking the form of public protests) for limiting their roles in the international realm. Transparency is therefore not simply needed for normative reasons. Without it, IOs are also less effective. . .

Lastly, if information is power, the study of who controls such information is relevant for understanding the power relations between the main actors in the global arena: states, IOs, nongovernmental organizations, and the public—relations that are at the center of the broader debates in the global governance literature.
Remarkably, FIFA seems to think that it has a right to operate in secrecy and without accountability. For instance, its vice-president Jack Warner explained that FIFA explicitly did not vote for England's bid as retribution for UK media investigations of FIFA corruption. I would not expect the media to take well to such implied extortion. In the era of Wikileaks and Climategate, I fully expect that we are going to hear a lot more about FIFA than we have already. I'd guess that the UK media is just getting warmed up.

While allegations of corruption in FIFA are certainly not new, what is new is that the World Cup has become the most important and visible sporting event on the planet. And with such visibility necessarily come much higher standards of accountability for decisions. While FIFA may be able to resist change for a while, the forces of democratization of international organizations will at some point reach football. The question is whether that change comes in a messy fashion or whether the organization evolves in a constructive manner.

Jerome Valcke, the FIFA secretary general who oversaw the 2018 and 2022 World Cup venue voting process, explained that the process was "perfectly organised, perfectly transparent and perfectly under control." To the extent that his views are shared in FIFA, I'd bet on a messy outcome. Stay tuned.

07 December 2010

Flood Losses in Africa

A recent and important paper in GRL discussed the role of climate in the observed increase in African flood losses over the past century.  The paper concluded that climate has had an inconsequential role -- from the paper:
Di Baldassarre, G., A. Montanari, H. Lins, D. Koutsoyiannis, L. Brandimarte, and G. Blöschl (2010), Flood fatalities in Africa: From diagnosis to mitigation, Geophys. Res. Lett., 37, L22402, doi:10.1029/2010GL045467.

Based on the results of both continental and at‐site analyses, we find that the magnitude of African floods has not significantly increased during the Twentieth Century (Figures 2 and 3), and that climate has not been a consequential factor in the observed increase in flood damage. This is consistent with the results previously obtained [Kundzewicz et al., 2005; Bates et al., 2008; Petrow and Merz, 2009; Lins and Slack, 1999; Mudelsee et al., 2003] in different areas, such as North America, Europe, and Australia.
So if floods haven't increased, the cause of increasing damage must lie in factors other than climate:
. . . the intensive and unplanned urbanization in Africa and the related increase of people living in floodplains [Hardoy et al., 2001; Douglas et al., 2008] has led to an increase in the potential adverse consequences of floods and, in particular, of the most serious and irreversible type of consequence, namely the loss of human lives [Jonkman, 2005]. This can be shown, at the continental scale, by analyzing the dynamic of African population and the most recent deadly floods. For instance, Figure 4 shows the spatial distribution of population growth [Nelson, 2010] and the location of the latest floods, and deadly floods, in Africa (Dartmouth Flood Observatory, Global Archive of Large Flood Events, 2010). It can be seen that most of the recent deadly floods have happened where the population has increased more.
See this post as well. The paper was also discussed in a blog posting at the AGU, and it is an important addition to a growing literature on the subject of disasters and climate change.
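For readers curious about what the "at-site analyses" mentioned in the paper involve, here is a minimal sketch of one common nonparametric approach: a Mann-Kendall-style trend check via Kendall's tau. This is illustrative only -- it is not necessarily the authors' exact method, and the annual peak flows below are synthetic placeholders, not real data.

```python
# Sketch of a nonparametric trend check of the kind used in at-site
# flood studies (Mann-Kendall-style, via Kendall's tau). The annual
# peak flows are synthetic placeholders, not an actual gauge record.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
years = np.arange(1900, 2000)
# Gumbel is a conventional distribution for annual flood maxima.
annual_peak_flow = rng.gumbel(loc=100.0, scale=20.0, size=years.size)

tau, p_value = stats.kendalltau(years, annual_peak_flow)
print(f"Kendall's tau = {tau:.3f}, p = {p_value:.2f}")
# A large p-value means no detectable monotonic trend in flood
# magnitude at this site, which is the kind of result the paper
# reports for Africa during the Twentieth Century.
```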

Japan and the Kyoto Protocol

Japan's announcement that it would not participate in a second commitment period of the Kyoto Protocol caused quite a stir. As I have shown in a paper on Japan's proposed emissions reductions, it simply cannot hit the aggressive targets that were proposed by a former government during a moment of populist over-exuberance.

Any commitment by Japan to a Kyoto 2 would be substantively meaningless, even if politically popular among some well-meaning but deeply misguided activists.  Japan should be applauded for its refusal to go along with a charade.  Of course, in the climate debate nothing is ever so simple.

The Energy Challenge in One Figure

Last week the FT had a special section on South African Power and Energy. The report included the excellent graphic shown above. The graphic shows that in the very near term -- perhaps within the current decade -- South Africa has a huge gap between the energy supply it needs and what it currently has planned to meet those needs, which are projected to just about double in the next 20 years or less.

South Africa might be considered representative of the broader global situation, where energy demand growth is being driven by the so-called "developing" countries. South Africa is going to have enough of a challenge keeping the lights on, much less decarbonizing its economy at a rapid rate. Energy innovation and consequent decarbonization are much broader issues than climate change alone.
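As a back-of-the-envelope check on that projection: demand that doubles in 20 years implies annual growth of roughly 3.5% per year. The arithmetic is easy to verify (the 20-year horizon is the figure cited above):

```python
# What annual growth rate doubles energy demand in 20 years?
years_to_double = 20
growth_rate = 2 ** (1 / years_to_double) - 1
print(f"Implied demand growth: {growth_rate:.2%} per year")  # about 3.5%
```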

Five Books Interview

I was asked by Five Books to recommend five books related to the issue of climate change and my own book, The Climate Fix.  Five Books then interviewed me about my selections.  You can see what I came up with here.

What books would be on your list?

06 December 2010

Broken Windows

The State of Montana's Department of Fish, Wildlife and Parks has threatened Montana State University with a total loss of financial support over a peer-reviewed paper on wolf populations and hunting.

The threat has to do with a debate over a study published by Scott Creel and Jay Rotella of MSU's Department of Ecology which discussed the sustainable yield of wild wolf populations. Some substantive aspects of that debate can be seen in the comments that appear with the open-access article:
Creel S, Rotella JJ (2010) Meta-Analysis of Relationships between Human Offtake, Total Mortality and Population Dynamics of Gray Wolves (Canis lupus). PLoS ONE 5(9): e12918. doi:10.1371/journal.pone.0012918
The FWP did not appreciate learning about the paper via the media following a press release rather than directly from the researchers or the university.  And apparently there is a history of bad relations.

None of that justifies the threat from the State to the university in a letter from the FWP fish and wildlife division administrator, Dave Risley, to MSU President Waded Cruzado:
By writing this letter, we hope to make you aware of this situation before the only recourse is to permanently and completely dissolve the financial and intellectual relationship between FWP and MSU.
After the letter became public Risley appears to have stepped back a bit from the threat:
"We wanted to get the attention of the university, of the president," Risley said. "In no way, shape or form would we want to stifle academic freedom. We were just looking for professional integrity."

Risley said his letter to Cruzado was not intended to threaten the university. He said his previous letter of complaint to Creel's department head received no response.

Risley said he felt "like when a kid throws a rock at a window" to get someone's attention and inadvertently "breaks the window." He said it had gone further than he expected. "I didn't expect to get a call from the Chronicle," he said. "We wanted to get it to their attention and see some action."
Risley is no doubt seeing some action.  The President of the MSU Faculty Senate had some wise words:
Marvin Lansverk, MSU Faculty Senate chair, said he didn't have many details about the dispute, but if FWP is calling for more communication, that would be fine.

If FWP scientists disagree with Creel's conclusions, Lansverk said, "They have the right and obligation to respond with publications of their own."

However, Lansverk said, "If an administrator disagrees with scientific results, I think it would be inappropriate and detrimental to good science and the public interest to try to intervene or suppress publication of research or to put pressure on an institution to stop doing what universities do. I hope that's not what FWP is trying to do."

03 December 2010

Uber Meteorologist Bob Ryan on The Climate Fix

Bob Ryan, lead meteorologist on the 11PM News on ABC7/WJLA-TV in Washington, DC, says this in his highly positive review of The Climate Fix:
Pielke Jr.’s new book, "The Climate Fix: What Scientists and Politicians Won’t Tell You About Global Warming," manages to beautifully and easily encompass everything from atmospheric science and the third rail of global warming to biodiversity, politics, a bit of history, geoengineering and energy policy. It wraps up with some original thoughts about achieving decarbonization in the future.

Quite a range of topics, but quite a book. A book that should be required reading for all of us thinking and talking about climate change/global warming, its long term consequences and the Gordian Knot of science, politics and policy.

30 November 2010

Why There are No Trends in Normalized Hurricane Losses

The graph above shows data on normalized US hurricane losses 1900 to 2009 and was presented in a talk I gave today.  Why is there no trend in the data?  The two graphs below explain why.  You can do the math.

There are no trends in normalized damage since 1900 because there are no trends in either hurricane landfall frequency (data from NOAA) or intensity (data from Chris Landsea, through 2006) over that same period (rather, there is a very slight decline in both cases). If our normalization were to show a trend, then it would actually have some sort of bias in it. It does not, and thus we can have confidence that societal factors are well accounted for in the normalization methodology.
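For anyone who wants to literally "do the math," here is a minimal sketch of the kind of linear trend test involved. The loss series below is a random placeholder, not the actual normalized-loss or landfall record:

```python
# Minimal sketch of a linear trend test on an annual loss series.
# The "losses" are random placeholders, not the actual data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
years = np.arange(1900, 2010)
losses = rng.lognormal(mean=0.0, sigma=1.0, size=years.size)

result = stats.linregress(years, losses)
print(f"OLS slope: {result.slope:.4f} per year (p = {result.pvalue:.2f})")
# A slope statistically indistinguishable from zero in normalized
# losses, alongside flat landfall frequency and intensity, is what
# suggests the normalization has accounted for societal change.
```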

Fabrications in Science

[UPDATE 12/6: Mickey Glantz has this to say on his Facebook page:
kevin trenberth MAY know science but to ask him to review this interdisciplinary assessment is a joke played on readers by Science's editors. scientists are angry because they are losing control of the climate issues to other disciplines and NGOs. I think i will write a review of the climate models and i wonder if Science will print it!]
You don't expect to pick up Science magazine and read an article that is chock full of fabrications and errors.  Yet, that is exactly what you'll find in Kevin Trenberth's review of The Climate Fix, which appears in this week's issue.

It is of course more than a little interesting that Science saw fit to ask one of my most vocal critics to review the book. Trenberth has been on the losing side of debates with me over hurricanes and disasters for many years. But even so, I am quite used to the hardball nature of climate politics, and that reviewer choice by Science goes with the territory. It says a lot about Science. Trenberth's rambling and unhinged review is also not unexpected. What is absolutely unacceptable is that Trenberth makes a large number of factual mistakes in the piece, misrepresenting the book.

Science should publish a set of corrections.  Here is a list of Trenberth's many factual errors:

1. TRENBERTH: "An example that he might have mentioned, but does not, is President George W. Bush's 2001 rejection of the Kyoto Protocol on the grounds that it would hurt the economy. "
REALITY: Actually, Pielke discusses Bush's rejection of Kyoto on pp. 39 and 44
2. TRENBERTH: "Pielke treats economic and environmental gains as mutually exclusive"
REALITY: Not so.  From p. 50, "[A]ction to achieve environmental goals will have to be fully compatible with the desire of people around the world to meet economic goals.  There will be no other way."
3. TRENBERTH: "Pielke does not address the international lobbying for economic advantage inherent in the policy negotiations. "
REALITY: Wrong again.  The international economics of the climate debate are discussed on pp. 59, 65, 109, 219, 231, and 233 and are a theme throughout.
4. TRENBERTH: "He objects to Working Group III's favoring of mitigation (which is, after all, its mission) while ignoring Working Group II (whose mission is adaptation)."
REALITY: Again, not so. Chapter 5 is about the balance between mitigation and adaptation in international policy and discusses both IPCC WG II and WG III (see pp. 153-155). What Pielke objects to is defining adaptation as the consequences of failed mitigation.
5. TRENBERTH: "His claims that “the science of climate change becomes irrevocably politicized” because “[s]cience that suggested large climatic impacts on Russia was used to support arguments for Russia's participation in the [Kyoto] protocol”—as if there would be no such impacts and Russia would be a “winner”—look downright silly given the record-breaking drought, heat waves, and wildfires in Russia this past summer."
REALITY: Egregious misrepresentation. Trenberth selectively uses half of a quote to imply that Pielke was making a claim that he did not. The part left out by Trenberth (p. 156) was the counterpoint -- specifically, that science that suggested few impacts on Russia was used in similar fashion by advocates to argue against the Kyoto Protocol. Pielke concludes, "In this manner, the science of climate change becomes irrevocably politicized, as partisans on either side of the debate selectively array bits of science that best support their position."
6. TRENBERTH: "Pielke stresses economic data and dismisses the importance of loss of life."
REALITY: Wrong again. Pielke discusses loss of life related to climate change on pp. 176-178
7. TRENBERTH: "Geoengineering is also dealt with by Pielke, but only briefly."
REALITY: Not so. Pielke devotes an entire chapter to geoengineering (Chapter 5).
8. TRENBERTH: "[Pielke] does not address the practicality of storing all of the carbon dioxide."
REALITY: Again, wrong. Pielke addresses the practicality of carbon dioxide storage on pp. 133-134
And even with all these errors and false claims, Trenberth concludes that the book is on the right track:
"[P]rogressively decarbonizing the economy and adopting an approach of building more resiliency to climate events would be good steps in the right direction"
Anyone who has read The Climate Fix should also read Trenberth's review, as they will learn something about Science magazine and a part of the climate science community. As is said, politics ain't beanbag, and climate politics are no different.

New Peer-Reviewed Paper on Global Normalized Disaster Losses

The LSE Grantham Institute, funded by Munich Re (whose global loss data is shown above), has published a new peer-reviewed paper on normalized global disaster losses.
Eric Neumayer and Fabian Barthel, Normalizing economic loss from natural disasters: A global analysis, Global Environmental Change, In Press, Corrected Proof, Available online 18 November 2010, ISSN 0959-3780, DOI: 10.1016/j.gloenvcha.2010.10.004.
The paper finds no evidence of upward trends in the normalized data.  From the paper (emphasis added):
"Independently of the method used,we find no significant upward trend in normalized disaster loss.This holds true whether we include all disasters or take out the ones unlikely to be affected by a changing climate. It also holds true if we step away from a global analysis and look at specific regions or step away from pooling all disaster types and look at specific types of disasters instead or combine these two sets of dis-aggregated analysis. Much caution is required in correctly interpreting these findings. What the results tell us is that, based on historical data, there is no evidence so far that climate change has increased the normalized economic loss from natural disasters."
This result would seem to be fairly robust by now.

Yet claims that global warming has led to increased disaster losses are a siren song for the media and advocates alike, with the most tenuous of claims hyped and the peer-reviewed literature completely ignored. I don't expect that to change.

An Evaluation of the Targets and Timetables of Proposed Australian Emissions Reduction Policies

My paper on Australian emissions reduction proposals has now been published.  Thanks to all those who provided comments on earlier versions. Here are the details:
Pielke, Jr., R. A. (2010), An evaluation of the targets and timetables of proposed Australian emissions reduction policies. Environmental Science & Policy, doi:10.1016/j.envsci.2010.10.008

This paper evaluates Australia’s proposed emissions reduction policies in terms of the implied rates of decarbonization of the Australian economy for a range of proposed emissions reduction targets. The paper uses the Kaya Identity to structure the evaluation, employing both a bottom-up approach (based on projections of future Australian population, economic growth, and technology) as well as a top-down approach (deriving implied rates of decarbonization consistent with the targets and various rates of economic growth). Both approaches indicate that the Australian economy would have to achieve annual rates of decarbonization of 3.8–5.9% to meet a 2020 target of reducing emissions by 5%, 15% or 25% below 2000 levels, and about 5% to meet a 2050 target of a 60% reduction below 2000 levels. The paper argues that proposed Australian carbon policy proposals present emission reduction targets that will be all but impossible to meet without creative approaches to accounting, as they would require a level of effort equivalent to the deployment of dozens of new nuclear power plants or thousands of new solar thermal plants within the next decade.
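For readers who want to reproduce the flavor of the top-down approach, here is a minimal sketch of the Kaya-style arithmetic. The base-year emissions level and the 3% GDP growth rate are illustrative assumptions of mine, not figures from the paper:

```python
# Top-down sketch of the Kaya-style calculation: what annual decline in
# the carbon intensity of GDP (CO2/GDP) does an emissions target imply?
# The inputs below are illustrative assumptions, not the paper's data.

def implied_decarb_rate(e_base, e_target, years, gdp_growth):
    """Annual rate of decline in CO2/GDP implied by an emissions target."""
    emissions_growth = (e_target / e_base) ** (1.0 / years) - 1.0
    # Kaya identity: CO2 = GDP * (CO2/GDP), so intensity changes at
    # (1 + emissions growth) / (1 + GDP growth) - 1 per year.
    intensity_change = (1.0 + emissions_growth) / (1.0 + gdp_growth) - 1.0
    return -intensity_change  # positive value = required annual decline

# Hypothetical inputs: emissions 10% above 2000 levels today, a 2020
# target 5% below 2000 levels, ten years to comply, 3% GDP growth.
rate = implied_decarb_rate(e_base=1.10, e_target=0.95, years=10, gdp_growth=0.03)
print(f"Implied decarbonization of the economy: {rate:.1%} per year")
```

With these hypothetical inputs the answer comes out at a bit over 4% per year, which gives a sense of why the paper's 3.8–5.9% range signals such a demanding level of effort.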

29 November 2010

Africa is Big

The Economist provides the maps above and a discussion of their origin from one Kai Krause, a graphics expert who is engaged in a battle against "immappancy."  It is a worthy battle.  In a standard Mercator projection, Africa is indeed deemphasized.  Even maps have politics.

Aynsley Kellow's Science and Public Policy Deeply Discounted

Aynsley Kellow has written to notify me that his excellent book, Science and Public Policy: The Virtuous Corruption of Virtual Environmental Science (2007, Edward Elgar), is on sale for $40, which is a full $70 off of its list price.

Here is a blurb from the book's website:
‘Crusading environmentalists won’t like this book. Nor will George W. Bush. Its potential market lies between these extremes. It explores the hijacking of science by people grinding axes on behalf of noble causes. “Noble cause corruption” is a term invented by the police to justify fitting up people they “know” to be guilty, but for whom they can’t muster forensic evidence that would satisfy a jury. Kellow demonstrates convincingly, and entertainingly, that this form of corruption can be found at the centre of most environmental debates. Highly recommended reading for everyone who doesn’t already know who is guilty.’

– John Adams, University College London, UK


Science and Public Policy
by Aynsley Kellow
Web link: http://www.e-elgar.com/Bookentry_Main.lasso?id=12839

Normally £59.95/$110.00; special price $40/£25 + postage and packing

To order this book please email sales@e-elgar.co.uk (with full credit card details and address), or on our website enter 'Kellowoffer' in the special discount code box after entering your credit card details, and the discount will be taken off when the order is processed.
Contents:

Preface
1. The Political Ecology of Pseudonovibos Spiralis and the Virtuous Corruption of Virtual Science
2. The Political Ecology of Conservation Biology
3. Climate Science as ‘Post-normal’ Science
4. Defending the Litany: The Attack on The Skeptical Environmentalist
5. Sound Science and Political Science
6. Science and its Social and Political Context
Bibliography
Index

Quantitative Methods of Policy Analysis

In the upcoming Spring 2011 term, I am teaching a graduate seminar titled "Quantitative Methods of Policy Analysis." Here is a short course description:
ENVS 5120
Quantitative Methods of Policy Analysis


This course will survey a range of quantitative methodologies commonly used in applied policy analysis.  The course will cover the role of the analyst and analyses in policy making, formal models of the policy process, the role of quantification in problem definition, basic statistics and probability, data and its meaning (including uncertainties), projection and prediction, decision analysis and game theory, government budgeting, cost-benefit analysis, and graphical methods. The course will be organized around a textbook, individual semester-long projects and various problem sets. No prerequisites are necessary.
The course text will be Analyzing Public Policy: Concepts, Tools, and Techniques, 2nd Edition (2010), by Dipak K. Gupta.  The figure at the top of this post will be discussed on the first day of class.  There are seats available in the course, so if you are a CU student and interested in enrolling, please contact me.

23 November 2010

Some Changes in the Works

Now that my fall "book tour" is just about over and The Climate Fix is well launched, it is time to consider what comes next. As readers of this blog will well know, for the past several years I have focused intensively on the climate issue, with near-daily postings on various aspects of it. For me, the resulting interactions on and off the blog have been extremely illuminating and rewarding. But just as after The Honest Broker was published, the publication of a book signals that it is time for an academic change of course.

For the next several years the focus of my work is not going to be on climate issues but rather on issues associated with innovation and technology, with energy only a small part of that focus. My next book is already underway, and I have decided to spend most of my time in 2011 on it and on other topics that I've neglected, meaning that something else will have to give. That something else will be the intensive focus on the climate debate and the daily climate blogging associated with it. If I believe my own analysis -- and I think I do -- then the broad outlines of that debate are unlikely to change anytime soon. I'll continue to be a strong and active advocate for energy innovation and adaptation.

I have no doubts that there will be continued occasion on this blog to discuss and debate the issues raised in The Climate Fix, and there will be things worth discussing related to climate. So in the future I will restrict my discussions of climate to Tuesdays. What appears on this blog on the other days could be something related to my new book, some random musings, high-quality football analysis or nothing at all.  We'll see.

The entire crack staff here at this blog has been given some well-deserved time off, so posting will be scarce in the coming days and weeks as the holidays are here.  Comments will still be cleared, but please have patience if it is slow.

Thanks again to all the readers and commenters!  Happy Holidays!

22 November 2010

Colorado Rapids Win MLS Cup!

It was not the most exciting match, though the added extra time was intense. The referee was mostly out to lunch throughout the game, and the game's MVP, Colorado's Conor Casey, had his name butchered in the presentation of the trophy. Colorado was even the 7th seed. Such is to be expected, I suppose, in a young league still making its way.

But so what?  They are our team and this is their first trophy, and that is worth celebrating!  Highlights below.

Groupthink or . . . Beware of Climate Labels

What should WE call THEM?
climate skeptics
climate deniers
inactivists
yellow bellied sap suckers
this question is insane

  
Over the weekend I was on an email list of prominent environmental journalists, bloggers, academics and activists in which one blogger raised the question of what terminology to use to describe those folks, you know, the skeptics or deniers.  I watched in disbelief as people that I respect entertained the question in all seriousness.  A climate scientist helpfully made the political connection explicit by recommending the term "inactivists."  Several of the people on the list had in the past used such terms to try to delegitimize my work.

I about blew a seam. Seriously. I emailed the list explaining that this exercise was insane, and about as useful as debating what to call people with dark-colored skin -- I can think of a lot of terms used for that purpose. But why go there? The answer? Because there is an US and a THEM, and being able to tell the difference is important if we are to put people into bins and delegitimize them.

Of course on the email list there was no consensus as to who the US is and who the THEM is, but they did agree that WE needed terms for THEM.

It would have been totally depressing except for the fact that one journalist spoke up to show some uncommon common sense, suggesting that describing context might be more useful than stripping it away. Some folks did not engage, so perhaps there is additional hope. Even so, having healthy discussions on climate change remains quite difficult.

I've asked to be taken off the list, as I am clearly not one of them. Put me in the category of people who think that trying to divide the world according to views on some aspects of climate science is just a bad idea. It is an especially bad idea for journalists and policy wonks.

You can participate in the farce by entering your vote in the poll above.

21 November 2010

Emissions Elasticity Test Results are In

In March, 2009 I noted that the projected decline in carbon dioxide emissions provided a chance for a serendipitous policy experiment:
When we eventually learn what happens to global emissions in response to the economic downturn, we will learn something new about the relationship of GDP growth and emissions. In recent years that relationship has strengthened. What will 2009 tell us?
A paper published in Nature Geoscience today provides the results of that experiment.  The BBC reports:
Carbon emissions fell in 2009 due to the recession - but not by as much as predicted, suggesting the fast upward trend will soon be resumed.

Those are the key findings from an analysis of 2009 emissions data issued in the journal Nature Geoscience a week before the UN climate summit opens.

Industrialised nations saw big falls in emissions - but major developing countries saw a continued rise.

The report suggests emissions will begin rising by 3% per year again.

"What we find is a drop in emissions from fossil fuels in 2009 of 1.3%, which is not dramatic," said lead researcher Pierre Friedlingstein from the UK's University of Exeter.

"Based on GDP projections last year, we were expecting much more."
Why were they expecting much more?

Because there is a long history of assuming rates of decline in energy intensity and carbon intensity that are simply not matched by what is happening in the real world.  As the AFP explains:
The global decrease was less than half that had been expected, because emerging giant economies were unaffected by the downturn that hit many large industrialised nations.

In addition, they burned more coal, the biggest source of fossil-fuel carbon, while their economies struggled with a higher "carbon intensity," a measure of fuel-efficiency.
Overly optimistic assumptions about energy and carbon intensity decline were at the core of our 2008 paper in Nature, titled Dangerous Assumptions, which can be found here in PDF.

The results of the serendipitous emissions elasticity experiment provide additional empirical confirmation of the merits of our arguments. Additional analysis can be found here for the world (graph at the top of this post) and here for the US (graph below), with trends in both instances going the wrong way.
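The arithmetic of the "emissions elasticity" experiment is simple enough to sketch. The 1.3% decline in emissions is from the paper; the GDP figure below is an illustrative placeholder of mine, not a reported value:

```python
# The "emissions elasticity" arithmetic: the ratio of the percentage
# change in CO2 emissions to the percentage change in GDP. The 1.3%
# emissions decline is from the paper; the -1.0% GDP change is an
# illustrative placeholder, not a reported figure.

def emissions_elasticity(pct_change_emissions, pct_change_gdp):
    return pct_change_emissions / pct_change_gdp

print(f"Implied elasticity: {emissions_elasticity(-1.3, -1.0):.2f}")
# An elasticity near or above 1 means emissions are tracking GDP
# closely, i.e., energy and carbon intensity are not falling as the
# optimistic scenarios assumed.
```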

Plenty of Energy

Last week's New York Times had an article arguing that there are plenty of fossil fuels available to meet projected demand for coming decades. If that is the case, then all the more reason to accelerate efforts to increase that demand by expanding access, and to put a small price on today's energy supply, while it is plentiful and relatively cheap, in order to raise the funds necessary to invest in the innovation needed to build a bridge to tomorrow.

Here is an excerpt:
Energy experts now predict decades of residential and commercial power at reasonable prices. Simply put, the world of energy has once again been turned upside down.

“Oil and gas will continue to be pillars for global energy supply for decades to come,” said James Burkhard, a managing director of IHS CERA, an energy consulting firm. “The competitiveness of oil and gas and the scale at which they are produced mean that there are no readily available substitutes in either one year or 20 years.”

Some unpleasant though predictable consequences are likely, of course, as the disaster in the Gulf of Mexico this spring demonstrated. Some environmentalists say that gas from shale depends on drilling techniques and chemicals that may jeopardize groundwater supplies, and that a growing dependence on Canadian oil sands is more dangerous for the climate than most conventional oils because mining and processing of the sands require so much energy and a loss of forests.

And while moderately priced oil and gas bring economic relief, they also make renewable sources of energy like wind and solar relatively expensive and less attractive to investors unless governments impose a price on carbon emissions.

“When wind guys talk to each other,” said Michael Skelly, president of Clean Line Energy Partners, a developer of transmission lines for renewable energy, “they say, ‘Damn, what are we going to do about the price of natural gas?’ ”

Oil and gas executives say they provide a necessary energy bridge; that because both oil and gas have a fraction of the carbon-burning intensity of coal, it makes sense to use them until wind, solar, geothermal and the rest become commercially viable.

“We should celebrate the fact that we have enough oil and gas to carry us forward until a new energy technology can take their place,” said Robert N. Ryan Jr., Chevron’s vice president for global exploration.

Mr. Skelly and other renewable energy entrepreneurs counter that without a government policy fixing a price on carbon emissions through a tax or cap and trade, the hydrocarbon bridge could go on and on without end.
For those interested in stemming the accumulation of carbon dioxide in the atmosphere, even adopting aggressive policies in that direction won't change the underlying dynamics:
Even in an alternative world where there is a concerted, coordinated effort to reduce future carbon emissions sharply, the International Energy Agency projected oil demand would peak at 88 million barrels a day around 2020, then decline to 81 million barrels a day in 2035 — just fractionally less than today’s consumption.

Natural gas use, meanwhile, would increase by 15 percent from current levels by 2035. In contrast, global coal use would dip a bit, while nuclear power and renewable forms of energy would grow considerably.

No matter what finally plays out, energy experts expect there will be plenty, perhaps even an abundance, of oil and gas. IHS CERA, which monitors oil and gas fields around the world, projects that productive capacity for liquid fuels could rise to 112 million barrels a day in 2030 (including 2.75 million barrels in biofuels), from 92.6 million barrels a day this year.

“The estimates for how much oil there is in the world continue to increase,” said William M. Colton, Exxon Mobil’s vice president for corporate strategic planning. “There’s enough oil to supply the world’s needs as far as anyone can see.”

More promising still is that the growing oil production comes from a variety of sources — making the world less vulnerable to a price war with the Organization of the Petroleum Exporting Countries or an outbreak of violence in a major producing country like Nigeria. As IHS CERA and other oil analysts see it, new oil is going to come from both conventional and unconventional sources — from anticipated expansions of fields in Iraq and Saudi Arabia and from a continued expansion of deepwater drilling off Africa and Brazil, in the Gulf of Mexico and across the Arctic, where hopes are high in the oil world, although little exploration has yet been done.

The vast oil sands fields in western Canada, deemed uneconomical by many oil companies as few as 15 years ago, are now as important to global supply growth as the continuing expansions of fields in Saudi Arabia, the current No. 1 producer.

“We’ve got a wealth of opportunities to address around the world,” said Mr. Ryan, Chevron’s vice president.

“We have quite a few deepwater settings all over the world, some of them very new, like the Black Sea. There are Arctic settings. We have efforts under way re-exploring Nigeria, Angola, Australia. The easy stuff has been found, that’s true, but in the end, we still have many basins in the world to explore or to re-explore.”
It is not necessary to agree with rosy scenarios of energy abundance to recognize that the current approach to dramatically reducing carbon dioxide emissions is not going to work, even if successful on its own terms.  The sooner we start building that bridge to the future, the sooner we can walk across it. It won't be built by targets and timetables for emissions reductions, nor by putting a price on carbon.

The entire NY Times article is worth a read.

18 November 2010

Brilliant Speech by Aggreko CEO Rupert Soames

Here is a speech that everyone interested in climate and energy policy should watch.  Speaking before the Scottish Parliament earlier this week, Rupert Soames, CEO of Aggreko -- a world leader in temporary energy supply -- delivers some straight talk to policy makers (BBC coverage).  He focuses on Great Britain, but the lessons are of broad relevance.  Have a look.

17 November 2010

About that 2020 UK Climate Change Act Emissions Reduction Target

At tonight's vibrant discussion of climate science, policy, and the media at the British Council, I commented that the UK Climate Change Act is doomed to fail to meet its target of a 34% reduction in emissions below a 1990 baseline by 2020.  In the discussion with the audience, a familiar voice boomed from the back of the room that most climate experts would disagree with my assessment.

Today the UK National Grid provided some additional insight on this issue when it issued a press release on expected renewable energy by 2020:
A new report published by National Grid today shows that 31,950 MW of existing and proposed renewable generation have agreements in place to connect to the high voltage transmission system by 2020, placing the UK on track to meet 2020 renewable targets.

National Grid analysis identifies that about 29,000 MW of renewable transmission connected generation capacity is needed to meet the UK government’s target of 15 per cent of energy to come from renewable sources by 2020.

National Grid’s Transmission Networks Quarterly Connections Update, published today, shows:

Current transmission connected renewable generation:   4,950 MW

Proposed renewables projects with connection agreements up to 2020 as at 26 October 2010:   27,000 MW
So 27 GW of new renewable capacity works out to about 9 GW of actual supply at a 33% capacity factor (and the word on the London street, literally, is that 33% is overly generous).

Using the simple math of energy and decarbonization from The Climate Fix, 9 GW is the carbon-free energy equivalent of about 12 nuclear power plants.  In The Climate Fix, I argue that the UK needs 40 nuclear power plants' worth of carbon-free energy by 2015 (at the latest) to be on track to meet its emissions reduction goal for 2020.  (See also this paper.)  So the UK is 5 years and at least 28 nuclear power plants' worth of carbon-free energy short (and to get from 2015 to 2020 it would need dozens more).  I'll stand by my judgment.
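For those who want to check the arithmetic, here it is as a short Python sketch. The 33% capacity factor is from the discussion above; the ~750 MW of effective output per nuclear-plant equivalent is my assumption, chosen so that 9 GW converts to the 12 plants cited above:

```python
# Back-of-the-envelope check of the UK renewables arithmetic above.
# Assumptions: a 33% capacity factor for the proposed renewables, and
# ~750 MW of effective carbon-free output per "nuclear plant" unit
# (my reading of the 9 GW -> 12 plants conversion; an assumption).

new_renewables_gw = 27.0        # proposed capacity with connection agreements
capacity_factor = 0.33          # generous, per the word on the London street
effective_supply_gw = new_renewables_gw * capacity_factor   # ~8.9 GW

gw_per_plant = 0.75             # effective output per plant-equivalent
plant_equivalents = effective_supply_gw / gw_per_plant      # ~12

plants_needed_by_2015 = 40      # benchmark from The Climate Fix
shortfall = plants_needed_by_2015 - plant_equivalents       # ~28

print(f"{effective_supply_gw:.1f} GW = {plant_equivalents:.0f} plant "
      f"equivalents; shortfall vs. the 2015 benchmark: {shortfall:.0f}")
```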

If that voice from the back of the room wishes to contest these numbers, he is welcome to do so here, but somehow I doubt he will ;-)

RMS Responds to the Sarasota Herald-Tribune

Several colleagues shared the letter below, submitted by RMS in response to the Sarasota Herald-Tribune's recent articles about reinsurance and catastrophe models.  I offer several comments below the letter.
Letter to the Editor

Your article ‘Florida Insurers Rely on Dubious Storm Model’ (November 14) contains some key inaccuracies about why and how RMS derived its medium-term hurricane risk model, and how these models are used by insurers.

Most fundamentally, catastrophe models deliver probabilistic forecasts not deterministic predictions. A probabilistic activity forecast means that, on average, a certain number of hurricanes can be expected over a period of time. The actual number experienced in a particular period will be just one sample from a broad distribution of possible outcomes.

There is widespread agreement within the scientific community that the number of intense North Atlantic hurricanes has increased since the 1970s, and that since 1995 overall hurricane frequency has been significantly higher than the long-term historical average since 1900. The question is, how much higher is the frequency and how will it impact hurricanes making landfall in the U.S.?

Given the lack of scientific consensus on this subject, we have attempted to answer this question by gaining the perspective of expert hurricane climatologists. The scientists were deliberately kept at a distance from the commercial implications of their recommendations. In our annual reviews of medium-term activity rates (the next five years), we have worked with a total of 17 leading experts, representing a broad spectrum of opinions. The process has evolved year on year, including the introduction of an independent moderator to oversee the elicitation in 2007. More recently we have employed a range of forecast models and methodologies subject to peer review in a scientific publication. Even when the experts involved and the scientific forecasting models have changed, the results of the five-year forecasts have remained remarkably consistent.

If RMS had been estimating medium-term activity rates during the 1970s and 1980s, the medium-term view would have shown lower activity than the historical average of activity. It should also be noted that 2010 has been another very active year for North Atlantic hurricanes. Fortunately none of these made landfall in Florida.

As an independent catastrophe risk modeler, the aim of our models has always been to provide the best unbiased estimate of risk to help the insurance industry and policy-holders to recognize and manage, and where possible, reduce the risk through the application of risk mitigation programs and initiatives. There is no commercial advantage for us to overstate the risk.

Pricing insurance risk involves a complex set of decisions. Models help determine the key drivers of risk, allowing insurers and reinsurers to understand their exposure to catastrophic loss. Other market conditions, such as the worldwide shortage of capital after the high-loss years of 2004 and 2005, also have dramatic influences on the availability and pricing of insurance.

We welcome review and debate of the timeframe over which catastrophe models should be used to characterize hurricane risk.

However, this debate should be based on a balanced and constructive view of the facts.

Sincerely,

Hemant Shah
President & CEO
Risk Management Solutions (RMS)
I have two responses to this letter.

First, Shah is correct that the RMS outlook does not offer deterministic predictions.  However, the measured language in the letter to the editor is contrary to how RMS characterized its outlook at the time.  Consider this excerpt from a peer-reviewed paper describing the prediction methodology, published after RMS issued its 2006-2010 prediction (PDF):
The medium-term perspective is more specifically defined here as a window covering the next 5 yr. There are both scientific and business reasons for choosing the 5-yr horizon. The variance of predictions over 5 yr is smaller than that of seasonal forecasts, in part because of the way that the variations accompanying the state of the El Niño are implicitly accounted for, as 5 yr nears the average period of one ENSO cycle. Predictions at longer timescales, such as 10 or 20 yr are also found to be less skillful, given the observed multidecadal variability. Five yr also bound most business applications within the insurance industry, whether it is planning for capital allocation or for transferring financial risk through Catastrophe Bonds, for example.
There is no discussion of uncertainties or probabilities associated with the prediction in the paper, and the term "prediction" is used throughout.
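To make the distinction concrete: a probabilistic activity forecast specifies an expected rate, and any particular five-year window is just one draw from a distribution around that rate. A minimal sketch, with purely hypothetical numbers:

```python
import random

# Minimal sketch of the probabilistic-vs-deterministic distinction.
# The rate below is hypothetical, not an RMS number: a forecast states
# an expected rate, while any given 5-yr window is one random draw.

expected_per_year = 0.9  # hypothetical expected events per year

def five_year_count(rng):
    # Count Poisson arrivals in a 5-year window by accumulating
    # exponential inter-arrival times.
    t, count = 0.0, 0
    while True:
        t += rng.expovariate(expected_per_year)
        if t > 5.0:
            return count
        count += 1

rng = random.Random(0)
draws = [five_year_count(rng) for _ in range(10_000)]
print(f"mean 5-yr count: {sum(draws) / len(draws):.2f}; "
      f"range: {min(draws)} to {max(draws)}")
```

The mean across many windows recovers the forecast rate, but individual windows scatter widely, which is Shah's point.  My point is that this framing appears in the letter, not in the paper.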

Second, Shah states:
Even when the experts involved and the scientific forecasting models have changed, the results of the five-year forecasts have remained remarkably consistent.
This is indeed remarkable.  So remarkable that, after participating in the RMS expert elicitation in 2008, I looked into it and found that the results have little to do with the choice of experts and much to do with the methodology RMS employs.  Of course the results changed little.  When RMS did change its methodology a bit, the expected losses dropped a bit, and RMS suspended its elicitation process.
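To illustrate why (a hypothetical sketch, not RMS's actual procedure): if experts are asked mainly to weight a fixed menu of forecast models, and every model on the menu projects elevated activity, then almost any panel of experts lands on nearly the same combined forecast:

```python
import random

# Hypothetical illustration, not RMS's actual procedure: "experts" only
# assign weights across a fixed menu of model forecasts, so the combined
# result is confined to the spread of the models themselves.

model_forecasts = [14.2, 15.1, 15.8, 14.9, 15.5]  # hypothetical activity rates

def panel_forecast(rng):
    # One simulated panel: random weights over the same model menu.
    weights = [rng.random() for _ in model_forecasts]
    total = sum(weights)
    return sum(w / total * f for w, f in zip(weights, model_forecasts))

rng = random.Random(42)
results = [panel_forecast(rng) for _ in range(1_000)]
print(f"1,000 simulated panels span {min(results):.2f} to {max(results):.2f}")
```

The spread across panels is bounded by the spread of the model menu, so swapping experts changes little; on this view, the consistency Shah cites is a property of the method, not independent confirmation.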

Along with its peers, RMS is an important company.  Its work can help the global reinsurance and insurance industry operate with a closer connection to empirical science. It is precisely because RMS is so important that it merits close attention.  Like ratings agencies, RMS and other catastrophe modelers are too important to a range of public outcomes to be left to govern themselves.  As much as cat modelers may not welcome greater external attention and accountability, their success and importance mean that time has come.