Crystal balls, virtual realities and ‘storylines’
By Richard S Courtney
“Projections” of anthropogenic climate change reported in the Third Assessment Report (TAR) of the UN Intergovernmental Panel on Climate Change (IPCC) are assessed to be based on pseudo-science.
The world’s news media have recently been reporting a steady drip of propaganda from the United Nations (UN) Intergovernmental Panel on Climate Change (IPCC). Throughout the early months of 2001, IPCC representatives have proclaimed ever more, and ever worse, future disasters from global warming (GW) resulting from an enhanced greenhouse effect induced by emissions of greenhouse gases (GHGs: mostly CO2) from human activities.
The IPCC defines GW to be “climate change”(1), and it has three independent Working Groups (WGs). WG I considers the science of climate with specific reference to GW. WG II considers probable impacts of GW. And WG III considers mainly “mitigation options” (i.e. ways to reduce GW), though adaptation has recently crept onto its research agenda.
Founded by the World Meteorological Organisation (WMO) and the UN Environment Programme (UNEP) in 1988, the IPCC was instructed by the ‘Rio Earth Summit’ held in 1992 to provide updates of GW at 5-year intervals. In 2001, the IPCC’s Third Assessment Report (TAR) was released in small pieces, each presented to the media with substantial – and often untrue – hype. For example, the TAR cites projected warming of 1.4 to 5.8 deg.C by 2100, and Klaus Toepfer – head of UNEP – said this “should sound alarm bells in every national capital and in every local community.”(2)
The IPCC’s Second Assessment Report (SAR) was released in 1996, and it projected warming of only 1.0 to 3.5 deg.C by 2100(1). No new scientific evidence to support the GW hypothesis has been published since the SAR, and the TAR does not cite any such evidence. But much new evidence has been discovered that contradicts the hypothesis and indicates that past projections of warming were exaggerated. It is therefore surprising that the TAR projects warming of 1.4 to 5.8 deg.C by 2100.
All predictions of future GW derive from outputs of General Circulation Models (GCMs) of the Earth’s climate system. In 1990 the IPCC models predicted that most warming would occur near the poles in winter(3), but as early as 1993 a study of data from two independent intergovernmental organisations (i.e. NATO and the Warsaw Pact) showed that Arctic temperatures had cooled between 1950 and 1990 while atmospheric CO2 concentration rose by more than 20%(4). Subsequent IPCC so-called ‘scientific’ reports ignored this, while IPCC representatives argued that melting of polar ice must be absorbing heat and thus keeping the Arctic cool(5). However, new evidence indicates that polar ice did not thin between 1990 and 2000(6), when atmospheric CO2 concentration continued to rise.

Also, as early as 1991 there was evidence that a negative feedback prevents tropical ocean surface temperatures rising above 305K (i.e. the present maximum ocean surface temperature)(7). The IPCC argued that the cause of the limit to maximum sea surface temperature was not known and that, therefore, GW could induce a change to the limit. But two recent studies(8,9) have each indicated that a negative feedback exists. Simply, changes to cloud cover reduce solar heating, and this compensates for any additional energy (e.g. from greenhouse warming) that is in – or is transported to – tropical ocean regions. The GCMs do not include – and do not emulate – this feedback.
So, new evidence proves that the GCMs are not capable of accurately emulating the effects of changes to greenhouse gases in the air: the GCMs predict polar warming that is not happening and tropical warming that cannot happen. The TAR does not report this. It provides larger projected temperature rises than the SAR did, and uses the hypothesis of ‘anthropogenic aerosol’ to excuse the GCMs’ failures. This excuse was first raised in the SAR, but a subsequent paper in E&E(10) showed it to be invalid. The TAR introduces possible effects of natural aerosol (e.g. sea salt) as a way to excuse the failure of the original aerosol excuse. But excuses have risks, and new evidence concerning aerosols casts even more doubt on the GCMs’ performance.
The aerosol hypothesis claims that anthropogenic aerosol provides a cooling effect that masks the GW predicted by the GCMs. But a recent study has shown that soot (i.e. carbonaceous material from combustion) combines with anthropogenic aerosol in the air, and the combination provides strong warming(11). So, the aerosol should have increased GW, not reduced it as the aerosol excuse assumes. The globally averaged warming (i.e. radiative forcing) from the soot/aerosol mixture is calculated to be powerful (0.55 Wm-2), falling between the forcings of carbon dioxide (1.56 Wm-2) and methane (0.47 Wm-2), which the IPCC had claimed to be the two major trace greenhouse gases.
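The relative sizes of these forcings can be set side by side with trivial arithmetic; a minimal sketch in Python, using only the three values quoted above:

```python
# Globally averaged radiative forcings (W/m^2) as quoted in the text.
forcings = {
    "CO2": 1.56,
    "soot/aerosol mixture": 0.55,  # Jacobson's estimate (ref. 11)
    "CH4": 0.47,
}

# Rank them: the soot/aerosol mixture falls between CO2 and methane,
# i.e. it takes second place among the forcings listed.
ranked = sorted(forcings, key=forcings.get, reverse=True)
for name in ranked:
    print(f"{name}: +{forcings[name]:.2f} W/m^2")
```

The point the ranking makes is the one in the text: if the soot/aerosol estimate is right, the combination is the second-largest of these forcings, not a mask for warming.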
The new soot/aerosol finding not only disproves the aerosol excuse (thus removing the last fig leaf covering the GCMs’ inadequacy for predicting GW), it also casts doubt on every aspect of the GCMs. IPCC representatives have repeatedly asserted that radiative forcings were the most indisputable part of their models. But the new soot/aerosol finding shows that the IPCC models are not good representations of radiative forcing. Nobody can know what else may be seriously wrong with the GCMs.
Given this avalanche of evidence against GW projections in the SAR, it is reasonable to consider the reasons why and how the TAR projects even more future GW than the SAR projected.
GW research is ‘big business’. Governments are probably spending more than $5 billion p.a. on it – the US government alone is spending more than $2 billion p.a. – and administrators of scientific institutions can be expected to protect this income from damage. Several institutions are now dependent on GW for their existence. For example, the Hadley Centre in the UK produces the IPCC’s so-called ‘scientific’ reports (i.e. reports from IPCC Working Group I). It was founded by the UK government to study GW, gets almost all its funds from the UK government, and only exists to study GW. Also, scientists are human beings – not saints – so a scientist asked to peer-review a paper is not likely to look favourably on a paper that threatens research funding to his institution (his job is threatened by the paper).
Despite the financial pressure to support IPCC assertions, many scientists have spoken out against GW. For example, the 1992 Heidelberg Appeal says the IPCC assertions of GW result from “pseudo-scientific arguments or false and non-relevant data”, and has been signed by more than 2,000 scientists including 62 Nobel Prize winners. In 1998 more than 18,000 US scientists signed a petition that says GW is based on “unfounded panic-mongering based on flawed ideas”.
The “pseudo-scientific arguments” are clearly demonstrated by the assertions of future GW in the IPCC’s TAR. They arise from Chapter 2 of the report by IPCC Working Group III. These assertions are so extraordinary that in my Expert Peer Review for the IPCC I recommended: “TAR WG III Chapter 2 should not be published”, and I further recommended that “the ‘Writing Teams’ of other TAR Chapters should object to publication of TAR WG III Chapter 2. In my opinion, their failure to object could risk damage to their reputations as a result of association with Chapter 2” because it “is the most disingenuous and dangerous document it has ever been my misfortune to read.”(12) But it was published, and the Vice Chairman of IPCC Working Group II, Martin Manning, then spoke out to make clear that he also disagrees with it.
The TAR had three drafts, each of which gave different projected temperature increases between the years 1990 and 2100. The First Draft was issued on 6th November 1999 and included a graph showing projected temperature increases for that period of from 1.5 to 4.0 deg.C (Figure 9.13 (a), page 66 of Chapter 9, “Projections of Future Climate Change”). The Second Draft, issued on 16th April 2000, included a similar graph that showed the projected temperature increases to be between 1.3 and 5.0 deg.C (Figure 9.18, page 88 of Chapter 9). The Third, and Final, Draft, issued 22nd October 2000, contains another similar graph showing temperature projections from 1990 to 2100 of between 1.4 and 5.8 deg.C (Figure 9.14, page 85 of Chapter 9).
It is important to note that the increased severity of future GW was not indicated by any significant advance in climate science between the issuing of the first draft and the final draft. Indeed, the scientific discoveries pertaining to Arctic ice(6), tropical ocean temperature(8,9) and aerosol effects(11) suggested that previous overestimates of future GW should be reduced. Yet the most extreme projected temperature rise for the period 1990 to 2100 grew from 4.0 to 5.8 deg.C, with little change in the minimum, between the first and final drafts. This was possible because the drafts do not report any science to substantiate these projections: they report model projections based on “scenarios”.
Chapter 2 of Working Group III describes the origin and nature of these “scenarios”.
A sub-committee of IPCC Working Group III produced the scenarios with no input from the climate scientists (IPCC Working Group I) who were invited to comment on the TAR. Most of the scenario authors involved are economists and “futurologists”, and many of those invited to comment on their work were “activists”.
Each “scenario” is a “storyline” that includes assumptions about how the world, civilisation, history and energy use will develop in the next 100 years. There is no limit to the assumptions that might be made about the state of the world in the year 2100, and the authors claim their scenarios are “neither predictions nor forecasts”. However, if the scenarios are not predictions and not forecasts, then they have no practical use, so one wonders why Klaus Toepfer – head of the UN Environment Programme – said they “should sound alarm bells in every national capital and in every local community.”(2)
Also, the scenario authors do not place any probability levels on their scenarios. This means that they – and everybody else – are forced to assume that even the most improbable – some say ridiculous – scenarios are just as likely as those that agree with reality.
Additionally, the word “scenario” may be thought to be ambiguous. The problem arises because the scenario authors use a method that does not permit distinct separation of scenario types. The method has the following stages.
1. “Storylines” of future human activity changing over time are created (i.e. social/technology change scenarios).
2. For each “storyline”, the GHG emissions anticipated in future years are estimated (i.e. emissions modelling).
3. The changes to mean global temperature in future years resulting from the anticipated future GHG emissions are estimated (i.e. climate modelling).
The complete scenario contains all three stages: (1), (2) and (3). Hence, in each complete scenario, the accumulating effects of social/technology changes alter extrapolations from existing social/technology systems, existing GHG emissions, and existing climate.
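The cascading structure can be illustrated with a toy pipeline. Everything below is hypothetical – the functions and coefficients are invented for illustration and come from neither the SRES nor the TAR – but it shows how an unexamined stage-1 assumption propagates unchallenged through to the final temperature projection:

```python
def storyline_population(year):
    """Stage 1: a 'storyline' assumption about social change (hypothetical)."""
    return 6.0e9 * 1.01 ** (year - 2000)   # assumes 1% p.a. population growth

def emissions_model(population):
    """Stage 2: emissions anticipated from the storyline (hypothetical)."""
    return population * 1.1e-9             # GtC per person, invented coefficient

def climate_model(emissions_gtc):
    """Stage 3: temperature change estimated from emissions (hypothetical)."""
    return 0.3 * emissions_gtc             # deg C per GtC, invented sensitivity

# A complete 'scenario' chains all three stages, so whatever is assumed in
# stage 1 is inherited by stages 2 and 3 without independent scrutiny.
pop_2100 = storyline_population(2100)
emis_2100 = emissions_model(pop_2100)
dT = climate_model(emis_2100)
print(f"Hypothetical 2100 projection: {dT:.1f} deg C")
```

Change the invented stage-1 growth rate and the headline temperature changes with it – which is the structural point made in the text.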
The technology, wealth and population growth assumptions that go into the “storylines” and the political and social engineering required for the “storylines” are not published. Therefore, they cannot be challenged. However, it is possible to consider if each scenario projects a change to GHG emissions or climate conditions (e.g. atmospheric CO2 concentration) that is reasonable in the immediate future.
The scenario authors say the “scenarios deal with the future, so they cannot be compared with observations”. On its face this seems reasonable, but it is not true, because the scenarios project from the present, and some of them start from values that disagree with observations of the actual present climate. The following examples illustrate this.
* All the scenarios set their CO2 emissions for year 2000 at 7.9 to 8.1 GtC. The likely fossil fuel figure is 6.2 GtC, but the scenarios add to this 0.7–1.2 GtC for “deforestation”, which means that their implied figure for fossil fuel emissions in year 2000 is about 6.8 GtC – too high by 10%.
* All the scenarios except B1 have atmospheric CO2 concentration rising exponentially (i.e. at 0.4% p.a.) from the year 2000, but the recent trend in the concentration has been a declining rate of increase.
* All the scenarios except B1 assume that the downward trend of the past 16 years in methane’s rate of increase will suddenly reverse in the year 2000, and in two cases (A1FI and A2) it rises dramatically. But nobody knows why the existing downward trend exists, so how can this trend be projected to reverse as a result of social/technology change?
* All the scenarios assume that SO2 emissions will decrease even if fossil fuel consumption increases.
* All the scenarios except B1 project huge increases to fossil fuel usage by 2100, starting from now. The increase is a factor of 6.3 for scenario A1B. There is a staggering projected increase to coal production: a factor of 12 for scenario A2, rising to a factor of 14 for the most extreme scenario (while SO2 emissions decrease!).
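The arithmetic behind the first two bullets can be checked directly; a brief sketch (the 368 ppm starting concentration is an assumed round figure for the year 2000, not taken from the text):

```python
# (1) Year-2000 CO2 emissions. The scenarios use 7.9-8.1 GtC in total and
# attribute 0.7-1.2 GtC of that to deforestation, leaving an implied
# fossil-fuel figure of about 6.8 GtC against a likely actual 6.2 GtC.
implied_fossil = 6.8   # GtC, implied by the scenarios
likely_fossil = 6.2    # GtC, likely actual figure
excess_pct = (implied_fossil / likely_fossil - 1) * 100
print(f"Implied fossil emissions exceed the likely figure by {excess_pct:.0f}%")

# (2) Exponential concentration growth at 0.4% p.a., as most scenarios
# assume, compounds substantially over a century.
start_ppm = 368.0                      # assumed year-2000 concentration
end_ppm = start_ppm * 1.004 ** 100     # 100 years at 0.4% p.a.
print(f"0.4% p.a. for 100 years: {start_ppm:.0f} ppm -> {end_ppm:.0f} ppm")
```

The first calculation reproduces the "too high by 10%" figure in the bullet; the second shows the scale of what a fixed 0.4% p.a. assumption implies when compounded to 2100.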
Importantly, the probabilities of these scenarios are not assessed, and if they cannot be compared to observations then they are not science; they are guesses. Chapter 2 of Working Group III admits this.
The Chapter considers only one type of quantitative future-predictive climate model and “does not include quantitative scenarios produced using other methods; for example heuristic estimation such as Delphi”. The Chapter does not state why it chooses to consider only one type of quantitative model, but says that its ‘Writing Team’ listed 519 scenarios of the type they decided to accept, and that 150 of these “were mitigation (climate policy) scenarios”. Also, “Of the 150 mitigation scenarios, a total of 126 long-term scenarios that cover the next 50 to 100 years have been selected for this review”. So, from the 519 scenarios of the only type they were willing to consider, the Writing Team considered 126.

The ‘Writing Team’ formulated “narrative storylines” for the future and selected four models – from their selection of 126 models – to describe their four “storylines”. The Chapter says the Writing Team used few “storylines” because they “wanted to avoid complicating the process by too many alternatives”. But they later increased the “storylines” from four to six to obtain the 5.8 deg.C projection in the Chapter’s final draft. The Writing Team formed “modelling groups” that each had “principal responsibility” to develop a “marker scenario” for one of the “storylines”. The Writing Team’s choice of the marker scenarios “was based on extensive discussion” that included “preference of some modelling teams”.

An original total of 40 scenarios was generated from the four storylines, and this was increased to 60 scenarios generated from the six storylines in the Chapter’s final draft. The Chapter says, “the markers are not necessarily the median or mean of the scenario family, but are those scenarios considered by the SRES writing team as illustrative of a particular storyline”.
Simply, the Chapter explains that the six models selected as “markers” by the Writing Team are those that the Writing Team most liked, and these “markers” cannot be claimed to be typical of anything.
Put another way, the “storylines” are a selection, made by personal preference, of 6 untypical models from the 126 models that were chosen from a list of 519 quantitative models of one particular type – and other types of quantitative model also exist. The Chapter does not state the simple truth that such selection permits almost any storylines, which could generate almost any preferred projections of the future.
If that seems like pseudo-science, then the Chapter contains worse. The Chapter states that, “Most generally, it is clear that mitigation scenarios and mitigation policies are strongly related to their baseline scenarios, but no systematic analysis has published on the relationship between mitigation and baseline scenarios”. This statement appears in the middle of the Chapter and is not included in the Chapter’s Conclusions. The failure to list it as a conclusion is strange, because the statement is an admission that the assessed models do not provide useful predictions of the effects of mitigation policies. How could the predictions be useful if the relationship between mitigation and baseline is not known?
Also, the only valid baseline scenario is an extrapolation from current trends. The effect of an assumed change from current practice cannot be known if there is no known systematic relationship between mitigation and baseline scenario. But each of the scenarios is a claimed effect of changes from current practice. So, the TAR itself says the scenarios are meaningless gobbledygook.
The Chapter is honest about one thing, though. It openly admits why it pretends such mumbo-jumbo is science. Its Introduction states that the Chapter considers “societal visions of the future” that “most share a common goal: to explore how to achieve a more desirable future state”. There are many differing opinions on what would be “a more desirable future state” (c.f. those of Mussolini and Marx), but the Chapter does not overtly state its definition of “desirable”.
And the Chapter concludes: “Perhaps the most powerful conclusion emerging from both the post-SRES analyses and the review of the general futures literature is that it may be possible to very significantly reduce GHG emissions through integration of climate policies with general socio-economic policies, which are not customarily [considered] as climate policies at all.”
Simply, this conclusion of Chapter 2 of WG III TAR calls for changes to socio-economic policies that are not climate policies (at very least, this conclusion provides an excuse for such changes). And the Chapter’s Introduction states that these changes are intended to achieve “a more desirable future state” based on “societal visions of the future”.
This conclusion – derived by the method that generated it, for the purpose stated in the Chapter – is an abuse of science. Indeed, it is not science to make predictions of how to change the future by use of selected scenarios when “no systematic analysis has published on the relationship between mitigation and baseline scenarios”: this is pseudo-science of precisely the same type as astrology.
1. ed. Houghton JT et al., Intergovernmental Panel on Climate Change, “Climate Change 1995: The Science of Climate Change”, Cambridge, 1996
2. Reuters. Agency Press Release reporting comments of Toepfer K. 22.01.01.
3. ed. Houghton JT et al., Intergovernmental Panel on Climate Change, “Climate Change: The IPCC Scientific Assessment”, Cambridge, 1990
4. Kahl JD et al., “Absence of evidence for greenhouse warming over the Arctic Ocean in the past 40 years”, Nature, v361, 335-337 (1993)
5. MacCracken M. personal communication to Daly J & Courtney RS by email. 28.02.00. Published at http://www.users.bigpond.com/kparish/cli...acCracken1
6. Winsor P., Geophysical Research Letters, v28, no.6, 1039-1041 (2001).
7. Ramanathan & Collins, Nature, v351, 27-32 (1991)
8. Geophysical Research Letters, v28, no.4, 792-793 (2001)
9. NASA press release March 2001
10. Courtney RS, Energy & Environment, v10, no.10, 491-502 (1999)
11. Jacobson MZ, "Strong radiative heating due to the mixing state of black carbon in atmospheric aerosols", Nature, v409, 695-697 (2001)
12. Courtney RS, Expert Peer Review of the 2nd Draft of IPCC Working Group III Chapter 2 (2000)
It is our attitude toward free thought and free expression that will determine our fate. There must be no limit on the range of temperate discussion, no limits on thought. No subject must be taboo. No censor must preside at our assemblies.
–William O. Douglas, U.S. Supreme Court Justice, 1952