'Climate Change' as 'Paradigm Shift'
A conceptual foray that seeks to further our interpretive understanding of WTF is happening to scientific enquiry in the post-scientific era
Editorial prelim: this post is way too long for email; please read online or in the app.
Scientists don’t believe, they have evidence.
Thus Kary Mullis, as cited by Celia Farber in her amazing Serious Adverse Events (Chelsea Green, 2023, p. 129), which I’m almost done reading. If you don’t have a copy yet, change that.
Northern Entanglements of Media, Capital, Governance
Earlier today, while sipping my morning coffee and lazily browsing through Norwegian legacy media, I stumbled across a very troubling piece. I first noted a small headline on Bergens Tidende’s homepage, linking to a piece of what I call ‘secondary’ journalism, i.e., a summary someone wrote of something that appeared elsewhere. This happens quite frequently here in Norway, as most legacy media is owned by the Schibsted Group, including, in particular, the country’s leading newspapers in Oslo (Aftenposten) and Bergen (Bergens Tidende).
As an aside, we briefly note that it is equally common to find such notes (‘first published at [insert outlet]’), yet even the briefest glance at the ownership structure behind both Aftenposten and Bergens Tidende is telling: both are owned by the Schibsted Group, whose majority owner (60.1%) is the Tinius Trust, a foundation organised and incorporated in the mid-1990s by legendary journalist Einar Fredrik Åke ‘Tinius’ Nagell-Erichsen through his investment company Blommenholm Industrier, which owns another quarter (25.1%) of Schibsted shares. The remainder, as Wikipedia informs us (of all places), is held by the likes of Deutsche Bank, Goldman Sachs, JP Morgan Chase, and UBS, although their holdings remain in the single digits. Needless to say, the Tinius Trust is big on its ‘corporate governance principles’, which express the foundation’s ‘fundamental belief’ in being an ‘active owner and seek[ing] influence…as the largest owner of Schibsted’.
So far, nothing new under the Northern sun, which means we may return to the issue at hand.
‘Legacy Media’, Predictably, Misses the Point
Yesterday, 1 Oct. 2023, Bergens Tidende ran an interesting piece entitled ‘Official Statistics Bureau Published Controversial Paper on Climate [Change]’ (orig. SSB publiserte kontroversiell forskningsartikkel om klima), which was not much more than a cheap recap of what Aftenposten had published earlier. Since both outlets are owned by Schibsted, this is a case of ‘same shit, different smell’.
Hence, we shall follow the latter’s account (paywalled original, archived work-around), whose title and sub-title spill the beans:
SSB [Statistisk Sentralbyrå, or Statistics Norway] Published Controversial Research Paper on Climate Change—gets slaughtered
Statistics Norway published a research paper that casts doubt on anthropogenic climate change. Now they want to change their own rules [of what gets published].
You probably guessed why these pieces triggered my interest…here’s a partial translation of the piece’s highlights. Written by Stine Barstad, the article went live on 1 Oct. 2023 in the early afternoon (my translation and emphases):
‘Results imply that the effect of man-made CO2 emissions does not appear to be sufficiently strong to cause systematic changes in the pattern of the temperature fluctuations.’
This is part of the conclusion of a research article published in the SSB news feed on Monday morning.
The claim, which regularly crops up in the climate debate, runs counter to what a unified global research community supports:
Human emissions of greenhouse gases from the burning of oil, coal and gas are causing the planet to warm up. This has been thoroughly documented and proven.
Leaving aside the problematic notion of ‘proof’ in a scientific sense (at best, a scientific claim functions like an axiom, i.e., something accepted as true until shown otherwise): there is a wide gap between something that is ‘thoroughly documented’ and something that is ‘proven’. For whatever reason, Ms. Barstad is not inclined to consider such more-than-just-semantic nuances.
But it wasn’t just the content of the message that made many people choke on their breakfast Monday morning. It was the sender.
SSB lends credibility
‘Statistics Norway lends credibility to conclusions that are not tenable and would otherwise not receive any attention’, says Christian Bjørnæs, Director of Communications at the Cicero Centre for Climate Research.
Others go further and accuse SSB of spreading alternative facts.
The article is now being eagerly shared on social media by people who use the SSB article to support their view that climate change is not caused by our emissions.
See how fast we went from something that is ‘thoroughly documented’ to the typical invocation of causative allegations. Judging from the editorial slant (and, possibly, the personal bias of Ms. Barstad), whatever ‘science’ used to be, it is now increasingly wielded as a moral bludgeon. How dare you, unwashed, deplorable rabble, ‘eagerly share’ such heretical content on social media?
Cicero [the climate research centre whose communications director is quoted above] is highly critical of the fact that the article was published on SSB’s website alongside the SSB logo and without any further explanation of what the article was, or who was behind it.
This, by the way, is only objectively true if we simply discount all the other ‘research papers’ published, or disseminated, by Statistics Norway for, well, public policy discussion purposes. Obviously, these ‘other’ pieces do not deviate from the mainstream, hence what the authors of the climate paper did is, in fact, ‘controversial’, if not outright ‘harmful’, right? Right /sarcasm.
Interestingly, these kinds of research papers are not ‘peer-reviewed’, as is the case with scientific journal or monograph publications, but they are subject to internal refereeing, i.e., someone at Statistics Norway actually read the paper, found it worthwhile, and got it published—and this is the thought-and-deed crime that upsets the guardians of discourse.
Climate scientist Bjørn Samset describes the analysis as ‘so weak that it is almost worthless’.
Here we can see Mr. Samset calling the internal referees at SSB—morons. Yet the following paragraph also reveals a more sinister issue at hand:
Who is really behind the controversial article, and what does it say?
Statistician and retired civil engineer
The article summarises a so-called discussion note from the research department of Statistics Norway. Until Thursday, the SSB article itself said nothing about what such a discussion paper is. Nor did it say anything about who is behind it.
You had to go to a PDF article written in English to find out. [oh, what a bummer—one has to click on a link and read something in English…]
Linda Nøstbakken, Director of Research at Statistics Norway, says that it is common in social research to publish such working papers to get input and discussion before they are sent on for publication. ‘It has been read and commented on by colleagues at Statistics Norway, but not peer-reviewed. The conclusions are the authors’ own [lit. ‘at the authors’ own expense’] and are not endorsed by Statistics Norway.’
No one doing scholarship who is in his or her right mind needs official endorsement, by the way. That is for cult followers, not for pack leaders.
In this case, the conclusions are drawn by John Kristoffer Dagsvik and co-author Sigmund H. Moen. Dagsvik is a retired statistician and researcher on pension conditions for Statistics Norway. Moen is a retired civil engineer.
Now we know who co-authored the paper—and here’s where it gets ‘funny’, as in revelatory, as regards the advanced decay of science as an enterprise as it has been practiced since what scholarship calls ‘the Scientific Revolution’ of the 17th century (apart from Steven Shapin’s work, the historiographic standard account is H. Floris Cohen’s The Scientific Revolution, Chicago, 1994).
As We Enter a New Paradigm, Science Revealed to be Over
Remember, all of the above is given, cited, and moralised before a single statement from the research paper is quoted. In other words, legacy media ‘reporting’ puts the proverbial cart (contextualising, editorialising) before the horse (the evidence).
So what is it that the two have done that has attracted so much attention?
Study Doesn’t Find CO2 Effect
Dagsvik and Moen have analysed temperature series from NASA from 95 stations that measure surface temperatures around the world.
The analysis is an update of an article published in the Journal of the Royal Statistical Society [Series A, no. 183, pp. 883-908] in 2020.
Oh, look, there actually is an underlying, peer-reviewed piece that, apparently, Mr. Dagsvik and Mr. Moen, both retirees, co-authored and submitted to a truly ‘scientific’ journal. You can find the paper (entitled ‘How does the temperature vary over time? Evidence on the stationary and fractal feature of temperature fluctuations’) by clicking on the title. By the way, I do have (institutional) access to the content published by none other than Oxford University Press (a very reputable publisher), and if you don’t but wish to read that paper, drop me a line via email.
The article itself was submitted in 2016 and apparently went through a number of tweaks, edits, and, of course, the review-plus-editorial process before it was published in 2020. It has yet to elicit any comments from ‘the scientific community’. Yet the authors, who published a refereed piece of scholarship in a relevant academic outlet, are buried for daring to ‘update’ their analysis, and by whom? By someone (Stine Barstad) who, if her LinkedIn profile is somewhat accurate, has worked on literally everything under the sun, holds an undergraduate degree in ‘journalism and economics’ from the City University of London (1998-2001), and a master of arts degree in ‘politics of the world economy’ from the LSE (2003-04).
In this undertaking, she is assisted by the road-rage equivalent of a fully credentialised ‘climate scientist™’, Bjørn Samset (bio here), who barely keeps his emotions at bay. Originally trained as a physicist, he holds a Ph.D. in nuclear physics from the University of Oslo (2006), and his faculty bio, needless to say, describes him as a ‘science disseminator’.
If you or I (who, by the way, hold none of these fancy titles, give no talks like ‘how we can save the world’ the way Dr. Samset does, and just read papers) did something like this, many people would roll their eyes.
With that being said, shall we look at the paper by Dagsvik and Moen, two ‘retired’ professionals, one a statistician, the other a civil engineer? What could these two people possibly know that, say, ‘journos’ like Stine Barstad or ‘science disseminators’ like Bjørn Samset wouldn’t?
So, let’s find out what all the fuss is about, shall we?
Dagsvik and Moen, ‘To what extent are temperature levels changing due to greenhouse gas emissions?’ (2023)
From the authors’ abstract (emphases mine, references omitted; source):
Weather and temperatures vary in ways that are difficult to explain and predict precisely. In this article we review data on temperature variations in the past as well as possible reasons for these variations. Subsequently, we review key properties of global climate models and statistical analyses conducted by others on the ability of the global climate models to track historical temperatures. These tests show that standard climate models are rejected by time series data on global temperatures. Finally, we update and extend previous statistical analysis of temperature data. Using theoretical arguments and statistical tests we find that the effect of man-made CO2 emissions does not appear to be strong enough to cause systematic changes in the temperature fluctuations during the last 200 years.
Sounds spicy, eh? Let’s dive into the paper.
A typical feature of observed temperature series over the last two centuries is that they show, more or less, an increasing trend…A key question is whether this tendency is part of a cycle, or whether the temperature pattern during this period deviates systematically from previous variations. Even if recent recorded temperature variations should turn out to deviate from previous variation patterns in a systematic way it is still a difficult challenge to establish how much of this change is due to increasing man-made emissions of carbon dioxide (CO2) and other greenhouse gases.
At present, there is apparently a high degree of consensus among many climate researchers that the temperature increase of the last decades is systematic (and partly man-made). This is certainly the impression conveyed by the mass media. For non-experts, it is very difficult to obtain a comprehensive picture of the research in this field, and it is almost impossible to obtain an overview and understanding of the scientific basis for such a consensus. By looking at these issues in more detail, this article reviews past observed and reconstructed temperature data as well as properties and tests of the global climate models (GCMs). Moreover, we conduct statistical analyses of observed and reconstructed temperature series and test whether the recent fluctuation in temperatures differs systematically from previous temperature cycles, due possibly to emission of greenhouse gases.
This, by the way, explains—to me at least—the fury of legacy media: these two ‘retirees’ had the audacity to (finally) call out the gaslighting by mainstream media. Kudos, gentlemen!
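To make the kind of question being asked here concrete, consider a deliberately naive check of whether the ‘recent’ part of a series fluctuates differently from the ‘earlier’ part. This is merely a toy sketch of the logic (the synthetic series, the split point, and the choice of an equal-variance test are mine for illustration; the paper’s actual tests handle long memory far more carefully):

```python
# Toy version of 'do recent fluctuations differ systematically from
# earlier ones?': an equal-variance test on two halves of a series.
import numpy as np
from scipy.stats import levene

rng = np.random.default_rng(3)
series = rng.normal(size=400)        # stand-in 'temperature' record
early, late = series[:200], series[200:]

stat, p_value = levene(early, late)  # tests equality of variances
print(f"Levene W = {stat:.2f}, p = {p_value:.3f}")
# A large p-value gives no evidence that the recent pattern of
# variation differs systematically from the earlier one.
```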
Historically, however, there have been large climatic variations. Temperature reconstructions indicate that there is a ‘warming’ trend that seems to have been going on for as long as approximately 400 years. Prior to the last 250 years or so, such a trend could only be due to natural causes. The length of the observed time series is consequently of crucial importance for analyzing empirically the pattern of temperature fluctuations and to have any hope of distinguishing natural variations in temperatures from man-made ones.
Dagsvik and Moen then go through a long listing of such historical temperature records; the oldest, from England, dates back to 1659 and is the longest such record to include monthly temperatures.
How, by the way, do Dagsvik and Moen proceed from these common-sensical observations?
One way to distinguish the effect of man-made emissions of greenhouse gases on temperatures from the effect of natural causes, is to check if temperature variations can be explained using GCMs. For this to be possible, a minimum requirement must be that GCMs are able to reproduce historically observed temperatures. Several researchers have applied advanced statistical methods to investigate the ability of GCMs to track global temperature series, and we review results from their analysis.
Oh, goody, a meta-analysis, so to speak, but they add a few words of warning:
Since the total impact on climate from various sources is not well understood the fluctuations in observed and reconstructed time series temperature data may be hard to explain. They may therefore to some extent appear unsystematic (stochastic). An alternative research approach is therefore to investigate whether the temperature series are consistent with a statistical model, and what the features of such a model might be. This was the approach taken by Dagsvik et al. (2020) and several of the references therein. A rigorous statistical analysis of the temperature phenomenon is, however, more complicated than might be expected. There are several reasons for this. First, it turns out that temperature, as a temporal process, appears to have cycles that can last for decades (long memory), if not hundreds of years. It is for precisely this reason that even such a prolonged increase in recent observed temperature series should not simply be interpreted as a trend leading to permanent climate change.
Note the extremely clear language and the careful wording that seeks to distinguish between observations (long temperature records) and their interpretation. I for one wish we had more papers of this kind, and fewer of the other sort.
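Since ‘long memory’ carries much of the weight in that passage, here is a minimal sketch of one classic way to quantify it: the aggregated-variance estimate of the Hurst exponent. The estimator choice is mine for illustration; the paper’s own machinery (fractional Gaussian noise models) is considerably more involved:

```python
# Aggregated-variance estimate of the Hurst exponent H.
# For a long-memory process, Var(block means of size m) ~ m^(2H - 2);
# H near 0.5 means no memory, H near 1 means strong persistence.
import numpy as np

def hurst_aggvar(x, block_sizes=(4, 8, 16, 32, 64)):
    x = np.asarray(x, dtype=float)
    log_m, log_v = [], []
    for m in block_sizes:
        n_blocks = len(x) // m
        if n_blocks < 2:
            continue
        block_means = x[: n_blocks * m].reshape(n_blocks, m).mean(axis=1)
        log_m.append(np.log(m))
        log_v.append(np.log(block_means.var()))
    slope = np.polyfit(log_m, log_v, 1)[0]  # slope = 2H - 2
    return 1 + slope / 2

rng = np.random.default_rng(0)
print(round(hurst_aggvar(rng.normal(size=2048)), 2))  # white noise: ~0.5
```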
Basically, Dagsvik and Moen structure their long paper in the following way:
Section 2 describes historical data and climatic variations in the past and
Section 3 discusses these variations
Section 4 discusses the main global climate models (GCM), with Section 5 providing a review of the literature on these models
Section 6 formulates their own approach, with Section 7 dealing with the results from their own investigation
Section 8, finally, provides ‘bounds of maximum temperature values’ under specific conditions.
In the following, I shall provide summaries of sections 2 and 3 as well as sections 4 and 5; I shall thereafter linger a bit longer on sections 6 through 8. Once I’m through with the paper, we shall revisit what legacy media and the guardians of ‘the science™’ in the post-scientific era are saying, before briefly discussing the implications.
Sections 2 and 3: Historical Variations
‘Apart from the last 250 years, data are based on reconstructions from several sources such as ice cores, tree rings and lake sediments’, the authors note, adding that some ice cores may allow for the reconstruction of past temperatures up to 2 million years back. The ‘modern’ record, conducted via satellite observations of the troposphere, runs since 1979.
Ice cores from Greenland and Antarctica show that four interglacial periods (125,000, 280,000, 325,000 and 415,000 years before now) were ‘warmer than the present’ (Bazinga, Dr. Haustein). They continue to note that ‘the typical length of a glacial period is about 100,000 years, while an interglacial period typically lasts for about 10-15,000 years. The present inter-glacial period has now lasted about 11,600 years.’
The authors survey a number of papers, all of which indicate ‘during the past 10,000 years temperatures over long periods were higher than they are today. The warmest phase occurred 4,000 to 8,000 years ago and is known as the Holocene Climate Optimum or the Atlantic Period’.
So much for the reconstructions; what about observations since 1979? First, many problems with methodological implications are noted:
During the 20th century the makeup of the measurement network shifted…Many of the land stations have also moved geographically during their existence, and their instrumentation changed…
The satellite temperature records also have their problems, but these are generally of a more technical nature and therefore correctable. In addition, the temperature sampling by satellites is more regular and complete on a global basis than that represented by the surface records.
The different temperature records might not be of equal scientific quality…large administrative changes…[and] the degree of uncertainty (measurement errors) in these global temperature series have changed over time. Also, the number of weather stations and sea observation sites have increased over time. As a result, the variance of the measurement errors has probably decreased over time.
Pointing to Essex et al. (2006), Dagsvik and Moen posit that ‘the whole concept of global temperature’ is subject to questioning. Put differently, it might turn out to be better described as a ‘social construct’ (oh, the irony), as ‘any given set of local temperature measurements distributed across the world can be interpreted as [showing that] there are both a “warming” and a “cooling” tendency going on simultaneously, making the notion of global warming physically ill-posed’.
As regards the key sources of these variations, the sun as primary energy input is mentioned, as are the Milankovitch cycles behind the ice ages (which involve the Earth’s axial tilt, its precession, and its elliptical orbit around the sun). Moreover, the ocean’s enormous capacity to store CO2 is also mentioned—as is Henry’s Law, which makes the amount of a gas dissolved in a liquid (here: the ocean) proportional to its partial pressure in the atmosphere.
Accordingly, one explanation…is that the variation of the storage capacity of the oceans, due to fluctuating temperatures, is the dominating effect.
In addition to seasonal variations and glacial periods, observed temperatures seem to vary for reasons that are only partly understood. Some of the variations are due to solar radiation, cloud formations and greenhouse gases (water vapor, argon, CO2, aerosols, methane, nitrous oxide and ozone).
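To put Henry’s Law (and the ‘storage capacity of the oceans’ just quoted) in concrete terms: the concentration of a gas dissolved in water is proportional to its partial pressure above the water, and the proportionality ‘constant’ falls as the water warms, so a warming ocean holds less CO2, all else being equal. A minimal sketch; the solubility constant and its temperature dependence are standard textbook values for CO2 in water (Sander’s compilation), while the assumed partial pressure is my rough figure:

```python
# Henry's law for CO2 in water: dissolved concentration c = H(T) * p.
import math

H_298 = 3.3e-4        # mol m^-3 Pa^-1 at 298.15 K (Sander 2015)
VANT_HOFF = 2400.0    # K; temperature dependence of ln H

def henry_co2(T_kelvin):
    """Henry's-law solubility 'constant' of CO2 in water at T."""
    return H_298 * math.exp(VANT_HOFF * (1.0 / T_kelvin - 1.0 / 298.15))

p_co2 = 42.0  # Pa; roughly 420 ppm of ~1 atm (my assumption)
for T in (278.15, 288.15, 298.15):  # 5, 15, 25 degrees Celsius
    print(f"{T - 273.15:4.0f} C: {henry_co2(T) * p_co2:.3f} mol CO2 per m^3")
```

Between 5 and 25 degrees Celsius, the dissolved concentration drops by almost half, which is the mechanism behind the ‘fluctuating storage capacity’ quoted above.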
Put bluntly: it could be many of these factors, they may be (partially) correlated, or it could be something else entirely, i.e., we don’t really know. The sun’s role is crucial, esp. the so-called grand solar cycles of 350-400 years, but it may well be that other planets in our solar system also affect variations on Earth, as do clouds (about which we know little, if anything).
The most important greenhouse gas is water vapor which varies greatly at any given place and time. About 66-85% of the natural greenhouse effect can be traced back to water vapor and small droplets in clouds. The next most significant greenhouse gas is CO2, which is different from water vapor in that its concentration in the atmosphere is pretty much the same all over the Earth. The amount of CO2 and other gases that humans have added to the atmosphere over the past 250 years increases the ability of the atmosphere to impede heat from diffusing into space.
There is also mention of El Niño/La Niña cycles and, of course, of the conventional offramp for whatever we don’t know: ‘chaos’ (theory).
Sections 4 and 5: Global Climate Models and a Naked Emperor (IPCC)
The main argument of these two sections, which follows work by Judith Curry, arrives as a number of strongly worded assertions:
While some of the relations in GCMs are based on well-established theory from physics, such as the Navier-Stokes equations, there are representations that are only approximations and not based on physical laws…Common resolutions for GCMs are about 100-200km in the horizontal direction and about one km vertically, and a time-stepping resolution of about 30 minutes. Due to the relatively coarse resolutions of the models, there are many important processes that take place within the cells determined by the model resolution, such as clouds and precipitation [to say nothing about, e.g., sea ice, fog, and everything in-between]
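To get a feel for those numbers, here is a back-of-the-envelope sketch; the specific grid spacing and the number of vertical levels are my assumptions within the quoted ranges:

```python
# Rough scale of a GCM grid at ~150 km horizontal resolution,
# ~1 km vertical levels, and 30-minute time steps.
EARTH_SURFACE_KM2 = 510e6  # approximate surface area of the Earth

cell_km = 150.0            # assumed horizontal grid spacing
levels = 40                # assumed number of vertical levels
columns = EARTH_SURFACE_KM2 / cell_km**2
cells = columns * levels
steps_per_century = 100 * 365.25 * 24 * 2  # two steps per hour

print(f"{columns:,.0f} columns, {cells:,.0f} cells")
print(f"{steps_per_century:,.0f} time steps per simulated century")
# Anything smaller than a 150 km cell (clouds, precipitation, sea
# ice, fog) cannot be resolved and must be parameterised instead.
```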
Note, briefly, that the ‘resolution’ of such GCMs is at least once removed from observed reality, as these ‘cells’ are determined by the models themselves. If you thought that was ‘bad’, wait for the listing of limitations, incl.:
The effect of increasing CO2 emissions on the climate cannot be evaluated precisely on time scales that are of the order of less than or equal to 100 years.
There is a lack of knowledge of the uncertainty which is partly due to the choice of the subscale models and the parameterization and calibration of these, as well as insufficient data [I skipped their discussion of subscale models]
According to some evaluations, GCMs are not sufficiently reliable to distinguish between natural and man-made causes of the temperature increase in the 20th century. Some of the predictions from GCMs are accompanied by standard errors, as in statistical analysis. But since the GCMs are deterministic models one cannot interpret these standard errors in the same way as in statistics. [apart from assertions such as ‘this has been proven by the models’, the models are also deterministic, i.e., they project their ‘findings’ backwards in time; if this were a historical text, I’d call this an anachronism]
GCMs are typically evaluated applying the same observations used to calibrate the model parameters. [who’s watching the watchmen? Oh, yes, the same peers who review the studies describing the findings—how does one spell ‘conflict of interest’ in climate science?]
I suppose another long-ish quote from Dagsvik and Moen suffices to round off this section:
In an article in Science, Voosen (2016) writes: ‘Indeed, whether climate scientists like to admit it or not, nearly every model has been calibrated precisely to the 20th century climate records—otherwise it would have ended up in the trash’. Unfortunately, models that match 20th century data as a result of calibration using the same 20th century data are of dubious quality for determining the causes of the 20th century temperature variability.
More interesting, methodologically speaking, is the subsequent review of the ability of these GCMs to predict past temperatures accurately, so-called ‘hindcasts’:
A key question is therefore whether the GCMs can be trusted to provide reliable predictions. One way of examining the quality of the GCMs is to check if the temperature predictions (hindcasts) from the GCMs are able to track the global temperature time series…
Prior to Beenstock et al. (2016) the ability of GCMs to track global temperature series has not, to the best of our knowledge, been subjected to rigorous empirical testing by means of advanced statistical methods such as cointegration tests.
Let that sink in for a moment: the entire UN-led effort since the late 1980s proceeded without any rigorous statistical testing of whether the models track observed temperatures until—2016.
Beenstock et al. (2016) have used data…produced by 22 selected GCMs for the period 1880-2010 to test whether the regression model…fits the data…[they] found that statistical tests rejected the hypothesis…which means that the regression model postulated above does not hold. In other words, this means that the process produced by the GCMs is unable to track global temperature.
McKitrick and Christy (2020) have done a somewhat similar analysis for the period 1979-2014 and they found that the GCMs overpredict the global temperatures after 2000…Fildes and Kourentzes (2011) compared the tracking behavior of one GCM to simple time series and neural network models, and found that the latter outperformed the former, despite their simplicity [this I find even more staggering: simple considerations trump expensive ‘models’]…
it may be theoretically possible that the GCMs are able to track the ‘true’ latent global temperature series reasonably well, despite the fact that they do not track the corresponding observed (constructed) one. In any case, the analyses of Beenstock et al. (2016), and McKitrick and Christy (2020) are startling and raise serious doubts about the quality of the GCMs, and in particular, if the CO2 sensitivity has been correctly identified.
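For the curious, this is roughly what a cointegration test of the Engle-Granger type looks like in code, run here on synthetic series; it is a sketch of the technique Beenstock et al. employ, not their data, model, or specification:

```python
# Engle-Granger cointegration test on two synthetic series that
# share a stochastic trend (so they should test as cointegrated).
import numpy as np
from statsmodels.tsa.stattools import coint

rng = np.random.default_rng(1)
trend = rng.normal(size=500).cumsum()          # shared stochastic trend
hindcast = trend + rng.normal(size=500)        # stand-in 'model' output
observed = 0.8 * trend + rng.normal(size=500)  # stand-in 'observations'

t_stat, p_value, _ = coint(hindcast, observed)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A small p-value rejects 'no cointegration', i.e., the two series
# share a long-run relationship; a model whose hindcasts are NOT
# cointegrated with observations fails to track them.
```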
In this section, Dagsvik and Moen commit a seemingly egregious sin: they loudly, and roundly so, criticise the IPCC:
In an IPCC review it was claimed that ‘There continues to be very high confidence that the models reproduce observed large-scale mean surface temperature patterns (pattern correlation ∼0.99)’ (IPCC, 2014, p. 743). But as discussed above, the mere fact that these correlations are high does not necessarily mean that the GCMs that produced them have been validated successfully…The statement by IPCC cited above is therefore misleading.
Oh, look, the emperor has no clothes. Thus the backlash, even though I doubt that anyone who so loudly demands the crucifixion of Dagsvik and Moen actually understood the paper.
Sections 6 and 7: Re-Analysis and Evaluation of GCM
In terms of aims, let’s reiterate that ‘the current paper extends the analysis of Dagsvik et al. (2020) by using observed temperature series up to 2021 for a set of weather stations, in contrast to the analysis of Dagsvik et al. (2020) who only analyzed a few temperature series up to 2012 whereas most of the temperature series used in their analysis ended between 1980 and 2012’.
While I shall not reproduce the strictly methodological section 6 (in which Dagsvik and Moen explain how they try to test the GCMs by investigating the unsystematic, i.e., stochastic, aspects of the temperature series), the main beef of the paper is in section 7, in which the authors describe their results:
Since global temperature constructions use different data sources at different time periods they become problematic to analyze with statistical time series methods, as mentioned above, because their statistical properties may vary over time in a way that is not known. Specifically, by looking at the HadCRUT3 time series (Figure B1 in Appendix B) it appears that the variance of the temperatures is greater the first 30 years than in the subsequent years…the aggregate series were found to be stationary in contrast to the HadCRUT3 series. This means that the trend in the aggregate series is unsystematic (stochastic). The reason why the HadCRUT3 series is non-stationary may not be because of the increasing trend but because of the systematic change in the pattern of variations over time (variance and auto-correlation, not visible in the figure).
What Dagsvik and Moen are saying here is this: the aggregated station series turn out to be stationary, i.e., whatever trend they show is unsystematic (stochastic); by contrast, HadCRUT3, one of the most widely used global temperature constructions, is non-stationary, possibly because its pattern of variation (variance and auto-correlation) changes over time rather than because of a systematic trend.
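Stationarity is itself a testable property. Here is a minimal illustration with the augmented Dickey-Fuller test from statsmodels; the authors’ actual tests (built around fractional Gaussian noise) are more elaborate, but the underlying question, whether an apparent trend is systematic or stochastic, is the same:

```python
# ADF test: the null hypothesis is a unit root (non-stationarity).
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(2)
stationary = rng.normal(size=400)            # fluctuates around a fixed mean
random_walk = rng.normal(size=400).cumsum()  # wanders; looks like a 'trend'

for name, series in (("stationary", stationary), ("random walk", random_walk)):
    adf_stat, p_value = adfuller(series)[:2]
    print(f"{name:12s}: ADF = {adf_stat:6.2f}, p = {p_value:.3f}")
# A small p-value rejects the unit root, i.e., indicates stationarity;
# a stationary series can still show long stretches that look like
# trends without those trends being systematic.
```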
As regards the temperature reconstructions based on Greenland ice cores, Dagsvik and Moen find essentially the same, holding that ‘current decadal mean snow temperature in central Greenland has not exceeded the envelope of natural variability of the past 4000 years’.
Conclusions…
Here, and esp. in light of media coverage and Statistics Norway’s announced response, it is worth citing longer passages again:
We have reviewed data on climate and temperatures in the past and ascertained that there have been large (non-stationary) temperature fluctuations resulting from natural causes.
Subsequently, we have summarized recent work on statistical analyses on the ability of the GCMs to track historical temperature data. These studies…raise serious doubts about whether the GCMs are able to distinguish natural variations in temperatures from variations caused by man-made emissions of CO2…
Despite long trends and cycles in these temperature series…results imply that the effect of man-made CO2 emissions does not appear to be sufficiently strong to cause systematic changes in the pattern of the temperature fluctuations. In other words, our analysis indicates that with the current level of knowledge, it seems impossible to determine how much of the temperature increase is due to emissions of CO2.
…and Implications
Don’t get me wrong, dear readers, as the father of two primary schoolers, it is in my distinct interest that we ensure a liveable climate for our descendants. This is also the core message of Dagsvik and Moen, who, after all, undertook the above study during their retirement, perhaps even thinking of it as a public service for the benefit of humanity.
They also, scientifically speaking, raise an important point about the models that are currently used, their possible errors and biases, and call out those who peddle alleged certainties despite these facts.
In all, Dagsvik and Moen make a couple of valid points that, ideally, should not be censored and ridiculed, much less trigger rule changes at a publicly funded institution such as Statistics Norway. And this means that we must talk about the rest of the above-cited hit piece in Aftenposten, too:
Here is what Stine Barstad took away from the paper (mind you, she only did this after the above-related editorialising):
They find that the temperature changes seen here over the past 150-200 years are not statistically significantly different from fluctuations seen in the past.
Therefore, the hypothesis that the warming we see now is only due to natural variations cannot be rejected, they say.
In the article, they highlight individual studies that point to weaknesses in the temperature measurements and climate models used. These include statistical tests that conclude that the climate models are unable to distinguish natural climate variations from those caused by CO2 emissions.
Thus, they conclude:
‘In other words, our analysis indicates that with the current level of knowledge, it seems impossible to determine how much of the temperature increase is due to emissions of CO2.’
So far, so accurate, eh? The problem—note, again, the editorial slant—lies in the subsequent paragraph:
However, they also make a small reservation that the temperature series coupled with, for example, models based on geophysical processes, may give a different picture. They therefore do not rule out that a systematic temperature shift may be underway—even if it is not captured by their statistical analyses.
Or the models that they analysed; note the sleight of hand by Ms. Barstad: she blows up a small reservation by two seasoned authors to suggest that their statistical analysis could be missing something important. In doing so, she simply diverts readers from the key insights related above, such as that there was no rigorous double-checking of the models prior to 2016.
Here’s how Ms. Barstad spins this:
Big Conclusions, Small Basis
‘We believe that when we have 95 very long time series, it would be strange not to be able to detect a systematic trend if CO₂ emissions have any effect’, Dagsvik tells Aftenposten.
It sounds logical. So why do they get such a hard time?
‘Here, they have gone into a large, complicated system, looked at a small part and concluded something based on this small part. But their conclusions are not proportionate to the analysis that has been done’, says climate researcher Helge Drange at the Bjerknes Centre.
Both he and Samset point out that the researchers seem to completely ignore all the other research that documents the physical, statistical and scientific correlations between greenhouse gas emissions and the temperature increase. This is summarised in the UN's latest climate report.
Overlooking the main heat storage
In addition, they highlight a critical shortcoming in the analysis that leaves the authors literally out of their depth: the oceans. [this lacuna is mentioned explicitly on pp. 11-12, with reference to Henry’s Law; to call out Dagsvik and Moen for it—above all, in rather impolite language—is revelatory in terms of biases].
Global warming is happening because more energy is being stored on the planet. Around 90% of this energy is stored in the ocean. But scientists have only analysed surface temperature measurements [which is also what Dagsvik and Moen have analysed; also: pray tell, where does the energy come from?].
A recently published article shows how the extra heat on the planet has been distributed over the past 60 years:
The red colour at the bottom corresponds to increased heat in the atmosphere due to increased air temperature. The SSB article analyses parts of this.
The grey area shows increased heat in the bedrock and land, plus heat that has been used to melt glaciers and ice caps such as Greenland and Antarctica.
The blue field shows increased heat in the ocean.
[edit: this is the illustration in question]
‘It shows that there is a very strong increase of heat in the global climate system. Based on this alone, I would immediately falsify the study’, says Drange.
Dagsvik confirms that they have not analysed temperature data from the ocean because he believes the older measurements are too weak. He says this is certainly a ‘valid objection’.
‘You could of course say that there is a weakness in the analysis and that the ocean has an important function’, he tells Aftenposten.
He says that the purpose of the study is to emphasise that there is greater uncertainty associated with the relationship between temperature and greenhouse gas emissions than he believes is apparent.
We Should be Cautious
Do you think we should not endeavour to cut emissions because the correlation is not sufficiently certain?
‘No, given that there is so much uncertainty, I would have to say that it is wise to consider the precautionary principle. Many climate sceptics probably disagree’, [Dagsvik] says.
Co-author Moen tells Dagbladet that the two have worked hard to be heard, and that it was ‘a godsend’ that SSB agreed to publish the article. The two have previously been refused publication in Statistics Norway's research series because they write about topics outside their own field of expertise.
And they are unlikely to get the opportunity again. Statistics Norway is now tightening up:
‘We have revised our guidelines so that we are now required to publish only working papers on research related to Statistics Norway projects’, says Research Director Linda Nøstbakken.
She says this is done to ensure that Statistics Norway has the expertise to offer good comments and input and to quality-assure the work. ‘But the guidelines were changed after the article was submitted. It is not a given that this would have passed under the current guidelines’, she says.
Must be More Precise
She says Dagsvik has good expertise in statistics, but not in climate modelling.
Communications Director Hege Tunes says that Statistics Norway has now initiated an effort to separate the research notes more clearly from the statistics.
What do you say to the critics who believe you lend your credibility to research that doesn’t measure up?
‘It would be completely wrong for us not to publish a working paper because its conclusions are controversial or unpopular. This is a matter of freedom of expression, and I'm glad that others are entering the discussion and debating this on a professional basis. It is important for researchers to receive input in this way. But we must be clear as the sender and label it so that users understand what this is. It's a responsibility we have’, she adds.
Bottom Lines
This has become overly long already, hence I’ll limit myself to a few key issues.
First, ‘the scientific revolution’ is over. It had a good run from around the mid-17th century, but by the late 20th century, the gatekeepers had won.
During ‘the scientific revolution’, all one needed was a clearly described method that would permit others to replicate the results—and voilà. This is how ‘science’ used to work; it does not describe how ‘the science™’ works these days.
Much like in the case of Skrable et al., which I discussed at length not too long ago, the argument now is: we’ll no longer cherish ‘renegade scientists’ who ‘work outside their fields’ for their points of view; rather, critical, cross-disciplinary thinking will now be censored.
The findings by Dagsvik and Moen, by the way, underscore the argument made by Skrable et al.: humanity’s impact in terms of sheer quantities of emissions appears insufficient to meaningfully—systematically, in the words of Dagsvik and Moen—alter Earth’s climate system.
Oh, lest I forget: the question of ‘gatekeepers’ in academia, legacy media, and international institutions, such as the UN and the IPCC, is of utmost importance.
Before ‘the scientific revolution’, the various Christian denominations claimed universal and exclusive rights to ‘splain stuff to ‘the rabble’. Once this was overcome by empiricism and the diffusion of the scientific method, many things changed.
We may be moving back in time, though, at least in terms of the attempts by the gatekeepers to keep dissident opinions at bay. Thomas Kuhn wrote about this in his classic The Structure of Scientific Revolutions (1962), and what we’re witnessing here—as with the mod-RNA debacle—is yet another example of the next paradigm shift.
History has many examples of the ultimate futility of such attempts; hence, kudos to you, Messrs. Dagsvik and Moen, for daring to know.
"She says Dagsvik has good expertise in statistics, but not in climate modelling."
Climate modelling is statistics! She's sadly a typical representative of the modern woman in academia: no thought, no logic, no knowledge. The proverb about the golden ring in the pig's snout was surely coined for such as her.
You ponder what the difference between Dagsvik and Moen and their critics is. I believe it's this:
1) D&M can actually use math. Not calculators or programs, but math: abacus, slide rule, pen and paper; and they understand what they are doing. Their critics can only perform the operation of feeding numbers into a program. That's not math. Therefore, the critics of D&M cannot criticise their findings, since they lack the skills required.
2) The element of howling with the pack cannot be overstated: it reads like a reflex action against a challenge to orthodoxy, the epitome of "not science". Thus, no matter the facts, dogma rules.
3) The critics have been rewarded for being dogmatists for decades, perhaps even raised in that atmosphere at home and in academia, so they are unable to recognise this in their reaction.
---
I'll give a somewhat tangential example from helping my mother shop for groceries. Thinking out loud, I pondered how a lime from Peru could be "climate friendly" to buy here in middle Sweden. It had been transported by diesel-powered ship and diesel-powered truck to the store, across the planet. If one believes in climate theory, the only way to be climate-friendly is to buy only such produce as can be grown locally, the closer to the customer the better. Which precludes limes, and anyone believing they can buy a lime and be climate-friendly is a clinical moron and/or a hypocritical liar and a coward.*
Concluding this Hamlet-like musing, the lime playing the part of Yorick, I noticed people around me giving me "low-pH stares", so to speak.
So they must understand the inherent conflict, yet they use various automated processes to escape cognitive dissonance, and a loudmouthed man pointing out the problem triggers that dissonance?
I wager the response from the critics of D&M is perhaps based on something equally simple-yet-complex.
*I really do speak like that when Hectoring or lecturing. Work-related injury, I'm sure.