An Introduction to Global Mean Sea Level, a Fallacy of Alarmism, and Beyond
- How Reliable Is This Graph? Courtesy of the University of Colorado at Boulder
“The IPCC considers two simple indices of climate change, global mean temperature and sea level rise. The change in global mean temperature is the main factor determining the rise in sea level; it is also a useful proxy for overall climate change.”
IPCC Technical Paper III, Section 1.2.4: The Global Temperature and Sea Level Implications of Stabilizing Greenhouse Gases
Having already written several posts on sea levels, I think it has become necessary to investigate the origins of sea level data, how it is interpreted, and what, if any, conclusions can be derived from it.
Currently, there are two datasets on sea levels: tide gauge data and satellite altimetry data.
Tide gauges in the U.S. are currently monitored by the National Oceanic and Atmospheric Administration (NOAA), while the Permanent Service for Mean Sea Level (PSMSL) is a data collection service that receives data from over 200 different sources. Of the nearly 2,000 sites around the globe, some non-U.S. tide gauges are also monitored by NOAA; the rest are monitored by their respective agencies.
Satellite altimetry data, which comes from the TOPEX/Poseidon satellite, is monitored by the Jet Propulsion Laboratory.
Tide gauge records vary in length from site to site, from about 200 years down to just a few. Simply stated, the number of long-term tide gauge records is limited at best. Satellite data goes back only to 1992, when NASA launched TOPEX/Poseidon.
Global Mean Sea Level is, for lack of a better term, in its infancy.
Currently, Church and White (2004) is the only publicly available reconstructed sea level dataset covering the period 1950-2001, as stated by the Colorado Center for Astrodynamics Research at the University of Colorado at Boulder in Reconstructing Global Mean Sea Level From Tide Gauges using Satellite Altimetry (Oct. 2010).
While there seem to be a variety of regional datasets, the Church and White paper is really our only available source for global mean sea levels. That paper can be viewed here.
I want to highlight certain segments of that paper that I feel are necessary to understand before I continue:
The central dataset we use for the period 1950–2000 is monthly mean sea levels from the data archive of the PSMSL.
No comment needed here.
We use primarily the Revised Local Reference (RLR) data but also some metric data, downloaded from the PSMSL Web site in February 2003. The most recent data were for 2002 but for many stations the data end before 2000.
Though the paper was written in 2004, data only up to 2000 served as their primary source.
The metric records can have substantial and unknown datum shifts and their use in time series analysis is generally not recommended.
So if metric records are not recommended, why use them?
We filled gaps of 1–2 months (by spline interpolation) and deleted continuous sections shorter than 2 yr, eliminating 256 records.
Gaps were filled, but the number of records that required spline interpolation is not mentioned.
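The gap-filling and record-length rules quoted above can be sketched in code. This is a minimal illustration of the stated rules, not the authors' code, and it substitutes simple linear filling where the paper used spline interpolation:

```python
def fill_short_gaps(series, max_gap=2):
    # series: list of monthly values, with None marking a missing month.
    # Fill runs of up to `max_gap` missing months by interpolating
    # between the neighbouring observed values. (The paper used spline
    # interpolation; linear filling stands in here for brevity.)
    out = list(series)
    i, n = 0, len(out)
    while i < n:
        if out[i] is None:
            j = i
            while j < n and out[j] is None:
                j += 1
            gap = j - i
            # Only fill interior gaps no longer than max_gap months.
            if 0 < i and j < n and gap <= max_gap:
                left, right = out[i - 1], out[j]
                for k in range(gap):
                    frac = (k + 1) / (gap + 1)
                    out[i + k] = left + frac * (right - left)
            i = j
        else:
            i += 1
    return out

def long_enough(series, min_months=24):
    # "Deleted continuous sections shorter than 2 yr": keep a record
    # only if its longest unbroken run of data reaches 24 months.
    run = best = 0
    for v in series:
        run = run + 1 if v is not None else 0
        best = max(best, run)
    return best >= min_months
```

For example, `fill_short_gaps([7000, 7002, None, None, 7010])` fills the two missing months, while a three-month gap is left untouched.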
Where there were both RLR and metric records for stations, the redundant metric record was deleted (1063 records).
The metric dataset contains 1,950 stations and the RLR dataset 1,159 stations. So out of the 1,950 metric records, 887 without an RLR counterpart were used, even though the PSMSL warns against using the metric datasets.
We also deleted records for 95 locations beyond the TOPEX/Poseidon latitude range and 37 records more than 250 km from the nearest altimeter grid point.
Basically, any tide gauge records that fell outside the range of the satellite data were not used. Which only raises the question: if the satellite records don't cover the entire globe, how can this paper even be considered a global mean? Shouldn't it be called a 'partial global mean'?
This left a total of 1658 records for further assessment. We then removed locations where there was serious disagreement between nearby records, where the gauges were in unsuitable locations (e.g., in an estuary, especially when there was another gauge closer to the ocean), where the records were too fragmented or noisy to be useful, or where there were large residual trends (greater than 10 mm yr−1).
I don’t want to say this is cherry-picking, but without stating which sites were used or excluded, and on what grounds of location or noise, how can it be verified that their choices weren’t somehow motivated?
As the TOPEX/Poseidon altimeter data used are on a 1° × 1° grid, we found the nearest such grid point for each tide gauge. Where there were multiple tide gauges for a single grid point, the change in height at each time step were averaged to produce a single time series.
I suppose my biggest concern is why an average of all the stations wasn’t used for comparison with their model-driven global mean sea level.
I would be interested in comparing the graph that came from this study with another graph that used an average of all RLR sites.
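For reference, the compositing step quoted above, averaging the change in height across co-located gauges at each time step, can be sketched as follows (a minimal illustration, not the authors' code; `None` marks a missing month):

```python
def composite_series(series_list):
    # Combine several gauge series (equal-length lists of monthly
    # heights) into one composite record for a grid point: at each
    # time step, average the month-to-month change in height across
    # the gauges that report it, then accumulate the averaged changes
    # into a single time series (anchored at 0).
    n = len(series_list[0])
    composite = [0.0]
    for t in range(1, n):
        deltas = [s[t] - s[t - 1]
                  for s in series_list
                  if s[t] is not None and s[t - 1] is not None]
        step = sum(deltas) / len(deltas) if deltas else 0.0
        composite.append(composite[-1] + step)
    return composite
```

Averaging changes rather than raw heights means gauges with different datums can still be combined, since only their month-to-month movement matters.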
A total of 945 records (670 RLR and 275 metric) are combined into 454 composite records, of which 426 have useful data in the time span from January 1950 through to December 2000.
So, what we have here is a compiled record of both RLR and metric data, some of it composite averages of several sites, without knowing which is which or why. This only casts more doubt on its validity.
The number of these composite locations that passed our quality control checks is 154 in 1950, rises to more than 240 prior to 1960, peaks at 317 in 1986 before falling rapidly in the last 5 yr to 196 in 2000.
And the most damning evidence of all is that this is a compilation of non-homogeneous data. Throughout their time series, different periods used different sites, combined or not combined, value-added or not, within the confines of the altimetry boundaries rather than the whole globe. Fascinating.
The regional distribution of the gauges clearly demonstrates the largest density of gauges is in the North Atlantic and North Pacific Oceans, particularly in the 1950s. Even in the 1980s, noticeable gaps remain in sea level data for the Southern Ocean, the South Atlantic Ocean, and the western Indian Ocean.
And lastly, they acknowledge that the distribution of gauges is densest in certain regions and at certain times, with noticeable gaps in other regions and at other times.
With all of these disparities in homogeneity, this paper is considered the foundation for Global Mean Sea Level. A benchmark of success in this new age of modelling. Because for all of its success, this is but a model. A reflection of reality. A guess.
I would like to point out, though, that Church and White reached this conclusion in their paper:
Decadal variability in sea level is observed but to date there is no detectable secular increase in the rate of sea level rise over the period 1950–2000.
This comment is a far cry from what Church and White are saying now:
“As well as the inexorable rise in sea level associated with climate change, there is also interannual variability in sea level. Sea level anomalies in 2004 compared with the average over the period 1961 to 1990 show large positive sea level anomalies in the central equatorial Pacific.”
So Church and White go from saying they see no increase in the rate of sea level rise to reporting large positive sea level anomalies, yet they base this on a paper in which they say they see no increase in the rate of rise. I am amazed.
I want to present several statements made about Church et al. (2004). These statements come from the paper I mentioned earlier, Reconstructing Global Mean Sea Level From Tide Gauges using Satellite Altimetry (Oct. 2010):
This is quite eye-popping. This paper states that Church et al. (2004):
- does not accommodate the time dependence of spatial patterns;
- uses a method, EOF (Empirical Orthogonal Function) analysis, that suffers from mode mixing;
- relies on EOFs, which are not a good basis for signals in the ocean and are unable to explain the spatial variability.
So this paper, which I shall call Hamlington et al. (2010), calls into question Church and White's use of EOF and presents a new method they call CSEOF (Cyclostationary Empirical Orthogonal Functions).
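For readers unfamiliar with the technique at issue: an EOF analysis decomposes a space-time field into fixed spatial patterns and their time series, usually via the singular value decomposition. A toy sketch in Python with synthetic data (not either paper's code) shows the mechanics, and also the property being criticized: each EOF is one fixed spatial pattern, so spatial structure that evolves in time gets mixed across modes.

```python
import numpy as np

# Toy EOF analysis on a synthetic space-time field:
# 120 "months" of data at 50 "grid points".
rng = np.random.default_rng(0)
n_time, n_space = 120, 50

pattern = np.sin(np.linspace(0, np.pi, n_space))   # one fixed spatial mode
amplitude = np.linspace(-1.0, 1.0, n_time)         # its slowly varying amplitude
field = np.outer(amplitude, pattern) + 0.01 * rng.standard_normal((n_time, n_space))

anomalies = field - field.mean(axis=0)             # remove the time mean
U, S, Vt = np.linalg.svd(anomalies, full_matrices=False)

eofs = Vt                 # rows are the spatial patterns (EOFs)
pcs = U * S               # columns are the corresponding time series
var_frac = S**2 / np.sum(S**2)   # variance explained by each mode
```

Because this toy field really is one fixed pattern times an amplitude, the first mode captures nearly all the variance; real ocean signals whose patterns change shape over time are exactly where, per Hamlington et al., plain EOFs fall short.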
So for comparison's sake, Hamlington et al. (2010) presents this graph in that paper:
Now it's only fair I present Church and White's graphical representation of GMSL here:
I have to chuckle a bit at that first image. The graph is the culmination of years of study and research, and the end result looks as if a child with a crayon smudged the end of it with a huge gray mark, hiding most, if not all, of the lines from all that hard work. That, and the fact that they chose to go with the highest of the three models used in their study. At least they presented the other modeled outcomes.
But back to the 'facts'. Between Church et al. and Hamlington et al., we have a reconstructed GMSL rise, from 1950 to 2000, of 1.89 and 1.91 mm/year respectively.
This is a far cry from the predictions made either in mainstream-media journalism or in recent scientific publications.
To suggest sea levels are going to rise beyond what either of these two papers presents is just plain 'alarmism'.
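Those rates are worth putting in absolute terms. A quick back-of-the-envelope check (the rates are the papers' reported figures; the century projection simply assumes the rate holds unchanged):

```python
# Reported reconstruction rates over Jan 1950 - Dec 2000 (51 years).
rate_church = 1.89       # mm/yr, Church et al. (2004)
rate_hamlington = 1.91   # mm/yr, Hamlington et al. (2010)
years = 51

total_church = rate_church * years          # ~96 mm over the whole period
total_hamlington = rate_hamlington * years  # ~97 mm
century = rate_hamlington * 100             # ~191 mm per century if the rate holds
```

In other words, both reconstructions imply roughly 10 cm of rise over the half-century studied, and under 20 cm per century at an unchanged rate.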
These are the only two papers currently published on Global Mean Sea Level.
If anyone presents an alarming piece of journalistic trash about accelerating sea levels, just bring them here and have them explain how they can suggest such alarmism when the only two papers on GMSL suggest nothing of the kind.
Even though I don’t agree with how these papers derived any factual evidence for determining GMSL, their own announcements and conclusions don’t suggest an accelerated rise in GMSL.
All we do know is that tide gauges are limited in explaining global trends but are strong in interpreting regional trends. Period. End of story.
Satellite altimetry is another matter altogether. And both Church et al. and Hamlington et al. reconstructed tide gauge data using satellite altimetry data.
But what do we know about satellite data? Rather than try to explain it myself, let's look at the confidence level of the TOPEX/Poseidon Merged Geophysical Data Record, as described by the Jet Propulsion Laboratory:
Sources of Error:
There are various sources of error that affect the measurement of sea level. They include measurement noise, mispointing and skewness effects, EM bias, ionospheric error, wet tropospheric error, dry tropospheric error, and altimeter bias and drift. They have all been measured and corrections are included in the GDR. However, these corrections are not perfect: the DORIS ionosphere is one cm less accurate than the TOPEX ionosphere; tidal models have errors of a few centimeters; inverse barometer effect only applies in certain space/time wavelength bands, etc. [Fu et al., 1994].
Confidence Level/Accuracy Judgement:
The table below accurately describes the repeatability of the altimeter measurement. However, the 5 cm RMS accuracy is a global average, in certain regions it is higher. Mean sea surface and geoid in the GDR are much less accurate than the 5 cm repeat. At this point, they are used only for qualitative assessment; the difference between sea level minus geoid does not measure geostrophic current accurately. Sea state bias is a function of wave height, wind speed, and other parameters. The GDR correction for sea state bias is an empirical fit [Gaspar et al., 1994] that also absorbs some wind-related ocean variability into the correction. This subject is still being studied. The issue of long term drift of measurements at the sub-centimeter level per year is still being debated.
- Satellite Altimetry Errors
So, to further our understanding of the validity of using satellite data to reconstruct tide gauge records, we are presented with error measurements from JPL showing accuracy no better than roughly 4.7-5.1 cm at the global mean level, with even larger discrepancies at the regional level.
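To see how several individually small error sources add up to a figure of that size: independent errors combine roughly in quadrature (root-sum-square). The component magnitudes below are hypothetical placeholders chosen for illustration, not JPL's published budget; the point is only the arithmetic.

```python
import math

# Hypothetical error budget (cm) for one altimeter sea level
# measurement. These component values are assumptions for the
# sketch, NOT JPL's figures.
errors_cm = {
    "altimeter noise": 2.0,
    "ionosphere": 1.5,
    "wet troposphere": 1.5,
    "dry troposphere": 1.0,
    "sea state (EM) bias": 2.5,
    "orbit": 3.0,
}

# Independent errors combine as the root-sum-square.
rss = math.sqrt(sum(e**2 for e in errors_cm.values()))
```

Six terms of 1-3 cm each yield an overall figure near 5 cm, which is why no single correction can be blamed for the total, and why trimming any one source only modestly improves the result.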
And lastly, here is what they say about using their data:
Limitations of the Data:
This is new, research data and has only been available to the remote sensing community for a short time. Data analysis is still in its early stages and there is, as yet, no consensus on how to process the data. Hence a suite of parameters and flags have been included to allow users to make their own selection criteria. Calculation of sea surface height from the altimeter range and environmental corrections is the responsibility of the user.
In other words, JPL is telling users to process their data any way they see fit; JPL sees no consensus on how it should be processed, and calculations and corrections are strictly up to the user.
I don’t know about you, but I got all kinds of red flags going off at the moment.
Seeing as both Church et al. and Hamlington et al. used this satellite data to reconstruct tide gauges, and JPL is telling users to use its data at their own risk, this really doesn't build any confidence in either of their data reconstructions for me.
No wonder the University of Colorado at Boulder hasn't updated its website on GMSL: even with any decline that might show in the GMSL for the period since their last update, who can agree one way or the other whether any of the data is viable?
This much is certain: all we really have are individual tide gauges and then interpretations of those gauges, both in regional computations and in global inference, using models.
What I suggest is a painstakingly thorough approach: view each individual tide gauge site, determine whether and how much sea level rise there is, and give each site one mark for overall rise or decline and another mark for acceleration or deceleration. Then tally the results and see what we come up with.
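The proposed tally is straightforward to mechanize. A minimal sketch, assuming each site supplies annual mean heights: the sign of a linear fit gives the rise/decline mark, and the sign of a quadratic term gives the acceleration/deceleration mark.

```python
import numpy as np

def classify_gauge(years, heights):
    # Mark 1: sign of the linear trend -> "rise" or "decline".
    # Mark 2: sign of the quadratic coefficient -> "acceleration"
    # or "deceleration" of that trend.
    t = np.asarray(years, dtype=float)
    t -= t.mean()                             # center time for numerical stability
    slope = np.polyfit(t, heights, 1)[0]      # overall trend
    curvature = np.polyfit(t, heights, 2)[0]  # change in the trend
    return ("rise" if slope > 0 else "decline",
            "acceleration" if curvature > 0 else "deceleration")

def tally(gauges):
    # gauges: mapping of site name -> (years, heights). Count the marks.
    counts = {"rise": 0, "decline": 0, "acceleration": 0, "deceleration": 0}
    for years, heights in gauges.values():
        trend, accel = classify_gauge(years, heights)
        counts[trend] += 1
        counts[accel] += 1
    return counts
```

A real survey would also have to handle gaps, datum shifts, and vertical land motion at each site, but the bookkeeping itself is this simple.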
I don’t see how this could be any worse than what is currently available to us.
I suppose I should get started.
Wish me luck.
Until then, please read my other posts regarding sea levels.
You can read them here: More On Sea Levels and Sea Level Rise.
Anthony Watts also has several recently posted articles on wattsupwiththat.com that deserve reading. They are: