29 July 2014

Arctic Ice Guesses 2014

Have to bite the bullet here and discuss my guesses for the September 2014 Arctic sea ice extent average.  What has made them so difficult is that they're so different from each other.  One method I've now retired: it was simply so bad last year that there's no point in continuing it.  That was the one based on a population growth curve (of ice-free area).

That leaves, however, two different model-based guessers.  The first one, which appears at the Sea Ice Prediction Network as 'Wang', is based on a statistical regression between what the CFSv2 (Climate Forecast System, version 2) predicts for September ice area and what is observed.  The second also uses CFSv2, but in a different way.  Namely, we know that the model is biased towards ice being too extensive (which the Wang method addresses statistically) and towards being too thick.  The Wu method is based on thinning the model's ice and then counting the area where the ice remains thicker than a critical limit (60 cm, it turns out).  (Both Wang and Wu work with me, or vice versa, and we discuss how to work on these guesses.)
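For the flavor of the two approaches, here's a toy sketch -- not the operational code, and every number in it is invented for illustration:

import numpy as np

# Wang-style: regress the model's hindcast September extents against
# observations, then apply the fitted correction to this year's raw prediction.
hindcast = np.array([7.1, 6.8, 6.5, 6.9, 6.2])   # CFSv2-like predictions, million km^2 (invented)
observed = np.array([5.9, 5.6, 5.2, 5.7, 4.9])   # matching observations, million km^2 (invented)
slope, intercept = np.polyfit(hindcast, observed, 1)
wang_style = slope * 6.7 + intercept             # 6.7 = this year's raw model value (invented)

# Wu-style: thin the model's too-thick ice by a bias estimate, then count
# the area where what remains is still thicker than the 60 cm critical limit.
thickness = np.array([0.3, 0.9, 1.4, 0.5, 2.1])  # model ice thickness per grid cell, m (invented)
cell_area = 0.5                                  # million km^2 per grid cell (invented)
wu_style = cell_area * np.sum((thickness - 0.4) > 0.60)   # 0.4 m of thinning (invented)

print(wang_style, wu_style)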

The guesses are:
June -- Wang -- 6.3 million km^2 (std. dev. 0.47)
July -- Wang -- 5.9 million km^2 (std. dev. 0.47)
July -- Wu -- 5.1 million km^2 (std. dev. 0.56)
June -- Wu -- 4.8 million km^2 (std. dev. 0.65)

One thing to notice is that the two estimates moved towards each other from June to July: Wu rose, the higher Wang declined.  The second is that the Wu method's standard deviation (the variability of its estimate) is double what it was last year.  Whatever is going on in the model, it is much less self-consistent than in previous years -- much more uncertain.  This is one of the reasons for ensemble modeling (part of the Wu approach).
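In ensemble terms, that quoted standard deviation is just the spread of the individual members' September extents around their mean.  A minimal sketch, member values invented:

import numpy as np

members = np.array([5.3, 4.6, 5.5, 4.9, 5.2])   # per-member September extent, million km^2 (invented)
print(members.mean())        # the central estimate
print(members.std(ddof=1))   # the quoted standard deviation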

You can also see that the Wang estimate is the highest of all -- even higher than the Watts Up With That group estimate.  This is true in both June and July.

So, what's up?  Well, I'm not sure.  Some of it is certainly related to sea ice thickness estimates.  Xingren (Wu) tried a different approach based on thickness for June, which we didn't submit, but which landed in between the official June estimates from Wang and Wu.  Given the step towards convergence from June to July between the Wu and Wang methods, I'm inclined to guess (a meta-guess) 5.5 million km^2 for September -- the midpoint of the two July estimates.  If that were to verify, it would probably be telling us something important.  What, exactly, I'm still pondering.

28 July 2014

Yabba2 -- Construction


Katherine Monroe:

Below are the full instructions on how to build exactly what I built. There is so much that could be done to improve the design. I know it is not anywhere close to perfect. The materials I used were makeshift, whatever was lying around the house or wasn’t too expensive. But that was the point. I like spontaneity. It doesn’t have to be extremely elaborate to work and to be useful. This is for anyone who wants to do anything with it or for anyone who is just interested.
  
Materials
1. Vernier Flow Rate Sensor, order code FLO-BTA/FLO-CBL
2. Vernier LabQuest, by Vernier Software and Technology, 13979 SW Millikan Way, Beaverton, OR 97005, 888-837-6437 (for transmitting and collecting data from the Flow Rate Sensor)
3. 3 steel dowel rods, 22” long
4. Compressed fiber board
5. Minwax polyurethane varnish
6. 24-gauge green floral wire, 100 ft.
7. Small foosball
8. 2 IDEC sensors, magnetic proximity switches, type DPRI-019
9. Premium waterproof clear silicone sealant (without acetic acid)
10. Plugable USB to RS-232 DB9 serial adapter (Prolific PL2303HX Rev D chipset)
11. RS232 breakout -- DB9 female to terminal block adapter
12. Xnote Stopwatch, version 1.66 (downloadable at http://www.xnotestopwatch.com/)
13. Loctite epoxy glue
14. Drill
15. Hammer
16. 2 brass 1/4” Phillips-head screws
17. Electric hand-held reciprocating saw
18. Electrical tape
19. 4” by 3/4” strip of thin steel (cut from a can)
20. Twisted nylon string
21. 2’ long wooden slat (to be used as a handle for carrying and placing the device in the water)
22. Study site: United States Geological Survey gauge (0164900), Northeast Branch of the Anacostia River at Riverdale, MD. (Test site was just next to the USGS data collection gauge.) (38.961, -76.626)
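[Back to your host: I don't know exactly how the two magnetic switches were wired through the serial adapter and breakout -- what follows is an assumption on my part, not Ms. Monroe's documented circuit.  One plausible arrangement is to put one switch on the port's CTS status line and the other on DSR, then time the interval in software.  A minimal sketch using the pyserial library; the port name and track length are placeholders to measure and adjust for your own build:

import time
import serial  # pyserial

TRACK_LENGTH = 0.56              # meters between the two switches (placeholder)
port = serial.Serial('/dev/ttyUSB0')   # the USB-serial adapter; name varies by OS

def wait_for(line_is_high):
    # Poll until the given status-line test returns True.
    while not line_is_high():
        time.sleep(0.001)

wait_for(lambda: port.cts)       # ball trips the first (CTS-wired) switch
t0 = time.monotonic()
wait_for(lambda: port.dsr)       # ball trips the second (DSR-wired) switch
t1 = time.monotonic()

print("stream speed: %.3f m/s" % (TRACK_LENGTH / (t1 - t0)))
]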

25 July 2014

Yabba -- Building your own stream gauge

Katherine Monroe*, the author/inventor of this stream gauge, is a graduate of Eleanor Roosevelt High School, in the same class as Elliott Rebello.  Her senior project was quite different, and you'll get to see the details in her own words.  Part 1 is today, the narrative.  Part 2 will be on Monday -- the full parts list and construction instructions.


Engineering the “Yabba Dabba Doo”

By: Katherine Monroe
June 2014
Eleanor Roosevelt High School

One year ago, as a rising senior at Eleanor Roosevelt High School in Greenbelt, MD, I was faced with the same grueling task that all students in the Science and Technology program were: RP -- that is, Research Practicum. This is what we had been leading up to for the past three years and now, here it was.

RP is the year-long research project that all seniors in the Science and Technology program at Roosevelt are required to complete. By the end of the year we had to have completed a science fair backboard, a laminated poster, a PowerPoint, and a five-chapter paper. We had a whole class dedicated to working on all the different aspects of the project and to learning how to analyze data quantitatively and statistically. We were told to come up with a project that was interesting to us because we would be spending the entire year working on it. Some students applied for internships with NASA, USDA, NIH, the University of Maryland, the National Zoo, Walter Reed Hospital, and more. Other students applied for programs established by and within the school, and others worked separately from any structured program.

I chose to apply to a program started by one of our school’s AP Chemistry teachers called WISP (Watershed Integrated Study Program). It was a program which emphasized local water quality studies. Students in WISP formed groups and measured chemical and physical properties of local waterways at a number of sites across the county. We measured nitrate and phosphate concentrations, dissolved oxygen levels, alkalinity, pH, turbidity, total dissolved solids, and temperature, and took seasonal macroinvertebrate data. We then added our values to an ongoing database which students could draw from for all sorts of studies that require long-term data collection.

I applied to WISP because, out of the endless ocean of things I was unsure of, I was sure of at least one thing: my love for the environment and for being outdoors. After having been accepted to WISP, I began the process of deciding what to do for my project. In the end the basis of my project came from the one other thing I was sure of, which was that I enjoyed building things. So I knew I wanted to build something, and I knew it should relate to the local water quality movement that WISP was promoting. I looked at what we did in WISP and thought about what we measured. One aspect of water quality that I found important to gaining a comprehensive understanding of a stream or river’s health (and that we did not measure in WISP) was the speed of the water in the stream.

The water speed can provide insight into the types of organisms that can live in a stream or river, into the flow of sediment down a river, and sometimes into the oxygen levels of a river. The greater the speed of a river, the more aerated it typically is, and the higher the dissolved oxygen level. All of these can greatly affect the health of a stream or river. Stream speed can also help in understanding the volume flow rate of a stream, in identifying storm water runoff patterns near and around the stream or river, and in developing flood models. Overall, stream speed seemed like an important factor that we did not account for in WISP, due to what I believe to be a range of reasons: the expense of the necessary equipment; the complicated nature of taking stream speed measurements at a variety of points along a stream, and still getting inaccurate results due to the variability of speed along an uneven stream bed; and maybe more.
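[Aside from your host: the relation here is just Q = v × A -- volume flow rate equals average speed times the stream's cross-sectional area.  A quick worked example, my numbers rather than hers:

def discharge(speed_m_per_s, area_m2):
    # Volume flow rate Q = v * A, in cubic meters per second.
    return speed_m_per_s * area_m2

print(discharge(0.5, 2.0))   # 0.5 m/s through a 2 m^2 cross-section carries 1 m^3/s
]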
 
I decided that I wanted to design and build something that would measure stream speed; something that would be cost effective and accurate, and something that would be easy for anyone who wanted to do research, like the kind we do in WISP, to build for their own purposes. The point was to encourage citizen science by going through all the steps independently and then showing people what I had done so that they could do it too. 
 
In the end what I came up with consisted of an open track along which a light and neutrally buoyant ball was pushed by the flowing water. On either end of the track there were magnetic sensors which timed how long it took for the ball to move from one end of the track to the other. From this the speed was computed. This is where the name of the device comes in. I decided to call it the “Yabba Dabba Doo” because it looked like something out of the Flintstones (or maybe like an old-fashioned push lawn mower).
 
Next I had to figure out if my design actually worked. In order to do that, I compared my device to an already existing speed measurement device by the company Vernier. I assumed that the Vernier data were accurate. My null hypothesis was that the average of the speeds taken with my device would be statistically equivalent to the average of the data taken with the Vernier device. Strangely enough, I wanted to FAIL to reject the null hypothesis. Statistics are weird. I collected data with the two devices within a half-hour period of each other (assuming that the stream speed would not change in that amount of time). Then I analyzed the data with a statistical t-test, which looked for a significant difference between the two sets of speeds and their averages.
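[Aside from your host: the comparison Ms. Monroe describes takes only a few lines with scipy; the speeds below are invented for illustration:

import numpy as np
from scipy import stats

yabba   = np.array([0.52, 0.48, 0.55, 0.50, 0.47])   # Yabba Dabba Doo speeds, m/s (invented)
vernier = np.array([0.51, 0.50, 0.53, 0.49, 0.52])   # Vernier speeds, m/s (invented)

t, p = stats.ttest_ind(yabba, vernier)
print(t, p)   # p > 0.05 means failing to reject the null -- the hoped-for outcome
]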
 
After multiple trials and readjustments to the design I got what looked like a pretty accurate result. Initially (before reaching my final design), the object moving along the track of the Yabba Dabba Doo was a metal disk attached to the metal rods of the track with metal rings. All that metal caused a lot of friction between the disk and the track, which prevented the disk from reaching the speed of the water and gave me slower averages than the Vernier averages. This also yielded a significant difference in the statistics, which I did not want. In order to minimize the friction there, I changed my design to one which consisted of that lightweight, neutrally buoyant foosball (which I mentioned earlier), attached to the metal track with small sections of plastic drinking straw. The foosball had no tendency to float or sink in the water and caused less friction on the track. Furthermore, the coefficient of friction between the plastic straw and the metal track was much less than between the original metal rings and the metal track. After making this design change I got averages that were much closer together between the Yabba Dabba Doo and the Vernier, and in a majority of my statistical t-tests there was no significant difference. In the end I had a device that seemed to be working pretty accurately and cost $350 less to build than the Vernier costs to buy.
 
Going through that process, of trial and error and trial and error and trial and then success!!! was extremely gratifying. I got to experience the life of an engineer first hand and to learn about the plethora of unforeseen problems that can arise. 
 
This entire year was a great learning experience for me. I learned what a null hypothesis was and how to go about trying to reject it (or, in my case, fail to reject it). I learned all sorts of things about the materials that I used to build my device. I learned how to bear standing out in winter weather, in water up to my waist, wearing my mom’s baptismal waders, for the good of science! I learned about all the things that can go wrong and need to be accounted for in a field study like this one. I learned how to access all sorts of functions in Excel, PowerPoint, and Word. And I learned something about myself. I learned that engineering, and I think in particular environmental engineering, is something that I could easily be passionate about and be satisfied with in the future. And for a chronically confused and disoriented teenager about to go to college, that is reassuring.



[Back to your host: Directions on Monday.  The * is because Ms. Monroe normally goes by a more informal version of her name, and I've gone with the formal here.  Formal for publication is a rule I use myself (I'm not usually Robert), and one which I've learned is helpful for women to be taken seriously.]

23 July 2014

Data are ugly

Current news about whether there really is an increase in Antarctic sea ice cover is reinforcing my belief, shared by most people who deal with data, that data are ugly.  This work argues that the increase some have seen in trend analyses has more to do with the data processing than with nature.  I encourage you to read the article itself in full; it is freely available.

From the abstract:
Although our analysis does not definitively identify whether this change introduced an error or removed one, the resulting difference in the trends suggests that a substantial error exists in either the current data set or the version that was used prior to the mid-2000s, and numerous studies that have relied on these observations should be reexamined to determine the sensitivity of their results to this change in the data set.
One of the obnoxious things about data sources is that they don't remain the same forever.  This is not so much a problem for my concerns about weather prediction, since the atmosphere forgets what you said you observed in a few days.  But for a climate trend, the entire record is important.  For the data set being discussed, the Bootstrap algorithm (Comiso) applied to passive microwave observations, we immediately run into data obnoxing.  Since 1978, there have been several passive microwave instruments -- SMMR; SSMI on the F-8, F-11, F-13, F-14, and F-15 satellites; AMSR-E; SSMI-S on F-16, F-17, and F-18; and AMSR-2.  They didn't all fly at the same time, and they don't have exactly the same methods of observation.  And none of them exactly observes 'sea ice', which leads to a universal problem that we (people who want to use these instruments to say something about sea ice) all have to deal with.
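A toy illustration of the climate-versus-weather point: a mid-record step, such as a processing change might introduce, can manufacture a trend where nature had none.  Every number below is invented:

import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1979, 2014)
extent = 11.5 + 0.1 * rng.standard_normal(years.size)   # flat 'true' series, million km^2 (invented)
extent[years >= 2005] += 0.2                            # step from a mid-2000s processing change (invented)

slope = np.polyfit(years, extent, 1)[0]
print("apparent trend: %+.3f million km^2 per decade" % (slope * 10))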

So, a few considerations of what all is behind the scenes of this paper and the earlier Screen (2011).  The latter paper involved some of my work (read deep into the acknowledgements).  This one doesn't, but the fundamental issues are the same ...