All the shining perfumes I splashed on my head,
And all the fragrant flowers I wore,
Soon lost their scent.
Everything I put between my teeth
And dropped into my ungrateful belly
Was gone by morning.
The only things I can keep
Came in through my ears.
Callimachus ca. 310-240 BC
In Pure Pagan: Seven Centuries of Greek Poems and Fragments, selected and translated by Burton Raffel.
17 December 2008
16 December 2008
Science and consensus
Sometimes people are right about a statement and then draw the wrong conclusion from it. Noting that science doesn't 'do' consensus is such a case. By the time you've progressed to the point of general agreement -- and a consensus is just general agreement, not universal agreement -- the point has dropped out of being live science.
The science is in the parts we don't understand well. That's effectively part of the definition for doing science. Dropping two rocks of different mass off the side of a building and seeing which one hits the ground first is no longer science. We reached consensus on that some time back. Now if you have a new experiment which tests something interesting (i.e., we haven't tested that one to death already), have at it and do that science.
I didn't appreciate it properly at the time, but a sign on the chemistry department door in my college put it best: "If we knew what we were doing, it wouldn't be science." The live part of science involves learning new things. If you already know what will happen, you're not learning new things, so you aren't doing science. After you've learned something new, and others have tested it and confirmed your learning, then we have a piece of scientific knowledge. It isn't live science any more, but it's a contribution to the world and can be used for other things. A consequence of this is that you wind up knowing a lot if you stay active in doing science. But it isn't the knowing that motivates scientists (certainly not me); it is finding out new things about the world.
So we have two sides to science -- the live science, where you don't have consensus -- and the consensus, the body of scientific knowledge that can be used for other things (engineering, decision making, ...). The error made by the people who try to deny, for example, the conclusions of the IPCC reports because 'science doesn't do consensus' is that they're confusing the two sides. The live science, which is summarized in the IPCC reports, doesn't have consensus. That's why it's live and why folks have science to do in the area. The body of scientific knowledge, which is also summarized in the reports, does have a consensus, and the reports describe in detail what that consensus covers and how strong it is.
It is possible that the consensus is wrong in its conclusions. But the folks denying it need not only for it to be wrong, but for it to be wrong in a very specific way. If they wanted to make scientific arguments, which is what, say, Wegener did in advancing continental drift in the 1920s, they could do so. But it is their responsibility to make the arguments scientifically and back them with strong scientific evidence, as Wegener himself noted. They don't do that.
I'll follow up the matter of the consensus having to be wrong in a specific way in a different note or two at a later date.
15 December 2008
How to decide climate trends
Back to trying to figure out what climate might be, conceptually, and then trying to figure out what numbers might represent it. A while ago, I looked at trying to find an average value (in that case, for the global mean surface temperature) and found that you need at least 15 years for your average to stabilize, 20-30 being a reasonable range. Stabilize means that the value of the average over a number of years is close to the average over a somewhat longer or shorter span of years. While weather can and does vary wildly, climate, if there is such a thing, has to be something with slower variation.
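The idea of an average 'stabilizing' can be sketched in a few lines of Python. This is a toy version: the synthetic series, the window lengths, and the idea of simply watching the numbers settle are my illustrative assumptions, not the data or method of the original analysis.

```python
import random

random.seed(0)
# Synthetic "annual temperature" record: a slow trend plus weather noise.
years = list(range(1880, 2009))
temps = [0.005 * (y - 1880) + random.gauss(0.0, 0.2) for y in years]

def centered_average(values, center, half_width):
    """Average over the points from center - half_width to center + half_width."""
    window = values[center - half_width : center + half_width + 1]
    return sum(window) / len(window)

# For a year in the middle of the record, watch the average settle
# as the window grows from +/- 1 year to +/- 15 years.
mid = len(temps) // 2
for half in range(1, 16):
    avg = centered_average(temps, mid, half)
    print(f"window +/-{half:2d} yr: average {avg:.3f}")
```

Once the printed averages stop changing much as the window widens, you've found roughly how many years it takes for the average to stabilize.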
But most tempests in blog teapots are about trends. I'm going to swipe an idea from calculus/analysis and have a look at deciding about trends. One of the reasons to take a variety of courses, including ones that may not seem relevant at the time, is to have a good store of ideas to swipe. Er, a strong research background.
As before, I'm going to require that the trend -- to be specific, the slope of the best-fit line (best in the sense that the sum of the squares of the errors is as small as possible) -- become stable in the above sense. This is sometimes referred to as ordinary least squares, and even more breezily as OLS. I don't like that acronyming since I keep reading it as optical line scanner, courtesy of a remote sensing instrument.
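For readers who haven't met it, the OLS slope is just the textbook least-squares formula written out; here's a minimal version (my own sketch from the standard formula, not code from this project):

```python
def ols_slope(x, y):
    """Slope of the ordinary-least-squares best-fit line through (x, y) pairs."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    # Covariance of x and y divided by variance of x (both unnormalized).
    num = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    den = sum((xi - mean_x) ** 2 for xi in x)
    return num / den

# A perfectly linear series recovers its slope exactly.
xs = [0, 1, 2, 3, 4]
ys = [1.0, 3.0, 5.0, 7.0, 9.0]
print(ols_slope(xs, ys))  # 2.0
```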
There's a little more, however, that we can do. When I looked at averages, I took them centered on a given month or year. So estimating the climate temperature for 1978, say, involved using data from 1968 to 1988. The reason, which I didn't explain at the time, is that if climate is a slowly changing thing, then the temperature a year later tells you as much about this year as the temperature a year earlier. And, as a rule, tells you more about this year than the observations 2 years earlier.
My preference for a centered data span conflicts with what people would generally like to do -- to know what the climate of 2008 (for instance) was during, or at least only shortly after, 2008. On the other hand, you can't always get what you want. The priority for science is to represent something accurately. If you can't do that, then you have to keep working. A bad measure (method, observation, ...) is worse than no measure.
So we have two methods to look at already: 1) compute the trend using some years of data centered on our time of interest and 2) compute the trend using the same number of years of data but ending with our time of interest. I'll add a third: 3) compute the trend using the same number of years of data but starting with the year of interest. (This is the addition prompted by analysis.)
In numerical analysis, we refer to these as the forward, centered, and backwards computations (we move forward from the point/time of interest, we center ourselves at the point/time of interest, or we look backwards to the point of interest). For a wide variety of reasons, we generally prefer in numerical analysis to use centered computations. In real analysis (a different field), where one deals with infinitesimal quantities, it is required that the forward and backward methods give the same result -- or else the quantity (I'm thinking about defining a derivative) is considered not to exist at that point. We're not dealing with infinitesimals here, so we can't require that they be exactly equal. On the other hand, if the forward and backward methods give very different answers from each other, it greatly undermines our confidence in those methods. If the difference is large enough, we'll have to throw them out.
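In code, the three choices differ only in which slice of the record feeds the slope computation. A sketch (the slicing convention here is my assumption about one natural way to implement it, nothing more):

```python
def window(values, i, n, kind):
    """Select n points ending at, centered on, or starting at index i.

    'backward' looks back to i, 'forward' looks ahead from i,
    'centered' takes i plus or minus n // 2 (n should be odd).
    """
    if kind == "backward":
        return values[i - n + 1 : i + 1]
    if kind == "forward":
        return values[i : i + n]
    if kind == "centered":
        h = n // 2
        return values[i - h : i + h + 1]
    raise ValueError(f"unknown kind: {kind}")

data = list(range(100))
print(window(data, 50, 5, "backward"))  # [46, 47, 48, 49, 50]
print(window(data, 50, 5, "forward"))   # [50, 51, 52, 53, 54]
print(window(data, 50, 5, "centered"))  # [48, 49, 50, 51, 52]
```

Feeding each of these windows to the same slope routine gives the backward, forward, and centered trend estimates for the year at index i.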
So what I will be doing -- note that I haven't done the computations yet, so I don't know how it will turn out -- is to
1) take a data set of a climate sort of variable (I'll pick on mean surface air temperature again since everybody does; specifically, the NCDC monthly global figures)
2) for every year from 31 years after the first year of data to 31 years before the last year of data
(I'm taking 31 to be able to compute forward slopes for the first year I show over periods as long as that; likewise the 31 years at the end for backwards)
I)
a) Compute forward slope using 3-31 years (for 3, 5, 7, 9, .. 31)
b) Compute centered slope using 3-31 years (meaning the center year plus or minus 1, 2, 3, 4 ... to 15)
c) Compute backward slope using 3-31 years (again 3, 5, 7, 9, .. 31)
II)
a-c) For each, look to see how long a period is needed for the result of the slope computation to settle down (as we did for the average). I expect that it will be the same 20-30 years, maybe longer, that the average took. If it's a lot faster, no problem. If it's longer, then I have to restart with, say, the data more than 51 years from either end.
3) Start intercomparisons:
a) compute differences between forward and backward slopes (matching up the record length -- only look at 3 years forward vs. 3 years backward, not vs. 23 years backward), look for whether the differences tend toward zero with length of record used. If not, likely rejection of forward/backward method. If so, then the span where it is close to zero is probably the required interval for slope determination.
b) ditto between the forward and centered slope computations. The differences will be smaller than 3a since half the data the centered computation uses is what the forward computation also used. Still, I'll look for whether the two slopes converge towards each other. If they don't, then the forward computation is toast.
4) Write it up and show you the results. I'm planning this for next Monday. Those of you with the math skills are welcome (and encouraged) to take your own shot at it, especially if you use more sophisticated methods than ordinary least squares, or other data sets than NCDC. But I'll ask you to hold off putting them on your blogs until after this one appears.
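The plan above, through the forward-versus-backward intercomparison, can be sketched end to end. Everything here is a stand-in: synthetic data replaces the NCDC record, and the printed differences replace whatever 'settles down' turns out to mean once the real computation is done.

```python
import random

random.seed(1)
# Synthetic record: linear warming of 0.01 units/year plus noise.
temps = [0.01 * t + random.gauss(0.0, 0.15) for t in range(140)]

def slope(y):
    """OLS slope of y against 0, 1, 2, ..., len(y) - 1."""
    n = len(y)
    mx = (n - 1) / 2
    my = sum(y) / n
    num = sum((i - mx) * (yi - my) for i, yi in enumerate(y))
    den = sum((i - mx) ** 2 for i in range(n))
    return num / den

year = 70  # index of the year of interest, well inside the record
for n in range(3, 32, 2):  # record lengths 3, 5, ..., 31 years
    fwd = slope(temps[year : year + n])          # forward from the year
    bwd = slope(temps[year - n + 1 : year + 1])  # backward to the year
    print(f"{n:2d} yr: |forward - backward| = {abs(fwd - bwd):.4f}")
```

If those differences shrink toward zero as the record lengthens, the length where they get small is a candidate for the interval needed to determine a trend; if they don't shrink, that's evidence against the forward/backward methods.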
I'll also be providing links to sources (Tamino, RealClimate, Stoat, ... and others to be found) which have already done similar if not quite the same things.
Part of the idea here is to illustrate to my proverbial jr. high readers what a science project looks like, start to finish. Some aspects are:
- lay out a method before you start, and consider what it means both if the results are as you expect them to be, and if they're the other way around.
- consider what you'll do if they're different
- look at what other people have already done
- write it all up so that others can learn from what you did
10 December 2008
More blogs
I read quite a few more blogs than are on the blogroll. As I mentioned in the original blogroll note, these are ones that link over here. (And I might be missing some; please let me know if I am.)
A recent addition is my daughter's, http://evenmoregrumbinescience.blogspot.com/ It'd be good to see more comments on her post about teaching physics to women (see 'No Silver Bullet', or 'Teaching Women Science'). Last I looked, it's only the two of us. And since I was the one who taught her how to build rockets, her responses aren't exactly surprises.
More from my reader:
Climate
- Climate Change: The Next Generation
- ClimateSpin
- NASA: JPL
- Real Climate
- William Connolley: Stoat
- Coby Beck: A Few Things Ill-Considered
Other, mostly biology:
- Phil Plait: Bad Astronomy
- Chris Nedin: Ediacaran
- ERV
- John Wilkins: Evolving Thoughts
- Mark Chu-Carroll: Good Math, Bad Math
- Chris Mooney and Sheril Kirshenbaum: The Intersection
- Two Minds
- PZ Myers: Pharyngula
- Troy Britain: Playing Chess with Pigeons
- Mike Dunford: The Questionable Authority
- Orac: Respectful Insolence
Due reminder: These other blogs, and my comments on them, may not be as mild-mannered as here.
09 December 2008
Who can do science
Everyone can do science. Most people, especially younger children, do so on a routine basis. Science is just finding out more about the universe around you. Infants playing peekaboo are conducting a profound experiment. They cover their eyes and everything vanishes. When they uncover their eyes, everything is back. Wow! Things have persistent existence! Even if you can't see them, they're still there. Then you cover your eyes, and the child can still see you. Existence continues. Whoa. Elaborate the game. One of you hides behind something. Then pops out. Whee! Things continue to exist even with your eyes open, even if they pass from view. No wonder children giggle at the game. This is a profound discovery about the nature of the universe.
In similar vein, we can all speak prose in our native languages, or run. The thing which becomes a question later along is whether you are doing it at a professional level. I run, for instance. Most of us can. And, with appropriate training, almost all of us can, say, run a marathon. Very few of us can run a marathon, however, at the pace that elite runners do. Even fewer could do so without undertaking very serious, elite-level training to make the attempt. Similarly, while we can all talk, and most of us write, very, very few are realistic candidates for the best-seller lists, or Nobel prizes.
So it goes with science. Doing it at a professional level is a lot harder than doing it at all. One thing you often encounter in coming up with ideas is to discover that your wonderful, creative, idea was already thought of. I give myself points (when looking outside my field) for how recently it was thought of. More than 300 years ago, only 1 point. Less than 200 is more, and less than 100 even more. Every so often I manage to go out of field and come up with a new idea (to me) that professionals thought of only 30-50 years ago. On the rare occasion, I come up with one that they didn't come up with until within the last 30 years. I give myself a lot of points for those. They're pretty rare.
That's one part of doing science at a professional level -- your idea or discovery has to be not only new to you, but new to the world. Consequently, a lot of the training for becoming a professional involves learning what is already known. The answer is, unfortunately for we who'd like to make a grand splash of some kind, a lot. Worse, there are now centuries of very creative, knowledgeable people who have been working at it. Coming up with something novel is, therefore, hard.
A high school friend illustrated this neatly, if accidentally, for me. We met up over a holiday early in our college careers and he was complaining about the lack of creativity in computer science. For instance, thinking that something new and good was to be had by looking at three-valued logic systems rather than the two-valued logic expressed in binary computers. He was confident such an idea would never be looked at. The week before, I had been at a presentation about three-valued logic circuits and why they'd be useful. And the novel part was not the idea, which was much older, but how the speaker planned to implement it in hardware.
Conversely, if you'd like to do something novel, you're much better off looking at some area that is new, using new equipment, etc., so hasn't had a long history for people to work out a large number of ideas. In that vein, it's much easier to pull off in the satellite remote sensing of tropospheric temperatures for climate (a topic less than 20 years old) than for the surface thermometer record (well over 100 years old). A fellow I know published in the prestigious Geophysical Research Letters, largely on the strength of this point. The paper came out in 2003. He looked at how the Spencer and Christy satellite algorithm worked, and realized that it assumed something which in high latitudes was not a good assumption. He then worked out what the implications were (i.e., that trends in sea ice cover would be falsely reported as trends in temperatures), and documented it well enough to be published in the professional literature:
Swanson R. E., Evidence of possible sea-ice influence on Microwave Sounding Unit tropospheric temperature trends in polar regions, Geophys. Res. Lett., 30 (20), 2040, doi:10.1029/2003GL017938, 2003. (You can follow this up at http://www.agu.org/)
Now, the thing is, Richard did not have a doctorate. He had a master's. And his master's was not in science, it was engineering. What mattered is that he saw something that hadn't been noticed, documented it well, and submitted it to the professional publication. And he got published in this high profile journal even though he had no PhD, nor even previously worked in the field. I keep him in mind when people talk about the 'conspiracy to keep out' ... whoever.
On a different hand, as an undergraduate I did do work worth a coauthorship on a significant journal paper. (Significant journal, that is; whether the paper was significant, I leave to its readers.) But that was working for a faculty member, and while my contribution was indeed worthy of a coauthorship (I realized that later; at the time I assumed Ed was simply being a nice guy -- which he was, but that turned out to be a different matter), I couldn't have gotten the project started on my own. Once started in a fruitful area, I could have finished it, but in a professional, you want to see the person be able to find out what the fruitful area is.
So how young can you go; how much experience is needed? Well, if you choose right, and are creative enough, jr. high. My niece managed a science fair project last year that I still encourage her to write up for serious publication. She hit on an idea in an area that hasn't been studied a lot already (it's new and people there have been assuming an answer, she documented it -- good science) and a way of testing it (ditto) and collected the data and evaluated it scientifically. Yay! She might need a hand on the professional writing and statistics description, but the science part, she nailed solo.
Where does that leave us as readers of blogs and such? Alas, it means we have to think. The presence of a PhD is not a guarantee of correctness. Nor is the absence of one a guarantee of error. And this remains true even if we consider what area the PhD was in and the like. What is more reliable is that the older an area (surface temperature record interpretation, for instance), the less likely it is that someone can make a contribution or correction without doing quite a lot of work. The field is long past the point where it's likely that they've not noticed the urban heat island, for instance. (I haven't searched seriously, but have already run across reference to it from the early 1950s.) Blog commentators who plop this one down as if it were an ace of trump -- "Ha, they didn't consider the urban heat island. Therefore, I can conclude whatever I want." or the like -- can speedily be added to your list of unreliable sources. The urban heat island has been considered, quite often, for longer than they've been alive. It's an old field. Tackling ARGO buoys is a less overwhelming obstacle. (But be sure your math is up to the work!)
For the younger people (those of you who are, which doesn't seem to be many, alas; but you parents, remember it for your kids' sake), it means that the time to start working on doing science is today. Do your own science (meaning, learn things about the world), and try to do some professional-level science too (try to learn things that nobody else has figured out yet). The heart of science is in finding things out. This doesn't have to have anything to do with what you're doing in school. Work on things that interest you, whatever they may be.
In similar vein, we can all speak prose in our native languages, or run. The thing which becomes a question later along is whether you are doing it at professional level. I run, for instance. Most of us can. And, with appropriate training, almost all of us can, say, run a marathon. Very few us of can run a marathon, however, at the pace that elite runners do. Even fewer could do so without undertaking very serious, elite-level, training to make the attempt. Similarly, while we can all talk, and most of us write, very, very few are realistic candidates for the best seller's lists, or Nobel prizes.
So it goes with science. Doing it at a professional level is a lot harder than doing it at all. One thing you often encounter in coming up with ideas is to discover that your wonderful, creative, idea was already thought of. I give myself points (when looking outside my field) for how recently it was thought of. More than 300 years ago, only 1 point. Less than 200 is more, and less than 100 even more. Every so often I manage to go out of field and come up with a new idea (to me) that professionals thought of only 30-50 years ago. On the rare occasion, I come up with one that they didn't come up with until within the last 30 years. I give myself a lot of points for those. They're pretty rare.
That's one part of doing science at a professional level -- your idea or discovery has to be not only new to you, but new to the world. Consequently, a lot of the training for becoming a professional involves learning what is already known. The answer is, unfortunately for we who'd like to make a grand splash of some kind, a lot. Worse, there are now centuries of very creative, knowledgeable people who have been working at it. Coming up with something novel is, therefore, hard.
A high school friend illustrated this neatly, if accidentally, for me. We met up over a holiday early in our college careers and he was complaining about the lack of creativity in computer science. For instance, thinking that something new and good was to be had by looking at 3 value logic systems rather than 2 value as was expressed in binary computers. He was confident such an idea would never be looked at. The week before, I was at a presentation about 3 value logic circuits and why they'd be useful. And the novel part was not the idea, which was much older, but how the speaker planned to implement it in hardware.
Conversely, if you'd like to do something novel, you're much better off looking at some area that is new, using new equipment, etc., so hasn't had a long history for people to work out a large number of ideas. In that vein, it's much easier to pull off on the satellite remote sensing of tropospheric temperatures for climate (a topic less than 20 years old) than for the surface thermometer record (well over 100 years old). A fellow I know published in the prestigious Geophysical Research Letters, largely on the strength of this point. Paper came out in 2003. He looked at how the Spencer and Christy satellite algorithm worked, and realized that it assumed something which in high latitudes was not a good assumption. He then worked out what the implications were (i.e., the trends in sea ice cover would be falsely reported as trends in temperatures), and documented it well enough to be published in the professional literature:
Swanson R. E., Evidence of possible sea-ice influence on Microwave Sounding Unit tropospheric temperature trends in polar regions, Geophys. Res. Lett., 30 (20), 2040, doi:10.1029/2003GL017938, 2003. (You can follow this up at http://www.agu.org/)
Now, the thing is, Richard did not have a doctorate. He had a master's. And his master's was not in science, it was engineering. What mattered is that he saw something that hadn't been noticed, documented it well, and submitted it to the professional publication. And he got published in this high profile journal even though he had no PhD, nor even previously worked in the field. I keep him in mind when people talk about the 'conspiracy to keep out' ... whoever.
On a different hand, as an undergraduate I did do work worth a coauthorship on a significant journal paper. (Significant journal, that is, whether the paper was significant, I leave to its readers.) But that was working for a faculty member, and while my contribution was indeed (I realized later, I assumed that Ed was simply a nice guy -- which he was, but that turned out to be a different matter) worthy of a coauthorship, I couldn't have gotten the project started on my own. Once started in a fruitful area, I could have finished it, but for a professional, you want to see the person be able to find out what the fruitful area is.
So how young can you go; how much experience is needed? Well, if you choose right, and are creative enough, jr. high. My niece managed a science fair project last year that I still encourage her to write up for serious publication. She hit on an idea in an area that hasn't already been studied much (it's new, and people there had been assuming an answer; she documented it -- good science), found a way of testing it (ditto), and collected the data and evaluated it scientifically. Yay! She might need a hand with the professional writing and the description of the statistics, but the science part she nailed solo.
Where does that leave us as readers of blogs and such? Alas, it means we have to think. The presence of a PhD is not a guarantee of correctness. Nor is the absence of one a guarantee of error. And this remains true even if we consider what area the PhD was in, and the like. What is more reliable: the older an area is (surface temperature record interpretation, for instance), the less likely it is that someone can make a contribution or correction without doing quite a lot of work. The field is long past the point where it's likely that nobody has noticed the urban heat island, for instance. (I haven't searched seriously, but have already run across references to it from the early 1950s.) Blog commentators who plop this one down as if it were an ace of trumps -- "Ha, they didn't consider the urban heat island. Therefore, I can conclude whatever I want." or the like -- can speedily be added to your list of unreliable sources. The urban heat island has been considered, quite often, for longer than they've been alive. It's an old field. Tackling the ARGO buoys is a less overwhelming obstacle. (But be sure your math is up to the work!)
For younger people (those of you who are, which doesn't seem to be many, alas; but you parents, remember this for your kids' sake), it means that the time to start doing science is today. Do your own science (meaning, learn things about the world), and try to do some professional-level science too (try to learn things that nobody else has figured out yet). The heart of science is in finding things out. This doesn't have to have anything to do with what you're doing in school -- just things that interest you, whatever they may be.
08 December 2008
Question Place 4
New month and I'm here, so here's a new spot for questions (plus comments and suggestions).
I'll put one out myself, on a non-climate issue (fortunately, your questions needn't be on climate either). I've been thinking a bit about my reading, and noticing that almost all of it was originally written in English. That's ok, as there's more good stuff written first in English than I can hope to read. Still, there's some awfully good writing in other modern languages. So, I'll welcome suggestions for good fiction you've read that was originally written in another language (but has a decent English translation). I've already got fair ideas for French, German, and Russian, and a little for Czech, Italian, and older Chinese and Japanese. But that leaves a lot of languages untouched.
Some areas of study
In writing the recent note on what a PhD means, it occurred to me that it might be worth mentioning the areas that I've taken classes in. This, after agreeing with the comment that you can't presume that a PhD person has more than a 101-level knowledge in areas outside of what they study themselves. You can't presume it, but then again, odds are good that there are some areas where a person goes above the 101 level.
My schools used peculiar numbering schemes, so I'll partition it directly by who was in the class:
Graduate level areas:
- Astrophysics: galaxies, interstellar medium, cosmology, astrophysical jets
- Geosciences: geophysical fluid dynamics, geochemistry, atmospheric chemistry, numerical weather prediction, tides, radar meteorology, cloud physics, ...
- Engineering: engineering fluid dynamics
- Math: asymptotic analysis, partial differential equations, ...
- Linguistics: syntactic analysis, computational linguistics
- Paleoclimatology
- History: History of Science, Intellectual History of Western Europe
- Physics: Quantum Mechanics, Solid State Physics, Nuclear and Particle Physics
- Math: bunches, including probability, statistics, differential geometry, nasty things to do to ordinary and partial differential equations, numerical ways of beating on such equations and systems of equations
- Physical chemistry
- ... and probably several more
I'm leaving out a number of things because, well, I don't remember everything offhand, much less in order. But it's a sampling. One thing not missing is any lower level courses in astronomy and astrophysics -- I started with the graduate level courses. Also not missing is my courses in glaciology. I've never taken one, but my first (coauthored) paper was on the subject. I later wrote one solo on a different area of glaciology. Absence of courses is not a guarantee of absence of professional level knowledge. One thing, ideally, you learn along the way to your PhD (better if you get in practice while still in elementary school!) is how to teach yourself new subjects.
05 December 2008
National Academies Survey
The National Academies (US, for science and for engineering) are holding a survey to see what people are interested in hearing about in science and engineering. The choices are limited, but that makes for an easy poll to answer. Less than 2 minutes for me. Maybe 30 seconds.
http://www.surveygizmo.com/s/75757/what-matters-most-to-you-iv
04 December 2008
What does a PhD mean
The trivial answer to the subject question is 'Doctor of Philosophy', which doesn't help us much. I was prompted to write about it by responses to Chris's comments on the pathetic petition over on Chris Colose's blog, wherein a reader seemed to think that once one had a PhD, one had received a grant of omniscience.
Ok, not quite. Rather, to quote Stephen (11 June 2008) directly: "...a person with a PhD is more apt to think critically before making a decision. Granted, it’s not true in every case, obviously, but someone with a PhD in any scientific field is statistically more likely to look at all of the information available to them." Unfortunately he never gave us a pointer to where those statistics were gathered -- the ones that supported his claim that PhDs in a scientific field were 'statistically more likely' .... I'm reminded of the observation that 84.73% of all statistics on the net are made up.
The details of what a PhD means vary by advisor, school, and era. But for what's at hand, the finer details don't matter. One description of doctoral requirements is "an original contribution to human knowledge". That much is true whether we're talking about science or literature. The resulting contribution should (more so these days than a century ago) be publishable, and published, in the professional literature. Notions vary, but there's also a principle that someone who earns (or is a candidate to receive) a PhD should conduct the work with significantly less guidance than an MS candidate. And far less than an undergraduate. Again, true whether science or literature.
One thing you don't see there is 'more apt to think critically' about everything they comment on. You also won't find 'look at all information' about everything. The 'about everything' is my addition, not part of the exact quote. But for the comments to be meaningful, they have to apply to the specific thing at hand, whatever that is, whether it's the pathetic petition or other things allegedly about science.
The possession of a doctorate says, instead, that the owner is likely to be capable of making an original contribution to knowledge, without too much guidance from someone else. This is a pretty good sign. But it hardly means that the owner has been turned into Mr. Spock. PhD holders are human still. We all have capabilities, PhD or no. And we humans don't always exercise the highest of our abilities. The area where you can bet (if not guarantee) that a PhD holder is more apt to think critically and consider all information is the area of their professional work. Outside that ... you're much better off either asking whether they brought their full ability to bear, or assuming that they didn't.
In saying that, remember, I do have a PhD myself. It is possible that a PhD-holder is bringing full abilities to bear. If so, then they can fare better than most non-PhDs in evaluating things which claim to be science. I did, for example, take such a look at a couple of different scientific papers regarding left-handedness (I'm left-handed and interested in the topic). I thought they were very bad, for a number of reasons of 'how you do science'. The serious work, however, was done by the people who were in the field (PhD or no) and wrote the rebuttal papers for the peer-reviewed literature. They named many things that I got, and many more besides. Being a scientist got me about 1/3rd of the way through the list of errors that the original authors had committed.
On the other hand, many of the things I looked at and for in evaluating those papers were things I'm discussing here and seriously believe a jr. high student can learn to apply regularly.
So where are we? As Chris suggested in his response to Stephen: "From experience with my professors though, I wouldn’t ask many of them a question outside of their field, at least beyond 101 level stuff, so maybe not. But at the same time, none of them would go off signing petitions about things they know little about." Those are both good rules of thumb. Outside the professional field, a PhD-holder can't be presumed to know more than (or, depending on field, even) 101 level stuff. But most of us know this about ourselves, so toss the junk mail petitions where they belong when they arrive.
03 December 2008
Words to beware of
Some words have good meanings in normal conversation, and different meanings in science. 'Theory' is one such. But one that is seriously hazardous to try to interpret until you know the full context is 'rapid'. For folks studying chemical reactions, that can be a femtosecond. For geologists studying tectonic processes, it can be millions of years ('rapid uplift of the Himalayan plateau').
Even within a field, say glaciology, you can be looking at a few hours (rapid breakup of an ice shelf) to a few thousand years (rapid onset of an ice age).
Any other contributions -- words whose meanings vary widely between fields, or even within one? We'll take 'sudden' and the like as covered by 'rapid'.
01 December 2008
Plotting software
Time for some collective wisdom. What are some good, noncommercial (or at least not expensively commercial; matlab does a good job, but the price tag has 4 digits left of the decimal point, and 2 would be ok) plotting packages? One of the best I ever encountered was CricketGraph, but that was back in 1990 or so. They seem defunct and it's getting hard to run on modern computers. (Impossible with Mac OS X 10.5, doable in 10.4, but it came out in the era of OS 6.) Mac or *nix platforms.
I'm not trying to do anything elaborate, just view some data: perhaps multiple x or y axes, logarithmic axes, labels put where I'd like (at a click, not by computing displacements, to name a flaw of GrADS). Data to come from plain text files. I suppose I could insert commas if the software insists. I'd as soon be able to go to a few hundred thousand data points, and 10,000 or so is definitely required. If it turns out that 'grapher' (shipped with Macs) does the job, I'll register my embarrassment and ask how to make it read in a data file.