18 May 2009

The central skill of a scientist

"Another beautiful theory slain by an ugly fact." I believe that is from Julian Huxley, but it could be T. H. Huxley or JBS Haldane instead.

In any case, being able to say that and move on to the next theory or idea is the central skill of a scientist. The math, experiments, field observations, and so on, are only tools. Central is to know that what you think, perhaps even very strongly, to be the case today could run into some ugly facts tomorrow ... and then you'll have to change your mind in accord with this new evidence.

One of my encounters with this involved a notion (far too early in the process to call it a theory) about clouds. A friend was studying clouds and the snowfall from them. The clouds were in a bunch of parallel bands. This was no surprise. The surprise was that spaced regularly down the bands were 'knots' of particularly high snowfall rates. Why should such a thing happen? I happened to be looking at hydrodynamic stability problems at the time, and one looked to be about right. So I mentioned it, and he said 'Bob, there's no known surface tension effect in clouds.' Ok, I knew that, but maybe we had the first observations that there was such a surface tension effect in clouds. The thing was, the mechanism made definite predictions about how far apart the knots would be. So, we checked. The knots were not spaced right for that idea, and were nowhere near. An ugly fact slew my beautiful theory.

Oh well, time to move on to newer and better ideas. And there is the hard part of doing science. People -- scientists included -- get attached to their ideas, or to ideas they learned long ago. Letting go of one because the data just aren't there, because new data come along and show the old data were bad, or for any of the host of reasons that lead us to change our minds in science ... that's hard. This doesn't mean that you must accept whatever new thing anyone presents. That would be foolish; you're certainly entitled to check under the hood, kick the tires, take the new out for a test drive. But, after challenging the new (data, model, theory, inference, ...), if you can't shake it, you have to grant it at least tentative acceptance -- even if it is contrary to something else that you like better.


Alastair said...

There is an unrecognised effect (hence no references) similar to surface tension associated with clouds. It is what keeps their external boundaries sharp. The effect is greater at lower altitudes. How would this fit with your knots?

Cheers, Alastair.

Robert Grumbine said...

You'll have to provide more details, and make it a recognized, or at least recognizable, effect. My cloud physics friends, as they fly cloud droplet and ice crystal sensors towards clouds, see no very sharp boundary. (That, too, was a problem for my notion.)

Cloud edges may appear sharp -- if they're made only of water droplets and you see them from an appropriate angle. If the droplets freeze, the edges of the cloud rapidly become far less optically distinct. This is called glaciation, and is a warning sign that the cloud is about to rain on someone.

Alastair said...

Well, I can give you a reference now :-)

McDonald, A. B. (2009) "Error in OLR Model", Poster, RMetSoc Conference

Unfortunately you need to format the abstract yourself.

WRT clouds, they emit blackbody radiation which is absorbed by the greenhouse gases within 30 m of their surfaces, which means that the clouds are surrounded by warmer air. The air further away does not receive as much radiation in the greenhouse gas bands because the radiation is mostly "frozen out", so the air is cooler.

The abstract does not mention the cloud effect, but it is something I have been considering for some time.

Although my abstract was accepted, I was warned that my poster would not be well received! Anyway, what do you think?

Cheers, Alastair.

Robert Grumbine said...

Your effect definitely wouldn't work to save that old notion of mine. No worry anyhow. I've had quite a few ideas since that 22-year-old one.

For your presentation ... I see why you were warned about the likely reception. But congratulations for pursuing your ideas into the professional setting! Now, the most likely response is to be ignored. That's something observed by all of us who have given posters (made presentations, written papers, ...). If not ignored, then you're going to get some pretty critical comments. Welcome them, take notes, ask for more detail and where you can learn more about the processes the person mentions. Then make an improved model of your idea.

One thing which looks wrong with your version is an 'and then what happens' issue. Namely, I'll suppose that you're right about all the radiation from the cloud being absorbed by the air within 30 m and that air getting warmed.

What happens next? Warm air rises. If you're right, clouds should be surrounded by walls, about 30 m thick, of ascending air. But we don't see such walls. Updrafts are in the centers of convective clouds, not around the edge. Wind-parallel bands (like my friend was studying) have descending air in the clear section around the bands, not ascending as your model requires.

Two books I've recently purchased, so now can recommend, are Grant Petty's A First Course in Atmospheric Radiation and A First Course in Atmospheric Thermodynamics, both from www.sundogpublishing.com. If you don't already have them, and mastery of both from start to finish, then let me strongly suggest you do both.

Ian said...

Thanks, this was fun to read. It really can be hard to let go of an idea. It feels like a bit of a sunk costs problem, after hours of scratching your head over "uncooperative" data.

I once experienced a funny version of this, where some colleagues/friends encouraged me to keep slaving away at an interesting idea that, in the end, I was sure would never come out of any data I collected. My friends thought they were being helpful by encouraging me to continue - after all, who wants to see a friend put in all that enthusiastic work just to admit defeat? :)

Alastair said...

Thanks for your comments. I now realise that I will have to be careful when I mention 30 m. That is the effective limit at which radiation is absorbed, but the radiation falls off exponentially from the surface as explained by Beer's Law.

Tyndall wrote that 10% of the radiation was absorbed in the first 10 feet. That means that 9% is absorbed in the next 10 feet, with 8.1% in the next 10 feet. By the time you get to 300 feet (roughly 90 m) there is almost nothing left to absorb.
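Alastair's slab-by-slab arithmetic is just the discrete form of Beer's Law. As a minimal sketch (using only his 10%-per-10-feet figure; the function name is mine, purely illustrative), the geometric fall-off can be computed directly:

```python
# Discrete Beer's Law: each 10-foot slab absorbs 10% of whatever
# radiation reaches it, so the unabsorbed fraction decays geometrically.

def remaining_fraction(n_slabs, absorbed_per_slab=0.10):
    """Fraction of the original radiation still unabsorbed after n_slabs slabs."""
    return (1.0 - absorbed_per_slab) ** n_slabs

for feet in (10, 100, 300):
    slabs = feet // 10
    print(f"{feet:3d} ft: {remaining_fraction(slabs):5.1%} still unabsorbed")
```

With those numbers, roughly 90% survives the first 10 feet, about 35% survives 100 feet, and about 4% survives 300 feet -- close to, though not quite, 'nothing left'.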

My second point is probably more contentious, but warm air does not necessarily rise. In the atmosphere, when air warms it is the pressure which increases, not the density. For instance, in deserts the surface air continually gets hotter until there is a hole in the inversion layer, and a dust devil forms. Convection is mainly due to evaporation, which does reduce density, because water vapour has a lower molecular weight than air (well, you know what I mean).

So the cloud effect is, as I see it, a bit like this. A molecule on the outside of a cloud condenses and gains sensible heat due to the loss of latent heat. Some of the sensible heat is lost by radiation, which prevents immediate re-evaporation (the molecule is too cold.) The radiation heats adjacent outer molecules of water vapour which lose that energy to air molecules due to collisions. So the cloud is surrounded by a thin skin of warmer air preventing the cloud from expanding.

That skin is too thin and too cool to convect. Would that fit your scheme as a surface tension substitute?

I have posted my abstract Error in OLR Model? on my own blog properly formatted so that it is readable. I am arguing that LTE does not apply to terrestrial planetary atmospheres.

Thanks for your book recommendations. I have taken them in the past, notably Chandrasekhar's "Radiative Transfer"! Interestingly, he also makes the mistake of assuming that the planetary atmosphere of the Earth behaves in the same manner as the photosphere of the Sun, i.e. that it is in LTE. I can't imagine Grant Petty is aware of that error. Certainly, having searched the preview of his book on Amazon, there is no mention of radiation being "frozen out."

See what Eric Weisstein has to say about LTE. But I have really got to get on with writing the Poster!

Cheers, Alastair.

Robert Grumbine said...

Ian: Good example. On a different hand, I may well have encouraged you to pursue the idea also. Even if I was pretty sure that the notion you were pursuing wouldn't be supported by the data that were out there, or could be out there.

The thing is, even if your original notion (like many of mine) is not correct, it's possible that as you work on it, you'll find something else that does work. If you never started working, you'd never find the good idea that does work. Most of the ideas I've had that did work, I got by starting out on something else instead.

The important part of doing science is learning more about how the universe works. Whether or not your original idea holds up, in pursuing it you learn more about the universe. So that 'sunk cost' is never really lost.

Philip H. said...

A better quote, with the same message:

The greatest difficulty in the world is not for people to accept new ideas, but to make them forget about old ideas (John Maynard Keynes).

Now, that done, here's my devil's advocacy question for you: Given that science is a series of rigorous (or not so rigorous) tests of ideas against facts, what conclusions can you draw about the status of the climate denial movement? I ask, knowing generally your answer, because they all too often hang their hat on "new evidence" that supposedly contradicts "old" theory on AGW.

Robert Grumbine said...

Related to your quote, Philip, is "It isn't what you know that's the problem, it's what you know that isn't so." Also Keynes, if I remember rightly. I think they're all strongly related. Part of the difficulty with 'slain by an ugly fact' is that you have to admit that your original idea has indeed been slain, that it's time to forget that one and move on.

Rather than speculate about a 'movement' that I don't think exactly is one, and whose membership you and I might not even agree about entirely, let's take a look at some responses on the blog here, to the Does CO2 correlate with temperature note.

On the science end of responses, some folks noted errors they thought I'd made and suggested the corrections. There was a little disagreement in the comments between some of the science-minded folks (you and gmcrews, for instance). Or the person just asking whether correlation was necessarily causation. No problem. The thing that makes it no problem is, the line of the commentary in such notes is focused on understanding the system -- what does the correlation mean? Should it be a logarithmic response instead of linear? Does the graph fairly represent the data involved? Joseph (13 April) noted his own work, and some interesting differences between it and what I showed.

On the antiscientific (or, as John Mashey would say, agnotological) side ... learning more about the system doesn't show up. Instead, we have people talking about how during the ice ages CO2 followed temperature, and explicitly or otherwise using that to reject that there might be a meaningful correlation since the industrial revolution. Or they chant some mantra about 'autocorrelation', as if that would make the correlation go away, but, importantly, never do the computations to show that, indeed, it does. Oddest were those who denied that anybody had ever said that there was no correlation, in spite of the fact that the article was prompted by, and linked to, a source that did exactly that.
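For readers wondering what 'doing the computations' would even look like, here is a small sketch on synthetic data (entirely my own illustration -- not the CO2/temperature series themselves, and all names are mine): serial correlation shrinks the effective sample size used in significance testing, via a standard red-noise adjustment n_eff = n(1 - r1·r2)/(1 + r1·r2), but it does not make the correlation coefficient itself go away.

```python
import random

def pearson_r(x, y):
    """Plain Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def lag1_autocorr(x):
    """Lag-1 autocorrelation: the series correlated with itself shifted by one."""
    return pearson_r(x[:-1], x[1:])

random.seed(1)
n = 300

def ar1_noise(phi):
    """Red (AR(1)) noise, the usual model for serially correlated residuals."""
    v, out = 0.0, []
    for _ in range(n):
        v = phi * v + random.gauss(0.0, 1.0)
        out.append(v)
    return out

trend = [0.02 * i for i in range(n)]  # a shared upward trend
x = [t + e for t, e in zip(trend, ar1_noise(0.8))]
y = [t + e for t, e in zip(trend, ar1_noise(0.8))]

r = pearson_r(x, y)
rho = lag1_autocorr(x) * lag1_autocorr(y)
n_eff = n * (1 - rho) / (1 + rho)  # effective number of independent samples
print(f"r = {r:.2f}; effective sample size ~ {n_eff:.0f} of {n}")
```

The correlation survives; what the autocorrelation changes is how many independent samples you may claim when assessing its significance. That is a computation, not a mantra.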

On the science side, folks stayed on topic, asked questions for information, did work to establish conclusions they'd reached, or to persuade me to move mine. All these involve the 'risk' that their beautiful idea would be slain by some ugly fact that I might bring up. Likewise, they have the opportunity to persuade me that my idea had been slain.

On the agnotological side, they don't stay on topic (a number of the posts were directed to other parts of the blog, they talk about ice age cases which are not relevant to the present, etc.), ask 'debater's questions' -- questions not for information but for rhetorical effect, and don't do any of the work to support a point of theirs. Somehow I'm supposed to be persuaded by the name-calling and yelling? (Some comments did not go through for that reason.) They run no risk of their own 'beautiful idea' being slain by ugly data, as they're not even reading the ugly data. For the same reason they also have no prospect of persuading me that my idea has been slain.

A different line comes to mind: "A turtle makes progress only when he sticks his head out." Science is rather turtle-like in that respect. To make progress, you have to take some risks -- make statements that can be shown wrong, and change your position if the evidence is not in their favor.

Alastair said...

I have to protest! Scientific theories are not slain by ugly facts. I gave you an ugly fact from Fleeming Jenkin, that any trait that evolved would very soon be bred out, but that did not kill the theory of natural selection. As Max Planck explained, scientific theories don't die, they only fade away. It's the professors who are buried.

(I am not arguing for creationism or even intelligent design. In fact, I have discovered that the error at the root of the climate models is caused by the belief that the global climate is an intelligently designed control system.)

As Dale Carnegie pointed out in "How to Win Friends and Influence People" even after someone has been defeated in an argument they will still believe that they are correct, and that something will turn up to prove them right. Unusually, Mendel's work did that for Darwin.

There is a recent book called "The Black Swan" which explains what the world is really like. It explains, better than I can, how "There are more things in Life, the Universe and Everything, Horatio, than are dreamt of in your philosophy ...". Systems such as life, the universe, and everything, e.g. the climate system, are chaotic. Linear systems are just a special case.

In sci.environment, Mike Tobin scornfully dismissed my ideas when he went searching for the next paradigm shift. But it wouldn't be a paradigm shift if it didn't seem ridiculous at first sight. That is where you find paradigm shifts, amongst the stones that the builder has cast aside.

Anyway, I recommend that you read "The Black Swan" by Nassim Nicholas Taleb.

Cheers, Alastair.

Robert Grumbine said...

Alastair: I hadn't realized you were serious in putting forth Jenkin's 100+ year old argument. It was answered long before you and I were born.

Jenkin was writing at a time before it was known that inheritance is not an averaging process (blending, in the term of the day). It also ignored an important feature of evolution. Namely, that it is not random. If you assume that inheritance is a strictly blending process, and that the results of the blending have no effect on the survival of the descendants, then you arrive at the problem Jenkin posed for Darwin.

First issue at hand is that it was not a fact, ugly or otherwise, but a prediction of Jenkin's. He did not run any breeding experiments to demonstrate his inferences. Assertions unsupported by observation don't qualify as facts.

Second part is that the theoretical basis was flawed. He had to assume both a) that traits had no effect on survival (else the organisms with the traits would survive better from this generation to the next, and then have a much improved chance of meeting up with other advantaged individuals) and b) that the traits would never reappear. That is, if some advantageous mutation appeared, it would be impossible for it to show up again, and, in particular, impossible for it to show up in another individual in the same generation. (I'm speaking here in terms of sexual species. The argument against Jenkin is much easier in asexual species like bacteria.)

Third part of why he was wrong was something that neither he nor Darwin knew about in detail. Namely, that we have genes, and inheritance is necessarily chunkwise (genes) rather than blending. Darwin's 1st edition was much more in this direction. Later editions moved towards blending (and Lamarckian) views, not because of increasing data for those, but because there was no understanding of how discrete inheritance could work fundamentally.
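Jenkin's swamping argument and its resolution can both be put in toy-model form (my own illustrative sketch; the function names and numbers are arbitrary, not anything from Jenkin or Darwin): under pure blending, a lone variant's trait deviation halves every generation, while under particulate inheritance a discrete allele with even a small survival advantage grows in frequency rather than washing out.

```python
# Toy contrast between the two inheritance models discussed above.

def blending_deviation(d0, generations):
    """Pure blending: each offspring is the average of its parents, so a
    single variant's deviation from the population mean halves per
    generation -- Jenkin's 'swamping' problem for Darwin."""
    return d0 * 0.5 ** generations

def allele_frequency(p0, s, generations):
    """Particulate (Mendelian) inheritance with selection: a discrete
    allele with relative fitness advantage s follows the standard
    one-locus haploid recursion p' = p(1+s)/(1 + p*s); it is never
    diluted by mating with the average type."""
    p = p0
    for _ in range(generations):
        p = p * (1 + s) / (1 + p * s)
    return p

print(blending_deviation(1.0, 20))        # swamped: about a millionth remains
print(allele_frequency(0.01, 0.05, 200))  # spreads: most of the population
```

With s = 0 the allele frequency simply stays put (drift aside), which is the particulate point: variation is conserved rather than averaged away.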

Between Jenkin and Darwin, we also see the superiority of observation. Jenkin had a strictly theoretical prediction, zero experiment or observation. Darwin derived his theoretical constructs from observations. Though he didn't know how the traits were transmitted, they clearly were, the traits could indeed affect survival (thence further transmission), and new traits did arise in even 'stable' populations. Those fundamentals were entirely untouched by later discovery of genes providing a quantization of inheritance.

On the other hand, quantization also shows the error in the line you attribute to Planck. His own development of quantization was accepted by people already in his field. He did not have to wait for everyone already in the field to die off. Nor did Einstein in his work on quantum mechanics, or special relativity, or general relativity. Schrödinger and Heisenberg's quantum mechanics was rapidly taken up by older physicists, most notably Bohr. And ... I could go on for quite a while.

Progress by waiting for the 'old guard' to die off is seldom the case in science.

Usually, people who do the things that make for 'paradigm shift' don't think of it that way at the time they're doing it. This is one of my interests in reading original sources. And looking weird is quite easy. Almost all notions do. Almost all of them are wrong. Weird and correct is exceedingly rare.

In a different note, you said Chandrasekhar was wrong. He had an extraordinarily prolific and long career. If I were you, I'd be much more careful about making such a declaration. Almost nobody who said Chandra was wrong ever turned out to be right in saying so. The most notable example, and the University of Chicago might still be sending thank you notes to the UK for this, was Eddington's maltreatment of Chandra. Chandrasekhar got a Nobel for that work. If you're going to say he's wrong, you have a lot more homework to do. Saying so is also a way to get yourself ignored. Scientists often hear from people they don't know who say that they have proof that Newton/Galileo/Einstein/Chandrasekhar/... is wrong. You don't want to get yourself classed with them.

Alastair said...


Jenkin's idea was an ugly fact. Admittedly it was only a thought experiment, but it is well known that species will revert to type over a period of time. It was accepted as a valid criticism by Darwin, but did not end his theory.

My raising this issue is an ugly fact that counters your theory of science being guided by simple logic, yet you are no more prepared to abandon your thesis than Darwin was his.

So I have a double proof that I am right :-)

PS Just to add a little value, there is an article on this matter by Stephen Jay Gould here entitled "Fleeming Jenkin revisited; this obscure, but able, Victorian gentleman convinced Darwin himself on an important evolutionary point."

Robert Grumbine said...

Thought experiments are not facts. The saying was not 'another beautiful theory slain by an ugly thought experiment'.

The thought experiment was theoretically troubling, true. But, as Darwin had a good stack of facts to work with, the theory remained unmoved. To move the theory, Jenkin would have needed facts.

Given that Jenkin was shown to be flat wrong by later observations, you're certainly not going to succeed here in elevating his thought experiment to fact status.

In any case, do read a bit more carefully. I never did mention 'simple logic', nor did I say that the saying was an absolute law, religiously adhered to. Thinking that it was, was part of Popper's mistake in his naive falsification phase.

Anonymous said...

Related to what you call 'the central skill of a scientist' is the natural tendency to think along familiar lines. This, what I call 'professional deformation', often stands in the way of the 'central skill'.

I was partly inspired by an excellent presentation by Oreskes, in which she says: "We all gravitate towards certain kinds of evidence and arguments, and they tend to be the ones with which we are most familiar."

I think it is this professional deformation that makes some people stick to their beautiful pet theories despite evidence to the contrary (the ugly facts).

Alastair said...

"In any case, do read a bit more carefully. I never did mention 'simple logic', nor did I say that the saying was an absolute law, religiously adhered to. Thinking that it was, was part of Popper's mistake in his naive falsification phase."

My apologies. I thought you were what NNT calls a Platonist in his book "The Black Swan". I was trying to convert you into a Mandelbrotian, but perhaps you are there already :-)

John Mashey said...

In practice, I think there are at least two discernibly different cases, which might be analogs of Type I (false positive) and Type II (false negative) errors in statistics.

In the first case, you got an idea that was quickly slaughtered, and moved on. Good! Had you kept with it despite the non-accumulation of evidence, that would have been a "My theory is unappreciated" error.

In the second case, a new hypothesis comes along, and you are initially unconvinced, perhaps very publicly, but as evidence builds, you never change your mind. For instance, you might have argued for climate sensitivity to be very low, ~20 years ago, and still do. That seems a different kind of error, like a false negative.

Robert Grumbine said...

our changing climate:
That's a good example of where people can go wrong. I do tend to write about ideals. Since scientists are real people, we don't always behave ideally. The nature of the evidence (is it the sort that I'm used to, or something new (read that as 'untested', 'not firmly established')?) can be a barrier to changing old conclusions.

Wegener (whom I understand Oreskes has studied, though I still haven't read the source) committed just such an error while trying to advance his continental drift. One of the barriers to acceptance of his idea was that the drift rates were exceedingly different (factor of 1000 or more if I remember correctly) between different pairs of points, depending on which pairing you took. A major contributor to this was that he used pre-radiometric dates for the age of separation. Had he used the new (but used for several years) radiometric ages, this discrepancy would largely have disappeared.

But, radiometric dating was new and he didn't trust it. So he used the older methods' dates, and crippled his own argument. It was hard enough to envision the continents moving at all. But to envision them moving at wildly different rates ... that was over the top. Not that I've seen this argument in print, but I do think it must have run through minds at the time. Something close does show up in Jeffreys' The Earth, 1929 edition, but he focuses more on the complete untenability of plowing continents through the sea floor. He was right about this. Where he and Wegener both went astray was in framing the problem as continents plowing through the sea floor in the first place; in plate tectonics, the sea floor itself moves along with the continents.

I'm probably not a Mandelbrotian. I do think his early 1980s fractals book was pretty. But I think you mean more than that. No matter. I'm a pragmatist in most cases. Namely, while I do have ideals, or see ideals being held in communities, I also realize that we're all human so none of us will carry out the ideals ... ideally.

I think you're exactly right here. It's a difficult balancing act:
a) let in good new ideas that deserve it.
b) preserve good old ideas that haven't been opposed by good enough data.
Or, as said more pithily: "It's good to have an open mind, but not so open your brain falls out."

John Mashey said...

Naomi's book is well worth reading, as that whole thing was a whole lot more complicated than I'd ever thought, and it makes very interesting history-of-science reading.

Also, keep an eye out for her forthcoming book (early next year, I think) on the history of American denialism of global warming. I've reviewed a few chapters and it is fascinating, in a horrid way. Wegener & co. at least were having an argument *within* science.