The first step is that somebody has to put something forward for consideration -- in this case, my note on field relevance last week. One important aspect is that the 'something' has to be said concretely enough that people can point to the mistakes you've made.
The second step is that the comments (reviews) have to point to specific things that are wrong. Ranting about leftists (which happened elsewhere) doesn't count. Saying that I grossly understated the relevance of biologists because -- and giving reasons for that 'because' -- does.
The third step is for the original author to revise the article in response to the reviewer comments. That doesn't necessarily mean 'do what every reviewer wants', not least because the reviewers (cf. gmcrews and John Mashey) may disagree. But there should be at least some response, if only to add some explanation in the article addressing why you're not doing what reviewer X wanted. I'll be doing that later, but am waiting for word from the biology folks about how their field applies to deciding whether, and how much of, recent climate change is due to human activity.
To summarize some of the comments here (do read the originals if you haven't already):
Each of these is a common general sort of comment to see in a peer review. To rephrase them more generally:
In terms of my rewriting process, the first two are pretty easy to deal with. Many people made many good comments. Those can be incorporated fairly straightforwardly, along with the fields that the comments prompted me to remember even if they weren't directly mentioned.
The second two, however, aren't quite so obvious. The third is taken care of if I clearly make the question addressed "How much of the recent warming is due to human activity?" That is what the graphic actually tried to address (though still with some issues with respect to the first two sorts of comment).
But is it useful to address that question in this way? My thought was that, for non-experts, it could be a useful guide when encountering, say, a 'conference' whose speakers were almost entirely from the lower ranges. On the other hand, those antiscientific conferences are seldom so specific about what they're addressing. Either the figure is focussed on too narrow a question, or many separate such figures would be needed. Experts, or at least folks at, say, K6 and above in Mashey's scale, should just go read the original materials to decide.
I haven't decided which way to go on this. Comments, as always, welcome. I also realized that it's a long time since I wrote up my comment policy, and link policy, so they are now linked to from the upper right (in the 'welcome' section).
In the meantime, I'm taking down my version of the figure and asking those who have copied it to remove it as well.
But, to come back to peer review:
All this illustrates why it is you want to read peer-reviewed sources for your science. Nobody knows everything, so papers can otherwise be incomplete, inaccurate, etc. People can also think that something is obvious, but have forgotten about things that they themselves do know (like my temporary brain death about biology as a field for knowing that climate is changing). Or they know certain things so well themselves that they don't write them up well for the more general audience. (Even in a professional journal, most of the readers aren't in your particular sub-sub-sub-field. 'More general' may only mean making it accessible to the sub-sub-field instead, but that can still be a challenge.) In a productive peer review process, these questions are all addressed.
6 comments:
How can biologists help answer the question of whether humans are responsible?
It's a good question. One answer that springs to mind is assistance in isolating what are and aren't effects of climate change (I'm thinking along the lines of range or abundance changes and the like not necessarily being climate-driven, but rather due to changes in land use -- something an ecologist would be better able to point out than other disciplines). But that's not really answering the question (there may of course be knock-on climate effects caused by changes in land use, but the attribution of these is probably beyond the field of ecology).
I'm struggling to find an answer; I'll be interested to see if someone else comes up with one...
Chris S.
One never knows with peer review.
In 1977, there was a conference called "Language Design for Reliable Software", about languages like ADA that had all sorts of features to help this (and some do). This was a very hot topic right then.
On a bit of a lark, a colleague (Brian Kernighan) and I wrote a paper that took somewhat of a counter view. I.e., that it helped more to build tools & components that could be easily found and reused, as the most reliable software was that which you didn't have to write fresh.
We got back the referees' reports:
1) Yes, great.
2) Yes, great.
3) No, nothing new.
4) OK, but irrelevant to this conference.
So: reject.
We laughed a bit.
Someone attended the conference, and oddly, exactly this topic was raised and discussed fairly often.
A bit later, a friend of Brian's was looking for an article for Software-Practice & Experience, so Brian offered him this one, published in 1979.
Then, a bit later, someone from IEEE Computer wanted an updated version for there, which got published in 1981 as "The UNIX Programming Environment" which was referenced moderately often for decades (although Google Scholar gets confused about it).
So, one never knows...
John: I certainly have my own stories about peer-review failures as well. Most of the time, though, papers are better after being shown around -- whether in a formal peer review or by passing out the next-to-final draft to colleagues. Actually, I have to limit the improvement to the content and parts of the presentation. The writing style usually gets crippled into a very dead, passive form.
Oops, I didn't mean this as an example of peer-review failure:
we *didn't* write the sort of paper they were looking for (and we knew it; I said it was sort of a lark :-)
I just meant that one can sometimes get back confusing/contradictory advice, and of course, as it happened, the paper ended up with way wider exposure than if it had been accepted at the conference.
To me, real failures are those that let absolute junk through too easily. Missing a good paper can happen for a lot of reasons, and usually it gets out there sooner or later, which was the other point. I.e., it's asymmetric. Acceptance doesn't mean it's right, but repeated non-acceptance is useful data.
(I'm sure you know all this, but the readership might not).
I think you should leave the graphic up; it's hard to follow the story without it, and taking it down makes it look like you're trying to hide something (not that I think you are :).
Best thing would be to leave a disclaimer under it pointing to this (excellent) post.
Keep up the good work.
naught101:
The folks likely to think badly of me will do so regardless of whether that graphic remains up or not. The graphic itself, given the responses here, doesn't communicate my message well. Indeed, it prevents my message from being communicated at all. A graphic that is doing active harm, I'll pull down. In science, we're supposed to learn from what happens in the real world. I learned that my graphic was of negative use.
Those who might complain about me 'hiding' something would complain about something else with it up, so they're not interesting. People who encounter the old post as it now stands will, I hope, get something of the sense of what I mean. The graphic itself is irrelevant to the main point -- for any given question, there are groups more likely, and less likely, to have studied the topic. They are, then, more likely, or less likely, to be good sources to turn to for more information about the topic. It'll depend, very sensitively, on exactly what question you have in mind.