Friday, August 20, 2010

The not so moral mind of Harvard's Marc Hauser

An update on Harvard evolutionary psychologist Marc Hauser's research misconduct, from the Chronicle of Higher Ed.

It looks like the man has done irreparable harm to his reputation. At least three of his published papers have been withdrawn by Harvard, and it sure looks like the corruption extends into a significant portion of his work. This article forces some questions: How much of his popular book Moral Minds is thrown into question by this episode? Should the man be allowed to continue on at Harvard? And, most interesting, given that much of his research was funded with federal grants (that would be tax money), and given that the Dept. of Health and Human Services is likely now investigating, what legal recourse does the government have? Will the taxpayer get a refund? Yeah, I know the answer to that one.

An internal document, however, sheds light on what was going on in Mr. Hauser's lab. It tells the story of how research assistants became convinced that the professor was reporting bogus data and how he aggressively pushed back against those who questioned his findings or asked for verification.


What's the brouhaha about? Well, according to the Chronicle article, one bit of research explored whether or not rhesus monkeys, favorites of labs everywhere, notice changes in patterns of sounds. Why is this considered important? A supposed connection with the developmental psychology of human language acquisition:

The experiment tested the ability of rhesus monkeys to recognize sound patterns. Researchers played a series of three tones (in a pattern like A-B-A) over a sound system. After establishing the pattern, they would vary it (for instance, A-B-B) and see whether the monkeys were aware of the change. If a monkey looked at the speaker, this was taken as an indication that a difference was noticed.

The method has been used in experiments on primates and human infants. Mr. Hauser has long worked on studies that seemed to show that primates, like rhesus monkeys or cotton-top tamarins, can recognize patterns as well as human infants do. Such pattern recognition is thought to be a component of language acquisition.


The methodology of the study was simple. Two people independently watch video of a monkey being subjected to the A-B-A and A-B-B patterns, looking for facial reactions, stares, and the like. Now, suppose you are one of these scorers: if you see a marked reaction, you record the fact. If there is no discernible reaction, you record that as well. If the two independent scorers concur, we have what looks to be evidence that the monkey noticed the difference. (In the parlance, these two scorers are called "coders", but I'll stick with 'scorers'.)
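For the statistically inclined, there is a standard number for how well two scorers agree beyond chance: Cohen's kappa. Here is a minimal sketch in Python; the per-trial scores below are entirely made up for illustration, but a kappa near zero is roughly what glaring disagreement between scorers looks like.

    # A minimal sketch of inter-scorer agreement (Cohen's kappa). Each scorer
    # records 1 ("looked at the speaker") or 0 ("no reaction") per trial.
    # All trial scores here are hypothetical.
    from collections import Counter

    def cohens_kappa(scorer_a, scorer_b):
        """Chance-corrected agreement between two scorers on the same trials."""
        n = len(scorer_a)
        observed = sum(a == b for a, b in zip(scorer_a, scorer_b)) / n
        # Agreement expected by chance, given how often each scorer uses each label.
        freq_a, freq_b = Counter(scorer_a), Counter(scorer_b)
        expected = sum(freq_a[label] * freq_b[label] for label in freq_a) / n**2
        return (observed - expected) / (1 - expected)

    hauser = [1, 1, 1, 0, 1, 1, 1, 0, 1, 1]      # hypothetical scores
    assistant = [0, 1, 0, 0, 1, 0, 0, 0, 1, 0]   # hypothetical scores
    print(f"kappa = {cohens_kappa(hauser, assistant):.2f}")  # ~0.19: poor agreement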

Now, after the two scorers score, a third party is supposed to review the results. In the experiment in question, one scorer was Hauser, another was one of his research assistants, and the third party (let's call him the 'ref') was another research assistant. When the ref noticed marked discrepancies between his scorekeepers' results (Hauser's scores showing strong support for the hypothesis that the little guys noticed the change in pattern, the assistant's showing no discernible trace of recognition), he brought it to Hauser's attention:

According to the document that was provided to The Chronicle, the experiment in question was coded by Mr. Hauser and a research assistant in his laboratory. A second research assistant was asked by Mr. Hauser to analyze the results. When the second research assistant analyzed the first research assistant's codes, he found that the monkeys didn't seem to notice the change in pattern. In fact, they looked at the speaker more often when the pattern was the same. In other words, the experiment was a bust.

But Mr. Hauser's coding showed something else entirely: He found that the monkeys did notice the change in pattern—and, according to his numbers, the results were statistically significant. If his coding was right, the experiment was a big success.

The second research assistant was bothered by the discrepancy. How could two researchers watching the same videotapes arrive at such different conclusions? He suggested to Mr. Hauser that a third researcher should code the results. In an e-mail message to Mr. Hauser, a copy of which was provided to The Chronicle, the research assistant who analyzed the numbers explained his concern. "I don't feel comfortable analyzing results/publishing data with that kind of skew until we can verify that with a third coder," he wrote.
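What would "statistically significant" mean here? The Chronicle doesn't say which test was used, so the following is only a sketch of the kind of calculation such a claim could rest on: a Fisher exact test on how often the monkeys looked at the speaker in changed-pattern versus same-pattern trials. Every count below is invented for illustration.

    # A hedged sketch: Fisher's exact test on hypothetical counts of
    # "looked at the speaker" in changed-pattern vs. same-pattern trials.
    from scipy.stats import fisher_exact

    #                  looked  did not look
    changed_pattern = [14,     6]    # invented counts
    same_pattern    = [5,      15]   # invented counts

    odds_ratio, p_value = fisher_exact([changed_pattern, same_pattern])
    print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")
    # A small p-value would read as "the monkeys look more when the pattern
    # changes" -- but only if the underlying coding is honest.

The point being: the statistics are only as good as the codes fed into them.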


Hauser continued to bob and weave in the face of the suggestion that they use the third scorer (who really should be called 'the ref in the upstairs review booth'). Instead of agreeing to this reasonable request, he intimidated the assistants. The Chronicle provides the text of an e-mail:

"i am getting a bit pissed here," Mr. Hauser wrote in an e-mail to one research assistant. "there were no inconsistencies! let me repeat what happened. i coded everything. then [a research assistant] coded all the trials highlighted in yellow. we only had one trial that didn't agree. i then mistakenly told [another research assistant] to look at column B when he should have looked at column D. ... we need to resolve this because i am not sure why we are going in circles."


The ref and a grad student bravely stood up to Hauser. (Keep in mind, their academic careers depended on this man of authority, and he was clearly abusing that authority in pursuit of his own celebrity. If you look at his book, he's a one-man P.R. campaign.) Apparently the ref and the grad student had not, at this point, reviewed the video; in fact, the article implies that they did not have permission to do so. But they were insistent. In the face of continued refusals, they decided to go to the booth themselves, that is, to review the video themselves, once again independently of one another.

Their results strongly agreed with the scorer who found no discernible differences in the monkeys' behavior. (Was the review booth biased by Hauser's temper tantrums? We really need to go to the booth on that one, but my instincts tell me they were not.)

In fact, they found glaring discrepancies:

They then reviewed Mr. Hauser's coding and, according to the research assistant's statement, discovered that what he had written down bore little relation to what they had actually observed on the videotapes. He would, for instance, mark that a monkey had turned its head when the monkey didn't so much as flinch. It wasn't simply a case of differing interpretations, they believed: His data were just completely wrong.


In other words, he was making it up, lock, stock, and barrel. And this had been going on for some time: other assistants, working on other projects, were being pressed to report data that looked cooked, or outright bogus, with their mentor and boss insisting it be used in publications and reports. The brave whistle-blowers then went to the proper authorities at Harvard. Harvard raided the lab in 2007, confiscating documents and computer records, and has been investigating ever since. What a mess.

On a par with blowing a no-hitter for Tigers pitchers.



How to avoid this? Well, sticking with the sports analogies, it might help if Hauser showed a little contrition... you know, like Jim Joyce. Man up. Step up to the plate and admit you were wrong.




A long-term solution, one that would help prevent episodes like this and the CRU "Climategate" imbroglio?

The simple answer would be the adoption of research guidelines that emphasize openness over proprietary rights, in the interest of fostering useful replication, corroboration, and falsification. When a study is published, the full data set and methodology should be published with it. Professional associations and journals should expect this, demand it, and come down hard on violators: refuse to publish them, rather than issuing tepid hand slaps after extended, closed investigations.

In terms specific to this study, a big red light goes off when I see that Hauser, who had a vested interest in the outcome, was one of the scorers. He should have recruited disinterested third-party scorers. Pretty basic stuff.

He should not have forbidden viewing of the video records. What is more, the assistants and grad students are right in implying that a greater number of scorers would make the study more robust and persuasive, should a sizable majority concur in their results.
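Here is a minimal sketch of that "sizable majority" idea, assuming several independent scorers and a simple consensus rule; the scorer data and the 75 percent threshold are my own inventions for illustration.

    # A sketch of consensus scoring across several independent scorers:
    # a trial counts as a "look" only if a sizable majority scored it that way.
    def majority_scores(codings, threshold=0.75):
        """Per-trial consensus: 1 if at least `threshold` of scorers saw a look."""
        n_scorers = len(codings)
        consensus = []
        for trial in zip(*codings):   # one tuple of scores per trial
            consensus.append(1 if sum(trial) / n_scorers >= threshold else 0)
        return consensus

    scorers = [
        [0, 1, 0, 0, 1],   # scorer 1 (hypothetical)
        [0, 1, 0, 0, 1],   # scorer 2
        [0, 1, 1, 0, 1],   # scorer 3
        [0, 1, 0, 0, 0],   # scorer 4
    ]
    print(majority_scores(scorers))   # -> [0, 1, 0, 0, 1]

No single scorer, however eminent, gets to outvote the booth.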

As a truly scientific researcher, someone after the truth and not after self-promotion, Hauser should have arranged things thusly. And always, ALWAYS, go to the booth. Isn't that what peer review, attempted replication, and science itself are all about?
