Tuesday, 20 June 2017

The Irresistible Allure of Controversy

On June 13 a paper appeared on the preprint arXiv that brought into question LIGO's first detection of gravitational waves. Six days later one of my LIGO colleagues wrote a blog post highlighting errors that debunk the claim. His response was circulated amongst the LIGO and Virgo collaborations, discussed on their many fascinating email lists, and officially approved as a response. As such, it is professional, polite, and to the point. This post, on the other hand, has not been circulated to my LIGO colleagues, or discussed in any mailing lists, or sanctioned by anyone. So any residual politeness is entirely my own fault.

After the preprint appeared last week, a number of people shared it on social media. Its visibility then jumped up a notch when a science blogger wrote a post about it for Forbes. A number of LIGO people were peeved, but many of the comments I saw on Twitter and Facebook, not to mention other science blogs, cheered that this is exactly how science is supposed to work.

If “how science works” means that whenever your calculations question a revolutionary result, you rush to publish before checking every possible source of error with a fine-tooth comb, either because you are too excited to remember what you learned about science as an undergraduate, or because you assume that the publicity will be worth it even if you turn out to be wrong, and if it means that, even though you understand the details of neither the original analysis nor the critical “re-analysis”, you nonetheless jump at the chance to write a “we need to hear both sides” piece for a major mainstream publication, this time either because your zealous determination to write a naive morality tale about the scientific method has overshadowed your dedication to due diligence, or because, once again, the publicity will be worth it even if you turn out to be wrong — if that is how science truly works, then I guess this little episode was extremely enlightening to us all.

An innocent bystander might ask: what’s the big deal? After all, one group of scientists raised an issue with the result of another group of scientists. The first group of scientists took note, checked their results, and showed that there was no issue after all. Is that not, in the end, a lovely parable of science operating at its very best?

The hell it is. The errors in the re-analysis of the LIGO results were too basic to count as a legitimate criticism. When I was a student, working through physics homework problems, I regularly discovered that Kepler, Newton, Maxwell and Einstein were all wrong, although, to be fair, their botch-ups never matched the fumbling incompetence of those nincompoops who wrote my textbooks. The reason you have never heard of those screw-ups by the so-called heroes of science history is that whenever I arrogantly pointed them out to my instructors, they in turn pointed out to me that I had also made a few errors of my own. After correcting for those errors, I was forced, reluctantly, to admit that perhaps that scoundrel Newton had once again slipped through my net.

Ian Harry’s guest post on Sean Carroll’s blog goes through this very clearly and thoroughly in the case of the LIGO “re-analysis”, but let me emphasise one point. The problem the authors of the new paper saw with the LIGO data, an apparent correlation between the noise at the two LIGO sites, could be reproduced by performing their analysis on made-up data that, by construction, contained no such correlation. This is a fairly simple check: cook up a toy example where you know what the answer is, and make sure that your analysis reproduces it. If you think you have found an error in one of the biggest scientific results of the century, such a basic check is the least you can do.
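To see why such a check matters, here is a toy sketch of the general idea (this is my own illustration, not LIGO's pipeline or the paper's actual analysis; every function and parameter is invented for the example). It generates two streams of independent noise, so any true cross-correlation is zero by construction, band-pass filters them, and then searches over time lags for the largest sample cross-correlation. The peak it finds comfortably exceeds the naive significance threshold, purely as an artefact of the filtering and the maximisation over lags:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000  # samples per simulated "detector"

def bandpass(x, low=0.10, high=0.15):
    """Crude band-pass filter via FFT masking (illustrative only)."""
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x))
    X[(f < low) | (f > high)] = 0.0
    return np.fft.irfft(X, n=len(x))

# Two "detectors" recording INDEPENDENT noise: no real correlation exists.
a = bandpass(rng.standard_normal(N))
b = bandpass(rng.standard_normal(N))

def max_crosscorr(a, b, maxlag=100):
    """Peak of the normalised sample cross-correlation over time lags."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    corrs = [np.mean(a[:len(a) - lag] * b[lag:]) for lag in range(maxlag)]
    return max(abs(c) for c in corrs)

peak = max_crosscorr(a, b)
threshold = 2.0 / np.sqrt(N)  # naive ~2-sigma level for white noise at one lag

# Narrow-band filtering shrinks the effective sample size, and maximising
# over lags picks out the largest fluctuation, so the peak correlation
# looks "significant" even though the inputs are independent by design.
print(f"peak |correlation| = {peak:.3f}, naive threshold = {threshold:.3f}")
```

If your method declares a detection-threatening correlation on data like this, the problem is in the method, not in the data.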

Now, I have been in the LIGO collaboration for some years, and I have heard data analysts argue about statistics almost daily (at the 68% confidence level). Maybe I am being unfair in assuming that a well-trained, careful reader of the LIGO publications could not miss this point? Maybe it really is quite subtle? And maybe it is only because the LIGO data analysts have spent so many years messing around with Fourier transforms that they spotted so quickly the error in this new paper? If the authors could not find any problem in their re-analysis, wasn’t it good scientific practice to publish their results, and have the issue resolved through the proper channels of scientific communication?

Yeesssss, except that one of the channels of modern scientific communication is email. The authors have written papers in the past that involve a re-analysis of LIGO data, and issues with those papers have been pointed out to them. Their response: keep publishing.

Ok, perhaps they were just excited — and also inspired by the lessons of science history. Isn’t stubbornness in the face of criticism the standard precursor to a major breakthrough? Refusing to be silenced by a 1000-person collaboration is nothing: Galileo doubted Aristotle’s theories, which were over a thousand years old and enjoyed the not insubstantial support of the Catholic Church. These are the heroic tales of scientific valour that we are raised on.

What the science fairy tales neglect to point out — or, rather, we often neglect to notice — is that the number of successful science revolutionaries in history is very small. For every Galileo, Newton and Einstein, there are many thousands of other arrogant upstarts who merely made fools of themselves. One of the skills of being a scientist is checking your results sufficiently before that happens.

Fortunately, the science community is forgiving. If you write a sloppy paper, chances are that no-one will take any notice of it, and it can spend the rest of your career languishing harmlessly at the bottom of your publication list. That could have been the fate of the LIGO re-analysis. A few people would have written to the authors to point out some errors, the authors could have accepted these — or not, it would not really have mattered — and the paper would have gone unnoticed.

Why did that not happen? Because Forbes decided to publish an online post with the title, “Was it all just noise? Independent analysis casts doubt on LIGO’s detections.” Forbes, in case you did not know, is a business magazine that claims a readership of 6.7 million people. Although I do not have the statistics to hand, I believe this is slightly higher than the preprint arXiv’s.

Now, I could complain about the frenzied sensation-driven nature of mainstream publishing, or the sloppy journalism of the person who wrote the Forbes post, at pretty much the same length that I’ve gone on about the flaws in the “re-analysis”, but the overarching explanation is simple. It speaks directly to the people who are tempted to blather that this is all science as usual. The simple fact is that the LIGO discovery was a massive scientific event, and a massive media event as well, and when that happens, people behave strangely. Even intelligent, reasonable people. This is not science as usual — this is fame as usual. The only thing that is surprising is that it took so long to happen.

In the end, it is the Forbes piece that annoys me the most. The writer has a blog that I have always read as an honest, professional effort to communicate how science is really done, and which often swats silly results and fake controversies. This new piece instead perpetuates the warped public perception of science, which is focussed entirely on high-profile “controversies” that bear as much relation to how most of science works as celebrity gossip magazines do to most people’s real lives.

Science is not the same as breaking news. No-one needs to urgently know scientific results. They can wait. When the BICEP2 results first came out (everyone’s favourite recent public science drama), and the first doubts were raised, people breathlessly asked, “Will the results hold up? Are they correct?” You know the best way to answer that question? Wait. Wait until the damn paper has actually passed peer review and been published. Wait a year and then write a popular article about it. This is not an outbreak of war. It is not a stock-market collapse. Science need not follow the urgent timelines of the standard news media. “Science communicators”, if they really want to communicate science, should resist the urge to chase the news cycle. If you think that the only way to make science exciting is to exaggerate false drama, I suggest you switch to movie screenplays.

But back to the LIGO story. I would like to think that all of the people involved acted in good faith, and just got carried away with excitement. That’s how human beings are, and in the end little harm was done. But I think there is also something we can learn from it. That, at least, is my official excuse to fish for clicks.

UPDATE (20/6/2018): [I will follow my own advice, and provide an update one year later, when this story is (hopefully) long dead. In the meantime, I will quietly add to my list of saps who cite this as an upstanding example of the proper workings of science.]

More Gravitational-Wave Stories

February, 2016:
The Discovery
How it Felt
How We Squeezed Out the Juicy Science

March, 2016:
Trying to Explain Gravitational Waves (Part I) (Part II)

June, 2016:
Book Review: Black Hole Blues
Detection Number 2 -- Black Holes Rule
Rumours, Secrets and Other Sounds of Gravitational Waves

February, 2017:
One Year Anniversary (of being world famous)

June, 2017:
Detection Number 3 -- Nothing to see here: they are black holes


  1. Cold and deadly -- like well-served revenge.

  2. I found the "new movie release" format of announcing the discovery of gravitational waves as fame-seeking as the two critics who said it is all noise. You reap what you sow.

    1. When you have a breakthrough result, you can show us how an announcement is supposed to be made.

  3. So I'm curious - if you were a science journalist (as in, whether you make a living depends on the content you produce), would you really wait a year to publish results that had just been announced, watching all the other stories go up online and make your reporting irrelevant, all so that you could read the peer-reviewed paper once it finally comes out? (And that's assuming that peer review even shuts out all errors - it does not.) I agree that science journalism that responds to the news cycle isn't ideal, but the alternative is bankruptcy. That's why, generally speaking, the rule is to get outside expert opinion on the research before moving forward, where "outside expert" here should have been "someone at LIGO who really understands the data." Even if LIGO wasn't planning an official response, a LIGO-affiliated scientist could have better addressed the paper's claims, even off the record, and would have helped in the decision on whether to move forward with the article.

    All that said, I agree wholeheartedly that this wasn't at all an illustration of the way science works - I don't see how anyone could make that claim. This was about science journalism and communication.

    1. I am a scientist, not a journalist. I have no idea what a journalist would do. As a scientist, if I were shown a paper that, for example, claimed to have found flaws in the discovery of the Higgs boson, and asked to write about it, I would decline. There would be a HUGE probability that the paper was not only wrong, but ignorable, but I would not have the knowledge, resources, or network of trusted expert contacts to properly assess it. Checking with outside experts is exactly what you say should happen, but there's the intermediate "meta-expertise" of knowing who the experts are, and being able to speak to them. I worried about that in an old post about Harry Collins.

    2. Get outside opinion indeed: here's my understanding of what happened.

      - Some weeks or months ago Sabine got wind of the new paper the NBI group were writing. She interpreted it as a concerted effort to discredit the LIGO detection. (Note that the group had already written two papers with similar calculations but entirely different conclusions.) She had miscellaneous exchanges with people at AEI Hannover - some of us are LSC members but we don't speak officially for the LSC - and maybe others, but since a draft was not available no-one could say or do anything specific about the rumours.

      Then some days before the paper was put on the arxiv, the NBI group decided to send it to A. Buonanno (LSC principal investigator at AEI Potsdam) for comment. She in turn asked her data analysis postdoc (Ian Harry) to look into it; however, given the length and unfamiliarity of the paper, he could not provide an instantaneous response. In the event it took a week or so to write up the main issues, and in the intervening time the NBI people decided to go ahead and post to arxiv anyway.

      Then Sabine H. went to her junior LSC member contacts again and asked - not what their opinion of the paper was, but just whether the collaboration intended to make any response to the preprint. And at that time, indeed there was no plan to issue any formal response.

      This falls short of what a journalist would/should have done - i.e. going to the collaboration management, saying that she was writing an article about the NBI paper for Forbes.com and asking for any official response that could be quoted.

      Note that the Forbes article does have some paraphrase of a technical critique of the NBI paper, but does not directly quote any LSC member. Though I am pretty sure a few of us told her, quite definitely and quotably, what we thought of the paper ... it is still a mystery why Sabine effectively refused to give anyone from LSC a voice.

  4. "The writer has a blog that I have always read as an honest, professional effort to communicate how science is really done,"

    Well, the people in other fields who have been treated by her in precisely the same way arrived at your conclusion a long time ago...

    1. Let’s not go down that road. But this touches on an interesting question. How do we decide who to trust on science, or any highly technical issues? Someone who writes on a wide range of topics will not be an expert on most of them, or perhaps any. I wrote a lot about this expertise stuff in the past; see the link in the previous comment reply. I think that people trying to report or debate science disagreements are deluding themselves — and the public — and that’s why I say that the best thing to do is wait until the dust settles.


[Note: comments do not seem to work from Facebook.]