http://www.internationalskeptics.com/fo ... p?t=316580
Well, it looks like JeanTate has *finally* noticed that the LIGO team gave us “alternative facts” when they presented their so-called gravitational-wave “discovery” paper. You only noticed in February of 2017 that the LIGO team misrepresented their actual veto data, and only because of little old me? Yep, you folks are way too isolated and detached from reality.
I dug up our original conversation at Thunderbolts for you. You can also find the link here:
http://www.thunderbolts.info/forum/phpB ... =3&t=16172
Welcome to the discussion JeanTate (finally). I look forward to your full and complete explanation of their false statements.
In case you didn’t notice, that LIGO paper isn’t just any old “dogma rehash” paper. It’s supposedly a big-time “discovery” paper, potentially worthy of a Nobel Prize, prestige, and scientific renown. The discovery of gravitational waves would surely be a hugely important discovery. Given their importance to physics, “discovery” papers should be held to the absolute highest ethical standards in science.
We all know from the BICEP2 fiasco that hundreds of astronomers can and do sign their names to a supposed “discovery” paper and still utterly and completely botch the meaning and “interpretation” of the data, even when the hardware works exactly as advertised and the data set it produces is perfect. In other words, we know for a fact that astronomers have a bad habit and a proven track record of crying wolf and misrepresenting data sets even when they understand and faithfully describe the equipment. In the case of LIGO, however, they blatantly *misrepresented* a critically important aspect of the equipment itself, and left out critical data related to that equipment and its original veto of the signal.
The *entire* premise of the LIGO paper, just like the BICEP2 fiasco paper, rests on its *sigma number*, the figure behind their bold claim that they can confidently and mathematically *rule out* all other potential explanations for the signal in question. The BICEP2 folks made up their sigma number to suit themselves with respect to eliminating other potential explanations for the data set. When the dust finally settled, we found out that BICEP2 hadn’t actually observed any patterns on the walls of some mythical snow-globe universe as they erroneously claimed. They had simply seen the polarized photon emissions from all the dust around our own galaxy! Nothing like missing the mark on the emission pattern by *billions* of light-years! Even though their “discovery” paper eventually bit the dust (literally), the race was now on as to who would get to claim they “discovered” gravitational waves first, so here we go again with LIGO.
So, let’s look at LIGO and see what *really* happened on the date in question with respect to that specific signal and their equipment, and see what their equipment reported during the time of the event in question:
http://ligo.elte.hu/magazine/LIGO-magazine-issue-8.pdf
LLO – September 14, 2015, 09:53:51 UTC – Alex Urban, Reed Essick:
The Coherent WaveBurst (cWB) data analysis algorithm detected GW150914. An entry was recorded in the central transient event database (GraceDB), triggering a slew of automated follow-up procedures. Within three seconds, asynchronous automated data quality (iDQ) glitch-detection follow-up processes began reporting results. Fourteen seconds after cWB uploaded the candidate, iDQ processes at LLO reported with high confidence that the event was due to a glitch. The event was labeled as “rejected” 4 seconds afterward. Automated alerts ceased.
Processing continued, however. Within five minutes of detection, we knew there were no gamma-ray bursts reported near the time of the event. Within 15 minutes, the first sky map was available.
At 11:23:20 UTC, an analyst follow-up determined which auxiliary channels were associated with iDQ’s decision. It became clear that these were un-calibrated versions of h(t) which had not been flagged as “unsafe” and were only added to the set of available low latency channels after the start of ER8. Based on the safety of the channels, the Data Quality Veto label was removed within 2.5 hours and analyses proceeded after re-starting by hand.
Emphasis mine. So we know for a fact that within eighteen seconds of the event in question, the veto methods in place at the time, which had been in place *throughout* the entire ER8 test phase of their recent upgrades, rejected the signal, with high confidence no less. How did the software even assign a confidence figure in the first place? Someone had to go in by hand and override that Data Quality Veto. Whatever the cause of the original veto, the signal was originally rejected specifically by the “Data Quality Veto”, with high confidence. Those are the facts surrounding the veto methods that were in place at the time of the signal.
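For clarity, the 18-second figure follows directly from the timeline in the log entry quoted above. A minimal sketch tallying the offsets (the variable names are my own shorthand, not LIGO terminology):

```python
# Offsets in seconds after cWB uploaded the candidate to GraceDB,
# taken from the LIGO Magazine (issue 8) log quoted above.
# The labels below are my own shorthand, not LIGO terminology.
IDQ_REPORT_OFFSET = 14   # "Fourteen seconds after cWB uploaded the candidate"
REJECT_LABEL_DELAY = 4   # "labeled as 'rejected' 4 seconds afterward"

veto_offset = IDQ_REPORT_OFFSET + REJECT_LABEL_DELAY
print(f"Data Quality Veto applied {veto_offset} s after the candidate upload")
# → Data Quality Veto applied 18 s after the candidate upload
```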
Now let’s take a close look at the ‘discovery’ papers and see what they actually told us about that original data quality veto, and how they explained their reasoning for manually overriding that veto:
http://arxiv.org/abs/1602.03844
Because GW150914 occurred during the early morning hours at both detectors, the only people on-site were the control room operators. Signs of any anomalous activity nearby and the state of signal hardware injections were also investigated. These checks came back conclusively negative [37]. No data quality vetoes were active within an hour of the event.
Emphasis mine. Huh? Baloney! WTF is that nonsense? There absolutely was a “Data Quality Veto” within 18 seconds of the event! When did LIGO start doing physics and making “discoveries” based on “alternative facts”? The paper in question spends countless column inches explaining how and why all those various veto methods *ensure* that the signal is not a glitch and not just background noise. They go to great lengths to supposedly eliminate false signals with those veto methods; that is how they supposedly decide whether something is a real signal or a false alarm. Yet this signal was actually rejected, with *high confidence*, by the veto methods in place at the time, within 18 seconds of the signal. Where do they get off *misrepresenting the facts* and claiming that no vetoes were active within an hour of the event when one was active within 18 seconds of it? The whole paper is now questionable and dubious because they left out very important and critical information about the “Data Quality Veto” method in question: how it worked, the specific source code, how and why the software arrived at a “confidence” figure when it rejected the signal with “high confidence”, why they manually overrode the veto, etc., etc., etc. Instead we got “alternative facts” instead of the truth. Jonesdave116 is so concerned about “lies of omission” by EU/PC proponents, but this is a blatant omission about the existence of the veto. It got vetoed within 18 seconds! They erroneously claimed in the published paper that no veto happened within an hour of the signal! Bullshit!
For the record, I don’t mind the LIGO team manually overriding the Data Quality Veto. That’s fine. Human input seems to be part of their whole “process”, in fact. What I resent, however, is not being told the truth, and instead getting “alternative facts” surrounding that veto and being erroneously told in the peer-reviewed material that no veto took place. Did the peer reviewers ever know about that omission/misstatement of critical veto information? What gave the LIGO team the right to misrepresent the events surrounding that veto, and their manual override of it, in the published paper?
So why would the LIGO team choose to omit that important Data Quality Veto information if they really were so confident in the quality of the signal, and in their decision to override that veto? Why didn’t they tell us the *truth*: explain the veto, show us the actual software routines that originally rejected the signal, explain what those lines of code mean in terms of “safety” and “confidence”, and explain their reasoning for overriding the veto? Were they 100 percent “confident” in their subjective choice to override the veto, or something less than 100 percent confident, in which case that uncertainty should have affected the 5.1-sigma confidence figure? How did they assign a confidence figure to their subjective choice to override the veto, and where was that number folded into their sigma figure? We’ll never actually know any of that now, because they told us in the published and peer-reviewed paper that no veto even happened! Did the peer reviewers even know that a veto actually took place within 18 seconds of the signal, or were they intentionally kept in the dark as well?
Let’s take a closer look now at how the LIGO team came up with their original confidence figure and see if there are any “problems” in the methodologies *before* we even look at the effects of that veto to the published paper.
I'd have to question their entire methodology as it relates to their 5.1 sigma confidence claim.
First of all, there’s no visual confirmation of this signal, or of any subsequent signals either, despite the event supposedly releasing the energy equivalent of three full solar masses in the form of gravitational waves alone. With all that mass/energy released in under half a second, no significant light could be seen from the event? Really? They claimed to have eliminated every other potential cause of the signal based on a perceived lack of external verification, supposedly because no vetoes were present (even though a veto actually was present), but they pulled a blatant double standard with respect to their *own* claimed cause. They never tried to eliminate their *own* claim for lack of external verification the same way they eliminated every other potential cause of the signal for lack of external support. There was no external visual confirmation of this signal; if they had used the same process of elimination that they applied to everything else, they should have eliminated their own claim too, due to a lack of external corroboration! But nooooooooo! When they got to their own claim, they blatantly switched elimination methods and simply dropped the requirement for external corroboration entirely. They used two entirely different standards of elimination: one easy standard for themselves (no external corroboration required), and a much more difficult standard of external corroboration for every other potential explanation. That’s a blatant double standard.
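For scale, the “three solar masses” figure can be put in rough numbers with E = mc². A back-of-envelope sketch; the half-second averaging window is the duration quoted above, and the constants are standard values:

```python
M_SUN = 1.989e30   # kg, one solar mass (standard value)
C = 2.998e8        # m/s, speed of light

# Energy equivalent of three solar masses, E = m * c^2
energy_joules = 3 * M_SUN * C**2

# Rough average power if radiated over ~half a second, as quoted above
avg_power_watts = energy_joules / 0.5

print(f"E ≈ {energy_joules:.2e} J")
print(f"average power ≈ {avg_power_watts:.2e} W")
```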
They also essentially cherry-picked 16 “clean” days of data from roughly 40 days’ worth of useful comparative data. They then cleaned that already cherry-picked 16-day data set some more, removing any and all known influences that might negatively affect their sigma number, and *then* they applied those 16 days of “hand-cleaned”, cherry-picked data to a 203,000-year window of time, as though they were representative of all background noise events. That is simply an irrational premise in the final analysis. Not all terrestrial or solar events are certain to occur within a 16-day window with enough frequency to show up in such an analysis, particularly when you start with a *completely cherry-picked and cleaned-up* data set that specifically minimizes the real potential for external natural influences by removing any “difficult” data from the test set.
When we look at that 5.1-sigma figure, it’s ultimately nothing more than a questionable and debatable number to begin with. They didn’t apply the same process of elimination, requiring external support, to their own claimed cause of the signal as they did to every other potential cause. They *cherry-picked* the original data set, selecting only 16 “clean” days of data instead of using all 40 days of raw data available to them. They then massaged that cherry-picked data set even more. Then they “assumed” that their cherry-picked and cleaned-up 16-day data set would necessarily be representative of a 203,000-year time frame. The error potential in that premise alone is staggering. They used a grand total of 16 days of hand-picked and heavily massaged data to represent more than 74 million days of random data. Wow.
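The mismatch between the sample and the window is easy to quantify. A quick back-of-envelope check, using 365.25 days per year and the 16-day and 203,000-year figures discussed above:

```python
# Compare the 16-day background sample against the 203,000-year
# window it was extrapolated to, using 365.25 days per year.
days_per_year = 365.25
window_days = 203_000 * days_per_year     # total window in days
sample_days = 16                          # "clean" days actually used
coverage = sample_days / window_days      # fraction of the window sampled

print(f"window ≈ {window_days:,.0f} days")
print(f"sample covers {coverage:.2e} of the window")
```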
Now, at only 5.1 sigma to start with, even a *tiny* confidence adjustment resulting from their manual override of that data quality veto could easily push that sigma number out of the “discovery” category entirely. Yet I’m supposed to sit back and pretend that the original veto never happened, and that they really were “100 percent confident” in the sigma figure, even though they hid the data quality veto information from me entirely in the published paper. I’m sorry folks, but that isn’t even ethical behavior in the first place. If they actually were that confident in their conclusions about the veto, they would have mentioned it in the published “discovery” paper, explained how it worked, and explained why they overrode it. That Data Quality Veto information was critical and vital information that should *never* have been left out of the published paper.
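To make the “tiny adjustment” point concrete: a sigma level maps to a one-sided Gaussian tail probability, and a small shift matters near the conventional 5-sigma “discovery” line. A minimal sketch; the 4.9-sigma comparison value is my own hypothetical illustration, not a number LIGO published:

```python
from math import erfc, sqrt

def one_sided_p(sigma: float) -> float:
    """One-sided Gaussian tail probability for a given sigma level."""
    return 0.5 * erfc(sigma / sqrt(2))

p_claimed = one_sided_p(5.1)    # the paper's quoted significance
p_threshold = one_sided_p(5.0)  # conventional "discovery" threshold
p_hypo = one_sided_p(4.9)       # hypothetical: a 0.2-sigma downward shift

# A 0.2-sigma reduction would push the false-alarm probability back
# over the conventional 5-sigma discovery line.
print(f"5.1 sigma: p ≈ {p_claimed:.2e}")
print(f"5.0 sigma: p ≈ {p_threshold:.2e}")
print(f"4.9 sigma: p ≈ {p_hypo:.2e}")
```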
Their whole case, however, is predicated on us believing that any other “natural” cause of the signal would have been ruled out by the veto methods as described in the paper. What they didn’t tell us in the published account is that those supposedly “trustworthy” veto methods actually *vetoed* the signal! Instead, they gave us “alternative facts” and told us that no vetoes took place within an hour of the signal, when in fact one took place within 18 seconds of it! They obviously weren’t really that “confident” in their figures, or they would not have gone to such lengths to hide the existence of that data quality veto. Now we get to see whether the Nobel Prize committee rewards the use of “alternative facts” in a published paper when they hand out their prizes. It should be an interesting year in physics. This gravitational-wave discovery is tainted by the omission of vital information, far more tainted than the BICEP2 fiasco. At least the BICEP2 team did not misrepresent their equipment, or the information they got from that equipment, in their published paper. Whatever mistakes they made in interpreting the BICEP2 data were honest mistakes based on the lack of a complete published data set from Planck. In the LIGO case, however, the mistake isn’t even necessarily an honest one. It’s an outright misrepresentation of the hardware and of what that hardware and software *actually* produced, including the veto.
Extraordinary claims require extraordinary evidence. Unfortunately without that critical veto information, and without any visual confirmation of the signal, this isn’t even “ordinary” evidence of gravitational waves, let alone extraordinary evidence.
And to think that you folks whine and make a big fuss about Thornhill supposedly being “loose” with the facts in terms of omitting information. Sheesh. What complete hypocrites. Where’s your scientific outrage over the omission of vital veto information in that LIGO paper? Let me guess: excuses, excuses, and more lame and blatantly hypocritical excuses? What say you, jonesdave116? JeanTate?