Post by Avatar on May 12, 2024 23:46:56 GMT
Previously, I mentioned that modern scientists are cowards.
youtu.be/vvRkq_nzjzI?si=Fe0LJXfN68hqyqIU
Summary: Did this nitwit get his yardstick wrong? The usual measurement standard, the “standard candle” (Type Ia supernovas), is not what this idiot used. His “standard yardstick” was baryon acoustic oscillations: bubbles formed by sound waves roughly 380,000 years after the Big Bang. These “bubbles” show up as hot spots in the cosmic microwave background radiation.
[coverthumb]https://th.bing.com/th/id/OIP.VIgdGXjti_afTSD697u-YQHaDt?rs=1&pid=ImgDetMain[/coverthumb]
Red are hot zones, blue are cold zones. The mapping is by infrared and radio waves. By galaxy counting, the galaxies are known, apparently, to be more numerous in the hot regions than in the cold regions. So, by using clumps of galaxies (superclusters) and measuring their collective redshifts, this nitwit thinks he can check the expansion rate of the universe. It sounds reasonable. Galaxies are easier to measure as units than supernovas: you have a bigger light source, and you can average the luminosity. Likewise, you can measure the collective redshift of a whole target array, since each baryon acoustic bubble EXPANDS as the universe expands. If you measure the superclusters on the expanding surfaces of those bubbles, that should give you a second check on the supernova measurement of the expansion rate. It is “safe science”. You test your measurement method, it is supposed to agree with the expansion rate set by the supernova yardstick, and you go on to your next research grant.
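For the geometry-minded, here is the yardstick trick as a minimal Python sketch. You predict the angle a bubble of fixed size should subtend at each redshift, then compare that against what the galaxy catalog actually shows. The numbers (H0, the matter fraction, the ~147 Mpc bubble size) are illustrative textbook values, not the team's fitted ones.
[code]
# Minimal sketch of the BAO "standard yardstick" check, assuming a flat
# universe with matter + cosmological constant. Parameters are
# illustrative textbook values, not DESI's fits.
import math

C_KM_S = 299_792.458   # speed of light, km/s
H0 = 67.4              # Hubble constant, km/s/Mpc (illustrative)
OMEGA_M = 0.315        # matter fraction today (illustrative)
R_DRAG = 147.0         # BAO bubble (sound horizon) size, Mpc

def hubble(z: float) -> float:
    """Expansion rate H(z) for flat matter + lambda."""
    return H0 * math.sqrt(OMEGA_M * (1 + z) ** 3 + (1 - OMEGA_M))

def comoving_distance(z: float, steps: int = 10_000) -> float:
    """D_C = c * integral of dz'/H(z') from 0 to z, by trapezoid rule (Mpc)."""
    dz = z / steps
    total = 0.5 * (C_KM_S / hubble(0) + C_KM_S / hubble(z))
    for i in range(1, steps):
        total += C_KM_S / hubble(i * dz)
    return total * dz

# A bubble of size R_DRAG subtends a predictable angle on the sky at each
# redshift; comparing prediction with the measured angle in the galaxy
# catalog is the expansion-rate cross-check.
for z in (0.5, 1.0, 2.0):
    d = comoving_distance(z)
    theta_deg = math.degrees(R_DRAG / d)
    print(f"z={z}: D_C ~ {d:7.0f} Mpc, BAO angle ~ {theta_deg:.2f} deg")
[/code]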
Oops.
The scanning array to capture the light needed is incredible. The nitwit managed to assemble the best engineers in the world (ours). The instrument was put in place, and then COVID-19 hit. After shenanigans, the first run started in May 2021. Then the Dark Energy Spectroscopic Instrument had a fire. The instrument is sited near Tucson, Arizona (Kitt Peak National Observatory), and a range fire surrounded the observatory. Arizona firefighters fought the fire. DESI was knocked out, with one year of data in hand, while it was repaired.
Number crunching produces a catalog. I understand it as being exactly like one of my standard firing tables.
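As a sketch of what I mean by a firing table: one catalog row is basically one object plus its measured numbers. The column names below are my own illustration, not the survey's actual schema.
[code]
# Toy "firing table" row for a redshift catalog. Columns are
# illustrative, not DESI's real schema.
from dataclasses import dataclass

@dataclass
class CatalogRow:
    target_id: int       # unique object identifier
    ra_deg: float        # right ascension on the sky, degrees
    dec_deg: float       # declination on the sky, degrees
    redshift: float      # measured redshift z
    redshift_err: float  # 1-sigma uncertainty on z
    spec_type: str       # e.g. "GALAXY", "QSO", "STAR"

row = CatalogRow(target_id=39627746001, ra_deg=150.12, dec_deg=2.21,
                 redshift=0.874, redshift_err=0.0003, spec_type="GALAXY")
print(row)
[/code]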
The first crunch of the results yields confusion. This is where the “chicken___ moment” comes in.
Confirmation bias.
youtu.be/wo3xpigIjts?si=o_mNJAs0qnsCKxpO
The function check in the data is to measure for roughly a one-in-3.5-million chance of error. If the odds that you got it wrong in the raw data are only about 1 in 3.5 million (5 sigma), then you can argue your data is rather good. Now, your interpretation can still be wrong…
This consists of running a fake data set that simulates your real results, or running a real analysis, using your intended methodology, on several artificial data sets of different types. You then cross-test subsets of the real data to see whether the held-out subsets agree with each other. In other words, you try to “blind” the team of analysts, in as many ways as possible, to which data is real. Then you sort the real data runs out from the facsimiles and dummy runs to see the “real results”.
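A toy version of that blinding scheme, as I read it. Everything here (the labels, the stand-in “analysis”) is illustrative; it is not the team's actual pipeline.
[code]
# Minimal sketch of blind analysis: mix the real catalog in with mocks
# under opaque labels, analyze everything identically, and only open the
# key after the results are frozen. All names are illustrative.
import random

def analyze(catalog):
    """Stand-in pipeline: here, just the mean redshift of the catalog."""
    return sum(catalog) / len(catalog)

real = [0.51, 0.87, 1.20, 0.95]  # toy "real" redshifts
mocks = [[random.uniform(0.3, 1.5) for _ in range(4)] for _ in range(3)]

datasets = [("real", real)] + [(f"mock{i}", m) for i, m in enumerate(mocks)]
random.shuffle(datasets)

# Analysts only ever see the opaque run labels; the key stays sealed.
key = {f"run{j}": name for j, (name, _) in enumerate(datasets)}
results = {f"run{j}": analyze(cat) for j, (_, cat) in enumerate(datasets)}

print("blinded results:", results)
# ...results frozen, and only THEN is the key opened:
print("unblinding key:", key)
[/code]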
Apparently it is a 2.5-sigma result that the dark energy expansion changes over time. It “might” not be a constant acceleration.
Roughly 1 chance in 160 that it is a statistical fluke.
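The sigma-to-odds bookkeeping is just the tail of a Gaussian; here is a minimal sketch using only Python's standard library (the function name is mine):
[code]
# Convert a sigma level to a one-tailed Gaussian probability.
# math.erfc gives the complementary error function (the tail).
import math

def one_sided_p(sigma: float) -> float:
    """Probability of a Gaussian fluctuation >= sigma (one-tailed)."""
    return 0.5 * math.erfc(sigma / math.sqrt(2))

for sigma in (2.5, 3.0, 5.0):
    p = one_sided_p(sigma)
    print(f"{sigma} sigma -> p = {p:.3g}  (about 1 in {1 / p:,.0f})")
[/code]
Five sigma lands near 1 in 3.5 million and 2.5 sigma near 1 in 160, which is where the numbers above come from.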
Then you go outside and scream at the sky. “Why do you do this to me?”
Second data run. Now you have to REALLY bear down to make sure confirmation bias is eliminated, because here it comes…
Either you have a DUD result or it is the Nobel Prize.
Future?
Is it an oscillation or a decay effect? Flip a coin. The data is incomplete. Some investigators were excited; some were: "Aw, shit! We screwed up."
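On the constant-versus-changing question: the standard move, as I understand it, is to fit a parametrization that lets the dark energy equation of state drift with the scale factor, then ask whether the drift term is zero. A minimal sketch with illustrative numbers, not anyone's published fit:
[code]
# Minimal sketch of the w0-wa (CPL) parametrization for an evolving
# dark energy equation of state. Values are illustrative, not fits.
def w_cpl(a: float, w0: float, wa: float) -> float:
    """w(a) = w0 + wa * (1 - a); a = 1 today, a < 1 in the past."""
    return w0 + wa * (1.0 - a)

for a in (0.4, 0.6, 0.8, 1.0):
    z = 1.0 / a - 1.0
    lcdm = w_cpl(a, w0=-1.0, wa=0.0)     # cosmological constant: flat w
    drift = w_cpl(a, w0=-0.8, wa=-0.8)   # an evolving alternative
    print(f"a={a:.1f} (z={z:.2f}): constant w={lcdm:.2f}, "
          f"evolving w={drift:.2f}")
[/code]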
Avatar