Caltech: The Mechanical Universe

Many Internet forums have carried discussion of the Electric Universe hypothesis. Much of that discussion has added more confusion than clarity, due to common misunderstandings of the electrical principles. Here we invite participants to discuss their experiences and to summarize questions that have yet to be answered.

Moderators: MGmirkin, bboyer

Re: Caltech: The Mechanical Universe

Unread post by allynh » Tue Dec 11, 2012 10:35 am

I just finished reading a great book.

Dogmatism in Science and Medicine by Henry H. Bauer
(How Dominant Theories Monopolize Research and Stifle the Search for Truth)

The book is devastating. It should be required reading for everybody. I'll have to read it many times before I get over the shock. HA!

I've seen what he is talking about my whole life. Time and again I've run into the reactions he describes. Time and again I've come up against "information monopolies" that get upset when I point out that something is wrong with, or missing from, their theory.

I always looked at it as the Cassandra Curse: no one listens to new information that goes against what they "know," and then when the revolution comes and you say, "I told you so," they always say, "No you didn't; and anyway, the events were obvious to everyone." HA! No they weren't.

In the long ago and the far away, I was taught The Structure of Scientific Revolutions by Thomas S. Kuhn. But the interesting thing is, even though the title clearly talks about "revolutions," no one ever taught that part. They only discussed the concept of "paradigm shifts" as if they were normal steps in the process of advancement. I had to sit through too many seminars at work where some con man would babble about "paradigm shifts" without ever discussing the fact that each one meant the complete overthrow of the existing order. HA!
Posts: 817
Joined: Fri Aug 22, 2008 5:51 pm

Re: Caltech: The Mechanical Universe

Unread post by allynh » Tue Feb 05, 2013 10:54 am

This is a major resource for looking at the geology of key sites.

Exploring the Grand Canyon on Google Maps ... -maps.html

So far they have walking views of:

Bright Angel Trail

Colorado River

scenic overlooks

South Kaibab Trail

Meteor Crater

If you have not used Google views, just use your mouse to drag the view around and up and down. Take a walk in Meteor Crater to start and learn the system. When you look around at any of the sites, remember that they were carved by lightning. HA!

Re: Caltech: The Mechanical Universe

Unread post by allynh » Thu Feb 14, 2013 11:36 am

When you guys get the chance, watch this NOVA program.

Earth From Space ... space.html

All the pieces of the puzzle are there, they simply assembled it wrong, or imposed old consensus views on what they saw. Watch the video, and notice the little chapter marks along the bottom. Watch it through once, then go through and start asking EU questions.

In Chapter Five, Undersea Waterfall, the brine current is a plasma. The circuit that it makes around the Earth is literally an electrical circuit.

The TPODs have already addressed Chapter Eight, Half a World Away, where the dust from the Sahara feeds the Amazon, yet that dust was under a vast fresh water lake 8,000 years ago, so how old is the Amazon? HA!

In Chapter 10, Strike of Life, and Chapter 11, Magnetic Shield, they have these two reversed.

In Strike of Life, they babble about how lightning is made by static electricity (ice particles smashing together), yet in Magnetic Shield, they show vast energy streaming in from the Sun, and they don't see that the lightning comes from the Sun.

In Strike of Life, they babble about nitrates being made by lightning causing nitrogen and oxygen to combine, and never mention the transmutation of elements. Lightning produces silicon, iron, sulfur, etc.

In Strike of Life, they show the fires caused by lightning that cover the earth, yet they blame humans rather than fire as the cause of CO2 production.

In Chapter Six, On Not-So-Solid Ground, they show the vast number of volcanoes regularly active and spewing out sulfur, yet in the final chapter they try to blame humans for putting out more sulfur than volcanoes. How can they tell the difference between sulfur produced by man and sulfur from volcanoes or lightning?

In the final chapter, showing all of the satellite views combined, they still blame humans for all pollution, even after showing the natural sources from all over the planet.

The Team needs to take the video apart and use it to build a series of YouTube videos showing the EU view, and how the NOVA video distorts reality.

Re: Caltech: The Mechanical Universe

Unread post by allynh » Wed Nov 06, 2013 7:46 pm

The stuff by Robert Anton Wilson that is discussed in the videos is exactly what I've seen in my life, "I know I'm wrong, I want to be less wrong". HA!

The Wisdom of Robert Anton Wilson: A Tonic for the Internet Age ... ternet-Age
On October 23rd, the London Fortean Society celebrated "The Late Great Robert Anton Wilson" at the Horse Hospital in London, featuring lectures by our good friend John Higgs as well as Daisy Eris Campbell. John's well-presented talk, in which he riffs on RAW's thoughts about belief and reality, has been uploaded to YouTube and I heartily recommend it - in fact, I wish everyone on Earth would hear what John is talking about, because it's such a key aspect of the ways in which we fool ourselves (often to the detriment of others). I've embedded the talk below (John's talk is just over half an hour, followed by about 15 minutes of questions and money burning...literally), and after it I've pulled out a short quote from John's talk that resonated strongly with me (also, to whoever produced the video, I enjoyed the easter egg!):
The reason why I think Bob is important, and Bob is different, I think it can be summed up in a principle he talks about called the 'cosmic shmuck' principle, and it goes like this. If you wake up in the morning and you do not realise that you are a cosmic shmuck, you will remain a cosmic shmuck. But if you wake up in the morning and you think 'oh god, I'm a cosmic shmuck', you'll be very embarrassed [and] you'll want to be less of a cosmic shmuck; you'll try to be less of a cosmic shmuck; and slowly, over time, you'll become less of a cosmic shmuck.

And the fact that the underlying principle of Robert Anton Wilson's philosophy is "I know I'm wrong, I want to be less wrong", is very different to now, our current internet culture, where the underlying philosophy is "I'm right, and I want you to know that". And if you go onto any internet discussion, or debate, or things like that, you find people declaring certainties loudly, people with very fixed positions that they can express in 140 characters, that they hunker down and defend, and don't listen to anything else, and attempt to drown out all the others. That's so different to Robert Anton Wilson: he believed – hang on, the word believe is difficult with Bob – he thought that what you believed imprisoned you, he thought convictions create convicts.

His philosophy can be called 'multiple-model agnosticism'. That's not just agnosticism about God, that's agnosticism about everything...

There's a key core point [to Bob's philosophy], this phrase 'reality tunnel', that's at the heart of all Bob's thinking, so I think it's worth defining for you. A reality tunnel is the model of reality that you build in your head. It's not reality, it's what you think reality is. Just as Korzybski said, "the map is not the territory"; as Alan Watts said, "the menu is not the meal"; in the same way, your reality tunnel is not reality. It's a model you personally built over your entire life, based on your experiences, your memories, your senses, your prejudices, your culture, and to a large and surprising degree, language. And that's fine, that's normal, we need models. We need models to understand what's going on around us, to predict what's going to happen next. But a model is, by definition, a simplified version of something. It may look roughly the same, and it gives you a good idea of things, but there are going to be places where it lacks the detail, or it's just wrong or it's different. And when your reality tunnel doesn't map reality, then you are wrong. And the fact that we use these things means that we will always be wrong.

You can read more of John's thoughts on these topics in his fantastic books, The Brandy of the Damned, and The First Church on the Moon (both fiction), as well as The KLF: Chaos, Magic and the Band who Burned a Million Pounds (non-fiction, even though it might seem more fictional than the first two).

These are the videos:

The Late Great Robert Anton Wilson event at the Horse Hospital - part 1

The Late Great Robert Anton Wilson event at the Horse Hospital - part 2

Re: Caltech: The Mechanical Universe

Unread post by allynh » Fri Nov 22, 2013 9:28 am

I just watched a great NOVA program; they kept talking about the Electric Earth.

At the Edge of Space
Program Description
Between the blue sky above and the infinite blackness beyond lies a frontier that scientists have only just begun to investigate. In "At the Edge of Space," NOVA takes viewers on a spectacular exploration of the Earth-space boundary that's home to some of nature's most puzzling and alluring phenomena: the shimmering aurora, streaking meteors, and fleeting flashes that shoot upwards from thunderclouds, known as sprites. Only discovered in 1989, sprites have eluded capture because they exist for a mere split-second, 40 times faster than an eye blink. NOVA rides with scientists in a high-flying weather observation plane on a hunt for sprites, finally snaring them in 3D video and gaining vital clues to unraveling their mystery. Combining advanced video technology with stunning footage shot from the International Space Station, "At the Edge of Space" probes the boundary zone and offers an entirely new perspective on our home planet.

This is one DVD that the Team has to buy and comment on. NOVA is getting close to being Electric; they are almost there. HA!

Re: Caltech: The Mechanical Universe

Unread post by allynh » Thu Dec 05, 2013 11:04 am

The Team needs to check out this paper and the process they used.

A large part of what the Team needs to do is offer proof that many of the bogus ideas in consensus science have verifiable origins. If the process these people used is something the Team could do or fund, then a good effort should be made to debunk the pernicious science myths using Reference Publication Year Spectroscopy.

The Legend of Darwin’s Finches Unmasked — The Physics arXiv Blog — Medium
The story of Darwin’s finches is one of the most famous in science. It describes Charles Darwin’s fascination with the different types of finches he found on the Galapagos Islands during his famous expedition in the 1830s.

Over time, the finches on different islands evolved independently. In particular, their beaks show remarkable variation having become specially adapted to the feeding conditions in different microenvironments.

The legend is that Darwin carefully collected these birds and organised them by their geographical environment. It was this process that played a central role in persuading Darwin of the truth of evolution.

Or so the story goes. The truth is somewhat different. The finches are certainly good evidence of evolution and commonly cited today by evolutionary biologists. But Darwin did not grasp the significance of them at the time of his voyage, as the legend suggests. He makes little mention of them in his Journal of Researches and does not mention them at all in On the Origin of Species.

That raises an interesting question. How did this legend come into existence? What chain of events conspired to credit Darwin with observations and insights into these finches that he never made?

Today, we get an answer thanks to the data mining work of Werner Marx at the Max Planck Institute for Solid State Research in Stuttgart and Lutz Bornmann at the Administrative Headquarters of the Max Planck Society in Munich.

These guys say that it should be possible to identify the origin of scientific ideas by studying the pattern of links between papers that reference them. And the papers that have contributed most to spreading an idea or story should stand out, assuming that the most highly cited papers are the most influential.

Marx and Bornmann say this method is similar to spectroscopic analysis. This reveals the most important molecules in a sample by studying the pattern of wavelengths they emit. By analogy to this, they call their method Reference Publication Year Spectroscopy.

The technique is straightforward. Marx and Bornmann simply look up the term “Darwin’s finches” in the Science Citation Index, an online database of references listed in scientific papers. They then collect all the references mentioned in these papers and determine their frequency and date. That gives them a distribution of papers by the date and frequency of citation.
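The counting step they describe is simple enough to sketch in a few lines of Python. This is a toy reconstruction, not the authors' code: the reference years below are hypothetical stand-ins for records harvested from the Science Citation Index, and the peak rule (a count well above the median of neighbouring years, plus a minimum-count threshold) is one reasonable reading of the RPYS idea:

```python
from collections import Counter
from statistics import median

def rpys_spectrum(cited_years, window=2, min_count=3):
    """Toy Reference Publication Year Spectroscopy.

    Count cited references per publication year, then flag a year as a
    peak when its count clearly exceeds the median count of the
    surrounding window of years and passes a minimum threshold."""
    counts = Counter(cited_years)
    peaks = []
    for year in sorted(counts):
        neighbourhood = [counts.get(y, 0)
                         for y in range(year - window, year + window + 1)]
        if counts[year] >= min_count and counts[year] > 2 * median(neighbourhood):
            peaks.append(year)
    return counts, peaks

# Hypothetical reference years harvested from papers citing "Darwin's finches":
refs = ([1859] * 8 + [1871] * 5 + [1947] * 14 + [1982] * 6
        + [1950, 1953, 1960, 1961, 1970, 1975, 1983, 1990])
counts, peaks = rpys_spectrum(refs)
print(peaks)  # [1859, 1871, 1947, 1982]
```

In the real study the tallest peak, 1947, points at David Lack's book; the toy data above is rigged to reproduce that shape.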

The results are straightforward to interpret. The first peaks in this distribution come in 1859 and 1871 and correspond to Darwin’s books On the Origin of Species and The Descent of Man.

Marx and Bornmann say that it is no surprise that these books are commonly referenced given their importance. “However, a careful reading of these books shows that they are not the origin of the legend about Darwin finches,” they add.

However, the biggest peak in this distribution dates from 1947 and corresponds to the publication of a book called “Darwin’s Finches” by the evolutionary biologist David Lack. This is strong evidence that Lack’s book is the origin of the legend, say Marx and Bornmann.

And it is not the only evidence. In 1982, the psychologist Frank Sulloway published a detailed study of the story suggesting that it was little more than a scientific legend. Sulloway also points the finger firmly in Lack’s direction as the populariser of this legend (although others had mentioned it earlier).

Sulloway’s approach is far more detailed, thorough and time-consuming than Marx and Bornmann’s. What’s interesting, though, is that their vastly different methods result in the same conclusion.

Clearly, Reference Publication Year Spectroscopy could be a powerful tool for anybody contemplating similar work on the history of science. Just how much work this approach could have saved Sulloway isn’t clear but it is certainly quick.

Will it help to change the common misconception about Darwin’s interest in the finches named after him? That’s too early to tell.

For the moment, it remains a powerful legend. As Sulloway puts it: “It has become, in fact, one of the most widely circulated legends in the history of the life sciences, ranking with the famous stories of Newton and the apple and of Galileo’s experiments at the Leaning Tower of Pisa”.

Legends indeed!

Ref: Tracing the Origin of a Scientific Legend by Reference Publication Year Spectroscopy (RPYS): The Legend of the Darwin Finches

Werner Marx, Lutz Bornmann

Re: Caltech: The Mechanical Universe

Unread post by allynh » Thu Aug 07, 2014 10:42 am

In one of the threads, Corona posted a link to a video from ScienceAtNASA on YouTube. They call their videos ScienceCasts.

ScienceAtNASA ... p1lgj4bl1A

The Team should look at the ScienceAtNASA videos and think about posting Space News as replies directly under appropriate ScienceCasts. That way people seeing the ScienceCasts would then see Space News and the Electric Universe viewpoint.

Look at the ScienceCasts and see if there are any that inspire Space News videos. HA!

Re: Caltech: The Mechanical Universe

Unread post by allynh » Mon Apr 20, 2015 1:29 pm

I came across an interesting concept that the Team needs to address. I did a search of the Forum and only found two brief comments, but no real discussion of the paradox's implications.

In other words, if liquid water was impossible on the Earth for most of its history, does this point to a scientific basis for the "Golden Age" discussed in the Saturn Myth?

Faint young Sun paradox

The faint young Sun paradox or problem describes the apparent contradiction between observations of liquid water early in Earth's history and the astrophysical expectation that the Sun's output would be only 70 percent as intense during that epoch as it is during the modern epoch. The issue was raised by astronomers Carl Sagan and George Mullen in 1972.[1] Explanations of this paradox have taken into account greenhouse effects, astrophysical influences, or a combination of the two.

Early solar output

Early in Earth's history, the Sun's output would have been only 70 percent as intense as it is during the modern epoch. In the then current environmental conditions, this solar output would have been insufficient to maintain a liquid ocean. Astronomers Carl Sagan and George Mullen pointed out in 1972 that this is contrary to the geological and paleontological evidence.[1]

According to the Standard Solar Model, stars similar to the Sun should gradually brighten over their main sequence lifetime.[2] However, with the predicted solar luminosity 4 billion (4 × 10^9) years ago and with greenhouse gas concentrations the same as are current for the modern Earth, any liquid water exposed to the surface would freeze. Yet the geological record shows a continually, relatively warm surface in the full early temperature record of Earth, with the exception of a cold phase, the Huronian glaciation, about 2.4 to 2.1 billion years ago. Water-related sediments have been found that date to as early as 3.8 billion years ago.[3] Hints of early life forms have been dated from as early as 3.5 billion years,[4] and the basic carbon isotopy is very much in line with what is found today.[5] A regular alternation between ice ages and warm periods is only found occurring in the period since one billion years ago.
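The arithmetic behind the paradox can be checked on the back of an envelope. The sketch below uses Gough's (1981) standard approximation for main-sequence brightening and a simple blackbody equilibrium temperature; the albedo is a rough modern figure, and greenhouse warming is deliberately left out, which is exactly why both epochs come out below freezing:

```python
SIGMA = 5.670e-8        # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0             # present-day solar constant at Earth, W m^-2
ALBEDO = 0.3            # rough modern Bond albedo of Earth

def luminosity_fraction(t_gyr, t_now=4.57):
    """Gough's approximation for main-sequence brightening:
    L(t)/L_now = 1 / (1 + 0.4 * (1 - t/t_now)), t in Gyr since formation."""
    return 1.0 / (1.0 + 0.4 * (1.0 - t_gyr / t_now))

def equilibrium_temp(solar_constant, albedo=ALBEDO):
    """Blackbody equilibrium temperature of a rapidly rotating planet."""
    return (solar_constant * (1.0 - albedo) / (4.0 * SIGMA)) ** 0.25

for t in (0.0, 2.0, 4.57):   # Gyr after the Sun reached the main sequence
    frac = luminosity_fraction(t)
    temp = equilibrium_temp(S0 * frac)
    print(f"t = {t:4.2f} Gyr  L/L_now = {frac:.2f}  T_eq = {temp:.0f} K")
```

The early Sun comes out at about 71 percent of today's output, giving an equilibrium temperature near 234 K versus roughly 255 K today; without a greenhouse, both are sub-freezing, and the paradox lies in the extra ~21 K deficit the early Earth somehow overcame.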

Greenhouse hypothesis

When it first formed, Earth's atmosphere may have contained more greenhouse gases. Carbon dioxide concentrations may have been higher, with estimated partial pressure as large as 1,000 kPa (10 bar), because there was no bacterial photosynthesis to reduce the gas to carbon and oxygen. Methane, a very active greenhouse gas that reacts with oxygen to produce carbon dioxide and water vapor, may have been more prevalent as well, with a mixing ratio of 10−4 (100 parts per million by volume).[6][7]

Based on a study of geological sulfur isotopes, in 2009 a group of scientists including Yuichiro Ueno from the University of Tokyo proposed that carbonyl sulfide (OCS) was present in the Archean atmosphere. Carbonyl sulfide is an efficient greenhouse gas and the scientists estimate that the additional greenhouse effect would have been sufficient to prevent Earth from freezing over.[8]

Based on an "analysis of nitrogen and argon isotopes in fluid inclusions trapped in 3.0- to 3.5-billion-year-old hydrothermal quartz" a 2013 paper concludes that "dinitrogen did not play a significant role in the thermal budget of the ancient Earth and that the Archean partial pressure of CO2 was probably lower than 0.7 bar".[9] Burgess, one of the authors states "The amount of nitrogen in the atmosphere was too low to enhance the greenhouse effect of carbon dioxide sufficiently to warm the planet. However, our results did give a higher than expected pressure reading for carbon dioxide – at odds with the estimates based on fossil soils – which could be high enough to counteract the effects of the faint young Sun and will require further investigation."[10]

Following the initial accretion of the continents after about 1 billion years,[11] geo-botanist Heinrich Walter and others contend that a non-biological version of the carbon cycle provided a negative temperature feedback. The carbon dioxide in the atmosphere dissolved in liquid water and combined with metal ions derived from silicate weathering to produce carbonates. During ice age periods, this part of the cycle would shut down. Volcanic carbon emissions would then restart a warming cycle due to the greenhouse effect.[12][13]

According to the Snowball Earth hypothesis, there may have been a number of periods when Earth's oceans froze over completely. The most recent such period may have been about 630 million years ago.[14] Afterwards, the Cambrian explosion of new multicellular life forms started.

Examination of Archaean sediments appears inconsistent with the hypothesis of high greenhouse concentrations. Instead, the moderate temperature range may be explained by a lower surface albedo brought about by less continental area and the "lack of biologically induced cloud condensation nuclei". This would have led to increased absorption of solar energy, thereby compensating for the lower solar output.[15]

Greater radiogenic heat

In the past, the geothermal release of decay heat, emitted from the decay of the isotopes potassium-40, uranium-235 and uranium-238, was considerably greater than it is today.[16] The isotope ratio of U-238 to U-235 was also considerably different then, essentially equivalent to that of modern low-enriched uranium. Natural uranium ore bodies, if present, would therefore have been capable of supporting natural nuclear fission reactors with common light water as moderator. Any attempt to explain the paradox must therefore factor in radiogenic contributions, both from decay heat and from any potential natural nuclear fission reactors.
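The claim about the ancient isotope ratio is easy to verify by running radioactive decay backwards. A minimal sketch, using textbook half-lives and today's natural U-238/U-235 atom ratio of about 137.8:

```python
import math

HALF_LIFE = {"U-235": 0.704, "U-238": 4.468}  # half-lives in Gyr

def atoms_ago(n_now, isotope, t_gyr):
    """Relative number of atoms t_gyr in the past, reversing exponential decay."""
    return n_now * math.exp(math.log(2) * t_gyr / HALF_LIFE[isotope])

def u235_percent(t_gyr, ratio_now=137.8):
    """U-235 share of natural uranium t_gyr ago (today it is ~0.72 %)."""
    u235 = atoms_ago(1.0, "U-235", t_gyr)
    u238 = atoms_ago(ratio_now, "U-238", t_gyr)
    return 100.0 * u235 / (u235 + u238)

for t in (0.0, 2.0, 4.0):
    print(f"{t:.0f} Gyr ago: U-235 = {u235_percent(t):.1f} % of natural uranium")
```

Because U-235 decays much faster than U-238, natural uranium two billion years ago was roughly 3.7 percent U-235, squarely in modern low-enriched-reactor-fuel territory, which is why the Oklo natural fission reactors were possible around that time.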

Greater tidal heating

The Moon was much closer to Earth billions of years ago,[17] and therefore produced considerably more tidal heating.[18]
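How strongly distance matters follows from the standard scaling: tidal amplitude falls off as the cube of the orbital radius, and dissipated power goes roughly as the amplitude squared, so equilibrium tidal heating scales roughly as a^-6. A toy illustration (the factor-of-two distance is a made-up input, not a measured early lunar orbit):

```python
def tidal_heating_ratio(a_then_over_a_now):
    """Relative tidal dissipation for a Moon at a different orbital radius.

    Tidal amplitude scales as a**-3 and dissipated power roughly as the
    amplitude squared, hence the a**-6 dependence used here."""
    return a_then_over_a_now ** -6.0

# A Moon at half today's distance would dissipate ~64x more tidal energy:
print(tidal_heating_ratio(0.5))  # 64.0
```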


A minority view, propounded by the Israeli-American physicist Nir Shaviv, uses climatological influences of solar wind, combined with a hypothesis of Danish physicist Henrik Svensmark for a cooling effect of cosmic rays, to explain the paradox.[19] According to Shaviv, the early Sun had emitted a stronger solar wind that produced a protective effect against cosmic rays. In that early age, a moderate greenhouse effect comparable to today's would have been sufficient to explain an ice-free Earth. Evidence for a more active early Sun has been found in meteorites.[20]

The temperature minimum around 2.4 billion years ago coincides with a modulation of the cosmic ray flux by a variable star formation rate in the Milky Way. The reduced solar influence later results in a stronger impact of the cosmic ray flux (CRF), which is hypothesized to correlate with climatological variations.

An alternative model of solar evolution may explain the faint young Sun paradox. In this model, the early Sun underwent an extended period of higher solar wind output. This caused a mass loss from the Sun on the order of 5−10 percent over its lifetime, resulting in a more consistent level of solar luminosity (as the early Sun had more mass, resulting in more energy output than was predicted). In order to explain the warm conditions in the Archean era, this mass loss must have occurred over an interval of about one billion years. However, records of ion implantation from meteorites and lunar samples show that the elevated rate of solar wind flux only lasted for a period of 0.1 billion years. Observations of the young Sun-like star π1 Ursae Majoris match this rate of decline in the stellar wind output, suggesting that a higher mass loss rate cannot by itself resolve the paradox.[21]
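The logic of the mass-loss model can be illustrated with rough main-sequence scalings. Assuming L ~ M^4 (a common textbook approximation near one solar mass) and that Earth's orbit widens as the Sun sheds mass (a ~ 1/M from angular-momentum conservation), the flux at Earth goes as M^6, so a Sun only 5 percent more massive would deliver about a third more sunlight:

```python
def flux_boost(mass_fraction, ml_exponent=4.0):
    """Relative insolation at Earth for a Sun of mass_fraction * M_now.

    Assumes L ~ M**ml_exponent and orbital radius a ~ 1/M, so the flux
    at Earth scales as L / a**2 ~ M**(ml_exponent + 2)."""
    return mass_fraction ** (ml_exponent + 2.0)

faint_fraction = 0.70                    # early-Sun output per the standard model
boost = flux_boost(1.05)                 # a Sun 5 % more massive
print(round(boost, 2))                   # 1.34
print(round(faint_fraction * boost, 2))  # 0.94: most of the deficit is gone
```

Both exponents are order-of-magnitude approximations, but they show why a 5-10 percent more massive early Sun would nearly cancel the 30 percent luminosity deficit.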


Re: Caltech: The Mechanical Universe

Unread post by allynh » Wed Jun 29, 2016 12:13 pm

I stumbled on a new word that sums up many problems that exist.

Fascinating discussion via Wired‘s Clive Thompson, and Stanford historian of science Robert Proctor, on Agnotology:

“When it comes to many contentious subjects, our usual relationship to information is reversed: Ignorance increases.

[Proctor] has developed a word inspired by this trend: agnotology. Derived from the Greek root agnosis, it is “the study of culturally constructed ignorance.”

As Proctor argues, when society doesn’t know something, it’s often because special interests work hard to create confusion. Anti-Obama groups likely spent millions insisting he’s a Muslim; church groups have shelled out even more pushing creationism. The oil and auto industries carefully seed doubt about the causes of global warming. And when the dust settles, society knows less than it did before.

“People always assume that if someone doesn’t know something, it’s because they haven’t paid attention or haven’t yet figured it out,” Proctor says. “But ignorance also comes from people literally suppressing truth—or drowning it out—or trying to make it so confusing that people stop caring about what’s true and what’s not.” (emphasis added)

Fairly amazing, and when it comes to certain issues, it's dead on.

What an awesome definition:

Agnotology: Culturally constructed ignorance, purposefully created by special interest groups working hard to create confusion and suppress the truth.

How More Info Leads to Less Knowledge
Clive Thompson
WIRED MAGAZINE: 17.02, Tech Biz ... t_thompson

The link above defaults to this:

Clive Thompson on How More Info Leads to Less Knowledge
Is global warming caused by humans? Is Barack Obama a Christian? Is evolution a well-supported theory?

You might think these questions have been incontrovertibly answered in the affirmative, proven by settled facts. But for a lot of Americans, they haven't. Among Republicans, belief in anthropogenic global warming declined from 52 percent to 42 percent between 2003 and 2008. Just days before the election, nearly a quarter of respondents in one Texas poll were convinced that Obama is a Muslim. And the proportion of Americans who believe God did not guide evolution? It's 14 percent today, a two-point decline since the '90s, according to Gallup.

What's going on? Normally, we expect society to progress, amassing deeper scientific understanding and basic facts every year. Knowledge only increases, right?


Robert Proctor doesn't think so. A historian of science at Stanford, Proctor points out that when it comes to many contentious subjects, our usual relationship to information is reversed: Ignorance increases.

He has developed a word inspired by this trend: agnotology. Derived from the Greek root agnosis, it is "the study of culturally constructed ignorance."

As Proctor argues, when society doesn't know something, it's often because special interests work hard to create confusion. Anti-Obama groups likely spent millions insisting he's a Muslim; church groups have shelled out even more pushing creationism. The oil and auto industries carefully seed doubt about the causes of global warming. And when the dust settles, society knows less than it did before.

"People always assume that if someone doesn't know something, it's because they haven't paid attention or haven't yet figured it out," Proctor says. "But ignorance also comes from people literally suppressing truth—or drowning it out—or trying to make it so confusing that people stop caring about what's true and what's not."

After years of celebrating the information revolution, we need to focus on the countervailing force: The disinformation revolution. The ur-example of what Proctor calls an agnotological campaign is the funding of bogus studies by cigarette companies trying to link lung cancer to baldness, viruses—anything but their product.

Think of the world of software today: Tech firms regularly sue geeks who reverse-engineer their code to look for flaws. They want their customers to be ignorant of how their apps work.

Even the financial meltdown was driven by ignorance. Credit-default swaps were designed not merely to dilute risk but to dilute knowledge; after they'd changed hands and been serially securitized, no one knew what they were worth.

Maybe the Internet itself has inherently agnotological side effects. People graze all day on information tailored to their existing worldview. And when bloggers or talking heads actually engage in debate, it often consists of pelting one another with mutually contradictory studies they've Googled: "Greenland's ice shield is melting 10 years ahead of schedule!" vs. "The sun is cooling down and Earth is getting colder!"

As Farhad Manjoo notes in True Enough: Learning to Live in a Post-Fact Society, if we argue about what a fact means, we're having a debate. If we argue about what the facts are, it's agnotological Armageddon, where reality dies screaming.

Can we fight off these attempts to foster ignorance? Despite his fears about the Internet's combative culture, Proctor is optimistic. During last year's election, campaign-trail lies were quickly exposed via YouTube and transcripts. The Web makes secrets harder to keep.

We need to fashion information tools that are designed to combat agnotological rot. Like Wikipedia: It encourages users to build real knowledge through consensus, and the result manages to (mostly) satisfy even people who hate each other's guts. Because the most important thing these days might just be knowing what we know.


Re: Caltech: The Mechanical Universe

Unread post by allynh » Tue Apr 18, 2017 12:23 pm

I have not read this book, but the book review itself is deeply insightful, and goes far in explaining the problem of presenting new information that changes people's minds.

People Have Limited Knowledge. What’s the Remedy? Nobody Knows ... nbach.html
APRIL 18, 2017

The Knowledge Illusion: Why We Never Think Alone
By Steven Sloman and Philip Fernbach
Illustrated. 296 pp. Riverhead Books. $28.

In “The Knowledge Illusion,” the cognitive scientists Steven Sloman and Philip Fernbach hammer another nail into the coffin of the rational individual. From the 17th century to the 20th century, Western thought depicted individual human beings as independent rational agents, and consequently made these mythical creatures the basis of modern society. Democracy is founded on the idea that the voter knows best, free market capitalism believes the customer is always right, and modern education tries to teach students to think for themselves.

Over the last few decades, the ideal of the rational individual has been attacked from all sides. Postcolonial and feminist thinkers challenged it as a chauvinistic Western fantasy, glorifying the autonomy and power of white men. Behavioral economists and evolutionary psychologists have demonstrated that most human decisions are based on emotional reactions and heuristic shortcuts rather than rational analysis, and that while our emotions and heuristics were perhaps suitable for dealing with the African savanna in the Stone Age, they are woefully inadequate for dealing with the urban jungle of the silicon age.

Sloman and Fernbach take this argument further, positing that not just rationality but the very idea of individual thinking is a myth. Humans rarely think for themselves. Rather, we think in groups. Just as it takes a tribe to raise a child, it also takes a tribe to invent a tool, solve a conflict or cure a disease. No individual knows everything it takes to build a cathedral, an atom bomb or an aircraft. What gave Homo sapiens an edge over all other animals and turned us into the masters of the planet was not our individual rationality, but our unparalleled ability to think together in large groups.

As Sloman and Fernbach demonstrate in some of the most interesting and unsettling parts of the book, individual humans know embarrassingly little about the world, and as history progressed, they came to know less and less. A hunter-gatherer in the Stone Age knew how to produce her own clothes, how to start a fire from scratch, how to hunt rabbits and how to escape lions. We today think we know far more, but as individuals we actually know far less. We rely on the expertise of others for almost all our needs. In one humbling experiment, people were asked to evaluate how well they understood how a zipper works. Most people confidently replied that they understood it very well — after all, they use zippers all the time. They were then asked to explain how a zipper works, describing in as much detail as possible all the steps involved in the zipper’s operation. Most had no idea. This is the knowledge illusion. We think we know a lot, even though individually we know very little, because we treat knowledge in the minds of others as if it were our own.

This is not necessarily bad, though. Our reliance on groupthink has made us masters of the world, and the knowledge illusion enables us to go through life without being caught in an impossible effort to understand everything ourselves. From an evolutionary perspective, trusting in the knowledge of others has worked extremely well for humans.

Yet like many other human traits that made sense in past ages but cause trouble in the modern age, the knowledge illusion has its downside. The world is becoming ever more complex, and people fail to realize just how ignorant they are of what’s going on. Consequently some who know next to nothing about meteorology or biology nevertheless conduct fierce debates about climate change and genetically modified crops, while others hold extremely strong views about what should be done in Iraq or Ukraine without being able to locate them on a map. People rarely appreciate their ignorance, because they lock themselves inside an echo chamber of like-minded friends and self-confirming newsfeeds, where their beliefs are constantly reinforced and seldom challenged.

According to Sloman (a professor at Brown and editor of the journal Cognition) and Fernbach (a professor at the University of Colorado’s Leeds School of Business), providing people with more and better information is unlikely to improve matters. Scientists hope to dispel antiscience prejudices by better science education, and pundits hope to sway public opinion on issues like Obamacare or global warming by presenting the public with accurate facts and expert reports. Such hopes are grounded in a misunderstanding of how humans actually think. Most of our views are shaped by communal groupthink rather than individual rationality, and we cling to these views because of group loyalty. Bombarding people with facts and exposing their individual ignorance is likely to backfire. Most people don’t like too many facts, and they certainly don’t like to feel stupid. If you think that you can convince Donald Trump of the truth of global warming by presenting him with the relevant facts — think again.

Indeed, scientists who believe that facts can change public opinion may themselves be the victims of scientific groupthink. The scientific community believes in the efficacy of facts, hence those loyal to that community continue to believe they can win public debates by marshaling the right facts, despite much empirical evidence to the contrary. Similarly, the traditional belief in individual rationality may itself be the product of groupthink rather than of empirical evidence. In one of the climactic moments of Monty Python’s “Life of Brian,” a huge crowd of starry-eyed followers mistakes Brian for the Messiah. Caught in a corner, Brian tells his disciples: “You don’t need to follow me, you don’t need to follow anybody! You’ve got to think for yourselves! You’re all individuals!” The enthusiastic crowd then chants in unison: “Yes! We’re all individuals!” Monty Python was parodying the counterculture orthodoxy of the 1960s, but the point may be true of the belief in rational individualism in other ages too.

In the coming decades, the world will probably become far more complex than it is today. Individual humans will consequently know even less about the technological gadgets, the economic currents and the political dynamics that shape the world. How could we then vest authority in voters and customers who are so ignorant and susceptible to manipulation? If Sloman and Fernbach are correct, providing future voters and customers with more and better facts would hardly solve the problem. So what’s the alternative? Sloman and Fernbach don’t have a solution. They suggest a few remedies like offering people simple rules of thumb (“Save 15 percent of your income,” say), educating people on a just-in-time basis (teaching them how to handle unemployment immediately when they are laid off) and encouraging people to be more realistic about their ignorance. This will hardly be enough, of course. True to their own advice, Sloman and Fernbach are well aware of the limits of their own understanding, and they know they don’t know the answer. In all likelihood, nobody knows.
Posts: 817
Joined: Fri Aug 22, 2008 5:51 pm

Re: Caltech: The Mechanical Universe

Unread postby allynh » Wed May 03, 2017 12:46 pm

This is an interesting piece.

Why String Theory Is Still Not Even Wrong ... ven-wrong/
Physicist, mathematician and blogger Peter Woit whacks strings, multiverses, simulated universes and “fake physics”

John Horgan

Peter Woit, shown here at the March for Science in New York City, says the problems of string theory have become more severe since he critiqued it more than a decade ago in his book Not Even Wrong. The problems include “the complexity, ugliness and lack of explanatory power of models designed to connect string theory with known phenomena, as well as the continuing failure to come up with a consistent formulation of the theory.” Credit: Pamela Cruz

At its best, physics is the most potent and precise of all scientific fields, and yet it surpasses even psychology in its capacity for bullshit. To keep physics honest, we need watchdogs like Peter Woit. He is renowned for asserting that string theory, which for decades has been the leading candidate for a unified theory of physics, is so flawed that it is “not even wrong.” That phrase (credited to Wolfgang Pauli) is the title of Woit’s widely discussed 2006 book (see my review here) and of his popular blog, which he launched in 2004. Woit, who has degrees in physics from Harvard and Princeton and has taught mathematics at Columbia since 1989, tracks mathematics as well as physics on his blog, and some of his riffs (like a recent one on the difference between Lie groups and Lie algebras) are strictly for experts. But he provides plenty of clear, non-technical explanations for non-experts like me. Woit, whom I’ve known for more than a dozen years, is a good guy. He can be blunt, but he is always fair, and he does not indulge in cheap shots, snark or grandstanding. The next time the media tout an alleged breakthrough in physics or mathematics, check out Not Even Wrong to get the real scoop. Woit and I recently had the following email exchange. —John Horgan

Horgan: What’s the biggest pleasure you get from blogging?

Woit: The most rewarding experiences I've had because of the blog have been occasions on which it has put me in contact with people I greatly respect who have found the blog useful or interesting. Sometimes this has been a meeting in person, sometimes I've heard from someone by email, and sometimes via a comment on the blog. When a blog entry attracts a discussion involving well-informed people with something interesting to say who appreciate what I'm trying to do, that's a great pleasure.

Horgan: You've recently denounced “fake physics.” What is it? Are journalists mostly to blame for it?

Woit: By "fake physics" I mean pseudo-scientific claims about physics that share some of the characteristics of "fake news", in particular misleading, overhyped stories about fundamental physics promoting empty or unsuccessful theoretical ideas, with a clickbait headline. Those most to blame for this are the physicists involved, who should know better and be aware that the way they are promoting their work is going to mislead people. Journalists need to be skeptical about what they're being told by scientists, but often they're more or less accurately reporting impressive-sounding claims made by physicists with impeccable credentials, and are not in a good position to evaluate these.

Horgan: Do you still think string theory is “not even wrong”?

Woit: Yes. My book on the subject was written in 2003-4 and I think that its point of view about string theory has been vindicated by what has happened since then. Experimental results from the Large Hadron Collider show no evidence of the extra dimensions or supersymmetry that string theorists had argued for as "predictions" of string theory. The internal problems of the theory are even more serious after another decade of research. These include the complexity, ugliness and lack of explanatory power of models designed to connect string theory with known phenomena, as well as the continuing failure to come up with a consistent formulation of the theory.

Horgan: Why do you think Edward Witten told me in 2014 that string theory is “on the right track”?

Woit: I think the conjectural picture of how string theory would unite gravity and the standard model that Witten came up with in 1984-5 (in collaboration with others) had a huge influence on him, and he's reluctant to accept the idea that the models developed back then were a red herring. Like many prominent string theorists, he has for a long time now no longer actively worked on such models but, absent a convincing alternative, he is unlikely to give up on the hope that the vision of this period points the way forward, even as progress has stalled.

Horgan: Are multiverse theories not even wrong?

Woit: Yes, but that's not the main problem with them. Many ideas that are "not even wrong", in the sense of having no way to test them, can still be fruitful, for instance by opening up avenues of investigation that will lead to something conventionally testable. Most good ideas start off "not even wrong", with their implications too poorly understood to know where they will lead. The problem with such things as string-theory multiverse theories is that "the multiverse did it" is not just untestable, but an excuse for failure. Instead of opening up scientific progress in a new direction, such theories are designed to shut down scientific progress by justifying a failed research program.

Horgan: What’s your take on the proposal of Nick Bostrom and others that we are living in a simulation?

Woit: I like quite a bit this comment from Moshe Rozali (at URL ... nt-1733601): "As far as metaphysical speculation goes it is remarkably unromantic. I mean, your best attempt at a creation myth involves someone sitting in front of a computer running code? What else do those omnipotent gods do, eat pizza?"

Horgan: Sean Carroll has written that falsifiability is overrated as a criterion for distinguishing science from pseudo-science. Your response?

Woit: No one thinks that the subtle "demarcation problem" of deciding what is science and what isn't can simply be dealt with by invoking falsifiability. Carroll's critique of naive ideas about falsifiability should be seen in context: he's trying to justify multiverse research programs whose models fail naive criteria of direct testability (since you can't see other universes). This is however a straw man argument: the problem with such research programs isn't that of direct testability, but that there is no indirect evidence for them, nor any plausible way of getting any. Carroll and others with similar interests have a serious problem on their hands: they appear to be making empty claims and engaging in pseudo-science, with "the multiverse did it" no more of a testable explanation than "the Jolly Green Giant did it". To convince people this is science they need to start showing that such claims have non-empty testable consequences, and I don't see that happening.

Horgan: Is it possible that the whole push for unification of physics is misguided?

Woit: In principle it's of course possible that the sort of unification present in our best current theory is all there is. There are however no good arguments for why this should be, other than that it's proving hard to do better. The lesson of history is not to give up, that seemingly hard problems of this sort often find solutions. Looking in depth into the technical issues, I don't see anything inherently intractable, rather a set of puzzling problems with a lot of structure, where it looks like we're missing one or two good ideas about how things should fit together.

Horgan: Is physics in danger of ending, as Harry Cliff has warned?

Woit: One should be wary of claims about "physics" in general since it has many subfields, facing different issues. High-energy particle physics is a subfield that is in danger of ending. On the experimental front, it faces fundamental technological obstacles. Any next generation accelerator able to explore even modestly higher energies than the LHC will be far off in the future and very expensive. Whether there's the will to finance and build such a thing is now unclear. On the theoretical front, the field is now in crisis, due to the absence of experimental results that point to a better theory, as well as a refusal to abandon failed theoretical ideas.

Horgan: Is mathematics healthier than theoretical physics?

Woit: Mathematics is in a much healthier state than theoretical physics. One reason for this is that it has never been driven by experiment, so is immune to the problem of technological experimental barriers. Absent experiment to point the way forward and keep everyone honest, mathematics has developed a different culture than theoretical physics, one that emphasizes rigorous clarity about the dividing line between what one understands and what one doesn't. This clarity makes possible agreement on what is progress: that which moves the dividing line in the right direction. I believe that in its current crisis, theoretical physics could benefit a lot from behaving more like mathematicians. (I've had no luck though in getting physicists to agree with me).

Horgan: Will machines ever replace mathematicians?

Woit: It's hard to even guess at what the relation between machines and human beings will be in the far future. In the not-so-distant future, I can imagine machines or algorithms replacing some of what mathematicians do, but don't see any indication that they can replace the kind of creative work now being done by the best mathematicians.

Horgan: Are you an optimist or pessimist about the future of science? What about the future of humanity?

Woit: I've always liked the Antonio Gramsci slogan "pessimism of the intellect, optimism of the will" [written by Gramsci, an Italian politician and philosopher, while he was in prison in 1927].

"Science" seems to me a very heterogeneous activity, with some parts of it healthy, others in decline, sometimes a victim of their own success in a way you identified in The End of Science. I'm pretty ignorant about most subfields of science. Of the two I know best, one (fundamental physics) is in trouble, while the other (pure mathematics) is doing as well as ever.

As for the future of humanity, the collapse of any semblance of a healthy democracy in the US last year with the advent and triumph of "post-truth" politics has for me (and I'm sure many others) made it much harder to be an optimist. The longer-term trend of increasing concentration of wealth and power in the hands of a minority seems unstoppable. The "disruptive innovation" of our new Silicon Valley overlords and brave new world of social media and omnipresent digital monitoring of our existence is starting to make some of the dystopias of science fiction look frighteningly plausible. I'm still waiting for the future of peace, love and understanding promised when I came of age during the late 1960s.

Horgan: What’s your utopia?

Woit: Besides the peace, love and understanding thing, in my utopia everyone else would have as few problems and as much to enjoy about life as I currently do.

Further Reading:

Meta-Post: Horgan Posts on Physics, Cosmology

See my Q&As with Edward Witten, Steven Weinberg, George Ellis, Carlo Rovelli, Scott Aaronson, Stephen Wolfram, Sabine Hossenfelder, Priyamvada Natarajan, Garrett Lisi, Paul Steinhardt, Lee Smolin, Robin Hanson, Eliezer Yudkowsky, Stuart Kauffman, Christof Koch, Rupert Sheldrake and Sheldon Solomon.
Posts: 817
Joined: Fri Aug 22, 2008 5:51 pm

Re: Caltech: The Mechanical Universe

Unread postby allynh » Wed May 17, 2017 11:05 am

I finished reading the book that I mentioned above, The Knowledge Illusion. It is deeply disturbing.

The Team needs to read the book many times, and understand what they are facing. I would love to have Mel Acheson read the book and add it to his arsenal. HA!

What is interesting is that many of the examples they write about are wrong, yet that only proves their point: despite their knowledge, they got things wrong.

They present some great tactics for making people realize that they do not know things in depth. Yet the disturbing point is that every tactic you can use to inform people still cannot reach those who are so personally invested in their own world view that they refuse to ever question it.

People Have Limited Knowledge. What’s the Remedy? Nobody Knows ... nbach.html
APRIL 18, 2017

Why We Never Think Alone
By Steven Sloman and Philip Fernbach
Illustrated. 296 pp. Riverhead Books. $28.

This is a sample from the introduction.
Introduction: Ignorance and the Community of Knowledge

Three soldiers sat in a bunker surrounded by three-foot-thick concrete walls, chatting about home. The conversation slowed and then stopped. The cement walls shook and the ground wobbled like Jell-O. Thirty thousand feet above them in a B-36, crew members coughed and sputtered as heat and smoke filled their cabin and dozens of lights and alarms blared. Meanwhile, eighty miles due east, the crew of a Japanese fishing trawler, the not-so-lucky Lucky Dragon Number Five (Daigo Fukuryū Maru), stood on deck, staring with terror and wonder at the horizon.

The date was March 1, 1954, and they were all in a remote part of the Pacific Ocean witnessing the largest explosion in the history of humankind: the detonation of a thermonuclear fusion bomb nicknamed “Shrimp,” code-named Castle Bravo. But something was terribly wrong. The military men, sitting in a bunker on Bikini Atoll, close to ground zero, had witnessed nuclear detonations before and had expected a shock wave to pass by about 45 seconds after the blast. Instead the earth shook. That was not supposed to happen. The crew of the B-36, flying a scientific mission to sample the fallout cloud and take radiological measurements, were supposed to be at a safe altitude, yet their plane blistered in the heat.

All these people were lucky compared to the crew of the Daigo Fukuryū Maru. Two hours after the blast, a cloud of fallout blew over the boat and rained radioactive debris on the fishermen for several hours. Almost immediately the crew exhibited symptoms of acute radiation sickness—bleeding gums, nausea, burns—and one of them died a few days later in a Tokyo hospital. Before the blast, the U.S. Navy had escorted several fishing vessels beyond the danger zone. But the Daigo Fukuryū Maru was already outside the area the Navy considered dangerous. Most distressing of all, a few hours later, the fallout cloud passed over the inhabited atolls Rongelap and Utirik, irradiating the native populations. Those people have never been the same. They were evacuated three days later after suffering acute radiation sickness and temporarily moved to another island. They were returned to the atoll three years later but were evacuated again after rates of cancer spiked. The children got the worst of it. They are still waiting to go home.

The explanation for all this horror is that the blast force was much larger than expected. The power of nuclear weapons is measured in terms of TNT equivalents. The “Little Boy” fission bomb dropped on Hiroshima in 1945 exploded with a force of sixteen kilotons of TNT, enough to completely obliterate much of the city and kill about 100,000 people. The scientists behind Shrimp expected it to have a blast force of about six megatons, around three hundred times as powerful as Little Boy. But Shrimp exploded with a force of fifteen megatons, nearly a thousand times as powerful as Little Boy. The scientists knew the explosion would be big, but they were off by a factor of about 3.

The error was due to a misunderstanding of the properties of one of the major components of the bomb, an element called lithium-7. Before Castle Bravo, lithium-7 was believed to be relatively inert. In fact, lithium-7 reacts strongly when bombarded with neutrons, often decaying into an unstable isotope of hydrogen, which fuses with other hydrogen atoms, giving off more neutrons and releasing a great deal of energy. Compounding the error, the teams in charge of evaluating the wind patterns failed to predict the easterly direction of winds at higher altitudes that pushed the fallout cloud over the inhabited atolls.

This story illustrates a fundamental paradox of humankind. The human mind is both genius and pathetic, brilliant and idiotic. People are capable of the most remarkable feats, achievements that defy the gods. We went from discovering the atomic nucleus in 1911 to megaton nuclear weapons in just over forty years. We have mastered fire, created democratic institutions, stood on the moon, and developed genetically modified tomatoes. And yet we are equally capable of the most remarkable demonstrations of hubris and foolhardiness. Each of us is error-prone, sometimes irrational, and often ignorant. It is incredible that humans are capable of building thermonuclear bombs. It is equally incredible that humans do in fact build thermonuclear bombs (and blow them up even when they don’t fully understand how they work). It is incredible that we have developed governance systems and economies that provide the comforts of modern life even though most of us have only a vague sense of how those systems work. And yet human society works amazingly well, at least when we’re not irradiating native populations.

How is it that people can simultaneously bowl us over with their ingenuity and disappoint us with their ignorance? How have we mastered so much despite how limited our understanding often is? These are the questions we will try to answer in this book.

Thinking as Collective Action

The field of cognitive science emerged in the 1950s in a noble effort to understand the workings of the human mind, the most extraordinary phenomenon in the known universe. How is thinking possible? What goes on inside the head that allows sentient beings to do math, understand their mortality, act virtuously and (sometimes) selflessly, and even do simple things, like eat with a knife and fork? No machine, and probably no other animal, is capable of these acts.

We have spent our careers studying the mind. Steven is a professor of cognitive science who has been researching this topic for over twenty-five years. Phil has a doctorate in cognitive science and is a professor of marketing whose work focuses on trying to understand how people make decisions. We have seen directly that the history of cognitive science has not been a steady march toward a conception of how the human mind is capable of amazing feats. Rather, a good chunk of what cognitive science has taught us over the years is what individual humans can’t do—what our limitations are.

The darker side of cognitive science is a series of revelations that human capacity is not all that it seems, that most people are highly constrained in how they work and what they can achieve. There are severe limits on how much information an individual can process (that’s why we can forget someone’s name seconds after being introduced). People often lack skills that seem basic, like evaluating how risky an action is, and it’s not clear they can ever be learned (hence many of us—one of the authors included—are absurdly scared of flying, one of the safest modes of transportation available). Perhaps most important, individual knowledge is remarkably shallow, only scratching the surface of the true complexity of the world, and yet we often don’t realize how little we understand. The result is that we are often overconfident, sure we are right about things we know little about.

Our story will take you on a journey through the fields of psychology, computer science, robotics, evolutionary theory, political science, and education, all with the goal of illuminating how the mind works and what it is for—and why the answers to these questions explain how human thinking can be so shallow and so powerful at the same time.

The human mind is not like a desktop computer, designed to hold reams of information. The mind is a flexible problem solver that evolved to extract only the most useful information to guide decisions in new situations. As a consequence, individuals store very little detailed information about the world in their heads. In that sense, people are like bees and society a beehive: Our intelligence resides not in individual brains but in the collective mind. To function, individuals rely not only on knowledge stored within our skulls but also on knowledge stored elsewhere: in our bodies, in the environment, and especially in other people. When you put it all together, human thought is incredibly impressive. But it is a product of a community, not of any individual alone.

The Castle Bravo nuclear testing program is an extreme example of the hive mind. It was a complex undertaking requiring the collaboration of about ten thousand people who worked directly on the project and countless others who were indirectly involved but absolutely necessary, like politicians who raised funds and contractors who built barracks and laboratories. There were hundreds of scientists responsible for different components of the bomb, dozens of people responsible for understanding the weather, and medical teams responsible for studying the ill effects of handling radioactive elements. There were counterintelligence teams making sure that communications were encrypted and no Russian submarines were close enough to Bikini Atoll to compromise secrecy. There were cooks to feed all these people, janitors to clean up after them, and plumbers to keep the toilets working. No one individual had one one-thousandth of the knowledge necessary to fully understand it all. Our ability to collaborate, to jointly pursue such a complex undertaking by putting our minds together, made possible the seemingly impossible.

That’s the sunny side of the story. In the shadows of Castle Bravo are the nuclear arms race and the cold war. What we will focus on is the hubris that it exemplifies: the willingness to blow up a fifteen-megaton bomb that was not adequately understood.

Ignorance and Illusion

Most things are complicated, even things that seem simple. You would not be shocked to learn that modern cars or computers or air traffic control systems are complicated. But what about toilets?

There are luxuries, there are useful things, and then there are things that are utterly essential, those things you just cannot do without. Flush toilets surely belong in the latter category. When you need a toilet, you really need it. Just about every house in the developed world has at least one, restaurants must have them by law, and—thank goodness—they are generally available in gas stations and Starbucks. They are wonders of functionality and marvels of simplicity. Everyone understands how a toilet works. Certainly most people feel like they do. Don’t you?

Take a minute and try to explain what happens when you flush a toilet. Do you even know the general principle that governs its operation? It turns out that most people don’t.

The toilet is actually a simple device whose basic design has been around for a few hundred years. (Despite popular myth, Thomas Crapper did not invent the flush toilet. He just improved the design and made a lot of money selling them.) The most popular flush toilet in North America is the siphoning toilet. Its most important components are a tank, a bowl, and a trapway. The trapway is usually S- or U-shaped and curves up higher than the outlet of the bowl before descending into a drainpipe that eventually feeds the sewer. The tank is initially full of water.

When the toilet is flushed, the water flows from the tank quickly into the bowl, raising the water level above the highest curve of the trapway. This purges the trapway of air, filling it with water. As soon as the trapway fills, the magic occurs: A siphon effect is created that sucks the water out of the bowl and sends it through the trapway down the drain. It is the same siphon action that you can use to steal gasoline out of a car by placing one end in the tank and sucking on the other end. The siphon action stops when the water level in the bowl is lower than the first bend of the trapway, allowing air to interrupt the process. Once the water in the bowl has been siphoned away, water is pumped back up into the tank to wait for next time. It is quite an elegant mechanical process, requiring only minimal effort by the user. Is it simple? Well, it is simple enough to describe in a paragraph but not so simple that everyone understands it. In fact, you are now one of the few people who do.
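The flush sequence described above can be caricatured as a toy model. Every number and name here is an illustrative assumption, not real plumbing data; the code only encodes the logic of the paragraph: the siphon fires when the water tops the trapway's highest curve, and stops once the level falls below the first bend and air breaks the water column.

```python
TRAP_CREST = 10.0   # height of the trapway's highest curve (arbitrary units)
TRAP_BEND = 2.0     # height of the trapway's first (lowest) bend
TANK_VOLUME = 12.0  # rise in bowl level from one tank's worth of water

def flush(bowl_level, tank_volume=TANK_VOLUME):
    """Return the bowl's water level after one flush in this toy model."""
    # 1. The tank empties quickly into the bowl, raising its level.
    bowl_level += tank_volume
    # 2. If the level tops the trapway's crest, the trapway purges its
    #    air and fills with water, priming the siphon.
    siphon_primed = bowl_level > TRAP_CREST
    # 3. A primed siphon keeps pulling water out until the level drops
    #    below the first bend, where air interrupts the flow.
    if siphon_primed:
        bowl_level = TRAP_BEND
    return bowl_level
```

With these made-up numbers, `flush(3.0)` primes the siphon and leaves the bowl at the bend height, while a weak flush (`flush(3.0, tank_volume=1.0)`) never tops the crest, so the water simply rises without draining.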

To fully understand toilets requires more than a short description of their mechanism. It requires knowledge of ceramics, metal, and plastic to know how the toilet is made; of chemistry to understand how the seal works so the toilet doesn’t leak onto the bathroom floor; of the human body to understand the size and shape of the toilet. One might argue that a complete understanding of toilets requires a knowledge of economics to appreciate how they are priced and which components are chosen to make them. The quality of those components depends on consumers’ demand and willingness to pay. Understanding psychology is important for understanding why consumers prefer their toilets to be one color and not another.

Nobody could be a master of every facet of even a single thing. Even the simplest objects require complex webs of knowledge to manufacture and use. We haven’t even mentioned really complicated things that arise in nature such as bacteria, trees, hurricanes, love, and the process of reproduction. How do those work? Most people can’t tell you how a coffeemaker works, how glue holds paper together, or how the focus works on a camera, let alone something as complex as love.

Our point is not that people are ignorant. It’s that people are more ignorant than they think they are. We all suffer, to a greater or lesser extent, from an illusion of understanding, an illusion that we understand how things work when in fact our understanding is meager.

Some of you might be thinking, “Well, I don’t know much about how stuff works, but I don’t live in an illusion. I’m not a scientist and I’m not an engineer. It’s not important for me to know those things. I know what I have to know to get along and make good decisions.” What domain do you know a lot about? History? Politics? Economic policy? Do you really understand things within your area of specialty in great detail?

The Japanese attacked Pearl Harbor on December 7, 1941. The world was at war, Japan was an ally of Germany, and while the United States was not yet a participant, it was clear whose side it was on—the heroic Allies and not the evil Axis. These facts surrounding the attack are familiar and give us a sense that we understand the event. But how well do you really understand why Japan attacked, and specifically why they attacked a naval base on the Hawaiian Islands? Can you explain what actually happened and why?

It turns out that the United States and Japan were on the verge of war at the time of the attack. Japan was on the march, having invaded Manchuria in 1931, massacred the population of Nanking, China, in 1937, and invaded French Indochina in 1940. The reason that a naval base even existed in Hawaii was to stop perceived Japanese aggression. U.S. president Franklin D. Roosevelt moved the Pacific Fleet to Hawaii from its base in San Diego in 1941. So an attack by Japan was not a huge surprise. According to a Gallup poll, 52 percent of Americans expected war with Japan a week before the attack occurred.

So the attack on Pearl Harbor was more a consequence of a long-standing struggle in Southeast Asia than a result of the European war. It might well have happened even if Hitler had never invented the blitzkrieg and invaded Poland in 1939. The attack on Pearl Harbor certainly influenced the course of events in Europe during World War II, but it was not caused directly by them.

History is full of events like this, events that seem familiar, that elicit a sense of mild to deep understanding, but whose true historical context is different from what we imagine. The complex details get lost in the mist of time while myths emerge that simplify and make stories digestible, in part to serve one interest group or another.

Of course, if you have carefully studied the attack on Pearl Harbor, then we’re wrong; you do have a lot to say. But such cases are the exception. They have to be because nobody has time to study very many events. We wager that, except for a few areas that you’ve developed expertise in, your level of knowledge about the causal mechanisms that control not only devices, but the mechanisms that determine how events begin, how they unfold, and how one event leads to another is relatively shallow. But before you stopped to consider what you actually know, you may not have appreciated how shallow it is.

We can’t possibly understand everything, and the sane among us don’t even try. We rely on abstract knowledge, vague and unanalyzed. We’ve all seen the exceptions—people who cherish detail and love to talk about it at great length, sometimes in fascinating ways. And we all have domains in which we are experts, in which we know a lot in exquisite detail. But on most subjects, we connect only abstract bits of information, and what we know is little more than a feeling of understanding we can’t really unpack. In fact, most knowledge is little more than a bunch of associations, high-level links between objects or people that aren’t broken down into detailed stories.

So why don’t we realize the depth of our ignorance? Why do we think we understand things deeply, that we have systematic webs of knowledge that make sense of everything, when the reality is so different? Why do we live in an illusion of understanding?

What Thinking Is For

To get a better sense of why this illusion is central to how we think, it helps to understand why we think. Thought could have evolved to serve several functions. The function of thought could be to represent the world—to construct a model in our heads that corresponds in critical ways to the way the world is. Or thought could be there to make language possible so we can communicate with others. Or thought could be for problem-solving or decision-making. Or maybe it evolved for a specific purpose such as building tools or showing off to potential mates. All of these ideas may have something to them, but thought surely evolved to serve a larger purpose, a purpose common to all these proposals: Thought is for action. Thinking evolved as an extension of the ability to act effectively; it evolved to make us better at doing what’s necessary to achieve our goals. Thought allows us to select from among a set of possible actions by predicting the effects of each action and by imagining how the world would be if we had taken different actions in the past.

One reason to believe that this is why we think is that action came before thought. Even the earliest organisms were capable of action. Single-celled organisms that arose early in the evolutionary cycle ate and moved and reproduced. They did things; they acted on the world and changed it. Evolution selected those organisms whose actions best supported their survival. And the organisms whose actions were most effective were the ones best tuned to the changing conditions of a complex world. If you’re an organism that sucks the blood of passing fauna, it’s great to be able to latch on to whatever brushes against you. But it’s even better to be able to tell whether the object brushing against you is a delicious rodent or bird, not a bloodless leaf blowing in the wind.

The best tools for identifying the appropriate action in a given circumstance are mental faculties that can process information. Visual systems must be able to do a fair amount of sophisticated processing to distinguish a rat from a leaf. Other mental processes are also critical for selecting the appropriate action. Memory can help indicate which actions have been most effective under similar conditions in the past, and reasoning can help predict what will happen under new conditions. The ability to think vastly increases the effectiveness of action. In that sense, thought is an extension of action.

Understanding how thought operates is not so simple. How do people engage in thinking for action? What mental faculties do people need to allow them to pursue their goals using memory and reason? We will see that humans specialize in reasoning about how the world works, about causality. Predicting the effects of action requires reasoning about how causes produce effects, and figuring out why something happened requires reasoning about which causes are likely to have produced an effect. This is what the mind is designed to do. Whether we are thinking about physical objects, social systems, other individuals, our pet dog—whatever—our expertise is in determining how actions and other causes produce effects. We know that kicking a ball will send it flying, but kicking a dog will cause pain. Our thought processes, our language, and our emotions are all designed to engage causal reasoning to help us to act in reasonable ways.

This makes human ignorance all the more surprising. If causality is so critical to selecting the best actions, why do individuals have so little detailed knowledge about how the world works? It’s because thought is masterful at extracting only what it needs and filtering out everything else. When you hear a sentence uttered, your speech recognition system goes to work extracting the gist, the underlying meaning of the utterance, and forgetting the specific words. When you encounter a complicated causal system, you similarly extract the gist and forget the details. If you’re someone who likes figuring out how things work, you might open up an old appliance on occasion, perhaps a coffee machine. If you do, then you don’t memorize the shape, color, and location of each individual part. Instead, you look for the major components and try to figure out how they are connected to one another so that you can answer big questions like how the water gets heated. If you’re like most people and you’re not interested in investigating the insides of a coffee machine, then you know even less detail about how it works. Your causal understanding is limited to only what you need to know: how to make the thing work (with any luck you’ve mastered that).

The mind is not built to acquire details about every individual object or situation. We learn from experience so that we can generalize to new objects and situations. The ability to act in a new context requires understanding only the deep regularities in the way the world works, not the superficial details.

The Community of Knowledge

We would not be such competent thinkers if we had to rely only on the limited knowledge stored in our heads and our facility for causal reasoning. The secret to our success is that we live in a world in which knowledge is all around us. It is in the things we make, in our bodies and workspaces, and in other people. We live in a community of knowledge.

We have access to huge amounts of knowledge that sit in other people’s heads: We have our friends and family who each have their little domains of expertise. We have experts that we can contact to, say, fix our dishwasher when it breaks down for the umpteenth time. We have professors and talking heads on television to inform us about events and how things work. We have books, and we have the richest source of information of all time at our fingertips, the Internet.

On top of that, we have things themselves. Sometimes we can fix an appliance or a bicycle by looking at it to see how it works. On occasion, what’s broken is obvious when we take a look (if only this were more common!). You might not know how a guitar works, but a couple of minutes playing with one, seeing what happens when the strings resonate and how their pitch changes when their lengths are changed, might be enough to give you at least a basic understanding of its operation. In that sense, knowledge of a guitar can be found in the guitar itself. There is no better way to discover a city than to travel around it. The city itself holds the knowledge about how it is laid out, where the interesting places to go are, and what you can see from various vantage points.

We have access to more knowledge today than ever before. Not only can we learn how things are made or how the universe came to be by watching TV, we can answer almost any factual question by typing a few characters on a keyboard and enlisting a search engine. We can frequently find the information we need in Wikipedia or somewhere else on the web. But the ability to access knowledge outside our own heads is not true only of life in the modern world.

There has always been what cognitive scientists like to call a division of cognitive labor. From the beginning of civilization, people have developed distinctive expertise within their group, clan, or society. They have become the local expert on agriculture, medicine, manufacturing, navigating, music, storytelling, cooking, hunting, fighting, or one of many other specialties. One individual may have some expertise in more than one skill, perhaps several, but never all, and never in every aspect of any one thing. No chef can cook all dishes. Though some are mighty impressive, no musician can play every instrument or every type of music. No one has ever been able to do everything.

So we collaborate. That’s a major benefit of living in social groups, to make it easy to share our skills and knowledge. It’s not surprising that we fail to identify what’s in our heads versus what’s in others’, because we’re generally—perhaps always—doing things that involve both. Whenever either of us washes the dishes, we thank heaven that someone knows how to make dish soap and someone else knows how to provide warm water from the faucet. We wouldn’t have a clue.

Sharing skills and knowledge is more sophisticated than it sounds. Human beings don’t merely make individual contributions to a project, like machines operating in an assembly line. Rather, we are able to work together, aware of others and what they are trying to accomplish. We pay attention together and we share goals. In the language of cognitive science, we share intentionality. This is a form of collaboration that you don’t see in other animals. We actually enjoy sharing our mind space with others. In one form, it’s called playing.

Our skulls may delimit the frontier of our brains, but they do not delimit the frontier of our knowledge. The mind stretches beyond the brain to include the body, the environment, and people other than oneself, so the study of the mind cannot be reduced to the study of the brain. Cognitive science is not the same as neuroscience.

Representing knowledge is hard, but representing it in a way that respects what you don’t know is very hard. To participate in a community of knowledge—that is to say, to engage in a world in which only some of the knowledge you have resides in your head—requires that you know what information is available, even when it is not stored in memory. Knowing what’s available is no mean feat. The separation between what’s inside your head and what’s outside of it must be seamless. Our minds need to be designed to treat information that resides in the external environment as continuous with the information that resides in our brains. Human beings sometimes underestimate how much they don’t know, but we do remarkably well overall. That we do is one of evolution’s greatest achievements.

You now have the background you need to understand the origin of the knowledge illusion. The nature of thought is to seamlessly draw on knowledge wherever it can be found, inside and outside of our own heads. We live under the knowledge illusion because we fail to draw an accurate line between what is inside and outside our heads. And we fail because there is no sharp line. So we frequently don’t know what we don’t know.

Why It Matters

Understanding the mind in this way can offer us improved ways of approaching our most complex problems. Recognizing the limits of our understanding should make us more humble, opening our minds to other people’s ideas and ways of thinking. It offers lessons about how to avoid things like bad financial decisions. It can enable us to improve our political system and help us assess how much reliance we should have on experts versus how much decision-making power should be given to individual voters.

This book is being written at a time of immense polarization on the American political scene. Liberals and conservatives find each other’s views repugnant, and as a result, Democrats and Republicans cannot find common ground or compromise. The U.S. Congress is unable to pass even benign legislation; the Senate is preventing the administration from making important judicial and administrative appointments merely because the appointments are coming from the other side.

One reason for this gridlock is that both politicians and voters don’t realize how little they understand. Whenever an issue is important enough for public debate, it is also complicated enough to be difficult to understand. Reading a newspaper article or two just isn’t enough. Social issues have complex causes and unpredictable consequences. It takes a lot of expertise to really understand the implications of a position, and even expertise may not be enough. Conflicts between, say, police and minorities cannot be reduced to simple fear or racism or even to both. Along with fear and racism, conflicts arise because of individual experiences and expectations, because of the dynamics of a specific situation, because of misguided training and misunderstandings. Complexity abounds. If everybody understood this, our society would likely be less polarized.

Instead of appreciating complexity, people tend to affiliate with one or another social dogma. Because our knowledge is enmeshed with that of others, the community shapes our beliefs and attitudes. It is so hard to reject an opinion shared by our peers that too often we don’t even try to evaluate claims based on their merits. We let our group do our thinking for us. Appreciating the communal nature of knowledge should make us more realistic about what’s determining our beliefs and values.
Posts: 817
Joined: Fri Aug 22, 2008 5:51 pm

Re: Caltech: The Mechanical Universe

Unread postby jacmac » Wed May 17, 2017 6:02 pm

I just read a shorter piece on the same topic.
They quote the above book.
That's What You Think by Elizabeth Kolbert.
The New Yorker, February 27, 2017
Posts: 470
Joined: Wed Dec 02, 2009 12:36 pm

Re: Caltech: The Mechanical Universe

Unread postby allynh » Thu May 18, 2017 1:03 pm

Wow! That's a fun article. More great books to check out. Thanks...

Why Facts Don’t Change Our Minds ... -our-minds
New discoveries about the human mind show the limitations of reason.

Elizabeth Kolbert, February 27, 2017 Issue
The vaunted human capacity for reason may have more to do with winning arguments than with thinking straight. Illustration by Gérard DuBois
In 1975, researchers at Stanford invited a group of undergraduates to take part in a study about suicide. They were presented with pairs of suicide notes. In each pair, one note had been composed by a random individual, the other by a person who had subsequently taken his own life. The students were then asked to distinguish between the genuine notes and the fake ones.

Some students discovered that they had a genius for the task. Out of twenty-five pairs of notes, they correctly identified the real one twenty-four times. Others discovered that they were hopeless. They identified the real note in only ten instances.

As is often the case with psychological studies, the whole setup was a put-on. Though half the notes were indeed genuine—they’d been obtained from the Los Angeles County coroner’s office—the scores were fictitious. The students who’d been told they were almost always right were, on average, no more discerning than those who had been told they were mostly wrong.

In the second phase of the study, the deception was revealed. The students were told that the real point of the experiment was to gauge their responses to thinking they were right or wrong. (This, it turned out, was also a deception.) Finally, the students were asked to estimate how many suicide notes they had actually categorized correctly, and how many they thought an average student would get right. At this point, something curious happened. The students in the high-score group said that they thought they had, in fact, done quite well—significantly better than the average student—even though, as they’d just been told, they had zero grounds for believing this. Conversely, those who’d been assigned to the low-score group said that they thought they had done significantly worse than the average student—a conclusion that was equally unfounded.

“Once formed,” the researchers observed dryly, “impressions are remarkably perseverant.”

A few years later, a new set of Stanford students was recruited for a related study. The students were handed packets of information about a pair of firefighters, Frank K. and George H. Frank’s bio noted that, among other things, he had a baby daughter and he liked to scuba dive. George had a small son and played golf. The packets also included the men’s responses on what the researchers called the Risky-Conservative Choice Test. According to one version of the packet, Frank was a successful firefighter who, on the test, almost always went with the safest option. In the other version, Frank also chose the safest option, but he was a lousy firefighter who’d been put “on report” by his supervisors several times. Once again, midway through the study, the students were informed that they’d been misled, and that the information they’d received was entirely fictitious. The students were then asked to describe their own beliefs. What sort of attitude toward risk did they think a successful firefighter would have? The students who’d received the first packet thought that he would avoid it. The students in the second group thought he’d embrace it.

Even after the evidence “for their beliefs has been totally refuted, people fail to make appropriate revisions in those beliefs,” the researchers noted. In this case, the failure was “particularly impressive,” since two data points would never have been enough information to generalize from.

The Stanford studies became famous. Coming from a group of academics in the nineteen-seventies, the contention that people can’t think straight was shocking. It isn’t any longer. Thousands of subsequent experiments have confirmed (and elaborated on) this finding. As everyone who’s followed the research—or even occasionally picked up a copy of Psychology Today—knows, any graduate student with a clipboard can demonstrate that reasonable-seeming people are often totally irrational. Rarely has this insight seemed more relevant than it does right now. Still, an essential puzzle remains: How did we come to be this way?

In a new book, “The Enigma of Reason” (Harvard), the cognitive scientists Hugo Mercier and Dan Sperber take a stab at answering this question. Mercier, who works at a French research institute in Lyon, and Sperber, now based at the Central European University, in Budapest, point out that reason is an evolved trait, like bipedalism or three-color vision. It emerged on the savannas of Africa, and has to be understood in that context.

Stripped of a lot of what might be called cognitive-science-ese, Mercier and Sperber’s argument runs, more or less, as follows: Humans’ biggest advantage over other species is our ability to coöperate. Coöperation is difficult to establish and almost as difficult to sustain. For any individual, freeloading is always the best course of action. Reason developed not to enable us to solve abstract, logical problems or even to help us draw conclusions from unfamiliar data; rather, it developed to resolve the problems posed by living in collaborative groups.

“Reason is an adaptation to the hypersocial niche humans have evolved for themselves,” Mercier and Sperber write. Habits of mind that seem weird or goofy or just plain dumb from an “intellectualist” point of view prove shrewd when seen from a social “interactionist” perspective.

Consider what’s become known as “confirmation bias,” the tendency people have to embrace information that supports their beliefs and reject information that contradicts them. Of the many forms of faulty thinking that have been identified, confirmation bias is among the best catalogued; it’s the subject of entire textbooks’ worth of experiments. One of the most famous of these was conducted, again, at Stanford. For this experiment, researchers rounded up a group of students who had opposing opinions about capital punishment. Half the students were in favor of it and thought that it deterred crime; the other half were against it and thought that it had no effect on crime.

The students were asked to respond to two studies. One provided data in support of the deterrence argument, and the other provided data that called it into question. Both studies—you guessed it—were made up, and had been designed to present what were, objectively speaking, equally compelling statistics. The students who had originally supported capital punishment rated the pro-deterrence data highly credible and the anti-deterrence data unconvincing; the students who’d originally opposed capital punishment did the reverse. At the end of the experiment, the students were asked once again about their views. Those who’d started out pro-capital punishment were now even more in favor of it; those who’d opposed it were even more hostile.

If reason is designed to generate sound judgments, then it’s hard to conceive of a more serious design flaw than confirmation bias. Imagine, Mercier and Sperber suggest, a mouse that thinks the way we do. Such a mouse, “bent on confirming its belief that there are no cats around,” would soon be dinner. To the extent that confirmation bias leads people to dismiss evidence of new or underappreciated threats—the human equivalent of the cat around the corner—it’s a trait that should have been selected against. The fact that both we and it survive, Mercier and Sperber argue, proves that it must have some adaptive function, and that function, they maintain, is related to our “hypersociability.”

Mercier and Sperber prefer the term “myside bias.” Humans, they point out, aren’t randomly credulous. Presented with someone else’s argument, we’re quite adept at spotting the weaknesses. Almost invariably, the positions we’re blind about are our own.

A recent experiment performed by Mercier and some European colleagues neatly demonstrates this asymmetry. Participants were asked to answer a series of simple reasoning problems. They were then asked to explain their responses, and were given a chance to modify them if they identified mistakes. The majority were satisfied with their original choices; fewer than fifteen per cent changed their minds in step two.

In step three, participants were shown one of the same problems, along with their answer and the answer of another participant, who’d come to a different conclusion. Once again, they were given the chance to change their responses. But a trick had been played: the answers presented to them as someone else’s were actually their own, and vice versa. About half the participants realized what was going on. Among the other half, suddenly people became a lot more critical. Nearly sixty per cent now rejected the responses that they’d earlier been satisfied with.

This lopsidedness, according to Mercier and Sperber, reflects the task that reason evolved to perform, which is to prevent us from getting screwed by the other members of our group. Living in small bands of hunter-gatherers, our ancestors were primarily concerned with their social standing, and with making sure that they weren’t the ones risking their lives on the hunt while others loafed around in the cave. There was little advantage in reasoning clearly, while much was to be gained from winning arguments.

Among the many, many issues our forebears didn’t worry about were the deterrent effects of capital punishment and the ideal attributes of a firefighter. Nor did they have to contend with fabricated studies, or fake news, or Twitter. It’s no wonder, then, that today reason often seems to fail us. As Mercier and Sperber write, “This is one of many cases in which the environment changed too quickly for natural selection to catch up.”

Steven Sloman, a professor at Brown, and Philip Fernbach, a professor at the University of Colorado, are also cognitive scientists. They, too, believe sociability is the key to how the human mind functions or, perhaps more pertinently, malfunctions. They begin their book, “The Knowledge Illusion: Why We Never Think Alone” (Riverhead), with a look at toilets.

Virtually everyone in the United States, and indeed throughout the developed world, is familiar with toilets. A typical flush toilet has a ceramic bowl filled with water. When the handle is depressed, or the button pushed, the water—and everything that’s been deposited in it—gets sucked into a pipe and from there into the sewage system. But how does this actually happen?

In a study conducted at Yale, graduate students were asked to rate their understanding of everyday devices, including toilets, zippers, and cylinder locks. They were then asked to write detailed, step-by-step explanations of how the devices work, and to rate their understanding again. Apparently, the effort revealed to the students their own ignorance, because their self-assessments dropped. (Toilets, it turns out, are more complicated than they appear.)

Sloman and Fernbach see this effect, which they call the “illusion of explanatory depth,” just about everywhere. People believe that they know way more than they actually do. What allows us to persist in this belief is other people. In the case of my toilet, someone else designed it so that I can operate it easily. This is something humans are very good at. We’ve been relying on one another’s expertise ever since we figured out how to hunt together, which was probably a key development in our evolutionary history. So well do we collaborate, Sloman and Fernbach argue, that we can hardly tell where our own understanding ends and others’ begins.

“One implication of the naturalness with which we divide cognitive labor,” they write, is that there’s “no sharp boundary between one person’s ideas and knowledge” and “those of other members” of the group.

This borderlessness, or, if you prefer, confusion, is also crucial to what we consider progress. As people invented new tools for new ways of living, they simultaneously created new realms of ignorance; if everyone had insisted on, say, mastering the principles of metalworking before picking up a knife, the Bronze Age wouldn’t have amounted to much. When it comes to new technologies, incomplete understanding is empowering.

Where it gets us into trouble, according to Sloman and Fernbach, is in the political domain. It’s one thing for me to flush a toilet without knowing how it operates, and another for me to favor (or oppose) an immigration ban without knowing what I’m talking about. Sloman and Fernbach cite a survey conducted in 2014, not long after Russia annexed the Ukrainian territory of Crimea. Respondents were asked how they thought the U.S. should react, and also whether they could identify Ukraine on a map. The farther off base they were about the geography, the more likely they were to favor military intervention. (Respondents were so unsure of Ukraine’s location that the median guess was wrong by eighteen hundred miles, roughly the distance from Kiev to Madrid.)

Surveys on many other issues have yielded similarly dismaying results. “As a rule, strong feelings about issues do not emerge from deep understanding,” Sloman and Fernbach write. And here our dependence on other minds reinforces the problem. If your position on, say, the Affordable Care Act is baseless and I rely on it, then my opinion is also baseless. When I talk to Tom and he decides he agrees with me, his opinion is also baseless, but now that the three of us concur we feel that much more smug about our views. If we all now dismiss as unconvincing any information that contradicts our opinion, you get, well, the Trump Administration.

“This is how a community of knowledge can become dangerous,” Sloman and Fernbach observe. The two have performed their own version of the toilet experiment, substituting public policy for household gadgets. In a study conducted in 2012, they asked people for their stance on questions like: Should there be a single-payer health-care system? Or merit-based pay for teachers? Participants were asked to rate their positions depending on how strongly they agreed or disagreed with the proposals. Next, they were instructed to explain, in as much detail as they could, the impacts of implementing each one. Most people at this point ran into trouble. Asked once again to rate their views, they ratcheted down the intensity, so that they either agreed or disagreed less vehemently.

Sloman and Fernbach see in this result a little candle for a dark world. If we—or our friends or the pundits on CNN—spent less time pontificating and more trying to work through the implications of policy proposals, we’d realize how clueless we are and moderate our views. This, they write, “may be the only form of thinking that will shatter the illusion of explanatory depth and change people’s attitudes.”

One way to look at science is as a system that corrects for people’s natural inclinations. In a well-run laboratory, there’s no room for myside bias; the results have to be reproducible in other laboratories, by researchers who have no motive to confirm them. And this, it could be argued, is why the system has proved so successful. At any given moment, a field may be dominated by squabbles, but, in the end, the methodology prevails. Science moves forward, even as we remain stuck in place.

In “Denying to the Grave: Why We Ignore the Facts That Will Save Us” (Oxford), Jack Gorman, a psychiatrist, and his daughter, Sara Gorman, a public-health specialist, probe the gap between what science tells us and what we tell ourselves. Their concern is with those persistent beliefs which are not just demonstrably false but also potentially deadly, like the conviction that vaccines are hazardous. Of course, what’s hazardous is not being vaccinated; that’s why vaccines were created in the first place. “Immunization is one of the triumphs of modern medicine,” the Gormans note. But no matter how many scientific studies conclude that vaccines are safe, and that there’s no link between immunizations and autism, anti-vaxxers remain unmoved. (They can now count on their side—sort of—Donald Trump, who has said that, although he and his wife had their son, Barron, vaccinated, they refused to do so on the timetable recommended by pediatricians.)

The Gormans, too, argue that ways of thinking that now seem self-destructive must at some point have been adaptive. And they, too, dedicate many pages to confirmation bias, which, they claim, has a physiological component. They cite research suggesting that people experience genuine pleasure—a rush of dopamine—when processing information that supports their beliefs. “It feels good to ‘stick to our guns’ even if we are wrong,” they observe.

The Gormans don’t just want to catalogue the ways we go wrong; they want to correct for them. There must be some way, they maintain, to convince people that vaccines are good for kids, and handguns are dangerous. (Another widespread but statistically insupportable belief they’d like to discredit is that owning a gun makes you safer.) But here they encounter the very problems they have enumerated. Providing people with accurate information doesn’t seem to help; they simply discount it. Appealing to their emotions may work better, but doing so is obviously antithetical to the goal of promoting sound science. “The challenge that remains,” they write toward the end of their book, “is to figure out how to address the tendencies that lead to false scientific belief.”

“The Enigma of Reason,” “The Knowledge Illusion,” and “Denying to the Grave” were all written before the November election. And yet they anticipate Kellyanne Conway and the rise of “alternative facts.” These days, it can feel as if the entire country has been given over to a vast psychological experiment being run either by no one or by Steve Bannon. Rational agents would be able to think their way to a solution. But, on this matter, the literature is not reassuring.

Re: Caltech: The Mechanical Universe

Unread postby allynh » Fri May 26, 2017 2:40 pm

Way up on page one I mentioned the Nova program, _Monster of the Milky Way_. The episode shows that there is no black hole at the center, yet they claim that it is there.

This is a TED talk Andrea Ghez did in 2009.

The hunt for a supermassive black hole ... black_hole

At the ten minute mark she shows the picture improving because of adaptive optics, and shows the motion of the stars at the center. Notice, you can see the stars moving around a "center", but you cannot see anything there. Since a black hole is "invisible," that's what they say must be there. Problem is, a black hole would have an accretion disk that would be visible. They claim the black hole is on a "diet" and thus not visible. How convenient.

- At the 12 minute mark she says that there is no "alternative" to explaining what they see other than a black hole.

- Then just before 13 minutes she points out that everything they have observed at the center of the galaxy is inconsistent with what they think should happen around a black hole.

- At 14 minutes she says they are trying to resolve this contradiction.

- At 15:15 she shows the animation of the stars moving. While the animation is running she sums up her points, which basically contradict what you can see.

This is a lecture by Andrea Ghez, January 2017.

The Monster Black Hole at the Center of the Milky Way

- At minute 27 she talks about when she started in 1994, and was denied access to telescope time. She was told her technique would not work, and that even if it did she wouldn't see anything.

- At 30 minutes she has an animation of the adaptive optics.

- At 33 minutes she mentions the sodium lasers that they use to read the atmosphere. They create artificial guide stars using the sodium layer trapped at about 90 km altitude. That in itself is deeply disturbing.

- At 34 minutes she shows the animation of the stars moving at the center.

- At 41:30 she mentions that they have predictions of what will happen, but every prediction is inconsistent with their observations. This comes back to _The Knowledge Illusion_ that I mentioned up thread.

- At 44:45 she starts the bigger animation of star motion, showing both young and old stars. According to predictions, this mix should not exist.

Think about it. She has spent her entire career looking for something that does not exist, convincing herself that she is on the right track, and completely missing the actual implications of what she has observed. She's looking right at the answer, and can't see it. If she knew the Electric Universe theories, she would have the real answer.

This is the new telescope they are trying to build.

Thirty Meter Telescope

This is the Nova link to see the original episode.

Monster of the Milky Way

This is a version on YouTube for those who can't access the PBS site.

Universe Space Documentary - Monster of the Milky Way , Science Documentary


Monster of the Milky Way ... y-way.html

Galactic Explorer Andrea Ghez

