Online scientific discourse is broken and it can be fixed

Has science taken a wrong turn? If so, what corrections are needed? Chronicles of scientific misbehavior. The role of heretic-pioneers and forbidden questions in the sciences. Is peer review working? The perverse "consensus of leading scientists." Good public relations versus good science.

Moderators: bboyer, MGmirkin

Posts: 248
Joined: Sun Mar 16, 2008 8:20 pm

Re: Online scientific discourse is broken and it can be fixed

Unread post by pln2bz » Wed Jan 15, 2014 10:59 am

Study Demonstrates How We Support Our False Beliefs
Aug 21, 2009

In a study published in the most recent issue of the journal Sociological Inquiry, sociologists from four major research institutions focus on one of the most curious aspects of the 2004 presidential election: the strength and resilience of the belief among many Americans that Saddam Hussein was linked to the terrorist attacks of 9/11.

Although this belief influenced the 2004 election, they claim it did not result from pro-Bush propaganda, but from an urgent need by many Americans to seek justification for a war already in progress.

The findings may illuminate reasons why some people form false beliefs about the pros and cons of health-care reform or regarding President Obama's citizenship, for example.

The study, "There Must Be a Reason: Osama, Saddam and Inferred Justification," calls such unsubstantiated beliefs "a serious challenge to democratic theory and practice" and considers how and why this belief was maintained by so many voters for so long in the absence of supporting evidence.

Co-author Steven Hoffman, Ph.D., visiting assistant professor of sociology at the University at Buffalo, says, "Our data shows substantial support for a cognitive theory known as 'motivated reasoning,' which suggests that rather than search rationally for information that either confirms or disconfirms a particular belief, people actually seek out information that confirms what they already believe.

"In fact," he says, "for the most part people completely ignore contrary information.

"The study demonstrates voters' ability to develop elaborate rationalizations based on faulty information," he explains.

While numerous scholars have blamed a campaign of false information and innuendo from the Bush administration, this study argues that the primary cause of misperception in the 9/11-Saddam Hussein case was not the presence or absence of accurate data but a respondent's desire to believe in particular kinds of information.

"The argument here is that people get deeply attached to their beliefs," Hoffman says.

"We form emotional attachments that get wrapped up in our personal identity and sense of morality, irrespective of the facts of the matter. The problem is that this notion of 'motivated reasoning' has only been supported with experimental results in artificial settings. We decided it was time to see if it held up when you talk to actual voters in their homes, workplaces, restaurants, offices and other deliberative settings."

The survey and interview-based study was conducted by Hoffman, Monica Prasad, Ph.D., assistant professor of sociology at Northwestern University; Northwestern graduate students Kieren Bezila and Kate Kindleberger; Andrew Perrin, Ph.D., associate professor of sociology, University of North Carolina, Chapel Hill; and UNC graduate students Kim Manturuk, Andrew R. Payton and Ashleigh Smith Powers (now an assistant professor of political science and psychology at Millsaps College).

The study addresses what it refers to as a "serious challenge to democratic theory and practice that results when citizens with incorrect information cannot form appropriate preferences or evaluate the preferences of others."

One of the most curious "false beliefs" of the 2004 presidential election, they say, was a strong and resilient belief among many Americans that Saddam Hussein was linked to the terrorist attacks of Sept. 11, 2001.

Hoffman says that over the course of the 2004 presidential campaign, several polls showed that majorities of respondents believed that Saddam Hussein was either partly or largely responsible for the 9/11 attacks, a percentage that declined very slowly, dipping below 50 percent only in late 2003.

"This misperception that Hussein was responsible for the Twin Tower terrorist attacks was very persistent, despite all the evidence suggesting that no link existed," Hoffman says.

The study team employed a technique called "challenge interviews" on a sample of voters who reported believing in a link between Saddam and 9/11. The researchers presented the available evidence of the link, along with the evidence that there was no link, and then pushed respondents to justify their opinion on the matter. For all but one respondent, the overwhelming evidence that there was no link left no impact on their arguments in support of the link.

One unexpected pattern that emerged from the different justifications that subjects offered for continuing to believe in the validity of the link was that it helped citizens make sense of the Bush Administration's decision to go to war against Iraq.

"We refer to this as 'inferred justification,'" says Hoffman, "because for these voters, the sheer fact that we were engaged in war led to a post-hoc search for a justification for that war.

"People were basically making up justifications for the fact that we were at war," he says.

"One of the things that is really interesting about this, from both the perspective of voting patterns but also for democratic theory more generally," Hoffman says, "is that we did not find that people were being duped by a campaign of innuendo so much as they were actively constructing links and justifications that did not exist.

"They wanted to believe in the link," he says, "because it helped them make sense of a current reality. So voters' ability to develop elaborate rationalizations based on faulty information, whether we think that is good or bad for democratic practice, does at least demonstrate an impressive form of creativity."

Posts: 800
Joined: Thu Mar 13, 2008 11:14 pm

Re: Online scientific discourse is broken and it can be fixed

Unread post by Plasmatic » Wed Jan 15, 2014 11:12 am

Chris, I am working on a response and many of your most recent posts will prove to be excellent foils for my points. Looking "in the mirror" indeed!.......
"Logic is the art of non-contradictory identification"......"I am, therefore I'll think"
Ayn Rand
"It is the mark of an educated mind to be able to entertain a thought without accepting it."

Posts: 248
Joined: Sun Mar 16, 2008 8:20 pm

Re: Online scientific discourse is broken and it can be fixed

Unread post by pln2bz » Wed Jan 15, 2014 11:52 pm

My hope is that you will explain how these ideas will lead to a scientific social network which will help the world to deal with complexity and information overload.

Posts: 248
Joined: Sun Mar 16, 2008 8:20 pm

Re: Online scientific discourse is broken and it can be fixed

Unread post by pln2bz » Thu Jan 16, 2014 10:54 am

Getting at the Root of the Visualization Problem in Scientific Discourse
By Applying a Model for How the Mind Works, In Practice

Just as people shouldn't feel the need to prove the central claims of the Electric Universe while they are simply learning what it is, in order to redesign people's interface to knowledge, we need to learn the most contemporary and detailed model for how minds generally work. The world's leading authority on this topic, as best I can tell, appears to be Daniel Kahneman, and the book we want to focus on is unfortunately 500+ pages ...
So, when you run into this situation, you really need to head over to YouTube and search for videos. We are not trying to establish lines of argumentation which prove Kahneman's thesis here. We're not practicing philosophy. We are designing an interface, based upon the models that are created by scientists. What we want to do is to learn the basics of his model as quickly as possible. Then, not only can we start checking to see whether it corresponds to our own personal observations, but we can also attempt to apply it to this process of decision-making which we are trying to understand.

In the process of learning the model, I personally expect the value of this approach to become self-evident: scientific thinkers are prone to imagining that the mind is purely rational, and yet we can observe that people frequently behave in an imperfectly rational manner. If we can't clarify, in additional detail, how this system actually works, then our attempt to design the site will necessarily be more art than science. With a detailed model in our heads, we can at least formulate meaningful hypotheses about how people will react to the experience of two conflicting scientific worldviews, and to uncertainty in light of either incomplete or too much information.

What scientific thinkers appear not to realize is that even though most marketers probably don't understand the level of detail that I'm about to present on the subconscious, if they are good marketers, they've nevertheless constructed some sort of tacit, gut-level instinct for how consumers approach decision-making. Effective marketers know the questions which are racing through your head at the point of decision, and they design the packaging or website interface to answer those questions. It all happens very, very fast in the consumer's mind. For most product categories, we are speaking of 0.8 to 10 seconds of thought, and it differs for each product category. Bleach, for example, tends to be at the very bottom of this range: people will spend only 0.8 seconds evaluating which bleach to buy! If anybody believes that a rational decision is being made here, then they're not fully appreciating how little time that is, nor what rational thought IS.

What I'm going to propose is that if the computer is to extend our knowledge in a seamless manner through some system of visualizing knowledge, then that process of visualization starts for the system users at the level of how the mind works … Not how we think the mind should work if we were perfectly rational thinkers, but rather, how it actually works, in practice … Not at some rational, epistemological characterization of a problem space based upon some topic (like plasma physics), but rather at that specific place in some specific press release or other content at which a conversation actually begins, in practice. From that starting point, and by contrast with rational, scientific thinking, certain biases will tend to observationally emerge (if we are looking for them). And this is an important part of what Kahneman will add to our own efforts: We can take these biases which he investigates, and if we wish, use them as the basis for informing the visualization of this human-to-knowledge interface in a way that will at least put people into the mindset of the biases which they should be trying to avoid.

The question which for me remains is how the user's activity might interface with such a system -- for the thinker is not normally looking AT these biases. They are obviously looking THROUGH them, AT something which matters to them. And so, once again, we have different levels of potential focus, and in this instance each level is a different type of starting point. In other words, there are at least three separate "roots" at play here:
  • There is the epistemological root and the objective knowledge it seeks to understand.
  • There is the root of the annotation in the press release, which is designed to focus the group upon some particular claim, which will act as the start for discourse.
  • And there is the psychological root, which Kahneman will suggest is an amalgamation of two separate "personalities" which he calls system 1 and system 2.
What will shortly become clear, through observing Kahneman's model, is the evolutionary need for the subconscious. The problem of the rational mind is that rational thought is inherently slow. It cannot, even in theory, support our survival needs.

So, it should be apparent that the most foundational root -- the one which is motivating us to actually respond in some manner -- is the claim itself. Once we are focused upon some aspect of this claim, then the subconscious is going to swing into action right away with a set of "answers" which will arrive with biases. So, we really have no choice but to immediately deal with the subconscious' tendency towards bias within the way in which we visualize our annotations and the discussions that they lead to. The rational mind is a much slower and reflective process, and so epistemology is actually the last of the three roots here -- in terms of the chronology of our thought processes -- which will be driving visualization.
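To make the three "roots" concrete, here is a minimal, purely illustrative sketch of the data model such a site might start from. Everything here is hypothetical (the class names, the bias tags, the URLs are mine, not part of any existing system): a claim is the epistemological root, an annotation anchors that claim in a specific piece of content, and bias tags stand in for the System 1 tendencies the visualization would surface to the user.

```python
# Illustrative sketch (all names hypothetical): a minimal data model for the
# three "roots" described above -- the epistemological claim, the annotation
# anchored in a press release, and the psychological layer, represented here
# as bias tags that the interface could surface to prompt System 2 reflection.
from dataclasses import dataclass, field


@dataclass
class Claim:
    """The epistemological root: a specific assertion under discussion."""
    text: str


@dataclass
class Annotation:
    """The discourse root: anchors a claim to a location in source content."""
    claim: Claim
    source_url: str
    excerpt: str
    # Bias tags (e.g. "motivated reasoning") that the visualization could
    # show alongside the discussion this annotation starts.
    bias_tags: list = field(default_factory=list)


def biases_to_surface(annotations):
    """Collect the distinct bias tags across a discussion, in first-seen order."""
    seen = []
    for ann in annotations:
        for tag in ann.bias_tags:
            if tag not in seen:
                seen.append(tag)
    return seen


claim = Claim("Saddam Hussein was linked to the 9/11 attacks")
a1 = Annotation(claim, "https://example.org/press-release", "...linked to...",
                bias_tags=["motivated reasoning", "inferred justification"])
a2 = Annotation(claim, "https://example.org/poll-report", "...majorities believed...",
                bias_tags=["confirmation bias", "motivated reasoning"])
print(biases_to_surface([a1, a2]))
# → ['motivated reasoning', 'inferred justification', 'confirmation bias']
```

The design choice matches the chronology argued above: the annotation (not the topic taxonomy) is the entry point, and the bias layer is attached to it directly so the subconscious tendencies can be addressed at the moment discourse begins.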

Okay, so it may take a couple of days, but I'm going to transcribe Kahneman's YouTube explanation for his model next, and emphasize the points which are most relevant for our own problem of creating a scientific social network …

Posts: 248
Joined: Sun Mar 16, 2008 8:20 pm

Re: Online scientific discourse is broken and it can be fixed

Unread post by pln2bz » Thu Jan 16, 2014 1:54 pm

This was just posted to physorg …
What you think is right may actually be wrong – here's why
6 hours ago by Peter Ellerton, The Conversation

We like to think that we reach conclusions by reviewing facts, weighing evidence and analysing arguments. But this is not how humans usually operate, particularly when decisions are important or need to be made quickly.

What we usually do is arrive at a conclusion independently of conscious reasoning and then, and only if required, search for reasons as to why we might be right.

The first process, drawing a conclusion from evidence or facts, is called inferring; the second process, searching for reasons as to why we might believe something to be true, is called rationalizing.

Rationalise vs infer

That we rationalise more than we infer seems counter-intuitive, or at least uncomfortable, to a species that prides itself on its ability to reason, but it is borne out by the work of many researchers, including the US psychologist and Nobel Laureate Daniel Kahneman (most recently in his book Thinking, Fast and Slow).

We tend to prefer conclusions that fit our existing world-view, and that don't require us to change a pleasant and familiar narrative. We are also more inclined to accept these conclusions, intuitively leaping to them when they are presented, and to offer resistance to conclusions that require us to change or seriously examine existing beliefs.

There are many ways in which our brains help us to do this.

Consider global warming

Is global warming too difficult to understand? Your brain makes a substitution for you: what do you think of environmentalists? It then transfers that (often emotional) impression, positive or negative, to the issue of global warming and presents a conclusion to you in sync with your existing views.

Your brain also helps to make sense of situations in which it has minimal data to work with by creating associations between pieces of information.

If we hear the words "refugee" and "welfare" together, we cannot help but weave a narrative that makes some sort of coherent story (what Kahneman calls associative coherence). The more we hear this, the more familiar and ingrained the narrative. Indeed, the process of creating a coherent narrative has been shown to be more convincing to people than facts, even when the facts behind the narrative are shown to be wrong (understood as the perseverance of social theories and involved in the Backfire Effect).

Now, if you are a politician or a political advisor, knowing this sort of thing can give you a powerful tool. It is far more effective to create, modify or reinforce particular narratives that fit particular worldviews, and then give people reasons as to why they may be true, than it is to provide evidence and ask people to come to their own conclusions.

It is easier to help people rationalise than it is to ask them to infer. More plainly, it is easier to lay down a path for people to follow than it is to allow them to find their own. Happily for politicians, this is what our brains like doing.

How politicians frame issues

This can be done in two steps. The first is to frame an issue in a way that reinforces or modifies a particular perspective. The cognitive scientist George Lakoff highlighted the use of the phrase "tax relief" by the American political right in the 1990s.

Consider how this positions any debate around taxation levels. Rather than taxes being a "community contribution", the word "relief" suggests a burden that should be lifted, an unfair load that we carry, perhaps beyond our ability to bear.

The secret, and success, of this campaign was to get both the opposing parties and the media to use this language, hence immediately biasing any discussion.

Interestingly, it was also an initiative of the American Republican party to rephrase the issue of "global warming" into one of "climate change", which seemed more benign at the time.

Immigration becomes security

In recent years we have seen immigration as an issue disappear; it is now framed almost exclusively as an issue of "national security". All parties and the media now talk about it in this language.

Once the issue is appropriately framed, substitution and associations can be made for us. Talk of national security allows us to talk about borders, which may be porous, or even crumbling. This evokes emotional reactions that can be suitably manipulated.

Budgets can be "in crisis" or in "emergency" conditions, suggesting the need for urgent intervention, or rescue missions. Once such positions are established, all that is needed are some reasons to believe them.

The great thing about rationalisation is that we get to select the reasons we want – that is, those that will support our existing conclusions. Our confirmation bias, a tendency to notice more easily those reasons or examples that confirm our existing ideas, selects just those reasons that suit our purpose. The job of the politician, of course, is to provide them.

Kahneman notes that the more familiar a statement or image, the more it is accepted. It is the reason that messages are repeated ad nauseam, and themes are paraphrased and recycled in every media appearance. Pretty soon, they seem like our own.

How to think differently

So what does this mean for a democracy in which citizens need to be independent thinkers and autonomous actors? Well, it shows that the onus is not just on politicians to change their behaviour (after all, one can hardly blame them for doing what works), but also on us to continually question our own positions and judgements, to test ourselves by examining our beliefs and recognising rationalisation when we engage in it.

More than this, it means public debate, through the media in particular, needs to challenge preconceptions and resist the trend to simple assertion. We are what we are, but that doesn't mean we can't work better with it.

Posts: 800
Joined: Thu Mar 13, 2008 11:14 pm

Re: Online scientific discourse is broken and it can be fixed

Unread post by Plasmatic » Thu Jan 16, 2014 10:34 pm

The first problem in physics is to choose the correct concepts to apply to our observations.
Wal Thornhill ... sis-again/

Let's review:

A recurring theme in this thread is the question of what one ought to "focus on" in scientific discourse. My post asking "CONCEPTS; WHO NEEDS THEM?" was intended to demonstrate how the epistemic method one uses in concept formation determines what one designates as an essential characteristic of the existents one is focusing on/isolating from the surrounding context one is observing. The way one does this was shown to affect both how one classifies/integrates one's observations into a category of words, and consequently how one uses that categorization in a sentence. I used Chris' own statements/usage as a "test case" to evaluate how he categorized the concept "creative". That is, the way he communicated was a consequence of the method he employed in his cognitive processing of the world. I concluded that, based on his usage, he was trying to integrate two different concepts into one, and that this was causing him to formulate invalid propositions and to speak equivocally about creativity and the cognitive fact of the subconscious.

Now, I will use Chris' own usage of the concept subconscious as another "test case" for evaluating his integration of that concept. While I do this, I'm going to demonstrate broader ironic facts relevant both to the exchanges between Chris and me, and to many of Chris' own words in this thread.

Chris said:
Plasmatic, my definition does not at all deviate from the modern conventional view of the subconscious. Subconscious simply means below your ability to perceive it, and if you are not willing to accept that there exists a part of your brain which operates below your rational awareness, then the burden is really upon you to explain who or what is driving your car when your rational mind is occupied with thoughts other than driving? I think it's plain to see that there are very complex decisions being made in that specific situation (and many others), and that those decisions are made even when your rational focus wanders away from that which you are doing.....Part of the reason why I am not fully engaging this conversation is that I'm not quite sure what sort of gap exists between modern scientific culture and objectivism. I do see that modern scientific culture seems to reflect the general view (apparently held by objectivists) that the subconscious plays no important role in scientific practice, but I am so far very unimpressed by this notion that we can explain everything in terms of epistemology. Feel free to post that which you feel is important, but I will ask that you try harder to write for comprehension, and to the point....nobody can take the red pill for you. If you simply observe the alleged behavior of the subconscious, you'll see that it is difficult to observe. So, part of the journey here is whether or not you, personally, are willing to try to observe it, for yourself. That's not a conversation for you and me. That's between you and you ...

Chris, it's clear from the above that you have concluded, from the view that the subconscious is imperceptible, that this somehow makes it irrational, and this is why you see the concept "irrational" as essential to the concept subconscious. Likewise, you have categorized "awareness" as essential to the concept rational.

It's also clear that you have categorized your usage in accordance with the consensus of the "modern conventional view", and apparently you think that this is the type of situation the "onus of proof" principle is generalized from (we'll get to that).... You then go on to state that you are "so far very unimpressed" with "the general view" seemingly held by "modern scientific culture" about the subconscious....

If electrons are imperceptible, then does this mean that they are "irrational"?

If one can become aware of their own bias, then does that mean that bias is rational?

If you are unimpressed with "the general view" seemingly held by "modern scientific culture" about the subconscious, then how can your definition not "at all deviate from the modern conventional view of the subconscious"?

If your view of the subconscious of "consumers" is part of the "science of mind"- "psychology and sociology", then is science a different category than the "modern conventional view"? Unless you think they are separate categories....(?)

If science is a separate category/concept from "the modern conventional view", then how is the consensus view of the subconscious, which you differ from "not at all", part of "the modern conventional view"?

If the subconscious is imperceptible, then how can one "observe" its behavior?

You have made more package deals here, and it has clearly caused you to equivocate. The question is why? What is the cause of these mis-integrations? How does this relate to many of the things you've posted recently? Can you use these discussions as "test cases" for your own quest here? Do you see how the method you use to conceptualize affects the propositions you formulate your arguments with? This will in turn affect any models you construct into worldviews! At each level the invalid concepts will show up, along with the invalid propositions that are predicated as a result of accepting these categorizations. But they are effects of a root cause: an invalid method of concept formation.
The organization of concepts into propositions, and the wider principles of language—as well as the further problems of epistemology—are outside the scope of this work, which is concerned only with the nature of concepts. But a few aspects of these issues must be indicated.
Since concepts, in the field of cognition, perform a function similar to that of numbers in the field of mathematics, the function of a proposition is similar to that of an equation: it applies conceptual abstractions to a specific problem.
A proposition, however, can perform this function only if the concepts of which it is composed have precisely defined meanings. If, in the field of mathematics, numbers had no fixed, firm values, if they were mere approximations determined by the mood of their users—so that "5," for instance, could mean five in some calculations, but six-and-a-half or four-and-three-quarters in others, according to the users' "convenience"—there would be no such thing as the science of mathematics.
Yet this is the manner in which most people use concepts, and are taught to do so.
Above the first-level abstractions of perceptual concretes, most people hold concepts as loose approximations, without firm definitions, clear meanings or specific referents; and the greater a concept's distance from the perceptual level, the vaguer its content. Starting from the mental habit of learning words without grasping their meanings, people find it impossible to grasp higher abstractions, and their conceptual development consists of condensing fog into fog into thicker fog—until the hierarchical structure of concepts breaks down in their minds, losing all ties to reality; and, as they lose the capacity to understand, their education becomes a process of memorizing and imitating. This process is encouraged and, at times, demanded by many modern teachers who purvey snatches of random, out-of-context information in undefined, unintelligible, contradictory terms.

Notice how many things in this quote you are in agreement with?

This is what the "Logical Leap" is all about:

Scientific Disagreements–and Philosophic Causes
Philosophy is primarily about method; it’s about the principles that tell us how to discover knowledge. And even a quick look at the history of science shows us that these principles are not obvious. In astronomy, for instance, Ptolemy and Copernicus did not simply disagree in their scientific conclusions about the solar system; they also disagreed in their underlying philosophic ideas about how to develop a theory of the solar system. In essence, Ptolemy thought it was best to settle for a mathematical description of the appearances, whereas Copernicus began the transition to focusing on causal explanations. So what is the goal of science–to describe appearances, or to identify causes? The answer depends on the philosophy you accept.

Similarly, in 17th century physics, Descartes and Newton did not simply disagree in their scientific theories; they strongly disagreed about the basic method of developing such theories. Descartes wanted to deduce physics from axioms, whereas Newton induced his laws from observational evidence. So what is the essential nature of scientific method–is it primarily deductive, or primarily inductive? And what is the role of experiment in science? The answers depend on your theory of knowledge.

Here’s another example: Consider the contrast between Lavoisier, the father of modern chemistry, and the alchemists of the previous era. Lavoisier did not merely reject the scientific conclusions of the alchemists; he rejected their method of concept-formation and he originated a new chemical language, and then he used a quantitative method for establishing causal relationships among his concepts. So how do we form valid concepts, and what is the proper role of mathematics in physical science? Again, your answers to such fundamental questions will depend on the philosophy you accept.

Finally, consider the battle between two late 19th century physicists, Boltzmann and Mach. Boltzmann was the leading advocate of the atomic theory and he used that theory to develop the field of statistical thermodynamics. Mach, on the other hand, was a leading advocate of positivism; he thought that physicists should stick to what they can see, and that the atomic theory was nothing more than speculative metaphysics. So what is the relationship between observation and theory, and how is a theory proven, and are there limits to scientific knowledge? Once again, these are philosophic questions.

Such issues have not gone away with time. There is a great deal of controversy in theoretical physics today, and these basic issues of method are at the heart of the controversy. Some physicists say that string theory is a major triumph that has unified quantum mechanics and relativity theory for the first time. Other physicists argue that string theory is just a mathematical game detached from reality–that it isn’t a theory of everything, but instead a theory of anything. And we’re starting to hear similar criticisms of Big Bang cosmology; if the theory is so flexible that it can explain anything, the critics say, perhaps it actually explains nothing.

How do we decide these issues? How do we know the right method of doing science, and what standards should we use to evaluate scientific ideas? These are some of the questions that I try to answer in my book. . . .
You have also made many general assertions about me, Objectivism, and the subconscious:
I do see that modern scientific culture seems to reflect the general view (apparently held by objectivists) that the subconscious plays no important role in scientific practice....Objectivism is Not a Consumer Perspective....
The problem with your approach, Plasmatic, is that you insist upon starting with objectivism. In other words, you completely refuse to start with the customer's perspective. You furthermore insist that people are rational -- in principle, disconnected from any customer behavior -- and then you completely ignore the irrational behavior of people who come to the conclusion that the EU is nonsense. Something went wrong here....How did those people come to that conclusion? Do you really believe that those people relied upon a scientific or some sort of epistemological methodology to come to that conclusion? When I watch and interact with people online who are trying to make sense of whether or not they should spend more time on this EU idea, I see people trying to apply mental shortcuts. They are literally trying NOT to think. They want to know what experts have to say about it; in other words -- again -- they are trying to avoid having to think

Some of these made me laugh out loud!.....

What I want to do here is to help you "metacognate" on your own methods, to see if you've actually swallowed your own "red pill".

Ironically, you have brought up in your recent articles the very concepts that I mean to expand on while addressing the above assertions.

One unexpected pattern that emerged from the different justifications that subjects offered for continuing to believe in the validity of the link was that it helped citizens make sense of the Bush Administration's decision to go to war against Iraq.
"We refer to this as 'inferred justification,'" says Hoffman, "because for these voters, the sheer fact that we were engaged in war led to a post-hoc search for a justification for that war."
"Our data shows substantial support for a cognitive theory known as 'motivated reasoning,' which suggests that rather than search rationally for information that either confirms or disconfirms a particular belief, people actually seek out information that confirms what they already believe.
"In fact," he says, "for the most part people completely ignore contrary information."
"The study demonstrates voters' ability to develop elaborate rationalizations based on faulty information," he explains.
The concept of justification used above refers to an essential subject within epistemology!
Defined narrowly, epistemology is the study of knowledge and justified belief. As the study of knowledge, epistemology is concerned with the following questions: What are the necessary and sufficient conditions of knowledge? What are its sources? What is its structure, and what are its limits? As the study of justified belief, epistemology aims to answer questions such as: How are we to understand the concept of justification? What makes justified beliefs justified? Is justification internal or external to one's own mind? Understood more broadly, epistemology is about issues having to do with the creation and dissemination of knowledge in particular areas of inquiry.

What the article from Physorg is doing is taking a position based on the authors' answers to the questions asked in the Stanford article on epistemology and in Harriman's article!

Recall that I asked for "a justification of this notion that creativity is a kind of emotional non-rational process."

The question is: what should an individuated thinker do with that position? Does knowing that every single statement someone makes presupposes taking certain inescapable positions on philosophy make one more likely to try to identify them? How does having explicit knowledge of what justification is affect how one makes and responds to others' statements and claims? What exactly does the subconscious have to do with it? How do the conscious methods one employs relate to the subconscious? How do my own answers to the above reduce your claims about me and Oism to absurdity? Let's see!

I said:
I want to stress that I do see the subconscious as relevant, as well as what Rand called "psycho-epistemology." I will elaborate soon.
What is the Objectivist position on the subconscious?
You have no choice about the necessity to integrate your observations, your experiences, your knowledge into abstract ideas, i.e., into principles. Your only choice is whether these principles are true or false, whether they represent your conscious, rational convictions—or a grab-bag of notions snatched at random, whose sources, validity, context and consequences you do not know, notions which, more often than not, you would drop like a hot potato if you knew.
But the principles you accept (consciously or subconsciously) may clash with or contradict one another; they, too, have to be integrated. What integrates them? Philosophy. A philosophic system is an integrated view of existence. As a human being, you have no choice about the fact that you need a philosophy. Your only choice is whether you define your philosophy by a conscious, rational, disciplined process of thought and scrupulously logical deliberation—or let your subconscious accumulate a junk heap of unwarranted conclusions, false generalizations, undefined contradictions, undigested slogans, unidentified wishes, doubts and fears, thrown together by chance, but integrated by your subconscious into a kind of mongrel philosophy and fused into a single, solid weight: self-doubt, like a ball and chain in the place where your mind's wings should have grown. You might say, as many people do, that it is not easy always to act on abstract principles. No, it is not easy. But how much harder is it, to have to act on them without knowing what they are?
Your subconscious is like a computer—more complex a computer than men can build—and its main function is the integration of your ideas. Who programs it? Your conscious mind. If you default, if you don't reach any firm convictions, your subconscious is programmed by chance—and you deliver yourself into the power of ideas you do not know you have accepted. But one way or the other, your computer gives you print-outs, daily and hourly, in the form of emotions—which are lightning-like estimates of the things around you, calculated according to your values. If you programmed your computer by conscious thinking, you know the nature of your values and emotions. If you didn't, you don't...........
man's values and emotions are determined by his fundamental view of life. The ultimate programmer of his subconscious is philosophy—the science which, according to the emotionalists, is impotent to affect or penetrate the murky mysteries of their feelings.
The quality of a computer's output is determined by the quality of its input. If your subconscious is programmed by chance, its output will have a corresponding character. You have probably heard the computer operators' eloquent term "gigo"—which means: "Garbage in, garbage out." The same formula applies to the relationship between a man's thinking and his emotions.
A man who is run by emotions is like a man who is run by a computer whose printouts he cannot read. He does not know whether its programming is true or false, right or wrong, whether it's set to lead him to success or destruction, whether it serves his goals or those of some evil, unknowable power. He is blind on two fronts: blind to the world around him and to his own inner world, unable to grasp reality or his own motives, and he is in chronic terror of both. Emotions are not tools of cognition. The men who are not interested in philosophy need it most urgently: they are most helplessly in its power.
The men who are not interested in philosophy absorb its principles from the cultural atmosphere around them—from schools, colleges, books, magazines, newspapers, movies, television, etc. Who sets the tone of a culture? A small handful of men: the philosophers. Others follow their lead, either by conviction or by default. For some two hundred years, under the influence of Immanuel Kant, the dominant trend of philosophy has been directed to a single goal: the destruction of man's mind, of his confidence in the power of reason. Today, we are seeing the climax of that trend. ... Observe that the history of philosophy reproduces—in slow motion, on a macrocosmic screen—the workings of ideas in an individual man's mind. A man who has accepted false premises is free to reject them, but until and unless he does, they do not lie still in his mind, they grow without his conscious participation and reach their ultimate logical conclusions. A similar process takes place in a culture: if the false premises of an influential philosopher are not challenged, generations of his followers—acting as the culture's subconscious—milk them down to their ultimate consequences.
Philosophy: Who Needs It

These are some of my favorite quotes from Rand.

Clearly, nothing you said about me or about the Oist view of the subconscious is correct. Rand thought that the relationship between the automatic, passive processes of the subconscious and the active aspects of consciousness was very important, and she called the interplay man's "psycho-epistemological method," which produces his "sense of life":
Psycho-epistemology is the study of man’s cognitive processes from the aspect of the interaction between the conscious mind and the automatic functions of the subconscious. ... ology.html
A sense of life is formed by a process of emotional generalization which may be described as a subconscious counterpart of a process of abstraction, since it is a method of classifying and integrating. But it is a process of emotional abstraction: it consists of classifying things according to the emotions they invoke—i.e., of tying together, by association or connotation, all those things which have the power to make an individual experience the same (or a similar) emotion. For instance: a new neighborhood, a discovery, adventure, struggle, triumph—or: the folks next door, a memorized recitation, a family picnic, a known routine, comfort. On a more adult level: a heroic man, the skyline of New York, a sunlit landscape, pure colors, ecstatic music—or: a humble man, an old village, a foggy landscape, muddy colors, folk music. . . . The subverbal, subconscious criterion of selection that forms his emotional abstractions is: “That which is important to me” or: “The kind of universe which is right for me, in which I would feel at home.” . . .

It is only those values which he regards or grows to regard as “important,” those which represent his implicit view of reality, that remain in a man’s subconscious and form his sense of life.

“It is important to understand things”—“It is important to obey my parents”—“It is important to act on my own”—“It is important to please other people”—“It is important to fight for what I want”—“It is important not to make enemies”—“My life is important”—“Who am I to stick my neck out?” Man is a being of self-made soul—and it is of such conclusions that the stuff of his soul is made. (By “soul” I mean “consciousness.”)

The integrated sum of a man’s basic values is his sense of life.

I submit to you that the essential concepts you are trying to find that differentiate the subconscious from the conscious are passive and active, or automatic and volitionally initiated. The subconscious is an automatic process; we conceptualize it by differentiating the cognitive processes we actively try to initiate from those that are automatic.

Try relating this to creativity.....

There are many themes in your recent articles that relate to the above quotes, but the inferences drawn in them are terrible.
"Our data shows substantial support for a cognitive theory known as 'motivated reasoning,' which suggests that rather than search rationally for information that either confirms or disconfirms a particular belief, people actually seek out information that confirms what they already believe.

"In fact," he says, "for the most part people completely ignore contrary information."

"The study demonstrates voters' ability to develop elaborate rationalizations based on faulty information,"
"The argument here is that people get deeply attached to their beliefs," Hoffman says.

"We form emotional attachments that get wrapped up in our personal identity and sense of morality, irrespective of the facts of the matter.
The researchers presented the available evidence of the link, along with the evidence that there was no link, and then pushed respondents to justify their opinion on the matter. For all but one respondent, the overwhelming evidence that there was no link had no impact on their arguments in support of the link.

One unexpected pattern that emerged from the different justifications that subjects offered for continuing to believe in the validity of the link was that it helped citizens make sense of the Bush Administration's decision to go to war against Iraq.
"We refer to this as 'inferred justification,'" says Hoffman, "because for these voters, the sheer fact that we were engaged in war led to a post-hoc search for a justification for that war."
"We did not find that people were being duped by a campaign of innuendo so much as they were actively constructing links and justifications that did not exist."

"They wanted to believe in the link," he says, "because it helped them make sense of a current reality. So voters' ability to develop elaborate rationalizations based on faulty information, whether we think that is good or bad for democratic practice, does at least demonstrate an impressive form of creativity."
The obvious question is: how did those scientists overcome these proposed habits that they claim to have identified themselves? I will spend time on this later. I've spent way too long on this post!

Just a couple more things:

Chris said:
What I would encourage you to do is to spend some time imagining what your own preferred system for annotating press releases would look like. You already know from your own experiences online what sort of pitfalls scientific discourse tends to succumb to. How would you use objectivism as a basis for formulating a structure which would steer the discourse to favorable learning outcomes?........

What you will notice when you engage this problem at that level is that the problem space is larger than just epistemology. I don't know that you'll come to that realization, however, without actually trying to imagine an actual interface of your own. I hope you decide to do it, because I am incredibly interested in what you might come up with. If we have to wait for me to develop a fluency in objectivism, before we can get a better picture of the type of features that an objectivist scientific social network would include, then this could take some time......

The problem with your approach, Plasmatic, is that you insist upon starting with objectivism. In other words, you completely refuse to start with the customer's perspective. You furthermore insist that people are rational -- in principle, disconnected from any customer behavior -- and then you completely ignore the irrational behavior of people who come to the conclusion that the EU is nonsense. Something went wrong here........

You only wanting to talk about objectivism
I suggest you "metacognate" on how you came to the above nonsensical claims about me! What justification did you offer? How many questions have you asked me about the positions on which you are asserting proclamations? I very deliberately avoided discussing Objectivism until you brought it up. I "insisted" on using the Socratic method repeatedly. I have constantly asked you questions about your premises. Do you need me to go find these quotes for you?

I will deal with the myth of a neutral structure later...

I think Morpheus needs to take those mirror finish glasses off and turn them around!
"Logic is the art of non-contradictory identification." ... "I am, therefore I'll think."
Ayn Rand
"It is the mark of an educated mind to be able to entertain a thought without accepting it."

Posts: 248
Joined: Sun Mar 16, 2008 8:20 pm

Re: Online scientific discourse is broken and it can be fixe

Unread post by pln2bz » Thu Jan 16, 2014 11:24 pm

Here is the transcript …

I don't like the word, irrationality, because it conveys to me, you know, frothing at the mouth and a lot of things that are just not descriptive of humans. But, it is a fact that … the rational agent model, which plays a central role in economics and other social sciences … is built on the assumption that … economic agents are rational -- or at least, are rational enough so that you can assume rationality as the basis of a lot of economic modeling.

And, in a way, this is obviously not true, and everybody knows it's not true. And Amos Tversky used to say that all of the economists he knows are fully aware of their spouse not being rational or their dean not being rational. But, when it comes to economic agents, in principle, or in general, then they assume and defend the assumption of rationality.

I propose a very simple diagnostic for imperfect rationality. And when you look at a judgment or at a preference, you can represent it as some combination of considerations. Each of these considerations is assigned some kind of a weight, and it's a very general scheme for representing judgments and preferences. And a failure of perfect rationality, in the broadest sense, is that the weighting is not quite right. That is, there are some considerations and some facts that are given too much weight -- more than they should have -- and I'm not going to define what is the correct weight, because that depends upon a lot of things. And there are many ways of defining correct weights. But, once you define a correct weight in an acceptable way, then it turns out that people don't use those weights, don't apply those weights. Considerations are over-weighted or under-weighted. And this is one, I think, convenient way of thinking of failures of rationality.
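Kahneman's diagnostic above can be sketched as a little arithmetic: a judgment is a weighted sum of considerations, and imperfect rationality shows up as weights that deviate from the "correct" ones. The following is a minimal illustration only; the considerations, weights, and numbers are all invented, not from the talk.

```python
# A judgment represented as a weighted combination of considerations.
# "Failure of perfect rationality" = the applied weights deviate from
# some (stipulated) correct weights. All values here are hypothetical.

def judgment(considerations, weights):
    """Combine consideration scores using the given weights."""
    return sum(considerations[name] * weights[name] for name in considerations)

considerations = {"present_payoff": 0.4, "future_payoff": 0.9, "loss_risk": -0.5}

# Stipulated "correct" weights vs. biased ones: the present and losses
# over-weighted, the future under-weighted (matching Kahneman's examples).
correct_weights = {"present_payoff": 1.0, "future_payoff": 1.0, "loss_risk": 1.0}
biased_weights  = {"present_payoff": 1.5, "future_payoff": 0.5, "loss_risk": 2.0}

rational = judgment(considerations, correct_weights)  # 0.4 + 0.9 - 0.5 = 0.8
actual   = judgment(considerations, biased_weights)   # 0.6 + 0.45 - 1.0 = 0.05

print(round(rational, 2), round(actual, 2))
```

The point of the sketch is only that the same evidence yields a very different verdict once the weights drift, which is exactly the "convenient way of thinking of failures of rationality" described above.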

Most of the familiar definitions of failure of rationality are of this kind. You know, we commonly assume and recognize that we over-weight the present over the future; that we over-weight losses over gains; when they look at lottery tickets that specify an amount to be won, and you know that the probability is minuscule, people over-weight the amount rather than the probability, and they find those lottery tickets too attractive -- more attractive than they should -- if they were perfectly rational ...
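The lottery example reduces to one line of arithmetic: a perfectly rational buyer weights the prize by its probability, i.e. compares the ticket price to the expected value. A hedged sketch with invented numbers (no real lottery is being described):

```python
# Expected value of a lottery ticket: prize weighted by probability.
# Over-weighting the vivid amount and under-weighting the minuscule
# probability makes the ticket look more attractive than this warrants.

def expected_value(prize, probability):
    return prize * probability

prize = 10_000_000            # the amount people focus on (hypothetical)
probability = 1 / 50_000_000  # the probability they under-weight (hypothetical)
ticket_price = 2.00

ev = expected_value(prize, probability)  # 10_000_000 / 50_000_000 = 0.20
print(f"expected value ${ev:.2f} vs. ticket price ${ticket_price:.2f}")
```

On these numbers the ticket is worth twenty cents but sells for two dollars; buying it anyway is the over-weighting of amount over probability that the transcript describes.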

People over-weight information that is currently available to them over information that they do not have. So, information that I need, but don't have, gets much less weight than information that I do happen to have.

People over-weight emotional factors … They over-weight vivid examples over … statistical description.

People under-weight certain things, and actually, one very general characteristic of the way that people think is that people under-weight statistical characteristics of ensembles. They tend to think about individual cases. And they focus on individual cases …

There are other diagnostics. And I'll mention three of them. One that Amos Tversky and I did a lot of work on is violations of the rule that is called invariance. And the invariance rule specifies that certain factors should be given a weight of zero. So, for example, if you describe a cold cut of meat as being 20% fat or 80% fat-free, in terms of the substance, you are saying exactly the same thing. And if people were assigning a monetary value to acquiring a piece of such meat, then they should not be influenced by whether it is described as 20% fat or 80% fat-free. They ARE influenced by that, so they over-weight something that they should neglect. The content -- the meat itself, in this case -- is invariant. But, it's described in different ways. You should be indifferent to these ways. You should not be affected by them. But, in fact, all of us are.

So, this is one family of violations of rationality. There is another which I think is extremely important. And this is that …

we tend to focus on the problem at hand. And we tend to focus very narrowly on the problem at hand

… And this takes the form of investors deciding on a particular investment, and very frequently without regard for their other investments, or for what else they have. People are making a decision without considering the fact that there will be further decisions in the future that, you know, they should be considering what they will do then when deciding what to do now. And it turns out people actually don't.

Then, there are failures of what I call objectivity, and that one isn't discussed a lot, but it's quite straightforward …

We tend to think about reasoning as going from premises to conclusions. So, reasoning is directional. There are certain things that you assume, and then from those assumptions, you get somewhere. And it's supposed to be directional. When you see influences that go the other way -- that is, when people believe in arguments because they believe in the conclusions -- that is a very characteristic failure of perfect rationality. And it's one that we very commonly find.

Now, there has been a lot of work along those lines, documenting these types of violations of the rules of perfect rationality. And, economists in particular complain, and I think rightly so, about the work of psychologists in this area. And they accuse us, quite correctly, of not having a theory. Economists have theories. Psychologists don't. In fact, you could say without exaggeration that economists have theories, and psychologists tend to have lists. They have lists of phenomena. And, you know, long lists of phenomena. And, this is not considered elegant in some circles. And among my economist friends, it is not viewed kindly.

What psychologists do have -- and it is true, we really do not have a general theory -- we do have a lot of what I would call intermediate-level generalizations. We have facts that are not very limited facts. They have a certain range. But, they are not linked very closely to each other. There isn't a body -- a theory -- to hold all of this together.

Now, when I settled down to write the book, Thinking, Fast and Slow, it was in a way an attempt to go a little beyond what we have in psychology in general, by trying to take a lot of phenomena and put them within a single unified framework. Now, the framework, I must say, is nothing to be proud of. It's in fact a somewhat disreputable approach -- the one that I have taken. And I've taken it very deliberately.

20:00 -

I speak in the book of two agents in the mind. I speak of system 1 and system 2 as if they were things, and actually, as if they were persons. I'm trying to endow them with a certain character -- with propensities, with skills, with habits, with things that they tend to do, or are unable to do. So, that's the way that I go at it. Now, I really don't believe that there are such agents in the mind. This, by the way, is contrary not only to the rules of economics, because it's not a proper theory. It's actually contrary to the rules of doing proper psychology, because the psychologists learn not to explain behavior by assuming little people inside of the head. They're called homunculi. And you try not to explain the behavior of the agent by population inside the head. And I do precisely that. And I do it without shame. I will tell you why I do it.

I will speak of system 1, and that's the quick one. That's thinking fast. That's the intuitive one, and I'll try to describe it. And I'll speak of system 2. That's the slow thinking in the title. This is the controlled one, the careful one. And I'll speak of their interactions as if they're interacting, and sort of, one of them trying to control the other, and vice-versa. And of course I don't believe that there are such systems in the brain, or that there are these little people. So, why do I do it?

Well, I'm going to tell you the correct way of describing everything that I'm going to say. When I say that system 1 does x, or system 2 does x, or system 2 cannot do x, what I really mean by that is, well, there are classes of behaviors. There are classes of mental activities that you could call type 1 and type 2. Now, psychologists have absolutely no objections about calling classes of behavior type 1 and type 2. Now, when I say that system 1 does all of the behaviors that are of type 1, and system 2 does all of the behaviors of type 2, I've invented an agent. Why have I invented an agent? You're going to find out, actually, within the next hour, and I'm quite sure that I'm right.

This makes it easy to think about … psychology … because our minds appear to be wired for processing information about agents. We're just very, very good at endowing agents -- you know, animal, human -- at endowing agents with traits, with propensities, with skills. And once we have endowed an entity with a propensity or with a trait, we tend to remember them. And eventually, when we have a rich description of an agent, that agent does develop a sort of personality. And then we're able, in intuitive ways, to guess how that agent will behave in other circumstances. This is really, I believe, something we have that is built in, that we can do. And I'm deliberately building on that. This is the way I think when I think of system 1 and system 2.
I can translate that into proper language -- type 1 and type 2 -- any time. So could you. By and large, I think, unless you're trying to impress some purists, it's not worth the effort. You can just go on thinking in terms of system 1 and system 2, and of the psycho-drama between them.

23:45 -

So, let me tell you a bit about the two systems. I'll start with system 2. System 2, that's the slow one. And … basically you can think of it as the system which engages in effortful activities. Type 2 activities, if you will, are activities that demand effort. Now, how do we know that they demand effort? Well, there are two main tests: One is physiological. There are many manifestations of effort, including dilation of the pupil, heart rate quickens, lots of things happen … You can tell when people are exerting mental effort.

And the main diagnostic is that when people exert effort in one activity, they cannot perform other activities at the same time. So … my diagnostic is that, if it is something that you cannot do while making a left turn into traffic, then it's effortful. Because when you make a left turn into traffic, you stop talking. And if people have any sense who are sitting next to you, they stop talking. And that's why speaking on the cell phone is so dangerous when you drive, it's because the people who talk to you don't know that you're overtaking a car or that you're making a left turn into traffic. So … we have what is called a limited capacity for effort, which means that if we do one thing, there are other things we cannot do. We cannot do several effortful things at the same time.

Effortful activities are experienced, by and large, as MY activities … There is an impression -- a feeling -- of agency attached to most of these effortful activities. This is something that I do. So, multiplying two numbers is effortful. Focusing intently on a particular spot, and trying not to let attention wander. That's effortful. Being polite to somebody you don't like, that's effortful. We know about these things because they strain the physiology. They leave people tired. And there are many indications that when people are tired, after exerting effort, they find it more difficult to exert effort in other activities. So, this is what system 2 does. And, by and large, when we think of our minds, we think of what system 2 does. We think of ourselves as thinking people, as deliberate people, as going in an orderly way from one state of mind to another -- and of making choices and making decisions, and of being quite conscious, because what goes on unconsciously, well, we're not conscious of. So, the little that we see of our internal life, that's largely system 2. That's the thesis of the book, actually.

We greatly exaggerate the importance of system 2 in our lives and in human affairs in general.

27:15 -

Now, what is system 1? This is the fast thinking of the title, Thinking, Fast and Slow, and mostly I'll talk about one aspect of system 1, which is memory. You know, we have this marvelous organ between our ears. It makes sense of the world second by second, so that anything that happens, we sort of understand. We get the right ideas about it. We anticipate what comes next. All of this happens courtesy of that thing that we have, our associative memory. It creates an interpretation of the world as we go, so I see that scene. You see me. This is really not something that we choose to see. I don't choose to see you. It happens to you. And this is the characteristic experience that is associated with system 1, with automatic activities. They happen to you. You don't do them. They happen.

And, so, events and stimuli from the outside world, they create sensations automatically, they evoke memories automatically. So, if I say 2 plus 2, a number came to your mind. You didn't have to ask for it. You didn't have to decide to perform that computation. You couldn't help carrying out that computation to save your life, any more than you could help yourself from reading a word on the billboard. Those things just happen.

But, system 1 doesn't merely call up well-learned associations of this kind. If I ask you, well, do you think it was really pre-ordained that World War I … would occur? And, for those of you who know anything about the history of World War I, this instantly brought a lot of information to mind. That happened automatically. You didn't ask for it. We have not only an interpretation of the world as it exists, but we can call up complex stories that would take a long time to describe.

And, in a way, they all come at once.

So, you know, if I ask you what are the chances of any changes to the rules of filibusters in the next ten years, something will happen to you. You know a fair amount about this. Now, articulating it will be effortful, but a lot will happen without you having to think about it. And these things really happen very fast.

My favorite example which is, of course, in the book … is of … I won't try to imitate it. It's a male voice speaking in an upper-class British accent, and saying a sentence, and the sentence -- while … events in the individual's brain are recorded continuously, so that's the background. That's the story -- the sentence is 'I have large tattoos all down my back'. It takes about a third of a second, and you get a characteristic brain response, and the characteristic brain response is surprise. And it has a clear signature. If you think about what it takes for someone to be surprised by such an event, it is really extraordinary. You've got to identify the voice as an upper-class British voice. You've got to bring up the stereotype of upper-class British male. You've got to bring up the idea that it's unlikely that many of them have large tattoos down their back. That is an incongruity, hence a surprise. And that is detectable within less than half a second. This is amazing. And that is … system 1. All of this happens unbidden. You don't have to think about it. This just happens to you. And a lot of complicated things like that.

32:00 -

System 1 is not dumb, so … You know, we tend to think of system 2 as the higher system, but what we do really well, we do with system 1, because skills are system 1. So, driving -- when we drive, it's almost effortless -- except when we take left turns into traffic, or overtake cars on a narrow road -- almost effortless, you can talk when you drive. But, so that's system 1. It's a complicated activity, and it is an activity that we perform almost without effort, and automatically. And there are many such activities.

They're intuitive skills. We are very impressed by intuitive skills. We're impressed by the skills of firefighters, and chess players, and physicians, great diagnosticians. But, all of us have skills at that level, and many of them -- especially in the social domain. We can recognize each other's moods. We can say to someone, you're tired. You know, we can recognize a spouse's mood from one word on the telephone, and you know, many other things of that general nature. The skills are in system 1, in that automatic memory system that interprets the world. So, our mind --

what we think of as our mind, the real products of our mind -- most of it is system 1. Most of it happens through events that are experienced passively as, you know, my memory provides that information. Then, we do something with it. We articulate it. We respond with it. We elaborate on it, and that tends to be system 2.

So, what's the image that I'm drawing here? Many people have proposed two systems. Psychologist Jonathan Haidt had a very similar idea, and he speaks of the elephant and the rider.

I propose another image, and it's the image of a … whole bunch of reporters and an editor. And the reporters are system 1. And the editor is system 2. And what happens is the reporter provides stories, and the stories are ready-made. But, they haven't been approved yet. They haven't been endorsed yet. And then there is the editor. The editor is busy. But the editor just has time to have a quick look at the story, and if it's okay, it goes to print. It gets endorsed. And a LOT of what system 2 does, in my view, is endorse system 1.

And then … you know, vague impressions become beliefs, and sort of vague tendencies become decisions. There is a lot that happens when system 2 becomes involved, but the action in many of these cases comes from system 1.

Posts: 248
Joined: Sun Mar 16, 2008 8:20 pm

Re: Online scientific discourse is broken and it can be fixe

Unread post by pln2bz » Thu Jan 16, 2014 11:39 pm

35:30 -

So, in many cases, the editor just, you know, checks the story, and off it goes. It gets endorsed. In some cases, the editor stops the story -- either sends it back for a minor rewrite, or suppresses it completely. We'd better not publish this. That happens a lot … If you're talking with someone, you know, and you have that irresistible thought that this is a real idiot, it's unlikely that you're going to say it. And by the way, it's effortful not to say it in some cases. So, this is system 2. The inhibition is system 2, and it's the editor suppressing something. Quite often, the editor may say, well, let's have another story. Let's have another take on this. Or, let me reason very slowly, and then you have the editor himself or herself going over the story and recreating it. So, that's … system 2.

And, let me add something else about that editor -- an essential characteristic of system 2 …

System 2 is really lazy, and it is really governed by a rule of least effort. And there is a LOT of evidence to that effect. Now, there are large individual differences in how lazy system 2 is. It's much lazier for some than for others …

… Shane came up with a very good example, and the example which, I'm sure many of you are familiar with, is a puzzle. And, the puzzle is the following: A bat and a ball, together, cost a dollar ten. The bat costs a dollar more than the ball. How much does the ball cost? And the beauty of this example is that a number comes to everybody's mind. It's ten cents. And this is an associative reaction, and, you know, most people -- including those who don't say ten cents, ultimately, that's the first response that comes to mind … The … ten cents is wrong. And it's very easy to see that ten cents is wrong, because if the ball costs ten cents, then the bat costs a dollar and ten cents. The total is $1.20, so that's false.

Now, I don't know the proportion at Yale, but I suppose it couldn't be higher than at Princeton. About 50% of Princeton students get that wrong, in writing, in a written test. They just say ten cents.

Now, this is, I think, a VERY important discovery. And it's an important fact, because there is something basic that you know about everybody who wrote ten cents: They didn't check.

… You know, if they had checked, they would have left it blank, or they would have found the correct solution -- which is 5 cents, if you're still working on it … They would have done something else. Something else would have happened.
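The arithmetic behind the puzzle takes only a few lines to check. This is a sketch added for illustration, not part of the talk; working in integer cents avoids floating-point noise:

```python
# Bat-and-ball puzzle in integer cents (illustrative sketch, not from the talk).
# Constraints: bat + ball = 110 and bat - ball = 100.
total, difference = 110, 100

# The intuitive answer, 10 cents, violates the total:
intuitive_ball = 10
intuitive_bat = intuitive_ball + difference   # 110 cents
print(intuitive_ball + intuitive_bat)         # 120 -- not 110, so it's wrong

# Solving the two equations: 2 * ball = total - difference.
ball = (total - difference) // 2              # 5 cents
bat = ball + difference                       # 105 cents
print(ball, bat, ball + bat)                  # 5 105 110
```

The point of the exercise is that this one-line check is exactly what the "ten cents" respondents never perform.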

So, system 2 … does as little as possible. That's sort of the image that I'm proposing. You have those two systems. System 2 is in control. It can suppress, like an editor … You know, not now, or not at all, not this story, or let's do something else, or let's recompute. But, by and large, the action comes from system 1 … Proposing, suggesting, having impressions, having feelings that become emotions that become intentions that become decisions that become beliefs. And if we want to understand how the mind works, actually it turns out that we had better understand what are the operating rules of system 1.

40:15 -

… There is a basic heuristic that you can follow in trying to predict behavior. It doesn't always work. Heuristics don't always work ...

But, it works a fair amount of the time, which is to ask: What is the first reaction? What's the first reaction to a situation, to a problem, to a word? The first reaction is very diagnostic; it is a system 1 reaction. It's the first emotion. It's whether you approve or disapprove. It's whether you want to approach or avoid. There are many things which happen instantly which are the first reaction, and either that first reaction will be carried over directly into beliefs and actions and so on, or it will … provide the anchor for what happens next, so that even if it is not really endorsed, it is still influential. Even if system 2, the editor, has assigned somebody else to rewrite the story, something of the original story is still there. And so, as a heuristic, it's a useful heuristic.

Now, let me talk a bit about the operating characteristics of system 1. And what I will do is mainly focus on characteristics that explain failure of perfect rationality.

I think of system 1 as a storyteller. And I'm using a lot of metaphors, but by storytelling, what I mean is, system 1 sees connections between events. And you have to think of … memory as, you know, a vast network of ideas which are linked not by mere associations, but by specific links, from cause to effect, from token to type …

And that machine looks for connections. And it does this automatically. One of my best examples is one I learned from Nassim Taleb. It's in The Black Swan. And it talks about Bloomberg News on the day that Saddam Hussein was captured. And I don't recall the exact sequence, but at one stage, the bond market went up. And at that stage, the … very big headline was: Saddam Hussein creates fears in the markets … the bond market goes up. Then, the bond market went radically down. And there was a story: Saddam Hussein's capture reassures the public, the bond market went down.

What happens -- this is completely typical -- is that you have a fact, and the fact is something happened in the market. It went up or it went down. The mind looks for an explanation. In memory, there is a search for an explanation. The explanation has to be an event that is possibly, potentially, causally related, and that happened earlier, and that is sufficiently salient and surprising to be the cause. Now, here, you get the bond market doing something, and anything that was going to happen to the bond market that day was going to be attributed to Saddam Hussein, because that was an event which was surprising and sort of consequential. And that's the way it works.

So, we tend to tell stories. Those stories, we don't even have to tell them. They occur to us in that way ...

The world comes interpreted. And it comes interpreted in causal terms.

… This emphasis on causality … I can't emphasize causality enough, because it really is a characteristic of the way that the system works -- looking for causes, inferring causes, and working from causes forward.

So, it tells stories. Then there is something else that happens that is a characteristic of system 1, I think, which is that …

it tells the best possible story. So, it tells a story that is coherent. And I call that associative coherence. It makes sense … for an individual to be all good or all bad.

… We really don't like the idea of Hitler loving little children or flowers, when in fact he did … This bothers us because it is not emotionally consistent. It is not associatively consistent. We tend to look for stories that have that form of consistency and coherence.

Now, you can make a coherent story from VERY little information. In fact, the less information, in some cases, the easier it is to make a coherent story. Now, what matters here is that subjective confidence -- the comfort people have -- appears not to be determined by the amount of information. The confidence that people have in their impressions, in the stories that system 1 is telling them, is determined by the associative coherence of the story -- by whether the story makes internal sense. If we have a good story, we feel confident in it. It's the internal contradictions that lower our confidence. This is radically different from the rational way of assigning probability to an event, or to a story, or to a hypothesis.

When we rationally assign a probability to a hypothesis, we weigh the evidence. But, here, it's not about the evidence. It's about the internal coherence. It is really not about the AMOUNT of evidence. We see that all over the place in research: Radical insensitivity to the size of samples, radical insensitivity to the overall amount of evidence. We have … I've described it as a machine for jumping to conclusions …

I asked somewhere about someone who is a national leader. And that national leader is intelligent and firm. Now, is she a good one? And actually, you have an answer … Intelligent and firm, that's really good. You're already there. You have an answer. You have an answer as you go, you are evaluating it. Now, if the third word had been corrupt, you'd change your mind … And by the way, you wouldn't change your mind enough, because it would have been worse if I had first said she was corrupt …

We jump to conclusions. We form immediate impressions on the basis of very little evidence. This will cause a failure of perfect rationality, and many failures have that character. System 1 tends to suppress ambiguity, and not to recognize ambiguity. So, I have many examples of that, but the one I use most often is, 'Anne approached the bank'. And when you hear, 'Anne approached the bank,' most of you think of the bank as a financial institution. But, if the previous sentence had been something about floating down the river, the word bank would mean something else. And … really the sentence, 'Anne approached the bank' is ambiguous. The ambiguity is suppressed. Most of you, I suppose, see more banks than rivers, and … something gets decided. You don't do it. It happens to you. Ambiguity is suppressed. A solution is adopted, and this is very characteristic of system 1. So, there is a suppression of ambiguity and telling the best story possible …

This has important implications. It has the implication that we live in a world that is radically simpler than the real world. We simplify the world as we construct stories about it.

49:50 -

Now, I spoke of associative coherence, and let me develop that a little bit. And I already mentioned something that psychologists call the Halo Effect, which is that the various traits that we've assigned to an individual or to a group or to society tend to be emotionally coherent. They tend to make sense together in emotional terms.

But, there are other manifestations of that. So, here's an experiment … In the experiment, people are required to evaluate the validity of a line of reasoning, the validity of a syllogism. And the syllogism is: All roses are flowers, some flowers fade quickly, therefore some roses fade quickly. Valid or invalid? It is not valid. For those still working on it: it is not valid because it is entirely possible that none of the flowers that fade quickly are roses. But, it is true that some roses fade quickly.
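The invalidity can also be confirmed mechanically. The following brute-force search over toy "worlds" of flowers is an illustrative sketch (the predicate names are mine, not Kahneman's): it looks for a world where the premises hold but the conclusion fails, and finds one immediately -- a fading flower that is not a rose, with no fading roses at all.

```python
from itertools import product

# A "world" is a tuple of flowers; each flower is (is_rose, fades_quickly).
# "All roses are flowers" holds by construction, since everything here is a flower.

def some_flowers_fade(world):
    return any(fades for _, fades in world)              # premise 2

def some_roses_fade(world):
    return any(rose and fades for rose, fades in world)  # conclusion

flower_kinds = list(product([False, True], repeat=2))
counterexamples = [w for w in product(flower_kinds, repeat=2)
                   if some_flowers_fade(w) and not some_roses_fade(w)]

# Any counterexample proves the syllogism invalid: premises true, conclusion false.
print(counterexamples[0])  # ((False, False), (False, True)) -- a fading non-rose
```

One counterexample is enough: validity means the conclusion must hold in *every* world where the premises hold, which is exactly what system 1's "it is true" shortcut skips over.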

Now, I forget the exact number, but it's well over 70% of undergraduates say the reasoning is valid. Now, this is enormously important as a finding, because it tells us something about the way we think in politics, for example.

We have a conclusion -- or we believe that the conclusion is true -- and that is enough to make us believe in arguments that favor that conclusion. This is a violation of what I call objectivity. Here, the reasoning goes the wrong way, from conclusion to …

This characteristic explains something which seems at first to be the opposite of jumping to conclusions …

It is why, in a serious sense, it is really almost impossible to change people's minds about things that matter to them. And if you ask yourself why you believe in what you believe -- be it religion or global warming or, you know, in whatever else you believe, why do you believe it?

We have been socialized to think that we believe in things because there is evidence to support them. But, this is not the way that we actually form beliefs. We believe in these things because we trust other people who believe in them. That's the main reason that we have religious beliefs. We're not waiting for evidence to have those beliefs. But what there is, is an emotional connection -- a powerful emotional connection between love and trust and belief, and other people believing.

And, what is really very important, I think -- I had the occasion to speak about that to the National Academy of Sciences -- about why there is difficulty in conveying scientific evidence to the public -- that …

scientists tend to think that we believe in things because there is evidence to support them. This is NOT the way that most people believe in things. Actually, it's not even true of all scientists always, but it certainly is not true of the public that scientists are trying to address. Evidence really doesn't seem to work.

And WHY evidence doesn't seem to work seems to be that the kinds of evidence that scientists bring does not resonate with system 1. System 1 needs stories, preferably stories about individual cases, preferably clear causal stories that have emotional impact. It's very unfortunate in a way, but that's the way I think things are.

54:20 -

So … the same system 1 that makes us jump to conclusion when there is very little evidence makes us extraordinarily resistant to change -- not when there is a lot of evidence, but when there is an ingrained collection of attitudes, and emotions, and beliefs. They are going to be held, even against massive opposing evidence.

Another characteristic of system 1: When we ask people a question, they tend to do more than we ask them to do. And there was an experiment, when I was 40 years old, that had a big impact on me, where people listened to words over speakers, and they were asked to make rhyming judgments. Do the words rhyme or not? And here are two pairs of words: One is vote, note. They rhyme. And the other is vote, goat. Well, they rhyme too, but people are much slower saying that vote, goat rhymes than vote, note. So, what did they do?

They heard the word. They spelled it, and the mismatch on the spelling slowed them down in detecting the match on the rhyming. They did something entirely superfluous; a computation was done which was entirely unnecessary. This is the way that the system works. It works in parallel, not sequentially like ordinary reasoning; a lot of things happen in parallel. And that one question evokes much more than one answer. In the same way, people were asked about the syllogism: is it valid? But what happened was that system 1 immediately detected: it is true, it is true! And true and valid are associatively correlated, and it's that correlation that caused people to answer the question, valid, incorrectly.

By the way, the same undergraduates who failed the question about roses, if you formulate exactly the same question in terms of x's and y's, they answer it with no difficulty whatsoever, and correctly -- most of them. So, it is the case that when an incorrect answer comes very readily to mind -- the incorrect answer being a correct answer to a different question -- we tend to adopt it … And we answer a question that we haven't been asked. This turns out to be quite an important mechanism in judgment and decision-making …

I speak of a young woman named Julie, and she is a graduating senior. And I'm going to tell you one fact about Julie, which is that she read fluently when she was age 4. And now I'm asking you: What's her GPA? Now, the striking thing is -- that, actually, although you may be embarrassed to admit it, a number came to mind. You have an idea of her GPA, on the basis of that information. You know, it's not precise. But, you know, it's certainly over 3.2, and it's, you know, less than 3.9. You know, it's actually -- agreement on those kinds of things is remarkably close, because we are in the same culture, and we understand each other.

Now, we absolutely -- I think -- understand the mechanism that creates this impression. And the mechanism is that when we hear that she read fluently at age 4, we have an impression of how precocious she was. Now, we weren't asked how precocious Julie was. We were asked what's her GPA now, 20 years later or something. But, you can go from the answer to one to the answer to the other directly, and it turns out that people's answer to what's her GPA … Her GPA is about as extreme as the judgment of how precocious she was in her reading. That's the way that people do this. Do is not the right word. That's the way that things happen in people's minds. We have a matching rule, and … you could match Julie's reading age to a lot of things … Like, if you asked how tall a building in New Haven would be as tall as Julie was precocious in her reading, people would give you a number of floors for that building. That is a fundamental ability of system 1, and one that is often used to answer questions.

1:00:05 -

I think you get the point that I'm trying to make, which is that we have an absolutely wonderful mind. You know, it's just incredibly marvelous what it can do. You know, it can learn to drive. It can learn to identify, you know, that a particular combination of a voice and a statement is surprising. It can do a lot of great things. It can provide an immediate diagnosis. It can identify a very good move in chess. Intuition, in short, is possible. Intuition is marvelous.

But, we also have a mind that is really incompatible with the basic requirements of rationality as put forward, as explained in decision theory, or the basis of economics. We have a mind that is not equipped for invariance, because your reaction to fat-free food and to fatty food is not going to be the same. It is a mind that is not equipped for dealing adequately with quantity of evidence, because we tend to make up stories, and evaluate the coherence of these stories.

And so, the verdict is really not a negative verdict on the human mind. It's a complicated verdict. But, it's clear that the theory of rationality within decision theory, or within the standard models of rationality, is profoundly non-psychological. The psychological truth -- you know, we don't know the truth -- but what I try to summarize in this book is not compatible with the rational model.

What I've presented to you is not a theory. But, I hope it's a little bit on the way to a theory, that there is some internal coherence to the story that I've been telling you. And so, even if the story is not good -- if it is coherent -- then you might believe it, if I am lucky.

So, thank you.

Posts: 248
Joined: Sun Mar 16, 2008 8:20 pm

Re: Online scientific discourse is broken and it can be fixe

Unread post by pln2bz » Fri Jan 17, 2014 12:18 am

Plasmatic, your posts increasingly appear to me as graffiti. There are occasional isolated fragments of wisdom, but compared to what is truthfully necessary in order to solve a large complex problem like this, you are unfortunately treading water. There is no sense of any progress towards any larger goal. You're not interested in actually making any observations which might clue you into people's actual thoughts, and all of your observations lead straight back to the same pre-determined conclusion every time. You are not actively challenging yourself by exposing yourself to new ideas. You're not growing out of your zone of comfort towards some new set of knowledge and skills. You're basically making many of the mistakes which I am actively trying to tell people to avoid.

I'm not engaged in your debate, and I have to admit that many of the objectivist claims I've seen made are so unimpressive that I'd prefer to not spend any time on them. I've told you that I'm not even here to debate philosophy, but you seem to not care. Every posting I make is designed to help others to better understand the nature of the scientific social network problem, and I am trying to set an example for people to follow of frequently switching topics. The goal is not to prove to the world that I am right. The goal is to help others to figure out processes for approaching this incredibly difficult problem.

Posts: 248
Joined: Sun Mar 16, 2008 8:20 pm

Re: Online scientific discourse is broken and it can be fixe

Unread post by pln2bz » Fri Jan 17, 2014 10:35 am

This person attempted to map out what they learned in their self-help books about the subconscious, and many other related topics ...
Notice how this map is not actually integrated into our discourse in any manner. It's a step in the right direction, but what is the cognitive process that he is advocating? A map of our cognition, in the absence of actual discourse, fails to generate stories -- opportunities for associative coherence -- that can expose and help us to recall the role of biases in thought. On the other hand, discourse without some sort of a map of our biases on hand may nevertheless lead to tacit learning about biases in people in the long term. But, it will predictably take much longer to assimilate these lessons into a larger framework which we can then use as a check upon our own biases.

What is missing is a system that brings the two together.

Posts: 248
Joined: Sun Mar 16, 2008 8:20 pm

Re: Online scientific discourse is broken and it can be fixe

Unread post by pln2bz » Sat Jan 18, 2014 12:28 pm

Does Kahneman's Model Work For YOU?

The rise of Kahneman's ideas about rationality amongst conventional thinkers is of great cultural significance, in my view. I find it profound that Yale can fill its auditorium with what I presume are representatives of our scientific community, who came to listen to Kahneman talk about how we all -- regardless of our occupations and expertise -- struggle to be rational in our daily decision-making and scientific investigations. There is tremendous opportunity here for the EU community to build a bridge from that very powerful theme into EU arguments. Kahneman admits to his audience that his mental model does not conform to the rigid standards which people generally ascribe to scientific models, and his audience accepts this approach because they find his model useful for becoming better thinkers themselves. The lesson I take from this is that people actually want innovation in science, and they are willing to accept imperfect models if those models exhibit utility.

It is certainly not my intent to in any way undermine David Talbott's efforts, but the apparent rise of Kahneman's ideas about the role of evidence in peoples' decision-making processes provides a stark contrast against the EU's 2014 theme.

If there are people here who either watched the Kahneman video or who read bits of the transcript, and who are getting some value out of his mental model, I'm definitely interested in hearing about that. My girlfriend and I are both able to see that stories arrive in our heads, pre-formed and with embedded biases. In fact, she noticed that this was happening a number of years ago when the system went a little bit haywire. Something caused her subconscious to go into overdrive, and it started generating too many stories, which became overwhelming. But, since she didn't know of a model which could explain why that would be happening, it took more time than was necessary to formulate a meaningful response which tempered that activity.

We also both notice that we frequently answer the wrong question. In fact, this has been a point of contention between us, and I've been able to actually stop verbalizing my own answers to wrong questions, with only minimal effort. I wasn't even aware that I was doing it, but once it was pointed out, I was able to correct the problem -- with her help, pointing out instances within my actual behavior -- over the course of just 3-4 weeks.

Another issue is system 2's laziness. This has been a key emergent theme of my investigation into people's reactions to EU claims from day 1. To be clear, I was never actually looking for that. The observation simply presented itself repeatedly.

I'm also able to observe in myself, with only minimal reflection, the seepage of mental state (emotion, attitude, mood, whatever) from one idea to the next, in spite of a conscious attempt to reframe to another topic. And I can now see that if I want to truly achieve authentic reframing, then I have to alter my physical context.

Switching Gears to Applying the Model

So, my point here is that there is this tendency to view these cognitive systems as static. But, with this model, we can now trace the immunity to change back to system 1. And this mental model even arrives with the suggestion that the way to appeal to system 1 is through the telling of associatively coherent stories. Evidence of course does still matter for our desire to post-rationalize our beliefs, but we can see from this model that one of the activities that the EU community should be actively engaging in is collecting stories which exemplify our own take on the lessons of the history of science, as well as our own unique vision for the future.

So, with this model in mind, I've finally started the process of actually attempting to visualize my interface. What I'll be trying to do here is to capture each person's REACTION to the article they are reading using a system of categorizing these reactions. There are three fundamental things which I must visualize:
  • The Copy Annotation Marks: How do I differentiate the text which is being responded to?
  • The Annotation Categories: What are the various types of responses which people might exhibit to a particular claim?
  • The Annotation Link: What is the style which should be adopted for each of the four levels of discourse? What I am seeing, now that I'm listing out the various categories of responses, is that each response comes with a set of values. So, what I am going to suggest is that a visual style be created for each of these four sets of values in order to cue the subconscious to my own simplified mental model. So, what I'd like to do is to simply adopt these styles from pre-existing knowledge visualization efforts. There are many to choose from, and I've already done most of the work of sorting through the library to find these visualization schemes.
At the same time, I'm going to have to spend a bit more effort on creating some user personas to design towards. I'm going to create three separate personas, and a separate site design for each persona:
  • Graduate Students
  • Laypeople
  • Professional Scientists
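As a sketch of how the three visualization elements above might hang together, here is one possible data model. Every name, category, and field below is hypothetical -- my own illustration of the structure being described, not the actual design:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class CopyAnnotationMark:
    """Identifies the span of article text being responded to."""
    start: int  # character offset where the quoted text begins
    end: int    # character offset where it ends

@dataclass
class Annotation:
    """One reader reaction, tagged with a category and a discourse level."""
    mark: CopyAnnotationMark
    category: str         # e.g. "agree", "dispute", "request-evidence" (illustrative)
    discourse_level: int  # 1-4, per the four levels of discourse mentioned above
    body: str = ""

@dataclass
class AnnotatedArticle:
    """An article plus the reactions collected against it."""
    text: str
    annotations: List[Annotation] = field(default_factory=list)

    def by_category(self, category: str) -> List[Annotation]:
        return [a for a in self.annotations if a.category == category]
```

Each persona's site design could then render the same annotation list differently, applying a distinct visual style per category or discourse level.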

Posts: 248
Joined: Sun Mar 16, 2008 8:20 pm

Re: Online scientific discourse is broken and it can be fixe

Unread post by pln2bz » Sat Jan 18, 2014 1:20 pm

How to be a genius
You don't need a sky-high IQ or a Ph.D...
By Eric Barker, Barking Up The Wrong Tree | January 15, 2014

Want to know how to be a genius? There are five things you can learn from looking at those who are the very best.

1) Be curious and driven

For his book Creativity, noted professor Mihaly Csikszentmihalyi did interviews with 91 groundbreaking individuals across a number of disciplines, including 14 Nobel Prize winners. In 50 Psychology Classics Tom Butler-Bowdon summed up many of Csikszentmihalyi's findings including this one:
Successful creative people tend to have two things in abundance, curiosity and drive. They are absolutely fascinated by their subject, and while others may be more brilliant, their sheer desire for accomplishment is the decisive factor. [50 Psychology Classics]
2) It's not about formal education. It's about hours at your craft.

Do you need a sky-high IQ? Do great geniuses all have PhD's? Nope. Most had about a college-dropout level of education.
Dean Keith Simonton, a professor at the University of California at Davis, conducted a large-scale study of more than three hundred creative high achievers born between 1450 and 1850 — Leonardo da Vinci, Galileo, Beethoven, Rembrandt, for example. He determined the amount of formal education each had received and measured each one's level of eminence by the spaces devoted to them in an array of reference works. He found that the relation between education and eminence, when plotted on a graph, looked like an inverted U: The most eminent creators were those who had received a moderate amount of education, equal to about the middle of college. Less education than that — or more — corresponded to reduced eminence for creativity. [Talent Is Overrated: What Really Separates World-Class Performers from Everybody Else]
But they all work their asses off in their field of expertise. That's how to be a genius.

Those interested in the 10,000-hour theory of deliberate practice won't be surprised. As detailed in Daily Rituals: How Artists Work, the vast majority of them are workaholics.
"Sooner or later," Pritchett writes, "the great men turn out to be all alike. They never stop working. They never lose a minute. It is very depressing." [Daily Rituals: How Artists Work]
In fact, you really can't work too much.
If we're looking for evidence that too much knowledge of the domain or familiarity with its problems might be a hindrance in creative achievement, we have not found it in the research.

Instead, all evidence seems to point in the opposite direction. The most eminent creators are consistently those who have immersed themselves utterly in their chosen field, have devoted their lives to it, amassed tremendous knowledge of it, and continually pushed themselves to the front of it. [Talent Is Overrated: What Really Separates World-Class Performers from Everybody Else]
3) Test your ideas

Howard Gardner studied geniuses like Picasso, Freud, and Stravinsky and found a similar pattern of analyzing, testing, and feedback used by all of them:
Creative individuals spend a considerable amount of time reflecting on what they are trying to accomplish, whether or not they are achieving success (and, if not, what they might do differently). [Creating Minds: An Anatomy of Creativity Seen Through the Lives of Freud, Einstein, Picasso, Stravinsky, Eliot, Graham, and Gandhi]
Does testing sound like something scientific and uncreative? Wrong. The more creative an artist is, the more likely they are to use this method:
In a study of thirty-five artists, Getzels and Csikszentmihalyi found that the most creative in their sample were more open to experimentation and to reformulating their ideas for projects than their less creative counterparts. [Little Bets: How Breakthrough Ideas Emerge from Small Discoveries]
4) You Must Sacrifice

10,000 hours is a hell of a lot of hours. It means many other things (some important) will need to be ignored.

In fact, geniuses are notably less likely to be popular in high school. Why?

The deliberate practice that will one day make them famous alienates them from their peers in adolescence.
…the single-minded focus on what would turn out to be a lifelong passion, is typical for highly creative people. According to the psychologist Mihaly Csikszentmihalyi, who between 1990 and 1995 studied the lives of ninety-one exceptionally creative people in the arts, sciences, business, and government, many of his subjects were on the social margins during adolescence, partly because "intense curiosity or focused interest seems odd to their peers." Teens who are too gregarious to spend time alone often fail to cultivate their talents "because practicing music or studying math requires a solitude they dread." [Quiet: The Power of Introverts in a World That Can't Stop Talking]
At the extremes, the amount of practice and devotion required can pass into the realm of the pathological. If hours alone determine genius then it is inevitable that reaching the greatest heights will require, quite literally, obsession.
My study reveals that, in one way or another, each of the creators became embedded in some kind of a bargain, deal, or Faustian arrangement, executed as a means of ensuring the preservation of his or her unusual gifts. In general, the creators were so caught up in the pursuit of their work mission that they sacrificed all, especially the possibility of a rounded personal existence. The nature of this arrangement differs: In some cases (Freud, Eliot, Gandhi), it involves the decision to undertake an ascetic existence; in some cases, it involves a self-imposed isolation from other individuals (Einstein, Graham); in Picasso's case, as a consequence of a bargain that was rejected, it involves an outrageous exploitation of other individuals; and in the case of Stravinsky, it involves a constant combative relationship with others, even at the cost of fairness. What pervades these unusual arrangements is the conviction that unless this bargain has been compulsively adhered to, the talent may be compromised or even irretrievably lost. And, indeed, at times when the bargain is relaxed, there may well be negative consequences for the individual's creative output. [Creating Minds: An Anatomy of Creativity Seen Through the Lives of Freud, Einstein, Picasso, Stravinsky, Eliot, Graham, and Gandhi]
5) Work because of passion, not money

Passion produces better art than desire for financial gain — and that leads to more success in the long run.
"Those artists who pursued their painting and sculpture more for the pleasure of the activity itself than for extrinsic rewards have produced art that has been socially recognized as superior," the study said. "It is those who are least motivated to pursue extrinsic rewards who eventually receive them." [Dan Pink's Drive: The Surprising Truth About What Motivates Us]


Re: Online scientific discourse is broken and it can be fixed

Unread post by pln2bz » Wed Jan 22, 2014 3:59 pm


I am making some progress on a visualization, using Gerrit Verschuur's announcement of correlations between critical ionization velocities in local interstellar neutral hydrogen (HI) and WMAP hotspots as a test case, simulating the application in Illustrator and Photoshop. This will initially be a static simulation; I am not focused on animations right now. Application mockups are really a test of how quickly a person can deploy the Adobe toolset. And the concept of node compression adds yet another level of complexity, which will necessarily split the project into a number of pieces. As you can imagine, node compression requires two visualizations of the same information, and only animation can demonstrate how the transition between them occurs.
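To make the "two visualizations of the same information" idea concrete, node compression can be thought of as collapsing a group of related discussion nodes into a single summary node, with both views derived from one underlying graph. The sketch below is purely illustrative and my own assumption (the post describes static Illustrator/Photoshop mockups, not software); the edge-list representation and the `compress` helper are hypothetical:

```python
# Illustrative sketch of node compression: the same edge list yields
# an expanded view and a compressed view, where a group of nodes is
# replaced by a single summary node.

def compress(edges, group, summary):
    """Return a new edge list in which every node in `group` is
    replaced by the single node `summary`."""
    remap = lambda n: summary if n in group else n
    new_edges = set()
    for a, b in edges:
        a2, b2 = remap(a), remap(b)
        if a2 != b2:  # drop edges that were internal to the group
            new_edges.add((a2, b2))
    return sorted(new_edges)

# Expanded view: three rebuttals linked to one claim.
expanded = [("claim", "rebuttal 1"), ("claim", "rebuttal 2"),
            ("claim", "rebuttal 3"), ("rebuttal 1", "rebuttal 2")]

# Compressed view: the rebuttals collapse into one summary node.
compressed = compress(expanded,
                      {"rebuttal 1", "rebuttal 2", "rebuttal 3"},
                      "rebuttals (3)")
print(compressed)  # [('claim', 'rebuttals (3)')]
```

An animated transition between the two views would simply interpolate node positions as the group members converge on the summary node's location.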

One critical step will be to expose conventional thinkers to this demo in order to solicit feedback. What I am noticing is that it is quite tricky to simulate discourse, so this first draft of the visualization will necessarily be weak on the rebuttals. But I will, of course, be after authentic rebuttals from actual critics.

I am going to cut back on my posts here for a little while and spend the next few months creating content for the design brief. I will post links to the mockups as I complete them. I hope that people have managed to learn some things from this thread; my intention from the start has been to give people the tools to create. Design thinking is a cognitive process -- a cyclical workflow -- that involves creating and applying conceptual models to a complex problem space, such that large, complex projects can ultimately be delegated to specialists for completion. It's a very powerful tool from which the EU community could greatly benefit -- and, in fact, it is how projects are actually done in the real world. I feel very fortunate to have made it to this point in my own education, but I also see all of the ways in which I still need to improve.

I would love to hear from others who relate in some way to what I've written here. I fully understand the desire to be heard, but I also kindly ask people to please keep the topic limited to scientific social networking. We will be thankful we did years down the road.


Re: Online scientific discourse is broken and it can be fixed

Unread post by pln2bz » Thu Jan 23, 2014 11:50 am

If I had to distill everything that I've learned into just ten lessons, these are what they would be:

The 10 Commandments for Design of a Scientific Social Network
  1. Focus on creating a design brief. Think top-down, rather than bottom-up. Do not try to accumulate skills, in hopes of one day "getting there". Learn your skills within the context of your problem space, and not only will you remember those skills better, but you will focus better on what you need to know, in contrast to what you need to delegate.
  2. Design for a target audience. You may or may not be your target. Practice thinking about your problem space from the perspectives of these different target audiences, and develop fluency in the language, culture, and thinking (including biases) of the audiences that matter. Many people find this very, very hard to do, but the ones who get good at it go on to do great things. This skill is knowledge-dependent, btw: you can be good at it in one domain and very bad at it in most others.
  3. Design your site as a response to the way in which people think, and as a response to what people do. They have questions, and these questions will tend to be the same for everybody. What are they? Your site should answer those questions that they have.
  4. Practice re-framing the same problem. Where you see tradition as justification for action, question it. Switch domains on the fly. Do not let yourself become stuck in one particular mode of thought.
  5. Test your ideas out with your target audiences, in order to develop fluency in how they actually think. Up until that point, all you have is a hypothesis. Design is necessarily a process of ethnography, if it is to be useful to somebody. This process can be difficult in the beginning, but it will be far more rewarding in the long run than most realize. Interesting, unexpected stuff will happen. I guarantee it.
  6. Don't fear mistakes. They happen. The Internet is like a chronology of our mistakes, and science is complicated. What matters is that you are trending in the right direction, and that you listen to your critics long enough to clearly hear what they are saying.
  7. Learn to use your subconscious in support of your rational objectives. Your subconscious is thinking even when you are sleeping. You can tell it what to think about by being selective about what you expose yourself to. When you watch a lot of television -- which is designed to activate your subconscious -- your ability to leverage your subconscious becomes undermined. If you are serious about doing something interesting with your life, one of the most critical decisions you can make is limiting your exposure to the informational garbage which surrounds us.
  8. Do not ramble. People should never question why they are reading something. Start from the high-level, short explanation, and give them the option of reading a more detailed response.
  9. Get organized. Consider creating a system for document management on your desktop, and using task management software on your phone. Use the task management software to write down ideas as you have them. When you rely on memory to track ideas, your mind becomes cluttered, and that can interfere with coming up with new creative ideas.
  10. Do not give up. Talent is not born. It's a decision to develop a skill, in spite of being repeatedly told that your idea is garbage. You will be told this over and over and over. Develop skill in your domain, and use that skill to figure out when those critics are right, and when to ignore them.


Re: Online scientific discourse is broken and it can be fixed

Unread post by pln2bz » Thu Jan 23, 2014 11:56 am

This is what I mean by "interesting stuff will happen" if/when you decide to do your own ethnography …
I'm doing a bit of research on plasma cosmology, so I found a post titled 'How I know Plasma Cosmology Is Wrong'. I thought that's the kind of resistance I need to cut my teeth on, and began to read. The post lost my interest after a few paragraphs of insulting people and ideas instead of getting down to business, so I cut bait and flicked down to the comments. I'm glad I did. I found one comment from a person using the name of the Swedish electrical engineer Hannes Olof Gösta Alfvén.

It's so beautifully crafted and likeable that I'm going to paste it below in italics, as a reminder to myself and others of how elegant reasoning can be persuasive and, more importantly, instructive. It's also a fantastic argument for the kind of science I'm interested in: one that celebrates the mystery instead of leaning, like a drunk on a lamp post, on increasingly threadbare Ptolemaic models.
This is a crucial data point: somebody has observed the thread of logic that will ultimately undermine the arguments for the CMB, and they GET IT. It's feedback like this that helps you determine when you are on the right track ...

