Why You Shouldn’t Do Your Child’s Homework

Children learn from stress and failure

Source: Why You Shouldn’t Do Your Child’s Homework


Calling Your Grandma Doesn’t Cut it, Study Suggests

Face-to-face contact reduces depression rates significantly, a study says.

Source: Calling Your Grandma Doesn’t Cut it, Study Suggests


Therapy could prevent healthy kids from developing anxiety disorders

Source: Therapy could prevent healthy kids from developing anxiety disorders


Ten ideas that will create a better science strategy

“The Government is due to deliver its strategic plan for State investment in scientific and humanities research this month, with most commentators stressing this had better come before we des…

Source: Ten ideas that will create a better science strategy


The frailty and error-proneness of human cognition and the decision to torture

Under many conditions (including difficult and stressful ones), people rely on heuristics (cognitive shortcuts that enable decisions) and are prone to the effects of social processes such as groupthink. More than fifty years of data from experimental psychology and experimental brain research give us a guide to how human rationality and reason are bounded and error-prone. Daniel Kahneman, experimental psychologist and Nobel Memorial Prize winner, gives a marvellous and extensive exposition of these findings in his tour-de-force “Thinking, Fast and Slow”. Decision-making within organisations (political institutions, law enforcement, etc.) is often a difficult and ideologically charged process. Political and civil service systems can undervalue expertise, suppress cognitive diversity and discount evidence in favour of ideology.

There are persistent and enduring cognitive errors which lead to faulty decision-making by individuals (political leaders, civil servants, bankers, etc.) and institutions (social systems and organisations: Government departments, banks, churches, etc). Here are a few.

Anosognosia is the condition of literally being ‘without knowledge’ (being unaware) of a cognitive or other impairment, and behaving as if there is no problem. Governing elites do not know that they do not know, nor do they even know what they need to know. Complex and difficult problems (such as how to collect valuable and useful intelligence from multiple humans under conditions where the targets are continually moving and changing) are best solved by groups with substantial intellectual strength and capacity (obvious) and substantial diversity of experience (not obvious). Absent these factors, people will rely on folk or lay intuitions as the basis for deciding courses of action – “we need to know quick – let’s waterboard this guy”. Government decisions are taken and implemented within a group context or contexts – documents are evaluated, discussions occur, and decisions are taken and then implemented (sometimes in ways that subvert the spirit and intentions of the original decision).

Groupthink occurs when a group makes poor decisions because of high levels of within-group cohesion and a desire to minimise conflict (as might happen in an exhausted, embattled and worn-out Government Cabinet desperate to forestall another terrorist attack).

The necessary evidence-based critical analysis does not occur. Groupthink can be reduced by the leader and the group having an extensive network of weak ties to other individuals and groups. Weak ties provide us with novel ideas and knowledge, and provide a route to ‘reality-test’ planned courses of action. An extensive national and international weak-tie network will provide Government Ministers with knowledge, insights and ideas unavailable within their usual cognitive and social bubble. It allows them to ask, for example, how the leaders of countries X, Y and Z coped with terrorist outrages: “You guys have waterboarded, beaten, isolated and electrocuted terrorist suspects. How did that work out for you?” European countries have a long, sad and sorry history of doing just this – and within living memory too.

An interesting cognitive error during complex decision-making is failing to explore counterfactuals, because these might falsify or invalidate a course of action. Exploring counterfactuals forces you to ask why you might be wrong! (“Why might freezing this guy be a bad idea?”).

Cognitive dissonance is the unpleasant feeling caused by holding two contradictory beliefs simultaneously: for example, “our current interrogation methods aren’t delivering; my colleagues and I are good people and are part of the system; therefore it isn’t the system; the problems lie elsewhere” (with the terrorists – although, logically, that may also be true).

Verificationism (also known as confirmation bias) is a pervasive cognitive bias in which evidence favouring a particular point of view is collected and weighted heavily, while contrary evidence is discounted or ignored (“All the programmes I watch on telly show torture working”). Its opposite, falsificationism, is a difficult habit of mind to acquire, but a must for any working scientist: it requires considering what empirical evidence would invalidate (falsify) the position you are adopting. One way of avoiding this bias is to state clearly what empirical evidence would falsify your opinion or theory; another is to build an evidence-based brake into policy formation. In science this is done by international, anonymous, expert peer review. Peer review and similar systems can be built into the process of Government via policy-review boards.

The arguments for torture may also pivot around the focusing illusion, a cognitive error which emphasises only upside arguments (local benefits: ‘quick and easy knowledge about terrorist networks and ticking time-bombs’) but ignores costs (the destruction of reputations and lives, contempt for international treaties and the rule of law, and acting on what is false knowledge).

Language has the important property of ‘framing’ arguments and discussions. The crime debate in the UK was at one time dominated by the phrase ‘a short, sharp shock’, which relied on the folk theory that quick and severe punishment would shock teenagers out of criminal tendencies. (The pleasing alliteration of the successive sibilants was an important, but useless, selling point too.) Short, sharp shocks, of course, predictably have no such effect, but why let data from the psychology of punishment and from criminology influence debate? The phrase ‘cut and run’ was used to forestall debate about the palpably failing US military strategy in Iraq, until empirical reality forced a change of direction. There are many other cognitive errors (for example, availability and affect heuristics, motivated reasoning, competence illusions, overconfidence, incentive effects), and humans are prone to them too (especially under duress, as cognition degrades under stress).

Individual rationality and cognition are limited and error-prone. Institutionalised decision-making supports are vital to ensure that decisions are made using the best evidence and logic available.

My book, Why Torture Doesn’t Work: The Neuroscience of Interrogation, is available on Amazon and will be released in November 2015.


Counterfactuals and metacognition regarding ticking time bombs and torture: A lesson from Mel Gibson

Payback, a Mel Gibson thriller from 1999, is a tough and visceral film, with an extraordinary torture scene involving a hammer. Gibson plays Porter, a hardened, unpleasant enforcer and killer, who is possessed of a curious sense of honour (and seemingly lacking a forename). Payback nicely shows that torture can be used to extract information, but that the information extracted is neither complete nor useful, and is in fact wholly destructive to those who forced its extraction.

Gibson, playing Porter, is captured by his enemies, taken to a garage, tied to a chair and asked to voluntarily give up the information about where he has imprisoned the son of the lead mobster (played by Kris Kristofferson). He refuses, whereupon his shoes and socks are removed. He is menaced with a hammer and understands that non-disclosure means imminent agony. He is asked again to disclose where the son is. He again refuses, and the small toe of his right foot is crushed irreparably with the hammer. Unsurprisingly, this causes terrible pain and agony.

Cleverly, we are not shown his crushed toes – these are left to the imagination (and would presumably look like pulped meat). He is asked again for the son’s location, and he again refuses to disclose it. The hammer is applied again with great force, crushing another toe, again irreparably. He is asked once more to disclose where he has imprisoned his captor’s son. This time he does so; he is removed from the chair, placed in the trunk of a car and driven to the location (a seedy hotel) where the son is allegedly being held. His captors move rapidly inside the hotel, run upstairs to the room where the son is believed to be held, and enter the room, whereupon a phone rings, triggering a booby-trap bomb placed under the bed, which explodes and kills them all.

Porter has managed to escape from the trunk through the soft fabric back seat of the car and has used the car phone to trigger the explosives. The narrative here points to a compelling problem with the use of extreme stressors to force a captive to disclose information in what is supposedly real time. In this case Porter has lied, and has willingly endured terrible suffering, in order to give the appearance and substance of having told the truth, to lure his captors to their deaths and to save himself.

There are several interesting features of his lying. In order for Porter to ensure his lies are convincing, he knows that he must endure terrible pain and suffering. He must also be able to make a reasonable estimate of the degree to which he must suffer in order for his captors to be convinced of the apparent truth of what he is about to say. Too little, and he will not be believed; too much, and he may not survive.

Thus Porter engages in the sort of metacognition only possible for someone possessing “Theory of Mind” – he is able to infer the psychological states of others, and to infer what others in turn are likely to be thinking about him. In turn, Porter uses his estimate of what others are likely to be thinking about him to turn the tables against them – a form of metacognitive capacity known as ‘deceptive intent’. Porter is also capable of enduring great pain in the present moment in order to achieve a much greater deferred reward in the future. Porter therefore presents a case of extreme self-control – even in the face of an imminent and terrible attack on his bodily integrity. (I would guess, given the trials and tribulations of his private life, that Mel Gibson might regret not having more of Porter’s extreme self-control!) Porter manifests this self-control in the face of what will be overwhelming pain signals from the periphery of his body. Stubbing a toe is painful enough; having your toes systematically crushed, one by one, and anticipating this, will be much worse. The narrative drive here differs from the usual fictional ticking time bomb situation, but those focused on the ticking time bomb rationale will rarely, if ever, discuss other possible scenarios – scenarios that immediately falsify the reasoning which leads to torture as the only possible route to the needed knowledge.

The point here really is this: the ticking time bomb scenario is most often presented with a single rationale (a big bomb, a population centre, and now!), but this presentation is both idealised and abstracted, to use Henry Shue’s formulation in his famous paper, Torture in Dreamland: Disposing of the Ticking Bomb. In fact, any number of variations are possible amidst the chaos of the real world. Dummy bombs, proxies, booby-traps, misdirections, lying to run down the clock, several bombs: you name it, any and all are possible. To devote lots of serious thinking to a single counterfactual, with a single narrative drive and a singular ‘man’s gotta do what a man’s gotta do’ storyline, is to deny the actual reality of human ingenuity.

And if Porter can subvert torture with metacognition and self-control, then those who take the ticking time bomb scenario seriously (like Michael Ignatieff) really need to get out to the cinema more. Of course, those who take the ticking time bomb seriously never specify the tortures that willing amateurs would have to impose on prisoners in the field to loosen tongues truthfully in complex scenarios involving possible double-dealing, incomplete knowledge and deceptive intent. And the blood and filth involved in torture? They are silent on this too, and on the amateurs who are supposed to know ‘what to do’.

They might learn that even movie scenarios are more complex than the ‘exceptional case’ over which they cogitate.

My book, Why Torture Doesn’t Work: The Neuroscience of Interrogation, is available on Amazon and will be released in November 2015.

See also:


Unthinking the Ticking Bomb – David Luban

Liberalism, Torture, and the Ticking Bomb – David Luban


Should you rely on first instincts when answering a multiple choice exam? From The Conversation

Should you rely on first instincts when answering a multiple choice exam?

(Reposted from The Conversation under a Creative Commons Attribution NoDerivatives Licence)

Justin J Couchman, Albright College

Often, you’ll hear people say that you should “trust your instincts” when making decisions. But are first instincts always the best?

Psychological research has shown many times that no, they are often no better – and in many cases worse – than a revision or change. Despite enormous popular belief that first instincts are special, dozens of experiments have found that they are not.

While that may be a useful fact to bring up in an academic discussion, anyone who has ever made a decision in real life will undoubtedly reply:

But I remember times when I made a correct choice, then changed my mind and was wrong.

This happens for two reasons: First, humans naturally have something called an endowment bias, where we feel strongly attached to things we already have (our first instinct, in this case).

We don’t want to give it up, and we feel especially bad when we give it up and it turns out later to be correct. We remember these instances vividly, and thus they seem to be very common, even though all the research shows that they are less common than successful revisions.

The second reason is more obvious: sometimes first instincts actually are correct. The problem is figuring out when to trust yourself and when to change course.

The solution may lie within the realm of “metacognition,” the ability to “think about thinking” and use those thoughts to monitor and control behavior.

I originally began exploring “metacognition” in rhesus monkeys. They would be given various questions, some easy and some quite difficult, and had to either answer or report that they did not know. I was amazed at their robust ability to look into their own minds and “know when they did not know” the right answer. They were able to accurately judge whether they would get the question correct or incorrect.

I was equally amazed that my (human) undergraduate students sometimes seemed to lack the ability. They were always surprised at their exam grades; some would significantly overestimate their performance, while others underestimated it. They would also believe their first instincts were special, even when their own behaviors – successfully revising answers – showed otherwise.

Surely my students knew how well they performed on each question, and could thus figure out how well they’d done on the exam, right? Recently, my colleagues and I tested this by studying students’ metacognition while they were taking exams.

The experiment

We asked the students to track their confidence on each response to a real multiple choice psychology exam, marking it either a “guess” or “known” to indicate how sure they were about their original answer. They also marked whether or not they revised that original response.

More often than not, the students’ revisions – changes from a first instinct to a new choice – resulted in a correct answer. And on questions that caused the most uncertainty, sticking with an initial response was a bad idea: they were wrong more than half the time.

Their “metacognition,” in the form of confidence ratings for each question, was an excellent predictor of whether they made the correct decision and thus whether or not they should change their response. In other words, they were able to tell, in the moment, whether or not they would get the question correct. And because they wrote down those accurate judgments, they could use them later when deciding to change their answer or not.

In a second experiment, we looked at sticking with original answers.

Again using metacognitive confidence ratings, this time on a 1–5 scale, students were able to identify the questions that they were most likely to get correct or incorrect.

Using those ratings as a guide, we found that when they chose to stick with an original instinct they were correct more often than not.

Thus, both revisions and first instincts were correct most of the time.

Tracking feelings of confidence

On the surface, that might seem like a contradiction. And it would be, if the only tools the students had in their arsenal were “always trust your instincts” or “always change your mind.”

But we gave them a slightly more sensitive tool, a written-down record of their metacognitive confidence, which allowed them to choose when to revise and when to stick. Everyone feels their level of confidence when they make a decision, but the problem is that we quickly forget this information when we move on to the next decision.

Because they rated their confidence for each question on paper, they could use those ratings instead of (notoriously faulty) memories.

Using this tool, they made more informed choices that helped them perform better.

But why take the time to record confidence levels for each individual question? If they know how they performed on each question, don’t they know how well they did at the end of the exam?

It turns out, no.

Despite being excellent at predicting their performance on each question during the exam, when we asked after the exam, students were very bad at judging how well they’d done.

[Image: How can students make more informed decisions? Wellington College, CC BY-NC]

I’ll give you just one dramatic and disturbing example. We asked them, at the end of the first study I mentioned above, whether they thought a revised choice was more likely to be correct than a first instinct.

Despite the fact that their actual choices and ratings, moments earlier, clearly showed that revisions were better, the overwhelming majority of students falsely believed that their original choice would be the best.

That is the dramatic part.

The disturbing part is that an even larger majority reported that a professor or teacher – apparently unaware of the huge body of literature to the contrary – had specifically told them that first choices were most likely to be correct.

Thus, the key to knowing when to stick with your first instinct and when to change your mind is to track feelings of confidence during the moment you make the decision.

During college exams, both revising and sticking with original answers had the potential to result in more correct than incorrect answers.

Only the self-tracking of confidence levels predicted when each was more appropriate. By using that simple form of metacognition, students could better identify which questions to revise and which were better left alone.

Making informed decisions

Of course, this practice of tracking feelings is useful for more than just test-taking. Memory is notoriously unreliable, and we are subject to numerous fallacies and biases.

This leads to problems in almost every area of decision-making. Most of these problems stem from the fact that our beliefs about ourselves and our personal histories are usually formed long before or after a decision, not “in the moment.”

Upon reflection, things often seem much different than they actually were.

Tracking how you feel while initially making a decision can provide valuable information later, can help you make more informed choices and will better prepare you to revise your initial decision when necessary.

I would encourage all educators to consider these findings both while administering exams and while forming their own beliefs about how students learn and take tests. Like the students themselves, our reflective beliefs often differ from the actual experience.

Students benefit from a system that allows them to build metacognitive skill, and they will generally make better decisions if they use empirically validated information about their confidence rather than a folk belief or popular misconception. It is also relatively simple to do this during paper-based or electronic exams, so there is little cost.

Educators should, perhaps most importantly, be wary of giving advice based on their subjective beliefs or (almost certainly) unreliable memories, and should instead foster a useful skill based on memory and metacognition research.


Justin J Couchman, Assistant Professor of Psychology, Albright College

This article was originally published on The Conversation. Read the original article.
