Trip Report: Security and Human Behavior 2010

Michael Roe

Session 1

Jeff Hancock

He is a psychologist who is interested in the language people use when they lie.

Online text is very easily recorded and searched. This may be a game-changer for lying (makes it more likely you will be caught out in a lie).

Are there universal cues for lying? Probably not: dating sites, insurance fraud and political lying are different kinds of deception.

One way of using machine learning to detect lies is to have two sets of texts, one of which is truthful and the other of which contains a lie, and to look for differences. But in a text that contains a lie, there will be many supporting statements that are truthful. An alternative approach is to classify sentences (rather than entire texts) as truthful/lying.
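
The sentence-level idea can be sketched with a toy bag-of-words Naive Bayes classifier (my own illustration, not Hancock's actual method; the training sentences and word cues are invented):

```python
from collections import Counter
import math

def train(samples):
    """samples: list of (word_list, label); returns per-label word counts."""
    counts = {"truthful": Counter(), "deceptive": Counter()}
    for words, label in samples:
        counts[label].update(words)
    return counts

def classify(words, counts):
    """Return the label with the highest smoothed log-likelihood."""
    best, best_score = None, float("-inf")
    for label, c in counts.items():
        total = sum(c.values()) + len(c) + 1   # add-one smoothing
        score = sum(math.log((c[w] + 1) / total) for w in words)
        if score > best_score:
            best, best_score = label, score
    return best

# Invented training sentences, labelled per sentence rather than per text.
model = train([
    ("i never touched the money at all".split(), "deceptive"),
    ("he she they never would i swear".split(), "deceptive"),
    ("we met at noon and had lunch".split(), "truthful"),
    ("i paid the bill and went home".split(), "truthful"),
])

# Classifying sentence by sentence keeps the deceptive cues concentrated,
# rather than diluted by the truthful supporting statements around a lie.
print(classify("i never took the money".split(), model))  # deceptive
```

With whole texts as training units, the truthful filler dominates the statistics; per-sentence labels are one way around that.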

Frank Stajano

His previous work was on “why do we fall for scams”? The current research problem: why do we follow the principles that make us fall for scams? (e.g. what was their evolutionary advantage?)

A recommended book:

Cialdini, R. B. (2001). Influence: Science and practice (4th ed.).

Peter Robinson

“Computers are autistic”: both computers and people with autism-spectrum conditions fail to notice non-verbal cues such as facial expression, which carry important information in conversation (whether the conversational partner is bored, understanding what is said, etc.)

Some facial expressions can be recognized (by human beings) from still photographs, others need video.

He has a computer program that identifies facial expressions from video.

He has also tried computer-analysing body posture. It’s harder to capture (the person being monitored needs to wear a suit with marker dots on). Posture contains a lot of information on what is being done, and who is doing it, but relatively little on how it is being done.

This software cannot detect deception: after all, it has been trained on samples provided by actors. However, there are some muscles in the face that aren’t under conscious control.

Pam Briggs

Biometric Daemons: Authentication via electronic pets

She has an idea for an electronic pet that is always with you, represents you, and is sustained by you. (It has a crypto key so it can authenticate on your behalf; it is continually checking your biometrics to be sure that it is in your possession).

Start with fixed biometrics. It can then learn dynamic biometrics such as gait and the locations you typically visit. The electronic pet will “pine and die” if it is stolen.

The rationale for this is that reassuring pets/children comes naturally to human beings; dealing with crypto protocols doesn’t.

You could actively reassure your pet by playing with it when you need to provide it additional assurance that you are present.

This mechanism could also be used in reverse: the pet will warn you/become anxious when you are about to do something risky.

(After the workshop, Bruce Christianson adds: “and if someone steals your pet, it can bite them”)

Mark Frank

Human Behavior and Deception Detection

The research literature suggests that police officers are more confident (but not more accurate) than college students at telling whether someone is lying.

BUT: in their job, police officers are trying to detect lying in people who are in high-stress situations (e.g. they will go to jail if their lie is detected), while the experiments are with low-stakes lies. To be valid, need to do experiments with higher-stakes lies (within the limits of what an Institutional Review Board will allow).

Result: Police officers are no better than students at detecting low-stakes lies, but are better at detecting high-stakes lies.

Martin Taylor

Martin Taylor is a hypnotist and stage magician. He thinks that there is no special “hypnotic state”. Rather, hypnotism relies on the following factors:

Q&A Session

(Frank Furedi) In the process of forming a personal identity, people come to live their lies, and believe them.

What is the relationship between privacy and deception? e.g. teenagers lie to their parents to preserve their privacy.

(Luke Church) When people misrepresent themselves to an online dating site, it is not necessarily an attempt to deceive other people: sometimes, it is an attempt to get around the limitations of the software. E.g. a dating site that will never match over-40s with under-40s: if you’re willing to date someone outside your age bracket, the only way to get a match is to lie about your age to the system.

(Jean Camp) What is the relation between deception and authority? e.g. suppose your dentist asks you how many children you have. It’s none of his business, and irrelevant to your dental treatment, so you lie.

(Jean Camp) Police officers tell lies too. In teaching them how to detect deception, we would also be teaching them how to lie convincingly: and this is one of the most dangerous groups of people to be lying.

Session 2

Petter Johansson

A psychological experiment that uses a stage magician’s trick. Show the subject two photographs, and ask them which one they prefer. Use sleight of hand to make them think you’ve given them the one they chose, but actually you gave them the other one. People will still justify their decisions! e.g. “I preferred that one because of her earrings” (when the photograph they actually chose was of the woman without earrings)

Terrence Taylor

He has edited a book, “Natural Security”. What can we learn from evolution by natural selection?

Rick Wash

Folk Models of Home Computer Security

He has carried out a survey of “folk models of security”—what typical users think is the cause of security problems.

Some users think security problems are due to “viruses”:

Other users think security problems are caused by “hackers”:

Wolfram Schultz

Risk-dependent reward value signal in human prefrontal cortex

The neuroscience of risk perception. By risk, he means the variance of the probability distribution, rather than the expectation: a game where you are 90% likely to lose is a bad game to be in, but is not very risky.
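
The distinction can be illustrated with a toy calculation (my own numbers, not Schultz's stimuli): a gamble you are 90% likely to lose has a bad expectation but low variance, while a fair 50/50 gamble has zero expectation but high variance.

```python
def mean(outcomes):
    """outcomes: list of (probability, payoff) pairs."""
    return sum(p * x for p, x in outcomes)

def variance(outcomes):
    m = mean(outcomes)
    return sum(p * (x - m) ** 2 for p, x in outcomes)

# 90% chance of losing 10, 10% chance of winning 1: bad, but predictably bad.
bad_game = [(0.9, -10), (0.1, 1)]
# 50/50 between -100 and +100: expectation zero, but very risky.
risky = [(0.5, -100), (0.5, 100)]

print(mean(bad_game), round(variance(bad_game), 2))  # -8.9 10.89
print(mean(risky), round(variance(risky), 2))        # 0.0 10000.0
```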

Brain response to risk shows an inverted s-curve. This is similar to the curve you get when you ask subjects for their subjective assessment of risk.

Mark Levine

Intra-group Regulation of Violence: Bystanders and the (De)-escalation of Violence

A quote from Frans de Waal: “We know much about the causes of aggression but much less about how aggression becomes violence”

He has analysed CCTV footage of fights breaking out. People spend a lot of time trying to deescalate the situation. As the group size increases, there are more attempts at de-escalation. Drawing a state transition graph with probabilities shows that the third turn is decisive in whether a situation ends in violence. Three different people intervening are much more likely to lead to a non-violent outcome than the same person intervening three times.
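
The state-transition idea can be sketched as a small Markov chain (the transition probabilities below are invented for illustration; Levine's actual figures are not in my notes):

```python
# States: "calm" and "violent" are absorbing; "tense" can go either way.
P = {
    "tense":   {"calm": 0.4, "tense": 0.4, "violent": 0.2},
    "calm":    {"calm": 1.0},
    "violent": {"violent": 1.0},
}

def outcome_distribution(state_probs, steps):
    """Propagate a distribution over states through `steps` turns."""
    for _ in range(steps):
        nxt = {}
        for s, p in state_probs.items():
            for t, q in P[s].items():
                nxt[t] = nxt.get(t, 0.0) + p * q
        state_probs = nxt
    return state_probs

# After three turns, most incidents starting "tense" have resolved calmly.
dist = outcome_distribution({"tense": 1.0}, 3)
print({k: round(v, 3) for k, v in dist.items()})
# {'calm': 0.624, 'tense': 0.064, 'violent': 0.312}
```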

Social psychology often points out the negative influence of groups. But this shows that groups can have a beneficial effect.

Q&A Session

Social psychology was developed during the 19th century by mostly middle-class scientists, against a background of social organization by the working class (e.g. trade unions etc). This may be why social psychology sees groups as a threat.

Q: Why does the Department of Homeland Security so overestimate risk? Surely they have enough experts that they can’t simply be mistaken.

A: Both the DHS and journalists are doing their jobs. (Which, in the case of journalists, is to sell newspapers rather than find out the truth).

Session 3

Stephen Lea

The psychology of scams: Provoking and committing errors of judgement

“Psychological factors in Internet scam compliance”

Ask people to complete a survey on which kinds of Internet scam they’ve fallen for, and correlate this against various psychometrics.

Ideally, you would like to survey a general population sample (“real people”). The Internet doesn’t reach all people equally: you have to have an Internet connection in the first place to be vulnerable to Internet scams. His survey used students, because it’s easier.

Falling for a scam is discreditable behaviour that people might not admit to. So ask them if they thought the scam was “plausible” as well as if they’ve given out information or money in response to it. (It is easier to say that other people might fall for it than to admit you’ve fallen for it yourself).

The following characteristics correlate with vulnerability to scams:

Chris Hoofnagle

Internalizing Identity Theft

A study of the victims of bank fraud. Get them to ask their bank for the details on how the fraud was done.

Problems with this study:

Tyler Moore

Would a ‘Cyber Warrior’ Protect Us? Exploring Trade-offs Between Attack and Defense of Information Systems

Now that nation states are interested in “cyber security”, some actors are both attackers and defenders. He has developed a game-theoretic model of vulnerability disclosure (do participants keep vulnerability to themselves so they can use them against their enemies, or disclose them so they can be fixed?)
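
A minimal sketch of such a disclosure game (the payoff structure below is my own invention, not Moore's actual model): a stockpiled vulnerability pays off only while the hole stays open everywhere.

```python
def payoff(my_action, their_action, attack_value, defence_loss):
    """Toy payoffs: stockpiling yields attack_value (exploiting the enemy)
    minus defence_loss (remaining exploitable yourself), but only if
    neither side discloses; once anyone discloses, the bug is fixed
    for everyone and both exploit and exposure vanish."""
    if my_action == their_action == "stockpile":
        return attack_value - defence_loss
    return 0.0

def best_response(their_action, attack_value, defence_loss):
    actions = ["stockpile", "disclose"]
    return max(actions, key=lambda a: payoff(a, their_action,
                                             attack_value, defence_loss))

# An offence-dominant "cyber warrior" prefers to stockpile...
print(best_response("stockpile", attack_value=5, defence_loss=2))  # stockpile
# ...while a defence-heavy actor prefers to disclose.
print(best_response("stockpile", attack_value=2, defence_loss=5))  # disclose
```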

Q&A Session

Some people treat scams like lotteries: they are fairly sure it’s a scam but are enticed by the small possibility of a big win.

With phishing, the impersonated bank has an incentive to get the fake site taken down. With (e.g.) lottery scams, there’s no-one to pursue it (except the police).

On Stephen Lea’s survey: the absolute rates (of how many people have fallen for a scam) should be taken with a pinch of salt. However, in surveys of this kind, the differences in response rates (correlation with various psychometrics) can be more reliable.

Session 4

Scott Atran

Talking to the Enemy

Scott Atran is an anthropologist who studies “the extremes of human behaviour”—suicide bombers, and leaders in intractable political conflicts.

“Sacred values” are the values on which people are unwilling to compromise (they are not necessarily to do with religion).

Purely symbolic gestures (such as offering to give an apology) increase the chances of getting agreement on a proposed settlement. On the other hand, asking people to compromise on their “sacred values” just makes them angry.

Dylan Evans

Ask people a series of factual questions (to which you know the answers), and ask them how confident they are in each answer. If they have correctly calibrated their own uncertainty, they will be right 10% of the time in cases where they said they were 10% certain. Weather forecasters correctly judge their own uncertainty in this way. However, most people do not estimate their own uncertainty correctly.
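
The calibration check can be sketched as follows (my own framing of the exercise, with invented data): bucket answers by stated confidence and compare against actual accuracy.

```python
from collections import defaultdict

def calibration(responses):
    """responses: list of (stated_confidence, was_correct) pairs.
    Returns the actual accuracy within each confidence bucket."""
    buckets = defaultdict(lambda: [0, 0])   # confidence -> [correct, total]
    for conf, correct in responses:
        buckets[conf][0] += int(correct)
        buckets[conf][1] += 1
    return {conf: c / n for conf, (c, n) in buckets.items()}

# A well-calibrated responder: right 90% of the time at 90% confidence,
# and right half the time when only 50% confident.
data = ([(0.9, True)] * 9 + [(0.9, False)]
        + [(0.5, True)] * 5 + [(0.5, False)] * 5)
print(calibration(data))  # {0.9: 0.9, 0.5: 0.5}
```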

Ragnar Löfstedt

Trust in politicians and regulators has decreased. One way to restore trust might be to get more public participation in decision-making. But (as shown by a case study of a proposed incinerator), the people who turn up to public meetings are:

  1. Opposed to the proposed incinerator
  2. People who have a lot of time on their hands to go to meetings

So the people who attend public meetings are not representative of the general population.

Bill Burns

“Public Response to 3 Crises: A longitudinal look”

He has surveyed people’s perception of risk after 3 major events, and measured how the perception of risk changes over time.

Chris Cocking

Gustave Le Bon wrote “The Crowd: A Study of the Popular Mind”. Le Bon had observed crowds during the Paris Commune of 1871.

There is a common belief that people panic in emergencies: but, usually they don’t. Instead, there is altruistic behaviour amongst strangers as people escape a common threat.

He has been studying what happens when crowds are charged by riot police (using video of G20 protest etc). Crowds scatter, but then they regroup and return to the situation. Charging a crowd unites them against a common enemy and makes them more militant.

Frank Furedi

He has been writing a book on the history of “fear and risk cultures”.

One example: In England, “worrying” television programmes are often followed by the announcement of a telephone helpline that viewers can call. This suggests to the viewer that they ought to be worried, and are abnormal if they aren’t.

Q&A Session

(Richard Clayton) Do different kinds of crowds behave differently? e.g. people who are present when a disaster happens; a crowd that has become too big; people who have come for a political protest.

(Chris Cocking) Yes, but there was more commonality and less difference than they expected.

Sometimes the experts discount the public’s ability to predict risk. Before the Times Square terrorist attack, members of the public who were asked predicted there would be another attack. And they were right.

There is a paper on “the jerk effect”—car drivers’ reaction after encountering a pothole in the road. The same principle applies to other risks.

Session 5

Luke Church

Computer security researchers often give “their mother” as the example of a non-expert user. This is a clever piece of rhetoric—no-one is going to argue that you don’t know what your mother is like. However, we should be cautious about using it—middle-aged women are often more expert users than this stereotype would suggest.

Computer security discourse often suffers from these problems:

It might be instructive to compare computer security to graphic design. Graphic designers learn from thousands of examples (and critiques of the examples) but don’t try to make a science out of it.

Rob Reeder

He has been working on improving warning messages in Windows. The “gold bar” (as seen, for example, in PowerPoint 2010) is a useful technique: the user is shown the “safe” content first, and then asked to decide whether they want/need to see the rest. It is bad UI to ask the user whether they want the “unsafe” content before you’ve shown them what’s in the safe part.

Angela Sasse

Years ago, she showed an example of the cost of password resets spiralling out of control. The bad news: the cost of password resets hasn’t got any better since.

Typical security usability problems:

Cormac Herley

Where Do Security Policies Come From?

He is interested in where security policies come from, taking as a case study the password policies (e.g. minimum length, required character classes) of different web sites.

Estimate the entropy of the passwords, and use this as a very rough measure of the level of security (acknowledging the problems with entropy as a measure of password security).
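
As a back-of-the-envelope version of that estimate (my own arithmetic, not Herley's data): treat each required character as contributing log2(alphabet size) bits.

```python
import math

def policy_entropy_bits(min_length, alphabet_size):
    """Crude upper bound: assumes characters are drawn uniformly
    and independently from the allowed alphabet."""
    return min_length * math.log2(alphabet_size)

weak = policy_entropy_bits(6, 26)     # six lowercase letters
strong = policy_entropy_bits(14, 72)  # fourteen chars over a 72-symbol set
print(round(weak), round(strong), round(strong - weak))  # 28 86 58
```

A gap of roughly 50 bits between two policies corresponds to a factor of about 2^50 (approx. 10^15) in brute-force resistance.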

Why does one site require 2^50 (approx. 10^15) times more brute-force resistance than Amazon?

He statistically correlated password strength with various attributes of the site. Password strength does not seem to correlate with measures of security need (e.g. the number of users the site has). Rather, it correlates with whether the user is able to go elsewhere. Sites in .com have less stringent password policies than those in .edu or .gov.

Joseph Bonneau

He explained why entropy is not necessarily the right measure to use for password security.

The “RockYou” site had its password database compromised. He has searched the leaked password database for numeric PINs: his idea is that sites that require a PIN rather than a password (e.g. bank ATMs) may have a similar distribution.
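
The search itself is simple to sketch (my reconstruction of the idea, not Bonneau's code): pull all-digit passwords of PIN length out of a leaked list and tally them.

```python
from collections import Counter

def pin_distribution(passwords, length=4):
    """Tally passwords that look like numeric PINs of the given length."""
    return Counter(p for p in passwords if p.isdigit() and len(p) == length)

# Invented miniature "leak" for illustration.
leaked = ["password", "1234", "123456", "iloveyou", "1234", "2580", "abc1"]
print(pin_distribution(leaked).most_common())
# [('1234', 2), ('2580', 1)]
```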

Q&A Session

(Rob Reeder) When designing an error message, you have to be as aware of the cases when it’s perfectly safe as when there’s a real danger.

(John Kelsey) Another measure of password strength is: is it Googleable? He and a friend invented a private language, and a question and answer pair in this language is on a web page. This would be a very bad choice for a password reset question, because the only Google hit for the question gives the right answer. (Even though the attacker does not know the language at all).

(Jean Camp) “The invisible hand is giving you the finger”. Banks don’t care if their customers lose money, so it doesn’t follow that the .com sites (in Cormac Herley’s presentation) are choosing the optimal password length. While .edu .gov sites have offloaded some of the cost of the security mechanism, the .com sites may have offloaded some of the costs of security breaches.

In the case of RockYou, everyone had their password compromised. So the strength of the password wasn’t the weakest link in the system, and users who put in extra effort to choose good passwords were wasting their time. Also, RockYou is not a high-value site for the customer: they may not care if their RockYou account is compromised.

(Richard Clayton) Whether you can brute-force a password depends on other security measures taken by the site (e.g. what do they do when someone tries passwords repeatedly). So it is misleading to compare password strength in isolation. Sites that allow weak passwords may have other security measures to make up for it.

Corporate security policies are often driven by the auditors, and not the IT departments. Amazon (for example) may be so concerned about potential customers being driven away by password policies that they are prepared to say no to their auditors. (Which still leaves the question of why Amazon is able to say no when other sites aren’t.)

(Luke Church) Security professionals are engaged in a “confessional discourse” about how they can’t protect passwords. We aren’t making progress. We need to “reach closure” on this issue and move on.

Session 7

Alessandro Acquisti

He has carried out an experiment to measure people’s “differential discounting”. Tell study participants a story about person A:

Participants are then asked “How much do you like this person?” Bad things are not just stronger than good ones; they are discounted differently with the passage of time.

Sandra Petronio

She has done research on privacy, including “reluctant confidants”—people who are given information they don’t want. Examples include nurses and bartenders. Another group of “reluctant confidants” is pregnant women whose doctors give them information relevant to the pregnancy that they don’t want to receive.

Lukasz Jedrzejczyk

I Know What You Did Last Summer: risks of location data leakage in mobile and social computing.

He carried out a study on a location-sharing site, where he showed users what it was possible to find out about them, and asked them what they thought.

He will repeat this study with Twitter geotagged tweets.

Andrew Patrick

For the last six months he has been working for the privacy commissioner of Canada.

What is a reasonable expectation of privacy?

People want to know:

A four-point test of appropriateness:

Bruce Schneier

He was at a recent debate in Washington on the subject of “The threat of cyberwar has been greatly exaggerated”. The debate was lost (i.e. the debate audience thought it wasn’t exaggerated).

Bruce thinks that the real examples of “cyberwar” are not really war, but are either crime or espionage.

China and Google: this was an example of espionage carried out by a nation state (assuming the press reports are accurate…)

Estonian denial of service attack: This was an action by individuals, not nation states.

Q&A Session

Bruce Schneier: “In the US, we never use the word ‘war’ for real wars.”

Rick Wash says that in his study, none of the people he interviewed described computer security problems in terms of “war”.

To what extent do confidentiality obligations require consent from the confidant? If you receive an envelope through the post labelled “private and confidential” are you under any obligation to respect this, if you have no prior agreement with the sender?

Session 8

Nick Humphrey

He has an explanation for the placebo effect. The human body has a “health management system” that takes inputs, makes a forecast, forms a plan, and then makes an output. The inputs are signs of injury; pathogen detection; threats; weather and season; energy reserves; social support etc. The outputs are sickness behaviours; pain, fever; immune response; remedy seeking; pleas for help etc.

A placebo tricks the body into thinking that prospects are better, and hence that it should expend resources on healing now, rather than keeping them in reserve for later. The reason that this is usually beneficial, rather than harmful, is that most people today are in a safer situation than our ancestors were in the distant past. In the evolutionary past, keeping healing resources for later was more often the optimal strategy.

Hospitals give conflicting signals. On the one hand, they give many signals that you are in a place of safety. On the other hand, you are surrounded by lots of very ill people.

On a different subject: Computers are used for lots of different things: banking, violent games, pornography etc. Nothing in human history has been like this. It is not surprising that people make bad decisions.

John Adams

There are three different kinds of risks:

A model of risk (the “risk thermostat”): propensity to take risks and the rewards of risk-taking form the top loop (“top loop bias”); perceptions of risk, fed by accidents, form the bottom loop (“bottom loop bias”); balancing behaviour mediates between the two loops.

Mary Douglas’ cultural theory of risk:


A military example of this (one entry per quadrant of the table):

“poor bloody infantry” | Eisenhower
Patton | Ideologues; suicide bombers; “ban the bomb”

A psychiatric example:

Prozac | Psychiatrists self-medicate with alcohol

(I didn’t have time to copy all of this table)

Alma Whitten

She has been developing a simple explanation of what Google logs about its users’ searches.

Q&A Session

Cynthia Breazeal at MIT’s media lab has been working on “emotional robots”. The project is called Kismet.

(Peter Robinson) Isn’t it a sense of place that puts us in the right mood? For example: going into a bank.

(Bill Burns) Concern about terrorism in the US seems to vary geographically, but the places where people are most worried aren’t where they are most at risk. The cultural theory of risk may be relevant here.

It’s easy to make a robot that generates an emotion in a human being, but it is hard to sustain the emotion: the novelty wears off.

Allegedly, B.F. Skinner had two desks, only one of which he used for writing. He conditioned himself to always write when he was at his writing desk.

(Stephen Lea) While (e.g.) a bank and a brothel have traditionally been separate places, our thoughts have never been separated. Not all thoughts that are had in church are religious! The problem with computers/the Internet may not be so much to do with switching between different contexts, as that it makes it very easy to translate a thought into action.

(Alma Whitten) (re. cybersecurity): Are we talking about people dying, or not being able to read their email?