This work is in collaboration with Paul Wilson, of “The Real Hustle” television show. They have identified some common principles involved in frauds, including:
The philosophy of deception. Deception is hard to define in an acceptable way:
No, because you might have unintentionally caused them to have a false belief.
No, because the “deceiver” might be honest, but mistaken.
There are several objections to this, of which the easiest to explain is that non-human organisms deceive.
His approach is to define deception in a way that works for non-human organisms, and then extend it to people:
The “purpose” of a mechanism is defined by natural selection.
Security is a “fear sell”. Example: the Conficker worm “phoned home” on 1 April. This wasn’t actually a big deal, but the specific date focussed media attention.
The U.S. government has hired science fiction writers to predict what the new risks will be. There are several problems with this:
So the effect is that the risks will seem to be scarier, and less under your control.
Policy changes in response to dramatic events. Compare Foucault’s “rupture in history”, Thomas Kuhn’s “paradigm shifts”, punctuated equilibria in evolution.
He has taken a USAF document describing major policy changes since 1945. In every case, an unexpected event was the cause of the change.
Experimental subjects were told to produce two resumes, one that will be private and one that will be published on LinkedIn. The LinkedIn version was more accurate.
The use of the first person singular (“I”) goes down when people lie. He has carried out a statistical analysis of statements made by the U.S. government about the war in Iraq. The correlation between pronoun use and statements that subsequently turned out to be false was very strong.
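The pronoun-rate idea can be sketched crudely. This is only an illustration of the kind of measurement involved, not the speaker’s actual method, and the example sentences are invented:

```python
# Crude sketch of a pronoun-rate "deceit marker": count
# first-person-singular pronouns as a fraction of all words.
# (Real analyses use much richer linguistic features.)
import re

FIRST_PERSON_SINGULAR = {"i", "me", "my", "mine", "myself"}

def first_person_rate(statement: str) -> float:
    """Fraction of words that are first-person-singular pronouns."""
    words = re.findall(r"[a-z']+", statement.lower())
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in FIRST_PERSON_SINGULAR)
    return hits / len(words)

# On the hypothesis above, lower rates correlate with deception.
direct = "I saw the report myself and I checked my figures."
evasive = "The report was reviewed and the figures were checked."
assert first_person_rate(direct) > first_person_rate(evasive)
```

Note the evasive sentence uses the passive voice to avoid pronouns entirely, which is exactly the pattern a drop in first-person usage would pick up.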
Q: Have there been any studies on how much fraudsters make? It doesn’t seem very profitable. (Compare drug dealers in “Freakonomics”).
Andrew Adams: Japanese doesn’t use pronouns as much as English. Are there deceit markers in other languages?
Reply: They’re looking at Arabic...
Angela Sasse: When people exaggerate in their online dating profiles, it’s not just about deception. It’s people saying what they think they should say – a social control mechanism.
Audience: People’s self-perception is related to a norm, which may be different from someone else’s perception of the norm. E.g. “I am a clean roommate” – compared to the house I just moved out of.
A study of phishing. Get students to go through an inbox with some phishing messages, and don’t tell them that the study is about phishing. The experimenters got into trouble because their experimental phishing site was visible on the rest of the Internet, not just to the experimental subjects.
Educational strategies sometimes don’t make people better at distinguishing phishing from non-phishing: they just shift the detection threshold, achieving fewer false negatives at the cost of more false positives.
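The threshold-shift effect can be shown with a toy model (the suspicion scores and labels below are invented for illustration): making a user more suspicious moves the threshold, trading false negatives for false positives, without improving discrimination at all.

```python
# Toy model of the threshold-shift effect: "education" that merely
# raises suspicion lowers the decision threshold. False negatives
# fall, false positives rise; the underlying ability to discriminate
# phishing from legitimate mail is unchanged.
def classify(scores, labels, threshold):
    """Return (false_positives, false_negatives) at a given threshold."""
    fp = sum(1 for s, phish in zip(scores, labels) if s >= threshold and not phish)
    fn = sum(1 for s, phish in zip(scores, labels) if s < threshold and phish)
    return fp, fn

# Suspicion scores a user assigns to messages; True = actually phishing.
scores = [0.1, 0.3, 0.4, 0.6, 0.7, 0.9]
labels = [False, False, True, False, True, True]

before = classify(scores, labels, threshold=0.65)  # (0 FP, 1 FN)
after = classify(scores, labels, threshold=0.35)   # (1 FP, 0 FN)
```

The scores are identical in both runs; only the threshold moved, which is why this counts as a shift rather than genuine improvement.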
Describes an experiment in which subjects were presented with a computer security mechanism, but described as if it had nothing to do with computers. Imagine you have a very eccentric landlady, who rents you two apartments, and allows you to choose between several schemes for building access...
They carried out an experiment in which they watched people install PGP. Several problems emerged:
Report on a real attack, seen in practice, on voting machines. The machines can be configured in such a way that the printed instructions are invalid, and following them is insecure. The change involves adding an additional question at the end, which allows the user – or the next person to use the machine – to go back and change their vote.
“Extended Validation” certificates have had extra checking of the subject’s identity, and are identified in the browser by a green bar. But do users understand what the green bar means?
There is a picture-in-picture attack, where the fake web site contains a picture of the browser UI, including the green bar. Users don’t understand the difference between the “chrome” (drawn by the browser) and images served by the remote web site.
Usability study of “secret” questions for authentication. Get people to answer “secret” questions – then get their partner to guess their answers.
The answers to questions about “favourites” are often forgotten over time.
The statistical attack – just guess the answer most people choose – works quite well. The study population was all from the same geographical area, which may have increased this effect.
Questions are either too memorable or too easy to guess. If users pick their own questions, they pick ones with too little entropy.
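The statistical attack on “secret” questions is simple enough to sketch directly; the answer data below is invented, but it shows why a homogeneous population makes the attack stronger:

```python
# Sketch of the statistical attack: always guess the most popular
# answer in the population. With low-entropy "favourite" questions,
# one guess can break a large fraction of accounts.
from collections import Counter

def statistical_guess(answers):
    """Return (most common answer, fraction of users it breaks)."""
    guess, hits = Counter(answers).most_common(1)[0]
    return guess, hits / len(answers)

# e.g. "favourite food" in a geographically homogeneous population
answers = ["pizza", "pizza", "sushi", "pizza", "curry", "pizza"]
guess, success_rate = statistical_guess(answers)
# guessing "pizza" breaks 4 of 6 accounts in a single attempt
```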
Study of phishing and mule recruitment sites. How long does it take before these sites are taken down?
Angela Sasse: With the browser’s green bar, it’s “reliance”, not “trust”. The users don’t know there’s a potential threat.
Jean Camp: “Educate the user” is a nice way of saying “dump the responsibility on the user”.
Jeffrey Friedberg: The certification authority should have a vested interest in verifying the identity of the certified entity.
Luke Church: Studies have found that the lower-level a concept is, the more likely users are to describe it correctly. “Encryption” is likely to be described correctly; “relationship” is not.
Jean Camp: Some people say that it’s hard to put a value on the cost of electoral fraud, but in this example of a real attack seats on the town council were sold for a price.
Study of biometric data from the “US-VISIT” programme. Fingerprint quality wasn’t very good, especially for older people. Photos weren’t very good either.
Acceptability of a biometric depends on the context, e.g. identifying a person who has just been arrested is different from identifying someone who wants to rent a car.
Analogy with nuclear waste: these systems are going ahead without dealing with the fundamental problems.
Principles of user-centered design:
“Appliancisation” – security rules are being embedded in an appliance, with only a limited choice of options. This is a dubious form of control, and a dubious form of function negotiation. Some applications (e.g. Wikipedia) don’t fit into the standard ACL model. It requires you to predict in advance what the technology will be used for, and it puts too much power in the hands of the technologist.
They have produced a tool that gives the user a more programming-like interface.
Design principle: Seek ways to leave the last mile of design to your users.
You can teach users. But you can’t teach them very much, so you had better think carefully about what it is you want to teach them.
Why this is hard:
Phishing is a mismatch problem between what’s in the user’s head and where the browser is taking them. But the browser doesn’t know what’s in their head.
They have a prototype that, for high-security sites, uses an isolated browser with the name and public key in the bookmarks.
With certificate error messages, configuration errors are more likely than actual attacks, so people are acting rationally in ignoring the messages.
Described a system for resetting passwords by asking “trustees”.
(more to write up)
I was a speaker in this session, so I wasn’t able to take notes.
He has been working on a collaboration with a biologist:
Sagarin, R.D. and Taylor, Terrence. Natural Security: A Darwinian Approach to a Dangerous World. University of California Press, 2008.
Biohazards ranked in order of intentionality:
A related paradox: how can our society be prosperous and yet have widespread innumeracy?
… and yet, despite this, society survives
“Cyberspace” was conceived by technologists as a new world that compensates for the defects of human space (compare John Perry Barlow’s “Declaration of Independence of Cyberspace”). But cyberspace and human space are intertwined, and in fact human space is used to patch over the deficiencies of cyberspace.
She has carried out ethnographic fieldwork on teenagers who use social media.
Lies. When they give their country of residence as Afghanistan (first on the list) or their age as 61 (16 reversed) they only want the system to think this. They don’t think they’re really 61 years old, or from Afghanistan. In many cases, you need to be over 13 (or say you’re over 13) to get access. Lie to be safe: e.g. not giving out your real address online.
Password sharing. Parents often demand that they have their child’s password, so they can check what they’re doing. Passwords are also shared with a significant other, as a sign that you have nothing to hide. (Leading to tricky situations when the relationship breaks up).
Privacy is not dead! Teenagers put lots of information online, but what they put online is – deliberately – not the whole story.
The home is not seen as a private place. Privacy is about having a sense of control.
He has been studying CCTV footage of drunk people getting into fights. Due to privacy constraints, he doesn’t have access to the audio or other identifying information. This makes it a harder problem than primate research, because with non-human primates you at least know their status within the troop. He has built a probabilistic (state-change) model in which “escalating” or “de-escalating” gestures by different parties affect the probability that the incident ends in violence. It turns out to be very important that the de-escalating gestures are made by different people: a pattern of 1,1,1 is much more likely to end in violence than a pattern of 1,2,3.
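The state-change idea can be illustrated with a toy model. The speaker’s actual model is surely richer, and the base probability and discount factors here are invented, but it captures the claimed pattern: de-escalating gestures from distinct people matter more than repeated gestures from one person.

```python
# Toy state-change model: each de-escalating gesture multiplies down
# the probability of violence, but a gesture from a *new* person has
# a much stronger effect than a repeat from someone already involved.
# All weights are invented for illustration.
def violence_probability(gestures, base=0.8,
                         new_person_discount=0.5,
                         repeat_discount=0.9):
    """gestures: sequence of person IDs making de-escalating gestures."""
    p = base
    seen = set()
    for person in gestures:
        if person in seen:
            p *= repeat_discount       # same person again: weak effect
        else:
            p *= new_person_discount   # a new voice: strong effect
            seen.add(person)
    return p

# One person repeating (1,1,1) vs three different people (1,2,3):
p_same = violence_probability([1, 1, 1])      # 0.8 * 0.5 * 0.9 * 0.9
p_distinct = violence_probability([1, 2, 3])  # 0.8 * 0.5 ** 3
assert p_same > p_distinct
```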
In psychology, groups are often seen as bad: mob violence; mass hysteria; peer group pressure; the presence of others weakens the controls which stop violence and weakens the ability to resist anti-social influences. But here is an example of a type of situation where the involvement of others significantly reduces the probability of violence.
Facebook has re-invented many of the Internet’s core protocols - except that they have turned them into a “walled garden” with social networking added.
Audience: Humans over-react to events, but natural selection doesn’t. A threat has to be real and sustained before natural selection kicks in.
There was an informal discussion with Alma Whitten and Luke Church: “real money trading” in online games. We often see problems because the computational model provided by the computer system doesn’t match the real social structure (e.g. “friends” in social networking software). Is the RMT problem a similar mismatch? Probably not, because money was already computational. Our society has many centuries of experience of reducing complex notions of the “value” of a thing to a number – money. Once you’ve done that, storing the number in a computer isn’t that big a step.
He plots graphs of how economic activity returns to normal after a terrorist attack such as the London Underground bombing. You see exponential decay, with a half-life of about 45-90 days.
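The recovery curve described is ordinary exponential decay; a minimal sketch, with the half-life picked from the middle of the quoted 45–90 day range purely for illustration:

```python
# Exponential decay back to baseline after an attack: the remaining
# disruption halves every `half_life_days`. The 60-day half-life is
# an illustrative value from the quoted 45-90 day range.
def disruption(t_days, half_life_days=60.0, initial=1.0):
    """Remaining economic disruption t days after the event."""
    return initial * 0.5 ** (t_days / half_life_days)

# Half the disruption remains after one half-life, a quarter after two.
assert abs(disruption(60) - 0.5) < 1e-9
assert abs(disruption(120) - 0.25) < 1e-9
```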
It is widely thought that people panic during emergencies. But analysis of (e.g.) photos of the evacuation during 9/11 shows people evacuating the building quickly, but without panic (not trampling each other, etc).
Some people are building a virtual world to study how people behave in a disaster. This was said to provide as much realism as you could want, comparable to Milgram’s experiments on obedience.
I asked about ecological validity: do people behave in the same way in computer games as they do in real life? The problem is not that the graphics are slightly unrealistic, but that (for example) first-person shooters have trained us to have very different expectations from computer games than we do from real life. Answer: the computer simulation is useful for answering such questions as: which fire exit will you try to use?
An experiment. Subjects are asked personal questions, either in order of increasing sensitivity, or in order of decreasing sensitivity. Which gets the best response?
He has used linguistic analysis to predict whether a Twitter message was a public tweet or a secret tweet. This works quite well.
In another study, the usage of Facebook features is correlated with how much the user (say they) trust Facebook, but not with how much they say they trust other users.
She is a law professor. To quote Brandeis, no-one wants to be entirely alone. Cross-culturally, happiness correlates with social contact. (That is: privacy is not about preventing communication)
There is a pattern in U.S. privacy cases. If you agree (for example) to take a drug test in the specific instance, at the time, that tends to be taken by the courts as valid consent. On the other hand, agreeing long beforehand that you can be tested is looked on less favourably by the courts. The legal precedent is aligned with what we would expect from psychology: if people are asked in advance about a hypothetical event, they’ll hope it won’t happen to them.
He has carried out a study of the use of social networking software by students in the UK and in Japan.
In both countries, it is seen as unacceptable to disclose private information about someone that they wouldn’t disclose themselves. However, in Japan there is more of a view of “victim responsibility” – it’s your fault for choosing such untrustworthy friends.