Important Yet Overlooked Parts of Information Security

By Luther Martin – ISSA member, Silicon Valley Chapter and Amy Vosters
Abstract
Information security has been described as being where computer science, economics, and psychology meet.1 But while
the new and exciting technologies that comprise the computer science aspect of the field often get lots of attention, the
contributions from the other two disciplines are often overlooked. An overview of some of the more relevant ideas from
these other fields can be useful to people working in the industry, something that we try to address in what follows.
Managing the technology and processes needed to protect sensitive information can prove very difficult because error-prone people maintain and use the technology and do not always follow the processes. At least part of the problem exists because people by nature do not behave rationally, and this is unavoidable because of the way our brains function. Understanding this can help us comprehend some of the decisions that both managers and users of information security systems make. It may even be able to give us some useful ideas that we can use on a day-to-day basis.

How we make decisions
Our understanding of how people make decisions has progressed through three general phases: In the first phase, researchers assumed that people were rational and made rational decisions. In the next phase, they accepted that people
are actually not rational and tried to find patterns in how
they behave irrationally. In the third phase, they used techniques from modern neuroscience to understand exactly why
our brains make these irrational decisions. In each of these
phases there have been important insights into how people
make decisions, some of which are particularly relevant to
information security professionals.
1 R. Anderson and T. Moore, “Information Security: Where Computer Science,
Economics and Psychology Meet,” Philosophical Transactions of the Royal Society A,
Vol. 367, No. 1898, pp. 2717-27, August 2009.
“Happiness is possible only to a rational man, the man who
desires nothing but rational goals, seeks nothing but rational values and finds his joy in nothing but rational actions.”
Ayn Rand, Atlas Shrugged
The definition of man as the "rational animal" is often attributed to Aristotle's Metaphysics. This phrase does not actually appear in that particular work, but his Nicomachean Ethics contains essentially the same idea, so attributing it to him is seemingly not bending the truth too much. The idea that rational thought is what makes humans different from other animals is therefore not new: it has been around for over 2,000 years, and it was the basis for the development of much of classical economics.
Economics
Microeconomics, in particular, tries to understand and predict how economic units like individuals, households, or firms make decisions. A key assumption that economists traditionally made in this type of analysis is that the economic units making the decisions are rational: they assumed that both people and businesses would always make rational decisions.
The models created with these assumptions are actually very
useful. They let us predict that when the price of a particular
good increases, then the amount of the good consumed tends
to decrease. They let us understand why the price of water
is relatively low even though water is absolutely essential to
sustain life while the price of diamonds is high even though
diamonds are a luxury that is not essential to sustain life.2
They let us understand why some information security technologies are subject to more stringent certification requirements than others.3 And they let us understand why laws like
HIPAA are necessary to motivate healthcare organizations to
protect the sensitive data that they harbor.4
There is even an annual workshop5 where researchers discuss new ways to apply microeconomic theory to the field of
2 L. Martin, “The Marginal Utility of Information,” BCS, http://www.bcs.org/content/
conWebDoc/10319.
3 L. Martin, “Crypto Snake Oil,” BCS, http://www.bcs.org/content/conWebDoc/6234.
4 R. Anderson and T. Moore, op. cit.
5 http://www.econinfosec.org.
information security, both to increase our understanding of existing information security challenges and to help propose new solutions to existing problems. Microeconomic theory has definitely proven to be a very useful tool that can both help us understand many real-world problems and suggest good solutions to them.
Microeconomic theory has also allowed researchers to create very sophisticated models that can explain various types
of behavior that might otherwise seem rather odd. Steven D.
Levitt and Stephen J. Dubner used this framework in their
book Freakonomics6 and its sequels to successfully explain
some fascinating, counter-intuitive things like why most
drug dealers actually earn less than the minimum wage and
why real estate agents really do not look out for the interests
of the sellers who are paying them. Freakonomics is a good
example of how classical microeconomic theory is general
enough and flexible enough to explain a wide range of behaviors that we observe.
But while applying microeconomic theory can explain much of the behavior that we observe in the real world, it may need
to resort to some very complicated arguments to do this. A
much simpler approach is to discard the assumption that
people behave rationally. Once we do that, many things that
once seemed complicated become much easier to understand.
An example of this is how people play the “ultimatum game,”7
a two-player game that psychologists have used to study certain aspects of human behavior.
The ultimatum game
The ultimatum game between the two players, Alice and Bob,
works like this: Alice is given an amount of money, say $100,
and proposes a division of the money between her and Bob.
6 http://www.freakonomics.com/.
7 D. Kahneman, J. Knetsch, and R. Thaler, "Fairness and the Assumptions of Economics," The Journal of Business, Vol. 59, No. 4, Part 2, pp. S209-S224, 1986.
Alice might propose an even split with Alice and Bob each
getting $50, or she could offer a less equitable division. Bob
then gets to accept or reject Alice’s proposed division. If Bob
accepts it, each player gets the proposed amount. If Bob rejects it, both Alice and Bob get nothing.
If people are truly rational, we would expect Bob to accept
any division of the $100 that gives him anything at all, even
$1. After all, a rational Bob would reason that $1 is better than
the $0 that he would get if he rejects it. But people do not think that way. They will fairly consistently reject proposed divisions that give them less than roughly $20 to $30 of the initial $100, despite the fact that rejecting a low offer actually makes them worse off than accepting it would.
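To make the gap between rational and observed behavior concrete, here is a short simulation (our own toy sketch, not anything from the research cited here; the $25 threshold and the uniformly random offers are simplifying assumptions) comparing a perfectly rational Bob with one who rejects offers he considers unfair:

import random

def simulate(trials=100_000, pot=100, threshold=25):
    # Compare a rational responder, who accepts any positive offer, with a
    # threshold responder, who rejects offers below `threshold`.
    rational_total = 0
    threshold_total = 0
    for _ in range(trials):
        offer = random.randint(1, pot - 1)  # Alice's offer (assumed uniform)
        rational_total += offer             # rational Bob accepts anything
        if offer >= threshold:              # threshold Bob punishes low offers
            threshold_total += offer
    print(f"Rational Bob averages  ${rational_total / trials:.2f} per game")
    print(f"Threshold Bob averages ${threshold_total / trials:.2f} per game")

simulate()

Running this shows the threshold strategy consistently earning less per game, which is exactly the sense in which rejecting low offers leaves Bob financially worse off.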
It is possible to create models that can explain how each player in the ultimatum game is actually making a rational decision, even in the cases where Alice makes a low offer or Bob
rejects a low offer. Perhaps Alice estimates that Bob’s chance
of accepting a low offer is high enough to justify the risk of
making one. Or if Alice offers Bob only $10, he may decide to
forgo any financial reward at all because he feels that by punishing Alice for making an unfair offer he is actually making
society better off by discouraging Alice from other possible
greedy behavior in the future. If this is the case, he might be
willing to forgo a gain of $10 because he feels that when he
does this he is creating more than $10 of benefit to society as
a whole.
But while this theory might explain the behavior of Alice and
Bob in this particular case, it does not generalize well to other
situations. Because of this, we would have to find a different ad hoc explanation for the apparently irrational behavior in every new situation that we come across. So although it may be possible to explain apparently
irrational behavior in ways that make it seem perfectly rational, if we try to apply this approach to help us predict what
will happen in new situations, things get messy.
We can find plausible reasons for why Bob might reject a $10
offer in the ultimatum game after we see him actually do this,
for example, but it is hard to predict in advance that he will
probably reject offers that are less than $20 to $30 instead of
offers that are in a different range. It might be perfectly reasonable to expect Bob to reject any offer that is less than $40,
for example. But there is a more elegant approach that avoids
this sort of complication.
“Wouldn’t economics make more sense if it were based on
how people actually behave instead of how they should behave?”
Dan Ariely, Predictably Irrational
A simpler explanation that applies equally well to the ultimatum game as well as to many real-world situations is that
people are not really rational. They might behave rationally
sometimes, but in many situations they do not, and once we
accept that this is true, many human behaviors become much
less mysterious.
Behavioral economics
Trying to understand exactly how irrational we are and when
we act irrationally has become an important part of the field
of behavioral economics. An excellent overview of many interesting aspects of behavioral economics that is suitable for
a general reader is Dan Ariely’s book Predictably Irrational.8
If you read this book, it is hard not to get the impression that humans are really not as smart as we would like to think we are: it is remarkably easy to take advantage of our irrational thought processes to steer us toward certain decisions without our having any idea that we are being led to make them.
Understanding that people are not always rational may be
particularly important for information security professionals. The psychological theory of personality types9 tries to
categorize people into a relatively small number of categories
based on how they tend to behave. The most popular personality typing scheme is the Myers-Briggs Type Indicator
(MBTI),10 which categorizes people into one of 16 different
types based on whether they tend to prefer the attitude of introversion (I) or extraversion (E), the psychological function
of intuition (N) or sensing (S), the function of thinking (T) or
feeling (F), and relating to the outside world through judging
(J) or perception (P).
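Since each of the four preference pairs is independent, the sixteen types are simply all combinations of the four letters. A tiny sketch (ours, purely to make the combinatorics explicit) generates them:

from itertools import product

# The four MBTI dichotomies described above; 2 x 2 x 2 x 2 = 16 types.
dichotomies = [("I", "E"), ("N", "S"), ("T", "F"), ("J", "P")]
types = ["".join(combo) for combo in product(*dichotomies)]
print(len(types), types[:4])  # 16 ['INTJ', 'INTP', 'INFJ', 'INFP']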
Each of these terms has specialized meanings to psychologists who study personality types, and these meanings may
not correspond to how non-specialists understand these
words. And because they just indicate a preference or tendency towards a particular approach, that definitely does not
mean that a person with a particular preference always acts
that way.
Research has suggested11 that the very rare MBTI “INTJ”
personality type actually occurs in information security professionals often enough to dramatically affect the generally
accepted way of thinking within the industry. It is not clear
how accurate the MBTI is,12 but it definitely serves a useful
purpose in regard to the information security industry, and
that is to remind those of us working in the field that what
may be generally accepted as a useful way of looking at things
by us, may seem very strange to a significant majority of the
population—perhaps almost all of them.
According to the Myers & Briggs Foundation,13 the organization that looks after the theory and practice of the MBTI,
people with the INTJ type tend to think strategically and to
be very decisive, hardworking, and determined. These are all
characteristics that are good to have, but individuals with the
8 http://danariely.com/.
9 C. Jung, Psychological Types, Princeton, New Jersey: Princeton University Press, 1971.
10 http://www.myersbriggs.org/my-mbti-personality-type/mbti-basics/.
11 C. Gates and T. Whalen, "Profiling the Defenders," NSPW '04, Proceedings of the 2004 Workshop on New Security Paradigms, pp. 107-114, 2004.
12 D. Pittenger, "Cautionary Comments Regarding the Myers-Briggs Type Indicator," Consulting Psychology Journal: Practice and Research, Vol. 57, No. 3, pp. 210-221, 2005.
13 http://www.myersbriggs.org/my-mbti-personality-type/mbti-basics/.
INTJ type may obsess over small details (like whether or not
Aristotle really used the phrase “rational animal” to describe
man). They also tend to assume that other people think rationally and can easily be convinced by clear, logical arguments.
But because it is clear that people do not actually behave in
this logical way, if the rational mind-set of the INTJ type that
seems to be common in information security professionals is
used as the basis for designing policies, procedures, or user
interfaces, we should expect friction from everyone else. Many people might even find such things to be totally unusable.
“The genetic you and the neural you aren’t alternatives to
the conscious you. They are its foundations.”
Paul Bloom, “The War on Reason”14
Not long after the research of behavioral economists made
it clear that people did not behave rationally, neuroscience
advanced to the point where it was actually possible to get
a fairly good idea of how our brains operate when we make
decisions. Techniques like functional magnetic resonance
imaging (fMRI) let researchers create images of what parts of
the brain are involved in making various types of decisions.
In one of the more relevant parts of this work, researchers
have learned some very interesting things about how our
brains understand and estimate risk.
Understanding risk
Unless you work in finance, where risk is the variance in the
returns of investments, risk is usually understood to be a
measure of the average or expected loss associated with an
activity that has an uncertain outcome. So if we flip a fair
coin and lose $1 if the coin comes up “heads” (H) and lose $3
if the coin comes up “tails” (T), then the risk associated with
flipping this coin is $2 which we calculate as:
Risk = (probability of H)(loss from H) + (probability of T)(loss from T)
     = (0.5)($1) + (0.5)($3)
     = $0.50 + $1.50
     = $2.00
A similar calculation gives us the annualized loss expectancy
(ALE) that is widely used in the information security industry, where the ALE model calculates the risk over a one-year
period instead of looking at just the flip of a single coin.
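For readers who want to see the arithmetic, here is a minimal sketch of that calculation; the standard formulation multiplies an annual rate of occurrence (ARO) by a single loss expectancy (SLE), and the incident names and numbers below are invented purely for illustration:

def annualized_loss_expectancy(annual_rate, single_loss):
    # ALE = ARO x SLE: expected incidents per year times expected loss per incident.
    return annual_rate * single_loss

# Illustrative numbers only -- in practice, as discussed below, neither the
# rates nor the losses are known with any real accuracy.
scenarios = {
    "laptop theft": (4.0, 5_000),       # ~4 incidents per year, $5,000 each
    "major breach": (0.05, 2_000_000),  # ~1 every 20 years, $2 million each
}
for name, (aro, sle) in scenarios.items():
    print(f"{name}: ALE = ${annualized_loss_expectancy(aro, sle):,.0f}")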
But it is easy to see the problems with using the ALE model in
practice. In almost all cases that are of interest to information
security professionals, we do not know accurate values for the
probabilities of security-related incidents happening, and we
also do not have accurate estimates for the loss that organizations will incur if these incidents happen. So does the fact
that we do not have these values matter? Should we look for
an entirely different model to help us quantify risk? Or can we actually find a way to estimate the risks that the ALE model tries to quantify?
14 http://www.theatlantic.com/magazine/archive/2014/03/the-war-on-reason/357561/.
Research has identified the part of the brain that estimates
probabilities. It has also identified the part of the brain that
estimates how serious the loss from a particular event will
be. And it has shown that our brains combine the outputs
of these regions to estimate risks. This operation seems very
similar to an expected value calculation.15
This does not prove that our brains use an expected value to
estimate risks, of course. These observations are also consistent with our brains using a complicated function of an estimated probability and a complicated function of an estimated
loss to do a complicated calculation that ultimately produces
an estimated risk. So if we have an event that happens with
probability P and causes a loss of L when it happens, instead
of calculating the risk R from this event as
R = P × L
our brains may actually be calculating something that we can
more accurately represent as R' where
R’ = ξ(ς(P), ϑ(L))
in which all of the intentionally mysterious Greek letters indicate complicated functions that we may never really fully
understand.
People estimate probabilities that are close to 0.5 fairly well,
but we consistently overestimate very low probabilities and
consistently underestimate very high probabilities.16 If a probability is really 0.0001, we might estimate it to be 0.1, which
is off by a factor of 1,000. If a probability is really 0.9999, we
might estimate it to be 0.9. And if we use these inaccurate estimates for probabilities in a calculation that is estimating a risk, we should expect our risk estimates to be fairly inaccurate in lots of interesting and useful cases.
15 B. Knutson, J. Taylor, M. Kaufman, R. Peterson, and G. Glover, "Distributed Neural Representation of Expected Value," The Journal of Neuroscience, Vol. 25, No. 19, pp. 4806-4812, May 2005.
16 F. Attneave, "Psychological Probability as a Function of Experienced Frequency," Journal of Experimental Psychology, Vol. 46, No. 2, pp. 81-86, August 1953.
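To get a feel for how much this miscalibration can matter, here is a toy calculation. The Prelec-style weighting function is our own stand-in, not something taken from the research cited above, but it reproduces the overweight-low, underweight-high pattern that the research describes:

import math

def distorted(p, alpha=0.5):
    # A Prelec-style weighting function: with alpha < 1 it overweights small
    # probabilities and underweights large ones.
    return math.exp(-((-math.log(p)) ** alpha))

loss = 1_000_000
for p in (0.0001, 0.5, 0.9999):
    true_risk = p * loss             # R = P x L
    felt_risk = distorted(p) * loss  # what a miscalibrated estimate yields
    print(f"P={p}: true risk ${true_risk:,.0f}, perceived risk ${felt_risk:,.0f}")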
But what researchers have learned about how our brains work may also indicate that we have no alternative but to estimate risks in a way that is not too different from what the ALE model tells us to do. So even if there is a better alternative, we may be inclined not to use it in practice because it simply does not make sense to us.
Aside from showing that there may be an unavoidable biological basis for the irrational decisions that we make, it is not clear whether research in neuroeconomics has other, more practical insights to offer the field of information security. But the implications of the fact
that we should actually expect people to behave irrationally can be significant, and taking a closer look at some of the
ways that this irrationality affects the way that we act can be
very useful.
People vs. organizations
People can be irrational. They may actually have no choice
in the matter. But organizations should probably ensure that
they make decisions that are calmly rational. If a careful and
accurate analysis shows that deploying a new information security technology will save your organization several million
dollars per year, the fact that you do not like the vendor that
makes it or the fact that you had a particularly bad experience with a much earlier version of the technology, in theory,
should not affect the decision to deploy the technology. But in
practice, it often does. Even more subtle biases can dramatically affect how we make decisions.
“We are pawns in a game whose forces we largely fail to
comprehend.”
Dan Ariely, Predictably Irrational
In one study,17 simply showing subjects the Apple logo was
enough to make them perform significantly better on tests of
creativity, and showing them the Disney logo was enough to
make them perform better on tests of honesty. Another study
showed that people behave more honestly under the watchful
gaze of an image of eyes.18 It is surprisingly easy to dramatically affect people’s behavior through very subtle differences
in what they are exposed to. If you manage to tap into one of their unconscious biases, they may be unable to keep their judgment from being affected. And these sorts of biases are built into the
way that we think and probably unconsciously affect all of the
decisions that we make.
Neuromarketing
To take advantage of this, the field of neuromarketing has emerged, essentially using the techniques of neuroeconomics to understand consumers' purchasing decisions. Consumers are generally very good at expressing purchasing preferences and at pinpointing what they like and dislike, but they are poor at explaining why they ultimately make the purchasing decisions that they do.19 Neuromarketing practitioners aim to understand
these built-in biases so that they can cleverly be taken advantage of and used to affect people’s decisions in a particular
and predetermined way. Because of this, the strategies devised with the help of neuromarketing may be particularly
insidious as they can be based on aspects of how our brains
operate that we cannot control.
Neuromarketing researchers can use fMRI to watch our
brains operate and can use what they learn to create situations where we may think that we have a choice, but we really do not. This new field of marketing targets advertising to
our subconscious, which we really have very little power over.
Neuromarketing tries to use emotional appeal to influence consumers, with messaging and visuals intended to be both subversively influential and very hard to resist. Because this research is based
on how our brains actually react to stimuli instead of how
test subjects describe how they think that they reacted, it can
be much more accurate than strategies developed using less
precise approaches.
Back in 1957, James Vicary performed an experiment in a Ft. Lee, New Jersey, movie theater that was showing the film Picnic.20 Vicary claimed that by flashing messages saying "Drink Coca-Cola" and "Hungry? Eat Popcorn," he was able to increase the theater's concession-stand sales of Coca-Cola by 18.1 percent and of popcorn by 57.8 percent.
It turned out that Vicary actually faked his data, and more careful attempts to recreate his results failed miserably. But that did not stop the myth of the effectiveness of subliminal advertising from continuing unchecked to this day, and it did not stop the Federal Communications Commission from taking a clear stand against subliminal advertising in 1974:
"We believe that use of subliminal perception is inconsistent with the obligations of a licensee, and therefore we take this occasion to make clear that broadcasters employing such techniques are contrary to the public interest. Whether effective or not, such broadcasts clearly are intended to be deceptive."
Federal Communications Commission Document 74-78.21
17 G. Fitzsimons, T. Chartrand and G. Fitzsimons, "Automatic Effects of Brand Exposure: How Apple Makes You 'Think Different,'" Journal of Consumer Research, Vol. 35, No. 1, pp. 21-35, June 2008.
18 M. Bateson, D. Nettle and G. Roberts, "Cues of Being Watched Enhance Cooperation in a Real-World Setting," Biology Letters, Vol. 2, No. 3, pp. 412-414, September 2006.
19 S. Genco, A. Pohlmann and P. Steidl, Neuromarketing for Dummies, Indianapolis, Indiana: For Dummies Press, 2013.
20 S. Rogers, "How a Publicity Blitz Created the Myth of Subliminal Advertising," Public Relations Quarterly, Vol. 37, No. 4, pp. 12-17, Winter 1992-93.
From the point of view of today, when we are able to measure the responses of test subjects' brains and tailor marketing messages in a way that has a significant chance of actually affecting the decisions that we make, it looks like Vicary might actually have been on the right track. Perhaps he could have actually succeeded in showing real effects from subliminal advertising if he had just chosen the right images to show to moviegoers.
21 Federal Communications Commission Document FCC 74-78, "Broadcast of Information by Means of 'Subliminal Perception' Techniques," January 24, 1974.
So it seems likely that biases will always affect our decision-making process, and some of them may come from subtle effects that we do not realize are influencing it. But if we structure our decision-making process in a way that minimizes how these biases can affect our decisions, we may be able to keep our unavoidable biases from leading us to make bad decisions.
Doing a careful and thorough analysis of complicated situations is very hard to do well. It is easy to ignore important
factors or to have inaccurate data affect the accuracy and reliability of the analysis. Somewhat surprisingly, we are actually
fairly good at making good decisions in complicated situations. It is something that our brains seem to do, even if we
cannot quite understand how or why they are doing so.
“We believe that we are always better off gathering as much
information as possible and spending as much time as possible in deliberation. We really only trust conscious decision making. But there are moments, particularly in times
of stress, when haste does not make waste, when our snap
judgments and first impressions can offer a much better
means of making sense of the world. The first task of Blink is
to convince you of a simple fact: decisions made very quickly
can be every bit as good as decisions made cautiously and
deliberately.”
Malcolm Gladwell, Blink
Our brains seem to handle simple decisions and complicated
decisions very differently. This is discussed at length in Malcolm Gladwell’s book Blink.22 The title of this book is meant
to suggest that we make many decisions very quickly. Much
of the content of the book is a number of case studies where
this is argued to be true. And Gladwell notes the rather interesting fact that many, if not most, of these quick decisions are
actually surprisingly accurate. The bottom line is that while
we are able to carefully analyze relatively simple decisions
and can then use our analysis to help us make a good choice,
we cannot do the same level of analysis in situations with
even a modest level of complexity.
But it also may not matter that much, because our brains seem
to be quite capable of making those decisions for us. They actually do this very quickly and they do this surprisingly well.
It is even possible to use fMRI to watch this happen, so that
there are actually good reasons to believe that the situations
discussed in Blink are not just a collection of special cases that
do not accurately represent how we make decisions.
When we make these quick decisions, we do not have a good
idea of exactly what factors our brains are using to make these
decisions, so there is always the possibility of unexpected biases creeping in. And this can happen surprisingly easily.
22 http://gladwell.com/blink/.
“Trust but verify.”
Ronald Reagan
Although we are able to analyze complicated problems both
fairly quickly and fairly accurately, the decisions that we
make are based on our previous experiences. This may be
both good and bad. If we are making decisions about people,
that might not be a problem. Whether we consciously try to
do this or not, most people learn how to tell when a person
is lying, for example. When we do this, we are probably using everything that we have experienced up to that point to
help us make that particular decision. Because human nature
probably does not change much over a typical person’s lifetime, what we might have learned about detecting lies a few
decades ago is arguably still true today.
Understand your biases
But when it comes to many of the decisions that information
security professionals face on a routine basis, it may be the
case that either technology or the regulatory environment
has changed enough to make what we learned in the past not
very useful today. Our brains may not be discounting our old
experiences that may no longer be relevant, and this can end
up giving us inaccurate judgments today.
If you suffered through using some of the primitive dot-com-era encryption technologies, you may have developed a strong bias against using encryption. And even though today's encryption products are much simpler and easier to use than their dot-com-era ancestors, that unconscious bias against the technology may persist even though it is no longer justified.
“Man is not a rational animal, he is a rationalizing animal.”
Robert Heinlein, Assignment in Eternity
Because we have a natural tendency to make decisions based
on things that we really do not understand well and then to
try to explain our decisions in terms of things that we do understand, we often end up trying to justify our decisions after
they are made instead of being more careful when we make
them. Because this seems to be an artifact of the way our
brains work, it is probably something that we cannot avoid.23
But if we realize and accept that this is happening, we can try
to understand when it may be affecting our decisions and accept that our decisions may need to be critically evaluated to
ensure that their accuracy or validity was not unintentionally
reduced by these sorts of effects.
So it is relatively easy to account for the fact that we do not
always make rational decisions. Once we understand this,
it is easy to find ways to double-check decisions that might
be affected by our irrational thought processes. But there are
also ways in which we may be able to use what behavioral
scientists have learned to make some quick and easy changes
to day-to-day operations that might be able to significantly
improve the effectiveness of information security programs.
23 P. Bloom, Just Babies: The Origins of Good and Evil, New York, New York: Crown Publishers, 2013.
First, accept that people are not rational and that they are
not going to behave rationally, no matter how well-considered your security policies and procedures are. They probably
do not think like information security professionals do, so the fact that something makes sense to absolutely everyone in your information security office is no guarantee that it will make sense to more typical employees. And any
information security consultants that you might have working for you are probably affected by the same biases, so they
may not be able to act as a sanity check for you.
The benefits of experience
Next, if you have lots of experience in the information security industry, you are probably capable of unconsciously making very complicated decisions both very quickly and very
accurately. This is an artifact of how our brains work, and one
of the most important benefits of having lots of experience in
a particular industry, with various technologies or from particular jobs is the fact that your brain is able to integrate all of
your relevant experiences in an incomprehensible algorithm
for decision-making that will be accurate much more often
than we might expect.
Having relevant experience instead of general experience
seems to be more useful for helping us do this well. A study by
the US Navy24 even suggested that the best predictor of performance in a particular job was the amount of experience in
the actual job being evaluated.
This seems to contradict the popular wisdom that a competent
manager can manage any organization well. And it seems to
suggest that candidates with particular industry experience
are more likely to do well in some jobs than people with lots
of general experience. Jobs that require lots of judgment calls
instead of more routine and well-defined tasks may benefit
from a person with particular industry experience. And that
may actually include a significant fraction of the jobs in the
information security industry.
When you are using your brain’s ability to quickly make complicated decisions, you may be unable to explain to others
exactly why you think a particular decision is the right one.
That is also an artifact of how our brains work, and it may just
be part of the price we pay for having the ability to integrate
lots of data to make very complicated decisions. So if you have
many years of experience in the information security industry and you find yourself disagreeing with someone else who
also has many years of experience in the industry, it may be difficult to understand why the two of you arrived at different conclusions, because neither of you may be able to easily articulate exactly why you feel that something is true.
It may be easy to take advantage of the fact that subtle things
like pictures of eyes may encourage the same sort of behavior
that we would expect from people if they were actually being watched by another person. So while security awareness
posters may or may not actually affect people’s behavior in a
meaningful way, they may also have the undesirable effect of
making a workplace seem hostile or unfriendly to employees.
A poster that shows a friendly pair of eyes without any dire
security warnings at all may actually be able to affect people’s
behavior in a positive way without the negative side-effects
that security awareness posters may cause. There may be other ways to subtly affect people's behavior in the workplace by using carefully chosen images, but it is not clear how much more of this you can do without having to deal with tricky ethical issues.
“It is possible to write endlessly on elliptic curves. (This is
not a threat.)”
Serge Lang, Elliptic Curves: Diophantine Analysis
Much like we see in other fields that information security
borrows from, it seems possible to find a virtually endless
number of applications of neuromarketing, neuroeconomics,
and behavioral economics that can be of immediate use to
people working in the information security industry. There
are probably not enough people willing to pay for an entire
book on such a narrowly-focused area, so it seems unlikely
that we will see best-selling books like Dan Ariely's Predictably Irrational or Malcolm Gladwell's Blink that focus only on
such applications.
But it is easy enough for people working in information security to get a steady stream of new ideas of ways to do their
jobs better if they spend a few minutes now and then checking on what researchers in neuromarketing and related fields
have recently learned. There is a bit of a learning curve to
overcome if you try to do this because of the specialized terminology used in the papers that describe this research, but
simply reading the abstracts of these papers is often enough to give you a good idea of whether it will be worth the time and effort to read and understand the paper itself.
In many cases, what you can learn from the abstract alone is enough to create the "Aha!" feeling that you get when you see a particularly clever way to solve a problem you are facing and cannot wait to apply the insight to one of your practical problems. Even if you only get that reaction a few times per year, it will almost certainly justify the time and effort that you spend on doing it.
About the authors
Luther Martin is the Chief Security Architect
for Voltage Security. You can find his daily
thoughts on information security at http://superconductor.voltage.com and can reach him
at [email protected].
Amy Vosters is the Marketing Manager at
SOASTA Inc., a SaaS company based in
Mountain View, CA, that specializes in mobile- and web-performance test automation
and real user monitoring solutions. She may
be reached at [email protected].
24 M. St. John, J. Callan, S. Proctor and S. Holste, US Navy Technical Report 1821, "Tactical Decision-Making under Uncertainty: Experiments I and II," April 2000.