Privacy vs. Security: Who Draws the Line?

Derek Smith wants to know more about you. Why? He wants to protect you. As chairman and CEO of ChoicePoint, an Alpharetta, Georgia-based company that specializes in providing identification and credential verification to business and government clients, Smith hopes to create a safer and more secure society through the responsible use of information. Smith and Emory University Goizueta Business School faculty members Benn Konsynski, professor of decision & information analysis, and Diana Robertson, professor of organization and management, discuss the natural tension between an individual's right to privacy and society's need to know more about its members in order to mitigate risk.

 

"The terrorist attacks, Internet predators, identity theft...  All around us, events are taking place that are changing lives," Smith recently told a Goizueta Business School MBA class at Emory University.  "Many people have historically thought we -- as individuals, as a community or as a society -- were immune to the changing risk dynamic we live in today.  They're wrong."

 

Smith believes the world is becoming a riskier place in which to work, to live and to do business. He proposes three reasons: the speed of human mobility, the speed of communication, and the ability of one person or a small group of people to inflict economic, physical or political harm on a catastrophic scale. "Think about it for a minute. We live in a world where anyone can be any place with a tremendous communications capability, the ability to move financial resources and the ability to create huge devastation, destruction or emotional havoc. We've never lived in a world where that was possible before," Smith said.

 

If the world's a riskier place to live, work and do business, what does Smith believe is at the center of the new risk dynamic?  "People.  What's the most powerful tool to mitigate risk?  Information about people," he said.

 

"I believe strongly in an individual's right to privacy, but I do not believe in an individual's right to anonymity.  We cannot maintain a democratic society if we don't use information technology to help us extend rights and privileges inside society in a fair and legitimate way," Smith added.

 

Therein lies the rub: as technology allows for a vast new store of information, it also opens society to the possibility that the data will be used inappropriately and irresponsibly, thereby creating new risks. "It is the same information technology that's at the heart of mitigating risk," Smith said.

 

Smith recognizes that the line between uses of data that invade privacy and uses of data that protect society is nebulous, and needs to be debated and decided upon. It's a touchy subject, and one that Smith and companies like ChoicePoint grapple with every day. "As a company, our goal is not to tell you where that line is drawn, but to engage in a dialogue that causes society to have to deal with where that line is or might be," he said. "Society has to create standards for prudent behavior and it has to use information responsibly and appropriately to mitigate risk. Ultimately, there will be significant consequences based on how we manage those risks."

 

Konsynski asked Smith about the anxiety created by having one's information stored and shared:  "How do you build my confidence that the societal engine won't be diverted, hijacked, or misused?"

 

Smith's answer: Make sure the data repositories are privately owned, not governmental organizations. "If the government owned them and controlled the information, it would be different. It would be very difficult for us, as a society, to do anything if we saw a violation or abuse. But if ChoicePoint violates the trust of the consumer population, then there are legal and economic remedies. Private industry is accountable in a much different way than is the government."

 

True, Robertson observed, but having data repositories be privately held poses an interesting question. "Do we have more trust in government or in big business?" she asked. "It appears that we have more faith in business -- even faith in business to solve some of society's problems. Which is interesting, as business is not in business to solve societal problems."

 

No question, ChoicePoint is in business to make money, and its growth performance in an otherwise down economy has not disappointed Wall Street. But, Smith contends, the company also has a social responsibility. He points out that if ChoicePoint uses data inappropriately, its business will most certainly suffer. The company invests in safeguards to make the inappropriate use of data very difficult. "Is it perfect? No. Can people do bad things? Yes. There are frailties of human behavior and of control systems, but to extend that and say, 'Therefore data warehousing, by its nature, will be or could be used irresponsibly'... There's a tradeoff now. Which is worse? The issues that we face in society when dealing with terrorists, rapes, kidnappings and murders, or whether the data will be potentially misused by a very limited number of people?"

 

Smith suggests that establishing strict laws and adapting processes will help minimize the chances that the bad guys will get your data. But Robertson suggests the question is more than just who can access your data; it's how much of your data should be stored in the first place. "It's a question of what information is available and what information companies have. At what point do we say, 'That's too much information for them to have,' and put on the moral brakes?"

 

Many people throw on the brakes when it comes to DNA. "There are some very scary scenarios," said Robertson, such as being denied insurance based on DNA tests, or being denied a job because those tests predict you could get cancer or Alzheimer's at an early age.

 

Smith contends that there are even scarier scenarios if we don't use the information. "If you learn the power of DNA, you'll learn that it's an incredibly powerful tool that can make us a safer society. If you're scared of that, the only way we're going to move past that is to have debates and discussions by lots of people," he said.

 

Although social norms dictate that genetic markers cannot be used to discriminate, no law actually says so. Smith noted that an insurance company might even be able to get your DNA information by being creative. A future sales pitch might read, "Give us your DNA information and we'll give you a twenty percent reduction in insurance premiums if we accept you."

 

Probabilistic analysis, like looking at genetic markers and discovering that a person might develop Alzheimer's, is an ethical dilemma. "In some places, probability is real. Take credit: you get approved based on probability analysis. You are granted or denied credit basically because probability statistics say this is the likelihood you will or will not pay your bills," Smith said.
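The credit decision Smith describes can be sketched as a simple threshold rule: estimate a probability of repayment from an applicant's data, then approve only if it clears a cutoff. The sketch below is purely illustrative; the weights, field names and threshold are hypothetical, not any real lender's formula.

```python
def repayment_probability(income, debt, missed_payments):
    """Toy scoring model with hypothetical weights (not a real lender's formula)."""
    score = 0.5
    score += 0.000005 * income        # higher income raises the estimate
    score -= 0.000004 * debt          # higher debt lowers it
    score -= 0.1 * missed_payments    # each missed payment lowers it further
    return max(0.0, min(1.0, score))  # clamp to a valid probability

def approve(applicant, threshold=0.6):
    """Grant credit only if the estimated repayment probability clears the bar."""
    return repayment_probability(**applicant) >= threshold

applicant = {"income": 60000, "debt": 20000, "missed_payments": 1}
print(approve(applicant))  # probability 0.62 clears the 0.6 bar -> True
```

The ethical tension Smith and Robertson debate lives in the threshold and the inputs: the rule never knows whether this applicant will pay, only what a statistical model says about people who look like them.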

 

Although he sees the dilemma, Smith returns to his thesis:  "As technology gets better and better, there are (going to be) more and more risks.  But if we don't use those technologies to mitigate those risks, it's going to get worse," he said.  "Technology is moving up on an exponential curve.  Our (lack of) ability as humans to understand and assimilate that technology is creating an enormous gap. That gap creates angst."

 

Konsynski agrees. "Our capabilities are far beyond what they were in the past. Our ethical practice and our societal thought around how to leverage the 'world of should' isn't, in fact, as advanced or as mature as the 'world of can.' The world of can is advancing much more rapidly than our understanding of what we should do as a society," he said. "There's a natural tension between the world of can versus the world of should."

 

Our technological capabilities are outstripping our thought processes on how we employ, exercise and limit that technology.  "To pretend that there are no issues is naive," Konsynski said.  "But to preemptively make ill-thought decisions, as Congress often does, in advance of knowing what technical abilities we can have, is faulty as well."

 

Robertson takes it a step further.  "The fundamental issue is who gets to decide 'should.'  Is it the scientists?  The public policy makers?  The judges?  Do we have the right people deciding the 'should?'  I'm not convinced that we have," she said.

 

"It all comes down to individual rights versus societal needs and what is, overall, better for society," said Robertson.  Deciding who determines what is better for society is a question Smith, Konsynski and Robertson believe we need to start asking each other and ourselves.

All materials copyright of the Goizueta Business School of Emory University or the Wharton School of the University of Pennsylvania.