Big data could soon allow us to shape public policy, financial products, healthcare and education to suit our personal needs.

But it also introduces a new set of risks to society.

Governance and legal issues related to technology breakthroughs (liability, health and safety, data protection) are presenting significant risk management challenges for technology developers, policy-makers and insurers.

“In the last 15 years, we have witnessed an explosion in the amount of digital data available – from the Internet, social media, scientific equipment, smart phones, surveillance cameras, and many other sources – and in the computer technologies used to process it. ‘Big Data,’ as it is known, will undoubtedly deliver important scientific, technological, and medical advances. But Big Data also poses serious risks if it is misused or abused.”

Ernest Davis, Professor of Computer Science, Courant Institute of Mathematical Sciences, New York University

When people know that a data set is being used to make important decisions that will affect them, they have an incentive to tip the scales in their favour.

Privacy violations are on the rise because so much of the data now available contains personal information.

Increased reliance on new technologies to do business calls for stronger cybersecurity, as HR professionals are on guard against Internet fraud, identity theft and other tech-driven criminal actions.

According to PwC research, shared data is the fuel of the next industrial revolution, and you will need not only to manage customers’ behavior, but to prevent outsiders from gaining access to critical information. As PwC cybersecurity experts David Burg and Tom Archer put it, your company will most likely protect itself in the future “by monitoring activity across all its online systems, studying not just the moves of hackers but the actions of legitimate customers as well. Both types of visits, after all, are forms of repetitive human behavior, opposite sides of the same coin.”
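The behavioural-monitoring idea Burg and Archer describe can be sketched in miniature: build a baseline of each user’s routine actions from normal activity logs, then flag actions that fall outside that routine. The function names, data and threshold below are illustrative assumptions, not PwC’s actual method.

```python
from collections import Counter

def build_baseline(events):
    """events: list of (user, action) tuples taken from normal activity logs."""
    baseline = {}
    for user, action in events:
        baseline.setdefault(user, Counter())[action] += 1
    return baseline

def is_anomalous(baseline, user, action, min_seen=3):
    """Flag actions a user has rarely or never performed before.

    min_seen is an arbitrary illustrative threshold."""
    seen = baseline.get(user, Counter())[action]
    return seen < min_seen

# Hypothetical log of routine behaviour for one user.
normal = [("alice", "login"), ("alice", "login"), ("alice", "login"),
          ("alice", "view_report")]
baseline = build_baseline(normal)

print(is_anomalous(baseline, "alice", "login"))            # False: routine action
print(is_anomalous(baseline, "alice", "export_all_data"))  # True: never seen before
```

Real systems model far richer signals (timing, location, sequences of actions), but the principle is the same: legitimate and illegitimate visits are both repetitive human behaviour, so deviations from the repetition are what stand out.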

Internal Security & Electronic Monitoring

In their book, ‘Managing Human Resources,’ Luis R. Gómez-Mejía, David B. Balkin and Robert L. Cardy contend that genetic testing, high-tech imaging, brain scans as a selection tool, and DNA analysis may soon be available to aid in making employment decisions.

Firms’ decisions about how to harness the new information are loaded with ethical implications.

Apart from background checks, HR departments are increasingly involved in security: scanning employees’ eyes and fingerprints for positive identification, hiring guards to patrol facilities, identifying employees who might pose a threat of violence, and spotting potential sources of industrial espionage.

Although few would question that security checks are necessary, one concern from a human resource perspective is to ensure that applicants’ and employees’ rights are not violated and that due process is followed whenever suspected problems are identified.

Data security should involve HR policies that determine who has access to sensitive information, and monitoring systems to prevent abuses by managers and employees.

According to a study conducted by the computerised security-service firm Automatic Data Processing (ADP), more than 40% of résumés misrepresent education or employment history.

Electronic monitoring: Many companies are using sophisticated software that monitors when, how, and why workers are using the Internet. E-mail messages are now used as evidence in legal cases concerning discrimination, harassment, price fixing, and intellectual property disputes.
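At its simplest, such monitoring software scans message text against a watch-list and routes matches for human review. The sketch below assumes a plain keyword list; commercial products use far more sophisticated classifiers.

```python
import re

# Illustrative watch-list; real deployments would be far larger and
# tuned to the legal risks the firm cares about.
FLAG_TERMS = ["price fixing", "confidential"]

def flag_messages(messages):
    """Return the subset of messages containing any watch-list term."""
    pattern = re.compile("|".join(map(re.escape, FLAG_TERMS)), re.IGNORECASE)
    return [m for m in messages if pattern.search(m)]

msgs = ["Lunch at noon?",
        "Keep this Confidential until the deal closes."]
print(flag_messages(msgs))  # only the second message is flagged
```

Because flagged e-mail may later surface as legal evidence, the real design question is less the matching logic than the retention and review policy wrapped around it.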

Data Security: According to a New York Times report, unauthorised access to private data leading to widespread identity fraud is on the rise, as evidenced by hacking incidents originating in countries whose highly skilled hackers are subject to very little, if any, government control.

In recent years, enormous collections of confidential data have been stolen from commercial and government sites. Organisations such as the LexisNexis Group, ChoicePoint, Bank of America, the United States Air Force, the Pentagon, and even the FBI have experienced serious data breaches.

The Privacy Rights Clearinghouse, a consumer advocacy group in San Diego, counted over 80 major data breaches involving the personal information of more than 50 million people. In one case, CardSystems (a credit card processor) left the account information of more than 40 million cardholders exposed to fraud.

The recent WikiLeaks dump of hundreds of thousands of U.S. secrets and classified documents from the military and the State Department suggests that data security is not just a concern for specialised computer experts.


Finally, Big Data poses a challenge for accountability.

Someone who feels that he or she has been treated unfairly by an algorithm’s decision often has no way to appeal it, either because specific results cannot be interpreted or because the algorithm’s inner mechanics that inform a decision are not revealed, as data scientist Cathy O’Neil demonstrates in her recent book Weapons of Math Destruction.

The European Union recently adopted a measure guaranteeing people affected by algorithms a “right to an explanation”; but only time will tell how this will work in practice.

Adrian Monck of the World Economic Forum poses the question:

But are we willing to sacrifice our privacy?

Manuel A. Rivera Raba, CEO of Grupo Expansión and a member of the Global Future Council on Information and Entertainment, explains the possibilities and pitfalls of the new digital age in an article published by the World Economic Forum in collaboration with Project Syndicate:

“Machine learning is another trend. Your device is learning every day more about who you are and what is important to you. Your life is being downloaded. There is of course a huge risk to all of this…A massive legal document appears on your screen and you just click ‘accept’ because the legal language will put you to sleep. People want to know that the boundaries aren’t being crossed, but they don’t even know what the boundaries are.”

According to Scott Shackelford, Associate Professor of Business Law and Ethics, Indiana University, late last year, the Electronic Frontier Foundation, an online rights advocacy group, called for technology companies to “unite in defense of users,” securing their systems against intrusion by hackers as well as government surveillance. One of the U.N.’s leading champions of free expression, international law expert David Kaye, called for “the encryption of private communications to be made a standard.”

The U.N. General Assembly voted to confirm people’s “right to privacy in the digital age.” Passed in the wake of revelations about U.S. electronic spying around the globe, the document further endorsed the importance of protecting privacy and freedom of expression online. And the G-20, a group of nations with some of the world’s largest economies, similarly endorsed privacy, “including in the context of digital communications.”

To protect these rights, more firms are using the U.N.’s Guiding Principles and cybersecurity policies such as encrypting all communications, and incorporating cybersecurity as a fundamental ethical consideration in telecommunications, data storage, corporate social responsibility and enterprise risk management.

They are also using U.S. government recommendations, in the form of the National Institute of Standards and Technology (NIST) Cybersecurity Framework, to help determine how best to protect their data and that of their customers.

For information to be properly used, shared and managed, we need to establish data governance models that facilitate future data sharing and monitor the build-up of interconnectedness risk amid the emerging challenges of the sharing economy.