
All watched over by machines of loving grace

I recently wrote about concerns over the role of big data that arose from Facebook's research effort, in which researchers sought to determine whether users' emotional states could be manipulated by what appeared in their Facebook activity streams (see The fear of big data is growing). But directed efforts to shape our emotional state or to control our behavior are not the only ways that the application of big data analysis might prove questionable.

Peter Cappelli has posted a piece at the Harvard Business Review that poses some important questions about applying big data and predictive analytics to the future behavior and performance of individuals. At its core is the question of what businesses might do when confronted with the possibility of peering into the statistical future, and choosing whom to hire, for example, based on big-data predictions.

Peter Cappelli, We Can’t Always Control What Makes Us Successful

Many of the attributes that predict good outcomes are not within our control.  Some are things we were born with, at least in part, like IQ and personality or where and how we were raised.  It is possible that those attributes prevent you from getting a job, of course, but may also prevent you from advancing in a company, put you in the front of the queue for layoffs, and shape a host of other outcomes.

So what if those predictions are right?

First is the question of fairness. There is an interesting parallel with the court system, where predictions of a defendant's risk of committing future crimes are used in many states to shape the sentence they will be given. Many of the factors that feed into that risk assessment, things like family background, are beyond the defendant's control. And there has been pushback: is it fair to use factors that individuals could not control in determining their punishment?

Likening the assessment of an employee's fate in a corporate downsizing to the judicial review of criminals may seem far-fetched, but the parallels are obvious. In both cases the power lies in the hands of the courts or of management, and the employees and the defendants are powerless. One attribute of that powerlessness is that judges and managers have access to statistical information and its analysis, while defendants and employees, in general, do not.

Cappelli argues that psychologists, who have been grappling with the ethics of human assessment in the enterprise for decades, are now being pushed aside by data scientists and software companies offering new ways to read the crystal ball. Instead of personality or IQ tests, machines are crunching big data, mined from hundreds or thousands of companies, to reveal who is most likely to be a good call center worker.
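To make that concrete, here is a minimal sketch of the kind of predictive model being described, fit on synthetic data. The feature names, labels, and weights are all invented for illustration; none of this reflects Evolv's actual methodology.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-in for historical records pooled across many employers.
# Columns might be commute distance, number of social networks, and a
# personality-test score; all hypothetical.
X = rng.normal(size=(10_000, 3))

# Synthetic label: did the worker stay past six months? (invented relationship)
y = (X @ np.array([-0.8, 0.3, 0.5]) + rng.normal(size=10_000)) > 0

# "Read the crystal ball": fit a model, then score a new applicant.
model = LogisticRegression().fit(X, y)
applicant = rng.normal(size=(1, 3))
print(model.predict_proba(applicant)[0, 1])  # probability of a good outcome
```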

Xerox is using software from Evolv that is based on the analysis of the testing and performance tracking of tens of thousands of call center workers:

Joseph Walker, Meet the New Boss: Big Data

By putting applicants through a battery of tests and then tracking their job performance, Evolv has developed a model for the ideal call-center worker. The data say that person lives near the job, has reliable transportation and uses one or more social networks, but not more than four. He or she tends not to be overly inquisitive or empathetic, but is creative.

Applicants for the job take a 30-minute test that screens them for personality traits and puts them through scenarios they might encounter on the job. Then the program spits out a score: red for low potential, yellow for medium potential or green for high potential. Xerox accepts some yellows if it thinks it can train them, but mostly hires greens.
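Walker's description amounts to a simple scoring-and-thresholding pipeline. The sketch below shows one way such a screen might work; every weight and cutoff is a hypothetical stand-in, not the scoring Evolv actually uses.

```python
# Hypothetical weights and thresholds, invented for illustration only.
def score_applicant(features: dict) -> float:
    """Combine weighted screening-test responses into a 0-1 suitability score."""
    weights = {
        "lives_near_job": 0.25,
        "reliable_transportation": 0.25,
        "uses_1_to_4_social_networks": 0.20,
        "creativity": 0.20,
        "low_inquisitiveness": 0.10,
    }
    return sum(w * features.get(k, 0.0) for k, w in weights.items())

def bucket(score: float) -> str:
    """Map a score onto the red/yellow/green bands described in the article."""
    if score >= 0.70:
        return "green"   # high potential: hire
    if score >= 0.40:
        return "yellow"  # medium potential: hire only if trainable
    return "red"         # low potential: screened out

applicant = {
    "lives_near_job": 1.0,
    "reliable_transportation": 1.0,
    "uses_1_to_4_social_networks": 1.0,
    "creativity": 0.5,
    "low_inquisitiveness": 0.0,
}
print(bucket(score_applicant(applicant)))  # -> "green" (score 0.80)
```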

The terminology (reds, yellows, greens) sounds more like the caste system of a dystopian science fiction novel than a contemporary business analytics tool, but it's not. This is what is going on in business today. And the reasons are simple, despite the ethical questions that accompany them. One driver for the rise of algorithmic HR is that people are bad at making hiring decisions: we have too many cognitive biases, and our capacity for weighing the many independent factors that bear on a candidate's suitability for a job is limited. So the logical decision, as we have seen at Xerox, is to hand over the choice of whom to hire and train to the machines.

The result is less turnover at Xerox, saving the company money, and better support for its customers. The only ones iced out are the 'reds': those individuals who might have wanted a job at the call center, but who will now have to find work where their curiosity is a plus rather than a black mark.

The counter to Cappelli's concern should be the government or the education sector, which, armed with big data and analytic tools of their own, could be guiding those 'reds', and everyone else, toward the jobs and careers that line up with their gifts and backgrounds.

And the broadest ethical questions, like what to do about those raised in single-parent homes in a world that might rate them a higher risk for all jobs, are beyond the scope of this analysis today. But as Cappelli points out, those questions need to be raised and answered by someone.

In a time when our institutions are in retreat, and the social contract between the worker and the business has been attenuated, you have to wonder who that someone is.