The prior post in this series offered some warnings from Jurassic Park about how letting machine learning run amok can cause unintended consequences. In this installment, we’ll go further into the process and talk about the challenges of interpreting the data and its usefulness.
The second most dangerous thing about machine learning systems is that too often you don’t know what they are telling you, and they’re not as smart as the marketers say.
Let’s start with a movie reference. In the first Harry Potter movie (or the book, if you still read books), for the first half of the story Ron Weasley disliked Hermione Granger, primarily because she was the student who crammed for the tests, who had the answer to every question in class, and who could only follow the rules. When it came to things like needing courage, thinking outside the box, or bending the rules, Hermione was not the person Ron and Harry wanted on their side. Here is the clip in case you don’t remember:
In many ways, computers and machine learning algorithms tend to be a bit like Hermione. Sometimes a computer can show itself to be “smart” in the way we would call “book smart,” while having absolutely no “street smarts” or “emotional intelligence.”
The rest of us can become a bit like Ron and Harry – annoyed at the lack of “smarts” from machine algorithms, even when they give answers that we should probably know. It’s easy to find examples of the data being right and of the data being wrong, and we can lose sight of understanding what the data actually tells us. The reality is that good statistical measures are not meant to be something people get emotionally invested in and defend as “right.” At the very least they help provide context, and at most they should be used to help inform a decision. This limited use, though, does not mean that we can ignore the results. As the famous British statistician George E. P. Box once said, “all models are wrong, but some are useful.”
We can already see the danger of over-trusting automation and machine learning. Take the example of the first self-driving car fatality, in Florida. It was a tragic mistake in which a very “smart” car drove right under a tractor trailer that was spanning the road, something the brilliant, successful technologist who owned the car would never have done in a million years. The car had saved him from a crash a few weeks earlier, and the model being “right” had instilled confidence. But trusting the car’s technology so much, the driver ignored warnings to put his hands on the steering wheel, and was using the feature on a stretch of road where he wasn’t supposed to. He hadn’t had his hands on the wheel for almost a full half-hour before the crash. He was so confident in the technology that investigators found evidence after the wreck suggesting he was watching a Harry Potter movie at the instant it happened (how is that for a tie-back?).
It’s easy to judge. It seems obvious that you shouldn’t ignore the warnings, and that you need to intervene at times when automation moves into uncharted territory.
But let’s take a hard look in the mirror. If your HR team could build a system that was shown in test cases to be “better than people” at doing the work, even if it wasn’t perfect, how long would it be before nobody put their hands on the wheel anymore? Odds are, one long road trip (or one bonus cycle) after people are freed of the headaches and details of the “grunt work,” they won’t check the computer’s work anymore. And when you get into subjective decision making, even computers are going to make some bad calls – so then whose fault is it?
On the other hand, the organizations that ignore what can be learned from machine learning are in just as much danger. As competitors adopt data strategies and work toward more competitive practices around pay, retention, recruiting, and rewarding and engaging their employees, the companies that can best use information about their employees’ preferences, patterns, and performance will have huge advantages over those not maximizing the use of their own data. The risk of not knowing becomes just as dangerous. If you go back and watch the first Harry Potter movie (another tie-back!), you see that the core of the story is two maverick boys – poor students, really – going on adventures, who end up needing the bookish girl, Hermione, to help them do things like levitate objects, unlock doors, and navigate challenging puzzles and mysteries. Machine learning and statistical analysis on our HR systems have the potential to unlock just as many doors and help practitioners solve difficult problems. But if we don’t bother to learn what they can do, or never ask them to help us solve a problem, then we might be going on our adventures without all the resources we could have.
So how do you bridge the gap between Harry and Hermione?
Here are some tips and questions.
If you have machine learning systems running to solve problems in your HR organization:
Ask questions like:
- “What is this really telling us about our employees, candidates, business?” (beyond bullet points)
- “How is this process going to help me make a better business decision?”
- “What problem is this going to help solve?”
- And, after all of those: “How much better or worse than a good manager’s intuition is any of this work proving to be?”
That last question is important, because if your big machine learning effort turns out to yield something that says “We are 95% confident that people who have bad managers and who are treated really badly are more likely to quit than people who have good managers who treat them well” – then congratulations, you wasted money.
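One way to make that last question concrete is to compare any model against a trivial baseline on the same held-out data before celebrating its accuracy. A minimal sketch, using purely hypothetical numbers (the labels and predictions below are illustrative, not from any real system):

```python
# 1 = employee quit, 0 = employee stayed (hypothetical held-out data)
actual    = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]
predicted = [1, 0, 0, 0, 0, 1, 0, 1, 0, 0]  # hypothetical model output

# Accuracy of the model: fraction of predictions that match reality.
accuracy = sum(a == p for a, p in zip(actual, predicted)) / len(actual)

# A trivial baseline: always predict the majority class ("stayed").
baseline = max(actual.count(0), actual.count(1)) / len(actual)

print(accuracy)  # 0.8
print(baseline)  # 0.7
```

If the gap between those two numbers is small, the expensive model may be telling you little more than a good manager’s intuition already would.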
If you don’t have machine learning systems running to solve problems in your HR organization:
Ask questions like:
- “Do we have data?” (And realize that “data” and “information” are two different things)
- “If we have data, what all do we have?”
- “Do we have meaningful metrics for which we have good data and a historical understanding?” Some very basic examples:
  - recruiting (time to fill, quality, cost, etc.)
  - retention (retention rates, voluntary/involuntary splits, etc.)
  - compensation (competitive rates, internal equity, growth)
  - performance (performance ratings, productivity, ROI-type metrics, etc.)
- “If we can solve for those metrics, what would we be willing to do?”
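Before any machine learning, metrics like the ones above can be computed from the data you already have. A minimal sketch, assuming a hypothetical set of employee records (the field names and figures are invented for illustration only):

```python
from datetime import date

# Hypothetical employee records -- fields and values are illustrative only.
employees = [
    {"id": 1, "hired": date(2020, 1, 6),  "left": None,              "reason": None},
    {"id": 2, "hired": date(2020, 3, 2),  "left": date(2021, 2, 1),  "reason": "voluntary"},
    {"id": 3, "hired": date(2019, 7, 15), "left": date(2021, 6, 30), "reason": "involuntary"},
    {"id": 4, "hired": date(2021, 1, 11), "left": None,              "reason": None},
]

def retention_rate(records):
    """Share of employees still active (no departure date)."""
    active = sum(1 for e in records if e["left"] is None)
    return active / len(records)

def turnover_split(records):
    """Count departures by voluntary vs. involuntary reason."""
    split = {"voluntary": 0, "involuntary": 0}
    for e in records:
        if e["reason"] in split:
            split[e["reason"]] += 1
    return split

print(retention_rate(employees))  # 0.5
print(turnover_split(employees))  # {'voluntary': 1, 'involuntary': 1}
```

Even simple descriptive numbers like these establish the baseline understanding you need before asking whether a machine learning model is adding anything at all.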