Artificial Intelligence is a lot like teenage sex.

Everyone talks about it. Nobody really knows how to do it. Everyone thinks everyone else is doing it. So everyone claims they are doing it.

The above was presented* by Bryan Briscoe at the recent WorldatWork Total Rewards conference, where Bryan and I presented a session called “If You Can’t Beat ‘Em, Join ‘Em: Embracing Automation and Machine Learning.” Never in our lives did Bryan or I expect to be known for talking about sex… but we’ve now had our moment.

Our presentation highlighted some key themes, some of which repeat what Bryan covered in past posts (see here, here, and here for a three-part series last year). Here is a summary of those key themes.

Computing power and algorithm efficiency are growing at an impressive rate.

As summarized in this blog post, the amount of compute power used in AI training runs is doubling every 3-4 months. Remember that the famous Moore’s Law called for an 18-month doubling time. The linked blog post puts that in mathematical perspective, but here’s a comparison that makes it real:

That’s right – your Yorkie became an elephant in three years. I think we’re all going to need a bigger yard…
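The arithmetic behind that comparison is simple compounding. Here’s a quick back-of-the-envelope sketch (assuming a 3.5-month doubling time, the midpoint of the 3-4 month range reported for AI compute):

```python
# Growth over 3 years (36 months) at a given doubling time:
# each doubling time elapsed multiplies the total by 2.
def growth_factor(months, doubling_time_months):
    return 2 ** (months / doubling_time_months)

ai_growth = growth_factor(36, 3.5)    # AI training compute: roughly 1,250x
moore_growth = growth_factor(36, 18)  # Moore's Law pace: 4x
```

A roughly 1,250x increase in three years is what turns a Yorkie into something elephant-sized; the Moore’s Law pace over the same stretch only gets you a 4x bigger dog.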

That’s an astounding rate of growth, which means the capabilities machines can develop are expanding just as quickly.

Machine Learning is just a lot of math that makes prediction cheaper.

That growth in computing power is really all about computers being able to process more data and apply math to it. In an oversimplified sense, machine learning is like running a bunch of regression models with every variable known to man until you find one that fits certain parameters (explained variance, simplicity of variables, etc.). Or, it’s like playing blackjack over and over and over again to learn how the cards that have been played can help you predict how likely you are to get a good hit. What might take you and an Excel spreadsheet a few weeks to find might only take a few seconds for a machine to discover. The ability to predict is becoming a commodity (which has interesting implications, some noted below and others described in this recent HBR Ideacast).
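To make the “it’s just a lot of math” point concrete, here is a minimal sketch of the kind of fitting that machine learning automates at massive scale: ordinary least squares on a tiny, entirely hypothetical dataset (years of experience vs. salary in $1000s). Real ML systems run far richer versions of this search, over far more variables, far faster than a human with a spreadsheet.

```python
# Fit a line y = a*x + b by least squares, then use it to predict.
def fit_line(xs, ys):
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    a = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    b = mean_y - a * mean_x
    return a, b

# Hypothetical data: years of experience -> salary ($1000s)
xs = [1, 2, 3, 4, 5]
ys = [50, 55, 61, 64, 70]

a, b = fit_line(xs, ys)

def predict(x):
    return a * x + b
```

The “learning” here is nothing more than finding the slope and intercept that minimize prediction error; scale that idea up across thousands of candidate variables and models, and you have the commodity prediction described above.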

Artificial Intelligence is still not very human-like.

Current AI efforts are challenged by a fundamental intelligence trade-off:

  • Generality: being able to handle multiple tasks
  • Performance: being good at the things you handle

It’s a classic “depth vs breadth” situation – it’s hard to be really good at all things, so we have to prioritize how generally we develop or how good we become at what we develop.

Relative to human-level generality and performance, automation efforts suffer dramatically. The following graphic, provided to us by Brandon Rohrer at Facebook, illustrates the current state. Note that the scale is logarithmic. Self-driving cars, for example, are about 1% as “general” as humans (all they can do is drive a car – don’t ask them to cook dinner), and at this point only about 25% as effective as humans at driving (there are situations they simply won’t handle).

Note that nothing is really approaching human level. The prospect of a full human replacement is a ways off.

The risks of depending on AI too much are real…

Because AI isn’t very human-like, it can make a lot of mistakes. In our conference presentation, I summarized those risks into three key issues:

  1. The conclusions/predictions drawn by AI come from a black box. Yes, the conclusions are data-driven, but how those conclusions are derived from data is not always clear. Depending on the learning method used, you may not be able to “audit” the conclusion or explain it beyond “that’s what the data says.” For HR, where the “why” matters, this may not be acceptable.
  2. Even the robots can be fooled. Image recognition is a relatively mature area of work… but it’s not perfect. Check out this blog post for examples. Another example: if we all learn that changing your LinkedIn photo predicts who is looking for jobs, and companies implement retention strategies for people at risk, then perhaps people will simply start updating their photos more frequently.
  3. What’s “true” is not necessarily what is “right.” AI is trained based on existing data, which may have a variety of biases built into it. For example, if your company happens to have a lot of high performing white male sales people, a video interviewing platform that can help score applicants based on their recorded interview might recommend more white males, which would not be helpful.

Given these risks, simply trusting the latest whiz-bang AI HR app is essentially like going to your CEO and saying this:

I’ve got this unemotional, sociopathic, amoral friend who doesn’t know right from wrong. They are ignorant of all laws, truly don’t care if people live or die, and have no understanding of the public relations impact of their choices. But they are amazing at analyzing data and can predict who to hire and how much to pay. I can’t fully explain how they do it, but it’s magical.

Let’s just say I’m not ready to make that sales pitch.

… And so is the risk of discounting AI’s impact.

So while blind reliance on AI is dangerous, it’s also a big mistake to assume there is no value. There are practical applications for AI within HR today – narrow ones that require some human guidance – and over time those applications will require less guidance and become more general. As a result, there is a real likelihood that parts of the jobs you and I do today will not exist in the future.

Based on data from the Bureau of Labor Statistics, Bloomberg created a visual showing the likelihood of jobs being automated, compared with the size of the workforce in each job and the typical pay of the role. It gets awkward when you present that to a room of Rewards practitioners:

There is a place for humans and machines to play together in the sandbox.

The good news is that the graphic above is describing the likelihood of automating the work we do today – not the work we can be doing. There are parts of our jobs we’d love to get rid of, and automation represents a chance for that. Don’t love answering basic questions? Chat bots are coming. Wish you had more time to actually meet more candidates in your pipeline? We can use resume sorting algorithms to save time and at least prioritize which candidates to meet. Spending time analyzing which learning content seems to drive changes in performance? Let the machines do the work, and then you can spend more time delivering those programs.

A good read is the new book Human + Machine: Reimagining Work in the Age of AI. It presents a model of activities that humans and machines will each dominate, as well as activities where humans and machines complement each other. For example, humans will be required to lead, or empathize… machines will be better at transacting and predicting… but machines can help give superpowers to humans when it comes to interacting. It’s a fascinating book that presents a hopeful future for our workers.

So what do you do next?

If you follow the key points and logic above, it’s clear that your call to action is to understand AI’s possibilities and risks. We’ve assembled an Artificial Intelligence toolkit, comprised of a list of books, Ted Talks, podcasts, online courses, and even a movie – all to get you more familiar with the world of AI. Click here to provide your email and download a copy.


* The quote is adapted from an original statement by Dan Ariely, where Dan says “Big Data is a lot like teenage sex…”