Depending on who you listen to, between 7% and 75% of the jobs we know today will disappear in the next decade because of automation. For HR professionals, that’s naturally a double threat because (a) our jobs are among those being automated and (b) it won’t take long for the new robot overlords to realize that they don’t need “HR” if they eliminate most of the “H” portion.
That may be a little dramatic, but there is little doubt that automation will dramatically change how we work – and perhaps whether some of us work at all. If you attend conferences, read the technology news, watch a TED talk from the last few years, or browse the internet, you’re likely to eventually stumble onto a discussion about artificial intelligence or machine learning.
This is a heavy topic, with a lot to understand. My plan is to unpack it in a short series of blog posts. This first post is an attempt to set some context and a framework for future discussions. We will start with some of the history and context; future posts will cover some ethical and legal concerns you should have. Eventually we will start working through some simple examples so that, as a community, we can all learn and grow together.
What is machine learning?
Machine learning traces its roots back to a guy named Arthur Samuel in the late 1950s. Samuel believed computers could “learn” without being explicitly programmed. He liked to play checkers (and was quite good at it), so to have a worthy opponent he programmed a computer to think strategically like he did. Samuel quickly discovered, though, that the computer could never be smarter than he was – Samuel knew intimately what the computer would do next because he had painstakingly put all of the strategy and logic into punch cards himself, so he always won against the machine.
Samuel decided to let the computer play itself and simply record what the outcomes of different move patterns were. Without having to interact with a human, the computer could play – and “learn” from – millions of games of checkers overnight, recording the ultimate winner and loser of each combination of moves. This technique allowed the computer to play more games, gain more experience, and see more outcomes than Samuel ever could.
Then the computer started winning. Arthur Samuel had his worthy opponent. Armed with a library of rules identifying what moves led to victory rather than defeat, the machine learned how to overcome Samuel’s strategy.
This is machine learning: enabling a computer to observe (or simulate) a wide variety of decisions and outcomes to discover the underlying “rules” of the game.
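To make Samuel’s trick concrete, here is a minimal sketch in Python. Checkers itself would take far more code, so this uses an invented stand-in: a simple counting game. Everything here – the game, the numbers, the function names – is for illustration only. The program plays itself at random, records which (position, move) pairs ended up on the winning side, and then chooses moves by observed win rate:

```python
import random
from collections import defaultdict

random.seed(0)  # for reproducibility

# Toy game (a stand-in for checkers): two players alternately take 1-3
# sticks from a pile of 21; whoever takes the last stick wins.
PILE, MOVES = 21, (1, 2, 3)

# Observed outcomes: how often each (pile, move) pair was played, and how
# often the player who made that move went on to win the game.
plays = defaultdict(int)
wins = defaultdict(int)

def random_game():
    """Play one game with purely random moves; return (winner, move log)."""
    pile, player, log = PILE, 0, {0: [], 1: []}
    while True:
        move = random.choice([m for m in MOVES if m <= pile])
        log[player].append((pile, move))
        pile -= move
        if pile == 0:
            return player, log  # this player took the last stick and wins
        player = 1 - player

def learn(n_games=50_000):
    """Samuel's trick: self-play many games 'overnight', record outcomes."""
    for _ in range(n_games):
        winner, log = random_game()
        for player, record in log.items():
            for state_move in record:
                plays[state_move] += 1
                wins[state_move] += (player == winner)

def best_move(pile):
    """Pick the move with the best observed win rate from this position."""
    legal = [m for m in MOVES if m <= pile]
    return max(legal, key=lambda m: wins[(pile, m)] / max(plays[(pile, m)], 1))

learn()
print(best_move(3))  # -> 3: take the whole pile and win immediately
print(best_move(5))  # -> 1: leave the opponent a losing pile of 4
```

No one programmed the strategy in, yet for positions near the end of the game the recorded win rates rediscover the known optimal moves – the same effect Samuel observed with checkers.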
From Checkers to “Taking all of our jobs”
Since Arthur Samuel’s checkers game 60 years ago, things have changed a bit. The world of machine learning has moved from relatively simple games to complex decisions. Here are some things to consider about what has changed:
- According to Northwestern University, the world is on track to increase the total amount of data we have by 10x from 2013 to 2020. The amount of data produced every single day is 250,000 times the amount of data in the entire Library of Congress. And companies, data scientists, social media sites, and even governments are all working to standardize more of how that data is stored, shared, and mined.
- Computing power has grown by 1,000,000,000,000 times in the last 60 years. And that’s not total world computing power; that’s individual device capability. A $300 PS4 (already dated technology) is over 1,000 times more powerful than the $16,000,000 Cray-2 supercomputer of 1985.
- Open-source programming and simple graphical interfaces are quickly making the art of command-line coding seem archaic – or at least unnecessary. Free software tools are replacing the need to build your own, and massive online communities and YouTube are replacing years of school and expensive training resources.
These changes have unlocked machine learning – we are no longer limited to checkers and games with simple rules to follow. And anyone can build a worthy opponent.
Anyone can be a data scientist today. A person with a $150 laptop, an internet connection, and a high level of curiosity can learn and do things that were either impossible or incredibly difficult just 5-10 years ago. As a result, machine learning can be applied quickly, cheaply, and by a far wider cast of people. It also means more and more work can be put to the test, to be replaced by algorithms and robots.
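To show just how low the barrier is, here is a self-contained sketch of one of the simplest machine learning techniques, k-nearest-neighbors, in plain Python – no special hardware, no paid software, not even an extra library. The “attrition” data and column meanings are entirely invented for illustration; this is a sketch of the idea, not a production model:

```python
import math

# Hypothetical, invented data: (tenure_years, overtime_hours_per_week) for
# past employees, labeled 1 if they left within a year, 0 if they stayed.
history = [
    ((0.5, 15), 1), ((1.0, 12), 1), ((0.8, 20), 1), ((1.5, 14), 1),
    ((6.0, 2), 0), ((4.5, 5), 0), ((8.0, 0), 0), ((5.0, 3), 0),
]

def predict(employee, k=3):
    """Classic k-nearest-neighbors: label a new case by majority vote
    of the k most similar past cases."""
    by_distance = sorted(history, key=lambda item: math.dist(employee, item[0]))
    votes = [label for _, label in by_distance[:k]]
    return max(set(votes), key=votes.count)

print(predict((0.7, 18)))  # -> 1: resembles the short-tenure, high-overtime group
print(predict((7.0, 1)))   # -> 0: resembles the long-tenure, low-overtime group
```

Twenty lines of free code is obviously not a dermatology-grade model, but the core idea – learn from recorded outcomes, then predict new cases – is the same, which is why so much more work is now within reach of so many more people.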
We are not just talking about administrative or clerical jobs. Automation is already changing and replacing work in our economy’s most advanced fields. One daunting example comes from Stanford, where researchers took 130,000 images of different skin samples and different types of skin cancer and disease. Using machine learning, they trained a model that can detect skin cancer as accurately as a board-certified dermatologist.
Right now, you can use your smartphone to screen for skin cancer. If a phone app can be as accurate or better than the average dermatologist at their specialty, how wide is the “skill moat” for the jobs in your organization? Or even your job?
There is a great interactive from the McKinsey Global Institute that estimates the automation possibilities for 750 occupations. McKinsey estimates that 90% of payroll clerk jobs can be automated, 50% for HR assistants, 25% for HR specialists, and 10-15% for Human Resource Managers. At first glance this suggests managers are safer because their work involves managing others, but it raises further questions: if there are fewer specialists and assistants, will we need as many managers?
Why do we, HR professionals, need to learn about machine learning?
We can’t deny the possibilities of automation, and we need to be a part of the business conversation on the topic.
With the tremendous opportunity, though, comes notable risk. We need to understand how machine learning works to enable a thoughtful discussion of how to use it. There are countless MBA case studies, books, and corporate bankruptcies born of someone taking a myopic view of some data and making bad decisions. The “bean-counter” phrase came into popularity when accountants controlled things they didn’t understand. We’re potentially entering a phase where people may turn decisions over to computers when neither the computer nor the people understand each other. As HR professionals and as nerds, there is a “defense against the dark arts” element to this magical world of technology that we need to consider.
Beyond avoiding those risks, machine learning can already help us do our jobs better. It can teach us more about our data, businesses, and people. It is way too early to simply hand over control to the nice color-coded predictive dashboards and processes that vendors are going to keep building. But even today, with current tools, machine learning could help almost any analyst, HR or otherwise, gain deeper understanding and insights.
As we add more content and continue this series, we will start to dip our toes into some of the specific concepts that we need to care about. Not everyone needs to understand how a neural network model works, but there is power in understanding the concept. It even sounds cool at a cocktail party.
And if you want to outrun the robots, you better know how fast they run.