Last Sunday, the service at the First Parish in Bedford was led by a guest speaker, Amy Smith, an engineer, inventor, and Senior Lecturer at MIT. She began by asking whether we had had breakfast; we had. She then asked whether we had fetched water, gathered wood, built a fire, shelled corn, or done any of the other labor-intensive tasks needed to make that breakfast; we had not. Making breakfast can be labor-intensive. Amy explained that, in some places, women spend two or more hours a day grinding grain for their families to eat. She also said that, around the world, women and children spend 40 billion hours a year fetching water: more hours than the entire labor force of California works in a year. Making breakfast is also dangerous. What is the number one cause of death for children under five around the world? Amy told us that it is “breathing the smoke from cooking fires,” which kills more than 2 million children each year. Her solution was to teach people to make clean-burning charcoal from waste organic matter. She also invented a number of devices, such as a corn sheller, that people could build for themselves and that reduced their labor. The net result was that farmers could work more efficiently, earn money, and create a product with a positive impact on their society.
I admire Amy, and I thought about this story while talking to clients this past week. The story I hear during meetings usually goes something like this. A few brilliant people have an idea for a product or service that they believe will benefit society. They decide that they can earn money, but staying competitive means limiting capital investment and expenses. Usually someone in the room talks about how difficult this would have been 10-15 years ago: in those days, they would have had to invest a lot of money in powerful computers and hire a staff of people to scale up the business. Today, that model might not work because growth would be too slow: it is hard to raise capital, and it takes a lot of time to hire people with the right skills. Besides that, buying compute power is now almost as simple as buying electric power: the company pays only for what it uses. No need to buy equipment; no need to hire unnecessary staff. By purchasing this infrastructure as a service, the company can focus on hiring people who bring specific skills to its core business. There is no magic here: everyone from the farmer to the entrepreneur benefits from labor-saving inventions. Everyone who is building a product or service wants to increase productivity with a minimum of additional labor.
In my earlier post on past economic trends, I included a graph that showed the relationship between productivity and real earnings. In light of these two stories, I would argue that earnings tracked productivity before 1975 only because business owners had few choices: growth typically meant purchasing buildings and machines, and hiring people. No business owner, however, wants to buy expensive equipment, or hire unnecessary staff, to achieve growth. Given a choice, they will avoid it. Last week, I discussed the current economic situation, and asserted that, although the principles of economics remain constant, how we experience the economy depends on the technological level of society. The impact of technology is always the same: it reduces the labor needed to produce value. All of us enjoy labor-saving devices, but few of us like being unable to earn a living because our skills, which took years to learn, are no longer needed. Henry Blodget, co-founder, CEO, and Editor-in-Chief of Business Insider, tells us not to worry about robots stealing jobs. Why should anyone think that today’s situation is something new, and different from past technological changes?
First, technology provides business owners with more choices about how to grow their business: they are not limited to investing in equipment and staff. Second, the pace of change now is unlike anything seen in the past, and the pace is accelerating. Today’s digital revolution will be at least as significant as the industrial revolution, but will impact society in a compressed time frame. Third, machines will surpass humans in ability for a significant fraction of the jobs done today by average people. Unlike the past, when technology impacted a specific industry, today’s digital revolution will impact all industries. Job replacement at this scale, and at this pace, has not happened in the past.
In the long run, I believe that we can harness the power of digital technology for a positive impact on society. Even if we are flush with jobs in the future, however, the issue remains that the transition period will be painful for many people. The point of this series is to talk about how to prosper during the transitional years.
Let me offer some background. John Maynard Keynes coined the term “technological unemployment” in the paper “Economic Possibilities for our Grandchildren,” published in 1930. The fundamental ideas go back even further. The idea that technological unemployment will lead to permanent structural unemployment is known as the Luddite fallacy. The economist Alex Tabarrok summarizes the idea of this fallacy when he states that “If the Luddite fallacy were true we would all be out of work because productivity has been increasing for two centuries.” Perhaps, but we all know that past performance does not necessarily predict the future, a message we remember when investing our money. As Paul Krugman astutely points out, the industrial revolution raised living standards, but many workers were hurt in the process, so the Luddites did have a valid concern.
Let’s elaborate on the pace of change: in 1800, most Americans (90%) worked in agriculture, but by the year 2000, due to industrialization, only about 2% did. This shift in employment happened over a period of 200 years, but future employment shifts will happen much faster. Without enough time to adapt, much of society will feel increased pain. To be clear, I’m an optimist and believe that the long-term future is bright; my concern is the transition period that we are in now. One blog, the Weekly Sift, makes the case that if technology creates unimaginable abundance, with little need for labor, then what we really have is a social problem, not an economic problem. No one would complain about the elimination of work except for the fact that wages are the primary way that most people earn money. Even with the most optimistic outlook, future nirvana is a few years out, and social problems can take years to fix (just consider Congress today). Between now and then, that imaginary footbridge I mentioned in the introduction will continue collapsing behind us, and most people will have to run very fast to avoid falling into the abyss below.
Lots of people have considered these ideas, especially the people I introduced in the earlier post about past economic trends. Martin Ford published The Lights in the Tunnel in 2009. He provides a non-technical visualization of the world economy and how it would respond to a rising displacement of workers by technology. His arguments for how technology impacts the economy are clear and to the point. Ford recently published an article in the Communications of the ACM (Association for Computing Machinery). Essentially, he argues that most jobs are routine and that machines are rapidly acquiring the skills to do them, so the acquisition of new skills by people may be an inadequate defense. Ford also maintains a blog called econfuture, which covers future economics and technology. Another influential book, Race Against the Machine, written by Erik Brynjolfsson and Andrew McAfee, both of the MIT Sloan School of Management, was published in 2011. If you have not read the book, a good summary was published in The Atlantic. Essentially, this book argues that we are at the beginning of “the great restructuring.” A new book, The Second Machine Age, is due to be published in January 2014. Let me summarize the argument for why the digital revolution, especially the rise of automation, will have such a profound impact on our economy.
Technological advancement is accelerating exponentially. In 1965, Gordon E. Moore made an observation, now called Moore’s Law, that became a formula for predicting the rate of technological advancement. Originally, the law applied to integrated circuits: Moore observed that the number of transistors in a single device doubled every 18 to 24 months. To visualize the idea, and to project when a computer might execute the same number of computations per second as a human brain, Mother Jones magazine imagines filling Lake Michigan, starting with a single fluid ounce of water in 1940 and doubling the amount every 18 months. For 70 years, it seems like there is little progress, but then the lake suddenly fills by 2025, in the last 15 years. By analogy, computers began in about 1940, and compute power has doubled about every 18 months. If this trend were to continue, the computations per second of a computer would reach those of the human brain by 2025. The point is that when exponential change happens, it seems like little happens for a long time, and then the major impact comes all of a sudden. Today, we have just passed the 70-year mark of the doubling of compute power, so the metaphorical lake of computational power is only a few inches deep, but rising fast. We know that exponential change does not last forever, and that Moore’s Law for conventional silicon-based devices is likely coming to a close by about 2020, but new three-dimensional chips are emerging that could continue the trend. The predicted end of Moore’s Law has come and gone many times before: each time, engineers found ways to break through the perceived barriers.
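The lake analogy is easy to check with a few lines of arithmetic. Here is a minimal sketch; the lake capacity figure (roughly 1.3 quadrillion gallons) is my assumption for illustration, not a number from the original article:

```python
# Sketch of the Lake Michigan analogy: start with 1 fluid ounce of water
# in 1940 and double the amount every 18 months.
# Assumed capacity: ~1.3 quadrillion gallons (128 fl oz per gallon).
LAKE_FL_OZ = 1.3e15 * 128

def water_in_year(year, start_year=1940, doubling_months=18):
    """Fluid ounces accumulated by `year`, doubling every 18 months from 1 fl oz."""
    doublings = (year - start_year) * 12 / doubling_months
    return 2 ** doublings

for year in (1970, 2000, 2010, 2020, 2025):
    fraction = water_in_year(year) / LAKE_FL_OZ
    print(f"{year}: {fraction:.2e} of the lake")
```

Running this shows the signature of exponential growth: in 2010, after 70 years of doubling, the lake is still less than a thousandth full, yet by the mid-2020s it is almost entirely full. Nearly all of the change arrives in the final few doublings.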
In addition, alternatives to silicon technology (quantum computing, for example) may well evolve by about 2020 and replace silicon. Also, on April 2, 2013, President Obama announced the “BRAIN Initiative,” an ambitious effort to understand how the brain works. The Defense Advanced Research Projects Agency (DARPA) is sponsoring research called Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE), and is collaborating with many companies, including IBM. The goal of this research is to build a new type of computer that works much like a mammalian brain. There are other projects too: on October 13, 2013, the Human Brain Project was kicked off in Switzerland. This is a 10-year global project that will give us a better understanding of how the brain works.
Further, other researchers have generalized the concepts of Moore’s Law and Wright’s Law to predict the rate at which other technologies will advance. MIT News recently reported on a paper showing that these laws give good approximations for the rate of many types of technological progress. The original paper is available in an open-access journal. These laws hold for much more than transistors, and as summarized by the journal Nature, “Mathematical laws can predict industrial growth and productivity in many sectors.” For these reasons, I believe that the digital revolution will continue well beyond the limits of current silicon-based technology.
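To make Wright’s Law concrete: it says that unit cost falls as a power law of cumulative production, so every doubling of total units produced cuts cost by a constant factor. A minimal sketch follows; the learning exponent (0.32, roughly a 20% cost reduction per doubling) is an illustrative assumption, not a figure from the paper:

```python
# Wright's Law: cost of the next unit falls as a power law of
# cumulative production, cost(x) = c1 * x**(-b).
# The exponent b = 0.32 is an assumed, illustrative learning rate.

def wright_cost(first_unit_cost, cumulative_units, b=0.32):
    """Predicted unit cost after producing cumulative_units in total."""
    return first_unit_cost * cumulative_units ** (-b)

# Each doubling of cumulative production cuts cost by the same factor:
ratio = wright_cost(100.0, 2000) / wright_cost(100.0, 1000)
print(f"cost ratio per doubling: {ratio:.3f}")
```

The constant per-doubling factor (here 2**-0.32, about 0.80) is what makes the law useful for forecasting: fit the exponent from a technology’s history, and the curve extrapolates forward.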
To replace jobs, machines in the future won’t need artificial intelligence as depicted in science fiction. After all, many middle-income jobs are relatively boring, so there is no need to invent a machine like Commander Data from Star Trek to replace the person doing the job. Even so, machines will likely exceed our expectations. Every time we imagine a glass ceiling on machine ability, it is quickly broken. When I first learned to program a computer, my teachers taught me that the machine was very fast, but very dumb. Nevertheless, in 1997, IBM built a computer, Deep Blue, that beat the world chess champion at the time, Garry Kasparov. In 2004, Frank Levy and Richard Murnane wrote The New Division of Labor, in which they predicted which jobs computers would and would not displace. Essentially, the authors imagined that machines were limited to following simple rules, but by 2010 these limits of technology were already being proven wrong. In 2004, DARPA held a challenge to build a driverless vehicle, and the winning entry could navigate only 8 miles. As of 2010, Google had an entire fleet of autonomous cars able to travel thousands of miles on U.S. roads. DARPA focused the 2013 challenge on building humanoid robots. People continue to underestimate how advanced computers will get. In 2011, IBM built a new computer, Watson, that beat the best players at the TV game show Jeopardy! This same class of machine is now being used to do medical diagnosis. Clearly, the gap between machine and human ability is narrowing.
Although some companies are still spending money to buy equipment, few are hiring a lot of workers. Not only are machines cheaper to use, but they are becoming more skilled, flexible, and exact: they are doing more and more jobs once done only by human workers. This trend of using machines instead of labor is not limited to the United States; it is happening in China as well. These machines are light-years ahead of the robot I used for research. When I graduated from MIT in 1986, I interviewed for a job designing a machine to automatically sew clothing. It was an interesting, but incredibly complex, task to automate, and I did not think it could be done with the technology of that time. I decided not to take the job, but that was 27 years ago. Today, I see that DARPA has recently awarded a contract to develop “complete production facilities that produce garments with zero direct labor as the ultimate goal.” I have no doubt that this milestone in automation will soon be met.
Job replacement is not limited to low-paying jobs. Armies of expensive lawyers are being replaced by software. To prepare for a case, lawyers and paralegals used to read thousands of pages at high hourly rates. Today, software can analyze these same documents at a fraction of the cost. Every day, trading decisions on Wall Street are made by machines that have displaced human specialists. These workers may have found other work, but the new jobs may very well be less desirable. Lower-cost overseas doctors are displacing radiologists in the U.S. This is a precursor to software that will analyze these same images using advanced algorithms. It is not just automation that is impacting jobs; it is also the ease of using low-cost labor at the task level. Micro-tasking web sites, like Amazon’s Mechanical Turk, are emerging that allow people around the world to bid on and do work on a task-by-task basis. This is possible because of high-speed communication technology. Essentially, these people are knowledge workers, and their jobs are subject to off-shoring, automation, or both.
Are people coming around to believing that technology is impacting middle-class jobs? The answer is yes. Paul Krugman, in the column Robots and Robber Barons, acknowledges that technology is replacing workers in many industries. The Atlantic recently reported on a new study by a pair of economists at the University of Chicago’s Booth School of Business, Loukas Karabarbounis and Brent Neiman. In a nutshell, the study shows that labor’s share of income is plunging due to ever-improving technology. The New York Times recently published an opinion piece by David Autor and David Dorn called “The Great Divide: How Technology Wrecks the Middle Class.” In my earlier post about economic trends, I mentioned these two authors as saying that, in the past decade, demand has been rising for people with the lowest and highest skills, but falling for those in the middle. MIT Technology Review had an article in the July/August issue called “How Technology is Destroying Jobs.” This article says that Brynjolfsson is not ready to conclude that economic progress and employment have diverged for good, but he adds, “It’s one of the dirty secrets of economics: technology progress does grow the economy and create wealth, but there is no economic law that says everyone will benefit.”
These trends are not going to change anytime soon. The Oxford Martin Programme on the Impacts of Future Technology recently released a study, featured on the London School of Economics and Political Science blog, that concludes that “Improving technology now means that nearly 50 percent of occupations in the US are under threat of computerization.” The authors reviewed over 700 occupations and offer a detailed graph showing the probability of computerization by occupational category. Each category, such as “management, business, and financial,” is shown as a color (such as light blue), and the jobs in that category are distributed like grains of colored sand. The placement of each grain along the horizontal axis, from low (0) to high (1), represents the probability of computerization. The categories are stacked in turn, resulting in a graph that looks like multicolored sand art, with each of the millions of jobs in the US appearing as a tiny point. The net result is that 47% of jobs are at high risk of computerization, 19% are at medium risk, and 33% are at low risk. Here’s the graph:
The classic theory is that new occupations will replace the old ones, but that will take time. While all these smart people figure out how to solve the economic and social problems, I am going to worry about a much more mundane issue: how average people can stay employed until this restructuring is complete. This is post four of five. In the next post, I’ll outline how this story may unfold in the coming years, and how people can prosper through it all.