Tag Archives: 2025

An economy for our shared future

Allan: Irving Wladawsky-Berger writes a very good post about current and future economic challenges. Put aside the debate as to whether this is best characterized as the third or fourth industrial revolution, and instead focus on what we all need to do to cope with these changes and build an inclusive society. Inside this post are many good references, which I would also encourage people to read. Klaus Schwab, Founder and Executive Chairman, World Economic Forum, writes:

We stand on the brink of a technological revolution that will fundamentally alter the way we live, work, and relate to one another. In its scale, scope, and complexity, the transformation will be unlike anything humankind has experienced before. We do not yet know just how it will unfold, but one thing is clear: the response to it must be integrated and comprehensive, involving all stakeholders of the global polity, from the public and private sectors to academia and civil society.

For those who read this blog, you know that I have long been concerned about the challenges that will face the workforce between now and 2025. Irving Wladawsky-Berger references a Pew Research Center study, Digital Life in 2025, that predicts the impact of the Internet on humanity by 2025. This is a perfect follow-up, and this study makes expert predictions that can be “grouped into 15 identifiable theses about our digital future – eight of which we characterize as being hopeful, six as concerned, and another as a kind of neutral, sensible piece of advice that the choices that are made now will shape the future.” The most important conclusion, I think, is #15:

Foresight and accurate predictions can make a difference; ‘The best way to predict the future is to invent it.’

The issues are extremely complex; nevertheless, the future is ours to build.

Originally posted February 23, 2016
Irving Wladawsky-Berger: The Fourth Industrial Revolution

“The Fourth Industrial Revolution: what it means, how to respond” was the central theme of the 2016 World Economic Forum (WEF) that took place earlier this year in Davos, Switzerland. The theme was nicely explained by Klaus Schwab, WEF founder and executive chairman, in the lead article of a recently published Foreign Affairs Anthology on the subject.

Dr. Schwab positions the Fourth Industrial Revolution within the historical context of three previous industrial revolutions. The First, in the last third of the 18th century, introduced new tools and manufacturing processes based on steam and water power, ushering in the transition from hand-made goods to mechanized, machine-based production. The Second, a century later, revolved around steel, railroads, cars, chemicals, petroleum, electricity, the telephone and radio, leading to the age of mass production. The Third, starting in the 1960s, saw the advent of digital technologies, computers, the IT industry, and the automation of processes in just about all industries.

“Now a Fourth Industrial Revolution is building on the Third, the digital revolution that has been occurring since the middle of the last century,” he noted.  “It is characterized by a fusion of technologies that is blurring the lines between the physical, digital, and biological spheres.”

Almost everyone agrees that there was a major qualitative distinction between the First and Second Industrial Revolutions. While some believe that the Fourth is merely an evolution of the Third, Schwab argues that they’re qualitatively different for three major reasons:

  • Velocity: Compared to the previous three revolutions, “the Fourth is evolving at an exponential rather than a linear pace.”
  • Scope: Disruptions are taking place in “almost every industry in every country.”
  • Systems impact: “The breadth and depth of these changes herald the transformation of entire systems of production, management, and governance.”


Working till 2025: Crossing the Bridge

“We are not here to curse the darkness, but to light a candle that can guide us through the darkness to a safe and sure future. For the world is changing. The old era is ending. The old ways will not do.” — John F. Kennedy

In the “introduction” to this series, I said that I “visualize myself running across a wood plank footbridge that is collapsing behind me: to remain working I must continuously stay one step ahead of rapidly changing economic conditions.” I also said that this series was not going to be all about doom and gloom, yet you may feel that I’ve laid out a pretty bleak picture of both the current and future jobs situation. Before talking about the benefits of this new digital age, let me briefly discuss where we’ve been. In the second post, “past economic trends,” I introduced the raging debate among economists about the influence technology has on the economy. I showed that the historic trend, of wages tracking productivity, began breaking down in 1975. Since then, productivity has continued to climb, but wages have remained flat, except for workers with the highest skill levels. Also, demand is up for workers at the two ends of the spectrum, high and low skills, but down for the large pool of people who are middle skilled. In the third post, “life is tough these days,” I told you many things that you likely already knew: unemployment remains high, the labor participation rate is declining, long-term unemployment is a trap, average household income is falling, and it really does not feel like the recession has been over for more than four years. Also, I tried to cut through the noise of monthly labor reports, and define a macro trend for the “at risk” population (people who are suffering from unemployment, involuntary part-time employment, or underemployment). My conclusion was that one-third of the potentially working population is at risk, and the numbers are rising. In the fourth post, “the digital revolution,” I explained why I believe that technology is driving a historic shift, one that is at least as significant as the industrial revolution, and one that will happen with great speed. I ended with a study, featured on the London School of Economics and Political Science Blog, that concluded that computerization threatens about 50% of current occupations in the U.S. Maybe new occupations will emerge, but I think I’ve made a good case that many people will find the transition phase unpleasant.

I’ve dedicated this last post in the series to crossing the bridge. What do I mean by this, and why do I think it will take so long? First, I expect poor economic conditions to continue. Paul Krugman recently wrote a column, “A Permanent Slump?”, where he asked “What if depression-like conditions are on track to persist, not for another year or two, but for decades?” He calls this “secular stagnation,” a persistent state in which a depressed economy is the norm. Krugman notes that if consumer demand remains weak, unemployment will remain high. He suggests possible macroeconomic reasons, including slowing population growth and persistent trade deficits. He does not mention technology, so let’s put that to the side for the moment. The point is that the economy is not likely to recover soon. Second, zealots will continue to make it difficult to govern, and so it will be hard to put policies in place to solve these economic problems. Consider that a minority party forced a government shutdown, blocked judicial appointments, and, at the state level, has led a campaign against workers (including efforts to limit or restrict sick leave, workplace safety standards, time for meals, child labor standards, and the minimum wage). Third, while these problems are happening, we will enter the digital age where about half of current U.S. occupations will be under threat of computerization. Economic issues, government challenges, and technological changes will make getting to the other side of the bridge quite challenging.

What will a prosperous twenty-first century economy look like? I don’t know the answer to that question, but, more importantly, I don’t believe that economists know either. One narrative is that macroeconomic principles will play out as they always have in the past. New occupations will emerge to replace those that have become obsolete, and we’ll return to rising productivity and full employment. An alternative narrative says that, as we continue to transform from connected regional economies to an integrated world economy, the dynamics of the economy will change. Our experiences from the past won’t guide us to accurate predictions about the future. Our economy will have become a single closed system where it may no longer be possible for the U.S. to have continuously increasing productivity, and the full employment that went along with it. Economists may need to build new models for the twenty-first century economy, and we may need to reconsider the nature of work itself.

The Next Enlightened Age

Regardless of how the economists settle this debate, this new technological age is a great opportunity. Many authors have suggested that we can, if we choose, enter into an age of abundance and sustainability. For instance, Peter H. Diamandis and Steven Kotler offer an optimistic view of the future in their book Abundance: The Future Is Better Than You Think. The technology that is coming will give us the opportunity to solve massive problems, including poverty, disease, climate change, and more. But technology alone is no silver bullet. We need to re-think our core principles and figure out how to build a sustainable economy. Exponential growth cannot continue forever; we need to imagine a steady state that provides security, opportunity, and prosperity to everyone.

We need a society where the workforce is fluid, and people can easily move between jobs. Switzerland is considering a proposal to give every citizen a basic income, no strings attached. I think this idea is worth more study. Some worry about the cost, not to mention creating a disincentive to work. But perhaps those worries are overblown. Perhaps people would be more willing to take risks, and become entrepreneurs, if we guaranteed everyone a minimum income. If people had dignity and security, perhaps they could walk away from low-end jobs unless the employer offered fair pay and good working conditions. They would not have to accept such jobs just to survive. There is conservative appeal to this idea as well: it would replace many government programs, especially if it were simple and had no means testing. If people had a basic income (for security and dignity), had access to health insurance (regardless of their employment situation), and could get any level of education (without going into debt), then we would curb poverty and enable the workforce to thrive in the twenty-first century. People, if standing on a solid foundation, might be able to take part in “free, voluntary exchange to mutual benefit” (Ayn Rand) as libertarians and others want.

Maybe these predictions are overly optimistic (many authors also write about a future that is a dystopia), but if you believe that the result will largely depend on the choices we make, then I see no reason not to hold out hope and fight for the better outcome. An essay in the Weekly Sift discusses how Nate Silver made the interesting observation that the printing press led to the Enlightenment, but only after 300 years of polarization and war. Silver says that when initially faced with information overload, we “engage with it selectively, picking out the parts we like and ignoring the remainder, making allies with those who have made the same choices and enemies of the rest.” Silver goes on to draw a parallel to the rise of the internet, observing that information overload has led to polarization in our time. As the digital revolution unfolds, I am hopeful that the outcome will be another period of enlightenment. I am also hopeful that, like the pace of technology, the pace of history will be much faster this time. There will be plenty of time to think about what the world, and the economy, will look like beyond 2025. I will, however, close this series by considering just the transition period.

Timeline to 2025

As context, let me show how computers will advance. You likely use a device with a microprocessor based on 22 nanometer technology; the first such microprocessor came out in 2011. Intel plans to ship 14 nanometer technology in laptops in 2014. Looking to the future, new 10 nanometer technology will emerge, and perhaps another generation (7 nanometer technology) by the end of the decade. In my last post, I described how exponential growth is slow for a long time, and then explodes. Also, earlier in this series, I mentioned that my personal computer today is 60,000 times faster than the IBM PC AT I used years ago for robotics research. Using that as a baseline, I’ll outline how computer technology might advance.

  • Current PC (22 nanometer technology): 60,000 times faster than IBM PC AT
  • 2014 (14 nanometer technology): 120,000 times faster
  • 2016 (10 nanometer technology): 240,000 times faster
  • 2018 (7 nanometer technology): 480,000 times faster
  • 2020 (tentative, alternate technology): 960,000 times faster
  • 2022 (tentative, alternate technology): 1,920,000 times faster
  • 2024 (tentative, alternate technology): 3,840,000 times faster

The dates are approximate, the doubling factor is inexact, and alternative technologies are still in the laboratory. Nevertheless, the macro trend is clear: digital technology is going to explode in terms of computational power. With this timeline in mind, I’ll outline the coming years.
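
To make the doubling arithmetic concrete, here is a minimal Python sketch of the projection behind the list above. The 60,000x baseline and the roughly two-year doubling cadence are the assumptions from this post, not measured values.

```python
# Rough projection of compute speedup relative to the IBM PC AT,
# assuming the post's baseline (60,000x today) and one doubling
# per process generation, spaced roughly two years apart.

BASELINE_SPEEDUP = 60_000          # current PC vs. IBM PC AT (author's estimate)
GENERATIONS = [                    # (year, technology label) -- approximate
    (2014, "14 nm"),
    (2016, "10 nm"),
    (2018, "7 nm"),
    (2020, "alternate technology (tentative)"),
    (2022, "alternate technology (tentative)"),
    (2024, "alternate technology (tentative)"),
]

speedup = BASELINE_SPEEDUP
print(f"today (22 nm): {speedup:>12,} x IBM PC AT")
for year, label in GENERATIONS:
    speedup *= 2                   # one doubling per generation
    print(f"{year} ({label}): {speedup:>12,} x IBM PC AT")
```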

2013 – 2016: The MIT Sloan Management Review recently published a study asserting that embracing digital technology is a strategic imperative for companies — “adopt new technologies effectively or face competitive obsolescence.” Over three-quarters of the participants saw the transformation as urgent, but many also complained about “innovation fatigue” given the number of changes they’ve recently been through. The pace of change, however, is speeding up, and the group most energized about taking on the challenge is the younger generation.

Perhaps rapid changes will create opportunities for start-ups with disruptive technologies and products. IT spending, which exceeds $3 trillion worldwide, will stay focused around cloud, mobile, social, big data and analytics. New platforms for application development continue to emerge: IBM recently announced the Watson Ecosystem, based upon the computer that won at Jeopardy, to “spur innovation and fuel a new ecosystem of entrepreneurial software app providers.” In 2012, the internet began switching to a new protocol (IPv6), setting the stage for an emerging “internet of things.” Soon, you’ll be shopping for a smart watch, and robots will do work in new places, such as farms and mining operations. Young people are creating their own digital brands, and they’re networking not just with LinkedIn, but also with new communities, such as FounderDating and MeetUp. The Boston New Technology Meetup Group has a product showcase once a month, where entrepreneurs demonstrate emerging products. Meetings are often held at hack/reduce, which is a non-profit that collaborates with government, corporations, and universities to “help Boston create the talent and the technologies that will shape our future in a big data-driven economy.” They give interested people access to “a large-scale compute cluster, hands-on workshops, and a physical space in the heart of Kendall Square.”

Start-up companies will prosper for many years during the digital revolution; many will launch with close to zero capital investment. Just a few mouse clicks can provision servers in the cloud, which can then be used on a pay-as-you-go basis. All companies will enjoy a world-wide audience: just a few employees will be able to reach thousands of customers. Facebook and Twitter were the beginning, but the benefits will extend beyond high technology to anyone with a unique product or skill to offer. My niece is an artist who is able to sell her products to a worldwide audience using the web and social media. Opportunities will continue for people at the highest level of skill, especially those who combine academic, social, managerial, and leadership talents.

Overall, however, expect problems with the economy to continue. Collaboration in high technology represents the bright side of the sharing economy. A recent article in the New York Times describes an emerging trend toward sharing “activities as diverse as car-pooling, ride-sharing, opening one’s home to strangers via Web-based services like Couchsurfing or Airbnb, sharing office space and working in community gardens and food co-ops.” These services, while innovative, are a tactic people are using to support their livelihood and cope with the loss of jobs in the U.S. At the other end of the spectrum are people who survive by doing odd jobs, borrowing money, and going back to school at 60 years old. I expect the number of “at risk” people, as defined in an earlier post, to rise. As mid-skill jobs become obsolete, and these workers shift to low-skill jobs, citizens will demand an increase to the minimum wage. Some will argue against this change, saying that it will accelerate the shift to automation. The truth is that automation is coming regardless of government policies. California has a bill to raise the minimum wage by 2016; other legislation will emerge at both the state and federal level. The dark side of the story is that not everyone will benefit: inequality will continue to rise. The productivity and wealth created by our digital economy will depend less and less on labor.  The economic trends I described in earlier posts will continue.

2017 – 2020: The digital revolution will continue, but other areas will come to the forefront as well, especially genetics, nanotechnology and robotics. Silicon technology will peak during this period or soon after. Cloud computing platforms will become a commodity. You’ll be able to “print” three dimensional objects, such as replacement parts and small toys, inexpensively. IBM predicts that by 2017, machines will mimic all five human senses with technology. Memory implants will be possible. You will be able to “feel” the texture of objects by touching the screen on your phone. Your phone will also be able to “smell” the environment, perhaps detecting that you are sick, and adjust to the context of your environment based on listening to background noise. By the way, a paper-thin device may replace your phone or tablet computer. Nearly half of internet traffic will originate from non-PC devices, such as TVs, handsets and others; the number of devices on the internet will be triple the worldwide population. Analysts predict that we will be living in the “digital universe.” The number of servers worldwide (virtual and physical) will have grown by an order of magnitude. Language barriers will be more or less gone due to machine translation.

2021 – 2025: Computational power will likely continue to increase according to Moore’s law, using three-dimensional circuits or another technological innovation. The day when computers match the human brain in terms of computations per second will come into sight; it could arrive as early as 2025. In the very long term, IBM researchers have a vision that “by 2060, a one petaflop computer that would fill half a football field today, will fit on your desktop.” Narrow artificial intelligence (or weak AI) will continue evolving at an increasing pace. Note that everything I’ve discussed so far depends only on specialized machine intelligence. None of these trends imply that machines will pass the Turing test (show intelligence indistinguishable from that of a human). Someday, however, it may become a possibility that cannot be discounted. Regardless, robots will become ubiquitous at work and at home.

Crossing the Bridge

For many years, the debate about the impact of technology on the economy will continue to rage. The usual suspects will dig into established positions. On one side, people such as Scott Winship of the Brookings Institution will argue that robots do not cause unemployment and that people who say such things are suffering from “Technophobia.” I disagree. I welcome the coming digital revolution and agree that technology promises to bring great benefits to society. For this to happen, however, society needs to understand and manage the changing dynamics of the economy. We can’t solve the problem by simply saying that government needs to get out of the way so that the free market can solve all ills. There is no law of physics that says rising productivity must bring rising wages and more jobs. This was true in the past, but the pattern may not hold true in the future. On the other side, people will argue that government can solve issues by updating policies for money, labor, and trade. We may, however, find such changes insufficient. We’re entering a new era where the fundamental nature of work will change. A fresh view of the economy needs to emerge. I don’t know exactly what this new economy will look like, but I do believe that the solution will need cooperation between business and government.

A bright spot is Massive Open Online Courses (MOOCs). The number of available classes will continue growing at a phenomenal rate. Advanced education will become ubiquitous, and often freely available, though debate will continue about the relative benefits of online vs. classroom education. This is the good news. The bad news is that advanced education will not guarantee success. Some jobs, once held by highly educated workers, will become vulnerable. All knowledge worker jobs will eventually come under wage pressure from offshore resources or automation. People who remain open-minded and nimble will have an advantage. Despite these changes, I would make it a priority to offer advanced education to as many people as possible.
Here are some thoughts about staying employed until the scientists and economists figure all this out.

  • Take advantage of the low-cost of entry to start a digital business.
  • Prefer jobs that are less likely to be automated in the near future. Last week I mentioned a detailed study that reviewed over 700 occupations and predicted which would be affected by automation. Examples of occupations that are relatively safe are management, science, education and health care. In brief, people-oriented jobs are harder to automate than routine jobs, whether manual labor or knowledge work.
  • If you are not the academic type, consider a trade.  It will be some time before robots take jobs as plumbers or electricians.
  • Networking has always been important, but it will be even more so in the future. Because of automatic screening of applications, it will be increasingly hard to find a job without a personal connection.
  • Volunteer in your community. You may not always be employed, but you will always have something to offer. We’re going to need to help each other during these transition years.

Let me close by saying that I will follow up on this series in a few ways. First, I’ll continue to track the at-risk labor force (and improve my methods). Second, I’ll read new books as they come out, such as The Second Machine Age, which is due out in January. This conversation is going to heat up in the coming years, so I’m sure that my views and opinions will evolve as I learn more. Third, I’ll give an update after the 2014 MIT Sloan CIO Symposium. I am very fortunate to have recently joined the team that is planning this event. I expect to have many interesting conversations that will give me a lot to write about.

To stay employed, find a passion and stick with it.  Leadership and creativity will set you apart. Don’t be afraid to study the arts because there will be many people and machines that can do purely technical work.

Working till 2025: The Digital Revolution

Amy Smith at First Parish in Bedford

Last Sunday, the service at the First Parish in Bedford was led by a guest speaker, Amy Smith, who is an engineer, inventor, and a Senior Lecturer at MIT. She began by asking us if we had had breakfast; we had. She then asked if we had fetched water, gathered wood, built a fire, shelled corn, and done many other labor-intensive tasks to have our breakfast; we had not. Making breakfast can be labor intensive. Amy explained that, in some places, women spend two or more hours a day grinding grain for their families to eat. She also said that, around the world, women and children spend 40 billion hours a year fetching water; more hours than the entire California labor force works in a year. Making breakfast is also dangerous. What’s the number one cause of death for children under five around the world? Amy told us that it is “breathing the smoke from cooking fires,” which kills more than 2 million children each year. Her solution was to teach people to make clean-burning charcoal from waste organic matter. She invented a number of devices, such as a corn sheller, that people could build for themselves, and that reduced the labor. The net result was that farmers were able to work more efficiently, earn money, and create a product that had a positive impact on their society.

I admire Amy, and I thought about this story while talking to clients this past week. The story I hear during meetings usually goes something like this. A few brilliant people have an idea for a product or service that they believe will benefit society. They decide that they can earn money, but being competitive means limiting capital investment and expenses. Usually someone in the room talks about how difficult this would have been 10-15 years ago: in those days, they would have had to invest a lot of money in powerful computers, and hire a staff of people to scale up the business. Today, that model might not work because growth would be too slow: it is hard to raise capital, and it takes a lot of time to hire people with the right skills. Besides that, buying compute power now is almost as simple as buying electric power: the company pays only for what it uses. No need to buy equipment; no need to hire unnecessary staff. By purchasing this infrastructure as a service, the company can focus on hiring people who bring specific skills to their core business. There is no magic here: everyone from the farmer to the entrepreneur benefits from labor-saving inventions. Everyone who is building a product or service wants to increase productivity with a minimum of additional labor.

In my earlier post on past economic trends, I included a graph that showed the relationship between productivity and real earnings. In light of these two stories, I would argue that the only reason earnings tracked productivity before 1975 was that there were few choices: growth typically involved purchasing buildings and machines, and hiring people. No business owner, however, wants to buy expensive equipment, or hire unnecessary staff, to achieve growth. Given a choice, they will avoid it. Last week, I discussed the current economic situation, and asserted that, although the principles of economics remain constant, how we experience the economy depends on the technological level of society. The impact of technology is always the same: it reduces the labor needed to produce value. All of us enjoy labor-saving devices, but few of us like being unable to earn a living because our skills, which took years to learn, are no longer needed. Henry Blodget, co-founder, CEO and Editor-in-Chief of Business Insider, tells us not to worry about robots stealing jobs. Why should anyone think that today’s situation is something new, and different from past technological changes?

First, technology provides business owners with more choices about how to grow their business: they are not limited to investing in equipment and staff. Second, the pace of change now is unlike anything seen in the past, and the pace is accelerating. Today’s digital revolution will be at least as significant as the industrial revolution, but will impact society in a compressed time frame. Third, machines will surpass humans in ability for a significant fraction of the jobs done today by average people. Unlike the past, when technology impacted a specific industry, today’s digital revolution will impact all industries. Job replacement at this scale, and at this pace, has not happened in the past.

In the long run, I believe that we can harness the power of digital technology for a positive impact on society. Even if we are flush with jobs in the future, however, the issue remains that the transition period will be painful for many people. The point of this series is to talk about how to prosper during the transitional years.

Let me offer some background. John Maynard Keynes coined the term “technological unemployment” in the paper “Economic Possibilities for our Grandchildren,” published in 1930. The fundamental ideas go back even farther. The idea that technological unemployment will lead to permanent structural unemployment is known as the Luddite fallacy. The economist Alex Tabarrok summarizes the idea of this fallacy when he states that “If the Luddite fallacy were true we would all be out of work because productivity has been increasing for two centuries.” Perhaps, but we all know that past performance does not necessarily predict the future, a message we remember when investing our money. As Paul Krugman astutely points out, the industrial revolution raised living standards, but many workers were hurt in the process, so the Luddites did have a valid concern.

Let’s elaborate on the pace of change: in 1800, most Americans (90%) worked in agriculture, but by the year 2000, due to industrialization, most did not (about 2%). This shift in employment happened over a period of 200 years, but future employment shifts will happen much faster. Without enough time to adapt, much of society will feel increased pain. To be clear, I’m an optimist and believe that the long-term future is bright; my concern is the transition period that we are in now. One blog, the Weekly Sift, makes the case that if technology creates unimaginable abundance, with little need for labor, then what we really have is a social problem, not an economic problem. No one would complain about the elimination of work except for the fact that wages are the primary way that most people earn money. Even with the most optimistic outlook, future nirvana is a few years out, and social problems can take years to fix (just consider Congress today). Between now and then, that imaginary footbridge I mentioned in the introduction will continue collapsing behind us, and most people will have to run very fast to not fall into the abyss below.

Lots of people have considered these ideas, especially the people I introduced in the earlier post about past economic trends. Martin Ford published The Lights in the Tunnel in 2009. He provides a non-technical visualization of the world economy and how it would respond to a rising displacement of workers by technology. His arguments for how technology impacts the economy are clear and to the point. Ford published a recent article in the Communications of the ACM (Association for Computing Machinery). Essentially, he argues that most jobs are routine and that machines are rapidly acquiring the skills to do them, so the acquisition of new skills by people may be an inadequate defense. Ford also maintains a blog called econfuture, which covers future economics and technology. Another influential book, Race Against the Machine, written by Erik Brynjolfsson and Andrew McAfee, both from the MIT Sloan School of Management, was published in 2011. If you have not read the book, a good summary was published in The Atlantic. Essentially, this book argues that we are at the beginning of “the great restructuring.” A new book, The Second Machine Age, is due to be published in January 2014. Let me summarize the argument for why the digital revolution, especially the rise of automation, will have such a profound impact on our economy.

Technological advancement is accelerating exponentially. In 1965, Gordon E. Moore made an observation about the rate of technological advancement that is now called Moore’s Law. Originally, the law applied to integrated circuits: Moore observed that the number of transistors in a single device doubled every 18 to 24 months. To visualize the idea, and project when a computer might execute the same number of computations per second as a human brain, Mother Jones Magazine imagines filling Lake Michigan, starting with a single fluid ounce of water in 1940 and doubling the amount every 18 months. For 70 years, it seems like there is little progress, but then the lake suddenly fills in the final 15 years, by 2025. By analogy, computers began in about 1940, and compute power has doubled about every 18 months. If this trend were to continue, the computations per second of a computer would reach that of the human brain by 2025. The point is that when exponential change happens, it seems like little happens for a long time and then the major impact comes all of a sudden. Today, we have just passed the 70-year mark of this doubling of compute power, so the metaphorical lake of computational power is only a few inches deep, but rising fast. We know that exponential change does not last forever, and that Moore’s law for conventional silicon-based devices is likely coming to a close by about 2020, but new three-dimensional chips are emerging that could continue the trend. The predicted end of Moore’s law has come and gone many times before: each time, engineers found ways to break through the perceived barriers.
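
To see just how abruptly the exponential arrives, here is a small Python sketch of the Lake Michigan analogy. The lake volume and the 1940 starting point are rough approximations used only to illustrate the shape of the curve.

```python
import math

# Lake Michigan analogy: start with one fluid ounce of water in 1940 and
# double the volume every 18 months. When does the lake fill?
LAKE_VOLUME_M3 = 4.9e12        # Lake Michigan, roughly 4,900 cubic kilometers
OUNCE_M3 = 2.96e-5             # one US fluid ounce in cubic meters
DOUBLING_YEARS = 1.5           # 18 months
START_YEAR = 1940

doublings_needed = math.log2(LAKE_VOLUME_M3 / OUNCE_M3)
fill_year = START_YEAR + doublings_needed * DOUBLING_YEARS
print(f"doublings needed: {doublings_needed:.1f}")
print(f"lake is full around {fill_year:.0f}")       # roughly the mid-2020s

# How full is the lake along the way? Barely at all until the very end.
for year in range(1950, 2030, 10):
    doublings = (year - START_YEAR) / DOUBLING_YEARS
    fraction = OUNCE_M3 * 2**doublings / LAKE_VOLUME_M3
    print(f"{year}: {min(fraction, 1.0):.2e} of the lake")
```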

In addition, alternatives to silicon technology (quantum computing, for example) may well evolve by about 2020 and replace silicon. Also, on April 2, 2013, President Obama announced the BRAIN Initiative, an ambitious effort to understand how the brain works. The Defense Advanced Research Projects Agency (DARPA) is sponsoring research called Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE), and is collaborating with many companies, including IBM. The goal of this research is to build a new type of computer that works much like a mammalian brain. There are other projects too: on October 13, 2013, the Human Brain Project was kicked off in Switzerland. This is a 10-year global project that will give us a better understanding of how the brain works.

Further, other researchers have generalized the concepts of Moore’s Law and Wright’s Law to predict the rate at which other technologies will advance. MIT News recently reported on a paper that shows that these laws give good approximations for the rate of many types of technological progress. The original paper is available in an open-access journal. These laws hold for much more than transistors, and as summarized by the journal Nature, “Mathematical laws can predict industrial growth and productivity in many sectors.” For these reasons, I believe that the digital revolution will continue well beyond the limits of current silicon-based technology.

To replace jobs, machines in the future won’t need artificial intelligence as depicted in science fiction. After all, many middle-income jobs are relatively boring, so there is no need to invent a machine like Commander Data from Star Trek to replace the person doing the job. Even so, machines will likely exceed our expectations. Every time we imagine a glass ceiling on machine ability, it is quickly broken. When I first learned to program a computer, my teachers taught me that the machine was very fast, but very dumb. Nevertheless, in 1997, IBM built a computer, Deep Blue, that beat the world chess champion at the time, Garry Kasparov. In 2004, Frank Levy and Richard Murnane wrote The New Division of Labor, where they predicted jobs that computers would and would not displace. Essentially, the authors imagined that machines were limited to following simple rules, but by 2010 these limits of technology were already being proven wrong. In 2004, DARPA held a challenge to build a driverless vehicle, and the best entry navigated less than 8 miles. As of 2010, Google had an entire fleet of autonomous cars able to travel thousands of miles on U.S. roads. DARPA focused the 2013 challenge on building humanoid robots. People continue to underestimate how advanced computers will get. In 2011, IBM built a new computer, Watson, that beat the best players at the TV game show Jeopardy. This same class of machine is now being used to do medical diagnosis. Clearly, the gap between machine and human ability is narrowing.

Although some companies are still spending money to buy equipment, few are hiring a lot of workers. Not only are machines cheaper to use, but they are becoming more skilled, flexible, and exact: they are doing more and more jobs once done only by human workers. This trend of using machines instead of labor is not limited to the United States; it is happening in China as well. These machines are light years ahead of the robot I used for research. When I graduated from MIT in 1986, I interviewed for a job that was to design a machine to automatically sew clothing. It was an interesting, but incredibly complex, task to automate, and I did not think it could be done with the technology at that time. I decided not to take the job, but that was 27 years ago. Today, I see that DARPA has recently awarded a contract to develop “complete production facilities that produce garments with zero direct labor as the ultimate goal.” I have no doubt that this milestone in automation will soon be met.

Job replacement is not limited to low-paying jobs. Armies of expensive lawyers are being replaced by software. To prepare for a case, lawyers and paralegals used to read thousands of pages at high hourly rates. Today, software can analyze these same documents at a fraction of the cost. Every day, trading decisions on Wall Street are made by machines that have displaced human specialists. These workers may have found other work, but the new jobs may very well be less desirable. Lower-cost overseas doctors are displacing radiologists in the U.S. This is a precursor to software that will analyze these same images using advanced algorithms. It is not just automation that is impacting jobs; it is also the ease of using low-cost labor at the task level. Micro-tasking web sites, like Amazon’s Mechanical Turk, are emerging that allow people around the world to bid on and do work on a task-by-task basis. This is possible because of high-speed communication technology. Essentially, these people are knowledge workers, and their jobs are subject to off-shoring, automation, or both.

Are people coming around to believing that technology is impacting middle class jobs? The answer is yes. Paul Krugman, in the column Robots and Robber Barons, acknowledges that technology is replacing workers in many industries. The Atlantic recently reported on a new study by a pair of economists at the University of Chicago’s Booth School of Business, Loukas Karabarbounis and Brent Neiman. In a nutshell, the study shows that labor’s share of income is plunging due to ever improving technology. The New York Times recently published an opinion piece by David Autor and David Dorn called “The Great Divide: How Technology Wrecks the Middle Class.” In my earlier post about economic trends, I mentioned these two authors as saying that, in the past decade, demand is rising for people with the lowest and highest skills, but is falling for those in the middle. MIT Technology Review had an article in the July / August Issue  called “How Technology is Destroying Jobs.” This article says that Brynjolfsson is not ready to conclude that economic progress and employment have diverged for good, but he adds “It’s one of the dirty secrets of economics: technology progress does grow the economy and create wealth, but there is no economic law that says everyone will benefit.”

These trends are not going to change anytime soon. The Oxford Martin Programme on the Impacts of Future Technology recently released a study, featured on the London School of Economics and Political Science Blog, that concludes that “Improving technology now means that nearly 50 percent of occupations in the US are under threat of computerization.” They reviewed over 700 occupations, and offer a detailed graph showing the probability of computerization by occupational category. Every category, such as “management, business, and financial,” is shown as a color (such as light blue). All of the jobs in that category are distributed like grains of colored sand. The placement of each grain along the horizontal axis, from low (0) to high (1), represents the probability of computerization. Each category is stacked in turn, resulting in a graph that looks like multicolored sand art. Each of the millions of jobs in the US is a tiny point on the graph. The net result is that 47% of jobs are at high risk of computerization, 19% are at medium risk, and 33% are at low risk. Here’s the graph:

Probability of computerization by occupational category (Carl Frey and Michael Osborne)

The classic theory is that new occupations will replace the old ones, but that will take time. While all these smart people figure out how to solve the economic and social problems, I am going to worry about a much more mundane issue: how average people can stay employed until this restructuring is complete. This is post four of five. In the next post, I’ll outline how this story may unfold in coming years, and how people can prosper through it all.

Working Till 2025: Life is Tough These Days

If history were a walking path, we’d be standing at the foothills of a mountain. Looking back, we’d see the slowly changing landscape of human history represented by the valleys and hills we’ve traversed. Looking forward, we’d see a rapidly changing landscape of the future represented by the steep sides of the mountain we’re about to climb. The forces of history shaped the land we stand on: the valley and foothills behind by the agricultural and industrial revolutions; the mountain ahead by the coming digital revolution. In this metaphor the weather reports that we listen to each day are reports about the economy. The forces that shape the weather are not the same as the forces that shaped the earth, and the weather at the top of the mountain will be very different from the weather we experienced in the valleys below.

We recently weathered a huge storm called the Great Recession that, according to the National Bureau of Economic Research, ended over four years ago in June 2009. A mix of complex factors caused this storm, including government policy choices, high-risk lending and borrowing, and international trade imbalances. These observations are true as far as they go, but storms manifest differently depending on the terrain (valley, foothills, or mountain top), and the terrain is about to change very quickly. To make matters worse, we’ve failed to clean up the storm damage, a situation that economist Paul Krugman calls the mutilated economy. In a recent column, Krugman argues that current economic measurements “translate into millions of human tragedies — homes lost, careers destroyed, young people who can’t get their lives started.” He quotes a recent paper that argues that we have reduced our economic potential, and damaged our future economy, by tolerating high unemployment for so long. So, we’ve experienced a storm, we’ve damaged our future, and we’ll soon be hanging off the side of a mountain from an ice axe when the next storm comes.

If you’re like me, it is challenging to sort through all the economic reports. Economics is a complex topic and contradictory opinions about root causes and solutions abound. How do we sort through it all? I decided to download data from the U.S. Bureau of Labor Statistics (BLS) and do the analysis myself. The BLS issues a report on the employment situation every month, and the current edition can always be found here. I’m sharing my analysis because I want to show that any citizen can use and interpret the raw data themselves. If, like me, you’re not an economist, then I’m hopeful that this post will offer a reference for you to read and interpret the many articles about jobs and unemployment. Also, I’m providing a baseline for the future as I plan to post updates as the economic situation unfolds.

This is a long post with a lot of detail, so let me summarize the gist. The media has reported fluctuations in unemployment and job growth since the recession ended, so it feels like we are on a roller coaster. The recent jobs report was described in the New York Times as unexpectedly strong, with the pace of hiring greater than expected, yet the unemployment rate rose from 7.2 percent in September to 7.3 percent in October. At the same time, the size of the labor force dropped by 720,000, which brought the labor participation rate to a 35 year low of 62.8%. The news pushed the Dow Jones industrial average to a new high, yet at this pace it will take seven more years of monthly job growth to reach the pre-recession unemployment and participation rates. My claim is that the macro trend is clear and consistent: the percentage of the population that is at risk (AR), due to unemployment, involuntary part-time employment, or underemployment, has increased since 2007 and this trend will continue for the foreseeable future. In brief, more than one-third of the civilian non-institutional population is at risk by this definition. The pain won’t stop until policy makers, scientists, and economists figure out the nature of the new economy that is emerging in this century.

Bureau of Labor Statistics

Let me begin by walking through data from the U.S. Bureau of Labor Statistics. Here is a summary table of the economic situation as of August 2013. I realize that the data is slightly out of date, but it serves my purpose, and I’ll give an update in early 2014.

U.S. Employment Situation, August 2013

I’ll walk you through this row by row. The Bureau of Labor Statistics defines the population of interest as everyone over 16 who is not in prison or other institution. As shown in row #1, there are about 246 million such people as of August 2013. This population is split into two parts: everyone is either in the labor force (row #2) or not (row #9). Let’s first consider people in the labor force. This group is split into two parts: people are either employed (row #3) or not (row #8). The BLS calls the most common definition of unemployment “U-3”, which is the number of unemployed divided by the civilian labor force. In the news, this is simply reported as the “unemployment rate.” I give the numerator and denominator for this U-3 calculation in the rows labeled “U-3”, and show the rate as 7.3%. This unemployment rate tends to fluctuate up and down, and we need to carefully interpret it. Consider the people who are not counted. First, there are people who are working part-time, but want to work full-time. There were 7.9 million such people as shown on row #6. Second, there are people who are not in the labor force, but still want to work. There are 6.3 million such people as shown on row #11.

The BLS provides alternate measures of unemployment, and they call the broadest definition of unemployment “U-6”. This definition does not count all the people on row #11, but only those considered “marginally attached.” The BLS provides this very long, complex definition: “people who want a job, have searched for work during the prior 12 months, and were available to take a job during the reference week, but had not looked for work in the past 4 weeks.” I give the numerator and denominator for this U-6 calculation in rows labeled “U-6”. The numerator is the sum of rows #6, #8 and #13, and the rate is 13.7%. Note that the denominator is the sum of rows #2 and #13: those marginally attached are added to the total civilian labor force for this calculation.
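
As a worked example, both official rates can be reproduced from a handful of headline numbers. Since the table above is an image, the figures below (in millions) are illustrative stand-ins that roughly match the August 2013 report; the formulas are the point.

```python
# Illustrative figures, in millions, approximating August 2013 (the post's
# table is an image, so these are stand-ins; the formulas are what matter).
labor_force         = 155.5   # row #2: civilian labor force
unemployed          = 11.3    # row #8: unemployed (U-3 numerator)
part_time_for_econ  = 7.9     # row #6: part-time, want full-time work
marginally_attached = 2.3     # row #13: marginally attached to the labor force

# U-3: the headline unemployment rate.
u3 = unemployed / labor_force

# U-6: adds involuntary part-time workers and the marginally attached;
# the marginally attached are also added to the denominator.
u6 = (unemployed + part_time_for_econ + marginally_attached) / (labor_force + marginally_attached)

print(f"U-3 = {u3:.1%}")   # headline rate, about 7.3%
print(f"U-6 = {u6:.1%}")   # close to the 13.7% in the table (inputs are rounded)
```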

Total At Risk Labor

These measurements are well-defined, but they don’t fully capture the way average people intuitively feel about the economic situation. I will use this same government data to create a different measurement that may do a better job. I’m calling this the “at risk labor force”: people who are either unemployed (want a job), forced to work part-time, or underemployed. First, consider the people who are not in the labor force, but still want a job (row #11). The BLS reports this number every month, but I rarely see it in the media. This pool of people has grown from 2.0% of the civilian non-institutional population in 2007 to 2.8% in 2013. As you can see in the table, the broad definition of unemployment (U-6) only counts the 2.3 million people whom the BLS defines as “marginally attached” (row #13) even though there are 3.9 million more who also want a job (row #12). These are the people who are relatively invisible. Thus, for my definition of people at risk, I count the entire 6.3 million who are out of the labor force, but say that they want a job (row #11). Second, consider the people who are working full-time, but are struggling. In my last post I said that jobs are increasing at the low and high end of the income scale, while middle-income jobs are declining. How can we estimate the number of people who have full-time jobs, but are unable to afford necessities, unable to raise a family, unable to buy health insurance, or are otherwise working full-time, but are at risk?

To begin answering this question, one can get data from the BLS and estimate the number of workers within any given income range. Unfortunately, this is difficult to do using the Current Employment Statistics (CES), the data used to generate the monthly report on the employment situation (as discussed above). The reason is that the BLS estimates workers and wages by industry, not by occupation. So, the average wage for janitorial services (series CES6056172003) is the total payroll divided by the total employees, which includes janitors, managers, bookkeepers, and everyone else employed at the companies that make up this industry. Also, there might be people working as janitors who don’t work in this industry. Instead, it is necessary to look at the Occupational Employment Statistics (OES), which the BLS publishes annually and which provide detailed wage data by occupation. In this database, for example, I can find the occupation “Janitors and Cleaners, Except Maids and Housekeeping Cleaners” (Code 37-2011). Since the last report was in 2012, I had to extrapolate the trend out to August 2013. Even for past years, the number of workers in a specific wage range might be undercounted: in some years, data for a specific profession may not be available; every year, hourly data is sometimes not provided for occupations that don’t work year-round. Nevertheless, with careful analysis, I was able to estimate the number of workers in any wage range for any year.
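
For readers who want to try this themselves, here is a sketch of one way to make this kind of estimate: approximate the share of an occupation earning below a wage threshold by interpolating between the published hourly-wage percentiles. The field names and the sample occupation record below are illustrative, not actual OES values, and this is not necessarily the exact procedure used for the table above.

```python
# One way to estimate how many workers in an occupation earn below a wage
# threshold, given per-occupation hourly-wage percentiles of the kind the
# OES tables publish. Field names and figures here are illustrative only.

def share_below(threshold, percentiles):
    """Approximate the share of workers earning below `threshold` by
    linearly interpolating between published percentiles, e.g.
    {0.10: 9.00, 0.25: 9.90, 0.50: 11.27, 0.75: 14.10, 0.90: 18.00}."""
    points = sorted(percentiles.items(), key=lambda kv: kv[1])  # (pct, wage)
    if threshold <= points[0][1]:
        return points[0][0] * threshold / points[0][1]          # crude low-tail guess
    for (p_lo, w_lo), (p_hi, w_hi) in zip(points, points[1:]):
        if w_lo <= threshold <= w_hi:
            return p_lo + (p_hi - p_lo) * (threshold - w_lo) / (w_hi - w_lo)
    return points[-1][0]                                        # threshold above top percentile

# Hypothetical occupation record, in the spirit of an OES row.
janitors = {"tot_emp": 2_100_000,
            "pct": {0.10: 8.50, 0.25: 9.40, 0.50: 10.90, 0.75: 13.60, 0.90: 17.20}}

threshold = 10.75  # the 1968 minimum wage ($1.60) in 2013 dollars
at_risk = janitors["tot_emp"] * share_below(threshold, janitors["pct"])
print(f"estimated workers below ${threshold}/hr: {at_risk:,.0f}")
```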

As a side note, the BLS reported in August 2013 that they have an experimental program to combine the wage estimates from these two surveys. This will be a welcome upgrade, but in the meantime I joined the data myself (in the table above, all the rows come from CES data except rows #4, #5, and #7; I estimated row #7 based on OES data and computed the other two rows using this number plus CES data). I should also mention that the BLS discourages year-by-year comparisons of OES data. This is because occupations constantly change (some jobs are invented and others become obsolete). In my case, however, I am adding up people who earn within a given wage range across all occupations. Therefore, I can compare my counts year to year provided that I use constant dollars.

To estimate the number of full-time workers at risk, I counted all the workers earning below the federal 1968 minimum wage, $1.60 per hour, scaled for inflation (the BLS provides a handy calculator). This is a first approximation of at-risk workers; later I’ll discuss how to improve this calculation. I chose 1968 because that’s when the value of the minimum wage peaked; the value of today’s federal minimum wage ($7.25 per hour) is much less than the 1968 wage in today’s dollars ($10.75). The Economic Policy Institute argues that the declining value of the minimum wage is one of the forces driving inequality. Looking at the table above, I estimated the number of at risk full-time workers at about 33 million (row #7). I give the numerator and denominator for this “At Risk Labor” calculation in rows labeled “AR”. The numerator is the sum of rows #5, #8 and #11, and the rate is 36%. Note that the denominator is the sum of rows #2 and #11: those not in the labor force, but still want a job, are added to the total civilian labor force for this calculation.
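
Putting the pieces together, the same back-of-the-envelope arithmetic reproduces the inflation-adjusted wage threshold and the AR rate. As before, the labor-force figures are illustrative stand-ins for the image table above, the CPI values are approximate, and the row mapping is my reading of the table.

```python
# Back-of-the-envelope version of the "At Risk" (AR) rate. Labor-force and
# unemployment figures are illustrative stand-ins for the post's table
# (which is an image); the CPI values are approximate.

# Inflation-adjust the 1968 federal minimum wage to 2013 dollars.
CPI_1968, CPI_2013 = 34.8, 233.9          # approximate CPI-U values
threshold_wage = 1.60 * CPI_2013 / CPI_1968
print(f"1968 minimum wage in 2013 dollars: ${threshold_wage:.2f}/hr")   # about $10.75

# At-risk labor rate (all figures in millions).
labor_force        = 155.5   # row #2: civilian labor force
unemployed         = 11.3    # row #8: unemployed
part_time_for_econ = 7.9     # row #6: involuntary part-time
at_risk_full_time  = 33.0    # row #7: full-time, but below the wage threshold
want_job_not_in_lf = 6.3     # row #11: not in labor force, but want a job

# Treated here as row #5: at-risk full-time plus involuntary part-time workers.
at_risk_employed = at_risk_full_time + part_time_for_econ
ar_rate = (at_risk_employed + unemployed + want_job_not_in_lf) / (labor_force + want_job_not_in_lf)
print(f"AR rate = {ar_rate:.0%}")   # about 36%
```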

As stated earlier, more than one-third of the civilian non-institutional population is at risk. Using this approximation of at risk workers, here’s the trend for the past ten years:

U.S. At Risk Labor

My estimate of the total at risk labor force is almost surely low. The Social Security Administration reports that 61 million people who reported federal income taxes in 2011 (about 40% of wage earners) earned less than $20,000 per year. Of course, this statistic includes my son, who was under 16 and earned extra money over the summer. Nevertheless, this count is greater than my estimate of 27 million full-time at risk workers in 2011. The Economic Policy Institute reports that the median family budget area in the nation is Topeka, Kansas, where a family of four would need to earn $63,364 for an adequate, but modest, living. With both parents working full-time (2,080 hours per year each), they would each need to earn more than $15 per hour. Such a family would not be included in my count. At the same time, you might argue that not everyone earning below $10.75 an hour is at risk. Young people who work and live with their parents, for instance, might not be at risk. I agree, and I’m open to improving my method for a future post. In principle, the BLS data could be analyzed by region, age, and other demographics. Also, there are many budget calculators available, such as the Living Wage Calculator (MIT) or the Family Budget Calculator (EPI), that can be used to find the minimum wage necessary to meet basic living needs for every region of the country. Nevertheless, I’m confident of two things: 1) my estimate of full-time at risk workers is lower than the true number and 2) the number of people who are at risk is growing.
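
As a quick sanity check on the Topeka figure, here is the arithmetic behind the “more than $15 per hour” statement, along with the shortfall for a household earning the $10.75 threshold wage.

```python
# Quick check of the family-budget arithmetic cited from the EPI.
family_budget = 63_364          # modest annual budget, family of four, Topeka KS
hours_per_worker = 2_080        # full-time: 40 hours x 52 weeks
workers = 2                     # both parents working full-time

required_hourly = family_budget / (workers * hours_per_worker)
print(f"required wage: ${required_hourly:.2f}/hr per parent")   # about $15.23

# A household where both parents earn the $10.75 threshold wage falls short.
threshold_income = workers * hours_per_worker * 10.75
print(f"income at $10.75/hr: ${threshold_income:,.0f} "
      f"(shortfall ${family_budget - threshold_income:,.0f})")
```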

Let me put this another way. The percentage of the population that is struggling to meet basic needs has grown since 2007, even while the official measure of unemployment has dropped. In New York City, the homeless shelter population is at a record 50,000, and many of these people work full-time, sometimes holding multiple jobs. In fact, many of these people earn only around $5.00 per hour (less than required by law), and appeals to the Department of Labor have often gone unanswered. My own town, Bedford, MA, houses about 90 homeless families. Many of these people go to work every day, while their children go to school, and must make do with a single room equipped with only a microwave oven. We need a better way to measure the total at risk population. While it is true that jobless claims are at a six-year low, I disagree with economists, such as Paul Ashworth, who conclude that this represents an “improvement in job market conditions.”

Employment Situation Summary

To be clear, there are opportunities for some, and this will continue. A newly updated paper by Emmanuel Saez, who is a Professor of Economics and, among other things, tracks the distribution of income, says that “the top 1% captured 95% of the income gains in the first three years of the recovery.” I recently had dinner with the Chief Technical Officer for one of my clients in the investment industry. He talked at length about the difficulty of hiring and keeping talented college graduates to work on “big data” and “analytics,” two of the hot areas these days. There is fierce competition for talent from large software companies as well as start-ups. Speaking of start-ups, I recently went to a product showcase where seven emerging companies had five minutes to show their product and five minutes for Q&A. What struck me was how easily a tiny group of people could quickly amass a huge user base.  For example, one company had only four people and over 100,000 users. This is possible, in part, because many vendors can provision the digital infrastructure to run an online company in hours, with no capital investment.

Nevertheless, life is hard these days for many people in the United States. Consider these data points:

Thomas Friedman suggests that young people today will need to “invent” their job, not “find” their job. He says “they will have to reinvent, re-engineer and re-imagine that job much more often than their parents.” Friedman’s advice has clearly worked for some people, such as George Popescu, the young CEO of Boston Technologies, who has three master’s degrees (one from MIT). The problem is that not everyone will be able to become an innovator; not everyone will be able to invent their own job. We can’t stop the coming changes to our economy, but we do need to make sure that our future economy supports those with low to medium skills so that they too can make a meaningful contribution to society. All of us have family, friends, and neighbors who need to support themselves and cannot simply invent new careers.

This is post three of five. Next week, I’ll talk about the digital revolution and the accelerating pace of technological change. I’ll return to the people I introduced in my last post, who are leading the way in explaining how this story will unfold as known economic forces mix with the changing landscape of technology and automation, including robotics. In my final post of this series, I’ll talk about how these forces will come together in the next decade to create a perfect storm. Although I remain optimistic about our long-term future, I also believe getting there will be a bumpy ride.

Working Till 2025: Past Economic Trends

In this post, I’ll outline the past economic trends that provide the context for understanding our current and future situation.  In summary, the trends are:

  • A growing gap between productivity and worker compensation
  • A growing gap between the wages of the most- and least-educated workers
  • A decrease in available jobs for middle-skilled workers
  • A flattening of wages for all but the highest-skilled workers

There are many forces driving these changes, but accelerating technology is, in my opinion, a dominant force. I plan to talk about the people who have been the thought leaders in analyzing recent trends, so let’s begin with a cast of characters. Daron Acemoglu and David Autor are MIT economists who (among others) have produced many important studies, backed by statistical data, that are central to these arguments. Martin Ford is a technologist who wrote The Lights in the Tunnel (2009), which describes how the accelerating pace of technological change will disrupt the economy. Erik Brynjolfsson and Andrew McAfee, both from MIT, wrote Race Against The Machine (2011) and continue to write about the impact of technology on business. As the story unfolds, you’ll see that there are other voices as well, but these are the people at the tip of the arrow.

In this series of posts, I will give a reader’s digest version of these emerging arguments, and relate these discussions to my experience. I’ll show why I feel that the job market will remain difficult for many years, and I’ll talk about what people can do to cope between now and the day when the technologists and economists sort all of this out.

Robot-1990

In 1984, I began research that resulted in a paper published in June 1990. It described how a robot could automatically grind weld beads flat, a task important to the auto industry. At the time, people did this job, and it was dangerous and dirty. The photo at the left shows the prototype created by three graduate students and two advisers. My contribution was an early attempt to give the robot dexterity: specifically, to vary the force applied to the weld depending on the depth of cut required. Another student created the vision system, and another modeled the dynamics of grinding. An early personal computer, the IBM PC AT, did all the processing; its Intel 80286 processor is more than 60,000 times slower than a typical desktop computer today. Back in 1990, we collectively believed that the nature of our work was simple: automation increased economic productivity, which increased jobs, which increased prosperity. Little did we know that we were contributing to a major economic shift that would not be fully understood until the 21st century.
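To give a flavor of what “varying the force with the depth of cut” can mean, here is a minimal sketch of a simple proportional rule with made-up constants. It is only an illustration of the idea, not the controller we actually built in 1990, and the gains and limits are invented for the example.

    def commanded_force(bead_height_mm: float,
                        k_gain_n_per_mm: float = 40.0,   # illustrative gain
                        base_force_n: float = 10.0,      # illustrative minimum force
                        max_force_n: float = 120.0) -> float:
        """Map a measured weld-bead height to a grinding-force set point (newtons)."""
        force = base_force_n + k_gain_n_per_mm * bead_height_mm
        # Clamp to the grinder's assumed safe operating range.
        return min(max(force, base_force_n), max_force_n)

    # Example: a 2 mm bead yields a 90 N set point with these hypothetical constants.
    print(commanded_force(2.0))

In practice the real system also had to account for the vision measurements and the grinding dynamics mentioned above; this sketch shows only the force-scheduling idea.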

The Raging Debate: Are we at the beginning of an epic change in human history?

Agreement about what is going on in our economy is elusive. Many economists still believe that what we are experiencing today is just another chapter in a long series of technological disruptions, from the plow to the steam engine to electric power distribution. Consider that this 2011 report about the causes of long-term unemployment, from the Federal Reserve Bank of Richmond, does not mention technology at all. Technological disruption of labor has emerged as a much-discussed idea, but significant groups of economists dismiss it. There is a raging debate going on. Is history repeating itself, or are we at the beginning of an epic change in human history?

Productivity vs. Worker Earnings
By U.S. Department of Labor, Bureau of Labor Statistics [Public domain], via Wikimedia Commons

From the 1940s until the 1970s, economic productivity and worker compensation moved hand in hand: as productivity went up, workers benefited in proportion. In the 1970s this relationship broke down, and the gap between productivity and compensation has grown ever since. The figure Productivity vs. Worker Earnings illustrates the concept. Essentially, our economy generates more wealth each year as productivity increases, but less of that wealth is distributed to workers in the form of wages. Instead, more of it goes to the people who own the companies: the stockholders. We all benefit from lower costs for goods, but many struggle with stagnant wages.

This much is not in dispute, but the reasons for the change have long been debated. Some economists, such as Mark Perry of the American Enterprise Institute, attribute the gap to market forces. He expects that it will narrow as the economy recovers, saying “It’s just taking a while.” I disagree with people in this camp. The gap has grown for over 40 years, and I believe this is a fundamental change. Other economists, such as Lawrence Mishel, John Schmitt, and Heidi Shierholz of the Economic Policy Institute, believe that the main reasons for the gap are political forces such as a stagnant minimum wage, de-unionization, trade liberalization, and deregulation. In other words, this camp offers an explanation of these trends that does not rely on technological change. While those points are valid, I disagree with the assertion that technological change plays no role in creating wage inequality. Among economists, I most closely align myself with people such as Daron Acemoglu and David Autor, from MIT, who argue that new technologies “directly substitute capital for labor in tasks previously performed by moderately skilled workers.” Dylan Matthews reports on this debate in the article Inequality is rising. Should we blame robots or the government? Or both?

What technological changes have emerged since 1970 that might be driving this trend? Personal computers, which appeared in the late 1970s, allowed almost everyone to create and publish written materials. Worldwide standards emerged, and people in all countries began to join the economic mainstream. The internet, which became popular in the 1990s, created new ways to collaborate and new channels to market and sell products. As personal computers and the internet matured, technology emerged that streamlined business transactions, manufacturing processes, and product distribution. All of this facilitated the use of a worldwide labor pool and drove major improvements in productivity.

Trends in Wages
Autor and Acemoglu

Autor and Acemoglu’s paper also shows a growing divergence between the most-educated and least-educated workers. The authors show that, over the last 40 years, wages for those with a high school degree have fallen while wages for those with a college or graduate degree have risen (see figure Trends in Wages). Economists call this “Skill-Biased Technological Change” (SBTC): a technological change that increases demand for high-skill labor while reducing or eliminating demand for low-skill labor. A classic example is workers displaced by robots that do routine tasks in a factory. Looking at the chart, one can see that this trend has accelerated since microprocessors were introduced in 1971 (one marker of the emerging digital era). Many economists (Autor, Katz, and Krueger; Levy and Murnane) have shown a correlation between demand for skilled labor and advances in digital technology.

Trends in Employment by Skill Level
Autor and Dorn

In 2013, Autor co-authored another paper, this time with David Dorn, that documents another economic trend: over the past decade, demand has been rising for people with the lowest and highest skills but falling for those in the middle of the skill distribution. Federico S. Mandelman, of the Federal Reserve Bank of Atlanta, gives a condensed explanation. One reason for this trend is that jobs in the middle of the spectrum (bookkeeper, bank teller, cashier, factory worker) are relatively easy to automate compared to either the low end (hairdresser, gardener, home health aide) or the high end (managers and professionals). Many low-end jobs are hard to automate because they require physical dexterity and interpersonal interaction; many high-end jobs are hard to automate because they require creativity, problem solving, and persuasion. In addition, the trend in some industries has been to offshore many mid-skill jobs.

In 2013, Mandelman and Zlate built upon the work of Autor and Dorn. Essentially, they point out that wages are growing significantly only for the most-skilled workers. At the low end of the spectrum, wages are flat even though the number of people performing these jobs is increasing. Their hypothesis is that “sizable inflows of low-skilled immigration into the United States during the last 30 years weakened the increase of wages for this skill group.”

Let me elaborate on my personal experience. In the 1980s and 1990s, I spent most of my time applying computers to factory automation. In the beginning, I worked on designing a specialized factory for a U.S. government agency that cut, packaged, and delivered photos to clients. I then did my master’s thesis in robotics, which I briefly described above. After graduation, I worked on computer systems for process industries, such as systems to control the flow and temperature of liquids. Later, I worked on computer systems for high-speed motion control, which is a component of modern robotics. These systems increased productivity, but each one also affected someone’s job in the factories that used them.

As a technologist, I’ve observed and thought about the impact of technology my whole career. Naturally, I tend to align myself with those economists whose explanations match my experience. That said, I think even the most open-minded economists underestimate the rate at which technology is accelerating; they place too much value on past experience when trying to predict the future. I disagree with those who believe that history is simply repeating itself and that they can predict our future economy by studying the past. I believe we are on the cusp of a new era.

This is post two of five. In the next post I’ll talk about the current situation in the job market.

Working Till 2025: Introduction

It has been a long time since my last post. I knew when I started this blog that my writing would ebb and flow; that’s my reality. To a large degree, I’ve dedicated most of my time lately to my job. Back in April, when I last posted, I was traveling each week to Harrisburg, PA, to work on the state’s unemployment compensation system. I won’t go into the details here, but if you’ve followed the news in PA, then you’ll know that I was immersed in a critical situation that left almost no time for personal hobbies. For me, this assignment ended on July 1, when I took a new job as the Technology Services Executive in New England. This was a huge change: new division, new role, new co-workers, and new responsibilities. As with any new job, I began a long climb up a steep learning curve, and I have yet to see the top of the hill. Make no mistake, I am lucky to be working at all in a difficult economy. Nevertheless, I am acutely aware that my work life does not resemble my father’s; he had a stable job and worked his way up the ladder at the same company for 47 years. Instead, I sometimes visualize myself running across a wood-plank footbridge that is collapsing behind me: to remain working, I must continuously stay one step ahead of rapidly changing economic conditions.

I think a lot of people have this same feeling of uncertainty about their work and are running on that footbridge right beside me. We’ve all just lived through the Great Recession and naturally wonder when the good times will begin again. My prediction is that we’ll collectively reach the other side of the bridge around 2025. Once we get there, if we’re lucky, we’ll have discovered how to make prosperity available to many, and we’ll understand the need for new economic models to make that happen. Between now and then, I expect uncertainty to continue as society grapples with the changing nature of work. Many people are likely to fall into the abyss as the collapsing footbridge catches up with them.

This topic is not just about workers my age; I am also thinking about the generation that is just now entering the workforce. I have three children in this category, and I think about their future constantly. So far they have been fortunate; all three worked this past summer. My oldest son, who just graduated from college with a degree in Environmental Science, has the good fortune of working full-time in his field. Nevertheless, I know that my kids are on the footbridge with the rest of us, and I hope they can keep up. There are many other young adults I have talked to who are equally talented but, unfortunately, unemployed or underemployed.

If you’re thinking that this series of posts will be about doom and gloom, that’s not the case. We can build a bright future, but we need to make that collective choice. Between now and then, I expect changes that will shake us to our foundation. No one knows what that future state will be, and no one knows exactly how long it will take to get there. Nevertheless, for the purpose of this series, I am choosing to discuss how history might unfold between now and 2025. I know that predicting the future is fraught with danger, but putting the discussion into a concrete time frame makes the topic more tangible. A few things are certain: I need to work during these years, and so do my children, and so do many of you.

This is post one of five. You can expect one post each week. Next, I’ll talk about economic trends that provide context for this series.