Monthly Archives: November 2013

Reflecting on the First Year of this Blog

The Four Seasons

This blog recently passed the one year mark. Two co-workers inspired me to start writing it while we were having lunch and discussing the upcoming election in 2012.  I discovered that these highly intelligent people described themselves as relatively low information voters. Why? They told me that they vote on “impressions” and that they don’t have time to do all the research to make informed opinions. Besides, even if they did have the time, who would they believe?  Who would they trust? I’d been researching the issues we were discussing and shared some of the things I had learned. Then one of them said something like “you and I don’t agree on everything, but I know you, I’ve seen you at work, and I value your opinion. Even if I disagreed with you, at least reading your research would give me a place to start because I know you.” After lunch, I sent these guys an email copy of my private blog, the place where I kept my personal notes at the time. The value to them was not only the content, but also the knowledge that the material was filtered and organized from a reliable source.

So, this blog was an experiment. In the past year I published 20 posts, and the core topics were technology, economics, scouting, and education (with a bit of politics mixed in). Here are the two top posts based on the number of times readers shared them:

My intent this past year was primarily to clean up research that I was doing anyway, and post it for public consumption. This year, I want to create a larger variety of posts. In addition to the “Commentary” category (essays), here are new categories:

  • The “Digest” category: posts that summarize work from other authors about selected topics, especially to follow up on previous essays. My first experiment with this format is “The Face of Changing Times.”
  • The “Controversy” category: posts that give two sides of a heated story. My first experiment with this format is “The 23andMe Controversy.”
  • The “Republish” category: posts that repeat content from another author. My first experiments with this format were “Krugman: The Case for Techno-optimism” and “Thanksgiving.”

I’ve changed my earlier policy of publishing only on the weekend; instead, I will publish ad hoc in the future. Essays will post when they are ready, and I’ll have content from the new categories in between. You’ll notice that I’ve updated the theme, adding a cover photo (from Scotland), search, categories, and Twitter. I’m also experimenting with pictures and videos, and I reorganized the “Online Community” and “References” pages. The criteria I established in “Building this Blog” for recommending people still hold, though I’ve found that it is not always easy to decide whether a site lives up to these standards. Nevertheless, I’ll continue to populate these pages.

If you’re looking for me to shock you, this is the wrong blog. I don’t like labels, such as liberal and conservative. For example, the Affordable Care Act is a “conservative policy,” enacted by “liberals,” and opposed by the “conservatives” who invented it in the first place. Confusing, huh? I don’t like ad hominem arguments and overgeneralizations. Let me be tongue in cheek for a moment. I’ll generally avoid writing about “evil corporations” (so said the activist checking her iPhone before ordering a latte) or “godless liberals” (so said the commentator trying to get people’s blood to boil). I will, however, write about underlying issues, and I’ll support policies to return us to a just society with a thriving middle class. That said, I’m not going to demonize part of our society in the process. Also, I’m a skeptic, so don’t expect me to endorse conspiracy theories. Here is what I will do: I will write, as best I can, in a non-partisan way. This does not mean that I won’t have opinions; it just means that I will try to let reason, compassion, and the facts guide me.

Where is this blog going? This is not a “news” or “political” blog. These elements are part of what I’ll write about, but other blogs focus on these topics, and already do a great job. I prefer to write about trends, past and future, and discuss how they influence our lives. I’ll continue writing about science and technology (robots, automation, genetics, information technology, etc.), the economy (poverty, jobs, unemployment), and education (cost, access, online options), but I’ll introduce more topics as well, such as privacy, online security, and social media.

I’m interested in what the future will look like. I want to figure out how we will emerge from the present day dark ages, and begin a new enlightenment period. My theme for this blog is still emerging, but I know that it will involve finding new alternatives to complex problems, and exploring trends that will shape the future.

The Face of Changing Times

Following up on the “Working till 2025” series, here is a digest of a few recent articles that discuss upcoming challenges.

Some of my Catholic friends have recently shared articles about the Pope, including one from Salon in which Katie McDonough reports that Pope Francis calls capitalism “a new tyranny” in a recently published apostolic exhortation, an official church document. I think the message is important, so let me highlight four separate excerpts from the section called “Some Challenges of Today’s World.”

This epochal change has been set in motion by the enormous qualitative, quantitative, rapid and cumulative advances occurring in the sciences and in technology, and by their instant application in different areas of nature and of life.

Today everything comes under the laws of competition and the survival of the fittest, where the powerful feed upon the powerless. As a consequence, masses of people find themselves excluded and marginalized: without work, without possibilities, without any means of escape.

Human beings are themselves considered consumer goods to be used and then discarded. We have created a “throw away” culture which is now spreading. It is no longer simply about exploitation and oppression, but something new. Exclusion ultimately has to do with what it means to be a part of the society in which we live; those excluded are no longer society’s underside or its fringes or its disenfranchised – they are no longer even a part of it. The excluded are not the “exploited” but the outcast, the “leftovers”.

While the earnings of a minority are growing exponentially, so too is the gap separating the majority from the prosperity enjoyed by those happy few. This imbalance is the result of ideologies which defend the absolute autonomy of the marketplace and financial speculation. Consequently, they reject the right of states, charged with vigilance for the common good, to exercise any form of control. A new tyranny is thus born…

Others are also sharing articles about impending change. Doug Sosnik writes in POLITICO Magazine and asks, “Which Side of the Barricade Are You On?” He anticipates a rising populist movement, which I highlight with this excerpt:

This all suggests that the period of turmoil and dissatisfaction that we have been experiencing for the past 10 years could well continue through the end of this decade. However, underneath this turmoil you can see the shape of an emerging populist movement that will, in time, either move the politicians to action or throw them out of office. The country is moving toward new types of leaders, those who will be problem-solvers and build institutions that are capable of making a difference in people’s lives.

Finally, Linda Tirado, who is a night cook (and now writes the blog Killer Martinis), put a face on poverty when she wrote the article, “This Is Why Poor People’s Bad Decisions Make Perfect Sense.” Here’s an excerpt:

I am not asking for sympathy. I am just trying to explain, on a human level, how it is that people make what look from the outside like awful decisions. This is what our lives are like, and here are our defense mechanisms, and here is why we think differently. It’s certainly self-defeating, but it’s safer. That’s all. I hope it helps make sense of it.

The article went viral, and was met with criticism from some, to which the author responded in the article, “Meet the Woman Who Accidentally Explained Poverty to the Nation.” She is intelligent and articulate, and I think her writing puts a human face on the situation we have in our country today.

Thanksgiving

For the dusting of frost and the early nightfall,
For the crunch of leaves and the barn owl’s call.
For the patient hills and the furious winds,
For the torrent of anger that time can rescind…
We give thanks for quiet, and rustle and rain,
We give thanks for forests, the sun and the plains.
For the kindness of strangers whose gifts come for free,
And how in just noticing that, we can be –
More open to others, both living and gone,
More grateful to everyone hither and yon.

The times we are living in can be so stark,
We stumble around in the cold and the dark.
The bills pile up, the chores need to get done,
The news of the world makes all of us numb.
Yet we cannot pretend it’s not happening now,
In Gaza, the Philippines, here in our town,
The ones who suffer feel empty and small,
Their voices are whispers behind a thick wall,
And we who are able to listen or act,
Must vow to reach out when faced with the fact –
That time does not stop for Love nor for Grief;
It rambles along like a mischievous thief.

’Til one day our “story-in-story” unfolds,
The truth is that humans are frail and bold.
We may be the one behind the thick wall,
or stuck in a marriage or ready to fall,
Yet courage can bloom when we meet eye to eye,
In sisterhood, brotherhood, someone says “why? –
are you thinking that you need to fix this alone?
Your life is a blessing, your presence is home –
to so many around you, who see who you are.
You’re known, and you’re treasured… so don’t you go far.”
Your membership here, is only a start,
Of the great world we live in, humanity’s heart.

All humans and creatures live under one sky;
We live in the beauty of this world so fine.
So though days are short and nights are so long,
And though time is fleeting like notes in a song,
The thing to remember this month of the year,
Is to sidle right up to the ones you hold dear.
Think long on the things that give you great stir,
Perfecting these thanks, then telling that sir, –
or madame or youth, all you want to convey.
Wait not for tomorrow, when subtle thoughts fade.
Tell them specifics, how it felt when you knew,
That they could be trusted; you could tell them the truth.
Tell them you love them, say you won’t shy away,
From caring or solving our fears of the day.
Speak of your love with your whimsy and wit.
Speak of it softly or tell it in bits,
Round a fireplace, cuddled up, in a child’s ear,
Out on a walk, or maybe right here.

We are grateful together this Thanksgiving time,
For the kindness of strangers, and old friends divine.
For spiced apples, and muffins and tea in a cup,
And the windy cold weather that makes us zip up.
We give thanks this morning for all that will come,
Since time stops for no one, not mother or son.
We turn to each other through thick and through thin,
Thanks be for this life, this “Great Room” that we’re in.

Wendell Berry stands before the solar panels on his farm in Henry County, KY. Photo by Guy Mendes

“Thanksgiving” was written by Rev. Megan Lynes, and she read it on Nov. 24, 2013 at First Parish in Bedford, MA as part of the 9:00 AM Thanksgiving Service & Welcoming of New Members. The service, which was a team effort with Rev. John Gibbons, included inspiration and readings from Wendell Berry. Selected sermons are posted here.

Have a happy Thanksgiving.

Krugman: The Case for Techno-optimism

Following up on this blog’s “Working till 2025: Crossing the Bridge,” here is an excerpt from a recent Paul Krugman post to his blog:

Basically, smart machines are getting much better at interacting with the natural environment in all its complexity. And that suggests that Skynet will soon kill us all [struck through in the original] a real transformative leap is somewhere over the horizon, maybe not this decade, but this generation.

But Brynjolfsson and McAfee have a new book — not yet out, but I have a manuscript — making this point with many examples and a lot of analysis.

There remain big questions about how the benefits of this technological surge, if it’s coming, will be distributed. But I think this kind of thing has to be taken into account when we try to imagine the future; I’m a great Gordon admirer, but his techniques necessarily involve extrapolating from the past, and aren’t well suited to picking up what could be a major inflection point.

Read Paul Krugman’s post for details on the Gordon reference. I’m looking forward to the new book from Brynjolfsson and McAfee.

The 23andMe Controversy

Here is the 23andMe ad:

The New York Times reports, “In a crackdown on genetic testing that is offered directly to consumers, the Food and Drug Administration has demanded that 23andMe immediately cease selling and marketing its DNA testing service until it receives clearance from the agency.”

Christine Gorman, editor in charge of health and medicine features for Scientific American, says that the FDA was right to block 23andMe, and goes on to say, “At present, getting raw data about your personal genome is worse than useless, as Nancy Shute pointed out in a Scientific American article that I edited back in 2012.”

Not everyone, however, agrees. Nick Gillespie, editor in chief of Reason.com and Reason TV, responds, “when it comes to learning about your own goddamn genes, the FDA doesn’t think you can handle the truth. That means the FDA is now officially worse than Oedipus’s parents, Dr. Zaius, and the god of Genesis combined, telling us that there are things that us mere mortals just shouldn’t be allowed to know….”

Smithsonian.com explains the FDA’s concern, and provides this example, “For instance, if the BRCA-related risk assessment for breast or ovarian cancer reports a false positive, it could lead a patient to undergo prophylactic surgery, chemoprevention, intensive screening, or other morbidity-inducing actions, while a false negative could result in a failure to recognize an actual risk that may exist.”

Shannon Brownlee, who is Bernard L. Schwartz Senior Fellow at the New America Foundation, has additional concerns about business motives (Data mining your DNA). She writes in Mother Jones, “23andMe’s long-term revenue model has little to do with selling kits and everything to do with selling customer information to drugmakers and others in need of human guinea pigs for clinical research.”

Kerry Grens, writing in The Scientist, sums up the situation: “The FDA gave 23andMe 15 days to respond with a description of how it’s addressing the agency’s concerns.”

______________________________________________________________

Note: The video embedded in this post is no longer available. Here’s an update from 23andMe Blog:

Pending an FDA decision, 23andMe no longer offers new customers access to health reports referred to in this post. Customers who received their health information prior to November 22, 2013 will still be able to see their health reports, but those who purchased after that time will only have access to ancestry information as well as access to their uninterpreted raw data. These new customers may receive health reports in the future dependent on FDA marketing authorization.

Working till 2025: Crossing the Bridge

“We are not here to curse the darkness, but to light a candle that can guide us through the darkness to a safe and sure future. For the world is changing. The old era is ending. The old ways will not do.” — John F. Kennedy

In the “introduction” to this series, I said that I “visualize myself running across a wood plank footbridge that is collapsing behind me: to remain working I must continuously stay one step ahead of rapidly changing economic conditions.” I also said that this series was not going to be all about doom and gloom, yet you may feel that I’ve laid out a pretty bleak picture of both the current and future jobs situation. Before talking about the benefits of this new digital age, let me briefly discuss where we’ve been. In the second post, “past economic trends,” I introduced the raging debate among economists about the influence technology has on the economy. I showed that the historic trend of wages tracking productivity began breaking down in 1975. Since then, productivity has continued to climb, but wages have remained flat, except for workers with the highest skill levels. Also, demand is up for workers at the two ends of the spectrum, high and low skills, but down for the large pool of people who are middle-skilled. In the third post, “life is tough these days,” I told you many things that you likely already knew: unemployment remains high, the labor force participation rate is declining, long-term unemployment is a trap, average household income is falling, and it really does not feel like the recession has been over for more than four years. Also, I tried to cut through the noise of monthly labor reports and define a macro trend for the “at risk” population (people who are suffering from unemployment, involuntary part-time employment, or underemployment). My conclusion was that one-third of the potentially working population is at risk, and the numbers are rising. In the fourth post, “the digital revolution,” I explained why I believe that technology is driving a historic shift, one that is at least as significant as the industrial revolution, and one that will happen with great speed. I ended with a study, featured on the London School of Economics and Political Science Blog, that concluded that computerization threatens about 50% of current occupations in the U.S. Maybe new occupations will emerge, but I think I’ve made a good case that many people will find the transition phase unpleasant.

I’ve dedicated this last post in the series to crossing the bridge. What do I mean by this, and why do I think it will take so long? First, I expect poor economic conditions to continue. Paul Krugman recently wrote a column, “A Permanent Slump?“, where he asked “What if depression-like conditions are on track to persist, not for another year or two, but for decades?” He calls this “secular stagnation,” a persistent state in which a depressed economy is the norm. Krugman notes that if consumer demand remains weak, unemployment will remain high. He suggests possible macroeconomic reasons, including slowing population growth and persistent trade deficits. He does not mention technology, so let’s put that to the side for the moment. The point is that the economy is not likely to recover soon. Second, zealots will continue to make it difficult to govern, and so it will be hard to put policies in place to solve these economic problems. Consider that a minority party forced a government shutdown, blocked judicial appointments, and, at the state level, has led a campaign against workers (including efforts to limit or restrict sick leave, workplace safety standards, time for meals, child labor standards, and minimum wage). Third, while these problems are happening, we will enter the digital age where about half of current U.S. occupations will be under threat of computerization. Economic issues, government challenges, and technological changes will make getting to the other side of the bridge quite challenging.

What will a prosperous twenty-first century economy look like? I don’t know the answer to that question, but, more importantly, I don’t believe that economists know either. One narrative is that macroeconomic principles will play out as they always have in the past. New occupations will emerge to replace those that have become obsolete, and we’ll return to rising productivity and full employment. An alternative narrative says that, as we continue to transform from connected regional economies to an integrated world economy, the dynamics of the economy will change. Our experiences from the past won’t guide us to accurate predictions about the future. Our economy will have become a single closed system where it may no longer be possible for the U.S. to have continuously increasing productivity, and the full employment that went along with it. Economists may need to build new models for the twenty-first century economy, and we may need to reconsider the nature of work itself.

The Next Enlightened Age

Regardless of how the economists settle this debate, this new technological age is a great opportunity. Many authors have suggested that we can, if we choose, enter into an age of abundance and sustainability. For instance, Peter H. Diamandis and Steven Kotler offer an optimistic view of the future in their book Abundance: The Future Is Better Than You Think. The technology that is coming will give us the opportunity to solve massive problems, including poverty, disease, climate change, and more. But technology alone is no silver bullet. We need to rethink our core principles and figure out how to build a sustainable economy. Exponential growth cannot continue forever; we need to imagine a steady state that provides security, opportunity and prosperity to everyone.

We need a society where the workforce is fluid, and people can easily move between jobs. Switzerland is considering a proposal to give every citizen a basic income, no strings attached. I think this idea is worth more study. Some worry about the cost, not to mention creating a disincentive to work. But perhaps those worries are overblown. Perhaps people would be more willing to take risks, and become entrepreneurs, if we guaranteed everyone a minimum income. If people had dignity and security, perhaps they could walk away from low-end jobs unless the employer offered fair pay and good working conditions. They would not have to accept such jobs just to survive. There is conservative appeal to this idea as well: it would replace many government programs, especially if it were simple and had no means testing. If people had a basic income (for security and dignity), had access to health insurance (regardless of their employment situation), and could get any level of education (without going into debt), then we would curb poverty and enable the workforce to thrive in the twenty-first century. People, if standing on a solid foundation, might be able to take part in “free, voluntary exchange to mutual benefit” (Ayn Rand) as libertarians and others want.

Maybe these predictions are overly optimistic (many authors also write about a dystopian future), but if you believe that the result will largely depend on the choices we make, then I see no reason not to hold out hope and fight for the better outcome. An essay in the Weekly Sift discusses how Nate Silver made the interesting observation that the printing press led to the Enlightenment, but only after 300 years of polarization and war. Silver says that when initially faced with information overload, we “engage with it selectively, picking out the parts we like and ignoring the remainder, making allies with those who have made the same choices and enemies of the rest.” Silver goes on to draw a parallel to the rise of the internet, observing that information overload has led to polarization in our time. As the digital revolution unfolds, I am hopeful that the outcome will be another period of enlightenment. I am also hopeful that, like the pace of technology, the pace of history will be much faster this time. There will be plenty of time to think about what the world, and the economy, will look like beyond 2025. I will, however, close this series by considering just the transition period.

Timeline to 2025

Let me show, as context, how computers will advance. You likely use a device with a microprocessor based on 22 nanometer technology; the first such microprocessor came out in 2011. Intel plans to ship 14 nanometer technology in laptops in 2014. Looking to the future, new 10 nanometer technology will emerge, and perhaps another generation (7 nanometer technology) by the end of the decade. In my last post, I described how exponential growth is slow for a long time, and then explodes. Also, earlier in this series, I mentioned that my personal computer today is 60,000 times faster than the IBM PC AT I used years ago for robotics research. Using that as a baseline, I’ll outline how computer technology might advance.

  • Current PC (22 nanometer technology): 60,000 times faster than IBM PC AT
  • 2014 (14 nanometer technology): 120,000 times faster
  • 2016 (10 nanometer technology): 240,000 times faster
  • 2018 (7 nanometer technology): 480,000 times faster
  • 2020 (tentative, alternate technology): 960,000 times faster
  • 2022 (tentative, alternate technology): 1,920,000 times faster
  • 2024 (tentative, alternate technology): 3,840,000 times faster
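
To make the arithmetic behind this list explicit, here is a minimal sketch in Python. It starts from my 60,000x baseline and doubles the factor once per process generation; the dates and the clean doubling per generation are illustrative assumptions, not measurements.

    # Illustrative sketch: project relative compute power by assuming one
    # doubling per process generation, starting from the baseline of a
    # 22 nanometer PC being roughly 60,000 times faster than an IBM PC AT.
    baseline = 60_000  # current PC, 22 nanometer technology
    generations = [
        ("2014", "14 nanometer"),
        ("2016", "10 nanometer"),
        ("2018", "7 nanometer"),
        ("2020", "alternate technology, tentative"),
        ("2022", "alternate technology, tentative"),
        ("2024", "alternate technology, tentative"),
    ]

    factor = baseline
    print(f"Current PC (22 nanometer): {factor:,} times faster than an IBM PC AT")
    for year, node in generations:
        factor *= 2  # assumed doubling per generation
        print(f"{year} ({node}): {factor:,} times faster")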

The dates are approximate, the doubling factor is inexact, and alternative technologies are still in the laboratory. Nevertheless, the macro trend is clear: digital technology is going to explode in terms of computational power. With this timeline in mind, I’ll outline the coming years.

2013 – 2016: The MIT Sloan Management Review recently published a study asserting that embracing digital technology is a strategic imperative for companies — “adopt new technologies effectively or face competitive obsolescence.” Over three-quarters of the participants saw the transformation as urgent, but many also complained about “innovation fatigue” given the number of changes they’ve recently been through. The pace of change, however, is speeding up, and the group most energized about taking on the challenge is the younger generation.

Perhaps rapid changes will create opportunities for start-ups with disruptive technologies and products. IT spending, which exceeds $3 trillion worldwide, will stay focused on cloud, mobile, social, big data and analytics. New platforms for application development continue to emerge: IBM recently announced the Watson Ecosystem, based upon the computer that won at Jeopardy, to “spur innovation and fuel a new ecosystem of entrepreneurial software app providers.” In 2012, the internet began switching to a new protocol (IPv6), setting the stage for an emerging “internet of things.” Soon, you’ll be shopping for a smart watch, and robots will do work in new places, such as farms and mining operations. Young people are creating their own digital brand, and they’re networking not just with LinkedIn, but also with new communities, such as FounderDating and MeetUp. The Boston New Technology Meetup Group has a product showcase once a month, where entrepreneurs demonstrate emerging products. Meetings are often held at hack/reduce, a non-profit that collaborates with government, corporations, and universities to “help Boston create the talent and the technologies that will shape our future in a big data-driven economy.” It gives interested people access to “a large-scale compute cluster, hands-on workshops, and a physical space in the heart of Kendall Square.”

Start-up companies will prosper for many years during the digital revolution; many will launch with close to zero capital investment. Just a few mouse clicks can provision servers in the cloud, which can then be used on a pay-as-you-go basis. All companies will enjoy a worldwide audience: just a few employees will be able to reach thousands of customers. Facebook and Twitter were the beginning, but the benefits will extend beyond high technology to anyone with a unique product or skill to offer. My niece is an artist who is able to sell her products to a worldwide audience using the web and social media. Opportunities will continue for people at the highest level of skill, especially those who combine academic, social, managerial, and leadership talents.

Overall, however, expect problems with the economy to continue. Collaboration in high technology represents the bright side of the sharing economy. A recent article in the New York Times describes an emerging trend toward sharing “activities as diverse as car-pooling, ride-sharing, opening one’s home to strangers via Web-based services like Couchsurfing or Airbnb, sharing office space and working in community gardens and food co-ops.” These services, while innovative, are a tactic people are using to support their livelihood and cope with the loss of jobs in the U.S. At the other end of the spectrum are people who survive by doing odd jobs, borrowing money, and going back to school at 60 years old. I expect the number of “at risk” people, as defined in an earlier post, to rise. As mid-skill jobs become obsolete, and these workers shift to low-skill jobs, citizens will demand an increase to the minimum wage. Some will argue against this change, saying that it will accelerate the shift to automation. The truth is that automation is coming regardless of government policies. California has a bill to raise the minimum wage by 2016; other legislation will emerge at both the state and federal level. The dark side of the story is that not everyone will benefit: inequality will continue to rise. The productivity and wealth created by our digital economy will depend less and less on labor.  The economic trends I described in earlier posts will continue.

2017 – 2020: The digital revolution will continue, but other areas will come to the forefront as well, especially genetics, nanotechnology and robotics. Silicon technology will peak during this period or soon after. Cloud computing platforms will become a commodity. You’ll be able to “print” three-dimensional objects, such as replacement parts and small toys, inexpensively. IBM predicts that by 2017, machines will mimic all five human senses with technology. Memory implants will be possible. You will be able to “feel” the texture of objects by touching the screen on your phone. Your phone will also be able to “smell” the environment, perhaps detecting that you are sick, and adjust to the context of your environment based on listening to background noise. By the way, a paper-thin device may replace your phone or tablet computer. Nearly half of internet traffic will originate from non-PC devices, such as TVs, handsets and others; the number of devices on the internet will be triple the worldwide population. Analysts predict that we will be living in the “digital universe.” The number of servers worldwide (virtual and physical) will have grown by an order of magnitude. Language barriers will be more or less gone due to machine translation.

2021 – 2025: Computational power will likely continue to increase according to Moore’s law using three-dimensional circuits or another technological innovation. We will foresee the day when computers will match the human brain in terms of computations per second, which could come as early as 2025. In the very long term, IBM researchers have a vision that “by 2060, a one petaflop computer that would fill half a football field today, will fit on your desktop.” Narrow artificial intelligence (or weak AI) will continue evolving at an increasing pace. Note that everything I’ve discussed so far depends only on specialized machine intelligence. None of these trends imply that machines will pass the Turing test (show intelligence indistinguishable from that of a human). Someday, however, this may become a possibility that cannot be discounted. Regardless, robots will become ubiquitous at work and at home.

Crossing the Bridge

For many years, the debate about the impact of technology on the economy will continue to rage. The usual suspects will dig into established positions. On one side, people such as Scott Winship of the Brookings Institution will argue that robots do not cause unemployment and that people who say such things are suffering from “Technophobia.” I disagree. I welcome the coming digital revolution and agree that technology promises to bring great benefits to society. For this to happen, however, society needs to understand and manage the changing dynamics of the economy. We can’t solve the problem by simply saying that government needs to get out of the way so that the free market can solve all ills. There is no law of physics that says rising productivity must bring rising wages and more jobs. This was true in the past, but the pattern may not hold true in the future. On the other side, people will argue that government can solve issues by updating policies for money, labor, and trade. We may, however, find such changes insufficient. We’re entering a new era where the fundamental nature of work will change. A fresh view of the economy needs to emerge. I don’t know exactly what this new economy will look like, but I do believe that the solution will need cooperation between business and government.

A bright spot is massive open online courses (MOOCs). The number of available classes will continue growing at a phenomenal rate. Advanced education will become ubiquitous, and often freely available, though debate will continue about the relative benefits of online vs. classroom education. This is the good news. The bad news is that advanced education will not guarantee success. Some jobs, once held by highly educated workers, will become vulnerable. All knowledge worker jobs will eventually come under wage pressure from offshore resources or automation. People who remain open-minded and nimble will have an advantage. Despite these changes, I would make it a priority to offer advanced education to as many people as possible.
Here are some thoughts about staying employed until the scientists and economists figure all this out.

  • Take advantage of the low-cost of entry to start a digital business.
  • Prefer jobs that are less likely to be automated in the near future. Last week I mentioned a detailed study that reviewed over 700 occupations and predicted which would be affected by automation. Examples of occupations that are relatively safe are management, science, education and health care. In brief, people-oriented jobs are harder to automate than routine jobs (whether manual labor or knowledge work).
  • If you are not the academic type, consider a trade. It will be some time before robots take jobs as plumbers or electricians.
  • Networking has always been important, but it will be even more so in the future. Because of automatic screening of applications, it will be increasingly hard to find a job without a personal connection.
  • Volunteer in your community. You may not always be employed, but you will always have something to offer. We’re going to need to help each other during these transition years.

Let me close by saying that I will follow up on this series in a few ways. First, I’ll continue to track the at-risk labor force (and improve my methods). Second, I’ll read new books as they come out, such as The Second Machine Age, due out in January. This conversation is going to heat up in the coming years, so I’m sure that my views and opinions will evolve as I learn more. Third, I’ll give an update after the 2014 MIT Sloan CIO Symposium. I am very fortunate to have recently joined the team that is planning this event. I expect to have many interesting conversations that will give me a lot to write about.

To stay employed, find a passion and stick with it.  Leadership and creativity will set you apart. Don’t be afraid to study the arts because there will be many people and machines that can do purely technical work.

Working till 2025: The Digital Revolution

Amy Smith at First Parish in Bedford

Last Sunday, the service at the First Parish in Bedford was led by a guest speaker, Amy Smith, who is an engineer, inventor, and a Senior Lecturer at MIT. She began by asking us if we had had breakfast; we had. She then asked if we had fetched water, gathered wood, built a fire, shelled corn, and done many other labor intensive tasks to have our breakfast; we had not. Making breakfast can be labor intensive. Amy explained that, in some places, women spend two or more hours a day grinding grain for their families to eat. She also said that, around the world, women and children spend 40 billion hours a year fetching water; more hours than the entire labor force in California working for a year. Making breakfast is also dangerous. What’s the number one cause of death for children under five around the world? Amy told us that it is “breathing the smoke from cooking fires,” which kills more than 2 million children each year. Her solution was to teach people to make clean burning charcoal from waste organic matter. She invented a number of devices, such as a corn sheller, that people could build for themselves, and that reduced the labor. The net result was that farmers were able to work more efficiently, earn money, and create a product that had a positive impact on their society.

I admire Amy, and I thought about this story while talking to clients this past week. The story I hear during meetings usually goes something like this. A few brilliant people have an idea for a product or service that they believe will benefit society. They decide that they can earn money, but being competitive means limiting capital investment and expenses. Usually someone in the room talks about how difficult this would have been 10-15 years ago: in those days, they would have had to invest a lot of money in powerful computers, and hire a staff of people to scale up the business. Today, that model might not work because growth would be too slow: it is hard to raise capital, and it takes a lot of time to hire people with the right skills. Besides that, buying compute power now is almost as simple as buying electric power: the company pays only for what it uses. No need to buy equipment; no need to hire unnecessary staff. By purchasing this infrastructure as a service, the company can focus on hiring people who bring specific skills to their core business. There is no magic here: everyone from the farmer to the entrepreneur benefits from labor-saving inventions. Everyone who is building a product or service wants to increase productivity with a minimum of additional labor.

In my earlier post on past economic trends, I included a graph that showed the relationship between productivity and real earnings. In light of these two stories, the only reason that earnings tracked productivity before 1975 was that there were few choices: growth typically involved purchasing buildings and machines, and hiring people. No business owner, however, wants to buy expensive equipment, or hire unnecessary staff, to achieve growth. Given a choice, they will avoid it. Last week, I discussed the current economic situation, and asserted that, although the principles of economics remain constant, how we experience the economy depends on the technological level of society. The impact of technology is always the same: it reduces the labor needed to produce value. All of us enjoy labor-saving devices, but few of us like being unable to earn a living because our skills, which took years to learn, are no longer needed. Henry Blodget, who is co-founder, CEO and Editor-in-Chief of Business Insider, tells us not to worry about robots stealing jobs. Why should anyone think that today’s situation is something new, and different from past technological changes?

First, technology provides business owners with more choices about how to grow their business: they are not limited to investing in equipment and staff. Second, the pace of change now is unlike anything seen in the past, and the pace is accelerating. Today’s digital revolution will be at least as significant as the industrial revolution, but will impact society in a compressed time frame. Third, machines will surpass humans in ability for a significant fraction of the jobs done today by average people. Unlike the past, when technology impacted a specific industry, today’s digital revolution will impact all industries. Job replacement at this scale, and at this pace, has not happened in the past.

In the long run, I believe that we can harness the power of digital technology for a positive impact on society. Even if we are flush with jobs in the future, however, the issue remains that the transition period will be painful for many people. The point of this series is to talk about how to prosper during the transitional years.

Let me offer some background. John Maynard Keynes coined the term “technological unemployment” in the paper “Economic Possibilities for our Grandchildren,” published in 1930. The fundamental ideas go back even further. The idea that technological unemployment will lead to permanent structural unemployment is known as the Luddite fallacy. The economist Alex Tabarrok summarizes the idea of this fallacy when he states that “If the Luddite fallacy were true we would all be out of work because productivity has been increasing for two centuries.” Perhaps, but we all know that past performance does not necessarily predict the future, a message we remember when investing our money. As Paul Krugman astutely points out, the industrial revolution raised living standards, but many workers were hurt in the process, so the Luddites did have a valid concern.

Let’s elaborate on the pace of change: in 1800 most Americans (90%) worked in agriculture, but by the year 2000, due to industrialization, only a small fraction (2%) did. This shift in employment happened over a period of 200 years, but future employment shifts will happen much faster. Without enough time to adapt, much of society will feel increased pain. To be clear, I’m an optimist and believe that the long-term future is bright; my concern is the transition period that we are in now. One blog, the Weekly Sift, makes the case that if technology creates unimaginable abundance, with little need for labor, then what we really have is a social problem, not an economic problem. No one would complain about the elimination of work except for the fact that wages are the primary way that most people earn money. Even with the most optimistic outlook, future nirvana is a few years out, and social problems can take years to fix (just consider Congress today). Between now and then, that imaginary footbridge I mentioned in the introduction will continue collapsing behind us, and most people will have to run very fast to not fall into the abyss below.

Lots of people have considered these ideas, especially the people I introduced in the earlier post about past economic trends. Martin Ford published The Lights in the Tunnel in 2009. He provides a non-technical visualization of the world economy and how it would respond to a rising displacement of workers by technology. His arguments for how technology impacts the economy are clear and to the point. Ford published a recent article in the Communications of the ACM (Association for Computing Machinery). Essentially, he argues that most jobs are routine and that machines are rapidly acquiring the skills to do them, so acquisition of new skills by people may be an inadequate defense. Ford also maintains a blog called econfuture, which covers future economics and technology. Another influential book, Race Against the Machine, written by Erik Brynjolfsson and Andrew McAfee, both from the MIT Sloan School of Management, was published in 2011. If you have not read the book, a good summary was published in The Atlantic. Essentially, this book argues that we are at the beginning of “the great restructuring.” A new book, The Second Machine Age, is due to be published in January 2014. Let me summarize the argument for why the digital revolution, especially the rise of automation, will have such a profound impact on our economy.

Technological advancement is accelerating exponentially. In 1965, Gordon E. Moore described a pattern for predicting the rate of technological advancement, now known as Moore’s Law. Originally, the law applied to integrated circuits: Moore observed that the number of transistors in a single device doubled every 18 to 24 months. To visualize the idea, and project when a computer might execute the same number of computations per second as a human brain, Mother Jones Magazine imagines filling Lake Michigan, starting with a single fluid ounce of water in 1940 and doubling the amount every 18 months. For 70 years, it seems like there is little progress, but then the lake suddenly fills by 2025, in the last 15 years. By analogy, computers began in about 1940, and compute power has doubled about every 18 months. If this trend were to continue, the computations per second of a computer would reach that of the human brain by 2025. The point is that when exponential change happens, it seems like little happens for a long time and then the major impact comes all of a sudden. Today, we have just passed the 70-year mark of the doubling of compute power, so the metaphorical lake of computational power is only a few inches deep, but rising fast. We know that exponential change does not last forever, and that Moore’s law for conventional silicon-based devices is likely coming to a close by about 2020, but new three-dimensional chips are emerging that could continue the trend. The predicted end of Moore’s law has come and gone many times before: each time, engineers found ways to break through the perceived barriers.
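
For readers who want to check the analogy, here is a rough sketch of the arithmetic in Python. It assumes one fluid ounce in 1940, a doubling every 18 months, and an approximate figure for the volume of Lake Michigan (the volume and the unit conversion are my assumptions, not numbers from the Mother Jones piece).

    import math

    # Rough sketch of the Lake Michigan analogy: one fluid ounce of water
    # in 1940, doubling every 18 months. The lake volume is approximate.
    LAKE_MICHIGAN_M3 = 4.9e12      # about 4,900 cubic kilometers of water
    FL_OZ_PER_M3 = 33_814          # fluid ounces per cubic meter
    lake_fl_oz = LAKE_MICHIGAN_M3 * FL_OZ_PER_M3

    doublings_needed = math.log2(lake_fl_oz)   # roughly 57 doublings
    years_needed = doublings_needed * 1.5      # 18 months per doubling
    print(f"The lake 'fills' around {1940 + years_needed:.0f}")

    # Why exponential growth feels sudden: even late in the process the
    # lake holds only a tiny fraction of its final volume.
    for year in (1980, 2000, 2010, 2020):
        fraction = 2 ** ((year - 1940) / 1.5) / lake_fl_oz
        print(f"{year}: {fraction:.2e} of the lake filled")

With these assumptions, about 57 doublings are needed, and at 18 months each that lands in the mid-2020s, consistent with the article’s estimate.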

In addition, alternatives to silicon technology (quantum computing, for example) may well evolve by about 2020 and replace silicon. Also, on April 2, 2013, President Obama announced the BRAIN Initiative, an ambitious effort to understand how the brain works. The Defense Advanced Research Projects Agency (DARPA) is sponsoring research called Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE), and is collaborating with many companies, including IBM. The goal of this research is to build a new type of computer that works much like a mammalian brain. There are other projects too: on October 13, 2013, the Human Brain Project was kicked off in Switzerland. This is a 10-year global project that will give us a better understanding of how the brain works.

Further, other researchers have generalized the concepts of Moore’s Law and Wright’s Law to predict the rate at which other technologies will advance. MIT News recently reported on a paper showing that these laws give good approximations for the rate of many types of technological progress. The original paper is available in an open-access journal. These laws hold for much more than transistors, and as summarized by the journal Nature, “Mathematical laws can predict industrial growth and productivity in many sectors.” For these reasons, I believe that the digital revolution will continue well beyond the limits of current silicon-based technology.

To replace jobs, machines in the future won’t need artificial intelligence as depicted in science fiction. After all, many middle-income jobs are relatively boring, so there is no need to invent a machine like Commander Data from Star Trek to replace the person doing the job. Even so, machines will likely exceed our expectations. Every time we imagine a glass ceiling on machine ability, it is quickly broken. When I first learned to program a computer, my teachers taught me that the machine was very fast, but very dumb. Nevertheless, in 1997, IBM built a computer, Deep Blue, that beat the world chess champion at the time, Garry Kasparov. In 2004, Frank Levy and Richard Murnane wrote The New Division of Labor, where they predicted jobs that computers would and would not displace. Essentially, the authors imagined that machines were limited to following simple rules, but by 2010 these limits of technology were already being proven wrong. In 2004, DARPA held a challenge to build a driverless vehicle; no entry finished the course, and the best vehicle navigated less than 8 miles. As of 2010, Google had an entire fleet of autonomous cars able to travel thousands of miles on U.S. roads. DARPA focused the 2013 challenge on building humanoid robots. People continue to underestimate how advanced computers will get. In 2011, IBM built a new computer, Watson, that beat the best players at the TV game show Jeopardy. This same class of machine is now being used to do medical diagnosis. Clearly, the gap between machine and human ability is narrowing.

Although some companies are still spending money to buy equipment, few are hiring a lot of workers. Not only are machines cheaper to use, but they are becoming more skilled, flexible, and exact: they are doing more and more jobs once done only by human workers. This trend of using machines instead of labor is not limited to the United States; it is happening in China as well. These machines are light years ahead of the robot I used for research. When I graduated from MIT in 1986, I interviewed for a job that was to design a machine to automatically sew clothing. It was an interesting, but incredibly complex, task to automate, and I did not think it could be done with the technology at that time. I decided not to take the job, but that was 27 years ago. Today, I see that DARPA has recently awarded a contract to develop “complete production facilities that produce garments with zero direct labor as the ultimate goal.” I have no doubt that this milestone in automation will soon be met.

Job replacement is not limited to low-paying jobs. Armies of expensive lawyers are being replaced by software. To prepare for a case, lawyers and paralegals used to read thousands of pages at high hourly rates. Today, software can analyze these same documents at a fraction of the cost. Every day, trading decisions on Wall Street are made by machines that have displaced human specialists. These workers may have found other work, but the new jobs may very well be less desirable. Lower-cost overseas doctors are displacing radiologists in the U.S. This is a precursor to software that will analyze these same images using advanced algorithms. It is not just automation that is impacting jobs; it is also the ease of using low-cost labor at the task level. Micro-tasking web sites, like Amazon’s Mechanical Turk, are emerging that allow people around the world to bid on and do work on a task-by-task basis. This is possible because of high-speed communication technology. Essentially, these people are knowledge workers, and their jobs are subject to off-shoring, automation, or both.

Are people coming around to believing that technology is impacting middle-class jobs? The answer is yes. Paul Krugman, in the column “Robots and Robber Barons,” acknowledges that technology is replacing workers in many industries. The Atlantic recently reported on a new study by a pair of economists at the University of Chicago’s Booth School of Business, Loukas Karabarbounis and Brent Neiman. In a nutshell, the study shows that labor’s share of income is plunging due to ever-improving technology. The New York Times recently published an opinion piece by David Autor and David Dorn called “The Great Divide: How Technology Wrecks the Middle Class.” In my earlier post about economic trends, I mentioned these two authors as saying that, in the past decade, demand is rising for people with the lowest and highest skills, but is falling for those in the middle. MIT Technology Review had an article in the July/August issue called “How Technology Is Destroying Jobs.” This article says that Brynjolfsson is not ready to conclude that economic progress and employment have diverged for good, but he adds, “It’s one of the dirty secrets of economics: technology progress does grow the economy and create wealth, but there is no economic law that says everyone will benefit.”

These trends are not going to change anytime soon. The Oxford Martin Programme on the Impacts of Future Technology recently released a study, featured on the London School of Economics and Political Science Blog, that concludes that “Improving technology now means that nearly 50 percent of occupations in the US are under threat of computerization.” They reviewed over 700 occupations, and offer a detailed graph showing the probability of computerization by occupational category. Every category, such as “management, business, and financial,” is shown as a color (such as light blue). All of the jobs in that category are distributed like grains of colored sand. The placement of each grain along the horizontal axis, from low (0) to high (1), represents the probability of computerization. Each category is stacked in turn, resulting in a graph that looks like multicolored sand art. Each of the millions of jobs in the US is a tiny point on the graph. The net result is that 47% of jobs are at high risk of computerization, 19% are at medium risk, and 33% are at low risk. Here’s the graph:

Probability of computerization by occupational category (Carl Frey and Michael Osborne)
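
To make the graph’s headline numbers concrete, here is a small Python sketch of the kind of bookkeeping behind them: bucket each occupation’s probability of computerization and weight it by employment. The 0.3 and 0.7 cutoffs and the handful of sample rows are illustrative assumptions on my part, not the study’s actual dataset.

    from collections import defaultdict

    # Hypothetical rows: (occupation, probability of computerization, jobs).
    # The values are illustrative, not taken from Frey and Osborne's data.
    occupations = [
        ("telemarketers", 0.99, 250_000),
        ("accountants and auditors", 0.94, 1_200_000),
        ("heavy truck drivers", 0.79, 1_600_000),
        ("machinists", 0.65, 400_000),
        ("registered nurses", 0.01, 2_700_000),
        ("elementary school teachers", 0.01, 1_400_000),
    ]

    def risk_bucket(probability: float) -> str:
        # Assumed cutoffs: high risk above 0.7, low risk below 0.3.
        if probability > 0.7:
            return "high"
        if probability >= 0.3:
            return "medium"
        return "low"

    totals = defaultdict(int)
    for _, probability, jobs in occupations:
        totals[risk_bucket(probability)] += jobs

    all_jobs = sum(totals.values())
    for bucket in ("high", "medium", "low"):
        share = 100 * totals[bucket] / all_jobs
        print(f"{bucket} risk: {share:.0f}% of jobs")

Applied to the full set of 700-plus occupations with real employment counts, this is the kind of tally that produces the 47% / 19% / 33% split described above.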

The classic theory is that new occupations will replace the old ones, but that will take time. While all these smart people figure out how to solve the economic and social problems, I am going to worry about a much more mundane issue: how average people can stay employed until this restructuring is complete. This is post four of five. In the next post, I’ll outline how this story may unfold in the coming years, and how people can prosper through it all.