To predict the future of work with AI, study the past
Lessons learned from 150 years of technological disruption in the labor market
Hello, everyone. I hope you had a great summer! Today begins a multi-part series I’m writing for Forked Lightning about technological disruptions of the labor market. Like everyone else these days, I’m interested in how AI is going to change work and society. I don’t know the answer of course, and neither does anyone else.
However, we can learn a lot about the future of work by looking at past episodes of technological disruption. The figure below stitches together data from the U.S. Census and the American Community Survey to calculate changes in the U.S. occupation structure since 1880.1 Farming and agriculture accounted for more than 40 percent of all jobs in the U.S. economy in 1880; today they account for less than 2 percent. “Blue collar” jobs, which include manual labor like construction as well as precision production and skilled trades, peaked at around 40 percent of all jobs between 1920 and 1960, and have since declined to about 20 percent. Professional occupations such as management, business, engineering, law, and medicine grew from less than 15 percent of all jobs a century ago to 45 percent today.
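For the data-curious, here is a rough sketch of the calculation behind the figure. It assumes a hypothetical IPUMS-style extract called census_acs.csv with a year, a person weight, an employment flag, and a hand-built occ_group crosswalk from detailed occupation codes into broad groups. Those column names are my own shorthand, and the genuinely hard part – harmonizing occupation codes across 140 years – is waved away here.

```python
# Rough sketch of the occupation-share calculation behind the figure.
# Assumes a hypothetical extract "census_acs.csv" with columns:
#   YEAR      - Census/ACS year (1880, 1900, ..., most recent)
#   PERWT     - person-level sampling weight
#   employed  - 1 if the respondent is employed, 0 otherwise
#   occ_group - hand-built crosswalk into broad groups
#               ("farm", "blue collar", "professional", ...)
import pandas as pd

df = pd.read_csv("census_acs.csv")

# Keep employed respondents only.
df = df[df["employed"] == 1]

# Weighted employment count by year and broad occupation group.
counts = (
    df.groupby(["YEAR", "occ_group"])["PERWT"]
      .sum()
      .unstack("occ_group")
)

# Convert counts to each group's share of total employment in that year.
shares = counts.div(counts.sum(axis=1), axis=0) * 100

print(shares.round(1))  # e.g., the farm share falls from 40+ toward 2
```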
As we’ll discuss over the next few weeks, these changes were fueled by new technologies like steam power, electricity, and the personal computer. How did those technologies change the labor market? How long did it take, and what were the early signs? What were people saying about them at the time?
People think Artificial Intelligence is going to be a big deal™, and they are probably right. Still, without a long-run perspective, it’s possible to get carried away. The McKinsey Global Institute thinks that 30 percent of hours worked in the US economy could be automated by generative AI and other technologies by the year 2030.2
I’ll take the under on their forecast, which is based on technical experts’ assessments of when technologies will achieve human-level performance in a range of tasks. Focusing only on future possibilities – what AI “could” do – is a mistake. Looking at the past forces you to reckon with grubby realities like the trial-and-error of failed technological applications (anyone remember the Apple Newton?) and the slow pace of adoption. Looking at the past also helps us think about the distributional impacts. Who benefits from the disruptive impacts of new technology, and who is harmed? Which skills become more important, and which become obsolete? Which new jobs emerge?
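To make the mechanics of these “could be automated” numbers concrete, here is a toy version of the kind of roll-up described in footnote 2. This is not McKinsey’s actual model: the task list beyond the two examples quoted in the footnote, the importance weights, and the expert judgments are all invented. The point is simply that a handful of yes/no calls about tasks gets aggregated into a precise-sounding share of hours for each job.

```python
# Toy illustration (not McKinsey's model) of rolling task-level expert
# forecasts up into job-level automation estimates. All numbers invented.

# Expert judgment: will technology match human performance on this task by 2030?
task_automatable_by_2030 = {
    "logical reasoning and problem solving": 1,
    "coordination with multiple agents": 0,
    "fine motor skills": 0,
    "natural language generation": 1,
}

# Importance weight of each task within a job (weights sum to 1 per job).
job_task_weights = {
    "paralegal": {
        "logical reasoning and problem solving": 0.5,
        "natural language generation": 0.4,
        "coordination with multiple agents": 0.1,
    },
    "plumber": {
        "fine motor skills": 0.7,
        "logical reasoning and problem solving": 0.2,
        "coordination with multiple agents": 0.1,
    },
}

# A job's "automatable share of hours" is the importance-weighted average
# of its tasks' automation indicators.
for job, weights in job_task_weights.items():
    share = sum(w * task_automatable_by_2030[task] for task, w in weights.items())
    print(f"{job}: {share:.0%} of hours flagged as automatable")
```

Flip a single expert judgment from 0 to 1 and a job’s number can swing by tens of percentage points, which is why I treat these figures as scenarios about what “could” happen rather than forecasts of what will.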
Each week I’ll deliver you a detailed case study of a particular occupation. I’ll explore how technological innovation has changed a single job over time, and what has happened to employment and wages. I will focus particularly on how technology changes the skills and capacities that are required to do a job well.
Having done much of the work in advance, I take three broad lessons from these occupation case studies.
To see a world in a grain of sand
The first lesson is that if you want to figure out how some new technology will affect the labor market, you need to understand it from a front-line worker’s point of view. Zoom in before you zoom out. Even if technology ends up replacing labor altogether, it doesn’t start off that way. Instead, it begins with the availability of a new tool, which could possibly help some people do some things better in some situations.
Wikipedia says that technology is “the application of conceptual knowledge to achieve practical goals, especially in a reproducible way.” While I hesitate to contradict the Internet hive mind (aka the Borg), I prefer a simpler definition. A technology is a (new) way of doing things. Nobel laureate economist Paul Romer calls them “recipes”, which is a wonderful metaphor, because it shows how technological change can lead to greater abundance.
A good recipe combines simple ingredients in a new way to create something delicious.3 Similarly, new technologies add value using familiar components. A mobile phone mostly consists of common raw materials – silicon, plastic, iron, aluminum – plus a few rare earth elements in components like the tiny magnets that make it vibrate. If you laid out these materials on a table, they wouldn’t do anything on their own. It’s the recipe that’s important.
Because they combine the same old ingredients in better ways, new technological recipes can increase productivity and, ultimately, economic growth.
Some recipes are harder to follow than others. I love a great poached egg. The recipe couldn’t be simpler – boil water, make it spin, crack an egg into the middle of it, remove when done. But it takes skill and practice to get the technique just right. Just as recipes depend on good execution, the benefits of new technologies are almost always embodied in people and the expertise they develop. The economist’s term for this is “technology-skill complementarity”. No technology directly substitutes for a human worker (at least not yet). Rather, it ends up helping greatly with some aspects of some jobs, while being irrelevant or even harmful in others. The net impact depends not only on the technology itself, but also on the capabilities of the people using it.
On the other hand, attempts to understand new technologies solely from a bird’s eye view typically fail spectacularly. In 2013, two scholars from the Future of Humanity Institute at Oxford wrote a paper arguing that 47 percent of US jobs were at high risk of being automated within “a decade or two”. They used a similar approach to the McKinsey report above, where human experts assessed which job tasks could be automated in the near future. The report was cited more than 15,000 times, including by the Council of Economic Advisers, the BBC, and John Oliver. The authors had already walked it back by 2019, and 11 years later the U.S. economy has 22 million more jobs than it did in 2013. Yet here we are, making the same mistakes again with generative AI. Apparently we have technological progress in everything except forecasting ability.
The teaching of new tricks to old dogs
The second lesson is that adoption is just as important as invention. Take the example of electricity. The great economic historian Paul David studied mass adoption of the electrical dynamo, a device that uses magnets to convert spinning mechanical motion into an electrical current that can be used as an energy source. The principle of electromagnetic energy generation was discovered in 1832 by Michael Faraday. But it took another 35 years for others to refine the design so that the electrical current was steady and reliable enough to be used for industrial applications. In 1880, nearly 50 years after Faraday’s discovery, Thomas Edison received a US patent for the incandescent light bulb.4 Two years later, he built the first commercial power plant, on Pearl Street in downtown Manhattan.
Did Edison’s power plant usher in a golden age of electricity-fueled productivity growth? Not exactly. According to David’s calculations, electric lighting was being used in only 3 percent of US residences in 1899, nearly two decades after Edison’s invention. Urban adoption was only 8 percent, and electricity supplied only about 5 percent of the energy used in factories. Key bottlenecks included the high fixed cost of capital (rebuilding factories to take advantage of a new power source is expensive) and the difficulty of retraining workers and managers in new ways of doing things.
The rate of electricity adoption increased rapidly in the early 1900s and 1910s, and by the early 1920s a little more than half of factory mechanical drive capacity had been electrified. This means that the path from invention to adoption for electricity was either 90 years (if you start from Faraday’s discovery) or 40 years (if you start with the invention of the light bulb). That’s a long time!
In an excellent paper, Diego Comin and Bart Hobijn show that long adoption lags are common. Looking across 166 countries over the last two centuries, they find that countries adopt new technologies about 45 years after they are first invented. Importantly, technology adoption has sped up over time due to globalization and rising educational attainment. Many other studies have found that educated people adopt new technologies faster and use them more intensively. Joel Mokyr has argued that technology adoption at the societal level, and ultimately economic progress, depends importantly on institutional factors like the emergence and strengthening of patent law (which lets inventors profit from their discoveries) and the creation of universities and other knowledge networks.
It’s not easy to pin down the year that generative AI was invented. Some say it was the publication of Ian Goodfellow’s paper on Generative Adversarial Networks (GANs) in 2014, although the conceptual groundwork was laid much earlier. Another milestone was the introduction in 2017 of the transformer architecture in the famous and wonderfully titled paper “Attention is All You Need”.
If this 2014-2017 period was as important as Edison’s power plant, that is great news for humanity. But it also suggests we could be riding the adoption curve into the 2050s. Even though technological breakthroughs often occur suddenly and with great fanfare, mass adoption is a long and sometimes uneven slog.
Man is a perpetually wanting animal
The third lesson is that technology only destroys jobs in aggregate when it satisfies our material wants. This condition is met less frequently than you might expect. In most cases, the increased prosperity unlocked by new technological recipes just ratchets up our expectations. We move the goalposts as we get richer.
An obvious example is health care. From 1900 to 2019, the average life expectancy of a newborn child in the U.S. increased from 49 to 79 years of age. These gains were driven by the spread of vaccination, which greatly reduced mortality from communicable diseases like smallpox and measles, as well as the development of antibiotics for treating bacterial infections, and treatments like oral rehydration therapy for diarrhea, which still kills more than a million people every year.
Technological progress has unlocked huge gains in life expectancy, allowing us to live long and prosper. Yet we didn’t respond as a society by saying “ok, we’re healthy enough, let’s stop employing people in health care.” Instead, the opposite happened. National health expenditures rose from 5 percent of GDP in 1960 to more than 17 percent today.
Health care is perceived as inefficient, partly because we’ve picked a lot of the low hanging fruit with mass vaccination, antibiotics, and other innovations. We now spend a lot of money on extending lifespan through chronic disease management, which is harder and more expensive. Yet it also keeps people employed. Health care jobs – not only doctors and nurses, but technicians, therapists, and other allied health professions – account for nearly half of the occupations that the Bureau of Labor Statistics projects to have the fastest employment growth over the next decade. As we get richer, we want to live longer and live better. Technological progress in health care will change jobs, perhaps fundamentally, but it’s not going to destroy them.
A preview of coming attractions
In each of the next few weeks I’ll dive into the history and evolution of a single occupation. Next week is farmers, then machinists. After that, I’ll turn to white collar work, covering secretaries, architects and other STEM occupations before finishing with sales and management. Stick around, it should be fun.
One last caveat, mostly for my academic colleagues: I am attempting to cover a lot of ground, and I am inevitably going to miss some citations and get some details wrong. If that happens, let me know so I can correct the record. Also let me know if you have ideas for other occupations, assuming I don’t run out of steam.
Until next time!
1. Ideally, we’d look at more detailed occupation codes, but the labor market has changed too much for that to be possible. There were no software engineers or occupational therapists in 1880, and there are no midwives, sextons, canalmen, or hucksters and peddlers today. This probably reveals something deeply nerdy about me, but I enjoy looking at the old Census occupation codes. They are a window into the distant past.
2. “Could” is doing a lot of work in that sentence! They generate the estimate by asking technical experts to forecast when technology will be as good as humans at broad tasks like “coordination with multiple agents” and “logical reasoning and problem solving”, and then weighting those forecasts by each task’s importance in each type of job.
3. There are many good examples, but one of the best is the invention of the Caesar salad by Chef Caesar Cardini at his restaurant in Tijuana, Mexico. The kitchen ran out of ingredients during a busy July 4th in 1924, so he combined Romaine lettuce and croutons with lemon juice, eggs, olive oil, parmesan cheese, pepper, Worcestershire sauce, anchovies, garlic, and Dijon mustard to create the Caesar salad. Desperate times call for desperate measures!
4. Edison began working on the light bulb in 1878. Years before that, Joseph Swan had experimented with designs for an electric lamp using carbon filaments. While Swan did not receive a British patent until the 1880s, he had deployed a practical version of the light bulb as early as 1879.