UK Union Leader argues for 4-day work week


Advances in technology typically bring greater productivity (output per unit of input), which always raises the question: who, precisely, will reap the benefits? Will it be those with the capital (CEOs, stockholders), those doing the labor, or both?

Of course, the benefits of greater productivity aren’t just monetary; they can also be benefits of time. Whether it’s a five-hour workday or a four-day work week, the idea of workers getting the same pay and benefits for fewer hours has been slowly gaining momentum.

Until recently, though, the idea had been limited to a few eccentric companies and university-funded studies. But now Frances O’Grady, general secretary of the Trades Union Congress (TUC) in the UK, has put her considerable power and influence behind it. As reported in the Guardian, O’Grady’s speech at the TUC’s annual meeting made a powerful historical argument for a four-day work week:

“In the 19th century, unions campaigned for an eight-hour day. In the 20th century, we won the right to a two-day weekend and paid holidays.

“So, for the 21st century, let’s lift our ambition again. I believe that in this century we can win a four-day working week, with decent pay for everyone. It’s time to share the wealth from new technology, not allow those at the top to grab it for themselves.”

The historical perspective Ms. O’Grady takes is crucial. At various points in history, the idea of an eight-hour work day or a two-day weekend seemed to most people like a utopian pipe dream…until it didn’t. When the ‘impossible’ came to be seen as possible, it moved one step closer to being actual. Frances O’Grady’s speech represents just such a step.

O’Grady’s timeline of “in this century” is, to be sure, not overly ambitious. Even some of the more restrained predictions about automation and technological unemployment paint a bleak picture for workers by the year 2100.

But to see someone of O’Grady’s stature and influence openly call for a four-day work week is a real milestone in the development of AI-driven automation.

Read the Guardian article here.

“Humanities and Technology at the Crossroads: Where Do We Go From Here?” (BU Mellon Sawyer Seminars)

Over the 2017-2018 academic year, Boston University’s philosophy and communication departments ran a series of Mellon Sawyer Seminars on the intersection of philosophy and emerging computational technologies. (The author of this post was a Fellow on the project.) Topics examined included big data and the philosophy of science, the ethics of algorithms, and ‘human plasticity’ in relation to human-machine interfaces.

For all the details, go to

Tracking AI’s Impact on Jobs…with the help of AI

A panel working with the National Academies of Sciences, Engineering, and Medicine has published a much-needed report on developing new tools to track two important trends:

  1. the rate at which A.I. is developing
  2. how these developments are affecting U.S. employment

As the co-chairmen* of the panel put it, we are currently “flying blind” on these trends.

Without some new kind of ‘radar,’ in other words, we won’t know where we are or where we’re headed, and we won’t know how to give career and training (or retraining) advice to vulnerable U.S. workers.

To give an example of such advice: “Mr. Smith, your current job likely won’t exist in 6 years; here’s a related job that probably will still exist, and here’s how to start training for it.”  Or, “Ms. Jones, the college major you’ve chosen most commonly leads to these 3 careers, all of which have a >70% chance of being automated in 15 years.  Perhaps consider another major!”

As the panel notes, however, elements of these tracking tools already exist, in the form of the A.I. and big-data infrastructures currently in place (LinkedIn, Google, etc.). What is needed, the panel says, is public-private collaboration to combine the existing mountains of data with secure, anonymous, and unbiased ways of distributing and making sense of it.

Thus, one essential way to track and adjust to the development of A.I. is by means of A.I. — provided that oversight for the common good is also in place.  (In particular, machine learning’s focus on gleaning practical insights from petabytes of data will be key.)  If properly directed, the very technologies that threaten so many workers’ jobs may, it turns out, help put those same workers back to work.

*The panel is co-chaired by Erik Brynjolfsson of MIT, the co-author of the outstanding The Second Machine Age — a must-read on the topic of A.I. and employment.

“The Simple Economics of Machine Intelligence”

This piece in the Harvard Business Review has a four-part argument, with a cautiously optimistic conclusion:
  1. Machine intelligence is essentially about prediction.
  2. As the price of such prediction drops, demand for it will go up.  (for example, making predictions about very early-stage diseases)
  3. The ‘complement’ (in economic terms) of prediction is judgment — something done by humans.
  4. Thus, as demand for prediction rises, demand for human judgment will also rise.  For example, the demand for decisions about medical treatment for diseases that are detected at an early stage will rise.
  5. Overall, then, the role of such ‘complements’ to AI might mean that the rise of machine intelligence will be good for human employment prospects.
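The complementarity at the heart of this argument can be illustrated with a toy model (my own sketch, not from the HBR piece): assume a simple linear demand curve for predictions, and assume each prediction consumed requires a fixed amount of human judgment. As machine intelligence drives the price of prediction down, the quantity of predictions demanded rises, and demand for human judgment rises in lockstep.

```python
# Toy model of prediction and judgment as economic complements.
# The demand curve and all numbers here are invented for illustration.

def prediction_demand(price, intercept=100.0, slope=10.0):
    """Quantity of predictions demanded at a given price (linear demand)."""
    return max(0.0, intercept - slope * price)

def judgment_demand(prediction_price, judgments_per_prediction=1.0):
    """Judgment is a complement: each prediction consumed needs judgment."""
    return prediction_demand(prediction_price) * judgments_per_prediction

# As the price of prediction falls, demand for judgment rises with it:
for price in (8.0, 4.0, 1.0):
    print(f"price={price}: predictions={prediction_demand(price)}, "
          f"judgment={judgment_demand(price)}")
```

The point of the sketch is only the direction of the effect: under any downward-sloping demand for prediction, cheaper machine prediction means more predictions consumed, and therefore more human judgment demanded alongside them.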