Aristotle: True leisure is not relaxation.


“The Noble Leisure Project” gives an excellent explanation of Aristotle’s concept of leisure.  Far from being mere passivity or relaxation, true leisure is an activity, and the activity in which a person finds their greatest fulfillment.  But leisure is not just any activity: it consists of the activities that are most properly human.  (Determining what these activities are is one of the goals of Aristotle’s Nicomachean Ethics.)

In general, then, the hierarchy for Aristotle goes something like this:

relaxation -> (done for the sake of) -> work -> (done for the sake of) -> leisure (done for its own sake)

How, then, should we arrange our lives and our daily schedules, so that we have time for all three of these?

“Humanities and Technology at the Crossroads: Where Do We Go From Here?” (BU Mellon Sawyer Seminars)

Over the 2017-2018 academic year, Boston University’s philosophy and communication departments will be running an exciting series of Mellon Sawyer Seminars about the intersection of philosophy and emerging computational technologies.  (The author of this post is a Fellow on the project.)  Topics to be examined include big data and philosophy of science, the ethics of algorithms, and ‘human plasticity’ in relation to human-machine interfaces.

From the website:

“Several of our Sawyer Seminars will involve both a public event and a prior small workshop run in conjunction with two PhD seminars, one for students in the Division of Emerging Media at BU (with Prof. James E. Katz) and another in the Philosophy Department (with Prof. Juliet Floyd).  Commentators are sought for specific events, as well as interested faculty and graduate students from the Boston Area, upon application.  Please contact us if you are interested in participating.”

For all the details, see the seminar website.

Tracking AI’s Impact on Jobs…with the help of AI

A panel working with the National Academies of Sciences, Engineering, and Medicine has published a much-needed report on developing new tools to track two important trends:

  1. the rate at which A.I. is developing
  2. how these developments are affecting U.S. employment

As the co-chairmen* of the panel put it, we are currently “flying blind” on these trends.

Thus, without inventing some new kind of ‘radar,’ we won’t know either our location or where we’re headed, and we won’t know how to give career and training/retraining advice to vulnerable U.S. workers.

To give an example of such advice: “Mr. Smith, your current job likely won’t exist in 6 years; here’s a related job that probably will still exist, and here’s how to start training for it.”  Or, “Ms. Jones, the college major you’ve chosen most commonly leads to these 3 careers, all of which have a >70% chance of being automated in 15 years.  Perhaps consider another major!”

As the panel notes, however, elements of these tracking tools already exist, in the form of the A.I. and Big Data infrastructures currently in place (LinkedIn, Google, etc.).   What is needed, the panel says, are public-private collaborations to combine the existing mountains of data with secure, anonymous, and unbiased ways of distributing and making sense of it.

Thus, one essential way to track and adjust to the development of A.I. is by means of A.I. — provided that oversight for the common good is also in place.  (In particular, machine learning’s focus on gleaning practical insights from petabytes of data will be key.)  If properly directed, the very technologies that threaten so many workers’ jobs may, it turns out, help put those same workers back to work.

*The panel is co-chaired by Erik Brynjolfsson of MIT, the co-author of the outstanding The Second Machine Age — a must-read on the topic of A.I. and employment.

“The Simple Economics of Machine Intelligence”

This piece in the Harvard Business Review makes a four-step argument leading to a cautiously optimistic conclusion:
  1. Machine intelligence is essentially about prediction.
  2. As the price of such prediction drops, demand for it will go up.  (For example, demand for predictions about very early-stage diseases.)
  3. The ‘complement’ (in economic terms) of prediction is judgment — something done by humans.
  4. Thus, as demand for prediction rises, demand for human judgment will also rise.  For example, the demand for decisions about medical treatment for diseases that are detected at an early stage will rise.
  5. Overall, then, the role of such ‘complements’ to AI might mean that the rise of machine intelligence will be good for human employment prospects.
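The price-and-complement logic in steps 2-4 can be sketched in a toy model.  (The functional forms and numbers below are illustrative assumptions for the sketch, not figures from the article.)

```python
# Toy model of prediction and judgment as economic complements.
# Assumptions (not from the article): linear demand for prediction,
# and a fixed ratio of human judgment needed per machine prediction.

def prediction_demand(price, a=100.0, b=10.0):
    """Quantity of machine predictions demanded at a given price (linear demand)."""
    return max(0.0, a - b * price)

def judgment_demand(prediction_qty, k=0.5):
    """Human judgment demanded as a complement: ~k units per prediction consumed."""
    return k * prediction_qty

# As machine intelligence drives the price of prediction down...
for price in [8.0, 4.0, 1.0]:
    q_pred = prediction_demand(price)
    q_judg = judgment_demand(q_pred)
    print(f"prediction price {price:4.1f} -> predictions {q_pred:5.1f}, "
          f"judgment demand {q_judg:5.1f}")
```

In this sketch, each drop in the price of prediction raises the quantity of predictions consumed, and the demand for human judgment rises in lockstep, which is the mechanism behind the article’s optimistic conclusion.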

How should we educate in the age of automation?

In the Guardian, George Monbiot lays out a compelling argument that the dominant mode of education in the West may have been well suited to an industrial age, but is ill suited to a post-industrial, increasingly automated one.

In this new age, both rote physical tasks and rote mental tasks are being taken over by increasingly capable machines.  If the larger context and purposes of education have changed, why has the dominant mode of education not changed with them?

Although not mentioned in the article, some institutions have long sought to de-regiment, de-mechanize, and genuinely humanize education: the Montessori tradition, and St. John’s College in Annapolis and Santa Fe.  Among other things, these institutions rely on meticulously designed structures that ensure students are exposed to what is important, while also giving each student plenty of room to pursue what he or she finds most compelling.  Self-driven, truly curious learning is given space to grow.

Might not these be the kinds of educational models that are most needed in an age of automation?

Training humans to be more like machines doesn’t make sense; let’s train them to be more like humans.

Read the Guardian article here: “In an age of robots, schools are teaching our children to be redundant.”