The Software That Shapes Workers’ Lives vs. Designing Freedom

Sometimes what I am reading becomes more significant when I start reading the words in the context of another work. Today’s example is The Software That Shapes Workers’ Lives by Miriam Posner in the March 12, 2019 issue of The New Yorker. It’s a good read but, personally, I think you need to read Posner’s longer article See No Evil first in order to best appreciate the magnitude of the impact that supply chain management software has had on our world.

A central challenge in supply-chain management is the vast distance—spatial, temporal, and informational—that separates the S.C.M. process from the real world of manufacturing and consumption. Among the distance-based problems planners worry about is the “bullwhip effect.” Suppose a store runs low on diapers. Observing this strong demand, a manager who normally needs fifty cases might put in an order for a hundred, just to be on the safe side. The diaper company, in turn, might order the production of two hundred cases, rather than a hundred, to insure that they have enough stock on hand. Just as a flick of the wrist creates waves which grow as they travel through a whip, so subtle signals sent by consumers can be amplified out of proportion as they travel through the supply chain. This inflation is dangerous for manufacturers—especially those that depend on the razor-thin inventory margins demanded by just-in-time planning—and yet it’s also hard to avoid, since the manufacturing process is so distributed in time and space, with many junctures at which forecasts might grow.

The Software That Shapes Workers’ Lives, Miriam Posner, March 12, 2019, The New Yorker.
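The amplification Posner describes can be caricatured in a few lines of Python. This is my own toy sketch, not anything from the article: each stage of the chain hedges by ordering a multiple of the demand it observes, and the flick of the wrist grows into a wave.

```python
# Toy model of the bullwhip effect: each stage in the supply chain
# over-orders by a safety factor when it sees demand rise.
def bullwhip(consumer_demand, stages=3, safety=2.0):
    """Return the order size observed at each upstream stage."""
    orders = [consumer_demand]
    for _ in range(stages):
        # Each stage hedges by ordering a multiple of what it observed.
        orders.append(orders[-1] * safety)
    return orders

# A store that normally needs 50 cases orders 100 "to be safe";
# the diaper company doubles again, and so on up the chain.
print(bullwhip(50))  # [50, 100.0, 200.0, 400.0]
```

Fifty cases of real demand becomes an order for four hundred three stages upstream, which is exactly the disproportion that just-in-time planners fear.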

While I was reading The Software That Shapes Workers’ Lives, I was also reading another Massey Lecture that proved to be a strange and unexpected counterpoint: Designing Freedom by management cyberneticist Stafford Beer. I was reading the slim book because I have a personal mission to own and read all the Massey Lectures, but it appears that his work is still being discussed on Twitter.


Despite the simple figures at the end of most of the chapters, Designing Freedom is not a simple read. I suspect one might need to have read Beer’s other works in order to appreciate this distillation of his ideas. As a first introduction, I found it too easy to get lost in Beer’s vocabulary:

Let’s get down to work, and recall where we were. A social institution is not an entity, but a dynamic system. The measure we need to discuss it is the measure of variety. Variety is the number of possible states of the system, and that number grows daily, for every institution, because of an ever-increasing range of possibilities afforded by education, by technology, by communications, by prosperity, and by the way these possibilities interact to generate yet more variety. In order to regulate a system, we have to absorb its variety. If we fail in this, the system becomes unstable. Then, at the best, we cannot control it—as happened with the bobbing ball on our elaborated tennis trainer; at the worst, there is a catastrophic collapse—as happened with the wave. So next to something new. What is it that controls variety? The answer is dead simple: variety. Variety absorbs variety, and nothing else can.

Stafford Beer, Designing Freedom, p. 9

Beer does explain this concept using the example of a person looking to buy a pair of shoes in a department store.

But not for nothing is that store called departmental. There is a shoe salesman, and a cake salesman; that is what organizational structure is for—to carve up the total system variety into subsystems of more reasonably sized variety. The customer who is not clear what commodity, if any, will meet her need, represents variety that cannot be trapped by this departmental arrangement; her variety will be left over, not absorbed, if the store is not careful—and we can see how this means that the situation is out of control. But if the store is careful, it will have an information bureau—which exists precisely to absorb this excess variety.

Let us return to the shoe purchaser; we observe that she is becoming angry. This is because she cannot get any attention. The shoe salesman is dealing with someone else, and four more people are waiting. The other shoe salesmen are similarly occupied. Temporarily, at any rate, the situation is out of control, because at this moment the store has miscalculated the number of shoe salesmen needed to absorb the variety generated by the customer. Well, maybe you remember the concept we need to describe this affair, and its name. The name is relaxation time. Variety is cropping up faster in this system than the system can absorb it, and this is bad from the customer’s point of view. If it happens all the time, it will be bad from the store’s point of view as well: the customer will desert the store, looking for somewhere with a shorter relaxation time. So the temporary instability of service in the store will become permanent, and—at that very moment—incipiently catastrophic. The trouble with our societary institutions, of course, is that the citizen has no alternative but to use them. Only variety can absorb variety. It sounds ridiculous, but the perfect, undefeatable way to run this store is to attach a salesman to each customer on arrival. Then we could forget about those departments, where the shoe salesmen are run off their feet, while the girls in lingerie are manicuring their fingernails, and absorb the customers’ variety as we go along. For, you see, not only do we need variety to absorb variety, but we need exactly the same amount of variety to do it. We were speaking just now of the law of gravity in physics: it is perhaps the dominant law of the physical universe. What we have arrived at in the departmental store is the dominant law of societary systems, the Law of Requisite Variety—named Ashby’s Law after its discoverer.

The example is ridiculous, because we cannot afford to supply requisite variety by this obvious expedient. We cannot give every departmental store customer a salesman, because we cannot afford it; but you may already have noticed that in very superior (and therefore very expensive) special-purpose stores, such as those selling automobiles or hand-made suits, this is exactly what happens.

Stafford Beer, Designing Freedom, p. 9

The above example might be ridiculous, but for me it was the moment of the most pronounced clarity in this book.
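Ashby’s Law and relaxation time can be caricatured in a few lines of Python. This is my own toy sketch, not anything from Beer’s book: customers arrive each tick, each salesman can absorb one customer per tick, and the queue either stabilizes or grows without bound.

```python
# Toy illustration of Ashby's Law in Beer's shoe store: the salesmen
# (the regulators) must supply at least as much variety as the
# customers generate, or the queue grows without bound.
def queue_over_time(arrivals_per_tick, salesmen, ticks=10):
    """Track how many customers are left waiting at the end of each tick."""
    waiting = 0
    history = []
    for _ in range(ticks):
        waiting += arrivals_per_tick       # variety cropping up
        waiting -= min(waiting, salesmen)  # variety being absorbed
        history.append(waiting)
    return history

# Three arrivals per tick, two salesmen: out of control.
print(queue_over_time(3, 2))  # [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
# Three arrivals, three salesmen: requisite variety, stable.
print(queue_over_time(3, 3))  # [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
```

With one salesman short of requisite variety, the waiting line grows by one customer every tick and never relaxes; match the variety and the queue clears at every tick.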

As Beer tackles the systems of bureaucracy, of liberty, of families, and of education with his cybernetic managerial approach, I found myself hoping that I was following along and progressively losing confidence that I was. For example, I think I understand what Beer means here, but there is not enough in the text for me to be really sure:

So: I am hoping that we may approach the final lecture of this series in the following state of mind. The human being is limited by his finite brain from assimilating all possible information, and from recognizing all possible patterns of the world. He is limited by his own finite resources from doing whatever he likes, and by the finite resources of the planet from demanding an endless growth in material prosperity, for all men. Indeed the pursuit of his own material prosperity, though possible, is not something that the affluent part of the world can any longer maintain as a good, unless it is explicitly willing to declare that it will be done at the expense of the less fortunate.

Then the concept of freedom is not meaningful for any person except within measurable variety constraints: and the extent to which we have lost freedom is due to loss of control over the variety attenuators—education, publishing—and to the centralization of power at the wrong levels of recursion. This freedom could be reclaimed, using the new scientific tools at our disposal, but only if new democratic machinery is established to replace existing bureaucracies. As long as these remain cybernetically organized so as to produce themselves, our societary institutions remain set on courses that lead to catastrophic instability.

Stafford Beer, Designing Freedom, p. 36

When I read Designing Freedom, I always felt very close to discovering something very profound about the current state of our organizations and governments. And while I feel I did gain some insight from this short book, I know there is much more work I would still need to do as a reader to make these connections meaningful to me.

But I do know this: there is some urgency to the matter of using better organizational software if we want to create a better world.

Because if you think that a university would never use software from SAP, you would be wrong.

Belonging vs. Computation

One of the benefits of reading several books at the same time is that occasionally a chapter that I have just finished from one book will somehow find resonance in another. Today I felt this, even though James Bridle’s New Dark Age and Adrienne Clarkson’s Belonging are very different books.

From the blurb to New Dark Age:

As the world around us increases in technological complexity, our understanding of it diminishes. Underlying this trend is a single idea: the belief that our existence is understandable through computation, and more data is enough to help us build a better world.

Bridle’s chapter on Computation is exceptional. There is so much I would love to share, but I will leave you with this excerpt:

To take another example from aviation, consider the experience of being in an airport. An airport is a canonical example of what geographers call ‘code/space’. Code/spaces describe the interweaving of computation with the built environment and daily experience to a very specific extent: rather than overlaying and augmenting them, computation becomes a crucial component of them, such that the environment and the experience of it actually ceases to function in the absence of code…

That which computation sets out to map and model it eventually takes over. Google sets out to index all human knowledge and becomes the source and the arbiter of that knowledge: it became what people think. Facebook set out to map the connections between people – the social graph – and became the platform for those connections, irrevocably reshaping societal relationships. Like an air control system mistaking a flock of birds for a fleet of bombers, software is unable to distinguish between the model of the world and reality – and, once conditioned, neither are we.

James Bridle, New Dark Age, pp. 37, 39.

Belonging, on the other hand, is a set of Massey Lectures from journalist and former Governor General of Canada Adrienne Clarkson, that addresses the paradoxes of citizenship. In her first chapter she tells several stories that express how our identity and our sense of belonging are deeply dependent upon each other.

If we remove our sense of belonging to each other, no matter what our material and social conditions are, survival, acquisition, and selfish triumphalism will endure at the cost of our humanity. Under extreme circumstances, each and every one of us is capable of a mentality that brings about the abandonment of children, the lack of cultivation of human relationships, and the deliberate denial of love.

Adrienne Clarkson, Belonging, p. 3.

To bring these two ideas together: Like an air control system mistaking a flock of birds for a fleet of bombers, software is unable to distinguish between the model of the world and reality, and if we let ourselves become conditioned to substitute our standing in social media with our sense of belonging in our social structures, neither will we.


Redefining Energy Security

To arrive at a more accurate definition of energy security requires the concept to be defined, not in terms of commodities like kilowatt-hours of electricity, but in terms of energy services, social practices, or basic needs. People don’t need electricity in itself. What they need is to store food, wash clothes, open and close doors, communicate with each other, move from one place to another, see in the dark, and so on. All these things can be achieved either with or without electricity, and in the first case, with more or less electricity.

Defined in this way, energy security is not just about securing the supply of electricity, but also about improving the resilience of the society, so that it becomes less dependent on a continuous supply of power. This includes the resilience of people (do they have the skills to do things without electricity?), the resilience of devices and technological systems (can they handle an intermittent power supply?), and the resilience of institutions (is it legal to operate a power grid that is not always on?). Depending on the resilience of the society, a disruption of the power supply may or may not lead to a disruption of energy services or social practices.

… To improve energy security, we need to make infrastructures less reliable.

“Keeping Some of the Lights On: Redefining Energy Security”. Kris De Decker. Low←Tech Magazine.

Deep Work

The fact that [David Heinemeier] Hansson might be working from Marbella, Spain, while your office is in Des Moines, Iowa, doesn’t matter to your company, as advances in communication and collaboration technology make the process near seamless. (This reality does matter, however, to the less-skilled local programmers living in Des Moines and in need of a steady paycheck.) This same trend holds for the growing number of fields where technology makes productive remote work possible — consulting, marketing, writing, design, and so on. Once the talent market is made universally accessible, those at the peak of the market thrive while the rest suffer.

From “Chapter One: Deep Work is Valuable.”

Newport, Cal. Deep Work: Rules for Focused Success in a Distracted World. First edition. New York: Grand Central Publishing, 2016.

Oh, we’re really making an AI

Around 2002 I attended a private party for Google — before its IPO, when it was a small company focused only on search. I struck up a conversation with Larry Page, Google’s brilliant cofounder. “Larry, I still don’t get it. There are so many search companies. Web search, for free? Where does that get you?” My unimaginative blindness is solid evidence that predicting is hard, especially about the future, but in my defense this was before Google had ramped up its ad auction scheme to generate real income, long before YouTube or any other major acquisitions. I was not the only avid user of its search site who thought it would not last long. But Page’s reply has always stuck with me: “Oh, we’re really making an A.I.”

I’ve thought a lot about that conversation over the past few years as Google has bought 13 other AI and robotics companies in addition to DeepMind. At first glance, you might think that Google is beefing up its AI portfolio to improve its search capabilities, since search constitutes 80 percent of its revenue. But I think that’s backward. Rather than use AI to make its search better, Google is using search to make its AI better. Every time you type a query, click on a search-generated link, or create a link on the web, you are training the Google AI. When you type “Easter Bunny” into the image search bar and then click on the most Easter Bunny-looking image, you are teaching the AI what an Easter Bunny looks like. Each of the 3 billion queries that Google conducts each day tutors the deep-learning AI over and over again. With another 10 years of steady improvements to its AI algorithms, plus a thousandfold more data and a hundred times more computing resources, Google will have an unrivaled AI. In a quarterly earnings conference call in the fall of 2015, Google CEO Sundar Pichai stated that AI was going to be “a core transformative way by which we are rethinking everything we are doing… We are applying it to all of our products, be it search, be it YouTube and Play etc.” My prediction: By 2026, Google’s main product will not be search but AI.

Kelly, Kevin. The Inevitable: Understanding the 12 Technological Forces That Will Shape Our Future, 2016.

Track Changes #28 – Rational Geographic

[32:31]

Aaron: Nobody understands why a gazetteer is important until they suddenly need one and then they’re, like, wait, oh what, how do we…

Paul: That’s been the miracle of the web, to me, right, it’s that you’d be like, I want to build this thing, and then you very rapidly stumble into the need for a large set of data with a lot of tasks. Like, I need historical texts or I need a list of places or whatever and it’s just amazing how often you get back to that.

And that whole part of our world is surprisingly untended. Right? And you go, oh, I can get this list of businesses but it’s from 2010 and no one has adopted it since. I’ve been actually thinking, like there isn’t really, as far as I can tell – maybe you know better than I would, but there’s an idea that I’m going to adopt this open source project, or give this to the commons, or I’m going to open this thing, but there’s no culture of adopting big data sets and taking care of them in the same way as there is for putting things on GitHub and doing releases as open source software… that I know about.

….

[34:10]

Aaron: I guess the example of people who are doing that are the New York Public Library.

Paul: They are. That’s true.

Aaron: That’s a good example of trying to deal with both just processing the data – whether it’s the Menus project or the Theatre Bills or Building Inspector…

Paul: Their Labs is very strong…

Aaron: … and then providing tools for letting people work in little atomic units but even then some of it is a question of scale, I mean, for all that the NYPL does amazing work, they’re pretty reluctant to offer those services outside of New York City.

Paul: No, of course. What’s bugging me is I think that everyone sees code as the infrastructure for creativity and doing new work online, and I think it’s also data, and we don’t really, that’s not a conversation that people really have that much.

Track Changes – Podcast #28: Rational Geographic — Map Chat with Aaron Straup Cope

The AI Revolution: The Road to Superintelligence

Secondly, you’ve probably heard the term “singularity” or “technological singularity.” This term has been used in math to describe an asymptote-like situation where normal rules no longer apply. It’s been used in physics to describe a phenomenon like an infinitely small, dense black hole or the point we were all squished into right before the Big Bang. Again, situations where the usual rules don’t apply. In 1993, Vernor Vinge wrote a famous essay in which he applied the term to the moment in the future when our technology’s intelligence exceeds our own—a moment for him when life as we know it will be forever changed and normal rules will no longer apply. Ray Kurzweil then muddled things a bit by defining the singularity as the time when the Law of Accelerating Returns has reached such an extreme pace that technological progress is happening at a seemingly-infinite pace, and after which we’ll be living in a whole new world. I found that many of today’s AI thinkers have stopped using the term, and it’s confusing anyway, so I won’t use it much here (even though we’ll be focusing on that idea throughout).

The AI Revolution: The Road to Superintelligence, Tim Urban, Wait But Why