Tragedy of the E-Commons
“The tragedy of the commons” owes its notoriety as a concept to a controversial 1968 article in the journal Science, written by the microbiologist Garrett Hardin.
Hardin adapted the idea from a little-known 19th-century pamphlet by William Forster Lloyd, who posited that a commons—say, a public pasture upon which everyone’s cattle could graze—would inevitably be destroyed because each “herdsman will try to keep as many cattle as possible on the commons.” This might work for a time, but eventually the commons will reach its resource limit.
Hardin used the word “tragedy” the way A.N. Whitehead did, not as “unhappiness” but as “the solemnity of the remorseless working of things.” That is, it’s the disastrous result of a natural and logical process of everyone trying to maximize the utility of the same thing—a process that is rational for the individual but leads to the corruption of the whole. In regard to the herdsmen, “each man is locked into a system that compels him to increase his herd without limit—in a world that is limited.”
As such, “freedom in a commons brings ruin to all.”
You might recognize a Malthusian angle in this thinking, and you’d be right. For one, Hardin explicitly invokes Malthus in the essay. But even implicitly it would be obvious, since Malthus’s essay on population is what stimulated modern biology’s seminal idea: Charles Darwin’s theory of natural selection. In a nutshell, what happens when finite resources are stretched to their breaking point? For Darwin, nature takes over and selects for survival those best adapted to a competitive, closed environment.
The eugenical overtones of Hardin’s line of thinking are evident, too, and so it shouldn’t be surprising that he was chiefly concerned with overpopulation. Eugenics owes its existence less to Darwin than to his cousin Francis Galton, and Hardin cites—instead of the eminent Darwin—Darwin’s appropriately named grandson, Charles Galton Darwin, who worried that Homo sapiens might evolve into Homo progenitivus (which is seemingly the plotline of the movie Idiocracy).
Hardin’s concern was that technological development had pushed resources to their limits, and that technology itself could not fix the problem. Rather, some kind of intervention and government control was necessary. His advocacy for the liberalization of abortion laws earned him antipathy from American conservatives, while his promotion of anti-immigration laws and eugenical ideas earned him antipathy from the American left. Some of his ideas sound like something the Chinese Communist Party would embrace.
Hardin’s truculent dismissal of the entirety of religious ethics is a particularly annoying part of the original essay. He condemned the whole history of Christian thinking on morality as irrelevant because it was only concerned with black-and-white rules and “Thou Shalt Nots”—as though there were nothing in its ethical systems other than the Ten Commandments. Aristotle? Never heard of him.
This staggering blind spot is not really that surprising given the troubling morality of the essay to begin with. Anytime a self-assured, middle-aged professor starts lecturing the world on the “management” of population, the rest of us should head for the hills. Hardin’s writing on this should be rejected as the authoritarian fantasy that it is.
But there is one aspect of the Science essay that is pertinent to our consideration of the internet and contemporary technological commons: pollution.
Hardin observed that there was a different kind of tragedy of the commons that took place when a public place was spoiled rather than harvested. “Here it is not a question of taking something out of the commons,” he wrote, “but of putting something in—sewage, or chemical, radioactive, and heat wastes into water; noxious and dangerous fumes into the air; and distracting and unpleasant advertising signs into the line of sight.”
We are all familiar with this in the organic world. But it is also directly relevant to the digital world and the global commons known as the internet.
The tragedy, though, is that this is not what the internet’s pioneers envisioned for it.
J.C.R. Licklider and the Intergalactic Network
If you’re like me, you probably haven’t heard of J.C.R. Licklider. I only learned of him recently while reading The Dream Machine, a book by M. Mitchell Waldrop that functions both as a history of computing and the internet and as a biography of Licklider.
Waldrop wrote the book this way because he was surprised to see Licklider’s name and influence continually crop up while researching hotbeds of computer development (like Xerox PARC or DARPA). The more he studied the topic, the more he thought Licklider should get center stage. Lick (as his friends called him) was more of a visionary and ideas man, which is partly why his name is not attached to any single invention yet looms large over almost all of them. He’s sometimes called “Computing’s Johnny Appleseed.”
Licklider worked for ARPA (the agency later renamed DARPA) and went on to head Project MAC, one of the first large-scale experiments in interactive, personal computing: an effort to create a functional time-sharing system at MIT.
A fun-loving and congenial sort, Licklider had a background in psychology and came to computers through that route, rather than through programming or engineering. In fact, as Waldrop chronicles it, many of the computer scientists with an interest in psychology were attempting to wrest the human mind away from the tyranny of behaviorism, a deeply mechanistic psychology which viewed humans as automatons who merely react to stimuli. Licklider and other psychologists felt that, in fact, there was a “there” there in the human mind, that (like a computer) it could process information in a real way.
But Licklider did not think humans were analogous to machines. According to Waldrop, he favored a third way between unscientific mysticism and cold, mechanical reductionism. Fundamentally, he thought, computers and human beings were very different. As Oscar Schwartz writes:
The problem, for [Licklider], was that the existing paradigm saw humans and machines as being intellectually equivalent beings. Licklider believed that, in fact, humans and machines were fundamentally different in their cognitive capacities and strengths. Humans were good at certain intellectual activities—like being creative and exercising judgment—while computers were good at others, like remembering data and processing it quickly.
Rather, Licklider argued for a partnership between man and machine. “Instead of having computers imitate human intellectual activities,” writes Schwartz, “Licklider proposed an approach in which humans and machines would collaborate, each making use of their particular advantage.”
In a crucial paper entitled “Man-Computer Symbiosis,” Licklider spelled out this vision. A computer would be less a tool than a colleague, and it could amplify and expand human creative capacity—not replace it or blunt it. Licklider viewed humanity as inherently and unfailingly creative, and by outsourcing some of the skills at which we are less than stellar (say, quickly processing data), we could amplify our creative powers. Man and machine together could be the best of both worlds.
In fact, there was even something of a rivalry between Project MAC and Marvin Minsky’s AI Lab. “As for Licklider's vision of human computer symbiosis, what was the point?” Waldrop imagines Minsky and Co. asking themselves. “Why waste your time augmenting human intelligence, when humans were virtually obsolete?”
This was not the way Licklider wanted machines to go. Later, he expanded on the symbiosis concept in another influential document, a 1963 memo addressed to the “Members and Affiliates of the Intergalactic Computer Network.”
The memo sketched an early version of what would become ARPANET, which would later evolve into the internet. Licklider saw that human-machine symbiosis could be augmented by a network of networks spanning geography—a medium through which information could be accessed by anyone.
Another computer scientist of the era, Michael Dertouzos, had been thinking along the same lines. As he related it:
Now, with networks, we were moving away from a centralized-brain metaphor to a system without centralized control—a heterarchy. So in 1976 or so I was looking for a metaphor for how the machines in such a system would interact with each other. Being Greek, I thought of the Athens flea market, where I used to spend every Sunday, and I envisioned an on-line version: a community with a very large number of people coming together in a place where they could buy, sell, and exchange information.
The internet as a kind of digital agora was coming into being.
This hyper-democratic ethos underlay much of Licklider’s conception of the internet, too. His hope was that, as Waldrop writes, “networking had the potential to become not just a technology but an electronic commons.”
Such a commons would give everyone a forum in which to participate, and “ordinary people might just create this embodiment of equality, community, and freedom on their own.”
It is very much an extension of what I was describing in my last post, my “defense of machines.”
Waldrop calls this a kind of Jeffersonian idealism (perhaps naivety—a “hopelessly unfashionable lack of cynicism”). It is an authentically American (and Greek, too) belief in the unflagging supremacy of democracy in the public square.
For a long time, I would say most of us saw the internet this way too. And Licklider has often been seen as a uniquely prescient visionary; Forbes once ran a short write-up casting him as the rare exception to the rule when it comes to predicting the future.
But lately I am, and probably a lot of us are, feeling less sanguine about this digital commons.
Another hugely important figure in early computing, Norbert Wiener, became pessimistic about the prospects of technological “progress” as he aged. He did not think technological improvement was the easy straight line we like to imagine. Wiener is notable as the father of cybernetics (a word he coined from the ancient Greek κυβερνήτης, or helmsman), but he also made public waves after Hiroshima by publishing an Atlantic article called “A Scientist Rebels,” in which he called on scientists to resist political and military intervention in and influence over their work.
As Waldrop describes him, Wiener was concerned that a “second” industrial revolution could wreak unforeseen havoc. Advances in technology in the 18th and 19th centuries led to the horrific creation of William Blake’s “dark satanic mills,” in which (in Wiener’s words) the human arm was devalued by competition with factory machines. Likewise, “the modern industrial revolution is similarly bound to devalue the human brain.”
This devaluation not only of the human brain but of sane human life is what constitutes the tragedy of the e-commons.
The Pollution of the Digital Commons
In writing about pollution, Hardin observed that our conception of private property essentially guaranteed it. “The owner of a factory on the bank of a stream,” he wrote, “often has difficulty seeing why it is not his natural right to muddy the waters flowing past his door.”
One could easily analogize this to the internet. However, the digital commons is not really a public space the way that a park is. It is privatized to its core, accessed through major telecoms and interacted with via megacorporations like Google and Facebook. There is little compunction, then, about offloading massive amounts of aesthetic, intellectual, and spiritual waste into it—especially since it doesn’t seem to take up “space” in a recognizable way (leaving aside issues of bandwidth and cloud storage).
The most obvious example right now is the avalanche of AI-generated slop that is proliferating on social media. OpenAI seems disturbingly uninterested in the noxious side-effects of this, as their primary concern is (allegedly) long-term problems with superintelligence. If that’s the real problem, then what’s a little deluge of digital waste in the meantime? The end will justify the means.
But the problem goes deeper than just spam. Advertising, more than anything else, constitutes most of the digital pollution we encounter on a daily basis. This isn’t new, of course, and Hardin commented on it: “Advertisers muddy the airwaves of radio and television and pollute the view of travelers.”
The history of advertising is a fascinating story of spiritual conflict. As William Leach chronicled in Land of Desire, pathfinding advertisers in the late 19th and early 20th century recognized that American religious habits (especially the spartan ethic we inherited from Puritanism) put a natural limit on consumption of products. To overcome this, the spiritual practices of Americans had to be overridden, such that their inherent reluctance to buy, buy, buy could be broken down. Advertising was the solution.
That advertising is something of an assault on human life is probably something we all feel in our bones. South Park rather humorously depicted it as a war, a struggle between humans and ads who—in their final form—come to life and walk among us (a rather prescient prophecy of the way AI would come to influence individually-tailored ads).
TV commercials feel like high art compared to internet advertising. I almost pine for the days of 90-second beer promos, now that we are accosted by ads that cram themselves into a five-second YouTube break, screaming their product’s name at you before you can hit skip.
So, if the e-commons that Licklider and the rest envisioned has become mired in pollution because of the logic of the market, and because of the technological advances we have made, then what can we do about it?
For the tragedy of the organic commons, Hardin took a draconian line: technology cannot be the solution to the problem it has helped create, so governments and other overseers must step in. But because he was so concerned with overpopulation, he proposed ideas that were wildly unethical at best and outright authoritarian at worst. He is not worth taking seriously.
The question of regulation is one to pause on, however. While it might seem a natural solution, it’s hard to expect anything here. A major, flawed prediction of the cyberpunk genre was its depiction of a zero-sum game between corporations and the government, in which the triumph of the former would mean the obliteration of the latter. Certainly, in the 1980s, this seemed intuitive. Books like Neuromancer and movies like Blade Runner are inconceivable without the context of the Laffer Curve and supply-side economics. But today it’s rather more apparent that our plutocratic kakistocracy is not the natural opponent of digital pollution.
Which means that we might have to grapple with technology on our own to help clean the pollution out of our lives. One important step is to help friends and loved ones learn how to identify the swarm of AI-generated spam on social media sites. This goes especially for older users of sites like Facebook. Switching to privacy-focused web browsers like Firefox or Brave is another good step, especially with an ad-blocking extension, as is using a search engine like DuckDuckGo. A trustworthy VPN also helps.
If one really wanted to push the envelope, one could install Pi-hole on a Raspberry Pi and use it as a DNS sinkhole, so that ads are blocked before they ever reach the devices on your home network. (I’m currently trying to learn how to do this, but since I’m not an expert by any stretch of the imagination, I may need to enlist the help of a programmer friend of mine.)
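For the curious, here is a minimal sketch of the idea in Python. It is not Pi-hole’s actual code, just an illustration of what a DNS sinkhole does; the blocklist entries are hypothetical, and a real sinkhole like Pi-hole answers DNS queries for every device on the network rather than wrapping a single lookup function.

```python
# Toy illustration of the DNS-sinkhole idea (not Pi-hole itself): names on a
# blocklist get a dead-end answer, while everything else resolves normally.
import socket

BLOCKLIST = {"ads.example.com", "tracker.example.net"}  # hypothetical ad/tracker domains
SINKHOLE_ADDRESS = "0.0.0.0"  # non-routable answer, so requests to blocked names go nowhere


def resolve(hostname: str) -> str:
    """Return the sinkhole address for blocklisted names; otherwise do a real lookup."""
    if hostname.lower().rstrip(".") in BLOCKLIST:
        return SINKHOLE_ADDRESS  # ad or tracker domain: black-holed
    return socket.gethostbyname(hostname)  # ordinary domain: genuine answer


if __name__ == "__main__":
    for name in ("ads.example.com", "example.com"):
        print(f"{name} -> {resolve(name)}")
```

Pi-hole does something like this at the network level: your router points every device’s DNS lookups at the Pi, and queries for known ad domains simply never leave the house.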
To be clear, none of this stems from some sort of weird, galaxy-brained conspiracy theory that we should get off the grid because Congress is using 5G networks to read our dreams.
Perfect privacy and anonymity are not possible in the present world. Instead, I think these measures are valuable because they are ways to steal back one’s time and attention from the onslaught of spam we encounter on a daily basis. It’s not about staying hidden; it’s about staying sane.
Does this deal with pollution? No. But it does help clean one’s mind a bit. And maybe, in its own way, it helps clear out some of the gunk online so that the internet can return to being a digital commons, and computers can function more like the pro-human colleagues they were intended to be by forward-thinking visionaries like J.C.R. Licklider. Maybe, then, the great hope that computers can unleash human creativity (which, it is clear, they have done and are doing in many ways) can continue to be nourished.
What we’re dealing with might be a tragedy in a lot of ways, but the story isn’t over yet.