War of the Worlds Tripods loom over visitors to Consumer Electronics Show (CES) 2015, Las Vegas, Nevada. Photo by Maurizio Pesce.
[The] challenge of technology cannot be met with technology alone. It is rather a question of setting into motion a politically effective discussion…that brings the social potential constituted by technical knowledge and ability into a defined and controlled relation to our practical knowledge and will…this dialectic of potential and will takes place today without reflection, in accordance with interests for which public justification is neither demanded nor permitted.
— Jürgen Habermas, Toward a Rational Society, 1970
Fitter, Happier, More Productive!
—Radiohead, 1997
Something’s Going to Happen, Something Wonderful
Perhaps it is becoming somewhat easier (at last) to characterize the meaning of “Silicon Valley” as a comprehensive phenomenon–cultural, socio-political and economic–now that it has definitively “jumped the shark.”
By this I mean that utopias, when still flush with cultural potency, are hard to critique, but when they darken into dystopian nightmares, the job gets much easier.
In choosing this theme, that of hazarding an interpretation of the Silicon Valley utopia as having now devolved into a specific sort of dystopia, I am inspired by the Silicon Valley critic Evgeny Morozov, who repeatedly waxes splenetic about how Silicon Valley is forever claiming the right to define itself uncritically and self-referentially, with recourse only to its own utopian logic.
Across its various stampings, from the 1950s to the present, Silicon Valley has actually become increasingly reflective about its own inner “techno-topian” logic. By this I mean the sense in which the burning desire on the part of tech geeks to “change the world” became transmuted into the passion of CEOs and venture-backed corporate boards to develop products and services to “disrupt the brick and mortar economy.” And how this entrepreneurialism then found its mature expression in the Delphic pronouncements of the captains of industry about the future of the “innovation economy,” whilst sitting on unprecedented mountains of cash culled from advertising on global internet platforms, and the selling of data analytics.
Surely, if we can arrive at inter-planetary warp drive, then by definition we will have also solved the problems of scarcity and want along the way.
At the most basic level, I know this, because I actually grew up there. My parents arrived in Mountain View in 1966. Except for a couple of years in the late sixties when we moved back out near the Florida Cape (my dad was an engineer supporting Apollo missions) I lived in the Silicon Valley through high school and returned again to the area after graduate school. Incidentally, some of my earliest memories are of watching Apollo spacecraft (7,8,9) rumble past the gantry from the seashore across from our neighborhood in Indian Harbor Beach just down the sandspit from Canaveral. I thought every kid had a space rocket more or less in their backyard. I tell you this so that you will understand that my upbringing has been pretty much awash in hopeful technological futurism. Pass the Tang, please.
As for Mountain View, we arrived in time for the successive waves of defense, personal computer, and microprocessor everything. At the point at which I started working in the Valley, there was more software than silicon, and the first vaporware b2b web app solutions were hatching out of their nurseries funded by Sand Hill Road. By the early 2000s, enterprise software had up and moved wholesale to the Indian subcontinent, and wafer fabs and most high-tech manufacturing were outsourced to Asia, leaving mostly just the headquarters of OEMs, engineering design centers, and people still trying to figure out how to make real money out of the Web somehow.
But I digress. The point I want to make here in the introduction to this “Tedd Talk” is that Silicon Valley’s technological solutionism, its desire to make our problems go away by rendering them superfluous through the dawning of a new technological age, has always been a sort of utopian stand-in for a more broadly Enlightenment-oriented project of increasing “societal rationalization.”
This latter project can be seen in the line of thought that stretches from Kant’s “enlightenment as release from our self-incurred tutelage” through Max Weber’s critique of societal rationalization as “disenchantment and iron cage.” More recently, it can be seen in Jürgen Habermas’ promotion of the democratic institutionalization of forms of communicative rationality beyond just the successful penetration of purposive, instrumental forms into all aspects of modern life.
By utopian stand-in, I mean that it’s just way easier and more rewarding to marvel at the latest iPhone or tablet, and hope and believe that a civilization that can give you one of these can surely solve its other problems, than it is to worry about advanced capitalist technological society and its relation to democracy. Surely, if we can arrive at inter-planetary warp drive, then by definition we will have also solved the problems of scarcity and want along the way. In the meantime, if you’re sitting in a nice restaurant, eating soylent green and staring at your phone, and something blows up nearby like in Terry Gilliam’s movie Brazil, waiters will come and put up a screen so you don’t have to look at the carnage while the first responders come and sort it out.
Criticizing Silicon Valley “solutionism” is of course nothing new. Perhaps the most famous rant, Evgeny Morozov’s “To Save Everything, Click Here,” first came out in 2013. Primarily focused on the “internet-centrist” version of techno-topian solutionism, Morozov protests that there is “something about living in the polis with other human beings that is irreducible to formulaic expression and optimization techniques,” and rails against what he calls techno-topian and techno-scapist dreams “where deliberation and debate are silenced, technocrats and administrators are given free rein, and deeply political, life-altering issues are recast as matters of improving efficiency.” Referencing Thomas Hobbes’ “nasty, brutish, and short,” Morozov’s fundamental complaint is that the technological futurism of Silicon Valley fails because its adherents are committed to “making life longer, less nasty, but unfortunately, not less brutish. If anything, more so.”
Speaking at the World Economic Forum in Davos, George Soros applauded the EU’s anti-trust actions against Google, and said that the days of the major tech monopoly players…are numbered.
Locally, there have been all kinds of signs that the bloom is off the rose. For some time now, Google buses wending their way through SF city streets to get to 101 South sometimes get hit with rotten vegetables. It’s not lost on people that Google, rather than investing in public transportation for all, instead created a transportation network for its employees. As anyone who attempts to deal with Google on behalf of public institutions quickly learns, appeals to the common good fall on deaf ears, because it’s an article of Googler faith that Google IS the common good.
It has also been quite some time since many venture-backed Silicon Valley companies had large IPOs. Startups mostly pursue early exit strategies where they or their IP is acquired by one of the large monopoly tech companies. For those working in the valley, of course, the Jekyll and Hyde duality of techno-topianism and entrepreneurial capitalism has always been pretty evident. The hype cycle around new technology, the marketing babble about the potential of new-found efficiencies to transform human life, or about how Moore’s Law heralds the coming of a “singularity event” etc., has always barely masked a will to mostly upend historic shares of vertical markets, by owning the space on the Internet or whatever, in order to then “cash out.”
Silicon Valley: The Jumping of the Shark
At the end of October 2017, Silicon Valley tech executives appeared before the Senate Judiciary Committee to acknowledge for the first time their role in the Russian hack of the 2016 election campaign. I’d like to call this the moment when Silicon Valley jumped the shark, because this is the clear moment when Silicon Valley techno-topianism, with its naively optimistic tech solutionism, publicly crashed and burned.
The executives whined about the difficulties of managing a digital public square that mirrors all the problems and divisions of the country–even though they are actually running global advertising businesses that reward misinformation (domestic and foreign) because of its lucrative virality. After their visit to Capitol Hill, there immediately followed a steady progression of highly critical articles in the national media.
Already on October 9th 2017, The Washington Post ran a piece by eBay founder Pierre Omidyar, warning that “the monetization and manipulation of information is swiftly tearing us apart,” blasting social media platforms for undermining democracy through the creation of micro-targeted echo chambers of hyper-partisanship fueled by paid “dark posts.” Other mea culpas included former Facebook president Sean Parker warning that social media “changes our relationship to society, and God knows what it’s doing to children’s brains,” and Tristan Harris, a former high-up at Google, urging software developers to “tone down the compulsive elements of their inventions.” By January, Mark Zuckerberg was posting that Facebook was voluntarily changing the algorithm, so that users would see fewer posts from publishers and brands and media, and more from their immediate friends and family.
On October 30th, Francis Fukuyama warned about monopolistic media companies masquerading as public utilities in a piece in The American Interest. On November 4th, the Economist ran a piece entitled, “Do Social Media Threaten Democracy?” in which the magazine called upon major internet platforms to step up to their social responsibility in the wake of the Facebook-YouTube-Twitter Russian troll and bot hack. Speaking at the World Economic Forum in Davos on January 26th, billionaire philanthropist George Soros applauded the EU’s anti-trust actions against Google and said that the days of the major tech monopoly players, which harm individuals, market innovation and democracy, are numbered.
What Makes the Muskrat Guard His Musk?
But the most alarming bit of tech industry introspection actually came from Elon Musk, who, speaking before the National Governors Association in the summer of 2017, told the assembled governors flatly that “AI is a fundamental risk to the existence of human civilization.” Worried that Musk’s remarks would be simply dismissed as absurd, or as a non-sequitur, science fiction writer Ted Chiang amplified them in Buzzfeed on December 28th, 2017.
The major thrust of the piece is to raise the alarm about what Musk is calling “AI,” by unpacking it so that it’s clear that the threat comes from the alignment of the spirit of Silicon Valley entrepreneurial capitalism, big data and machine learning technology, and the immense wealth, power and reach of the large tech companies. Chiang writes that when we think about the way that tech companies operate, their monomaniacal focus, their scorched earth approach to market share, their unchecked exponential growth, and their obliviousness to the consequences of their technological innovations and disruptions, in short, if we understand the implications of marrying super-intelligent machines with no-holds-barred capitalism, then there are actually good reasons to be afraid that “AI could bring about the extinction of humanity purely as an unintended side effect.”
The basic problem is the pervasive lack of what Chiang refers to in the piece as “insight” or “metacognition.” What Musk and Chiang are calling artificial intelligence (the unity of certain technologies and advanced corporate capitalism) suffers from the defect that it cannot “take a step back and ask whether the current course of action is really a good idea.” While the human beings in charge of the corporations don’t function autonomously, and are thus presumably capable of insight, capitalism does not reward them for using it. Instead, Chiang writes, “capitalism actively erodes this capacity in people by demanding that they replace their own judgment of what good means with whatever the market decides.”
Needless to say, techno-topian dreams die hard. The following week, novelist and software engineer Jon Evans published a rebuttal in TechCrunch entitled, “Ted Chiang is a genius, but he’s wrong about Silicon Valley.” Evans protests that while Chiang is spot on when it comes to capitalism itself, especially in its more recent neoliberal manifestations, he is wrong to attribute these things to the tech industry. “Uber and Peter Thiel are not Silicon Valley,” Evans writes. “Silicon Valley is mostly full of people who really want to build rocket ships and cure cancer with data analysis and who genuinely believe that making the world more open and connected could only ever be beneficial…”
Of course, Evans is not wrong about the naïve optimism of many who work in the techno-topian industrial complex. I recently attended a conference on the future of transportation held in Santa Clara, where computer engineers were breathlessly extolling the efficiencies and other virtues of autonomous vehicle platforms on smart roads with embedded sensors capable of tracking every movement of every citizen. They did this without even a whiff of concern about the possibilities opened up for authoritarian control of mass populations. Despite the concurrent activities of the Chinese government to install facial recognition cameras on a massive scale, etc., the tech geeks simply assume a liberal democratic backdrop for their techno-topian dreams. So, it’s not that difficult to understand that both things can be true at the same time—both the benevolent utopianism of the engineers, and the hyper-capitalism of their corporate masters. If you think otherwise, I have a really cheap three-bedroom ranch house to sell you in Palo Alto.
The Proxy Data Toxic Feedback Loop
In case anyone is still thinking that Musk and Chiang are overwrought, a detour through a less conceptual account of what is at issue can be of some help at this juncture. In her 2016 book “Weapons of Math Destruction,” former college math professor turned financial industry “quant” Cathy O’Neil charts the rise of what she calls “the big data economy” in which data algorithms (which she dubs WMDs) now “churn away in every conceivable industry,” employing biased models that are validated only by their profitability, and that “create their own toxic feedback loops” where real people are often collateral damage.
O’Neil begins her first chapter by contrasting WMDs with what she considers to be value-neutral statistics. Here she references the example of what gets popularly called “moneyball,” the application of statistical approaches in domains long ruled by the gut, but where the models are nonetheless fair, because “everyone has access to the statistics, and the rules of the game are constitutive.” Also, she says, baseball has statistical rigor, because the datasets are immense, and because the data itself is highly relevant to the outcomes that people are trying to predict.
The problem with the WMDs, however, is that they generally rely upon proxy data to achieve their desired results, and the effectiveness of the proxy data correlations is measured in terms of profits rather than accuracy or fairness. So, for example, Washington DC schools sought to weed out bad teachers by means of a scoring system that relied on a black box data algorithm that included way-too-small samples of student test scores and things like the teachers’ credit histories, and devalued things like the assessments by supervisors and colleagues and parents, level of engagement, specific skills, etc.
Similar things are going on in all sorts of other domains, such as advertising, banking and credit, insurance, employment, and for-profit education. Widely deployed HR software relies upon things like applicant personality testing, credit reports, and other poorly correlated proxy data as a filter to weed out applicants. Just as in the case of the teacher evaluation, the poor correlations don’t matter; even if some people are treated unfairly (false positives or negatives, so to speak) the overall goals are achieved. As O’Neil points out about the HR personality testing, the primary purpose of the test is not to find the best employee, but rather to exclude as many people as possible, as cheaply as possible. For many of the businesses running these algorithms, O’Neil says, “the money pouring in seems to prove that their models are working.”
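O’Neil’s point about validation-by-profit can be made concrete with a toy simulation (the numbers, cutoff, and cost figures below are all hypothetical, not drawn from her book): a hiring filter keyed to a proxy score that is only weakly correlated with true applicant quality slashes screening costs, which is the metric the business actually watches, while silently rejecting most of the strongest applicants, a failure no profit metric will ever surface.

```python
import random

def simulate_hiring_filter(n_applicants=10_000, seed=42):
    """Toy model of a proxy-data hiring filter. Both the 'quality' and
    'proxy' variables are hypothetical constructs for illustration."""
    rng = random.Random(seed)
    applicants = []
    for _ in range(n_applicants):
        quality = rng.gauss(0, 1)                 # true job performance (never observed)
        proxy = 0.2 * quality + rng.gauss(0, 1)   # weakly correlated proxy score
        applicants.append((quality, proxy))

    # The filter's goal, per O'Neil: exclude as many people as cheaply as possible.
    passed = [a for a in applicants if a[1] > 1.0]  # arbitrary proxy cutoff

    # The metric the business watches: interview costs avoided.
    cost_per_interview = 50
    savings = (n_applicants - len(passed)) * cost_per_interview

    # The metric nobody watches: share of top-decile applicants rejected.
    top = sorted(applicants, key=lambda a: a[0], reverse=True)[: n_applicants // 10]
    top_rejected = sum(1 for a in top if a[1] <= 1.0)
    return len(passed), savings, top_rejected / len(top)

passed, savings, miss_rate = simulate_hiring_filter()
print(f"passed: {passed}, savings: ${savings}, top-decile rejected: {miss_rate:.0%}")
```

The savings figure looks like proof the model “works,” even though the weak correlation means the filter discards the majority of the best candidates, exactly the false-negative collateral damage O’Neil describes.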
In revolutionizing the way American politicians win elections, Todd and Dann argued, Big Data techniques and practices have broken American politics by encouraging and accelerating extreme political polarization.
Putting aside the basic “model fairness” issue, O’Neil also directs our attention to what she calls the “toxic feedback loop”: the manner in which the data model itself, as a fact in the world, creates and sustains massive confirmation biases. O’Neil makes the point strongly in her chapter on the “crime prediction” software that cash-strapped police departments are using across the country. By feeding crime statistics into the programs, which create hour-to-hour predictions of where crimes are likely to occur in the policing area laid out in grid squares, departments are able to direct their resources to the likely places as an effectiveness optimization. All well and good, it seems, but it doesn’t stop there. Instead of just focusing on violent/serious crime data, police departments also decide to include data for “nuisance crimes” and “anti-social behavior” incidents. The more policing there is toward these data points, the more data points it creates, justifying still more policing.
But the connection between zero tolerance campaigns and violent crime reduction continues to be a matter of some controversy. The problem is that the data model enacts a toxic feedback loop and ends up tracking geography as an un-reflected stand-in for poverty and race. In their eagerness to continually improve the accuracy of the models, the developers move things in the direction of Philip K. Dick’s “Minority Report” where people are considered guilty of crimes before they actually commit them.
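The runaway dynamic of this feedback loop can be sketched with a minimal simulation (a deliberately crude toy, with hypothetical parameters, not a model of any actual product): two districts with identical true incident rates, where patrols are allocated in proportion to previously recorded incidents, and incidents are only recorded where a patrol happens to be.

```python
import random

def simulate_patrol_loop(days=200, patrols_per_day=10, seed=1):
    """Toy feedback loop: identical underlying incident rates in two
    districts, but recorded counts drive patrol allocation, and only
    patrolled districts generate new recorded counts."""
    rng = random.Random(seed)
    true_rate = 0.3          # same real rate of "nuisance" incidents everywhere
    recorded = [1, 1]        # seed counts so the first day splits 50/50
    for _ in range(days):
        total = recorded[0] + recorded[1]
        for _ in range(patrols_per_day):
            # Patrol goes where the data says crime "is".
            district = 0 if rng.random() < recorded[0] / total else 1
            # An incident is recorded only because a patrol was there to see it.
            if rng.random() < true_rate:
                recorded[district] += 1
    return recorded

counts = simulate_patrol_loop()
print(f"recorded incidents: {counts}, district-0 share: {counts[0] / sum(counts):.0%}")
```

Although both districts are statistically identical, early random fluctuations get amplified: whichever district happens to accumulate a few extra recorded incidents draws more patrols, which record more incidents, and the model’s “accuracy” looks better and better as the loop closes.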
In the final chapter of Weapons of Math Destruction, entitled “The Targeted Citizen,” Cathy O’Neil also takes aim at social media algorithms (relying on proxy data mining, machine learning, profiling and predictive analytics) suggesting how the effects of the Facebook algorithm allow for the gaming of our political system. The powerful data algorithms behind Internet platforms have allowed politicians and their surrogates to merge politics with consumer marketing. By means of microtargeting and personalization of politically-oriented messages (both paid and unpaid) directed at individuals and groups of people because of their proxy data likes and dislikes, politicians can sell multiple versions of themselves, effectively becoming all things to all people in a way that maximizes their chances at the voting booth.
Individuals targeted by such campaigns come to believe the “fake news” that is aggressively directed at them. Fed a steady diet of anti-immigration rhetoric, for example, they need only the strong suggestion that Obama is a Muslim, or that he was born outside the United States; their non-targeted fellow citizens look on with bafflement while they vote based upon the triggering of their profiled hot buttons. Most importantly, as should be evident, all of this creates another “toxic feedback loop” of increasing political polarization.
On March 14th 2017, reporters Chuck Todd and Carrie Dann published a piece on NBC News, called “How Big Data Broke American Politics,” that said some very similar things. In revolutionizing the way American politicians win elections, Todd and Dann argued, Big Data techniques and practices have broken American politics by encouraging and accelerating extreme political polarization. Our lawmakers no longer respond to the center because they don’t need centrists or swing voters to win elections; the game is now all about mobilizing every possible base voter and building a partisan firewall, which produces a style of governance in which politicians play only to their base voters, and never to the electorate as a whole.
Elon Musk and the Allegory of the Strawberries
At the start of the prior section, I introduced Cathy O’Neil’s book Weapons of Math Destruction as a way of talking more concretely about Elon Musk’s warning to the National Governors Association about the “AI threat to human civilization.” Before ending this first installment of my Tedd Talk, it’s worth taking a step back and considering the point of intersection, the way I am claiming it makes sense to go from Elon Musk to Cathy O’Neil and back again.
In his speech to the assembled governors, Ted Chiang relates, Musk actually gave an example of what he meant when he said that “AI could bring about the extinction of humanity as an unintended side effect.” In the example, Musk imagines an AI re-designing itself to be more effective at maximizing strawberry output and deciding that the best way would be to destroy civilization and convert the entire surface of the earth to strawberry fields.
In my view, Musk’s warning should be taken as an allegory about the dangers of handing over the future itself, as an open site for human cultural and political decision, to the vagaries of wholly cybernetic functions. If we choose to look at it in this fashion, then the line of constraint between Musk’s allegory of the strawberries and O’Neil’s fretful account of universal and globalized algorithmic decision-making over all the rudiments of human flourishing in modern societies starts to coincide on the terrain of a very non-allegorical reality. Or perhaps hyper-reality, since it’s really the arrival of a dystopian future where nothing is real…and nothing to get hung about…Strawberry fields forever!
Silicon Valley’s Dystopia: The Hyper-Capitalist Dictatorship Over Needs
In this first installment of my Tedd Talk, I have made the claim that the Silicon Valley Utopia of technological solutionism has definitively “jumped the shark.” By this I mean that it has been exposed, widely and publicly, as having given birth to a full-fledged dystopia that was perhaps always present in the beating heart of its intrinsically utopian energies.
In the second installment of this article, called “Silicon Valley Solutionism: A Reductionist Account of the Human Condition,” I offer a closer look at Morozov’s critique of Solutionism, as well as another look at the threat of wholesale societal cybernetic decision-making under conditions of hyper-capitalism.
In the third installment of this article, entitled, “Silicon Valley Rant Part III: Hyper-Capitalist Tech Solutionism & Human Needs” I renew my critique of tech solutionism’s effects on democracy, and turn to a consideration of theories of human needs as a condition for talking about tech solutionism as a “dictatorship over needs” in the fourth and final installment.