All of the Breaking Smart Season 1 essays by Venkatesh Rao.
Breaking Smart Introduction (by Marc Andreessen)

In 2007, right before the first iPhone launched, I asked Steve Jobs the obvious question. The design of the iPhone was based on discarding every physical interface element except a touchscreen: would users be willing to give up the then-dominant physical keypads for a soft keyboard?

His answer was brusque: "They'll learn."

Steve turned out to be right. Today, touchscreens are ubiquitous and seem normal, and other interfaces are emerging. An entire generation is now coming of age with a completely different tactile relationship to information, validating all over again Marshall McLuhan's observation that "the medium is the message."

A great deal of product development is based on the assumption that products must adapt to unchanging human needs or risk being rejected. Yet, time and again, people adapt in unpredictable ways to get the most out of new tech. Creative people tinker to figure out the most interesting applications, others build on those, and entire industries are reshaped. People change, then forget that they changed, and act as though they always behaved a certain way and could never change again. Because of this, unexpected changes in human behavior are often dismissed as regressive rather than as potentially intelligent adaptations.

But change happens anyway. "Software is eating the world" is the most recent historic transformation of this sort.

In 2014, a few of us invited Venkatesh Rao to spend the year at Andreessen Horowitz as a consultant to explore the nature of such historic tech transformations. In particular, we set out to answer a question: between the breathless and despairing extremes of viewing the future, could an intellectually rigorous case be made for pragmatic optimism?

As this set of essays argues — many of them inspired by a series of intensive conversations Venkat and I had — there is indeed such a case, and it follows naturally from the basic premise that people can and do change. To "break smart" is to adapt intelligently to new technological possibilities.

With his technological background, satirical eye, and gift for deep and different takes (as anyone who follows his Ribbonfarm blog knows!), there is perhaps nobody better suited than Venkat for telling a story of the future as it breaks smart from the past.

Whether you're a high school kid figuring out a career or a CEO attempting to navigate the new economy, Breaking Smart should be on your go-to list of resources for thinking about the future, even as you are busy trying to shape it.

— Marc Andreessen

----------------
A New Soft Technology

Something momentous happened around the year 2000: a major new soft technology came of age. After written language and money, software is only the third major soft technology to appear in human civilization. Fifteen years into the age of software, we are still struggling to understand exactly what has happened. Marc Andreessen's now-familiar line, software is eating the world, hints at the significance, but we are only just beginning to figure out how to think about the world in which we find ourselves.

Only a handful of general-purpose technologies[1] — electricity, steam power, precision clocks, written language, token currencies, iron metallurgy and agriculture among them — have impacted our world in the sort of deeply transformative way that deserves the description eating. And only two of these, written language and money, were soft technologies: seemingly ephemeral, but capable of being embodied in a variety of specific physical forms. Software has the same relationship to any specific sort of computing hardware as money does to coins or credit cards, or writing to clay tablets and paper books.

But only since about 2000 has software acquired the sort of unbridled power, independent of hardware specifics, that it possesses today. For the first half century of modern computing after World War II, hardware was the driving force. The industrial world mostly consumed software to meet existing needs, such as tracking inventory and payroll, rather than being consumed by it. Serious technologists largely focused on solving the clear and present problems of the industrial age rather than exploring the possibilities of computing proper.

Sometime around the dot-com crash of 2000, though, the nature of software, and its relationship with hardware, underwent a shift. It was a shift marked by accelerating growth in the software economy and a peaking in the relative prominence of hardware.[2] The shift happened within the information technology industry first, and then began to spread across the rest of the economy.

But the economic numbers only hint at the profundity of the resulting societal impact.[3] As a simple example, a 14-year-old today (too young to show up in labor statistics) can learn programming, contribute significantly to open-source projects, and become a talented professional-grade programmer before age 18. This is breaking smart: an economic actor using early mastery of emerging technological leverage — in this case a young individual using software leverage — to wield disproportionate influence on the emerging future.

Only a tiny fraction of this enormously valuable activity — the cost of a laptop and an Internet connection — would show up in standard economic metrics. Based on visible economic impact alone, the effects of such activity might even show up as a negative, in the form of technology-driven deflation. But the hidden economic significance of such an invisible story is at least comparable to that of an 18-year-old paying $100,000 over four years to acquire a traditional college degree. In the most dramatic cases, it can be as high as the value of an entire industry. The music industry is an example: a product created by a teenager, Shawn Fanning's Napster, triggered a cascade of innovation whose primary visible impact has been the vertiginous decline of big record labels, but whose hidden impact includes an explosion in independent music production and rapid growth in the live-music sector.[4]

Software eating the world is a story of the seen and the unseen: small, measurable effects that seem underwhelming or even negative, and large invisible and positive effects that are easy to miss, unless you know where to look.[5]

Today, the significance of the unseen story is beginning to be widely appreciated. But as recently as fifteen years ago, when the main act was getting underway, even veteran technologists were being blindsided by the subtlety of the transition to software-first computing.

Perhaps the subtlest element had to do with Moore's Law, the famous 1965 observation by Intel co-founder Gordon Moore that the density with which transistors can be packed into a silicon chip doubles every 18 months. By 2000, even as semiconductor manufacturing firms began running into the fundamental limits of Moore's Law, chip designers and device manufacturers began to figure out how to use Moore's Law to drive down the cost and power consumption of processors rather than driving up raw performance. The results were dramatic: low-cost, low-power mobile devices, such as smartphones, began to proliferate, vastly expanding the range of what we think of as computers. Coupled with reliable and cheap cloud computing infrastructure and mobile broadband, the result was a radical increase in technological potential. Computing could, and did, become vastly more accessible, to many more people in every country on the planet, at radically lower cost and expertise levels.
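The arithmetic of that compounding is worth making concrete. A back-of-the-envelope sketch (illustrative only: the doubling cadence is idealized at exactly 18 months, and no particular chip or product is being modeled):

    # Toy illustration of Moore's Law compounding; not a model of any real chip.
    DOUBLING_PERIOD_MONTHS = 18

    def density_multiplier(years: float) -> float:
        """Growth in transistor density over `years` at an 18-month doubling cadence."""
        doublings = (years * 12) / DOUBLING_PERIOD_MONTHS
        return 2 ** doublings

    # From Moore's 1965 observation to the mobile era circa 2000: roughly 10 million x.
    print(f"1965 -> 2000: ~{density_multiplier(35):,.0f}x denser")

    # The same compounding can be spent on cost and power instead of raw speed:
    # holding performance fixed, each doubling roughly halves cost per transistor.
    print(f"Over 10 years: ~{density_multiplier(10):,.0f}x cheaper per transistor")

Spent on cost and power rather than raw speed, the same compounding is what turned computers from room-sized capital equipment into cheap, embeddable components.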
One result of this increased potential was that technologists began to grope towards a collective vision commonly called the Internet of Things. It is a vision based on the prospect of processors becoming so cheap, miniaturized and low-powered that they can be embedded, along with power sources, sensors and actuators, in just about anything, from cars and light bulbs to clothing and pills. Estimates of the economic potential of the Internet of Things — of putting a chip and software into every physical item on Earth — vary from $2.7 trillion to over $14 trillion: comparable to the entire GDP of the United States today.[6]

By 2010, it had become clear that, given connectivity to nearly limitless cloud computing power and advances in battery technologies, programming was no longer something only a trained engineer could do to a box connected to a screen and a keyboard. It was something even a teenager could do, to almost anything.

The rise of ridesharing illustrates the process particularly well. Only a few years ago, services like Uber and Lyft seemed like minor enhancements to the process of procuring and paying for cab rides. Slowly, it became obvious that ridesharing was eliminating the role of human dispatchers and lowering the level of expertise required of drivers. As data accumulated through GPS tracking and ratings mechanisms, it further became clear that trust and safety could increasingly be underwritten by data instead of brand promises and regulation. This made it possible to dramatically expand driver supply, and to lower ride costs by using underutilized vehicles already on the roads.

As the ridesharing sector took root and grew in city after city, second-order effects began to kick in. The increased convenience enabled many more urban dwellers to adopt carless lifestyles. Increasing supply lowered costs, and increased accessibility for people previously limited to inconvenient public transportation. And as the idea of the carless lifestyle began to spread, urban planners began to realize that century-old trends like suburbanization, driven in part by car ownership, could no longer be taken for granted.

The ridesharing future we are seeing emerge now is even more dramatic: higher utilization of cars leads to lower demand for cars, and frees up resources for other kinds of consumption. Individual lifestyle costs are being lowered and insurance models are being reimagined. The future of road networks must now be reconsidered in light of greener and more efficient use of both vehicles and roads.

Meanwhile, the emerging software infrastructure created by ridesharing is starting to have a cascading impact on businesses, such as delivery services, that rely on urban transportation and logistics systems. And finally, by proving many key component technologies, the rideshare industry is paving the way for the next major development: driverless cars.

These developments herald a major change in our relationship to cars. To traditionalists, particularly in the United States, the car is a motif for an entire way of life, and the smartphone just an accessory. To early adopters who have integrated ridesharing deeply into their lives, the smartphone is the lifestyle motif, and the car is the accessory. To generations of Americans, owning a car represented freedom. To the next generation, not owning a car will represent freedom.

And this dramatic reversal in our relationships to two important technologies — cars and smartphones — is being catalyzed by what was initially dismissed as "yet another trivial app."

Similar impact patterns are unfolding in sector after sector. Prominent early examples include the publishing, education, cable television, aviation, postal mail and hotel sectors. The impact is more than economic: every aspect of the global industrial social order is being transformed by software.

This has happened before, of course: money and written language both transformed the world in similarly profound ways. Software, however, is more flexible and powerful than either.

Writing is very flexible: we can write with a finger on sand or with an electron beam on a pinhead. Money is even more flexible: anything from cigarettes in a prison, to pepper and salt in the ancient world, to modern fiat currencies can work. But software can increasingly go wherever writing and money can go, and beyond. Software can also eat both, and take them to places they cannot go on their own.
Partly as a consequence of how rarely soft, world-eating technologies erupt into human life, we have been systematically underestimating the magnitude of the forces being unleashed by software. While it might seem like software is constantly in the news, what we have already seen is dwarfed by what still remains unseen.

The effects of this widespread underestimation are dramatic. The opportunities presented by software are expanding, and the risks of being caught on the wrong side of the transformation are dramatically increasing. Those who have correctly calibrated the impact of software are winning. Those who have miscalibrated it are losing.

And the winners are not winning by small margins or temporarily, either. Software-fueled victories in the past decade have tended to be overwhelming and irreversible faits accomplis. This appears to be true at all levels, from individuals to businesses to nations. Even totalitarian dictatorships seem unable to resist the transformation indefinitely.

So to understand how software is eating the world, we have to ask why we have been systematically underestimating its impact, and how we can recalibrate our expectations for the future.

[1] Economists use the term general-purpose technologies to talk about those with broad impact across sectors. There is, however, no clear consensus on which technologies should make the list.

[2] By a rough estimate, between 1977 and 2012 the direct contribution of computing hardware to United States GDP increased 14% (from 1.4% in 1977 to 1.6% in 2012), while the direct contribution of software increased 150% (from 0.2% in 1977 to 0.5% in 2012). Computing hardware peaked in 2000 (at 2.2% of GDP) and has steadily declined since (source: a16z research staff).

[3] See for instance Silicon Valley Doesn't Believe Productivity Is Down (Wall Street Journal, July 16, 2015) and GDP: A Brief but Affectionate History by Diane Coyle, reviewed in Arnold Kling, GDP and Measuring the Intangible, American Enterprise Institute, February 2014.

[4] Why We Shouldn't Worry About the (Alleged) Decline of the Music Industry, Forbes, January 2012.

[5] The idea of seen/unseen effects as an overarching distinction in economic evolution can be traced to an influential 1850 essay by Frédéric Bastiat, That Which Is Seen and That Which Is Unseen.

[6] Three independent estimates, all for the year 2020, help us calibrate the potential. Gartner estimates $1.9 trillion in value-add by 2020. Cisco estimates a value somewhere between $14 trillion and $19 trillion. IDC estimates a value around $8.9 trillion (source: a16z research staff).

----------------
Getting Reoriented

There are four major reasons we underestimate the increasing power of software. Three of these drove similar patterns of miscalibration in previous technological revolutions, but one is unique to software.

First, as futurist Roy Amara noted, "We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run." Technological change unfolds exponentially, like compound interest, and we humans seem wired to think about exponential phenomena in flawed ways.[1] In the case of software, we expected too much too soon from 1995 to 2000, leading to the dot-com crash. Now, in 2015, many apparently silly ideas from 2000, such as home delivery of groceries ordered on the Internet, have become a mundane part of everyday life in many cities. But the element of surprise has dissipated, so we tend to expect too little, too far out, and are blindsided by revolutionary change in sector after sector: change that often looks trivial or banal on the surface, but turns out to be profound once the dust settles.
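To see how far linear intuition drifts from compound reality, consider a toy calculation (the 40% annual improvement rate is invented purely for illustration):

    # Purely illustrative: compound ("exponential") change vs. a linear guess.
    RATE = 0.40    # hypothetical annual improvement rate
    BASE = 100.0   # arbitrary starting capability

    def compound(years: int) -> float:
        """Reality: improvement compounds, like interest."""
        return BASE * (1 + RATE) ** years

    def linear_guess(years: int) -> float:
        """Intuition: project the first year's absolute gain forward forever."""
        return BASE + (BASE * RATE) * years

    for years in (2, 5, 10, 25):
        ratio = compound(years) / linear_guess(years)
        print(f"{years:>2} years: reality is {ratio:,.1f}x the linear projection")
    #  2 years: ~1.1x  (short run: the two are close, so hype overshoots)
    # 25 years: ~409x  (long run: the linear guess is off by orders of magnitude)

In the short run the two projections are nearly indistinguishable, which is where expectations overshoot; over decades the straight-line guess falls behind by orders of magnitude, which is where they undershoot.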
Second, we have shifted gears from what economic historian Carlota Perez calls the installation phase of the software revolution, focused on basic infrastructure such as operating systems and networking protocols, to a deployment phase focused on consumer applications such as social networks, ridesharing and ebooks. In her landmark study of the history of technology,[2] Perez demonstrates that the shift from installation to deployment phase for every major technology is marked by a chaotic transitional phase of wars, financial scandals and deep anxieties about civilizational collapse. One consequence of the chaos is that attention is absorbed by transient crises in economic, political and military affairs, and by the apocalyptic fears and utopian dreams they provoke. As a result, momentous but quiet change passes unnoticed.

Third, a great deal of the impact of software today appears in disguised form. The genomics and nanotechnology sectors appear to be rooted in biology and materials science. The "maker" movement around 3D printing and drones appears to be about manufacturing and hardware. Dig a little deeper, though, and you invariably find that the action is being driven by the possibilities opened up by software more than by fundamental new discoveries in those physical fields. The crashing cost of genome sequencing is primarily due to computing, with innovations in wet chemistry playing a secondary role. Financial innovations leading to cheaper insurance and credit are software innovations in disguise. The Nest thermostat achieves energy savings not by exploiting new discoveries in thermodynamics, but by using machine learning algorithms in a creative way. The potential of this software-driven model is what prompted Google, a software company, to pay $3B to acquire Nest: a company that on the surface appeared to have merely invented a slightly better mousetrap.

These three reasons for underestimating the power of software had counterparts in previous technology revolutions. The railroad revolution of the nineteenth century also saw a transitional period marked by systematically flawed expectations, a bloody civil war in the United States, and extensive patterns of disguised change — such as the rise of urban living, grocery store chains, and meat consumption — whose root cause was cheap rail transport.

The fourth reason we underestimate software, however, is a unique one: it is a revolution that is being led, in large measure, by brash young kids rather than sober adults.[3]

This is perhaps the single most important thing to understand about the revolution that we have labeled software eating the world: it is being led by young people, and proceeding largely without adult supervision (though with many adults participating). This has unexpected consequences.

As in most periods in history, older generations today run or control all key institutions worldwide. They are better organized and politically more powerful. In the United States, for example, the AARP is perhaps the single most influential organization in politics. Within the current structure of the global economy, older generations can, and do, borrow unconditionally from the future at the expense of the young and the yet-to-be-born.

But unlike most periods in history, young people today do not have to either "wait their turn" or directly confront a social order that is systematically stacked against them. Operating in the margins with a hacker ethos — a problem-solving sensibility based on rapid trial-and-error and creative improvisation — they are able to use software leverage and loose digital forms of organization to create new economic, social and political wealth. In the process, young people are indirectly disrupting politics and economics and creating a new parallel social order. Instead of vying for control of venerable institutions that have already weathered several generational wars, young people are creating new institutions based on new software and new wealth. These improvised but highly effective institutions repeatedly emerge out of nowhere, and begin accumulating political and economic power. Most importantly, they are relatively invisible. Compared to the visible power of youth counterculture in the 1960s, for instance, today's youth culture, built around messaging apps and photo-sharing, does not seem like a political force to reckon with. This culture also has a decidedly commercial rather than ideological character, as a New York Times writer (rather wistfully) noted in a 2011 piece appropriately titled Generation Sell.[4] Yet today's youth culture is arguably more powerful as a result, representing as it does what Jane Jacobs called the "commerce syndrome" of values, rooted in pluralistic economic pragmatism, rather than the opposed "guardian syndrome" of values, rooted in exclusionary and authoritarian political ideologies.

Chris Dixon captured this guerrilla pattern of the ongoing shift in political power with a succinct observation: what the smartest people do on the weekend is what everyone else will do during the week in ten years.

The result is strange: what in past eras would have been a classic situation of generational conflict based on political confrontation is instead playing out as an economy-wide technological disruption involving surprisingly little direct political confrontation. Movements such as #Occupy pale in comparison to their 1960s counterparts, and more importantly, in comparison to contemporary youth-driven economic activity.

This does not mean, of course, that there are no political consequences. Software-driven transformations directly disrupt the middle-class life script, upon which the entire industrial social order is based. In its typical aspirational form, the traditional script is based on 12 years of regimented industrial schooling, an additional 4 years devoted to economic specialization, lifetime employment with predictable seniority-based promotions, and middle-class lifestyles. Though this script began to unravel as early as the 1970s, even for the minority (white, male, straight, abled, native-born) who actually enjoyed it, the social order of our world is still based on it. Instead of software, the traditional script runs on what we might call paperware: bureaucratic processes constructed from the older soft technologies of writing and money. Instead of the hacker ethos of flexible and creative improvisation, it is based on the credentialist ethos of degrees, certifications, licenses and regulations. Instead of being based on achieving financial autonomy early, it is based on taking on significant debt (for college and home ownership) early.

It is important to note, though, that this social order based on credentialism and paperware worked reasonably well for almost a century, between approximately 1870 and 1970, and created a great deal of new wealth and prosperity. Despite its stifling effects on individualism, creativity and risk-taking, it offered its members a broader range of opportunities and more security than the narrow agrarian provincialism it supplanted. For all its shortcomings, lifetime employment in a large corporation like General Motors, with significantly higher standards of living, was a great improvement over pre-industrial rural life.

But by the 1970s, industrialization had succeeded so wildly that it had undermined its own fundamental premises of interchangeability in products, parts and humans. As economists Jeremy Greenwood and Mehmet Yorukoglu[5] argue in a provocative paper titled 1974, that year arguably marked the end of the industrial age and the beginning of the information age. Computer-aided industrial automation was making ever-greater scale possible at ever-lower costs. At the same time, variety and uniqueness in products and services were becoming increasingly valuable to consumers in the developed world. Global competition, especially from Japan and Germany, began to directly threaten American industrial leadership. This began to drive product differentiation, a challenge that demanded originality rather than conformity from workers. Industry structures that had taken shape in the era of mass-produced products, such as Ford's famous black Model T, were redefined to serve the demand for increasing variety. The result was arguably a peaking of all aspects of the industrial social order based on mass production and interchangeable workers, roughly around 1974: a phenomenon Balaji Srinivasan has dubbed peak centralization.[6]

One way to understand the shift from credentialist to hacker modes of social organization, via young people acquiring technological leverage, is through the mythological tale of Prometheus stealing fire from the heavens for human use.

The legend of Prometheus has been used as a metaphor for technological progress at least since Mary Shelley's Frankenstein; or, The Modern Prometheus. Technologies capable of eating the world typically have a Promethean character: they emerge within a mature social order (a metaphoric "heaven" that is the preserve of older elites), but their true potential is unleashed by an emerging one (a metaphoric "earth" comprising creative marginal cultures, in particular youth cultures), which gains relative power as a result.
Software as a Promethean technology emerged in the heart of the industrial social order, at companies such as AT&T, IBM and Xerox, universities such as MIT and Stanford, and government agencies such as DARPA and CERN. But its Promethean character was unleashed, starting with the early hacker movement, on the open Internet and through Silicon Valley-style startups.

As a result of a Promethean technology being unleashed, younger and older face a similar dilemma: should I abandon some of my investments in the industrial social order and join the dynamic new social order, or hold on to the status quo as long as possible?

The decision is obviously easier if you are younger, with much less to lose. But many who are young still choose the apparent safety of the credentialist scripts of their parents. These are what David Brooks called Organization Kids (after William Whyte's 1956 classic, The Organization Man[7]): those who bet (or allow their "Tiger" parents[8] to bet on their behalf) on the industrial social order. If you are an adult over 30, especially one encumbered with significant family obligations or debt, the decision is harder.

Those with a Promethean mindset and an aggressive approach to pursuing a new path can break out of the credentialist life script at any age. Those who are unwilling or unable to do so are holding on to it more tenaciously than ever.

Young or old, those who are unable to adopt the Promethean mindset end up defaulting to what we call a pastoral mindset: one marked by yearning for lost or unattained utopias. Today many still yearn for an updated version of romanticized[9] 1950s American middle-class life, for instance, featuring flying cars and jetpacks.

How and why you should choose the Promethean option, despite its disorienting uncertainties and challenges, is the overarching theme of Season 1. It is a choice we call breaking smart, and it is available to almost everybody in the developed world, and to a rapidly growing number of people in the newly connected developing world.

These individual choices matter. As historians such as Daron Acemoglu and James Robinson[10] and Joseph Tainter[11] have argued, it is the nature of human problem-solving institutions, rather than the nature of the problems themselves, that determines whether societies fail or succeed. Breaking smart at the level of individuals is what leads to organizations and nations breaking smart, which in turn leads to societies succeeding or failing.

Today, the future depends on increasing numbers of people choosing the Promethean option. Fortunately, that is precisely what is happening.

[1] See, for example, the phenomenon of hyperbolic discounting, one of the major biases that affect human temporal reasoning.

[2] Carlota Perez, Technological Revolutions and Financial Capital, 2003.

[3] Though the widespread perception that startup founders are relatively young is debatable, it is clear that software allows talented individuals to begin their entrepreneurial journeys much earlier than other technologies in history, simply due to the availability and accessibility of the technology at low cost. By contrast, major business leaders of the Robber Baron age, such as Cornelius Vanderbilt and John D. Rockefeller, embarked on their empire-building journeys in middle age.

[4] William Deresiewicz, Generation Sell, New York Times, 2011.

[5] Jeremy Greenwood and Mehmet Yorukoglu, 1974, Carnegie-Rochester Conference Series on Public Policy, 1997.

[6] Personal communication.

[7] See William Whyte, The Organization Man, first published in 1956, and David Brooks, The Organization Kid, The Atlantic Monthly, 2001.

[8] Amy Chua, Battle Hymn of the Tiger Mother, 2011.

[9] Stephanie Coontz, The Way We Never Were: American Families and the Nostalgia Trap, 1993.

[10] Daron Acemoglu and James Robinson, Why Nations Fail, 2013.

[11] Joseph Tainter, The Collapse of Complex Societies, 1990.

----------------
Towards a Mass Flourishing

In this season of Breaking Smart, I will not attempt to predict the what and when of the future. In fact, a core element of the hacker ethos is the belief that being open to possibilities and embracing uncertainty is necessary for the actual future to unfold in positive ways. Or as computing pioneer Alan Kay put it, inventing the future is easier than predicting it.

And this is precisely what tens of thousands of small teams — small enough to be fed by no more than two pizzas, by a rule of thumb made famous by Amazon founder Jeff Bezos — are doing across the world today.

Prediction as a foundation for facing the future involves risks that go beyond simply getting it wrong.
The bigger risk is getting attached to a particular what and when: a specific vision of a paradise to be sought, preserved or reclaimed. This is a serious philosophical error — one to which pastoralist mindsets are particularly prone — that seeks to limit the future.

But while I will avoid dwelling too much on the what and when, I will unabashedly advocate for a particular answer to how. Thanks to virtuous cycles already gaining in power, I believe almost all effective responses to the problems and opportunities of the coming decades will emerge out of the hacker ethos, despite its apparently peripheral role today. The credentialist ethos of extensive planning and scripting towards deterministic futures will play a minor supporting role at best. Those who adopt a Promethean mindset and break smart will play an expanding role in shaping the future. Those who adopt a pastoral mindset and retreat towards tradition will play a diminishing role, in the shrinking number of economic sectors where credentialism is still the more appropriate model.

The nature of problem-solving in the hacker mode — based on trial-and-error, iterative improvement, testing and adaptation (both automated and human-driven) — allows us to identify four characteristics of how the future will emerge.

First, despite current pessimism about the continued global leadership of the United States, the US remains the single largest culture that embodies the pragmatic hacker ethos, nowhere more so than in Silicon Valley. The United States in general, and Silicon Valley in particular, will therefore continue to serve as the global exemplar of Promethean technology-driven change. And as virtual collaboration technologies improve, the Silicon Valley economic culture will increasingly become the global economic culture.

Second, the future will unfold through very small groups having very large impacts. One piece of wisdom in Silicon Valley today is that the core of the best software is nearly always written by teams of fewer than a dozen people, not by huge committee-driven development teams. This means increasing well-being for all will be achieved through small two-pizza teams beating large ones. Scale will increasingly be achieved via loosely governed ecosystems of additional participants creating wealth in ways that are hard to track using traditional economic measures. Instead of armies of Organization Men and Women employed within large corporations, with Organization Kids marching in at one end and retirees marching out at the other, the world of work will be far more diverse.

Third, the future will unfold through gradual and continuous improvement of well-being and quality of life across the world, not through the sudden emergence of a utopian software-enabled world (or a sudden collapse into a dystopian one). The process will be one of fits and starts, toys and experiments, bugginess and brokenness. But the overall trend will be upwards, towards increasing prosperity for all.

Fourth, the future will unfold through rapid declines in the costs of solutions to problems, including in heavily regulated sectors historically resistant to cost-saving innovations, such as healthcare and higher education. In improvements wrought by software, poor and expensive solutions have generally been replaced by superior and cheaper (often free) solutions, and these substitution effects will accelerate.

Putting these four characteristics together, we get a picture of messy, emergent progress that economist Bradford DeLong calls "slouching towards utopia": a condition of gradually increasing quality of life available, at gradually declining cost, to a gradually expanding portion of the global population.

A big implication is immediately clear: the asymptotic condition represents a consumer utopia. As consumers, we will enjoy far more for far less. This means that the biggest unknown today is our future as producers, which brings us to what many view as the central question today: the future of work.
The gist of a robust answer, which we will explore in Understanding Elite Discontent, was anticipated by John Maynard Keynes as far back as 1930,[1] though he did not like the implications: the majority of the population will be engaged in creating and satisfying each other's new needs in ways that even the most prescient of today's visionaries will fail to anticipate.

While we cannot predict precisely what workers of the future will be doing — what future wants and needs workers will be satisfying — we can predict some things about how they will be doing it. Work will take on an experimental, trial-and-error character, and will take place in an environment of rich feedback, self-correction, adaptation, ongoing improvement, and continuous learning. The social order surrounding work will be a much more fluid descendant of today's secure but stifling paycheck world on the one hand, and the liberating but precarious world of free agency and contingent labor on the other.

In other words, the hacker ethos will go global and the workforce at large will break smart. As the hacker ethos spreads, we will witness what economist Edmund Phelps calls a mass flourishing[2]: a state of the economy in which work is challenging and therefore fulfilling. Unchallenging, predictable work will become the preserve of machines.

Previous historical periods of mass flourishing, such as the early industrial revolution, were short-lived, and gave way, after a few decades, to societies based on a new middle-class majority built around predictable patterns of work and life. This time around, the state of mass flourishing will be a sustained one: a slouching towards a consumer and producer utopia.

If this vision seems overly dramatic, consider once again the comparison to other soft technologies: software is perhaps the most imagination-expanding technology humans have invented since writing and money, and possibly more powerful than either. To operate on the assumption that it will transform the world at least as dramatically, far from being wild-eyed optimism, is sober realism.

[1] The classic 1930 article by John Maynard Keynes, Economic Possibilities for Our Grandchildren, remains the dominant framing for understanding technological unemployment. Keynes understood that technological unemployment is a temporary phenomenon, and that new wants and needs soon appear to create new employment. He viewed this as a spiritual problem of sorts: that of endlessly expanding materialism and spiritual degeneracy. We will discuss his proposed solution, the concept of a leisure society, in a later essay.

[2] Edmund Phelps' Mass Flourishing (2014) is a magisterial survey of the rise of corporatism and its stifling effects on the economic dynamism that marked the early decades of the industrial revolution. By critically examining a wide variety of economic indicators (ranging from job satisfaction and values surveys to employment and growth data), Phelps constructs a powerful case for abandoning corporatist economic organization models. Compared to the much more heavily publicized economic blockbuster of 2014, Thomas Piketty's Capital in the Twenty-First Century, which focused much more narrowly on income inequality, Phelps' work takes a much broader multi-model approach. For readers interested in a broad understanding of the economic context of software eating the world, Phelps' book is probably the single best resource.

----------------
Purists versus Pragmatists

At the heart of the historical development of computing is the age-old philosophical impasse between purist and pragmatist approaches to technology, which is particularly pronounced in software due to its seemingly near-Platonic ineffability.

One way to understand the distinction is through a dinnerware analogy.

Purist approaches, which rely on alluring visions, are like precious "good" china: mostly for display, and reserved exclusively for narrow uses like formal dinners. Damage through careless use can drastically lower the value of a piece. Broken or missing pieces must be replaced for the collection to retain its full display value. To purists, mixing and matching, either with humbler everyday tableware or with different china patterns, is a kind of sacrilege.

The pragmatic approach, on the other hand, is like unrestricted and frequent use of hardier everyday dinnerware. Damage through careless play does not affect value as much. Broken pieces may still be useful, and missing pieces need not be replaced if they are not actually needed. For pragmatists, mixing and matching available resources, far from being sacrilege, is something to be encouraged, especially for collaborations such as neighborhood potlucks.

In software, the difference between the two approaches is clearly illustrated by the history of the web browser.

On January 23, 1993, Marc Andreessen sent out a short email announcing the release of Mosaic, the first graphical web browser:

    07:21:17-0800 by marca@ncsa.uiuc.edu:

    By the power vested in me by nobody in particular, alpha/beta version 0.5 of NCSA's Motif-based networked information systems and World Wide Web browser, X Mosaic, is hereby released:

    file://ftp.ncsa.uiuc.edu/Web/xmosaic/xmosaic-0.5.tar.Z
Along with Eric Bina, he had quickly hacked the prototype together after becoming enthralled by his first view of the World Wide Web, which Tim Berners-Lee had unleashed from CERN in Geneva in 1991. Over the next year, several other colleagues joined the project, equally excited by the possibilities of the web. All were eager to share the excitement they had experienced, and to open up the future of the web to more people, especially non-technologists.

Over the course of the next few years, the graphical browser escaped the confines of the government-funded lab (the National Center for Supercomputing Applications at the University of Illinois) where it was born. As it matured at Netscape, and later at Microsoft, Mozilla and Google, it steered the web in unexpected (and to some, undesirable) directions. The rapid evolution triggered both the legendary browser wars and passionate debates about the future of the Internet. Those late-nineties conflicts shaped the Internet of today.

To some visionary pioneers, such as Ted Nelson, who had been developing a purist hypertext paradigm called Xanadu for decades, the browser represented an undesirably messy direction for the evolution of the Internet. To pragmatists, the browser represented important software evolving as it should: in a pluralistic way, embodying many contending ideas, through what the Internet Engineering Task Force (IETF) calls "rough consensus and running code."

While every major software project has drawn inspiration from both purists and pragmatists, the browser, like other pieces of software that became a mission-critical part of the Internet, was primarily derived from the work and ideas of pragmatists: pioneers like Jon Postel, David Clark, Bob Kahn and Vint Cerf, who were instrumental in shaping the early development of the Internet through highly inclusive institutions like the IETF.

Today, the then-minority tradition of pragmatic hacking has matured into agile development, the dominant modern approach for making software. But the significance of this bit of history goes beyond the Internet. Increasingly, the pragmatic, agile approach to building things has spread to other kinds of engineering, and beyond, to business and politics. The nature of software has come to matter far beyond software. Agile philosophies are eating all kinds of building philosophies. To understand the nature of the world today, whether or not you are a technologist, it is crucial to understand agility and its roots in the conflict between pragmatic and purist approaches to computing.

The story of the browser was not exceptional. Until the early 1990s, almost all important software began life as purist architectural visions rather than pragmatic hands-on tinkering.

This was because early programming with punch-card mainframes was a highly constrained process. Iterative refinement was slow and expensive. Agility was a distant dream: programmers often had to wait weeks between runs. If your program didn't work the first time, you might not have gotten another chance. Purist architectures, worked out on paper, helped minimize risk and maximize results under these conditions.

As a result, early programming was led by creative architects (often mathematicians and, with rare exceptions like Klari von Neumann and Grace Hopper, usually male) who worked out the structure of complex programs upfront, as completely as possible. The actual coding onto punch cards was done by large teams of hands-on programmers (mostly women[1]) with much lower autonomy, responsible for working out implementation details.

In short, purist architecture led the way, and pragmatic hands-on hacking was effectively impossible. Trial-and-error was simply too risky and slow, which meant significant hands-on creativity had to be given up in favor of productivity.

With the development of smaller computers capable of interactive input, hands-on hacking became possible. At early hacker hubs, like MIT through the sixties, a high-autonomy culture of hands-on programming began to take root. Though the shift would not be widely recognized until after 2000, the creative part of programming was migrating from visioning to hands-on coding. Already by 1970, important and high-quality software, such as the Unix operating system, had emerged from the hacker culture growing at the minicomputer margins of industrial programming.

Through the seventies, a tenuous balance of power prevailed between purist architects and pragmatic hackers. With the introduction of networked personal computing in the eighties, however, hands-on hacking became the defining activity in programming. The culture of early hacker hubs like MIT and Bell Labs began to diffuse broadly through the programming world. The archetypal programmer had evolved: from interchangeable member of a large team to uniquely creative hacker, tinkering away at a personal computer, interacting with peers over networks. Instead of dutifully obeying an architect, the best programmers were devoting increasing amounts of creative energy to scratching personal itches.

The balance shifted decisively in favor of pragmatists with the founding of the IETF in 1986. In January of that year, a group of 21 researchers met in San Diego and planted the seeds of what would become the modern "government" of the Internet.

Despite its deceptively bureaucratic-sounding name, the IETF is like no standards organization in history, starting with the fact that it has no membership requirements and is open to all who want to participate. Its core philosophy can be found in an obscure document, The Tao of the IETF, little known outside the world of technology. It is a document that combines the informality and self-awareness of good blogs, the gravitas of a declaration of independence, and the aphoristic wisdom of Zen koans. This oft-quoted section illustrates its basic spirit:

    In many ways, the IETF runs on the beliefs of its members. One of the "founding beliefs" is embodied in an early quote about the IETF from David Clark: "We reject kings, presidents and voting. We believe in rough consensus and running code". Another early quote that has become a commonly-held belief in the IETF comes from Jon Postel: "Be conservative in what you send and liberal in what you accept".
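Postel's dictum is a design stance, not just an aphorism. A minimal sketch of the idea (the key:value header format here is invented for illustration; this is not code from any real protocol stack):

    # A toy rendering of Postel's robustness principle, using a made-up
    # key:value header format rather than any actual IETF protocol.

    def parse_header(line: str) -> tuple[str, str]:
        """Liberal in what you accept: tolerate stray whitespace and case drift."""
        key, _, value = line.partition(":")
        return key.strip().lower(), value.strip()

    def emit_header(key: str, value: str) -> str:
        """Conservative in what you send: exactly one strict, canonical form."""
        return f"{key.strip().lower()}: {value.strip()}"

    # A sloppy peer's output is still understood...
    key, value = parse_header("  Content-TYPE :  text/html ")
    # ...but what goes back on the wire is strictly canonical.
    print(emit_header(key, value))   # -> "content-type: text/html"

The stance has costs as well as benefits: tolerating sloppy senders can entrench their sloppiness, a criticism taken up in the footnotes of this section.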
Though the IETF began as a gathering of government-funded researchers, it also represented a shift in the center of programming gravity from government labs to the commercial and open-source worlds. Over the nearly three decades since, it has evolved into the primary steward[2] of the inclusive, pluralistic and egalitarian spirit of the Internet. In invisible ways, the IETF has shaped the broader economic and political dimensions of software eating the world.

The difference between purist and pragmatic approaches becomes clear when we compare the evolution of programming in the United States and Japan since the early eighties. Around 1982, Japan chose the purist path over the pragmatic one by embarking on the ambitious "fifth-generation computing" effort. The highly corporatist, government-led program, which caused much anxiety in America at the time, proved to be largely a dead end. The American tradition, on the other hand, outgrew its government-funded roots and gave rise to the modern Internet. Japan's contemporary contributions to software, such as the hugely popular Ruby language designed by Yukihiro Matsumoto, belong squarely within the pragmatic hacker tradition.

I will argue that this pattern of development is not limited to computer science. Every field eaten by software experiences a migration of the creative part from visioning activities to hands-on activities, disrupting the social structure of all professions. Classical engineering fields like mechanical, civil and electrical engineering had already largely succumbed to hands-on pragmatic hacking by the nineties. Non-engineering fields like marketing are beginning to convert.

So the significance of pragmatic approaches prevailing over purist ones cannot be overstated: in the world of technology, it was the equivalent of the fall of the Berlin Wall.

[1] This distinction between high-autonomy architects and low-autonomy programmers in the early days of computing is often glossed over in feel-good popular accounts of the role of women in early computing. This popular narrative conflates the high-level work of women like Klari von Neumann and Grace Hopper (the latter is in some ways the zombie Marie Curie of computing, whose totemic prominence somewhat obscures the contributions of other pioneering women) with the routine work of rank-and-file women programmers. By doing so, the popular narrative manages to overstate the creative contribution of women in the early days, and thereby, rather ironically, understates the actual misogyny of the 1940s-50s. This leads to a misleading narrative of decline from an imagined golden age of women in computing. A clearer indicator of the history of women in programming would be the rate of their participation in interactive computing, starting with the early 1960s hacker culture of the sort that developed at MIT around the earliest interactive minicomputers such as the PDP-1. Measured against this baseline, I suspect the participation of women in creative hands-on programming has been steadily increasing from an early misogynistic low, and is likely significantly better than in other engineering fields. I do not, however, have the data to justify this claim.

[2] This rise, in terms of both institutional power and philosophical influence, has, of course, attracted criticism. The most common criticism is the expected one from purists: that the IETF philosophy encourages incrementalism and short-term thinking. This sort of criticism is briefly addressed in these essays, but represents a fundamental philosophical divide comparable to the Left/Right divide in politics, rather than dissent within the pragmatic philosophy. There has also been actionable criticism within the pragmatic camp. For instance, Postel's robustness principle cited above ("be conservative in what you send and liberal in what you accept") has been criticized by Joel Spolsky for creating chaos in standards efforts by allowing too much "soft failure." This particular criticism is being accommodated by the IETF in the form of the alternate "fail hard and fast" design principle.

----------------
Agility and Illegibility

While pragmatic hacking was on the rise, purist approaches entered a period of slow, painful and costly decline. Even as they grew in ambition, software projects based on purist architecture and teams of interchangeable programmers grew increasingly unmanageable. They began to exhibit the predictable failure patterns of industrial-age models: massive cost overruns, extended delays, failed launches, damaging unintended consequences, and broken, unusable systems.

These failure patterns are characteristic of what political scientist James Scott[1] called authoritarian high modernism: a purist architectural aesthetic driven by authoritarian priorities. To authoritarian high-modernists, elements of the environment that do not conform to their purist design visions appear "illegible" and anxiety-provoking. As a result, they attempt to make the environment legible by forcibly removing illegible elements. Failures follow because important elements, critical to the functioning of the environment, get removed in the process.

Geometrically laid-out suburbs, for example, are legible and conform to platonic architectural visions, even if they are unlivable and economically stagnant. Slums, on the other hand, appear illegible and are anxiety-provoking to planners, even when they are full of thriving economic life. As a result, authoritarian planners level slums and relocate residents into low-cost planned housing. In the process they often destroy economic and cultural vitality.

In software, what authoritarian architects find illegible and anxiety-provoking is the messy, unplanned tinkering hackers use to figure out creative solutions. When the pragmatic model first emerged in the sixties, authoritarian architects reacted like urban planners: by attempting to clear "code slums." These attempts took the form of increasingly rigid documentation and control processes inherited from manufacturing. In the process, they often lost the hacker knowledge keeping the project alive.

In short, authoritarian high modernism is a kind of tunnel vision. Architects are prone to it in environments that are richer than one mind can comprehend. The urge to dictate and organize is destructive, because it leads architects to destroy the apparent chaos that is vital for success.

The flaws of authoritarian high modernism first became problematic in fields like forestry, urban planning and civil engineering. Failures of authoritarian development in these fields resulted in forests ravaged by disease, unlivable "planned" cities, crony capitalism and endemic corruption. By the 1960s, in the West, pioneering critics of authoritarian models, such as the urbanist Jane Jacobs and the environmentalist Rachel Carson, had begun to correctly diagnose the problem.

By the seventies, liberal democracies had begun to adopt the familiar, more democratic consultation processes of today. These processes were adopted in computing as well, just as the early mainframe era was giving way to the minicomputer era. Unfortunately, while democratic processes did mitigate the problems, the result was often lowered development speed, increased cost and more invisible corruption. New stakeholders brought competing utopian visions and authoritarian tendencies to the party. The problem now became reconciling conflicting authoritarian visions. Worse, any remaining illegible realities, which were anxiety-provoking to all stakeholders, were now even more vulnerable to prejudice and elimination. As a result, complex technology projects often slowed to a costly, gridlocked crawl. Tyranny of the majority — expressed through autocratic representatives of particular powerful constituencies — drove whatever progress did occur. The biggest casualty was innovation, which by definition is driven by ideas that are illegible to all but a few: what Peter Thiel calls secrets — things entrepreneurs believe that nobody else does, which lead them to unpredictable breakthroughs.

The process was most clearly evident in fields like defense. In major liberal democracies, different branches of the military competed to influence the design of new weaponry, and politicians competed to create jobs in their constituencies. As a result, major projects spiraled out of control and failed in predictable ways: delayed, too expensive and technologically compromised. In the non-liberal-democratic world, the consequences were even worse. Authoritarian high modernism continued (and continues today in countries like Russia and North Korea), unchecked, wreaking avoidable havoc.

Software is no exception to this pathology. As high-profile failures like the launch of healthcare.gov[2] show, "democratic" processes meant to mitigate risks tend to create stalled or gridlocked projects, compounding the problem.

Both in traditional engineering fields and in software, authoritarian high modernism leads to a Catch-22 situation: you either get a runaway train wreck due to too much unchecked authoritarianism, or a train that barely moves due to a gridlock of checks and balances.

Fortunately, agile software development manages to combine both decisive authority and pluralistic visions, and to mitigate risks without slowing things to a crawl. The basic principles of agile development, articulated by a group of 17 programmers in 2001 in a document known as the Agile Manifesto, represented an evolution of the pragmatic philosophy first explicitly adopted by the IETF.

The cost of this agility is a seemingly anarchic pattern of progress. Agile development models catalyze illegible, collective patterns of creativity, weaken illusions of control, and resist being yoked to driving utopian visions. Adopting agile models leads individuals and organizations to gradually increase their tolerance for anxiety in the face of apparent chaos. As a result, agile models can get more agile over time.

Not only are agile models driving reform in software, they are also spreading to traditional domains where authoritarian high modernism first emerged. Software is beginning to eat domains like forestry, urban planning and environment protection. Open Geographic Information Systems (GIS) in forestry, open data initiatives in urban governance, and monitoring technologies in agriculture all increase information availability while eliminating cumbersome paperware processes. As we will see in upcoming essays, enhanced information availability and lowered friction can make any field hacker-friendly. Once a field becomes hacker-friendly, software begins to eat it. Development gathers momentum: the train can begin moving faster, without leading to train wrecks, resolving the Catch-22.
Development gathers momentum: the train can begin movin g faster, without leading to train wrecks, resolving the Catch-22. Today the shift from purist to pragmatist has progressed far enough that it is a lso reflected at the level of the economics of software development. In past dec ades, economic purists argued variously for idealized open-source, market-driven or government-led development of important projects. But all found themselves f aced with an emerging reality that was too complex for any one economic ideology to govern. As a result, rough consensus and running economic mechanisms have pr evailed over specific economic ideologies and gridlocked debates. Today, every a vailable economic mechanism — market-based, governmental, nonprofit and even crimi nal — has been deployed at the software frontier. And the same economic pragmatism is spreading to software-eaten fields. This is a natural consequence of the dramatic increase in both participation lev els and ambitions in the software world. In 1943, only a small handful of people working on classified military projects had access to the earliest computers. E
Even in 1974, the year of Peak Centralization, only a small and privileged group had access to the early hacker-friendly minicomputers like the DEC PDP series. But by 1993, the PC revolution had nearly delivered on Bill Gates’ vision of a computer at every desk, at least in the developed world. And by 2000, laptops and BlackBerrys were already foreshadowing the world of today, with near-universal access to smartphones, and an exploding number of computers per person.

The IETF slogan of rough consensus and running code (RCRC) has emerged as the only workable doctrine for both technological development and associated economic models under these conditions.

As a result of pragmatism prevailing, a nearly ungovernable Promethean fire has been unleashed. Hundreds of thousands of software entrepreneurs are unleashing innovations on an unsuspecting world by the power vested in them by “nobody in particular,” and by any economic means necessary.

It is in the context of the anxiety-inducing chaos and complexity of a mass flourishing that we then ask: what exactly is software?

----------------Rough Consensus and Maximal Interestingness

Software possesses an extremely strange property: it is possible to create high-value software products with effectively zero capital outlay. As Mozilla engineer Sam Penrose put it, software programming is labor that creates capital.

This characteristic makes software radically different from engineering materials like steel, and much closer to artistic media such as paint.1 As a consequence, engineer and engineering are somewhat inappropriate terms. It is something of a stretch to even think of software as a kind of engineering “material.” Though all computing requires a physical substrate, the trillions of tiny electrical charges within computer circuits, the physical embodiment of a running program, barely seem like matter.

The closest relative to this strange new medium is paper. But even paper is not as cheap or evanescent. Though we can appreciate the spirit of creative abundance with which industrial age poets tossed crumpled-up false starts into trash cans, a part of us registers the wastefulness. Paper almost qualifies as a medium for true creative abundance, but falls just short.

Software though, is a medium that not only can, but must be approached with an abundance mindset. Without a level of extensive trial-and-error and apparent waste that would bankrupt both traditional engineering and art, good software does not take shape. From the earliest days of interactive computing, when programmers chose to build games while more “serious” problems waited for computer time, to modern complaints about “trivial” apps (which often turn out to be revolutionary), scarcity-oriented thinkers have remained unable to grasp the essential nature of software for fifty years.

The difference has a simple cause: unsullied purist visions have value beyond anxiety-alleviation and planning. They are also a critical authoritarian marketing and signaling tool — like formal dinners featuring expensive china — for attracting and concentrating scarce resources in fields such as architecture. In an environment of abundance, there is much less need for visions to serve such a marketing purpose.
They only need to provide a roughly correct sense of direction to those laboring at software development to create capital using whatever tools and ideas they bring to the party — like potluck participants improvising whatever resources are necessary to make dinner happen.

Translated to technical terms, the dinnerware analogy is at the heart of software engineering. Purist visions tend to arise when authoritarian architects attempt to concentrate and use scarce resources optimally, in ways they often sincerely believe are best for all. By contrast, tinkering is focused on steady progress rather than optimal end-states that realize a totalizing vision. It is usually driven by individual interests and not obsessively concerned with grand and paternalistic “best for all” objectives. The result is that purist visions seem more comforting and aesthetically appealing on the outside, while pragmatic hacking looks messy and unfocused. At the same time purist visions are much less open to new possibilities and bricolage, while pragmatic hacking is highly open to both.

Within the world of computing, the importance of abundance-oriented approaches
was already recognized by the 1960s. With Moore’s Law kicking in, pioneering computer scientist Alan Kay codified the idea of abundance orientation with the observation that programmers ought to “waste transistors” in order to truly unleash the power of computing.

But even for young engineers starting out today, used to routinely renting cloudy container-loads2 of computers by the minute, the principle remains difficult to follow. Devoting skills and resources to playful tinkering still seems “wrong,” when there are obvious and serious problems desperately waiting for skilled attention. Like the protagonist in the movie Brewster’s Millions, who struggles to spend $30 million within thirty days in order to inherit $300 million, software engineers must unlearn habits born of scarcity before they can be productive in their medium.

The principle of rough consensus and running code is perhaps the essence of the abundance mindset in software.

If you are used to the collaboration processes of authoritarian organizations, the idea of rough consensus might conjure up the image of a somewhat informal committee meeting, but the similarity is superficial. Consensus in traditional organizations tends to be brokered by harmony-seeking individuals attuned to the needs of others, sensitive to constraints, and good at creating “alignment” among competing autocrats. This is a natural mode of operation when consensus is sought in order to deal with scarcity. Allocating limited resources is the typical purpose of such industrial-age consensus seeking. Under such conditions, compromise represents a spirit of sharing and civility. Unfortunately, it is also a recipe for gridlock when compromise is hard and creative breakthroughs become necessary.

By contrast, software development favors individuals with an autocratic streak, driven by an uncompromising sense of how things ought to be designed and built, which at first blush appears to contradict the idea of consensus.

Paradoxically, the IETF philosophy of eschewing “kings, presidents and voting” means that rough consensus evolves through strong-minded individuals either truly coming to an agreement, or splitting off to pursue their own dissenting ideas. Conflicts are not sorted out through compromises that leave everybody unhappy. Instead they are sorted out through the principle futurist Bob Sutton identified as critical for navigating uncertainty: strong views, weakly held.

Pragmatists, unlike the authoritarian high-modernist architects studied by James Scott, hold strong views on the basis of having contributed running code rather than abstract visions. But they also recognize others as autonomous peers, rather than as unquestioning subordinates or rivals. Faced with conflict, they are willing to work hard to persuade others, be persuaded themselves, or leave.

Rough consensus favors people who, in traditional organizations, would be considered disruptive and stubborn: these are exactly the people prone to “breaking smart.” In its most powerful form, rough consensus is about finding the most fertile directions in which to proceed rather than uncovering constraints. Constraints in software tend to be relatively few and obvious. Possibilities, however, tend to be intimidatingly vast. Resisting limiting visions, finding the most fertile direction, and allying with the right people become the primary challenges.
In a process reminiscent of the “rule of agreement” in improv theater, ideas that unleash the strongest flood of follow-on builds tend to be recognized as the most fertile and adopted as the consensus. Collaborators who spark the most intense creative chemistry tend to be recognized as the right ones. The consensus is rough because it is left as a sense of direction, instead of being worked out into a detailed, purist vision.

This general principle of fertility-seeking has been repeatedly rediscovered and articulated in a bewildering variety of specific forms. The statements have names such as the principle of least commitment (planning software), the end-to-end principle (network design), the procrastination principle (architecture), optionality (investing), paving the cowpaths (interface design), lazy evaluation (language design) and late binding (code execution). While the details, assumptions and scope of applicability of these different statements vary, they all amount to leaving the future as free and unconstrained as possible, by making as few commitments as possible in any given local context.
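To make the flavor of these statements concrete, here is a minimal sketch in Python (all names are illustrative inventions) of lazy evaluation as deferred commitment: nothing is computed until something downstream actually demands it, so the program commits to as little as possible at each step.

```python
# A minimal sketch (illustrative names throughout) of lazy evaluation as
# deferred commitment: values are produced only when a caller demands them.

def candidate_designs():
    """Yield design variants on demand instead of enumerating them up front."""
    n = 0
    while True:
        yield f"variant-{n}"  # each variant exists only once it is requested
        n += 1

# An eager, waterfall-style plan would generate and rank every variant in
# advance. The lazy version commits to nothing beyond the next request:
designs = candidate_designs()
first_three = [next(designs) for _ in range(3)]  # only three variants ever built
print(first_three)  # ['variant-0', 'variant-1', 'variant-2']
```

The same late-binding spirit recurs at every scale, from language runtimes to product roadmaps.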
The principle is in fact an expression of laissez-faire engineering ethics. Donald Knuth, another software pioneer, captured the ethical dimension with his version: premature optimization is the root of all evil. The principle is the deeper reason autonomy and creativity can migrate downstream to hands-on decision-making. Leaving more decisions for the future also leads to devolving authority to those who come later.

Such principles might seem dangerously playful and short-sighted, but under conditions of increasing abundance, with falling costs of failure, they turn out to be wise. It is generally smarter to assume that problems that seem difficult and important today might become trivial or be rendered moot in the future. Behaviors that would be short-sighted in the context of scarcity become far-sighted in the context of abundance.

The original design of the Mosaic browser, for instance, reflected the optimistic assumption that everybody would have high-bandwidth access to the Internet in the future, a statement that was not true at the time, but is now largely true in the developed world. Today, many financial technology entrepreneurs are building products based on the assumption that cryptocurrencies will be widely adopted and accepted. Underlying all such optimism about technology is an optimism about humans: a belief that those who come after us will be better informed and have more capabilities, and therefore able to make more creative decisions.

The consequences of this optimistic approach are radical. Traditional processes of consensus-seeking drive towards clarity in long-term visions but are usually fuzzy on immediate next steps. By contrast, rough consensus in software deliberately seeks ambiguity in long-term outcomes and extreme clarity in immediate next steps. It is a heuristic that helps correct the cognitive bias behind Amara’s Law. Clarity in next steps counteracts the tendency to overestimate what is possible in the short term, while comfort with ambiguity in visions counteracts the tendency to underestimate what is possible in the long term. At an ethical level, rough consensus is deeply anti-authoritarian, since it avoids constraining the freedoms of future stakeholders simply to allay present anxieties. The rejection of “voting” in the IETF model is a rejection of a false sense of egalitarianism, rather than a rejection of democratic principles.

In other words, true north in software is often the direction that combines ambiguity and evidence of fertility in the most alluring way: the direction of maximal interestingness.3

The decade after the dot com crash of 2000 demonstrated the value of this principle clearly. Startups derided for prioritizing “growth in eyeballs” (an “interestingness” direction) rather than clear models of steady-state profitability (a self-limiting purist vision of an idealized business) were eventually proven right. Iconic “eyeball”-based businesses, such as Google and Facebook, turned out to be highly profitable. Businesses which prematurely optimized their business model in response to revenue anxieties limited their own potential and choked off their own growth.

The great practical advantage of this heuristic is that the direction of maximal interestingness can be very rapidly updated to reflect new information, by evolving the rough consensus. The term pivot, introduced by Eric Ries as part of the Lean Startup framework, has recently gained popularity for such reorientation.
A pivot allows the direction of development to change rapidly, without a detailed long-term plan. It is enough to figure out experimental next steps. This ability to reorient and adopt new mental models quickly (what military strategists call a fast transient4) is at the heart of agility.

The response to new information is exactly the reverse in authoritarian development models. Because such models are based on detailed purist visions that grow more complex over time, it becomes increasingly hard to incorporate new data. As a result, the typical response to new information is to label it as an irrelevant distraction, reaffirm commitment to the original vision, and keep going. This is the runaway-train-wreck scenario. On the other hand, if the new information helps ideological opposition cohere within a democratic process, a competing purist vision can emerge. This leads to the stalled-train scenario.

The reason rough consensus avoids both these outcomes is that it is much easier to
agree roughly on the most interesting direction than to either update a complex, detailed vision or bring two or more conflicting complex visions into harmony.

For this to work, an equally pragmatic implementation philosophy is necessary. One that is very different from the authoritarian high-modernist way, or as it is known in software engineering, the waterfall model (named for the way high-level purist plans flow unidirectionally towards low-level implementation work). Not only does such a pragmatic implementation philosophy exist, it works so well that running code actually tends to outrun even the most uninhibited rough consensus process without turning into a train wreck. One illustration of this dynamic is that successful software tends to get both used and extended in ways that the original creators never anticipated – and are often pleasantly surprised by, and sometimes alarmed by. This is of course the well-known agile model. We will not get into the engineering details,5 but what matters are the consequences of using it.

The biggest consequence is this: in the waterfall model, execution usually lags vision, leading to a deficit-driven process. By contrast, in working agile processes, running code races ahead, leaving vision to catch up, creating a surplus-driven process.

Both kinds of gaps contain lurking unknowns, but of very different sorts. The surplus in the case of working agile processes is the source of many pleasant surprises: serendipity. The deficit in the case of waterfall models is the source of what William Boyd called zemblanity: “unpleasant unsurprises.”

In software, waterfall processes fail in predictable ways, like classic Greek tragedies. Agile processes on the other hand, can lead to snowballing serendipity, getting luckier and luckier, and succeeding in unexpected ways. The reason is simple: waterfall plans constrain the freedom of future participants, leading them to resent and rebel against the grand plan in predictable ways. By contrast, agile models empower future participants in a project, catalyzing creativity and unpredictable new value.

The engineering term for the serendipitous, empowering gap between running code and governing vision has now made it into popular culture in the form of a much-misunderstood idea: perpetual beta.

[1] See the essay by Paul Graham, Hackers and Painters.
[2] Modern cloud-computing datacenters often use modular architectures with racks of servers mounted within shipping containers. This allows them to be easily moved, swapped out or added.
[3] The importance of the “interestingness” of work extends far beyond software processes. As Edmund Phelps (see footnote 2 of Towards a Mass Flourishing) notes, based on data from the World Values Survey, “How the survey respondents in a country valued the ‘interestingness of a job’ (c020 in the WVS classification) was significantly related to how well the country scored in several dimensions of economic performance.”
[4] Fast transient is a term of art in a military doctrine known as maneuver warfare. Maneuver warfare is descended from a long tradition dating back to Sun Tzu’s Art of War and the German Blitzkrieg model in World War II. In its contemporary form, it was developed by Col. John Boyd of the US Air Force. The Lean Startup movement is in many ways a simplified version of a core concept in Boydian maneuver warfare: the OODA loop (Observe, Orient, Decide and Act).
The Lean Startup notion of pivot corresponds roughly to the idea of a rapid reorientation via a fast transient. A good discussion of the application of maneuver warfare concepts to business environments can be found in Chet Richards’ excellent book, Certain to Win.
[5] For technologists interested in learning agile methodologies, there are many excellent books, such as The Principles of Product Development Flow: Second Generation Lean Product Development by Donald G. Reinertsen, and active communities of practitioners constantly evolving the state of the art.

----------------Running Code and Perpetual Beta

When Google’s Gmail service finally exited beta status in July 2009, five years after it was launched, it already had over 30 million users. By then, it was the third largest free email provider after Yahoo and Hotmail, and was growing much faster than either.1
For most of its users, it had already become their primary personal email service.

The beta label on the logo, indicating experimental prototype status, had become such a running joke that when it was finally removed, the project team included a whimsical “back to beta” feature, which allowed users to revert to the old logo. That feature itself was part of a new section of the product called Gmail Labs: a collection of settings that allowed users to turn on experimental features. The idea of perpetual beta had morphed into permanent infrastructure within Gmail for continuous experimentation.

Today, this is standard practice: all modern web-based software includes scaffolding for extensive ongoing experimentation within the deployed production site or smartphone app backend (and beyond, through developer APIs2). Some of it is even visible to users. In addition to experimental features that allow users to stay ahead of the curve, many services also offer “classic” settings that allow them to stay behind the curve — for a while. The best products use perpetual beta as a way to lead their users towards richer, more empowered behaviors, instead of following them through customer-driven processes. Backward compatibility is limited to situations of pragmatic need, rather than being treated as a religious imperative.

The Gmail story contains an answer to the obvious question about agile models you might ask if you have only experienced waterfall models: How does anything ambitious get finished by groups of stubborn individuals heading in the foggiest possible direction of “maximal interestingness” with neither purist visions nor “customer needs” guiding them?

The answer is that it doesn’t get finished. But unlike in waterfall models, this does not necessarily mean the product is incomplete. It means the vision is perpetually incomplete and growing in unbounded ways, due to ongoing evolutionary experiments. When this process works well, what engineers call technical debt can get transformed into what we might call technical surplus.3 The parts of the product that lack satisfying design justifications represent the areas of rapid innovation. The gaps in the vision are sources of serendipitous good luck. (If you are a Gmail user, browsing the “Labs” section might lead you to some serendipitous discoveries: features you did not know you wanted might already exist unofficially.)

The deeper significance of perpetual beta culture in technology often goes unnoticed: in the industrial age, engineering labs were impressive, enduring buildings inside which experimental products were created. In the digital age, engineering labs are experimental sections inside impressive, enduring products. Those who bemoan the gradual decline of famous engineering labs like AT&T Bell Labs and Xerox PARC often miss the rise of even more impressive labs inside major modern products and their developer ecosystems.

Perpetual beta is now such an entrenched idea that users expect good products to evolve rapidly and serendipitously, continuously challenging their capacity to learn and adapt. They accept occasional non-critical failures as a price worth paying. Just as the ubiquitous under construction signs on the early static websites of the 1990s gave way to dynamic websites that were effectively always “under construction,” software products too have acquired an open-ended evolutionary character.
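As a rough sketch of what such experimentation scaffolding can look like (the flag names and rollout mechanics here are invented for illustration, not any particular product’s API), a feature-flag layer deterministically buckets each user into an experiment cohort, so experiments can be widened, narrowed or removed without a separate release:

```python
import hashlib

# Hypothetical experimentation scaffolding: flag names and rollout fractions
# are invented for illustration, not drawn from any real product.
FLAGS = {
    "experimental_compose": 0.10,  # visible to ~10% of users, "ahead of the curve"
    "classic_inbox": 0.95,         # a "behind the curve" setting, slowly retired
}

def is_enabled(flag: str, user_id: int) -> bool:
    """Deterministically bucket a user, so each user sees a stable experience."""
    rollout = FLAGS.get(flag, 0.0)
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # map the hash into [0, 1]
    return bucket < rollout

# The deployed product branches on the flag at runtime.
if is_enabled("experimental_compose", user_id=42):
    print("render the experimental feature")
else:
    print("render the stable default")
```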
Just as rough consensus drives ideation towards “maximal interestingness,” agile processes drive evolution towards the regimes of greatest operational uncertainty, where failures are most likely to occur. In well-run modern software processes, not only is the resulting chaos tolerated, it is actively invited. Changes are often deliberately made at seemingly the worst possible times. Intuit, a maker of tax software, has a history of making large numbers of changes and updates at the height of tax season.

Conditions that cause failure, instead of being cordoned off for avoidance in the future, are deliberately and systematically recreated and explored further. There are even automated systems designed to deliberately cause failures in production systems, such as Chaos Monkey, a system developed by Netflix to randomly take production servers offline, forcing the system to heal itself or die trying.
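A toy sketch of this idea in Python (the fleet and the termination step are stand-ins; the real Chaos Monkey operates on live cloud instances) shows how little machinery the concept requires:

```python
import random

def chaos_round(fleet, kill_probability=0.05):
    """Randomly remove servers from a fleet to test that the system self-heals."""
    for server in list(fleet):
        if random.random() < kill_probability:
            print(f"chaos: terminating {server}")
            fleet.remove(server)  # a real tool would issue an actual termination call
    return fleet

# A pretend fleet of twenty servers; monitoring would then verify that traffic
# reroutes and capacity recovers after each round.
fleet = [f"server-{i}" for i in range(20)]
surviving = chaos_round(fleet)
print(f"{len(surviving)} of 20 servers survived this round")
```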
The glimpses of perpetual beta that users can see are dwarfed by unseen backstage experimentation.

This is neither perverse, nor masochistic: it is necessary to uncover hidden risks in experimental ideas early, and to quickly resolve gridlocks with data.

The origins of this curious philosophy lie in what is known as the release early, release often (RERO) principle, usually attributed to Linus Torvalds, the primary architect of the Linux operating system. The idea is exactly what it sounds like: releasing code as early as possible, and as frequently as possible while it is actively evolving.

What makes this possible in software is that most software failures do not have life-threatening consequences.4 As a result, it is usually faster and cheaper to learn from failure than to attempt to anticipate and accommodate it via detailed planning (which is why the RERO principle is often restated in terms of failure as fail fast).

So crucial is the RERO mindset today that many companies, such as Facebook and Etsy, insist on new hires contributing and deploying a minor change to mission-critical systems on their very first day. Companies that rely on waterfall processes, by contrast, often put new engineers through years of rotating assignments before trusting them with significant autonomy.

To appreciate just how counterintuitive the RERO principle is, and why it makes traditional engineers nervous, imagine a car manufacturer rushing to put every prototype into “experimental” mass production, with the intention of discovering issues through live car crashes. Or supervisors in a manufacturing plant randomly unplugging or even breaking machinery during peak demand periods. Even lean management models in manufacturing do not go this far. Due to their roots in scarcity, lean models at best mitigate the problems caused by waterfall thinking. Truly agile models on the other hand, do more: they catalyze abundance.

Perhaps the most counter-intuitive consequence of the RERO principle is this: where engineers in other disciplines attempt to minimize the number of releases, software engineers today strive to maximize the frequency of releases. The industrial-age analogy here is the stuff of comedy science fiction: an intern launching a space mission just to ferry a single paper-clip to the crew of a space station. This tendency makes no sense within waterfall models, but is a necessary feature of agile models. The only way for execution to track the changing direction of the rough consensus as it pivots is to increase the frequency of releases. Failed experiments can be abandoned earlier, with lower sunk costs. Successful ones can migrate into the product as fast as hidden risks can be squeezed out. As a result, a lightweight sense of direction — rough consensus — is enough. There is no need to navigate by an increasingly unattainable utopian vision.

Which raises an interesting question: what happens when there are irreconcilable differences of opinion that break the rough consensus?

[1] See 2009 CNET article Yahoo Mail still king as Gmail lurks. As of 2015, Gmail has close to a billion users.
[2] Application Programming Interface, a mechanism for external parties to “plug in” programmatically into a product.
[3] Technical debt, a notion introduced by Ward Cunningham (the inventor of the wiki) in 1992, is conceptually similar to debt in the economic sense. It usually refers to known pending work, such as replacing temporary expedient hacks with ideal solutions, and “refactoring” to improve inefficiently structured code.
The “debt” is the gap between the idealized version of the feature and the one actually in place. In somewhat looser usage, it can also refer, in waterfall processes, to unfinished features in the authoritarian vision that may only exist as stubs in the code or unimplemented specifications. In the context of agile processes, however, all such debt, created through either expedience or incompleteness, is not necessarily “must do” work. If an experimental feature is not actually adopted by users, or rendered unnecessary by a pivot, there may be no point in replacing an expedient solution with an idealized one. Technical surplus can analogously be thought of as the unanticipated growth opportunities (or optionality in the sense of Nassim Taleb in Antifragile) created by users doing creative and unexpected things with existing features. Such opportunities require an expansion in the vision. The surplus comprises the spillover value of unanticipated uses.
As in economics, a project with high technical debt is in a fragile state and vulnerable to zemblanity. A project with high technical surplus is in an antifragile state and open to serendipity.
[4] This is not true of all software of course: there is a different development regime for code with life-threatening consequences. Code developed in such regimes tends to evolve far more slowly and is often between 10-30 years behind the curve. This is one reason for the perception that trivial applications dominate the industry: it takes longer for mission-critical code in life-threatening applications to be updated.

----------------Software as Subversion

If creating great software takes very little capital, copying great software takes even less. This means dissent can be resolved in an interesting way that is impossible in the world of atoms. Under appropriately liberal intellectual property regimes, individuals can simply take a copy of the software and continue developing it independently. In software, this is called forking. Efforts can also combine forces, a process known as merging. Unlike the superficially similar process of spin-offs and mergers in business, forking and merging in software can be non-zero-sum.

Where democratic processes would lead to gridlock and stalled development, conflicts under rough consensus and running code and release early, release often processes lead to competing, divergent paths of development that explore many possible worlds in parallel.

This approach to conflict resolution is so radically unfamiliar1 that it took nearly three decades even for pragmatic hackers to recognize forking as something to be encouraged. Twenty-five years passed between the first use of the term “fork” in this sense (by Unix hacker Eric Allman in 1980) and the development of a tool that encouraged rather than discouraged it: git, developed by Linus Torvalds in 2005. Git is now the most widely used code management system in the world, and the basis for GitHub, the leading online code repository.

In software development, the model works so well that a nearly two-century-old industrial model of work is being abandoned for one built around highly open collaboration, promiscuous forking and opt-in staffing of projects.

The dynamics of the model are most clearly visible in certain modern programming contests, such as the regular Matlab programming contests conducted by MathWorks. Such events often allow contestants to frequently check their under-development code into a shared repository. In the early stages, such sharing allows for the rapid dissemination of the best design ideas through the contestant pool. Individuals effectively vote for the most promising ideas by appropriating them for their own designs, in effect forming temporary collaborations. Hoarding ideas or code tends to be counterproductive due to the likelihood that another contestant will stumble on the same idea, improve upon it in unexpected ways, or detect a flaw that allows it to “fail fast.” But in the later stages, the process creates tricky competitive conditions, where speed of execution beats quality of ideas. Not surprisingly, the winner is often a contestant who makes a minor, last-minute tweak to the best submitted solution, with seconds to spare.
Such contests — which exhibit in simplified form the dynamics of the open-source community as well as practices inside leading companies — not only display the power of RCRC and RERO, they demonstrate why promiscuous forking and open sharing lead to better overall outcomes.

Software that thrives in such environments has a peculiar characteristic: what computer scientist Richard Gabriel described as worse is better.2 Working code that prioritizes visible simplicity, catalyzing effective collaboration and rapid experimentation, tends to spread rapidly and unpredictably. Overwrought code that prioritizes authoritarian, purist concerns such as formal correctness, consistency, and completeness tends to die out.

In the real world, teams form through self-selection around great code written by one or two linchpin programmers rather than contest challenges. Team members typically know each other at least casually, which means product teams tend to grow
to a few dozen at most. Programmers who fail to integrate well typically leave in short order. If they cannot or do not leave, they are often explicitly told to do nothing and stay out of the way, and actively shunned and cut out of the loop if they persist. While the precise size of an optimal team is debatable, Jeff Bezos’ two-pizza rule suggests that the number is no more than about a dozen.3

In stark contrast to the quality code developed by “worse is better” processes, software developed by teams of anonymous, interchangeable programmers, with bureaucratic top-down staffing, tends to be of terrible quality. Turning Gabriel’s phrase around, such software represents a “better is worse” outcome: utopian visions that fail predictably in implementation, if they ever progress beyond vaporware at all.

The IBM OS/2 project of the early nineties,4 conceived as a replacement for the then-dominant operating system, MS-DOS, provides a perfect illustration of “better is worse.” Each of the thousands of programmers involved was expected to design, write, debug, document, and support just 10 lines of code per day. Writing more than 10 lines was considered a sign of irresponsibility. Project estimates were arrived at by first estimating the number of lines of code in the finished project, dividing by the number of days allocated to the project, and then dividing by 10 to get the number of programmers to assign to the project. Needless to say, programmers were considered completely interchangeable. The nominal “planning” time required to complete a project could be arbitrarily halved at any time, by doubling the number of assigned engineers.5 At the same time, dozens of managers across the company could withhold approval and hold back a release, a process ominously called “nonconcurrence.”

“Worse is better” can be a significant culture shock to those used to industrial-era work processes. The most common complaint is that a few rapidly growing startups and open-source projects typically corner a huge share of the talent supply in a region at any given time, making it hard for other projects to grow. To add insult to injury, the process can at times seem to over-feed the most capricious and silly projects while starving projects that seem more important. This process of the best talent unpredictably abandoning other efforts and swarming a few opportunities is a highly unforgiving one. It creates a few exceptional winning products and vast numbers of failed ones, leaving those with strong authoritarian opinions about “good” and “bad” technology deeply dissatisfied.

But not only does the model work, it creates vast amounts of new wealth through both technology startups and open-source projects. Today, its underlying concepts like rough consensus, pivot, fast failure, perpetual beta, promiscuous forking, opt-in and worse is better are carrying over to domains beyond software and regions beyond Silicon Valley. Wherever they spread, limiting authoritarian visions and purist ideologies retreat.

There are certainly risks with this approach, and it would be Pollyannaish to deny them. The state of the Internet today is the sum of millions of pragmatic, expedient decisions made by hundreds of thousands of individuals delivering running code, all of which made sense at the time. These decisions undoubtedly contributed to the serious problems facing us today, ranging from the poor security of Internet protocols to the ones being debated around Net Neutrality.
But arguably, had the pragmatic approach not prevailed, the Internet would not have evolved significantly beyond the original ARPANET at all. Instead of a thriving Internet economy that promises to revitalize the old economy, the world at large might have followed the Japanese down the dead-end purist path of fifth-generation mainframe computing.

Today, moreover, several solutions to such serious legacy problems are being pursued, such as blockchain technology (the software basis for cryptocurrencies like Bitcoin). These are vastly more creative than solutions that were debated in the early days of the Internet, and reflect an understanding of problems that have actually been encountered, rather than the limiting anxieties of authoritarian high-modernist visions. More importantly, they validate early decisions to resist premature optimization and leave as much creative room for future innovators as possible. Of course, if emerging solutions succeed, more lurking problems
will surface that will in turn need to be solved, in the continuing pragmatic tradition of perpetual beta.

Our account of the nature of software ought to suggest an obvious conclusion: it is a deeply subversive force. For those caught on the wrong side of this force, being on the receiving end of Blitzkrieg operations by a high-functioning agile software team can feel like mounting zemblanity: a sense of inevitable doom.

This process has by now occurred often enough that a general sense of zemblanity has overcome the traditional economy at large. Every aggressively growing startup seems like a special-forces team with an occupying army of job-eating machine-learning programs and robots following close behind.

Internally, the software-eaten economy is even more driven by disruption: the time it takes for a disruptor to become a disruptee has been radically shrinking in the last decade — and startups today are highly aware of that risk. That awareness helps explain the raw aggressiveness that they exhibit.

It is understandable that to people in the traditional economy, software eating the world sounds like a relentless war between technology and humanity. But exactly the opposite is the case. Technological progress, unlike war or Wall Street-style high finance, is not a zero-sum game, and that makes all the difference. The Promethean force of technology is today, and always has been, the force that has rescued humanity from its worst problems just when it seemed impossible to avert civilizational collapse. With every single technological advance, from the invention of writing to the invention of television, those who have failed to appreciate the non-zero-sum nature of technological evolution have prophesied doom and been proven wrong. Every time, they have made some version of the argument: this time it is different, and been proven wrong. Instead of enduring civilizational collapse, humanity has instead ascended to a new level of well-being and prosperity each time.

Of course, this poor record of predicting collapses is not by itself proof that it is no different this time. There is no necessary reason the future has to be like the past. There is no fundamental reason our modern globalized society is uniquely immune to the sorts of game-ending catastrophes that led to the fall of the Roman empire or the Mayan civilization. The case for continued progress must be made anew with each technological advance, and new concerns, such as climate change today, must be seriously considered.

But concerns that the game might end should not lead us to limit ourselves to what philosopher James Carse6 called finite game views of the world, based on “winning” and arriving at a changeless, pure and utopian state as a prize. As we will argue in the next essay, the appropriate mindset is what Carse called an infinite game view, based on the desire to continue playing the game in increasingly generative ways. From an infinite game perspective, software eating the world is in fact the best thing that can happen to the world.

[1] The idea of forking as a mechanism for dissent enabled by the zero-copying-cost (or equivalently, non-rivalrous) nature of software is closely related to the notion of exit in Albert O. Hirschman’s well-known model of dissent as an exit-versus-voice choice. One way to understand the nature of software is that it favors exit over voice as a means of expressing dissent.
Beyond code-sharing, this has led, for instance, to the popularity of the law of two feet at informal unconferences in technology, where the social norm is to leave talks and sessions that you are not interested in, rather than staying merely to be polite to the speaker. The idea that exit might be becoming a better option for dissent in a broader political sense, suggested by Balaji Srinivasan in a 2013 talk, sparked a furore among mainstream political commentators who read secessionist sentiments and abdication of responsibilities into the idea. As Balaji and others have pointed out since, there is no such necessary association. Software opens up creative possibilities for exit-as-default governance models, such as Bruno Frey’s notion of Functional, Overlapping, Competing Jurisdictions (FOCJ), or ideas like charter cities which envision city-level equivalents of the law of two feet. Exit models can also be “soft”, involving dissenting behaviors expressed via choice of virtual contexts. At a more mundane level, exit-driven political dissent is already a major economic phenomenon. The idea of regulatory arbitrage — individuals and corporations moving across
borders to take advantage of friendlier political regimes — is already a reality within nations. Given the ongoing experimentation with clever new notions of citizenship, such as the e-citizenship initiative in Estonia, these dynamics can only strengthen.
[2] The phrase worse is better has had a colorful history since Gabriel coined it in a 1989 essay. Gabriel initially intended it only as half-serious sardonic commentary on an emerging trend in programming rather than a value judgment. Criticism and reconsideration led him to retreat from a casual endorsement of the idea in a follow-on pseudonymously authored article titled Worse is Better is Worse. The story gets more complicated from there on and it is worth reading in his own words. The end result for Gabriel personally appears to have been decisive ambivalence. In the software industry, however, the phrase has acquired a colorful and polarizing life of its own. For pragmatists, it has become a statement of a powerful principle and operating truth. For purists, it has become a constant lament.
[3] See the 2011 Wall Street Journal profile of Jeff Bezos, Birth of a Salesman, for one reference to the two-pizza rule. The idea has been part of technology industry folklore for much longer, however.
[4] This description is based on discussions with Marc Andreessen about his recollections of working at IBM in the early nineties.
[5] By 1975 it was already well known that adding programmers to a delayed project delays it further (a principle known as Brooks’ Law). See The Mythical Man-Month by Frederick Brooks. The principle, ironically, emerged from an IBM project.
[6] James Carse’s dense blend of poetry and metaphysics in Finite and Infinite Games is not, strictly speaking, of much direct relevance to the ideas in these essays. But for philosophically inclined readers, it probably provides the most thorough philosophical case for pragmatic over purist approaches, the hacker ethos over the credentialist ethos, and the Promethean aesthetic over the pastoral aesthetic.

----------------Prometheans and Pastoralists

The unique characteristics of software as a technological medium have an impact beyond the profession itself. To understand the broader impact of software eating the world, we have to begin by examining the nature of technology adoption processes.

A basic divide in the world of technology is between those who believe humans are capable of significant change, and those who believe they are not. Prometheanism is the philosophy of technology that follows from the idea that humans can, do and should change. Pastoralism, on the other hand, is the philosophy that change is profane. The tension between these two philosophies leads to a technology diffusion process characterized by a colloquial phrase popular in the startup world: first they ignore you, then they laugh at you, then they fight you, then you win.1

Science fiction writer Douglas Adams reduced the phenomenon to a set of three sardonic rules from the point of view of users of technology:

1. Anything that is in the world when you’re born is normal and ordinary and is just a natural part of the way the world works.
2. Anything that’s invented between when you’re fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it.
3. Anything invented after you’re thirty-five is against the natural order of things.
As both these folk formulations suggest, there is a certain inevitability to technological evolution, and a certain naivete to certain patterns of resistance. To understand why this is in fact the case, consider the proposition that technological evolution is path-dependent in the short term, but not in the long term.

Major technological possibilities, once uncovered, are invariably exploited in ways that maximally unleash their potential. While there is underutilized potential left, individuals compete and keep adapting in unpredictable ways to exploit that potential. All it takes is one thing: a thriving frontier of constant tinkering and diverse value systems must exist somewhere in the world.

Specific ideas may fail. Specific uses may not endure. Localized attempts to resist may succeed, as the existence of the Amish demonstrates. Some individuals may resist some aspects of the imperative to change successfully. Entire nations may
collectively decide to not explore certain possibilities. But with major technologies, it usually becomes clear very early on that the global impact is going to be of a certain magnitude and cause a corresponding amount of disruptive societal change. This is the path-independent outcome, and the reason there seems to be a “right side of history” during periods of rapid technological development.

The specifics of how, when, where and through whom a technology achieves its maximal impact are path-dependent. Competing to guess the right answers is the work of entrepreneurs and investors. But once the answers are figured out, the contingent path from “weird” to “normal” will be largely forgotten, and the maximally transformed society will seem inevitable with hindsight.

The ongoing evolution of ridesharing through conflict with the taxicab industry illustrates this phenomenon well. In January 2014 for instance, striking cab drivers in Paris attacked vehicles hired through Uber. The rioting cab drivers smashed windshields and slashed tires, leading to immediate comparisons in the media to the original pastoralists of industrialized modernity: the Luddites of the early 19th century.2

Like the Luddite movement, the reaction to ridesharing services such as Uber and Lyft is not resistance to innovative technology per se, but something larger and more complex: an attempt to limit the scope and scale of impact in order to prevent disruption of a particular way of life. As Richard Conniff notes in a 2011 essay in Smithsonian magazine:

As the Industrial Revolution began, workers naturally worried about being displaced by increasingly efficient machines. But the Luddites themselves “were totally fine with machines,” says Kevin Binfield, editor of the 2004 collection Writings of the Luddites. They confined their attacks to manufacturers who used machines in what they called “a fraudulent and deceitful manner” to get around standard labor practices. “They just wanted machines that made high-quality goods,” says Binfield, “and they wanted these machines to be run by workers who had gone through an apprenticeship and got paid decent wages. Those were their only concerns.”3

In his essay, Conniff argues that the original Luddites were simply fighting to preserve their idea of human values, and concludes that “standing up against technologies that put money or convenience above other human values” is necessary for a critical engagement of technology. Critics make similar arguments in every sector being eaten by software.

The apparent reasonableness of this view is deceptive: it is based on the wishful hope that entire societies can and should agree on what the term human values means, and use that consensus to decide which technologies to adopt. An unqualified appeal to “universal” human values is usually a call for an authoritarian imposition of decidedly non-universal values.

As the rideshare industry debates demonstrate, even consumers and producers within a single sector find it hard to achieve consensus on values. Protests by cab drivers in London in 2014 for instance, led to an increase in business4 for rideshare companies, clear evidence that consumers do not necessarily act in solidarity with incumbent producers based on shared “human values.”

It is tempting to analyze such conflicts in terms of classical capitalist or labor perspectives.
The result is a predictable impasse: capitalists emphasize increased supply driving prices down, while progressives focus on loss of jobs in the taxicab industry. Both sides attempt to co-opt the political loyalties of rideshare drivers. Capitalists highlight increased entrepreneurial opportunities, while progressives highlight increased income precarity. Capitalists like to label rideshare drivers free agents or micro-entrepreneurs, while progressives prefer labels like precariat (by analogy to proletariat) or scab. Both sides attempt to make the future determinate by force-fitting it into preferred received narratives using loaded terms.

Both sides also operate by the same sense of proportions: they exaggerate the importance of the familiar and trivialize the new. Apps seem trivial, while automobiles loom large as a motif of an entire century-old way of life. Societies organized around cars seem timeless, normal, moral and self-evidently necessary to preserve and extend into the future. The smartphone at first seems to add no more than a minor element of customer convenience within a way of life that cannot
possibly change. The value it adds to the picture is treated like a rounding error and ignored. As a result both sides see the conflict as a zero-sum redistribution of existing value: gains on one side, exactly offset by losses on the other side. But as Marshall McLuhan observed, new technologies change our sense of proportions.

Even today’s foggy view of a smartphone-centric future suggests that ridesharing is evolving from convenience to necessity. By sustaining cheaper and more flexible patterns of local mobility, ridesharing enables new lifestyles in urban areas. Young professionals can better afford to work in opportunity-rich cities. Low-income service workers can expand their mobility beyond rigid public transit and the occasional expensive emergency taxi ride. Small restaurants with limited working capital can use ridesharing-like services to offer delivery services. It is in fact getting hard to imagine how else transportation could work in a society with smartphones. The impact is shifting from the path-dependent phase, when it wasn’t clear whether the idea was even workable, to the non-path-dependent phase, where it seems inevitable enough that other ideas can be built on top.

Such snowballing changes in patterns of life are due to what economists call consumer surplus5 (increased spending power elsewhere due to falling costs in one area of consumption) and positive spillover effects6 (unexpected benefits in unrelated industries or distant geographies). For technologies with a broad impact, these are like butterfly effects: deceptively tiny causes with huge, unpredictable effects. Due to the unpredictability of surplus and spillover, the bulk of the new wealth created by new technologies (on the order of 90% or more) eventually accrues to society at large,7 rather than the innovators who drove the early, path-dependent phase of evolution. This is the macroeconomic analog to perpetual beta: execution by many outrunning visioning by a few, driving more bottom-up experimentation and turning society itself into an innovation laboratory.

Far from the value of the smartphone app being a rounding error in the rideshare industry debate, it in fact represents the bulk of the value. It just does not accrue directly to any of the participants in the overt, visible conflict.

If adoption models were entirely dictated by the taxicab industry, this value would not exist, and the zero-sum framing would become a self-fulfilling prophecy. Similarly, when entrepreneurs try to capture all or even most of the value they set out to create, the results are counterproductive: minor evolutionary advances that again make zero-sum outcomes a self-fulfilling prophecy. Technology publishing pioneer Tim O’Reilly captured the essence of this phenomenon with the principle, “create more value than you capture.” For the highest-impact products, the societal value created dwarfs the value captured.

These largely invisible surplus and spillover effects do more than raise broad living standards. By redirecting newly freed creative energy and resources down indeterminate paths, consumer surpluses and spillover effects actually drive further technological evolution in a non-zero-sum way. The bulk of the energy leaks away to drive unexpected innovations in unrelated areas. A fraction courses through unexpected feedback paths and improves the original innovation itself, in ways the pioneers themselves do not anticipate.
Similar unexpected feedback paths improve derivative inventions as well, vastly amplifying the impact beyond simple “technology diffusion.”

The story of the steam engine is a good illustration of both effects. It is widely recognized that spillover effects from James Watt’s steam engine, originally introduced in the Cornish mining industry, helped trigger the British industrial revolution. What is less well-known8 is that the steam engine itself was vastly improved by hundreds of unknown tinkerers adding “microinventions” in the decades immediately following the expiration of James Watt’s patents. Once an invention leaks into what Robert Allen calls “collective invention settings,” with a large number of individuals and firms freely sharing information and independently tinkering with an innovation, future evolution gathers unstoppable momentum and the innovation goes from “weird” to “new normal.” Besides the Cornish mining district in the early 1800s, the Connecticut Valley in the 1870s-1890s,9 Silicon Valley since 1950 and the
Shenzhen region of China since the 1990s are examples of flourishing collective invention settings. Together, such active creative regions constitute the global technology frontier: the worldwide zone of bricolage.

The path-dependent phase of evolution of a technology can take centuries, as Joel Mokyr shows in his classic, The Lever of Riches. But once it enters a collective invention phase, surplus and spillover effects gather momentum and further evolution becomes simultaneously unpredictable and inevitable. Once the inevitability is recognized, it is possible to bet on follow-on ideas without waiting for details to become clear. Today, it is possible to bet on a future based on ridesharing and driverless cars without knowing precisely what those futures will look like.

As consumers, we experience this kind of evolution as what Buckminster Fuller called ephemeralization: the seemingly magical ability of technology to do more and more with less and less.

This is most visible today in the guise of Moore’s Law, but ephemeralization is in fact a feature of all technological evolution. Potable water was once so hard to come by that many societies suffered from endemic water-borne diseases and were forced to rely on expensive and inefficient procedures like boiling water at home. Today, only around 10% of the world lacks such access.10 Diamonds were once worth fighting wars over. Today artificial diamonds, indistinguishable from natural ones, are becoming widely available.

The result is a virtuous cycle of increasing serendipity, driven by widespread lifestyle adaptation and cascades of self-improving innovation. Surplus and spillover creating more surplus and spillover. Brad DeLong’s slouching towards utopia for consumers and Edmund Phelps’ mass flourishing for producers. And when the virtuous cycle is powered by a soft, world-eating technology, the steady, cumulative impact is immense.

Both critics and enthusiasts of innovation deeply misunderstand the nature of this virtuous cycle. Critics typically lament lifestyle adaptations as degeneracy and call for a return to traditional values. Many enthusiasts, instead of being inspired by a sense of unpredictable, flourishing potential, are repeatedly seduced by specific visions of the Next Big Thing, sometimes derived rather literally from popular science fiction. As a result, they lament the lack of collective attention directed towards their pet societal projects. The priorities of other enthusiasts seem degenerate.

The result in both cases is the same: calls for reining in the virtuous cycle. Both kinds of lament motivate efforts to concentrate and deploy surpluses in authoritarian ways (through retention of excessive monopolistic profits by large companies or government-led efforts funded through taxation) and contain spillover effects (by restricting access to new technological capabilities). Both are ultimately attempts to direct creative energies down a few determinate paths. Both are driven by a macroeconomic version of the Luddite hope: that it is possible to enjoy the benefits of non-zero-sum innovation without giving up predictability. For critics, it is the predictability of established patterns of life. For Next Big Thing enthusiasts, it is a specific aspirational pattern of life.

Both are varieties of pastoralism, the cultural cousin of purist approaches in engineering. Pastoralism suffers from precisely the same, predictable authoritarian high-modernist failure modes.
Like purist software visions, pastoralist visions too are marked by an obsessive desire to permanently win a specific, zero-sum finite game rather than to keep playing the non-zero-sum infinite game.

When the allure of pastoralist visions is resisted, and the virtuous cycle is allowed to work, we get Promethean progress. This is unpredictable evolution in the direction of maximal societal impact, unencumbered by limiting deterministic visions. Just as the principle of rough consensus and running code creates great software, consumer surplus and spillover effects create great societies. Just as pragmatic and purist development models lead to serendipity and zemblanity in engineering respectively, Promethean and pastoral models lead to serendipity and zemblanity at the level of entire societies.

When pastoralist calls for actual retreat are heeded, the technological frontier migrates elsewhere, often causing centuries of stagnation. This was precisely what happened in China and the Islamic world around the fifteenth century, when the technological frontier shifted to Europe.

Heeding the other kind of pastoralist call, to pursue a determinate Next Big Thing at the expense of many indeterminate small things, leads to somewhat better results. Such models can deliver impressive initial gains, but invariably create a hardening landscape of authoritarian, corporatist institutions. This triggers a vicious cycle that predictably stifles innovation.

The Apollo program, for instance, fulfilled John F. Kennedy's call to put humans on the moon within the decade. It also led to the inexorable rise of the military-industrial complex that his predecessor, Dwight D. Eisenhower, had warned against. The Soviets fared even worse: they made equally impressive strides in the space race, but the society they created collapsed on itself under the weight of authoritarianism. What prevented that outcome in the United States was the regional technological frontier migrating to the West Coast, and breaking smart from the military-industrial complex in the process. This allowed some of the creative energy being gradually stifled to escape to a more favorable environment.

With software eating the world, we are again witnessing predictable calls for pastoralist development models. Once again, the challenge is to resist the easy answers on offer.

[1] This quote is often attributed to Gandhi, but the attribution appears to be apocryphal. Ironically, the most likely origin of the phrase is a 1918 speech by trade unionist Nicholas Klein, whose phrasing was: "First they ignore you. Then they ridicule you. And then they attack you and want to burn you. And then they build monuments to you." The earliest clear formulation of the idea (though not the phrase) in the sense of a pattern of resistance to a new technology is probably due to Elting Morison's 1968 study, Men, Machines, and Modern Times. Morison's model, based on a careful study of the introduction of improved naval gunnery technology in the US Navy by William Sims at the end of the nineteenth century, is a three-stage model that could be paraphrased as: first they ignore you, then they argue that your idea is impossible, then they resort to name-calling. Today, the most familiar formulation of the idea is in terms of Clayton Christensen's narrower notion of disruption, but the general pattern of resistance can be seen even when the technology is introduced in non-disruptive ways, through internal evangelism, as in the case of Sims and naval gunnery.

[2] See for example Uber and the Neo-Luddites, Salon, 2014.

[3] Richard Conniff, What the Luddites Really Fought Against, Smithsonian Magazine, 2011.

[4] See for example the story Uber's sign-ups jump 850% after strike 'own goal', featured on CNBC in 2014, on the effects of a strike by London cab drivers.

[5] Consumer surplus is the difference between what consumers are willing to pay for a service and what they actually pay, which allows them to spend in new ways.

[6] Spillover is, loosely, the benefit that accrues in one sector due to innovation in another sector. The term is also used to refer to such benefits spreading across national borders.

[7] William D. Nordhaus, Schumpeterian Profits and the Alchemist Fallacy, Yale Economic Applications and Policy Discussion Paper No. 6, 2005.

[8] Alessandro Nuvolari, Collective invention during the British Industrial Revolution: the case of the Cornish pumping engine, Camb. J. Econ. (2004) 28(3):347-363.
[9] The technological flourishing around the Springfield and Harper's Ferry armories in the Connecticut Valley region is what inspired Mark Twain's novel, A Connecticut Yankee in King Arthur's Court.

[10] By 2010, the Millennium Development Goals relating to water had already been exceeded, with 2 billion people gaining access since 1990. The projected figure for 2015 is 8%. While access to water is not the same as water security, this is nevertheless remarkable progress. See: Global Access to Clean Drinking Water and Sanitation: U.S. and International Programs, Tiaji Salaam-Blyther, Congressional Research Service, September 2012.

-----------------The Allure of Pastoralism

In art, the term pastoral refers to a genre of painting and literature based on romanticized and idealized portrayals of rustic life, usually for urban audiences with no direct experience of the actual squalor and oppression of pre-industrial rural life.

Within religious traditions, pastorals may also be associated with the motifs and symbols of uncorrupted states of being. In the West, for instance, pastoral art and literature often evoke the Garden of Eden story. In Islamic societies, the first caliphate is often evoked in a similar way.

The notion of a pastoral is useful for understanding idealized understandings of any society, real or imagined, past, present or future. In Philip Roth's American Pastoral, for instance, the term is an allusion to the idealized American lifestyle enjoyed by the protagonist Seymour "Swede" Levov, before it is ruined by the social turmoil of the 1960s.

At the center of any pastoral we find essentialized notions of what it means to be human, like Adam and Eve or William Whyte's Organization Man, arranged in a particular social order (patriarchal in this case). From these archetypes we get to pure and virtuous idealized lifestyles. Lifestyles that deviate from these understandings seem corrupt and vice-driven. The belief that "people don't change" is at once an approximation and a prescription: people should not change, except to better conform to the ideal they are assumed to already approximate. The belief justifies building technology to serve the predictable and changeless ideal, and labeling unexpected uses of technology degenerate.

We owe our increasingly farcical yearning for jetpacks and flying cars, for instance, to what we might call the "World Fairs pastoral," since the vision was strongly shaped by mid-twentieth-century World Fairs. Even at the height of its influence, it was already being satirized by television shows like The Flintstones and The Jetsons. The shows portrayed essentially the 1950s social order, full of Organization Families, transposed to past and future pastoral settings. The humor in the shows rested on audiences recognizing the escapist non-realism.

The World Fairs pastoral, inspired strongly by the aerospace technologies of the 1950s, represented a future imagined around flying cars, jetpacks and glamorous airlines like Pan Am. Flying cars merely updated a familiar nuclear-family lifestyle. Jetpacks appealed to the same individualist instincts as motorcycles. Airlines like Pan Am, besides being an integral part of the military-industrial complex, owed their "glamor" in part to their deliberate perpetuation of the sexist culture of the fifties. Within this vision, truly significant developments, like the rise of vastly more efficient low-cost airlines in the 70s, seemed like decline from a "Golden Age" of air travel.

Arguably, the aerospace future that actually unfolded was vastly more interesting than the one envisioned in the World Fairs pastoral. Low-cost, long-distance air travel opened up a globalized and multicultural future, broke down barriers between insular societies, and vastly increased global human mobility. Along the way, it helped dismantle much of the institutionalized sexism behind the glamour of the airline industry. These developments were enabled in large part by post-1970s software technologies,[1] rather than by improvements in core aerospace engineering technologies. These were precisely the technologies that were beginning to "break smart" out of the stifling influence of the military-industrial complex.
In 2012, thanks largely to these developments, for the first time in history there were over a billion international tourist arrivals worldwide.[2] Software had eaten and democratized elitist air travel. Today, software is continuing to eat airplanes in deeper ways, driving the current explosion in drone technology. Again, those fixated on jetpacks and flying cars are missing the actual, much more interesting action because it is not what they predicted. When pastoralists pay attention to drones at all, they see them primarily as morally objectionable military weapons. The fact that they replace technologies of mass slaughter such as carpet bombing, and the growing number of non-military uses, are ignored.

In fact the entire World Fairs pastoral is really a case of privileged members of society, presuming to speak for all, demanding "faster horses" for all of society (in the sense of the likely apocryphal[3] quote attributed to Henry Ford, "If I'd asked my customers what they wanted, they would have demanded faster horses.")

Fortunately for the vitality of the United States and the world at large, the future proved wiser than any limiting pastoral vision of it. The aerospace story is just one among many that suddenly appear in a vastly more positive light once we drop pastoral obsessions and look at the actual unfolding action. Instead of the limited things we could imagine in the 1950s, we got much more impactful things. Software eating aerospace technology allowed it to continue progressing in the direction of maximum potential.

If pastoral visions are so limiting, why do we get so attached to them? Where do they even come from in the first place? Ironically, they arise from Promethean periods of evolution that are too successful.

The World Fairs pastoral, for instance, emerged out of a Promethean period in the United States, heralded by Alexander Hamilton in the 1790s. Hamilton recognized the enormous potential of industrial manufacturing, and in his influential 1791 Report on Manufactures,[4] argued that the then-young United States ought to strive to become a manufacturing superpower. For much of the nineteenth century, Hamilton's ideas competed for political influence[5] with Thomas Jefferson's pastoral vision of an agrarian, small-town way of life, a romanticized, sanitized version of the society that already existed.

For free Americans alive at the time, Jefferson's vision must have seemed tangible, obviously valuable and just within reach. Hamilton's must have seemed speculative, uncertain and profane, associated with the grime and smoke of early industrializing Britain. For almost 60 years, it was in fact Jefferson's parochial sense of proportions that dominated American politics. It was not until the Civil War that the contradictions inherent in the Jeffersonian pastoral led to its collapse as a political force. Today, while it still supplies powerful symbolism to politicians' speeches, all that remains of the Jeffersonian pastoral is a nostalgic cultural memory of small-town agrarian life.

During the same period, Hamilton's ideas, through their overwhelming success, evolved from a vague sense of direction in the 1790s into a rapidly maturing industrial social order by the 1890s. By the 1930s, this social order was already being pastoralized into an alluring vision of jetpacks and flying cars in a vast, industrialized, centralized society. A few decades later, this had turned into a sense of dead-end failure associated with the end of the Apollo program, and the reality of a massive, overbearing military-industrial complex straddling the technological world. The latter has now metastasized into an entire too-big-to-fail old economy. One indicator of the freezing of the sense of direction is that many contemporary American politicians still remain focused on physical manufacturing the way Alexander Hamilton was in 1791. What was a prescient sense of direction then has turned into nostalgia for an obsolete utopian vision today. But where we have lost our irrational attachment to the Jeffersonian pastoral, the World Fairs pastoral is still too real to let go.

We get attached to pastorals because they offer a present condition of certainty and stability, and a utopian future promise of absolutely perfected certainty and stability. Arrival at the utopia seems like a well-deserved reward for hard-won Promethean victories. Pastoral utopias are where the victors of particular historical finite games hope to secure their gains and rest indefinitely on their laurels.
The dark side, of course, is that pastorals also represent fantasies of absolute and eternal power over the fate of society: absolute utopias for believers that necessarily represent dystopias for disbelievers. Totalitarian ideologies of the twentieth century, such as communism and fascism, are the product of pastoral mindsets in their most toxic forms. The Jeffersonian pastoral was a nightmare for black Americans.

When pastoral fantasies start to collapse under the weight of their own internal contradictions, long-repressed energies are unleashed. The result is a societal condition marked by widespread lifestyle experimentation based on previously repressed values. To those faced with a collapse of the World Fairs pastoral project today, this seems like an irreversible slide towards corruption and moral decay.

[1] One revealing metric is "Cost per Available Seat Mile," the main metric used to measure the efficiency of airlines. This cost has dropped 40% since the 1970s. See: R. John Hansman, The Impact of Information Technologies on Air Transportation, AIAA Conference, 2005.
[2] Over 1.1 billion tourists traveled abroad in 2014, UN World Tourism Organization press release, 2014.

[3] The quote "If I asked my customers what they wanted, they would have asked for a faster horse" is often attributed to Henry Ford, but there is no evidence that he actually said it. It remains a convenient metaphor, though.

[4] The historical context of Hamilton's Report on Manufactures can be found in the Wikipedia entry. The actual document can be found here.

[5] Michael Lind's 2012 book, Land of Promise, provides a comprehensive overview of the interplay of Hamiltonian and Jeffersonian ideas since the 1780s. Though Lind focuses on the United States, a similar conflict has shaped the course of industrialization in every major economy. Lind comes down heavily in favor of Hamiltonian models, but fails to adequately distinguish the underlying Promethean values from the corporatist economic organization models that had emerged by the 1950s. This weakens an otherwise excellent treatment of the dynamic. Edmund Phelps' Mass Flourishing makes for a good companion read that makes up for some of the shortcomings of Lind's treatment.

-------------------Understanding Elite Discontent

Because they serve as stewards of dominant pastoral visions, cultural elites are most prone to viewing unexpected developments as degeneracy. From the Greek philosopher Plato[1] (who lamented the invention of writing in the 4th century BC) to the Chinese scholar Zhang Xian Wu[2] (who lamented the invention of printing in the 12th century AD), alarmist commentary on technological change has been a constant in history. A contemporary example can be found in a 2014 article[3] by Paul Verhaeghe in The Guardian:

There are constant laments about the so-called loss of norms and values in our culture. Yet our norms and values make up an integral and essential part of our identity. So they cannot be lost, only changed. And that is precisely what has happened: a changed economy reflects changed ethics and brings about changed identity. The current economic system is bringing out the worst in us.

Viewed through any given pastoral lens, any unplanned development is more likely to subtract rather than add value. In an imagined world where cars fly, but driving is still a central rather than peripheral function, ridesharing can only be seen as subtracting taxi drivers from a complete vision. Driverless cars — the name is revealing, like "horseless carriage" — can only be seen as subtracting all drivers from the vision. And with such apparent subtraction, values and humans can only be seen as degenerating (never mind that we still ride horses for fun, and will likely continue driving cars for fun).

This tendency to view adaptation as degeneracy is perhaps why cultural elites are startlingly prone to the Luddite fallacy. This is the idea that technology-driven unemployment is a real concern, an idea that arises from the more basic assumption that there is a fixed amount of work ("lump of labor") to be done. By this logic, if a machine does more, there is less for people to do.

Prometheans often attribute this fallacious argument to a lack of imagination, but the roots of its appeal lie much deeper. Pastoralists are perfectly willing and able to imagine many interesting things, so long as they bring reality closer to the pastoral vision.
Flying cars — and there are very imaginative ways to conceive of them — seem better than land-bound ones because drivers predictably evolving into pilots conforms to the underlying notion of human perfectibility. Drivers unpredictably evolving into smartphone-wielding free agents, and breaking smart from the Organization Man archetype, does not. Within the Jeffersonian pastoral, faster horses (not exactly trivial to breed) made for more empowered small-town yeoman farmers. Drivers of early horseless carriages were degenerate dependents, beholden to big corporations, big cities and Standard Oil.

In other words, pastoralists can imagine sustaining changes to the prevailing social order, but disruptive changes seem profane. As a result, those who adapt to disruption in unexpected ways seem like economic and cultural degenerates, rather than like evidence of employment rebounding in unexpected ways.

History, of course, has shown that the idea of technological unemployment is not just wrong, it is wildly wrong. Contemporary fears of software eating jobs are just the latest version of the argument that "people cannot change" and that this time, the true limits of human adaptability have been discovered. This argument is absolutely correct — within the pastoral vision in which it is made. Once we remove pastoral blinders, it becomes obvious that the future of work lies in the unexpected and degenerate-seeming behaviors of today. Agriculture certainly suffered a devastating permanent loss of employment to machinery within the Jeffersonian pastoral by 1890. Fortunately, Hamilton's profane ideas, and the degenerate citizens of the industrial world he foresaw, saved the day. The ideal Jeffersonian human, the noble small-town yeoman farmer, did in fact become practically extinct as the Jeffersonians feared. Today the pastoral-ideal human is a high-IQ credentialist Organization Man, headed for gradual extinction, unable to compete with higher-IQ machines. The degenerate, breaking-smart humans of the software-eaten world, on the other hand, have no such fears. They are too busy tinkering with new possibilities to bemoan imaginary lost utopias.

John Maynard Keynes was too astute to succumb to the Luddite fallacy in this naive form. In his 1930 conception of the leisure society,[4] he noted that the economy could arbitrarily expand to create and satisfy new needs, and, with a lag, absorb labor as fast as automation freed it up. But Keynes too failed to recognize that with new lifestyles come new priorities, new lived values and new reasons to want to work. As a result, he saw the Promethean pattern of progress as a necessary evil on the path to a utopian leisure society based on traditional, universal religious values:

I see us free, therefore, to return to some of the most sure and certain principles of religion and traditional virtue – that avarice is a vice, that the exaction of usury is a misdemeanour, and the love of money is detestable, that those walk most truly in the paths of virtue and sane wisdom who take least thought for the morrow. We shall once more value ends above means and prefer the good to the useful. We shall honour those who can teach us how to pluck the hour and the day virtuously and well, the delightful people who are capable of taking direct enjoyment in things, the lilies of the field who toil not, neither do they spin.

But beware! The time for all this is not yet. For at least another hundred years we must pretend to ourselves and to every one that fair is foul and foul is fair; for foul is useful and fair is not. Avarice and usury and precaution must be our gods for a little longer still. For only they can lead us out of the tunnel of economic necessity into daylight.

Perceptions of moral decline, however, have no necessary relationship with actual moral decline. As Joseph Tainter observes in The Collapse of Complex Societies:

Values of course, vary culturally, socially and individually… What one individual, society, or culture values highly another does not… Most of us approve, in general, of that which culturally is most like or most pleasing, or at least most intelligible to us. The result is a global bedlam of idiosyncratic ideologies, each claiming exclusive possession of 'truth.'…

The 'decadence' concept seems particularly detrimental [and is] notoriously difficult to define. Decadent behavior is that which differs from one's own moral code, particularly if the offender at some former time behaved in a manner of which one approves.
There is no clear causal link between the morality of behavior and political fortunes.

While there is no actual moral decline in any meaningful absolute sense, the anxiety experienced by pastoralists is real. For those who yearn for paternalistic authority, more lifestyle possibilities lead to a sense of anomie rather than freedom. This triggers what the philosopher George Steiner called nostalgia for the absolute.[5] Calls for a retreat to tradition or a collectivist drive towards the Next Big Thing (often an Updated Old Thing, as in the case of President Obama's call for a "new Sputnik moment" a few years ago) share a yearning for a simpler world. But, as Steiner notes:

I do not think it will work. On the most brutal, empirical level, we have no example in history… of a complex economic and technological system backtracking to a more simple, primitive level of survival. Yes, it can be done individually. We all, I think, in the universities now have a former colleague or student somewhere planting his own organic food, living in a cabin in the forest, trying to educate his family far from school. Individually it might work. Socially, I think, it is moonshine.

In 1974, the year of peak centralization, Steiner was presciently observing the beginnings of the transformation. Today, the angst he observed on university campuses has turned into a society-wide condition of pastoral longing, and a pervasive sense of moral decay. For Prometheans, on the other hand, not only is there no decay, there is actual moral progress.

[1] In Phaedrus, Plato lamented the invention of writing as causing degeneration of our memory capacities. Interesting discussions of the thought can be found in James Gleick's The Information and Nick Carr's The Shallows.

[2] Book chapter: Stephen H. West, Time Management and Self Control: Self-help Guides in the Yuan, in Text, Performance, and Gender in Chinese Literature and Music: Essays in Honor of Wilt Idema, E. J. Brill (2009).

[3] Neoliberalism has brought out the worst in us, The Guardian, 2014.

[4] See footnote [1] in Towards a Mass Flourishing.

[5] George Steiner, Nostalgia for the Absolute, 1974.

----------------The Principle of Generative Pluralism

Prometheans understand technological evolution in terms of increasing diversity of lived values, in the form of more varied actual lifestyles. From any given pastoral perspective, such increasing pluralism is a sign of moral decline, but from a Promethean perspective, it is a sign of moral progress catalyzed by new technological capabilities.

Emerging lifestyles introduce new lived values into societies. Hamilton did not just suggest a way out of the rural squalor[1] that was the reality of the Jeffersonian pastoral. His way also led to the dismantling of slavery, the rise of modern feminism and the gradual retreat of colonial oppression and racism. Today, we are not just leaving the World Fairs pastoral behind for a richer technological future. We are also leaving behind its paternalistic institutions, narrow "resource" view of nature, narrow national identities and intolerance of non-normative sexual identities.

Promethean attitudes begin with an acknowledgment of the primacy of lived values over abstract doctrines. This does not mean that lived values must be uncritically accepted or left unexamined. It just means that lived values must be judged on their own merit, rather than through the lens of a prejudiced pastoral vision.

The shift from car-centric to smartphone-centric priorities in urban transportation is just one aspect of a broader shift from hardware-centric to software-centric lifestyles. Rideshare driver, carless urban professional and low-income-high-mobility are just the tip of an iceberg that includes many other emerging lifestyles, such as eBay or Etsy merchant, blogger, indie musician and search-engine marketer. Each new software-enabled lifestyle adds a new set of lived values and more apparent profanity to society. Some, like rent-over-own values, are shared across many emerging lifestyles and threaten pastorals like the "American Dream," built around home ownership. Others, such as dietary preferences, are becoming increasingly individualized and weaken the very idea of a single "official food pyramid" pastoral script for all.

Such broad shifts have historically triggered change all the way up to the global political order. Whether or not emerging marginal ideologies[2] achieve mainstream prominence, their sense of proportions and priorities, driven by emerging lifestyles and lived values, inevitably does.
These observations are not new among historians of technology, and have led to endless debates about whether societal values drive technological change (social determinism) or whether technological change drives societal values (technological determinism). In practice, the fact that people change and disrupt the dominant prevailing ideal of "human values" renders the question moot. New lived values and new technologies simultaneously irrupt into society in the form of new lifestyles. Old lifestyles do not necessarily vanish: there are still Jeffersonian small farmers and traditional blacksmiths around the world, for instance. Rather, they occupy a gradually diminishing role in the social order. As a result, new and old technologies and an increasing number of value systems coexist.

In other words, human pluralism eventually expands to accommodate the full potential of technological capabilities.[3] We call this the principle of generative pluralism. Generative pluralism is what allows the virtuous cycle of surplus and spillover to operate. Ephemeralization — the ability to gradually do more with less — creates room for the pluralistic expansion of lifestyle possibilities and individual values, without constraining the future to a specific path.

The inherent unpredictability in the principle implies that both technological and social determinism are incomplete models driven by zero-sum thinking. The past cannot "determine" the future at all, because the future is more complex and diverse. It embodies new knowledge about the world and new moral wisdom, in the form of a more pluralistic and technologically sophisticated society.

Thanks to a particularly fertile kind of generative pluralism that we know as network effects, soft technologies like language and money have historically caused the greatest broad increases in complexity and pluralism. When more people speak a language or accept a currency, the potential of that language or currency increases in a non-zero-sum way. Shared languages and currencies allow more people to harmoniously co-exist, despite conflicting values, by allowing disputes to be settled through words or trade[4] rather than violence. We should therefore expect software eating the world to cause an explosion in the variety of possible lifestyles, and society as a whole to become vastly more pluralistic. And this is in fact what we are experiencing today.

The principle also resolves the apparent conflict between human agency and "what technology wants": far from limiting human agency, technological evolution in fact serves as the most complete expression of it. Technological evolution takes on its unstoppable and inevitable character only after it breaks smart from authoritarian control and becomes part of an unpredictable and unscripted collective invention culture. The existence of thousands of individuals and firms working relatively independently on the same frontier means that every possibility will not only be uncovered, it will be uncovered by multiple individuals, operating with different value systems, at different times and places. Even if one inventor chooses not to pursue a possibility, chances are, others will. As a result, all pastoralist forms of resistance are eventually overwhelmed. But the process retains rational resistance to paths that carry the risk of ending the infinite game for all, in proportion to the severity of that risk. As global success in limiting the spread of nuclear and biological weapons shows, generative pluralism is not the same as mad scientists and James Bond villains running amok.

Prometheans who discover high-leverage unexpected possibilities enter a zone of serendipity. The universe seems to conspire to magnify their agency to superhuman levels. Pastoralists who reject change altogether as profanity turn lack of agency into a self-fulfilling prophecy, and enter a zone of zemblanity. The universe seems to conspire to diminish whatever agency they do have, resulting in the perception that technology diminishes agency.

Power, unlike capability, is zero-sum, since it is defined in terms of control over other human beings. Generative pluralism implies that, on average, pastoralists are constantly ceding power to Prometheans. In the long term, however, the loss of power is primarily a psychological rather than a material loss. To the extent that ephemeralization frees us of the need for power, we have less use for a disproportionate share of it.

As a simple example, consider a common twentieth-century battleground: public signage. Today, different languages contend for signaling power in public spaces. In highly multilingual countries, this contention can turn violent. But automated translation and augmented reality technologies[5] can make it unnecessary to decide, for instance, whether public signage in the United States ought to be in English, Spanish or both. An arbitrary number of languages can share the same public spaces, and there is much less need for linguistic authoritarianism. Like physical sports in an earlier era, soft technologies such as online communities, video games and augmented reality are all slowly sublimating our most violent tendencies. The 2014 protests in Ferguson, MO, are a powerful example: compared to the very similar civil rights riots of the 1960s, information in the form of social media coverage, rather than violence, was the primary medium of influence.

The broader lesson of the principle of generative pluralism is this: through technology, societies become intellectually capable of handling progressively more complex value-based conflicts. As societies awaken to resolution mechanisms that do not require authoritarian control over the lives of others, they gradually substitute intelligence and information for power and coercion.

[1] See Deirdre McCloskey, The Bourgeois Virtues, 2007, for one treatment of the modern urban tendency to romanticize the realities of pre-modern agrarian life.

[2] Many are thriving today, such as liberaltarianism and crypto-anarchism, which reflect a better sense of proportions relative to emerging technologies.

[3] The similarity to Parkinson's Law, "work expands to occupy the resources available," is not an accident.

[4] McCloskey views this idea as the mark of bourgeois virtues (see footnote 1), which are similar to Jane Jacobs' commerce syndrome of values. Both, however, are not value systems or ideologies per se, but expressions of pluralistic tolerance and non-conflict among ideologies.

[5] Apps like Google Translate can already do this, though the technology has not yet become pervasive in public infrastructure. With the rise of augmented reality technologies, such approaches will likely become more prominent.

-------------------The Future in the Rear-View Mirror

So far, we have tried to convey a visceral sense of what is essentially an uneven global condition of explosive positive change: change that is progressing at all levels, from individuals to businesses to communities to the global societal order. Perhaps the most important part of the change is that we are experiencing a systematic substitution of intelligence for brute authoritarian power in problem solving, allowing a condition of vastly increased pluralism to emerge.

Paradoxically, because vocal elite discontent is rooted in pastoral sensibilities, this analysis is valid only to the extent that it feels viscerally wrong. And going by the headlines of the past few years, it certainly does.

Much of our collective sense of looming chaos and paradises being lost is in fact a clear and unambiguous sign of positive change in the world. By this model, if our current collective experience of the human condition felt utopian, with cultural elites extolling its virtues, we should be very worried indeed. Societies that present a facade of superficial pastoral harmony, as in the movie The Stepford Wives, tend to be sustained by authoritarian, non-pluralistic polities, hidden demons, and invisible violence.

Innovation can in fact be defined as ongoing moral progress achieved by driving directly towards the regimes of greatest moral ambiguity, where our collective demons lurk. These are also the regimes where technology finds its maximal expressions, and it is no accident that the two coincide. Genuine progress feels like onrushing obscenity and profanity, and also requires new technological capabilities to drive it.

The subjective psychological feel of this evolutionary process is what Marshall McLuhan described in terms of a rear-view mirror effect: "we see the world through a rear-view mirror. We march backwards into the future." Our aesthetic and moral sensibilities are oriented by default towards romanticized memories of paradises lost.
Indeed, this is the only way we can enter the future. Our constantly pastoralizing view of the world, grounded in the past, is the only one we have. The future, glimpsed only through a small rear-view mirror, is necessarily framed by the past. To extend McLuhan's metaphor, the great temptation is to slam on the brakes and shift from what seems like reverse gear into forward gear. The paradox of progress is that what seems like the path forward is in fact the reactionary path of retreat. What seems like the direction of decline is in fact the path forward.

Today, our collective rear-view mirror is packed with seeming profanity, in the form of multiple paths of descent into hell. Among the major ones that occupy our minds are the following:

1. Technological Unemployment: The debate around technological unemployment and the concern that "this time it is different" with AI and robots "eating all the jobs."

2. Inequality: The rising concern around persistent inequality and the fear that software, unlike previous technologies, does not offer much opportunity outside of an emerging intellectual elite of programmers and financiers.

3. "Real" Problems: The idea that "real" problems such as climate change, collapsing biodiversity, healthcare, water scarcity and energy security are being neglected, while talent and energy are being frivolously expended on "trivial" photo-sharing apps.

4. "Real" Innovation: The idea that "real" innovation in areas such as space exploration, flying cars and jetpacks has stagnated.

5. National Competitiveness: The idea that software eating the world threatens national competitiveness based on manufacturing prowess and student performance on standardized tests.

6. Cultural Decline: The idea that social networks and seemingly "low-quality" new media and online education are destroying intellectual culture.

7. Cybersecurity: The concern that vast new powers of repression are being gained by authoritarian forces, threatening freedom everywhere: surveillance and cyberwarfare technologies (ranging from worms like Stuxnet, created by intelligence agencies, to drone strikes) beyond the reach of average citizens.

8. The End of the Internet: The concern that new developments driven by commercial interests pose a deep and existential threat to the freedoms and possibilities that we have come to associate with the Internet.

These themes are so complex and strongly coupled that conversations about any one of them quickly lead to a jumbled discussion of all of them, in the form of an ambiguous "inequality, surveillance and everything" non-question. Dickens' memorable opening paragraph in A Tale of Two Cities captures this state of confused urgency and inchoate anxiety perfectly:

It was the best of times, it was the worst of times, it was the age of wisdom, it was the age of foolishness, it was the epoch of belief, it was the epoch of incredulity, it was the season of Light, it was the season of Darkness, it was the spring of hope, it was the winter of despair, we had everything before us, we had nothing before us, we were all going direct to Heaven, we were all going direct the other way – in short, the period was so far like the present period, that some of its noisiest authorities insisted on its being received, for good or for evil, in the superlative degree of comparison only.

Such a state of confused urgency often leads to hasty and ill-conceived grand pastoralist schemes by way of the well-known politician's syllogism:[1]

Something must be done
This is something
This must be done

Promethean sensibilities suggest that the right response to the sense of urgency is not the politician's syllogism, but counter-intuitive courses of action: driving straight into the very uncertainties the ambiguous problem statements frame. Often, when only reactionary pastoralist paths are under consideration, this means doing nothing, and allowing events to follow a natural course.

In other words, our basic answer to the non-question of "inequality, surveillance and everything" is this: the best way through it is through it. It is an answer similar in spirit to the Stoic principle that "the obstacle is the way" and the Finnish concept of sisu: meeting adversity head-on by cultivating a capacity for managing stress, rather than figuring out schemes to get around it.
Seemingly easier paths, as the twentieth century's utopian experiments showed, create a great deal more pain in the long run.

Broken though they might seem, the mechanisms we need for working through "inequality, surveillance and everything" are the generative, pluralist ones we have been refining over the last century: liberal democracy, innovation, entrepreneurship, functional markets and the most thoughtful and limited new institutions we can design.

This answer will strike many as deeply unsatisfactory and perhaps even callous. Yet, time and again, when the world has been faced with seemingly impossible problems, these mechanisms have delivered.

Beyond doing the utmost possible to shield those most exposed to, and least capable of enduring, the material pain of change, it is crucial to limit ourselves and avoid the temptation of reactionary paths suggested by utopian or dystopian visions, especially those that appear in futurist guises. The idea that forward is backward and sacred is profane will never feel natural or intuitive, but innovation and progress depend on acting by these ideas anyway.

In the remaining essays in this series, we will explore what it means to act by these ideas.

---------------------A Tale of Two Computers

Part-way through Douglas Adams' The Hitchhiker's Guide to the Galaxy, we learn that Earth is not a planet, but a giant supercomputer built by a race of hyperintelligent aliens. Earth was designed by a predecessor supercomputer called Deep Thought, which in turn had been built to figure out the answer to the ultimate question of "Life, the Universe and Everything." Much to the annoyance of the aliens, the answer turns out to be a cryptic and unsatisfactory "42."

We concluded the previous essay with our own ultimate question of "Inequality, Surveillance and Everything." The basic answer we offered — "the best way through it is through it" — must seem as annoying, cryptic and unsatisfactory as Deep Thought's "42."

In Adams' tale, Deep Thought gently suggests to the frustrated aliens that perhaps the answer seemed cryptic because they never understood the question in the first place. Deep Thought then proceeds to design Earth to solve the much tougher problem of figuring out the actual question.

First performed as a radio show in 1978, Adams' absurdist epic precisely portrayed the societal transformation that was gaining momentum at the time. Rapid technological progress due to computing was accompanied by cryptic and unsatisfactory answers to confused and urgent-seeming questions about the human condition. Our "Inequality, Surveillance and Everything" form of the non-question is not that different from the corresponding non-question of the late 1970s: "Cold War, Globalization and Everything." Then, as now, the frustrating but correct answer was "the best way through it is through it."

The Hitchhiker's Guide can be read as a satirical anti-morality tale about pastoral sensibilities, utopian solutions and perfect answers. In their dissatisfaction with the real "Ultimate Answer," the aliens failed to notice the truly remarkable development: they had built an astoundingly powerful computer, which had then proceeded to design an even more powerful successor.

Like the aliens, we may not be satisfied with the answers we find to timeless questions, but simply by asking the questions and attempting to answer them, we are bootstrapping our way to a more advanced society. As we argued in the last essay, the advancement is both technological and moral, allowing for a more pluralistic society to emerge from the past.

Adams died in 2001, just as his satirical visions, which had inspired a generation of technologists, started to actually come true. Just as Deep Thought had given rise to a fictional "Earth" computer, centralized mainframe computing of the industrial era gave way to distributed, networked computing. In a rather perfect case of life imitating art, IBM researchers named a powerful chess-playing supercomputer Deep Thought in the 1990s, in honor of Adams' fictional computer.
A later version, Deep Blue, became the first computer to beat the reigning human chess champion in 1997. But the true successor to the IBM era of computing was the planet-straddling distributed computer we call the Internet.

Science fiction writer Neal Stephenson noted the resulting physical transformation as early as 1996, in his essay on the undersea cable-laying industry, Mother Earth, Mother Board.[1] By 2004, Kevin Kelly had coined a term and launched a new site to talk about the idea of digitally integrated technology as a single, all-subsuming social reality,[2] emerging on this motherboard:

I'm calling this site The Technium. It's a word I've reluctantly coined to designate the greater sphere of technology – one that goes beyond hardware to include culture, law, social institutions, and intellectual creations of all types. In short, the Technium is anything that springs from the human mind. It includes hard technology, but much else of human creation as well. I see this extended face of technology as a whole system with its own dynamics.

The metaphor of the world as a single interconnected entity that subsumes human existence is an old one, and in its modern form can be traced at least to Hobbes' Leviathan (1651) and Herbert Spencer's The Social Organism (1860). What is new about this specific form is that it is much more than a metaphor. The view of the world as a single, connected substrate for computation is not just a poetic way to appreciate the world: it is a way to shape it and act upon it. For many software projects, the idea that "the network is the computer" (due to John Gage, a computing pioneer at Sun Microsystems) is the only practical perspective.

While the pre-Internet world can also be viewed as a programmable planetary computer based on paperware, what makes today's planetary computer unique in history is that almost anyone with an Internet connection can program it at a global scale, rather than just powerful leaders with the ability to shape organizations.

The kinds of programming possible on such a vast, democratic scale have been rapidly increasing in sophistication. In November 2014, for instance, within a few days of the Internet discovering and becoming outraged by a sexist 2013 Barbie comic-book titled Computer Engineer Barbie, hacker Kathleen Tuite had created a web app (using an inexpensive cloud service called Heroku) allowing anyone to rewrite the text of the book. The hashtag #FeministHackerBarbie immediately went viral. Coupled with the web app, the hashtag unleashed a flood of creative rewrites of the Barbie book. What would have been a short-lived flood of outrage only a few years ago had turned into a breaking-smart moment for the entire software industry.

To appreciate just how remarkable this episode was, consider this: a hashtag is effectively an instantly defined soft network within the Internet, with capabilities comparable to the entire planet's telegraph system a century ago. By associating a hashtag with the right kind of app, Tuite effectively created an entire temporary publishing company, with its own distribution network, in a matter of hours rather than decades. In the process, reactive sentiment turned into creative agency.

These capabilities emerged in just 15 years: practically overnight by the normal standards of technological change. In 1999, SETI@home,[3] the first distributed computing project to capture the popular imagination, merely seemed like a weird way to donate spare personal computing power to science. By 2007, Facebook, Twitter, YouTube, Wikipedia and Amazon's Mechanical Turk[4] had added human creativity, communication and money into the mix, and the same engineering approaches had created the social web. By 2014, experimental mechanisms developed in the culture of cat memes[5] were influencing elections. The penny-ante economy of Amazon's Mechanical Turk had evolved into a world where bitcoin miners were making fortunes, car owners were making livable incomes through ridesharing on the side, and canny artists were launching lucrative new careers on Kickstarter.

Even as the old planet-scale computer declines, the new one it gave birth to is coming of age.

In our Tale of Two Computers, the parent is a four-century-old computer whose basic architecture was laid down in the zero-sum mercantile age.
It runs on paperware, credentialism, and exhaustive territorial claims that completely carve up the world with strongly regulated boundaries. Its structure is based on hierarchically arranged container-like organizations, ranging from families to nations. In this order of things, there is no natural place for a free frontier. Ideally, there is a place for everything, and everything is in its place. It is a computer designed for stability, within which innovation is a bug rather than a feature. We'll call this planet-scale computer the geographic world.

The child is a young, half-century-old computer whose basic architecture was laid down during the Cold War. It runs on software, the hacker ethos, and soft networks that wire up the planet in ever-richer, non-exclusive, non-zero-sum ways. Its structure is based on streams like Twitter: open, non-hierarchical flows of real-time information from multiple overlapping networks. In this order of things, everything from banal household gadgets to space probes becomes part of a frontier for ceaseless innovation through bricolage. It is a computer designed for rapid, disorderly and serendipitous evolution, within which innovation, far from being a bug, is the primary feature. We'll call this planet-scale computer the networked world.

The networked world is not new. It is at least as old as the oldest trade routes, which have been spreading subversive ideas alongside valuable commodities throughout history. What is new is its growing ability to dominate the geographic world. The story of software eating the world is also the story of networks eating geography.

There are two major subplots to this story. The first subplot is about bits dominating atoms. The second subplot is about the rise of a new culture of problem-solving.

[1] Neal Stephenson's essay on the world of cable-laying, Mother Earth, Mother Board, is still a must-read, almost 20 years later. His idea of "hacker tourism" is something everybody should try as part of becoming technologically literate.

[2] Kevin Kelly, My Search for the Meaning of Tech, 2004.

[3] The most famous of many projects that utilize idle computing time on individual personal computers to perform complex tasks. In this case, analyzing radio signals from space for signs of alien life.

[4] Reassembling the Social: An Introduction to Actor-Network Theory, Bruno Latour, 2007.

[5] Kate Miltner, Srsly Phenomenal: An Investigation Into The Appeal Of Lolcats, MSc Dissertation, London School of Economics, 2011. See also this Huffington Post review.

-------The Immortality of Bits

In 2015, it is safe to say that the weird problem-solving mechanisms of SETI@home and kitten-picture sharing have become normal problem-solving mechanisms for all domains. Today it seems strange not to apply networked distributed computing, involving both neurons and silicon, to any complex problem. The term social media is now unnecessary: even when there are no humans involved, problem-solving on this planet-scale computer almost necessarily involves social mechanisms. Whatever the mix of humans, software and robots involved, solutions tend to involve the same "social" design elements: real-time information streams, dynamically evolving patterns of trust, fluid identities, rapidly negotiated collaborations, unexpected emergent problem decompositions, efficiently allocated intelligence, and frictionless financial transactions.

Each time a problem is solved using these elements, the networked world is strengthened.

As a result of this new and self-reinforcing normal in problem-solving, the technological foundation of our planet is evolving with extraordinary rapidity. The process is a branching, continuous one rather than the staged, sequential process suggested by labels like Web 2.0 and Web 3.0,[1] which reflect an attempt to understand it in somewhat industrial terms. Some recently sprouted extensions and branches have already been identified and named: the Mobile Web, the Internet of Things (IoT), streaming media, Virtual Reality (VR), Augmented Reality (AR) and the blockchain. Others will no doubt emerge in profusion, further blurring the line between real and virtual.

Surprisingly, as a consequence of software eating the technology industry itself, the specifics of the hardware are not important in this evolution. Outside of the most demanding applications, data, code, and networking are all largely hardware-agnostic today.
The Internet Wayback Machine,[2] developed by Brewster Kahle and Bruce Gilliat in 1996, has already preserved a history of the web across a few generations of hardware. While such efforts can sometimes seem woefully inadequate with respect to pastoralist visions of history preservation, it is important to recognize the enormity of the advance they represent over paper-based collective memories.

Crashing storage costs and continuously upgraded datacenter hardware allow corporations to save all the data they generate, indefinitely. This is turning out to be cheaper than deciding what to do with it[3] in real time, resulting in the Big Data approach to business. At a personal level, cloud-based services like Dropbox make your personal data trivial to move across computers.

Most code today, unlike fifty years ago, is written in hardware-independent high-level programming languages rather than hardware-specific machine code. As a result of virtualization (technology that allows one piece of hardware to emulate another, a fringe technology until around 2000[4]), most cloud-based software runs within virtual machines and "code containers" rather than directly on hardware. Containerization in shipping drove nearly a seven-fold increase[5] in trade among industrialized nations over 20 years. Containerization of code is shaping up to be even more impactful in the economics of software.

Networks, too, are defined primarily in software today. It is not just extremely high-level networks, such as the transient, disposable ones defined by hashtags, that exist in software. Low-level networking software can also persist across generations of switching equipment and different kinds of physical links, such as telephone lines, optic fiber cables and satellite links. Thanks to the emerging technology of software-defined networking (SDN), functions that used to be performed by network hardware are increasingly performed by software.

In other words, we don't just live on a networked planet. We live on a planet networked by software, a distinction that makes all the difference. The software-networked planet is an entity that can exist in a continuous and coherent way despite continuous hardware churn, just as we humans experience a persistent identity even though almost every atom in our bodies gets swapped out every few years.

This is a profound development. We are used to thinking of atoms as enduring and bits as transient and ephemeral, but in fact the reverse is more true today. The emerging planetary computer has the capacity to retain an evolving identity and memory across evolutionary epochs in hardware, both silicon and neural. Like money and writing, software is only dependent on hardware in the short term, not in the long term. Like the US dollar or the plays of Shakespeare, software and software-enabled networks can persist through changes in physical technology.

By contrast, it is challenging to preserve old hard technologies even in museums, let alone in working order as functional elements of society. When software eats hardware, however, we can physically or virtually recreate hardware as necessary, imbuing transient atoms with the permanence of bits.

For example, the Reuleaux collection of 19th-century engineering mechanisms, a priceless part of mechanical engineering heritage, is now available as a set of 3d-printable models from Cornell University[6] for students anywhere in the world to download, print and study. A higher-end example is NASA's reverse engineering of 1970s-vintage Saturn V rocket engines.[7] The complex project used structured-light 3d scanning to reconstruct accurate computer models, which were then used to inform a modernized design. Such resurrection capabilities even extend to computing hardware itself.
Networks, too, are defined primarily in software today. It is not just extremely high-level networks, such as the transient, disposable ones defined by hashtags, that exist in software. Low-level networking software can also persist across generations of switching equipment and different kinds of physical links, such as telephone lines, optic fiber cables and satellite links. Thanks to the emerging technology of software-defined networking (SDN), functions that used to be performed by network hardware are increasingly performed by software.

In other words, we don't just live on a networked planet. We live on a planet networked by software, a distinction that makes all the difference. The software-networked planet is an entity that can exist in a continuous and coherent way despite continuous hardware churn, just as we humans experience a persistent identity even though almost every atom in our bodies gets swapped out every few years.

This is a profound development. We are used to thinking of atoms as enduring and bits as transient and ephemeral, but in fact the reverse is more true today. The emerging planetary computer has the capacity to retain an evolving identity and memory across evolutionary epochs in hardware, both silicon and neural. Like money and writing, software is only dependent on hardware in the short term, not in the long term. Like the US dollar or the plays of Shakespeare, software and software-enabled networks can persist through changes in physical technology.

By contrast, it is challenging to preserve old hard technologies even in museums, let alone in working order as functional elements of society. When software eats hardware, however, we can physically or virtually recreate hardware as necessary, imbuing transient atoms with the permanence of bits.

For example, the Reuleaux collection of 19th-century engineering mechanisms, a priceless part of mechanical engineering heritage, is now available as a set of 3d-printable models from Cornell University6 for students anywhere in the world to download, print and study. A higher-end example is NASA's reverse engineering of 1970s-vintage Saturn V rocket engines.7 The complex project used structured-light 3d scanning to reconstruct accurate computer models, which were then used to inform a modernized design.

Such resurrection capabilities even extend to computing hardware itself. In 1997, using modern software tools, researchers at the University of Pennsylvania led by Jan Van der Spiegel recreated ENIAC, the first modern electronic computer, in the form of an 8mm by 8mm chip.8

As a result of such capabilities, the very idea of hardware obsolescence is becoming obsolete. Rapid evolution does not preclude the persistence of the past in a world of digital abundance.

The potential in virtual and augmented reality is perhaps even higher, and goes far beyond consumption devices like the Oculus Rift, Magic Leap, Microsoft HoloLens and the Leap Motion 3d sensor. The more exciting story is that production capabilities are being democratized. In the early decades of prohibitively expensive CGI and motion capture technology, only big-budget Hollywood movies and video games could afford to create artificial realities. Today, with technologies like Microsoft's Photosynth (which allows you to capture 3d imagery with smartphones), SketchUp (a powerful and free 3d modeling tool), 3d Warehouse (a public repository of 3d virtual objects), Unity (a powerful game-design tool) and 3d scanning apps such as Trimensional, it is becoming possible for anyone to create living historical records and inhabitable fictions in the form of virtual environments. The Star Trek "holodeck" is almost here: our realities can stay digitally alive long after they are gone in the physical world.
These are more than cool toys. They are soft technological capabilities of enormous political significance. Software can preserve the past in the form of detailed, relivable memories that go far beyond the written word. In 1964, only the "Big 3" network television crews had the ability to film the civil rights riots in America, making the establishment record of events the only one. A song inspired by the movement was appropriately titled The Revolution Will Not Be Televised. In 1991, a lone witness with a personal camcorder videotaped the tragic beating of Rodney King; the acquittal of the officers involved triggered the Los Angeles riots the following year.

Fast-forward more than two decades: in 2014, smartphones were capturing at least fragments of nearly every important development surrounding the death of Michael Brown in Ferguson, and thousands of video cameras were being deployed to challenge the perspectives offered by the major television channels. In a rare display of consensus, civil libertarians on both the right and left began demanding that all police officers and cars be equipped with cameras that cannot be turned off. Around the same time, the director of the FBI was reduced to conducting a media roadshow to attempt to stall the spread of cryptographic technologies capable of limiting government surveillance. Just a year after the revelations of widespread surveillance by the NSA, the tables were already being turned.

It is only a matter of time before all participants in every event of importance will be able to record and share their experiences from their perspective as comprehensively as they want. These can then turn into collective, relivable, 3d memories that are much harder for any one party to manipulate in bad faith. History need no longer be written by past victors.

Even authoritarian states are finding that surveillance capabilities cut both ways in the networked world. During the 2014 #Occupy protests in Hong Kong, for instance, drone imagery allowed news agencies to make independent estimates of crowd sizes,9 limiting the ability of the government to spin the story as a minor protest. Software was being used to record history from the air, even as it was being used to drive the action on the ground.

When software eats history this way, as it is happening, the ability to forget10 becomes a more important political, economic and cultural concern than the ability to remember.

When bits begin to dominate atoms, it no longer makes sense to think of virtual and physical worlds as separate, detached spheres of human existence. It no longer makes sense to think of machine and human spheres as distinct non-social and social spaces. When software eats the world, "social media," including both human and machine elements, becomes the entire Internet. "The Internet" in turn becomes the entire world. And in this fusion of digital and physical, it is the digital that dominates.

The fallacious idea that the online world is separate from and subservient to the offline world (an idea called digital dualism, the basis for entertaining but deeply misleading movies such as Tron and The Matrix) yields to an understanding of the Internet as an alternative basis for experiencing all reality, including the old basis: geography.

Science fiction writer Bruce Sterling captured the idea of bits dominating atoms with his notion of "spimes" — enduring digital master objects that can be flexibly realized in different physical forms as the need arises.
A book, for instance, is a spime rather than a paper object today, existing as a master digital copy that can evolve indefinitely, and persist beyond specific physical copies. At a more abstract level, the idea of a "journey" becomes a spime that can be flexibly realized in many ways, through specific physical vehicles or telepresence technologies. A "television news show" becomes an abstract spime that might be realized through the medium of a regular television crew filming on location, an ordinary citizen livestreaming events she is witnessing, drone footage, or official surveillance footage obtained by activist hackers.

Spimes in fact capture the essential spirit of bricolage: turning ideas into reality using whatever is freely or cheaply available, instead of through dedicated resources controlled by authoritarian entities.
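The data structure implied here can be sketched in a few lines. The toy model below uses invented names (Sterling's spime is a design fiction, not an API): one enduring digital master record, with physical copies as transient snapshots of it.

    from dataclasses import dataclass, field

    # Toy model of a "spime": one evolving digital master object,
    # many disposable physical realizations. Hypothetical names.

    @dataclass
    class Spime:
        name: str
        version: int = 1
        history: list = field(default_factory=list)

        def evolve(self, change: str):
            """The master copy evolves indefinitely."""
            self.version += 1
            self.history.append(change)

        def realize(self, medium: str) -> str:
            """Any physical copy is just a snapshot of the master."""
            return f"{self.name} v{self.version} rendered as {medium}"

    book = Spime("A Tale of Two Computers")
    print(book.realize("paperback"))   # one transient physical form
    book.evolve("fixed typos in chapter 2")
    print(book.realize("e-ink copy"))  # a later form of the same master

The paperback can be pulped; the master, like the plays of Shakespeare, persists.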
This bricolage capability highlights the economic significance of bits dominating atoms. When the value of a physical resource is a function of how openly and intelligently it can be shared and used in conjunction with software, it becomes less contentious. In a world organized by atoms-over-bits logic, most resources are by definition what economists call rivalrous: if I have it, you don't. Such captive resources are limited by the imagination and goals of one party. An example is a slice of the electromagnetic spectrum reserved for a television channel. Resources made intelligently open to all, on the other hand, such as Twitter, are limited only by collective technical ingenuity. The rivalrousness of goods becomes a function of the amount of software and imagination used to leverage them, individually or collectively.

When software eats the economy, the so-called "sharing economy" becomes the entire economy, and renting, rather than ownership, becomes the default logic driving consumption.

The fact that all this follows from "social" problem-solving mechanisms suggests that the very meaning of the word has changed. As sociologist Bruno Latour has argued, "social" is now about more than the human. It includes ideas and objects flexibly networked through software. Instead of being an externally injected alien element, technology and innovation become part of the definition of what it means to be social.

What we are living through today is a hardware and software upgrade for all of civilization. It is, in principle, no different from buying a new smartphone and moving music, photos, files and contacts to it. And like a new smartphone, our new planet-scale hardware comes with powerful but disorienting new capabilities. Capabilities that test our ability to adapt.

And of all the ways we are adapting, the single most important one is the adaptation in our problem-solving behaviors. This is the second major subplot in our Tale of Two Computers. Wherever bits begin to dominate atoms, we solve problems differently. Instead of defining and pursuing goals we create and exploit luck.

____________

[1] The temptation to understand the evolution of computing in terms of discrete stages dates back to the idea of generations in computing. The vacuum tube, mainframe, minicomputer and personal computer eras are usually identified as the first four generations. The scheme fell apart with the failure of the Japanese "fifth-generation" computing effort, devoted to AI, and the rise of networking as the sine qua non of computing.

[2] As of this writing, the archive contains over 435 billion webpages.

[3] This definition of Big Data is due to George Dyson.

[4] In 1999, VMware introduced the first successful virtualization of the x86 processor, which powers most laptops and servers. This paved the way for cloud computing. Today, nearly all software is "containerized" to run either on virtual machines that emulate raw hardware, or on more specialized and lightweight containers such as Docker. Virtualization is now so advanced that the x86 processor can be emulated within a browser. Leading-edge technologies like the Bromium microvisor today allow virtual machines to be instantly created just to run a single command. Virtualization technology isn't just of historical interest for preserving hardware history. It is a mission-critical part of keeping software evolving smoothly.

[5] Daniel M. Bernhofen et al., Estimating the Effects of the Container Revolution on World Trade, Feb 2013, CESifo Working Paper Series No. 4136.
[6] Cornell KMODDL Collection.

[7] How NASA Brought the Monstrous F-1 Moon Rocket Engine Back to Life, Ars Technica, 2013.

[8] ENIAC on a Chip, PennPrintout, 1996.

[9] Drone Footage Reveals Massive Scale of Hong Kong Protests, Mashable, 2014.

[10] The EU and Argentina, for instance, have right-to-be-forgotten laws.

Tinkering versus Goals

Upgrading a planet-scale computer is, of course, a more complex matter than trading in an old smartphone for a new one, so it is not surprising that it has already taken us nearly half a century, and we're still not done.

Since 1974, the year of peak centralization, we have been trading in a world whose functioning is driven by atoms in geography for one whose functioning is driven by bits on networks.
The process has been something like vines growing all over an aging building, creeping in through the smallest cracks in the masonry to establish a new architectural logic.

The difference between the two is simple: the geographic world solves problems in goal-driven ways, through literal or metaphoric zero-sum territorial conflict. The networked world solves them in serendipitous ways, through innovations that break assumptions about how resources can be used, typically making them less rivalrous and unexpectedly abundant.

Goal-driven problem-solving follows naturally from the politician's syllogism: we must do something; this is something; we must do this. Such goals usually follow from gaps between reality and utopian visions. Solutions are driven by the deterministic form-follows-function1 principle, which emerged with authoritarian high-modernism in the early twentieth century. At its simplest, the process looks roughly like this:

1. Problem selection: Choose a clear and important problem
2. Resourcing: Capture resources by promising to solve it
3. Solution: Solve the problem within promised constraints

This model is so familiar that it seems tautologically equivalent to "problem solving". It is hard to see how problem-solving could work any other way. Yet this model is also an authoritarian territorial claim in disguise. A problem scope defines a boundary of claimed authority. Acquiring resources means engaging in zero-sum competition to bring them into your boundary, as captive resources. Solving the problem generally means achieving promised effects within the boundary without regard to what happens outside. This means that unpleasant unintended consequences — what economists call social costs — are typically ignored, especially those which impact the least powerful.

We have already explored the limitations of this approach in previous essays, so we can just summarize them here. Choosing a problem based on "importance" means uncritically accepting pastoral problem frames and priorities. Constraining the solution with an alluring "vision" of success means limiting creative possibilities for those who come later. Innovation is severely limited: you cannot act on unexpected ideas that solve different problems with the given resources, let alone pursue the direction of maximal interestingness indefinitely. This means unseen opportunity costs can be higher than visible benefits. You also cannot easily pursue solutions that require different (and possibly much cheaper) resources than the ones you competed for: problems must be solved in pre-approved ways.

This is not a process that tolerates uncertainty or ambiguity well, let alone thrives on it. Even positive uncertainty becomes a problem: an unexpected budget surplus must be hurriedly used up, often in wasteful ways, otherwise the budget might shrink next year. Unexpected new information and ideas, especially from novel perspectives — the fuel of innovation — are by definition a negative, to be dealt with like unwanted interruptions. A new smartphone app not anticipated by prior regulations must be banned. In the last century, the most common outcome of goal-directed problem solving in complex cases has been failure.

The networked world approach is based on a very different idea. It does not begin with utopian goals or resources captured through specific promises or threats. Instead it begins with open-ended, pragmatic tinkering that thrives on the unexpected.
The process is not even recognizable as a problem-solving mechanism at first glance:

1. Immersion in relevant streams of ideas, people and free capabilities
2. Experimentation to uncover new possibilities through trial and error
3. Leverage to double down on whatever works unexpectedly well

Where the politician's syllogism focuses on repairing things that look broken in relation to an ideal of changeless perfection, the tinkerer's way focuses on possibilities for deliberate change. As Dilbert creator Scott Adams observed, "Normal people don't understand this concept; they believe that if it ain't broke, don't fix it. Engineers believe that if it ain't broke, it doesn't have enough features yet."2

What would be seemingly pointless disruption in an unchanging utopia becomes a way to stay one step ahead in a changing environment. This is the key difference between
the two problem-solving processes: in goal-driven problem-solving, open-ended ideation is fundamentally viewed as a negative. In tinkering, it is a positive.

The first phase — inhabiting relevant streams — can look like idle procrastination on Facebook and Twitter, or idle play with cool new tools discovered on Github. But it is really about staying sensitized to developing opportunities and threats. The perpetual experimentation, as we saw in previous essays, feeds via bricolage on whatever is available. Often these are resources considered "waste" by neighboring goal-directed processes: a case of social costs being turned into assets. A great deal of modern data science, for instance, begins with "data exhaust": data of no immediate goal-directed use to an organization, which would normally get discarded in an environment of high storage costs. Since the process begins with low-stakes experimentation, the cost of failures is naturally bounded. The upside, however, is unbounded: there is no necessary limit to what unexpected leveraged uses you might discover for new capabilities.

Tinkerers — be they individuals or organizations — in possession of valuable but under-utilized resources tend to do something counter-intuitive. Instead of keeping idle resources captive, they open up access to as many people as possible, with as few strings attached as possible, in the hope of catalyzing spillover tinkering. Where it works, thriving ecosystems of open-ended innovation form, and steady streams of new wealth begin to flow. Those who share interesting and unique resources in such open ways gain a kind of priceless goodwill money cannot buy. The open-source movement, Google's Android operating system, Big Data technology, the Arduino hardware experimentation kit and the OpenROV underwater robot all began this way. Most recently, Tesla voluntarily opened up access to its electric vehicle technology patents under highly liberal terms compared to automobile industry norms.

Tinkering is a process of serendipity-seeking that does not just tolerate uncertainty and ambiguity, it requires them. When conditions for it are right, the result is a snowballing effect where pleasant surprises lead to more pleasant surprises.

What makes this a problem-solving mechanism is diversity of individual perspectives coupled with the law of large numbers (the statistical idea that rare events can become highly probable if there are enough trials going on). If an increasing number of highly diverse individuals operate this way, the chances of any given problem getting solved via a serendipitous new idea slowly rise. This is the luck of networks.
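The arithmetic behind this "luck" is worth making explicit. If each independent tinkering experiment has only a tiny chance p of producing a breakthrough, the chance that at least one of n experiments succeeds is 1 - (1 - p)^n, which climbs toward certainty as n grows. A quick sketch, with illustrative numbers only:

    # Probability that at least one of n independent long-shot experiments
    # succeeds, given each has success probability p. Numbers are
    # illustrative, not drawn from any real data.

    def p_at_least_one(p: float, n: int) -> float:
        return 1 - (1 - p) ** n

    for n in [100, 10_000, 1_000_000]:
        print(n, round(p_at_least_one(1e-5, n), 4))

    # 100      0.001   -> a lone tinkerer almost never gets lucky
    # 10000    0.0952  -> a small community sometimes does
    # 1000000  0.9999  -> a networked planet practically always does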
Serendipitous solutions are not just cheaper than goal-directed ones. They are typically more creative and elegant, and require much less conflict. Sometimes they are so creative that the fact that they even solve a particular problem becomes hard to recognize. For example, telecommuting and video-conferencing do more to "solve" the problem of fossil-fuel dependence than many alternative energy technologies, but are usually understood as technologies for flex-work rather than energy savings.

Ideas born of tinkering are not targeted solutions aimed at specific problems, such as "climate change" or "save the middle class," so they can be applied more broadly. As a result, not only do current problems get solved in unexpected ways, but new value is created through surplus and spillover.

The clearest early sign of such serendipity at work is unexpectedly rapid growth in the adoption of a new capability. This indicates that it is being used in many unanticipated ways, solving both seen and unseen problems, by both design and "luck".

Venture capital is ultimately the business of detecting such signs of serendipity early and investing to accelerate it. This makes Silicon Valley the first economic culture to fully and consciously embrace the natural logic of networks. When the process works well, resources flow naturally towards whatever effort is growing and generating serendipity the fastest. The better this works, the more resources flow in ways that minimize opportunity costs.

From the inside, serendipitous problem solving feels like the most natural thing in the world. From the perspective of goal-driven problem solvers, however, it can look indistinguishable from waste and immoral priorities.

This perception exists primarily because access to the luck of sufficiently weak networks can be slowed down by sufficiently strong geographic-world boundaries (what is sometimes called bahramdipity: serendipity thwarted by powerful forces).
Where resources cannot stream freely to accelerate serendipity, they cannot solve problems through engineered luck, or create surplus wealth. The result is growing inequality between the networked and geographic worlds.

This inequality superficially resembles the inequality within the geographic world created by malfunctioning financial markets, crony capitalism and rent-seeking behaviors. As a result, it can be hard for non-technologists to tell Wall Street and Silicon Valley apart, even though they represent two radically different moral perspectives and approaches to problem-solving. When the two collide on highly unequal terms, as they did in the cleantech sector in the late aughts, the overwhelming advantage enjoyed by geographic-world incumbents can prove too much for the networked world to conquer. In the case of cleantech, software was unable to eat the sector and solve its problems in large part due to massive subsidies and protections available to incumbents.

But this is just a temporary state. As the networked world continues to strengthen, we can expect very different outcomes the next time it takes on problems in the cleantech sector.

As a result of the failures and limits that naturally accompany young and growing capabilities, the networked world can seem "unresponsive" to "real" problems. So while both Wall Street and Silicon Valley can often seem tone-deaf and unresponsive to pressing and urgent pains while minting new billionaires with boring frequency, the causes are different. The problems of Wall Street are real, and symptomatic of a true crisis of social and economic mobility in the geographic world. Those of Silicon Valley, on the other hand, exist because not everybody is sufficiently plugged into the networked world yet, limiting its power. The best response we have come up with for the former is periodic bailouts for "too big to fail" organizations in both the public and private sector. The problem of connectivity, on the other hand, is slowly and serendipitously solving itself as smartphones proliferate.

This difference between the two problem-solving cultures carries over to macroeconomic phenomena as well.

Unlike booms and busts in the financial markets, which are often artificially created, technological booms and busts are an intrinsic feature of wealth creation itself. As Carlota Perez notes, technology busts in fact typically open up vast new capabilities that were overbuilt during booms. They radically expand access to the luck of networks to larger populations. The technology bust of 2000, for instance, radically expanded access to the tools of entrepreneurship and began fueling the next wave of innovation almost immediately.

The 2007 subprime mortgage bust, born of deceit and fraud, had no such serendipitous impact. It destroyed wealth overall, rather than creating it. The global financial crisis that followed is representative of a broader systemic crisis in the geographic world.

[1] The principle appears to have been first stated by the architect Louis Sullivan in 1896.

[2] Scott Adams, The Dilbert Principle, 1997.

The Zemblanity of Containers

Structure, as the management theorist Alfred Chandler noted in his study of early industrial-age corporations, follows strategy. Where a goal-driven strategy succeeds, the temporary scope of the original problem hardens into an enduring and policed organizational boundary.
Temporary and specific claims on societal resources transform into indefinite and general captive property rights for the victors of specific political, cultural or military wars.

As a result we get containers with eternally privileged insiders and eternally excluded outsiders: geographic-world organizations. By their very design, such organizations are what Daron Acemoglu and James Robinson call extractive institutions. They are designed not just to solve a specific problem and secure the gains, but to continue extracting wealth indefinitely. Whatever the broader environmental conditions, ideally wealth, harmony and order accumulate inside the victor's boundaries, while waste, social costs, and strife accumulate outside, to be dealt with by the losers of resource conflicts.
This description does not apply just to large banks or crony capitalist corporations. Even an organization that seems unquestionably like a universal good, such as the industrial-age traditional family, comes with a societal cost. In the United States, for example, laws designed to encourage marriage and home-ownership systematically disadvantage single adults and non-traditional families (who now collectively form more than half the population). Even the traditional family, as defined and subsidized by politics, is an extractive institution.

Where extractive institutions start to form, it becomes progressively harder to solve future problems in goal-driven ways. Each new problem-solving effort has more entrenched boundaries to deal with. Solving new problems usually means taking on increasingly expensive conflict to redraw boundaries as a first step. In the developed world, energy, healthcare and education are examples of sectors where problem-solving has slowed to a crawl due to a maze of regulatory and other boundaries. The result has been escalating costs and declining innovation — what economist William Baumol has labeled the "cost disease."

The cost disease is an example of how, in their terminal state, goal-driven problem-solving cultures exhaust themselves. Without open-ended innovation, the growing complexity of boundary redrawing makes most problems seem impossible. The planetary computer that is the geographic world effectively seizes up.

On the cusp of the first Internet boom, the landscape of organizations that defines the geographic world was already in deep trouble. As Gilles Deleuze noted around 1992:1

We are in a generalized crisis in relation to all environments of enclosure — prison, hospital, factory, school, family… The administrations in charge never cease announcing supposedly necessary reforms… But everyone knows these environments are finished, whatever the length of their expiration periods. It's only a matter of administering their last rites and of keeping people employed until the installation of new forces knocking at the door.

The "crisis in environments of enclosure" is a natural terminal state for the geographic world. When every shared societal resource has been claimed by a few as an eternal and inalienable right, and secured behind regulated boundaries, the only way to gain something is to deprive somebody else of it through ideology-driven conflict. This is the zero-sum logic of mercantile economic organization, which dates to the sixteenth century. In fact, because some value is lost through conflict, in the absence of open-ended innovation it can be worse than zero-sum: what decision theorists call negative-sum (the ultimate example of which is, of course, war).

By the early twentieth century, mercantilist economic logic had led to the world being completely carved up in terms of inflexible land, water, air, mineral and — perhaps most relevant today — spectrum rights: rights that could not be freely traded or renegotiated in light of changing circumstances.

This is a grim reality we have a tendency to romanticize. As the etymology of words like organization and corporation suggests, we tend to view our social containers through anthropomorphic metaphors. We extend metaphoric and legal fictions of identity, personality, birth and death far beyond the point of diminishing marginal utility. We assume the "life" of these entities to be self-evidently worth extending into immortality.
We even mourn them when they do occasionally enter irreversible decline. Companies like Kodak and RadioShack, for example, evoke such strong positive memories for many Americans that their decline seems truly tragic, despite the obvious irrelevance of the business models that originally fueled their rise. We assume that the fates of actual living humans are irreversibly tied to the fates of the artificial organisms they inhabit.

In fact, in the late, crisis-ridden state of the geographic world, the "goal" of a typical problem-solving effort is often to "save" some anthropomorphically conceived part of society, without any critical attention devoted to whether it is still necessary, or whether better alternatives are already serendipitously emerging. If innovation is considered a necessary ingredient in the solution at all, only sustaining innovations — those that help preserve and perfect the organization in question — are considered.
Whether the intent is to "save" the traditional family, a failing corporation, a city in decline, or an entire societal class like the "American middle class," the idea that the continued existence of any organization might be both unnecessary and unjustifiable is rejected as unthinkable. The persistence of geographic-world organizations is prized for its own sake, whatever the changes in the environment.

The dark side of such anthropomorphic romanticization is what we might call geographic dualism: a stable planet-wide separation of local utopian zones secured for a privileged few and increasingly dystopian zones for many, maintained through policed boundaries. The greater the degree of geographic dualism, the clearer the divides between slums and high-rises, home owners and home renters, developing and developed nations, wrong and right sides of the tracks, regions with landfills and regions with rent-controlled housing. And perhaps the most glaring divide: secure jobs in regulated sectors with guaranteed lifelong benefits for some, at the cost of needlessly heightened precarity in a rapidly changing world for others.

In a changing environment, organizational stability valued for its own sake becomes a kind of immorality. Seeking such stability means allowing the winners of historic conflicts to enjoy the steady, fixed benefits of stability by imposing increasing adaptation costs on the losers.

In the late eighteenth century, two important developments planted the seeds of a new morality, which sparked the industrial revolution. As a result, new wealth began to be created despite the extractive, stability-seeking nature of the geographic world.

Free as in Beer, and as in Speech

With the benefit of a century of hindsight, the authoritarian high-modernist idea that form can follow function in a planned way, via coercive control, seems like wishful thinking beyond a certain scale and complexity. Two phrases popularized by the open-source movement, free as in beer and free as in speech, get at the essence of problem solving through serendipity, an approach that does work1 in large-scale and complex systems.

The way complex systems — such as planet-scale computing capabilities — evolve is perhaps best described by a statement known as Gall's Law:

A complex system that works is invariably found to have evolved from a simple system that worked. A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over with a working simple system.

Gall's Law is in fact much too optimistic. It is not just non-working complex systems designed from scratch that cannot be patched up. Even naturally evolved complex systems that used to work, but have now stopped working, generally cannot be patched into working order again.

The idea that a new, simpler system can revitalize a complex system in a state of terminal crisis is the essence of Promethean thinking. Though the geographic world has reached a state of terminal crisis only recently, the seeds of a simpler working system to replace it were actually planted in the eighteenth century, nearly 200 years before software joined the party. The industrial revolution itself was driven by two elements of our world being partially freed from geographic-world logic: people and ideas.

In the eighteenth century, the world gradually rejected the idea that people could be property, to be exclusively claimed by other people or organizations as a problem-solving "resource," and held captive within specific boundaries.
Individual rights and at-will employment models emerged in liberal democracies, in place of institutions like slavery, serfdom and caste-based hereditary professions.

The second element was ideas. Again in the late eighteenth century, modern intellectual property rights, in the form of patents with expiration dates, became the norm. In ancient China, those who revealed the secrets of silk-making were put to death by the state. In late eighteenth-century Britain, the expiration of James Watt's patents sparked the industrial revolution.

Thanks to these two enlightened ideas, a small trickle of individual inventions turned into a steady stream of non-zero-sum intellectual and capitalist progress within an otherwise mercantilist, zero-sum world. In the process, the stability-seeking logic of mercantilism was gradually replaced by the adaptive logic of creative destruction.
People and ideas became increasingly free in two distinct ways. As Richard Stallman, the pioneer of the free software movement, famously expressed it, the two kinds of freedom are free as in beer and free as in speech.

First, people and ideas were increasingly free in the sense of no longer being considered "property" to be bought and sold like beer by others.

Second, people and ideas became increasingly free in the sense of not being restricted to a single purpose. They could potentially play any role they were capable of fulfilling. For people, this second kind of freedom is usually understood in terms of specific rights such as freedom of speech, freedom of association and assembly, and freedom of religion. What is common to all these specific freedoms is that they represent freedom from the constraints imposed by authoritarian goals. This second kind of freedom is so new, it can be alarming to those used to being told what to do by authority figures.

Where both kinds of freedom exist, networks begin to form. Freedom of speech, for instance, tends to create a thriving literary and journalistic culture, which exists primarily as a network of individual creatives rather than specific organizations. Freedom of association and assembly creates new political movements, in the form of grassroots political networks.

Free people and ideas can associate in arbitrary ways, creating interesting new combinations and exploring open-ended possibilities. They can make up their own minds about whether problems declared urgent by authoritarian leaders are actually the right focus for their talents. Free ideas are even more powerful, since unlike the talents of free individuals, they are not restricted to one use at a time.

Free people and free ideas formed the "working simple system" that drove two centuries of disruptive industrial-age innovation.

Tinkering — the steady operation of this working simple system — is a much more subversive force than we usually recognize, since it poses an implicit challenge to authoritarian priorities. This is what makes tinkering an undesirable but tolerable bug in the geographic world. So long as material constraints limited the amount of tinkering going on, the threat to authority was also limited. Since the "means of production" were not free, either as in beer or as in speech, the anti-authoritarian threat of tinkering could be contained by restricting access to them.

With software eating the world, this is changing. Tinkering is becoming much more than a minority activity pursued by the lucky few with access to well-stocked garages and junkyards. It is becoming the driver of a global mass flourishing.

As Karl Marx himself realized, the end-state of industrial capitalism is in fact the condition where the means of production become increasingly available to all. Of course, it is already becoming clear that the result is neither the utopian collectivist workers' paradise he hoped for, nor the utopian leisure society that John Maynard Keynes hoped for. Instead, it is a world where increasingly free people, working with increasingly free ideas and means of production, operate by their own priorities. Authoritarian leaders, used to relying on coercion and policed boundaries, find it increasingly hard to enforce their priorities on others in such a world.

Chandler's principle of structure following strategy allows us to understand what is happening as a result.
If non-free people, ideas and means of production result in a world of container-like organizations, free people, ideas and means of production result in a world of streams.

[1] In the early years of open source, these ideas were primarily framed in ideological terms, and the reasons for their effectiveness were poorly understood. With the benefit of 35 years of hindsight and experience, and the maturation of the movement from a fringe philosophy to a mainstream practice in both the non-profit and for-profit software sectors, the ideas today are best understood as a part of technology strategy. As Simon Wardley argues, the businesses that are most successful with open source are the ones that are "open by thinking" rather than "open by default" (i.e., open as a matter of uncritically held values).

The Serendipity of Streams
A stream is simply a life context formed by all the information flowing towards you via a set of trusted connections — to free people, ideas and resources — from multiple networks. If in a traditional organization nothing is free and everything has a defined role in some grand scheme, in a stream everything tends steadily towards free, as in both beer and speech. "Social" streams enabled by computing power in the cloud and on smartphones are not a compartmentalized location for a particular kind of activity. They provide an information- and connection-rich context for all activity.

Unlike organizations defined by boundaries, streams are what Acemoglu and Robinson call pluralist institutions. These are the opposite of extractive: they are open, inclusive and capable of creating wealth in non-zero-sum ways.

On Facebook, for example, connections are made voluntarily (unlike reporting relationships on an org chart) and pictures or notes are usually shared freely (unlike copyrighted photos in a newspaper archive), with few restrictions on further sharing. Most of the capabilities of the platform are free-as-in-beer. What is less obvious is that they are also free-as-in-speech. Except at the extremes, Facebook does not attempt to dictate what kinds of groups you are allowed to form on the platform.

If the three most desirable things in a world defined by organizations are location, location and location,1 in the networked world they are connections, connections and connections.

Streams are not new in human culture. Before the Silk Road was a Darknet site, it was a stream of trade connecting Asia, Africa and Europe. Before there were lifestyle-designing free agents, hackers and modern tinkerers, there were the itinerant tinkers of early modernity. The collective invention settings we discussed in the last essay, such as the Cornish mining district in James Watt's time and Silicon Valley today, are examples of early, restricted streams. The main streets of thriving major cities are also streams, where you might run into friends unexpectedly, learn about new events through posted flyers, and discover new restaurants or bars.

What is new is the idea of a digital stream created by software. While geography dominates physical streams, digital streams can dominate geography. Access to the stream of innovation that is Silicon Valley is limited by geographic factors such as cost of living and immigration barriers. Access to the stream of innovation that is Github is not. On a busy main street, you can only run into friends who also happen to be out that evening, but with augmented-reality glasses on, you might also "run into" friends from around the world and share your physical experiences with them.

What makes streams ideal contexts for open-ended innovation through tinkering is that they constantly present unrelated people, ideas and resources in unexpected juxtapositions. This happens because streams emerge as the intersection of multiple networks. On Facebook, or even in your personal email, you might be receiving updates from both family and coworkers. You might also be receiving imported updates from structurally distinct networks, such as Twitter or the distribution network of a news source. This means each new piece of information in a stream is viewed against a backdrop of overlapping, non-exclusive contexts, and a plurality of unrelated goals. At the same time, your own actions are being viewed by others in multiple unrelated ways.
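A crude way to see why such juxtaposition happens mechanically: a stream is, structurally, just a merge of several independently ordered feeds. The sketch below uses invented feed data, not any real platform API; it interleaves updates from unrelated networks into one timeline, which is all it takes for a coworker and a college friend to land side by side.

    import heapq

    # Toy model of a stream: several unrelated networks, one merged
    # timeline. Feeds and timestamps are invented for illustration.

    feeds = {
        "family":    [(9, "cousin shares wedding photos")],
        "coworkers": [(10, "teammate posts a draft design")],
        "twitter":   [(11, "stranger links an obscure news item")],
    }

    # Each feed is already sorted by timestamp; heapq.merge lazily
    # interleaves them into a single chronological stream.
    timeline = heapq.merge(
        *[[(t, net, msg) for t, msg in posts] for net, posts in feeds.items()]
    )

    for t, net, msg in timeline:
        print(f"{t:>2} [{net}] {msg}")

Nothing in the merge respects the boundaries between contexts; the juxtapositions fall out of the data structure itself.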
As a result of such unexpected juxtapositions, you might "solve" problems you didn't realize existed, and do things that nobody realized were worth doing. For example, seeing a particular college friend and a particular coworker in the same stream might suggest a possibility for a high-value introduction: a small act of social bricolage. Because you are seen by many others from different perspectives, you might find people solving problems for you without any effort on your part. A common experience on Twitter, for example, is a Twitter-only friend tweeting an obscure but important news item, which you might otherwise have missed, just for your benefit.

When a stream is strengthened through such behaviors, every participating network is strengthened.
While Twitter and Facebook are the largest global digital streams today, there are thousands more across the Internet. Specialized ones such as GitHub and Stack Overflow cater to specific populations, but are open to anyone willing to learn. Newer ones such as Instagram and WhatsApp tap into the culture of younger populations. Reddit has emerged as an unusual venue for keeping up with science by interacting with actual working scientists. The developers of every agile software product in perpetual beta inhabit a stream of unexpected uses discovered by tinkering users. Slack turns the internal life of a corporation into a stream.

Streams are not restricted to humans. Twitter already has a vast population of interesting bots, ranging from House of Coates (an account that is updated by a smart house) to space probes and even sharks tagged with transmitters by researchers.2 Facebook offers pages that allow you to 'like' and follow movies and books.

By contrast, when you are sitting in a traditional office, working with a laptop configured exclusively for work use by an IT department, you receive updates only from one context, and can only view them against the backdrop of a single, exclusive and totalizing context. Despite the modernity of the tools deployed, the architecture of information is not very different from the paperware world. If information from other contexts leaks in, it is generally treated as a containment breach: a cause for disciplinary action in the most old-fashioned businesses. People you meet have pre-determined relationships with you, as defined by the organization chart. If you relate to a coworker in more than one way (as both a team member and a tennis buddy), that weakens the authority of the organization. The same is true of resources and ideas. Every resource is committed to a specific "official" function, and every idea is viewed from a fixed default perspective and has a fixed "official" interpretation: the organization's "party line" or "policy."

This has a radical consequence. When organizations work well and there are no streams, we view reality in what behavioral psychologists call functionally fixed3 ways: people, ideas and things have fixed, single meanings. This makes them less capable of solving new problems in creative ways. In a dystopian stream-free world, the most valuable places are the innermost sanctums: these are typically the oldest organizations, most insulated from new information. But they are also the locus of the most wealth, and offer the most freedom for occupants. In China, for instance, the innermost recesses of the Communist Party are still the best place to be. In a Fortune 500 company, the best place to be is still the senior executive floor.

When streams work well, on the other hand, reality becomes increasingly intertwingled (a portmanteau of intertwined and tangled), as Ted Nelson evocatively labeled the phenomenon. People, ideas and things can have multiple, fluid meanings depending on what else appears in juxtaposition with them. Creative possibilities rapidly multiply, with every new network feeding into the stream. The most interesting place to be is usually the very edge, rather than the innermost sanctums. In the United States, being a young and talented person in Silicon Valley can be more valuable and interesting than being a senior staffer in the White House. Being the founder of the fastest-growing startup may offer more actual leverage than being President of the United States.

We instinctively understand the difference between the two kinds of context.
In an organization, if conflicting realities leak in, we view them as distractions or interruptions, and react by trying to seal them out better. In a stream, if things get too homogeneous and non-pluralistic, we complain that things are getting boring and predictable, and that the stream is turning into an echo chamber. We react by trying to open things up, so that more unexpected things can happen.

What we do not understand as instinctively is that streams are problem-solving and wealth-creation engines. We view streams as zones of play and entertainment, through the lens of the geographic-dualist assumption that play cannot also be work.

In our Tale of Two Computers, the networked world will become firmly established as the dominant planetary computer when this idea becomes instinctive, and work and play become impossible to tell apart.

[1] A real-estate phrase that appears to date back at least to the 1920s. See Location, Location, Location by William Safire in the New York Times, 2009.

[2] More Than 300 Sharks In Australia Are Now On Twitter, NPR, 2012.

[3] Functional fixedness is a cognitive bias that results in people viewing things exclusively in terms of their most visible functions.
Breaking Smart

The first sustainable socioeconomic order of the networked world is just beginning to emerge, and the experience of being part of a system that is growing smarter at an exponential rate is deeply unsettling to pastoralists and immensely exciting to Prometheans.

Our geographic-world intuitions and our experience of the authoritarian institutions of the twentieth century lead us to expect that any larger system we are part of will either plateau into some sort of impersonal, bureaucratic stupidity, or turn "evil" somehow and oppress us.

The first kind of apocalyptic expectation is at the heart of movies like Idiocracy and WALL-E, set in trashed futures inhabited by a degenerate humanity that has irreversibly destroyed nature.

The second kind is the fear behind the idea of the Singularity: the rise of a self-improving systemic intelligence that might oppress us. Popular literal-minded misunderstandings of the concept, rooted in digital dualism, result in movies such as Terminator. These replace the fundamental humans-against-nature conflict of the geographic world with an imagined humans-against-machines conflict of the future. As a result, believers in such dualist singularities, rather ironically for extreme technologists, are reduced to fearfully awaiting the arrival of a God-like intelligence with fingers crossed, hoping it will be benevolent.

Both fears are little more than technological obscurantism. They are motivated by a yearning for the comforting certainties of the geographic world, with its clear boundaries, cohesive identities, and idealized heavens and hells.

Neither is a meaningful fear. The networked world blurs the distinction between wealth and waste; this undermines the first fear. The serendipity of the networked world depends on free people, ideas and capabilities combining in unexpected ways: "Skynet" cannot be smarter than humans unless the humans within it are free. This undermines the second fear.

To the extent that these fears are justified at all, they reflect the terminal trajectory of the geographic world, not the early trajectory of the networked world.

An observation due to Arthur C. Clarke offers a way to understand this second trajectory: any sufficiently advanced technology is indistinguishable from magic. The networked world evolves so rapidly through innovation, it seems like a frontier of endless magic.

Clarke's observation has inspired a number of snowclones that shed further light on where we might be headed. The first, due to Bruce Sterling, is that any sufficiently advanced civilization is indistinguishable from its own garbage. The second, due to futurist Karl Schroeder,1 is that any sufficiently advanced civilization is indistinguishable from nature.

To these we can add one from social media theorist Seb Paquet, which captures the moral we drew from our Tale of Two Computers: any sufficiently advanced kind of work is indistinguishable from play.

Putting these ideas together, we are messily slouching towards a non-pastoral utopia on an asymptotic trajectory where reality gradually blurs into magic, waste into wealth, technology into nature and work into play.

This is a world that is breaking smart, with Promethean vigor, from its own past, like the precocious teenagers who are leading the charge. In broad strokes, this is what we mean by software eating the world. For Prometheans, the challenge is to explore how to navigate and live in this world.
A growing non-geographic-dualist understanding of it is leading to a network culture view of the human condition. If the networked world is a planet-sized distributed computer, network culture is its operating system.

Our task is like Deep Thought's task when it began constructing its own successor: to develop an appreciation for the "merest operational parameters" of the new planet-sized computer to which we are migrating all our civilizational software and data.