The urgent need to democratize the internet: market, state and civil society in the digital age

Workshop on

(Vatican City, Casina Pio IV, October 19-21, 2017)

Pontifical Academy of Social Sciences

Published in Spanish by EDUC (Editorial de la Universidad Católica de Córdoba)


Where does the wealth of the world’s most powerful companies come from? Until recently, the answer to this question was obvious to everyone: it comes from what they sell. Exxon sells oil, Walmart sells retail goods, AT&T sells telephone services, GM automobiles and so on and so forth. But what do Facebook and Google sell? The fact that this question is not so easy to answer is a strong indication that the digital revolution and its main social consequence, the emergence of the platform enterprise (Brynjolfsson and McAfee, 2017; Evans and Gawer, 2016), necessitate a new definition for the three terms—market, state and civil society—around which this conference is organized.

A recent editorial in The Economist noted that “a new commodity spawns a lucrative, fast-growing industry, prompting antitrust regulators to step in to restrain those who control its flow. A century ago, the resource in question was oil. Now, similar concerns are being raised by the giants that deal in data, the oil of the digital era” (The Economist, 2017). Alphabet (Google), Amazon, Apple, Facebook and Microsoft are the five most valuable companies in the world, according to Standard & Poor’s, and among the most profitable. These companies do not pay for the use of the primary source of their wealth, which completely subverts the idea of a market as an exchange of equivalents. This source is not the labor of people or their goods: it is their daily routine, or rather, the data generated by their most mundane tasks and by their social interaction. What’s more: as Tristan Harris[i] explains, the contemporary digital giants are in a “race for our attention” and, to win it, use methods (the same ones he worked with as a design ethicist at Google) based on neuroscience, which allow “a bunch of people… [to] shape the thoughts and feelings of a billion people.” One of Harris’ most important conclusions is that this entire process “is not evolving randomly,” and this takes us to the second term of the title under which this workshop has been convened.

The State has shown itself incapable of regulating the operations of these companies and the technological transformations they have created. Artificial intelligence, until now, has not been the focus of any type of public governance, despite growing recognition of its risks, which some think tanks do not hesitate to compare with nuclear weapons or climate change (Bostrom, 2014). Elon Musk, one of the main proponents of research and initiatives regarding artificial intelligence, has insisted on the need for regulation “before it’s too late.” Mr. Musk is quick to say that it poses an unprecedented risk to the existence of civilization and the survival of the human race (Gibbs, 2017).

Furthermore, as the platform enterprises operate around the world, it will be increasingly difficult for states to impose taxes aimed at redistribution, which could offset the disruptive effects of platform capitalism on job markets. Although the Universal Basic Income (UBI) is fast becoming a Silicon Valley mantra, no one knows for sure how underfunded nations would pay for it. And, regardless, it is unlikely that the UBI would be the most important instrument for offsetting the high concentration of income, wealth and power in the world today (Milanovic, 2016; Scheidel, 2017), of which platform capitalism is acknowledged to be one of the most important drivers. Michael Sandel (2017) explains: “some Silicon Valley visionaries anticipate a time when robots and artificial intelligence will render many of today’s jobs obsolete. To ease the way for such a future, they propose paying everyone a basic income. What was once justified as a safety net for all citizens is now offered as a way to soften the transition to a world without work.” Against the backdrop of increasingly divided societies, the consequences of this effort to smooth the transition to a world of few work opportunities could be highly divisive.

Generalized connectivity brings with it unprecedented potential to expand social collaboration far beyond the limited circles of mutual acquaintance that make up our daily relations; in other words, to strengthen civil society. The internet is the most important commons ever created by man. For the first time in history, individuals (and not just companies) hold in their hands networked devices with greater computing power than the systems that took man to the moon. Nonetheless, the creator of the world wide web himself, Tim Berners-Lee (2014), is leading a movement to re-decentralize the internet: “some popular and successful services (search, social networking, email) have achieved near-monopoly status,” he said. The invasion of privacy and opaque algorithms capable of learning more about people than they know about themselves or their families (O’Neil, 2016) encourage a political polarization that compromises democratic life (Sunstein, 2017), as well as the compulsive use of digital devices. It is therefore imperative that civil society regain control of the networks, today controlled by the digital giants.

At the same time, there are already social movements and empirical studies, although still a minority, aimed at achieving the democratic aspirations of Tim Berners-Lee and other authors who place so much hope in the emancipatory potential of social networks. “Like decentralization, openness empowers people, contributing to the innovation that produces economic and social gains,” wrote Berners-Lee (2014). These movements and studies search for mechanisms so that this immense wealth, in the form of data generated by individuals, can belong to them in a clear and transparent manner. Today, the internet has become “a space where individuals are public and trackable by default” (Hasselbalch & Tranberg, 2016, position 144). As a counterpoint to platform capitalism, an important move toward platform cooperativism is underway (Scholz, 2016[1]; Design Justice, 2017).

Moreover, the creation and use of a wide variety of digital platforms designed to strengthen citizenship around the world is growing. Almost always, according to the report Civic Tech in the Global South (Peixoto and Sifry, 2017), recently published by the World Bank, existing social networks are used to strengthen social participation. This is how, for example, Participatory Budgeting, established in the city of Porto Alegre at the end of the 1990s, quickly expanded to reach 15% of the state’s voting population by 2014. The report also presents interesting experiences from Africa and Asia, where citizens use popular social media as channels for complaints and protest. Citiscope gathers experiences in which digital devices are used in support of urban development. In 2014, for example, 160,000 Parisians voted on how to allocate around 100 million euros in their city[ii]. And in Boston, young people aged 12 to 25 were involved in the city’s participatory budget[iii]. But it is also important to highlight initiatives (still a minority, but nevertheless significant) aimed at creating alternatives to the vehicles that dominate the internet today, in order to give citizens a voice.

Public policies designed to protect citizens against the practices of the digital giants are recent: India and the European Union have legislation to ensure privacy, which is not a luxury but a basic value, an element of human dignity and, therefore, one of the essential building blocks of democracy. Restoring the power of individuals over what they do with their digital devices is today one of the most important democratic aspirations, with the power to alter relations between market, state and civil society. European legislation introduced a right to an explanation when a decision comes from an algorithm[iv]. This is one of the central themes of Tim Berners-Lee’s (2014) initiative in creating the campaign “The Web We Want,” which intends to “foster debate on how to resolve the trade-offs between security and privacy, and between the needs of business and decentralized innovation.” The “self-sovereign identity” conceived by the creators of uPort is just one of many examples of counterpoints to the centralization and opaque use of information generated by people (Lundkvist et al., 2017).

Social emancipation and Big Brother

The starting point for any reflection on relations between market, state and civil society in light of the digital revolution is recognizing its strategic contribution to addressing some of the most important socioenvironmental challenges of the 21st century. China and India would never have signed the Paris Climate Accord in 2015 if the semiconductor revolution had not given rise to the real possibility that sunlight and wind could replace coal as their main source of energy (Abramovay, 2014 and 2016). The electrical grid will be profoundly altered over the next 20 years by the mass use not only of electric cars, but also of autonomous vehicles. And even in the management of biodiversity, the digital revolution expands the possibilities for the sustainable use of tropical forests, based on a knowledge economy of nature in which networked devices are, and will be, increasingly indispensable (Nobre et al., 2016).

One of the most notable examples in this sense is the creation of devices that make it possible to reconstruct the history of land use, in a detailed and precise manner and on a global scale, by comparing images from the Landsat satellites, in orbit since 1985, with current conditions. It is important to mention that this (and many other achievements) resulted from the cooperation of independent groups with Google, which acquired Landsat images and made them publicly available, enabling comparisons that, for example, revealed deforestation or the illegal occupation of protected areas[v]. “Placing digital at the service of ecological transition” is the objective of a research program that involves important research organizations in France[vi]. Open Source Ecology is a US program with a similar objective[vii].

But, despite these noble initiatives, the business model of the contemporary digital giants threatens people’s privacy, compromises transparency in favor of the opacity of their algorithms and is one of the most important factors in the growth of inequality in contemporary societies. Advances in artificial intelligence, today led by the digital corporations, also raise unprecedented questions about relations between market, state and civil society. As the historian Yuval Noah Harari (2017a) explains, responding to the manifesto in which Mark Zuckerberg (2017) defends the civic virtues of what he believes to be a “community” formed by a billion Facebook users: “There are certainly good reasons to fear Big Brother. In the 21st century, Big Data algorithms could be used to manipulate people in unprecedented ways. Take future election races, for example: in the 2020 race, Facebook could theoretically determine not only who are the 32,578 swing voters in Pennsylvania, but also what you need to tell each of them in order to swing them in your favor.”

More important than the technologies themselves are the values embedded in the potential for social cooperation that the internet opens up. Three years after Tim Berners-Lee’s manifesto on “the past, present and future of the World Wide Web,” it has become even more urgent to understand whether his creation will serve to expand the power of a handful of companies, enabling the emergence of machines so intelligent that they could undermine the dignity and liberty of people (Box I), or whether, on the contrary, it will be put to work in a regenerative economy that values human capacities, contributes to reducing inequalities and expands opportunities for the sustainable use of the ecosystem services on which we all depend.

Box I

Artificial intelligence, bioengineering, transhumanism and ethics

“For however much we may cling to life, even a snake would hesitate before eternity,” says a character at the start of José Saramago’s “The History of the Siege of Lisbon.” But death, as an incontrovertible component of life, may be coming to an end. Ray Kurzweil, a computer scientist, inventor and futurologist, author of best-sellers on artificial intelligence and health, predicts that eternal life will become technically possible starting in 2029. That is, in about 12 years’ time.

This prediction might sound like the ravings of a madman if Kurzweil did not work in the field of innovation for Google. He is also involved in work on optical character recognition and on the direct conversion of spoken language into printed text.

From there to eternity is just a step away—at least for those who believe in transhumanism. The movement has developed over the last 20 years and is aimed at improving the functioning of the human body through genetic engineering, information technology, molecular nanotechnology and artificial intelligence.

Humanity, according to transhumanists, is not the end of evolution. Science and technology can make us into posthumans, expanding our capacities far beyond the imagination of humans today. Transcendence or death: this is the slogan of the transhumanist movement. In fact, our intelligence may outstrip most of our current biological limitations. In the next 20 years, science and technology will bring about, in us and in our social organization, many more changes than were recorded over the last 300 years. An intelligence already exists that not only has no body, but is also devoid of emotions and social sense, and that is capable of performing complex tasks more efficiently than humans.

The entrepreneur and researcher Gerd Leonhard (2016) is one of the most notable scholars of the potential and threats of artificial intelligence. He is concerned that artificial intelligence dissociates our capacity to intervene in the world from the ethical basis of this intervention. The greatest threat associated with artificial intelligence comes from the fact that machines can imitate our patterns of ethical behavior but, by definition, can never be endowed with an ethical conscience. Technology is a means to achieve ends and not an end in itself. An intelligence without a conscience is the most radical expression of the substitution of ends (ethics) with means (technologies).

If machines endowed with artificial intelligence expand their power to manage and intervene in society and individuals, there is a risk that they will determine the ends of their actions. In this way, our human condition would increasingly depend on devices to awaken in us the feelings that define us, like our happiness, our sense of belonging and even our libido.

Using neural networks, researchers at Stanford University showed that artificial intelligence is more precise than human intelligence at detecting people’s sexual orientation from facial images. The algorithm used by the researchers responded correctly in 81% of cases for men and 74% for women, based on 35,000 facial images posted on dating websites in the United States. The percentage of correct responses from computers was much higher than from humans, around 61% for men and 54% for women. When the computer was shown five images per person, the rate of correct responses was even higher. The study was published in the prestigious Journal of Personality and Social Psychology. As reported in The Guardian (Levin, 2017), the findings of the researchers at Stanford raise crucial questions about the “ethics of facial-detection technology, and the potential for this kind of software to violate people’s privacy or be abused for anti-LGBT purposes.”

The theme is central to the evolution of the most recent technologies, not only artificial intelligence but also bioengineering and gene editing. Since scientists announced in the journal Nature (Ma et al., 2017) their success in editing human embryos to correct the mutation responsible for a hereditary heart disease, using CRISPR (Clustered Regularly Interspaced Short Palindromic Repeats), transformations in biology have been significant. Changes in genomes can now be accomplished much more quickly and at a lower cost than before. But the line between the use of these technologies to prevent and cure diseases, on the one hand, and practices akin to eugenics, used to select characteristics of the unborn (not only race and sex, but also intelligence), on the other, is tenuous and fluid. What’s worse: the search for immortality is organized around technology to which only a very small minority has, and will have, access.

The alarm sounded by Yuval Noah Harari (2017b, p. 273) could not be more current: “in the twenty-first century, those who ride the train of progress will acquire divine abilities of creation and destruction, while those left behind will face extinction.” Or, in the words of Gerd Leonhard (2016, position 187): “Will the rich live forever, while the poor still can’t afford malaria pills?” The French mathematician Cédric Villani, winner of the Fields Medal and today a member of parliament for the party created by Emmanuel Macron, is preparing a report on artificial intelligence that addresses a practical dimension of this problem. In an interview with Le Monde, he said: “if the result is that insurance companies are using different rates based on confidential information, causing those who suffer from more serious diseases to pay more and more, evidently, this is not what we want.”[viii]


Although the focus of this workshop is “therapy” and not “diagnosis,” it is impossible to talk about the new and promising relations between market, state and civil society that a networked information society can generate without stopping to look at the alarming capture to which it is being subjected and its dramatic socio-environmental consequences. It is not a matter of sowing fear, which is, almost always, the basis for extreme and irrational behavior, but rather of creating a sense of urgency in the face of the great contradiction of our time, which pits the open, free and creative nature of the internet against the opaque, concentrating power of the digital giants, which threaten our privacy.

In this way, the first part of this work is focused on what the workshop calls “diagnosis,” while the second explores paths for “therapy.”

From social cooperation to destructive concentration

The world’s best-selling textbook on economics (Mankiw, 2015) outlines, in its introduction, 10 principles of economics. One of these states that there is an inevitable trade-off between equity and efficiency. In fact, it is impossible to envision contemporary material achievements without large concentrations of capital that enable the formation of production units capable of offering, to the masses, automobiles, electric power, medicines and most of what we experience and consume today.

The information and communication technologies that have spread since the start of the 1980s have begun to invalidate, in practice, the trade-off taught in textbooks. What matters most in this invalidation is not the computer itself but rather two attributes associated with it. The first is the exponential growth in computing power, responsible for the transition from the gigantic machines of the mid-20th century to the personal computer of today. The second is that this computing power allows these devices to function in radically decentralized networks (Kelly, 2016).

This means that, in an unprecedented way, the most powerful technologies that scientific research has created are now in the hands of people and no longer exclusively in the hands of companies. The demise of record labels and of a significant portion of the press appeared to herald the power of individuals, doing things for themselves and, more importantly, expanding social cooperation in an autonomous and independent manner (Benkler, 2006; Benkler, 2011; Anderson, 2012; Rifkin, 2016).

Much more than just a technology, the internet is an institutional system that reduces barriers to entry for a set of activities that were previously controlled by those with capital and power. It opens the way not only to economic decentralization, but also to innovative forms of political participation in which people are present in the public sphere, and not only represented in it (Castells, 2014). What the internet represents for social life is what climate, oceans and biodiversity represent for the ecosystem services on which we depend.

But the promise of efficient and decentralized social cooperation envisioned during the first two decades of the internet’s expansion has not come to pass, not by a long shot. The first decade of the millennium saw the emergence of both smartphones and cloud computing, technologies that have given rise to powers even more concentrated and threatening than those the first two decades of the internet had promised to avert.

For example, big data gathers information on what individuals and companies do, in volumes that conventional types of data analysis could never handle. In fact, the data itself is less important than the predictive capacity of the systems that access, analyze and control it, which allows those systems to suggest films, books and friendships and to interfere in people’s preferences, making it a major source of profit and power. This analytical capacity is not in the hands of individuals, but concentrated in the hands of a few corporations and, in many cases, governments.

At the same time, what was lauded as a collaborative economy, one in which people could share goods such as automobiles or their own homes, was transformed into a giant business that subverted the original inspiration. In place of a distributed ridesharing system, Uber. In place of room or house sharing, Airbnb. If the beginning of the digital age gave rise to promising social cooperation initiatives such as open software and Wikipedia, very quickly it was transformed into a venue for the most impressive concentration of wealth and power of the modern age, one that has begun to present a growing threat to human dignity and democracy. Let us take a closer look at this issue.

Eliminating the competition

At the start of the 20th century, the British publication The Economist was opposed to the splitting up of Standard Oil, as advocated by US competition authorities. Size, in and of itself, is not a crime, said the publication almost 100 years ago, and there is no way to conceal the benefits to the consumer provided by the immense proportions of Standard Oil.

But, in the case of the digital giants, the liberal orthodoxy of The Economist does not keep it from recognizing that “there is cause for concern”: “internet companies’ control of data gives them enormous power. Old ways of thinking about competition, devised in the era of oil, look outdated in what has come to be called the ‘data economy.’” What is new compared to the anticompetitive practices of the companies typical of the 20th century? The network effect allows the companies that today dominate the internet to obtain huge amounts of data, which, in turn, allows them to stifle competition. The strength of the digital platforms is derived from concentration; in other words, from the fact that the more participants are involved in them, the more participants they tend to attract. It is the mechanism known as “the winner takes all.”

If there were half a dozen programs like Waze in a city, it would be difficult for any of them to inform motorists about traffic conditions. This network effect favors, for example, Google, which “controls five of the top six billion-user, universal web platforms—search, video, mobile, maps and browser—and leads in 13 of the top 14 commercial web functions” (Taplin, 2017). It also helps to explain why 51% of everything that Americans spend online goes to Amazon (Taplin, 2014, p. 121), which sells 65% of new e-books in the United States. The network effect is also responsible for the dominance of Google, which received 35% of internet searches in 2004 and today receives at least 88%. Together, Google and Facebook concentrated almost all of the resources that companies set aside for online advertising in 2016 (The Economist, 2017). The effects on independent advertising agencies and traditional press outlets (which depend on advertising) are, obviously, devastating.

This is one of the reasons why the profitability of these companies is so much higher than that of conventional companies. Even taking into account the US$ 2.74 billion fine levied against Google by European competition authorities in 2017, its profit, along with that of the other four digital giants already cited and Netflix, was 20.7% of revenue, compared to an average of 10.1% for the companies that make up the S&P 500. The market value of the six companies, which rose 33% in 2016 over the previous year, has already topped US$ 3 trillion. To get an idea of the magnitude of the digital giants, this is 70% more than Brazil’s GDP. Apple is on its way to becoming the first company worth over US$ 1 trillion (Brigato, 2017). According to the Financial Times, half of the US businesses that generate profits of 25% or more are tech companies (Foroohar, 2017).

The profitability of these companies is not derived fundamentally from their being digital companies but rather from their being platforms. Dell, Intel and Cisco, all dominant figures in the digital world at the start of the millennium, have not maintained their positions at the top. And this can be explained, according to Om Malik, one of the world’s top technology analysts and a columnist for the New Yorker, by the fact that they are not platforms. “A platform is essentially a business model that thrives because of the participation and value added from third parties with only incremental effort from the owner of the platform” (Malik, 2016). Priceline and Expedia together are worth US$ 114 billion, more than all the hotel groups listed in the S&P 500 in 2016. The concentration became so great that Hilton and Marriott went to court to prove that Priceline and Expedia are monopolies (Wigglesworth, 2016).

Likewise, Apple’s earnings come fundamentally from sales made by developers on its platform, which is in the hands of millions of people. The US$ 50 billion that company head Tim Cook proudly handed out to those who develop programs on Apple platforms netted over US$ 20 billion “to Apple for little effort… It’s good to be sitting on a platform, collecting tax, coming or going” (Malik, 2016). And it is clear that the platforms on which the digital giants are built give them market power that ultimately stifles competition: at least in the view of the European Union, which levied (after proceedings that lasted seven years) a fine of 2.4 billion euros against Google (Alphabet) because the company “promotes its own online shopping service above search results” (Morozov, 2017).

But, as Evgeny Morozov explains in his article, what the platform companies want most is the information that people incessantly share, because it is this that allows them to make intelligent the different systems and segments in which they operate and in which they intend to operate. “These companies are able to hoard data, which allows them to become smarter in learning about their customers. Because of their leviathan-scale operations, they have the infrastructure and resources to write algorithms and make their platforms more effective… This amalgam of algorithms, infrastructure, and data is highly potent” (Malik, 2016). So powerful, in fact, that it is not through the searches we do on Google that our preferences are detected, but rather through the mass of information that the daily use of digital devices offers (for free) to the network. And the more the internet of things develops, the more information on our habits will feed the deep learning of the machines and, in this way, allow them to know more about us. Morozov explains that this knowledge is so deep that Google will no longer need to promote certain companies in its search results: this promotion will be done regardless of any search, based on the information available about people’s behavior in all aspects of their lives. Here the maxim that governs the internet age applies: if the product is free, you are the product (Lanchester, 2017).

The information generated by automobiles, today highly connected, is essential to enable, for example, self-driving cars. In just a few years, a self-driving car is expected to generate 100 gigabytes of information per second. The more data, the better companies can develop algorithms capable of interpreting the movement of objects, people and animals on the roads and in the streets. This is one of the reasons why Tesla, which sold 25,000 cars in the first four months of 2017, is worth more than GM, which sold 2.3 million vehicles, according to The Economist (2017). Likewise, it is this capacity to collect, store and interpret data that will allow the 20 billion objects expected to be connected to the network by 2020 to function, and that underpins the prediction that by 2050 there will be no fewer than a trillion connections in the internet of things, according to Carlos Creus Moreira, CEO of WiseKey and former United Nations specialist in cybersecurity.[ix]

Google’s operating system, Android, which today runs on most smartphones around the world, is only a means. The end, says Om Malik (2015), is “to push Google’s various services deeper into our lives, collect as much data as possible, and then build intelligent and automatic experiences…the company must put all of your information inside Google’s gigantic server farms…If you’re texting a friend about dinner, Google will give you restaurant reviews and directions automatically.” What is at stake is not just the exchange of equivalents: it is, as Michael Sandel explains, an exchange of privacy for convenience. “The cost is your data, privacy and lack of control,” explains Malik (2015). Tristan Harris, cited previously, worked in the Persuasive Technology Lab at Stanford University, which has the explicit aim of “changing what people think and do.”[x]

This changes the nature of competition, since as soon as a company attains a certain size (which the digital giants already have), it is no longer possible for innovative firms to emerge and compete with those that already dominate the networks. When a threat of this type appears, the company is simply acquired, as happened with Instagram and with WhatsApp, a company with some 60 employees acquired by Facebook for US$ 22 billion in 2014.

What’s more: the nature of economic planning is changing, since the most powerful tools for planning ahead are not in the hands of governments or civil society but rather in the hands of a small number of digital corporations. This is the central reason that leads Izabella Kaminska (2016), a columnist for the Financial Times, to defend the thesis that the global economy is today subject to a kind of Gosplan 2.0: “we are reverting to a world where a technocratic elite makes economic planning and allocation decisions based on their subjective interpretations of personal behaviors, status and privileges…” This is a fundamental observation for the discussion of relations between market, state and civil society.

Dualization of economic life

This staggering concentration of wealth and power has devastating effects on different aspects of social life, beginning with retailing. In 2017 alone, ten large retail groups in the United States, as well as the traditional department store Sears, teetered on the edge of bankruptcy. No fewer than 8,640 stores, totaling 147 million square feet of retail space, are expected to close their doors, surpassing the levels of the great financial crisis of 2008 (Wigglesworth, 2017). Between 1998 and 2014, 2,300 independent bookshops and 3,100 music shops closed in the United States (Taplin, 2014, p. 80). Retailing lost 9,000 jobs per month in 2017 due to what experts call the “Amazon effect,” according to the Financial Times. For each million dollars in sales, a brick-and-mortar shop needs 3.5 employees. The same million can be sold by Amazon with only 0.9 employees, on average (Wigglesworth, 2017).

Rana Foroohar (2017) writes, in the Financial Times, that this process, in which the most profitable companies are miserly in the creation of jobs, is at the root of a type of dualization in developed economies, especially in the United States. “The most profitable 10% of US businesses are eight times more profitable than the average company. In the 1990s, that multiple was just three.” At the same time, these more profitable companies are the ones that offer workers the best salaries. This creates a gap between a small group of relatively well-paid workers at the digital giants and a mass of workers who have not seen their incomes rise over time. The important book by Robert Gordon (2015) shows that throughout the 20th century and up to the 1970s, the United States and OECD countries, in general, saw productivity and salaries rise in tandem. Platform capitalism interrupted this virtuous evolution.

Contrary to what occurred during the industrial transformations studied by Robert Gordon, from the end of the 19th century up to 1970, demand today has shifted decidedly toward the skilled labor required to enter the digital economy, increasing the earnings of this segment and turning the dualization of the job market into a driver of greater inequality. This is what specialists call “skill-biased technological change,” as opposed to the “skill-neutral transition from an agrarian to an industrial economy” (Lindsey, 2017).

The technological changes that marked developed countries between the end of the 19th century and the 1970s allowed lower skilled workers to increase productivity, relative to higher skilled workers. The digital revolution and the predominance of platform capitalism have inverted this trend and created a highly polarized job market. “Digital technologies are changing the world of work, but labor markets have become more polarized and inequality is rising—particularly in the wealthier countries, but increasingly in developing countries,” according to the World Development Report of 2016 (WDR, 2016, p.2). The report cites another significant example of this trend: when it was purchased by Facebook, in 2012, Instagram had only 13 employees. In the 1990s, Kodak had as many as 145,000 employees. In OECD countries, the digital economy is responsible for 3% to 5% of the workforce (WDR, 2016, p. 14).

A study from December 2015 on the US economy, published by the global consultancy McKinsey and cited in the most recent book by the American journalist Thomas Friedman (2017), points in the same direction. The study shows a “gap between the most digitalized sectors and the rest of the economy over time… Despite mass adoption, most sectors have been unable to close this gap in the last decade… As the least digitalized sectors are among the largest, in their contribution to GDP and employment, this means that the US economy as a whole is achieving only 18% of its digital potential…”[xi].

Since, in the US, this polarization was accompanied by a decline in the quality of public education, weakening of unions and stagnation in purchasing power of the minimum wage, the consequence was a level of inequality in US society (and in other developed countries) that has not been seen since the end of the 1920s, as highlighted in an important document from the Executive Office of the President (EOP, 2016). According to this document, “the winner-takes-most nature of information technology markets means that only a few may come to dominate markets. If labor productivity increases do not translate into wage increases, then the large economic gains brought about by AI could accrue to a select few. Instead of broadly shared prosperity for workers and consumers, this might push towards reduced competition and increased wealth inequality” (EOP, 2016, p. 8).

Another study by McKinsey[2] shows that before the 2008 crisis only 2% of families in the world’s richest countries were worse off than their parents. Now this proportion has reached no less than two thirds of the total: 65% to 70% of families in 25 “advanced” countries, between 540 and 580 million people, saw their incomes (from salaried work or otherwise) fall or stagnate between 2005 and 2014. From 1993 to 2005, this was the case for only 10 million people. The only reason the situation was not worse was income transfers. But even when these are taken into account, one quarter of families either saw their incomes stagnate or had to reduce their standard of living between 2005 and 2014. McKinsey’s conclusion is alarming: today’s young generation is at risk of being poorer than their parents.

A report on the future of artificial intelligence from the Executive Office of the President’s National Science and Technology Council Committee on Technology, from October 2016, also highlights that access to this elite group of workers in the digital economy is highly selective. Only 18% of graduates in computer science today are women, compared with 37% in 1984. At the most important US conference on artificial intelligence, in 2015, only 13.7% of the participants were women, and studies show that the selection of professionals is highly skewed toward men. The report also found that African-Americans, Hispanics and other racial and ethnic minorities are highly underrepresented in the fields of science, technology, engineering and mathematics (OSTP, 2016, p. 38).

In the labyrinths of power

The conclusion is that the monopolies of the digital age have accumulated much more power than they could have when they operated predominantly in the physical world. This power is, primarily, economic: the US agencies that regulate competition use an indicator, the Herfindahl-Hirschman Index (HHI), to measure the level of concentration of different economic sectors. Markets with an index of 1,500 to 2,500 are considered moderately concentrated; those with an index above 2,500 are highly concentrated. The HHI of the internet search market is 7,402. This is why Barry Lynn, an important scholar on the theme, believes that “the monopolist in the digital world has power that the monopolist of the physical world does not.”
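The HHI mentioned above is simple to compute: square each firm’s market share (expressed in percentage points) and sum the results. A minimal sketch follows; the market shares used are purely illustrative, not the actual figures behind the 7,402 cited above.

```python
def hhi(shares_percent):
    """Herfindahl-Hirschman Index: sum of squared market shares,
    with shares given in percentage points (so a monopoly scores 10,000)."""
    total = sum(shares_percent)
    if not 99.0 <= total <= 101.0:
        raise ValueError("market shares should sum to roughly 100%")
    return sum(s ** 2 for s in shares_percent)

# Hypothetical search market: one dominant firm at 85%, the rest fragmented.
print(hhi([85, 7, 5, 2, 1]))  # prints 7304 -> "highly concentrated" (> 2,500)
```

A market split evenly among four firms scores 2,500, exactly the threshold between moderate and high concentration, which illustrates how quickly the index rises once one player dominates.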

But this power is not restricted to the sphere of economics. Barry Lynn had just lost the funding that Google had provided the New America Institute, the organization where he had worked for years. Google didn’t appreciate that Lynn and his associates had supported the European decision to fine Google for anti-competitive practices. Lynn then founded a new organization, “Citizens Against Monopoly,” on whose website this conflict is chronicled.[xii]

Google’s economic power is equally associated with its political power. The Silicon Valley giant spent more on lobbying in the first three months of 2017 than any other company, according to information obtained by The Guardian (Taplin, 2017). Traditionally, banks, oil companies and arms dealers dominated lobbying activities in Washington, but this has changed. In 2002, Google spent less than US$ 50,000 on lobbying; starting in 2012, its spending rose to US$ 18 million a year. What is most important about this spending, as highlighted in Taplin’s article (2017) and his important 2014 book, is the effort to defend an ultra-libertarian philosophy systematically opposed to government regulation. Their end game, according to Taplin (2014), is to reduce government influence over businesses associated with artificial intelligence, transportation, medicine and education to a minimum.

Algorithms and risks

In 2015, an open letter was written (and signed by almost 9,000 people as of November 2016) that, while recognizing the potential of digital technology to tackle important contemporary socio-environmental problems, sought to raise the alarm regarding the threats brought on by its expansion, the most important of which is a drastic reduction in jobs[xiii]. The letter also raised numerous ethical and legal problems that societies will increasingly face in the relationship between man and machine. The military use of artificial intelligence, for example, is firmly opposed in this document, signed by illustrious figures such as Stephen Hawking, Bill Gates and Elon Musk. Gerd Leonhard, cited previously, observing the destructive potential of these technologies, proposed the formation of a Global Digital Ethics Council (GDEC) with the aim of defining “the fundamental rules and the most basic and universal values that a dramatically different, entirely digitalized society should have.”

George Siemens, from the Link Research Lab at the University of Texas at Arlington, summarizes the problem well: “We will probably be the last generation that is more intelligent than technology. And we have to be very alert to the social implications of all this”[3]. As Thomas Friedman aptly notes[xiv], the pace of technological change, globalization and climate change is accelerating at an exponential rate and is not remotely accompanied by transformations in institutions, systems of learning, management training, social safety nets or government regulations that would allow most citizens to deal with its worst effects. This discrepancy between what Friedman calls the “triple acceleration” (of technological innovation, globalization and climate change) and its institutional bases is probably “the most important governance challenge in the world,” for developed and emerging nations alike[xv].

Thomas Friedman contrasts the technological optimism of the book by Brynjolfsson and McAfee (2014) with Robert Gordon’s skepticism regarding the impact of the digital revolution on productivity. According to Friedman, we are approaching a time when the internet, artificial intelligence, cloud computing and machine learning will benefit sectors as varied as health, education, urban planning, transportation and trade. But it is important to observe that, persuasive as Friedman’s arguments are, Erik Brynjolfsson and Andrew McAfee themselves are concerned about the concentrating effects of the digital revolution. So much so that MIT launched the “inclusive innovation challenge”[xvi].

If the business model of the digital giants is fundamentally based, as described above, on obtaining, storing and processing information extracted from the daily lives of individuals and companies, this is not just an effort to stimulate sales based on knowledge of people’s preferences. What is happening is the increasingly sophisticated formulation of algorithms that enable machines to learn from human experience and, therefore, expand the possibilities of doing what today is done by people. It is in this sense that Brynjolfsson and McAfee (2014) maintain that current technological transformations are replacing not only human labor with machines (which has been occurring on a mass scale since the steam engine, in the 18th century) but, increasingly, human intelligence. And it is clear that this substitution creates unprecedented challenges. It is important to mention two of them in this diagnosis.

On the wrong side of sustainable consumption

The contribution of artificial intelligence to the production of decentralized energy or the use of intelligent materials is a decisive step toward supplying goods and services in a world whose consumption is expanding while its material, energy and biotic resources are shrinking. The 21st century has seen the emergence of numerous business and civil society organizations created for this purpose: B Corp, Sistema B, the Ellen MacArthur Foundation (circular economy), the B Team and the World Business Council for Sustainable Development. These are just a few of the best known, and they help to put contemporary socio-environmental problems on the global business agenda, as reflected in various reports by the World Economic Forum, where climate issues have been among the most important for years, according to the officials who come together annually in Davos (WEF, 2017).

The truth, however, is that no matter how large the contribution of the digital revolution to the sustainable supply of goods and services (up to now, this contribution is notable primarily in the field of renewable energy, much more than in the use of materials or biotic resources) it does not come anywhere close to offsetting the huge impact of digital devices on the consumption of individuals and families. It is precisely because of this ability to gain a detailed understanding of behaviors and anticipate aspirations that the business model of the digital corporations consists of customizing this knowledge, based on algorithms that trace the profile of individuals and offer people what they desire.

John Wanamaker (1838-1922), considered the pioneer of modern marketing, famously quipped: “half of the money I spend on advertising is wasted. The problem is I don’t know which half.” Since the emergence of smartphones, in 2007, this problem has been largely overcome: advertising has increasingly moved away from the generic and transformed itself into individualized, customized messages that arrive directly on the devices in people’s hands. More than a shift in advertising, artificial intelligence is becoming the foundation of the platform company, whose most notable examples are Uber, Airbnb, Waze and the Chinese firm Alibaba. All of these achieve growth, as observed above, not through material assets that they own, but through their ability to use the internet to gather under their command a growing number of economic activities and services.

The agility of the platforms and their individualized knowledge of the demand of each one of us results in an unprecedented capacity to apply pressure to expand consumption. This means that, as the platforms spread, not only their value but consumption itself tends to increase. The Chinese firm Alibaba, which has no inventory, truck fleet or other assets typical of conventional wholesalers, serves 300 million people a month and today is worth more than Walmart. On the Chinese shopping holiday “Singles’ Day” (Nov. 11, 2016), Alibaba sold almost US$ 18 billion worth of goods, three times the combined total of Black Friday and Cyber Monday in the US (Brynjolfsson and McAfee, 2017).

According to a recent publication by the prestigious World Resources Institute (2017), the business model that consists of selling ever more goods and services to ever more people is completely incompatible with the preservation and urgent regeneration of the ecosystem services on which we all depend. “Excessive consumption is not an option for the markets of tomorrow,” warns the WRI report. But it is precisely on this incessant push to expand consumption, grounded in detailed knowledge of people’s behaviors, that the wealth of the digital giants rests. Even if, in the future, the plan is to prosper by selling affordable goods and services to a tiny portion of the population (like those who nourish the transhumanist dream of eternal life), today’s digital platforms have become an uncontrollable means of indiscriminately stimulating the expansion of consumption, exactly the wrong way down the path to sustainable development.

Redundancy and polarization

The second problem brought about by the influence of the algorithms that underpin the devices in the hands of billions of people is political. Ethan Zuckerman (2013, position 104) was one of the first to observe that media tools, in general, “help us discover what we want to know, but they’re not very powerful in helping us discover what we might need to know.” It is clear that during insurgencies, like the Arab Spring or the 2013 protests in Brazil, the possibility of quick communication between people has facilitated mobilization. There are also numerous initiatives that use digital platforms to strengthen civil society.

But Zuckerman is right to point out that “a central paradox of this connected age is that while it’s easier than ever to share information and perspectives from different parts of the world, we may now often encounter a narrower picture of the world than in less connected days” (position 230). At the root of this paradox is precisely the problem raised at TED by Tristan Harris. The research labs that study the influence that digital devices exert over people’s behavior evidently do not limit themselves to consumption, but probe every facet of our lives. And, throughout the world, opportunities for people to interact with those outside of their circle have become increasingly rare. Contemporary cities are under constant threat of ceasing to be real public spaces and being converted into delimited territories that belong to specific social groups.

The problem is that contemporary digital devices, instead of offsetting or at least attenuating this segregation, end up reinforcing it. Based on techniques developed to study persuasion through digital devices, information packages are designed that confine people in redundant cocoons, transmitting to them exactly the messages that they want to hear. The problem would not be that serious if access to information were diversified. However, Facebook is the primary source of information for 44% of Americans. In the three months preceding the election of Donald Trump, the Silicon Valley giant wielded greater influence on public opinion than the traditional press (Taplin, 2017).

According to Eli Pariser, digital corporations work like a filter that isolates people from those who are different. The customization is not restricted to consumption, but seriously affects the world of ideas, customs and opinions. It is one of the factors that corrodes the feeling of community belonging, without which, according to Michael Sandel, there is no social cohesion, which forms the basis of democratic coexistence.

The result is explained by Cass Sunstein (2017): the design of the digital devices has strengthened political polarization to the point of being dangerous, by creating parallel worlds whose members are incapable of recognizing others as legitimate interlocutors. Our pages on Facebook and our lists on Twitter serve as veritable echo chambers and discourage contact with the unexpected or something that could lead us to question our convictions. Of course this brings with it some advantages and it is clear that links with community identities are not, in and of themselves, something bad. But, as Sunstein (2017) has noted, social media today are ruled by an architecture that threatens democracy by strengthening bubbles of repetition in which the opinions of many people are formed.

Our ability to share experiences with those who are different from us has diminished. Different communities have become antagonistic. And the public dimension of social life, that which makes us feel part of a group with experiences, dramas and common hopes, is disappearing. The result could not be more paradoxical: the tool created to expand the limits of human communication is doing just the opposite and if this design continues, as the US election has shown, it is a major threat to democracy.

Summary of the diagnosis

The most important contradiction of the contemporary world is that it pits the open, decentralized and communicative nature of the internet against the opaque, concentrated and redundant power of the contemporary digital giants. The pun in the title of the book by Cathy O’Neil on algorithms in digital culture (Weapons of Math Destruction) is telling: she defines them as “opaque, unquestioned and unaccountable.” The liberal aspiration of an economy governed by competition and driven by the resulting innovation, where political power and the strength of citizenship establish limits on inequality and the destruction of ecosystems is incompatible with a world where a handful of companies concentrate not only so much wealth, but so much power. Seemingly, this is the highest concentration seen since the dawn of capitalism, in the 19th century.

And what’s more important is that this power is not just economic, political or cultural: it is exercised increasingly on human nature itself, through technologies associated with bioengineering and artificial intelligence. This is not the theme of some science fiction novel, but rather the opinion of Tristan Harris, Nick Bostrom, Cathy O’Neil and various manifestoes in which some of the most important proponents of artificial intelligence and bioengineering reveal their fear about this evolution. The capacity to track individuals’ behaviors, store the data derived from their day-to-day routines and allow machines to learn from our experiences and social interaction has been transformed into a tool capable of provoking behavior that we have not chosen and making determinations of which we are not informed. Or, as Tristan Harris (2017) put it, “you can target a lie directly to people who are most susceptible.”

Despite the rhetoric that the digital revolution will eliminate only certain tasks, not jobs, countless studies have found just the opposite. And even if most of the population remains employed or otherwise occupied, the jobs in which individuals take an active, creative part in the technological changes brought about by the digital revolution go to only a small portion of the population. Up until now, far from expanding opportunities, the technological changes characteristic of the digital age have been decisive vectors in increasing inequality in the contemporary world.

Economic growth under platform capitalism is, as we have seen, skill-biased. It is clear that education and better professional training are and will be increasingly important. But as Michael Sandel points out, the idea that the position of individuals in the social hierarchy can be strictly determined by their merit (cultivated in the best schools) is little more than a myth: the cult of meritocracy has become deeply corrosive for contemporary societies, since it legitimizes the earnings of certain individuals while condemning the overwhelming majority to economic and social irrelevance. Hence the urgency of a re-decentralization and re-democratization of the internet, so it can be used as a tool to help individuals, and the places where they live, flourish.

Restore the internet as a common human good

The phenomena described here are so recent that a comprehensive summary of constructive proposals aimed at countering them is not yet possible. But far from provoking paralysis or perplexity, the data capture practiced by the social networks has prompted reactions not only from social movements, but also from segments of the business sector, from professional organizations and from governments. It is from these reactions that alternative projects capable of placing the digital revolution at the service of human well-being, and not of a handful of corporations, have emerged. What follows is not a survey of these reactions, but an outline of the direction in which they point, based on what is already happening. It is more important to identify the values underpinning these proposals than the specific measures designed to correct the path that the digital giants have imposed on the internet.

It is also important to point out that, even in their present form, social networks are employed for constructive social purposes. The use of Facebook to allow citizens to participate in the rewriting of Iceland’s constitution is well known. The United Nations launched, in the second half of 2017, a call for proposals from researchers around the world for projects that use big data to create knowledge and curb the advance of climate change[xvii]. Likewise, Alex Pentland believes that data, the algorithms used to interpret those data and artificial intelligence may be essential to the management of public policies and, particularly, to moving forward with his proposal for data-driven cities (Box II).

Box II


Social Physics

Alex Pentland began his important book, published in 2014, with an incontrovertible critical observation on the social sciences. The data on which they have been based, up until now, suffer from a twofold deficiency. When they try to track behavior in real time and understand how human customs form and change, anthropologists and ethnologists frequently produce notable texts, but these are based on the observation of a limited number of cases. On the other hand, data from censuses or opinion polls are massive, but static: they offer a snapshot to which scientists try to give life by developing hypotheses about how it evolves.

The big novelty of Social Physics is that, for the first time, one can obtain, process, analyze and discuss observations of human behavior as it happens. Digital media open the way for veritable live laboratories, where we can not only observe the formation and changing of human culture but also interfere with it. The availability of this information presents an unprecedented opportunity to empirically test hypotheses on the flow of ideas and information and on how to interfere in the behaviors of individuals. In an allusion to microscopes and telescopes, Pentland speaks of socioscopes to characterize Social Physics.

The laboratory headed by Pentland coordinates planning initiatives based on the use of Big Data in Trento, Italy, and in Abidjan, Ivory Coast. In both cases the work is conducted in conjunction with universities, telecommunication companies, governments and civil society organizations and addresses the themes of mobility, public health, supply and security in an innovative manner. The technologies that enable live monitoring of where we go, how we get there, with whom we speak and in what tone of voice, what we purchase, what we download from the internet, what we eat and what we wear stoke fears that the growth of digital media will be accompanied by a loss of human freedoms.

Alex Pentland, who regularly attends the World Economic Forum and whose students’ thesis projects frequently result in startups, is aware of this risk. One way to avoid it would be a New Deal on Data: a series of guarantees ensuring that we are the owners of the information we produce (voluntarily or not) and that these data can only be used to produce public goods and with informed consent. This would be an interesting way to allow scientific and technological advancements to strengthen not only material well-being, but also democracy in contemporary societies. His concern is in agreement with the values presented below and, particularly, with respect for privacy.

But the question is not whether the tools created by the contemporary digital giants can be put to constructive use by society. There is no question that there are many virtuous examples in this sense. This does not, however, diminish the urgent need for reflection on the meaning and purposes for which the digital revolution could be employed, or reemployed, to benefit mankind. And this is why we lay out below the seven fundamental values underpinning the various initiatives and achievements of so many companies, civil society organizations and governments.

  1. Respect for privacy

John Kenneth Galbraith, at the end of his life, published a short satire called The Economics of Innocent Fraud. Prominent among the innocent frauds is the myth of consumer sovereignty: the idea that economic life is determined by our choices and that these are based on the rationality of individuals. If in the capitalism of the 20th century consumer sovereignty was just a smokescreen to hide the power of large corporations, in the digital revolution this power has taken on a qualitatively new dimension, marked by the ability to classify individuals according to their behaviors and, on that basis, guide their desires and thoughts, using algorithms that have become essential to our daily lives (Finn, 2017).

Furthermore, as shown by Marta Peirano (2015) and clearly demonstrated by the Snowden case, corporations as well as governments invade our privacy, in an attack on our freedoms and democracy. Greenwald and Gallagher (2014)[xviii], in a report published in The Intercept, showed that Britain’s top spy agency (Government Communications Headquarters) “used its surveillance system to secretly monitor visitors to a WikiLeaks site. By exploiting its ability to tap into the fiber-optic cables that form the backbone of the internet, the agency confided to allies in 2012, it was able to collect the IP addresses of visitors in real time, as well as the search terms that visitors used to reach the site from search engines like Google.”

After the attack on the World Trade Center, the blacklist of possible terrorists compiled by the US government contained a million names. As if that weren’t enough, the National Security Letter, an administrative subpoena issued by the US government, forced telephone companies and internet providers to hand over the personal information of customers to security agencies, without customers knowing that their data were being transmitted to the government (Peirano, 2015, position 134). Especially noteworthy is the TED talk in which Marta Peirano shows the incredible magnitude (and the dangers, not only for activists but also for ordinary citizens) of the personal data revealed to companies and governments[xix]. What is especially serious in this type of investigative procedure is the monitoring of people based on what they read, a practice with a long and tragic history.

But there is a growing movement that questions the business model based on this invasion of privacy. Gry Hasselbalch and Pernille Tranberg (2016) argue that data ethics is nothing less than the new competitive advantage of the digital age. They compare the way companies treat the data they obtain from clients and suppliers to the socio-environmental commitments made in the past, many of which amounted to little more than lip service: “individuals and consumers aren’t simply concerned about a lack of control over their personal data (their privacy), they’re starting to take action on it and react with protests, ad blockers and encrypted services” (position 19). We are seeing a “data ethics paradigm shift” which is beginning to “take the shape of a social movement, a cultural shift and a technological and legal development that increasingly places humans at the center.” The criteria the authors propose to define data ethics for companies are very operational: “a data-ethical company sustains ethical values relating to data, asking: Is this something I myself would accept as a consumer? Is this something I want my children to grow up with?” (position 33).

Commenting on the recognition of privacy as a fundamental right by the Supreme Court of India, Carlos Creus Moreira, who worked for the United Nations as an expert in electronic security, said: “Your right to privacy is a fundamental right, which is a human right. Your work, your sexual orientation and your religion are your personal information that should not be shared without your consent with anybody.” It is fundamental that the internet be designed in a way that allows individuals not to be on the internet, if they so desire. And he continues: “While this is being provided to citizens in the EU and India, privacy rights in the United States are not yet established. Why? Because the business model of all American companies is based on selling your privacy. Facebook makes money by selling who you are and what you do. This is a trillion dollar industry for these companies.” For Creus Moreira, far from inhibiting business, respect for privacy will stimulate the cyber security industry, and India, a pioneer together with the European Union in data protection laws for citizens, “is emerging as a cyber security hub.” Clearly, the theme of privacy will become even more important as connections via the internet of things intensify.

The Institute of Electrical and Electronics Engineers (IEEE), a global organization with over 420,000 members, launched The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems at the end of 2016. The organization assembled panels to discuss themes in artificial intelligence and autonomous systems. The panel on “Personal Data and Individual Access Control” recognizes the “data asymmetry” (IEEE, 2016, p. 7) that marks the current situation. And although there is no perfect solution to the problem, the recommendation “is to eradicate data asymmetry for a positive future” (IEEE, 2016, pp. 7-8). “People must define, access, and manage their personal data as curators of their unique identity” (p. 7). Of course this is not an easy task, and it is important to note the questions that went unanswered by the panel: “How can an individual define and organize his/her personal data in the algorithmic era? What is the definition of control regarding personal data? How can we redefine data access to honor the individual?” And among the panel’s observations appears a phrase that is fundamental to the way the contemporary digital giants use personal data: “data that appears trivial to share can be used to make inferences that an individual would not wish to share.”

One of the most interesting and radical projects dealing with this theme comes from the previously cited expert Evgeny Morozov. He explains that, in the case of Google, anticompetitive practices are no longer (contrary to what occurred in 2010) the result of promoting “its own online shopping service above search results,” for which the company was fined by the European Commission. These practices are increasingly less important, since Google controls such a large mass of information that it can anticipate consumers’ decisions regardless of their searches on the internet. The problem, then, lies in the huge concentration of data in the hands of a small number of companies, which Morozov calls “data feudalism.” The Economist (2017) was right when it said that splitting up the digital giants would make no difference if the aim were to reduce their power: the network effect would simply lead to concentration again, and if the data are not all in one place, they lose their ability to serve as the basis for the development of artificial intelligence.

And that’s where the proposal by Morozov comes in: “if we really want to exploit all the insights that come from putting different data sets together, it’s obvious that data should belong to just one entity, but it does not have to be a big tech firm like Alphabet. All of the nation’s data, for example, could accrue to a national data fund, co-owned by all citizens (or, in the case of a pan-European fund, by Europeans). Whoever wants to build new services on top of that data would need to do so in a competitive, heavily regulated environment while paying a corresponding share of their profits for using it.”

What’s new in the digital age, in sum, is not privacy in itself (a universal value of democracy), but that it has become so important in the relationship between market, state and civil society. It is a theme that, before the digital age, touched only marginally on the economic sphere of social life. Today, it is central and strategic. Data are the most important commodity for the companies that dominate contemporary economic, political and cultural life. But data are also different from any other commodity, not only because of how they circulate or because they are obtained free of charge (and therefore do not correspond to the idea of an exchange of equivalents), but because they involve, more than any other merchandise, a basic value of democracy: respect for people’s privacy.

  2. Decentralization

Chris Anderson began his book Makers (2012) by recounting the story of his grandfather, who was an inventor, but not a businessman. Because he lacked the means to bring his talent to market, he always saw his inventions transformed into earnings for those who had the capital to scale up his ideas. “As Marx observed, power belongs to those who control the means of production” (Anderson, 2012, p. 5). Except that now these means of production have been radically decentralized, making it possible to share ideas and songs, offer energy, or produce material goods using 3-D printers. “The beauty of the Web is that it democratized the tools both of invention and of production” (Anderson, 2012, p. 7). In sum, a world of makers.

The first part of this work showed that this democratization has been seriously limited by the power of digital giants over the most important social networks. That which was promised to be a direct and decentralized relationship of cooperation between people is today submitted to a hierarchical power that absorbs a significant part of the value created in the realm of social collaboration and is able to determine its path and its format.

Two contemporary initiatives have been devised to deal with this problem.

The first is a type of registration, certification and protection used to ensure the integrity of contracts, regardless of their nature. Today, a contract between parties (people or companies) is guaranteed by hierarchical systems, by the authority of a government or company. There is a method, however, that establishes “trusted transactions directly between two or more parties authenticated by mass collaboration and powered by collective self-interests, rather than by large corporations motivated by profit” (Tapscott and Tapscott, 2016, p. 5): this is how one of the most respected figures at the World Economic Forum (Don Tapscott, writing with his son, Alex) defined blockchain. It is the technology underpinning the best-known cryptocurrency in circulation, Bitcoin, and it was designed to counter the great contradiction of the digital age, which Tapscott summarizes thus: “concentrated powers in business and government have bent the original democratic architecture of the internet to their will” (Tapscott and Tapscott, 2016, p. 12).

Blockchain is a type of distributed ledger. There is no single computer file where all the transactions are kept. These transactions can be contracts, currencies, vital records, IDs, titles to property, diplomas or everyday exchanges, such as payment for transportation or rental of a property. The records are public (because they are distributed across the computers on the network), but at the same time anonymous, through the use of cryptographic techniques. It is an important step toward respecting the first of the values mentioned here: privacy. It is a step towards abolishing the power of registry offices and reducing, for example, the immense bureaucracy in foreign currency exchange. Or, as Melanie Swan (2015, position 153) puts it: “Blockchain technology’s decentralized model of trustless peer-to-peer transactions means, at its most basic level, intermediary-free transactions.” The technology made the cover of The Economist[xx] (with the headline “The Trust Machine: How the technology behind bitcoin could change the world”). Moreover, six of the largest global banks (Barclays, Crédit Suisse, Canadian Imperial Bank of Commerce, HSBC, MUFG and State Street) plan to use blockchain to facilitate the recording of remittance transactions starting in 2018. Deutsche Bank and Santander are also participating in the negotiations, according to information obtained by the Financial Times (Arnold, 2017).
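The core of the mechanism described above (records that are linked by cryptographic hashes, so that tampering with any past entry is detectable by anyone holding a copy of the ledger) can be illustrated with a minimal sketch. This is an illustration only, far simpler than a real blockchain, which adds peer-to-peer consensus, digital signatures and mining or validation rules; the function names are arbitrary.

```python
import hashlib
import json
import time

def make_block(records, previous_hash):
    """Bundle a list of transaction records into a block that is
    cryptographically linked to the previous block."""
    block = {
        "timestamp": time.time(),
        "records": records,
        "previous_hash": previous_hash,
    }
    # Hashing the block's serialized content seals it: changing any
    # record would change this hash and break the chain.
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()
    ).hexdigest()
    return block

def chain_is_valid(chain):
    """Verify that each block still matches its own hash and points
    to the hash of the block before it."""
    for i, block in enumerate(chain):
        content = {k: v for k, v in block.items() if k != "hash"}
        recomputed = hashlib.sha256(
            json.dumps(content, sort_keys=True).encode()
        ).hexdigest()
        if recomputed != block["hash"]:
            return False
        if i > 0 and block["previous_hash"] != chain[i - 1]["hash"]:
            return False
    return True

# Build a tiny ledger: a genesis block plus one block of transactions.
genesis = make_block(["genesis"], previous_hash="0" * 64)
block1 = make_block(["Alice pays Bob 5", "Bob rents a room"], genesis["hash"])
chain = [genesis, block1]
print(chain_is_valid(chain))   # True: the ledger is intact

# Tampering with a past record is immediately detectable.
chain[0]["records"] = ["forged entry"]
print(chain_is_valid(chain))   # False: the chain no longer verifies
```

Because every participant can recompute the hashes, trust rests on the structure of the chain itself rather than on any registry office, bank or company, which is precisely the point made in the passage above.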

But the financial world is not the only place where blockchain is being developed. Cryptography is already being used to collect signatures for political purposes, as occurs today with the tool Mudamos[xxi], created by the Instituto Tecnologia e Sociedade (Rio de Janeiro, Brazil), through which citizens present bills or political manifestos without the need for registry offices to certify signatures. Blockchain is also being used on Rio de Janeiro’s environmental exchange, BVRio[xxii], to certify the legal harvesting of timber in the Amazon, and its dissemination will help dismantle the web of corruption that exists today based on the use of counterfeit invoices.

Confidence in this technology platform is, paradoxically, entirely depersonalized. And, at the same time, it is not backed by a company or a government: the guarantee lies in the decentralized network. In the world before blockchain, confidence depended on intermediaries who profited (in an increasingly parasitic manner) from this dependence. This is what enabled the fantastic power of companies like Airbnb, Uber, TaskRabbit and others that, under the cloak of a supposed sharing economy, are the most emblematic examples of platform capitalism today (Slee, 2016). Don and Alex Tapscott (2016) show that a shift, using blockchain technologies, from intermediation to a direct relationship between people is possible, with the added advantage that the data generated by these transactions would then belong to the individuals rather than be concentrated in the hands of large corporations.

The second contemporary initiative designed to counter the data capture to which social cooperation is subject is platform cooperativism. It joins the forces of the cooperativist movement, present in one form or another almost everywhere, with the collaborative potential opened up by digital platforms. One of the most notable trends in this sense comes from Germany: in 2011 alone, no fewer than 167 green energy cooperatives were created (Rifkin, 2015, p. 215). These cooperatives are key to Germany’s strength in solar energy. In Denmark and Holland, it is wind energy that is expanding through cooperatives. It is clear that radically decentralized coordination among people and companies presumes some level of organization.

And it is precisely for this reason that the central proposal by Trebor Scholz (2016), an advocate of a broad platform cooperativism movement, begins with the idea of “cloning” the technological nucleus of companies that brand themselves part of the collaborative economy. To do so, Scholz and his associates launched the Platform Cooperativism Consortium[xxiii], which unites dozens of organizations from various parts of the world. The objective of platform cooperativism is to strengthen entrepreneurship and the “maker” movement, so important in the digital age, without invading privacy or concentrating wealth and power in the hands of digital giants. One of the movement’s most important proposals is that people receive fair compensation for their work, just the opposite of what so frequently occurs today in the self-proclaimed sharing economy.

And in this sense, it is important to point out a proposal in the book by Steven Hill (2015). He explains that the United States (although this is clearly a global phenomenon) is becoming a freelance society and that the so-called sharing companies are the main drivers of this transformation. Although they are at the vanguard of global technology (and, as we have seen, the data they collect from the networks they dominate serve as the basis for their ambitions in the field of artificial intelligence), the workers who feed their platforms receive, in the overwhelming majority of cases, not just very low compensation but no labor or union protections at all. Even if the United States has a low unemployment rate today, formal jobs are on the wane: it is projected that before 2030, over half of the US workforce will be left without social rights, as freelancers. To mitigate this problem, Hill proposes that all temporary contracts require the payment of a fee into a fund used to cover Social Security for workers. Platform cooperatives would clearly be in a better position to do this, since typical platform companies use the nonpayment of social rights as an important source of earnings.

This is not a discussion limited to activists. Lawrence Summers (2017), who served as chief economist at the World Bank, head of the National Economic Council during the Obama administration, and president of Harvard University, published an article in the Financial Times under the title “America needs its unions more than ever.” “Middle class anxiety is surely also fed by the slow growth of wages even in the ninth year of economic recovery with unemployment at historic low levels,” wrote Summers. In 2016, average hourly earnings “rose only 2.5%. In contrast, profits of the S&P 500 are rising at a 16 per cent annual rate.” Why is this happening? The main explanation advanced by Summers is that “the bargaining power of employers has increased and that of workers has decreased. Bargaining power depends on alternative options. Technology has given employers more scope for replacing Americans with foreign workers, or with technology, or by drawing on the gig economy. So their leverage to hold down wages has increased.” Increasingly, “employers offer gigs rather than jobs.” Summers ends the article by noting that only 6.4% of the workforce in the private sector is unionized.

However great the benefits of the digital revolution may be, its most troubling result (the dominance of platform capitalism and, with it, the growing threat to the cohesion of societies in which work was, for decades, a noble path to social mobility) is the growing risk that social life will be increasingly marked by income instability and a consequent deterioration of living standards for those who aspire to the middle class of contemporary societies. It would be hard to find a more decisive theme for our discussion of market, state and civil society.

  3. Aversion to inequality

This third fundamental value has taken on new relevance with advances in information and communication technologies and the simultaneous growth of inequality in the countries where the digital revolution has penetrated most deeply. It is true that the poorest 40% of the world’s population have seen their incomes rise substantially in recent decades and that abject poverty now affects the smallest share of the global population ever, according to the World Development Report published by the World Bank (WDR, 2016). But it is no less true that, especially in developed countries, the precariousness of jobs and the concentration of opportunities among the elite capable of incorporating digital devices creatively into their professional activities have discouraged those who no longer see a way of improving their lives through their occupations, as Lawrence Summers describes in his article. And it is important to remember that Summers is no Occupy Wall Street activist but rather a key figure in the establishment of current economic thinking.

Among the initiatives aimed at countering advancing inequality in the contemporary world, two are especially important for the relationship between market, state and civil society.

The first of these initiatives is related to the previously mentioned blockchain. More than just a technology for decentralization, blockchain can be viewed as a platform in which the power to conceive, create and distribute products remains in the hands of people rather than companies. And this means that inequalities need not be dealt with only after the fact, through redistribution policies (echoing the mantra of the Universal Basic Income, which has become so important to Silicon Valley).

The emergence of inequalities (or at least the level of inequality that marks contemporary societies) can be avoided or mitigated by the power of the initiatives offered to individuals. Don and Alex Tapscott (2016, p. 14) explain: “Rather than trying to solve the problem of growing social inequality through the redistribution of wealth only, we can start to change the way wealth is distributed—how it is created in the first place, as people everywhere from farmers to musicians can share more fully, a priori, in the wealth they create.” It is a position similar to the one advocated by Bruni and Zamagni (2007) in their book: “if we continue to insist that the state must be the only entity responsible for redistribution and that this should occur post-factum… we will passively watch inequality increase. On the contrary, it is also necessary to intervene when the good or service is produced. Under current circumstances, working only on redistribution is to arrive too late.”

Creating wealth in a decentralized manner, one of the main values underpinning the emergence of the internet itself, is certainly a constructive way of addressing the advance of inequality in the contemporary world. This assumes, of course, the preservation and development of the open, free and evolutionary nature of the web, as advocated by the organization founded by its creator, Tim Berners-Lee. What is at stake here is not just an economic issue. More than just obtaining the means to survive, it is about developing tools so that individuals can be active participants in the construction of networks. And the development of these tools, as will be seen in the next section, is directly related to the fundamental value of the digital age, innovation. Before we address that, let’s look at the second important initiative for combating inequality.

It comes from the OECD, the organization that brings together the world’s most developed countries and studies and promotes policies to address their problems. In 2016, the OECD formulated a research program on the “productivity-inclusiveness nexus.” The research yielded two fundamental findings. First, the data show that despite the immense impact of the digital revolution, productivity in the developed world (and increasingly in developing countries) has not risen as quickly as it did under the technologies dominant between the end of the 19th century and the start of the 1970s. Nobel prize-winning economist Robert Solow’s observation that computers can be seen everywhere except in the productivity statistics is now famous.

At the same time, inequality continues to rise. In 1980, the top 1% in the United States earned 27 times more than the bottom 50%. In 2016, it was 81 times more[4]. The concentration of wealth is even higher than that of income, according to two of the most respected international experts on the subject[xxiv]: the richest 0.1% of families in the United States held 7% of national wealth in 1978 and no less than 22% in 2012. There are 160,000 families with assets worth more than US$ 20 million each. And there is nothing special about the United States, as shown in the work by Walter Scheidel. Australia, Austria, Belgium, Canada, Denmark, Finland, France, Germany, Greece, Ireland, Luxembourg, New Zealand and Great Britain have all seen huge changes in concentration of income between 1980 and 2010. And what stands out is the absence of exceptions in the survey conducted by Scheidel. In 11 out of the 21 countries that publish information regarding the income of those situated at the top of the social pyramid, the share of income of the top 1% rose by 50% to 100% between 1980 and 2010[5].

Based on this double finding, the OECD research program stated that “there is a risk of a vicious cycle setting in, with individuals with fewer skills and poorer access to opportunities often confined to operate in low productivity, precarious jobs, and—in many emerging-market countries—in the informal economy. This reduces aggregate productivity, widens inequality, and ultimately undermines policy efforts to increase productivity and growth.” The OECD report (2016, p. 71) continues: “In unequal societies, low income households are less able to invest in education and take advantage of opportunities than their better-off neighbors. A productivity strategy that just focuses on businesses and innovations, or that relies on a race to the bottom—via low wages, dismantled social protection, or unacceptable working conditions—to increase the competitive advantage of firms and regions, whilst assuming that eventually everyone will benefit, will ultimately be less effective than a strategy that also addresses the disadvantages that hold people back from contributing to a dynamic economy. This suggests policies to ensure that individuals, and particularly those from lower income groups, are well equipped to fulfill their productive potential.”

That this statement comes from a document published by one of the centers for global business thought reveals the growing awareness that the rise in inequality will not be reversed exclusively through social policies, but requires the emergence of patterns of economic growth oriented by constructive socio-environmental objectives.

From the angle of the relations between market, state and civil society, the message from the OECD is fundamental: neither the digital revolution nor economic growth in and of itself will reduce the high level of inequality in our contemporary world. On the contrary, it is the reduction of inequalities (by recognizing the value of work, improving the living conditions of the poor and respecting human rights in regions outside the large contemporary metropolises) that will keep the digital revolution from becoming a vector of the dualization of economic life. And this leads us to the fourth of the values mentioned here.

  4. Innovation

It was in the digital age that festive initiatives celebrating innovation became popular, like Campus Parties, festivals and entrepreneurs’ meetings designed to formulate business plans supported by networked devices. Innovation is no longer just, as it always was, a structural feature of capitalism itself; it has become a decisive value that acts as a type of cultural cement, an ethos. But as shown in the OECD work mentioned above, it is fundamental to decentralize contemporary innovation for it to flourish. And above all, it is fundamental to expand the number of talented people capable of contributing to this advancement, which presumes not only better education and professional training, but improving conditions so that poorer neighborhoods (today marked by violence, discrimination and a culture in which obtaining a precarious and poorly paid job is the most a young person can aspire to) can be actively incorporated into the pursuit of innovation.

As the data from the already cited report from the Executive Office of the President’s National Science and Technology Council Committee on Technology (2016) suggest, contemporary digital culture is predominantly male, white and from the upper echelons of the social pyramid. If it is like this in the United States, then this systematic squandering of talent applies all the more to developing countries and, particularly, to Latin America, whose outlying neighborhoods are among the world’s most dangerous in terms of gun homicides, with Brazil having the dubious distinction of topping the list and young black men as the primary victims and perpetrators. In such an environment, it is clear that innovation tends to remain concentrated. Although many factors explain the lagging position Latin America occupies in global innovation, crime in outlying neighborhoods is certainly among the most important.

Expanding the social basis for innovation presumes, therefore, respect for human rights in the regions where poverty is concentrated today, above all in developing countries, as will be addressed in the next section, in which the values of participation and diversity are explored. And this expansion can also be strongly supported by the numerous open knowledge initiatives that mark the contemporary digital environment.

Companies and governments are not the only (and at times, not even the main) sources of contemporary technological innovation. This finding comes from one of the foremost authorities on the subject, Eric von Hippel, a professor at MIT, in a paper co-authored with his Harvard colleague Carliss Baldwin (Baldwin and von Hippel, 2010). Despite the importance of investments by companies and governments, these two actors do not dominate the scene alone. Innovation by peers, decentralized, carried forward for reasons that are not necessarily financial, and based on mechanisms of governance unlike those used by companies and governments, is growing in force.

It is a counterintuitive conclusion. At first glance, innovators are producers whose scientific and technological work must be strictly protected by patents; otherwise profits (and therefore the motivation to innovate) would be irreparably compromised. Herein lies, for example, the essence of the Schumpeterian notion of the innovative entrepreneur.

But according to Baldwin and von Hippel, new technologies (which reduce the cost of communication and enable digitalized, modularized design together with less expensive access to networked computers) allow peer innovation to compete advantageously with the individualized figure of the innovative producer in many sectors of the economy. For Baldwin and von Hippel, what is in question is the paradigm on which scientific and technological progress has been based since the middle of the 20th century.

And, for those who still believe that well-established property rights are at the heart of innovation, it is important to read the article by Michele Boldrin and David Levine (2013), in the prestigious Journal of Economic Perspectives, in which they present empirical evidence that “strong systems of patents retard innovation and have many negative collateral effects.”

It is clear that the role of private and public investment in research is and will be decisive. But innovation by peers is here to stay. In fact, these decentralized forms of innovation have always existed, and the users of tools, machines, seeds and tractors have always been able to adapt and improve them. But it is only now, in the digital age, that these innovations have become truly open, which influences the strategy of companies. The decision by Tesla to open its patents on energy storage, for example, reflects the principle that the chances of achieving greater performance in the field increase with the decentralization of research. Although Tesla’s interest is to improve its electric cars, advances in storage will be one of the key elements in expanding the use of solar and wind energy throughout the world.

Open and collaborative innovation projects, according to Baldwin and von Hippel, involve people who share the work of creating a design and reveal the fruits of their individual and collective efforts to anyone who is interested, instead of appropriating the innovation for patented use. Evidently there are rules and legal mechanisms for this sharing. In this sense, a rapprochement between Elinor Ostrom’s work on the management of common-pool natural resources and research on knowledge as the common heritage of the human species would be productive.

It is based on this rapprochement that Brett Frischmann (2014) gathered various case studies on the shared production of knowledge in fields ranging from the genome project to rare diseases, astronomy, aeronautics and journalism. One of the most interesting works in the book organized by Frischmann and his associates studies a platform for “collective intelligence in citizen science,” which involves no fewer than 1.6 million active volunteers in online citizen science projects.

For Don and Alex Tapscott, blockchain will enable this process of decentralized innovation, and collaboration between peers will hasten its progress. Innovation in the blockchain era will not be centered only in companies, and the more decentralized innovation systems can draw on local inventive competencies, the greater the chances of important social achievements. “In the first era of the internet, technical innovation occurred only in the center… Innovation couldn’t occur at the edges (i.e. individuals using the networks) because the rules and protocols of closed systems meant that any new technology designed to interact with the network would need the central power’s permission to operate on it” (Tapscott and Tapscott, 2016, p. 124).

Innovation that is open and unconcentrated, supported not only by large companies but also by the creative power of peripheral communities hitherto marginalized in the digital revolution, changes the relationship between market, state and civil society. It is one way of using innovation to fight worsening inequality rather than letting innovation serve as a vector of inequality’s expansion. But this presumes, as we’ll see in the next section, that the internet fulfill the aim of its creators: to encourage the participation of citizens and to value diversity.

  5. Participation and diversity

One of the most important features of democratic life is the constant exposure of people to different ideas, cultures, opinions and values. As we saw above in the works of Ethan Zuckerman (2013) and Cass Sunstein (2017), social media is doing just the opposite. Although Mark Zuckerberg, in a manifesto published in February 2017, claims he is developing “the social infrastructure to give people the power to build a global community that works for all of us” (underlined in the original), the truth of the matter is that contemporary social media are narrowing, not expanding, the opportunities for citizens to be surprised by something that might conflict with their opinions. People who are different end up living in parallel worlds, and the fact that they do not encounter each other compromises democracy itself. Far from building a (necessarily diversified) global community, digital media are encouraging the formation of closed groups, ever less capable of dialogue with those who hold different opinions.

Sunstein’s main proposal to combat this risk is that social media adopt an “architecture of serendipity,” in other words, one in which public space is not fragmented by a business model that segments, classifies and ultimately places people in contact only with like-minded others. In the digital age, the market needs of the corporate giants that dominate the sector end up threatening the development of civil society. The architecture of a public space that is fundamental to people’s lives is being molded in such a way that they lose precisely what Mark Zuckerberg says he wants to build with Facebook: a global community. Of the statistics that illustrate the civic danger of a business model based on algorithms capable of offering people an image of the world that corresponds to their own, one stands out: in 1960, Sunstein (2017, p. 13) explains, 5% of Republicans and 4% of Democrats said they would be unhappy if their children married a person from the other party. In 2010, these figures had risen to 49% and 33%, respectively. “For a healthy democracy, shared public spaces, online or not, are a lot better than echo chambers.”

In this sense it is important to remember that the concentration of advertising in the digital giants ends up weakening the traditional press. In 2016, the major US newspapers saw circulation fall by 8% from the previous year, the 28th consecutive year of decline. It is true that subscriptions have increased, but not enough to offset the decline in circulation. What has fallen most, however, is advertising revenue: down 10% in 2016. That year, the seven US newspapers listed on the stock market turned over US$ 18 billion, one third of what they earned in 2006, according to the Pew Research Center[xxv].

With this trend, the strength of general-interest intermediaries (newspapers, radio, TV) has waned to the benefit of devices that provide people with information, news, films, events and forms of leisure strictly according to who they already are. It is exactly the opposite of what the Canadian urbanist Jane Jacobs called a city: a public space that places us in contact with friends but also with strangers, opening up the possibility of experiences that are not limited to repeating what we already are.

And it is clear that it is not only in politics that this business model encourages polarization and a radicalism that borders on the irrational. For this reason, a public debate with the directors and technical staff who conceive and steer the digital giants is needed to discover alternative ways of integrating people, governments and companies into social networks. It is a huge challenge, since the digital giants benefit from the network effect. At the same time, because of the threats this business model poses to privacy, and its impacts on inequality, alternatives have emerged (still a small minority) designed to encourage two values inherent to the internet: participation and diversity.

With the question raised in the OECD document cited in the previous section in mind, it is fundamental that internet access in outlying regions go beyond the use of smartphones to reach the platforms of the digital giants. Here net neutrality is decisive: offering the dominant platforms for free while charging for access to everything outside this nucleus deprives low-income citizens of the chance to discover new points of view and to access materials that could enrich their social relationships, their culture and their professional activities. In this sense, Brazil’s Civil Rights Framework for the Internet (Marco Civil da Internet) is held up as an example by Tim Berners-Lee because it ensures net neutrality and expands the chances of participation and access for diverse publics.

To ensure that this participation is not just a formality, the work of a variety of activists is important in providing access to digital resources that bolster entrepreneurship, channels for complaints and civic activities in outlying communities. The maker movement is an expression of this enterprising culture, based both on digital resources and on the expanded participation of the poor, blacks and women.

Olabi, for example, an organization where I serve as an advisor, defines itself as “a place, a set of tools and a system to democratize the production of technology in pursuit of a more socially just world”[xxvi]. The headquarters of the organization is a makerspace, where participants develop projects in electronics, robotics, permaculture, artificial intelligence, digital manufacturing, handicrafts, woodworking and design. In three years of existence, it has served over 20,000 people. One of its most interesting projects is “Pretalab: innovation and technology for black and indigenous women”[xxvii]. One statistic is enough to illustrate the problem Pretalab faces: in 120 years of existence, the Polytechnic School of the University of São Paulo (Brazil’s foremost center for engineering) has graduated a total of 10 black women. Olabi also has a group researching biological tissue, based on the “protocol created by the English researcher Suzanne Lee and made available on the internet.”

Olabi is a member of the global network of Fab Labs created at MIT. This network aims to “provide access to powerful manufacturing tools—including laser cutters, milling machines, and 3-D printers—to an increasingly broad range of users at educational institutions and local community centers around the world. Incubated at the MIT Center for Bits and Atoms (CBA), the Fab Lab Network now consists of 270 independent manufacturing centers in 70 countries around the world” (Stacey, 2014). The Fab Lab network is key to improving the chances that the decentralized nature of the internet will result in new modes of wealth creation, through access to manufacturing tools that are efficient yet within reach of individuals and small groups. Or, as Chris Anderson (2012, p. 13) puts it, “the digital revolution has now reached the workshop.”

The first characteristic of what Chris Anderson does not hesitate to call a new industrial revolution is the possibility of individuals conceiving and efficiently manufacturing goods that until recently could only be made by large factories. Thirty years ago no one could imagine a book being printed outside a professional print shop. Today laser printers have popularized the handling of type, layout and copy editing, tasks previously performed by skilled professionals. The same is beginning to occur in the world of production with devices such as 3-D printers and laser cutters. These machines have already become affordable for individuals, and what they can do competitively will continue to expand. The revolution brought by this fall in price lies in the blurring of the border between inventor and entrepreneur. Conceiving something no longer necessarily requires submitting the idea to a businessman with a factory for the invention to become a reality. What happened in the world of culture, the universe of bits, has now arrived in the world of materials, the universe of atoms.

But this step can only be taken by the masses, as the OECD work already cited recommends, if outlying populations and locations are included as important participants. For this to happen, Fab Labs must reach youth in poor areas, and their programs must be conceived and designed to expand social participation and diversity. This is why Yannick Rumpala (2014) sees Fab Labs not only as places designed to expand opportunities for income, but also as a political achievement: “Fab labs and makerspaces therefore merit to be examined, particularly in how they can redistribute capabilities, challenge the industrial order, and foster the development of this new form of workshop.”

The Institute for Technology and Society, headquartered in Rio de Janeiro, is today one of the most important centers for the democratization of networked devices. One of its lines of research is “rethinking innovation.” Research, courses and practical activities using blockchain technology are fundamental not only for political participation (using the already mentioned mechanism for the collection of digital signatures for introducing bills, “Mudamos”), but also for economic and governmental activities, particularly in the management of cities.

It is also important to mention, although still incipient, movements promoting the conception and design of digital platforms that are not only accessible but also created collaboratively, based on a design that values the causes of those who will use them. This is the objective of the US movement Design Justice[xxviii].

  6. Transparency and responsibility

Unlike any technological change before the digital revolution, the key feature of the current transformations is their power to interfere not only in what we produce and how we produce it, but in the essence of human life, in what makes us human. Hans Jonas (1979), one of the most important thinkers of the 20th century, said that we must listen more carefully to the prophecy of disaster than to the prophecy of salvation. He wrote this in the mid-1970s, witnessing the socio-environmental damage brought about by economic growth and the destruction of ecosystemic services fundamental to life on Earth. His appeal to the “imperative of responsibility” is more current than ever: however much digital network technology contributes to improving the use of the resources on which the supply of goods and services depends, its power to interfere in our bodies, our feelings, our culture and our social organization exceeds that of any technology since the industrial revolution. The crucial question of the meaning and purpose of technology, in other words, what Hans Jonas calls the ethics of technology, is therefore more urgent than at any time in human history.

It is important to underscore, in this sense, that this concern is present even among leading researchers in artificial intelligence. Artificial intelligence is increasingly cited among the existential threats to the human species, comparable to climate change and nuclear weapons. The difference, as Gerd Leonhard underlines, is that for climate change and nuclear weapons there is some governance, however deficient, whereas the evolution of artificial intelligence is not yet the focus of any organized, conscientious attention. This is why Leonhard (2016, position 248) writes: “we can no longer adopt a wait-and-see attitude if we want to remain in control of our destiny and the developments that could shape it.”

And this control is seriously compromised by the opacity of algorithms, as Cathy O’Neil has shown. She tells stories of people fired or turned down for loans by a decision made by an algorithm that can be neither reasoned with nor reversed. Transparency in decisions supported or made by algorithms is, admittedly, technically difficult to achieve. But it is fundamental if we want our social relations to be based on human feelings and ethics. Here it is important to note a point on which Gerd Leonhard insists: no matter how intelligent machines are, no matter how much computing power expands their learning, machines are not and never will be conscious or ethical. Human feelings that define us, such as compassion, may eventually be imitated by machines, but machines (fortunately!) cannot be endowed with compassion or empathy, even if (as in the film Her) they can be trained to imitate and copy these feelings, which will create difficult ethical problems. “Machine intelligence will not include emotional intelligence or ethical concerns, because machines are not beings—they are duplicators and simulators” (Leonhard, position 298).

Leonhard does not propose freezing the race toward artificial intelligence, which, as he explains in his book, has become one of the decisive arenas of global geopolitics, involving the United States and China in particular. But he insists that research on artificial intelligence be accompanied by educational investments and professional training that enable everyone involved to play an active role in understanding the ethical consequences of what they do. It is fundamental that the expansion of algorithms be accompanied by an expansion of androrithms (Leonhard, position 2711), that is, by reflection on what makes us human.

Leonhard develops nine proposals to deal with this problem. Most have to do with the need for self-reflection by society, given the threats represented by technologies that, paradoxically, could help to solve some of our most pressing problems. We must improve our understanding of these phenomena, whose growth is both exponential and cumulative and which reach into every corner of social life. In this sense, it is fundamental to fight what Leonhard calls digital obesity, which transforms what should be a means of building social interaction into a compulsive habit that compromises human relations and spares no child. He also proposes that, in addition to the already established STEM subjects (Science, Technology, Engineering and Mathematics), educational systems attach equal importance to what he calls CORE (Compassion, Originality, Reciprocity and Empathy). Finally, we must not let “Silicon Valley, technologists, the military or investors become mission control for humanity—no matter what country they are in” (Leonhard, 2016, position 2799).

For Ben Shneiderman, one of the most respected researchers in the field, clear human control must be established over technology, especially algorithms. This statement is anything but trivial, given the rhetoric about the fusion of man and machine as a virtuous way of improving intelligence, a key point for transhumanists. Control over algorithms cannot fall to one person. It must be spread over many levels and shared with organizations independent from those that formulate the algorithms. Algorithms must be, in Shneiderman’s view, “comprehensible, predictable and controllable,” and these three attributes have to be understood by everyone who, one way or another, interacts with them, even if they cannot grasp all the technical details on which they operate (Shneiderman, 2017, minute 21:00). Shneiderman notes that Apple’s design guidelines establish that people (not apps) must have control over technology, and for the applications available on the Apple platform the rules are reasonably applied. But this is not always the case: “there is a drift towards some algorithms which take control away” (minute 21:54). He cites Facebook’s News Feed as an example of opacity. Because algorithms will be used in complex systems, such as those involving autonomous vehicles or health care, it is important to expand the discussions and mechanisms that make them “comprehensible, predictable and controllable.”

Another important point on which Shneiderman insists is responsibility. To say that computers are our partners, precisely because they are so intelligent, is to renounce human responsibility: “The human operator has responsibility but the machine does not” (minute 23:17). He shows that the precautions recommended by various competent organizations are inconsistent on this point, citing as an example the Statement on Algorithmic Transparency endorsed by the Royal Statistical Society of Great Britain. He therefore suggests the formation of a National Algorithm Safety Board, which would perform functions for the development of artificial intelligence similar to those the National Transportation Safety Board performs for transportation: an independent, investigative body, non-regulatory in nature. This type of organization could complement the Global Digital Ethics Council proposed by Gerd Leonhard.

Even if the appeal for responsibility, transparency and participation has yet to translate into precise operating mechanisms, it is important to note the urgency of the exercise of reflection and self-reflection proposed by Tristan Harris, based on his experience at Google and on courses he took at the Stanford Persuasive Tech Lab, a facility focused on “captology,” defined as “the study of computers as persuasive technologies. This includes the design, research, ethics and analysis of interactive computing products (computers, mobile phones, websites, wireless technologies, mobile applications, video games, etc.) created for the purpose of changing people’s attitudes or behaviors”[xxix].

The first step in this exercise of self-reflection is that “We need to acknowledge that we are persuadable” (minute 6:59). The second step involves the relationship between those who develop the programs behind artificial intelligence and the general public, and this requires a discussion of the ends of technology: “The only form of ethical persuasion that exists is when the goals of the persuader are aligned with the goals of the persuadee” (minute 7:50). The third recommendation is that “we need a design renaissance,” based on the idea that individuals do not want their desires, opinions and values to be influenced by highly efficient techniques that compromise their self-determination. What’s more, Harris criticizes, from his own experience, the persuasive efforts of the digital giants aimed at tethering people to their devices. His appeal is shared by Jonathan Taplin (2014, p. 18), who calls for a “Digital Renaissance” that begins precisely with criticism of and resistance to digital monopolies.

  7. Sustainable development

The seventh value underpinning the Digital Renaissance called for by Taplin and the Design Renaissance envisioned by Tristan Harris is the most difficult to address. On one hand, technological changes supported by the semiconductor revolution and the expansion of social networks offer the infrastructure for the decarbonization of contemporary economies. They allow countries to formulate legislation that will make internal combustion engines a thing of the past. The circular economy, and the possibility of conceiving products whose components can be converted after use into new sources of wealth rather than waste, will depend on the development of the internet of things and a detailed understanding of the lifecycle of materials. “Intelligent assets” is the name the Ellen MacArthur Foundation (2016) uses for this theme. And the emergence and dissemination of an economy based on the knowledge of nature will depend on digital instruments designed to capture knowledge from natural systems and from the use of the soil.

At the same time, however, the business model of the digital platforms that dominate the contemporary economy is designed to drive precisely what a World Resources Institute (2017) report calls “The Elephant in the Boardroom”: unchecked consumption, as the subtitle reads, is not an option in tomorrow’s markets.

The invitation to this workshop asks whether “social enterprises, civil and solidaristic enterprises, inclusive business, ethical finance, microcredit, fair trade, responsible consumption, B-corps, etc. have real potential for expansion within the capitalist system or will they remain marginal practices? What new instruments and actions are needed to achieve greater integration between conventional capitalist firms and these new alternative experiences?”

These questions have to be reformulated in light of the growing strength of platform companies in all sectors of social life. There is a huge risk that the digital giants will pursue the use of renewable energies in their data centers while continuing as decisive vectors in the expansion of consumption, which goes against the fundamental value of sustainable development. The translation of the values mentioned here into principles, objectives, strategies, tactics and measures for their monitoring will require no less than a drastic cultural change and a transformation of the business model that today dominates the world economy. It is in this context that a culture of consumption must emerge that is designed to enable the human species to flourish, improving social coexistence and respect for the ecosystemic services on which we all depend. And of course no one has a ready-made formula for achieving these objectives.


We are experiencing a double change of epoch, a double mutation. The first, geological in nature, has to do with our arrival in the Anthropocene. Since the Neolithic Revolution, 11,000 years ago, humanity has been a force of transformation, a biological force altering the foundations that support life on the planet. But with the dropping of the bomb on Hiroshima, the large-scale exploitation of fossil fuels and the huge acceleration in global economic growth since the middle of the 20th century, we have become a geological force on a planetary scale, able to alter the climate itself and, with it, the favorable conditions for the development of human societies that have prevailed over the last 11,000 years. Laudato Si’ is recognized as one of the most eloquent and well-founded documents on the theme.

The second change was heralded when the computer and the telephone were fused (Kelly, 2016), opening the way for what Manuel Castells calls the network society. Its basic components are not limited to computers but include, increasingly, a variable set of interconnected objects whose processing power paves the way to something that, until recently, lived only in the imaginations of science fiction writers and inside a few laboratories: artificial intelligence.

The second change has, in theory, the power to expand our chances of development under the conditions created by the first. In part, this power is already evident in the advances made in modern renewable energies, in the emergence of new materials, and in the design of goods and services that require less material, energy and biotic resources. It can also be seen in the numerous initiatives of companies, civil society organizations, governments and multilateral organizations working to realize the emancipatory potential embedded in the open and free nature of the internet.

This emancipatory potential, however, is seriously compromised by the business model that today dominates the internet. The objective of this text was to organize some of its main consequences and threats. At the same time, an effort was made to list seven fundamental values that emerge as a reaction to the practices of the digital giants and that have consequences that go far beyond the purely economic sphere.

The strength of the largest companies of contemporary capitalism does not come from their capacity to sell goods and services. It comes from the power to transform the data that individuals freely provide into the primary means for achieving an objective (artificial intelligence, machine learning) that does nothing to reduce the ambition to sell ever more. As progress is made, so grow concerns about the emergence of a powerful intelligence devoid of conscience, ethics or, ultimately, being.

The interdisciplinary dialogue proposed in the invitation to this seminar must involve the professionals who study the persuasive power of digital devices and who mold their design for use by citizens. In the case of physical products, design is the main driver of how we use the material, energy and biotic resources on which we depend. Studies of the circular economy have shown that, without a change in design, it will be difficult to match the size of the economic system to the limits of ecosystemic services. A number of the business organizations mentioned here draw attention to this problem.

The design of digital products has acquired the power to interfere in our mental functions, our emotions and our social relations, based on information and opaque algorithms that threaten the self-determination of individuals. Just as there is an urgent need for goods and services intentionally designed to contribute to the regeneration of the socio-environmental fabric that economic growth has so far destroyed, there is an equally urgent need to recognize the current threats in the design of our connections and to begin supporting values that must be clearly discussed with society, of which the seven listed here are just a first attempt.

What is increasingly clear is that the business model of the digital giants does not respect a design in which privacy, diversity, transparency, aversion to inequality, democratization of innovation, responsibility and sustainability are constitutive values. The supposedly free use of digital platforms does not mesh with the “logic of reciprocity and gift-giving.” At the same time, because of its open and free nature, the internet has the potential to promote a “culture of gift-giving,” based on social interactions that are transformed neither into sources of huge gains for the owners of digital platforms nor into the basis for knowledge of people’s behavior, ultimately aimed at influencing what they want and what they do.



Abramovay, R. (2014) – “Innovations to Democratize Energy Access Without Boosting Emissions” Ambiente & Sociedade Vol. 17 nº 3, jul/set.

Abramovay, R. (2016 a) “Polarization no longer sets the tone in climate negotiation” in Viola, E. E Neves. L. The World After the Paris Climate Agreement of December 2015. Dossiê CEBRI Special Edition Vol. 1, ano 15.

Anderson, C. (2012) Makers. The New Industrial Revolution. New York. Crown Business.

Arnold, M. (2017) “Six global banks join forces to create digital currency” Financial Times. August 31.

Baldwin C. and von Hippel E. (2009) “Modeling a Paradigm Shift: From Producer Innovation to User and Open Collaborative Innovation”

Benkler, Y. (2006) The Wealth of Networks: How Social Production Transforms Markets and Freedom. New Haven. Yale University Press.

Benkler, Y. (2011) The Penguin and the Leviathan: How Cooperation Triumphs over Self-Interest. New York. Crown Business

Berners-Lee, T. (2014) “Tim Berners-Lee on the Web at 25: the past, present and future”. Wired. 23/08/2014

Bostrom, N. (2014) Superintelligence. Paths, Dangers, Strategies. Oxford University Press

Brigato, G. (2017) “Desigualdade não cai com tecnologia, diz pesquisador”. Valor Econômico. 4/10.

Bruni, L and Zamagni, S. (2007) Civil Economy: Efficiency, Equity, Public Happiness (Frontiers of Business Ethics). Peter Lang.

Brynjolfsson, E. & McAfee, A. (2014) The Second Machine Age. Work, Progress and Prosperity in a Time of Brilliant Technologies. New York. W.W. Norton & Company

Castells, M. (2014) Networks of Outrage and Hope. Social Movements in the Internet Age. Cambridge. Polity Press.

Design Justice Network (2017) Design Justice Issue 3. Design Justice in Action.

Ellen McArthur Foundation (2016) “Intelligent Assets: Unlocking the Circular Economy Potential”

EOP (2016) “Artificial Intelligence, Automation and the Economy”. December.

Evans, P. & Gawer A. (2016) The Rise of the Platform Enterprise. The Center for Global Enterprise The Emerging Platform Economy Series.

Foroohar, R. (2017) “Release Big Tech’s grip on power”. Financial Times. June, 18

Friedman, T. (2017) Thank You for Being Late. An Optimist’s Guide to Thriving in the Age of Accelerations. New York. Farrar, Straus and Giroux.

Frischmann, B., Madison, M. & Strandburg, K. (2014) Governing Knowledge Commons. Oxford. Oxford University Press.

Galbraith, J.K. (2004) The Economics of Innocent Fraud. Truth for Our Time. Boston. Houghton Mifflin.

Gibbs, S. (2017) “Elon Musk: regulate AI to combat ‘existential threat’ before it’s too late”. The Guardian 2017/07/17 Last view 10/09/2017

Gordon, R. (2016) The Rise and Fall of American Growth. The U. S. Standard of Living Since the Civil War. Princeton. Princeton University Press.

Harari, Y. (2017 a) “Yuval Noah Harari challenges the future according to Facebook” Financial Times 2/25/2017

Harari, Y. (2017 b) Homo Deus. A Brief History of Tomorrow. Harper Collins

Hasselbalch, G. & Tranberg, P. (2016) Data Ethics – The New Competitive Advantage. Publishare

Hill, S. (2015) Raw Deal: How the “Uber Economy” and Runaway Capitalism Are Screwing American Workers. New York. St. Martin Press.

IEEE (2016) “Ethically Aligned Design. A Vision for Prioritizing Human Wellbeing with Artificial Intelligence and Autonomous Systems”

Jonas, H. (1985) The Imperative of Responsibility: In Search of an Ethics for the Technological Age. Chicago. University of Chicago Press.

Kelly, K. (2016) The Inevitable. Understanding the 12 Technological Forces that Will Shape our Future. New York. Viking

Leonhard, G. (2016) Technology vs. Humanity. The Coming Clash Between Man and Machine. Fast Future Publishing.

Levin, S. (2017) “New AI can work out whether you’re gay or straight from a photograph” The Guardian 8/09/2017

Lindsey, B. (2017) “The End of the Working Class” The American Interest August 30

Lundkvist, C.; Heck, R.; Torstensson, J.; Mitton Z. & Sena M. (2017) Uport. A Platform for Self Sovereign Identity.

Lynn, B. (2010) Cornered. The New Monopoly Capitalism and the Economics of Destruction. New York. John Wiley and Sons.

Ma, H. et al (2017) “Correction of a pathogenic gene mutation in human embryos”. Nature 548, 413-419. 24/08.

Malik, O. (2015) “Apple Versus Google”. The New Yorker. June 15, 2015.

Malik, O. (2016) “Apple, Google, Amazon, and the Advantages of Bigness”. The New Yorker. August 9.

Mankiw, N.G. (2015) Principles of Economics. 7th Edition. Mason, USA. South-Western Cengage Learning.

McAfee, A. & Brynjolfsson, E. (2017) Machine Platform Crowd. Harnessing our Digital Future. New York. W.W. Norton & Company

Milanovic B. (2016) Global Inequality. A New Approach for the Age of Globalization. Cambridge. The Belknap Press of Harvard University Press.

Nobre C. et al. (2016) “Land-use and climate change risks in the Amazon and the need of a novel sustainable development paradigm”. PNAS. Vol 113. Nº 39.

O’Neil, C. (2016) Weapons of Math Destruction. How Big Data Increases Inequality and Threatens Democracy. New York. Crown.

OSTP (2016) “Preparing for the Future of Artificial Intelligence”. October

Peixoto, T. & Sifry, M., eds. (2017) Civic Tech in the Global South: Assessing Technology for the Public Good. Washington, DC: World Bank. License: Creative Commons Attribution CC BY 3.0 IGO

Pentland, A. (2014) Social Physics. How Good Ideas Spread – The Lessons from a New Science. New York. Penguin Press.

Rifkin, J. (2014) The Zero Marginal Cost Society. The Internet of Things, the Collaborative Commons and the Eclipse of Capitalism. New York. Palgrave Macmillan.

Sandel, M. (2017) “The State of the Resistance”. Democracy. A Journal of Ideas. Summer, Nº 45.

Scheidel, W. (2017) The Great Leveler. Violence and the History of Inequality from the Stone Age to the Twenty-First Century. Princeton. Princeton University Press.

Summers L. (2017) “America needs its unions more than ever” Financial Times. September 3

Sunstein, C. (2017) #Republic. Divided Democracy in the Age of Social Media. Princeton. Princeton University Press.

Swan, M. (2015) Blockchain. Blueprint for a New Economy. Sebastopol. O’Reilly.

Taplin J. (2014) Move Fast and Break Things. How Facebook, Google and Amazon Cornered Culture and Undermined Democracy. New York. Little Brown and Company.

Taplin, J. (2017) “Why is Google spending record sums on lobbying Washington?” The Guardian 30/07/2017.

Tapscott, D. & Tapscott, A. (2017) The Blockchain Revolution. How the Technology Behind Bitcoin is Changing Money, Business and the World. New York. Portfolio Penguin.

The Economist (2017) “The world’s most valuable resource is no longer oil, but data”.

WEF (2017) The Global Risks Report. 12th Edition.

Wigglesworth, R. (2017) “Will the death of US retail be the next big short?” Financial Times. July 16

World Bank (2016) World Development Report 2016: Digital Dividends. Washington, DC: World Bank. doi:10.1596/978-1-4648-0671-1. License: Creative Commons Attribution CC BY 3.0 IGO

WRI (2017) “The Elephant in the Boardroom: Why Unchecked Consumption Is Not an Option in Tomorrow’s Markets”.

Zuckerman, E. (2013) Rewire. Digital Cosmopolitans in the Age of Connection. New York. W.W. Norton and Company



[3] Brigatto, G. (2016) “Inteligência artificial é o nome do jogo” [Artificial intelligence is the name of the game]. Valor Econômico, 21/11/2016 p. B7.



[5] Scheidel, 2017, position 8610.




[iv] General Data Protection Regulation (GDPR) (Regulation (EU) 2016/679):




[viii] Le Monde 9/09/2017

[ix] Interview with Indian publication DATAQUEST


[xi] McKinsey, apud T. Friedman (2017).



[xiv] Friedman, T. (2017) Thank You for Being Late. An Optimist’s Guide to Thriving in the Age of Accelerations. New York. Farrar, Straus and Giroux. Position

[xv] Thomas Friedman contrasts the technological optimism of the book by Brynjolfsson and McAfee (2014) with the skepticism of Robert Gordon with regard to the impacts of the digital revolution on productivity. According to Friedman, we are approaching a time when the internet, artificial intelligence, cloud computing and machine learning will benefit sectors as varied as health, education, urban planning, transportation and trade. But it is important to observe that, persuasive as Friedman’s arguments are, Erik Brynjolfsson and Andrew McAfee themselves are concerned with the concentrating impacts of the digital revolution. So much so that MIT launched the “inclusive innovation challenge.”





[xx] 2016/10/31.









