Dragonfly: an AI-driven censorship engine, first on a continental scale, and then worldwide. Google intends to be the new face of tyranny worldwide. Not good.
Mikael Thalen. Top Google Scientist Quits Over Plan for Censored Chinese Search Engine; ‘I Am Forced To Resign In Order To Avoid Contributing To, Or Profiting From, The Erosion Of Protection For Dissidents’. Infowars.com, September 13, 2018.
The employee, 32-year-old Jack Poulson, who worked as a senior research scientist for Google’s research and machine intelligence division, first raised concerns at the company in early August, after documents published by The Intercept revealed the project’s existence. Known internally as Dragonfly, the censored search engine would allow the Chinese government to keep its citizens from accessing any data it deems sensitive. Poulson, who is believed to be one of at least five employees to quit over Dragonfly, told The Intercept’s Ryan Gallagher that he felt an “ethical responsibility to resign” over the “forfeiture of our public human rights commitments.”
Aside from the censored search engine, Poulson also expressed concern over customer data being hosted in China, a country notorious for targeting dissidents. Poulson laid out his issues with Dragonfly and Google’s direction in a resignation letter to his superiors. “Due to my conviction that dissent is fundamental to functioning democracies, I am forced to resign in order to avoid contributing to, or profiting from, the erosion of protection for dissidents,” Poulson wrote. “I view our intent to capitulate to censorship and surveillance demands in exchange for access to the market as a forfeiture of our values and governmental negotiating position across the globe.” The decision to pursue such projects, Poulson further argued, could lead to other authoritarian regimes making similar demands. “There is an all-too-real possibility that other nations will attempt to leverage our actions in China in order to demand our compliance with their security demands,” Poulson wrote.
Despite growing outcry over Dragonfly, Google has thus far refused to publicly comment on the project, only stating that it does not discuss “speculation about future plans.” Just last month a group of leading human rights organizations called on Google to immediately cease its involvement with the project. “Like many of Google’s own employees, we are extremely concerned by reports that Google is developing a new censored search engine app for the Chinese market,” a letter to Google CEO Sundar Pichai said. “The Chinese government extensively violates the rights to freedom of expression and privacy; by accommodating the Chinese authorities’ repression of dissent, Google would be actively participating in those violations for millions of internet users in China.”
More than 1,400 Google employees also signed a similar letter last month demanding the company let its workers know what it was developing. “Currently we do not have the information required to make ethically-informed decisions about our work, our projects, and our employment. That the decision to build Dragonfly was made in secret, and progressed with the [artificial intelligence] Principles in place, makes clear that the Principles alone are not enough,” the letter said. “We urgently need more transparency, a seat at the table, and a commitment to clear and open processes: Google employees need to know what we’re building.”
Google had previously built a search engine for China in 2006, but ended the program four years later after saying the Communist government had attempted to curtail free speech and had even tried to hack its computer systems. Pichai, Google’s CEO, also refused earlier this month to appear before the Senate Intelligence Committee to answer questions regarding Dragonfly.
See also: resignations of Google employees in protest of the Pentagon contract: https://gizmodo.com/google-employees-resign-in-protest-against-pentagon-con-1825729300
Brandon Downey. An Old Approach to China. [Dragonfly] N.D. [September 2018].
I woke up this week and saw a story in The Intercept: https://theintercept.com/2018/08/01/google-china-search-engine-censorship/
If you haven’t read the story (and you should), here’s a little bit of background. If you live in China, the government has a firewall around the internet connections in and out of the country. There are now all sorts of laws about your data in China, but the important bit for our story here is this: If you are in China and try to search for a term like ‘democracy’ on the web, there is a series of technological controls that will stop you. Look for the wrong term, and the DNS for the hosting site might be poisoned. Your connection might be ‘reset’, and access to the site blocked for a half hour or more. Or maybe the government just decides that the entire site is a menace, and blocks every IP address for it they can find. 700-odd million users, all of them in a default state of being unable to access anything the Chinese government doesn’t want them to see — blocked by a series of technologies collectively referred to as ‘The Great Firewall’.
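To make the escalating controls above concrete, here is a toy sketch of that layered blocking logic. This is purely my own illustration (the term list, site list, and function are invented for this example); it is not a description of how the real Great Firewall is implemented.

```python
# Toy model of the layered controls described above: keyword-triggered
# connection resets, and whole-site IP blocking. Illustrative only.

BLOCKED_TERMS = {"democracy", "tiananmen"}   # hypothetical keyword list
BLOCKED_SITES = {"blocked.example"}          # hypothetical site-level blocklist

def firewall_action(host: str, query: str) -> str:
    """Return the simulated control applied to a request."""
    if host in BLOCKED_SITES:
        # The whole site has been deemed a menace: every known IP is blocked.
        return "ip-block"
    if any(term in query.lower() for term in BLOCKED_TERMS):
        # Searching for the wrong term: the connection is 'reset', and the
        # host may be penalized for a while.
        return "tcp-reset"
    return "allow"
```

The point of the sketch is the default-deny posture: the user never sees an error page explaining what happened, only a poisoned lookup, a dropped connection, or silence.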
This article is about Google’s upcoming answer to this problem, which is not how they’re going to start hosting a million Tor nodes all over the world, how Chrome is going to become some super-smart VPN mesh, or even how Project Zero is going to start publishing 0-days for the software and hardware that runs the Great Firewall (last time I checked, largely a Cisco product). Those solutions might sound utopian to you, which is ok, because they apparently sound that way to Google’s CEO too.
No, Google’s solution (codenamed “Dragonfly”) is a lot more basic: They’re going to do what China wants. They’re going to launch a search engine that will live in China, and will censor itself to be in conformity with the Chinese government’s wishes. They’re also going to launch a mobile app like Google News, except that they will dutifully remove any stories which reference things that upset the government of China.
The argument being presented is a pretty basic rationalization: “Look, China is already censoring the internet. So why don’t we at least give people what information we can, because some is better than none?”.
Whatever you think about this as an argument, there is one key fact you should know, and it’s this: Google has already done this once, and it ended in disaster.

So let’s talk about history; some personal and some from the internet. I had the great fortune of working for Google for the better part of a decade. Here’s something you should know: for the vast majority of my time there, it really was a place that was focused on making the world better. And reviewing the things I worked on there, I would say that while there were some value-neutral things, I feel very happy with how I got to work on things that helped people (or at least, helped people who were helping people).
Except for this one little thing. Around 2006, Google made a decision: It was going to China.
Let’s walk it back just a step: there are some things you should know about running a foreign internet business in China. Even in those days, the big catch was that you needed to do business through a “Joint Venture” — a locally owned Chinese company, which needed to own 49% of your venture there. It helped, of course, if they did real work so it looked good to regulators. I’m pretty sure a lot of these joint ventures are a little shady — I suspect most of them are companies run by people who run local “joint ventures as a service” (if you catch my drift) — but this was the way you entered the China market.
Here was the pitch for Google’s entry into China in 2006: (1) We were going to try to censor ourselves so the Chinese government wouldn’t block us. (2) Since nobody at Google wanted to be a censor for an authoritarian regime, our joint venture would do the work of this ‘pre-censoring’.
I had grave doubts about this then, but then we heard the pitch from our CEO, Eric Schmidt. It went something like this: “This is indeed a crappy situation, but we believe that providing people some information was better than none. Our censoring will be minimal; the least censoring we could do and stay off the radar of the coarse and automated censorship of the so-called ‘Great Firewall of China’.”
The last step in this argument was that if we just gave people some information, they would inevitably want more, because the rising curve of technological improvement would inexorably draw people into wanting to be more like a western, liberal style democracy.
I cringe as I write this, because I believed this then. I’ve always been a bit of a skeptic about the whole techno-utopian upload-ourselves-into-utopia stuff, but the idea that technology was a modus ponens for providing us with an appetite and a capability for more freedom was baked into my understanding of how the internet would inevitably work.
If you want to be real about it, this was part of the foundational myth of the internet here: “Here is a place where you can be yourself without the world knowing who you are. Information here will flow freely (or at least “too cheap to meter”). No government can catch up with this new frontier, which is advancing faster than you can imagine; and even if it did, by then citizens will be so drunk on this freedom it will be political suicide to stand in the way of Progress.”
You could find echoes of this in Google’s published value statements with lines like this one: “Democracy on the web works” [https://www.google.com/about/philosophy.html]
I wish I could say I believed this today, but I just can’t, not anymore.
Technology can provide us with great, powerful tools, but unfortunately it is a Failed Dream to suppose that the force it provides is some sort of historical inevitability that will solve the hard problems of human governance and societal order. Technology isn’t a magic genie that sweeps the chores of civics under a rug; in most cases it isn’t even magic. It is, however, a tool — a force amplifier for doing a thing. And like any tool, it can be used or misused.
Not to belabor the point, but as I write this, an eleven-year-old just followed a script to hack a US voting machine. Election security is a legitimately hard problem, but technological progress can’t even push back the political torpor that gave us Diebold voting machines; how do you think it fares against active political corruption? Is Twitter going to give us the next Arab Spring, or is it going to be in the selling-megaphones-to-Nazis business? And the pitch here is that a censored search engine is supposed to improve the lot of a people behind an Orwellian ‘Great Firewall’? Like I said, I don’t believe this now, but there was a time when I did. Or at least, I thought loftily: “hey, this is an experiment. Let’s try it and see how it works!”
So, when I got asked to help my co-workers build a secure environment for Google’s joint venture in China, I did. And when we had to design a network architecture to give them limited access to a special purpose tool designed for censoring the index, I did. I thought it was gross that we were doing so, but it seemed like the best path forward. “Cost of doing business”, I mused.
It was a fun project technically: I got to work with talented people, learn some new and exciting things about proxy servers, and travel to exotic places. We got bad press about it; a lot of human rights organizations flamed us for it, and it even showed up at shareholder meetings. There were a lot of good points made: Google wasn’t keeping user data in China, so unlike (let’s say) Yahoo, we didn’t have to fork over the identities or inboxes of our users to the Chinese government. All we had to do was to make sure nobody got to search for subversive terms. Little things like “Tiananmen Square”, or “Democracy” — those were un-places and un-terms for people using google.cn. I can’t give you a complete list of subversive terms; not because we didn’t keep one, but because the complete list to be censored was a state secret.
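The ‘pre-censoring’ described above amounts, mechanically, to something very simple: results matching a state-supplied term list are dropped before they are ever served. A minimal sketch, using invented stand-in terms and a hypothetical result format (the real list of censored terms was, as noted, a state secret and is not reproduced here):

```python
# Illustrative sketch of 'pre-censoring' a search index: any result whose
# text matches a state-supplied term list is silently dropped before serving.
# Hypothetical code, not Google's actual tooling.

CENSORED_TERMS = {"tiananmen square", "democracy"}  # stand-in examples only

def pre_censor(results: list) -> list:
    """Drop any result whose title or snippet mentions a censored term."""
    def is_clean(result: dict) -> bool:
        text = (result["title"] + " " + result["snippet"]).lower()
        return not any(term in text for term in CENSORED_TERMS)
    return [r for r in results if is_clean(r)]
```

The chilling detail is in what the function *doesn’t* do: no error, no placeholder, no count of what was removed. The censored results simply become un-results.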
I want to say I’m sorry for helping to do this. I don’t know how much this contributed to strengthening political support for the censorship regime in the PRC, but it was wrong. It did nothing but benefit me and my career, and so it fits the classic definition of morally heedless behavior: I got things and in return it probably made some other people’s life worse.
Before I go any further, I want to jump back in history just a little bit. In 1993, I left to go to college. When I was packing, I brought with me a metric ton of books — amongst them, the very battered and worn copy of the World Book Encyclopedia I had gotten from my grandparents. Why do this when there was a giant library close to my dorm, full of vastly more accurate reference tomes? I loved looking stuff up so much, I was worried about what I would do when the library closed. There was no Google then, no Wikipedia, not even a Lycos. There was, of course, gopher and archie and ftp.wustl.edu; and it was a beginning that I had just discovered. When the web came, and Google, and all the other tools you take for granted, it seemed like a miracle to me; an unalloyed one, and if you’d asked me at any point in my personal history, I would have said everyone should have it, and that it would in fact be wrong to take it away.
Why didn’t I think about the ramifications of what I was doing, helping with a project like our joint venture? I wasn’t some sort of Captain Planet villain, chortling over dumping my trash on the street while rolling around in money (as it turns out, the payoff was just getting to be comfortably middle class). No, what makes scenarios like this so scarily plausible is two things: the power of our brains to rationalize, and the power of success to warp our perception of the world.
You can write a book about the stories we tell ourselves to rationalize our behavior (and there are a lot of them), but I wanted to highlight why I went along with it: (1) I really believed that technological progress was an ethically positive force. (2) I really believed that the virtuous cycle of our business model was something that made the consequences worth it. Which is to say, I believed in a kind of ‘carbon credits for ethics’ deal, where we might do something clearly not good in return for a greater good. (3) I really believed that the only way to change the status quo was to collaborate with a bunch of authoritarians — even with the best of intentions.
I want to emphasize that the people and management were actually very supportive during this process: I was even told that if I was uncomfortable with this (and I was), I didn’t have to work on the project. I also never thought “well, it’s a paycheck” (a much simpler rationalization to spot). I do think that if you are reading this, it’s worth it to think about these things in your own life. I don’t think my attitude is unique amongst technologists — human problems are hard, so when presented with a kind of ‘royal road’ to solving them just through improving technology, it’s a pretty tempting proposition.
If you asked me what I believe now, it’s closer to this: We have a responsibility to the world our technology enables. If we build a tool and give it to people who are hurting other people with it, it is our job to try to stop it, or at least, not help it. Technology can of course be a force for good, but it’s not a magic bullet — it’s more like a laser and it’s up to us what we focus it on. What we can’t do is just collaborate, and assume it will have a happy ending.
You may have noticed that my arguments have not really touched on how doing the wrong thing might have had bad consequences for Google (or for myself). I want to tell you that thinking about this has spawned a massive research project in ethics, but in reality, I learned something by watching The Good Place. Sometimes, our bad actions don’t have bad consequences. We get away with it; converting our externality into somebody else’s problem: a big ball of plastic in the Pacific, a few earthquakes one town over from our fracking, or a convenient financial collapse from which we sell a few derivatives and make a killing. Other times, our actions do have consequences, and have a moral desert — we get what’s coming to us. This isn’t a happy thing, but it is a useful one, because we can get a chance to learn from our mistakes; to pull back from the brink. That’s what happened for Google. The situation where we had a business partner who censored the web for us in China persisted until an Incident in 2010. I’ll let the Google blog describe it for you (https://googleblog.blogspot.com/2010/01/new-approach-to-china.html): “In mid-December, we detected a highly sophisticated and targeted attack on our corporate infrastructure originating from China that resulted in the theft of intellectual property from Google. However, it soon became clear that what at first appeared to be solely a security incident–albeit a significant one–was something quite different. First, this attack was not just on Google. As part of our investigation we have discovered that at least twenty other large companies from a wide range of businesses–including the Internet, finance, technology, media and chemical sectors–have been similarly targeted. We are currently in the process of notifying those companies, and we are also working with the relevant U.S. authorities.
Second, we have evidence to suggest that a primary goal of the attackers was accessing the Gmail accounts of Chinese human rights activists. Based on our investigation to date we believe their attack did not achieve that objective. Only two Gmail accounts appear to have been accessed, and that activity was limited to account information (such as the date the account was created) and subject line, rather than the content of emails themselves. Third, as part of this investigation but independent of the attack on Google, we have discovered that the accounts of dozens of U.S.-, China- and Europe-based Gmail users who are advocates of human rights in China appear to have been routinely accessed by third parties. These accounts have not been accessed through any security breach at Google, but most likely via phishing scams or malware placed on the users’ computers.”
Short summary: China was not being made more liberal by the presence of our minimally-censored search engine. Not only had their society not become more liberal as a result of us being there, there was an escalation of their behavior, resulting in a program of hacking western companies for things like information about human rights workers and other dissidents. Censoring ourselves there wasn’t making things better. Working under a protectionist trade regime and a local company wasn’t making things better. Even soft values like “being globally connected” or “highlighting democracy on the web” weren’t helping — in fact, technology was enabling the suppression of dissent.
Google, to its credit, did something bold: They pulled out of operating web sites in China.
It was a principled decision, and one that no doubt cost Google money and missed opportunities. The government of China already had its fingers on the scale of business there — local competitors weren’t the ones spending minutes, hours, or even days off the web. Local competitors weren’t getting hacked — because in many cases they existed in active collusion as the pet projects of government officials. But by making this decision, Google almost certainly ceded what chance it had in the market to companies like Baidu, and this was a decision made out of principle. But how could they do anything else? If your mission was making the world’s information accessible, and you had committed to protecting the data of your users, what was the justification for staying?
Lesson learned, I thought, and a case study in how a corporation could return to an ethical stance after making a bad decision. At least, until I read this article (which seems to be true; the Intercept may not be the hottest at protecting its sources, but it has been pretty accurate in the past, and the story makes sense.)
Google has no principled excuse left for doing this. There’s not an iota of evidence left for the neo-liberal line about the rising tide of technology lifting all boats. That experiment has been done! If you look at the intersection of the worlds of technology and society for the last two years, how could you even begin to think this would get better by doing this? Google knows this, because unlike the last go-around, this isn’t some internal project with a lot of deliberation amongst the staff. No, it was meant to be compartmentalized and secret, because the people in charge know it’s against the values of so many of their employees.
So why do it?
The answer is that Google is acting like a traditional company: one that squeezes every dime out of the marketplace, heedless of intangibles like principle and ethical cost, even at the risk of the safety of its users.
Let me quote you something from Google’s IPO filing: “Google is not a conventional company. Eric, Sergey and I intend to operate Google differently, applying the values it has developed as a private company to its future as a public company. Our mission and business description are available in the rest of the prospectus; we encourage you to carefully read this information. We will optimize for the long term rather than trying to produce smooth earnings for each quarter. We will support selected high-risk, high-reward projects and manage our portfolio of projects. We will run the company collaboratively with Eric, our CEO, as a team of three. We are conscious of our duty as fiduciaries for our shareholders, and we will fulfill those responsibilities. We will continue to attract creative, committed new employees, and we will welcome support from new shareholders. We will live up to our “don’t be evil” principle by keeping user trust and not accepting payment for search results.”
As a shareholder, I feel like Google has been dishonest with me if they really believe that going back to China under these conditions matches with this letter of intent about how the company will be run.
I have a suspiciously high number of friends now who are self-described socialists. If they’re reading this, they’re probably shaking their heads: Pretty naive to believe that a corporation could ever be anything but a vehicle for capital to perpetuate itself. Couldn’t be any other way: historical inevitability! With respect, I differ with this belief, just as I now differ with the belief that the naive increase in technological progress will lead us to some sort of golden path for our problems (I am posting this to Twitter; #irony).
It may, however, be time to reconsider the historical attitude to these problems in the technology industry. One of the things my friends get right about capital is that it is to a great extent a product of the work, creativity, and talent of the employees. Not that there are not brilliant and in some cases essential Founders in Silicon Valley, but rather that creating something as colossal as Google inevitably involves a partnership between the investors, the Founders, and the employees. And in this particular case, the Founders have apparently left the company in the hands of others. You could plausibly argue that Google is a unique creation of its Founders or even its most iconic CEO, Eric Schmidt. You would have a harder time arguing today that the leadership there still occupies this special or privileged position. It was Sergey Brin who boldly drove the withdrawal from China; yet now Google is using the name of Sergey’s yacht as the codename for its secret project to build AI-based censorship into Google search. Google has changed.
This means the people working at Google today have a choice: They don’t have to make the same mistake twice.
Collective action and outrage at Google have already seen positive results: in the last year, the company has reaffirmed the value of diversity (though I know a long road ahead exists for all tech companies), and internal dissent has quashed technological innovations being used to aid institutions that exist primarily for taking human life.
If technology is a tool, then it means the people making that tool have a responsibility to curb their tool’s misuse by playing a role in the decisions on how it gets used. And if the people who are the leaders of the company don’t believe this, they should hear it in plainer and clearer terms: namely, you do not become one of the largest companies in the history of capitalism without the assistance of the workers making those tools. This collective consciousness of how the tools we make are being used is something we all need to take responsibility for. The things we make, the systems we build — our creations have externalities, and their impact on human lives is real — and if technology is going to be a force for improving the lot of humanity, it will need people with a conscience behind it.
That won’t be easy, and there is unfortunately no easy path to getting there. But if we say and do nothing, the only historical inevitability we can count on will be a repeat of the same mistakes.