Eric Schmidt on How to Build a Better Web – The New York Times


Authoritarian governments tell their citizens that censorship is necessary for stability. It’s our responsibility to demonstrate that stability and free expression go hand in hand. We should make it ever easier to see the news from another country’s point of view, and understand the global consciousness free from filter or bias. We should build tools to help de-escalate tensions on social media — sort of like spell-checkers, but for hate and harassment. We should target social accounts for terrorist groups like the Islamic State, and remove videos before they spread, or help those countering terrorist messages to find their voice. Without this type of leadership from government, from citizens, from tech companies, the Internet could become a vehicle for further disaggregation of poorly built societies, and the empowerment of the wrong people, and the wrong voices.

The good news is, it’s all within reach. Intuition, compassion, creativity — these are the tools that we will use to combat violence and terror online, to drown out the hate with a broadly shared humanity that only the Web makes possible. It’s up to us to make sure that when the young girl reading this in Indonesia on her tablet moves on from this page, the Web that awaits her is a safe and vibrant place, free from coercion and conformity.

(Via.)

The whole essay is a great read. Too many essays about “what tech companies can and should do to combat hate and terror” focus on restrictions, and limits, and increased surveillance that comes with decreased personal liberties.

Schmidt nods at that stuff, but he chooses to focus on the Internet’s capacity to diminish the shadows by providing more light. I was so moved by the last two paragraphs that I read them verbatim in this week’s Material Podcast (which should post on Thursday), and I wanted to put them here, too.

When Schmidt spoke of “spell-checkers, but for hate and harassment” he was clearly speaking speculatively. But that’s the sort of thing that Google is well-positioned to do. They have the brainpower, they have the specific deep-learning technology, and they have the position within society as the unofficial cataloguer and indexer of the world’s thought and conversation.

It’d be great to have a politically and ideologically neutral markup system for Web content. What if Chrome could infer, from context, that a certain thing in a blog post is probably being presented as a statement of fact…and is thus subject to further exploration?

“Meanwhile, studies at the University of Michigan are raising troubling concerns about a statistical link between futons and the 33% increase in the number of ‘Alvin and the Chipmunks’ movies in 2016…”

There’s a blue squiggly line underneath that sentence. Clicking it takes you to a targeted Google search about the actual study and the discussions around it. You notice that many of the more quotable quotes were copied and pasted directly from a press release sent out by a think tank called “Americans For Libertationous Change Towards Greatness.” A squiggly line under that leads to your discovery that this think tank is 100% funded by the National Pull-Out Sofa And Adjustable Bed Co-Marketing Board.

I’m making up a silly example to avoid hot-button topics. Google’s Spock Engines are great at neutral facts. The squiggly lines in an article about guns would help people learn the terrible human cost of gun violence in America…but they would also help people learn that there are many people who own a handgun because of a threat to their safety that’s anything but hypothetical or paranoid. 
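That squiggly line is really just a data detector for claims, like the phone-number detectors already built into your phone. Here’s a purely hypothetical sketch of the idea; the regex heuristics and the search-link behavior are my own invention, a toy stand-in for the deep-learning model this would actually require:

```python
import re
from urllib.parse import quote_plus

def detect_claims(text):
    """Toy heuristic: flag sentences that cite studies or statistics.

    A real system would need an actual language model; these patterns
    are just illustrative.
    """
    claim_pattern = re.compile(
        r"(studies at|according to|researchers (at|from)|\d+% (increase|decrease))",
        re.IGNORECASE,
    )
    claims = []
    # Naive sentence split; good enough for a sketch.
    for sentence in re.split(r"(?<=[.?!])\s+", text):
        if claim_pattern.search(sentence):
            # The "squiggly line" would link to a plain Google search,
            # not a curated page about the issue.
            url = "https://www.google.com/search?q=" + quote_plus(sentence[:100])
            claims.append((sentence, url))
    return claims

claims = detect_claims(
    "Meanwhile, studies at the University of Michigan are raising "
    "troubling concerns. The futon market is booming."
)
```

The point of linking to raw search results rather than a curated page is that the detector stays neutral: it decides only *that* something looks like a checkable claim, never what the reader should conclude about it.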

Eric Schmidt’s last paragraph is lovely. The same infrastructure that put a hateful piece of speech in front of a little girl can also make sure that the very next thing she sees is something constructive and truthful, instead of dark and built on fear and lies. Seek light on the Internet, and you’ll find it.

Google and other tech companies can make seeking out and living inside a larger world more attractive and easy than hiding inside a small, scared, and poorly-informed community. Fear only wins because truth is usually harder.

7 thoughts on “Eric Schmidt on How to Build a Better Web – The New York Times”

  1. Great Utopian ideal, ignoring the fact that Google is a massive corporation with corporate interests that trades in and makes money on information. I can’t for a second trust them to be neutral on any issue, even a seemingly silly and irrelevant one. Google benefits from shaping opinion through the information it presents, and it is, unfortunately, in a monopolistic position when it comes to tabulating information on the Internet. Abuse of such a position is trivial and likely.
    The only way to keep a shade of trust would be a corporate promise from Google to stay neutral in an American First Amendment sense: all information on the Internet is protected, equal free speech, except where it directly calls for harm to other fundamental rights (life, liberty,…). Editorializing is not acceptable if the editor has skin in the game. With that freedom come human consequences: there is love, there is hate,… I would even claim that hiding hate fuels it.

  2. Well, the idea would be to hand all of the editorial decisions to neutral algorithms. Like when there’s a phone number inside an email, and your phone knows it’s a phone number. Tap it, and it takes an action. In the fantasy scenario I’m thinking about, the squiggly line wouldn’t go to a curated page about an issue, just to Google search results.

  3. I humbly and respectfully disagree with Andy.

    The “gatekeeper” is likely to be more pernicious than the original infraction. The “algorithm” will limit speech by limiting access to ideas. The capacity to see hate speech or a bad argument for what it is should not be put at risk of atrophy through disuse. How will individuals develop the capacity to confront “bad” ideas if they are never presented with them? Are we really so fragile as human beings? Expecting an algorithm to replace the human capacity to seek the “true” and the “beautiful” is ill-considered at best and, potentially, downright dangerous if we surrender our ability to discern right from wrong.

    Further, can an algorithm be truly “neutral”? It is written by a human with prejudices they likely don’t see in themselves. So, “neutral” in the eyes of which censor with what economic, political, or social bias?

    The notion that there is a silver bullet in technology that is somehow more sophisticated in discerning truth than a human thinking through a problem for themselves seems false and, I would argue, diminishes our humanity. Justice cannot be pursued by those who have been deliberately sheltered from authoritarian or unpleasant ideas. Rather, justice can only be served by those who have come to understand the inherent cost and sacrifice entailed in upholding it.

  4. Of course, I’m talking about an idealized technology whose results would be trusted, eventually, because of its actual track record for impartiality. A service like Google would be better for this sort of job than most actual news outlets, as they’d make the same money (within a few fractions of a penny) whether people actually clicked through to the articles or not.

  5. As this seems a useful and polite debate, let me try to express my argument in a different form. We are shaped by our experiences. I had the good fortune to witness the fall of the Iron Curtain from Munich as a member of Radio Free Europe / Radio Liberty. My experience in speaking to Central and East Europeans and researching their societies is perhaps why I have come to see the wisdom in Article 19 of the Universal Declaration of Human Rights. It states:

    “Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers.”

    But why do I hold this view? I believe the free exchange of ideas is a foundation stone in maintaining and perpetuating free societies. The daily news reporting provided by RFE/RL was the ultimate subversive act from the point of view of the Leninists who held power over those behind the Iron Curtain. The totalitarian cannot tolerate factual reportage, debate, or the competition of ideas. It raises questions and contradictions to which orthodoxy has no reply.

    The act of questioning undermines the totalitarian mindset. Limiting ideas, debate, and human interaction, even with the best of intentions, limits freedom. It undermines the capacity to question and feeds orthodoxy.

    There is wisdom in the saying that “the road to hell is paved with good intentions.” The good intention of a Google “gatekeeper” limiting expression would only undermine the strength of our free societies. The success of international broadcasting to the Iron Curtain is proof that it is better to directly address corrupt ideas with arguments that further freedom than to sweep uncomfortable notions under the rug. This is why dissidents like Vaclav Havel pronounced Radio Free Europe a partner in his struggle against the Czechoslovak regime.

    To put the argument in concrete technological terms: if Google’s algorithm presents the counter-argument to a text along with the item sought, then it would contribute to the free expression of ideas. If the algorithm eliminates a text, on whatever basis, then it is merely censorship by another name. The presentation of counter-arguments would have at least two advantages. It allows an individual who might be enticed by dubious ideas to confront their opposite. And those of us who promote enlightenment ideas as orthodoxy would have to refine our thinking in light of arguments we might find reprehensible. This of course relies on the conviction that humans have the capacity to reason and that they cherish freedom above subjugation. This is a faith my experience reveals as properly placed.

    There is nothing new in Mr. Schmidt’s desire to protect us. But, I believe he is untutored in the enlightenment ideas of Hobbes, Locke, Voltaire, Montesquieu, and Rousseau. If he is going to float an idea of limiting discourse on the web, then he has an obligation to understand why Burke supported the American revolution and the ideas enshrined in the Declaration of Independence and Constitution. I believe America’s founders came down on the right side of the argument with the First Amendment.

  6. I don’t know if I’m making my point clear. There should be no restrictions on free speech and people should spend more time reading opinions they don’t agree with. Technology has the ability to promote both of those goals.

    I do think it would be interesting — in a “space colonies on Mars” way — to use technology to create a living cross-reference with the same neutrality of the Oxford English Dictionary. Something that would help people dig deeper into an issue, not steer them towards some segment of society’s idea of the “right” one.

  7. Thanks for the clarification. I think I misunderstood your argument. The technology you suggest would certainly be a benefit. It would be wonderful to have something like an expert system that led us through the “literature” so to speak.

Comments are closed.