Inside Google's Mysterious Ethics Board and DeepMind

Feb 4, 2014, 4:40am CST | News

By Patrick Lin and Evan Selinger

The technology world was abuzz last week when Google announced it spent nearly half a billion dollars to acquire DeepMind, a UK-based artificial intelligence (AI) lab. With few details available, commentators speculated on the underlying motivation.

Is the deal linked to Google’s buying spree of seven robotics companies in December alone, including Boston Dynamics, “a company holding contracts with the US military”? Is Google building an unstoppable robot army powered by AI? Does Google want to create something like Skynet? Or is this just the busybody gossip that naturally fills an information vacuum? The deal could simply be about improving search engine functionality.

All this uncertainty is driving an unnerving question: What exactly is DeepMind so worried about that it insisted on creating an ethics board? Is it a basic preventative measure, or is it a Hail-Mary pass to save “humanity from extinction”? Whatever the answer, we don’t want to feed the rumor mill here. But as professional ethicists, we can throw some light on the mysterious nature of ethics boards and what good they can do.

It’s fair to assume that the smart folks at DeepMind have thought deeply about AI and its implications. AI is a very powerful technology that is largely invisible to the average person. Right now, AI controls airplanes, stock markets, information searches, surveillance programs, and more. These are important applications that can’t help but have a tremendous impact on society and ethics, increasingly so as futurists predict AI will become ever more pervasive in our lives.

AI developers are thus under pressure to get it right. Just as we’d want to make sure someone knows how to be a responsible gun owner before selling them one, DeepMind seems to have the same commonsense concern for responsibility as it sells potent AI technology and expertise. But because DeepMind is looking for ethical guidance from a review board, there are key cautionary issues to keep in mind as we follow its development.

1. Ethics Isn’t Just About Legal Risk

The first issue to be concerned with is the limits of ethics framed as legal advice.

We don’t know who will be invited to be on the ethics board, but we do know that “chief ethics officer” has been a popular role in business for more than a decade. That position has primarily been filled by lawyers focused on compliance issues or following extant law. Google exemplifies this trend with its Ethics & Compliance team that works “with outside ethics counsel to ensure compliance with all relevant political laws and the associated filings and reports.”

This specific focus can lead to wonderful outcomes, such as decreasing consumer risk and improving public safety. But let’s not kid ourselves: the focus is dictated by a self-interested goal of minimizing corporate liability. Harms that aren’t currently prohibited therefore receive little consideration. This is a notoriously grey moral area for emerging technologies, since they are usually unanticipated and unaddressed by laws or regulations. Just as poking unnecessarily into a hornet’s nest is dangerously foolish, companies fear that probing into issues beyond what is legally required may compromise plausible deniability and open up new possibilities for litigation.

As it turns out, there isn’t much law that directly governs AI research—though the usual business laws about privacy, product liability, and so on still apply. So, DeepMind’s demand for an ethics board may be a signal they’re interested in more than legal risk-avoidance.

If this is the case, we hope the key players appreciate the full scope of ethics. Ethics isn’t just about dictating rules for what you should and should not do. Especially in the domain of technology ethics, the answers to pressing questions tend to be unclear: the law is often undefined; the applications of new technologies are hard to foresee; and social and political values conflict, both internally and with each other, in new ways.

A technology ethics board, therefore, can be an invaluable canary in the coalmine—scouting for explosive issues in advance of emerging technology and before the law eventually turns its attention to these new problems and the company itself. An ethics board might suggest that research and applications should be taken in a direction that avoids such problems entirely. Or, it could recommend an open discussion to clarify and defuse toxic issues before a public backlash.

2. Internal vs. External Advisors: Pros And Cons

The second issue to be concerned with is the limits of setting up an internal ethics board.

On the one hand, an internal ethics board has special standing. It potentially can influence corporate leadership to a greater degree than an external board or advisor can. It may have access to privileged information on the inside, and it can provide on-demand guidance as needed. So, even if Google also consults with outside ethicists, there’s real value in creating in-house capabilities.

In-house ethics committees have been a mainstay in medicine for the last 30 years, ever since a US Presidential commission recommended them in 1983. Those committees include lawyers, but also doctors, nurses, bioethicists, theologians, and philosophers—a much more capable approach than mere risk-avoidance for tackling controversial procedures, such as ending life support and amputating healthy limbs.

Internal ethics boards—not just lawyers focused on legal compliance—are less common in other industries, though they seem to be trending up, given the rise of technology ethics in the last decade or so. Besides the Google-DeepMind deal, the automotive giant BMW recently told us that it (wisely) had an internal ethics team to help guide development of automated or self-driving cars.

On the other hand, external boards have unique benefits. They can be much more independent than internal advisors—unafraid to offend management and less inclined to pull their punches. Simply put, outsiders typically have greater freedom to call it like they see it without worrying about losing their jobs or being co-opted as hired guns. Having distance from the center of things, outsiders are also less likely to have drunk the metaphorical Kool-Aid.

This separation can result in more objective counsel and a greater capacity to look past individual items and see things holistically. For these and other reasons, ethicists like us are increasingly called upon to advise industry, government, and nongovernmental organizations, such as the US Department of Defense.

Ethicists can also provide similar guidance without working as contractors or employees, drawing from public-interest funding. For example, one organization we’re both involved with is the Society for Philosophy of Technology, which has numerous professors who specialize in the ethics of emerging technology. Dutch philosopher Peter Paul Verbeek, the society’s president, was just awarded a 1.5 million euro grant from the Netherlands Organization for Scientific Research to study how “Google Glass influences public space and the way in which we interact with one another” and other issues.

3. Lip-Service About Ethics

The third issue to be concerned with is ethical smokescreens.

News reports stated DeepMind had “pushed” for and “insisted” on an ethics board, implying that Google was reluctant about the idea. This possibility raises questions about how long the arrangement will stick and how seriously Google will take it. Did Google agree to an ethics board to appease DeepMind, knowing that it would make for nice window-dressing? Or will the board have real opportunities to provide input?

This is a familiar worry: that organizations engage with ethics only as part of a public-relations checklist to show that they care. We’ve heard this in connection with the defense community’s interest in weapons ethics, for instance, on drones and military human enhancements. But our experience is that these organizations really do care about ethics and account for it as much as they can in their decisions. The design of Stuxnet, as Naval Postgraduate School professor George Lucas and others have noted, seemed to pay close attention to academic articles about the ethics of cyberweapons. Even if motivated by self-interest, inviting ethics in is still a positive step forward for society at large.

While we wish to avoid rumors, they can nonetheless be revealing. In the absence of official statements, speculation inescapably fills the void. One sign that Google may be taking ethics seriously comes from its possible withdrawal from the DARPA Robotics Challenge—a contest it is currently winning—which bestows not just prize money but also great international acclaim. Whether Google believes the military market is insufficiently profitable, or whether it doesn’t want to participate in the military-industrial complex, many anti-war campaigners are relieved.

Back to DeepMind… If the AI ethics board exists only for show, Google will be missing an incredibly valuable opportunity to embody its often-repeated philosophy of “Don’t Be Evil.” Without an ethics board or other such experts to help define “evil” and identify evil activities, it will be difficult—as critics point out—to truly live up to that world-famous motto.

It’s not just Google’s soul at stake here; it’s also about the future of our increasingly wired world. Whether its ethics board is tasked mainly with privacy issues or with existential risks, even natural skeptics like us are encouraged by the news. We hope it inspires other technology leaders to be aware of the power they wield and their responsibility to us all.

***
Patrick Lin is an Associate Philosophy Professor and the Director of the Ethics + Emerging Sciences Group at California Polytechnic State University, San Luis Obispo; a Visiting Associate Professor at Stanford’s School of Engineering; and an Affiliate Scholar at Stanford Law School.

Evan Selinger is an Associate Professor of Philosophy at Rochester Institute of Technology. He’s also a Fellow at The Institute for Ethics and Emerging Technology, the Head of Research Communications, Community & Ethics at the RIT Center for Media, Arts, Games, Interaction & Creativity (MAGIC), and serves on the Advisory Board of the Future of Privacy Forum.

Source: Forbes

