
Friday 30 September 2016

IBM, Google, Facebook, Microsoft, Amazon form enormous AI partnership

On Wednesday, the world learned of a new industry association called the Partnership on Artificial Intelligence, and it includes some of the biggest tech companies in the world. IBM, Google, Facebook, Microsoft, and Amazon have all signed on as marquee members, though the group hopes to expand even further over time. The goal is to create a body that can provide a platform for discussions among stakeholders and work out best practices for the artificial intelligence industry. Not directly mentioned, but easily seen on the horizon, is its place as the primary force lobbying for smarter legislation on AI and related future-tech issues.
Best practices can be boring or important, depending on the context, and in this case they are very, very important. Best practices could provide a framework for accurate safety testing, which will be important as researchers ask people to put more and more of their lives in the hands of AI and AI-driven robots. This sort of effort might also someday work toward a list of inherently dangerous and illegitimate actions or AI “thought” processes. One of its core goals is to produce thought leadership on the ethics of AI development.
So, this could end up being the bureaucracy that produces our earliest laws of robotics, if not the one that enforces them. The word “law” is usually used metaphorically in robotics, but with access to the lobbying power of companies like Google and Microsoft, we should expect the Partnership on AI to wade into discussions of real laws soon enough. For instance, the specifics of regulations governing self-driving car technology could still determine which would-be software standard hits the market first. With the founding of this group, Google has put itself in a position to perhaps direct that regulation for its own benefit.
But, boy, is that ever not how they want you to see it. The group is putting in a really ostentatious level of effort to assure the world it’s not just a bunch of technology super-corps determining the future of mankind, like some sort of cyber-Bilderberg Group. The group’s website makes it clear that it will have “equal representation for corporate and non-corporate members on the board,” and that it “will share leadership with independent third-parties, including academics, user group advocates, and industry domain experts.”
Well, it’s one thing to say that, and quite another to live it. It remains to be seen whether the group will actually comport itself as it will need to if it wants real support from the best minds in open-source development. OpenAI, the Elon Musk-associated non-profit research company, responded to the announcement with a rather passive-aggressive word of encouragement.
The effort to include non-profits and other non-corporate bodies makes perfect sense. There aren’t many areas in software engineering where you can claim to be the definitive authority if you don’t have the public on board. Microsoft, in particular, is painfully aware of how hard it is to push a proprietary standard without the support of the open-source community. Not only will the group’s own research be stronger and more diverse for incorporating the “crowd,” but any recommendations it makes will carry more weight with government and far more weight with the public.
That’s why it’s so notable that some major players are absent from this early roll call — most notably Apple and Intel. Apple has long been known to be secretive about its AI research, even to the point of hurting its own competitiveness, while Intel has a history of treating AI as an unwelcome distraction. Neither approach is going to win the day, though there is an argument to be made that by remaining outside the group, Apple can still selfishly consume any insights the group releases to the public.
Leaving such questions of business ethics aside, robot ethics remains a pressing problem. Self-driving cars illustrate exactly why, and the classic thought experiment involves a crowded freeway tunnel with no time to brake. Seeing a crash ahead, your car must decide whether to swerve left and crash you into a pillar, or swerve right and save itself while forcing the car beside you off the road. What is moral in this situation? Would your answer change if the other car were carrying a family of five?
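To make the dilemma concrete, here is a minimal, purely hypothetical sketch in Python of what encoding such a decision might look like. Every name, option, and weight in it is invented for illustration; no real autonomous-driving system is claimed to work this way. The point is that any cost-minimizing rule quietly hard-codes a moral stance in its weights.

```python
# A deliberately naive sketch of the tunnel dilemma as cost minimization.
# All names and weights are hypothetical, for illustration only.

from dataclasses import dataclass

@dataclass
class Option:
    name: str
    occupants_at_risk: int   # people in this car endangered by the maneuver
    others_at_risk: int      # people outside this car endangered by it

def expected_harm(option: Option, weight_own: float = 1.0,
                  weight_others: float = 1.0) -> float:
    """Score an option. The weights ARE the ethics: weight_own > weight_others
    protects the car's owner; the reverse sacrifices them. Neither is neutral."""
    return (weight_own * option.occupants_at_risk
            + weight_others * option.others_at_risk)

options = [
    Option("swerve left into the pillar", occupants_at_risk=1, others_at_risk=0),
    Option("swerve right, force the neighbor off the road",
           occupants_at_risk=0, others_at_risk=5),
]

# Pick the maneuver with the lowest expected harm under the default weights.
best = min(options, key=expected_harm)
print(f"Chosen maneuver: {best.name}")
```

With equal weights, the car sacrifices its own occupant once the neighboring car holds a family of five; tilt the weights toward the owner and the answer flips. The arithmetic is trivial; choosing the weights is exactly the kind of policy question a group like this partnership would have to confront.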
Right now these questions are purely academic. The formation of groups like this shows they might not remain so for long.
This post was first published at: http://www.extremetech.com/extreme/236459-ibm-google-facebook-microsoft-amazon-form-enormous-ai-partnership