‘Inside OpenAI, Elon Musk’s Wild Plan to Set Artificial Intelligence Free’ has a section titled ‘The Limits of Openness’. A few excerpts:
But for all of OpenAI’s idealism, the researchers may find themselves facing some of the same compromises they had to make at their old jobs. Openness has its limits. And the long-term vision for AI isn’t the only interest in play. OpenAI is not a charity. Musk’s companies could benefit greatly from the startup’s work, and so could many of the companies backed by Altman’s Y Combinator. “There are certainly some competing objectives,” LeCun says. “It’s a non-profit, but then there is a very close link with Y Combinator. And people are paid as if they are working in the industry.”
I wonder whether “not a charity” means that OpenAI is a trade association, despite Musk’s mention of 501(c)(3) in an interview quoted above?
According to Brockman, the lab doesn’t pay the same astronomical salaries that AI researchers are now getting at places like Google and Facebook. But he says the lab does want to “pay them well,” and it’s offering to compensate researchers with stock options, first in Y Combinator and perhaps later in SpaceX (which, unlike Tesla, is still a private company).
A curious practice for a non-profit. To the extent stock options work as incentives, receiving such options would align OpenAI employees’ interests with those of the companies whose options they receive.
Regarding “astronomical salaries”: earlier the article links to http://www.bloomberg.com/news/articles/2014-01-27/the-race-to-buy-the-human-brains-behind-deep-learning-machines which quotes a Microsoft person as saying “Last year, the cost of a top, world-class deep learning expert was about the same as a top NFL quarterback prospect. The cost of that talent is pretty remarkable.” I searched for what that might mean and found http://www.spotrac.com/nfl/draft/2013/quarterback/ – I’m not sure I understand, but the contracts seem to run 4 years and list a total value, so it might be reasonable to divide total value by 4 to get yearly compensation: that ranges from $2.2m for the top quarterback pick down to $0.5m for the lowest (in other recent years, the top quarterback pick seems to have gotten around $5m/year). Is this a curiosity, or a harbinger both of knowledge (software, AI) as the commanding heights and of yet another data point on winner-take-most in knowledge production (which needs to be mitigated by making the knowledge not subject to property regimes)?
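The back-of-the-envelope division above can be sketched as follows; the contract totals are my rough readings of the linked spotrac page (4-year deals), not exact figures:

```python
# Rough yearly-compensation estimate from NFL rookie QB contract data.
# Totals (in $m over 4 years) are approximate readings of the spotrac
# page linked above, used here only to illustrate the arithmetic.
contracts_total_m = {
    "top 2013 QB pick": 8.8,
    "lowest 2013 QB pick": 2.0,
}

CONTRACT_YEARS = 4

for pick, total_m in contracts_total_m.items():
    yearly_m = total_m / CONTRACT_YEARS
    print(f"{pick}: ~${yearly_m:.1f}m/year")
```

Dividing the assumed totals by 4 gives roughly $2.2m/year for the top pick and $0.5m/year for the lowest, matching the range quoted above.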
Nonetheless, Brockman insists that OpenAI won’t give special treatment to its sister companies. OpenAI is a research outfit, he says, not a consulting firm. But when pressed, he acknowledges that OpenAI’s idealistic vision has its limits. The company may not open source everything it produces, though it will aim to share most of its research eventually, either through research papers or Internet services. “Doing all your research in the open is not necessarily the best way to go. You want to nurture an idea, see where it goes, and then publish it,” Brockman says. “We will produce a lot of open source code. But we will also have a lot of stuff that we are not quite ready to release.”
If the research papers and internet services include complete source code under open licenses, great. Though even then, treating open source as a verb (we’ll-open-source-it-when-it’s-ready) does not inspire confidence. This is just another indication that “open” is not very foundational to OpenAI.
Both Sutskever and Brockman also add that OpenAI could go so far as to patent some of its work. “We won’t patent anything in the near term,” Brockman says. “But we’re open to changing tactics in the long term, if we find it’s the best thing for the world.” For instance, he says, OpenAI could engage in pre-emptive patenting, a tactic that seeks to prevent others from securing patents.
But to some, patents suggest a profit motive—or at least a weaker commitment to open source than OpenAI’s founders have espoused. “That’s what the patent system is about,” says Oren Etzioni, head of the Allen Institute for Artificial Intelligence. “This makes me wonder where they’re really going.”
It’s possible to come up with a patenting strategy that would promote the commons – it would involve aggressively asserting patents against proprietary entities – but nobody has implemented such a strategy, and surely OpenAI won’t. Publishing early, often, and preferably continuously, and filing defensive publications, is probably the best that could be hoped for, but as noted above, OpenAI isn’t doing all its work in the open. Potential patenting activity raises additional suspicion, given OpenAI’s questionable commitment to openness generally.
There’s additional background story on OpenAI in the linked article, and a recent personal blog post by their CTO at https://blog.gregbrockman.com/my-path-to-openai
There’s a link in the article to Strategic Implications of Openness in AI Development, a working paper by Nick Bostrom; brief comments on earlier Bostrom work.