3 challenges of "The Software Revolution", IP, and commons


#1

Sam Altman describes 3 challenges from what he calls The Software Revolution (2015-02-16):

  1. “huge-scale destruction of jobs”
  2. “drastic wealth inequality”
  3. “concentration of huge power” in particular from synthetic biology and AI

Altman doesn’t propose anything regarding the first other than to “stop pretending” it won’t be a problem. The freedom-infringing regimes of copyright and patent will make the problem worse: a head tax makes it hard to stay alive, and to remain or become competitive, on no or very low wages.

On wealth inequality, Altman writes “We can—and we will—redistribute wealth”; but we don’t have a good record of doing so efficiently, fairly, or without crisis. Why not avoid a significant amount of wealth concentration in the first place? Favor commons-based production, which at once increases equality of access and wealth, grows the constituency for favoring commons-based production, and shrinks the constituency for wealth-concentrating intellectual property. (Previously.)

Regarding concentrated destructive power, Altman writes:

I think the best strategy is to try to legislate sensible safeguards but work very hard to make sure the edge we get from technology on the good side is stronger than the edge that bad actors get. If we can synthesize new diseases, maybe we can synthesize vaccines. If we can make a bad AI, maybe we can make a good AI that stops the bad one.

The build-up to a later Altman post on AI development regulation notes that Nick Bostrom’s Superintelligence is the best thing Altman has seen on the topic. Following is an excerpt from a 2014-12 interview with Bostrom:

There have been various tech races in the 20th century: the race to develop nuclear bombs, thermonuclear bombs, intercontinental ballistic missiles–some other things like that. And one can see what the typical gap between the leader and the closest follower was. And, unsurprisingly, it looks like it’s typically a few months to a few years. So, the conclusion one can draw from that is that if we have a fast takeoff scenario in which you go from below human to radically super-human levels of intelligence in a very short period of time, like days or weeks, then it’s likely that there will be only one project that has radical superintelligence at first. Because it’s just unlikely that there will be two running so closely neck and neck that they will undergo such a transition in parallel. Whereas, if the transition from human-level machine intelligence to superintelligence will take decades, then we are more likely to end up with a multipolar outcome. And, yeah; then there is the question of what we can do to try to coordinate our actions to avoid–so one danger here is that if there is a technology race to develop the first system, and if you have a winner-take-all scenario, then each competitor will scale back on its investment in safety in order to win the race. And you’d have a kind of race to the bottom, in terms of safety precautions–if each investment in safety comes at the expense of making faster progress on actually making the system intelligent. And so you’d want if possible to avoid that kind of tech race situation.

This suggests commons-favoring regulation (from funders, institutions, or third-party regulators, conceivably via international treaty) mandating transparency, perhaps along the lines desired generally for reproducibility of research and analogous to the petite regulatory side of copyleft.

It also reminds me of a concept for Open Arms (2001), that is, open design-ahead of defenses against nanotech weapons. Unfortunately I don’t know of any online description of the proposal beyond that mention. Added: a slightly longer mention, also from 2001.

More Bostrom:

Yeah. I mean, I think, that it’s a common technical problem that anybody would face trying to develop it, which is more important than exactly who develops it, in terms of whether the values actually contain anything–whether the outcome contains anything that’s humanly valuable. But it is true that in addition to that, if you could solve that technical problem, then there is still the question of which values to serve with this AI. And so, I think that it is important to try to get into the field from the very beginning this thing that I call the ‘common goods’ principle, which is that superintelligence should be developed, if at all, for the benefit of all of humanity, and in the service of widely shared ethical ideals. Everybody would share the risks if somebody develops superintelligence, and everybody also, in my view, should stand to get a share of the benefits if things go well.

This suggests re-orienting knowledge regulation to make freedom, equality, and security its top goals, each of which intellectual property harms. AI development isn’t going to adhere to a ‘common goods’ principle while the broader context of the knowledge economy and knowledge regulation esteems and enforces a regime of private censorship and wealth concentration.

But the most important variable is peace. War (including cold) accelerates the weaponization of advanced technology, under ‘ideal’ conditions: massive resources, massive secrecy, and safety nowhere near a top priority. As I wrote recently:

One thing we can do to increase the chance of a peaceful future is to take the dominant resource, knowledge, off the table as something to be owned and controlled. For years I’ve predicted trade and possible shooting wars over intellectual property, largely on the intuition that states will fight over the most important scarce resource, and knowledge is ever more important, and ever more treated as scarce by intellectual property regimes. As I was writing this I noticed that I had just missed a live stream of a Cato event on Sovereign Patent Funds — A New Issue at the Nexus of International Trade and Intellectual Property. Update your priors accordingly.

Of course better knowledge regulation obtained and sustained through commons-based production isn’t a panacea. Peace, war, and existential threats are hugely complicated with many unknowns. All the more reason to press forward with the greatest urgency on the relatively mundane but obvious and important issue of intellectual freedom.


#2

How Different AGI Software?:

However, the key foom claim is that there is some way for small teams that share little to produce innovations with far more generality and lumpiness than these previous distributions suggest, perhaps due to being based more on math and basic theory. This would increase the chances that a small team could create a program that grabs a big fraction of world income, and keeps that advantage for an important length of time.

Presumably the basis for this claim is that some people think they see a different distribution among some subset of AI software, perhaps including machine learning software. I don’t see it yet, but the obvious way for them to convince skeptics like me is to create and analyze a formal dataset of software projects and innovations. Show us a significantly-deviating subset of AI programs with more economic scope, generality, and lumpiness in gains. Statistics from such an analysis could let us numerically estimate the chances of a single small team encompassing a big fraction of AGI software power and value.

How to estimate the extent to which policy and the structure of software production modify these chances?
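
As a rough illustration of the kind of numerical estimate the quoted passage asks for, here is a minimal sketch, assuming an entirely hypothetical dataset of per-project gains (the formal dataset Hanson calls for does not exist here): fit a simple heavy-tailed distribution to the gains, then use Monte Carlo simulation to ask how often a single project would capture a large share of the total. The data, the lognormal choice, and the 50% threshold are all assumptions for illustration, not anything from the quoted post.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-project "gains" (value added, in arbitrary units), standing
# in for the formal dataset of software innovations Hanson asks for.
observed_gains = np.array([1.2, 0.4, 3.1, 0.9, 2.2, 0.5, 7.8, 1.1, 0.3, 4.6])

# Fit a lognormal by matching the mean and standard deviation of log-gains.
# (One simple heavy-tailed choice; a Pareto fit would be the natural
# alternative if the data looked lumpier.)
log_gains = np.log(observed_gains)
mu, sigma = log_gains.mean(), log_gains.std(ddof=1)

def prob_single_project_dominates(n_projects, share=0.5, n_trials=100_000):
    """Monte Carlo estimate of P(largest project's gain > `share` of total)
    when `n_projects` projects draw gains from the fitted lognormal."""
    draws = rng.lognormal(mu, sigma, size=(n_trials, n_projects))
    top_share = draws.max(axis=1) / draws.sum(axis=1)
    return (top_share > share).mean()

for n in (5, 20, 100):
    p = prob_single_project_dominates(n)
    print(f"{n:>3} competing projects: P(one project > 50% of gains) ~ {p:.3f}")
```

Rerunning the same sketch on a subset restricted to AI or machine learning projects, or with parameters shifted to represent a policy change (for example, mandated transparency narrowing the spread of gains), is one crude way to approach the closing question about how policy and the structure of software production modify these chances.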