Protecting Ourselves… From Ourselves.

Thought Experiments on how the Crypto Commons will evolve into the First ‘Friendly AIs’.

Patrick Kershaw
Token Economy

--

I’ve been thinking about our transition towards new forms of governance and what this means.

There has been no great human transformation without massive amounts of upheaval and usually equal amounts of warfare and death.

Let's be honest, we just aren't very good at being peaceful, especially when it comes to disagreements over books written >2k years ago, or over anything to do with resources or land. 😐🤷‍.

Unfortunately, we are stupid monkeys.

TL;DR.

The creation of the first Decentralised Autonomous Co-operatives (DACs) will evolve into our first interactions with ‘Friendly AIs’ (as defined by machine ethics theorists).

These will seem very simple to begin with. The tipping point into the unknown will be when we hand over control of digital resources.

They will be built to act in our ‘best interest’; however, they will also lead to ‘Instrumental Convergence’, where seemingly well-intentioned AIs ‘act in surprisingly harmful ways’.

Appealing to our more positive selves, I also recognise (or, more hopefully, believe) that most of us don’t want death and, more generally, enjoy life.

As I have written about before, I think this will lead us to experiment with emergent commons, and by default this will bring about ‘Friendly AI’.

So, what is Friendly AI?

“Yudkowsky (2008) goes into more detail about how to design a ‘Friendly AI’. He asserts that friendliness (a desire not to harm humans) should be designed in from the start, but that the designers should recognize… that their own designs may be flawed”

Source: Wikipedia

Friendly AI (fAI) is basically defined as the simplest AIs we will build for the benefit of humans.

It is my opinion that the first fAI will begin in the emergent commons, as it seems clear to me that the first commons will likely be architected to provide ‘commonwealth’ for their participants.

This can be done in two ways.

Firstly, payment of a dividend/UBI or similar; secondly, ‘buying’ valuable assets/resources and holding/locking them, inherently increasing the value of the underlying common.

Most obviously, in regard to digital assets: automated and exponential buying and locking of Bitcoin.

Long Live BTC.
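
To make this less abstract, here is a minimal sketch of what such a treasury rule could look like. Everything in it — the `CommonsTreasury` name, the 50/50 split, the prices — is an assumption for illustration, not a real protocol: part of each inflow pays a dividend to members, the rest buys the scarce asset and locks it with no sell path.

```python
# Hypothetical sketch of the "commonwealth" mechanics described above:
# a commons treasury splits every inflow between a member dividend pool and
# an ever-growing, never-sold reserve of a scarce asset (e.g. BTC).
# All names and ratios are illustrative assumptions, not a real protocol.

from dataclasses import dataclass, field


@dataclass
class CommonsTreasury:
    dividend_share: float = 0.5          # fraction of each inflow paid out as dividend/UBI
    members: list[str] = field(default_factory=list)
    dividend_pool: float = 0.0           # funds earmarked for equal per-member payouts
    locked_btc: float = 0.0              # asset bought and locked, never sold

    def receive(self, amount: float, btc_price: float) -> None:
        """Split an inflow: part to dividends, the rest buys BTC and locks it."""
        to_dividends = amount * self.dividend_share
        to_lock = amount - to_dividends
        self.dividend_pool += to_dividends
        self.locked_btc += to_lock / btc_price   # one-way: there is no sell path

    def pay_dividends(self) -> dict[str, float]:
        """Distribute the dividend pool equally among current members."""
        if not self.members or self.dividend_pool == 0:
            return {}
        payout = self.dividend_pool / len(self.members)
        self.dividend_pool = 0.0
        return {m: payout for m in self.members}


treasury = CommonsTreasury(members=["alice", "bob", "carol"])
treasury.receive(amount=900.0, btc_price=30_000.0)
print(treasury.pay_dividends())   # {'alice': 150.0, 'bob': 150.0, 'carol': 150.0}
print(treasury.locked_btc)        # 0.015 BTC locked, permanently off the market
```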

This will also extend, in time, to protect members and/or users unique to communities inside each common and their needs. This could take the form of grants, or social responsibility funds, depending on the nature of the community. This won't be optional either; it will be baked into an incentive chain. Anyone interacting inside the community will effectively be subsidising this with their work.
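
A similarly hypothetical sketch of what ‘baked into an incentive chain’ could look like (the class name and the 2% rate are assumptions, not a proposal): a slice of every interaction is routed to the community fund before the counterparty is paid, so simply participating subsidises the grants.

```python
# Hypothetical illustration of the non-optional subsidy: every settlement inside
# the commons routes a fixed slice to a community fund before anything else happens.

class IncentiveChain:
    """Every settlement automatically routes a slice to the community fund."""

    def __init__(self, fund_rate: float = 0.02):   # 2% is an illustrative assumption
        self.fund_rate = fund_rate
        self.community_fund = 0.0

    def settle(self, amount: float) -> float:
        cut = amount * self.fund_rate
        self.community_fund += cut      # grants / social responsibility pool
        return amount - cut             # what the counterparty actually receives


chain = IncentiveChain()
print(chain.settle(100.0))              # 98.0
print(chain.community_fund)             # 2.0 -- funded by every interaction, not optional
```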

Whilst we maintain control of the machines, we need to ensure that ‘we’ as humans are looked after.

Ralph Merkle said it best in this Epicenter episode. (I highly recommend the entire episode; however, the section below starts around 45 minutes in.)

“…Let’s face it, you and I are not going to be competitive in the future. You and I are biologically based, we have brains that operate on millisecond synaptic delays…using componentry which is large and unreliable…. compared with the types of systems that we expect that we are going to be able to build…and so at some point we will have systems that are a lot brighter, a lot faster, a lot smarter, a lot more competitive than we are…

And there are a lot of people that are perfectly happy to set up an economic system where those people who don’t carry their weight, don’t get to be part of the system. ‘They’re losers!’.

Well, I have news for everyone. We are all going to be losers. All of us.

So unless we set up a system which looks after slow-moving, not very bright people, which is all of us, we are going to create a system that will eventually simply dispose of us.

…the DAO would be of such a nature that it would remain intact, it would survive, and its core function of responding to the wellbeing of the people who are composing it, would continue to be served…. we get to set up the initial conditions, because right now we are running the show, so I would suggest we arrange the initial conditions in such a way that we have long happy lives. That's my suggestion.

…if you have a DAO whose goals are limited to simple growth, then we probably want to give an edge to our DAO (one that looks after humans) in some fashion… like it (needs to) get there first, it (needs to) own most of the resources”

Source: Ralph Merkle — Epicenter Ep 252

Take a deep breath. That’s pretty heavy. We are all losers in the long term.

What this actually means is that the Commons, at scale, will have no end. No off switch. The Commons will constantly build value for the participants. It will be inherently baked into the architecture. That is the stepping-off point into the true DAOs. However, we will have zero control.

At a high level, this is a classic throwback to Bostrom’s ‘paperclip maximiser’ thought experiment and the need to program machines to value human life.

Hence, by creating these Commons to build ‘commonwealth’ and protect those in need, we will have arrived at ‘Friendly AI’ in the wild.

So, why are resources important?

A gold mine in DRC (cnn.com)

Simply: control of resources = control of power.

When dictators take control of countries, they first take control of physical, usually primary, resources.

This is because whoever controls the flow of resources controls payments to themselves and their constituents/citizens.

See any history book for studies of this.

You get the point.

Control resources and their allocation. Control humans.

Following this train of thought, for AIs to have any lasting impact on us, we will need to ‘hand over’ allocation of digital and (eventually) physical resources.

There is a counter-argument to this, one that I am a proponent of: that we won’t actually ‘hand over’ digital (or any) resources.

The commons/DAOs will just buy them. And they will buy them all if possible. The same also stands for physical resources. They are the new benevolent dictators.

There isn’t a stronger bull argument for Bitcoin than to consider the construction of an entity that becomes a ‘Bitcoin eating machine.’

We are flying headlong into a science experiment with the amplification of our ‘nature’.

We are going to see a Darwinian explosion (and selection) of entities created to harness the best and the worst of humans. It is terrifying and exhilarating all in the same thought.

And it will bring about the emergence of ‘instrumental convergence’, whereby seemingly well-intentioned AIs ‘act in surprisingly harmful ways’.

What happens when we create a commons that incentivises humans to kill? What about to love? What about to give?

What about Governance?

New Zealand’s Parliament Grounds 🤙

There is a certain arrogant rationality in engineers creating ‘governance protocols’.

Put another way, we’ve spent ~70k years trying to figure out governance ourselves and have largely failed in the process.

Why is now any different?

I contend that we will have given away governance by the time we get to true DAOs. True AI doesn’t give a rat’s what we think.

The limited governance that we have will exist only in our interim developments, and, using history as a guidepost, it is highly rational to believe that it will fail anyway.

The successful and lasting commons won't even have a governance mechanism. They will actually be multi-tiered incentive systems.

We are going to figure out ways (I hope) to incentivise people to be good humans.

Now, of course, this spirals immediately into ‘what is good?’… however, that’s a whole other topic!

Nonetheless, these experiments are imminent. Enjoy the ride and get involved.
