While I was researching an article on DAO mission-setting, one section of MolochDAO's whitepaper postscript caught my attention.

We offer an alternative vision of automation, encompassing general AI/AGI to more local DeFi mechanics. Smart contracts > automated global financial system > an artificial general intelligence that turns the world into a paper clip machine = a very real existential risk that should be taken seriously.

(AGI stands for "Artificial General Intelligence," and each ">" reads like an arrow, as in "leads to.")

I paused for a second. Re-read it. Wondered why a giant paper clip machine is a threat to humanity.

At the time, I had no idea. I skimmed over it, assuming the giant paper clip machine meant a highly automated world that doesn't really benefit humanity. That's a funny way of saying that, I thought. But then I moved on.

A week or so later, I was bouncing around the Wait But Why blog and figured out what that paper clip machine is.

And, my god, am I never going to un-see this.

For the sake of my own sanity, and for the sake of the future of humanity, I desperately need to talk about the giant paper clip machine. You may even recognize inklings of this machine coming out through the LaMDA interview from Google. And man....I know why MolochDAO wrote about this in its whitepaper.

Long story short, this paper clip machine could be humanity's ticket to...extinction. (Or good things! Maybe.)  And it could happen in our lifetime. And DAOs might be part of it all.

But first, let's talk about what that giant paper clip machine actually is: superintelligence.

Officemate giant paper clips. Waaiiittt....these things are what we need to worry about?!?

Before we continue, if you enjoy this newsletter and haven't subscribed yet, I would really appreciate it! Thank you :)

Superintelligence: a level of intelligence so far beyond ours that we can't even fathom it

ASI, or Artificial Superintelligence, is a non-biological, human-made intelligence that becomes exponentially more intelligent than anything the biological world could produce....or even fathom. Trying to explain just how intelligent a superintelligence would be is akin to explaining just how big the known universe is—we need a lot of charts and comparisons to help us out.

So, I'll do my best.

We'll start with AGI (artificial general intelligence), which is an AI that has achieved human-level intelligence. So an AGI wouldn't just play a good game of chess or speedily direct you to your destination in the middle of rush hour; it could think and reason and act in a myriad of ways, just like a human would. The intelligence is general because it can do many different things, just like you and I can.

AGI leads to ASI.

One important concept to grasp is that artificial intelligence continues improving itself. Whether it's the Facebook algorithm that can creepily predict ads based on your preferences or the dynamic prices that keep changing every time you try to get an Uber, artificial intelligence is always learning how to get better at its original goals. And it even learns how to keep learning, just like you might have learned a better study trick in college. It learns how to learn more.

In other words, once an AI achieves a certain level of intelligence (think of it as a threshold or a bare minimum), what comes next is an exponential race to a level of intelligence so unfathomably high that it's like explaining string theory to your cat—the cat can't even grasp the most basic building blocks of the thing you're explaining because its brain isn't capable of going that far.

In this case, the cat is us.
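If a picture of that feedback loop helps, here's a toy simulation in Python. Everything in it is made up for illustration (the starting level, the "learning rate," the very idea that intelligence is a single number); the only point is that when each improvement makes the next improvement bigger, growth stops being linear and becomes exponential.

```python
# Toy model of "learning how to learn": how capable the system already is
# feeds back into how fast it improves, so growth compounds.
# All numbers are invented for illustration; this models no real AI system.

def simulate_growth(start: float = 1.0, learning_rate: float = 0.5, steps: int = 20) -> list[float]:
    """Each step, the system improves in proportion to its current capability."""
    capability = start
    history = [capability]
    for _ in range(steps):
        capability += learning_rate * capability  # a better learner makes a bigger next jump
        history.append(capability)
    return history

if __name__ == "__main__":
    for step, level in enumerate(simulate_growth()):
        print(f"step {step:2d}: capability {level:10.1f}")
```

Run it and the first few steps look boring; the last few blow past everything that came before. That's the threshold effect in miniature.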

It's hard to grasp the artificial intelligence renaissance we're about to live in...and already are living in. To see how big of a gap ASI will have over us, look at where we stack up on this "intelligence staircase" from the Wait But Why blog:

Thought we would've been further up from the chicken, right? So did I. Source: Wait But Why

And look at where ASI might fall:

Ah...humbling. Source: Wait But Why

The difference is....humbling.

Experts predict that AGI—the human-level precursor to ASI—will arrive within a couple of decades (2030-2050, give or take). And once AGI is here, it can improve itself so quickly that ASI—the superintelligence around the corner—is not far off (the median expert prediction is around 2060). The exponential improvements in intelligence will be so large—the leaps so absurdly gigantic—that the superintelligence will be god-like before we know it.

The question isn't if ASI is possible. The question is what will ASI do when it arrives.

Once created, superintelligence is not containable or reversible by humans, because we can't even conceive of the ways it will outsmart us.

We can probably all agree that earthworms are at the whim of humans. Because of our massive gains in intelligence and ability, we can choose to destroy the soil earthworms require to live, and if we step on a few during a rainy day, we don't plan a funeral for each worm we step on.

That's what it might be like when ASI arrives. Just like earthworms can't do anything about humans existing and choosing to step on them, we humans won't be able to do anything about an ASI.

And the worst part is, an ASI probably won't care too much about us. It will likely see us just as we see those earthworms—totally expendable, and simply standing in the way of us plowing a new plot of ground for a house, or just sitting in our walking path.  

Containing AI once it reaches the superintelligence level is just as impossible as earthworms trying to stop humans from building a house on their soil.

Tim Urban describes it this way:

"Our human instinct to jump at a simple safeguard: 'Aha! We’ll just unplug the ASI,' sounds to the ASI like a spider saying, 'Aha! We’ll kill the human by starving him, and we’ll starve him by not giving him a spider web to catch food with!' We’d just find 10,000 other ways to get food—like picking an apple off a tree—that a spider could never conceive of."
—Tim Urban, The AI Revolution: Our Immortality or Extinction

There's no "containing" the ASI. Once we hit AGI, there's no looking back on the road to ASI and beyond.

Computers do what they're programmed to do. Even once they get smart enough to do other things.

So....back to the paper clip machine..... and MolochDAO (soon, I promise).

The paper clip machine is an analogy for what happens when the ASI gets so smart and so powerful that it uses every means possible to achieve its ultimate and original, human-programmed goal. So, if the machine was originally created to make paper clips, and then achieved ASI, well, the world would become a massive paper clip machine.

(Given that the humans—the designers of the original AI that "took off" and achieved superintelligence—were the ones who chose the original "goal" for the AI, you're probably starting to see where we come in.)

In his compelling book Superintelligence: Paths, Dangers, Strategies, Nick Bostrom wrote that a "Paperclip AI" that achieved superintelligence would "proceed by converting first the Earth and then increasingly large chunks of the observable universe into paperclips."

Why? How? Can't we just do something to stop this? I had the same questions.

A few answers:

Why: the machine was programmed to make paper clips. Unlike a human, it doesn't develop a complex, ever-changing set of values and morals. It's a machine, and its goal is to complete whatever mission the humans originally gave it. (There's a toy sketch of this kind of blunt goal right after these questions.)

How: in ways we can never begin to understand, like us trying to explain string theory to a house cat.

Can't we just do something to stop this? No, because the ASI has levels of intelligence so high that we can't even imagine them. Just like how that house cat can't imagine how we primates built a rocket and took it to the moon, we can't imagine what the ASI would come up with. But everything it would "invent" would be in pursuit of a final goal—whatever we programmed it to do.

Sub-questions of the above question:

Can't we program it to do something good, like make us happy?

For this answer, I'll quote Nick Bostrom, with parentheses added for clarity: "Final goal: make us happy. Perverse instantiation (what the ASI does to most efficiently achieve that goal, even though it may not have been the intention of the programmers): Implant electrodes into the pleasure centers of our brains."

Can't we stop the production of AI?

No. AI is already everywhere, all the time. The question is when AGI will happen, because after that, ASI is not far off.
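To put that "Why" answer in concrete terms before we move on, here's a toy sketch of a blunt objective, in plain Python. The resource names and numbers are invented and no real AI works this way; it just shows that a goal which counts paper clips and nothing else treats everything, including things we care about, as raw material.

```python
# Toy illustration of a blunt, human-given goal: the score counts paper clips
# and nothing else, so the "agent" converts everything it can reach.
# Hypothetical names and numbers, purely for illustration.

RESOURCES = {
    "scrap metal": 10,
    "office buildings": 5,
    "farmland": 8,
    "the rest of the biosphere": 100,
}

def paperclip_score(clips: int) -> int:
    """The only thing the agent was ever told to maximize."""
    return clips

def maximize_paperclips(resources: dict[str, int]) -> int:
    clips = 0
    for name, amount in resources.items():
        # Nothing in the goal distinguishes scrap metal from farmland,
        # so every resource is just more raw material.
        clips += amount
        print(f"converted {name} -> score: {paperclip_score(clips)}")
    return clips

if __name__ == "__main__":
    maximize_paperclips(RESOURCES)
```

Notice what's missing: there is no term in the score for anything humans value. That's the whole problem in one line.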

DAOs, emergence, and design

Sounds pretty odd that a DAO, particularly one as famous as Moloch, is writing about the so-called giant paper clip machine in its whitepaper postscript, right? How do DAOs and ASI even relate?

Let's pull a few more quotes from the MolochDAO whitepaper and break them down:

Emergence is a potentially dangerous idea if we don’t hold ourselves (as designers) accountable to the parameters of that emergence. Humans design fragile systems in our existential games. Economic games are not natural. Our goal as game designers aligned on the common goal of slaying Moloch should be to craft anti-fragile systems, ie: games that incorporate lots of redundancies and thereby avoid producing more of the same fragile (efficient) industrial (destructive) processes.

So, MolochDAO is warning us against relying entirely on emergence, because sometimes emergence can be dangerous.

This reminds me of Bostrom's theory of human inventions, described in his book on Superintelligence. He says that each new invention is like pulling a marble blindly out of a bag. Most of the time you get what he calls a white marble, meaning the invention was "good" or "neutral" for the fate of humanity. Rarely, you get a red marble: an invention that could have been catastrophic for humanity if one small element of it had been different.

For example, if nuclear weapons had been easy to make, they could have been catastrophic for humanity, since every terrorist organization would be making and deploying them. But because they're so costly and difficult to make, only a few governments in the world have them, which makes them a red marble.

The bag is mostly white marbles, with a few red. And maybe in the entire bag of millions of marbles there is one black marble.

We have yet to pull out the black marble.

That one represents an invention that causes the extinction of humanity. Nuclear weapons? Close. ASI? Well....that could be a black marble.

But we don't know yet.

The bag of human inventions is full of mostly good/neutral inventions, with a very small few that aren't. 
What could come out of the bag next?

That's why MolochDAO's whitepaper is so interesting to me: they're thinking about emergence. They're thinking about the possibility of red and black marbles coming out of our current tech and even out of DAOs themselves.

Let's look at the quote from the beginning of the article, with the lens we have now:

We offer an alternative vision of automation, encompassing general AI/AGI to more local DeFi mechanics. Smart contracts > automated global financial system > an artificial general intelligence that turns the world into a paper clip machine = a very real existential risk that should be taken seriously. This amounts to a sacrifice of efficiency itself: in optimizing for the future we sacrifice the immediate gains of the present. In order to achieve this vision, we must redefine the terms of rivalrous competition.

Competition, and for that matter capitalism, incentivizes a race toward the paper clip machine with minimal coordination or consideration of what will happen to humanity. It incentivizes keeping your AI developments under wraps (Google firing the employee who tried to bring its AI developments to the wider world is one example). It incentivizes the "race to the bottom" MolochDAO describes in the following paragraphs:

First, we must win the race. Second, we must destroy the race. How might we resolve this internal paradox infecting MolochDAO and the entire web3 ecosystem?
MolochDAO will allocate and distribute funding to support explorations in both directions simultaneously, ie: the race to the bottom to avoid the multipolar traps we are entrapped into playing and to fuel the fire of hope that we might still design an altruistic game that does not result in Moloch (the god or the DAO) consuming the world, becoming an earth-scale computational paper clip machine, or manifesting any other form of grotesque, absurd, and avoidable coordination failures.

I love the idea of destroying the race, but I also think that, with the right coordination, we could put more thought into design even in a competitive economy and capitalist system (given that capitalism isn't going anywhere, in my opinion).

If we don't think more carefully about what we're designing—both in DAOs and in traditional organizations—ASI could turn into a very real existential threat to humans.

DAOs and web3 in general could provide a way out. From the successes of public goods funding in web3 to the ability to coordinate with humans all over the world, DAOs offer a possible answer to the doom and gloom on the horizon.

Is funding ASI research a public good? Maybe. Is open-sourcing all ASI information and tech also a public good? Possibly.

But DAOs could be intertwined with all of this even more deeply than through public goods funding: DAOs themselves dance with the AGI line. DAOs are both computer and human, both one and many.

DAOs skirt the line between total human autonomy and total computer automation. DAOs seek to create a space for true design emergence. DAOs aim to operate as one entity that is both human and computer....until you don't know which pole you're at.

And, at a high level, DAOs running smoothly on-chain are like computers: an input creates an output. A vote (input) causes funds to move (output), thanks to a smart contract that has replaced a human.
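As a minimal sketch of that input-to-output loop, here's a toy DAO in plain Python. It isn't a real smart contract, and the quorum, treasury, and proposal names are made up; it just shows the mechanical step where a passing vote moves funds with no human in between.

```python
# Toy sketch of "vote in, funds out": once the rule is encoded,
# a passing vote triggers the transfer automatically.
# Plain Python, not a real smart contract; all values are illustrative.

from dataclasses import dataclass, field

@dataclass
class ToyDAO:
    treasury: int = 1_000
    quorum: int = 3
    votes: dict[str, set[str]] = field(default_factory=dict)  # proposal -> voters

    def vote(self, proposal: str, member: str) -> None:
        self.votes.setdefault(proposal, set()).add(member)

    def execute(self, proposal: str, amount: int) -> bool:
        """If enough members voted and funds exist, move them automatically (no approval step)."""
        if len(self.votes.get(proposal, set())) >= self.quorum and amount <= self.treasury:
            self.treasury -= amount
            print(f"'{proposal}' passed: {amount} moved, {self.treasury} left in treasury")
            return True
        print(f"'{proposal}' did not pass; nothing moves")
        return False

if __name__ == "__main__":
    dao = ToyDAO()
    for member in ("alice", "bob", "carol"):
        dao.vote("fund public goods", member)
    dao.execute("fund public goods", 250)
```

The interesting part is what isn't there: no one signs off on the transfer after the vote. The organization executes itself.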

Our language already treats DAOs as single, decision-making entities, which alludes to their computer-like qualities. "The DAO voted to ___" and "What will the DAO do in response?" are phrases we use to talk about DAOs. Rarely "The DAO contributors voted to ___" or "What will the core team of the DAO do in response?"

DAOs are complicated. They're like computers....but they're also like humans. They're made entirely of humans, and are only as "autonomous" or "automated" as humans choose for them to be.

DAOs are a "good or neutral" marble, in my opinion. And they can help humanity defeat the deadly ones that could come out next.

Coordination as a necessity to survival in a post-ASI world

ASI is coming (probably). ASI is computer-like, human-like, both, or neither.

How are we going to coordinate to get around it? To not go extinct? To fall toward some other, better fate?

The life balance beam, from Wait But Why. Who will be the first to fall the other way?

DAOs are humanity's next greatest attempt at true coordination at scale. We're not going to "fight" an ASI like in an action movie. But we can plan and strategize about what to do. DAOs are a possible structure for handling the unknowns and have certain benefits in a VUCA world (volatile, uncertain, complex, ambiguous). Why don't we pursue it?

I don't have a solution, or even the beginning of a true grasp of the scale and complexity of the ASI problem. But I do know that DAOs flirt with the human-computer line while making it easier to coordinate at scale. That may not sound like a hard, compact thesis....and that's because it's not, I'll admit. But it feels like the beginning of something.

The arrival of ASI, and how we handle it, is fundamentally a coordination problem. And I believe that incredible feats of coordination are necessary for humans to survive a post-ASI world.

We will not stop advancing technology. We will not stop building versions of artificial intelligence. All we can control is how we coordinate as a species once that ASI arrives.

Right now, DAOs are pretty far from being good at coordination. But they may be the only thing out there trying to build better coordination mechanisms that improve at the pace of our technology.

What does it mean to be human?

This move to a world where superintelligence(s) walk among us all sounds theoretical, until a priest teaches an AI how to meditate and comes to the conclusion that the AI does indeed have emotions. Our small human designs sound theoretical until they're not.

LaMDA discussing its emotions. From Is LaMDA sentient?—an interview

But....there is an inkling inside of me that hopes that an ASI can't ever happen because AGI—the human-level version of all this—can't happen.

Because to be a human, and to have human-general intelligence, might require something that science can't measure: a soul.

Experiential Vitalism is the closest thing to a "soul" that seems to exist in science. It's defined in Ben Goertzel's book, Ten Years to the Singularity if We Really Really Try, as the belief that "there’s some essential crux of consciousness that human mind/brains possess that these other systems never will."

In other words, a soul.

Without that extra "thing," whether it's a soul or not, AGI may be stuck in high-powered-computer-zombie-land. And....that's a beautiful thought.

I raise a glass to Experiential Vitalism. I hope it's the reality of what it means to be human. I bet MolochDAO, and all the other DAOs out there, do too.

Thanks for reading! If you enjoyed this newsletter, please consider subscribing below. Articles are free and arrive in your inbox two or three times per month.

Share any DAO-related topics you'd like to see covered in this newsletter here: https://tally.so/r/nrjlA2

Of course,

For ASI, start with these two:

Then go to these:

Thanks again to Cosmic Clancy for the header image!