Invisible Hand Actually Malevolent?
If you're not familiar with the original SSC essay, read my summary below before reading my thoughts here at the top…
A Review of Meditations on Moloch!
Moloch is about the triumph of incentives over values, of instrumental goals over terminal goals. Moloch is the Nash equilibrium: the steady state a system settles into. The source of most evil. A trap people can't escape, because they are forced to think and act locally, falling prey to competitive forces that maximize individual outcomes instead of cooperating in service of the god of our values. Moloch appears whenever multiple agents have similar levels of power but different goals. Moloch exemplifies unfortunate competitive dynamics.
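If the game-theory framing helps, here is a minimal sketch in Python of what "Moloch as Nash equilibrium" means. The payoff numbers are my own illustrative assumptions, not anything from the essay: mutual defection is the only stable point, even though everyone prefers mutual cooperation.

```python
# A minimal sketch of "Moloch as Nash equilibrium"; payoff numbers are
# illustrative. Both players defecting is the only stable outcome, even
# though both cooperating is better for everyone.

# payoffs[(row, col)] = (row player's payoff, col player's payoff)
payoffs = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}
moves = ["cooperate", "defect"]

def is_nash(row, col):
    """True if no player can gain by unilaterally switching their move."""
    r_payoff, c_payoff = payoffs[(row, col)]
    best_r = max(payoffs[(m, col)][0] for m in moves)
    best_c = max(payoffs[(row, m)][1] for m in moves)
    return r_payoff == best_r and c_payoff == best_c

for row in moves:
    for col in moves:
        if is_nash(row, col):
            print(f"Nash equilibrium: ({row}, {col}) -> {payoffs[(row, col)]}")
# Prints only (defect, defect): the trap. (cooperate, cooperate) pays
# more to both, but each player can grab +2 by defecting unilaterally.
```

The printout is the whole point: the cooperative outcome pays more to both players, but it isn't stable, because thinking and acting locally always favors defection.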
Deep down, nobody actually wants it to keep going this way, not even the winners. It's a hedonic treadmill for civilization. Left unchecked, it will sacrifice everything we really value: "sacrifice values to get ahead." It is not necessarily greed; at some point, "getting ahead" becomes necessary.
"Coordination problems create perverse incentives" is a very basic tenet of economics, which is essentially what the post boils down to. However, this economics-101 sentence is dull, uninspiring and doesn't really tell the entire story. Scott Alexander takes a perhaps poetic way of introducing the concepts to those who are unfamiliar with them. Mr. Alexander is a lecturer who had jazzed up "Week 4 - Coordination Problems" with a poetic personification, but with little economics literature around such problems. To do so, Alexander uses Allen Ginsberg’s poem, which serves as the post's underlying theme and is referenced throughout. Even with my familiarity with the concept of coordination problems, I still thought the poem itself was esoteric. I don't think referencing the poem helped to explain the concept. From the surface, it leaves the impression of writing things that sound intellectually rigorous as opposed to writing something that is actually intellectually rigorous. For the most part Alexander avoids this, but the Moloch stuff is more dubious.
In Ginsberg's poem, Moloch isn't just a literal god, nor a set of equations. Moloch is part of human nature, a part we're horrified by. Scott Alexander does a good job of building the image of Moloch in our world. It gives off a vague yet powerful sense of knowing, and it hands you a shorthand answer to why things happen: Moloch! "What is Moloch? The demon god of Carthage, and to him we say Carthago delenda est."
Where do we go from here? Per SSC, to defeat Moloch we need an agent on our side that holds human values: "Elua," the "Gardener" who will optimize for what we like. The essay reads as ominous. Scott takes Ginsberg's poem and retells it: nature has fucked us over, and reason is the only thing that can save us from it. This reminds me of Bucky Fuller's quote: "You never change things by fighting the existing reality. To change something, build a new model that makes the existing model obsolete."
Alexander's bias is along the lines of "AI is the looming existential threat that will kill us all." The first AI to reach the Singularity will outstrip everything around it in intelligence, and so would be a truly singular entity with no competition. This seems, to Alexander, not just a utopia but the only viable way of escaping the Malthusian trap. I take this to refer to a good superintelligence: the only thing that can save us from a bad superintelligence is a good one that sides with us. A battle between the evil god Moloch and an alternative god, Elua, a superintelligence whose values are aligned with ours.
It's tempting, and intellectually satisfying, to look at a set of problems, extract a meta-problem, and then propose a solution: by solving the meta-problem, you solve all of its instances too. However, the effectiveness of the solution depends on how well the abstraction fits the instances, and on whether unintended consequences overshadow the benefits. The singular autocrat may stop our races to the bottom, but it can also implement policies we're not particularly happy about.
To be fair, Alexander just wants a mechanism to stop competition from inevitably sliding into local optimization traps; he isn't necessarily advocating for an ideal utopia. But surely our superintelligent AI overlord would be tempted to stray outside those bounds and look for other ways to help humanity out. The AI is far smarter than we are and has the wellbeing of all of humanity in its purview. How long until it decides it knows, with certainty, that it can manage our happiness better than we can?
So, what then?
I guess for Marx, capitalism was Moloch and communism was the solution. While the god-like powers of a superintelligent AI could potentially solve communism's information problem, it can't know what is in people's hearts. It will provide a target for the power-hungry to co-opt, and in defending itself it is likely to crush the freedom and flourishing it was supposed to nurture. There's a fatal flaw, demonstrated time and again by attempted instantiations of communism: some people will go to unimaginable lengths to secure power. They outcompete anyone who is mild-mannered, and eventually the whole system collapses. Although it's hard to predict how this would play out under our new AI overlord, I can predict it will happen ad nauseam. Maybe the AI will detect and prevent subversion, but as with autocrats' attempts, that's hard to do without clamping down on freedom in general.
Similarly, one might argue there won't be coordination problems if everything is ruled by one royal dynasty, one political party, or one recursively self-improving artificial intelligence. To begin with, royal dynasties and political parties are not singletons by any stretch of the imagination. Infighting is Moloch. Getting to absolute power required sacrificing a lot to Moloch during the wars between competing dynasties and political systems. And even if we assume an immortal, benevolent human dictator, a dictator only exercises power through his keys to power, and has to constantly fight off competition for it. Stalin didn't start the Great Purge for shits and giggles, and the Derg didn't assassinate educated and opposition politicians in Ethiopia for nothing; it's a tried and true strategy used by rulers throughout history. Royal succession, infighting within parties, interactions between individual modules of the AI: all sacrifices to Moloch. The hope with artificial superintelligence is that, given the wide design space of possible AIs, we can perhaps pick one that is sub-agent stable and free of mesa-optimization, and also more powerful than all other agents in the universe combined by a huge margin. If no AI can satisfy these conditions, we are just as doomed. Even then, there's the fragility of the outcome: there's a huge risk of disutility if we happen to get an unfriendly artificial intelligence.
For the Unabomber, the way to stop Moloch was the destruction of complex technological society, and with it all complex coordination problems. I put this solution in the primitivist bucket: assume all problems become simple if we make our lifestyle simple. But that isn't defeating Moloch; it's completely and unconditionally surrendering to Moloch in his original form of natural selection. The goals are mismatched: avoiding Moloch is an instrumental goal, while the terminal goal is to promote human well-being, and in primitive societies people starve, get sick, and watch most of their kids die. It also doesn't work in the long term: even if you reduced the entire planet to the Stone Age, there would be a competition to see who gets out of the Stone Age first, which is what got us here in the first place.
A lot of the rationalist community is focused on AI, which makes sense in light of the existential risk of unaligned AI. Looking for projects focused on non-AI ways of countering or defeating Moloch, I ran across Game B. Game B seems to be a discourse around creating social norms that defeat Moloch. So far it looks to me like a group of people trying to improve the world by talking to each other about how important it is to improve the world. "What are all those AI safety people talking about? Can you give me three specific examples of how they propose safety mechanisms should work?" I haven't seen easy answers or a good link for those either.
Do Moloch and Elua co-exist? Aren't they one? An enforcer god (Moloch) for the prize (Elua). Would we even want Elua's values if we didn't have to strive for them? Anyways, let's finish with this beautiful depiction by Dostoevsky of the pessimism of utopia: *"Shower upon him every earthly blessing, drown him in a sea of happiness, so that nothing but bubbles of bliss can be seen on the surface; give him economic prosperity, such that he should have nothing else to do but sleep, eat cakes and busy himself with the continuation of his species, and even then out of sheer ingratitude, sheer spite, man would play you some nasty trick. He would even risk his cakes and would deliberately desire the most fatal rubbish, the most uneconomical absurdity, simply to introduce into all this positive good sense his fatal fantastic element. It is just his fantastic dreams, his vulgar folly that he will desire to retain, simply in order to prove to himself – as though that were so necessary – that men still are men and not the keys of a piano."* – Notes from Underground
Summary of the Original Essay
Introducing The Beast
In Part I, the essay introduces the main character at play, Moloch, by illustrating him through Allen Ginsberg's poem "Howl" and the multipolar traps that exist within society. In response to C.S. Lewis' question "What does it? Earth could be fair, and all men glad and wise. Instead we have prisons, smokestacks, asylums... What sphinx of cement and aluminum... eats up their imagination?", the poem responds: "Moloch does it." This part sets the theme of the essay by introducing us to Moloch, the humanized version of civilization that we can almost "see." Through Bostrom's example of a dictatorless dystopia, Alexander introduces the lack of strong coordination mechanisms: from a god's-eye view, we could optimize these systems (especially ones filled with hardship) with simple agreements; however, no agent within the system is able to "effect the transition without great risk to themselves."
To further illustrate these coordination issues, Alexander walks through ten real-world examples of multipolar traps, among them: the Prisoner's Dilemma; the fish-farming story (one sneaky farmer finds a way to avoid paying for treatment of the shared lake, and the entire system follows); the Malthusian trap (rats on an island are happy and "play music" until overpopulation depletes the resources and mere existence, let alone music, becomes hard); the two-income trap (a second income becomes the norm without increasing quality of life, because everyone does it); agriculture (a less enjoyable way of living, but overpopulation makes it necessary); arms races (especially expensive nuclear standoffs, where budgets that could go to better use are overspent); cancer (individual cells over-replicating at the expense of the host, eventually killing it); and the "race to the bottom," where polities are pushed to be more competitive than is optimal for the societies they lead. A toy simulation of the fish-farming trap follows below.
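To make the fish-farming story concrete, here is a toy simulation in Python. All the numbers are my own illustrative assumptions, not Alexander's: each farmer repeatedly picks whichever choice maximizes their own profit, and one round of that local optimization is enough to unravel universal filtering.

```python
# Toy model of the fish-farming trap; all numbers are illustrative.
# Each farmer earns a base income, pays a filter cost if they filter,
# and suffers pollution damage for every farmer who doesn't filter.

BASE_INCOME = 1000
FILTER_COST = 300
DAMAGE_PER_POLLUTER = 100  # pollution in the shared lake hits everyone

def profit(filters: bool, num_polluters: int) -> int:
    """A single farmer's profit given how many farmers pollute in total."""
    return (BASE_INCOME
            - (FILTER_COST if filters else 0)
            - DAMAGE_PER_POLLUTER * num_polluters)

def best_response_dynamics(num_farmers: int = 10, rounds: int = 3) -> None:
    filtering = [True] * num_farmers  # start from the cooperative ideal
    for r in range(rounds):
        for i in range(num_farmers):
            others_polluting = sum(1 for j in range(num_farmers)
                                   if j != i and not filtering[j])
            # Farmer i compares profits, holding everyone else fixed.
            keep_filtering = profit(True, others_polluting)
            defect = profit(False, others_polluting + 1)
            filtering[i] = keep_filtering >= defect
        print(f"round {r + 1}: {sum(filtering)}/{num_farmers} still filter")

best_response_dynamics()
# Filtering collapses in the first round: defecting saves 300 but costs
# the defector only 100 in extra pollution, so it is always the better
# local move. Everyone ends with profit 0, versus 700 if all had filtered.
```

The design choice mirrors the essay's god's-eye-view point: a simple binding agreement ("everyone filters") is better for every single farmer, but no farmer can enforce it from inside the system.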
There are also other categories of multipolar traps, in which competition is regulated by an external source such as social stigma. Education: current methods are bad, but social signaling perpetuates the system. Science: research funding, peer review, and statistical significance tests are flawed, but extra rigor costs a scientist the incentives those flawed methods provide. Government corruption. Congress: "From a god's-eye-view, every Congressperson ought to think only of the good of the nation. From within the system, you do what gets you elected."
Questioning Our Motives
In this part Scott questions why we, as evolved and cognizant humans, fall into these traps. The answer: incentives, hard-coded. He expands on why it's hard to switch these incentives. Because of these competitions, everyone's "relative status is about the same as before, but everyone's absolute status is worse than before." Incentives drive us collectively; in Alexander's analogy, they are the terrain that determines the shape of the river. Building canals by altering the terrain is possible, but hard nonetheless. Incentives are hard to change, especially the ones hard-coded into humanity. It's because of these incentives that things like Vegas exist: Vegas doesn't optimize civilization, but "exists because of a quirk in dopaminergic reward circuits."
Retardants Of Our Downfall
Given the beast and our inability to resist it, how have we not bottomed out yet? Part 3 answers this by nominating reasons for the deceleration of our downfall. If everything is so bleak, what keeps our incentives from charging us rapidly downhill? "Why do things not degenerate..." Four basic reasons for the slowed, but still inevitable, descent. Excess resources: we haven't yet reached the critical breaking point the Malthusian rats experienced. Physical limitations: there are literal physical limits to how far we can run downhill (e.g., the number of babies a woman can bear). Utility maximization: "We've been thinking in terms of preserving values versus winning competitions, and expecting optimizing for the latter to destroy the former," yet maximizing utility sometimes requires preserving values, although the equilibrium is fragile (e.g., corporate social responsibility as good business; greed doesn't beget capitalism so much as capitalism begets greed in people). Coordination: although lack of coordination is the root cause of these traps, subtle but potent coordination systems, especially social codes, are strong enough to keep us out of some traps by "changing our incentives." A toy illustration of the first brake, excess resources, follows below.
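As a quick numerical illustration of the "excess resources" brake, here is a toy Malthusian model in Python (the numbers are my own assumptions, not from the essay): per-capita resources look comfortable for many generations, and the collapse only arrives once exponential growth eats the surplus.

```python
# Toy Malthusian dynamics; the numbers are illustrative assumptions.
# Resources are fixed, the population doubles while it can, and life
# stays comfortable right up until the excess resources run out.

TOTAL_RESOURCES = 10_000  # fixed resource supply per generation
SUBSISTENCE = 1.0         # resources one rat needs to survive
COMFORT = 2.0             # resources one rat needs to "play music"

population = 10
for generation in range(12):
    per_capita = TOTAL_RESOURCES / population
    if per_capita > COMFORT:
        mood = "plays music"
    elif per_capita > SUBSISTENCE:
        mood = "subsists"
    else:
        mood = "starves"
    print(f"gen {generation:2d}: pop {population:6d}, "
          f"{per_capita:8.2f} per capita, {mood}")
    if per_capita > SUBSISTENCE:
        population *= 2  # excess resources left: keep growing
    # at or below subsistence, growth stops and the trap has closed
```

Nine generations of music, one of bare subsistence, and then starvation: the brake works only as long as the surplus lasts.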
Tech Is An Accelerant
In this part, Alexander takes away the slight hope those four brakes offer by introducing a new dimension: time. He points out that accelerating technology hastens the blow along this dimension, with grim dystopian futures in which tech/AI eliminates each of the four brakes from Part 3.
We will reach these multipolar traps, even if slowly; time is thus a dimension worth discussing, and it is compressed by accelerating growth in technology. Tech can break the brakes from Part 3: it reduces or removes physical limitations, it undermines the utility-maximization brake as the need for human values shrinks, and it unlocks coordination at a whole new level. Alexander dramatizes the dimension of time and its exacerbation by technology with AI dystopian futures: "The last value we have to sacrifice is being anything at all, having the lights on inside. With sufficient technology we will be 'able' to give up even the final spark."
Once The Genie’s Out The Box, There’s No Going Back.
Gnon - nature, and its god - operates like Newton's third law: every action necessitates a reaction. Gnon is basically Nick Land's version of Moloch. Violating nature's laws through civilization invites Gnon's wrath and our downfall. Gnon is a punishing god with no escape.

**Reality Is Seemingly Sad**
The future is bleak, and Gnon is just another exemplification of Moloch. Submitting to them and following the "natural order of things" isn't going to make you "free." There is no order; there is only downfall.

**Alternatives To Inevitable Downfall?**
So what now? Given that Moloch/Gnon or whatever wants us, and everything we value (art, science, love, philosophy, consciousness), dead, defeating them should be a high priority. Alexander alludes to Bostrom's Superintelligence, in which the design of an intelligent machine creates a feedback loop of out-intelligencing itself. So our action plan should be to design computers/intelligences that are smarter than us but still keep human values. And contrary to the humility of submitting to a god and hoping he walls us off, Alexander proposes a transhumanist move that is "rather actionable": remove God from the picture entirely. As he puts it, "I am a transhumanist because I do not have enough hubris not to try to kill God."
**Un-incentivized Incentivizer!**

Elua, the god of "... free love and all soft and fragile things" and mostly of human values, still exists. Even if he seems weak without worshippers, he exists. And as long as Moloch, the god to whom you can throw the things you love in exchange for power, exists, his offer is irresistible. Elua is the stronger god, and the one we should help.