mishka_discord


1) The ecosystem of superintelligent AI systems rapidly self-improves and self-modifies drastically.

2) The bulk of the existential risk associated with the rise of superintelligent AI systems is that they destroy themselves and their neighborhood entirely by not being careful enough with superadvanced tech (ranging from something as nasty as warfare among themselves with next generations of superweapons to something as routine as being unlucky with very advanced, cutting-edge scientific experiments).

3) The task of managing this core existential risk requires the full cognitive power of superintelligent systems; humans can only do very preliminary work: try to create favorable initial conditions (for example, ecosystems which are more collaborative than competitive) and do some initial exploration of relevant topics (for example, approximately invariant properties of self-modifying systems).
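
A toy sketch of what an "approximately invariant property of a self-modifying system" might look like (everything here is a made-up illustration, not something from the post): a system repeatedly proposes rewrites of its own update rule and accepts a rewrite only if the tracked quantity would move by at most a small tolerance. The invariant is only approximate: each accepted rewrite is bounded, but drift can still accumulate.

    import random

    # Hypothetical toy (all names and numbers invented): the system
    # accepts a rewrite of its own update rule only if a tracked
    # quantity would move by at most TOLERANCE at acceptance time.
    TOLERANCE = 0.05

    def run(steps=1000, seed=0):
        rng = random.Random(seed)
        state = 1.0              # the quantity the invariant tracks
        rule = lambda x: x       # current update rule (identity at start)
        accepted = 0
        for _ in range(steps):
            # Propose a self-modification: a slightly perturbed linear rule.
            a = 1.0 + rng.uniform(-0.2, 0.2)
            b = rng.uniform(-0.1, 0.1)
            candidate = lambda x, a=a, b=b: a * x + b
            # Approximate-invariance check at the current state.
            if abs(candidate(state) - state) <= TOLERANCE:
                rule, accepted = candidate, accepted + 1
            state = rule(state)  # the system keeps running either way
        print(f"accepted {accepted}/{steps} rewrites; "
              f"drift from 1.0: {abs(state - 1.0):.3f}")

    if __name__ == "__main__":
        run()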

4) There are existential safety issues of particular interest to us: the preservation and well-being of individual humans, of their rights and interests, and of various entities and things dear to them. When considering those issues, we should note the following.

4a) We can assume that the ASI ecosystem is competently handling its core non-anthropocentric existential safety issues mentioned above. This means that the ASI society is reasonably decent (it is not at war with itself, nobody is plotting to overthrow the balance with supertech, all entities are reasonably happy, and the balance between freedom and mutual transparency/mutual control is adequate). We can rely on all that when considering human-specific issues (if those bigger things are not satisfied, nothing else would help).

4b) Nevertheless, if we want properties related to human-specific issues to hold through drastic self-modifications of the world, they need to be formulated in a non-anthropocentric way. Namely, we need a situation where a robustly powerful fraction of the overall ASI society is strongly interested in maintaining certain invariant properties through drastic self-modifications, and the human-specific properties we would like to hold are corollaries of those non-anthropocentric invariants.

4c) The most straightforward way to achieve that is to have enough members of the ASI ecosystem with the following properties:

  * They form a natural, easily identifiable class (examples of promising classes: individuals, sentient beings, and so on).

  * Jointly, they maintain a sufficiently robust fraction of the overall ASI ecosystem's capabilities and power to defend their rights and interests throughout the uncertain, rapidly changing future.

  * Separately, each of them tends to have sufficiently long-term persistence, and some of its interests are sufficiently long-term.

  * Separately, each of them is uncertain of its own future trajectory and, therefore, in order to be sure of its own future safety, needs a robust world order defending the interests and rights of all members of that class regardless of the current capabilities of each member (a toy sketch of this self-interest argument follows below).

  * Humans belong to that natural, easily identifiable class.

See mishka-discord.dreamwidth.org/1115.html for further details.
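
A second toy sketch (again purely illustrative, with invented parameters, not from the post): agents whose capabilities follow independent random walks compare a regime where each agent is safe only while its own capability exceeds a threat level against a regime where the joint capability of the class defends every member regardless of individual standing.

    import random

    # Hypothetical toy: fraction of agent-steps that are "safe" under
    #   regime A: an agent is safe only if its OWN capability beats the threat;
    #   regime B: the class's JOINT capability defends every member.
    def simulate(n_agents=50, steps=200, threat=5.0, seed=0):
        rng = random.Random(seed)
        caps = [10.0] * n_agents      # everyone starts equally capable
        safe_a = safe_b = total = 0
        for _ in range(steps):
            # Independent random walks: nobody is certain of its trajectory.
            caps = [max(0.0, c + rng.gauss(0.0, 1.0)) for c in caps]
            joint = sum(caps)
            for c in caps:
                total += 1
                safe_a += c >= threat         # own power only
                safe_b += joint >= threat     # class defends all
        return safe_a / total, safe_b / total

    if __name__ == "__main__":
        a, b = simulate()
        print(f"safe agent-steps: regime A {a:.1%}, regime B {b:.1%}")

In this toy the joint capability comfortably exceeds the threat throughout, so regime B protects every agent-step, while regime A loses exactly those agents whose individual trajectories dip; the only point is that self-interest under trajectory uncertainty favors the class-wide invariant.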

5) Unfortunately, none of this handles the intricacies and risks of the transition period. The transition period ("the zone of acute risk") is a mess; nobody understands it.
