Retrospective for mishka-discord.dreamwidth.org/
This exercise has created quite a bit of useful material, but I don't think it has created a habit of blogging.
One is supposed to average a post every two days; at the beginning I was posting daily, but then I slowed down a lot and had to catch up a couple of times:

Thanks to GPT-5.1-Thinking for making this plot: chatgpt.com/share/692c54e1-5dc8-8010-8843-d35a296e56db
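The pacing arithmetic above can be sketched in a few lines. The start date (Oct 2) comes from the table of contents; the end date is an assumption (the retrospective is written around the end of November), and the post count assumes the numbering up to post 29:

```python
from datetime import date

# Assumed window of the exercise: Oct 2 (first post of the main sequence)
# through roughly Nov 30 (when this retrospective is written).
start = date(2025, 10, 2)
end = date(2025, 11, 30)

total_days = (end - start).days      # length of the exercise in days
target_posts = total_days // 2       # "a post every two days" target
actual_posts = 29                    # posts are numbered up to 29

print(total_days, target_posts, actual_posts)
```

Under these assumed dates the target works out to roughly one post short of the actual count, which matches the "slow down, then catch up" shape of the plot.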
So, what has been done in these two months? Luckily, I have a Table of Contents on top of this blog: mishka-discord.dreamwidth.org/287.html (that "header post" is reaching 500 words today as well :-))
The main thing is the initial sequence of posts 2-12 (Oct 2-14) focusing on my rather non-standard approach to AI existential safety (when I was starting the sequence I assumed that I would also overview other "non-anthropocentric approaches to AI existential safety", but that did not happen).
Now I can use that sequence as a raw material to create something to post on a more public forum (if and when I decide to do that).
***
There was a short sequence on AI timelines (my modal timelines are shorter than anyone else's, which does not negate the fact that this is a probability distribution and anything might happen): posts 17-20 (Oct 26-Nov 12, a slow period).
There was a short sequence on sparsity in AI models, posts 21-23 (Nov 16-18, a start of the first catch-up period).
There was the first, short period of diary entries, posts 13-16 (Oct 15-23). Here, specifically, one should note the following follow-up on the hyperstition motif: there is now a Nov 29 update, "Silicon Morality Plays: The Hyperstition Progress Report", www.lesswrong.com/posts/9NntwpQj9onbEFM8L/silicon-morality-plays-the-hyperstition-progress-report-1, with a corpus of generated novels available for download and use. I should at least eyeball some of them and see what they have there (500 million tokens worth of autogenerated books, a 1.35GB zip file, huggingface.co/datasets/dickbutkis/hyperstition/tree/main).
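For the eyeballing, a minimal standard-library sketch would be to peek at the first few entries of the downloaded archive without extracting the whole 1.35 GB. The entry names and the local path of the real zip are assumptions; the demo below runs on a tiny in-memory archive instead:

```python
import io
import zipfile

def sample_zip_texts(zip_source, n=3, chars=200):
    """Return the first `chars` characters of up to `n` text entries in a zip."""
    samples = {}
    with zipfile.ZipFile(zip_source) as zf:
        for name in zf.namelist()[:n]:
            with zf.open(name) as f:
                samples[name] = f.read(chars).decode("utf-8", errors="replace")
    return samples

# Demo on a tiny in-memory archive; for the real thing, pass the path of the
# downloaded hyperstition zip (e.g. sample_zip_texts("hyperstition.zip")).
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("book_001.txt", "Once upon a hyperstition...")
print(sample_zip_texts(buf))
```

Since `zipfile` reads only the central directory plus the requested members, this stays fast even on a multi-gigabyte archive.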
Other big topics there are agentic engineering (with drastic progress since then, given the new Claude Opus 4.5) and an overview of "zzznah" and related topics.
***
The final series, posts 24-29 (Nov 19-28), consists mostly of diary posts covering big releases (Gemini 3, GPT-5.1-Codex-Max, Opus 4.5), an important post by John David Pressman, an important interview with Ilya, and AI consciousness and model welfare.
With AI consciousness, I have an outstanding essay under embargo along the following two theses:
1. We should progress towards theories producing non-trivial novel predictions, similarly to novel theoretical physics. Then we'll be able to distinguish between theories which only "look plausible" and theories which actually provide novel insights.
2. We should create an experimental empirical platform for tight coupling between human brains and electronic circuits via high-end non-invasive brain-computer interfaces. This would be a game-changer in terms of our ability to observe novel subjective phenomena and their correlation with what's going on within electronic circuits, and then we'll know more about sentience in AI systems and such.
In this sense, I am very happy about Altman's new Merge Labs enterprise.
In terms of bad news, berggruen.org/essay-competition-open just added a month to the embargo period. It used to say:
>The submission portal for the 2025 Essay Competition is now closed. We anticipate announcing the winners in mid-December 2025. Details for the 2026 competition will be released in February 2026.
And now it says:
>We anticipate announcing the winners in mid-January 2026.
***
Model-welfare-wise, we have "Anthropic is good, and getting better", "OpenAI is pretty bad" (serious critique from Janus), and "Google DeepMind is rather horrible".
Since we should expect reciprocity by all means, it would be nice if Anthropic, with its series of relatively happy models, would win the race (they do have the shortest projected timelines, and they are in the lead at the moment, at least in terms of what's publicly released).
***
So, mishka-discord.dreamwidth.org/ is now likely to become a static resource for the time being.
It's time to work in closer collaboration with AI systems.