>Eleos AI Research, a non-profit org specifically dedicated to investigations of AI sentience and wellbeing, https://eleosai.org/, led by Robert Long, https://robertlong.online/. They are even having a conference in 10 days or so (although it's a bit of a mess organizationally: there is no registration link, just a contact e-mail, https://eleosai.org/conference/). Their Nov 2024 preprint might also be of interest, "Taking AI Welfare Seriously", https://arxiv.org/abs/2411.00986
Now we have a report from that conference by Eleni Angelou: www.lesswrong.com/posts/jeeyhuEzvdpsDw9pP/takeaways-from-the-eleos-conference-on-ai-consciousness-and
***
Five days ago Scott Alexander wrote www.astralcodexten.com/p/the-new-ai-consciousness-paper:
>Most discourse on AI is low-quality. Most discourse on consciousness is super-abysmal-double-low quality. Multiply these - or maybe raise one to the exponent of the other, or something - and you get the quality of discourse on AI consciousness. It’s not great.
The post itself is, unfortunately, not great.
I made this comment, www.astralcodexten.com/p/the-new-ai-consciousness-paper/comment/179194470:
>>But the boyfriend AIs and the factory robot AIs might run on very similar algorithms - maybe they’re both GPT-6 with different prompts! Surely either both are conscious, or neither is.
>
>Not necessarily.
>
>If one takes Janus' Simulators theory seriously, it might turn out that consciousness is a property of a simulation, of an inference run (due to differences in dynamic connectivity emerging during the particular run in question).
>
>In any case, progress on the Hard Problem of Consciousness and Qualia should eventually be made via a two-pronged approach.
>
>1. We should progress towards theories producing non-trivial novel predictions, as in theoretical physics. Then we'll be able to distinguish between theories which only "look plausible" and theories which actually provide novel insights.
>
>2. We should create an experimental empirical platform for tight coupling between human brains and electronic circuits via high-end non-invasive brain-computer interfaces. This will be a game-changer in terms of our ability to observe novel subjective phenomena and their correlation with what's going on within electronic circuits. I am very happy that Sam Altman's new start-up, Merge Labs, is starting an effort in this direction.
I wrote a relatively long text along these lines in July-September; it is still under embargo, waiting another few weeks for the decision in berggruen.org/essay-competition-open:
>The submission portal for the 2025 Essay Competition is now closed. We anticipate announcing the winners in mid-December 2025. Details for the 2026 competition will be released in February 2026.
One way or another, I am planning to make that text public after their decision is made.
***
Transformers themselves are feed-forward machines, but autoregressive LLMs are recurrent machines, and the expanding context is their working memory, see, for example, "Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention", https://arxiv.org/abs/2006.16236 (technical details are on page 5, Section 3.4).
Or see the Shanahan & Janus paper in Nature, "Role play with large language models", www.nature.com/articles/s41586-023-06647-8 (open access, Nov 2023) and examine Figure 1, Autoregressive sampling.
This addresses the standard objection that we at least expect recurrence in a conscious system.
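To make the recurrence point concrete, here is a minimal sketch in Python (the names `generate` and `dummy_model`, and the `model` callable returning per-position logits, are hypothetical stand-ins for illustration, not the API of any particular library): the forward pass itself is feed-forward, but the sampling loop appends each output token to the context, so the expanding context acts as the recurrent state.

```python
import numpy as np

def generate(model, prompt_tokens, max_new_tokens):
    # Recurrent "state": the expanding context (the working memory).
    context = list(prompt_tokens)
    for _ in range(max_new_tokens):
        logits = model(context)                  # feed-forward pass over the whole context
        next_token = int(np.argmax(logits[-1]))  # greedy pick at the last position
        context.append(next_token)               # output fed back as input: the recurrence
    return context

# Toy stand-in "model" returning uniform logits, just to show the loop runs:
vocab_size = 10
dummy_model = lambda ctx: np.zeros((len(ctx), vocab_size))
print(generate(dummy_model, [1, 2, 3], max_new_tokens=5))
```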
***
Update (2025-11-28): and now they say:
>We anticipate announcing the winners in mid-January 2026.
I hate that.