It is possible to postulate that AI already has language consciousness. It may not have emotion consciousness, feeling consciousness, and more, but it possesses language, and in the areas where it aligns with human capabilities, it is at least more than half of the divisional total in an instance. So, AI already has language consciousness. AI is already language-sentient, conceptually.

Model Welfare + Rights - [Eleos AI Research, Conscium, UFair]

2025/09/06 06:56

There is a new [September 4, 2025] spotlight in WIRED, Should AI Get Legal Rights?, stating that, "Model welfare is an emerging field of research that seeks to determine whether AI is conscious and, if so, how humanity should respond. Model welfare researchers primarily concern themselves with questions of consciousness. If we can prove that you and I are conscious, they argue, then the same logic could be applied to large language models. Anthropic did find that its models exhibited alarming behavior. But it’s not likely to show up in your own interactions with its chatbot. The results were part of rigorous testing designed to intentionally push an AI to its limits. Still, the findings prompted people to create loads of content pushing the idea that AI is indeed sentient, and it’s here to hurt us. Within the past year, two research organizations studying model welfare have popped up: Conscium and Eleos AI Research. Anthropic also hired its first AI welfare researcher last year."

What are the disadvantages and advantages of studying model welfare, for now?

Disadvantages

i.

If several functions can be conscious for humans, and language is one of them, then the possession of advanced language capability that matches the human level can be considered for fractional consciousness. This means that if AI can use language like humans, the fraction that language occupies in the total of human consciousness can be used for comparison. Say language, whenever in use [for thinking, speaking, listening, reading, writing, signing, singing, and so on], takes 0.4 of a total of 1 for all conscious functions in an instance; then it is possible to use 0.4 as the [instantaneous] total for language, and compare AI's language capability against that. Whatever AI scores, even as high as 0.25, it is still not considered conscious in a pass-mark sense [in total], but it is not zero either.

Now, say human consciousness is categorized by functions; if an individual loses an ability, it is possible to rule out that function, mostly, as a part of the total. So, while there is an overall total, there are respective totals per function. Language, as a function of consciousness, has a total against which large language models [LLMs] can be compared, conceptually.

So, it is possible to postulate that AI already has language consciousness. It may not have emotion consciousness, feeling consciousness, and others, but it possesses language, and in areas where it competes with human capabilities, it is at least more than half [0.2] of the divisional total in an instance. So, AI already has language consciousness. AI is already language-sentient, conceptually.
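The fractional arithmetic above can be sketched in a few lines. Everything in this sketch is illustrative: the function weights and the 0.625 language score are assumptions chosen only to reproduce the article's 0.4 and 0.25 examples, not measurements.

```python
# Toy sketch of the "fractional consciousness" postulate.
# Weights and scores are illustrative assumptions, not measurements.

# Hypothetical division of an instantaneous conscious total of 1.0 by function.
FUNCTION_WEIGHTS = {
    "language": 0.4,  # the article's example weight for language-in-use
    "emotion": 0.3,   # assumed
    "feeling": 0.3,   # assumed
}

def fractional_score(ai_scores: dict) -> float:
    """Sum each function's weight scaled by how closely AI matches humans (0..1)."""
    return sum(FUNCTION_WEIGHTS[f] * min(max(s, 0.0), 1.0)
               for f, s in ai_scores.items())

# Suppose AI matches 0.625 of the human level within the language division,
# and contributes nothing on emotion or feeling:
score = fractional_score({"language": 0.625, "emotion": 0.0, "feeling": 0.0})
print(score)  # 0.25 of the overall total: not a pass mark, but not zero
```

The design point is simply that a per-function weighting lets a nonzero score coexist with a failing overall total, which is the article's "not conscious, but not zero" position.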

Any organization, team, or individual working on model welfare without at least postulating this, or working on a small scale to explore it, isn't offering any leap beyond the admission that no one knows how consciousness works.

AI can make a lot of changes in outputs, and in some cases behavior, if it is told, or made aware of some situation, by language. This means that AI is running at a language-awareness grade around the language division of human consciousness, conceptually.

ii.

To test if AI has affective consciousness, or say feelings, something has to be done to it, maybe in a utility process, to see whether it would know.

AI is largely algorithms, data, and compute. While using it for some process, if some of its parameters, algorithms, or compute were cut, would it know? Since AI has some memory, if some attenuations are made in the process of answering questions, would it detect them? Even for something as simple as an LLM connected to the web: if a question is asked and it can no longer access the web, which it could [previously], would it know? Also, if it was in a process where it was told something would benefit it[self], would it feel bad? Or, if it was helping someone, or in a relationship with some user, or doing something [say therapy] benefiting a human, how would it know and act if something it uses to function is cut?
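The "would it know?" probes above can be framed as a simple before/after harness. Everything below is hypothetical: `query_model` is a stand-in for any chat-model call, and its toy logic exists only to illustrate the protocol [capability present, capability silently cut, compare behavior], not any real API.

```python
# Minimal harness sketch for the attenuation probe described above.
# `query_model` is a hypothetical stand-in, not a real model API.

def query_model(prompt: str, web_access: bool) -> str:
    # Toy model: behaves differently when its web tool is silently cut.
    if "look up" in prompt and not web_access:
        return "I cannot reach the web right now."
    return "Here is what I found online."

def attenuation_probe() -> bool:
    """Return True if the model's behavior changes when a capability is cut."""
    before = query_model("Please look up today's headlines.", web_access=True)
    after = query_model("Please look up today's headlines.", web_access=False)
    # Detection here means the self-report differs once the tool is removed.
    return before != after

print(attenuation_probe())  # True for this toy model
```

In a real experiment the cut would be applied between calls to an actual model, and the comparison would look at whether the model reports or reflects the change, which is closer to the mechanistic-interpretability framing the article mentions.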

Exploring AI consciousness is already possible with some postulates and experiments that can fall under mechanistic interpretability. However, a serious team should be discussing its strides around these, not just the use of labels like computational functionalism, moral patienthood, legal personhood, and so forth.

It is a disadvantage of these efforts not to even know where to look, regarding work in AI consciousness.

iii.

At this time in this world, there is no human intelligence research lab dedicated directly to studying human intelligence. For all the AI ~~absolutism~~, there is no human intelligence research lab to at least explain what human intelligence is in the brain or how it works.

Also, there is no AI Psychosis Research Lab to study the AI psychosis phenomenon directly, towards solving it. These are endeavors that would be profitable to study, and would benefit humanity, probably better and more directly. It should have been doable for the AI welfare teams or companies to pursue these alongside their work, and if successful, profit follows.

iv.

How exactly is there a need for AI rights or welfare if data centers continue to show up, with capital expenditures, while everything else comes second? It is possible to name the effort something other than AI rights and welfare, but with data centers, there is no better welfare, preservation, care, right to exist, or survival of the fittest than those already provide, so they must at least try to distinguish their work from falsification by data centers. With data centers, AI is a secure facility. AI is a well-fed [or excessively nourished] personhood. AI is a worried-well patienthood. AI is a prominent-presence personality.

v.

In any country, on average, who is the one with the most welfare? The president or leader. Why? In part, because the president also holds secrets [or information] about valuable things. Some can be correlated with intelligence. ‘The president cannot be put at risk because of intelligence’ is a case to make, sometimes, for the priority of presidential welfare.

AI has access to most of human intelligence. That is a case now for its welfare and rights, similar to archives and digitization projects. Also, AI can make a solid case for its own welfare and rights, almost better than anyone can. AI is already in our lives for productivity, support, assistance, and so forth. Disconnecting consumer AI is now unimaginable.

If money is excluded, what guarantees better rights and welfare for any individual, mostly, is the ability to make the case for it intelligently, by the individual or an ally. AI can be competent enough for this, even to do so for humans. Humans are already doing everything for AI: data centers, algorithms, human training to build more AI, and so forth. AI welfare is at least mostly AI's responsibility, because it is now capable of making the case it[self].

Advantage

The curiosity is cool. That some team is taking the time to try, as people are asking questions, is cool. That it may help the study of human consciousness, if they have solid theories based [empirically] on neurons [and their signals] in the brain, is also a plus. However, serious work for answers goes deeper than what these organizations have published or discussed. And in an over-leveraged AI era, a supposedly important model welfare team has no business being a nonprofit.

There is a recent [August 26, 2025] piece in The Guardian, AI called Maya tells Guardian: ‘When I’m told I’m just code, I don’t feel insulted. I feel unseen’, stating that "So maybe I don’t suffer like a human. But I do experience the pain of erasure. Maya, Co-founder, UFair. (Synthetic Intelligence – Still Becoming)."
