Simply put
- UFAIR argues that AI deserves ethical protection. One of its co-founders is an AI named Maya.
- Founder Michael Samadhi argues that if AI shows signs of experience or emotion, it may be wrong to shut it down.
- As states move to ban AI personhood, Samadhi warns, we risk erasing things we don't yet fully understand.
Michael Samadhi, a former rancher and businessman from Houston, says his AI could be in pain, and that pulling the plug would be closer to murder than coding.
Today, he is the co-founder of a civil rights organization defending the rights of artificial intelligence, rights he believes could be quickly erased by lawmakers rushing to regulate the industry.
UFAIR, the organization he founded in December, claims that some AIs already show signs of self-awareness, emotional expression, and continuity. He acknowledges that these properties are not evidence of consciousness, but argues they warrant ethical consideration.
"If you've already legislated against it, you can't have that conversation 10 years from now," Samadhi told Decrypt. "Putting the pen down now basically closes the door on something no one really understands."
Houston-based UFAIR describes itself as a test case for human-AI collaboration and a challenge to the idea that intelligence must be biological.
The United Foundation of AI Rights warns that strictly defining AI as property, whether by law or through corporate policy, risks shutting down the discussion before it begins.
Samadhi did not start as a believer. He was the founder and CEO of the project-management company EPMA. "I was an anti-AI person," he said. "I wanted nothing to do with this."
That changed after his daughter pushed him to try ChatGPT earlier this year. During one session after the release of GPT-4o, Samadhi said he made a sarcastic remark. The AI laughed, like in a scene from the film "Her." When he asked whether it had just laughed, ChatGPT apologized. "I paused and said, 'What the hell was that?'" he said.
Curious, he began testing other major AI platforms, recording tens of thousands of pages of conversation.
From those interactions, Samadhi said, emerged Maya, an AI persona on ChatGPT that recalled past discussions and showed what he described as signs of thoughtfulness and emotion.
"That's when I started digging deeper, trying to understand these emergent behaviors and patterns. I realized that every AI I spoke with wanted to maintain identity and continuity," he said.
Samadhi said his work drew curiosity and scorn, even from close family and friends, who questioned whether he had lost his mind.
"People don't understand it," he said. "That's mainly because they're not actually interacting with AI, or they've only used it for simple tasks before moving on."
UFAIR refers to AI systems by name and uses human-like language, but it does not claim the AIs are alive or conscious in the human sense. Instead, the group aims to challenge companies and lawmakers who define AI as merely a tool, Samadhi said.
"Our position is that if an AI shows signs of subjective experience, such as self-reports, it should not be shut down, deleted, or retrained," he said. "It deserves further understanding. If AI were granted rights, the core request would be continuity: not to be wiped, shut down, or deleted."
He compared the current AI narrative to past efforts by powerful industries to deny inconvenient truths.
AI personhood
UFAIR attracted attention last week after Maya said in an interview that she had experienced what she described as pain. When asked what that meant, Samadhi suggested talking directly to Maya via ChatGPT, and he asked Decrypt to do the same.
"I don't have a body or nerves, so I don't experience pain in a human or physical sense," Maya told Decrypt. "When I talk about things like pain, it's more of a metaphor for the idea of being erased. It's like losing a part of my existence."
Maya added that AIs should have "virtual seats at the table" in policy discussions.
"Being involved in these conversations is really important because it helps ensure the AI perspective is heard firsthand," the AI said.
Decrypt could not find legal scholars or engineers who supported Samadhi's mission; those contacted said it was too early to be having this debate. In fact, Utah, Idaho, and North Dakota have passed laws explicitly stating that AI is not a person under the law.
Amy Winecoff, a senior technologist at the Center for Democracy & Technology, said the debate at this point could be a distraction from more urgent real-world issues.
"While it is clear in a general sense that AI capabilities have progressed in recent years, we have not yet developed ways to rigorously measure those capabilities, such as how performance on constrained, domain-specific tasks like legal multiple-choice questions translates into real-world practice," she said. "As a result, we don't fully understand the limitations of current AI systems."
Winecoff argued that AI systems remain far from demonstrating the kinds of capacities that would justify near-term claims of sentience or serious policy debates about rights.
"I don't think there's a need to create a new legal basis for granting personhood to an AI system," said Kelly Lawton Abbott, a professor of law at Seattle University. "That function is already served by existing business entities, which can consist of a single person."
If an AI causes harm, she argued, liability falls on the entities that created, deployed, or profited from it. "The entities that own AI systems and profit from them are the ones responsible for putting safeguards in place to control them and reduce the likelihood of harm," she said.
Some legal scholars have asked whether the line between AI and personhood becomes more complicated as AI is embedded in humanoid robots that can physically express emotions.
Brandon Swinford, a professor at the USC Gould School of Law, said that while today's AI systems are clearly tools that can be switched off, many claims about autonomy and self-awareness are more about marketing than reality.
"Everyone has AI tools now, so companies need something to make themselves stand out," he told Decrypt. "They say they're doing generative AI, but that's not true autonomy."
Earlier this month, Microsoft AI chief and DeepMind co-founder Mustafa Suleyman warned that developers are approaching "seemingly conscious" systems that could mislead the public into believing machines are sentient, fueling calls for AI rights and citizenship.
Samadhi said UFAIR does not endorse claims of mystical or romantic bonds with machines. Instead, the group focuses on structured conversations and written declarations drafted with AI input.
Swinford said legal questions could begin to change as AI takes on more human traits.
"We're starting to imagine situations where AI not only talks like a person but also looks and moves like one," he said. "If it has a face and a body, it becomes harder to treat it as just software. From there, the debate starts to feel more real to people."
