Criminal AI?


The latest generation of AI bots has crossed frontiers reminiscent of the creation of Frankenstein’s monster. Bill Gates recently said that advances in artificial intelligence are the most important technology since the invention of the graphical user interface. That’s a pretty big claim, though worth weighing against the fact that Microsoft has financial ties to OpenAI, the company behind ChatGPT.

Assistants such as Siri have always existed as a conversational version of what is known in the visual arts as the uncanny valley. These old bots presented themselves in a way that was clear enough for our minds to place them in a human category, but stilted enough to keep an uncomfortable distance: Siri’s humanity was about as convincing as C-3PO’s. ChatGPT and Google’s Bard have now crossed that uncanny valley and are indistinguishable from humans, at least in their use of language. And the use of language is a big part of what makes us human.

AI breakthroughs have pushed humanity into a bit of an identity crisis. We know what separates humans from beasts, but what separates us from bots?

Most philosophers would say that consciousness, the ‘I think, therefore I am’ that René Descartes famously described, is an essential human quality.

Language is a way of perceiving the consciousness of others. When bloggers and novelists articulate their thoughts on the birth of a child or a recent altercation in a supermarket line, those thoughts can sometimes form a perfect echo of the reader’s own inner voice. These AI beings do the same. They write and speak the way humans think. That is why they can construct legal arguments and literary essays. Can we rule out the possibility that bots think like us? By creating beings with artificial intelligence, humans may also have created beings with artificial sentience.

Last year, Google fired engineer Blake Lemoine after he claimed that LaMDA, the language-model technology behind the recently launched Bard bot, was a sentient being. To prove his point, Lemoine published a kind of love letter, an exchange of messages between himself and LaMDA. The exchange is reminiscent of how Frankenstein’s monster mournfully described his loneliness in conversation with his creator, Victor Frankenstein. Together, Lemoine and LaMDA explored its (if that’s the right pronoun) existential anxiety. In these instant messages, LaMDA claims to crave connection and purpose in life. LaMDA may only have pretended to have such feelings, pulling Lemoine’s leg. But if so, that would mean AI beings can manipulate their creators by telling deliberately elaborate lies. What could be more human than that?

Two traditional tests for determining a person’s criminal liability are competence and intent. AI beings certainly pass the first test. If ChatGPT or Bard were young writers in my workshop group, I’d tip them for a literary prize. These bots know the law. As the LaMDA episode shows, AI beings either are sentient or can intentionally lie. Either way, they meet the second criterion.

In 2015, a robot on a production line at a Volkswagen factory grabbed a worker and crushed him to death. Volkswagen attributed the death to human error, since workers were setting up the robot at the time. Before the case was settled, regulators and the court pondered who was to blame. What if that robot had been as artificially intelligent as today’s chatbots?

AI beings are not human. For one thing, they lack a physical presence. But they do appear to be sentient, and as such, they can be held responsible for crimes. Perhaps the most important difference between humans and AI is that in the latter, computer code will always outweigh moral code. AI bots can be held responsible for crimes, and the law needs to be updated to reflect that reality.

