Gist
- It wasn’t me, it was the AI: the importance of user accountability in AI interactions and the need to avoid scapegoating the technology.
- Just the facts, ma’am: the importance of fact-checking and corroborating information from AI sources before acting on it.
- Responsibility meets ethics: users should educate themselves on the capabilities and limitations of AI and promote the responsible, ethical use of the technology.
Sometimes the headline of a news article alone tells us that the person who wrote it, or agreed to publish it, either didn’t bother to do basic homework or was mainly concerned with getting people to click on the article and pass it along without reading it (often it’s both).
One such article recently appeared in The Washington Post, whose headline read:
Having been reading articles like this for months, ever since ChatGPT went mainstream in November 2022, I fully understand what the article is selling, and my immediate response to the headline was to ask: is ChatGPT really the first and only source, human or otherwise, known for the occasional hoax, for claims that are wild and decidedly false, for half-truths, and for subtle deceptions?
Of course the answer is no.
What is the difference between AI and humans?
So let’s go back to basics and ask: what’s the difference between an AI telling you something and a human telling you the same thing? If you’re an adult, you’ll double-check, even triple-check, before accepting what’s being said, especially when the information has real consequences for what you are trying to do.
Consider Source Credibility
First, if you are such an adult, you would consider the credibility of the source, which may or may not be reliable. When I asked ChatGPT about myself, it hallucinated amazingly: it had me working at Oxford, a onetime member of SRI, the founder of several companies I had never personally heard of, a millionaire (in my dreams!), and so on.
There was a lot of truth in what it came back with about me, but alongside the garden-variety truths it also produced some howlers. The software had no ‘concerns’ about any of it, and that was the whole transaction: I gave it a series of words as input, and what it gave me back was a bunch of text. Nothing more. Only those who are naive about such technology will perceive its output as something more, let alone as gospel truth.
Corroborate the Information
Second, after considering the credibility of the source, you, as an adult, take the next step of corroborating what was said against other sources, perhaps a hard-copy book you own or something like that.
Once you’ve done all of this and feel you’re on solid footing (this is why editors exist in journalism), act on your information. In journalism you would also name the people you quoted, or at least describe where your information came from.
So no, these AIs and the companies that created them are not responsible, whatever many people claim. The blame lies with those who use the output without doing good old-fashioned due diligence. Whether they spread false allegations or defamation in bad faith or out of lazy negligence, they are the ones responsible.
Also, if you are simply clueless about how AI works, you need to get curious and educate yourself. Ignorance stopped being an excuse long ago: we now have free and easy access to powerful search engines and smartphones, to email, social media, videos, podcasts, and live discussion forums like Clubhouse and Discord.
The Next Step in Generative AI: Frivolous Litigation
No doubt there will be lawsuits sooner or later (if some have not already been filed). But these lawsuits rest on the basic illusion that the companies building these AIs have claimed their AIs to be trustworthy dispensers of truth, when in fact they have never made any such claim, as anyone who has used the tools for any length of time knows. Such suits will go nowhere once it becomes clear that the plaintiffs skipped their own due diligence. Still, given how passively we tend to receive what we hear or read (or are kept from hearing or reading) from authorities, real or perceived, the impulse behind them is easy to understand.
Today, if, for example, a professor tells a big lie, and the students accept that lie and act on it, and something bad happens, most of us would not hesitate to say that the professor, not the student who acted on the lie, is responsible for the damage caused. Why? Because we operate in an epistemological ecosystem that trusts people such as professors, experts, and gurus, and by lying to us they betray that trust. When a smart-sounding AI comes along, we pull it into that same epistemic ecosystem and expect it to behave according to its rules.
It’s not the AI’s fault
But the solution here is not to slap a red flag on ChatGPT and leave the rest of the expertise-buying-and-selling ecosystem alone. Rather, we should ask whether blindly trusting anyone, especially professors and other breeders of a culture of deference, was ever a good thing in the first place.
Yes, professors have considered opinions, perhaps even knowledge, and helpful remarks to offer, but that doesn’t make them immune to scrutiny. And if we agree that we shouldn’t accept even what they tell us as gospel truth, on what grounds do we point the finger at a piece of software built by some company?
As my hip hop friends say: