The development of artificial intelligence (AI)-driven platforms, such as OpenAI's ChatGPT, has drawn concern from universities as students turn to the technology to avoid critical thinking and research in producing academic output. A strong response is required.
This problem was one of many AI topics covered by Professor Dan Remenyi, a 1972 graduate of the University of Cape Town (UCT), in his summer school extension series webinar, "Dangerous Brands – AI and Chatbots".
Remenyi also asked the audience to think carefully about where this technology will lead society.
Now living in the UK, he has been involved in the information and communication technology (ICT) industry for decades, working with companies, consultants and leading research universities. The author of several books on ICT, Remenyi is currently an emeritus professor at two schools of computing.
The potential of AI is noteworthy, but caution is warranted, he said.
“Some compare chatbot technology to a dancing bear.”
From an academic point of view, current AI technology cannot produce accurate, original, unbiased, non-repetitive and academically robust output backed by plausible references. It fails on all these basics.
"Some people liken chatbot technology to a dancing bear. The wonder is not that the technology works well, but that it works at all," Remenyi said.
But are universities actively discussing and seeking to understand the power and limitations of AI and chatbots, and helping students to do the same?
On a broader scale, examining the potentially dangerous issues arising from AI chatbots, the main threats are security, misinformation and propaganda, data privacy, emotional manipulation, job displacement, and loss of control.
"What many of the dangers of AI chatbots have in common is that bad actors can use this technology for malicious purposes," he added. "Ultimately, the most powerful ways we can protect ourselves are awareness, education, and a healthy attitude of curiosity (and perhaps scepticism)."
Huge investment
“The adoption rate of this technology is staggering,” he said.
“When OpenAI made ChatGPT free at the end of November 2022, it had over 1 million users in a week.”
The technology has already attracted more than US$100 billion in investment and has generated extraordinary publicity. Critics liken the competition among tech giants for market share in this space to an “invisible arms race.”
"There is a lot of money at stake. The value of the global chatbot market is projected to reach US$10.08 billion by 2026. The amounts of money involved are staggering and unprecedented."
Existential threat
But the darker side of this arms race involves a larger ethical and social question: the existential threat of AI to humanity.
"AI chatbots have yet to show evidence of artificial general intelligence [independent reasoning], but key experts in the ICT field believe it is inevitable," Remenyi said.
Quoting the saying popularised by former British Prime Minister Benjamin Disraeli about "lies, damned lies and statistics", Remenyi referred to current thinking among AI software engineers on the most difficult task of all: making plausible predictions. He said that, statistically, 50% of engineers believe there is a 20% chance that AI will cause human extinction.
“There is definitely some justification for thinking about what a pervasive AI future might look like.”
In fact, Max Tegmark, a professor of physics at the Massachusetts Institute of Technology (MIT), has suggested that there is a 50% chance that AI will destroy humanity, he said.
"Considering this, and a public call in March by more than 1,500 AI experts to pause further AI development following the launch of ChatGPT-4, there is certainly justification for contemplating what an AI-pervasive future might look like."
But it is hard to know what a moratorium should look like. It is also unclear how AI might destroy humanity.
Others argue that, thanks to artificial intelligence, a human-made alien life form, one that did not originate from natural life processes, will be discovered on Earth in the near future.
"This hints at the hope of some computer experts that an extraordinary combination of microprocessor technology and ingenious software engineering will enable the creation of fully conscious entities, independent of human intervention."
It is important never to say anything is impossible, but Remenyi's view is that this is unlikely to happen within the next few decades.
His view is that the more real danger posed by AI is not the "extreme sci-fi scenario" but the danger of undermining truth, trust and ethics.
"The use of AI, even simple and imperfect applications like the 'dancing bear' AI chatbots, can be dangerous to society. And it is this less dramatic, more real kind of danger that we should be concerned about."
