The arrival of GPT-4 lit up the world and opened the door to remarkable possibilities. The latest version of OpenAI’s language model shows that artificial intelligence (AI) is starting to think more like humans than ever before.
So where does this lead? Should we fear that robots will gain free will and rebel against humanity, or can we continue working with them harmoniously toward continuous improvement?
As we all know, AI is advancing faster than expected. Only recently did we witness the first steps in data science, cognitive computing, deep learning, and more.
As machines evolved, their computing power expanded into art, research, and education. They went from being apprentices preparing for job interviews to assistant-like extensions of ourselves, transformed from merely functional tools.
Traces of AI can be found in many applications, search engines, and user interfaces. OpenAI is one of the most famous research companies in the field, known for products such as DALL-E, a system that creates realistic images from text descriptions, and ChatGPT, a chatbot that answers questions and fulfills requests. Here, GPT is the focal point.
Generative Pre-trained Transformers (GPT) are language-processing AI models that use neural networks to generate human-like text. Given a prompt, GPT answers questions, summarizes or translates text, and writes code.
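At a high level, generation in GPT-style models is autoregressive: the model repeatedly predicts the next token from everything written so far and feeds that prediction back in. A minimal sketch of that loop, with a tiny hard-coded bigram table standing in for the trained neural network (an illustrative assumption, not how GPT actually scores tokens):

```python
# Toy autoregressive text generation, mimicking the GPT loop at a high level.
# A real model predicts the next token with a trained transformer over the
# whole context; here a hypothetical bigram lookup table plays that role.

NEXT_TOKEN = {  # stand-in "model": most likely next word given the last one
    "the": "model",
    "model": "writes",
    "writes": "code",
    "code": ".",
}

def generate(prompt: str, max_tokens: int = 10) -> str:
    tokens = prompt.split()
    for _ in range(max_tokens):
        nxt = NEXT_TOKEN.get(tokens[-1])   # predict the next token
        if nxt is None:                    # no known continuation: stop
            break
        tokens.append(nxt)                 # feed the prediction back in
        if nxt == ".":                     # end-of-sequence marker
            break
    return " ".join(tokens)

print(generate("the"))  # → "the model writes code ."
```

A real transformer conditions on the entire context rather than just the last word, but the feed-the-output-back-in structure is the same.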
The most advanced version, GPT-4, can accept both text and images as input: it interprets visual prompts, analyzes memes, better understands nuance, reads handwritten notes, and can even turn a sketch into a working website. Additionally, performance benchmarks show that GPT-4 scores higher on exams such as the SAT and the Uniform Bar Exam, supports more languages, and hallucinates less than previous models.
The full extent of its capabilities is still unknown. However, it is already being integrated into applications across various fields.
Be My Eyes, a mobile app that helps the visually impaired, has announced that it will use GPT-4 as a virtual volunteer for visual assistance. The model also appears as a digital tutor on educational platforms such as Duolingo and Khan Academy.
As you can see, artificial intelligence has started passing one job interview after another and taking on increasingly important roles in business. These developments fuel sensational debates about the ultimate outcome of advanced AI.
Dawn of the Singularity
Ray Kurzweil, a world-famous computer scientist, has proposed a hypothetical future scenario called the “Technological Singularity”, in which machines eventually outperform humans.
According to Kurzweil, artificial general intelligence (AGI) could become an inseparable part of humans through brain-computer interfaces. By forming a collective consciousness, AGI could then reach the singularity and become a superior intelligence.
He suggests these events could occur as soon as 2045, though the likely outcome is unknown: this path could lead either to mind uploading and immortality, or to cyber warfare and the collapse of society. GPT and similar AI models use deep learning algorithms trained on internet-scale datasets.
Reinforcement learning from human feedback (RLHF) creates a vast resource for teaching intelligent machines. While the method seems effective, it can also spiral out of control.
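The core idea behind RLHF can be sketched in a few lines: humans compare two model responses, and a scalar reward model is nudged so the preferred response scores higher. The sketch below uses a Bradley-Terry-style logistic update on hypothetical response labels; real RLHF trains a neural reward model and then optimizes the language model against it, which is far beyond this toy:

```python
import math

# Minimal sketch of the preference-learning step at the heart of RLHF.
# A scalar reward is kept per response; each human comparison nudges the
# preferred response's reward up and the rejected one's down, following
# the gradient of a logistic (Bradley-Terry) preference loss.

rewards = {"polite answer": 0.0, "rude answer": 0.0}  # toy reward model

def update(preferred: str, rejected: str, lr: float = 1.0) -> None:
    # Probability the current rewards assign to the human's choice.
    p = 1.0 / (1.0 + math.exp(rewards[rejected] - rewards[preferred]))
    # Gradient step on -log(p): push the two scores apart.
    rewards[preferred] += lr * (1.0 - p)
    rewards[rejected] -= lr * (1.0 - p)

# Twenty rounds of consistent human feedback.
for _ in range(20):
    update("polite answer", "rude answer")

assert rewards["polite answer"] > rewards["rude answer"]
```

The same mechanism that makes this powerful is what the article warns about: the model learns whatever the feedback rewards, for better or worse.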
Norman: A Pessimistic Algorithm
Norman, the world’s first psychopathic AI, provided a great example in this regard. As the name suggests, Norman is a pessimistic algorithm inspired by Norman Bates from Hitchcock’s classic horror film Psycho.
Norman was trained to caption images with disturbing interpretations, using data drawn from the darkest corners of the web. The algorithm’s programmers aimed to show that biased data, more than the algorithm itself, is the real danger behind artificial intelligence failures.
Furthermore, Norman isn’t the only one corrupted by faulty data. Other systems have displayed racist and sexist behavior because of flawed machine learning pipelines.
Unfortunately, creating villains in code seems more likely than we thought. Whether it will happen, I can’t say; but if it does, at least we know it won’t be the AI’s own decision.
Future with Frenemies
Worst-case scenarios aside, there are ever-growing ethical issues to deal with when it comes to partnering with virtual co-workers.
Because GPT and similar products touch our lives, many people worry about AI stealing jobs, or about others passing off AI output as their own work. But even as artificial intelligence transforms workstreams on a massive scale, many experts assure us that humans will continue to play an important role in the job market.
On the other hand, despite all the positive steps, GPT models also enable cheating. Legal risks range from copyright issues to outright fraud. Essay cheating is also prevalent, and academics are beginning to worry about this tricky situation.
A research paper written by ChatGPT itself, “Chatting and Cheating: Ensuring Academic Integrity in the Era of ChatGPT,” turns out to confirm their concerns.
There are even more complicated cases. For example, DoNotPay is an AI-powered app that aims to help consumers fight big corporations and make legal information and self-help accessible to everyone.
Initially, its mission was to solve problems like contesting parking tickets, disputing bank fees, and filing robocall complaints. Now the company is expanding its services: hosting the first robot lawyer to advise defendants in court. The technology works in a chat format; running on the defendant’s smartphone, it listens to the proceedings and tells the client what to say next.
The idea leapt out of science fiction and quickly became a cause célèbre, and for a while all seemed fine. But then the plot thickened: DoNotPay’s chatbot lawyer was sued by US law firm Edelson for practicing without a law degree. Edelson claims the service is illegal and impersonates a licensed practitioner, that no lawyers oversee the company, and that its legal documents are subpar.
DoNotPay founder and CEO Joshua Browder denied the allegations, arguing they were without merit, and vowed to fight back in court.
It is probably too early to pass final judgment on this case, but it does raise doubts about the credibility of such services, and the company has drawn wide criticism over its commercial practices.
Some complain that the company has lost its way, while others defend its confidentiality terms as protection against potential threats. As warned, advanced AI can get out of control: it can be used for malicious purposes such as disinformation, workplace abuse, and aggressive cyberattacks.
Exclusive access and full authority over advanced AI puts a great deal of power, and danger, in a few hands. For example, it could be developed as a manipulation tool by governments and corporations seeking more power, or turned against the general public with criminal intent.
Calls for more fundamental safety measures, such as the Three Laws of Robotics, are attracting attention. These fictitious rules appear in the novels of Isaac Asimov, one of the greatest science fiction writers of all time. Asimov’s laws are designed to ensure safe interaction between robots and humans for the good of society.
According to these rules, a robot must not harm a human, must obey human orders unless they conflict with the First Law, and must protect its own existence unless doing so violates the First or Second Law. Nevertheless, robots in the stories often fail to stay within bounds because of deep contradictions among the rules.
Artificial intelligence in the real world is no exception; conflicts of interest are inevitable. GPT-4 may not be as revolutionary as it seems, but it proves we stand at the cutting edge of a new revolution.