
Written by Golamdi Radha Krishna Marti
Today, AI has become a buzzword in corporate corridors. Executives are rushing to integrate AI into their businesses to ensure continuity, fearing that otherwise they may become extinct. AI has indeed become the norm in the business world.
Artificial intelligence is nothing more than the ability of machines to perform cognitive functions associated with the human mind, such as perception, reasoning, learning, interacting with the environment, problem solving, and even exercising creativity.
The convergence of algorithmic advances, the data explosion, and massive increases in computing power and storage has made AI a reality in recent years. Today, most machine learning algorithms learn to detect patterns and make predictions rather than relying on explicit programming instructions. These algorithms can also improve their effectiveness by adapting to new data and experience over time. Simply put, machine learning is the process of automatically discovering patterns in data; once patterns are discovered, they are used to make predictions. It is this ML process that ultimately leads us to AI.
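The learn-then-predict loop described above can be sketched in a few lines. This is a toy illustration, not any real ML library: it "discovers" a linear pattern in made-up example data via least squares, then uses that pattern to predict, rather than being told the rule explicitly.

```python
# Minimal sketch of "discover a pattern, then predict" (toy data, toy model).

def fit_line(xs, ys):
    """Discover a linear pattern y ~ a*x + b from example data (least squares)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

def predict(model, x):
    """Use the discovered pattern to make a prediction for new input."""
    a, b = model
    return a * x + b

# "Training": the algorithm infers the hidden pattern (here y = 2x) from data.
model = fit_line([1, 2, 3, 4], [2, 4, 6, 8])
print(predict(model, 5))  # → 10.0
```

Real systems use far richer models and far more data, but the division of labor is the same: the programmer supplies examples and a learning procedure, and the pattern itself comes from the data.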
In 2013, Oxford University researchers published an analysis of AI which found that jobs of a repetitive, unimaginative nature, such as telemarketers, bookkeepers, and computer support specialists, were the most likely to be replaced by robots, bots, and AI in the coming years.
But in 2022, it became clear that AI products could do exactly what the Oxford researchers thought was almost impossible: mimic creativity. AI is now adept at tasks that were once considered the prerogative of humans. JP Morgan says commercial loan agreements can be reviewed in seconds by AI. Microsoft has reportedly laid off dozens of journalists at MSN and replaced them with AI that scans and processes content.
Large language models such as GPT-4, which stands for Generative Pre-trained Transformer, answer questions and write articles like humans do, with astonishing flair and precision. Because it has roughly a trillion parameters and has been trained on such a large dataset, it can learn and understand the complex patterns of natural language far better than ChatGPT. It can also handle both text-based and image-based queries. In other words, it has turned out to be nearly as good as human intelligence.
The emergence of various cognitive technologies, covering facial recognition, emotion recognition, speech recognition, natural language processing, and more, has caused a paradigm shift in how businesses operate today. For example, ELSAs, MIT-developed AI bots that act as psychotherapy counselors, are a likely replacement for cognitive-behavioral therapists!
Alongside these benefits, AI also poses many ethical risks to businesses. For example, an AI tool used by many health systems to identify high-risk patients in need of follow-up care flagged only 18% of Black patients, whereas the actual figure was 46% of sick patients. The reason for such problems is the underlying historical bias in the data used to train the machine. And since AI is built to operate at scale, the impact of such biases can be very large.
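To make the disparity concrete, here is a hedged sketch of how such an identification gap can be measured. The cohort below is entirely fabricated, sized only to mirror the article's 18% vs 46% figures; it simply compares the share of patients a model flags against the share who are truly high-risk.

```python
# Toy illustration of measuring the bias described above.
# All records are made up to mirror the article's 18% vs 46% figures.

def share(records, key):
    """Fraction of records where `key` is true."""
    return sum(1 for r in records if r[key]) / len(records)

# Hypothetical cohort of 100 Black patients: 46 are truly high-risk,
# but the model flags only 18 of them for follow-up care.
cohort = [{"truly_high_risk": i < 46, "flagged_by_model": i < 18}
          for i in range(100)]

print(share(cohort, "flagged_by_model"))  # → 0.18
print(share(cohort, "truly_high_risk"))   # → 0.46
```

A gap of this size between the flagged rate and the true rate, concentrated in one group, is exactly the kind of scaled-up bias the paragraph warns about.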
Against this backdrop, on March 29, a group of 1,300 AI experts signed an open letter to AI labs asking them to pause the development of AI systems more powerful than GPT-4, warning that not even the creators of the recently announced large language models (LLMs) can understand, predict, or reliably control them.
The letter, published by the Future of Life Institute (FLI), a non-profit organization committed to the responsible use of AI, further clarified that it does not call for "a pause in AI development in general," but rather "a step back from the perilous race to ever-larger, unpredictable black-box models with new capabilities."
The letter also highlights AI experts’ concerns about the unintended consequences of AI. First, now that newly launched AI systems are becoming competitive with humans, should we allow our information channels to be flooded with misinformation and bias? Second, should we develop non-human minds that could eventually replace us, thereby risking the loss of control of our civilization? The signatories strongly believe that “powerful AI systems should be developed only after we are confident that their effects will be positive and the risks manageable.”
Even OpenAI co-founder Sam Altman observes that artificial general intelligence (AGI) technologies carry “serious risks of misuse, catastrophic mishaps, and social disruption,” and therefore calls for a “gradual transition” that “gives people, policy makers and institutions time to understand what is going on, experience the advantages and disadvantages of these systems personally, adapt the economy and introduce regulation.”
Given these known and unknown challenges, as the letter emphasizes, AI labs and experts should work together to “jointly develop, implement and audit safety protocols for AI design and development,” overseen “by independent outside experts.” And a pause is clearly needed to figure out the pros and cons of AGI and its adoption by companies, and to chart a safe and ethical roadmap.
Boroji.com
