About AI — I. Stay Updated | By Nirmal Thrideep | June 2023


Below is a version of a presentation I gave at a research forum conducted by the Department of Comparative Literature at EFLU. This is a subject I’ve been wanting to write about for months, but this opportunity finally gave me the impetus to condense my thoughts into a few points. It’s been a few weeks since the presentation, and now I feel inspired to post it here.

A few years ago, I wrote a paper looking at a collection of AI characters in mainstream movies through a posthumanist lens. I wanted to see how AI was perceived in popular culture and how it was portrayed on screen. I analyzed these characters by three main criteria: physical form, narrative role, and ability to express human-like emotions. Posthumanism struck me as an interesting approach to these characters and their stories, because it breaks down the barriers that keep such characters from being considered human, especially in stories about their alienation and their struggle against it. It was while researching this paper that I first encountered the term “technological singularity”.

The technological singularity is a term often used by posthumanists and proponents of artificial intelligence to describe a theoretical state, possibly reached in the near future, in which the convergence of several powerful technologies fundamentally changes the reality we live in. Most notably, this tipping point would be marked by leaps and bounds in computing, with AI surpassing human intelligence and erasing the boundaries between AI and humans. Although extremely compelling, the singularity remains a theoretical point in time, and a highly subjective one, as there is no real consensus on what a “fundamentally changed reality” looks like. Yet in the last year or so, many people have been caught up in rapid advances in AI without even realizing it. Applications like ChatGPT and Midjourney have gone viral, gaining millions of users in just a few months. Artists and writers seem to be panicking at what these programs portend. Every day, the press publishes stories about AI replacing jobs. Has the technological singularity arrived?

AI and its applications are too broad a topic to cover all at once, so today I want to focus on what is known as generative AI: what it does, and what it means for the humanities.

First, what is AI? “Artificial intelligence is a broad branch of computer science concerned with building smart machines that can perform tasks that normally require human intelligence.” The term artificial intelligence is a bit problematic. Definitions like the one above that invoke “human intelligence” require us to revisit what “intelligence” and “human” actually mean, and some of the resulting definitions can be very exclusionary. Note also that “intelligence” here is usually measured by the accuracy of the output rather than by emotional intelligence or any actual simulation of human thought processes. These programs can follow directions and perform, in full or in part, some tasks previously performed by humans. So calling these technologies “intelligent” might be a bit of a misnomer.

Machine learning (ML) is a more accurate term for what is most often called AI. ML is a branch of computer science that aims to “use data and algorithms to mimic the way humans learn”, allowing computers to “automatically learn from past data” without the programmer giving them explicit, individual instructions. However, AI is the catchier term and is often used to market ML products: technology company CEOs may tout the use of analytics and ML, but marketers are more likely to brand these as AI. Machine learning models can now learn and perform a great many tasks thanks to the development of related techniques such as neural networks, deep learning, self-supervised learning, and reinforcement learning. For a better understanding of these terms and concepts, I thought this article was a good introduction.
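To make the distinction concrete, here is a minimal sketch of my own (not from the article linked above). The hand-written rule follows the programmer’s explicit instruction; the “learned” check derives its behaviour purely from labelled examples. The messages and the keyword-counting scheme are invented purely for illustration.

```python
# Explicit instruction: a programmer hard-codes the decision.
def rule_based_spam_check(message: str) -> bool:
    return "free money" in message.lower()

# Learning from data: the program derives its own decision from examples.
training_data = [
    ("claim your free money now", "spam"),
    ("free money waiting for you", "spam"),
    ("meeting moved to 3pm", "not spam"),
    ("draft of the paper attached", "not spam"),
]

def learn_keyword_scores(examples):
    """Count how often each word appears in spam vs. non-spam messages."""
    scores = {}
    for text, label in examples:
        for word in text.lower().split():
            spam, ham = scores.get(word, (0, 0))
            scores[word] = (spam + (label == "spam"), ham + (label != "spam"))
    return scores

def learned_spam_check(message: str, scores) -> bool:
    """Classify by summing the per-word evidence learned from the data."""
    spam_votes = ham_votes = 0
    for word in message.lower().split():
        spam, ham = scores.get(word, (0, 0))
        spam_votes += spam
        ham_votes += ham
    return spam_votes > ham_votes

scores = learn_keyword_scores(training_data)
print(rule_based_spam_check("free money inside"))       # True, by the programmer's rule
print(learned_spam_check("free money inside", scores))  # True, learned from the examples
```

The point is simply that in the second case nobody wrote the words “free money” into the program; the behaviour emerged from the data.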

The kind of AI found in science fiction, thinking human-like robots, is more accurately called artificial general intelligence (AGI), described as “autonomous systems that exceed human capabilities for most economically valuable tasks”. A system that goes beyond its initial instructions and can create something new on its own would be an AGI. AGI remains highly theoretical, and nothing close to it has been reliably achieved, despite news of researchers noting “sparks” of it in GPT-4.

ChatGPT, where GPT stands for Generative Pre-trained Transformer, is a natural language processing (NLP) tool created by a company called OpenAI. It takes the GPT technology OpenAI developed and offers it as a conversational chatbot that can answer questions and perform a variety of similar tasks. Since its launch in November 2022, it has attracted a great deal of public attention and has been at the center of discussions about AI technology and its future applications. ChatGPT is also an example of the large language model (LLM), the breakthrough that enables the current level of sophistication. For more information on LLMs or transformers, the article I linked earlier is a great resource.
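For anyone curious about the models behind the chat interface, OpenAI also exposes them through an API. The snippet below is a rough sketch, assuming the openai Python package roughly as it worked at the time of writing (the pre-1.0 interface); the API key, model name, and prompt are placeholders, and the interface may well have changed since.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; never hard-code a real key

# Send a single user message to the chat model and print its reply.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarise the idea of the technological singularity in two sentences."},
    ],
)

print(response["choices"][0]["message"]["content"])
```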

Now, back to the question of whether we have reached the technological singularity. The short answer is no. However, we are certainly at a major tipping point in technological progress, and most observers cannot say with certainty whether this is the peak or just the beginning of an upward curve. The technologies and applications we are talking about are advancing at a pace never seen before, and research on these topics becomes obsolete before it can even gain momentum.

Tech giants such as Google and Microsoft are racing to incorporate AI into all of their products, and Google executives are reportedly afraid that the new technology will render the company obsolete. Exciting stuff! But much of it is just hype, a way for these brands to keep garnering public attention. Anyone who has used ChatGPT for more than a few minutes has likely encountered factual mistakes, logical mistakes, an inability to perform certain simple tasks, or outright fabricated information. Part of that failure comes down to the fact that its training data does not extend past September 2021, but mostly it is because these programs are probabilistic guessing machines. ChatGPT is not a search engine, a DJ, a friend, or a teacher. What it can do is take the words the user provides and guess which word should come next, and even that can fail on occasion.
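To get a feel for what “probabilistic guessing machine” means, here is a deliberately crude toy of my own, not taken from any LLM codebase: it counts which word follows which in a scrap of text, then strings together likely continuations. Real LLMs use neural networks trained on vastly more data, but the output is still, at bottom, a guess about the next word.

```python
import random
from collections import defaultdict

text = (
    "the singularity is near the singularity is a theory "
    "the future is uncertain the future is unwritten"
)

# Build a table of which words follow which in the sample text.
followers = defaultdict(list)
words = text.split()
for current, nxt in zip(words, words[1:]):
    followers[current].append(nxt)

def guess_next(word: str) -> str:
    """Pick a plausible next word; guesses can sound fluent and still be wrong."""
    options = followers.get(word)
    return random.choice(options) if options else "..."

# Generate a short continuation, one guess at a time.
word = "the"
sentence = [word]
for _ in range(6):
    word = guess_next(word)
    sentence.append(word)
print(" ".join(sentence))
```

Run it a few times and it will produce different, sometimes grammatical, sometimes nonsensical continuations, which is a fair caricature of the failure mode described above.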

So why are we, or at least many of us, so impressed with the capabilities of these machines? I will admit that both ChatGPT and image generators like Midjourney fascinated me greatly. A lot of it has to do with our expectations. Approaching a chatbot with expectations formed by our previous experience of the internet does not prepare us for this level of NLP capability. How could our informal text input be parsed and responded to so naturally? We had very low expectations of ChatGPT, and so we were impressed. And because we hadn’t yet used it in any real work process, the holes weren’t immediately obvious. I remember discovering Google Images for the first time in second grade: it felt like a great resource with endless visual possibilities. Last month, I spent half an hour searching Google Images for the perfect image of a black flag. In the same way, ChatGPT’s shortcomings became apparent once our expectations adjusted and our needs went beyond the fun of having a robot write poetry.

I would like to pause here and dispel some of the fear-mongering around this new technology, especially around the “creativity” of LLMs and image generators, without undermining its potential.

It’s not AI, ML, or automation that we should fear; it’s those who want to use AI purely to increase their profit margins. When it comes to job replacement, companies will go all out to cut costs by using ML programs instead of their employees, and they will soon find out why that was never a feasible option.

These programs were trained on the internet and on collections of older texts and images. In a way, their training data is the culmination of all human work, art, and knowledge creation. People and companies trying to replace all human effort with this kind of automation will eventually run out of data to train their models on and will end up cannibalizing the output of the AI models themselves, a surefire way to rapidly degrade the output of all future models.

And if we see AI/ML as such a culmination, we must also see it as a channel, a conduit, and a tool for further human creation. AI cannot actually create anything on its own, but it is very powerful when used to support, enhance, and complement human work. It has the power to automate or speed up many of the small tasks that take up a lot of our time: brainstorming, drafting, and so on. AI image generators, for example, can greatly speed up repetitive and tedious processes in the already overworked and underpaid animation industry. Corridor Crew’s video on using AI for animation, and some of the reactions to it, are very interesting to check out on this topic.

And while these technologies play a role in democratizing the production of art and much else, human skill and knowledge will remain a very important requirement for doing good work. Not only do we need to keep creating new content that can train future models, we also need to validate the output these AI tools produce. ChatGPT can generate programming code in a way not found in previous applications, but it takes the knowledge of a trained expert to know what to ask for, to validate the generated code, and to implement it properly. As for writing, these tools often commit simple factual errors or invent citations, all of which only a skilled editor can spot.

At work, we were recently shown a great example of how integrating ChatGPT into existing technology plays out. The AI tool is mainly used to summarize complex visual and mathematical data, a task that until now took engineers far too long, but it quickly became clear that every output has to be verified by a human before it can be published.

There is a term in the AI/ML field, “human in the loop”, which refers to models and systems that require human interaction to complete a task. It is most often associated with autonomous weapons, which can lead to catastrophe if not monitored, but I feel it is just as relevant to validation. As AI tools become more prevalent in the workplace, mistakes can have far-reaching consequences. I think keeping humans in the loop is the right approach for us going forward. Job skill sets will undoubtedly evolve, but these tools cannot replace the human beings involved.
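As a sketch of what keeping a human in the loop can look like in practice, here is a toy workflow, entirely my own invention: the model drafts, and nothing is published until a person explicitly approves it. The generate_summary and publish functions are placeholders standing in for whatever AI tool and publishing step an organisation actually uses.

```python
def generate_summary(data: str) -> str:
    """Placeholder for an AI-generated draft (e.g. an LLM call)."""
    return f"Auto-generated summary of: {data[:40]}..."

def publish(text: str) -> None:
    """Placeholder for the step that makes the text public."""
    print("PUBLISHED:", text)

def human_in_the_loop(data: str) -> None:
    # The model produces a draft, but a person gates the final action.
    draft = generate_summary(data)
    print("Draft for review:\n", draft)
    verdict = input("Approve for publication? [y/N] ").strip().lower()
    if verdict == "y":
        publish(draft)
    else:
        print("Draft rejected; sent back for human revision.")

if __name__ == "__main__":
    human_in_the_loop("Quarterly sensor readings, anomaly counts, and trend lines")
```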

AI and ML tools also raise many questions about ethics, legality, and culpability, and none of them have real answers at this point. As with all research in this area, the technology is advancing too quickly for ethics and law to keep pace, so it is only through discussion and deliberation that we can get anywhere. The humanities have an important role to play in these deliberations; we cannot opt out of them out of fear or contempt.

Right now, we need a balanced approach to AI: a balance between creativity and automation, between excitement and hesitation. Stay informed about the technology. Stay informed about the humanities.

One of the images I generated using Midjourney when I started getting hooked.


