OpenAI has major conflicts of interest in AI regulation demands: Zerodha CTO Kailash Nadh



Artificial intelligence (AI)-based conversational chatbots such as ChatGPT and Bard have grown tremendously in popularity over the past six months. But their rise has also raised many questions: about AI regulation, about the limited transparency into the inner workings of the "black box" models that generate human-like responses, and about the potential impact on creative fields such as writing, music, film and digital art.

Zerodha's chief technology officer, Kailash Nadh, describes himself as an "absurdist" with a "dark view of the future", but he believes the rise of AI warrants a re-examination of concepts such as creativity, originality and, most importantly, what it means to be human. He also questioned ChatGPT creator OpenAI's calls for tighter regulation of the emerging sector.

In the second half of this extensive interview, Mr. Nadh delved into these topics in detail. You can watch the first part of the interview, focusing on the potential threats of AI to jobs and existing socio-economic structures, here. Edited excerpts:

OpenAI has called for AI regulation, and its chief executive officer Sam Altman has written a blog post suggesting that an international body similar to the International Atomic Energy Agency (IAEA) could track computing resources to monitor the development of superintelligent AI. Do you think that's a good idea?

I don't think it's even possible. This is nothing like building a nuclear weapon. There are enormous physical limits to producing nuclear weapons, and the physical footprint can reveal whether a country is sourcing the materials to build one. But anyone with a bunch of servers in a basement and enough money can build an AI, and no nation can know whether someone else is doing so. A small, simple GPT can be created at home in a few hundred lines of code. The only real limitations today are data and the availability of server farms, and those are no obstacle for state actors. So I don't think it makes sense for countries to sign international agreements not to develop AI: good actors may stop, but bad or dishonest actors will keep building. Also, superintelligence does not exist yet. It may happen tomorrow, or it may never happen; we don't know. And coming from OpenAI, the proposal is a little suspect.

OpenAI argues that strict regulation is necessary and has even proposed a licensing system for AI development. Do you think its call for regulation goes too far?

Most voices out there are criticizing OpenAI's stance on the grounds of a conflict of interest. Now that they have built the technology, any regulation is likely to exempt them, giving them an unfair advantage and a moat while stifling newcomers. They have a huge conflict of interest in demanding very strict regulation of the very thing they have just created.

Also read: Part 1 | "Most technical jobs in IT services can be automated with generative AI"

What does it mean to say that GPT models have a black-box problem? Humans built them, so why can't we understand how they work?

Computer programs usually work by people writing precise code to accomplish a particular purpose; every line of that code has a specific purpose. You can introduce randomness into code, but you can still trace a particular behavior in the results back to that randomness, so even that is explainable. But when you write software for AI/ML, you are only creating a framework, a shell of code.
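To make that contrast concrete, here is a minimal illustrative sketch (not from the interview): a conventional function whose behavior is spelled out rule by rule, next to a toy word-pair model whose behavior comes entirely from whatever text it is fed. Real LLMs use neural networks trained on trillions of words rather than simple word-pair counts, but the relationship between the code "shell" and the data is the same; all names here are made up for illustration.

```python
import random
from collections import defaultdict

# 1) Conventional software: every behavior is an explicit, inspectable rule.
def rule_based_reply(message: str) -> str:
    text = message.lower()
    if text.startswith("hello"):
        return "Hello! How can I help you?"
    if "how are you" in text:
        return "I'm doing fine, thank you."
    return "Sorry, I don't understand."

# 2) An ML-style "shell": the code below is a generic frame with nothing
#    domain-specific in it; what it produces depends entirely on the training text.
def train_word_pairs(corpus: str) -> dict:
    """Record which word tends to follow which in the corpus."""
    model = defaultdict(list)
    words = corpus.lower().split()
    for current, following in zip(words, words[1:]):
        model[current].append(following)
    return model

def generate(model: dict, start: str, length: int = 8) -> str:
    """Walk the learned word-to-word associations to produce new text."""
    word, output = start, [start]
    for _ in range(length):
        followers = model.get(word)
        if not followers:
            break
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

if __name__ == "__main__":
    print(rule_based_reply("Hello there"))
    # No line of code says cats and dogs are related; the association
    # exists only because they co-occur in the training text.
    corpus = "the cat sat near the dog and the dog sat near the cat"
    print(generate(train_word_pairs(corpus), "the"))
```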
A machine learning system doesn't have lines of code telling it how to respond when someone says "Hello" or "How are you?". Instead, vast amounts of data (trillions of words of text from the internet) are fed into that shell of code, and the data is used to form connections between words in abstract mathematical terms. Nobody tells the LLM that "cat" will often appear in the context of "dog" because both are pets; such relationships emerge automatically because, in that vast body of text, cats and dogs tend to appear together. These relationships then begin to be abstracted to higher and higher levels within the system. For example, when I told ChatGPT that Gandhi used a smartphone to click photos and upload them to social media, it replied that this was impossible because neither smartphones nor social media existed during his lifetime. Now, anachronism is a concept that humans understand. How did GPT figure that out? You cannot really find out by looking into its innards. The black-box problem arises because it is impossible to determine why a particular input led to a particular output.

In the medical world, it is said that no one really knows how general anesthesia works at the molecular level. Some physicists would say that wave-particle duality is not understood at a fundamental level, yet nuclear physics has made great strides nonetheless. Is it possible to set aside the black-box problem of AI in the same way?

Even if we don't understand anesthesia at the molecular level, we understand it well enough to use it safely at scale; the same goes for the mysteries of the universe, whether or not we fully understand them. But comparing these to AI would be a false equivalence. Anesthesia does one specific thing, putting people to sleep, whereas AI will increasingly be used by individuals and businesses for all kinds of purposes, so a more nuanced view is needed. When black-box technologies start entering decision-making, big and small, across society, it can be very dangerous if we don't understand how they fundamentally work. A trivial example is a company that receives thousands of resumes and uses AI to filter them. If you don't understand how the model works, you also cannot know whether it is filtering resumes with certain biases.

In such cases, wouldn't companies have human reviewers to check for bias?

It really depends on the incentives. In an ideal world, people would use technology very carefully, but I don't think that is realistic, because that is not how society and organizations actually behave. They use technology to their advantage, to maximize efficiency and minimize costs, the same incentives as always. If an organization can get by with one reviewer instead of ten, there is every incentive for it to do so.

A few months ago, Elon Musk, Steve Wozniak, Max Tegmark and others signed an open letter calling for a six-month moratorium on further AI development. What is your opinion?

Ideally, the question should be whether humanity should develop AI at all. Should we slow it down the way we slowed down human cloning? Although defining the issues and ethics of human cloning was much easier than it is with AI, which will advance humanity greatly in some ways and set it back greatly in others. But pausing AI development for six months is a total joke; it means nothing. Why six months? Why not four? Why not twelve?
Either you pause indefinitely and say that AI development will resume the day humanity understands all of this, or you don't pause at all.

AI now writes movie scripts, music and novels, and we respond to its output; there seems to be an emotional element to it. It is no longer just about AI learning purely logical things like chess. Does that in some way indicate a kind of sentience?

Not really. This is a highly contentious topic, a 2,000-year-old debate around free will, perception, cognition, what it means to be human, and more. Some argue that LLMs show signs of sentience, but I doubt the majority of people, philosophers and AI researchers would claim sentience here. Even the definition of sentience is unclear; not everyone agrees on a single definition. I think it goes right back to Searle's Chinese Room thought experiment.

Since the 1990s, when IBM's Deep Blue defeated Garry Kasparov, AI has been stronger than the top grandmasters at chess; you can even have two machines play against each other. Yet fan support for top chess players hasn't suffered, and if anything it has grown. Doesn't this suggest that musicians, painters and writers have nothing to fear from AI?

When a human makes an amazing move in chess, it is amazing because it was done within human limits. Nobody cares when an AI plays a great game of chess, because an AI can do anything. Chess endures because it celebrates the human talent for making the best moves within limited time and ability. There are certainly lessons to be learned from the chess example, but I don't think they apply fully. If an AI can generate a 1,000-page novel, if it can generate unlimited content forever, that form of content will probably lose its value. As content becomes largely AI-generated and proliferates, I have a feeling a niche will emerge for human-generated, handcrafted work. The concept of creativity also involves scarcity: when J.K. Rowling writes something, people appreciate it for what she wrote, her history and her body of work.

Copyright infringement lawsuits are being filed against generative AI companies. Some artists don't want their work used to train AI models, but human artists have always been "inspired" by others. What is your opinion?

I think this is a philosophical question that needs to be settled before it can be settled legally. In the age of AI, we need to revisit our definitions of originality, creativity and learning. A musician may listen to 10,000 tracks before creating a new song and is, of course, influenced by everything they have consumed; there are grounds for copyright infringement only if the new song overlaps significantly with one of those 10,000. Creative work is always influenced by the millions of things we have learned consciously and subconsciously. Whether the output of an AI model influenced by millions of images is original is, in my view, a philosophically murky question in the first place.

The Indian tradition has always differed from the Western tradition in terms of authorship. We do not know the names of the first creators of the ragas, the Vedas or the Upanishads; people are believed to have adapted earlier versions over generations. Could Indian heritage be a basis for open-sourcing knowledge in the age of AI?

Perhaps because of the oral tradition, there was not much of a concept of personal ownership. Writing is relatively new to our civilization, and over thousands of years of conveying things orally, information gets lost.
But when AI consumes massive amounts of data, learns from it and creates something new, it is very different from that spirit of collectivism and open source. The AI did not naturally go out and learn things the way humans do; companies scraped millions of images, text snippets and more from the internet, and one of the biggest issues right now is that this was allegedly taken without consent.

Is there a way to prevent one's data from being scraped from the internet to train AI models?

The entire internet has already been scraped. Google has been doing exactly the same thing, right? Google just shows you a snippet of what it has scraped, while the AI model creates something new out of it; that is the only difference. In reality, Google sucks up more data than any of these AI models. There are websites that do not want Google to index them, which is why there is a technical mechanism called no-index. Large companies may adopt such measures, but they bind only law-abiding companies. Anyone can copy, paste and scrape anything on the internet. Google respects no-index, but another company, or any of the millions of people scraping in their spare time, may not. If something is available on the internet, copyrighted or not, it can be incorporated into AI models, and it is practically impossible to prevent.
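For context, the "no-index" mechanism Nadh refers to is the voluntary robots.txt / noindex convention: a site declares which crawlers may fetch which pages, and compliant bots check the file before scraping. Below is a minimal sketch using Python's standard-library robotparser; the site URL and the "SomeAICrawler" user agent are placeholders, and, as Nadh notes, nothing technically stops a crawler that simply ignores the file.

```python
from urllib import robotparser

# robots.txt is the voluntary opt-out convention: a site lists which
# crawlers ("user agents") may fetch which paths on that site.
rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")  # placeholder site
rp.read()  # fetch and parse the file

# A law-abiding crawler asks before fetching a page; a scraper that
# never calls can_fetch() faces no technical barrier at all.
for agent in ("Googlebot", "SomeAICrawler"):  # "SomeAICrawler" is hypothetical
    ok = rp.can_fetch(agent, "https://example.com/private/article.html")
    print(f"{agent}: allowed to fetch = {ok}")
```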


