Dr. Vishal Sikka is well versed in AI. He may be founder and CEO of an on-trend AI startup (Vianai), but he has been in the AI game for a long time. I met Sikka often when he was CTO at SAP. Even then, he was touting the potential of enterprise AI, outside of the hype cycle.
So, during a recent video call, I asked Sikka: surely he has conflicting feelings? What he has been working on for so long is now a hype-filled tech bandwagon, full of opportunists and techno gimmicks. "It's a mixed feeling," admits Sikka. He remembers his roots:
I have different emotions every day. It's like a barrage. I started in AI with natural language, but the phrase we used at the time was NLU (Natural Language Understanding). I was 17 years old when I started working on the basic techniques of natural language understanding. I had an idea, which I wrote about to a professor at MIT; I was still in India then. It was Marvin Minsky, one of the fathers of AI. I had the chance to work with him while I was still an undergraduate. He wrote me a letter of recommendation to attend Stanford University for my PhD.
AI Roller Coaster Highs and Lows
Back to today. AI has suddenly gained enormous momentum, technically (and culturally). Sikka found himself signing an important, controversial, and I would argue widely misunderstood letter warning of the dangers of AI and calling for its infamous moratorium. No, it was not a pause in AI innovation; it was a pause in the expansion of Large Language Models (LLMs). Sikka told me:
I am one of the signers of that letter. Stuart Russell is one of my academic brothers; we had the same PhD advisor, and he was one of the principal authors. He asked me to sign it. We were not advocating a halt to AI research. This is very important. We were talking about pausing models larger than GPT-4 for six months, to give regulators a chance to figure out what's going on. For me, the risks of AI today are very worrying.
Cue Sikka's mixed emotions again. The risks of AI are significant, but so is the potential for life-changing applications. At Vianai, Sikka spends his days on both sides of that equation:
On the positive side, this is an incredibly powerful piece of technology. There are so many things you can do with it. It has the potential to bring major changes to the way we live and work. This morning someone released something called RedPajama, a large open source language model, released with an open data set and more. So there is an openness where people can experiment and tinker and try things out, and the speed with which that happens, and the incredibly exciting applications that can be built with it.
hila – New Generative AI Solution
But companies are a different story. Their risk profile is more exacting. Company leaders are rightly wary of the risks of this particular AI hype cycle, as they were with the metaverse and blockchain before it. But they also want to pursue use cases. They want a better handle on what is currently possible, and what is being overhyped by the marketing department. Adopting an LLM forces exactly that question.
So what can we do today? Let's start with hila, Vianai's newly announced generative AI solution: a new financial research assistant. hila helps you find answers quickly from earnings records, with new data being added all the time. Sikka:
hila is a tool for investment research. It consists of three components; give it a try, just visit hila.ai and sign up. [hila has] zero tolerance for hallucinations. Think of ChatGPT; there are many such tools. But what we've worked hard on is making sure these things don't hallucinate.
hila has several scenarios. For listed companies, you can ask questions about documents such as earnings call transcripts, 10-Ks, and 10-Qs. There are also datasets that can be queried; this is SQL-style querying of structured data using natural language.
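hila's internals are not public, but the "natural language over structured data" pattern Sikka describes can be sketched in a few lines of Python. Everything here is an assumption for illustration: the toy `earnings` table and the stubbed `llm_to_sql` translator stand in for a real schema and a real model call.

```python
import sqlite3

# Toy structured dataset standing in for hila's financial data (illustrative only).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE earnings (company TEXT, quarter TEXT, revenue_m REAL)")
conn.executemany("INSERT INTO earnings VALUES (?, ?, ?)",
                 [("AcmeCorp", "2023-Q1", 120.5), ("AcmeCorp", "2023-Q2", 134.0)])

def llm_to_sql(question: str) -> str:
    """Hypothetical stand-in for an LLM call that translates a natural-language
    question into SQL. A real system would prompt a model with the table
    schema; here we return a canned translation."""
    return ("SELECT quarter, revenue_m FROM earnings "
            "WHERE company = 'AcmeCorp' ORDER BY quarter")

def answer(question: str):
    sql = llm_to_sql(question)
    # Guardrail: only allow read-only SELECT statements to reach the database.
    if not sql.lstrip().upper().startswith("SELECT"):
        raise ValueError("Only SELECT queries are permitted")
    return conn.execute(sql).fetchall()

rows = answer("What was AcmeCorp's revenue by quarter in 2023?")
```

The design point is the separation: the model only proposes a query, and deterministic code decides whether to execute it, which is one way to keep free-form language from touching the data unchecked.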
Jake Klein, CEO of Dealtale, a Vianai company, helped Sikka pursue this type of capability 18 years ago, when both were at SAP. "But today I can finally do this," says Sikka. And the second scenario?
Again, [with this feature] you can upload your own documents and ask questions about them. But in every case it is characterized by safety, accuracy, and zero tolerance for nonsense. With Dealtale, our team did a lot of work in a specific area of AI called causality, the study of cause and effect.
Causality and correlation have long given data scientists headaches.
Basically there is causation, but there is also correlation. In general, it is difficult to distinguish between the two when building a model from data. Sometimes people confuse correlation with causation.
Our team has done a lot of pioneering work in that area. You can say, "Hey, if I put this offer in front of John, is he likely to click on it, based on what his peers are doing?" Understanding that causality, understanding what phenomenon triggered that behavior, is what we were working on. So we acquired Dealtale as a way to collect the data to build these causal models and graphs.
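The correlation-versus-causation trap Sikka describes is easy to demonstrate. Below is a minimal sketch (not Vianai's method) of a classic confounder: a hidden variable `z`, say company size, drives both ad spend `x` and revenue `y`. The two correlate strongly even though neither causes the other, and the correlation largely disappears once we hold `z` roughly constant. All variable names and distributions are invented for illustration.

```python
import random

random.seed(0)

# Hidden confounder z drives both x and y; x does NOT cause y.
n = 10_000
z = [random.gauss(0, 1) for _ in range(n)]
x = [zi + random.gauss(0, 0.3) for zi in z]
y = [zi + random.gauss(0, 0.3) for zi in z]

def corr(a, b):
    """Pearson correlation coefficient, computed by hand."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    va = sum((ai - ma) ** 2 for ai in a)
    vb = sum((bi - mb) ** 2 for bi in b)
    return cov / (va * vb) ** 0.5

# Naive view: x and y look strongly related.
naive = corr(x, y)

# Control for z: restrict to a thin slice where z is roughly constant.
# Within that slice, the apparent x-y relationship largely vanishes.
sliced = [(xi, yi) for xi, yi, zi in zip(x, y, z) if abs(zi) < 0.05]
xs, ys = zip(*sliced)
controlled = corr(xs, ys)
```

A causal model built from raw correlations would wrongly recommend raising ad spend to raise revenue; distinguishing the two is exactly the problem causal graphs are meant to address.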
OpenAI then released a little something called ChatGPT, which inspired the Vianai team. The result: another app, Dealtale IQ, released on the Vianai platform. Sikka:
This product we launched a few months ago, Dealtale IQ, essentially gives marketing analysts the ability to ask any question they can think of, not just four or five causal scenarios.
As always with AI, the power and comprehensiveness of your data makes a difference.
In addition to this data, [we pull in data from] 19 or 20 systems: Salesforce, Marketo, Microsoft Dynamics, Google Ads, Facebook Ads, and the like. We built this one view of the customer from customer engagement data.
This is typically aimed at smaller businesses, i.e. businesses that are 100% digital in nature. HubSpot is another of the systems we look at. And now marketing analysts can ask any question they can think of. Customers really like it.
I couldn't tackle this topic without getting into coding. Sikka has managed several fairly large development organizations over the years. Like other prominent enterprise technologists I've spoken with, Sikka believes generative AI is particularly well suited to programming, and perhaps even disruptive to it.
One of the interesting results of this large language model technology is that it is particularly effective at coding. If you put the right safeguards and guardrails in place, you can actually do a pretty good job of generating SQL, generating JSON, or even just generating code.
Our goal is to make the entire application dynamic. You can virtualize an entire application and replace it with human language. We will be making an announcement on this in the coming weeks and months.
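The "safeguards and guardrails" Sikka mentions for generated SQL and JSON can be sketched simply: never trust model output directly, always extract and validate it first. The stubbed `llm_generate` function and the `REQUIRED` schema below are assumptions for illustration, not Vianai's implementation.

```python
import json

def llm_generate(prompt: str) -> str:
    """Hypothetical stand-in for an LLM call; a real system would hit a model
    API. Models often wrap JSON in prose, so the output needs vetting."""
    return 'Sure! Here is the record: {"company": "AcmeCorp", "revenue_m": 120.5}'

# Expected schema for the generated object (illustrative).
REQUIRED = {"company": str, "revenue_m": float}

def extract_validated(raw: str) -> dict:
    # Guardrail 1: pull out the JSON object even if the model added prose.
    start, end = raw.find("{"), raw.rfind("}")
    if start == -1 or end == -1:
        raise ValueError("no JSON object found in model output")
    obj = json.loads(raw[start:end + 1])
    # Guardrail 2: enforce the expected schema before downstream use.
    for key, typ in REQUIRED.items():
        if not isinstance(obj.get(key), typ):
            raise ValueError(f"field {key!r} missing or wrong type")
    return obj

record = extract_validated(llm_generate("Extract the earnings record as JSON"))
```

The same shape applies to generated SQL or code: the model's text is treated as an untrusted proposal, and a deterministic validation layer decides what is allowed through.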
My view
I have an ax to grind about the technical limitations of LLMs that seem to be overlooked. I got Sikka's opinion on that; I will share it in my next article. However, there is no denying the high level of adoption of these tools.
The most important concerns in the aforementioned "AI pause" letter are the pending risks of artificial general intelligence (we're not there yet) and the huge implications if/when AI reaches human-level problem solving and cognition. Some thought the letter was about the sensational risk of massive unemployment. I don't think we're anywhere near that either, given the technical limitations of generative AI. So what was the purpose of the letter?
It really depends on the person; there is no single correct read on this. Many of the signatories put aside big differences about each other's views and the future of AI. But some of the signatories, including my college classmate Gary Marcus, an AI expert himself, remain unconvinced by the current capabilities of generative AI, yet signed because they believe it is a powerful tool with great potential for misuse, unintended consequences, and/or exploitation by malicious parties. Sikka told me:
You've probably seen the news about the 40,000 chemicals synthesized using these systems. Each of those 40,000 has the same level of lethality as VX, one of the deadliest compounds known to mankind... and that's just one example.
The regulatory environment is too far behind to properly address this. Having been directly involved in the evolution of AI, Sikka is exactly the type of voice this conversation needs.
But people are experimenting with ChatGPT for a variety of use cases, including their own productivity, so there are compelling benefits as well. Sikka has already integrated GPT into his personal workflow:
One of the great uses for this is to send it a message and get out of a rut... I had to write something last night and I was so tired. I went to ChatGPT and said, "Draft a letter." Well, when I finished writing the letter, there was not a single sentence from ChatGPT in it. But that's what got me started. It gave me an idea. It gave me a frame, and ultimately pushed me forward.
Given Sikka's passion for teaching, it's understandable that he's concerned about the impact this technology will have on junior roles. Now that AI can perform more of the day-to-day tasks of junior programmers, junior content writers, and junior research analysts, Sikka thinks big changes are coming:
You and I have talked before about the burden on education. It is now greater than ever.
Aspiring professionals need a viable path forward. I see this as a major shift in both educational requirements and our approach to professional instruction. But as Sikka points out, attitudes also need to change.
Junior analysts need to know what these things enable; if they don't want to use them in their world, they are going to struggle. When you actually use these things, you will be much more productive and greatly empowered as an employee.
There's a lot to think about, and new apps to try. Sikka also has advice for companies keenly evaluating AI apps and opportunities. We'll cover that in part two next week.
