When Tom Gruber co-founded Siri, he brought AI technology to the mobile market and helped position Apple as one of the world’s most popular mobile brands. In recognition of this impact, Tom has received several awards and honors, including being listed among the top technology speakers at conferences worldwide. In this exclusive interview, learn about Tom’s humanistic AI principles and how to avoid the ethical issues that surround AI technology.
What are the guiding principles of humanistic AI?
Humanistic AI is a philosophy of AI. We often hear that machine intelligence is the goal. A lot of the time it’s, “Let’s do whatever it takes to give machines the intelligence to automate the things we admire about humans. Let’s build machine versions of us.” And that’s perfectly fine.
Humanistic AI is a different idea: why not build machines that make humans smarter? The difference really matters. Pursuing machine intelligence as a goal in itself, especially in business, tends to produce machine intelligence that competes with humans for their work and attention.
In many cases, all else being equal in economic conditions like the ad-based attention economy, pursuing machine intelligence for its own sake will eventually lead to AI being used against humans, competing for their attention. Or, in the case of employment, competing with them for jobs.
While this may be profitable for some companies, it is not an effective way to use AI for the benefit of humanity. The original design goal of humanistic AI, by contrast, is to augment humans and make them more effective.
For example, consider automating tasks that humans do at work. The tasks worth automating are the ones that are dangerous, tedious, or too time-consuming for humans to do well. That line will shift over time as more intelligence is automated and more of the mundane tasks go away.
For example, in medicine, many things are done in a tedious way simply because that was the only way. AI can now take on much more of that work, freeing medical and health scientists to think about theories of health and how we can respond better to disease. This is true across the board. And it turns out this idea of augmenting, rather than competing with, human intelligence has been around for a long time.
How do you think AI will shape our daily lives?
In many good ways! AI will augment us in all sorts of ways; it is already starting to. Siri was an attempt to augment us. When we developed Siri, the only way to use your phone was by tapping on a tiny little screen, which of course isn’t easy for everyone.
At the time, everything from booking travel to making restaurant reservations was available on the web, but only if you had ten fingers, a big screen, and an internet connection. We wanted all of those benefits to reach someone who had nothing but a cell phone and their voice. That is a kind of augmentation: it is as if they carried ten fingers and a big screen around with them.
Now we are seeing more examples of how AI empowers people to overcome obstacles. For example, one of the companies I advise helps people who, like Dr. Stephen Hawking, are unable to speak due to neurological disorders. The AI reads brain waves and interprets them as speech so that others can understand what the person is trying to say. That was not possible until AI learned to do this kind of interpretation.
The next step is essentially nurturing, in the sense that a mother raises her offspring. As you know, AI today is often like Big Brother, used against humans, but I like to think of AI as Big Mother, like a mother bear protecting her cubs. There are many dangerous things in the world, but what does a mother do? She uses her skills to make you a better person: healthier, better at caring for yourself, with better mental health and better social interactions with your peers. That is what AI can do in the future.
I’m not just making this up. There are many real companies and projects working toward these goals, starting with simple things like wearables, watches, and rings that are beginning to give us feedback about our sleep quality and focus.
The last thing AI can do is transform society. Society has already started to change in a negative direction, but I think AI can turn that around.
First, AI can help us overcome superficial differences between people. As they say on the internet, you can present as any skin color or gender you want. Even with things like cognitive differences, whether on the emotional spectrum or simply in IQ, AI-mediated interaction could help compensate for them.
There are subtler and even more powerful possibilities. At this point, we as individuals have not yet fully realized our collective potential. We don’t have very good ways of thinking together today. The only technologies we have had for that are media and politics, and the traditional ways we think collectively have been seriously degraded. The only form of collective intelligence that still works is science.
Ironically, AI advances the fastest because it uses the scientific method. The point is that some systems for thinking together, like science, work well, and others, like traditional politics, do not. AI will mediate many collaborations and play a big role in solving problems, such as climate change, that can only be solved at large scale.
Are there ethical risks to AI and big data, and if so, how should companies address them?
AI raises major ethical issues. It is perhaps the most powerful technology invented this century. Google’s CEO has called it the most important invention since fire, and Google has invested more in AI than any other company in the world. A technology this powerful and pervasive has ethical implications.
With AI today, some people worry about the future: will it turn into something like the Terminator or Skynet? But I’ve found that there is a problem right now. Most notably, AI is the optimization engine behind major social media platforms such as Facebook, YouTube, and Snapchat. They optimize for so-called growth hacking: how engaged, and how dependent, they can keep people online. They use the big data they collect from their users to predict what will keep them there.
Basically, humans are no match for that kind of technology. This isn’t just about free speech or economics. This is a technology having an impact at a human scale, a political scale, and a geopolitical scale, and lives are being lost to misinformation. This is serious, real-world business.
How can companies deal with this? Well, ethics, especially in technology, is not simply a matter of doing the right thing and being moral. What matters is how you build your values into the technology you create. An example of failing to do so is caring only about winning an adversarial game against humans for their attention; that will inevitably have a negative impact on them.
We want to make sure not only that businesses are profitable, but also that human society benefits. You can literally build both of those bottom lines into the equation.
This exclusive interview with Tom Gruber was conducted by Mark Matthews.