How Professor Russ Altman uses AI

Applications of AI


This Q&A is part of a new series that explores Stanford experts using artificial intelligence in their personal and professional lives. These candid conversations aim to shed light on the practical and everyday choices people make as AI becomes increasingly integrated into work, research and home life. This series does not reflect official university policies or guidance.


Russ Altman, a professor of bioengineering at Stanford who has been working with AI since the 1980s, shares why he still writes his own letters of recommendation, how AI helped him shop for a camera lens, and why he remains optimistic about the benefits of AI while worrying about its impact on the current generation of students.

What is your personal approach to using AI in your work and in your daily life?

First, I subscribe to an AI chatbot recommended by AI-sophisticated students and colleagues in data science, informatics, and AI research. I've switched between vendors a few times based on who was performing best for me at a particular time.

I can write quickly and effectively (my own opinion, but I think it's true), so for now I'm better and faster at writing than AI; that might change. I sometimes use it to create an outline or initial text, but I revise it heavily. (By the way, these answers were not created with AI!) I probably use it a little more in my personal life than in my work. That said, I have asked it for summaries of technical fields that are new to me, and for comments on text I've written when I wanted a quick second opinion. My personal life does not currently depend on generative AI. My professional life does depend on it, since I work on new AI technologies for biological and medical discovery, but as a builder rather than as a user.

Can you share some specific examples of how AI was used in research, education or personal life?

For research, I gave it a one-page grant summary I had written and asked for its strengths and weaknesses. It was useful, if a bit superficial, but it raised some criticisms I hadn't thought of.

I chaired a university committee that advised the provost on how to approach both the promises and the threats of AI in education, research, and administration at Stanford.

In education, I allow AI in my classes, but students must disclose their use of it (such as what prompts they used and how the output was handled). Many students use it, and they are not always upfront about acknowledging that use, even though it is permitted as long as it is disclosed. I know my colleagues worry that students will not learn how to write, and therefore how to think. That is a very legitimate concern. I don't know exactly what to do about it, but I'm trying to make sure my students are prepared for the future while also making sure they actually learn.

I find it very useful for personal tasks. I recently purchased a camera lens for my photography hobby and used AI extensively to understand the pros and cons of various products and which was likely to meet my requirements. It was very helpful because there were hundreds of product reviews it could draw on to give me advice. In that case, I had already done quite a bit of research without AI, so I could tell that it was giving me good information.

Are there situations where you intentionally choose not to use AI? Why?

Personal letters and letters of recommendation. I want them to be 100% in my voice, because they matter enormously to the people involved (decisions about careers, promotions, and so on). Others may disagree with me, and the tools may get better. For the most part, I also don't accept suggested text for messages or email; I just write them myself.

As AI tools become more widespread, which societal impacts concern you most? And what potential benefits are you most hopeful about?

I am concerned about properly educating students (kindergarten through university) and ensuring we do not create a generation of people who are not skilled thinkers. The current generation is at particular risk, because in the past people were taught how to read, write, and think without AI. In 10-20 years we will have figured all this out (I hope!), but this is the generation we could most hurt if we don't get it right.

I am most optimistic about the impact of AI on scientific and engineering discovery, particularly what it can do to support biological and medical discoveries related to treatments. There are also many exciting advances in materials science, energy, sustainability, and other key areas.

I'm worried about a worsening digital divide, so I want to make sure AI is available to everyone and that it improves society broadly.

How has your own opinion of AI changed over the course of your career?

I worked on AI in the 1980s, when it wasn't very powerful. It is remarkably powerful now, and I'm still optimistic. I think it will define the intellectual agenda for educators and researchers for at least the next 10-20 years. I'm not worried about existential threats, but I am worried that people will lose control of AI and that a small number of decision-makers will make too many of the decisions. Decisions about AI should be made at a societal level with a reasonable degree of transparency.


