
As part of Conversations on Artificial Intelligence, a webinar series hosted by the Caltech Science Exchange, two artificial intelligence (AI) researchers, Pietro Perona and Suzanne Stathatos, discussed the potential of AI as a powerful tool for wildlife conservation and biodiversity research.
Perona is the Allen E. Puckett Professor of Electrical Engineering at Caltech, and Stathatos is a graduate student who, before coming to Caltech, was a software engineer at Amazon and at JPL, which Caltech manages for NASA.
In a conversation with Caltech science writer Robert Perkins, the two researchers discuss AI applications for identifying and tracking wildlife that provide fresh insights for biologists and others interested in the environment.
Highlights of the conversation are below.
The questions and answers below have been edited for clarity and length.
What is computer vision and how is it used?
Pietro Perona: When you wake up in the morning, you begin to see the world and understand what’s around you. That’s what we’re trying to reproduce in computer vision. We want to give machines the same ability we have: to know about the world just by looking at it.
One of the purposes for which we use vision is to understand the geometry of the world around us, for example to avoid hitting obstacles. Another thing we use it for is recognition. This allows us to classify objects in the world and know how to interact with them. And of course we use it for social interaction. For example, Robert, looking at you, I can see that you’re paying attention to what I’m saying and aren’t too confused.
We want to give machines the ability to see so that they can replicate all of these abilities we have and interact better with people.
Why is it difficult for computers to identify objects just by looking at them?
Pietro Perona: Looking at something produces an image that represents the object, and that image can be very different from what actually exists. An image is formed by light that bounces off surfaces in the environment before entering the eye, so it carries information about everything the light touched along the way. The representation depends on our viewpoint and on the lighting, and it takes a lot of work to decipher what images are telling us. It turns out that more than half of our brain is devoted to vision. We don’t realize it, but our brains do more work on vision than on most other things we do during the day, like language or proving theorems.
Suzanne Stathatos: Humans are also constantly evolving and updating their models. As we age, we keep learning from new sensory inputs. We’re training computer vision models to learn about the world, but with far fewer images than the human brain collects.
What are some ways AI can help ecology and conservation?
Suzanne Stathatos: I’ve been working on a project that uses sonar imagery to monitor salmon populations in the Pacific Northwest. For ecological and economic reasons, fisheries in Alaska, Washington, California, and Oregon have begun installing sonar cameras in riverbeds to track salmon as they swim to their spawning grounds. Technicians watch the video and count how many fish cross the frame. We’re trying to automate that process so fishery staff can work on other things instead of counting fish swimming upstream. This is difficult because the fish all look almost the same against a noisy sonar background; unlike pedestrians and cars, which are distinct objects against very different backgrounds, they are much harder to detect and track.
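The counting step Stathatos describes, once fish have been detected and tracked, reduces to tallying tracks that cross a counting line in the frame. A minimal sketch of that last step (the detection and tracking themselves are not shown, and the function name and data layout are hypothetical):

```python
def count_upstream_crossings(tracks, line_x):
    """Count tracked fish that cross a vertical counting line moving upstream.

    tracks: list of per-fish x-coordinate histories (one list of positions
            per tracked fish, in frame order). line_x: x position of the
            counting line. Each track is counted at most once.
    """
    crossings = 0
    for xs in tracks:
        for prev, curr in zip(xs, xs[1:]):
            if prev < line_x <= curr:  # moved from left of the line to right
                crossings += 1
                break  # one crossing per fish
    return crossings

# Toy example: one fish crosses x=100 upstream, one hovers, one moves downstream.
tracks = [
    [10, 40, 80, 120],   # crosses the line -> counted
    [90, 95, 99],        # never reaches the line -> not counted
    [150, 140, 130],     # moving downstream -> not counted
]
print(count_upstream_crossings(tracks, 100))  # -> 1
```

A real system would also have to handle fish that linger near the line, which is one reason counting each track only once matters.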
I’ve also worked with students who want to understand how walrus populations are responding to the changing Arctic. Using remote satellite imagery and a computer vision approach to count brown pixels [the walruses] against a white background, we were able to start establishing a baseline for the walrus population. This is important for understanding how the population is changing.
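The brown-pixels-on-white idea can be sketched very simply: threshold the image for brownish pixels, then divide by the typical footprint of one animal. The threshold values and the `pixels_per_walrus` parameter below are made up for illustration, not taken from the actual project:

```python
import numpy as np

def estimate_walrus_count(rgb, pixels_per_walrus=12):
    """Rough count of walruses in an RGB satellite tile.

    Masks brownish pixels (reddish, not too bright) against a bright
    background, then divides the masked area by the assumed per-animal
    footprint. All thresholds here are illustrative placeholders.
    """
    r, g, b = rgb[..., 0].astype(int), rgb[..., 1].astype(int), rgb[..., 2].astype(int)
    brown = (r > 100) & (r < 200) & (g < 150) & (b < 120)
    return int(brown.sum() / pixels_per_walrus)

# Toy 20x20 white tile with a 6x8 brown patch (48 pixels ~ 4 animals).
tile = np.full((20, 20, 3), 255, dtype=np.uint8)
tile[5:11, 4:12] = (150, 100, 60)
print(estimate_walrus_count(tile))  # -> 4
```

In practice the hard part is that animals huddle together, so area-based counting is only a starting baseline, which matches how Stathatos frames it.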
It seems that it is not only to save effort for graduate students, but also to obtain data that would otherwise not be available.
Suzanne Stathatos: That’s right. It not only saves graduate students effort, it lets them answer questions they might not otherwise be able to answer. They can approach the data in a different way and address questions they couldn’t get at before.
Pietro Perona: Let’s talk about iNaturalist. iNaturalist is an app that anyone can download to a smart device, usually a smartphone. It was developed with Scott Loarie of the California Academy of Sciences to bring together naturalists and amateur field biologists and help them identify plants and animals from each other’s observations. We added the ability to automatically interpret images to iNaturalist so anyone can identify plants and animals. This is primarily the work of Grant Van Horn, who was a student in my lab.
The idea is that when you go hiking and see a plant or animal in nature, you want to know what species it is. Is it a rare species? Should it even be here? And so on. You take a picture with your phone, and the phone classifies it and suggests which species it might be. You choose the most probable among the suggestions. Since it’s a social network, you can post your observation, and other people interested in that species or location can look at it, post their opinion on what species it is, and correct your determination, or the machine’s. Behind the scenes, we’ve built a statistical machine that associates each person’s account with an estimate of their knowledge and of how often they’re correct when making species identifications in a particular domain.
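One simple way to combine labels from users of differing reliability, in the spirit of the "statistical machine" Perona describes, is accuracy-weighted voting: each vote is weighted by the log-odds of that annotator being correct. This is a generic sketch of the idea, not iNaturalist’s actual model, and all names and numbers below are illustrative:

```python
import math
from collections import defaultdict

def weighted_species_vote(votes):
    """Pick the species with the highest accuracy-weighted support.

    votes: list of (species, annotator_accuracy) pairs, where accuracy is
    the estimated fraction of that annotator's past IDs that were correct.
    Each vote contributes the log-odds of its annotator being right.
    """
    scores = defaultdict(float)
    for species, acc in votes:
        acc = min(max(acc, 0.01), 0.99)  # clamp to avoid infinite weights
        scores[species] += math.log(acc / (1 - acc))
    return max(scores, key=scores.get)

# One highly reliable identifier can outweigh two uncertain ones.
votes = [
    ("Western Fence Lizard", 0.95),
    ("Sagebrush Lizard", 0.60),
    ("Sagebrush Lizard", 0.55),
]
print(weighted_species_vote(votes))  # -> Western Fence Lizard
```

The log-odds weighting is what makes a confident expert’s single vote worth more than several near-coin-flip votes, which mirrors the behavior Perona describes of tracking how often each person is right.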
As this technology improves, we must also consider potential ethical and privacy concerns, especially regarding things like facial recognition. Have you encountered tricky problems?
Pietro Perona: We do think about that. Face recognition, like it or not, has been as successful as the applications we’ve been talking about. Face recognition systems are far more accurate than humans and could be a great tool. In the United States, for example, the National Academy of Sciences has reported on how often crime witnesses make false identifications and people end up in prison because of mistaken identity and racial prejudice. It would be really valuable to have something whose biases you can test: you can measure whether a system is biased, and if so, fix it. The hope is to have systems that help create a more just, better-functioning society. But it is true that they can be abused.
Suzanne Stathatos: An interesting conservation-related application involves camera traps. [Camera traps are motion-triggered static cameras placed in the wilderness to monitor wildlife.] There’s the question of whether a camera trap should recognize a poacher, and whether it should do something if it finds one. And then there’s the flip side: is that the responsibility of the computer vision researcher? Geographic location also matters, especially when dealing with endangered species. You don’t want to publish exact GPS coordinates, because when that data is released, opportunists may take it and say, “Oh, I can find rhinos at this exact location.” So that aspect of the data should be kept inaccessible.
Here are some of the other questions covered in the video linked above:
• What first piqued researchers’ interest in applying computer vision techniques to ecological problems?
• How do I get started in the field of computer vision? (Includes information about participating in the AI for Conservation Slack community.)
• What other potential applications exist for this technology?
• Are there any unrelated and exciting applications for computer vision?
Learn more about artificial intelligence at the Caltech Science Exchange.
