Meta has announced Segment Anything Model 2 (SAM 2), a unified AI model that can accurately identify which pixels in an image or video belong to which object. SAM 2 can segment any object and track it consistently in real time across every frame of a video, making it a potentially transformative tool for video editing and mixed reality.
Our new AI model can segment anything, even video | Meta
https://about.fb.com/news/2024/07/our-new-ai-model-can-segment-video/
Introducing SAM 2: The next generation of Meta Segment Anything Model for videos and images
https://ai.meta.com/blog/segment-anything-2/
Correctly identifying which pixels belong to which object is called "segmentation," and it is useful for tasks like scientific image analysis and photo editing. Meta has developed its own segmentation AI model, Segment Anything Model (SAM), which is used in Instagram's AI features "Backdrop" and "Cutouts."
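To make the idea concrete, a segmentation mask is simply a per-pixel labeling. The toy example below is an illustration of the concept, not SAM itself: it represents an object mask as a boolean array, the form that segmentation models typically output.

```python
import numpy as np

# A toy 4x4 "image" where a 2x2 object occupies the center.
# Segmentation assigns each pixel a label: 0 = background, 1 = object.
labels = np.zeros((4, 4), dtype=int)
labels[1:3, 1:3] = 1

# A per-object boolean mask: True where the pixel belongs to the object.
object_mask = labels == 1
print(int(object_mask.sum()))  # 4 pixels belong to the object
```

Real models produce one such mask per object at the full image resolution; everything else (editing, analysis, tracking) operates on these masks.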
Beyond Instagram, SAM has sparked a variety of applications in science, medicine, and other industries: in marine science it has been used to analyze sonar images of coral reefs, in disaster relief to analyze satellite imagery, and in medicine to segment cell images and help detect skin cancer.
Meta has now announced the next generation of SAM, SAM 2. SAM 2 extends SAM's segmentation capabilities to video: any object in an image or video can be segmented and tracked consistently in real time across all frames. Beyond extending segmentation to video, SAM 2 also reduces the interaction time required to about one-third.
Existing AI models have been unable to achieve this because video segmentation is much harder than image segmentation: in a video, objects can move quickly, change appearance, or be occluded by other objects or parts of the scene. According to Meta, SAM 2 solves many of these challenges.
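Tracking an object "consistently across frames" amounts to linking per-frame masks that refer to the same object even as it moves. The sketch below is a generic illustration of this idea, not SAM 2's actual memory-based mechanism: it links a tracked mask to the next frame's candidate with the highest intersection-over-union (IoU) overlap.

```python
import numpy as np

def iou(a: np.ndarray, b: np.ndarray) -> float:
    """Intersection-over-union of two boolean masks."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 0.0

# Frame 1: the tracked object's mask.
prev = np.zeros((6, 6), dtype=bool)
prev[1:3, 1:3] = True

# Frame 2: two candidates — the same object shifted one pixel, and a distractor.
moved = np.zeros((6, 6), dtype=bool)
moved[2:4, 1:3] = True
distractor = np.zeros((6, 6), dtype=bool)
distractor[4:6, 4:6] = True

# Associate the track with the candidate that overlaps it the most.
candidates = [moved, distractor]
best = max(range(len(candidates)), key=lambda i: iou(prev, candidates[i]))
print(best)  # 0: the shifted object mask is chosen
```

Simple overlap matching like this breaks down exactly in the cases the article lists, such as fast motion, appearance changes, and occlusion, which is why a video model needs a richer notion of object identity than frame-to-frame overlap.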
To see how accurately SAM 2 can identify objects in a video, check out this video posted by Meta CEO Mark Zuckerberg.
Meta has also released a demo web app for SAM 2, allowing you to see how the segmentation features work in practice.
SAM 2 Demo | By Meta FAIR
https://sam2.metademolab.com/
To experience the demo, click “Try it now.”
You will then see a notice stating that the SAM 2 demo is for research purposes only and may not be used commercially, that the demo is not available to residents of Illinois and Texas, that the demo may not work as intended, and that any data submitted through the demo, along with its output, will be collected, stored, processed, shared, and used to train and improve AI models in accordance with the linked Terms of Use. Click "I Agree" at the bottom of the screen.
Here is the demo screen: Click on the objects on the screen.
This identifies the object as follows:
If you want to identify a new object, click Add another object.
You can identify up to three objects at a time. Once you have selected the objects, click Track Objects.
The video outlined in red then plays. Both the ball and the shoes are white, and there are many moments when they overlap, but SAM 2 succeeds in accurately identifying each object.
Click Next.
Click the Selected Objects section to change how the selected object is highlighted, or click the Background section to change how the background outside the selected object is treated. Click Next.
You can also download the segmented video, or upload and segment your own footage using "Upload your own video."
The following video shows a demonstration of SAM 2:
I tried out a demo of Meta's segmentation AI model “Segment Anything Model 2 (SAM2)” – YouTube
SAM 2 is open source under the Apache 2.0 license and is available on GitHub.
GitHub – facebookresearch/segment-anything-2: This repository provides code for running inference using Meta Segment Anything Model 2 (SAM 2), links to download checkpoints of the trained model, and an example notebook that shows how to use the model.
https://github.com/facebookresearch/segment-anything-2
In addition, Meta has released the SA-V dataset used to train SAM 2 under the CC BY 4.0 license.
SA-V | Meta AI Research
https://ai.meta.com/datasets/segment-anything-video/