Meta’s new Segment Anything Model can identify and extract objects in images and videos

AI Video & Visuals



Meta has introduced a new AI model that can select and extract individual objects within an image or video. The Segment Anything Model (SAM) can split images and videos into their component objects even when it has not been trained to recognize them. To accompany the model, Meta also released the largest dataset of image annotations ever created.

Meta SAM

SAM can split an image into separate objects and lift each one out of the whole. Users can also search for specific objects by text: you could find Waldo simply by typing “red and white striped shirt” into the tool. Segmentation AI essentially determines which pixels in an image belong to a single object and tags them accordingly. SAM appears to be a more robust version of the technology Meta already uses to identify individuals, prohibited content, and content that may interest Facebook and Instagram users. But Meta envisions a much more ambitious set of potential uses for segmentation AI, especially since the model ships with a huge image-identification dataset and the ability to extend it. SAM can handle web pages full of images, parsing both their separate elements and their meaning as a whole.
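The idea that segmentation means assigning an object label to every pixel, and that an object can then be “lifted” out of the image, can be sketched with a toy example. This is an illustration of the concept only, not SAM’s actual method; the tiny hand-written image, mask, and the helper `lift_object` are invented for the example.

```python
import numpy as np

# A 4x4 single-channel "image": a bright object (value 200) sits in the
# top-right corner against a dark background (value 10).
image = np.array([
    [10, 10, 200, 200],
    [10, 10, 200, 200],
    [10, 10,  10,  10],
    [10, 10,  10,  10],
], dtype=np.uint8)

# A segmentation mask assigns a label to every pixel: here, 0 marks
# background and 1 marks the bright object. A model like SAM predicts
# masks of this kind; this one is written by hand.
mask = np.array([
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 0, 0],
    [0, 0, 0, 0],
])

def lift_object(image, mask, label):
    """Extract ("lift") one object: keep its pixels, zero out the rest."""
    return np.where(mask == label, image, 0)

lifted = lift_object(image, mask, 1)
```

Once the mask exists, lifting an object is just a per-pixel selection, which is why segmentation is the building block for editing tasks such as cutting an object out for a collage.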

“Reducing the need for task-specific modeling expertise, training compute, and custom data annotation for image segmentation is at the core of the Segment Anything project,” Meta explained in a blog post about SAM. “Our goal was to build a foundation model for image segmentation: a promptable model that is trained on diverse data and can adapt to specific tasks, analogous to how prompting is used in natural language processing models. In the AR/VR domain, SAM could enable selecting an object based on a user’s gaze and then ‘lifting’ it into 3D. For content creators, SAM can improve creative applications such as extracting image regions for collages and video editing. SAM could also be used to aid scientific study of natural occurrences on Earth or even in space, for example by localizing animals and objects to study and track in video. We believe the possibilities are broad, and we are excited by the many potential use cases we haven’t even imagined yet.”
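The “promptable” framing above — the user supplies a prompt such as a click, and the model returns the mask of the object at that point — can be sketched with a deliberately simple stand-in. The function below grows a region of similar pixels outward from a clicked point; SAM itself uses a learned neural network, not region growing, so everything here (`segment_from_point`, the tolerance parameter, the toy image) is an invented illustration of the interaction, not the real algorithm.

```python
from collections import deque

import numpy as np

def segment_from_point(image, seed, tol=10):
    """Toy 'promptable' segmentation: given a clicked (row, col) seed,
    grow a region of pixels whose values are within `tol` of the seed
    pixel (4-connected flood fill). Returns a boolean object mask."""
    h, w = image.shape
    mask = np.zeros((h, w), dtype=bool)
    seed_val = int(image[seed])
    queue = deque([seed])
    mask[seed] = True
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < h and 0 <= nc < w and not mask[nr, nc]
                    and abs(int(image[nr, nc]) - seed_val) <= tol):
                mask[nr, nc] = True
                queue.append((nr, nc))
    return mask

# "Click" on the bright 2x2 block in the top-right of a toy image.
img = np.array([
    [10, 10, 200, 200],
    [10, 10, 200, 200],
    [10, 10,  10,  10],
    [10, 10,  10,  10],
], dtype=np.uint8)
obj_mask = segment_from_point(img, seed=(0, 2))
```

The point of the sketch is the interface, not the method: one prompt in, one object mask out, with no task-specific retraining in between.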

Voicebot founder Bret Kinsella tried out a demo of SAM, though there is no official product yet. In that respect, SAM resembles Make-A-Video, the generative text-to-video AI tool Meta demonstrated last year. That said, the demo and dataset suggest that Meta wants developers to move in that direction.

“By making it more accessible, more people may start paying attention to what Meta is trying to achieve,” Kinsella noted in the Synthedia newsletter. Also, given the popularity of generative AI solutions, observers will gain a greater appreciation of just how wide-ranging today’s AI technologies are. That, in turn, helps drive stories about and awareness of Meta’s activities in the field.

Meta dives into synthetic media with text-to-video AI generator Make-A-Video

Stanford University Closes Meta LLaMA-Based Alpaca-Generated AI Demo Over Safety and Cost Issues

Meta stops demo of Academic Paper Generator AI after 3 days







