New machine learning tool helps turn 2D sketches into realistic 3D models

Machine Learning


Machine learning tools designed by scientists at Carnegie Mellon University’s Robotics Institute (RI) may help novice and professional designers create 3D virtual models of everything from bespoke home furnishings to video game content.

Pix2pix3d enables anyone to create realistic 3D representations of 2D sketches using generative artificial intelligence tools similar to those that power popular AI photo generation and editing applications. Image credit: Carnegie Mellon University.

Pix2pix3d allows anyone to create realistic 3D representations of their 2D sketches with the help of generative artificial intelligence tools similar to those that power popular AI photo generation and editing applications.

Our research goal is to make content creation accessible to more people through the power of machine learning and data-driven approaches.

Jun-Yan Zhu, Assistant Professor, Department of Computer Science, Carnegie Mellon University

Zhu is a member of the pix2pix3d team.

In contrast to other tools that may only be able to create 2D images, pix2pix3d is a 3D-aware conditional generative model: users can input 2D sketches and label maps, such as edge maps and segmentation maps, to create more sophisticated images.

In addition, pix2pix3d synthesizes a 3D volumetric representation of geometry, labels, and appearance that can be rendered from several perspectives to create realistic, photograph-like three-dimensional images.
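To make the idea concrete, here is a minimal, hypothetical Python sketch of that pipeline shape: a 2D label map conditions a generator that outputs a small 3D volume of colors, densities, and semantic labels, which is then rendered from a viewpoint. The class and function names are illustrative stand-ins, not the CMU team's actual code or API.

```python
# Hypothetical sketch of a 3D-aware conditional generator (not the real pix2pix3d code).
import torch
import torch.nn as nn

class ToyConditional3DGenerator(nn.Module):
    """Illustrative stand-in: maps a one-hot 2D label map to a small 3D feature volume."""
    def __init__(self, num_classes=3, grid=16):
        super().__init__()
        self.grid = grid
        self.num_classes = num_classes
        self.encode = nn.Sequential(
            nn.Conv2d(num_classes, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                      # global conditioning code
        )
        # Decode the code into a volume holding RGB (3), density (1), and label logits.
        self.decode = nn.Linear(32, (3 + 1 + num_classes) * grid ** 3)

    def forward(self, label_map):
        code = self.encode(label_map).flatten(1)          # (B, 32) conditioning vector
        vol = self.decode(code).view(-1, 3 + 1 + self.num_classes,
                                     self.grid, self.grid, self.grid)
        return vol                                        # (B, C, D, H, W) volume

def render_front_view(volume):
    """Crude orthographic render: alpha-composite colors along the depth axis."""
    rgb, density = volume[:, :3], volume[:, 3:4]
    weights = torch.softmax(density, dim=2)               # normalize along depth
    return (rgb * weights).sum(dim=2)                     # (B, 3, H, W) image

# Usage: a 64x64 segmentation map with 3 classes conditions one generated volume.
label_map = torch.zeros(1, 3, 64, 64)
label_map[:, 0] = 1.0                                     # background class everywhere
gen = ToyConditional3DGenerator()
image = render_front_view(gen(label_map))
print(image.shape)                                        # torch.Size([1, 3, 16, 16])
```

Because the intermediate representation is a 3D volume rather than a flat image, the same generated object can be rendered from other camera poses without running the generator again.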

If you can draw a sketch, you can create your own customized 3D model.

Kangle Deng, Robotics Institute PhD Candidate, Carnegie Mellon University

Deng was part of a research group with Zhu, Professor Deva Ramanan, and PhD student Gengshan Yang.

Pix2pix3d has been trained on datasets of cats, cars, and human faces, and the team is working to extend these capabilities. In the future, it could be used to design consumer products, for example by giving people the power to customize their home furnishings. Novice and professional designers alike could use it to adjust items in virtual reality environments and video games, add effects to movies, and more.

Once pix2pix3d generates a 3D image, users can modify it in real time by erasing and redrawing parts of the original 2D sketch. This feature gives users more freedom to adjust and refine their images without having to redo the entire sketch. The 3D model reflects the changes and displays them accurately from multiple perspectives.
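The following short Python sketch illustrates that erase-and-redraw workflow in spirit: the user edits only a patch of the 2D label map, and the 3D output is regenerated from the updated map rather than from a brand-new sketch. The `regenerate_3d` function and the class labels are assumptions for illustration, not the project's real API.

```python
# Hypothetical illustration of the erase-and-redraw editing loop.
import numpy as np

def regenerate_3d(label_map: np.ndarray) -> np.ndarray:
    """Stand-in for the generative model: here, just a deterministic toy volume."""
    depth = 8
    return np.repeat(label_map[np.newaxis, ...], depth, axis=0)  # (D, H, W) "volume"

HOOD_SQUARE, HOOD_ROUND = 1, 2

# Initial sketch: a 64x64 label map where a square patch marks the car hood.
sketch = np.zeros((64, 64), dtype=np.int64)
sketch[10:30, 20:44] = HOOD_SQUARE
volume_v1 = regenerate_3d(sketch)

# Edit: erase the square hood and redraw a rounded one; only this patch changes.
sketch[10:30, 20:44] = 0
yy, xx = np.ogrid[:64, :64]
sketch[(yy - 20) ** 2 + (xx - 32) ** 2 <= 12 ** 2] = HOOD_ROUND
volume_v2 = regenerate_3d(sketch)

print(volume_v1.shape, volume_v2.shape)  # both (8, 64, 64); the 3D output tracks the edit
```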

Its interactive editing features set pix2pix3d apart from other modeling tools, as they allow users to make adjustments quickly and efficiently. This capability is especially beneficial in areas such as manufacturing, where users can easily design, test, and adjust products.

For example, if a designer brought in a sketch of a car with a square hood, the model would produce a 3D rendering of that car. If the designer erases that part of the sketch and replaces the square hood with a round one, the 3D model updates instantly. The team plans to continue improving and enhancing this feature in the future.

Deng says even users with minimal artistic ability can achieve satisfactory results. The model can produce reasonable output from simple or rough sketches, and given highly accurate, detailed input such as edge maps or segmentation maps, it can produce highly sophisticated 3D images.

Our model is robust against user error.

Kangle Deng, RI PhD Candidate, Carnegie Mellon University

Deng said that even a 2D sketch that only roughly resembles a cat produces a 3D image of a cat.

3D-aware conditional image synthesis (pix2pix3D)

3D-aware conditional image synthesis. Video credit: Carnegie Mellon University.

Source: https://www.cs.cmu.edu/


