Solving the 2D-to-3D problem

The transformation from 2D to 3D remains one of the biggest challenges in computer vision, making it infeasible to develop a single, closed-form solution that can convert any set of images into a corresponding 3D asset.
This limitation stems primarily from incomplete input data: a few images of an object simply do not contain all the information necessary to infer its full 3D representation.
Rather, any viable solution must also incorporate prior knowledge about the distribution of the objects at hand — that is, it must be tailored to a specific problem domain. We have developed a complete framework that facilitates solving the 2D-to-3D problem across domains, covering data curation, generation and preparation, a computational pipeline, distributed training, and postprocessing, all designed to deliver timely, domain-specific solutions.


Major retailers across various sectors, including furniture, beauty, fashion, and footwear, have already begun reaping the benefits of 3D technologies in their online storefronts as well as in-store. 3D product representations and AR/VR technologies are bridging the gap between customers’ perceptions and the reality of the products by letting them virtually “try on” items or visualize products within their own spaces.

As augmented reality shopping becomes the standard shopping experience, it will unlock new value for retailers wherever a 3D model of a product is available to customers, including higher conversion rates, more time and money spent on the website or app, and lower return rates.

Our technology can transform entire product catalogs from 2D to 3D in a process that is fully automated, fast, and affordable, in contrast to expensive “pay-per-model” commercial offerings.


The growth of computing power has enabled the presentation, transmission, interaction, and rendering of photorealistic 3D models in real time. These advances have been put to heavy use in the gaming industry, particularly in AAA games, most of which are built in 3D.
One result has been a sharp rise in video game development costs, as well as in the number of 3D artists employed in the creation process.

The massive use of 3D assets in gaming is expected to keep growing while 3D content creation capacity remains limited, creating a need for new technologies to balance supply and demand.

By transforming projections, sketches, and/or concept art into initial 3D assets, 3D artists can work significantly more efficiently, improving the entire asset-creation process.
With automatic creation of certain 3D content, such as ordinary or background objects, artists can devote more time and resources to creating unique assets, like main characters and special artifacts.



In recent years, major efforts have been devoted to the research and development of hardware to create immersive experiences, namely AR/VR headsets. However, one of the primary barriers to the evolution of augmented and virtual reality is the lack of affordable 3D content that enables the development of useful applications and experiences.

Creating virtual and augmented reality experiences remains difficult since content creation is a lengthy, expensive process and is limited to subject matter experts.

Our scalable 3D content creation capability can solve the chicken-and-egg problem of insufficient content, making AR/VR compelling and useful for end users and boosting adoption in the years to come.


Intelligent autonomous systems based on visual perception, such as self-driving cars, robots, and drones, require spatial and semantic understanding of scenes and situations, along with massive amounts of 3D data to train their deep learning algorithms and to simulate real-world objects and scenarios. These systems are traditionally trained on data collected from their dedicated sensors (e.g., cameras, LiDAR). However, due to the time and cost of acquiring real-world footage, many companies also use synthetic simulation environments to train their systems. Creating highly realistic virtual scenes requires a massive number of 3D assets, many of which are still created manually today, limiting the variety of virtual objects and incurring high time and monetary costs.

Using the plethora of 2D images available today, many virtual objects can be created automatically, reducing the time and cost of building virtual environments.


Custom solutions

The use of 3D assets for applications across a wide range of verticals is constantly growing, and our platform can be customized for a variety of additional use cases. Customization is done in collaboration with the customer and entails understanding the problem domain and the specific use case in order to formulate a clear problem definition and specifications, including measures of accuracy.

Our deep learning infrastructure is designed to be domain-agnostic and can be trained on a variety of object classes, letting us quickly adapt it to new use cases based on customer needs.