At 3DFY.ai, we have taken state-of-the-art research several steps further to create a complete, scalable, and robust technology that solves the 2D-to-3D problem in the real world.
No priors or prerequisites are imposed on the lighting conditions.
Unlike academic works, which mostly operate on a single input image, our technology supports multiple input images.
We use the 2D images without knowing the camera parameters with which they were acquired.
The 3DFY.ai framework closes the loop between a trained model and the datasets used for training and validation.
This is achieved through a semi-automated solution for synthetic data generation, which facilitates fast iterations and significantly reduces the time and cost needed to adapt our solution to new use cases.
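To make the idea concrete, here is a minimal sketch (in Python) of the kind of randomized scene sampling a synthetic data generator might perform. It is not our actual pipeline; the asset names, parameter names, and value ranges are illustrative assumptions, and the sampled configurations would then be handed to an off-the-shelf renderer to produce image/ground-truth pairs for training and validation.

```python
import json
import random
from dataclasses import dataclass, asdict

@dataclass
class RenderConfig:
    """One randomized synthetic render: camera pose, lighting, and asset id."""
    asset_id: str
    azimuth_deg: float       # camera azimuth around the object
    elevation_deg: float     # camera elevation above the ground plane
    distance: float          # camera distance from the object center
    light_intensity: float   # strength of the key light
    light_azimuth_deg: float

def sample_configs(asset_ids, renders_per_asset=32, seed=0):
    """Sample randomized camera and lighting parameters for each 3D asset."""
    rng = random.Random(seed)
    configs = []
    for asset_id in asset_ids:
        for _ in range(renders_per_asset):
            configs.append(RenderConfig(
                asset_id=asset_id,
                azimuth_deg=rng.uniform(0.0, 360.0),
                elevation_deg=rng.uniform(5.0, 60.0),
                distance=rng.uniform(1.5, 3.0),
                light_intensity=rng.uniform(0.5, 2.0),
                light_azimuth_deg=rng.uniform(0.0, 360.0),
            ))
    return configs

if __name__ == "__main__":
    # Hypothetical asset ids; each config would be passed to a renderer
    # to produce one synthetic training image with known ground truth.
    configs = sample_configs(["chair_001", "chair_002"])
    with open("render_configs.json", "w") as f:
        json.dump([asdict(c) for c in configs], f, indent=2)
```

Randomizing the camera and lighting parameters in this way is what lets a single asset library cover many viewpoints and imaging conditions with little manual effort.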
We use “in the wild” images as input.
Because the input consists of only a few images, some parts of the object may not be visible in any of them. 3DFY.ai technology fills in these occluded areas to complete the 3D models.
Since 3DFY.ai models and datasets are typically large, and our process is designed around bootstrapping our datasets and re-training our networks, training and data preparation workloads can dominate the time and cost involved in adapting to new domains.
Therefore, we created custom infrastructure for distributed processing over fleets of efficient cloud instances, which keeps these computations quick and cost-effective.
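As a rough illustration of the fan-out pattern behind such infrastructure (and not a description of our actual system), the sketch below shards independent data-preparation jobs across a pool of workers. A local process pool stands in for a fleet of cloud instances, and prepare_sample is a hypothetical placeholder for the real workload.

```python
from concurrent.futures import ProcessPoolExecutor, as_completed

def prepare_sample(job_id: int) -> dict:
    """Placeholder for one unit of work (e.g., rendering or preprocessing a
    single training sample). In production this would run on a cloud
    instance rather than a local process."""
    # ... heavy, independent computation goes here ...
    return {"job_id": job_id, "status": "done"}

def run_distributed(num_jobs: int, max_workers: int = 8):
    """Fan independent jobs out to a pool of workers and collect results.
    The same pattern maps onto a cloud batch or work-queue service."""
    results = []
    with ProcessPoolExecutor(max_workers=max_workers) as pool:
        futures = [pool.submit(prepare_sample, i) for i in range(num_jobs)]
        for future in as_completed(futures):
            results.append(future.result())
    return results

if __name__ == "__main__":
    print(len(run_distributed(num_jobs=100)), "jobs completed")
```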
Methods for generating 3D models have existed for decades and can be roughly divided into three categories: manual creation, 3D scanners, and photogrammetry.
Manual creation of 3D content using dedicated software (e.g., 3ds Max) is arguably the most ubiquitous method at the moment. A professional 3D modeler uses a few 2D images as guiding data and invests a significant amount of work in specialized software to build the model. Due to the expertise and labor required, this approach is relatively expensive and does not scale.
3D scanners are dedicated devices designed to capture geometric information. They typically combine an active illumination component, a photon sensor, supporting electronics, and computational software that reconstructs the 3D geometry from raw measurements. However, besides requiring dedicated hardware, 3D scanning typically imposes limitations on the imaging conditions (e.g., ambient lighting), object reflectance properties (e.g., specularity), achievable resolution, and/or the size of the scanned object. These limitations are inherent to the specific design and geometry of the scanner itself. In addition, obtaining a high-quality, 360-degree model with a commercial 3D scanner is usually a time-consuming and cumbersome task.
The term photogrammetry refers to the field of making measurements from 2D images (captured with a regular camera). Photogrammetric 3D reconstruction is the process of reconstructing the 3D geometry of an object from a relatively large number of images of the object, taken under different conditions (typically from different viewpoints), using primarily geometric considerations. Photogrammetry is a relatively popular method for 3D model creation, as it is capable of producing high-quality models under certain conditions. Its main limitation is the need for a relatively large number of images, from a few dozen to thousands depending on the model size and geometry, with overlap between the different views. Another major limitation is specular objects: at specular locations the image is saturated and therefore appears white, obscuring image features and breaking the assumption of photometric consistency.
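To make the overlap requirement concrete, the sketch below shows the first step of a typical photogrammetry (structure-from-motion) pipeline: detecting and matching local features between two overlapping photos with OpenCV. The image paths are placeholders, and a full pipeline would go on to estimate camera poses and triangulate the matched points into 3D. Saturated specular highlights carry few distinctive features, which is exactly why specular objects are problematic for this approach.

```python
import cv2

def match_features(path_a: str, path_b: str, max_matches: int = 200):
    """Detect ORB keypoints in two overlapping photos and match them.
    Matched features across views are what photogrammetry triangulates
    into 3D points; without view overlap, no matches can be found."""
    img_a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create(nfeatures=2000)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)

    # Brute-force Hamming matcher with cross-checking for robustness.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)
    return kp_a, kp_b, matches[:max_matches]

if __name__ == "__main__":
    # Placeholder paths: two photos of the same object from nearby viewpoints.
    kp_a, kp_b, matches = match_features("view_01.jpg", "view_02.jpg")
    print(f"{len(matches)} feature matches between the two views")
```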