3D model creation is highly dependent on the data used for training. At 3DFY.ai, we have developed a data-centric infrastructure and a related methodology designed for rapid adaptation to new applications and use cases.
The overall pipeline comprises four main building blocks: input module, data engine, core computational pipeline, and output validation.
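To make the flow of these four building blocks concrete, here is a minimal sketch of how they might compose. All function names, data shapes, and behaviors below are illustrative assumptions, not 3DFY.ai's actual implementation:

```python
# Hypothetical sketch of the four-block pipeline; every name and data
# structure here is invented for illustration.

def input_module(paths):
    # Ingest source images and wrap them in a uniform record.
    return [{"source": p} for p in paths]

def data_engine(images):
    # Attach training/augmentation metadata to each record.
    return [{**img, "augmented": True} for img in images]

def core_pipeline(samples):
    # Produce a (mock) 3D model from the prepared samples.
    return {"model": "mesh", "num_inputs": len(samples)}

def output_validation(model):
    # Reject empty results before delivery.
    assert model["num_inputs"] > 0, "no valid inputs"
    return model

def run(paths):
    # The four blocks execute in sequence, each consuming the
    # previous block's output.
    return output_validation(core_pipeline(data_engine(input_module(paths))))

print(run(["a.jpg", "b.jpg"]))
```

The point of the sketch is the staged hand-off: each block has a single responsibility, so any one of them can be swapped or retrained without disturbing the others.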
3DFY.ai operates on any image format and makes no assumptions about the acquisition setup. In particular, it can handle images of complex scenes taken under arbitrary lighting conditions and camera setups.
The 3DFY.ai computational pipeline is designed to make the most of existing data and can utilize any number of input images for a given object. In fact, the only requirement is that the object to 3DFY appears in all input images.
The input module can process images from various sources, including network drives, databases, and even directly from websites.
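One simple way to support such heterogeneous sources is to dispatch on the URI scheme of each input. The sketch below is purely illustrative; the source categories and scheme mapping are assumptions, not the actual input module:

```python
from urllib.parse import urlparse

# Illustrative source dispatcher; the real input module is not public.
def resolve_source(uri):
    scheme = urlparse(uri).scheme
    if scheme in ("http", "https"):
        return "website"
    if scheme in ("s3", "gs"):
        return "database"
    # Bare paths (no scheme) are treated as network-drive locations.
    return "network_drive"

print(resolve_source("https://example.com/chair.png"))  # website
print(resolve_source("s3://bucket/chair.png"))          # database
print(resolve_source("/mnt/shared/chair.png"))          # network_drive
```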
The 3DFY.ai data engine is designed to efficiently generate and handle the different types of data needed to train our algorithmic pipeline. Our core AI models are trained on high-quality synthetic 3D models, which are typically slow to create and expensive to acquire.
Therefore, at the core of our data engine is the capability to procedurally generate additional 3D models in a highly automated manner, creating new datasets and enriching existing ones. This makes improving model performance far more cost-effective.
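Procedural generation of this kind typically works by sampling variations of a parameterized object template. The following is a hedged sketch of that idea; the object, its parameters, and the sampling ranges are all invented for illustration:

```python
import random

# Hypothetical procedural enrichment: sample parameter variations of a
# base object template to synthesize new 3D model specifications.
def generate_variants(base_params, n, seed=0):
    rng = random.Random(seed)  # seeded for reproducible datasets
    variants = []
    for i in range(n):
        variant = dict(base_params)
        # Jitter each dimension within a plausible (assumed) range.
        variant["leg_height"] = base_params["leg_height"] * rng.uniform(0.8, 1.2)
        variant["seat_width"] = base_params["seat_width"] * rng.uniform(0.9, 1.1)
        variant["id"] = f"chair_{i:04d}"
        variants.append(variant)
    return variants

# One hand-made base template fans out into an arbitrarily large dataset.
chair = {"leg_height": 0.45, "seat_width": 0.40}
dataset = generate_variants(chair, n=1000)
print(len(dataset))  # 1000
```

The economics follow directly from the pattern: the expensive manual effort is spent once on the template, and every additional training example is nearly free.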
Processing large 3D datasets is a computationally demanding task. 3DFY.ai mitigates this with purpose-built infrastructure that distributes the computation across large clusters of cloud machines, enabling large-scale dataset preparation at a fraction of the usual time and cost.
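The distribution pattern at the heart of such infrastructure is an embarrassingly parallel map over models. As a local stand-in for a cloud cluster, the sketch below uses a thread pool; the per-model work is a placeholder and the function names are assumptions:

```python
from concurrent.futures import ThreadPoolExecutor

def preprocess(model_id):
    # Placeholder for expensive per-model work, e.g. remeshing,
    # rendering training views, or computing normals.
    return (model_id, "ok")

def prepare_dataset(model_ids, workers=8):
    # Fan the work out across workers; on a real system the same
    # map-style pattern scales out to cloud batch jobs instead of
    # local threads.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(preprocess, model_ids))

results = prepare_dataset([f"model_{i}" for i in range(100)])
print(len(results))  # 100
```

Because each model is processed independently, adding machines shortens wall-clock time almost linearly, which is what makes large dataset preparation cheap as well as fast.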