3D model creation is highly dependent on the data used for training. At 3DFY.ai, we have developed a data-centric infrastructure and accompanying methodology designed for rapid adaptation to new applications and use cases.
The overall pipeline comprises four main building blocks: input module, data engine, core computational pipeline, and output validation.
The 3DFY.ai input module can ingest both imagery and textual data.
When images are received as inputs, any image format is accepted, and no assumptions are made about the acquisition setup. In particular, 3DFY.ai can handle any number of images, even of complex scenes captured under arbitrary lighting conditions and camera setups. In fact, the only requirement is that the object to 3DFY appears in all input images.
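Format-agnostic ingestion of this kind can be sketched by identifying each file from its leading bytes rather than trusting extensions or acquisition metadata. The sketch below is purely illustrative (the function names and the subset of recognized formats are our own assumptions, not 3DFY.ai's actual implementation):

```python
from pathlib import Path

# Magic-number prefixes for a few common image formats (illustrative subset;
# a production system would recognize many more).
_SIGNATURES = {
    b"\x89PNG\r\n\x1a\n": "png",
    b"\xff\xd8\xff": "jpeg",
    b"GIF87a": "gif",
    b"GIF89a": "gif",
    b"BM": "bmp",
}

def sniff_format(data: bytes):
    """Identify an image format from its leading bytes, making no
    assumptions about the acquisition setup or file naming."""
    for sig, fmt in _SIGNATURES.items():
        if data.startswith(sig):
            return fmt
    return None

def ingest_images(paths):
    """Accept any number of images in any recognized format; the only
    requirement is that the target object appears in all of them."""
    batch = []
    for path in paths:
        data = Path(path).read_bytes()
        fmt = sniff_format(data)
        if fmt is not None:  # silently skip unreadable files
            batch.append((str(path), fmt, data))
    return batch
```

Sniffing the bytes directly keeps the pipeline robust to mislabeled or extension-less uploads, which matters when no constraints are placed on how users capture or export their images.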
When receiving textual data, the input module can process any natural-language description, seamlessly overcoming typos and grammatical mistakes and elegantly ignoring any inappropriate language.
The 3DFY.ai data engine is designed to efficiently generate and handle the different types of data needed to train our algorithmic pipeline. Our core AI models are trained on high-quality synthetic 3D models, which are typically slow to create and expensive to acquire.
Therefore, at the core of our data engine is the capability to procedurally generate additional 3D models in a highly automated manner, creating new datasets and enriching existing ones. This greatly facilitates improving model performance in a cost-effective way.
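The idea behind procedural generation can be illustrated with a toy example: sample randomized parameters and emit a mesh for each sample, so new training assets cost essentially nothing. This is a minimal sketch of the general technique, not 3DFY.ai's actual generator, and all names in it are hypothetical:

```python
import random

def make_box_mesh(width, height, depth):
    """Return (vertices, faces) for an axis-aligned box, the simplest
    procedural 3D asset. Real generators compose many parameterized
    primitives into full objects."""
    w, h, d = width / 2, height / 2, depth / 2
    # 8 corners: every sign combination of the half-extents.
    vertices = [(x, y, z) for x in (-w, w) for y in (-h, h) for z in (-d, d)]
    # 6 quad faces indexing the corners above (one pair per axis).
    faces = [
        (0, 1, 3, 2), (4, 6, 7, 5),  # -x / +x
        (0, 4, 5, 1), (2, 3, 7, 6),  # -y / +y
        (0, 2, 6, 4), (1, 5, 7, 3),  # -z / +z
    ]
    return vertices, faces

def generate_dataset(n, seed=0):
    """Procedurally sample n meshes with randomized dimensions -- new
    training data at near-zero marginal cost. Seeding makes every
    generated dataset reproducible."""
    rng = random.Random(seed)
    return [
        make_box_mesh(rng.uniform(0.5, 2.0),
                      rng.uniform(0.5, 2.0),
                      rng.uniform(0.5, 2.0))
        for _ in range(n)
    ]
```

Because the generator is parameterized and seeded, a dataset can be regenerated, extended, or rebalanced on demand, which is what makes procedural enrichment so cost-effective compared to hand-modeling assets.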
Processing large 3D datasets is a burdensome computational task. 3DFY.ai mitigates this with native infrastructure that distributes the computation across large clusters of cloud machines, enabling large datasets to be prepared at a fraction of the usual time and cost.
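The distribution pattern is essentially a map over independent per-asset tasks. The sketch below shows that pattern with a local worker pool; in the production setting each worker would be a cloud machine in a cluster. The function names and the placeholder task are our own assumptions:

```python
from concurrent.futures import ThreadPoolExecutor

def prepare_asset(asset_id):
    # Placeholder for the heavy per-asset work (rendering, remeshing,
    # texture baking, ...) -- a hypothetical stand-in, not the real pipeline.
    return asset_id, f"prepared-{asset_id}"

def prepare_dataset(asset_ids, workers=8):
    """Fan independent per-asset tasks out over a worker pool. A cluster
    deployment applies the same map-style pattern, with each task
    dispatched to a separate cloud machine instead of a local thread."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(prepare_asset, asset_ids))
```

Because assets are processed independently, preparation time scales down roughly linearly with the number of workers, which is what lets cluster-scale execution cut both wall-clock time and cost.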