How It Works

 

How do we generate 3D models out of small data?
Automating the generation of 3D models from just a few words, or even a few images, is a difficult problem to solve. The challenge arises because such inputs lack a comprehensive, detailed representation of the object. It becomes even harder if you want to create 3D models at the same quality level a human creator would deliver, and we do.
At 3DFY.ai, we tackle this challenge by separating the AI pipeline into object categories. We use category-specific datasets to train our AI-based 3D generation pipeline for each class of objects. Using this methodology, our DL models learn category-specific priors, so they can complete any information missing from the inputs in a plausible manner.
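Conceptually, the first decision is which category-specific pipeline should handle a request. The sketch below illustrates that idea in Python; the category list, model identifiers, and keyword matching are placeholders for illustration only, not our production logic.

```python
# Illustrative only: routing an input to a category-specific generator
# so that learned priors can fill in details the input does not provide.

CATEGORY_MODELS = {
    # Placeholder model identifiers, one per supported category.
    "lamp": "lamp_generator",
    "sofa": "sofa_generator",
    "dresser": "dresser_generator",
    "ottoman": "ottoman_generator",
}

def route_to_pipeline(request: str) -> str:
    """Pick the category-specific pipeline for a given request.

    In practice the category could come from the user or a classifier;
    simple keyword matching stands in for that here.
    """
    text = request.lower()
    for category, model_name in CATEGORY_MODELS.items():
        if category in text:
            return model_name
    raise ValueError("Unsupported object category")
```

Keeping each generator scoped to a single category is what lets it learn strong priors for that category.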
 

3DFY in three simple steps:


1. Pre-processing

Whether the input is a text description or a few images, the first thing we do is to standardize and clean up the inputs.

For example, with image data we identify the object of interest, crop it out of the image, and remove the background.

For textual inputs, we typically remove meaningless tokens and convert the remaining text into a more readily machine-readable format.
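As a rough illustration of this step, here is a minimal Python sketch with placeholder logic; the centered crop box and the stopword list are stand-ins for the trained detection, segmentation, and language models used in practice.

```python
import re
from PIL import Image

def preprocess_image(path: str) -> Image.Image:
    """Standardize an input photo: crop to the object of interest.

    Placeholder logic: a real system would use a trained detector and a
    segmentation model to isolate the object and remove the background;
    a fixed central crop stands in for that here.
    """
    img = Image.open(path).convert("RGB")
    w, h = img.size
    bbox = (w // 4, h // 4, 3 * w // 4, 3 * h // 4)  # stand-in for a detected box
    return img.crop(bbox)

def preprocess_text(description: str) -> list[str]:
    """Standardize a text prompt: lowercase, drop filler, keep useful tokens."""
    tokens = re.findall(r"[a-z0-9]+", description.lower())
    stopwords = {"a", "an", "the", "please", "of"}  # example filler tokens
    return [t for t in tokens if t not in stopwords]
```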



2. Analysis

The next step in our pipeline involves fine-grained analysis of the input data, regardless of its format, to extract a compact, meaningful, and interpretable representation of its content, which we term the object code.
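The sketch below shows the general shape of such a representation; the fields and the fixed placeholder values are illustrative assumptions, not the actual object code.

```python
from dataclasses import dataclass, field

@dataclass
class ObjectCode:
    """Compact, interpretable description of the object (illustrative fields)."""
    category: str                                                   # e.g. "lamp"
    shape_params: list[float] = field(default_factory=lambda: [0.0] * 16)
    material_params: list[float] = field(default_factory=lambda: [0.0] * 8)

def analyze(tokens: list[str]) -> ObjectCode:
    """Toy analysis step: derive an object code from pre-processed tokens.

    A real system runs a trained encoder here; fixed placeholder values
    stand in for its output to show the shape of the result.
    """
    category = tokens[0] if tokens else "unknown"
    return ObjectCode(category=category)
```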


3. Synthesis

In the final stage of our pipeline, we generate the actual 3D asset from the object code, together with control parameters (e.g. level of detail, texture resolution) that affect the representation of the object at hand.
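The following sketch conveys the idea, reusing the ObjectCode structure from the analysis sketch above; the box geometry and the trimesh library are stand-ins for the real generator and are not part of our pipeline.

```python
from dataclasses import dataclass

import trimesh  # stand-in geometry library for this sketch

@dataclass
class ControlParams:
    level_of_detail: int = 2        # e.g. 0 = low poly, 3 = high poly
    texture_resolution: int = 1024  # texture map size in pixels (unused in this toy sketch)

def synthesize(code, params: ControlParams) -> str:
    """Toy synthesis step: emit a placeholder mesh driven by the object code."""
    # A real generator turns the object code into detailed geometry and
    # textures; a simple box stands in for the generated shape here.
    width, height, depth = (abs(v) + 0.5 for v in code.shape_params[:3])
    mesh = trimesh.creation.box(extents=[width, height, depth])
    out_path = f"{code.category}_asset.obj"
    mesh.export(out_path)  # write the asset to disk
    return out_path
```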

Here are a few examples of 3D models created by the 3DFY.ai pipeline for four different categories: lamps, sofas, dressers, and ottomans.