
So let's say we have a model that generates a 3D point cloud given some context information. The generative model is based on an encoder/decoder architecture.

The model is typically trained on inputs of 10,000 points and outputs 2,500 points. Inputs are sampled from a triangle mesh (STL files).
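For concreteness, the sampling step can be done with a library like trimesh, or directly in NumPy with area-weighted barycentric sampling. The sketch below is a minimal stand-alone version (the function name `sample_surface` and the fixed seed are my choices, not part of the original setup):

```python
import numpy as np

def sample_surface(vertices, faces, n_points, seed=0):
    """Area-weighted uniform point sampling from a triangle mesh.

    vertices: (V, 3) float array, faces: (F, 3) int index array.
    Returns an (n_points, 3) array of points on the surface.
    """
    rng = np.random.default_rng(seed)
    tris = vertices[faces]  # (F, 3, 3): the three corners of each face
    # Triangle areas via the cross product of two edge vectors.
    cross = np.cross(tris[:, 1] - tris[:, 0], tris[:, 2] - tris[:, 0])
    areas = 0.5 * np.linalg.norm(cross, axis=1)
    # Pick faces with probability proportional to their area.
    idx = rng.choice(len(faces), size=n_points, p=areas / areas.sum())
    # Uniform barycentric coordinates inside each chosen triangle.
    u = rng.random(n_points)
    v = rng.random(n_points)
    flip = u + v > 1.0
    u[flip], v[flip] = 1.0 - u[flip], 1.0 - v[flip]
    t = tris[idx]
    return (t[:, 0]
            + u[:, None] * (t[:, 1] - t[:, 0])
            + v[:, None] * (t[:, 2] - t[:, 0]))
```

The area weighting matters: without it, points cluster in regions that happen to be tessellated with many small triangles, which is exactly the resolution artifact described below.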

Now suppose we have two input meshes, one with a resolution of 250k triangles and one with a resolution of 100k triangles. It wouldn't be wise to sample 10k points from both, due to the resolution variance.

So if we want to maintain a ratio, we would sample something like 10k points from the 100k mesh and 25k points from the 250k mesh.

On the other hand, it's really difficult for models to run on 25k points, since training involves computing pairwise distances and the Chamfer loss, which scale with the product of the two point counts.
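To make the cost concrete, here is a naive dense Chamfer distance (my own sketch, not the model's actual loss). The intermediate distance matrix is N×M; at 25k×25k in float64 that alone is roughly 5 GB, which is why large point counts hurt:

```python
import numpy as np

def chamfer_distance(a, b):
    """Naive symmetric Chamfer distance between two point clouds.

    a: (N, 3), b: (M, 3). Materializes the full (N, M) matrix of
    pairwise squared distances, so memory grows as O(N * M).
    """
    d = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)  # (N, M)
    # Mean nearest-neighbor distance in both directions.
    return d.min(axis=1).mean() + d.min(axis=0).mean()
```

Practical implementations chunk this computation or use a KD-tree for the nearest-neighbor queries, but the quadratic pairing is inherent to the loss.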

The idea is to make the model resolution invariant. One way would be to feed different-resolution data into the model as training examples. And here I was wondering about the best way to batch this data (meshes with different resolutions) together in one training set.
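One common way to batch variable-size point clouds is to pad each batch to the largest cloud and carry a boolean mask so that padded points can be excluded from the loss. A minimal sketch (the function name `pad_collate` is hypothetical, but the pattern matches what a PyTorch `collate_fn` would do):

```python
import numpy as np

def pad_collate(clouds, pad_value=0.0):
    """Pad a list of (N_i, 3) point clouds to a common length.

    Returns:
      batch: (B, N_max, 3) array, shorter clouds padded with pad_value.
      mask:  (B, N_max) boolean array, True where points are real,
             so padded entries can be masked out of pairwise-distance
             and Chamfer computations.
    """
    n_max = max(len(c) for c in clouds)
    batch = np.full((len(clouds), n_max, 3), pad_value, dtype=np.float64)
    mask = np.zeros((len(clouds), n_max), dtype=bool)
    for i, c in enumerate(clouds):
        batch[i, : len(c)] = c
        mask[i, : len(c)] = True
    return batch, mask
```

An alternative is bucketing: group meshes of similar resolution into the same batch so padding waste stays small, or concatenate all clouds into one flat array with per-cloud offsets, as graph-style libraries do.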

How would you handle resolution variance?

Best, KK
