Converting massive point clouds into VR scenes under Unity – Virtual Interaction Room part 10


If you want to catch up with earlier episodes of this blog article series, here is a link collection:

1. value proposition and assets

2. usage scenarios and features

3. moderation formats

4. market overview and practical experience

5. “Cheat sheet” for VR moderators

6. VR entry barriers

7. Testing Hypotheses

8. Integrating collaboration tools into Virtual Reality

9. Finding innovators for VR meetings

Converting point clouds for Unity

Since our focus customers require actual rooms as VR environments, we need to be able to ingest and prepare point clouds with sometimes billions of points as Unity scenes. Our test data contained 500,000 points for a small production site of our customer. There are several approaches to working with point clouds, as suggested by Marten Krull, and I guess there will be a few more after a bit of research.
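Before any of these tools come into play, the sheer point count usually has to come down. A common first step is voxel-grid downsampling: snap every point to a coarse 3D grid and keep one representative point per occupied cell. Here is a minimal sketch of the idea (my own illustration, not code from any of the tools mentioned here):

```python
from collections import defaultdict

def voxel_downsample(points, voxel_size):
    """Reduce a point cloud by merging all points that fall into the
    same cubic voxel. points: list of (x, y, z) tuples."""
    buckets = defaultdict(list)
    for p in points:
        key = tuple(int(c // voxel_size) for c in p)
        buckets[key].append(p)
    # One representative point (the centroid) per occupied voxel.
    return [
        tuple(sum(c) / len(ps) for c in zip(*ps))
        for ps in buckets.values()
    ]
```

With a suitable voxel size, this turns billions of raw scan points into a count a game engine can actually handle, at the price of losing fine detail.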

Working with Faro Scene

We planned to start with Meshlab, but it turned out to be easier to open the customer’s data in its original software, Faro Scene.

Faro Scene (apart from many other features) has a built-in mechanism for creating meshes from scanned point clouds, which we thought might come in handy. Unfortunately, the generated meshes have a limited triangle count, and the result was not sufficient for our use case:

Note that the picture above just shows the geometry of the mesh; it is also possible to import the textures, but we were more interested in the geometry at this point.

Point Cloud Viewer and Tools

After some experiments with Meshlab, we wondered whether other people out there might have the same problem. So we searched the Unity Asset Store and found Point Cloud Viewer and Tools. This tool is capable of importing point clouds directly into Unity. It does a nice job regarding level of detail (LOD) by increasing the point size and reducing the number of points the farther away they are from the viewer.
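The distance-based LOD idea can be illustrated with a small sketch. This is my own simplified model of the technique, not the asset’s actual implementation: the farther a chunk of points is from the viewer, the fewer points are kept and the larger each remaining point is drawn.

```python
def lod_params(distance, near=2.0, far=50.0,
               min_size=1.0, max_size=6.0):
    """Toy distance-based LOD for point clouds.
    Returns (keep_fraction, point_size) for a chunk of points at the
    given distance from the viewer. All thresholds are made-up values."""
    # Clamp distance into [near, far] and normalize to 0..1.
    t = min(max((distance - near) / (far - near), 0.0), 1.0)
    keep_fraction = 1.0 - 0.9 * t            # keep 100% near, 10% far
    point_size = min_size + (max_size - min_size) * t
    return keep_fraction, point_size
```

The effect is that distant regions cost far fewer vertices while the larger splats still cover the screen area, which is what makes huge clouds renderable at all.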

However, this feature requires DirectX 11 support. Since the target platform is still the Oculus Quest (see part 6 of this series to understand why), this was not an option. So for now, we are using the mesh generation feature of Point Cloud Viewer and Tools and store the mesh as a scene asset in Unity. Unfortunately, the generated meshes are still very large, so they won’t work on the Oculus Quest. As a consequence, we have to reduce the polygon count, which can be done with Meshlab or directly with the underlying algorithms natively in C# under Unity (e.g. the ball-pivoting algorithm for surface reconstruction).
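To give a feel for what such a polygon reduction does, here is a sketch of one of the simplest decimation techniques, vertex clustering (note: this is not the ball-pivoting algorithm, and it is a toy illustration, not production code): snap vertices to a coarse grid, merge those landing in the same cell, and drop triangles that collapse in the process.

```python
def cluster_decimate(vertices, triangles, cell_size):
    """Simplify a mesh by vertex clustering: vertices in the same grid
    cell are merged into one, and degenerate triangles are removed.
    vertices: list of (x, y, z); triangles: list of (i, j, k) indices."""
    cell_to_new = {}      # grid cell -> new vertex index
    remap = []            # old vertex index -> new vertex index
    new_vertices = []
    for v in vertices:
        cell = tuple(int(c // cell_size) for c in v)
        if cell not in cell_to_new:
            cell_to_new[cell] = len(new_vertices)
            new_vertices.append(v)   # keep the first vertex per cell
        remap.append(cell_to_new[cell])
    new_triangles = []
    for i, j, k in triangles:
        a, b, c = remap[i], remap[j], remap[k]
        if a != b and b != c and a != c:   # drop collapsed triangles
            new_triangles.append((a, b, c))
    return new_vertices, new_triangles
```

The coarser the grid cell, the fewer triangles survive; tools like Meshlab offer much smarter variants (e.g. quadric edge collapse) that preserve shape far better.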

So much for the technical part, but there is also news on our product-market-fit:

Update on “Finding Innovators for VR meetings”

As mentioned in part 9 of this series, we are searching for customers who are in the “innovators” segment of the innovation adoption lifecycle. I’ve been talking to a lot of customers over the last two weeks, and it seems that the main interest is among plant equipment manufacturers. So for the next two weeks, I will specifically target this customer segment, hoping to find my “innovators” there. The product will still be usable for other companies with the same needs, but to obtain further funding, we need to focus on one market segment.


Continue to part 11

Christian is an agile evangelist and has more than 15 years of experience working as a product manager for software applications in different domains. He is an expert in scaled organizations and complex projects. Promoting value-add and customer-first attitudes, Christian accompanies our customers through the complete codecentric portfolio as a partnership companion – from idea generation to maintenance of the final solution. Since 2017, he has been leading Mixed Reality projects, including the innovation projects “modulAR” (2018) and “VR Interaction Room” (2020).

