Reverse Photogrammetry


Introduction

Photogrammetry, in short, is the process of extracting 3D data from 2D source photos. It usually starts with capturing a series of images of the object of interest from many angles with a digital camera. These images are then fed into a photogrammetry application of choice, which aligns the photos so that a sparse point cloud and then a dense point cloud can be built.

After that we generate a mesh and, finally, project textures onto it.
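The steps above map almost one-to-one onto the Python API of Agisoft Metashape, the successor to the PhotoScan application used later in this article. This is only a sketch: the `Metashape` module is bundled with Metashape Pro and is not available in a standard Python installation, and default parameters are assumed throughout.

```python
# Sketch of the standard photogrammetry pipeline, expressed with the
# Agisoft Metashape Python API (assumption: default parameters suffice).
PIPELINE_STEPS = [
    "addPhotos",       # load the source images (or screenshots)
    "matchPhotos",     # detect and match features across images
    "alignCameras",    # solve camera positions -> sparse point cloud
    "buildDepthMaps",  # per-image depth estimation -> dense data
    "buildModel",      # generate the mesh
    "buildUV",         # unwrap the mesh
    "buildTexture",    # project the photos onto the mesh as textures
]

def run_pipeline(image_paths, output_path):
    """Run the full pipeline; requires Metashape Pro's bundled Python."""
    import Metashape  # only present inside Metashape's own interpreter

    doc = Metashape.Document()
    chunk = doc.addChunk()
    chunk.addPhotos(image_paths)
    chunk.matchPhotos()
    chunk.alignCameras()
    chunk.buildDepthMaps()
    chunk.buildModel()
    chunk.buildUV()
    chunk.buildTexture()
    chunk.exportModel(output_path)
```

In practice each step takes optional quality parameters (e.g. depth-map quality), which is where most of the processing-time/fidelity trade-off lives.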

Reverse engineering of geometry using Google Earth

But what if we can't, for some reason, capture the necessary photos? Maybe the subject is a remote geographical location that would require special equipment such as a drone. Luckily, we can get the next best thing to real photos by utilizing the Google Earth service.

Most people are familiar with Google Earth, so I won't waste your time explaining its details. Google has been applying photogrammetry technology for some years now to show us locations around the globe in glorious 3D. We can explore the Grand Canyon, visit the Acropolis or stop over at Graf-Wilhelm-Platz in Solingen.

When our project team was assigned the task of creating 3D content for a VR project around Graf-Wilhelm-Platz in Solingen, one of the first issues that popped up was how to accurately and efficiently recreate the building's surroundings with the iconic town centre. Modeling everything by hand would take too much time and money. Photogrammetry was one of the obvious choices, but we didn't have the means to fly around the town square and capture a set of photos. Then it occurred to us that we could simulate flying around by orbiting in Google Earth's 3D view, and replace taking photos with taking screenshots. Some quick tests confirmed that it actually works, so we jumped to production.
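Orbiting in Google Earth is done by hand, but it helps to plan the viewpoints first so that neighbouring screenshots overlap enough for feature matching (a common rule of thumb is 60–80 % overlap). The small helper below is a hypothetical planning aid, not part of any Google Earth API: it computes evenly spaced latitude/longitude positions on a circular orbit around a target, using a flat-earth approximation that is fine at building scale. The coordinates for Graf-Wilhelm-Platz are approximate.

```python
import math

def orbit_viewpoints(center_lat, center_lon, radius_m, count=36):
    """Evenly spaced (lat, lon) camera positions on a circular orbit.

    Uses a local flat-earth approximation (adequate for a radius of a
    few hundred metres). 36 positions = one screenshot every 10 degrees,
    which gives generous overlap between neighbouring views.
    """
    meters_per_deg_lat = 111_320.0  # rough WGS-84 value
    meters_per_deg_lon = meters_per_deg_lat * math.cos(math.radians(center_lat))
    points = []
    for i in range(count):
        angle = 2 * math.pi * i / count
        d_north = radius_m * math.cos(angle)  # metres north of the target
        d_east = radius_m * math.sin(angle)   # metres east of the target
        points.append((center_lat + d_north / meters_per_deg_lat,
                       center_lon + d_east / meters_per_deg_lon))
    return points

# Approximate coordinates of Graf-Wilhelm-Platz, Solingen
views = orbit_viewpoints(51.171, 7.083, radius_m=150, count=36)
```

Repeating the orbit at two or three different heights, as one would with a real drone flight, gives the photogrammetry solver the vertical parallax it needs for roofs and facades.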

I created quite a few screenshots, a.k.a. virtual photos, of all the buildings that we wanted to be present in our project. After processing them in Agisoft PhotoScan I was left with high-poly meshes that made a great 3D reference, but were unsuitable for displaying in our actual project due to their high computational cost. My teammate Stefan did a great job retopologizing most of those meshes, reducing their geometric complexity to a bare minimum while keeping the majority of the features, including textures.

Conclusion

To wrap things up: this unusual kind of reverse photogrammetry proved quite useful in our case and really made a difference. Without it, the goal of an accurate town square as the setting for the planned building would have been much harder to achieve.

Nikola Jankovic (Guest Author) has been working with the codecentric AR/VR team as a freelance 3D artist since 2018. His expertise includes modeling, texturing, rigging, animation and photogrammetry.


Christian is an agile evangelist with more than 15 years of experience as a product manager for software applications in different domains. He is an expert in scaled organizations and complex projects. Promoting value-add and customer-first attitudes, Christian accompanies our customers through the complete codecentric portfolio as a partner, from idea generation to maintenance of the final solution. Since 2017 he has been leading Mixed Reality projects, including the innovation projects "modulAR" (2018) and "VR Interaction Room" (2020).

Post by Christian Prison

