
Reverse Photogrammetry

30.9.2019 | 2 minutes of reading time

Introduction

Photogrammetry, in short, is the process of extracting 3D data from 2D photo imagery. It usually starts with capturing a series of images with a digital camera, covering our object of interest from many angles. These images are then fed into a photogrammetry application of choice, where the photos are aligned so we can build a sparse point cloud and then a dense point cloud.

After that we generate a mesh and, finally, project textures onto it.
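For readers who want to script these steps, here is a minimal sketch using the Python API of Agisoft Metashape (formerly PhotoScan), the tool mentioned later in this post. The file paths and project name are placeholders, and keyword arguments differ between API versions, so defaults are used throughout; treat it as an outline of the pipeline rather than a drop-in script.

```python
# Minimal sketch of the photogrammetry pipeline with the Agisoft Metashape
# (formerly PhotoScan) Python API. Paths are placeholders; keyword arguments
# vary between API versions, so defaults are used.
import glob
import Metashape

doc = Metashape.Document()
chunk = doc.addChunk()

# Load the source photos (or, in our case, Google Earth screenshots)
chunk.addPhotos(glob.glob("captures/*.jpg"))

# Align the photos: feature matching plus camera alignment,
# which produces the sparse point cloud (tie points)
chunk.matchPhotos()
chunk.alignCameras()

# Densify: depth maps, then the dense point cloud
# (buildDenseCloud() was renamed buildPointCloud() in the 2.x API)
chunk.buildDepthMaps()
chunk.buildDenseCloud()

# Generate the mesh, unwrap it and project textures onto it
chunk.buildModel()
chunk.buildUV()
chunk.buildTexture()

doc.save("town_square.psx")
```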

Reverse engineering of geometry using Google Earth

But what if we can’t, for some reason, capture the necessary photos? Maybe the subject is a remote geographical location that requires special equipment like drones. Luckily, we can get the next best thing to real photos by utilizing the Google Earth service.

Most people are familiar with Google Earth, so I won’t waste your time explaining its details. Google has been using photogrammetry technology for some years now to show us geographic locations around the globe, which we can enjoy in glorious 3D. We can explore the Grand Canyon, visit the Acropolis or stop over at Graf-Wilhelm-Platz in Solingen.

When our project team was assigned the task of creating 3D content for a VR project around Graf-Wilhelm-Platz in Solingen, one of the first issues that popped up was how to accurately and efficiently recreate the building’s surroundings with its iconic town centre. Modeling everything by hand would take too much time and money. Photogrammetry was one of the obvious choices, but we didn’t have the means to fly around the town square and capture a bunch of photos. Then it came to us that we could simulate flying around by orbiting in Google Earth’s 3D view, and taking photos could be replaced by taking screenshots. Some quick tests confirmed that this actually works, so we jumped into production.
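We captured the screenshots by hand, but the idea is easy to automate. The following is a rough sketch (not the workflow we actually used) that saves a full-screen capture every few seconds with the pyautogui package while you orbit the target in Google Earth’s 3D view; the interval, count and output folder are arbitrary example values.

```python
# Rough sketch (not the workflow we actually used): save a screenshot every
# few seconds while manually orbiting the target in Google Earth's 3D view.
# Requires the pyautogui package; interval and count are example values.
import os
import time
import pyautogui

CAPTURE_COUNT = 60      # how many "virtual photos" to take
INTERVAL_SECONDS = 3    # time to orbit a little further between captures

os.makedirs("captures", exist_ok=True)

for i in range(CAPTURE_COUNT):
    pyautogui.screenshot(f"captures/orbit_{i:03d}.png")
    time.sleep(INTERVAL_SECONDS)
```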

I created quite a few screenshots, a.k.a. virtual photos, for all the buildings that we wanted to be present in our project. After processing those in Agisoft PhotoScan I was left with high-poly meshes that were a great 3D reference, but they were unsuitable for display in our actual project due to their high computational cost. My teammate Stefan did a great job retopologizing most of those meshes, reducing their geometric complexity to a bare minimum while keeping the majority of the features, including textures.
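Stefan’s retopology was manual work in a DCC tool. As a rough point of comparison only, purely automatic decimation can also cut the polygon count, though with far less control over topology and texture quality. A minimal Blender Python sketch (Blender 2.8+, assuming the high-poly mesh is already imported and active):

```python
# Minimal Blender Python sketch: automatic decimation of the active object.
# This is NOT the manual retopology described above, just a quick way to
# reduce polygon count; the ratio is an arbitrary example value.
import bpy

obj = bpy.context.active_object                        # the imported high-poly mesh
mod = obj.modifiers.new(name="Decimate", type='DECIMATE')
mod.ratio = 0.05                                        # keep roughly 5% of the faces
bpy.ops.object.modifier_apply(modifier=mod.name)
```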

Conclusion

To wrap things up: this unusual kind of reverse-engineering photogrammetry process proved to be quite useful in our case and it really made a difference. Without it, the goal of having an accurate town square as the setting for the planned building would have been that much harder to achieve.

Nikola Jankovic (Guest Author) has been working with the codecentric AR/VR team as a freelance 3D artist since 2018. His expertise includes modeling, texturing, rigging, animation and photogrammetry.

