In all my rearranging I’ve inadvertently created a lot of broken links… In fact, all of my old blog posts from the “architectural photogrammetry” blog that I maintained on WordPress for a while are gone. And people are arriving at this website looking for some of the old posts. Therefore I will try to “re-up” them here as time permits.
I’ll start by posting some of the images that were associated with the old blog. These were collected in an effort to describe how photogrammetry works (a tall order!). Not including the Dürer drawings, this work dates from the early 1990s.
I’ve long loved this drawing which depicts a physical means to create a perspective image of a three dimensional object as perceived from a single point in space. I doubt that words describing what you are looking at do a better job than simply studying the image.
The ideas contained in this drawing are more or less replicated in a camera. The big difference with a camera, however, is that the 2D plane onto which the perspective image is captured is not located between the point of view and the object (as shown above) but projected behind the focal point onto film/digital sensors. The point of view (the eye hook mounted to the wall, above) for a camera is the focal point of the glass lens.
Here are some more diagrams by Albrecht Dürer:
In each case we see that an accurate perspective is the result of a picture plane and focal point that have a fixed relationship to one another.
This is how cameras record images of 3D space. Light is reflected off of the object, passes through a focal point, and is cast onto a planar surface. The key to photogrammetry is that when two such documents exist – and the relationship between the focal point of the lens and the picture plane is precisely understood – the three dimensional conditions of the subject captured in the two images can be “back calculated” through triangulation.
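The geometry of that focal-point projection can be sketched in a few lines of code. This is a minimal, idealized pinhole model, not the optics of any real lens; the function name and the numbers in the example are my own, for illustration only:

```python
import numpy as np

def project(point_3d, focal_length):
    """Project a 3D point (in camera coordinates, z pointing away
    from the camera) onto the picture plane of an idealized
    pinhole camera whose focal point sits at the origin."""
    x, y, z = point_3d
    # Similar triangles: the ray from the point through the focal
    # point crosses the picture plane at (f*x/z, f*y/z).
    return np.array([focal_length * x / z, focal_length * y / z])

# A point 10 units in front of the camera, offset 1 right and 2 up,
# seen through a hypothetical 50 mm (0.05 m) lens:
image_point = project(np.array([1.0, 2.0, 10.0]), 0.05)
print(image_point)  # → [0.005 0.01 ]
```

Note that depth is lost in the division by z: infinitely many 3D points land on the same image point, which is exactly why a second photograph is needed to recover the third dimension.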
For example, here are two images of the same building:
If we can pinpoint a series of precise locations on the picture plane that correspond to the same spot on the building, then we are starting to construct a three dimensional model of the two camera stations and their relationship to one another at the time the photos were shot.
In the “old days” this was done by putting the images onto a digitizing tablet and locating the points with a digitizing ‘puck’ that has a ‘loupe’ mounted onto it.
Once a handful of such measurements are collected, a software application can create a 3D model from each image by projecting rays from the picture plane back through the focal point towards the actual object or subject of each photo.
Then the individual models are compared, making sure that the rays projected toward the same point locations on the building will intersect. This, too, is conducted by a software application that can mathematically solve multiple unknown values simultaneously in an operation known as a ‘bundle adjustment’.
Once the camera stations’ positions relative to one another are known, one can extract the three dimensional value of intersections created by locating the same point on the building in at least two images. These measurements can then be used to create measured drawings to describe the subject, which in turn can become construction documents used for preservation.
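The intersection step above can be sketched as a small least-squares problem. Assuming the camera stations (focal points) and the ray directions have already been recovered, a hypothetical `intersect_rays` helper finds the single 3D point closest to all the rays (in practice the rays never intersect exactly, so “closest to” is the right framing):

```python
import numpy as np

def intersect_rays(origins, directions):
    """Return the 3D point minimizing the sum of squared
    perpendicular distances to a set of rays, each given by an
    origin (camera station) and a direction.

    Each ray is p = o + t*d; the optimum satisfies the linear
    system  sum_i (I - d_i d_i^T) p = sum_i (I - d_i d_i^T) o_i.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = d / np.linalg.norm(d)          # unit direction
        M = np.eye(3) - np.outer(d, d)     # projects off the ray axis
        A += M
        b += M @ o
    return np.linalg.solve(A, b)

# Two hypothetical camera stations both sighting the point (1, 2, 5):
target = np.array([1.0, 2.0, 5.0])
o1 = np.array([0.0, 0.0, 0.0])
o2 = np.array([4.0, 0.0, 0.0])
p = intersect_rays([o1, o2], [target - o1, target - o2])
print(np.round(p, 6))  # recovers the target point, [1. 2. 5.]
```

With noisy real-world measurements the two rays pass near each other rather than through a common point, and this least-squares answer splits the difference; adding more photographs simply adds more rays to the sum.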
Happily, nowadays the process is practically automatic, and more precise than ever. But hopefully these images will help illustrate what is happening under the hood, so to speak.