This post is part of a meta-series. Click here for a list of all posts in this series.
Photogrammetry has been a major interest of mine for a number of years now, but all of my efforts toward making use of it as an artistic tool have thus far met with failure. None of the open-source, free, or even paid solutions either work or do what I want.1 I have designs on cooking up a program of my own at some point that does it all, but haven’t really set aside the time (hah!) to work something up.
Imagine my delight when I discovered that Blender could do some of what I wanted, natively.
It’s got major restrictions, though: namely, it only solves for a single camera (i.e. one focal length, one sensor size). Mingling images from different cameras, even if the various properties of those images are known2, is a no-go. That put me in a bit of a pickle, because I have a ton of Stormtrooper helmet reference photos, but very few from the same camera and even fewer that present a good “turntable” set. Fortunately, I did have one set, complete with full EXIF data that I could use to set the correct camera properties!
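If anyone wants to do the same thing from a script rather than the Movie Clip Editor’s camera panel, those EXIF-derived values plug straight into the clip’s tracking camera. A minimal sketch, assuming the sequence is already loaded as a movie clip (the clip name and numbers are placeholders, not my actual values):

```python
import bpy

# Assumes the nine-image sequence is already loaded as a movie clip; the
# clip name and the numeric values below are placeholders, not real ones.
clip = bpy.data.movieclips["helmet_turntable"]
cam = clip.tracking.camera

cam.focal_length = 35.0   # mm, straight from the EXIF FocalLength tag
cam.sensor_width = 23.6   # mm, usually easier to get from a spec sheet than from EXIF
cam.pixel_aspect = 1.0    # square pixels for a typical still camera
```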

Of course, it was only nine images, with a lot of movement between frames. Blender couldn’t hope to solve that on its own. So, I spent hours and hours every night tracking points across my nine “frames” by hand, trying to find any features that stood out and were easily tracked. Naturally (because it couldn’t possibly be easy!) these points were almost never major “feature” points of the Stormtrooper helmet as one might conceive of them. They were usually blemishes: chipped paint, drips, dings, and so forth.
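For the curious, those hand-placed markers don’t all have to be clicked in one at a time in the Movie Clip Editor; Blender exposes the same track and marker data to Python. A rough sketch with invented track names and coordinates (markers live in normalized 0–1 clip space):

```python
import bpy

clip = bpy.data.movieclips["helmet_turntable"]  # placeholder name

# Each entry: one recognizable blemish and its (x, y) position per frame,
# in normalized clip coordinates (0..1, origin at the bottom-left corner).
# Names and numbers are invented for the sake of the example.
hand_tracked = {
    "paint_chip_brow": {1: (0.42, 0.61), 2: (0.44, 0.60), 3: (0.47, 0.60)},
    "drip_left_cheek": {1: (0.38, 0.35), 2: (0.40, 0.34)},
}

for name, markers in hand_tracked.items():
    track = clip.tracking.tracks.new(name=name, frame=min(markers))
    for frame, co in markers.items():
        track.markers.insert_frame(frame, co=co)
```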
It took me a while to realize that tracking these “defects” was even worthwhile. My first approach was to try to project the 3D coordinates into the scene so that they coincided with actual features of my existing model. As time went on and I learned more, though, I realized this was folly. I just needed the right “origin” (I used the top of the gray “frown”) and to set the proper scale. I also came to understand that, since I wasn’t defining any lines as the X and Y axes3, the camera solver used my initial camera position in 3D space as-is. It wasn’t “solving” for that position; it was using it as the starting point for the camera’s motion. That meant I had to eyeball the camera into the right place myself.
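Once a solve exists, the origin and scale steps can also be driven from a script. A rough sketch, assuming Blender 3.2 or newer (for temp_override), a Movie Clip Editor open on the clip, and made-up track names and measurements:

```python
import bpy

clip = bpy.data.movieclips["helmet_turntable"]   # placeholder name
tracks = clip.tracking.tracks

def select_only(names):
    # Track names here are invented; use whichever solved tracks you trust.
    for t in tracks:
        t.select = t.name in names

# These operators only run in a Movie Clip Editor, so borrow one from the
# current screen (assumes one is open with this clip loaded).
area = next(a for a in bpy.context.screen.areas if a.type == 'CLIP_EDITOR')
region = next(r for r in area.regions if r.type == 'WINDOW')

with bpy.context.temp_override(area=area, region=region):
    # Put the reconstruction's origin on the track at the top of the frown.
    select_only({"frown_top"})
    bpy.ops.clip.set_origin()

    # Scale: pick two tracks whose real-world separation you've measured,
    # and tell Blender that distance (in scene units).
    select_only({"frown_top", "paint_chip_brow"})
    bpy.ops.clip.set_scale(distance=0.12)  # placeholder measurement
```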
Eventually, though, I got it. A “perfect” solve is anything with a Blender-reported error of 0.3 or less; anything up to about 6 can still be “pretty good.” My solve is ~0.9, which astonishes me given how impossible a task it seemed when I set out.
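For anyone chasing their own error number down, the solver’s results are exposed to Python too, so you can rank tracks by reprojection error and see which hand-placed blemishes are dragging the average up. A quick sketch (the clip name is a placeholder):

```python
import bpy

clip = bpy.data.movieclips["helmet_turntable"]  # placeholder name
recon = clip.tracking.reconstruction

if recon.is_valid:
    print(f"overall solve error: {recon.average_error:.3f}")
    # Rank solved tracks by their individual reprojection error to spot
    # which hand-placed markers are hurting the solve the most.
    solved = [t for t in clip.tracking.tracks if t.has_bundle]
    for t in sorted(solved, key=lambda t: t.average_error, reverse=True):
        print(f"{t.name:24s} {t.average_error:.3f}")
```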

The little balls are the 3D projections of my tracking points. The reason the photo and the right side (camera left) of the model are so different is explained further down. Image source.
With my camera calibrated, I could finally start modifying my existing model to make it better match the real, screen-used prop! This was the very first time in my entire history of 3D modeling that I’ve been able to do that: take a “real life” picture that wasn’t purpose-shot as near-orthographic and use it as a reference plate in 3D space. It took some doing, but this part was much easier than the tracking itself. After all, it’s essentially the same sort of thing I’ve been doing for the better part of two decades. It entailed a great deal of hopping back and forth between “frames” to make sure everything lined up from all nine of my camera angles, but eventually I had the entire left half of the helmet photo-matched.
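The reference-plate part of that boils down to attaching the solved clip to the camera as a background image (Blender’s “Setup Tracking Scene” operator does something along these lines for you). A minimal sketch, with the object and clip names as assumptions:

```python
import bpy

clip = bpy.data.movieclips["helmet_turntable"]   # placeholder name
cam = bpy.data.objects["Camera"].data            # the solved camera

# Show the tracked footage behind the geometry whenever you look through
# the camera, turning each of the nine frames into a reference plate.
cam.show_background_images = True
bg = cam.background_images.new()
bg.source = 'MOVIE_CLIP'
bg.clip = clip
bg.alpha = 0.75            # let the mesh read through the photo
bg.display_depth = 'BACK'  # draw the photo behind the model
```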
The screen helmet, though, is asymmetrical. That meant copying my left-side model and tweaking it all over again to match the right side. That went a great deal faster, and with a quick hop back over to the left for some final tweaks, I had a bang-on match (with a handful of exceptions easily chalked up to lens distortion in the photos themselves) for the asymmetrical ANH Stunt helmet.
From there, it was a simple matter to “average” the vertices from the left and right sides to create a symmetrical helmet that matched pretty well with both the left and right helmet sides in the photos.
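That averaging can be eyeballed with a mirrored duplicate, but it also scripts nicely: pair each vertex with its mirror-image counterpart across the centerline and meet in the middle. A sketch, where the object name, mirror axis, and distance threshold are all assumptions:

```python
import bpy
from mathutils import Vector, kdtree

# Sketch of the "average the two halves" idea: every vertex is paired with
# the vertex nearest to its mirror image across the YZ plane, and both are
# moved to the midpoint of that pair. Object name is an assumption; run in
# Object Mode.
obj = bpy.data.objects["Helmet"]
verts = obj.data.vertices

tree = kdtree.KDTree(len(verts))
for v in verts:
    tree.insert(v.co, v.index)
tree.balance()

max_pair_dist = 0.05  # placeholder; tune to however far the two halves diverge

done = set()
for v in verts:
    if v.index in done:
        continue
    mirrored = Vector((-v.co.x, v.co.y, v.co.z))
    _, other_index, dist = tree.find(mirrored)
    if dist > max_pair_dist:
        continue  # no clean mirror partner; leave this vertex alone
    other = verts[other_index]
    mid = (mirrored + other.co) / 2.0   # averaged position on one side
    other.co = mid
    v.co = Vector((-mid.x, mid.y, mid.z))
    done.update((v.index, other_index))

obj.data.update()
```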
Next step: convert it to paper!
- PPT and Voodoo always seem to crash or spit out garbage, and Catch123D is super off-putting. The Cloud and cloud computing can be amazing things, but I still want my applications local, man. [↩]
- One of the things that’s generally possible, given sufficient shared coordinates between images but unknown camera parameters, is to back-calculate the camera properties. My photogrammetry program, whenever I eventually write it, will do this. [↩]
- My image sequence was shot against a single, static background and the helmet itself was turned, so there was no true 3D origin coordinate I could use. [↩]