Sep 11 2015

This post is part of a meta-series. Click here for a list of all posts in this series.

Photogrammetry has been a major interest of mine for a number of years now, but all of my efforts toward making use of it as an artistic tool have thus far met with failure. None of the open-source, free, or even paid solutions either work or do what I want.1 I have designs on cooking up a program of my own at some point that does it all, but haven’t really set aside the time (hah!) to work something up.

Imagine my delight when I discovered that Blender could do some of what I wanted, natively.

It’s got major restrictions, though: namely, it only solves for a single camera (i.e. one focal length, one sensor size). Mingling images from different cameras, even if the various properties of those images are known2, is a no-go. That put me in a bit of a pickle, because I have a ton of Stormtrooper helmet reference photos, but very few from the same camera and even fewer that present a good “turntable” set. Fortunately, I did have one set, complete with full EXIF data that I could use to set the correct camera properties!
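(For the curious: if you’d rather feed those EXIF-derived values to Blender’s tracker through its Python API than the UI, it looks roughly like this. The path and numbers below are placeholders, not my actual values.)

```python
import bpy

# Load the photo sequence as a movie clip (path is a placeholder).
clip = bpy.data.movieclips.load("/path/to/helmet_sequence/0001.jpg")

# Feed the tracker the known intrinsics instead of letting it guess.
cam = clip.tracking.camera
cam.focal_length = 35.0   # millimetres, straight from the EXIF FocalLength tag
cam.sensor_width = 23.6   # millimetres, from the camera body's spec sheet
```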

Of course, it was only nine images, with a lot of movement between frames. Blender couldn’t hope to solve that on its own. So, I spent hours and hours every night tracking points across my nine “frames” by hand, trying to find any features that stood out and were easily tracked. Naturally — because it couldn’t possibly be easy! — these points were almost never major “feature” points of the Stormtrooper helmet as one might conceive of them. They were usually blemishes: chipped paint, drips, dings, and so forth.
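(If you wanted to script some of that drudgery rather than click through it, Blender’s tracking API can place markers directly. A rough sketch; the track name and coordinates here are invented.)

```python
import bpy

clip = bpy.data.movieclips["0001.jpg"]  # the clip loaded earlier

# Each track gets one marker per "frame"; marker coordinates are
# normalized (0..1), with the origin at the clip's bottom-left corner.
track = clip.tracking.tracks.new(name="paint_chip_left", frame=1)
track.markers.insert_frame(1, co=(0.42, 0.63))
track.markers.insert_frame(2, co=(0.45, 0.61))
```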

It took me a while to realize that tracking these “defects” was even worthwhile. My first approach was to try to project the 3D coordinates into the scene so that they coincided with actual features of my existing model. As time went on and I learned more, though, I realized this was folly. I just needed the right “origin” (I used the top of the gray “frown”) and to set the proper scale. I also came to understand, since I wasn’t defining any lines as denoting an X and Y axis3, that the camera solver made use of my initial camera position in 3D space as-is. It wasn’t “solving” that; it was using that as the starting point for the camera’s motion. That meant I had to eyeball that into the right position.
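(Mathematically, setting the origin and scale just means translating and uniformly scaling the solved point cloud. A sketch in NumPy, with invented indices and an invented 30 cm reference distance.)

```python
import numpy as np

# Stand-in for the solver's output: N tracking points in 3D space.
rng = np.random.default_rng(0)
points = rng.normal(size=(9, 3))

origin_idx = 0          # the point to use as the origin (top of the "frown")
a_idx, b_idx = 3, 7     # two points a known real-world distance apart
known_dist = 0.30       # that distance in metres (an invented figure)

points = points - points[origin_idx]              # re-root at the chosen origin
solved = np.linalg.norm(points[a_idx] - points[b_idx])
points = points * (known_dist / solved)           # rescale to real-world units
```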

Eventually, though, I got it. A “perfect” solve is anything with a Blender-reported error of <= 0.3; anything up to about 6 can still be “pretty good.” My solve is ~0.9, which astonishes me given how impossible the task seemed when I set out.


The little balls are the 3D projections of my tracking points. The reason the photo and the right side (camera left) of the model are so different is explained further down.
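(About that error number: as I understand it, it’s essentially a reprojection error — project each solved 3D point back through the solved camera and measure how far it lands, in pixels, from where you tracked it. A toy sketch of the idea, not Blender’s actual code:)

```python
import numpy as np

def rms_reprojection_error(P, pts3d, observed_px):
    """RMS pixel distance between reprojected and hand-tracked 2D points.

    P is a 3x4 projection matrix, pts3d is (N, 3), observed_px is (N, 2).
    """
    homo = np.hstack([pts3d, np.ones((len(pts3d), 1))])  # make points homogeneous
    proj = (P @ homo.T).T
    proj = proj[:, :2] / proj[:, 2:3]                    # perspective divide
    return np.sqrt(np.mean(np.sum((proj - observed_px) ** 2, axis=1)))
```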

With my camera calibrated, I could finally start modifying my existing model to make it better match the real, screen-used prop! This was the very first time in my entire history of 3D modeling that I’ve been able to do that — take a “real life” picture that wasn’t purpose-shot as near-orthographic and use it as a reference plate in 3D space. It took some doing, but this part was much easier than the tracking itself. After all, it’s essentially the same sort of thing I’ve been doing for the better part of two decades. It entailed a great deal of hopping back and forth between “frames” to make sure everything lined up from all nine of my camera angles, but eventually I had the entire left half of the helmet photo-matched.
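(Setting a photo up as a reference plate can be done in the camera’s Background Images panel, or via the Python API, roughly like so; the path is a placeholder.)

```python
import bpy

cam = bpy.data.objects["Camera"].data   # the solved camera from the tracker
cam.show_background_images = True

bg = cam.background_images.new()
bg.image = bpy.data.images.load("/path/to/helmet_frame_01.jpg")
bg.alpha = 0.5   # let the model show through the reference photo
```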

The screen helmet, though, is asymmetrical. That meant copying my left-side model and tweaking it all over again on the right side to make it match that one. That went a great deal faster, though, and with a quick hop back over to the left to do some final tweaks, I had a bang-on (with a handful of exceptions that could easily be chalked up to lens distortion of the photos themselves) match for the asymmetrical ANH Stunt helmet.

From there, it was a simple matter to “average” the vertices from the left and right sides to create a symmetrical helmet that matched pretty well with both the left and right helmet sides in the photos.
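(In NumPy terms, that averaging is just this, assuming the two halves’ vertex arrays are in matching order; the arrays here are stand-ins for real vertex data.)

```python
import numpy as np

# Stand-ins for the real vertex arrays; assumes left[i] and right[i]
# are corresponding vertices on opposite sides of the helmet.
rng = np.random.default_rng(1)
left = rng.normal(size=(100, 3))
right = rng.normal(size=(100, 3))

mirrored = right * np.array([-1.0, 1.0, 1.0])   # reflect across the X=0 plane
symmetric = 0.5 * (left + mirrored)             # average the two halves
```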



Next step, convert it to paper!

  1. PPT and Voodoo always seem to crash or spit out garbage, and 123D Catch is super off-putting. The Cloud and cloud computing can be amazing things, but I still want my applications local, man.
  2. One of the things that’s possible to do in general, given sufficient shared coordinates between images but unknown camera parameters, is to back-calculate the camera properties. My photogrammetry program, whenever I eventually write it, will do this.
  3. My image sequence was shot against a single, static background and the helmet itself was turned, so there was no true 3D origin coordinate I could use.

Sep 24 2010

Photo-what?

That ugly word names a very useful technique for reconstructing geometric information from 2D images. Given a collection of overlapping photographs of a subject, you can use matrix math to recompute the 3D structure of that object from the 2D images. Not by hand, mind you. That would take way more brainpower and patience than pretty much anyone has any desire to lend to the task. Computers, however, make great photogrammetric calculators.
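To make that a little less hand-wavy, here’s the core of the matrix math in miniature: given the same feature’s pixel coordinates in two photos, plus each photo’s projection matrix, a small SVD recovers the 3D point. This is the standard direct-linear-transform triangulation sketch, not any particular program’s code.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Recover a 3D point from its pixel coordinates in two photos.

    P1, P2 are 3x4 projection matrices; x1, x2 are (u, v) pixel coords.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)   # least-squares solution of A @ X = 0
    X = Vt[-1]
    return X[:3] / X[3]           # back from homogeneous coordinates
```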

Why is this relevant to anything? Well, it’s pretty important when you want to accurately recreate something in the world in a 3D modeling environment and you don’t have access to either A) the thing you want to recreate or B) a 3D scanner. Specifically, I’m talking about modeling spaceships. Most 3D hobbyists just wing it, eyeing the proportions and getting pretty close. But let’s be honest: when have I ever been satisfied with getting “pretty close” when I could use math to be exact?

I started out modeling a Star Destroyer last year, trying to take very accurate measurements in Photoshop and extrapolating the “right” values by averaging several of these measurements together. I was putting together what looked like a fairly accurate model. Then I read about photogrammetry. This had two effects: the first was that my progress on the Star Destroyer model ground to a halt; the second was a period of intense research into the fundamental math behind photogrammetry. This included (re-)teaching myself matrix math, learning about projection matrices1, and so on. I googled university lectures and dissertations, and dissected open-source projects to understand how this process was done.
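To make footnote 1 concrete, here’s a toy projection matrix assembled from made-up intrinsics and extrinsics, projecting a single 3D point down to pixels. Every number here is illustrative, not from any real camera.

```python
import numpy as np

f, cx, cy = 800.0, 320.0, 240.0          # focal length and principal point, in pixels
K = np.array([[f, 0, cx],
              [0, f, cy],
              [0, 0,  1]])               # intrinsics: the "lens" part
R = np.eye(3)                            # extrinsics: camera rotation...
t = np.array([[0.0], [0.0], [-5.0]])     # ...and translation
P = K @ np.hstack([R, t])                # the 3x4 projection matrix

X = np.array([0.1, 0.2, 3.0, 1.0])       # a homogeneous 3D point
x = P @ X
print(x[:2] / x[2])                      # its 2D pixel coordinates
```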

Sadly, none of the open-source projects I found would do quite what I want. It seems that the hot thing in photogrammetry is reconstructing terrain surface detail with as many recreated vertices as the resolution of the source images would allow. I wanted to define just a handful of points in each image and have a mesh reconstructed from them. From there, I would do the fine detail work on my own. So, I started writing my own program (in Python) to do it. Losing my job, getting a new job, and getting married all conspired to prevent much progress on this front, though, so it hasn’t gotten very far yet.

Assuming I can get something I’m happy with, though, it will alleviate one of the biggest sticking points I’ve always had when modeling technical things: accurate blueprints. Just about every set of blueprints for every technical thing2 I’ve tried to model has had errors in it. Not little, nitpicky errors, either, but major, mismatched proportions between orthographic views. In one image, a component would be X pixels long but in another image—from the same set of diagrams, mind you—it would be Y pixels long. In some cases, you can just split the difference and get something decent. Most of the time, these compromises compound until you’ve got an irreconcilable problem.

Anyway, this is probably one of those topics that will prompt most people who read this to smile, nod, and pat me on my math nerd head. All the same, it’s interesting to me, so maybe it’ll strike your interest too.

  1. A projection matrix describes the conversion of a 3D coordinate to a 2D coordinate through a camera lens, essentially.
  2. Okay, okay, spaceship.