Sep 11, 2015

This post is part of a meta-series. Click here for a list of all posts in this series.

Photogrammetry has been a major interest of mine for a number of years now, but all of my efforts toward making use of it as an artistic tool have thus far met with failure. None of the open-source, free, or even paid solutions either work or do what I want.[1] I have designs on cooking up a program of my own at some point that does it all, but haven’t really set aside the time (hah!) to work something up.

Imagine my delight when I discovered that Blender could do some of what I wanted, natively.

It’s got major restrictions, though: namely, it only solves for a single camera (i.e. one focal length, one sensor size). Mingling images from different cameras, even if the various properties of those images are known,[2] is a no-go. That put me in a bit of a pickle, because I have a ton of Stormtrooper helmet reference photos, but very few from the same camera and even fewer that present a good “turntable” set. Fortunately, I did have one set, complete with full EXIF data that I could use to set the correct camera properties!
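As an aside: if your EXIF data only gives you a 35mm-equivalent focal length rather than the sensor size Blender wants, you can estimate the sensor width from the crop factor. A rough sketch of the math (the camera values below are made up):

```python
def sensor_width_from_exif(focal_mm: float, focal_35mm_equiv: float) -> float:
    """Estimate sensor width (mm) from the EXIF FocalLength and
    FocalLengthIn35mmFilm values, using the 36 mm frame width of
    full-frame film. This ignores the small difference between
    diagonal- and width-based crop factors, which is close enough
    for roughing in a camera."""
    return 36.0 * focal_mm / focal_35mm_equiv

# Made-up compact camera: 7.4 mm actual, 35 mm equivalent focal length.
print(round(sensor_width_from_exif(7.4, 35.0), 2))  # prints 7.61
```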

Of course, it was only nine images, with a lot of movement between frames. Blender couldn’t hope to solve that on its own. So, I spent hours and hours every night tracking points across my nine “frames” by hand, trying to find any features that stood out and were easily tracked. Naturally — because it couldn’t possibly be easy! — these points were almost never major “feature” points of the Stormtrooper helmet as one might conceive of them. They were usually blemishes; chipped paint, drips, dings, and so forth.

It took me a while to realize that tracking these “defects” was even worthwhile. My first approach was to try to project the 3D coordinates into the scene so that they coincided with actual features of my existing model. As time went on and I learned more, though, I realized this was folly. I just needed the right “origin” (I used the top of the gray “frown”) and to set the proper scale. I also came to understand, since I wasn’t defining any lines as denoting the X and Y axes,[3] that the camera solver used my initial camera position in 3D space as-is. It wasn’t “solving” that; it was using it as the starting point for the camera’s motion. That meant I had to eyeball the camera into the right position.

Eventually, though, I got it. A “perfect” solve is anything with a Blender-reported error of <= 0.3; anything up to about 6 can still be “pretty good.” My solve is ~0.9, which astonishes me, given how impossible a task this seemed when I set out.
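For the curious: as I understand it, the error Blender reports is an average reprojection error in pixels, i.e. how far each hand-tracked point sits from where its solved 3D position lands when projected back through the solved camera. A minimal pinhole-model sketch of that measurement (all numbers here are hypothetical):

```python
import math

def project(point3d, focal_px, cx, cy):
    """Pinhole projection of a camera-space point to pixel coordinates."""
    x, y, z = point3d
    return (focal_px * x / z + cx, focal_px * y / z + cy)

def mean_reprojection_error(points3d, observed2d, focal_px, cx, cy):
    """Average pixel distance between the hand-tracked 2D positions and
    the reprojections of their solved 3D points."""
    total = 0.0
    for p3, (u, v) in zip(points3d, observed2d):
        pu, pv = project(p3, focal_px, cx, cy)
        total += math.hypot(pu - u, pv - v)
    return total / len(points3d)

# Made-up solved points vs. made-up hand-tracked observations.
pts3 = [(0.1, 0.0, 2.0), (-0.2, 0.1, 2.5)]
obs = [(1010.5, 540.0), (841.0, 620.2)]
err = mean_reprojection_error(pts3, obs, 1900.0, 960.0, 540.0)
```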


The little balls are the 3D projections of my tracking points. The reason the photo and the right side (camera left) of the model are so different is explained further down. Image source.

With my camera calibrated, I could finally start modifying my existing model to better match the real, screen-used prop! This was the very first time in my entire history of 3D modeling that I’ve been able to do that: take a “real life” picture that wasn’t purpose-shot as near-orthographic and use it as a reference plate in 3D space. It took some doing, but this part was much easier than the tracking itself. After all, it’s essentially the same sort of thing I’ve been doing for the better part of two decades. It entailed a great deal of hopping back and forth between “frames” to make sure everything lined up from all nine of my camera angles, but eventually I had the entire left half of the helmet photo-matched.

The screen helmet, though, is asymmetrical. That meant copying my left-side model and tweaking it all over again on the right side to make it match that one. That went a great deal faster, though, and with a quick hop back over to the left to do some final tweaks, I had a bang-on (with a handful of exceptions that could easily be chalked up to lens distortion of the photos themselves) match for the asymmetrical ANH Stunt helmet.

From there, it was a simple matter to “average” the vertices from the left and right sides to create a symmetrical helmet that matched pretty well with both the left and right helmet sides in the photos.
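The “averaging” itself is trivial once the left and right vertices correspond: mirror one side across the symmetry plane and take the midpoint. A sketch, assuming the mirror plane is X = 0 and the vertex lists line up (the coordinates are made up):

```python
def symmetrize(left_verts, right_verts):
    """Average each left-side vertex with its mirrored counterpart on
    the right side (x negated), producing a symmetrical result.
    Assumes the two lists are in corresponding order and the mirror
    plane is X = 0."""
    out = []
    for (lx, ly, lz), (rx, ry, rz) in zip(left_verts, right_verts):
        out.append(((lx - rx) / 2.0, (ly + ry) / 2.0, (lz + rz) / 2.0))
    return out

# One made-up pair of corresponding vertices from an asymmetrical mesh.
print(symmetrize([(1.0, 2.0, 3.0)], [(-1.5, 2.5, 2.5)]))
# prints [(1.25, 2.25, 2.75)]
```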


(Click for full-resolution)

Next step, convert it to paper!

  1. PPT and Voodoo always seem to crash or spit out garbage, and 123D Catch is super off-putting. The Cloud and cloud computing can be amazing things, but I still want my applications local, man.
  2. One of the things that’s possible to do in general, given sufficient shared coordinates between images, but unknown camera parameters, is to back-calculate the camera properties. My photogrammetry program, whenever I eventually write it, will do this.
  3. My image sequence was shot against a single, static background and the helmet itself was turned, so there was no true 3D origin coordinate I could use.
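As a taste of what footnote 2 means: with a pinhole camera, a single point whose camera-space position is known pins down the focal length directly; real solvers just do this over many correspondences with least squares. A toy sketch (all values hypothetical):

```python
def focal_from_correspondence(u, cx, x, z):
    """Back out focal length (in pixels) from a single correspondence:
    a point at camera-space lateral offset x and depth z that lands at
    pixel column u, with the principal point at cx (pinhole model).
    Real solvers do this over many correspondences with least squares."""
    return (u - cx) * z / x

# Made-up numbers: a point 0.5 units right and 2 units deep, observed
# 400 px right of the principal point.
print(focal_from_correspondence(1360.0, 960.0, 0.5, 2.0))  # prints 1600.0
```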

Full Guinea Pig

 Posted at 20:36
Aug 20, 2015

This post is part of a meta-series. Click here for a list of all posts in this series.

This is sitting on my dining room table right now.

Glaring inaccuracies? You bet. Beyond the overall-dimensions one I mentioned yesterday, even. All correctable in the next version, which can also be even more detailed on top of being more accurate.

I’m…pretty excited.

That excitement, though, is tempered somewhat by questions and self-doubt around the term “accuracy.” Ever since hearing about the 501st, and especially since meeting some of its members in person, I’ve had my eye on eventually applying to join, whenever I get around to actually building this damn thing. But even though that badge of honor and that community would have meaning for me, doing this my way has more.

I don’t aim to achieve “screen accuracy.” The screen accurate model is asymmetrical, there are differences in the helmets seen in each movie, and even within individual movies (the ANH “hero” and “stunt” helmets). For my helmet, I want to opt for the “best” of all of them, not just pick one and replicate it. That’s not to say I’m looking to take shortcuts or produce a sub-par product by any stretch of the imagination. My goal is to create something that you could easily put on screen next to any of the other “screen accurate” suits and have it blend right in…unless you knew exactly what to look for.

I’ve been lurking on the 501st boards for a long time and the prevailing sentiments on this topic stick to just a few schools of thought.

The most common reaction is that one should “just buy a kit” from an approved vendor. Some consider this the “cheapest” path, especially factoring in time. Maybe they’re right, if that’s where their priorities lie. I want to create, so that holds no value for me. Others expressing this view come across as pushing a marketing scheme: “You won’t get approval to join unless you buy from an approved vendor!” I realize this is an intensely cynical view; the “approved vendors” have all put tremendous time, thought, and energy into creating authentic, accurate replicas, and that work should only ever be commended. It still has an unpleasant feel to me that I can’t shake.

There are those who simply don’t “get” the process of papercraft molds. They see the papercraft version and think people are going to apply with that alone, which obviously doesn’t meet any kind of standard for authenticity. And, for what it’s worth, some — many, even — folks do go on to use the paper model as the basis for the final, wearable piece. There have been some great costumes created this way. Again, that’s not what I’m doing, but the prospect of having to explain and re-explain that isn’t terribly appealing.

Along a similar line, the 501st has been around for a long time. They’ve no doubt had countless people trying to apply and get approval with “unique ideas” or “unique approaches” or whatever else that are, objectively, pretty terrible. They’re tired of it, they’re cynical of anything that has even the vaguest aroma of this, and they’d rather steer such enthusiasm toward a non-terrible end product (and often end up dovetailing heavily with the “just buy a kit” crowd, as a result). I sympathize with this group; they have no reason to believe I’d be anything other than yet another in a very long parade of wannabes.

Finally, there are those who just seem to enjoy the entirety of the hobby and want to encourage participation and creativity as a whole. These seem, rather depressingly, to be the rarest sort. They do exist, though, so that’s something.

At the end of it all, I have to remember that I’m doing this for me. If it doesn’t pass someone else’s sniff test but it does pass mine (knowing just how high my bar is for myself), so be it. They just aren’t looking for the same thing I am.

Regardless, I have work to do.