Sep 11 2015
 

This post is part of a meta-series. Click here for a list of all posts in this series.

Photogrammetry has been a major interest of mine for a number of years now, but all of my efforts toward making use of it as an artistic tool have thus far met with failure. None of the open-source, free, or even paid solutions either work or do what I want.1 I have designs on cooking up a program of my own at some point that does it all, but haven’t really set aside the time (hah!) to work something up.

Imagine my delight when I discovered that Blender could do some of what I wanted, natively.

It’s got major restrictions, though: namely, it only solves for a single camera (i.e. one focal length, one sensor size). Mingling images from different cameras, even if the various properties of those images are known2, is a no-go. That put me in a bit of a pickle, because I have a ton of Stormtrooper helmet reference photos, but very few from the same camera and even fewer that present a good “turntable” set. Fortunately, I did have one set, complete with full EXIF data that I could use to set the correct camera properties!
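
For reference, Blender exposes those clip camera settings to Python, so EXIF values can be plugged in directly rather than typed into the UI. A minimal sketch, with a hypothetical clip name and made-up camera numbers standing in for the real EXIF data:

import bpy

# Hypothetical values; the focal length comes straight from EXIF, while the
# sensor width usually has to be looked up for the camera model
focal_length_mm = 35.0
sensor_width_mm = 23.6

clip = bpy.data.movieclips["helmet_turntable"] # hypothetical clip name
cam = clip.tracking.camera
cam.focal_length = focal_length_mm
cam.sensor_width = sensor_width_mm
print("Clip camera set to a %.1fmm lens on a %.1fmm-wide sensor" % (cam.focal_length, cam.sensor_width))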

Of course, it was only nine images, with a lot of movement between frames. Blender couldn’t hope to solve that on its own. So, I spent hours and hours every night tracking points across my nine “frames” by hand, trying to find any features that stood out and were easily tracked. Naturally — because it couldn’t possibly be easy! — these points were almost never major “feature” points of the Stormtrooper helmet as one might conceive of them. They were usually blemishes; chipped paint, drips, dings, and so forth.

It took me a while to realize that tracking these “defects” was even worthwhile. My first approach was to try to project the 3D coordinates into the scene so that they coincided with actual features of my existing model. As time went on and I learned more, though, I realized this was folly. I just needed the right “origin” (I used the top of the gray “frown”) and to set the proper scale. I also came to understand, since I wasn’t defining any lines as denoting an X and Y axis3, that the camera solver made use of my initial camera position in 3D space as-is. It wasn’t “solving” that; it was using that as the starting point for the camera’s motion. That meant I had to eyeball that into the right position.

Eventually, though, I got it. A “perfect” solve is anything with a Blender-reported error of <= 0.3; anything up to about 6 can still be “pretty good.” My solve is ~0.9, which astonishes me given how impossible a task it seemed when I set out.
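
As an aside, those solve numbers are easy to pull out with a few lines of Python if you want to log them somewhere; a quick sketch, assuming the tracked clip is the only one loaded:

import bpy

clip = bpy.data.movieclips[0] # assumes the turntable clip is the only/first clip loaded
tracking = clip.tracking

solved = [t for t in tracking.tracks if t.has_bundle] # tracks that received a 3D point
print("Clip: %s" % clip.name)
print("Tracks with a 3D bundle: %d of %d" % (len(solved), len(tracking.tracks)))
print("Average solve error: %.4f px" % tracking.reconstruction.average_error)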


The little balls are the 3D projections of my tracking points. The reason the photo and the right side (camera left) of the model are so different is explained further down. Image source.

With my camera calibrated, I could finally start modifying my existing model to make it better match the real, screen-used prop! This was the very first time in my entire history of 3D modeling that I’ve been able to do that — take a “real life” picture that wasn’t purpose-shot as near-orthographic and use it as a reference plate in 3D space. It took some doing, but this part was much easier than the tracking itself. After all, it’s essentially the same sort of thing I’ve been doing for the better part of two decades. It entailed a great deal of hopping back and forth between “frames” to make sure everything lined up from all nine of my camera angles, but eventually I had the entire left half of the helmet photo-matched.

The screen helmet, though, is asymmetrical. That meant copying my left-side model over to the right and tweaking it all over again to match that side of the prop. That went a great deal faster, though, and with a quick hop back over to the left for some final tweaks, I had a bang-on match (with a handful of exceptions that could easily be chalked up to lens distortion in the photos themselves) for the asymmetrical ANH Stunt helmet.

From there, it was a simple matter to “average” the vertices from the left and right sides to create a symmetrical helmet that matched pretty well with both the left and right helmet sides in the photos.
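
For anyone curious how that averaging can be scripted, here’s a rough sketch (not the exact script I used). It assumes the two halves have mirrored topology, so every vertex has a counterpart across the X axis that a KD-tree lookup can find; run it in Object Mode:

import bpy
from mathutils import Vector, kdtree

obj = bpy.context.active_object
verts = obj.data.vertices

# Index every vertex position so each vertex can look up its mirrored partner
tree = kdtree.KDTree(len(verts))
for v in verts:
	tree.insert(v.co, v.index)
tree.balance()

# Compute all the averaged positions first, then assign them, so earlier
# edits don't throw off later lookups
new_co = {}
for v in verts:
	co, index, dist = tree.find(Vector((-v.co.x, v.co.y, v.co.z)))
	partner = verts[index].co
	new_co[v.index] = (v.co + Vector((-partner.x, partner.y, partner.z))) * 0.5 # midpoint of this vertex and its mirrored partner

for i, co in new_co.items():
	verts[i].co = co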


(Click for full-resolution)

Next step, convert it to paper!

  1. PPT and Voodoo always seem to crash or spit out garbage, and 123D Catch is super off-putting. The Cloud and cloud computing can be amazing things, but I still want my applications local, man.
  2. One of the things that’s possible to do in general, given sufficient shared coordinates between images but unknown camera parameters, is to back-calculate the camera properties. My photogrammetry program, whenever I eventually write it, will do this.
  3. My image sequence was shot against a single, static background and the helmet itself was turned, so there was no true 3D origin coordinate I could use.
Aug 20 2015
 

This post is part of a meta-series. Click here for a list of all posts in this series.

You’d think after working on this project on-and-off for two years that any new setback would come as yet another dispiriting blow. For once, tonight’s setback is a huge win and even serves to make all of the previous setbacks — especially the CarveWright-related ones — seem like blessings in disguise.

You see, I had the size wrong all along.

I originally scaled the 3D helmet model in Blender to an approximation of my own head. I eyeballed it until the size looked right. Later, I found some actual measurements folks had taken of the molds from the films and checked those against my existing pieces, which seemed to line up correctly. Cool, my estimate had been correct out of the gates! Confident now that I was on the right path, I proceeded through all of the various updates you’ve read about this project. I occasionally spot-checked during the cardboard process to make sure I was still within expected tolerance of those dimensions. When I switched to the CarveWright, I was already set, since the Blender model hadn’t changed and the cardboard cross-sections had been correct in any event. Having now switched to paper, I continued on as before with the existing dimensions.

Before printing everything out on heavy-duty cardstock, I did a test print of just a few portions of the helmet in plain paper to get a feel for the method, check dimensions, sanity check my paper templates, and so on.

Plain paper 'dome' prototype

Lumpy, but promising. Size seemed pretty good when I put it over my head (dopey as I looked doing it…), so I started printing out the cardstock parts. Here’s the same set of templates that was used for the plain paper prototype, now printed on cardstock.

The same templates, printed in cardstock, used to make the plain paper prototype

All in all, everything was coming together very nicely.

'Jowl' before... ...and after

More than any other time in the project, I felt like I was making real progress at last.

A face emerges

I got quite far along. Here’s where things stand as of right now.

Progress to date

All along, though, something’s been nagging me. Every time I held up the “face” to my face, every time I eyeballed the dome, it all felt really big. Having never actually handled a stormtrooper helmet of any variety in person before, I figured this was just expectations clashing with reality. But I’d hate to go through the entire process and screw up something as basic as the proper dimensions, so I started measuring things.

And they were too big. The helmet, which I expected to “stand” about 12″ tall, measured closer to 14″. Did I misprint? Scale something wrong in the process? I couldn’t have gotten the model wrong; I’d checked that against the research from that theRPF post…

…hadn’t I?

I jumped into Blender and threw down a 12″×12″×12″ cube…and it was smaller than my model!

What the hell? At what point had I overscaled it? Perhaps at no point. I may have deliberately underscaled the cardboard cutouts when I did them and forgotten about having done so somewhere along the way. Why I would’ve done that instead of scaling the Blender model, I couldn’t tell you. Maybe something to do with render resolution and creating consistently sized cross-sections? In any event, with the exception of those templates, my dimensions have been too big all along. Even if the CarveWright had worked perfectly, I’d’ve had a garbage mold that I’d need to re-carve.
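
For what it’s worth, the numeric check doesn’t require eyeballing a reference cube; Blender will report an object’s bounding-box size directly. A quick sketch, assuming my convention of 1 Blender unit = 1 inch and a made-up object name:

import bpy

helmet = bpy.data.objects["StormtrooperHelmet"] # hypothetical object name
dx, dy, dz = helmet.dimensions # bounding-box size with the object's scale applied

# Treating 1 Blender unit as 1 inch
print("Helmet bounding box: %.2f x %.2f x %.2f inches" % (dx, dy, dz))
if dz > 12.0:
	print("Too tall: %.2f inches against an expected ~12" % dz)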

But now…I actually have a testbed. It’s too big, sure, so I won’t be casting from it, but I’m so close to done with it that it’s actually a worthwhile guinea pig to test out other aspects of my approach: resin-and-fiberglass reinforcement, Bondo filling, sanding, and so on. It won’t need the same level of finish as the “real” one will, but it’ll give me free rein to learn and screw up without feeling tremendous loss.

What’s more, I can use everything I’ve learned about the Blender papercraft export plugin thus far, along with the experience of having cut out all this stuff once before, to create better, more detailed, and easier-to-assemble templates than I did the first time through.

Catching this now is a huge win compared to catching it at any other point along the way and especially going forward. Color me relieved!

Jul 25 2015
 

I’m mostly writing this for my own notes, but on the off-chance my incoherent notes are useful to others, I decided to put it here. Most of this is going to be devoid of context, but for reference’s sake, I’m using a combination of XWA Opt Editor, Blender, XWA Texture Replacer (XTR), and finally OPTech to create the XvT/TIE-compatible OPTs. I’ll probably add more to this as I go.

Clean Up Unused Materials

There’s an addon that ships with Blender but is disabled by default, called Material Utils, which has a function to remove unused materials from an object (Clean Material Slots). Use it once you’ve finished futzing with materials.
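
If the addon isn’t handy, the same cleanup can be scripted directly. This is my own sketch of the idea (not the addon’s code); it drops any material slot that no polygon on the active object references:

import bpy

obj = bpy.context.active_object
mesh = obj.data

# Slot indices actually referenced by at least one face
used = {poly.material_index for poly in mesh.polygons}

# Walk the slots from the end so removing one doesn't shift the indices
# of the slots we haven't visited yet
for slot_index in reversed(range(len(obj.material_slots))):
	if slot_index not in used:
		obj.active_material_index = slot_index
		bpy.ops.object.material_slot_remove()
		print("Removed unused material slot %d" % slot_index)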

Clean Up UVTextures

These garbage up the exported OBJ with bad materials long after you’ve done any material editing. The following script obliterates them:

import bpy

objects = bpy.context.selected_objects

if not objects: # selected_objects is an empty list (not None) when nothing is selected
	print("You must select at least one object") # This warning will only show in the Blender console
else:
	for ob in objects:
		if ob.type != 'MESH' or ob.data.uv_textures.active is None:
			continue # Skip non-meshes and meshes with no active UV layer
		uvTexData = ob.data.uv_textures.active.data[:]
		print("Active UV on %s has %s faces of data" % (ob.name, len(uvTexData))) # Purely informational; can be omitted if desired
		for i in range(0, len(uvTexData)):
			if (uvTexData[i].image is not None): # We do not want ANY uv textures!
				print("Face %s: %s" % (i, uvTexData[i].image.name)) # Purely informational; what face has what UV texture
				uvTexData[i].image = None
				print("Cleaned UV texture from face")

Material and Texture Naming

Materials and Textures (the Blender concept of a Texture, not the actual filename) must be named TEX*, with a 5-digit numeric identifier (starting at 00000 and incrementing by 1) in order to behave properly. I tried a bunch of different naming schemes in the hopes that I could keep human-meaningful names applied to either Materials or Textures, but this inevitably caused problems when checking the model in XTR or OPTech. XWA Opt Editor handles it fine, though. I wrote several python scripts to do this, based on whatever previous iteration of material naming I had. Here was the most recent:

import bpy, re

materials = bpy.data.materials
idx = 0

for mat in materials:
	if mat.name[0] == 'X': # Detecting whether a material was prefixed with X, which was the previous naming scheme for my top-level LOD
		newName = "TEX%s" % format(idx,'05') # 0-pad to 5 digits
		
		print("Renaming %s to %s" % (mat.name, newName)) # Informational
		mat.name = newName # Rename the material
		
		imgEx = mat.active_texture.image.name[-4:] # Get the extension on the Texture
		print("Renaming %s to %s%s" (mat.active_texture.image.name, newName, imgEx)) # Informational
		mat.active_texture.image.name = "%s%s" % (newName, imgEx) # Rename the texture; NOT the file, though
		idx += 1 # Only increment if we matched above

Export Settings

Make sure Selected Only is enabled if you only want to export your selection (which I did/do, since I had multiple LODs in the same Blender file) and make sure Triangulate Faces is turned on. Optionally, turn off Include Edges, which I think will keep the OBJ from having two-vertex mesh objects treated as full faces (if you have these, you probably did something wrong).
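
Those same toggles are exposed as arguments on the OBJ export operator, if scripting the export is more convenient. A sketch; the path is a placeholder and the parameter names are the stock 2.7x OBJ exporter’s:

import bpy

bpy.ops.export_scene.obj(
	filepath="C:/XWA/exports/ship_lod0.obj", # placeholder path
	use_selection=True, # "Selected Only" -- export just the chosen LOD
	use_triangles=True, # "Triangulate Faces"
	use_edges=False, # leave "Include Edges" off
)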

Texture Format Doesn’t (Seem To) Matter

One thing I tried was converting all the PNGs exported by XWA Opt Editor to BMPs before loading them into Blender, but this ultimately made no difference when re-importing the exported OBJ back into XWA Opt Editor; they still came in as 32-bit images and had to be internally converted to 8-bit. Irritating limitation of the tool, I guess. One issue I’ve variously encountered is garbage material/palette names that I thought might be connected to this in some way. The solution there, though, seemed to be simply saving the OPT as soon as it was imported from the OBJ, then running the 32-bit to 8-bit conversion. That resulted in non-garbage palette names. Of course, this may also be related to the previous note about naming and have nothing to do with the conversion order of operations.

Look, Up, Right Vectors

I’m not actually sure about any of this yet, because I haven’t tested it, but I wrote the following script to compute my best guess at the OPT convention for what the “Look,” “Up,” and “Right” vectors should be, based on an input selection of vertices and the average of their normals. The idea here is to use it to define rotational axes and such for rotary gun turrets and other moving model parts. For most parts, this isn’t necessary.

import bpy
from mathutils import Vector

selVerts = [i.index for i in bpy.context.active_object.data.vertices if i.select == True]
retNormal = Vector((0,0,0)) # The resulting vector we'll calculate from the selection

for i in selVerts:
	vertNormal = bpy.context.object.data.vertices[i].normal
	retNormal += vertNormal # Add to the calculated normal
retNormal = retNormal / len(selVerts) # Average the summed normals by the number of vertices involved
retNormal = bpy.context.active_object.matrix_world.to_3x3() * retNormal # Rotate into world space (the 3x3 part drops translation) so the normal is global instead of local
retNormal.normalize() # The average of unit normals is not itself unit length, so re-normalize before scaling
retNormal = retNormal * 32767 # Scale to the OPT convention

# ALL OF THIS IS SPECULATIVE!
# The idea is to take the computed average normal from Blender's coordsys and convert it to the OPT coordsys displayed in XWA Opt Editor
lookVector = Vector((retNormal.y, retNormal.z, retNormal.x))
upVector = Vector((retNormal.z, retNormal.x*-1, retNormal.y))
rightVector = Vector((retNormal.x, retNormal.y*-1, retNormal.z*-1))

print("Look: %s\nUp: %s\nRight: %s\n------" % (lookVector, upVector, rightVector))

Getting a Coordinate for a Hardpoint

Rather than manually copying every vertex I wanted to use as a hardpoint, I wrote this script.

import bpy, os

objWorldMatrix = bpy.context.active_object.matrix_world
objVerts = bpy.context.active_object.data.vertices
selVerts = [i.index for i in objVerts if i.select]


for i in selVerts:
	# The value Blender reports as a Global coordinate for a vertex is
	# its local coordinate transformed by the object's world matrix
	vertLocalPos = objVerts[i].co
	vertGlobalPos = objWorldMatrix * vertLocalPos
	
	# Flip the y value to match OPT coordinate space
	vertGlobalPos.y = vertGlobalPos.y * -1
	
	# Dump the string to the clipboard (uses the Windows "clip" utility)
	optStr = "%s; %s; %s" % (vertGlobalPos.x, vertGlobalPos.y, vertGlobalPos.z)
	print(optStr) # Informational
	os.system("echo %s | clip" % optStr)
Jul 13 2014
 

This post is part of a meta-series. Click here for a list of all posts in this series.

Been remiss on posting updates to the T’Varo model. I’ve actually finished the modeling and UVs at this point and am now on to texturing.

May 22:

May 26:

June 5:

June 7:

June 8:

June 11:

June 14: Modeling finished!

July 5: UVs finished!

Today:

Jun 22 2014
 

This post is part of a meta-series. Click here for a list of all posts in this series.

To be honest, part of me feels like this is cheating. My original objective was to do an accurate helmet but inexpensively (Blender being free and all, and everything else being mostly household/hardware store commodities easily and cheaply obtained). In some respects, I feel like I’m betraying that original goal in the interest of improving accuracy. But…new toy!

In other news, here’s the result of the second test run!

It’s much better, but still not quite right. The curvature is definitely right, thanks to the linear colorspace change, but I’m still having issues with the pieces not matching up (most notably around the “jowls” in this image).

After a bunch of googling, comparing my heightmaps with the interpretation in the Designer software, and then looking at the result, I think the “problem” lies between the Designer software and the machine itself. Specifically, I think the machine is disregarding certain levels of black/white and just considering them flat, when in fact they should be subtly curved. This may be the result of using “Draft” quality with images where “1,1,1” and “0,0,0” color differences are really important. For example, look at the third “slice” up from the table: there’s a flat area around the bridge of the nose that should not be flat at all. It’s not flat on my model, in my heightmaps, or in the Designer software’s 3D preview, yet it came out flat.
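
One way to sanity-check that hypothesis is to count how many distinct gray levels a heightmap actually uses; if adjacent levels differ by only 1, “Draft” quality flattening them away would explain what I’m seeing. A quick sketch using Pillow outside of Blender (the filename is hypothetical):

from PIL import Image

img = Image.open("slice_03_heightmap.png").convert("L") # hypothetical heightmap file
hist = img.histogram() # 256 counts, one per 8-bit gray level

levels = [value for value, count in enumerate(hist) if count > 0]
print("Distinct gray levels used: %d" % len(levels))
if len(levels) > 1:
	print("Smallest step between adjacent levels: %d" % min(b - a for a, b in zip(levels, levels[1:])))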

My next test will just be two pieces, but at a much higher quality setting, to see if this hypothesis proves true or if it’s something else after all.

Impaneled

Mar 12 2014
 

This post is part of a meta-series. Click here for a list of all posts in this series.

Been slowly plinking away at the underside hull paneling. The topology doesn’t at all match the location of the panel lines, which makes cutting them in that much more tedious and slow, but I finally had enough done that I thought it’d be worth sharing.