Now that I have a single animation working, it's time to start worrying about how to use multiple animations. The Milkshape3D file format only supports a single animation, so I did a bit of investigating to find out how games normally store multiple animations. From what I could tell, a lot of people just roll their own format. I'm not interested in doing that at the moment, so I started discussing solutions with our artist GreyKnight. What we eventually decided on was to have a base file which stores the geometry and base pose. Each animation would be stored as its own "model", containing only keyframe information. This system is a bit rigid in that adding/removing a bone would be really painful (no pun intended), but it should work well enough until I find a better alternative.
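To make that layout concrete, here is a rough sketch of what the split could look like. This is illustrative Python, not the engine's actual format — every type and field name here is hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Joint:
    name: str
    parent: str              # "" for the root joint
    bind_position: tuple     # base-pose translation (x, y, z)
    bind_rotation: tuple     # base-pose rotation (x, y, z)

@dataclass
class BaseModel:
    """The base file: geometry plus the skeleton's base pose."""
    vertices: list
    joints: list             # list of Joint

@dataclass
class Keyframe:
    time: float
    position: tuple
    rotation: tuple

@dataclass
class AnimationClip:
    """One animation 'model': keyframe tracks only, no geometry."""
    tracks: dict = field(default_factory=dict)  # joint name -> list of Keyframe

def clip_matches_skeleton(base: BaseModel, clip: AnimationClip) -> bool:
    # Every track must name a joint in the base model; this coupling is
    # exactly why adding or removing a bone invalidates every clip.
    names = {j.name for j in base.joints}
    return all(track in names for track in clip.tracks)
```

The rigidity mentioned above falls out of the last function: each clip is only meaningful relative to one specific skeleton, so touching the bone list means re-exporting every animation.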
I finally discovered why my method for converting orientations from RH to LH seemed wrong. Turns out it was! My method (found through trial and error, since what should have been the correct method wasn't working) was to invert the z-axis orientations when positions along the z-axis are inverted for coordinate conversion between RH and LH. If you draw this out, it doesn't make much sense; the z-axis orientations are correct between RH and LH (when the z-axis is inverted to do the conversion), but the x- and y-axis orientations are reversed. In other words, the one orientation which should have been constant between the two coordinate systems was the only one which needed to be changed. Well, while poking around in the math code I noticed something rather odd: the rotation matrix creation function negates the angle before using it to build the matrix. So all x- and y-axis orientations were being inverted automagically, and the only way to prevent this from happening to the z-axis orientations was to invert them before they were sent to the creation function. I suspect this was a leftover from when I was writing the camera code, since those angles needed to be negated.
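The "draw it out" argument can be checked numerically. Mirroring across the z-axis (the same negation used for positions) flips the sense of rotations about x and y but leaves rotations about z alone — which is why a creation function that secretly negates every angle happens to fix x and y for free and leaves only z needing a manual flip. A minimal pure-Python check, with illustrative names not taken from the engine:

```python
import math

def rot_x(t):
    # Standard rotation about the x axis by angle t (radians).
    c, s = math.cos(t), math.sin(t)
    return [[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]]

def rot_z(t):
    # Standard rotation about the z axis by angle t (radians).
    c, s = math.cos(t), math.sin(t)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# The RH -> LH position conversion as a matrix: negate z.
MIRROR = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, -1.0]]

def to_left_handed(rotation):
    # Conjugating by the mirror converts a RH rotation to its LH equivalent.
    return mat_mul(mat_mul(MIRROR, rotation), MIRROR)

def rh_to_lh_euler(rx, ry, rz):
    # The consequence for Euler angles: x and y angles negate, z stays.
    return (-rx, -ry, rz)
```

Here `to_left_handed(rot_x(t))` comes out equal to `rot_x(-t)`, while `to_left_handed(rot_z(t))` is just `rot_z(t)` again — matching the observation above.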
Today I managed to make it so that when you click on the screen, the game can detect where you clicked in the world. This involved a ray intersection test with the world geometry, something I had never done before. In order to define the ray, you need a point in world space and a normalized vector pointing in the direction of the ray. The easier of the two to calculate is the point, which should sit right on the "lens" of the camera where your mouse cursor was when you clicked. To get the point in 3D world space, you transform the screen-space coordinates of the cursor into clip space, which means reversing the viewport mapping and the homogeneous divide. This step can be a bit tricky, since reversing the homogeneous divide means you need to pick the depth (z-coordinate) you are aiming for. Once safely in clip space, you can transform into world space using the inverses of your view and projection matrices. Calculating the direction was trickier. My original idea was to transform the vector (0, 0, 1) from screen space into world space, but that doesn't work: the vector needs to be attached to a point other than the origin in order to be affected by the pyramid shape of the view frustum. That leads naturally to the solution I ended up using: transform a second point from screen space to world space, one at ray_point + (0, 0, 1). Once both points are in world space, you get the ray direction by normalizing the vector from the ray point to the second point.
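The two-point trick above can be sketched end to end. This is a sketch under assumed conventions — OpenGL-style clip space (NDC in [-1, 1] on all axes, camera looking down -z) and column-vector matrices — not the engine's actual code; all function names are hypothetical. It unprojects the cursor at the near and far depths and normalizes the difference:

```python
import math

def mat_vec(m, v):
    # Multiply a 4x4 matrix by a 4-component column vector.
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def invert4(m):
    # General 4x4 inverse via Gauss-Jordan elimination on [m | I].
    n = 4
    a = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(m)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[pivot] = a[pivot], a[col]
        p = a[col][col]
        a[col] = [x / p for x in a[col]]
        for r in range(n):
            if r != col and a[r][col] != 0.0:
                f = a[r][col]
                a[r] = [x - f * y for x, y in zip(a[r], a[col])]
    return [row[n:] for row in a]

def unproject(sx, sy, ndc_z, inv_view_proj, width, height):
    # Reverse the viewport mapping: pixels -> NDC (y flipped),
    # then undo the projection and the homogeneous divide.
    ndc = [2.0 * sx / width - 1.0, 1.0 - 2.0 * sy / height, ndc_z, 1.0]
    world = mat_vec(inv_view_proj, ndc)
    w = world[3]
    return [world[0] / w, world[1] / w, world[2] / w]

def picking_ray(sx, sy, view_proj, width, height):
    # Unproject the cursor at two depths; the ray runs from the near
    # point through the far point.
    inv = invert4(view_proj)
    p_near = unproject(sx, sy, -1.0, inv, width, height)
    p_far = unproject(sx, sy, 1.0, inv, width, height)
    d = [f - n for f, n in zip(p_far, p_near)]
    length = math.sqrt(sum(x * x for x in d))
    return p_near, [x / length for x in d]
```

With an identity view matrix and a standard perspective projection, clicking the center of the screen yields a ray starting on the near plane and pointing straight down the view axis, which is a handy sanity check.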