Camera Match

Combining LightWave renderings with real world images can produce extremely realistic effects. It's fun to use LightWave to add giant rocket engines to your car in a photograph! This kind of photoreal 3D compositing is probably the hardest task in LightWave, but it's also the most common use of 3D on TV and especially in movies.

It's very difficult to seamlessly incorporate a rendering into a real photographed image. You have to worry about matching lighting, shadowing,{Our other software solves the problem of matching real world lighting and shadowing.} and position. Sometimes matching the position is easy; it's simple to place a LightWave airplane into an image of a cloudy sky, since nearly any position and angle will look good. It's much harder to take a photograph of your one-story house and use LightWave to add a fake second story with giant gun turrets.{It may be easier simply to rebuild your house using real gun turrets, photograph it, and ignore the LightWave step altogether.}

Matching becomes difficult in situations where any small error in position or angle is easy to notice. If you move the composited airplane five pixels, it still looks like an airplane in clouds. If you move your house's new top story five pixels, you see only a badly manipulated photograph with an obvious problem. Often a position error of only a single pixel will ruin the effect.

When you try to match the position of your LightWave objects by hand, you have to use good judgment and frequent rendering tests to ensure your placement is correct. Most image alignment setups are deceptively difficult. It's easy to align one part of your object with your background image, but it's harder to align two points simultaneously. With something like your house, you have to align the whole building at once!

Matching Feature Points

The idea of matching points is the key to understanding how to properly position your object and camera for compositing. Compositing an airplane into a cloudy sky is easy, since you don't have to align the airplane with anything. Compositing an airplane resting on an empty runway is only a little harder. When it's on the ground, you only have to match one constraint: that it's not floating above the ground or penetrating below it.

It's significantly more difficult to match more than one point simultaneously. Using the airplane example again, imagine that you have a photograph of a real airplane on the ground and you want to align the real airplane in the image with your LightWave airplane. This will allow you to do fun things like change the airplane's paint job, add exotic weapons, or crossfade the image into a wireframe blueprint.

Matching your LightWave airplane to the real airplane is difficult. Even if your model is perfect, it's frustrating to find the exact position, rotation, and scale of the object to make it match your image. A small shift to align the landing gear may move the wing too, causing it to be misaligned. Rotate the model a few degrees to re-align the wing, and the tail will slide off center. Fix the tail, and your landing gear will be broken again.

Matching Strategies and Limitations

There are two methods to find the best alignment. The first is a combination of skill, luck, and patience: manually and carefully adjusting the object until it matches your image. It is not an exaggeration to say that a difficult match of a single image, even without animation, might take a full day.

The second method for perfect matching is to change strategies. Instead of using your eyes and skill, you use measurement and a tool: Taft's Camera Match plugin.

Camera Match uses mathematical fitting to automatically determine what camera settings (position, rotation, zoom, and pixel aspect ratio) will match your image best. The disadvantage of this technique is that you have to tell the plugin what you want to match (which points on your object) and where they should match to (which pixels in your image). The quality of the match is only as good as the quality of the data you give it. Inaccurate or insufficient data will make the match quality suffer.

Taft's Camera Match plugin is still limited in the sense that it is not designed for camera tracking, which would analyze 2D image sequences, automatically follow the feature points as they move, and produce an animated camera path. This kind of camera tracking tool is more useful than simple camera matching. But it's also a lot more effort to write and use properly; the commercial packages which do it well cost thousands of dollars.{The camera matching program at http://www.realviz.com costs \$1,000, plus \$200 for each animation you track! Compare that to Taft's price.}

Even if your data is perfect, matching some kinds of photographs is still impossible. Specifically, LightWave uses a simple camera lens design which can't match all real camera lenses, no matter what settings you select.{A plugin in our James K. Polk plugin collection was written to solve this problem.} Luckily, most cameras use lenses which LightWave can easily approximate.

Using Camera Match

Camera Match uses the idea of feature points: easy-to-measure, representative locations on your object. These feature points need to be easy to identify so you can see where they appear in your image. Camera Match determines the settings that best move these representative points into place, which should properly position the rest of your model.
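The idea can be sketched in a few lines of Python. This is a toy model, not the plugin's actual code: it assumes a simplified pinhole camera looking down the Z axis with no rotation, and the function names (`project`, `fit_error`) are made up for illustration. A candidate set of camera settings projects each 3D feature point to a pixel, and the quality of those settings is the distance between the projections and the pixels you measured.

```python
import math

def project(point, cam_pos, zoom, width=640, height=480):
    # Toy pinhole camera looking down +Z with no rotation, to keep the
    # sketch short. `zoom` magnifies the image, loosely like LightWave's.
    x, y, z = (p - c for p, c in zip(point, cam_pos))
    px = width / 2 + zoom * width * x / z   # perspective divide by depth
    py = height / 2 - zoom * width * y / z  # image Y grows downward
    return px, py

def fit_error(points3d, pixels2d, cam_pos, zoom):
    # RMS distance, in pixels, between where the candidate camera puts
    # each feature point and where you measured it in the image.
    total = 0.0
    for p3, (u, v) in zip(points3d, pixels2d):
        px, py = project(p3, cam_pos, zoom)
        total += (px - u) ** 2 + (py - v) ** 2
    return math.sqrt(total / len(points3d))
```

Fitting is then a search for the settings that minimize this error; the real plugin also fits rotation and pixel aspect ratio.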

For example, if you have a photograph and a LightWave model of a car, you might pick feature points like the center of each headlight, the corners of each window, the outer point of the side view mirror, the front corner of the hood, and the center of the door handle.

You can choose up to 20 feature points. Camera Match will always do a better job when you use more points, so use as many as you can! You must use at least 4.

The feature points should be easy to identify in your image so you can determine their position accurately. The center of a featureless door is a bad choice because there's no detail in your image to show exactly where that point is. Corners, edges, and joints are usually the best choices because it's easy to locate their exact pixel location in your image.

You also need to identify the same locations, in 3D, in your scene. Very often you'll need to take careful measurements of your object in order to locate the 3D points accurately. Sometimes this measurement step is easy, especially if you have blueprints, or if your object is something like a brick tiled walkway whose features are easy to count and measure.

It's more difficult to measure a complex or organic object, so you have to be more careful and use more points to compensate. A tape measure and a notepad are invaluable for measuring.{We like using a metric tape measure since it's much easier to add centimeters than feet, inches, and fractions.} If you have a good 3D LightWave model of your photographed object already, measurement isn't as important since the model itself already shows the proper locations.

Next, find the exact pixel location of each of the feature points in the image. You can do this with a tool like Photoshop by zooming into the image and using the cursor position readout. You can also do the same thing in LightWave by using your photograph as a background image, and using QV's{If you're not using QV to display LightWave's renders, you should. You can get this plugin at http://www.realviz.com. It's vastly superior to LightWave's image display.} zoom and cursor location display. You'll end up with a list of $X,Y$ pixel locations, one for each of your feature points.

Now set up your LightWave scene. Each feature point needs to be marked for the plugin to find it. The easiest way to do this is with null objects. Add one for each of your feature points. It's a good idea to name them descriptively, like "Top Window Corner" or "Doorknob" since it's confusing to have dozens of points named similarly to "Null (17)."

If your object is movable, like a car, you should parent the nulls to the object and move them into their proper 3D positions. If you're fitting something that will never move (like the ground or a building), you don't have to parent the nulls, since you'll never move them as a group.

If you've measured your object, then it's easy to use LightWave's numeric position control to specify the exact location of each null. If you don't have measurements, you'll have to use your best judgment in placing them, moving each null to the proper location on your model.{This is much easier and more accurate with a good model.} Without measurements, you'll need more feature points to get a good fit.

It's not necessary to actually build a model of your object. You only need to be able to place the null objects in the appropriate locations in Layout.

Entering Your Data

Camera Match is a motion plugin which is applied to the camera in the Camera's Graph editor (LW 5.x) or Camera Motion Options (LW 6.x).

Most of Camera Match's interface is a long list labelled {\bf Feature Points.} This is where you enter the data you've measured. For each feature point, use the item picker on the left to select the null object defining that point.

Next, enter the pixel location of the feature point to the right, in the columns labelled {\bf At X Pixel} and {\bf At Y Pixel.} It's OK to use fractional pixels like $135.4$ to get extra accuracy if you can guess the position of each point that accurately.

Finally, enter the size of your image in the area labelled {\bf Original Reference Image Size} if necessary. The default values match your current LightWave image size, so these are usually already correct.

After you've entered the feature points and pixel positions, Camera Match displays a lot of interesting information. The box on the bottom left of the panel, labelled {\bf Compute Best Values For}, lists the camera settings that produce the best match. This includes the camera position, rotation, and zoom, as well as the image pixel aspect ratio.

Camera Match also gives you two error numbers labelled {\bf Original Error} and {\bf Final Error.} A perfect fit (0.0 pixel error) means that every feature point is placed exactly at the proper image point when rendered. Worse fits have larger errors. The Original Error shows the original camera setting quality before fitting.{You can try your manual camera matching skill by matching an object then using Camera Match to tell you how large your error was.} A good fit will have an error of less than two pixels.

The Final Error shows the quality of Camera Match's computation. If the error is high (over five pixels or so), your input data probably has errors. These may be from bad feature point measurement, an inaccurately placed null object, or a poor 2D pixel position.

The right hand column labelled {\bf XY Match Error} shows the combined X and Y pixel error for each feature point. This shows how far away each feature point is from its desired position in the reference image, which may give you a hint about why the point doesn't match well. The points which have the highest error are flagged with a red color to make them easy to spot.

You have control over which camera settings are fitted. For example, if you want to force Camera Match to use your keyframed camera zoom during the fit, click off the {\bf Zoom} button and Camera Match won't fit that value.

This ability to restrict what channels are fit is sometimes useful, but it will never improve the quality of your fit. Any restriction gives Camera Match less control, so it will have a harder time fitting your data.

The most common restriction is {\bf Aspect}, LightWave's pixel aspect ratio. Camera Match will tell you the best aspect to use, but you may want to override it since the aspect is not often changed in LightWave and is never animated.

Restrictions on position {\bf XYZ} and rotation {\bf HPB} aren't recommended, but are available if you need them.

Restricted parameters display the keyframed value in grey text. Values computed by Camera Match are displayed in bright yellow.

You can view the fitted camera parameters for any frame of an animation by changing the {\bf View Frame} control. This tells you the camera parameters that best match your feature point positions for that frame.{This feature doesn't help for animated camera tracking.} You'll nearly always leave this set at the default, frame 1.

Using the Matched Camera Parameters

After Camera Match has finished its fit, you're probably eager to see how it looks! The plugin can move and rotate the camera when you activate the {\bf Move Camera} button on the bottom right. Unfortunately, LightWave doesn't allow plugins to set all of the camera settings automatically. To set the zoom and pixel aspect ratio, use a piece of paper to write down the Zoom and Aspect values displayed in the bottom left box of the interface.

Visit the camera panel (LW 5.x) or camera properties panel (LW 6.x). Enter the values for the Camera Zoom and Pixel Aspect Ratio which you wrote down.

Now you may want to make a test rendering to make sure you're happy with the fit. When you're satisfied, you can make your final renderings. However, the Camera Match plugin is usually used temporarily, just to determine your needed camera settings, then removed. After you've computed your camera parameters, you won't need to keep Camera Match in your scene any longer.

Camera Match takes about one second of computation{This was written in January 2000 using my cheesy 450 MHz x86 PC. If you're reading this note in 2004 and you're laughing at my long "one second" time estimate because of your 4 GHz quad Athlon PC, I hate you.} to compute a fit. If you leave the Camera Match plugin active, it will slow LightWave down every time it redraws the screen. To avoid this annoying slowdown, you could write down the camera position and rotation values and keyframe them manually. But it's easier to export them into a LightWave motion envelope which you can simply load into the camera motion. In the Camera Match interface, make sure Move Camera is selected, then click the {\bf Export Motion} button. This will save a motion envelope containing your camera position and rotation. You can then remove Camera Match completely, and use LightWave's Load Motion button to apply the settings to the camera.

You might still want to keep Camera Match applied, but disabled. This preserves your fit setup if you want to modify it later. To keep Camera Match disabled, simply make sure that the Move Camera button is off.

Matching Tips

Camera matching is really an art, even when you have a tool. It takes a little practice to be able to do it easily, but the rewards are enormous when you want to make photoreal composites. Measure carefully, then measure everything a second time; bad fits are almost always caused by bad measurements. The overall scale of your measurements doesn't matter, only their accuracy relative to each other, so use whatever units are convenient.

As a practice exercise, try tracking something simple like a cube; notice that even without any parenting, Camera Match can follow it.

\end{description}

Camera Match FAQ

\nopagebreak

\faq{One of my feature points is flagged with red, showing a high error. But I've checked, double checked, and triple checked the point and it's very accurate. Why does Camera Match say it's wrong?}

A point with high error (shown in red text) means that the point's information is inconsistent with all the other points. If you know the point is accurate, then some of the other points must be wrong. It's easy to make a measurement error which offsets a whole group of points by accident. This happens because it's often convenient to measure the position of one feature point relative to another feature point. This means that any error in one point is shared with the points that you measured relative to it.

For example, if you define point $A$ to be at 0, and measure $B$ to be 10 cm to the right of $A$, and $C$ 10 cm to the right of $B$, then obviously $C$ is 20 cm from $A$. But this means that $C$ depends on two measurements, so it's twice as likely to be wrong. If you make an error measuring $B$'s position (perhaps measuring it as 9 cm, not 10), then that error will be shared with $C$. You'll think that $C$ is $10+9=19$ centimeters away from $A$, even though your $C$ measurement of 10 cm is correct. In this case, $B$ and $C$ are consistent with each other, but $A$ looks like it's wrong, since it doesn't match either $B$ or $C$.
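The chained-measurement example above can be demonstrated in a few lines of Python (a toy illustration using the same numbers; the function name is made up):

```python
def positions_chained(offsets):
    # Each point is measured relative to the previous one, so any error
    # in one offset silently shifts every point measured after it.
    pos, out = 0.0, []
    for off in offsets:
        pos += off
        out.append(pos)
    return out

# True layout: A = 0, B = 10, C = 20 (centimeters).
# B's offset was mismeasured as 9 cm; C's own 10 cm measurement is perfect.
a, b, c = positions_chained([0.0, 9.0, 10.0])
# C lands at 19 cm, inheriting B's error even though C was measured correctly.
```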

Moral: Double check all of your measurements when you have a poor fit.

\faq{LightWave becomes sluggish when I use the Move Camera option.} This option isn't meant to be left on. It's just a quick way to view the applied camera position and rotation to see its effects before you export them with the Export Motion button.

The slow speed can be a major problem if LightWave's "Show Motion Paths" option is on, since LightWave will call Camera Match literally hundreds of times for each screen redraw. This can slow LightWave down enormously, in some cases taking 20 seconds or more to redraw. Don't use LightWave's "Show Motion Paths" if you're using Camera Match's "Move Camera" option.

\faq{How do I match a scene when I didn't measure anything, or the points aren't very distinct so I can't identify them easily?}

It may not be possible. If there are no reference points, there's no way for the plugin to know what to do. You may be forced to use your best guesses for missing data, which will probably give a poor fit. In that case, you may want to use that fit as a starting point for a manual matching; without information from you, the plugin can't help much.

\faq{I have a low fit error, but it produced crazy settings like Zoom=100. Why?}

When you have a very flat object, you get a nearly identical image if you move the camera backwards and zoom in. With no depth, the flat object doesn't give any clues about what kind of zoom your real camera used. You can force the zoom to be any value you like using the zoom restriction button. With flat objects, this often doesn't hurt the fit quality much.
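A tiny numeric sketch shows why (this is a simplified 1D pinhole projection with made-up numbers, not the plugin's math): with every feature point at the same depth, doubling both the camera distance and the zoom reproduces the image exactly, so the fit can't tell the two cameras apart.

```python
def project(x, z, cam_z, zoom):
    # 1D pinhole projection: horizontal offset from the image center.
    return zoom * x / (z - cam_z)

# A perfectly flat object: every feature point at depth z = 0.
flat = [(-1.0, 0.0), (0.5, 0.0), (2.0, 0.0)]
near = [project(x, z, cam_z=-5.0, zoom=5.0) for x, z in flat]
far  = [project(x, z, cam_z=-10.0, zoom=10.0) for x, z in flat]
# near == far: the image gives no clue which distance/zoom pair was used.

# Give the points some depth and the two cameras no longer agree:
deep = [(-1.0, 0.0), (0.5, 1.0), (2.0, 2.0)]
near_d = [project(x, z, cam_z=-5.0, zoom=5.0) for x, z in deep]
far_d  = [project(x, z, cam_z=-10.0, zoom=10.0) for x, z in deep]
```

This is why restricting the zoom to a sensible value often costs nothing with flat objects: many zoom/distance pairs fit equally well.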

\faq{I cropped a photo, and Camera Match can't fit the points as accurately as it did when I used the original full image. Why?}

A cropped photograph can't be accurately rendered with LightWave's rendering model. A real photograph's vanishing point is in the center of the image. If you crop the image, that vanishing point will no longer be centered. Neither real cameras nor LightWave make photographs with offset vanishing points. Camera Match will still try to fit the image, but the fit will probably be degraded.
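A small sketch makes the problem concrete (a simplified pinhole model with hypothetical numbers): an off-center crop shifts every pixel by the same constant, but panning a centered camera to compensate shifts points at different depths by different amounts, so no camera setting reproduces the crop exactly.

```python
import math

def project(x, z, pan, zoom):
    # Pinhole projection after rotating (panning) the camera by `pan` radians.
    xr = x * math.cos(pan) - z * math.sin(pan)
    zr = x * math.sin(pan) + z * math.cos(pan)
    return zoom * xr / zr

pts = [(-1.0, 5.0), (0.0, 5.0), (1.0, 4.0)]  # feature points at mixed depths
full = [project(x, z, pan=0.0, zoom=1.0) for x, z in pts]

# An off-center crop subtracts the same constant from every pixel:
cropped = [u - 0.1 for u in full]

# Pan the camera so the middle point lands where the crop puts it...
pan = math.atan(0.1)
panned = [project(x, z, pan=pan, zoom=1.0) for x, z in pts]
# ...the middle point matches, but the others miss: a perspective
# rotation is not the same thing as a uniform pixel shift.
```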

\faq{My fit is obviously wrong. Why?}

There can be many reasons. You may not have enough data, so you should use more feature points. Your data quality may be poor; check your measurements. Your data may be badly chosen; make sure that you choose points as widely spaced as possible, covering the entire image if you can.

Camera Match's fit should be optimal; no other camera settings will give you lower mathematical error. When there's a problem, it's usually with the quality and quantity of the input data.

\faq{If I use any restrictions on the camera settings, the match quality sucks.}

Restrictions are usually not a good idea unless you're very sure you need them. For example, you might assume that your camera has no bank rotation, so you want to restrict the camera Bank to 0~degrees. But even a 1~degree bank error can prevent a camera match from being successful.

\faq{I don't want to move my camera at all. How can I move my object instead?} It doesn't matter to LightWave if you move the object or the camera when you're matching. All that matters is that the object is in the proper view of the camera. If you really need to lock the camera, you can move the object indirectly. First place your object at 0,0,0. Use Camera Match to find the camera fit. Add a null object, and move it to the fitted camera position. Parent the object to the null. Now parent the null to the camera.{In LW 5.x you'll need a parenting plugin, like one of the tools in Polk, to do this.} This procedure has the effect of placing the object in the right position and orientation, no matter where you put the camera. You can also use this method on different objects in the same scene to fit more than one object at once.

\faq{What is the difference between camera matching and camera tracking?}

Camera matching finds the camera settings required to recreate a view of an object in a photograph.

Camera tracking finds animated camera settings for an entire animation. Additionally, a tracker will usually automate the process of identifying the motion of the 2D pixel locations of moving feature points.

Taft's Camera Match plugin is designed only for matching. It can be used for tracking with significantly more effort, but it's not much fun.

\faq{So how do I do camera tracking for my animation anyway?}

This is a lot harder than matching a single frame! Camera Match can help you, but it's not what the software is designed for. But if you do have to track something, you can still use Camera Match to make it easier.

To track an animation, start by using Camera Match normally on the first frame to get good initial settings. Then, go to a later frame (perhaps 30 frames later) and fit that frame. This second fit will be easier, since you only need to update the 2D image pixel locations, not the 3D null object locations. Repeat this for the length of your animation, matching perhaps every 30th frame.

Now watch your animation in LightWave. Since you didn't match every frame, it's likely that the match will be poor during some of the interpolated frames, especially if your object motion is fast or wild. Find the frame with the worst error, use Camera Match to find the best settings for that frame, and use those settings to make the frame into a keyframe. Keep repeating this process of finding and matching the worst matched frames, and eventually the whole animation should be tracked with low error.
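The refinement loop above can be sketched in Python. This is a stand-alone toy model: the "true" camera motion is a known function, "running Camera Match on a frame" is simulated by reading that function, and plain linear interpolation stands in for LightWave's keyframe interpolation. The names (`interpolate`, `track`) are made up for illustration.

```python
import math

def interpolate(keys, frame):
    # Linearly interpolate a camera channel between keyframed values.
    frames = sorted(keys)
    if frame <= frames[0]:
        return keys[frames[0]]
    if frame >= frames[-1]:
        return keys[frames[-1]]
    for a, b in zip(frames, frames[1:]):
        if a <= frame <= b:
            t = (frame - a) / (b - a)
            return keys[a] + t * (keys[b] - keys[a])

def track(true_path, length, tolerance):
    # Keyframe the worst-matching frame until every frame is within tolerance.
    keys = {0: true_path(0), length: true_path(length)}
    while True:
        worst = max(range(length + 1),
                    key=lambda f: abs(interpolate(keys, f) - true_path(f)))
        if abs(interpolate(keys, worst) - true_path(worst)) <= tolerance:
            return keys
        keys[worst] = true_path(worst)  # "run Camera Match" on that frame
```

The payoff of attacking the worst frame first is that smooth stretches of motion end up with few keyframes, while fast or wild stretches automatically collect more.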

This process is tedious, and you have to do a lot of cutting and pasting of camera locations. But it does work, and it's still easier than manual matching. If you do this kind of tracking a lot, you may want to invest in a camera tracking program.{Digital Domain is one studio that wrote their own in-house tools for tracking. A couple other studios have their own private tools, but I don't know of a commercial LightWave tracking program.}

History

I visited Amblin Imaging{Now defunct.} in 1994 when it was the largest LightWave studio. They pioneered the use of LightWave in a TV series with the show seaQuest DSV. They were doing extremely well and bidding on larger projects like movies.

They were excited about the idea of seamlessly compositing LightWave characters into real, filmed sequences. In the days of 25 MHz Amigas, this was an ambitious goal! The hardest part of compositing wasn't making photoreal characters, but making their motion match the filmed background.

They were bidding on a movie that featured toy dolls running across the floor, climbing on furniture, and interacting with the real world. Integrating these characters into filmed sequences was very difficult, especially finding the proper LightWave camera motion to keep the characters from "sliding" across the floor. The artists had no tools to match the real-world motion except their eyes and experience, which simply weren't enough.

When I visited the studio, they were very eager to find any alternative method for camera tracking. I wrote a primitive, GUI-less tool which could do most of what the Camera Match plugin does now. It was cumbersome, but it did find the proper camera settings accurately.

They wrote some in-house tools (in Visual BASIC, if I remember correctly) to help make my program easier to use. The program I wrote read and wrote raw text files, and these had to be manually moved to and from LightWave.

Amblin didn't get the movie job, but they did use my primitive program for other production work.

My experimental camera match program sat unused for a couple of years until a small studio heard about it and begged for a copy. They used it successfully but it was still awkward to use.

In 1999 we were testing Taft's Sticky~FP plugin at different studios. Several artists realized Sticky~FP would be much more useful if you could accurately match LightWave's scene to the camera used to take the sticky image map.

I dusted off my old camera matching software and made a LightWave GUI for it. I became unsatisfied with the quality of the matching, so I kept the new GUI but re-wrote the matching algorithm completely. The version of Camera Match in Taft is faster, more robust, and more accurate than my five-year-old test matching tool.

There's a lot more that could be added to Camera Match. It would be nice to make a GUI for selecting image points directly instead of having you type them in. It would be especially great to extend Camera Match into a camera tracking tool, though that's an extremely difficult job to do properly, mostly because of the extensive GUI which is needed to do it right.{The math is also difficult, but I like that part!}