From: azuma@cs.unc.edu (Ronald Azuma)
Subject: Re: head tracking
Date: 24 Nov 91 20:12:01 GMT
Organization: University of North Carolina, Chapel Hill




In article <1991Nov20.231255.9659@milton.u.washington.edu> galt@dsd.es.com 
(Greg Alt - Perp) writes:

>It made it sound like it is not extremely difficult to
>discover velocity and rotational velocity by looking at images from a moving
>camera.  

	Gary Bishop explored this idea in his 1984 thesis work [1][2].  By
placing an array of "smart" 1-D image sensors on the user's head and comparing
the differences between two images taken at different times, it should be 
possible to detect changes in the head's position and orientation.  The faster 
you can run the system, the smaller the image differences will be, which in 
turn means your image sensors can be simpler, easier to build, and faster.  
In theory, this scheme does not require modification of the environment.  
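
	To make the idea concrete, here is a minimal sketch (not Bishop's
actual algorithm, and the sensor readings below are made up) of recovering
the shift between two readings of a 1-D sensor by brute-force correlation:

import numpy as np

def estimate_shift(prev, curr, max_shift=4):
    """Return d (elements) such that curr is roughly prev shifted right by d."""
    best_d, best_score = 0, -np.inf
    valid = slice(max_shift, len(prev) - max_shift)
    for d in range(-max_shift, max_shift + 1):
        aligned = np.roll(curr, -d)          # undo a rightward shift of d
        score = float(np.dot(prev[valid], aligned[valid]))
        if score > best_score:
            best_d, best_score = d, score
    return best_d

# Fake data: a blurred spot that moved 2 elements between readings.
x = np.arange(64)
prev = np.exp(-((x - 30.0) / 4.0) ** 2)
curr = np.exp(-((x - 32.0) / 4.0) ** 2)
print(estimate_shift(prev, curr))            # prints 2

Run the system fast enough and the shift per reading stays within an element
or two, which is what allows the sensors themselves to be so simple.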

	However, making a robust working system out of this concept is non-
trivial.  Phil Jacobsen, a PhD student here, is continuing the work that Gary
initiated.

	We want to explore long-range tracking, 1) to allow the user to
explore larger virtual worlds in a more natural way, and 2) to support our
upcoming see-through HMDs that superimpose computer-generated objects on top of
the real world, where you don't have the luxury of moving by "flying" anymore.
So, in parallel, we've been pursuing a more straightforward approach:
infrared LEDs in the environment and head-mounted "cameras" viewing
them.  These "cameras" are really lateral-effect photodiodes, which return
the centroid of the blob imaged on them, so we don't have to
spend lots of time on image-processing algorithms.  Jih-Fang Wang built a 
three-"camera," three LED, benchtop prototype of this system [3][4].

	Why three cameras?  Geometrically, you need to use LEDs that are
widely separated to get a good solution.  Conceptually, one camera with a wide
field of view would suffice, but all detectors have limited resolution (e.g.,
1K x 1K).  One wide field-of-view lens spreads that resolution across the
entire ceiling and hurts your accuracy.  Having several cameras with narrow
field-of-view lenses means you get the geometric separation and good resolution
simultaneously.
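
	To put rough numbers on that (the fields of view below are purely
illustrative, not our actual lens specifications):

def arcmin_per_element(fov_deg, elements=1024):
    """Angle subtended by one detector element, in arc-minutes."""
    return fov_deg * 60.0 / elements

for fov_deg in (90.0, 30.0):
    print(f"{fov_deg:4.0f} deg lens: {arcmin_per_element(fov_deg):.2f} arcmin/element")
# A lens with a third the field of view gives each of the same 1K elements
# a third the angle to cover, i.e. measurements three times as fine.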

	Initial speculations on how to turn this into a full-scale system are
in [5].  The actual system, which we put together (barely) in time to bring
to the "Tomorrow's Realities" exhibition at this year's SIGGRAPH, is rather
different in execution (only the sensors and the concept are retained; the
hardware, software, and math have all been changed), and will be covered in an 
upcoming paper [6].  This system has 960 LEDs in a 10' x 12' ceiling and four
"cameras" on the user's head.  By adding more panels to the ceiling, one can 
scale the system to the desired working area.
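
	Conceptually, each "camera"-LED sighting constrains the head pose
through a standard pinhole-projection relation, and a photogrammetric solver
adjusts the pose until the predicted centroids match the measured ones.  The
sketch below is that generic relation with made-up numbers, not the actual
math of our system (which will appear in [6]):

import numpy as np

def projection_residual(R, t, led_world, measured_xy, focal_len):
    """Predicted minus measured sensor position of one LED, for a candidate
    pose (R, t) mapping world coordinates into camera coordinates."""
    p_cam = R @ led_world + t                       # LED in camera coordinates
    predicted = focal_len * p_cam[:2] / p_cam[2]    # pinhole projection
    return predicted - measured_xy

# Toy example: camera at the origin looking straight up at a ceiling 2 m away.
R = np.eye(3)                              # camera axes aligned with the world
t = np.zeros(3)                            # camera at the world origin
led = np.array([0.1, 0.2, 2.0])            # known LED position (meters)
measured = np.array([0.0005, 0.0010])      # centroid the photodiode reported (m)
print(projection_residual(R, t, led, measured, focal_len=0.01))   # ~ [0. 0.]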

	This system is experimental and has problems.  The biggest
is the combination of weight and limited head-rotation range.  The "cameras"
and lenses are too heavy, limiting us to four "cameras," which is not 
enough to let you tilt your head as much as you would like.  We need to make
the head-unit lighter so we can eventually put more "cameras" on the head that
will provide the desired rotation range.  We've ordered a set of lighter
lenses (1.5 oz. vs. 11 oz.), and we'll see if they work.  Beam-splitters or
holographic optical elements might be able to superimpose multiple views onto
one sensor (thus letting one do the work of several) but that's longer-term.
Discrete "jumps" in position and orientation can occur when switching the
working sets of LEDs, and we've taken steps to minimize these.  (This
problem should occur in *any* cellular system, not just in optical 
technologies.)  We are exploring calibration techniques to reduce the need
to control error sources so carefully (looking again to photogrammetric
techniques, and to work that J.F. Hughes of Brown University is doing to help us).
Every tracker seems to have a weakness: while metal is the Kryptonite of
magnetic-based trackers, infrared light is ours.  Fluorescent light is no
problem, but incandescents and sunlight are.  We have filters (hardware
and software) to reduce their effect, so background light is usually tolerable.
For example, incandescents were used to dimly light the "Tomorrow's Realities" 
exhibition at SIGGRAPH, and we were still able to run our system.

	This tracker is by no means a finished product; lots of work remains
to be done.  We hope to use it as general support for HMD applications and 
tracker research, and as a testbed for developing and exploring technologies 
that will eventually supersede this system.


[1]	T. G. Bishop and H. Fuchs.  The self-tracker: A smart optical sensor
	on silicon.  In Proceedings Conference on Advanced Research in VLSI.
	MIT Press, 1984.

[2]	T. G. Bishop.  Self-Tracker: A Smart Optical Sensor on Silicon.  PhD
	Thesis, U. of North Carolina, Chapel Hill, NC, 1984.

[3]	Jih-Fang Wang.  A Real-time Optical 6D Tracker for Head-mounted Display
	Systems.  PhD Thesis, U. of North Carolina, Chapel Hill, NC, 1990.

[4]	Wang, Chi, Fuchs.  A real-time 6D optical tracker for head-mounted
	display systems.  Proceedings of 1990 Symposium on Interactive 3D
	Graphics (also Computer Graphics, Vol. 24, No. 2, March 1990), 
	Snowbird, Utah, 1990.

[5]	Wang, Azuma, Bishop, Chi, Eyles, Fuchs.  Tracking a head-mounted
	display in a room-sized environment with head-mounted cameras.  SPIE
	Vol. 1290 Helmet-Mounted Displays II (1990), from SPIE Technical
	Symposium on Optical Engineering and Photonics in Aerospace Sensing
	(16-20 April 1990, Orlando, FL).

[6]	Ward, Azuma, Bennett, Gottschalk, Fuchs.  A Demonstrated Optical 
	Tracker with Scalable Work Area for Head-Mounted Display Systems.
	To appear in Proceedings of 1992 Symposium on Interactive 3D Graphics,
	Cambridge, MA, 1992.

							Ron Azuma
							(azuma@cs.unc.edu)
