> On reflection (no pun intended) it might be easier to use the camera to
> detect where a beam falls on a target than to look at the user and attempt
> to determine where the beam is pointing. The "target" could be a wall or

Most definitely. I believe there has been much research directed at
determining a person's gaze from camera images, and it turns out to be a
hard problem. Even with an active element (the laser pointer) in the
equation, I think that calibration would be a nightmare. But detecting
where the laser hits in the world would be easier, if the problem is
constrained enough (which I believe it is).

> screen above the page turner. The user need not necessarily be able to see
> the beam. While this is effectively just replacing optical detectors with a
> single wide area detector the above points re flexibility etc still apply.
>
> Has anyone had experience of similar systems and / or can offer useful
> suggestions that may help reduce design time.

My initial reaction is to use the 4-sensor + light-beam approach.
Simplicity, ease of use / configuration, and low cost are all there.

If you do go with a camera-based solution, a few things to consider:

1. Detecting where the laser dot (or infrared light beam, or whatever)
falls in the image is pretty straightforward. You might have the best
success buying a Matrox Meteor image capture / image processing board and
using MIL (the Matrox Imaging Library), which has fast, robust,
hardware-accelerated image processing features. This will probably make
your life quite a bit easier, but adds $$$. I have some image processing
code that you may be interested in; contact me off-list if you want it.

2. Determining what world point a given image point corresponds to is
somewhat more difficult unless the problem is constrained (which I believe
you can do in this case). If you constrain the camera to be viewing an
approximately planar surface (say, the desk plus the book), you can
compute image-to-world correspondences with a planar homography.
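To make points 1 and 2 concrete, here is a minimal, illustrative sketch in
plain Python (no MIL or any particular camera API): it treats the laser
spot as the brightest pixel in a grayscale frame, fits a planar homography
from four image/world correspondences, and maps a detected spot to desk
coordinates. All names, thresholds, and calibration values are hypothetical;
a real system would filter the image and take a centroid rather than a
single pixel.

```python
def find_spot(image, threshold=200):
    """Return (col, row) of the brightest pixel, or None if nothing
    exceeds the threshold. 'image' is a list of rows of 0-255 ints."""
    best, best_xy = -1, None
    for r, row in enumerate(image):
        for c, val in enumerate(row):
            if val > best:
                best, best_xy = val, (c, r)
    return best_xy if best >= threshold else None

def solve(a, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(a)
    m = [a[i][:] + [b[i]] for i in range(n)]
    for col in range(n):
        p = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[p] = m[p], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for k in range(col, n + 1):
                m[r][k] -= f * m[col][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][k] * x[k] for k in range(r + 1, n))) / m[r][r]
    return x

def homography_from_4(img_pts, world_pts):
    """Fit the 8 free homography entries (h33 fixed at 1) from exactly
    four correspondences, no three of which are collinear."""
    a, b = [], []
    for (x, y), (u, v) in zip(img_pts, world_pts):
        a.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        a.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = solve(a, b)
    return [h[0:3], h[3:6], h[6:8] + [1.0]]

def image_to_world(H, x, y):
    """Map an image point through the homography to plane coordinates."""
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)
```

Calibration is then exactly the procedure described below: point the laser
at four known spots on the desk, record where each shows up in the image,
and feed the pairs to homography_from_4. After that, every detected spot
can be mapped to desk coordinates with image_to_world.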
You need 4 match points: point the laser at known world points (in the
plane) and figure out where they hit in the image. Of course, you can
constrain the problem even further and place the camera orthogonal to the
plane, and then only have to deal with an affine transformation
(scale/rotation/translation, no skew) -- and you can get rid of the
rotation part pretty easily too. See "Three-Dimensional Computer Vision"
by Olivier Faugeras for details.

3. What if determining which part of the plane (on the desk, for now) the
spot hits is not enough? (You mentioned targets, and pointing the laser at
different targets to cause different actions, etc.) Automatically
detecting different targets (without a manual calibration phase) will
probably turn out to be fairly difficult. Changing lighting conditions,
camera positions, etc. may cause problems. The more targets you must be
able to distinguish, the harder it becomes. You can vary the color,
structure, and relative positioning of the targets, but it will definitely
be a task to get the system to distinguish them robustly.

This might not be the best paper, but you might find some useful info:
http://www.cs.cmu.edu/~rahuls/Research/Projector/

Good luck.

--
http://www.piclist.com hint: PICList Posts must start with ONE topic:
[PIC]:,[SX]:,[AVR]: ->uP ONLY! [EE]:,[OT]: ->Other [BUY]:,[AD]: ->Ads