Looks like there are a few good ideas in there. My summary:

1) Your space is anisotropic (if I pick the word correctly); that is, it is not of uniform texture. Almost any scanning modality will of necessity detect irregularities.

2) Probably the only modality useful for distinguishing important objects (humans in particular) by characteristic is heat, and that only useful at night.

3) The alternative is image comparison. The "reference" image may be either fixed or updated by some delay (low-pass filter) mechanism.

4) A mixture of the above would seem most appropriate. PIR detectors, as already installed, are the simplest implementation of this, but give a view with no spatial dimension (i.e., only time!).

5) What is therefore called for is a two- or possibly three-dimensional implementation. The simplest way to do this, to my mind, is a small array of linear (one-dimensional) resolvers strategically placed. Such devices are readily constructed from PIR sensors and drum scanners, both readily available on the "disposals" (surplus) market and, of course, new.

6) Processing of the data stream from these is probably best performed centrally. Each sensor requires a "reference" store, some processing to separate out differential data and (low-pass) update the reference, and a function mapping its linear "view" into (part of) the actual area in conjunction with overlapping sensors. It would be appropriate, as suggested, to implement a "teaching" function on the fully implemented system to map critical points and subsequently interpolate between them. The map would become an array of point entries, each documenting an actual grid location, an identity/position (one-dimensional) pair for each sensor which can "see" that location, and an az/el pair to direct the camera to it. The algorithm to interpolate (map) readings is the most challenging part.
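A minimal sketch of the per-sensor reference store and low-pass update from points 3 and 6, in Python. The array size, the alpha constant, the threshold and the "freeze the reference where a detection is active" rule are all my assumptions, not part of the proposal:

```python
import numpy as np

def update_and_detect(reading, reference, alpha=0.05, threshold=3.0):
    """Compare a 1-D scan against the stored reference, then low-pass
    update the reference so slow drift is absorbed over time.

    reading, reference: 1-D arrays, one value per position along the scan line
    alpha: low-pass constant -- smaller alpha means a longer "memory"
    threshold: how far a cell must deviate to count as a detection
    """
    diff = reading - reference
    detections = np.abs(diff) > threshold        # the differential data
    # Update only cells with no active detection, so a target standing
    # still is not slowly absorbed into the reference (an assumption).
    new_reference = np.where(detections, reference,
                             (1 - alpha) * reference + alpha * reading)
    return detections, new_reference

# Example: flat background with one warm spot at position 7
ref = np.zeros(16)
scan = np.zeros(16)
scan[7] = 10.0
hits, ref = update_and_detect(scan, ref)
print(np.flatnonzero(hits))  # -> [7]
```

The central processor would hold one `reference` array per sensor and feed each new scan through this before the mapping step.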
The interpolation goes along these lines: if only one sensor registers, approximate by averaging (least squares) those reference points with the closest readings, giving preference to points seen ONLY by that sensor. If two or more sensors register, use points with entries for those sensors accordingly. Not trivial; it is either formal or informal "fuzzy" logic!

Cheers, Paul B.
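P.S. A rough sketch of that interpolation, substituting inverse-distance weighting for the least-squares averaging. The sensor names, map entries, and the factor-of-two preference for exact sensor-set matches are all hypothetical:

```python
import math

# Each "teaching" point: grid location, the 1-D position at which each
# sensor that can see it registers, and the camera az/el. Sample values.
MAP_POINTS = [
    # (x, y), {sensor_id: linear_position}, (az, el)
    ((0.0, 0.0), {"A": 10}, (0.0, -5.0)),
    ((4.0, 0.0), {"A": 40}, (20.0, -5.0)),
    ((4.0, 3.0), {"A": 42, "B": 15}, (25.0, -2.0)),
    ((0.0, 3.0), {"B": 50}, (-5.0, -2.0)),
]

def locate(readings, points=MAP_POINTS):
    """Estimate a grid location from {sensor_id: position} readings by
    weighting teaching points that share sensors with the reading."""
    weights, wx, wy = 0.0, 0.0, 0.0
    for (x, y), seen, _azel in points:
        common = set(readings) & set(seen)
        if not common:
            continue                    # this point tells us nothing here
        # Mismatch between observed and recorded 1-D positions:
        err = math.sqrt(sum((readings[s] - seen[s]) ** 2 for s in common))
        # Prefer points seen by exactly the sensors that registered:
        if set(seen) == set(readings):
            err /= 2.0
        w = 1.0 / (err + 1e-6)
        weights += w
        wx += w * x
        wy += w * y
    if weights == 0.0:
        return None                     # no teaching point shares a sensor
    return (wx / weights, wy / weights)

print(locate({"A": 41, "B": 16}))
```

The same weighting could be reused to interpolate the az/el pair for the camera once a grid location is settled.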