Matt,

Your imaging device has only a single point of view, and that will greatly limit the "detail" you can extract from the echo information.

Imagine TWO of your current units mounted, say, a few feet apart and operated in sing-song fashion: first one sends/receives, then the other. Since each can independently sweep an overlapping zone, you can glean more info about this overlapping region. (There's a rough numeric sketch of this tacked onto the end of this message.)

Mode Two: Instead of sweeping circularly as you currently do, mount a single unit on a linear track so it can move back and forth a few feet. If you aim the unit straight ahead and move from one side of the track to the other, you will build up a sliding image that has many forward-looking points of view. Well, at least it's linear instead of curved...

Extend the concept one more stage: position your sonar transducers at the extreme left, aimed 45 degrees to the right. As you step the sensors down the track from left to right, keep adjusting the angle of view of the transducers so that they are always looking directly at a POINT a few feet ahead. By the time the sensors are at the extreme right of the track they will be pointed *left* towards this point. Now you have a bunch of data that describes a curved view looking *in* instead of *out*. If you do the run again but looking at a different focal point, you will get some more "new" information instead of just the "same old" information over and over. (The aiming geometry is also sketched at the end of this message.) Of course you now have to manipulate this data to be able to display an image, but at least now your image is richer.

A phased array of sensors arranged in a curve can give you much the same info. In this case the sensors are stationary and activated one after the other to generate a "sweep". The tradeoff is more sensors, but greater speed of acquisition.

Just a few random ramblings from
Fr. Tom McGahee

----------
> From: Matt Bennett
> To: PICLIST@MITVMA.MIT.EDU
> Subject: PIC based imaging sonar
> Date: Wednesday, June 23, 1999 10:46 AM
>
> I've built a PIC based (16F84 and 16C71) sonar that, along with a computer,
> can produce an image- I've got a picture and some details here:
>
>
>
> I wanted to get some feedback and maybe enlist some help in the effort, so
> take a look and tell me what you think. It is far from complete, it really
> needs some enhancements and refinements, but it definitely has a lot of
> potential. I'm hoping to use it as a sensor on a roving robot. I
> concentrated on resolution- so I can resolve things from about 6 inches
> to about 4 feet (until I get a 16C711), which will double the range to a
> whopping 6 feet. My resolution within that range is about 2 inches.
>
> My azimuth resolution is poor, due to the transducers that I'm using. I've
> found much more focused transducers, but they are much larger and far more
> expensive. I fully realize that size and gain are coupled, but I'm hoping
> I can figure out a way to increase gain (and correspondingly the resolution)
> without resorting to a far more expensive transducer. A couple of the
> ideas I have been toying with are parabolic reflectors and phased arrays of
> transducers (actually just multiple transducers placed the proper number of
> wavelengths apart, driven in parallel).
>
> So, please take a look, and let me know what you think.
>
> Thanks,
> Matt Bennett
>
> --
> Matt Bennett
> mjb@arlut.utexas.edu
> 512-835-3867
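P.S. To put a number or two on the two-unit idea: each unit by itself only tells you "something is N inches away", but two of them a known distance apart let you intersect the two range circles and pin down an actual (x, y) position for an echo in the overlapping zone. Here is a rough C sketch of that intersection math. The 24-inch spacing and the sample ranges are made-up numbers for illustration, not anything from Matt's actual hardware.

/* Rough sketch: locating one echo target from TWO transducers.
 * Each unit reports only a range; with two of them a known distance
 * apart you can intersect the two range circles and get an (x, y)
 * position instead of just "something N inches away".
 * All numbers below are made up for illustration.
 */
#include <stdio.h>
#include <math.h>

#define BASELINE 24.0   /* inches between the two transducers (assumed) */

/* Transducer A sits at x = -BASELINE/2, transducer B at x = +BASELINE/2,
 * both at y = 0 and looking in the +y direction.
 * Returns 0 on success, -1 if the two ranges can't intersect in front.
 */
static int intersect_ranges(double rA, double rB, double *x, double *y)
{
    double half = BASELINE / 2.0;
    double xt   = (rA * rA - rB * rB) / (2.0 * BASELINE);
    double y2   = rA * rA - (xt + half) * (xt + half);

    if (y2 < 0.0)
        return -1;          /* inconsistent ranges: no real intersection */

    *x = xt;
    *y = sqrt(y2);          /* keep the root in front of the array */
    return 0;
}

int main(void)
{
    double x, y;

    /* Pretend unit A heard an echo at 40" and unit B at 32". */
    if (intersect_ranges(40.0, 32.0, &x, &y) == 0)
        printf("target at roughly (%.1f, %.1f) inches\n", x, y);
    else
        printf("ranges do not intersect -- probably two different targets\n");

    return 0;
}

If the two ranges came from two different targets the circles may not intersect in front of the array at all, which is itself useful information.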
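And a similar rough sketch of the aiming geometry for the focused-track scan: step the sensor along the track and recompute the aim angle toward one fixed focal point at every stop. Again, the track length, focal distance and step count are made-up numbers; when the focal distance equals half the track length, the end stops come out at the +/-45 degrees mentioned above.

/* Rough sketch of the "focused track" aiming geometry: a sensor steps
 * along a linear track and is re-aimed at every stop so that it always
 * points at one fixed focal point in front of the track's center.
 * Track length, focal distance and step count are made-up numbers.
 */
#include <stdio.h>
#include <math.h>

#define PI          3.14159265358979323846
#define TRACK_LEN   48.0    /* inches of travel along the track (assumed) */
#define FOCAL_DIST  24.0    /* focal point this far in front of center    */
#define STEPS       9       /* number of stops along the track            */

int main(void)
{
    int i;

    for (i = 0; i < STEPS; i++) {
        /* Sensor position, from -TRACK_LEN/2 (far left) to +TRACK_LEN/2. */
        double xs = -TRACK_LEN / 2.0 + i * (TRACK_LEN / (STEPS - 1));

        /* Aim angle measured from straight ahead, positive = to the right.
         * The focal point sits at (0, FOCAL_DIST).
         */
        double aim_deg = atan2(-xs, FOCAL_DIST) * 180.0 / PI;

        printf("stop %d: sensor at %+5.1f in, aim %+6.1f deg\n",
               i, xs, aim_deg);
    }
    return 0;
}

Running this prints +45 degrees at the far-left stop, 0 degrees at center, and -45 degrees at the far right, which is exactly the "curved view looking in" described above.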