At 09:37 PM 7/21/99 -0400, you wrote:
>On Wed, 21 Jul 1999 15:28:22 -0700 "William K. Borsum" writes:
>
>>I have an application where I need to log data at the rate of up to
>>200 K-bytes per second, to a depth of 120-160 Mega-bytes.
>>Part of the problem is that the memory is broken up into a series of
>>ring buffers that are sequentially over-written until an event occurs.
>
>Consider using this logic: While waiting for the event, keep writing
>through all the memory over and over again. When the event occurs, save
>the address which corresponds to data written at the time of the event.
>Then continue writing until you're about to overwrite the last
>pre-trigger data you need to save.
>
>This method is easy to program, and it also distributes wear over the
>entire memory. If you use 200 Kbytes per second, and a capacity of 120
>Mbytes, each location is only re-written once per ten minutes while
>waiting. The recorder can run in the "waiting for trigger" mode for
>about *19 years* before a location will have been re-written 1 million
>times.

Interesting.... How would you deal with multiple events?

Had thought about using SRAM for the ring buffer and then copying to
Flash, but may not be able to "afford" the data "hole" created by the
copy operation.

kelly
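
A minimal sketch in C of the "keep writing and remember the trigger
address" scheme described above. The buffer size, pre-trigger depth,
read_sample() and trigger_seen() are placeholders for the example, not
details from the actual logger:

#include <stdint.h>
#include <stddef.h>

#define BUF_WORDS         (120u * 1024u * 1024u / 2u)  /* 120 Mbytes of 16-bit samples    */
#define PRE_TRIGGER_WORDS (BUF_WORDS / 4u)             /* pre-trigger history to preserve */

extern uint16_t read_sample(void);   /* placeholder: next sample from the ADC/FIFO */
extern int      trigger_seen(void);  /* placeholder: has the event occurred yet?   */
extern uint16_t buffer[BUF_WORDS];   /* the whole memory treated as one big ring   */

void capture_one_event(size_t *trigger_index)
{
    size_t head = 0;

    /* Waiting for trigger: write through all the memory over and over.
       Every location is rewritten once per pass, which is what spreads
       the wear evenly across the array. */
    while (!trigger_seen()) {
        buffer[head] = read_sample();
        head = (head + 1u) % BUF_WORDS;
    }

    /* Event: remember where we were, but keep logging. */
    *trigger_index = head;

    /* Keep writing post-trigger data, stopping just before we would
       overwrite the oldest pre-trigger sample we want to keep. */
    size_t remaining = BUF_WORDS - PRE_TRIGGER_WORDS;
    while (remaining--) {
        buffer[head] = read_sample();
        head = (head + 1u) % BUF_WORDS;
    }

    /* The ring now holds PRE_TRIGGER_WORDS samples before the trigger
       followed by the post-trigger data; head points at the oldest
       retained sample. */
}

The wear figure quoted above also checks out: 120 Mbytes at 200 Kbytes
per second is 600 seconds per pass, so one million passes take about ten
million minutes, or roughly 19 years.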