I have a system under development at the moment that stores two copies along with a complemented checksum (to avoid the "all zero" trap). I am using it for record lengths of between 1 and 50 or so bytes. If one copy doesn't check out OK, the other copy, once verified, can be used for repair. (Not that I've noticed any corruption problems anyway.) It seems to work well enough without having to do a lot of comparisons to check differences etc.

RP

On 29/09/05, Gerhard Fiedler wrote:
> Spehro Pefhany wrote:
>
> >> I'm storing data (3 bytes per data-set) into EEPROM, and I need some way
> >> to later validate that none of the bytes are corrupt. So I'm
> >> investigating checksums, raid, etc.
>
> > I often triplicate critical data, but I like to separate it into
> > different pages, and avoid certain locations.
>
> I guess that's what I would do. Maybe even make sure you use different
> offsets (by setting the start addresses of the data arrays appropriately),
> but I'm not sure this makes much of a difference.
>
> >> Haven't found much info on checksum algorithms that would be useful for
> >> this simple application, or more importantly how to develop a good
> >> checksum application for just 3 bytes.
>
> I guess you could use a checksum for all of your data (rather than only the
> 3 bytes), or for certain subsets. But that may not be much better than
> triplicating the data, and worse in terms of being able to recover data.
>
> Rather than checking integrity on read, you could have a regular integrity
> checking routine that runs in the background when nothing else is going on.
> This way you don't burden the read process with that (if read time is a
> concern).
>
> Gerhard

--
http://www.piclist.com PIC/SX FAQ & list archive
View/change your membership options at
http://mailman.mit.edu/mailman/listinfo/piclist
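
For what it's worth, here is a minimal sketch in C of the two-copies-plus-complemented-checksum scheme described above. The ee_read/ee_write routines and the RAM "EEPROM" array are stand-ins so the sketch compiles on its own; on a real part they would be the device's EEPROM access routines. The complemented 8-bit sum is just one possible checksum choice, not necessarily what RP's system uses.

#include <stdint.h>
#include <stdio.h>

/* Stand-in for the real EEPROM: a RAM array so the sketch is self-contained.
   On a real device these would be the EEPROM read/write routines. */
static uint8_t eeprom[256];
static uint8_t ee_read(uint16_t addr)             { return eeprom[addr]; }
static void    ee_write(uint16_t addr, uint8_t v) { eeprom[addr] = v; }

/* Complemented 8-bit sum: an erased record (all 0x00 or all 0xFF) will not
   verify by accident, which is the "all zero" trap mentioned above. */
static uint8_t checksum(const uint8_t *data, uint8_t len)
{
    uint8_t sum = 0;
    while (len--)
        sum += *data++;
    return (uint8_t)~sum;
}

/* Write one copy of the record followed by its complemented checksum. */
static void write_copy(uint16_t addr, const uint8_t *data, uint8_t len)
{
    for (uint8_t i = 0; i < len; i++)
        ee_write(addr + i, data[i]);
    ee_write(addr + len, checksum(data, len));
}

/* Store two independent copies, ideally on different pages. */
static void record_write(uint16_t addr_a, uint16_t addr_b,
                         const uint8_t *data, uint8_t len)
{
    write_copy(addr_a, data, len);
    write_copy(addr_b, data, len);
}

/* Read one copy and verify it against its stored checksum. */
static int copy_ok(uint16_t addr, uint8_t *out, uint8_t len)
{
    for (uint8_t i = 0; i < len; i++)
        out[i] = ee_read(addr + i);
    return checksum(out, len) == ee_read(addr + len);
}

/* Read: use whichever copy verifies, and rewrite the bad copy from the good
   one so the pair is healthy again. Returns 0 on success, -1 if both fail. */
static int record_read(uint16_t addr_a, uint16_t addr_b,
                       uint8_t *out, uint8_t len)
{
    uint8_t tmp[64];                         /* records here are 1..50-odd bytes */

    if (copy_ok(addr_a, out, len)) {
        if (!copy_ok(addr_b, tmp, len))
            write_copy(addr_b, out, len);    /* repair copy B from copy A */
        return 0;
    }
    if (copy_ok(addr_b, out, len)) {
        write_copy(addr_a, out, len);        /* copy A failed: repair from B */
        return 0;
    }
    return -1;                               /* both copies corrupt */
}

int main(void)
{
    uint8_t set[3] = { 0x12, 0x34, 0x56 }, back[3];

    record_write(0x00, 0x80, set, sizeof set);  /* copies well separated */
    eeprom[0x01] ^= 0xFF;                       /* simulate corruption in copy A */

    if (record_read(0x00, 0x80, back, sizeof back) == 0)
        printf("recovered: %02X %02X %02X\n", back[0], back[1], back[2]);
    return 0;
}

The repair path writes only the copy that failed, which spares EEPROM write cycles on the good copy. The same verify-and-repair pass could also be run as the background scrub Gerhard suggests, rather than on every read.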