Harold,

The SD card management IC handles that transparently. It would read the
existing data off, do the merge, and write the new data out. (Possibly
moving the block in a wear-leveling operation as well.)

I believe I used timing... which is not guaranteed to be right, but it's
hard to see what kind of design mistake would have thrown my
measurements off entirely:

1) Create a 1GB file containing either all 0xFFs (test 1), 0x00s (test
2), or 0xA5s (test 3)
2) Power cycle the embedded Linux system under test
3) Overwrite the file from above starting at offset 0 with anywhere from
16 to N byte data records, flushing after every write
4) Measure the total time to fill the file.

No matter what I did, the timings on the three tests were always almost
exactly the same. Since writes did take substantially longer than reads,
I should be able to assume a good portion of the write time was actually
the erase/write cycle on the card.

Given the size of the file, even if I missed some caching level
somewhere, there still should have been a difference in timing if the SD
card was smart enough to tell it didn't need to erase.

The complete lack of any difference in timing led me to believe the SD
cards never perform any 0xFF-style check when writing to see whether
they need to erase or not. I suppose non-embedded use cases would almost
never benefit from that sort of check, so I guess they never bothered.

I believe I even played with the size of the record writes, and I was
able to see substantial increases in speed when I matched erase-block
sizes, but not filesystem block sizes. That also sort of confirmed the
conclusions.

I also tried not pre-filling the file, but simply appending to a new
file... again, no difference.
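A minimal C sketch of that test loop, assuming a POSIX system. The mount
point, record size, and lack of error handling are placeholders for
illustration, not the original test code; the power cycle of step 2
obviously happens outside the program:

/* Fill a 1GB file with a fixed pattern, then overwrite it in small
 * records with a flush after every write, timing the whole pass.
 * All constants here are illustrative assumptions. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

#define FILE_SIZE   (1024UL * 1024UL * 1024UL)   /* 1GB test file */
#define RECORD_SIZE 512UL                        /* "16 to N bytes" */

/* Step 1: fill the file with a single pattern (0xFF, 0x00, or 0xA5). */
static void fill_file(const char *path, uint8_t pattern)
{
    uint8_t buf[4096];
    memset(buf, pattern, sizeof buf);
    int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    for (unsigned long off = 0; off < FILE_SIZE; off += sizeof buf)
        (void)write(fd, buf, sizeof buf);
    fsync(fd);
    close(fd);
}

/* Steps 3-4: overwrite from offset 0 in small records, flushing after
 * every write, and time the whole pass. */
static double time_overwrite(const char *path)
{
    uint8_t rec[RECORD_SIZE];
    memset(rec, 0x5A, sizeof rec);               /* arbitrary record data */
    int fd = open(path, O_WRONLY);
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (unsigned long off = 0; off < FILE_SIZE; off += sizeof rec) {
        (void)write(fd, rec, sizeof rec);
        fsync(fd);                               /* flush after every write */
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);
    close(fd);
    return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
}

int main(void)
{
    const uint8_t patterns[] = { 0xFF, 0x00, 0xA5 };   /* tests 1-3 */
    for (int i = 0; i < 3; i++) {
        fill_file("/mnt/sd/test.bin", patterns[i]);
        /* Step 2: power cycle the system under test here. */
        printf("pattern 0x%02X: %.1f s\n", patterns[i],
               time_overwrite("/mnt/sd/test.bin"));
    }
    return 0;
}

If the card skipped erases for already-0xFF flash, the 0xFF run should
come out measurably faster than the 0x00 and 0xA5 runs; it never did.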
Darron

On 4/13/15 1:17 PM, Harold Hallikainen wrote:
> I'm having trouble understanding how they can do an erase for every write,
> since that would erase the data already written to the sector while we're
> trying to append data in the sector.
>
> Thanks!
>
> Harold
>
>> Harold,
>>
>> It's the sector erases.
>>
>> Normally, an excellent way to reduce erase counts would be to write a
>> bunch of 0xFFs, use 0xFF or 0xFFFF as an end-of-record marker, and add
>> records using writes only.
>>
>> I tested a few SD cards and determined that they perform erases at
>> every single write, no matter what data is written. I'm not clear off
>> the top of my head how I determined that, but I was quite convinced at
>> the time. The behavior is probably vendor dependent and unreliable anyway.
>>
>> ... just one more way SD cards suck for embedded. It's a terrible
>> standard.
>>
>> Darron
>>
>> On 4/13/15 8:16 AM, Harold Hallikainen wrote:
>>>> If you hammer an SD card writing a byte at a time (okay, a line at a
>>>> time), then you kill the writes.
>>> Is there really a problem with writing a byte or line at a time? It
>>> seems the issue is the number of times a sector gets erased. On several
>>> systems I've designed, I log to SPI flash. I have a function prototyped
>>> like this:
>>>
>>> UINT32 ExtFlashStreamProgram(UINT32 Addr, UINT8 *pData, UINT32 NumBytes)
>>>
>>> This returns the next address that would be written. If a byte is about
>>> to be written to the first address of a sector, the sector is erased,
>>> then programmed. My logs typically take several sectors (logging goes to
>>> everything not used for something else).
>>>
>>> So, is the issue the number of write instructions, or is it the number
>>> of sector erases?
>>>
>>> Thanks!
>>>
>>> Harold
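For reference, a minimal sketch of the erase-on-sector-boundary pattern
Harold describes. The 4KB sector size and the ExtFlashEraseSector()/
ExtFlashWrite() primitives are assumed placeholders, not his actual
driver code:

/* Erase a sector only when its first byte is about to be written,
 * then program through it.  Sector size and low-level primitives
 * are illustrative assumptions. */
#include <stdint.h>

typedef uint32_t UINT32;
typedef uint8_t  UINT8;

#define SECTOR_SIZE 4096UL   /* assumed SPI flash erase-sector size */

extern void ExtFlashEraseSector(UINT32 Addr);                      /* hypothetical */
extern void ExtFlashWrite(UINT32 Addr, const UINT8 *p, UINT32 n);  /* hypothetical */

/* Program NumBytes starting at Addr; returns the next address that
 * would be written. */
UINT32 ExtFlashStreamProgram(UINT32 Addr, UINT8 *pData, UINT32 NumBytes)
{
    while (NumBytes) {
        if (Addr % SECTOR_SIZE == 0)             /* first byte of a sector */
            ExtFlashEraseSector(Addr);
        UINT32 room = SECTOR_SIZE - (Addr % SECTOR_SIZE);
        UINT32 n = (NumBytes < room) ? NumBytes : room;
        ExtFlashWrite(Addr, pData, n);
        Addr += n;
        pData += n;
        NumBytes -= n;
    }
    return Addr;
}

Counted this way, erases scale with the amount of data logged (one per
sector per pass), not with the number of individual write calls, which
is why the thread keeps coming back to sector erases as the wear issue.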