|I wonder why they can't make a DRAM chip that has automatic refresh that
|is transparent to the user. It would seem more practical.

In general, the difficulty is that most memory devices have no way of
telling the system when requested data is available. Instead, they simply
have a spec which says "data will always be available within XXns of when
it's requested." If the chip happened to start an internal refresh cycle
just before the system asked for some data, it would take much longer than
normal for the chip to supply the data; the only way to allow for that in
a normal memory system design would be to slow ALL accesses down to that
worst-case speed.

Note that a somewhat more interesting idea (one which was actually
discussed in an engineering class I took, and which I'm surprised I've not
seen done) would be to incorporate multiple row buffers on a DRAM chip and
allow them to be accessed under system control. These extra buffers could
be used as a cache, but with a major benefit over normal L2 caching: an
entire row buffer may be filled in a single operation. Unfortunately, I'm
not aware of any system designs that use this architecture.

FYI, a normal DRAM architecture supports four fundamental operations:

[1] Read a row from the memory array into the row buffer
    [note: this will erase the row in the memory array!]
[2] Write a row from the row buffer into the memory array.
[3] Read out part of the row buffer [send data off-chip]
[4] Write part of the row buffer [get data from off-chip]

Operation [1] is the slowest (60ns on a 70ns chip); [3] and [4] are the
fastest (about 10ns); [2] is in-between (about 30ns). All normal operation
of the chip consists of [1], followed by zero or more occurrences of [3]
and/or [4], followed by [2]. While a 70ns DRAM can supply the contents of
any memory location within 70ns, it can't start another access until 30ns
after the previous one has finished (so it can perform step [2]).
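The timing figures above can be put into a small model. This is just a
sketch using the numbers from the text (60/10/30ns); the function name and
structure are mine, purely for illustration:

```python
# Rough timing model of the four DRAM operations described above.
# Numbers are the ones quoted in the text for a "70ns" chip.
T_ROW_READ = 60   # [1] read row from array into row buffer (destructive)
T_COL_OP = 10     # [3]/[4] read or write part of the row buffer
T_WRITEBACK = 30  # [2] write row buffer back into the array

def access_latency(n_column_ops=1):
    """Return (time to first data, total cycle time) in ns for one
    row activation followed by n_column_ops column accesses."""
    first_data = T_ROW_READ + T_COL_OP                       # the "70ns" access time
    cycle = T_ROW_READ + n_column_ops * T_COL_OP + T_WRITEBACK
    return first_data, cycle

print(access_latency())   # single access: data at 70ns, next row usable at 100ns
```

Note how the model makes the closing point of the text concrete: first data
arrives at 70ns, but the full cycle is 100ns, because the 30ns writeback
([2]) must complete before another row can be opened. Bursting more column
operations ([3]/[4]) against the same open row amortizes that overhead.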