John Payson wrote:

> > |their choices; They chose 20 bits AFAICT arbitrarily; poor arbitrary
> > |choices, in my opinion. They could have used segment registers shifted
> > |8, 12, or 16 bits left, as all of us know, giving us far more address
> > |range, far more easily; part of my gripe is that 16-byte paragraphs
> > |are tiny for most PC-type apps. Even back in the CP/M days I
> > |occasionally used arrays of 256 bytes or larger (yes, 64K beats the
> > |IBM mainframe's 4K page sizes! But...). We could have had 256-byte
> > |paragraphs just as easily, or 4K paragraphs, and still have had
> > |segmentation - or just gone to linear addressing and "shot the moon".
> > |For a desktop PC, why not try to make it easily expandable? (I agree
> > |for embedded processors, but I was talking about desktop CPUs;
> > |admittedly these can resemble one another sometimes!)
>
> The 8086's memory architecture easily allows arrays of up
> to 64K bytes, aligned on any 16-byte boundary. Why do you
> seem to believe that creating data structures larger than
> one paragraph is at all difficult? While it would have
> been possible for Intel to have used 256-byte paragraphs,
> one of the advantages of the segment architecture is that
> any paragraph-aligned data structure can be up to 64K,
> *STARTING AT ADDRESS ZERO WITHIN ITS SEGMENT*. Paragraph-
> aligning data structures (when paragraphs are 16 bytes) will
> cause a moderate, but not unreasonable, waste of space. To
> increase the paragraph size to 256 bytes would have meant
> that more space would be wasted when paragraph-aligning
> things.

I know all that, John. I've talked about having coded large sparse arrays and numerical analysis using way over 640K of RAM, and you assume I don't know how to build blocks larger than a paragraph? (4K or 8K blocks are most efficient in the DOS file system, by the way.) Whatever - I don't want to play the "I have coded more in assembler/C than you have" contest game. Please don't put words in my mouth here.
What I'm saying is that using memory over 640K is quite unnecessarily a pain, OBVIOUSLY, or game designers wouldn't be paying third-party companies royalties for the code to USE the memory over 640K in any large-memory application. The folks who wrote Doom, for example, aren't horribly incompetent - yet their application uses a third-party memory manager. (I won't comment on the people who wrote DOS or Windoze.)

> Arguing that paragraphs should have been larger would be like
> arguing--in the days of 40MB disk drives--that disk clusters
> should be 32K in size (to allow partitions up to 2gig). If
> you don't have much storage, the theoretical ability to access
> huge quantities of storage is far less useful than the real
> ability to access what you do have /efficiently/.

Whereas, personally, I'd argue that cluster sizes should be configurable to something smaller than 32K (if the operating system coders were competent, the user would have choices and options here). I really don't want to play "put words in my mouth" any more here. If you're going to argue that I'm saying something I'm not, I'll just stop trying to have a constructive conversation. We're OT on this listserv anyway, and I have much to do in the real world.

Mark