|their choices; they chose 20 bits, AFAICT arbitrarily. Poor arbitrary
|choices, in my opinion. They could have used segment registers shifted
|8, 12, or 16 bits left, as all of us know, giving us far more address
|range, far easier. Part of my gripe is that 16-byte paragraphs are tiny
|for most PC-type apps; even back during CP/M days I occasionally used
|arrays of 256 bytes or larger. (Yes, 64K beats the IBM mainframe's
|4K page sizes! But...) We could have had 256-byte paragraphs just as
|easily, or 4K paragraphs, and still have segmentation - or have just
|gone to linear addressing and "shot the moon". For a desktop PC, why
|not try to make it easily expandable? (I agree for embedded processors,
|but I was talking desktop CPUs; admittedly these can resemble one
|another sometimes!)

The 8086's memory architecture easily allows arrays of up to 64K
bytes, aligned on any 16-byte boundary. Why do you seem to believe
that creating data structures larger than one paragraph is at all
difficult?

While it would have been possible for Intel to have used 256-byte
paragraphs, one of the advantages of the segment architecture is that
any paragraph-aligned data structure can be up to 64K, *STARTING AT
ADDRESS ZERO WITHIN ITS SEGMENT*. Paragraph-aligning data structures
(when paragraphs are 16 bytes) will cause a moderate, but not
unreasonable, waste of space. To increase the paragraph size to 256
bytes would have meant that more space would be wasted when
paragraph-aligning things.

Arguing that paragraphs should have been larger would be like
arguing--in the days of 40MB disk drives--that disk clusters should
be 32K in size (to allow partitions up to 2 gig). If you don't have
much storage, the theoretical ability to access huge quantities of
storage is far less useful than the real ability to access what you
do have /efficiently/.

> In general, though, the instruction set does a decent job at
> providing 20-bit addressability on a 16-bit machine. The 80286, on
> the other hand, does a poor job of extending the 8086: the only
> way(*) to access memory above $10FFEF is to enter "protected mode",
> in which segment registers work totally differently, and the only
> way to return to having segment registers work normally is to reset
> the chip. The IBM AT provides a hardware means of resetting the CPU,
> and the BIOS startup routine checks a "non-boot" flag which
> indicates that it should resume normal program execution (rather
> than doing a warm or cold boot). **THAT**, rather than anything to
> do with the 8086 architecture, is the biggest source of headaches
> for accessing large memory within DOS mode.

| I agree that they did a good job of limiting the 808x to **20 bit
|addresses**; if we had 32-bit addressing, this wouldn't be germane,
|would it? Certainly not arguing that the result is useless - just
|that the "thought aforethought" was lacking in some ways.

The 8086 is not a 32-bit processor. Linear addressing is great if
your address space is no bigger than your CPU registers, but is often
not so great otherwise. As for expandability, the historical problem
was not just that upper memory couldn't be accessed using the same
mechanisms as the first meg, but that it was hard to access **AT
ALL** without breaking code that expected to access stuff within the
first meg the same way as it always had.
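For anyone who hasn't stared at this in a while, the whole scheme
boils down to one shift and one add. A minimal C sketch (the function
name and the example values are mine, purely for illustration) of how
a real-mode 20-bit physical address is formed:

    #include <stdio.h>

    /* 8086 real-mode address formation: the 16-bit segment register
       is shifted left 4 bits (multiplied by 16, the paragraph size)
       and the 16-bit offset is added, giving a 20-bit address. */
    unsigned long phys_addr(unsigned seg, unsigned off)
    {
        return ((unsigned long)seg << 4) + off;
    }

    int main(void)
    {
        /* A paragraph-aligned structure can span a full 64K starting
           at offset zero within its own segment. */
        printf("%05lX\n", phys_addr(0x1234, 0x0000)); /* 12340 */
        printf("%05lX\n", phys_addr(0x1234, 0xFFFF)); /* 2233F */
        /* Segments overlap: many seg:off pairs name one byte. */
        printf("%05lX\n", phys_addr(0x1230, 0x0040)); /* 12340 */
        return 0;
    }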
> |Motorola's earlier processors were designed smarter in some ways
> |than Intel (I've always sorta wished I had DOS for a 68030. No
> |segmentation.)
>
> Yeah, but look at the 68HC16's addressing.

| I'm talking 680x0's, not 68HC16's. Yeah, I know. Whatever, I think
|we're successfully misunderstanding each other here...

The 68000 is a 32-bit processor. As such, it is much better than
16-bit processors at accessing >64K data structures. It's also a
relatively nice chip all around.

You were talking, though, about Motorola's earlier processors.
Although I don't know when the HC16 came out, it's the only
non-32-bit processor of theirs I'm at all familiar with that can
access >64K of memory. I mentioned it to further my point that
accessing more than 64K of memory on a machine with 16-bit registers
will necessarily involve compromises. IMHO, Intel did a far better
job at forging such compromises than any other CPU I've seen.

> ** NOW WHAT DOES THIS HAVE TO DO WITH PICS **
>
> PICs are, as anyone here most certainly should recognize, all 8-bit
> parts. On the new 18Cxx line, however, Microchip is planning to
> increase the data addressing space to 4K and the code addressing
> space to 128K. It will be interesting to see what approach they use
> to boost their addressing space. Anyone who's done much programming
> with PICs should be well aware that the banking schemes Microchip
> has used have generally been less than convenient (at times it
> would be fair to describe them as a royal pain in the tusch). It'll
> be interesting to see if they manage anything as [generally]
> convenient as Intel's.

| I don't do numerical analysis of really huge arrays or do
|memory-mapped IO to huge disk files (for a couple of examples where
|segmentation has seriously ANNOYED me!) - on PICs; I am entirely FOR
|different solutions for different processors - an axe doesn't make a
|very good screwdriver, nor vice versa! For a laptop or desktop PC I
|feel we want linear addressing - especially with bloatware going the
|way it's going - on a microcontroller, it shouldn't be expected
|unless you want to pay far more, sure. Paging is a pain, but really
|it's the same general thing as segmentation - just a different
|division of your address space, into fewer, more fixed pages as
|opposed to many, overlapping pages (aka segments). For that matter I
|like self-modifying code when done right (obviously NOT something I
|can use on a PIC very often).

I'm not quite sure what you mean when you describe paging as 'the
same general thing as segmentation'. True, both are means of trying
to expand a limited address space, but that's about the extent of it.
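To put the difference in concrete terms, here's a minimal C sketch (a
made-up 4K page size and made-up names, purely illustrative): a pager
*replaces* the high bits of an address through a table of fixed,
non-overlapping frames, whereas the 8086 *adds* a shifted register,
so "segments" can start on any 16-byte boundary and overlap freely,
as in the earlier sketch.

    #include <stdio.h>

    #define PAGE_SHIFT 12   /* 4K pages, an arbitrary example size */
    #define PAGE_MASK  ((1UL << PAGE_SHIFT) - 1)

    /* Paging: high bits index a translation table; low bits pass
       through untouched. Contrast with seg*16+off ADDITION. */
    unsigned long page_translate(const unsigned long *frame_of_page,
                                 unsigned long vaddr)
    {
        return frame_of_page[vaddr >> PAGE_SHIFT]
               | (vaddr & PAGE_MASK);
    }

    int main(void)
    {
        unsigned long frames[4] =
            { 0x40000, 0x13000, 0x7F000, 0x02000 };
        /* Virtual 0x1ABC lies in page 1, so it maps to 0x13ABC. */
        printf("%05lX\n", page_translate(frames, 0x1ABCUL));
        return 0;
    }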
> | I DO agree that hindsight's easier than foresight, but let's see
> |how it would be if Intel ran Microchip.
>
> | If Intel ran Microchip:
>
> | Every time a new PIC part came out, you'd have to ship your
> |Picstart Plus to Intelchip so new hardware with the capability to
> |program that new chip could be installed. And pay for that
> |"privilege" - of course! (Free firmware upgrades are nice, aren't
> |they? And at least no hardware upgrades are necessary, for most
> |new parts.)
>
> Where do you get this idea? Intel uses the same programming method
> for all their microcontrollers within their 8x51 family, and their
> 8x86's are either programmed with a normal EPROM burner or
> in-circuit.

| You've missed entirely that I'm talking Intel PC family habits (how
|they treat 80x86 *PC* CPU buyers), versus Microchip
|microcontrollers, I'm guessing? (I think we're MIS-communicating
|here.)

The microcontroller and PC markets are very different. It would seem
like Intel's behavior if they ran Microchip would more likely
resemble their behavior with regard to the 8x51 than their behavior
with regard to the 8x86.

| You may've missed some Intel hardware bugs where they initially
|insisted folks "just buy a new CPU" etc. if they really wanted parts
|that worked right, didn't have "significant" hardware math bugs,
|etc. etc.

Intel really botched the PR on that one. What they should have said
from the start is that they would eventually offer a free replacement
to everyone with a 'bad' CPU, but that because of limited production
capacity, people who would not likely be hurt by the Pentium bug were
not a high priority.

> | Every time additional memory space was added to an existing part,
> |you would need to learn some new addressing scheme - so a 16C62
> |program ported to a 16C63 or 16C66 would require massive re-writes
> |if it grew much at all, plus the addition of a proprietary and
> |expensive memory management library that had to be licensed at
> |considerable expense from someone. (Paging's a MINOR annoyance, by
> |comparison, to this!) And newer parts would not (of course!) ever
> |be pin-compatible with previous parts - all such "features"
> |installed for your safety, of course.
>
> The progression from the 8088/8086 to the 80286 was pretty clunky.
> I'll defend the 8088's architecture but the 80286 is IMHO just
> plain annoying. As for the 80386, it can access all of system
> memory in the same way as the 80286 (whereas the 80286 can only
> access the first meg using anything resembling 8088 methods),
> though it adds an alternative method which works much more smoothly
> (just use a 32-bit number as a pointer, since full 32-bit math
> abilities are supported).

| Do some desktop PC game programming; instead of memory access
|methods being supplied by Intel, lots of people just BUY pricey
|memory access libraries so they can use memory easily in real-time
|games such as Doom / Quake / Unreal etc. All those texture-mapped
|colored moving surfaces need lots of large-chunk data accesses in
|RAM, quickly, and for non-embedded PC users it's the PITS!
|Different usage than embedded machines, I agree!

Under any 32-bit operating system, there is no problem with any
program accessing as much memory as it can get its hands on. I've
actually written some code (when testing algorithms) with lines like
the following:

    long foo[1000000];

That's an array that takes up almost 4 megs of RAM, but I access it
no differently from any other. When using 32-bit registers for
memory access, such things just work out wonderfully.

The reason people use goofy things like DOS/4GW, etc. is that they
want to be able to use 'big memory' from within programs that mostly
use the old-style techniques to access memory. Again, the problem is
not that accessing 'big memory' requires different techniques than
accessing memory under a meg; the problem is that the CPU makes it
hard to access the upper memory without entering a special mode which
is incompatible with everything else.
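Here's a sketch of why those old-style techniques hurt once an array
crosses 64K: a 16-bit offset wraps at 0xFFFF, so pointer arithmetic
has to keep folding the carry back into the segment part. (This is
written in portable C to *simulate* the 16-bit arithmetic; the struct
and function names are mine, and it's roughly what the old DOS
compilers' "huge" pointers did behind your back on every pointer
operation.) Under a flat 32-bit model the same step is just an add.

    #include <stdio.h>

    /* Simulated real-mode far pointer: both fields are 16-bit. */
    struct far_ptr { unsigned seg, off; };

    /* Step a far pointer forward, renormalizing so the offset
       stays small instead of wrapping at the 64K boundary. */
    struct far_ptr huge_add(struct far_ptr p, unsigned long bytes)
    {
        unsigned long linear =
            ((unsigned long)p.seg << 4) + p.off + bytes;
        p.seg = (unsigned)(linear >> 4);   /* carry into segment */
        p.off = (unsigned)(linear & 0xF);  /* keep offset small  */
        return p;
    }

    int main(void)
    {
        struct far_ptr p = { 0x2000, 0x0000 };
        p = huge_add(p, 70000UL);             /* crosses 64K      */
        printf("%04X:%04X\n", p.seg, p.off);  /* 3117:0000        */
        return 0;
    }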
| Also: Microchip writes app notes & gives them to buyers, free;
|Intel's mostly busy building the next generation of hardware. (This
|difference between the Cathedral method and the Bazaar method is
|even more noted with the recent "let the CPU do all the work, you
|don't need peripherals in your home PC" insanity IMHO.) If you
|haven't read this article (the Cathedral/Bazaar one), do -
|interesting!

Where should I find that article?

> | Opcodes would be picked out by rolling 26-sided dice (I always
> |liked Zilog's opcodes better than Intel's, back in the Z80 days.
> |Sigh.)
>
> The 8x86 opcode names make a lot more sense than those of many
> other CPU's. To subtract AX from BX, you would write:
>
>      sub BX,AX     ; Destination,source
>
> To do so with propagated carry/borrow flag:
>
>      sbb BX,AX

| Personal preference for Zilog opcodes is all, here. On the
|8080/Z80, this is the difference (in effect) between Parallax and
|Microchip opcodes - same object code, different but equivalent
|assembler instructions. I'm not talking about different CPU's
|(though if you want the easiest to teach to assembler newbies, I'd
|vote for the old CDC mainframes' assembler, as it's about the best
|I've ever seen). Personal taste always IS personal, o'course.

I know the 8080 opcodes were notorious, but Intel seems to have made
their opcodes for subsequent CPU's much better.

> | And finally, any new PIC chip would have initially cost $500 or
> |more, until a competitor came on the market; then and only then
> |would it have been reduced in price. (Pricing could be lower, but
> |it's not THAT bad, on PICs.)
>
> You mean you could actually BUY a PIC which ran more than 5 million
> instructions per second?

| I mean that Microchip treats its customers better than Intel treats
|its customers, mostly. We heard rumors recently of a recall of a
|certain new windowed part - I did not hear anyone laughing their
|head off! It was taken as a very likely possibility here on the
|list, if there was a real hardware problem in the /JW parts!

Again, I think it's a function of which market (PC vs
microcontroller). Of course, unless your name is "Ford" you're apt
to find that Motorola's treatment of customers makes many other
companies appear saintly.