----- Original Message -----
From: Dave Tweed
To:
Sent: Tuesday, May 24, 2005 4:27 PM
Subject: Re: [PIC] XCASM (was Re: CC5X errors - help)

> > I obviously didn't make myself very clear on this. If you start off with
> > a well defined processor core and target software that runs on that core,
> > then move it to an FPGA, there are tweaks you can make to the
> > instruction set that will give you great performance boosts. For example,
> > you might decide to add some extra bits to each instruction and increase
> > the number of directly accessible RAM locations, or add a second W
> > register, or implement some special high-level instructions. MPLAB would
> > not be able to deal with this at all, whereas XCASM could accommodate it
> > in minutes.
>
> As a tool vendor, you have a highly distorted view of the world.
>
> Isn't it obvious that while I could indeed extend a processor architecture
> in the ways that you suggest, it would have absolutely no effect on the
> performance of my application code UNLESS I completely rewrite that code
> to take advantage of the extensions?

No, it is not completely obvious to me. What IS obvious to me (as someone who has written a great deal of software, both high level and low level, and who has written compilers and CASE tools) is that 90% of embedded applications that need a speed increase can get it through a small hardware boost and trivial software modifications.

When apps running on a PC need to go faster, users tend to add a faster CPU, more RAM, or a higher-performance chipset (via a new motherboard). When that is still insufficient, they add faster graphics cards, hard disk controllers, or network interface cards. Yes, you need different device drivers for these, but the underlying app stays the same.
When Intel wanted to boost its Pentium processors, it did not make the machine code incompatible with previous processors; it extended the instruction set and added extra hardware to the CPU. Most apps saw an immediate improvement, and in some cases small sections of the apps were modified to take advantage of the new CPU features.

> A few minutes of work in XCASM is only the beginning!

No, I disagree. Say, for the sake of argument, that you increased the instruction word width from 14 bits to 16 bits and used the extra bits to extend the register address field of instructions that access registers, and the destination field of instructions that change the PC. You would see an immediate improvement in processor performance simply by changing your bank and page select macros to do nothing.

Furthermore, because the address is now completely contained within the instruction, it becomes much easier to add features to the CPU to speed it up. If you added a second W register, you could improve the performance of small time-critical sections of code without changing the majority of your code. If you added specialised instructions (maybe "shift W by N places"), again you could improve small sections of code. The point is that MOST of your code would not change.

And this holds true for other CPUs, not just the PIC. Look at some of the 6502 variants, which are capable of running legacy 6502 code with only the tiniest of changes in the start-up / config code, yet have additional hardware and registers that can be used to boost the performance of very small time-critical sections of code.

> Also, I would have to completely abandon whatever tool chain
> that I used to create that code in the first place.
>
> Granted, MPLAB can't be used if you change the width of an instruction, but
> all of the other cases can be handled by an appropriate set of macros
> and/or access functions.
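To put rough numbers on the two extensions above, here is a small illustrative sketch (plain Python; the field widths follow the mid-range-PIC-style layout of a 14-bit word with a 7-bit direct address, and the "shift W by N" opcode is an assumption for this example, not a shipping instruction):

```python
# Illustrative model of a mid-range-PIC-style core. The field widths
# and the "shift W by N" opcode are assumptions for this sketch.

def banks_needed(ram_bytes: int, addr_bits: int) -> int:
    """Banks required to cover ram_bytes of data RAM when the
    instruction's direct-address field is addr_bits wide."""
    window = 1 << addr_bits          # directly addressable locations
    return -(-ram_bytes // window)   # ceiling division

# 14-bit words carry a 7-bit register address: 512 bytes of RAM needs
# 4 banks, so accesses are peppered with bank-select sequences.
# Widening the word to 16 bits gives a 9-bit address field: one flat
# bank, and the bank/page select macros can expand to nothing.
print(banks_needed(512, 7))  # -> 4
print(banks_needed(512, 9))  # -> 1

# A specialised "shift W left by N" instruction replaces a run of
# single-bit shifts: same result, one instruction instead of N.
def shift_one_bit_at_a_time(w: int, n: int) -> tuple[int, int]:
    """Baseline core: one single-bit shift instruction per place."""
    executed = 0
    for _ in range(n):
        w = (w << 1) & 0xFF          # 8-bit W, carry-out discarded
        executed += 1
    return w, executed

def shift_w_by_n(w: int, n: int) -> tuple[int, int]:
    """Extended core: hypothetical opcode does it in one instruction."""
    return (w << n) & 0xFF, 1

print(shift_one_bit_at_a_time(0x03, 4))  # -> (48, 4): 0x30 in 4 instructions
print(shift_w_by_n(0x03, 4))             # -> (48, 1): 0x30 in 1 instruction
```

The point of the sketch is that only the time-critical section (the shift loop) changes to exploit the new opcode; everything else is untouched, and the bank-select fix is a macro redefinition rather than an edit to the application code.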
> And if I was really interested in performance on an FPGA, I'd start with an
> architecture that's already fairly optimal for the FPGA in question (e.g.,
> Nios on Altera, Microblaze on Xilinx) and start extending from there. These
> architectures come with well-developed toolchains that are intended to
> handle extensions to the ISA. Or I'd start translating critical sections of
> application code into HDL for direct implementation on the hardware.

But your whole argument was that you have established, well-debugged libraries of code and macros. You would need to ditch these to move to another CPU. OK, the migration path between the XCASM and MPASM assemblers could be a lot better, but the fundamental premise, that you continue to use your existing app (with MINOR tweaks) on a faster CPU, still holds.

Also, "translating time-critical sections of the application into HDL for direct implementation on the hardware" integrates better (read: faster and more compact) if the extensions can be triggered directly by executing instructions instead of through ports. Again, this comes down to extending the machine code instructions.

Regards
Sergio Masci

http://www.xcprod.com/titan/XCSB - optimising PIC compiler
FREE for personal non-commercial use

--
http://www.piclist.com PIC/SX FAQ & list archive
View/change your membership options at
http://mailman.mit.edu/mailman/listinfo/piclist