> From: John Dammeyer
> [cut]
> Here's one version of a binary-decimal conversion routine.
> Call with r.hl having binary value; returns 4 digit packed BCD
> in r.de -- I neglected the fifth digit :-(  It can be
> rewritten to preserve that extra digit.  To produce ASCII,
> obviously you can append a code fragment to add '0' to each
> unpacked digit.
>
> BinDec: LD   B,16    ;7
>         LD   DE,0    ;10
>
> Loop:   ADD  HL,HL   ;11
>
>         LD   A,E     ;4
>         ADC  A,A     ;4
>         DAA          ;4
>         LD   E,A     ;4
>
>         LD   A,D     ;4
>         ADC  A,A     ;4
>         DAA          ;4
>         LD   D,A     ;4
>
>         DJNZ Loop    ;13/8
>
>         RET          ;10
> [cut]

Hmmm... suspiciously like Z80 code, even down to the clock cycle counts, no?

> Finally I do welcome all input.  Especially if someone can improve on the
> speed/space of this one.  DAA is a bit tekno for this task.

My philosophy when it comes to binary-to-decimal conversion is that decimal output is almost always destined for humans (or printers), and hence speed is not an overriding concern. I would go for the most compact code. Successive subtraction of powers of ten would be my first choice; actual division is certainly unnecessary. Converting an unsigned 16-bit number to 5 decimal digits requires at most 27 2-byte subtractions, 20 1-byte subtractions and 5 additions. Yes, of course there are much smarter ways, but this one is easy to understand and can be implemented fairly easily (even on a PIC).

Now, I realise there are always exceptions, but if you find yourself needing more than a few decimal conversions per second, rethink the application -- no human can read that fast.

Regards,

SJH
Canberra, Australia