Thanks for your thoughts, Kelly. You raise an interesting point:

> Why not just send an 8-bit byte as the ascii equivalent?
> One possible reason would be ambiguity between data and a sync byte.

That's the reason, in a nutshell. My current implementation sends the pure binary as a "packet" of six bytes (topped and tailed with the usual start and stop bits, of course). The trouble is, how do you tell when you've got to the end of one packet and the start of the next? You can't use a "framing" byte, because it isn't possible to assign a unique bit pattern to it: the real data can contain all 256 possible values.

At the moment I'm cheating: I'm relying on the fact that there is always a short interval of time between packets, which means I have to implement a "timeout" in the receiving software. This works fine under NT, which is almost-real-time multitasking, but under W95 the software fails intermittently whenever the operating system goes away to do something else.

Hence the decision to convert the whole lot to an ASCII string. Then I can use a null, or even a carriage return, to signify the end of a string. It also makes the data file that gets recorded on the PC human-readable.

The hex approach is attractive. It means I can stick to ASCII characters, and the conversion becomes trivial: just a sixteen-entry look-up table (rough sketch below). I'll have to think about how to sort it out in the receiving software, but it shouldn't be too difficult.

Thanks again.

Steve Thackery
Suffolk, England.

Web Site: http://www.btinternet.com/~stevethack/

"Having children is hereditary. If your parents didn't have any, neither
will you." - Henry Morgan
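
P.S. Here's the sort of thing I have in mind for the hex conversion. It's only a rough sketch in C: the six-byte packet and the carriage-return terminator follow what I described above, but the function names and the little test harness are purely illustrative, not taken from my actual code.

/* Rough sketch: encode a six-byte binary packet as ASCII hex with a
   carriage-return terminator, using a sixteen-entry look-up table. */

#include <stdio.h>
#include <string.h>

#define PACKET_LEN 6

static const char hex_lut[] = "0123456789ABCDEF";   /* 16 entries */

/* Sender side: turn PACKET_LEN raw bytes into a CR-terminated hex string.
   'out' must hold at least 2 * PACKET_LEN + 2 characters. */
void encode_packet(const unsigned char *raw, char *out)
{
    int i, j = 0;
    for (i = 0; i < PACKET_LEN; i++) {
        out[j++] = hex_lut[(raw[i] >> 4) & 0x0F];    /* high nibble */
        out[j++] = hex_lut[raw[i] & 0x0F];           /* low nibble  */
    }
    out[j++] = '\r';   /* unambiguous end-of-packet marker */
    out[j] = '\0';
}

/* Receiver side: convert the hex string back into raw bytes.
   Expects the upper-case hex that encode_packet produces.
   Returns the number of bytes recovered. */
int decode_packet(const char *in, unsigned char *raw)
{
    int n = 0;
    while (n < PACKET_LEN && in[0] != '\0' && in[0] != '\r'
                          && in[1] != '\0' && in[1] != '\r') {
        const char *hi = strchr(hex_lut, in[0]);
        const char *lo = strchr(hex_lut, in[1]);
        if (hi == NULL || lo == NULL)
            break;                        /* not a hex digit - give up */
        raw[n++] = (unsigned char)(((hi - hex_lut) << 4) | (lo - hex_lut));
        in += 2;
    }
    return n;
}

int main(void)
{
    unsigned char packet[PACKET_LEN] = { 0x00, 0x7F, 0x80, 0xFF, 0x0D, 0x41 };
    unsigned char back[PACKET_LEN];
    char line[2 * PACKET_LEN + 2];

    encode_packet(packet, line);
    printf("wire format: %s\n", line);    /* 007F80FF0D41 followed by CR */
    printf("decoded %d bytes\n", decode_packet(line, back));
    return 0;
}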