On Tue, Feb 07, 2006 at 12:11:30AM +0100, Tomas Larsson wrote:
> > > The ports are designed for one purpose only, to communicate, they are
> > > not designed for timing purposes.
> >
> > Where exactly did timing come into this?
>
> You need precise timing (at least sort of) if you are supposed to program
> a PIC.

Sort of is correct. The timing requirements are minimal: 1 us between
commands and some very tiny minimum clock width that is trivially met by
virtually any PC serial/parallel interface. As one of the other posters
showed, a PIC can be programmed by hand if necessary.

> > BTW as async devices serial port's TX/RX have very precise timing.
>
> Yes, handled by the USART, which is programmed to a given baud rate.
> Hence precise timing.

To turn others' argument around: what difference does it make which part of
the system provides the precise timing? The PC truly is a black box with
edge interfaces. The only problem I see is that those edge interfaces are
fast turning USB-only. The USB/serial cable I continue to propose as an
interface wedge turns the edge back into an RS-232 edge.

> [more snippage]
>
> Real-time control involving the host CPU involves flushing all cache
> memory, both data and instruction, and since today's CPUs are heavily
> pipelined it takes quite a long time to do this, even on a 3000+ CPU.
> The actual operating system doesn't matter much, since it is the
> HW architecture that puts the limits.

It has to matter. A simple example: take a look at the LIRC infra-red
transmitter here:

http://www.lirc.org/images/simple_transmitter.gif

Trivial. Now the Linux LIRC driver wiggles that DTR pin precisely at
38 kHz. Please explain how that can be, when you state that the underlying
hardware architecture cannot support such precise timing?
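For the curious, that DTR wiggle can even be sketched from user space. This is a rough Python illustration, not the LIRC driver's actual code: the `/dev/ttyS0` path is an assumption, and a user-space loop will never match the jitter of the kernel driver, but the `TIOCMBIS`/`TIOCMBIC` ioctls it uses to set and clear DTR are the real Linux interface.

```python
# Sketch of bit-banging DTR at ~38 kHz from user space -- the same trick
# the LIRC kernel driver performs, minus the kernel's timing precision.
# ASSUMPTIONS: the /dev/ttyS0 path, and that user space is "good enough"
# for illustration; the real driver runs in the kernel for a reason.
import fcntl
import os
import struct
import termios
import time

CARRIER_HZ = 38_000

def half_period_s(hz: int) -> float:
    """Half-period of the carrier in seconds (one DTR toggle per half)."""
    return 1.0 / (2 * hz)

def burst(fd: int, cycles: int) -> None:
    """Emit `cycles` carrier cycles by alternately setting/clearing DTR."""
    dtr = struct.pack("I", termios.TIOCM_DTR)
    half = half_period_s(CARRIER_HZ)
    for i in range(2 * cycles):
        # TIOCMBIS sets the modem-control bit, TIOCMBIC clears it.
        fcntl.ioctl(fd, termios.TIOCMBIC if i & 1 else termios.TIOCMBIS, dtr)
        time.sleep(half)

if __name__ == "__main__":
    print(f"half period: {half_period_s(CARRIER_HZ) * 1e6:.2f} us")
    try:
        fd = os.open("/dev/ttyS0", os.O_RDWR | os.O_NOCTTY)  # assumed path
    except OSError:
        print("no serial port here; skipping the burst")
    else:
        burst(fd, 100)  # a short ~2.6 ms burst of carrier
        os.close(fd)
```

The point stands either way: a 38 kHz carrier means one DTR transition roughly every 13 us, which is well within what the "black box" PC can deliver at its edge interfaces.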
> > BAJ
>
> I'm not saying that you can't do it; there are always ways to circumvent
> possible obstacles. What I'm saying is that the underlying hardware in a
> PC is not designed to do it, hence the trouble directly controlling
> hardware in a way it was not designed to work.

I pause at the contradiction of "not designed to do it" with the fact that
for 20 years it did exactly that. PC serial and parallel interfaces were so
de facto a standard as to be virtually de jure.

I agree with you that the times are changing. I disagree with simply
admitting defeat because of the interface change.

> A serial port is designed to do one thing only, communicate to a given
> standard (RS232),

EIA-232C technically. And the PC serial port is a bastardization of that
standard.

> i.e. send and receive a stream of data in a very well-defined way at a
> well-defined speed. The USART is programmed to do this with the baud
> rate, number of start and stop bits and possibly a parity bit. That's
> all.

But by your argument, the USART is a black box inside the serial port. The
fact of the matter is that up until now "that's all" simply hasn't been
true.

> Then some intelligent people found out, in the early 8086-80286 era, that
> it might be possible to do more. In that era CPUs didn't use any cache
> and were not that heavily pipelined, which meant these things were a
> fairly easy task to do, but with today's CPUs it is not that easy
> anymore.

It has nothing to do with cache or pipelining. All I'm saying is that
simple hardware can still be used to capture the resulting output stream,
which is unchanged.

> Still the support in HW isn't there, but you can circumvent it, though I
> still don't understand why. Proper programmers are so cheap, they work
> very well, and they are independent of the host. Most of all, they don't
> cause any problems.

"Proper programmers" have single-source, cost, support, and software
issues. Each leads to some set of problems for the DIY hacker.
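To show just how minimal the ICSP timing really is, here is a sketch of clocking a command and a data word into a PIC16-style target. The `RecordingPins` class is a hypothetical stand-in for whatever actually drives the port lines, and the command value and bit framing follow the classic PIC16F84 programming spec (6-bit commands LSB first, 14-bit words framed by start/stop zeros); check your device's programming specification before trusting the details.

```python
# Sketch of PIC16-style ICSP bit-banging against a mock pin driver.
# ASSUMPTION: RecordingPins stands in for real port-line access; the
# command encoding follows the PIC16F84 programming spec.
import time

class RecordingPins:
    """Mock pin driver: records every data bit clocked to the target."""
    def __init__(self):
        self.samples = []

    def clock_bit(self, bit: int) -> None:
        # Data is presented while the clock is high and latched on the
        # falling edge; the minimum clock widths are around 100 ns,
        # which any PC port meets without trying.
        self.samples.append(bit & 1)
        time.sleep(1e-6)  # generous: far above the minimum clock width

def send_command(pins, cmd: int) -> None:
    """Clock a 6-bit ICSP command, LSB first."""
    for i in range(6):
        pins.clock_bit((cmd >> i) & 1)
    time.sleep(1e-6)  # the ~1 us delay between command and data

def send_word(pins, word: int) -> None:
    """Clock a 14-bit data word framed by start and stop zero bits."""
    pins.clock_bit(0)  # start bit
    for i in range(14):
        pins.clock_bit((word >> i) & 1)
    pins.clock_bit(0)  # stop bit

LOAD_PROGRAM = 0b000010  # "Load Data for Program Memory" on the 16F84

pins = RecordingPins()
send_command(pins, LOAD_PROGRAM)
send_word(pins, 0x3FFF)  # an erased program word, all ones
print(pins.samples)
```

Nothing here needs precise periods, only minimum widths and a small inter-command delay, which is why a PIC can be programmed by hand, by a parallel port, or by a crude serial-port adapter with equal success.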
BAJ

-- 
http://www.piclist.com PIC/SX FAQ & list archive
View/change your membership options at
http://mailman.mit.edu/mailman/listinfo/piclist