> Would using the DMX protocol be of any use here? It's relatively easy to
> bit bang, and with some judicious coding, the different channels could
> be made to 'self address' a new node.
>
> By that I mean, the chips at the node could be programmed to accept
> channel 1 (a sort of broadcast channel). The node chip would read it,
> discover it has a non-assigned node number, accept the address
> information contained in the broadcast, and update itself, so that a
> channel 1 address won't activate it again, but the new real node
> channel now will. Some fancy mechanism could be thunked up to
> persuade a node to respond to channel 1 again if re-addressing became
> necessary. The drawback here would be that nodes could only be brought
> online one at a time. Other than that, there are 511 channels' worth of
> slaves to be had.
>
> Colin

DMX now has Remote Device Management (RDM), which includes a discovery
mode for finding devices on the network. It runs at 250 kbps over
EIA-485. I did a lot with DMX for about 15 years, but never implemented
RDM; I think it's a bit fancier than is required here.

The use of the break as a start-of-frame marker in DMX is useful,
though. I ended up doing the break in hardware. When I was doing DMX,
the UARTs in PICs could not generate a break, and those that can will
not make a break long enough to meet the DMX spec; it's more designed
for LIN.

Harold
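
As a rough illustration of the self-addressing scheme Colin describes, a
node's per-frame handler might look something like the sketch below. The
dmx_frame buffer and the EEPROM helpers are placeholder names, not real
PIC library calls, and a single 8-bit broadcast slot only covers addresses
1-255; a second slot would be needed to reach all 512 channels.

/*
 * Minimal sketch of "self address via channel 1".
 * Assumptions (not from the original post): dmx_frame[] holds the
 * latest received slot values (slot 0 = start code, slot 1 = channel 1),
 * and eeprom_read_addr()/eeprom_write_addr() are hypothetical helpers
 * that persist the assigned channel number.
 */

#include <stdint.h>

#define UNASSIGNED      0      /* no channel assigned yet              */
#define BROADCAST_SLOT  1      /* channel 1 used as the "assign me" slot */

extern uint8_t dmx_frame[513]; /* slot 0 = start code, 1..512 = data   */

extern uint16_t eeprom_read_addr(void);       /* hypothetical */
extern void     eeprom_write_addr(uint16_t);  /* hypothetical */

static uint16_t my_channel = UNASSIGNED;

void init_addressing(void)
{
    my_channel = eeprom_read_addr();  /* reads 0 if never assigned */
}

/* Call once after each complete DMX frame has been received. */
void process_frame(void)
{
    if (my_channel == UNASSIGNED) {
        /* Accept whatever address the controller broadcasts on
         * channel 1; once stored, channel 1 is ignored from then on. */
        uint16_t offered = dmx_frame[BROADCAST_SLOT];
        if (offered != UNASSIGNED) {
            my_channel = offered;
            eeprom_write_addr(my_channel);
        }
        return;
    }

    /* Normal operation: act only on this node's own channel. */
    uint8_t level = dmx_frame[my_channel];
    /* ... drive the output with 'level' ... */
    (void)level;
}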
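
On the break problem: one common software workaround when the UART can't
produce a long enough break is to take the TX pin over as a plain output,
hold it low for the break, idle high for the mark-after-break, then hand
the pin back to the UART for the slot data. This is just a sketch of that
idea, not what Harold's hardware approach did; the helper names
(uart_tx_as_gpio, gpio_write_tx, delay_us, uart_send_byte) are
placeholders rather than a real PIC API.

#include <stdint.h>

#define DMX_BREAK_US  100   /* spec minimum is 88 us; add some margin  */
#define DMX_MAB_US    12    /* mark-after-break before the start code  */

extern void uart_tx_as_gpio(void);          /* placeholder */
extern void uart_tx_as_uart(void);          /* placeholder */
extern void gpio_write_tx(uint8_t level);   /* placeholder */
extern void delay_us(uint16_t us);          /* placeholder */
extern void uart_send_byte(uint8_t b);      /* 250 kbps, 8N2 */

void dmx_send_frame(const uint8_t *slots, uint16_t count)
{
    /* Break + mark-after-break, bit-banged on the TX pin. */
    uart_tx_as_gpio();
    gpio_write_tx(0);
    delay_us(DMX_BREAK_US);
    gpio_write_tx(1);
    delay_us(DMX_MAB_US);
    uart_tx_as_uart();

    /* Start code 0, then the channel data, as ordinary UART bytes. */
    uart_send_byte(0x00);
    for (uint16_t i = 0; i < count; i++)
        uart_send_byte(slots[i]);
}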