Hi,

My problem is slightly off topic from PICs, but can anyone explain to me (or point me to a source of information on) the idea behind a CRC error detection algorithm I've stumbled across? I've thoroughly read the "Painless Guide to CRC Error Detection Algorithms" document available on the net, but this algorithm works in a different way and is still unclear to me.

It is part of an RDS (Radio Data System) decoding software. It computes a 10-bit value from a serially received 26-bit block containing 16 bits of data and a 10-bit CRC checkword. The MSB is received first. The CRC uses the following generator polynomial:

g(x) = x^10 + x^8 + x^7 + x^5 + x^4 + x^3 + 1

The algorithm is very simple:

Clear the register to zero. For each received bit that is set:
    Register = Register XOR table[consecutive bit number]

And here is an equivalent C program:

int table[26] = {
    0x8000, 0x4000, 0x2000, 0x1000, 0x0800, 0x0400, 0x0200, 0x0100,
    0x0080, 0x0040, 0xB700, 0x5B80, 0x2DC0, 0xA1C0, 0xE7C0, 0xC4C0,
    0xD540, 0xDD80, 0x6EC0, 0x8040, 0xF700, 0x7B80, 0x3DC0, 0xA9C0,
    0xE3C0, 0xC6C0
};

int received_bit(void);      /* returns the next bit of the block, MSB first */

int rds_check(void)
{
    int r = 0;               /* computed value */
    int i;

    for (i = 0; i < 26; i++)
        if (received_bit() == 1)
            r ^= table[i];
    return r;
}

I've tested this algorithm and it returns 0 if the checksum of the received block is correct, but how does it work, and how was the table created? Any help would be appreciated.

Piotr

Piotr Piatek
pisielek@inet.com.pl
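
P.S. Here is what I suspect the underlying principle might be, though I still can't see how the particular table values above follow from it. Because polynomial division over GF(2) is linear, the remainder of the whole 26-bit block should equal the XOR of the remainders contributed by its individual set bits, so those per-bit remainders can be precomputed into a table. The sketch below is only my own illustration of that idea: the constant GPOLY, the helper rem_xn() and the bit alignment are my guesses, and the demo table it builds does not match the constants in the software above, which evidently stores its entries in some other alignment.

/* Sketch of the per-bit-remainder idea; NOT the code from the RDS software. */
#include <stdio.h>

#define GPOLY 0x5B9u    /* g(x) = x^10+x^8+x^7+x^5+x^4+x^3+1, bit k = coefficient of x^k */

/* remainder of x^n divided by g(x), kept in the low 10 bits */
static unsigned rem_xn(int n)
{
    unsigned r = 1;             /* start from x^0 */
    while (n-- > 0) {
        r <<= 1;                /* multiply by x */
        if (r & 0x400)          /* an x^10 term appeared: reduce mod g(x) */
            r ^= GPOLY;
    }
    return r;
}

int main(void)
{
    unsigned demo[26];
    unsigned data = 0xA5F3;     /* arbitrary 16-bit data word */
    unsigned check = 0, syndrome = 0;
    int i;

    /* bit i of the block (MSB received first) represents x^(25-i) */
    for (i = 0; i < 26; i++)
        demo[i] = rem_xn(25 - i);

    /* systematic encoding: checkword = remainder of data(x) * x^10 */
    for (i = 0; i < 16; i++)
        if (data & (0x8000u >> i))
            check ^= demo[i];

    /* "receive" the 26-bit block data|check and accumulate the syndrome */
    for (i = 0; i < 26; i++) {
        int bit = (i < 16) ? (data >> (15 - i)) & 1
                           : (check >> (25 - i)) & 1;
        if (bit)
            syndrome ^= demo[i];
    }

    printf("check = 0x%03X, syndrome = 0x%03X\n", check, syndrome);   /* syndrome is 0 */
    return 0;
}

The syndrome comes out zero for a correctly encoded block, just like the table algorithm above, which is what makes me think a table of per-bit remainders is the general mechanism; what I'm missing is the convention that turns such remainders into the particular constants listed in the software.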