Well, sort of...

If it compresses well, it's definitely not very random. If it does not compress well, it may or may not be random, depending on the compressor. If it does not compress well on any of several compressors, it's more likely to be random. However, I think sequences with long repeating patterns (e.g. a shift-register/XOR algorithm with a non-maximal sequence length) would probably not be caught by most compressors, since that kind of structure is not common in the "real world" data they are designed for.

On Mon, 22 Jul 2002 20:16:06 +0300, you wrote:

>On Tue, 23 Jul 2002, Jinx wrote:
>
>>> Maybe a little late...
>>>
>>> Remember the good old check of PRNG output entropy. If you
>>> gather a fair amount of random output, try to compress it with common
>>> utilities, e.g. gzip. The higher the entropy, i.e. "randomness", the
>>> less compressible the data is. Of course, the quality of the
>>> compression algorithm is a factor, but today's best compression
>>> utilities should be fairly good at detecting any kind of pattern or
>>> non-randomness.
>>
>>I've got just a basic knowledge of compressors, but not modern
>>ones. So you're saying a file made of supposedly random numbers
>>probably shouldn't decrease in size? At all, or maybe just a little?
>>The algorithms I used to write for compression would actually
>>make a file bigger the more disparate the data, and really were
>>suitable for repetitive or regular data like black & white line drawings.
>
>If you try to compress a file with too much entropy, the file *grows* (by
>the size of the compression headers at least).
>
>Peter
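For anyone who wants to try this, here is a rough sketch of the idea in Python (the standard zlib module stands in for gzip, and the two test buffers are just made-up examples, not anyone's actual PRNG output):

    import os
    import zlib

    def compression_ratio(data: bytes) -> float:
        # compressed size divided by original size, using zlib at maximum effort
        return len(zlib.compress(data, 9)) / len(data)

    random_bytes = os.urandom(1 << 20)            # 1 MB from the OS entropy pool
    patterned = b"\x13\x37\xAB\xCD" * (1 << 18)   # 1 MB of a short repeating pattern

    print("random   :", compression_ratio(random_bytes))  # just over 1.0 - it grows slightly
    print("patterned:", compression_ratio(patterned))     # well under 1.0 - structure detected

The random buffer comes out a few bytes *larger* than it went in, which is exactly the header overhead Peter mentions; the repeating pattern collapses to almost nothing. A ratio near 1.0 only tells you the compressor found no structure, not that none is there.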