What is the total length of all your menu text? I ask because the answer determines how feasible (and how worthwhile) it is to use some Really Sneaky Tricks (TM) to crunch down the space required. You run a program on a PC that takes your library of strings and compresses it (many different algorithms can be used). That program then outputs a sequence of DB statements that just look like gibberish. A decompressor written for the PIC then extracts the data you want. Obviously you need to take the size of the decompression routine into account to determine whether this is a win or not. Remember that you have incredibly powerful resources on the PC to crunch and optimize the data; you just have to get it into a form that can be cheaply decoded on the PIC.

An example of a compression scheme with the characteristic that compression is expensive but decompression is almost free is LZSS (a tiny decoder sketch appears after the example below). Another example is one I used to pack about 15 pounds of text onto a 5 pound floppy disk:

- The basic idea was to crunch the text into a sequence of one- and two-byte codes that an interpreter could use to recreate the text.

- The one-byte codes always had the MSBit set; the two-byte codes had it clear.

- Some of the one-byte codes represented short strings that had a very high occurrence rate in the input data. They were string (not byte) indexes into a pseudo-array of strings, stored as one long string with nulls between the pseudo-array elements.

- Other one-byte codes were used to indicate runs of a single character, or the start of a counted string of characters that couldn't be effectively compressed.

- The two-byte codes represented a word, and possibly the characters following it. In my particular data, a word was nearly always followed by one of:

  (1) a space
  (2) a comma
  (3) a comma and a space
  (4) the end of a "record"

  I represented these four cases using two bits out of the 16 in the two-byte code. After deducting the top bit, which is always clear in a two-byte code, that leaves 13 bits, or 8192 values, to indicate a particular word.

- The word number served as an index into two tables. The first table gave the offset into a large string (the dictionary) where the word began; the second, which was defined as nibbles instead of bytes, gave the length of the word.

- One neat aspect was that the compression program on the PC could look for ways to nest words within one another in the dictionary. This reduced the raw size of the dictionary by about 20%. For example: if one of the words was "internationalize" and others were "intern", "nation", "national" and "nationalize", then the latter four could all be embedded in the first one! (A greedy version of this search is sketched after the example below.)

So, this sentence:

   "So, this sentence might be coded using the above system as something like this."

might be coded as:

   1 byte code indicating that four uncompressed bytes are to follow
   the uncompressed 4 bytes "So, "
   2 byte code for dictionary word "this "
   2 byte code for dictionary word "sentence "
   2 byte code for dictionary word "might "
   1 byte code for common string "be "
   2 byte code for dictionary word "coded "
   2 byte code for dictionary word "using "
   1 byte code for common string "the "
   2 byte code for dictionary word "above "
   2 byte code for dictionary word "system "
   1 byte code for common string "as "
   2 byte code for dictionary word "something "
   2 byte code for dictionary word "like "
   2 byte code for dictionary word "this"
   1 byte code for common string "."
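The interpreter that eats such a token stream can be very small. Here is a sketch in C, with a toy two-word dictionary so it actually runs; the opcode values (0xFE, 0xFF), the nibble packing order, the placement of the two suffix bits, and the table names (dict, word_off, word_len, common) are all illustrative guesses, not the original program's layout:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Toy stand-ins for the tables the PC-side compressor would emit. */
    static const char     dict[]     = "helloworld";  /* words, back to back */
    static const uint16_t word_off[] = { 0, 5 };      /* start of each word  */
    static const uint8_t  word_len[] = { 0x55 };      /* lengths, 2 per byte */
    static const char     common[]   = "be \0the \0"; /* NUL-separated       */

    #define OP_LITERAL 0xFE  /* next byte = count, then raw bytes follow */
    #define OP_RUN     0xFF  /* next bytes = count, character            */

    static uint8_t word_length(uint16_t idx)          /* unpack one nibble */
    {
        uint8_t b = word_len[idx >> 1];
        return (idx & 1) ? (uint8_t)(b >> 4) : (uint8_t)(b & 0x0F);
    }

    static const char *common_string(uint8_t n)       /* n-th common string */
    {
        const char *p = common;
        while (n--)
            p += strlen(p) + 1;
        return p;
    }

    static void decompress(const uint8_t *in, void (*emit)(char))
    {
        for (;;) {
            uint8_t b = *in++;
            if (b & 0x80) {                           /* one-byte code       */
                if (b == OP_LITERAL) {                /* counted raw string  */
                    uint8_t n = *in++;
                    while (n--) emit((char)*in++);
                } else if (b == OP_RUN) {             /* run of a character  */
                    uint8_t n = *in++;
                    char    c = (char)*in++;
                    while (n--) emit(c);
                } else {                              /* common-string index */
                    const char *p = common_string((uint8_t)(b & 0x7F));
                    while (*p) emit(*p++);
                }
            } else {                                  /* two-byte code       */
                uint16_t code = (uint16_t)((b << 8) | *in++);
                uint16_t idx  = code & 0x1FFF;        /* 13-bit word number  */
                const char *w = dict + word_off[idx];
                uint8_t  n    = word_length(idx);
                while (n--) emit(*w++);
                switch ((code >> 13) & 3) {           /* the two suffix bits */
                case 0: emit(' ');            break;
                case 1: emit(',');            break;
                case 2: emit(','); emit(' '); break;
                case 3: return;                       /* end of record       */
                }
            }
        }
    }

    static void to_stdout(char c) { putchar(c); }

    int main(void)
    {
        /* Common string 0 ("be "), word 0 plus a space, then word 1 with
           the "end of record" suffix: prints "be hello world".           */
        static const uint8_t stream[] = { 0x80, 0x00, 0x00, 0x60, 0x01 };
        decompress(stream, to_stdout);
        putchar('\n');
        return 0;
    }

Note that in this sketch the "end of record" suffix doubles as the terminator, so records need no length or sentinel byte of their own.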
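The nesting search itself can be approximated in a few lines on the PC. This greedy sketch places the longest words first and then looks each shorter word up in the dictionary built so far (the real program may well have been cleverer, for instance about partial overlaps between words):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Sort longest-first so short words can nest inside long ones. */
    static int by_len_desc(const void *a, const void *b)
    {
        return (int)strlen(*(const char *const *)b)
             - (int)strlen(*(const char *const *)a);
    }

    int main(void)
    {
        const char *words[] = { "intern", "nation", "national",
                                "nationalize", "internationalize" };
        enum { N = sizeof words / sizeof *words };
        char   dict[4096] = "";
        size_t used = 0;

        qsort(words, N, sizeof *words, by_len_desc);

        for (int i = 0; i < N; i++) {
            const char *hit = strstr(dict, words[i]);
            size_t off;
            if (hit) {
                off = (size_t)(hit - dict); /* nested in an earlier word */
            } else {
                off = used;                 /* genuinely new: append it  */
                strcpy(dict + used, words[i]);
                used += strlen(words[i]);
            }
            printf("%-17s -> offset %2zu, length %2zu\n",
                   words[i], off, strlen(words[i]));
        }
        printf("dictionary = \"%s\" (%zu bytes instead of %d)\n",
               dict, used, 16 + 11 + 8 + 6 + 6);
        return 0;
    }

On the five example words this emits a 16-byte dictionary ("internationalize") instead of the 47 bytes the words would occupy stored separately.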
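As for LZSS, it shows the same pleasant asymmetry: the encoder does all the searching, while the decoder is just a copy loop over a small window. A minimal decoder with the classic 4 KB window and 4-bit match lengths (parameters you would shrink drastically to fit a PIC's RAM) looks roughly like this:

    #include <stdint.h>
    #include <stdio.h>

    #define WIN 4096    /* classic window size; far too big for most PICs */

    /* Each flag byte controls the next eight items: a set bit means a
       literal byte, a clear bit a (position,length) pair into the window
       of recent output. Encoder and decoder must agree on the window's
       initial contents; here it starts zeroed. */
    static size_t lzss_decode(const uint8_t *in, size_t in_len, uint8_t *out)
    {
        static uint8_t win[WIN];
        size_t   ip = 0, op = 0, wp = 0;
        unsigned flags = 0;

        while (ip < in_len) {
            flags >>= 1;
            if (!(flags & 0x100))                /* ran out of flag bits */
                flags = in[ip++] | 0xFF00u;

            if (flags & 1) {                     /* literal byte */
                uint8_t c = in[ip++];
                out[op++] = c;
                win[wp] = c;  wp = (wp + 1) % WIN;
            } else {                             /* back-reference */
                unsigned lo  = in[ip++], hi = in[ip++];
                unsigned pos = lo | ((hi & 0xF0u) << 4); /* 12-bit position */
                unsigned len = (hi & 0x0Fu) + 3;         /* min match of 3  */
                while (len--) {
                    uint8_t c = win[pos];  pos = (pos + 1) % WIN;
                    out[op++] = c;
                    win[wp] = c;  wp = (wp + 1) % WIN;
                }
            }
        }
        return op;
    }

    int main(void)
    {
        /* Flag 0x03: two literals 'A','B', then one back-reference
           (position 0, length 6) that expands to "ABABAB".          */
        static const uint8_t packed[] = { 0x03, 'A', 'B', 0x00, 0x03 };
        uint8_t out[64];
        size_t  n = lzss_decode(packed, sizeof packed, out);
        fwrite(out, 1, n, stdout);
        putchar('\n');
        return 0;
    }

The decoder needs no tables at all, just the window RAM and a few bytes of state, which is what makes the decompression side almost free.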
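Whichever scheme you pick, the last step on the PC side is trivial: write the crunched bytes out as source the assembler will swallow. A hypothetical helper (the exact directive depends on your assembler: DB, dt, retlw tables, and so on):

    #include <stdio.h>
    #include <stdint.h>

    /* Dump a compressed blob as assembler DB lines, eight bytes per line. */
    static void emit_db(FILE *out, const uint8_t *buf, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            fprintf(out, "%s0x%02X", (i % 8) ? "," : "\n\tDB\t", buf[i]);
        fputc('\n', out);
    }

    int main(void)
    {
        static const uint8_t blob[] = { 0x80, 0x00, 0x00, 0x60, 0x01,
                                        0xFE, 0x04, 'S', 'o', ',', ' ' };
        emit_db(stdout, blob, sizeof blob);
        return 0;
    }

This is the "sequence of DB statements that just look like gibberish" mentioned at the top.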
Counting up the codes in the example above, the string compresses down from 79 characters to 29 bytes: one marker byte plus four literal bytes, ten two-byte dictionary codes, and four one-byte common-string codes (5 + 20 + 4 = 29). That is not counting resources shared by many strings: the dictionary, the pointers into it, and the table of very common strings.

Bob Ammerman
RAm Systems