As others have posted, the CCS compiler defines int as being 8 bits. I can understand
why the compiler does this, as the PIC chips only handle 8-bit math directly. K&R's
"The C Programming Language" states that the precision of both int and float depends
on the particular machine you are using (page 9). I have to remind myself of this
sometimes, since I also code for the PC, where int is either 16 bits (DOS or 16-bit
Windows) or 32 bits (32-bit Windows or Unix). On the PIC, at least, I have to make a
conscious effort to have the compiler generate the extra (less efficient) code for
16-bit variables, by declaring them as long or unsigned long. The only data type
whose size K&R specifically defines is char, which should be a single byte. With a
full ANSI C compiler, where int is at least 16 bits, it takes extra work to make sure
the compiler generates efficient code for an 8-bit processor. I guess I'm just used
to using int, not char, as the default type for variables. Maybe CCS could make this
an option, so that the 16-bit PIC processors could be handled differently, but I like
this behavior for the 8-bit parts.

Howard

At 07:37 AM 10/17/00, you wrote:
>Hang on a millisecond.. What is the big deal over int? A signed int
>should be +3276X to -3276X, and an unsigned int should be 0-65535.
>Whenever y'all discuss converting to -6, that applies to a char or
>uchar only. The confusion comes in with compilers that have a short
>int, and/or a long which is a 16- not 32-bit variable. So I am still
>puzzled as to why everyone thinks that an int converts 250 to -6.
>Someone should look on the variable types page of CCS and find out
>where they stand.
>
>One more time:
>uchar 8 bit
>char  8 bit
>uint  16 bit
>int   16 bit
>ulong 32 bit
>long  32 bit
>
>Chris Eddy~
>
>"M. Adam Davis" wrote:
>
> > Bob Ammerman wrote:
> > > This is _not_ true.
> > > In ANSI "C", a comparison of an 'int' and an
> > > 'unsigned int' is performed as an 'unsigned int' comparison,
> > > _not_ a 'signed long' comparison.
> >
> > If so, what is the conversion? You can't represent 250 (or even 128)
> > in a signed int. I suppose that there is no conversion; it just
> > refers to the uint as an int, in which case it becomes -6.
> >
> > Well, this is a reason for good coding practices. Always cast
> > variables when they are dissimilar... Portability is better then,
> > as well.
> >
> > -Adam
> >
> > --
> > http://www.piclist.com#nomail Going offline? Don't AutoReply us!
> > use mailto:listserv@mitvma.mit.edu?body=SET%20PICList%20DIGEST