John W. Temples wrote:
> The original poster was correct.

Hi John,

Thanks for your note, and my apologies to the original poster if I was presumptuous in my post -- I took the statement to mean that a 16-bit signed integer can *only* take on values from -32767 to +32767 inclusive. The C programming language aside, the decimal range of a signed (two's complement) 16-bit integer is actually -32768 to +32767.

There is one thing in "The C Programming Language" (second edition) by Brian Kernighan and Dennis Ritchie that is confusing to me (I'm not sure whether that edition came out before the C90 standard):

    "The values are acceptable minimum magnitudes; larger values may be used."
    . . .
    INT_MAX  +32767
    INT_MIN  -32767
    . . .

Does "larger values may be used" mean that the following is legal?

    signed int Number = -32768;

Do these defined constants mean that you can't use the *full* range of a 16-bit signed number in S16 integer types in your C programs? I honestly don't understand why they did not use -32768 for INT_MIN -- this appears lazy to me, so I must be missing something obvious. Why are they not using the full range in the defined constants?

HI-TECH, for example, defines:

    #define INT_MAX 32767        /* max for int */
    #define INT_MIN (int)-32768  /* min for int */

Does this mean that HI-TECH is not ANSI C compliant?

Thanks for any clarification. I appreciate your time.

Best regards,

Ken Pergola

--
http://www.piclist.com hint: The list server can filter out subtopics (like ads or off topics) for you. See http://www.piclist.com/#topics