> > 1) Perform the multiplication on the raw data of the string, that is
> > the ASCII value - this is arguably what would happen in some cases in C,
> > since a string is just an array of int8's.
> > 2) At compile time, realize that you can't multiply two strings and
> > throw a compile error. This is the "Pascal" way.
> > -or-
> > 3) Realize that strings need to be converted to numbers first before
> > they can be multiplied, so do the conversion, complete the
> > multiplication and save the result as a number.
> >
> > I agree that #1 is the worst possible outcome. You are arguing for
> > #2. I would rather have #3.

> #3 has the drawback of not catching unintentional conversions. It is good
> to have a language where likely mistakes cause compile errors instead of
> unwanted automatic conversions. Remember that maintenance is the big cost
> of software, not the initial writing. Adding a few inline functions or
> whatever is trivial when first writing the code. Chasing down runtime bugs
> later, when a change is made by someone who slightly misunderstands a
> datatype, is a lot more expensive.

I'll take #2 as well. #3 leads you down the path where VB ended up, after
slowly drifting from #2 to #3. Consider:

X = 1 + "2"
X = 1 & "2"

Without knowing what X is declared as, the result could be 3, "3", 12 or
"12". Fun! No thanks.

Actually, I could be wrong about the exact results; it's hard to remember
what VB does these days. As I said, no thanks.

Tony
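
To make the outcomes above concrete, here is a minimal C sketch (assuming
ASCII) of what #1 and #3 amount to for a digit string, and of how C also
quietly accepts 1 + "2" - as pointer arithmetic rather than any kind of
conversion:

#include <stdio.h>

int main(void)
{
    char s[] = "2";

    /* Outcome #1: multiply the raw character data.
       '2' is 50 in ASCII, so this yields 150, not 6. */
    int raw = s[0] * 3;

    /* Outcome #3: convert to a number first, then multiply.
       In C the conversion must be written out; s[0] - '0' is 2, so this is 6. */
    int converted = (s[0] - '0') * 3;

    /* The 1 + "2" case: C accepts it without complaint, but it is
       pointer arithmetic - p ends up pointing at the string's NUL terminator. */
    const char *p = 1 + "2";

    printf("raw = %d, converted = %d, *p = %d\n", raw, converted, (int)*p);
    return 0;
}

It prints raw = 150, converted = 6, *p = 0 - at most one of those is what
anyone actually meant, and nothing at compile time says which, which is the
argument for the compile error of #2.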