Low drive capability of the digital IO pins (only 4.5mA for the low end parts at 3.6V.)
Setup of the onboard peripherals is complex and confusing. The documentation is minimal.
There's no 40 pin DIP part. Aside from the small 20 pin parts, the chips tend to end up in amateur-unfriendly packages (48 pin SOP, 64 QFP, etc.)
The clock generator is just WEIRD. I guess most of its oddness stems from the low-power characteristics. One of the goals is to be able to exit the low-power modes within 6us of an appropriate event, which is MUCH quicker than some other microcontrollers can restart their clock. All of the MSP430 chips are designed to operate with an external low-frequency (32.768kHz) crystal (ACLK), and they all have an internal RC-based clock with digital control (MCLK.)
In the high end parts, the MCLK is locked to the ACLK using something TI calls a "Frequency Locked Loop", and you simply set the desired multiple of the 32kHz crystal that you want the system to run at (the default is about 1MHz, the highest supported speed is about 4MHz.) In the low end parts, the FLL is missing, so you either live with a less accurate clock or implement something similar to the FLL in software. In either case, the peripherals operate off of the (more stable) ACLK. (actually, the crystal CAN be omitted entirely...)
One of the interesting (IMHO) side effects of this is that the main CPU clock (MCLK) is NOT a perfect square wave. It can have assorted amounts of jitter, and for frequencies that are not power-of-two multiples (or something like that) of the ACLK, it's derived from multiple "taps" of counters, so it doesn't look much like an unstable square wave either. This can have implications if you need to write code that is self-timed down to the single-cycle level - not all your cycles are the same length!
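To set that multiplier on an FLL+ part (the 4xx family, for example), it comes down to a couple of register writes. This is only a sketch, assuming a 32.768kHz watch crystal and the 4xx register names; other families differ, so check the device-specific user's guide:

    #include <msp430.h>

    /* Sketch only: FLL+ register names from the 4xx family.
     * MCLK = (N+1) x ACLK, where N is the low 7 bits of SCFQCTL.
     * The power-up default N = 31 gives 32 x 32768Hz = ~1.05MHz. */
    void clock_init(void)
    {
        FLL_CTL0 |= XCAP14PF;  /* internal load caps for the 32.768kHz crystal */
        SCFQCTL = 63;          /* N = 63 -> MCLK = 64 x 32768Hz = ~2.1MHz      */
        /* Higher multipliers may also need the FN_x DCO-range bits in SCFI0. */
    }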
The MSP430 does have weak pull-ups for the ports. Set the pin as an input and write a 1 to it to activate the weak pull-up.
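A minimal sketch, with the pin chosen arbitrarily. On families that have a PxREN register (2xx and later) the resistor also has to be enabled there; PxOUT then picks pull-up versus pull-down:

    #include <msp430.h>

    /* Sketch: enable the internal pull-up on P1.3 (just an example pin). */
    void button_pin_init(void)
    {
        P1DIR &= ~BIT3;   /* P1.3 as input                      */
        P1REN |=  BIT3;   /* enable the pull resistor           */
        P1OUT |=  BIT3;   /* write a 1: pull-up, 0: pull-down   */
    }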
It's always a good idea to write ISRs for all interrupts, even the ones you are not using. I like to have two routines for each: one while debugging and writing code, and one for released code. The debugging version holds in the interrupt and lets me know it got triggered; that way I know I've got a problem and can figure out why the ISR fired. Late in the debug stage this hold needs to be replaced with something else, and what that is depends on the system. It could be a reset, or it could be just a return to where you were. Either way, you know what the system is going to do.
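A sketch of that pattern using IAR-style vector pragmas. The vector name and the release-mode action are just examples; pick whatever fits your system, and repeat the handler for each unused vector:

    #include <msp430.h>

    /* Catch-all handler for an otherwise unused vector (PORT2 chosen purely
     * as an example). */
    #pragma vector = PORT2_VECTOR
    __interrupt void unused_isr(void)
    {
    #ifdef DEBUG
        for (;;)        /* hold here so the debugger shows the stray trigger */
            ;
    #else
        WDTCTL = 0;     /* release build: bad watchdog key forces a reset    */
    #endif
    }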
If you have to disable interrupts for some short section of code, do not just re-enable them afterwards; doing so assumes that they were enabled before you disabled them. Save the state, disable, then re-enable only if they were enabled before: (PUSH SR; DINT) then (POP SR) restores the interrupt state to what it was. This is especially true when working in C; instead of doing the work yourself, search the compiler docs for "atomic" or "monitor" declarations, which make the compiler take care of it for you.
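In C the same save/disable/restore dance looks roughly like this, using the IAR-style intrinsics (msp430-gcc ships compatible ones; the function and variable names here are made up):

    #include <msp430.h>
    #include <intrinsics.h>

    /* Hypothetical example: bump a counter that is shared with an ISR. */
    void safe_increment(volatile unsigned int *counter)
    {
        __istate_t saved = __get_interrupt_state();  /* like PUSH SR */
        __disable_interrupt();                       /* like DINT    */

        (*counter)++;               /* the short section that must not be interrupted */

        __set_interrupt_state(saved);   /* like POP SR: GIE comes back only
                                           if it was set to begin with     */
    }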
The quoted interrupt service latency may not include the time required to finish executing a multi-cycle instruction that started just prior to the interrupt. E.g. if a 6-cycle instruction has just started when an interrupt occurs, the actual latency will be 6 cycles more than expected.
To retain variable values through a watchdog reset when using IAR, add the __no_init keyword to the variable declaration.
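Roughly like this. The guard word for telling a true cold power-up (random RAM) from a watchdog reset is a common companion trick, not something __no_init gives you, and the names here are hypothetical:

    /* IAR-specific: __no_init keeps these out of the C startup's zero/copy
     * initialization, so their values survive a watchdog (PUC) reset. */
    __no_init static unsigned int bootCount;
    __no_init static unsigned int bootGuard;

    void count_resets(void)
    {
        if (bootGuard != 0xA5A5u) {   /* cold power-up: contents are garbage */
            bootCount = 0;
            bootGuard = 0xA5A5u;
        }
        bootCount++;
    }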
The F5438 can get stuck in BSL mode and will not clear SYSBSLIND without a POR; alternatively, toggle the TEST pin twice while RST/NMI is high, followed by a RST.
As far as we can tell, in the MSP430F543# line, the hardware I2C modes always consume more power than bit-banging because they leave the lines pulling against the resistors far too long. Also, the trailing edge of the clock pulse is dangerously close to the change of the data line.