Hi there, everyone. This is my first post to the list in a long, long time.

Over the last few years, I've been slowly writing my own little operating system for the PIC. I'm up to working on the networking and multitasking functions, but Other Things are sucking up a lot of time these days. So, I thought I might try to do a service and release the libraries I have written (and even partially tested!), so that even if I don't finish, at least some good has come of the work.

All of these libraries have been written specifically for the 16-bit PIC architecture, and include a mix of C and Assembly as appropriate for portability and speed. So far I have finished:

* Self-balancing Red-Black binary tree implementation. Uses minimal overhead to guarantee O(log n) worst-case insert, seek, and delete times, which makes it well suited to real-time systems that need timing guarantees. It does not allocate memory for its operations; instead it uses 'slots' embedded in existing memory blocks, so a record can be added to and removed from multiple trees without ANY allocation overhead. (Rough sketch below.)

* LZ-based compression library. Intended to be flexible enough to stream compressed flash blocks with no 'temporary buffer' overhead. An optional enhancement to the basic LZ algorithm gains another half-bit per symbol of compression, at the cost of requiring a temporary buffer. (Sketch below.)

* BTC: the 'Block Transfer Computation' language. Sort of an equivalent of XSLT for small binary streams; intended to assemble new records from parts of other records without hand-written copying code (parameter juggling). (Sketch below.)

* Multiple-zone memory allocator. This is probably the masterpiece of the system. The newer PICs have memory models which divide the address range into X and Y sections for DSP operations and only allow device DMA within the lower half, so requesting a block of memory is no longer a simple matter: you have to know what you want the memory for. Also, monolithic memory managers have multithreading issues, and there are many memory-allocation methods which are optimal in different circumstances. This library allows 'sub-zones' to be created, each controlled by a separate memory allocator with 'capability flags'. When you request a block of memory (from your own memory allocator) you indicate the capabilities you want as well as the size (like a block of 32 bytes of 'X' RAM with DMA). Needless to say, malloc() has become a little more complicated. Three basic allocators are provided: the trivial 'next block' allocator that cannot free blocks but has zero overhead, a Linux-style linear allocator that reclaims blocks with minimal overhead, and an advanced binary-tree based allocator that gives time guarantees and optimal use of fragmented memory, but with the most overhead. The basic scheme is: a global tree-based allocator 'owns' all memory, but once a block is allocated it can be assigned to a new memory manager which is optimal for the task at hand. (This is recursive, so you can have a whole structure of layered memory managers.) (Sketch below.)

* Multiwriter lock-free queues. Allows near-constant-time appending to a FIFO queue regardless of its previous state. (For example, interrupts can always add to the queue, even if other operations were in progress when the interrupt fired.) For a full explanation (and some code) you can read my paper: http://arxiv.org/abs/0709.4558 (Sketch below.)

As you can see, these are all major building blocks for any attempt at an OS. Each library is about 1-2K compiled. Yes, that small. I'm trying to fit the entire OS in 16K.
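To make the 'slots' idea concrete, here is a minimal sketch in C. All of the names here (rb_slot, task_record, SLOT_OWNER) are made up for illustration and are not the library's actual API; the point is only that the tree links live inside the record itself, so membership in several trees costs no allocations.

#include <stddef.h>
#include <stdint.h>

/* Tree links live inside the record (a 'slot'), so a record can sit
 * in several trees at once without any per-insert allocation. */
typedef struct rb_slot {
    struct rb_slot *parent, *left, *right;
    uint8_t         colour;               /* 0 = red, 1 = black */
} rb_slot;

/* Example record that belongs to two independent trees. */
typedef struct {
    uint16_t id;
    uint32_t deadline;
    rb_slot  by_id;        /* slot used by the 'id' tree       */
    rb_slot  by_deadline;  /* slot used by the 'deadline' tree */
} task_record;

/* Recover the owning record from a slot pointer (container_of idiom). */
#define SLOT_OWNER(ptr, type, member) \
    ((type *)((char *)(ptr) - offsetof(type, member)))

/* Usage: given an rb_slot *s found while walking the 'deadline' tree:
 *     task_record *t = SLOT_OWNER(s, task_record, by_deadline);
 */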
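For the LZ library, here's a rough sketch of one way streaming can avoid a temporary buffer on the decode side: the bytes already written to the output block serve as the dictionary, so back-references copy from the destination itself. The token format below (a flag byte, then literals or offset/length pairs) is invented for illustration and is not the library's actual stream format, and there is no bounds checking of malformed input.

#include <stddef.h>
#include <stdint.h>

/* Decode an LZSS-style stream: each flag byte controls the next 8
 * tokens; a set bit means 'literal byte', a clear bit means 'copy
 * len bytes starting off bytes back in the output produced so far'. */
size_t lz_decode(const uint8_t *in, size_t in_len, uint8_t *out)
{
    size_t ip = 0, op = 0;

    while (ip < in_len) {
        uint8_t flags = in[ip++];
        for (int bit = 0; bit < 8 && ip < in_len; bit++) {
            if (flags & (1u << bit)) {
                out[op++] = in[ip++];          /* literal byte */
            } else {
                uint8_t off = in[ip++];        /* back-reference */
                uint8_t len = in[ip++];
                while (len--) {                /* the output IS the window */
                    out[op] = out[op - off];
                    op++;
                }
            }
        }
    }
    return op;                                 /* bytes written */
}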
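I won't reproduce the real BTC instruction set here; the following is a purely hypothetical miniature of the idea, where a 'program' is just a table of copy operations that assembles a new record from pieces of an existing one instead of hand-writing the parameter juggling each time.

#include <stdint.h>
#include <string.h>

/* Hypothetical instruction: copy 'len' bytes from offset 'src_off' of
 * the source record to offset 'dst_off' of the output record.
 * len == 0 terminates the program.  Source and destination are
 * separate buffers in this sketch. */
typedef struct {
    uint8_t src_off;
    uint8_t dst_off;
    uint8_t len;
} btc_op;

static void btc_run(const btc_op *prog, const uint8_t *src, uint8_t *dst)
{
    for (; prog->len != 0; prog++)
        memcpy(dst + prog->dst_off, src + prog->src_off, prog->len);
}

/* Example 'program': copy an 8-byte record but swap two 2-byte fields. */
static const btc_op swap_fields[] = {
    { 0, 0, 4 },     /* 4-byte header unchanged    */
    { 4, 6, 2 },     /* field A -> where B was     */
    { 6, 4, 2 },     /* field B -> where A was     */
    { 0, 0, 0 },     /* end of program             */
};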
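Here's a guess at what 'malloc with capability flags' might look like from the caller's side, with the trivial 'next block' (bump) allocator behind it. The flag names, the struct layout, and the zone_alloc call are all assumptions of mine, not the library's real interface, and alignment is ignored for brevity.

#include <stddef.h>
#include <stdint.h>

/* Hypothetical capability flags. */
#define MEM_X_RAM  0x01   /* block must be in X data space (DSP ops)  */
#define MEM_Y_RAM  0x02   /* block must be in Y data space            */
#define MEM_DMA    0x04   /* block must be reachable by the DMA unit  */

/* One 'sub-zone': a region of memory plus the capabilities it offers,
 * managed here by the trivial 'next block' allocator that never frees. */
typedef struct {
    uint8_t *base;     /* start of the zone               */
    size_t   size;     /* total bytes in the zone         */
    size_t   used;     /* bump pointer                    */
    uint8_t  caps;     /* capabilities this zone provides */
} mem_zone;

/* The request succeeds only if this zone can satisfy every flag the
 * caller asked for and still has room. */
static void *zone_alloc(mem_zone *z, size_t size, uint8_t want)
{
    if ((z->caps & want) != want)  return NULL;  /* wrong kind of RAM */
    if (z->size - z->used < size)  return NULL;  /* zone exhausted    */
    void *block = z->base + z->used;
    z->used += size;
    return block;
}

/* Example: 32 bytes of X RAM that the DMA engine can reach. */
static uint8_t  x_region[256];   /* stand-in for a chunk of X data space */
static mem_zone x_dma_zone = { x_region, sizeof x_region, 0,
                               MEM_X_RAM | MEM_DMA };

void *demo(void) { return zone_alloc(&x_dma_zone, 32, MEM_X_RAM | MEM_DMA); }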
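And for the lock-free queues, a generic illustration of why appending can be near constant time. This is NOT the algorithm from the paper linked above; it's the well-known multi-producer, single-consumer enqueue (one atomic swap of the tail, then a single pointer store), written with GCC/Clang __atomic builtins. A 16-bit PIC port would substitute whatever atomic primitive (or brief interrupt masking) the part actually provides.

#include <stddef.h>

typedef struct q_node {
    struct q_node *next;
    void          *payload;
} q_node;

typedef struct {
    q_node *tail;      /* most recently appended node                */
    q_node *head;      /* consumer side (dequeue not shown here)     */
    q_node  stub;      /* dummy node so the list is never empty      */
} mpsc_queue;

void mpsc_init(mpsc_queue *q)
{
    q->stub.next = NULL;
    q->head = q->tail = &q->stub;
}

/* Safe to call from any writer, including an interrupt handler that
 * pre-empted another writer mid-append. */
void mpsc_append(mpsc_queue *q, q_node *n)
{
    n->next = NULL;
    /* Atomically claim the tail position. */
    q_node *prev = __atomic_exchange_n(&q->tail, n, __ATOMIC_ACQ_REL);
    /* Link the previous tail to us; until this store lands, the
     * consumer simply sees a momentarily shorter queue. */
    __atomic_store_n(&prev->next, n, __ATOMIC_RELEASE);
}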
My future work plan is something like this:

* An FSM-based event/thread processing system called 'Protocol', which forms the core of multiple network stacks. (A rough, generic sketch is tacked on at the very end of this post.)

* LDPC (low-density parity-check) library for physical-layer data block coding. (LDPC codes come remarkably close to the Shannon limit and decode with simple iterative message passing, which puts them ahead of the classical error-correction schemes for this kind of work.)

* An SFTP-based protocol (simple FTP, not secure FTP) to get/set 'properties', not unlike a web server, but with less overhead.

* Relocatable modules: small code blocks (like device drivers) which can be uploaded to PICs on the fly. (Tricky, but incredibly useful.) (Anyone who may have already done this, PLEASE let me know.)

Now the downsides:

* The libraries are not as tested as I would like. (But YOU can help. :-)

* These are not generally 'drop in' replacements for the old ways of doing things. New thinking is required.

* The documentation is seriously lacking.

On the last point, I'm prepared to improve the documentation for the libraries that are wanted, so let me know. Documentation will be completed in order of the loudest screams.

Finally, is there an optimum place to put the code for general dissection? Somewhere with collaborative features?

--
Jeremy Lee BCompSci (Hons)
The Unorthodox Engineers
www.unorthodox.com.au

--
http://www.piclist.com PIC/SX FAQ & list archive
View/change your membership options at
http://mailman.mit.edu/mailman/listinfo/piclist
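PS: since 'Protocol' came up above, here is a generic table-driven FSM skeleton, purely to illustrate the style of event-driven core I have in mind. The states, events, and handler names are invented for the example and have nothing to do with the eventual design.

#include <stddef.h>
#include <stdint.h>

typedef enum { EV_RX_BYTE, EV_TIMEOUT, EV_COUNT } event_t;
typedef enum { ST_IDLE, ST_HEADER, ST_PAYLOAD, ST_COUNT } state_t;

typedef struct {
    state_t next;                    /* state to switch to               */
    void  (*action)(uint8_t data);   /* side effect to run (may be NULL) */
} transition;

static void start_frame(uint8_t b) { (void)b; /* ... */ }
static void store_byte(uint8_t b)  { (void)b; /* ... */ }
static void drop_frame(uint8_t b)  { (void)b; /* ... */ }

/* One row per state, one column per event: the protocol becomes data. */
static const transition fsm[ST_COUNT][EV_COUNT] = {
    [ST_IDLE]    = { [EV_RX_BYTE] = { ST_HEADER,  start_frame },
                     [EV_TIMEOUT] = { ST_IDLE,    NULL        } },
    [ST_HEADER]  = { [EV_RX_BYTE] = { ST_PAYLOAD, store_byte  },
                     [EV_TIMEOUT] = { ST_IDLE,    drop_frame  } },
    [ST_PAYLOAD] = { [EV_RX_BYTE] = { ST_PAYLOAD, store_byte  },
                     [EV_TIMEOUT] = { ST_IDLE,    drop_frame  } },
};

static state_t state = ST_IDLE;

/* Called by the event/thread scheduler for every queued event. */
void dispatch(event_t ev, uint8_t data)
{
    const transition *t = &fsm[state][ev];
    if (t->action)
        t->action(data);
    state = t->next;
}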