I would use a Perl script for this kind of task; it should not be too
difficult. Perl itself is free, and there are lots of tutorials and
examples on the web. You can probably even find a usable script for the
task - most likely somebody has had this problem before you...

The general structure would be: collect the main page with e.g.
LWP::Simple or even curl, then parse it to get all links to the
subdocuments and fetch them recursively. Then just strip the unnecessary
headers and join the files together. Some small cleanup will probably be
needed, such as replacing links to the subdocuments with links to
sections of the joined document, so the page stays browsable. A rough
sketch of what I mean is at the bottom of this mail.

The other option would be to use wget to collect the whole document tree
and use a Perl script just for the joining.

All of that is much easier to do on Linux than on Windows.

On Thu, Sep 23, 2010 at 6:44 PM, John Gardner wrote:
> This does document conversions. The price is right...
>
> http://www.tech-recipes.com/rx/4001/convert-pdf-doc-txt-and-html-files-for-reading-on-your-kindle/
>
> Jack

--
KPL
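
P.S. A rough, untested sketch of the kind of script I mean. The module
names (LWP::Simple, HTML::LinkExtor, URI) are standard CPAN modules; the
link filtering and the <body> stripping are just guesses that would need
tuning for the actual site:

  #!/usr/bin/perl
  # Rough sketch: fetch the main page, follow the links it contains to
  # sub-pages in the same directory, and glue their <body> contents into
  # one big HTML file.
  use strict;
  use warnings;
  use LWP::Simple qw(get);
  use HTML::LinkExtor;
  use URI;

  my $start = shift or die "usage: $0 <URL of the main page>\n";
  my $main  = get($start) or die "cannot fetch $start\n";

  # Directory part of the start URL; used to skip off-site links.
  (my $dir = $start) =~ s{[^/]*$}{};

  # Collect the href of every <a> tag on the main page, made absolute.
  my @links;
  my $extor = HTML::LinkExtor->new(sub {
      my ($tag, %attr) = @_;
      push @links, URI->new_abs($attr{href}, $start)->as_string
          if $tag eq 'a' and $attr{href};
  });
  $extor->parse($main);
  $extor->eof;

  # Fetch each sub-document once, keep only what is inside <body>, and
  # append it behind a named anchor.  Rewriting the cross-links to point
  # at these #anchors is the "small cleanup" step and is left out here.
  my $joined = "<html><body>\n";
  my %seen;
  for my $url (@links) {
      $url =~ s/#.*//;                          # drop fragments
      next if $seen{$url}++ or index($url, $dir) != 0;
      my $page = get($url) or next;
      (my $body = $page) =~ s{.*<body[^>]*>|</body>.*}{}gis;
      (my $name = $url) =~ s/\W/_/g;
      $joined .= qq{<a name="$name"></a>\n$body\n<hr>\n};
  }
  $joined .= "</body></html>\n";

  open my $out, '>', 'joined.html' or die "cannot write joined.html: $!\n";
  print {$out} $joined;
  close $out;
  print "wrote joined.html, ", scalar(keys %seen), " links seen\n";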
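
If you go the wget route instead, something along these lines should
mirror one level of the document tree into the current directory (the
URL is just a placeholder), so the Perl script only has to do the
joining:

  wget --recursive --level=1 --no-parent --convert-links \
       --accept html,htm http://www.example.com/manual/index.html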