Offline web site access works well for static content and directly linked content. Web site rippers don't work well on web sites that are backed by databases, which is probably one of the problems you're having. If you can give us an example of a web site you're having trouble with, I'm sure someone will have a few suggestions on how to handle it.

As much as I dislike abusing web sites with rippers, it has been useful, especially in cases such as Circuit Cellar Online, where the site essentially languished. I got all the PDFs of the articles just before it went completely down, and it's great content that, apparently, Circuit Cellar doesn't have the full rights to show on their main site. It's in legal copyright limbo as far as I can tell. Archive.org is good, but it doesn't get everything...

-Adam

Buehler, Martin wrote:
>I would like to take some 'complete' web sites with me to places where
>I have no web access.
>
>I tried downloading a complete site using QuadSucker, which I used a few
>years ago with success, but this does not seem to work with '.asp' pages,
>which all have the same address with different arguments. The downloaded
>pages work in neither Internet Explorer nor Firefox.
>
>Is there a tool for doing this that almost 'eats' everything?
>
>thanx!
>tino
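
For what it's worth, the core reason those '.asp' pages break once saved is that 'page.asp?id=3' and 'page.asp?id=4' are really the same path with different query strings, and a query string isn't usable in a local file name, so a naive ripper either overwrites one copy with the other or saves files that the pages' own links can't reach. A ripper that handles dynamic sites has to give each variant its own file name and rewrite the links to match. Here's a rough Python sketch of that idea (the example.com URLs, the '@' naming convention, and the helper names are just placeholders, not how QuadSucker or any particular tool does it):

import urllib.parse

def local_name(url):
    # Map a dynamic URL such as 'http://example.com/page.asp?id=3' to a
    # file name that can live on disk, e.g. 'page.asp@id=3.html'.
    # (The '@' separator and the '.html' suffix are arbitrary choices.)
    parsed = urllib.parse.urlsplit(url)
    name = parsed.path.lstrip("/") or "index"
    if parsed.query:
        name += "@" + parsed.query        # '?' won't work in a file name
    if not name.endswith((".html", ".htm")):
        name += ".html"                   # so the browser renders it as HTML
    return name

def rewrite_links(html, url_to_file):
    # Point every mirrored URL at its saved copy instead of the live
    # server, so the pages still link to each other offline.
    for url, fname in url_to_file.items():
        html = html.replace(url, fname)
    return html

pages = ["http://example.com/page.asp?id=3",
         "http://example.com/page.asp?id=4"]
mapping = {u: local_name(u) for u in pages}
print(mapping)
# {'http://example.com/page.asp?id=3': 'page.asp@id=3.html',
#  'http://example.com/page.asp?id=4': 'page.asp@id=4.html'}

A real tool also has to crawl each variant it discovers in the links, but the renaming and link rewriting above is the part that appears to be going wrong with those '.asp' downloads.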