On Wed, 30 Aug 2006 13:16:08 -0700, Richard Clark wrote:
Hi Walt,
The paths are tied intimately to my file system hierarchy (which is
pretty deep), so rather than give them, I will give you a walk-thru.
In spite of the apparent complexity (it is a technician's tool), it is
quite simple to use, with only two or three particulars to satisfy:
1. As directed on the front page, press the NEXT button;
2. On the next page for Project Name, enter Reg Edwards G4FGQ;
3. Below that (skip the category), click the ellipsis button to open
the storage-path dialog and select an existing folder; the website
will be stored there in a subfolder named Reg Edwards G4FGQ;
4. Press the NEXT button at the bottom;
5. Leave the ACTION selection at "Download web site(s)";
6. Paste Reggie's top-level page,
http://www.btinternet.com/~g4fgq.regp, into the Web Addresses text
box;
7. Press the NEXT button at the bottom;
8. At the next page, Press the FINISH button at the bottom.
This will start the robots harvesting, with a progress view on a new
page that shows each robot in its own thread - about half a dozen of
them running simultaneously. Depending on the load at the server, the
entire process should take 5 minutes or so at T1 speeds. The total
download is 4.33MB.
The robot activity screen will disappear at the end of the harvesting.
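For the curious, the "half a dozen robots running simultaneously" amounts to a small pool of concurrent downloaders working through the site's pages. A minimal Python sketch of that idea follows; the fetch function and URL list are placeholders for illustration, not the actual tool's internals:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch(url):
    # Placeholder for a real HTTP GET; the actual tool retrieves each
    # page plus its linked files. Here we just echo the URL.
    return f"fetched {url}"

# Hypothetical page list under Reggie's top-level address.
urls = [f"http://www.btinternet.com/~g4fgq.regp/page{i}.html"
        for i in range(6)]

# About half a dozen robots running at once, as described above.
with ThreadPoolExecutor(max_workers=6) as pool:
    results = list(pool.map(fetch, urls))

for r in results:
    print(r)
```

Each worker thread plays the role of one robot; the pool caps how many run at a time, which is why server load governs the overall harvest time.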
I can zip up a copy (sorry, Reg) and mail it to those who want one;
it is located at the drive root. (I did this download again to
confirm the steps described above.)
73's
Richard Clark, KB7QHC
Richard, thanks for the walk-thru outline, but none of the pages that come up
with the URL above have a NEXT button. Apparently, a different page comes up
when you access the URL. Any suggestions?
Walt