#1
On Wed, 30 Aug 2006 13:40:53 +0100, "Reg Edwards" wrote:

> We regret to inform you of the passing away of Reg Edwards (G4FGQ) on Monday 28th August 2006. We would like to thank all correspondents on his behalf for the many years of entertainment that you have given him. Regards, The Edwards Family

************** repeated post from another thread *********************

On Wed, 30 Aug 2006 15:52:52 -0400, Walter Maxwell wrote:
On Wed, 30 Aug 2006 12:12:20 -0700, Richard Clark wrote:
On Wed, 30 Aug 2006 10:17:44 -0700, Jim Kelley wrote:

> I hope it's not true about Reg. That would be a huge loss. Perhaps somebody will archive his website.

Hi Jim,

The announcement came from his account, and his computer. I've archived his site, but for others who wish to do the same, the best tool for that purpose can be found at: http://www.httrack.com

This is a website harvesting robot that will replicate an entire website into the directory of your choice (changing links so that you can browse it on your system).

73's
Richard Clark, KB7QHC

> Richard, I've just reviewed the url above, and found that I don't know how to use it to download Reg's web page. I see that you've already downloaded it, so could I download it from your copy?
>
> Walt, W2DU

Hi Walt,

The paths are tied intimately to my file system hierarchy (which is pretty deep). Rather, I will give you a walk-thru. In spite of the apparent complexity (it is a technician's tool), it is quite simple to use, with only two or three particulars to satisfy:

1. As directed on the front page, press the NEXT button;
2. On the next page, for Project Name, enter Reg Edwards G4FGQ;
3. Below that (skip the category), click the ellipsis button to open a storage path and select an existing folder; the website will be stored there in a folder named Reg Edwards G4FGQ;
4. Press the NEXT button at the bottom;
5. Leave the ACTION selection at "Download web site(s)";
6. Paste Reggie's top level page, http://www.btinternet.com/~g4fgq.regp, into the Web Addresses text box;
7. Press the NEXT button at the bottom;
8. At the next page, press the FINISH button at the bottom.

This will start the robots harvesting, with a view of them on a new page that shows each robot in its own thread - about half a dozen of them running simultaneously. Depending on the load at the server, the entire process should take 5 minutes or so at T1 speeds. The total download is 4.33MB. The robot activity screen will disappear at the end of the harvesting.

I can zip up a copy (sorry, Reg) and mail it to those who want a copy that is located at the drive root (I did this download again to confirm the steps described above).

73's
Richard Clark, KB7QHC
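For readers more comfortable at a command line, HTTrack also ships a CLI (`httrack`) that maps onto the GUI steps above. A minimal sketch, assuming the httrack package is installed; the destination folder below is illustrative, not a path from the thread:

```shell
#!/bin/sh
# Sketch: mirror Reg's site with the httrack CLI (assumes httrack is installed).
# The -O option sets the output (project) directory, mirroring GUI step 3.
URL="http://www.btinternet.com/~g4fgq.regp"
DEST="$HOME/websites/Reg Edwards G4FGQ"   # illustrative; any existing folder works

# Build the command string first so it can be inspected before running.
CMD="httrack $URL -O \"$DEST\""
echo "$CMD"

# To perform the actual harvest, run:
#   httrack "$URL" -O "$DEST"
```

Like the GUI, the CLI rewrites links so the mirrored copy can be browsed locally.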
#2
Richard Clark wrote:
> Hi Walt, The paths are tied intimately to my file system hierarchy (which is pretty deep). Rather, I will give you a walk-thru.
> [snip HTTRACK steps 1-8]
> 73's Richard Clark, KB7QHC

Thanks, Richard, as Reg would say, "I'll give it a go."

Walt
#3
> Thanks, Richard, as Reg would say, "I'll give it a go."

For Linux and other Unix-type-system users, a simple

    wget -r http://www.btinternet.com/~g4fgq.regp/

will grab everything, and fix all of the links in the .html files to be page-relative so that the mirrored copy is self-consistent.

--
Dave Platt AE6EO
Hosting the Jade Warrior home page: http://www.radagast.org/jade-warrior
I do _not_ wish to receive unsolicited commercial email, and I will boycott any company which has the gall to send me such ads!
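One caveat: in standard GNU wget, a bare `-r` recurses but does not rewrite links; that is the job of `-k` (`--convert-links`), and `-p` pulls in the images and stylesheets each page needs. A sketch of a fuller invocation - the flags are standard GNU wget, but the combination is a suggestion, not Dave's exact command:

```shell
#!/bin/sh
# Sketch of a wget mirror that browses cleanly offline (GNU wget flags):
#   -r   recurse into linked pages
#   -k   convert links in the saved HTML to point at the local copies
#   -p   also fetch page requisites (images, CSS)
#   -np  never ascend above the starting directory
URL="http://www.btinternet.com/~g4fgq.regp/"
CMD="wget -r -k -p -np $URL"
echo "$CMD"

# To actually run the mirror:
#   $CMD
```

Note that `-k` performs its link rewriting only after the whole download finishes, so an interrupted run can leave links unconverted.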
#4
On Wed, 30 Aug 2006 13:16:08 -0700, Richard Clark wrote:
> Hi Walt, The paths are tied intimately to my file system hierarchy (which is pretty deep). Rather, I will give you a walk-thru.
> [snip HTTRACK steps 1-8]
> 73's Richard Clark, KB7QHC

Richard, thanks for the walk-thru outline, but none of the pages that come up with the url above have a 'next' button. Apparently, a different page comes up when you access the url. Any suggestions?

Walt
#5
The problem may be due to too quickly writing up the instructions. All instructions are specific to HTTRACK operations; note the amendments:

On Wed, 30 Aug 2006 16:49:01 -0400, Walter Maxwell wrote:

> [snip steps 1-5]
> 6. Paste Reggie's top level page, http://www.btinternet.com/~g4fgq.regp, into the Web Addresses text box;

6. PASTE Reggie's top level page, http://www.btinternet.com/~g4fgq.regp, into the Web Addresses text box IN HTTRACK;

> 7. Press the NEXT button at the bottom;

7. Press the NEXT button at the bottom IN HTTRACK;

> 8. At the next page, press the FINISH button at the bottom.
>
> Richard, thanks for the walk-thru outline, but none of the pages that come up with the url above have a 'next' button. Apparently, a different page comes up when you access the url. Any suggestions?

Hi Walt,

The url was for cut and paste into HTTRACK's text window. This is for the robots to use as their top level address to drill-down into the website. No browsers need be open to perform this.

73's
Richard Clark, KB7QHC
#6
> Hi Walt, The url was for cut and paste into HTTRACK's text window. This is for the robots to use as their top level address to drill-down into the website. No browsers need be open to perform this. 73's Richard Clark, KB7QHC

Richard, we just returned from eating out, the reason for the delay in continuing the attempt to download Reg's web page. Anyway, using the url you gave I cannot find any page to enter Reg's web address for downloading, and also no place for entering a Project Name. Are you getting a different screen than I am with that url? What am I doing wrong?

Walt
#7
On Wed, 30 Aug 2006 20:28:00 -0400, Walter Maxwell wrote:
> Hi Walt, The url was for cut and paste into HTTRACK's text window. This is for the robots to use as their top level address to drill-down into the website. No browsers need be open to perform this. 73's Richard Clark, KB7QHC
> [snip earlier question]

Richard, the various screens that come up using the url you gave are totally unfriendly to a user trying to enter anything to start a download. Like other programs, they say 'you can do this..., etc.,' but they allow no entry. Am I using an incorrect url?

Walt
#8
"Walter Maxwell" wrote in message ...
> [snip] Anyway, using the url you gave I cannot find any page to enter Reg's web address for downloading, and also no place for entering a Project Name. Are you getting a different screen than I am with that url? What am I doing wrong? Walt

Walt, are you running the program called Httrack? This is a special program that lets you download a web site to your computer so you can look at it at any time or do other things with it. I think it is a free download. Just do a Google search for it.

You start the program. Then click near the bottom where it says NEXT. Type in anything for the project name. Click NEXT. In the blank white space under where it says Web Addresses, enter the address of the web site you want to download - in this case http://www.btinternet.com/~g4fgq.regp/ Then NEXT. Then FINISH. From there it should start downloading the web site and place it in the directory that shows up a few screens back, usually c:\my web sites.
#9
In article ,
Richard Clark wrote:
> On Wed, 30 Aug 2006 13:40:53 +0100, "Reg Edwards" wrote:
> We regret to inform you of the passing away of Reg Edwards (G4FGQ) on Monday 28th August 2006. We would like to thank all correspondents on his behalf for the many years of entertainment that you have given him. Regards, The Edwards Family

I was very sorry to hear the news. He always answered my crazy posts and helped me off line with any technical questions. Learned a lot, always.

ml
#10
ml wrote:
> We regret to inform you of the passing away of Reg Edwards (G4FGQ) on Monday 28th August 2006. We would like to thank all correspondents on his behalf for the many years of entertainment that you have given him. Regards, The Edwards Family

Gawd, seems ALL the good guys are leaving us. W8JK, and now Reg - may they be discussing the loading of Halo Ants!

73, Jim NN7K