"""A high-level cross-protocol url-grabber.

Using urlgrabber, data can be fetched in three basic ways:

  urlgrab(url) copy the file to the local filesystem
  urlopen(url) open the remote file and return a file object
     (like urllib2.urlopen)
  urlread(url) return the contents of the file as a string

When using these functions (or methods), urlgrabber supports the
following features:

  * identical behavior for http://, ftp://, and file:// urls
  * http keepalive - faster downloads of many files by using
    only a single connection
  * byte ranges - fetch only a portion of the file
  * reget - for a urlgrab, resume a partial download
  * progress meters - the ability to report download progress
    automatically, even when using urlopen!
  * throttling - restrict bandwidth usage
  * retries - automatically retry a download if it fails.  The
    number of retries and failure types are configurable.
  * authenticated server access for http and ftp
  * proxy support - support for authenticated http and ftp proxies
  * mirror groups - treat a list of mirrors as a single source,
    automatically switching mirrors if there is a failure.
"""

__version__ = '3.9.1'
__date__    = '2009/09/25'
__author__  = 'Michael D. Stenner, Ryan Tomayko, Seth Vidal'
__url__     = 'http://linux.duke.edu/projects/urlgrabber/'

from grabber import urlgrab, urlopen, urlread
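
# A minimal usage sketch of the three entry points re-exported above,
# guarded so it never runs on package import.  The URLs and the local
# filename are placeholder assumptions; the retry/reget keyword options
# are drawn from the grabber module's option set.
if __name__ == '__main__':
    # copy a remote file to the local filesystem; retry up to 3 times
    # and resume (reget) a partial download if one is already on disk
    path = urlgrab('http://example.com/big.iso',
                   filename='/tmp/big.iso', retry=3, reget='simple')

    # open the remote file and treat it like a local file object
    fo = urlopen('http://example.com/index.html')
    head = fo.read(512)
    fo.close()

    # return the contents of the file as a string
    text = urlread('http://example.com/index.html')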