

I have managed to get my first Python script to work. It downloads a list of .ZIP files from a URL and then proceeds to extract the ZIP files and write them to disk. I am now at a loss to achieve the next step.

My primary goal is to download and extract the zip file and pass the contents (CSV data) via a TCP stream. I would prefer not to actually write any of the zip or extracted files to disk if I could get away with it.

Here is an outline of my current script, which works but unfortunately has to write the files to disk:

    # check for the existence of the extraction directories

    # open the logfile of downloaded data and save it to a local variable
    downloadedLog = pickle.load(open('downloaded.pickle'))

    # remove entries older than 5 days (to maintain speed)

    # retrieve the list of URLs from the webservers

    if url in downloadedLog or os.path.isfile(outputFilename):
        ...

    print "Saving extracted file to ", outputFilename

    # file successfully downloaded and extracted; store it in the local log and on the filesystem
    pickle.dump(downloadedLog, open('downloaded.pickle', "wb"))

Answer:

I'd like to offer an updated Python 3 version of Vishal's excellent answer, which was using Python 2, along with some explanation of the adaptations / changes, which may have been already mentioned:

    with ZipFile(BytesIO(url.read())) as my_zip_file:
        for contained_file in my_zip_file.namelist():
            ...
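The BytesIO/ZipFile approach from the answer can be fleshed out into a runnable sketch. Everything below is illustrative rather than the poster's actual code: `extract_in_memory` is a hypothetical helper name, and the small archive built in memory stands in for the bytes that `url.read()` would return from a real download.

```python
from io import BytesIO
from zipfile import ZipFile

def extract_in_memory(zip_bytes):
    """Extract every member of a zip archive held entirely in memory.

    Returns a dict mapping member names to their raw bytes; nothing
    touches the disk.
    """
    results = {}
    with ZipFile(BytesIO(zip_bytes)) as my_zip_file:
        for contained_file in my_zip_file.namelist():
            with my_zip_file.open(contained_file) as member:
                results[contained_file] = member.read()
    return results

# Build a tiny zip in memory to stand in for url.read().
buf = BytesIO()
with ZipFile(buf, "w") as zf:
    zf.writestr("data.csv", "a,b\n1,2\n")

print(extract_in_memory(buf.getvalue()))  # {'data.csv': b'a,b\n1,2\n'}
```

The key point is that `ZipFile` only needs a seekable file-like object, so `BytesIO` wrapped around the downloaded bytes works exactly like an on-disk file.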
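For the stated goal of passing the extracted CSV data over TCP without writing anything to disk, one possible shape is sketched below. The helper name `stream_zip_members` is hypothetical, and `socket.socketpair()` stands in for a real connected TCP socket so the example is self-contained.

```python
import socket
from io import BytesIO
from zipfile import ZipFile

def stream_zip_members(zip_bytes, sock):
    """Send the contents of each .csv member of an in-memory zip
    over an already-connected socket."""
    with ZipFile(BytesIO(zip_bytes)) as zf:
        for name in zf.namelist():
            if name.endswith(".csv"):
                sock.sendall(zf.read(name))

# Demo: build a zip in memory and push it through a socket pair
# instead of a real TCP connection.
buf = BytesIO()
with ZipFile(buf, "w") as zf:
    zf.writestr("data.csv", "x,y\n3,4\n")

sender, receiver = socket.socketpair()
stream_zip_members(buf.getvalue(), sender)
sender.close()
print(receiver.recv(1024))  # b'x,y\n3,4\n'
receiver.close()
```

In a real deployment `sock` would come from `socket.create_connection(...)` or an accepted server connection; the extraction-and-send logic is unchanged.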
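The question's script also keeps a pickled log of downloaded URLs and prunes entries older than 5 days. The original log's structure isn't shown, so this sketch assumes a `{url: timestamp}` dict; `load_log` and `save_log` are illustrative names, not the poster's.

```python
import pickle
import time

LOG_PATH = "downloaded.pickle"  # path used in the question's script
FIVE_DAYS = 5 * 24 * 60 * 60    # seconds

def load_log(path=LOG_PATH, now=None):
    """Load the {url: timestamp} log, dropping entries older than 5 days."""
    now = time.time() if now is None else now
    try:
        with open(path, "rb") as f:
            log = pickle.load(f)
    except (FileNotFoundError, EOFError):
        log = {}  # no log yet: start fresh
    return {url: ts for url, ts in log.items() if now - ts < FIVE_DAYS}

def save_log(log, path=LOG_PATH):
    """Persist the log with pickle, closing the file handle properly."""
    with open(path, "wb") as f:
        pickle.dump(log, f)
```

Unlike the original `pickle.dump(downloadedLog, open(...))`, using `with` guarantees the file is flushed and closed even if dumping raises.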
