Downloading Files with Python
Downloading files with Python is also quite convenient:
Small file downloads

When downloading a small file there is little to consider: given the link, just download it directly:
```python
import requests

image_url = "https://www.python.org/static/community_logos/python-logo-master-v3-TM.png"
r = requests.get(image_url)  # create HTTP response object
with open("python_logo.png", 'wb') as f:
    f.write(r.content)
```

Large file downloads

For a small file, downloading it in one go is fine. But if the file is large, the whole download is held in memory first, which puts real pressure on memory. To keep memory from running out, we write the downloaded file to disk in chunks:
```python
import requests

file_url = "https://codex.cs.yale.edu/avi/db-book/db4/slide-dir/ch1-2.pdf"
r = requests.get(file_url, stream=True)  # stream=True defers downloading the body
with open("python.pdf", "wb") as pdf:
    for chunk in r.iter_content(chunk_size=1024):  # read 1 KB at a time
        if chunk:  # filter out keep-alive chunks
            pdf.write(chunk)
```

Batch file downloads

The idea behind batch downloading is also simple: first fetch the page content, then extract the links from it (for example via the a tags), and finally filter the extracted links down to the ones we want. In this example we only want to download MP4 files, so we can filter all the links by file name:
```python
import requests
from bs4 import BeautifulSoup

archive_url = "http://www-personal.umich.edu/~csev/books/py4inf/media/"

def get_video_links():
    r = requests.get(archive_url)
    soup = BeautifulSoup(r.content, 'html5lib')
    links = soup.findAll('a')
    video_links = [archive_url + link['href']
                   for link in links if link['href'].endswith('mp4')]
    return video_links

def download_video_series(video_links):
    for link in video_links:
        file_name = link.split('/')[-1]
        print("Downloading file:%s" % file_name)
        r = requests.get(link, stream=True)  # download started
        with open(file_name, 'wb') as f:
            for chunk in r.iter_content(chunk_size=1024 * 1024):
                if chunk:
                    f.write(chunk)
        print("%s downloaded!\n" % file_name)
    print("All videos downloaded!")
    return

if __name__ == "__main__":
    video_links = get_video_links()
    download_video_series(video_links)
```

Copyright notice: "Downloading Files with Python" was contributed by 林淑君; please credit the author and cite the source when reprinting: http://www.xiehuijuan.com/baike/1686909123a117064.html
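The chunked-download pattern above can be extended to report progress. The sketch below is illustrative, not part of the original article: `download_with_progress` and `percent_done` are names I made up, and it assumes the server sends a `Content-Length` header, which is not guaranteed (chunked transfer encoding omits it, in which case no percentage is printed).

```python
import requests

def percent_done(done, total):
    """Pure helper: whole-number percentage, or None when total is unknown."""
    return int(100 * done / total) if total else None

def download_with_progress(url, file_name, chunk_size=1024 * 1024):
    """Stream `url` to `file_name`, printing progress when the size is known."""
    r = requests.get(url, stream=True)
    r.raise_for_status()  # fail early on HTTP errors instead of saving an error page
    total = int(r.headers.get("Content-Length", 0))  # 0 when the header is absent
    done = 0
    with open(file_name, "wb") as f:
        for chunk in r.iter_content(chunk_size=chunk_size):
            if chunk:
                f.write(chunk)
                done += len(chunk)
                pct = percent_done(done, total)
                if pct is not None:
                    print("%d%%" % pct)
    return done
```

Usage would mirror the large-file example, e.g. `download_with_progress(file_url, "python.pdf")`; the function returns the number of bytes written, which can be checked against `Content-Length` as a crude integrity test.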