Large file (image) downloads in Scrapy, implemented with a pipeline class
In the spider class, store the parsed image URLs in an item and submit the item to the designated pipeline.
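For the item to carry the two values the spider and pipeline exchange, items.py must declare matching fields. A minimal sketch, with the field names taken from the spider code below:

import scrapy

class ImgporItem(scrapy.Item):
    img_src = scrapy.Field()   # image URL to download
    img_name = scrapy.Field()  # file name the pipeline will save under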
In the pipeline file, import:

from scrapy.pipelines.images import ImagesPipeline

Then subclass ImagesPipeline and override the following three methods:
from scrapy.pipelines.images import ImagesPipeline
import scrapy

class ImgporPipeline(ImagesPipeline):
    # Specify the storage path (file name) relative to IMAGES_STORE
    def file_path(self, request, response=None, info=None):
        # Receive the meta attached in get_media_requests
        item = request.meta['item']
        return item['img_name']

    # Send the request for the specified resource
    def get_media_requests(self, item, info):
        # meta is passed along to file_path via the request
        yield scrapy.Request(item['img_src'], meta={'item': item})

    # Return the item so it is passed to the next pipeline class
    def item_completed(self, results, item, info):
        return item
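The string returned by file_path is interpreted relative to the IMAGES_STORE setting; the final location on disk is simply the two joined together. A small stand-alone illustration (the values are examples, not taken from a real crawl):

```python
import os

IMAGES_STORE = './imgs'    # value configured in settings.py
img_name = 'example.jpg'   # value returned by file_path()

# ImagesPipeline saves the file under IMAGES_STORE joined with the
# relative path returned by file_path()
full_path = os.path.join(IMAGES_STORE, img_name)
print(full_path)  # ./imgs/example.jpg
```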
In settings.py:

# Directory where the downloaded files are stored
IMAGES_STORE = './imgs'
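The custom pipeline also has to be enabled before Scrapy will call it. A sketch of the corresponding settings.py entry, assuming the project and pipeline names used in the code above:

ITEM_PIPELINES = {
    'imgPor.pipelines.ImgporPipeline': 300,
}

Note that ImagesPipeline additionally requires the Pillow library to be installed.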
Spider file:

import scrapy
from imgPor.items import ImgporItem

class ImgSpider(scrapy.Spider):
    name = 'img'
    # allowed_domains = ['www.xxx.com']
    start_urls = ['http://www.521609.com/daxuemeinv/']

    def parse(self, response):
        li_list = response.xpath('//*[@id="content"]/div[2]/div[2]/ul/li')
        for li in li_list:
            img_src = 'http://www.521609.com' + li.xpath('./a[1]/img/@src').extract_first()
            img_name = li.xpath('./a[2]/b/text() | ./a[2]/text()').extract_first() + '.jpg'
            print(img_name)
            item = ImgporItem()
            item['img_src'] = img_src
            item['img_name'] = img_name
            yield item
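The spider builds the absolute image URL by string concatenation, which breaks if the site root or the relative src changes shape. The stdlib urljoin is a safer alternative; a sketch with a hypothetical relative path:

```python
from urllib.parse import urljoin

base = 'http://www.521609.com'
rel = '/uploads/allimg/example.jpg'  # hypothetical src attribute value

# urljoin handles leading slashes and trailing slashes correctly
img_src = urljoin(base, rel)
print(img_src)  # http://www.521609.com/uploads/allimg/example.jpg
```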