Scrapy Project Code-Writing Workflow
Step 1: Pick a folder, open a console in it, and run the command scrapy startproject qidian
Step 2: Change into the inner spiders folder with cd qidian/qidian/spiders, then run scrapy genspider qidianyuedu qidian.com (the allowed domain)
Note: the spider name qidianyuedu must not duplicate the project name.
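For orientation, the two commands above produce roughly the following layout (exact files vary slightly between Scrapy versions):

qidian/
    scrapy.cfg              # deploy configuration
    qidian/
        __init__.py
        items.py            # item field definitions (Step 5)
        middlewares.py
        pipelines.py        # item pipelines (Step 6)
        settings.py         # project settings (Step 4)
        spiders/
            __init__.py
            qidianyuedu.py  # generated by scrapy genspider (Step 7)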
Step 3: In the project root, create a launcher script starts.py:
from scrapy import cmdline
cmdline.execute(["scrapy", "crawl", "qidianyuedu"])
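Running python starts.py from the project root is equivalent to invoking scrapy crawl qidianyuedu on the command line. If you prefer not to go through cmdline, an equivalent launcher can be written with Scrapy's CrawlerProcess API; a minimal sketch:

from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

# load the project's settings.py and run the spider in-process
process = CrawlerProcess(get_project_settings())
process.crawl("qidianyuedu")   # spider name, as registered by genspider
process.start()                # blocks until the crawl finishes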
Step 4: Edit the settings.py file; the main changes are as follows:
# add a User-Agent header
USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Safari/537.36'
# do not obey robots.txt
ROBOTSTXT_OBEY = False
# enable the item pipeline
ITEM_PIPELINES = {
    'qidian.pipelines.QidianPipeline': 300,
}
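Beyond the changes above, a few optional settings are often tuned at the same time; the values below are illustrative additions, not part of the original project:

# be polite: wait between requests and limit per-domain concurrency (illustrative values)
DOWNLOAD_DELAY = 0.5
CONCURRENT_REQUESTS_PER_DOMAIN = 8

# default headers sent with every request, so individual Requests
# do not each need to pass headers= explicitly
DEFAULT_REQUEST_HEADERS = {
    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
    "Accept-Language": "zh-CN,zh;q=0.9",
}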
Step 5: Define the item fields that correspond to the data you want to scrape (in items.py):

import scrapy
from scrapy import Field

class QidianItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    title = Field()
    url = Field()
    author = Field()
    category = Field()
    status = Field()
    bref = Field()
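A QidianItem instance behaves like a dict, which is how the spider code below fills it in; a quick illustration:

item = QidianItem()
item["title"] = "example title"    # keys must be declared Fields
item["author"] = "example author"
# item["price"] = 1 would raise KeyError because "price" is not a declared Field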
Step 6: Write the pipeline; here the example saves the data to MySQL (in pipelines.py):

import pymysql

class QidianPipeline(object):
    def __init__(self):
        # connection parameters are placeholders; fill in your own host/user/password/db
        self.db = pymysql.connect(host="xx.xx.xx.xx",
                                  port=3306,
                                  user="root",
                                  password="xxx",
                                  db="xxx",
                                  charset="utf8mb4")
        self.cur = self.db.cursor()

    def process_item(self, item, spider):
        sql = """INSERT INTO qqyuedu(title, url, author, category, status, bref)
                 VALUES (%s, %s, %s, %s, %s, %s)"""
        data = (item["title"], item["url"], item["author"],
                item["category"], item["status"], item["bref"])
        try:
            self.cur.execute(sql, data)
        except Exception:
            # roll back the failed insert instead of silently swallowing it
            self.db.rollback()
        else:
            self.db.commit()
        return item

    def __del__(self):
        self.cur.close()
        self.db.close()
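The pipeline assumes the qqyuedu table already exists. A minimal one-off setup sketch (the column types are assumptions chosen to fit the scraped fields):

import pymysql

# one-off setup script; connection parameters are placeholders
db = pymysql.connect(host="xx.xx.xx.xx", port=3306, user="root",
                     password="xxx", db="xxx", charset="utf8mb4")
cur = db.cursor()
cur.execute("""
    CREATE TABLE IF NOT EXISTS qqyuedu (
        id INT AUTO_INCREMENT PRIMARY KEY,
        title VARCHAR(255),
        url VARCHAR(255),
        author VARCHAR(255),
        category VARCHAR(255),
        status VARCHAR(64),
        bref TEXT
    ) CHARACTER SET utf8mb4
""")
db.commit()
cur.close()
db.close()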
Step 7: Write the main spider program, i.e. the generated file under spiders/. Two patterns are summarized below:
1. Using start_urls, with an offset counter for pagination (Douban Top 250 example)
import scrapy
from douban.items import DoubanItem   # hypothetical import path; adjust to your project

class Douban250Spider(scrapy.Spider):
    name = 'douban250'
    offset = 0
    allowed_domains = ['movie.douban.com']
    start_urls = ['https://movie.douban.com/top250?start=0&filter=']

    def parse(self, response):
        li_list = response.css(".grid_view li")
        for li in li_list:
            item = DoubanItem()
            item["name"] = li.css(".info")[0].xpath(".//span[@class='title'][1]/text()")[0].extract()
            # join all text nodes of the info block and strip internal whitespace
            item["info"] = "".join("".join(li.css(".info .bd")[0].xpath("./p//text()").extract()).split())
            item["score"] = float(li.css(".info .star")[0].xpath("./span[@class='rating_num']/text()")[0].extract())
            item["access"] = li.css(".info .star")[0].xpath("./span[4]/text()")[0].extract()
            item["bref"] = li.css(".info .quote")[0].xpath("./span[@class='inq']/text()")[0].extract()
            yield item
        # manual pagination: bump the offset and request the next page
        if self.offset < 250:
            self.offset += 25
            url = "https://movie.douban.com/top250?start=" + str(self.offset) + "&filter="
            yield scrapy.Request(url, callback=self.parse, dont_filter=True)
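As an aside, instead of maintaining the offset counter you can follow the pager's "next" link with response.follow (available in Scrapy 1.4+, with .get() in 1.8+); the span.next a selector is an assumption about the Douban markup:

    def parse(self, response):
        # ... extract items from response.css(".grid_view li") as above ...
        # instead of the manual offset bookkeeping, follow the pager's "next" link
        next_href = response.css("span.next a::attr(href)").get()
        if next_href:
            # response.follow resolves the relative href against the current page URL
            yield response.follow(next_href, callback=self.parse)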
2. Overriding start_requests

import scrapy
from qidian.items import QidianItem

class QidianyueduSpider(scrapy.Spider):
    name = 'qidianyuedu'
    allowed_domains = ['book.qidian.com']

    def start_requests(self):
        # get_page_num() is defined in the full listing at the end of this post
        page_num = self.get_page_num()
        for i in range(1, page_num + 1):
            url = "https://www.qidian.com/all?orderId=&style=1&pageSize=20&siteid=1&pubflag=0&hiddenField=0&page=" + str(i)
            yield scrapy.Request(url, callback=self.parse,
                                 headers={"User-Agent": "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Safari/537.36"})

    def parse(self, response):
        li_list = response.css(".book-img-text li")
        for li in li_list:
            item = QidianItem()
            item["title"] = li.css(".book-mid-info h4 a::text")[0].extract()
            item["url"] = "https:" + li.css(".book-mid-info h4 a::attr(href)")[0].extract()
            item["author"] = li.css(".book-mid-info .author a")[0].xpath("./text()")[0].extract()
            # every <a> after the first one is a category tag; join them with spaces
            category = ""
            a_list = li.css(".book-mid-info .author a")[1:]
            for a in a_list:
                a_text = a.css("a::text")[0].extract()
                category += a_text
                category += " "
            item["category"] = category.strip()
            item["status"] = li.css(".book-mid-info .author span::text")[0].extract()
            yield item

Step 8: Parse the data. While working out the extraction, you can launch scrapy shell <URL of the page to scrape> to get an interactive prompt; first run view(response) to confirm the fetched page is really the target page, then experiment with css/xpath expressions to extract the fields.
Note: when the extracted text is spread across many nodes and you want to concatenate it, stray whitespace characters get in the way. The fix:
1 "".join("".join(li.css(".info .bd")[0].xpath("./p//text()").extract()).split())
Once every field extracts correctly in the shell, move the expressions into the spider code; see the code in Step 7. A sample session is sketched below.
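A sketch of such a shell session, using the Douban example from Step 7 (the printed output is illustrative):

$ scrapy shell "https://movie.douban.com/top250?start=0&filter="
>>> view(response)          # opens the downloaded page in a browser to confirm it is the target
True
>>> li_list = response.css(".grid_view li")
>>> li_list[0].xpath(".//span[@class='title'][1]/text()")[0].extract()
'肖申克的救赎'
>>> exit()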
One deeper issue is item splitting across pages.
When one page does not provide all of an item's fields and some data has to be extracted from a deeper page, the partially filled item must be passed along with the request. In practice that means sending it via Request(url, meta={"meta": item}, callback=self.parse_detail), unpacking it in the new callback with item = response.meta["meta"], filling in the remaining fields there, and finally yielding the item.
import scrapy
import requests
from bs4 import BeautifulSoup
from qidian.items import QidianItem

class QidianyueduSpider(scrapy.Spider):
    name = 'qidianyuedu'
    allowed_domains = ['book.qidian.com']

    def get_page_num(self):
        # fetch page 1 outside of Scrapy just to read the total book count
        headers = {
            "User-Agent": "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Safari/537.36"}
        url = "https://www.qidian.com/all?orderId=&style=1&pageSize=20&siteid=1&pubflag=0&hiddenField=0&page=1"
        res = requests.get(url, headers=headers)
        html = res.content.decode("utf-8")
        soup = BeautifulSoup(html, "lxml")
        num = int(soup.select(".count-text span")[0].get_text())
        # 20 books per page; round up when the count is not a multiple of 20
        if num % 20 == 0:
            page = num // 20
        else:
            page = num // 20 + 1
        return page

    def start_requests(self):
        page_num = self.get_page_num()
        for i in range(1, page_num + 1):
            url = "https://www.qidian.com/all?orderId=&style=1&pageSize=20&siteid=1&pubflag=0&hiddenField=0&page=" + str(i)
            yield scrapy.Request(url, callback=self.parse,
                                 headers={"User-Agent": "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Safari/537.36"})

    def parse(self, response):
        li_list = response.css(".book-img-text li")
        for li in li_list:
            item = QidianItem()
            item["title"] = li.css(".book-mid-info h4 a::text")[0].extract()
            item["url"] = "https:" + li.css(".book-mid-info h4 a::attr(href)")[0].extract()
            item["author"] = li.css(".book-mid-info .author a")[0].xpath("./text()")[0].extract()
            category = ""
            a_list = li.css(".book-mid-info .author a")[1:]
            for a in a_list:
                a_text = a.css("a::text")[0].extract()
                category += a_text
                category += " "
            item["category"] = category.strip()
            item["status"] = li.css(".book-mid-info .author span::text")[0].extract()
            # pass the half-filled item to the detail page via meta
            yield scrapy.Request(item["url"], meta={"meta": item},
                                 callback=self.parse_detail,
                                 headers={"User-Agent": "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Safari/537.36"})

    def parse_detail(self, response):
        # unpack the item passed from parse() and fill in the remaining field
        item = response.meta["meta"]
        item["bref"] = "".join("".join(response.css(".book-intro p")[0].xpath(".//text()").extract()).split())
        yield item
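As a closing note, Scrapy 1.7+ also supports cb_kwargs, which passes the item to the callback as a regular keyword argument instead of going through meta; a hedged sketch of the same hand-off:

    def parse(self, response):
        for li in response.css(".book-img-text li"):
            item = QidianItem()
            # ... fill the listing-page fields as above ...
            yield scrapy.Request(item["url"], callback=self.parse_detail,
                                 cb_kwargs={"item": item})

    def parse_detail(self, response, item):
        # the item arrives as a normal keyword argument
        item["bref"] = "".join("".join(response.css(".book-intro p")[0].xpath(".//text()").extract()).split())
        yield item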