Scrapy is a fast, high-level screen scraping and web crawling framework written in Python, used to crawl websites and extract structured data from their pages.
Once the scrapy library is installed, you can create a Scrapy project. PyCharm cannot create one directly; the project must be created from the command line. Open PyCharm's Terminal and run scrapy startproject scrapy_demo. Note that the PATH environment variable must be configured correctly for the scrapy command to be recognized in the shell.
On macOS, where several Python versions usually coexist, the scrapy command installed for Python 3.6 may not be runnable from the command line directly; create a symlink (adjust the path to match your version):
ln -s /Library/Frameworks/Python.framework/Versions/3.6/bin/scrapy /usr/local/bin/scrapy
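After creating the link, you can verify that the command now resolves:

scrapy version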
Output similar to the following indicates that the project was created successfully.
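The exact paths differ from machine to machine, but the message typically looks like this:

New Scrapy project 'scrapy_demo', using template directory '...', created in:
    .../scrapy_demo

You can start your first spider with:
    cd scrapy_demo
    scrapy genspider example example.com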
At this point you can see that the following files were created automatically.
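This is the standard layout that scrapy startproject generates:

scrapy_demo/
    scrapy.cfg            # deploy configuration file
    scrapy_demo/          # the project's Python module
        __init__.py
        items.py          # item definitions
        middlewares.py    # spider and downloader middlewares
        pipelines.py      # item pipelines
        settings.py       # project settings
        spiders/          # directory where spiders live
            __init__.py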
Next, we will crawl the IEEE and arXiv sites; middlewares.py and __init__.py are left at their defaults.
① Define the data structure to scrape in items.py, based on the site's content
Python
# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# https://doc.scrapy.org/en/latest/topics/items.html

# items.py defines the data we want to scrape
import scrapy


class ScrapyDemoItem(scrapy.Item):
    # define the fields for your item here like:
    title = scrapy.Field()
    authors = scrapy.Field()
    subjects = scrapy.Field()
    # year = scrapy.Field()
    # type = scrapy.Field()
    # publisher = scrapy.Field()
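An Item behaves like a dictionary restricted to the declared fields; assigning a key that was not declared as a Field raises a KeyError. A quick interactive check (the values below are purely illustrative):

Python
from scrapy_demo.items import ScrapyDemoItem

item = ScrapyDemoItem()
item['title'] = 'Example paper title'
item['authors'] = 'A. Author, B. Author'
print(item)  # prints the populated fields
# item['year'] = 2018  # would raise KeyError: that field is commented out above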
② Create your own spider in the project's spiders directory
arXiv_Spider.py
Note that the tricky part is extracting the HTML elements; how the extraction expressions were written is not explained in detail here, but a tip for experimenting with them follows the code below.
Python
import scrapy
import re

from scrapy_demo.items import ScrapyDemoItem


class arXivSpider(scrapy.Spider):
    name = "arXiv_Spider"
    allowed_domains = ["arxiv.org"]
    start_urls = ['https://arxiv.org/list/cs.AI/recent']

    def parse(self, response):
        # line containing the total number of entries on the page
        num = response.xpath('//*[@id="dlpage"]/small[1]/text()[1]').extract()[0]
        # the first integer in that line is the number of papers listed
        max_index = int(re.search(r'\d+', num).group(0))
        for index in range(1, max_index + 1):
            item = ScrapyDemoItem()
            # get title and clean data
            title = response.xpath('//*[@id="dlpage"]/dl/dd[' + str(index) + ']/div/div[1]/text()').extract()
            # remove surrounding whitespace
            title = [i.strip() for i in title]
            # remove empty strings
            title = [i for i in title if i != '']
            # insert title
            try:
                item['title'] = title[0]
            except IndexError:
                item['title'] = 'error'
            # authors: text of every <a> node under the entry's authors div
            xpath_fa = '//*[@id="dlpage"]/dl/dd[' + str(index) + ']/div/div[2]//a/text()'
            author_list = response.xpath(xpath_fa).getall()
            item['authors'] = ''.join(author_list)
            # subjects: second span inside the fifth div of the current entry
            item['subjects'] = response.xpath(
                'string(//*[@id="dlpage"]/dl/dd[' + str(index) + ']/div/div[5]/span[2])').extract_first()
            yield item
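To work out XPath expressions like those above, Scrapy's interactive shell is convenient: it fetches the page and opens a Python session with response already populated, so expressions can be tried one at a time:

scrapy shell 'https://arxiv.org/list/cs.AI/recent'
>>> response.xpath('//*[@id="dlpage"]/dl/dd[1]/div/div[1]/text()').extract()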
③ Configure settings.py
That is, uncomment the commented-out settings you need and fill them in. I changed the following items:
Set the user agent (you can prepare several) to get around anti-crawler measures:
Python
USER_AGENT = 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.113 Safari/537.36'
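USER_AGENT itself holds only a single string. To rotate among several, one common pattern is a small downloader middleware; a minimal sketch, assuming it is added to middlewares.py (the class name and the second UA string here are placeholders):

Python
import random

# candidate user-agent strings; extend with real ones as needed
USER_AGENTS = [
    'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.113 Safari/537.36',
    'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.113 Safari/537.36',
]


class RandomUserAgentMiddleware(object):
    def process_request(self, request, spider):
        # overwrite the User-Agent header of every outgoing request
        request.headers['User-Agent'] = random.choice(USER_AGENTS)

It would then be enabled with DOWNLOADER_MIDDLEWARES = {'scrapy_demo.middlewares.RandomUserAgentMiddleware': 543} in settings.py.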
The item pipeline needs to be enabled:
Python
ITEM_PIPELINES = { 'scrapy_demo.pipelines.ScrapyDemoPipeline': 300, }
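The ScrapyDemoPipeline referenced here lives in pipelines.py. The generated stub simply returns each item; a minimal sketch that instead writes every item to a JSON Lines file (the file name items.jl is an assumption) could look like this:

Python
# -*- coding: utf-8 -*-
import json


class ScrapyDemoPipeline(object):
    def open_spider(self, spider):
        # called once when the spider is opened
        self.file = open('items.jl', 'w')

    def close_spider(self, spider):
        # called once when the spider is closed
        self.file.close()

    def process_item(self, item, spider):
        # serialize the item as one JSON line, then pass it along
        self.file.write(json.dumps(dict(item)) + '\n')
        return item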
The complete settings.py:
Python
# -*- coding: utf-8 -*-
# Scrapy settings for scrapy_demo project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
# https://doc.scrapy.org/en/latest/topics/settings.html
# https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
# https://doc.scrapy.org/en/latest/topics/spider-middleware.html
BOT_NAME = 'scrapy_demo'
SPIDER_MODULES = ['scrapy_demo.spiders']
NEWSPIDER_MODULE = 'scrapy_demo.spiders'
# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'scrapy_demo (+http://www.yourdomain.com)'
USER_AGENT = 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.113 Safari/537.36'
# Obey robots.txt rules
ROBOTSTXT_OBEY = True
# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32
# Configure a delay for requests for the same website (default: 0)
# See https://doc.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16
# Disable cookies (enabled by default)
#COOKIES_ENABLED = False
# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False
# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
# 'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
# 'Accept-Language': 'en',
#}
# Enable or disable spider middlewares
# See https://doc.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
# 'scrapy_demo.middlewares.ScrapyDemoSpiderMiddleware': 543,
#}
# Enable or disable downloader middlewares
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
# 'scrapy_demo.middlewares.ScrapyDemoDownloaderMiddleware': 543,
#}
# Enable or disable extensions
# See https://doc.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
# 'scrapy.extensions.telnet.TelnetConsole': None,
#}
# Configure item pipelines
# See https://doc.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
'scrapy_demo.pipelines.ScrapyDemoPipeline': 300,
}
# Enable and configure the AutoThrottle extension (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False
# Enable and configure HTTP caching (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
④ Run the spider from the terminal
scrapy crawl arXiv_Spider
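Run the command from the project root (where scrapy.cfg lives). To save the scraped items to a file rather than only seeing them in the log, Scrapy's built-in feed export can be used:

scrapy crawl arXiv_Spider -o results.json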
The result: each scraped item, with its title, authors, and subjects fields, is printed in the crawl log.