In the previous two posts I walked through CSS and XPath expressions for parsing web pages in detail, but mostly by listing syntax and explaining it, without applying it to a real problem. In this post I use the urllib + lxml tool combination, together with XPath expressions, to work through a small case study.
The case is a scraping exercise by Liu Shunxiang (WeChat public account: 每天进步一点点), who originally solved it with requests + BeautifulSoup; redoing it here is a good way to extend the XPath coverage and enrich the example:
https://read.douban.com/search?q=Python
#!/usr/bin/env python
# coding: utf-8
from urllib.request import urlopen, Request
import pandas as pd
import numpy as np
from lxml import etree

url = 'https://read.douban.com/search?q=Python'
header = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/61.0.3163.79 Safari/537.36'}
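Before writing the whole parser, it is worth a quick check that the request setup and the list-level XPath actually work; a minimal sketch reusing the url and header defined above:

html = urlopen(Request(url, headers=header)).read().decode('utf-8')
tree = etree.HTML(html)
# Each result page lists its books as <li> children of one <ol>
print(len(tree.xpath("//ol[@class='ebook-list column-list']/li")))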
Build the parsing function:
def getcontent(url):
    myresult = {}
    title = []; subtitle = []; author = []; category = []
    price = []; rating = []; eveluate_nums = []
    for page in range(0, 4):
        link = url + '&start=' + str(page * 10)
        content = urlopen(Request(link, headers=header)).read().decode('utf-8')
        result = etree.HTML(content)
        ### Count how many book entries this page contains:
        length = len(result.xpath("//ol[@class='ebook-list column-list']/li"))
        ### Extract the book titles:
        title.extend(result.xpath("//ol/li//div[@class='title']/a/text() | //ol/li//h4/a/text()"))
        ### A book may have more than one author:
        author_text = [np.nan] * length
        for i in range(1, length + 1):
            author_text[i-1] = result.xpath("//li[{0}]//span[contains(text(),'作者')]/following-sibling::*[1]/a/text() | //li[{0}]//div[@class='author']/a/text()".format(i))
        author.extend(author_text)
        ### The category markup varies, so enumerate all of its variants:
        category.extend(result.xpath("//span[@class='category']/span[2]/span/text() | //p[@class='category']/span[@class='labled-text']/text() | //div[@class='category']/text()"))
        ### The subtitle may be absent:
        subtext = [np.nan] * length
        for i in range(1, length + 1):
            if result.xpath("//ol/li[{}]//p/text()".format(i)) != []:
                subtext[i-1] = result.xpath("//ol/li[{}]//p/text()".format(i))
        subtitle.extend(subtext)
        ### The number of ratings may be absent:
        eveluate_text = [np.nan] * length
        for i in range(1, length + 1):
            if result.xpath("//ol/li[{}]//a[@class='ratings-link']/span/text()".format(i)) != []:
                eveluate_text[i-1] = result.xpath("//ol/li[{}]//a[@class='ratings-link']/span/text()".format(i))
        eveluate_nums.extend(eveluate_text)
        ### The rating may be absent:
        rating_text = [np.nan] * length
        for i in range(1, length + 1):
            if result.xpath("//ol/li[{}]//div[@class='rating list-rating']/span[2]/text()".format(i)) != []:
                rating_text[i-1] = result.xpath("//ol/li[{}]//div[@class='rating list-rating']/span[2]/text()".format(i))
        rating.extend(rating_text)
        ### The price may be absent:
        price_text = [np.nan] * length
        for i in range(1, length + 1):
            if result.xpath("//ol/li[{}]/div[@class='info']//span[@class='price-tag discount']/span/text()".format(i)) != []:
                price_text[i-1] = result.xpath("//ol/li[{}]/div[@class='info']//span[@class='price-tag discount']/span/text()".format(i))
        price.extend(price_text)
        print("page {} is over!!!".format(page))
    print("everything is OK")
    myresult = {"title": title, "subtitle": subtitle, "author": author, "category": category,
                "price": price, "rating": rating, "eveluate_nums": eveluate_nums}
    return myresult
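The five per-field loops inside getcontent all repeat one pattern: run an item-level XPath query and fall back to np.nan when nothing matches. If you want a shorter function, a small helper along these lines could factor that pattern out (extract_or_nan is a hypothetical name, not part of the original script):

def extract_or_nan(result, xpath_template, length):
    # Apply a per-item XPath template, keeping np.nan where a node is absent.
    values = [np.nan] * length
    for i in range(1, length + 1):
        matched = result.xpath(xpath_template.format(i))
        if matched:
            values[i-1] = matched
    return values

# e.g. subtitle.extend(extract_or_nan(result, "//ol/li[{}]//p/text()", length))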
Run the function:
url = "https://read.douban.com/search?q=Python"
myresult = getcontent(url)
Inspect the variables:
for i, m in myresult.items():
    print(i + ":" + str(len(m)))
title:39
subtitle:39
author:39
category:39
price:39
rating:39
eveluate_nums:39
Flatten the nested lists:
As you can see above, several of the columns are nested lists, which would get in the way of later analysis, so the lists need to be flattened. Below is a snippet for un-nesting a list that I found online.
def flatten(input_list):
    # Repeatedly inspect the first element: if it is a list, splice its
    # contents back into the input; otherwise move it to the output.
    output_list = []
    while True:
        if input_list == []:
            break
        for index, i in enumerate(input_list):
            if type(i) == list:
                input_list = i + input_list[index+1:]
                break
            else:
                output_list.append(i)
                input_list.pop(index)
                break
    return output_list
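To see what it does, here is a quick check on a toy nested list (the input is made up for illustration):

print(flatten([1, [2, 3], 4, [5, [6]]]))
# -> [1, 2, 3, 4, 5, 6]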
myresult['eveluate_nums']=flatten(myresult['eveluate_nums'])
myresult['price']=flatten(myresult['price'])
myresult['rating']=flatten(myresult['rating'])
myresult['subtitle']=flatten(myresult['subtitle'])
Convert the column types in batch:
mydata = pd.DataFrame(myresult)
mydata = mydata.astype({'author': 'str', 'category': 'str', 'eveluate_nums': 'float',
                        'price': 'float', 'rating': 'float', 'subtitle': 'str', 'title': 'str'})
mydata.columns
mydata.dtypes
author object
category object
eveluate_nums float64
price float64
rating float64
subtitle object
title object
dtype: object
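One caveat: astype('float') raises a ValueError as soon as one extracted string is not a clean number (for instance a price that still carries a currency symbol). If that happens on your run, pd.to_numeric with errors='coerce' is a more forgiving alternative that turns unparseable values into NaN:

for col in ['eveluate_nums', 'price', 'rating']:
    mydata[col] = pd.to_numeric(mydata[col], errors='coerce')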
References:
http://blog.csdn.net/vola9527/article/details/68964144
https://mp.weixin.qq.com/s?__biz=MzIxNjA2ODUzNg==&mid=2651435242&idx=1&sn=f9315b81911bbc4f83f41ddba23d054e
Data for previous cases is available in my GitHub repo: https://github.com/ljtyduyu/DataWarehouse/tree/master/File