I'm learning NLP, and to that end I scraped a page of Amazon book reviews with Scrapy. I've extracted the fields I need and exported them as a JSON file. When that file is loaded as a DataFrame, each field is recorded as one list rather than one value per row. How can I split these lists so that the DataFrame has one row per review, instead of packing every entry into a handful of lists? Code:
import scrapy

class ReviewspiderSpider(scrapy.Spider):
    name = 'reviewspider'
    allowed_domains = ['amazon.co.uk']
    start_urls = ['https://www.amazon.com/Gone-Girl-Gillian-Flynn/product-reviews/0307588378/ref=cm_cr_othr_d_paging_btm_1?ie=UTF8&reviewerType=all_reviews&pageNumber=1']

    def parse(self, response):
        # Each selector returns one flat list covering every review on the page.
        users = response.xpath('//a[contains(@data-hook, "review-author")]/text()').extract()
        titles = response.xpath('//a[contains(@data-hook, "review-title")]/text()').extract()
        dates = response.xpath('//span[contains(@data-hook, "review-date")]/text()').extract()
        found_helpful = response.xpath('//span[contains(@data-hook, "helpful-vote-statement")]/text()').extract()
        rating = response.xpath('//i[contains(@data-hook, "review-star-rating")]/span[contains(@class, "a-icon-alt")]/text()').extract()
        content = response.xpath('//span[contains(@data-hook, "review-body")]/text()').extract()

        # Yields a single item holding whole lists, which is why the
        # DataFrame ends up with list-valued cells.
        yield {
            'users': users,
            'titles': titles,
            'dates': dates,
            'found_helpful': found_helpful,
            'rating': rating,
            'content': content
        }
Sample output:
users = ['Lauren', 'James'...'John']
dates = ['on September 28, 2017', 'on December 26, 2017'...'on November 17, 2016']
rating = ['5.0 out of 5 stars', '2.0 out of 5 stars'...'5.0 out of 5 stars']
Desired output:
index 1: [users='Lauren', dates='on September 28, 2017', rating='5.0 out of 5 stars']
index 2: [users='James', dates='on December 26, 2017', rating='2.0 out of 5 stars']
...
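To make the target concrete, the same reshaping on a toy DataFrame (a sketch with hypothetical data; exploding several columns at once requires pandas >= 1.3 and equal-length lists in every column):

import pandas as pd

# Hypothetical frame shaped like the scraped JSON: one row whose
# cells are parallel lists, one entry per review.
df = pd.DataFrame({
    'users': [['Lauren', 'James']],
    'dates': [['on September 28, 2017', 'on December 26, 2017']],
    'rating': [['5.0 out of 5 stars', '2.0 out of 5 stars']],
})

# Explode every list column so each review becomes its own row.
flat = df.explode(['users', 'dates', 'rating']).reset_index(drop=True)
print(flat)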
I know I should probably edit the pipeline associated with the spider to achieve this, but my Python knowledge is too limited to make sense of the Scrapy documentation. I've also tried the solutions here and here, but I don't know enough to merge those answers with my own code. Any help would be much appreciated.
Posted on 2018-07-07 22:54:24
Edit: I was able to come up with a solution by using the .css method rather than .xpath. The spider I used to scrape a list of shirts from a fashion retailer:
import scrapy
from ..items import ProductItem

class SportsdirectSpider(scrapy.Spider):
    name = 'sportsdirect'
    allowed_domains = ['www.sportsdirect.com']
    start_urls = ['https://www.sportsdirect.com/mens/mens-shirts']

    def parse(self, response):
        # Select one node per product, then extract each field relative
        # to that node, so every yielded item maps to a single product.
        products = response.css('.s-productthumbbox')
        for p in products:
            brand = p.css('.productdescriptionbrand::text').extract_first()
            name = p.css('.productdescriptionname::text').extract_first()
            price = p.css('.curprice::text').extract_first()
            item = ProductItem()
            item['brand'] = brand
            item['name'] = name
            item['price'] = price
            yield item
The associated items.py script:
import scrapy

class ProductItem(scrapy.Item):
    brand = scrapy.Field()
    name = scrapy.Field()
    price = scrapy.Field()
Creating the JSON-lines file (in the Anaconda prompt):
>>> cd simple_crawler
>>> scrapy crawl sportsdirect --set FEED_URI=products.jl
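As an aside, the shorter -o flag should do the same thing, with Scrapy inferring the JSON-lines format from the file extension:

>>> scrapy crawl sportsdirect -o products.jl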
The code used to convert the resulting .jl file into a DataFrame:
import json
import pandas as pd

# Read the JSON-lines feed: one JSON object per line, one line per item.
with open('products.jl', 'r') as f:
    contents = f.read()
data = [json.loads(line) for line in contents.strip().split('\n')]
df2 = pd.DataFrame(data)
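As a side note, pandas can read JSON-lines feeds directly, so the manual loop above can be collapsed into a single call (the lines parameter has been available since pandas 0.19):

import pandas as pd

# Each line of the .jl feed becomes one row of the DataFrame.
df2 = pd.read_json('products.jl', lines=True)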
Final output:
brand name price
0 Pierre Cardin Short Sleeve Shirt Mens £6.50
1 Pierre Cardin Short Sleeve Shirt Mens £7.00
...
Posted on 2018-07-07 22:18:43
After rereading your question, I'm fairly sure this is what you want:
def parse(self, response):
    users = response.xpath('//a[contains(@data-hook, "review-author")]/text()').extract()
    titles = response.xpath('//a[contains(@data-hook, "review-title")]/text()').extract()
    dates = response.xpath('//span[contains(@data-hook, "review-date")]/text()').extract()
    found_helpful = response.xpath('//span[contains(@data-hook, "helpful-vote-statement")]/text()').extract()
    rating = response.xpath('//i[contains(@data-hook, "review-star-rating")]/span[contains(@class, "a-icon-alt")]/text()').extract()
    content = response.xpath('//span[contains(@data-hook, "review-body")]/text()').extract()

    # zip pairs the parallel lists element-wise, so each loop
    # iteration yields one complete review as its own item.
    for user, title, date, helpful, stars, body in zip(users, titles, dates, found_helpful, rating, content):
        yield {
            'user': user,
            'title': title,
            'date': date,
            'found_helpful': helpful,
            'rating': stars,
            'content': body
        }
Or something along those lines. This is what I was trying to hint at in my first comment.
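One caveat with zip: it stops at the shortest input, so if one selector matches fewer nodes (e.g. reviews with no helpful votes, where Amazon omits that span entirely), whole reviews get silently dropped. A defensive sketch using itertools.zip_longest at least keeps every row visible, though it cannot re-align fields that went missing in the middle; extracting per review node, as in the .css spider above, is the robust fix:

from itertools import zip_longest

# Pad short lists with None instead of truncating to the shortest;
# note this does not re-align fields, it only avoids losing rows.
for user, title, date, helpful, stars, body in zip_longest(
        users, titles, dates, found_helpful, rating, content,
        fillvalue=None):
    yield {
        'user': user,
        'title': title,
        'date': date,
        'found_helpful': helpful,
        'rating': stars,
        'content': body
    }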
https://stackoverflow.com/questions/51226028