
Can't correctly use the csv pipeline and images pipeline within a spider

Stack Overflow user
Asked on 2022-01-02 18:32:03
Answers: 1 · Views: 104 · Followers: 0 · Score: -1

I'm trying to write the first two fields to a csv file and, at the same time, use the last two fields to download images into a folder. For that purpose I've created two custom pipelines.
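The pipelines themselves aren't shown in the question; for context, here is a minimal, hypothetical sketch of what they might look like. Only the class names come from the ITEM_PIPELINES setting below; the bodies are assumptions.

```python
# Hypothetical reconstruction: only the class names appear in the question
# (in ITEM_PIPELINES); the implementations below are assumptions.
import csv

from scrapy.pipelines.images import ImagesPipeline


class PagalWorldImagePipeline(ImagesPipeline):
    # Store each image under the item's image_name instead of the default
    # checksum filename (Scrapy >= 2.4 passes the item to file_path).
    def file_path(self, request, response=None, info=None, *, item=None):
        return f"{item['image_name']}.jpg"


class CSVExportPipeline:
    # Naively write every field of every item to a csv file, which would
    # reproduce the symptom described below: all four fields end up in it.
    def open_spider(self, spider):
        self.file = open('output.csv', 'w', newline='', encoding='utf-8')
        self.writer = csv.writer(self.file)

    def close_spider(self, spider):
        self.file.close()

    def process_item(self, item, spider):
        self.writer.writerow(item.values())
        return item
```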

Here is the spider:

```python
import scrapy

class PagalWorldSpider(scrapy.Spider):
    name = 'pagalworld'
    start_urls = ['https://www.pagalworld.pw/indian-pop-mp3-songs-2021/files.html']

    custom_settings = {
        'ITEM_PIPELINES': {
            'my_project.pipelines.PagalWorldImagePipeline': 1,
            'my_project.pipelines.CSVExportPipeline': 300
        },
        'IMAGES_STORE': r"C:\Users\WCS\Desktop\Images",
    }

    def start_requests(self):
        for start_url in self.start_urls:
            yield scrapy.Request(start_url,callback=self.parse)

    def parse(self, response):
        for item in response.css(".files-list .listbox a[href]::attr(href)").getall():
            inner_page_link = response.urljoin(item)
            yield scrapy.Request(inner_page_link,callback=self.parse_download_links)

    def parse_download_links(self,response):
        title = response.css("h1.title::text").get()
        categories = ', '.join(response.css("ul.breadcrumb > li > a::text").getall())

        file_link = response.css(".file-details audio > source::attr(src)").get()
        image_link = response.urljoin(response.css(".alb-img-det > img[data-src]::attr('data-src')").get())
        image_name = file_link.split("-")[-1].strip().replace(" ","_").replace(".mp3","")
        
        yield {"Title":title,"categories":categories,"image_urls":[image_link],"image_name":image_name}
```

If I execute the script as is, I get all four fields in the csv file, i.e. the fields I yield in the parse_download_links method. The script also downloads and renames the images accurately.

The first two fields, Title and categories, are what I want written to the csv file, not image_urls and image_name. However, those two fields, image_urls and image_name, are what the image download and renaming depend on.

How can I use the two pipelines in the right way?


1 Answer

Stack Overflow user

Accepted answer

Answered on 2022-01-02 22:28:36

You don't have to create a CSV pipeline for this purpose. Read about Scrapy's feed exports instead: the FEEDS and FEED_EXPORT_FIELDS settings used below take care of writing the csv.

```python
import scrapy


class PagalWorldSpider(scrapy.Spider):
    name = 'pagalworld'
    start_urls = ['https://www.pagalworld.pw/indian-pop-mp3-songs-2021/files.html']

    custom_settings = {
        'ITEM_PIPELINES': {
            'my_project.pipelines.PagalWorldImagePipeline': 1,
            # 'my_project.pipelines.CSVExportPipeline': 300
        },
        'IMAGES_STORE':  r'C:\Users\WCS\Desktop\Images',
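        # The built-in feed exporter below replaces the custom CSV pipeline: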
        'FEEDS': {
            r'file:///C:\Users\WCS\Desktop\output.csv': {'format': 'csv', 'overwrite': True}
        },
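        # Only these columns are written to the csv; the full item (including
        # image_urls and image_name) still reaches the image pipeline: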
        'FEED_EXPORT_FIELDS': ['Title', 'categories']
    }

    def start_requests(self):
        for start_url in self.start_urls:
            yield scrapy.Request(start_url, callback=self.parse)

    def parse(self, response):
        for item in response.css(".files-list .listbox a[href]::attr(href)").getall():
            inner_page_link = response.urljoin(item)
            yield scrapy.Request(inner_page_link, callback=self.parse_download_links)

    def parse_download_links(self,response):
        title = response.css("h1.title::text").get()
        categories = ', '.join(response.css("ul.breadcrumb > li > a::text").getall())

        file_link = response.css(".file-details audio > source::attr(src)").get()
        image_link = response.urljoin(response.css(".alb-img-det > img[data-src]::attr('data-src')").get())
        image_name = file_link.split("-")[-1].strip().replace(" ", "_").replace(".mp3", "")

        yield {"Title": title, "categories": categories, "image_urls": [image_link], "image_name": image_name}
```

Output:

```
Heartfail - Mika Singh mp3 song Download PagalWorld.com,"Home, MUSIC, INDIPOP, Indian Pop Mp3 Songs 2021"
Fakir - Hansraj Raghuwanshi mp3 song Download PagalWorld.com,"Home, MUSIC, INDIPOP, Indian Pop Mp3 Songs 2021"
Humsafar - Suyyash Rai mp3 song Download PagalWorld.com,"Home, MUSIC, INDIPOP, Indian Pop Mp3 Songs 2021"
...
...
...
```

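One detail worth noting: FEED_EXPORT_FIELDS only restricts which columns the feed exporter writes. The complete item, including image_urls and image_name, still passes through ITEM_PIPELINES, which is why the image pipeline keeps downloading and renaming files. On Scrapy 2.4+ the same whitelist can also be declared per feed with the fields option; a sketch, assuming that version:

```python
# Equivalent per-feed variant (Scrapy >= 2.4) of the settings shown above.
custom_settings = {
    # ... ITEM_PIPELINES and IMAGES_STORE unchanged ...
    'FEEDS': {
        r'file:///C:\Users\WCS\Desktop\output.csv': {
            'format': 'csv',
            'overwrite': True,
            # Per-feed column whitelist; replaces the global
            # FEED_EXPORT_FIELDS setting.
            'fields': ['Title', 'categories'],
        },
    },
}
```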
Edit:

main.py:

```python
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings


if __name__ == "__main__":
    spider = 'pagalworld'
    settings = get_project_settings()
    settings['USER_AGENT'] = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.169 Safari/537.36'
    process = CrawlerProcess(settings)
    process.crawl(spider)
    process.start()
```
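For get_project_settings() to pick up the project settings (and with them the image pipeline), run this from the project root, i.e. the directory that contains scrapy.cfg, e.g. python main.py.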
Score: 3
Original page content provided by Stack Overflow. Original link:

https://stackoverflow.com/questions/70558681
