I'm currently using requests-futures for faster web scraping. The problem is that it is still slow: roughly one request per second. Here is what the ThreadPoolExecutor part looks like:
with FuturesSession(executor=ThreadPoolExecutor(max_workers=8)) as session:
    futures = {session.get(url, proxies={
        'http': str(random.choice(proxy_list).replace("https:/", "http:/")),
        'https': str(random.choice(proxy_list).replace("https:/", "http:/")),
    }, headers={
        'User-Agent': str(ua.chrome),
        'Accept': '*/*',
        'Accept-Encoding': 'gzip, deflate, br',
        'Connection': 'keep-alive',
        'Content-Type': 'text/plain;charset=UTF-8',
    }): url for url in url_list}
# ---
    for future in as_completed(futures):
        del futures[future]
        try:
            resp = future.result()
        except:
            print("Error getting result from thread. Ignoring")
        try:
            multiprocessing.Process(target=main_func, args=(resp,))
            del resp
            del future
        except requests.exceptions.JSONDecodeError:
            logging.warning(
                "[requests.custom.debug]: requests.exceptions.JSONDecodeError: [Error] print(resp.json())")
I think this is slow because the as_completed for loop is not a concurrent loop. As for main_func, which I pass the response to, that is the function that consumes the site's content with bs4. Even if the as_completed loop were concurrent, it would still be faster than this. I really want the scraper to be faster. I'd like to keep using requests-futures, but if something faster exists I'd happily switch, so if anyone knows of something faster than requests-futures, please share.
Can anyone help? Thanks.
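One bug worth noting in the loop above: `multiprocessing.Process(target=main_func, args=(resp,))` only constructs a `Process` object; without an explicit `.start()` the child process never runs, so `main_func` is silently never called. A minimal illustration (the `main_func` body here is just a stand-in for the question's bs4 parser):

```python
import multiprocessing

def main_func(resp):
    # stand-in for the bs4 parsing function from the question
    print('parsing', resp)

if __name__ == '__main__':
    p = multiprocessing.Process(target=main_func, args=('<html></html>',))
    p.start()   # without start(), the child process never executes
    p.join()    # wait for the parser to finish
```

`join()` is optional if you want the parsers to run in the background, but then you need to keep references to the processes and reap them later.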
Posted on 2022-02-27 14:08:21
Here is a refactoring of the code that should help:
import requests
from concurrent.futures import ProcessPoolExecutor
import random

proxy_list = [
    'http://107.151.182.247:80',
    'http://194.5.193.183:80',
    'http://88.198.50.103:8080',
    'http://88.198.24.108:8080',
    'http://64.44.164.254:80',
    'http://47.74.152.29:8888',
    'http://176.9.75.42:8080']

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/27.0.1500.55 Safari/537.36',
    'Accept': '*/*',
    'Accept-Encoding': 'gzip, deflate, br',
    'Connection': 'keep-alive',
    'Content-Type': 'text/plain;charset=UTF-8'}

url_list = ['http://www.google.com', 'http://facebook.com', 'http://twitter.com']

def process(url):
    proxy = random.choice(proxy_list)
    https = proxy.replace('http:', 'https:')
    http = proxy.replace('https:', 'http:')
    proxies = {'http': http, 'https': https}
    try:
        (r := requests.get(url, headers=headers, proxies=proxies)).raise_for_status()
        # call main_func here
    except Exception as e:
        return e
    return 'OK'

def main():
    with ProcessPoolExecutor() as executor:
        for result in executor.map(process, url_list):
            print(result)

if __name__ == '__main__':
    main()
The proxy_list will probably not work for you. Use your own list of proxies.
Obviously, url_list won't match yours either.
The point is that each URL is handled in its own process. In this scenario there is really no need to mix threads and processes, especially since you wait for a thread to complete before running the subprocess, which adds a degree of serialization.
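One more thing worth benchmarking: the downloads themselves are I/O-bound, so a thread pool with a shared requests.Session (which reuses pooled keep-alive connections instead of opening a fresh connection per request) is often at least as fast as a process pool for the fetching side; processes mainly pay off for CPU-heavy bs4 parsing. A rough sketch, with placeholder proxy and URL lists and the question's main_func left as a comment:

```python
import random
from concurrent.futures import ThreadPoolExecutor

import requests

proxy_list = ['http://107.151.182.247:80']  # substitute your own proxies
url_list = ['http://www.google.com']        # substitute your own URLs

# A single shared Session reuses pooled connections; requests does not
# guarantee full thread safety, but this pattern is widely used for GETs.
session = requests.Session()

def fetch_and_parse(url):
    proxy = random.choice(proxy_list)
    proxies = {'http': proxy, 'https': proxy.replace('http:', 'https:')}
    try:
        r = session.get(url, proxies=proxies, timeout=5)
        r.raise_for_status()
    except Exception as e:
        return e
    # main_func(r)  # parse with bs4 here, in the same worker thread
    return 'OK'

def run_all():
    with ThreadPoolExecutor(max_workers=8) as executor:
        return list(executor.map(fetch_and_parse, url_list))

# results = run_all()
```

If parsing turns out to dominate, you can keep this thread pool for fetching and hand the raw HTML strings to a separate ProcessPoolExecutor for parsing, rather than interleaving the two as in the original loop.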
https://stackoverflow.com/questions/71284678