I've been trying to scrape just the profile names from a bunch of LinkedIn URLs I have. I'm using bs4 with Python, but no matter what I do, bs4 returns an empty list. What's going on?
import requests
from bs4 import BeautifulSoup
import numpy as np
import pandas as pd
import re
r1 = requests.get("https://www.linkedin.com/in/agazdecki/")
coverpage = r1.content
soup1 = BeautifulSoup(coverpage, 'html5lib')
name_container = soup1.find_all("li", class_ = "inline t-24 t-black t-normal break-words")
print(name_container)
Posted on 2020-04-13 17:05:22
The HTML you get back doesn't contain the li tag or class, or the profile name, that you're looking for. I'm assuming you use a session (and a browser User-Agent); the name can instead be read from the JSON that LinkedIn embeds in a code tag:
import requests , re , json
from bs4 import BeautifulSoup
session = requests.Session()
r1 = session.get("https://www.linkedin.com/in/agazdecki/", headers={"User-Agent": "Mozilla/5.0 (Windows NT 6.3; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.149 Safari/537.36"})
soup = BeautifulSoup(r1.content, 'html.parser')
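# LinkedIn embeds the profile data as JSON inside a <code> tag; locate it via the "firstName" key.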
info_tag = soup.find('code',text=re.compile('"data":{"firstName":'))
data = json.loads(info_tag.text)
first_name = data['data']['firstName']
last_name = data['data']['lastName']
occupation = data['data']['occupation']
print('First Name :' , first_name)
print('Last Name :' , last_name)
print('occupation :' , occupation)
Output:
First Name : Andrew
Last Name : Gazdecki
occupation : Chief Revenue Officer @ Spiff. Inc. 30 under 30 Entrepreneur.
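Since you have a whole list of profile URLs, the same extraction can be wrapped in a loop. This is only a minimal sketch: the profile_urls list and the reuse of a single session are my own assumptions, and LinkedIn may still redirect repeated anonymous requests to its auth wall.

import requests, re, json
from bs4 import BeautifulSoup

# Hypothetical list of profile URLs to process.
profile_urls = [
    "https://www.linkedin.com/in/agazdecki/",
    # ... more URLs
]

headers = {"User-Agent": "Mozilla/5.0 (Windows NT 6.3; Win64; x64) AppleWebKit/537.36 "
                         "(KHTML, like Gecko) Chrome/80.0.3987.149 Safari/537.36"}

names = []
with requests.Session() as session:
    for url in profile_urls:
        resp = session.get(url, headers=headers)
        soup = BeautifulSoup(resp.content, 'html.parser')
        # Profile data sits in a <code> tag as JSON (see the answer above).
        info_tag = soup.find('code', text=re.compile('"data":{"firstName":'))
        if info_tag is None:
            # No embedded JSON -- most likely redirected to the auth wall.
            names.append((url, None))
            continue
        data = json.loads(info_tag.text)
        names.append((url, data['data']['firstName'] + ' ' + data['data']['lastName']))

print(names)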
Posted on 2020-04-13 17:05:12
If you try loading the page with JavaScript disabled, you'll see that the element you're looking for doesn't exist. In other words, the whole LinkedIn page is loaded by JavaScript (similar to a single-page application). BeautifulSoup is actually working as expected: it parses the page it received, which contains only JavaScript code rather than the page you expected.
>>> coverpage = r1.content
>>> coverpage
b'<html><head>\n<script type="text/javascript">\nwindow.onload =
function() {\n // Parse the tracking code from cookies.\n var trk =
"bf";\n var trkInfo = "bf";\n var cookies = document.cookie.split(";
");\n for (var i = 0; i < cookies.length; ++i) {\n if
((cookies[i].indexOf("trkCode=") == 0) && (cookies[i].length > 8)) {\n
trk = cookies[i].substring(8);\n }\n else if
((cookies[i].indexOf("trkInfo=") == 0) && (cookies[i].length > 8)) {\n
trkInfo = cookies[i].substring(8);\n }\n }\n\n if
(window.location.protocol == "http:") {\n // If "sl" cookie is set,
redirect to https.\n for (var i = 0; i < cookies.length; ++i) {\n
if ((cookies[i].indexOf("sl=") == 0) && (cookies[i].length > 3)) {\n
window.location.href = "https:" +
window.location.href.substring(window.location.protocol.length);\n
return;\n }\n }\n }\n\n // Get the new domain. For international
domains such as\n // fr.linkedin.com, we convert it to www.linkedin.com\n
var domain = "www.linkedin.com";\n if (domain != location.host) {\n
var subdomainIndex = location.host.indexOf(".linkedin");\n if
(subdomainIndex != -1) {\n domain = "www" +
location.host.substring(subdomainIndex);\n }\n }\n\n
window.location.href = "https://" + domain + "/authwall?trk=" + trk +
"&trkInfo=" + trkInfo +\n "&originalReferer=" +
document.referrer.substr(0, 200) +\n "&sessionRedirect=" +
encodeURIComponent(window.location.href);\n}\n</script>\n</head></html>'
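A quick way to confirm this programmatically (just a sketch; the "authwall" marker is taken from the redirect URL visible in the dump above):

import requests
from bs4 import BeautifulSoup

r1 = requests.get("https://www.linkedin.com/in/agazdecki/")
soup1 = BeautifulSoup(r1.content, 'html5lib')

# The anonymous response is only a JavaScript stub that redirects to /authwall,
# so the tag you want simply is not in the parsed tree.
if b"authwall" in r1.content or soup1.find("li", class_="inline t-24 t-black t-normal break-words") is None:
    print("Got the JavaScript/auth-wall stub, not the rendered profile page.")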
You could try using something like Selenium.
Posted on 2020-06-17 06:50:35
I suggest using Selenium to scrape the data.
Download the Chrome WebDriver here.
from selenium import webdriver
driver = webdriver.Chrome("Path to your Chrome Webdriver")
#login using webdriver
driver.get('https://www.linkedin.com/login?trk=guest_homepage-basic_nav-header-signin')
username = driver.find_element_by_id('username')
username.send_keys('your email_id here')
password = driver.find_element_by_id('password')
password.send_keys('your password here')
sign_in_button = driver.find_element_by_xpath('//*[@type="submit"]')
sign_in_button.click()
driver.get('https://www.linkedin.com/in/agazdecki/') #change profile_url here.
name = driver.find_element_by_xpath('//li[@class = "inline t-24 t-black t-normal break-words"]').text
print(name)
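One caveat: newer Selenium releases (4.x) removed the find_element_by_* helpers, so with a current install the same lookups would be written roughly like this (same locators as above; driver is the webdriver.Chrome instance created earlier):

from selenium.webdriver.common.by import By

# Equivalent lookups with the Selenium 4 API.
username = driver.find_element(By.ID, 'username')
password = driver.find_element(By.ID, 'password')
sign_in_button = driver.find_element(By.XPATH, '//*[@type="submit"]')
name = driver.find_element(By.XPATH, '//li[@class = "inline t-24 t-black t-normal break-words"]').text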
https://stackoverflow.com/questions/61192281