Three methods of data extraction
- Regular expressions (the re module)
- BeautifulSoup(bs4)
- lxml
* Using the page-download function built earlier, fetch the HTML of the target page. We will use https://guojiadiqu.bmcx.com/AFG__guojiayudiqu/ as the example (a sketch of this helper follows the snippet below).
from get_html import download

url = 'https://guojiadiqu.bmcx.com/AFG__guojiayudiqu/'
page_content = download(url)
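The get_html module comes from an earlier article and is not shown here. A minimal sketch of download(), assuming it simply fetches the page with requests and returns the decoded HTML (the real helper may add retries or caching), could look like this:

# get_html.py -- minimal sketch of the download helper (assumption: the
# original version may add retries, caching, user-agent rotation, etc.)
import requests

def download(url):
    """Fetch url and return the page HTML as a string, or None on failure."""
    headers = {'User-Agent': 'Mozilla/5.0'}  # plain browser UA to avoid basic blocking
    resp = requests.get(url, headers=headers, timeout=10)
    if resp.status_code != 200:
        return None
    resp.encoding = resp.apparent_encoding  # the page is Chinese; let requests guess the charset
    return resp.text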
* Suppose we want to scrape the country name and overview from this page; we implement the extraction with each of the three methods in turn.
1. Regular expressions
from get_html import download
import re

url = 'https://guojiadiqu.bmcx.com/AFG__guojiayudiqu/'
page_content = download(url)
country = re.findall('class="h2dabiaoti">(.*?)</h2>', page_content)  # note: findall returns a list
survey_data = re.findall('<tr><td bgcolor="#FFFFFF" id="wzneirong">(.*?)</td></tr>', page_content)
survey_info_list = re.findall('<p> (.*?)</p>', survey_data[0])
survey_info = ''.join(survey_info_list)
print(country[0], survey_info)
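The patterns above are tied to the exact markup and assume each match sits on a single line. A hedged variant (an illustration, not the article's code) that precompiles the patterns and passes re.S so that '.' also crosses newlines:

import re

# Precompiled patterns; re.S lets '.' match newlines in case the cell
# content spans several lines (an assumption about the page layout).
COUNTRY_RE = re.compile(r'class="h2dabiaoti">(.*?)</h2>', re.S)
SURVEY_RE = re.compile(r'id="wzneirong">(.*?)</td>', re.S)
PARA_RE = re.compile(r'<p>\s*(.*?)</p>', re.S)

def extract_with_re(page_content):
    country = COUNTRY_RE.search(page_content)
    survey = SURVEY_RE.search(page_content)
    if not (country and survey):
        return None, ''
    paragraphs = PARA_RE.findall(survey.group(1))
    return country.group(1).strip(), ''.join(p.strip() for p in paragraphs)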
2. BeautifulSoup (bs4)
from get_html import download
from bs4 import BeautifulSoup

url = 'https://guojiadiqu.bmcx.com/AFG__guojiayudiqu/'
html = download(url)
# create a BeautifulSoup object
soup = BeautifulSoup(html, "html.parser")
# search for the target elements
country = soup.find(attrs={'class': 'h2dabiaoti'}).text
survey_info = soup.find(attrs={'id': 'wzneirong'}).text
print(country, survey_info)
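find() with an attrs dict works; CSS selectors read a little more directly. An equivalent sketch using select_one (assuming the same class and id exist on the page), with None checks so a layout change fails gracefully:

from bs4 import BeautifulSoup

def extract_with_bs4(html):
    soup = BeautifulSoup(html, "html.parser")
    country_tag = soup.select_one('.h2dabiaoti')   # CSS class selector
    survey_tag = soup.select_one('#wzneirong')     # CSS id selector
    if country_tag is None or survey_tag is None:
        return None, ''
    # get_text(strip=True) trims surrounding whitespace on each text node
    return country_tag.get_text(strip=True), survey_tag.get_text(' ', strip=True)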
3. lxml
from get_html import download
from lxml import etree  # parse tree

url = 'https://guojiadiqu.bmcx.com/AFG__guojiayudiqu/'
page_content = download(url)
selector = etree.HTML(page_content)  # the result supports XPath queries
country_select = selector.xpath('//*[@id="main_content"]/h2')  # returns a list
for country in country_select:
    print(country.text)
survey_select = selector.xpath('//*[@id="wzneirong"]/p')
for survey_content in survey_select:
    print(survey_content.text, end='')
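survey_content.text only returns each <p> element's leading text; if the paragraphs contain nested tags (an assumption, since the page markup is not reproduced here), itertext() also collects the text inside child elements. A sketch that returns the values instead of printing them:

from lxml import etree

def extract_with_lxml(page_content):
    selector = etree.HTML(page_content)
    countries = selector.xpath('//*[@id="main_content"]/h2/text()')
    country = countries[0].strip() if countries else None
    # itertext() walks every text node under each <p>, including nested tags
    paragraphs = selector.xpath('//*[@id="wzneirong"]/p')
    survey_info = ''.join(
        text.strip() for p in paragraphs for text in p.itertext()
    )
    return country, survey_info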
Output: each script prints the country name followed by its overview text (the original screenshot of the results is not reproduced here).
Finally, Web Scraping with Python (《用python寫(xiě)網(wǎng)絡(luò)爬蟲(wú)》) includes a performance comparison of the three methods; the book's chart is not reproduced here and is for reference only.
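The book's numbers were measured on its own test pages. For a rough comparison on this page, one can time the three extract_with_* sketches above on an already-downloaded page (so only parsing is measured), for example with timeit:

import timeit

# Assumes page_content was fetched once with download(url) and that the
# three extract_with_* sketches above are defined in the same session.
for name, func in [('re', extract_with_re),
                   ('bs4', extract_with_bs4),
                   ('lxml', extract_with_lxml)]:
    seconds = timeit.timeit(lambda: func(page_content), number=100)
    print(f'{name:5s} {seconds:.3f}s for 100 runs')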