
What are the steps of a Python web crawler?

Python web crawler steps: first, prepare the required libraries and write the crawler scheduler; then write the URL manager and the page downloader; next, write the page parser; finally, write the page outputer.


Environment used in this tutorial: Windows 7, Python 3.9, Dell G3 PC.

Python web crawler steps

(1) Prepare the required libraries

We need an open-source library called BeautifulSoup (an HTML parser) to parse the downloaded pages. Since we are working in the PyCharm IDE, we can install it directly from within the editor (a command-line alternative is sketched after the steps below).

The steps are as follows:

Select File -> Settings.


Under Project: PythonProject, open Project Interpreter.


Click the plus sign to add a new package.


Type bs4, select bs4, and click Install Package to download it.

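If you are not using PyCharm, the library can also be installed from the command line with pip; a minimal sketch (note that the package is published on PyPI as beautifulsoup4, while the import name is bs4):

# Assumption: pip is available on the command line. Install with:
#     pip install beautifulsoup4
# Quick check that the install worked:
from bs4 import BeautifulSoup

print(BeautifulSoup("<p>hello</p>", "html.parser").p.get_text())  # prints: hello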

(2) Write the crawler scheduler

Here bike_spider is the project name. The four imported modules correspond to the four code sections below: the URL manager, the page downloader, the page parser, and the page outputer.

# Crawler scheduler
from bike_spider import url_manager, html_downloader, html_parser, html_outputer


# Crawler initialization: wire the four components together
class SpiderMain(object):
    def __init__(self):
        self.urls = url_manager.UrlManager()
        self.downloader = html_downloader.HtmlDownloader()
        self.parser = html_parser.HtmlParser()
        self.outputer = html_outputer.HtmlOutputer()

    def craw(self, my_root_url):
        count = 1
        self.urls.add_new_url(my_root_url)
        while self.urls.has_new_url():
            try:
                new_url = self.urls.get_new_url()
                print("craw %d : %s" % (count, new_url))
                # Download the page
                html_cont = self.downloader.download(new_url)
                # Parse the page
                new_urls, new_data = self.parser.parse(new_url, html_cont)
                self.urls.add_new_urls(new_urls)
                # Hand the parsed data to the outputer
                self.outputer.collect_data(new_data)
                if count == 10:
                    break
                count += 1
            except Exception:
                print("craw failed")

        self.outputer.output_html()


if __name__ == "__main__":
    root_url = "http://baike.baidu.com/item/Python/407313"
    obj_spider = SpiderMain()
    obj_spider.craw(root_url)

(3) Write the URL manager

We store the URLs that have already been crawled separately from those that have not, so that we never re-crawl a page we have already visited.

# URL manager
class UrlManager(object):
    def __init__(self):
        self.new_urls = set()
        self.old_urls = set()

    def add_new_url(self, url):
        if url is None:
            return
        if url not in self.new_urls and url not in self.old_urls:
            self.new_urls.add(url)

    def add_new_urls(self, urls):
        if urls is None or len(urls) == 0:
            return
        for url in urls:
            # Reuse add_new_url so already-crawled URLs are not queued again
            self.add_new_url(url)

    def get_new_url(self):
        # pop() returns one URL and removes it from the pending set
        new_url = self.new_urls.pop()
        self.old_urls.add(new_url)
        return new_url

    def has_new_url(self):
        return len(self.new_urls) != 0
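A quick sanity check of the manager's deduplication behavior (the URLs below are made-up examples):

manager = UrlManager()
manager.add_new_url("http://example.com/a")
manager.add_new_urls(["http://example.com/a", "http://example.com/b"])  # "a" is already queued

url = manager.get_new_url()   # one URL moves from new_urls to old_urls
manager.add_new_url(url)      # ignored: this URL has already been crawled
print(manager.has_new_url())  # True - one URL is still pending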

(4) Write the page downloader

The downloader fetches a page over the network with an HTTP request.

# Page downloader
import urllib.request


class HtmlDownloader(object):

    def download(self, url):
        if url is None:
            return None
        response = urllib.request.urlopen(url)
        # Any status code other than 200 means the request failed
        if response.getcode() != 200:
            return None
        return response.read()
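The standard-library urllib is all this tutorial needs. For comparison, an equivalent downloader built on the third-party requests library would look like this; a sketch assuming requests is installed (RequestsDownloader is a name made up here, not part of the tutorial's project):

# Alternative downloader based on the requests library (assumed installed)
import requests


class RequestsDownloader(object):

    def download(self, url):
        if url is None:
            return None
        # A timeout keeps the crawler from hanging on a slow server
        response = requests.get(url, timeout=10)
        if response.status_code != 200:
            return None
        return response.content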

(5) Write the page parser

Before parsing a page we need to know what distinguishing features the content we want has. Open a page in the browser, right-click, and choose Inspect Element to see what the target elements have in common. Note that class names such as lemmaWgt-lemmaTitle-title below are specific to Baidu Baike's markup at the time of writing and may change.

# Page parser
import re
from bs4 import BeautifulSoup
from urllib.parse import urljoin


class HtmlParser(object):

    def parse(self, page_url, html_cont):
        if page_url is None or html_cont is None:
            return
        soup = BeautifulSoup(html_cont, "html.parser", from_encoding="utf-8")
        new_urls = self._get_new_urls(page_url, soup)
        new_data = self._get_new_data(page_url, soup)
        return new_urls, new_data

    def _get_new_data(self, page_url, soup):
        res_data = {"url": page_url}
        # Extract the entry title
        title_node = soup.find("dd", class_="lemmaWgt-lemmaTitle-title").find("h1")
        res_data["title"] = title_node.get_text()
        summary_node = soup.find("p", class_="lemma-summary")
        res_data["summary"] = summary_node.get_text()
        return res_data

    def _get_new_urls(self, page_url, soup):
        new_urls = set()
        # Find every link whose href matches the pattern below
        links = soup.find_all("a", href=re.compile(r"/item/"))
        for link in links:
            new_url = link['href']
            # The extracted href is relative, so it must be joined with the page URL
            new_full_url = urljoin(page_url, new_url)
            new_urls.add(new_full_url)
        return new_urls
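To see what the urljoin call does to the relative hrefs, it can be tried on its own (the /item/Guido path is a made-up example):

from urllib.parse import urljoin

page_url = "http://baike.baidu.com/item/Python/407313"
# A root-relative href is resolved against the page's scheme and host
print(urljoin(page_url, "/item/Guido"))  # http://baike.baidu.com/item/Guido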

(6) Write the page outputer

Many output formats are possible; we choose HTML, which gives us an HTML page that can be opened directly in a browser.

# Page outputer
class HtmlOutputer(object):

    def __init__(self):
        self.datas = []

    def collect_data(self, data):
        if data is None:
            return
        self.datas.append(data)

    # Output the collected data as an HTML table
    def output_html(self):
        fout = open("output.html", "w", encoding='utf-8')
        fout.write("<html>")
        fout.write("<meta charset='utf-8'>")
        fout.write("<body>")
        # Write the data as a table
        fout.write("<table>")
        for data in self.datas:
            # One table row per crawled page
            fout.write("<tr>")
            # One cell per field
            fout.write("<td>%s</td>" % data["url"])
            fout.write("<td>%s</td>" % data["title"])
            fout.write("<td>%s</td>" % data["summary"])
            fout.write("</tr>")
        fout.write("</table>")
        fout.write("</body>")
        fout.write("</html>")
        # Always close the file when done so the output is flushed to disk
        fout.close()
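The explicit close() call is easy to forget. The same output logic can be written with a with statement, which closes the file automatically even if an exception occurs; a standalone function-form sketch of the variant:

# Variant: write the collected rows using a with statement,
# so the file is closed automatically even on error.
def output_html(datas, path="output.html"):
    with open(path, "w", encoding="utf-8") as fout:
        fout.write("<html><meta charset='utf-8'><body><table>")
        for data in datas:
            fout.write("<tr>")
            fout.write("<td>%s</td>" % data["url"])
            fout.write("<td>%s</td>" % data["title"])
            fout.write("<td>%s</td>" % data["summary"])
            fout.write("</tr>")
        fout.write("</table></body></html>")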

