This article walks through how to search the document tree with Beautiful Soup in a Python crawler. The editor finds it quite practical and shares it here as a reference; hopefully you will get something useful out of it after reading.
Searching the document tree
1.find_all(name, attrs, recursive, text, **kwargs)
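The name filter, the keyword arguments and the text filter are covered in the sections below. Of the remaining parameters in the signature, attrs is a dict of attribute filters (an alternative to the keyword form, shown later), and recursive controls how deep the search goes: by default find_all() examines all descendants of a tag, while recursive=False restricts it to direct children. A minimal sketch of recursive, using an illustrative HTML fragment rather than the full sample document:

#!/usr/bin/python3
# -*- coding:utf-8 -*-
from bs4 import BeautifulSoup

html = "<html><head><title>The Dormouse's story</title></head><body></body></html>"
soup = BeautifulSoup(html, "lxml")

# Default: the search covers all descendants of <html>, so <title> is found
print(soup.html.find_all("title"))                   # [<title>The Dormouse's story</title>]
# recursive=False only checks the direct children of <html> (<head> and <body>)
print(soup.html.find_all("title", recursive=False))  # []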
1) The name parameter
The name parameter matches every tag whose name equals the value you pass; string objects in the tree are ignored automatically.
a. Passing a string
The simplest filter is a string. Pass a string to a search method and Beautiful Soup finds every tag whose name matches that string exactly, returning the matches as a list.
#!/usr/bin/python3
# -*- coding:utf-8 -*-
from bs4 import BeautifulSoup

html = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title" name="dromouse"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1"><!-- Elsie --></a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""

# Create the Beautiful Soup object, using the lxml parser
soup = BeautifulSoup(html, "lxml")

print(soup.find_all("b"))
print(soup.find_all("a"))
Output:
[<b>The Dormouse's story</b>]
[<a class="sister" href="http://example.com/elsie" id="link1"><!-- Elsie --></a>, <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>, <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]
b. Passing a regular expression
If you pass in a regular expression, Beautiful Soup matches tag names against it using the expression's search() method.
#!/usr/bin/python3
# -*- coding:utf-8 -*-
from bs4 import BeautifulSoup
import re

html = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title" name="dromouse"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1"><!-- Elsie --></a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""

# Create the Beautiful Soup object, using the lxml parser
soup = BeautifulSoup(html, "lxml")

# Every tag whose name starts with "b" is matched: <body> and <b>
for tag in soup.find_all(re.compile("^b")):
    print(tag.name)
Output:
body
b
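Because the pattern is applied with search(), an unanchored expression matches anywhere in the tag name; anchoring it with ^ restricts the match to the start of the name. A minimal sketch using a trimmed fragment of the same document (the patterns here are just illustrative):

#!/usr/bin/python3
# -*- coding:utf-8 -*-
from bs4 import BeautifulSoup
import re

html = "<html><head><title>The Dormouse's story</title></head><body><p class='title'><b>The Dormouse's story</b></p></body></html>"
soup = BeautifulSoup(html, "lxml")

# Unanchored: any tag whose name contains the letter "t"
for tag in soup.find_all(re.compile("t")):
    print(tag.name)        # html, title
# Anchored: only tag names that start with "t"
for tag in soup.find_all(re.compile("^t")):
    print(tag.name)        # title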
c. Passing a list
If you pass in a list, Beautiful Soup returns everything that matches any element of the list, again as a list.
#!/usr/bin/python3
# -*- coding:utf-8 -*-
from bs4 import BeautifulSoup

html = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title" name="dromouse"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1"><!-- Elsie --></a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""

# Create the Beautiful Soup object, using the lxml parser
soup = BeautifulSoup(html, "lxml")

# Return every <a> tag and every <b> tag
print(soup.find_all(['a', 'b']))
2) The keyword arguments
Any keyword argument whose name is not one of find_all()'s own parameters is treated as a filter on a tag attribute of that name; for example, id="link1" matches tags whose id attribute equals "link1".
#!/usr/bin/python3
# -*- coding:utf-8 -*-
from bs4 import BeautifulSoup

html = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title" name="dromouse"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1"><!-- Elsie --></a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""

# Create the Beautiful Soup object, using the lxml parser
soup = BeautifulSoup(html, "lxml")

# Filter on the id attribute
print(soup.find_all(id="link1"))
Output:
[<a class="sister" href="http://example.com/elsie" id="link1"><!-- Elsie --></a>]
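Because class is a reserved word in Python, Beautiful Soup accepts class_ as the keyword form of that attribute; the attrs dict from the signature above is an equivalent way to express the same filters. A minimal sketch using a trimmed fragment of the same document:

#!/usr/bin/python3
# -*- coding:utf-8 -*-
from bs4 import BeautifulSoup

html = """
<p class="story">
<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a>
</p>
"""
soup = BeautifulSoup(html, "lxml")

# "class" is a Python keyword, so the attribute is filtered with class_
print(soup.find_all("a", class_="sister"))
# The attrs dict is equivalent, and also works for attribute names that
# cannot be written as keyword arguments (such as data-* attributes)
print(soup.find_all("a", attrs={"class": "sister", "id": "link2"}))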
3) The text parameter
The text parameter searches the document's string content. Like the name parameter, it accepts a string, a regular expression or a list.
#!/usr/bin/python3
# -*- coding:utf-8 -*-
from bs4 import BeautifulSoup
import re

html = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title" name="dromouse"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1"><!-- Elsie --></a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""

# Create the Beautiful Soup object, using the lxml parser
soup = BeautifulSoup(html, "lxml")

# String
print(soup.find_all(text=" Elsie "))
# List
print(soup.find_all(text=["Tillie", " Elsie ", "Lacie"]))
# Regular expression
print(soup.find_all(text=re.compile("Dormouse")))
Output:
[' Elsie ']
[' Elsie ', 'Lacie', 'Tillie']
["The Dormouse's story", "The Dormouse's story"]
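The text filter can also be combined with a tag-name filter; in that case find_all() returns the tags whose .string matches rather than the strings themselves (newer Beautiful Soup releases also accept the same filter under the name string). A minimal sketch using a trimmed fragment of the same document:

#!/usr/bin/python3
# -*- coding:utf-8 -*-
from bs4 import BeautifulSoup

html = """
<p class="story">
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a>
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>
</p>
"""
soup = BeautifulSoup(html, "lxml")

# text alone returns the matching strings
print(soup.find_all(text="Lacie"))        # ['Lacie']
# Combined with a tag name, it returns the tags whose .string matches
print(soup.find_all("a", text="Lacie"))   # the <a> tag with id="link2"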
That covers searching the document tree in a Python crawler. Hopefully the material above has been helpful and lets you learn a bit more. If you found the article worthwhile, feel free to share it so more people can see it.