A web crawler involves three basic steps: fetching the page, parsing the content, and storing the data.
Preparation
First, install the third-party libraries needed for fetching and parsing pages: requests and BeautifulSoup. The canonical package name for BeautifulSoup is beautifulsoup4 (the bs4 package on PyPI is just a wrapper around it), and the code below also uses the lxml parser, so install that too:
pip install requests
pip install beautifulsoup4
pip install lxml
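A quick way to confirm the installs succeeded is to import the packages and print their versions; this is a minimal sanity check, nothing site-specific:

# These imports fail immediately if installation did not succeed
import requests
import bs4
import lxml  # only needed because the examples below use the "lxml" parser
print(requests.__version__)
print(bs4.__version__)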
Fetching the page
# coding: UTF-8
import requests

link = "http://www.santostang.com/"
# Pretend to be a regular browser; some sites block the default requests User-Agent
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.103 Safari/537.36'}
r = requests.get(link, headers=headers)
print(r.text)  # the raw HTML of the page
Running this prints the page's HTML source.
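In practice it helps to check that the request actually succeeded before using r.text. A minimal sketch, using only standard requests attributes:

r = requests.get(link, headers=headers, timeout=10)  # fail fast instead of hanging forever
r.raise_for_status()                # raises an HTTPError on 4xx/5xx responses
r.encoding = r.apparent_encoding    # guess the encoding from the body if the header is wrong
print(r.status_code)                # 200 on success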
Parsing the page content
# coding: UTF-8
import requests
from bs4 import BeautifulSoup

link = "http://www.santostang.com/"
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.103 Safari/537.36'}
r = requests.get(link, headers=headers)

# Parse the HTML with the lxml parser
soup = BeautifulSoup(r.text, "lxml")
# The first <h1 class="post-title"> holds the first article's title; its <a> child holds the text
title = soup.find("h1", class_="post-title").a.text.strip()
print(title)
This grabs the title of the first article on the page, and the output is:
第四章 – 4.3 通过selenium 模拟浏览器抓取
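Note that find returns only the first match. To collect every article title on the page, find_all works the same way; this sketch assumes each post on the page uses the same h1.post-title markup:

# Collect all post titles on the page, not just the first one
for h1 in soup.find_all("h1", class_="post-title"):
    print(h1.a.text.strip())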
Storing the data
# coding: UTF-8
import requests
from bs4 import BeautifulSoup

link = "http://www.santostang.com/"
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.103 Safari/537.36'}
r = requests.get(link, headers=headers)
soup = BeautifulSoup(r.text, "lxml")
title = soup.find("h1", class_="post-title").a.text.strip()

# Write the title to a text file; explicit utf-8 avoids encoding errors on Chinese text
with open('d:/title.txt', 'w', encoding='utf-8') as f:
    f.write(title)
After running the program, open d:/title.txt; its content is the title of the first article on the page, i.e. "第四章 – 4.3 通过selenium 模拟浏览器抓取".
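For records with more than one field, the standard csv module is a natural next step. A minimal sketch, assuming the titles collected earlier are in a list named titles (a name introduced here for illustration):

import csv

titles = ["第四章 – 4.3 通过selenium 模拟浏览器抓取"]  # e.g., collected with find_all above

# newline='' prevents blank rows on Windows; utf-8-sig lets Excel open the file correctly
with open('d:/titles.csv', 'w', newline='', encoding='utf-8-sig') as f:
    writer = csv.writer(f)
    writer.writerow(["title"])  # header row
    for t in titles:
        writer.writerow([t])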
That covers the three basic steps of a Python crawler, with code for each.