Extracting web page data and saving it as a CSV file

import requests

# The selectors and output filename below come from the New York Times
# "Trump's Lies" interactive article, so the URL is assumed to point at that
# page rather than baidu.com.
r = requests.get('https://www.nytimes.com/interactive/2017/06/23/opinion/trumps-lies.html')

from bs4 import BeautifulSoup

# parse the page and grab every span that wraps one record
soup = BeautifulSoup(r.text, 'html.parser')
results = soup.find_all('span', attrs={'class': 'short-desc'})
records = []

for result in results:
    # the <strong> tag holds the date; drop the trailing space and add the year
    date = result.find('strong').text[0:-1] + ',2017'
    # the text node after the date is the quoted lie; strip the quotation marks
    lie = result.contents[1][1:-2]
    # the link text is the fact-check explanation; strip the parentheses
    explanation = result.find('a').text[1:-1]
    # the link target points to the source backing up the explanation
    url = result.find('a')['href']
    records.append((date, lie, explanation, url))

import pandas as pd

# build a DataFrame from the records, parse the dates, and write the CSV
df = pd.DataFrame(records, columns=['date', 'lie', 'explanation', 'url'])
df['date'] = pd.to_datetime(df['date'])
df.to_csv('trump_lies.csv', index=False, encoding='utf-8')
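
As a quick sanity check, the saved file can be read back with pandas and previewed. This is a minimal sketch that assumes the script above has already written trump_lies.csv to the current working directory:

import pandas as pd

# read the CSV back and re-parse the date column to confirm the round trip
df = pd.read_csv('trump_lies.csv', parse_dates=['date'])
print(df.shape)   # (number of records, 4)
print(df.head())  # first few rows: date, lie, explanation, url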

That covers extracting web page data and saving it as a CSV file.

Reposted from blog.csdn.net/CSDN_LYY/article/details/87903346