Web Scraper (4): Writing Scraped Data to Excel

Without further ado, here is the code first:

from selenium import webdriver
from selenium.webdriver.common.by import By
from bs4 import BeautifulSoup
import csv

driver = webdriver.Chrome()
url = "https://www.kylc.com/stats/global/yearly/g_gdp/1960.html"
xpath = "/html/body/div[2]/div[1]/div[5]/div[1]/div/div/div/table"
driver.get(url)
# Grab the inner HTML of the GDP table located by the XPath above.
# (find_element_by_xpath was removed in Selenium 4; find_element(By.XPATH, ...) works in 3.x and 4.x.)
table_html = driver.find_element(By.XPATH, xpath).get_attribute('innerHTML')

# newline='' keeps the csv module from inserting blank lines between rows on Windows
out = open('d:/gdp.csv', 'w', newline='')
csv_write = csv.writer(out, dialect='excel')

soup = BeautifulSoup(table_html, "html.parser")
table = soup.find_all('tr')
for row in table:
    cols = [col.text for col in row.find_all('td')]
    # Skip the header and any row whose first cell is not a rank number
    if len(cols) == 0 or not cols[0].isdigit():
        continue
    csv_write.writerow(cols)
out.close()
driver.close()
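
Note that the output is a CSV file that Excel happens to open, not a real workbook. If you would rather produce an actual .xlsx file, a minimal sketch using openpyxl (not part of the original code; it needs pip install openpyxl) could replace the csv part like this:

# Sketch only: writes the same rows into a real .xlsx workbook with openpyxl.
from openpyxl import Workbook
from bs4 import BeautifulSoup

def save_table_as_xlsx(table_html, path='d:/gdp.xlsx'):
    soup = BeautifulSoup(table_html, "html.parser")
    wb = Workbook()
    ws = wb.active
    for row in soup.find_all('tr'):
        cols = [col.text for col in row.find_all('td')]
        # Same filter as before: keep only data rows whose first cell is a rank number
        if len(cols) == 0 or not cols[0].isdigit():
            continue
        ws.append(cols)  # one list of cell texts per worksheet row
    wb.save(path)

ws.append() takes the list of cell texts for one row, so the filtering logic stays exactly the same as in the CSV version.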

The changes this time are as follows:

import csv

out = open('d:/gdp.csv', 'w', newline='')
csv_write = csv.writer(out, dialect='excel')

for row in table:
    cols = [col.text for col in row.find_all('td')]
    if len(cols) == 0 or not cols[0].isdigit():
        continue
    csv_write.writerow(cols)
out.close()
driver.close()

We import the csv library and write the scraped rows into gdp.csv on the D drive. The print(cols) from the previous post is removed, since the results can now be viewed directly in Excel. Finally, out.close() and driver.close() close the output file and the browser window.
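
Calling close() by hand works, but if anything throws partway through the scrape, the file handle and the browser are left open. As a variation on the same logic (not what the post does, just a common pattern), a with block plus try/finally guarantees both get closed:

# Sketch: same scraping and writing loop, but cleanup happens even on errors.
from selenium import webdriver
from selenium.webdriver.common.by import By
from bs4 import BeautifulSoup
import csv

url = "https://www.kylc.com/stats/global/yearly/g_gdp/1960.html"
xpath = "/html/body/div[2]/div[1]/div[5]/div[1]/div/div/div/table"

driver = webdriver.Chrome()
try:
    driver.get(url)
    table_html = driver.find_element(By.XPATH, xpath).get_attribute('innerHTML')
    soup = BeautifulSoup(table_html, "html.parser")
    # The with block closes the file automatically, even if writerow raises
    with open('d:/gdp.csv', 'w', newline='') as out:
        csv_write = csv.writer(out, dialect='excel')
        for row in soup.find_all('tr'):
            cols = [col.text for col in row.find_all('td')]
            if len(cols) == 0 or not cols[0].isdigit():
                continue
            csv_write.writerow(cols)
finally:
    driver.close()  # the browser closes even if the scrape fails halfway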


Reposted from blog.csdn.net/qq_53029299/article/details/114851062