Python crawler exercise 2: scraping the Python standard library docs into PDF


Scraping the Python standard library

I wanted to pull down the Python standard library documentation. I tried saving the pages directly as HTML, but a plain save loses the CSS styling and other assets, so I decided to save them as PDF instead.

This needs the pdfkit tool, combined with the selenium setup from the earlier exercise, to crawl the pages.

First install pdfkit:

pip install pdfkit
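
pdfkit is a Python wrapper around the wkhtmltopdf command-line tool, so the wkhtmltopdf binary also has to be installed and reachable. If it is not on the PATH it can be passed in explicitly; a minimal sketch (the binary path below is just an example and depends on your installation):

import pdfkit

# Point pdfkit at the wkhtmltopdf binary if it is not on the PATH.
config = pdfkit.configuration(wkhtmltopdf='/usr/local/bin/wkhtmltopdf')

# Convert a single documentation page to PDF.
pdfkit.from_url('https://docs.python.org/3/library/os.html', 'os.pdf', configuration=config)

With that in place, the full script: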
# coding:utf-8
import os
from os import path
from selenium import webdriver
import pdfkit
import re

# strip anything between angle brackets (HTML tags) from the link text
def transname(name):
    pattern = '<.*?>'
    res = re.compile(pattern).sub("",name)
    return res
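# For example, with a hypothetical piece of link text containing markup:
#   transname('<span class="pre">os</span> module')  ->  'os module'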


url_root = 'https://docs.python.org/3/library/'
url_index = url_root + 'index.html'

result_dir = path.join(os.getcwd(),'result')
if not path.exists(result_dir):
    os.makedirs(result_dir)

#pdfkit.from_url(url_index,path.join(result_dir,'index.pdf'))

# PhantomJS support is deprecated in newer Selenium releases; a headless Firefox/Chrome also works
driver = webdriver.PhantomJS()
# driver = webdriver.Firefox()
driver.get(url_index)
html_index = driver.page_source

pattern = '<li class="toctree-l[12]"><a class="reference internal" href="(.+?)">(.+?)</a>'
res = re.compile(pattern,re.S).findall(html_index)
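# res should be a list of (href, link text) tuples, roughly of the form
# ('functions.html', 'Built-in Functions'); the exact text depends on the page markup.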
print(res)
ct = 0
amt = len(res)

for i,x in enumerate(res):
    if i < 127:        # skip the first 127 entries (presumably already converted in an earlier run)
        continue
    addr = url_root + x[0]
    if re.compile(r'.*\.html$').match(addr):
        name = re.compile(os.sep).sub(r'-',x[1])     # replace path separators so the name is not split into directories
        name = path.join(result_dir,name+'.pdf')
        name = transname(name)

        ct = ct + 1
        print(ct,'/',i+1,'/',amt,addr,name)

        pdfkit.from_url(addr,name)
    else:
        amt = amt - 1
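
As a side note, the library index page is plain static HTML, so the PhantomJS step is not strictly required; the same page source could also be fetched with urllib.request. A minimal sketch of that alternative:

from urllib import request

# Fetch the static index page without a browser.
with request.urlopen('https://docs.python.org/3/library/index.html') as resp:
    html_index = resp.read().decode('utf-8')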

Results
[screenshot of the generated PDF files in the result directory]
