Reading crawled web pages in XML/txt format

#! /usr/bin/env python
# coding=utf-8
from bs4 import BeautifulSoup

# Open the crawled file in binary mode and let BeautifulSoup handle the decoding
with open('C:/Users/Care/Desktop/SOUGOU/news.sohunews.01080611.txt', 'rb') as fin:
    soup = BeautifulSoup(fin, 'html.parser')

# Collect every <content> tag in the file
pp = soup.find_all('content')
print(pp[0].get_text())        # text of the first <content> tag
print(type(pp[0].get_text()))  # <class 'str'>

# Write the extracted text with an explicit utf-8 encoding to avoid gbk encoding errors
with open('C:/Users/Care/Desktop/SOUGOU/a.txt', 'a+', encoding='utf-8') as f:
    for tag in pp:
        f.write(tag.get_text())

1. When BeautifulSoup is used on an XML-format file, pp = soup.find_all('content') returns data of type <class 'bs4.element.ResultSet'>. To get a plain str, one more step is needed, and it helps to first understand what that type is:

<class 'bs4.element.ResultSet'> behaves like a list whose elements are Tag objects, so indexing it, e.g. textPid = pp[0], pulls out a single tag.

2. textPid has type <class 'bs4.element.Tag'>; print(textPid.get_text()) then prints a plain str (see the first sketch after this list).

3. The line with open('C:/Users/Care/Desktop/SOUGOU/a.txt','a+',encoding='utf-8') as f passes encoding='utf-8' explicitly to avoid the problem where the default gbk codec cannot encode certain special characters (see the second sketch after this list).
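
To make notes 1 and 2 concrete, here is a minimal, self-contained sketch of the type chain. The inline XML string and its sample text are made up for illustration and are not taken from the Sogou corpus file; only the <content> tag name matches the code above.

from bs4 import BeautifulSoup

# A tiny stand-in document (hypothetical sample text, same <content> tag as above)
xml = '<doc><content>第一条新闻</content><content>第二条新闻</content></doc>'
soup = BeautifulSoup(xml, 'html.parser')

pp = soup.find_all('content')   # ResultSet: a list-like container of Tag objects
print(type(pp))                 # <class 'bs4.element.ResultSet'>

textPid = pp[0]                 # indexing the ResultSet gives a single Tag
print(type(textPid))            # <class 'bs4.element.Tag'>

text = textPid.get_text()       # get_text() on a Tag returns a plain str
print(type(text))               # <class 'str'>
print(text)                     # 第一条新闻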
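
And a short sketch for note 3, assuming the situation that note describes: on a Chinese-locale Windows machine, open() without an encoding argument typically defaults to gbk, so writing a character outside that codec raises UnicodeEncodeError, while encoding='utf-8' handles any text. The file name demo.txt and the sample text here are hypothetical.

# Simulate the problematic default by requesting gbk explicitly (hypothetical demo.txt)
text = '新闻内容 \U0001F600'  # the emoji cannot be represented in gbk

try:
    with open('demo.txt', 'w', encoding='gbk') as f:
        f.write(text)
except UnicodeEncodeError as err:
    print('gbk write failed:', err)

# utf-8 can encode any Unicode text, so the same write succeeds
with open('demo.txt', 'w', encoding='utf-8') as f:
    f.write(text)
print('utf-8 write succeeded')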


Reprinted from blog.csdn.net/x_iesheng/article/details/85029067