Crawler login, crawl and comment: simple and practical


Copyright notice: This is an original article by the blogger. When reprinting, please indicate the source and the author's screen name in a prominent position. It may not be used for commercial purposes without the author's permission.

The crawler logs in, crawls the existing comments, and posts a comment.

Requirements: log in to the target site, crawl the comments on a post, and submit a new comment
Environment: Windows
Development language: Python

Libraries used:
bs4
requests

One: Obtain the login_url and the data parameters of the login page

Submit a deliberately wrong password first so that the login request stands out in the browser's developer tools; this quickly reveals the login_url. Then log in with the correct credentials and copy the form fields from that request to build the data parameter.
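As a quick sanity check, the captured form fields can be encoded the same way the browser does. This is only a sketch: the field names are the ones captured above, and the values are placeholders:

```python
from urllib.parse import urlencode

# Form fields captured from the login request (placeholder values)
data = {
    'action': 'ajaxlogin',
    'username': '[email protected]',
    'password': 'xxxxxxx',
    'remember': 'true',
}

# Passing a dict as `data=` to requests produces this same
# application/x-www-form-urlencoded body
body = urlencode(data)
print(body)
```

Comparing this output against the raw request body shown in the developer tools confirms the captured fields are complete.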


requests.session() is used so that the cookies returned by the login request are stored and sent automatically on all subsequent requests.
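A minimal sketch of why a Session matters here: any cookie stored in the session's cookie jar is resent automatically, so the login cookie carries over to later requests. The cookie name below is an illustrative placeholder, not the site's actual cookie:

```python
import requests

session = requests.Session()

# Simulate a login cookie landing in the session's cookie jar;
# in the real flow, session.post(login_url, ...) stores it automatically.
session.cookies.set('wordpress_logged_in', 'example-token')

# Any later session.get()/session.post() will send this cookie along.
print('wordpress_logged_in' in session.cookies)
```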

from bs4 import BeautifulSoup
import requests

# Part 1: obtain the login cookies ------------------------------------------------------------
# Send the login request
login_url = 'https://xx.xxxxxxxxx.cn/wp-admin/admin-ajax.php'
data = {
    'action': 'ajaxlogin',
    'username': '[email protected]',
    'password': 'xxxxxxx',
    'remember': 'true'
}
header = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:98.0) Gecko/20100101 Firefox/98.0'}
session = requests.session()
login_resp = session.post(url=login_url, headers=header, data=data)
# print(login_resp.cookies)
# # Save the login response for inspection
# with open('./login_resp.html', 'w', encoding='utf-8') as fp:
#     fp.write(login_resp.text)

Two: Use the logged-in session to fetch the page to be crawled

Locate the required comment content

# Part 2: use the logged-in session to fetch the page to be crawled ----------------------------
detail_url = 'https://xx.xxxxxxxxx.cn/psychology/8384/'
detail_resp = session.get(url=detail_url, headers=header)
# # Save the fetched page for inspection
# with open('./detail_resp.html', 'w', encoding='utf-8') as fp:
#     fp.write(detail_resp.text)
bs = BeautifulSoup(detail_resp.text, 'html.parser')
comment_list = bs.find_all('li', class_='comment-item')
for info in comment_list:
    txt = info.find('div', class_='comment-txt')
    hd = txt.find('div', class_='hd')
    bd = txt.find('div', class_='bd')
    name = hd.find('cite', class_='fn').text
    date = hd.find('p', class_='date').text
    text = bd.find('p').text
    reply_link = bd.find('a')['href']
    inf = {
        'commenter': name,
        'date': date,
        'comment': text,
        # 'reply link': reply_link
    }
    print(inf)
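To see the selectors in isolation, here is a self-contained sketch that parses a tiny HTML fragment shaped like the comment markup above. The fragment and its values are invented for illustration; the real page's markup may differ in details:

```python
from bs4 import BeautifulSoup

# A minimal fragment mimicking the site's comment structure (invented example)
html = '''
<li class="comment-item">
  <div class="comment-txt">
    <div class="hd"><cite class="fn">Alice</cite><p class="date">2022-03-25</p></div>
    <div class="bd"><p>Nice article!</p><a href="#reply-1">Reply</a></div>
  </div>
</li>
'''

bs = BeautifulSoup(html, 'html.parser')
for info in bs.find_all('li', class_='comment-item'):
    txt = info.find('div', class_='comment-txt')
    name = txt.find('cite', class_='fn').text
    date = txt.find('p', class_='date').text
    body = txt.find('div', class_='bd').find('p').text
    print({'commenter': name, 'date': date, 'comment': body})
```

Testing the selectors against a saved copy of the page (the commented-out `detail_resp.html` above) avoids hammering the live site while debugging.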



Three: Submit a comment with the logged-in session

Post a comment yourself in the browser first, then capture the comment_url and the data_comment form fields from that request in the developer tools.

# Part 3: submit a comment ---------------------------------------------------------------------
comment_url = 'https://xx.xxxxxxxxx.cn/wp-comments-post.php'
data_comment = {
    'comment': 'comment_3',
    'submit': '',
    'comment_post_ID': '8384',
    'comment_parent': '0'
}
comment_resp = session.post(comment_url, data=data_comment)
if comment_resp.status_code == 200:
    print('Comment posted successfully')
else:
    print('Failed to post the comment, status code: {}'.format(comment_resp.status_code))
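A 200 status code is only a rough success signal: the server can return 200 with an error page, and WordPress often answers a successful comment POST with a redirect. As a sketch, factoring the check into a small function makes it easy to extend and test; the mappings below are ordinary HTTP conventions, not behavior confirmed against this site:

```python
def describe_comment_result(status_code):
    """Map an HTTP status code from the comment POST to a readable message."""
    if status_code == 200:
        return 'Comment posted successfully'
    if status_code in (301, 302):
        # WordPress commonly redirects back to the post after a successful comment
        return 'Redirected (likely success), status code: {}'.format(status_code)
    return 'Failed to post the comment, status code: {}'.format(status_code)

print(describe_comment_result(200))
```

With this in place, the check above becomes `print(describe_comment_result(comment_resp.status_code))`.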


Origin: blog.csdn.net/weixin_45711406/article/details/123753354