[Case Study] Using a Python crawler to scrape Taobao shops and reviews

Install the libraries needed for development

(1) Install the MySQL driver pymysql: on Windows, press Win+R, type cmd to open a command prompt, then run pip install pymysql.

(2) Install the browser-automation library selenium: run pip install selenium at the command line.

(3) Install the HTML parsing library pyquery: run pip install pyquery at the command line.

(4) Anaconda is an open-source Python distribution that bundles conda, Python, and more than 180 scientific packages with their dependencies. Download and install Anaconda, then add its Library\bin directory (here E:\Anaconda3\anaconda\Library\bin) to the PATH environment variable and restart the computer so the change takes effect. Finally, install the jieba word-segmentation library by running pip install jieba at the command line.

(5) Download ChromeDriver from the official site at http://chromedriver.storage.googleapis.com/index.html and place chromedriver.exe in the Scripts folder of the Python installation directory.
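After step (5), a quick sanity check confirms the environment is ready. This is a minimal sketch; it assumes chromedriver.exe is on the PATH, which placing it in the Scripts folder achieves.

# sanity_check.py -- verify that every package installed above imports cleanly
import pymysql
import jieba
from pyquery import PyQuery
from selenium import webdriver

# Works because chromedriver.exe sits in Python's Scripts folder (on PATH)
driver = webdriver.Chrome()
driver.get("https://www.taobao.com/")
print(driver.title)    # should print the Taobao page title
driver.quit()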

Implementation

  • Crawling the data relies mainly on the pyquery and selenium libraries. The code below searches Taobao, pages through the results, and extracts the relevant fields.
import re
from pyquery import PyQuery as qp
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

import to    # the author's own database helper module (provides Data_oper)

driver = webdriver.Chrome()
# Explicit wait: give the site up to 50 seconds to load each element
wait = WebDriverWait(driver, 50)

class TaoBaoSearch:
    # Initialize with the search keyword (default None) and open a database connection
    def __init__(self, search=None):
        self.name = search
        self.mysql = to.Data_oper()

    # Run the search on the Taobao site
    def search(self):
        # Open the source site, here the Taobao home page
        driver.get("https://www.taobao.com/")
        # "q" is the id of the search box on the Taobao home page;
        # locate it and type in the name of the product to search for
        search_input = driver.find_element_by_id("q")
        search_input.send_keys(self.name)
        # Submit the query via the search button
        driver.find_element_by_css_selector("#J_TSearchForm > div.search-button > button").click()
        # wait.until() blocks until the element holding the total number of
        # result pages has loaded, then execution continues
        pageText = wait.until(EC.presence_of_element_located(
            (By.CSS_SELECTOR, "#mainsrp-pager > div > div > div > div.total")))
        total = re.search(r"\d+", pageText.text)
        # Return the total number of result pages
        return total.group(0)

    # Extract the relevant data from the current result page
    def parseHtml(self):
        html = driver.page_source    # get the page source
        doc = qp(html)
        # Select the ".items .item" divs under the element with class "m-itemlist"
        items = doc(".m-itemlist .items .item").items()
        # Pull the required fields out of each item with class selectors
        for item in items:
            # src = item(".pic .img").attr("src")    # alternative: product image URL
            src = item(".row .J_ClickStat").attr("href")    # link to the shop
            person = item(".row .deal-cnt").text()    # number of buyers
            title = item(".row .J_ClickStat").text().split("\n")    # title
            shop = item(".row .shopname").text()    # shop name
            location = item(".row .location").text()    # region
            # Collect the extracted fields into a list
            data = []
            data.append(str(title[0].strip()))
            data.append(str(shop.strip()))
            data.append(str(location.strip()))
            # Drop the trailing "人付款" ("people paid") suffix
            data.append(str(person[:-3].strip()))
            data.append(str(src).strip())
            # Insert the extracted record into the database via mysql.insert_data()
            self.mysql.insert_data(data)

    # Jump to the given result page
    def nextpage(self, pagenumber):
        # Locate the page-number input box next to the jump button
        pageInput = driver.find_element_by_css_selector(
            "#mainsrp-pager > div > div > div > div.form > input")
        pageInput.clear()
        pageInput.send_keys(pagenumber)
        # Locate the jump button and click it to turn the page
        pageButton = driver.find_element_by_css_selector(
            "#mainsrp-pager > div > div > div > div.form > span.btn.J_Submit")
        pageButton.click()
        # Wait until the highlighted page number equals the requested page
        wait.until(EC.text_to_be_present_in_element(
            (By.CSS_SELECTOR, "#mainsrp-pager > div > div > div > ul > li.item.active > span"),
            str(pagenumber)))
        self.parseHtml()

    # Main entry point: search, parse page 1, then walk the remaining pages
    def main(self):
        total = int(self.search())
        self.parseHtml()    # the search lands on page 1, so parse it first
        for i in range(2, total + 1):
            self.nextpage(i)
        self.mysql.close()
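A minimal sketch of how one might run the crawler; the search keyword is just an example:

if __name__ == "__main__":
    spider = TaoBaoSearch("手机")    # example keyword ("mobile phone")
    spider.main()
    driver.quit()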

The following code is a selection routine whose job is to cap how many rows of data are shown on screen. The idea: the user supplies a maximum row count n; purchase counts read from the database are appended to an array one at a time, and once the array's length reaches n it is sorted in descending order. Every value read after that is compared with the smallest element of the array: if it is smaller it is skipped; otherwise it is inserted at its sorted position and the previous smallest element is dropped. When all the data has been read, the array holds the n largest purchase counts. (A simpler equivalent using the standard library is sketched after the listing.)

The main code is as follows:

# Select the largest purchase counts; data is the list of buyer numbers
def shot_data(self, data, i=10):    # i = maximum number of rows to display, default 10
    top = []
    if i > len(data):
        i = len(data)
    for x in data:
        if len(top) < i:    # grow the array until its length equals i
            top.append(x)
            if len(top) == i:
                top.sort(reverse=True)    # sort the array in descending order
        else:
            l = len(top)
            y = len(top)
            t = 1
            if x > top[l-1]:    # is the value larger than the array's minimum?
                while x > top[l-t] and y > 0:    # walk towards the insertion point
                    t += 1
                    y -= 1
                if y != 0:    # if y == 0, the value is a new maximum
                    for c in range(1, t):    # shift the smaller values down one slot
                        top[l-c] = top[l-c-1]
                    top[l-t+1] = x
                else:
                    for c in range(1, t):
                        top[l-c] = top[l-c-1]
                    top[0] = x
    return top    # the array holding the i largest values
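For comparison, the same top-n selection can be done in one call with Python's standard heapq module; a minimal sketch (shot_data_heapq is a hypothetical name, not part of the original project):

import heapq

def shot_data_heapq(data, i=10):
    # Return the i largest purchase counts in descending order,
    # equivalent to shot_data above
    return heapq.nlargest(i, data)

print(shot_data_heapq([12, 99, 7, 53, 31, 86], i=3))    # [99, 86, 53]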

The following code extracts keywords from the reviews, using several functions from the jieba library.
The main code is as follows:

import jieba.analyse
import jieba.posseg as psg
from tkinter import END    # txtMess below is a tkinter Text widget

def dis_an(self):
    # Clear the display area
    self.txtMess.delete(1.0, END)
    t = to.Data_oper()    # the author's database helper, as above
    # Fetch the stored review text from the database
    test = t.dis_only_discuss()
    # adg collects adjectives, v collects verbs
    adg = ""
    v = ""
    # Segment the reviews and tag each word's part of speech
    word = psg.cut(test)
    # w is the word, f is its part-of-speech flag
    for w, f in word:
        # is the word an adjective?
        if f.startswith('a'):
            print(w)
            adg = adg + "," + w
        # is the word a verb?
        elif f.startswith('v'):
            v = v + "," + w
    # Extract the 5 highest-weighted words of each kind
    tags = jieba.analyse.extract_tags(adg, topK=5)
    tags1 = jieba.analyse.extract_tags(v, topK=5)
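To see the effect of the part-of-speech filtering in isolation, here is a self-contained sketch; the sample review string is made up for illustration only:

import jieba.analyse
import jieba.posseg as psg

sample = "手机很好用,屏幕清晰,物流很快,非常满意"    # made-up sample review
# Keep only the adjectives, then rank them by weight
adjectives = ",".join(w for w, f in psg.cut(sample) if f.startswith('a'))
print(jieba.analyse.extract_tags(adjectives, topK=5))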


Origin: www.cnblogs.com/Pythonmiss/p/11298086.html