Python Web Scraping: Collecting Professor Information from Stanford University and Harvard University

Copyright notice: this is an original article by the blogger and may not be reproduced without permission. https://blog.csdn.net/zhou_pp/article/details/83758392

Requirements:
Business-school faculty directories to scrape, by institution:
Stanford University https://www.gsb.stanford.edu/faculty-research/faculty
Harvard University https://www.hbs.edu/faculty/Pages/browse.aspx
Massachusetts Institute of Technology http://mitsloan.mit.edu/faculty-and-research/faculty-directory/
University of Cambridge https://www.jbs.cam.ac.uk/faculty-research/faculty-a-z/
University of Oxford https://www.sbs.ox.ac.uk/about-us/people?f[0]=department_facet%3A230
University of Chicago https://www.chicagobooth.edu/faculty/directory
Columbia University https://www8.gsb.columbia.edu/faculty-research/faculty-directory?full_time=y&division=All&op=Search&form_build_id=form-Gl3ByqgZuJU6goJNDyaIByhMhWNTlR8iWuhntfhsjf0&form_id=all_dept_form
Yale University https://som.yale.edu/faculty-research/faculty-directory
University of California, Berkeley http://facultybio.haas.berkeley.edu/faculty-photo/
University of Pennsylvania https://www.wharton.upenn.edu/faculty-directory/
Scrape only the following titles: Professor, Associate Professor, Assistant Professor, Professor Emeritus.
Fields: name, title, email, address, telephone, personal homepage URL.
Also: background introduction, research areas, research output, and teaching (research output is usually extensive, so a link to the relevant page is acceptable).
Stanford University is used as the example below.
Because this site's anti-scraping checks are fairly aggressive, requests need to be routed through proxies via a ProxyHandler. Proxy IPs are a standard tool in the scraping/anti-scraping arms race: many sites count how often a single IP visits within a time window, and if the traffic doesn't look human the IP gets blocked, at which point the code raises URLError (the remote host closes or refuses the connection). Rotating through several proxy servers keeps any single IP from visiting too frequently. In urllib.request, proxies are configured through ProxyHandler.
Free open proxies cost essentially nothing to obtain: collect them from proxy listing sites, test each one, and keep the working ones for the scraper.

Examples of free short-term proxy sites:
Xici Free Proxy IP (西刺免费代理IP)
Kuaidaili Free Proxies (快代理)
Proxy360
Quanwang Proxy IP (全网代理IP)
With a large enough proxy pool, you can pick one at random for each request, just as with a random User-Agent.
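Before trusting a free proxy, it is worth verifying that it still responds. A minimal urllib-based check, where the probe URL http://httpbin.org/ip is my choice and not from the original post:

import urllib.request

def check_proxy(proxy, timeout=5):
    # proxy is a dict like {'http': 'ip:port'}, same shape as the IPS entries below
    opener = urllib.request.build_opener(urllib.request.ProxyHandler(proxy))
    try:
        opener.open('http://httpbin.org/ip', timeout=timeout)  # echoes the caller's IP
        return True
    except Exception:
        return False

# keep only the proxies that still work, e.g.:
# IPS = [p for p in IPS if check_proxy(p)]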

import urllib.request           # HTTP requests and ProxyHandler
import xlwt                     # write results to an .xls workbook
from lxml import etree          # XPath queries
import requests
from bs4 import BeautifulSoup   # HTML parsing
import time                     # throttling between requests
import random                   # random proxy / User-Agent selection

# Pool of User-Agent strings to rotate through
USER_AGENT = [
         "Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_6_8; en-us) AppleWebKit/534.50 (KHTML, like Gecko) Version/5.1 Safari/534.50",
         "Mozilla/5.0 (Windows; U; Windows NT 6.1; en-us) AppleWebKit/534.50 (KHTML, like Gecko) Version/5.1 Safari/534.50",
         "Mozilla/5.0 (Windows NT 10.0; WOW64; rv:38.0) Gecko/20100101 Firefox/38.0",
         "Mozilla/5.0 (Windows NT 10.0; WOW64; Trident/7.0; .NET4.0C; .NET4.0E; .NET CLR 2.0.50727; .NET CLR 3.0.30729; .NET CLR 3.5.30729; InfoPath.3; rv:11.0) like Gecko",
         "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0)",
         "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.0; Trident/4.0)",
         "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0)",
         "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)",
         "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.6; rv:2.0.1) Gecko/20100101 Firefox/4.0.1",
         "Mozilla/5.0 (Windows NT 6.1; rv:2.0.1) Gecko/20100101 Firefox/4.0.1",
         "Opera/9.80 (Macintosh; Intel Mac OS X 10.6.8; U; en) Presto/2.8.131 Version/11.11",
         "Opera/9.80 (Windows NT 6.1; U; en) Presto/2.8.131 Version/11.11",
         "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_0) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11",
         "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Maxthon 2.0)",
         "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; TencentTraveler 4.0)",
         "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)",
         "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; The World)",
         "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Trident/4.0; SE 2.X MetaSr 1.0; SE 2.X MetaSr 1.0; .NET CLR 2.0.50727; SE 2.X MetaSr 1.0)",
         "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; 360SE)",
         "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Avant Browser)",
         "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)",
         "Mozilla/5.0 (iPhone; U; CPU iPhone OS 4_3_3 like Mac OS X; en-us) AppleWebKit/533.17.9 (KHTML, like Gecko) Version/5.0.2 Mobile/8J2 Safari/6533.18.5",
         "Mozilla/5.0 (iPod; U; CPU iPhone OS 4_3_3 like Mac OS X; en-us) AppleWebKit/533.17.9 (KHTML, like Gecko) Version/5.0.2 Mobile/8J2 Safari/6533.18.5",
         "Mozilla/5.0 (iPad; U; CPU OS 4_3_3 like Mac OS X; en-us) AppleWebKit/533.17.9 (KHTML, like Gecko) Version/5.0.2 Mobile/8J2 Safari/6533.18.5",
         "Mozilla/5.0 (Linux; U; Android 2.3.7; en-us; Nexus One Build/FRF91) AppleWebKit/533.1 (KHTML, like Gecko) Version/4.0 Mobile Safari/533.1",
         "MQQBrowser/26 Mozilla/5.0 (Linux; U; Android 2.3.7; zh-cn; MB200 Build/GRJ22; CyanogenMod-7) AppleWebKit/533.1 (KHTML, like Gecko) Version/4.0 Mobile Safari/533.1",
         "Opera/9.80 (Android 2.3.4; Linux; Opera Mobi/build-1107180945; U; en-GB) Presto/2.8.149 Version/11.10",
         "Mozilla/5.0 (Linux; U; Android 3.0; en-us; Xoom Build/HRI39) AppleWebKit/534.13 (KHTML, like Gecko) Version/4.0 Safari/534.13",
         "Mozilla/5.0 (BlackBerry; U; BlackBerry 9800; en) AppleWebKit/534.1+ (KHTML, like Gecko) Version/6.0.0.337 Mobile Safari/534.1+",
         "Mozilla/5.0 (hp-tablet; Linux; hpwOS/3.0.0; U; en-US) AppleWebKit/534.6 (KHTML, like Gecko) wOSBrowser/233.70 Safari/534.6 TouchPad/1.0",
         "Mozilla/5.0 (SymbianOS/9.4; Series60/5.0 NokiaN97-1/20.0.019; Profile/MIDP-2.1 Configuration/CLDC-1.1) AppleWebKit/525 (KHTML, like Gecko) BrowserNG/7.1.18124",
         "Mozilla/5.0 (compatible; MSIE 9.0; Windows Phone OS 7.5; Trident/5.0; IEMobile/9.0; HTC; Titan)",
         "UCWEB7.0.2.37/28/999",
         "NOKIA5700/ UCWEB7.0.2.37/28/999",
         "Openwave/ UCWEB7.0.2.37/28/999",
         "Mozilla/4.0 (compatible; MSIE 6.0; ) Opera/UCWEB7.0.2.37/28/999",
         # iPhone 6:
         "Mozilla/6.0 (iPhone; CPU iPhone OS 8_0 like Mac OS X) AppleWebKit/536.26 (KHTML, like Gecko) Version/8.0 Mobile/10A5376e Safari/8536.25",

     ]
# Pool of free proxy IPs (these are short-lived; refresh them regularly)
IPS = [
    {'http': '115.223.253.171:9000'},
    {'http': '58.254.220.116:53579'},
    {'http': '119.254.94.103:45431'},
    {'http': '117.35.57.196:80'},
    {'http': '221.2.155.35:8060'},
    {'http': '118.190.95.35:9001'},
    {'http': '124.235.181.175:80'},
    {'http': '110.73.6.70:8123'},
    {'http': '110.73.0.121:8123'},
    {'http': '222.94.145.158:808'},
    {'http': '112.230.247.164:8060'},
    {'http': '121.196.196.105:80'},
    {'http': '219.145.170.23'},      # no port given; http defaults to port 80
    {'http': '115.218.215.184:9000'},
    {'http': '47.106.92.90:8081'},
]

Fetch the directory URL and scrape the faculty listed there (filtering, for example by title, can be done afterwards; see the sketch after get_data below).

# Last-name initials used by the directory's filter
k = ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 'N', 'O', 'P', 'Q', 'R', 'S', 'T', 'U', 'V', 'W',
     'X', 'Y', 'Z']
def get_html(i):
    """Fetch the directory page for last names starting with k[i]."""
    proxy = random.choice(IPS)                           # random proxy per request
    header2 = {"User-Agent": random.choice(USER_AGENT)}  # random UA per request
    httpproxy_handler = urllib.request.ProxyHandler(proxy)
    opener = urllib.request.build_opener(httpproxy_handler)
    url = 'https://www.gsb.stanford.edu/faculty-research/faculty?last_name=' + k[i]
    req = urllib.request.Request(url, headers=header2)
    response = opener.open(req)
    html = response.read().decode('utf-8')
    return html
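A quick smoke test of get_html, assuming at least one proxy in IPS is still alive:

html = get_html(0)    # directory page for last names beginning with 'A'
print(html[:200])     # first 200 characters, just to confirm the fetch succeeded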

The directory does not place all faculty on one page; it groups them by the first letter of the last name. So the first pass collects every faculty member's name and personal homepage URL.

# names, PersonalWebsites and jobs are module-level lists, initialized in main
def get_data(html):
    soup = BeautifulSoup(html, 'html.parser')
    div_people_list = soup.find_all('div', attrs={'class': 'field-item even'})
    div_job_list = soup.find_all('div', attrs={'class': 'person-position'})
    # names and personal homepage URLs
    for item in div_people_list:
        for a in item.find_all('a'):
            href = a['href'].replace(' ', '').replace('\n', '')
            names.append(a.get_text())
            PersonalWebsites.append('https://www.gsb.stanford.edu%s' % href)
    # job titles; .string is None when the div contains nested markup
    for jlist in div_job_list:
        jobs.append(jlist.string if jlist.string is not None else 'null')
    time.sleep(2)  # throttle between pages
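Since the requirement is to keep only the four professorial ranks, one way is to post-filter the parallel lists after scraping. A sketch; the startswith check is my assumption to cover longer title strings such as 'Professor of Finance', which I have not verified against the page:

TARGET_TITLES = ('Professor', 'Associate Professor',
                 'Assistant Professor', 'Professor Emeritus')

# keep the indices whose job title matches one of the four ranks
keep = [i for i, job in enumerate(jobs)
        if any(str(job).startswith(t) for t in TARGET_TITLES)]
names = [names[i] for i in keep]
PersonalWebsites = [PersonalWebsites[i] for i in keep]
jobs = [jobs[i] for i in keep]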

Each time the URL changes, time.sleep() and response.close() help keep the request rate from looking too aggressive.
The next step is to visit each faculty member's personal homepage and scrape the desired fields:

# Academics, emails, tels, Backgrounds and InterestsAreas are module-level lists
def get_data2(url):
    proxy = random.choice(IPS)
    print(proxy)                                         # log which proxy is in use
    header2 = {"User-Agent": random.choice(USER_AGENT)}
    httpproxy_handler = urllib.request.ProxyHandler(proxy)
    opener = urllib.request.build_opener(httpproxy_handler)
    req = urllib.request.Request(url, headers=header2)
    response = opener.open(req)
    html = response.read().decode('utf-8')
    soup = BeautifulSoup(html, 'html.parser')  # html.parser is the parser
    s = etree.HTML(html)                       # parse once, reuse for every XPath query

    # Academic Area
    aca = soup.find('div', attrs={'class': 'field-name-field-academic-area-single'})
    if aca is not None:
        academic = aca.find('a').get_text()
    else:
        academic = 'null'
    Academics.append(academic)

    # Email
    emi = s.xpath('//*[@id="block-system-main"]/div/div/div[1]/fieldset/div/div[2]/div/div/span/a/@href')
    emails.append(emi[0] if emi else 'null')

    # Telephone
    te = s.xpath('//*[@id="block-system-main"]/div/div/div[1]/fieldset/div/div[1]/div[1]/div/div/a/text()')
    tels.append(te[0] if te else 'null')

    # Background introduction
    backgro = s.xpath('//*[@id="block-system-main"]/div/div/div[2]/div/div[1]/div[1]/p[1]/text()')
    Backgrounds.append(backgro[0] if backgro else 'null')

    # Research Interests
    inter = s.xpath('//*[@id="block-system-main"]/div/div/div[2]/div[4]/div/ul/li/text()')
    InterestsAreas.append('  '.join(inter) if inter else 'null')

    response.close()
    time.sleep(2)  # throttle between requests

Finally, write the main function that drives these routines. The complete source and the final results are available in my uploaded resources (Stanford and Harvard are included).
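The full source is not reproduced here, but a minimal sketch of what such a driver might look like follows; the result-list initialization, sheet layout, and output file name are my assumptions, not the original code:

names, PersonalWebsites, jobs = [], [], []
Academics, emails, tels, Backgrounds, InterestsAreas = [], [], [], [], []

if __name__ == '__main__':
    for i in range(len(k)):            # one directory page per last-name initial
        get_data(get_html(i))
    for url in PersonalWebsites:       # one personal homepage per faculty member
        get_data2(url)

    # dump everything into an .xls workbook with xlwt
    book = xlwt.Workbook(encoding='utf-8')
    sheet = book.add_sheet('Stanford')
    header = ['Name', 'Title', 'Homepage', 'Academic Area',
              'Email', 'Tel', 'Background', 'Research Interests']
    for col, title in enumerate(header):
        sheet.write(0, col, title)
    rows = zip(names, jobs, PersonalWebsites, Academics,
               emails, tels, Backgrounds, InterestsAreas)
    for r, row in enumerate(rows, start=1):
        for c, value in enumerate(row):
            sheet.write(r, c, str(value))
    book.save('stanford_faculty.xls')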
One problem I ran into: I originally used BeautifulSoup selectors to locate elements, but nested tags cause trouble.
For example, given a paragraph like <p>m,ssmn<br></br>nbmd<a href='www.baidu.com'>wewerw</a></p>: asking for all the text between the <p> tags failed and returned an empty list, while locating the <p> with XPath and reading text() returns only the leading fragment, not the full text.
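For what it's worth, two standard approaches that do flatten nested tags, shown as a sketch on the sample paragraph above:

from lxml import etree
from bs4 import BeautifulSoup

html = "<p>m,ssmn<br/>nbmd<a href='www.baidu.com'>wewerw</a></p>"

# XPath's string() concatenates every descendant text node
tree = etree.HTML(html)
print(tree.xpath('string(//p)'))   # m,ssmnnbmdwewerw

# BeautifulSoup's get_text() does the same on a Tag
soup = BeautifulSoup(html, 'html.parser')
print(soup.p.get_text())           # m,ssmnnbmdwewerw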

If anyone has a better solution to this, feel free to leave a comment.
