Java Web Crawler 01: Fetching Web Content with HttpClient


A web crawler is a program or script that automatically fetches information from the World Wide Web according to a set of rules. We normally browse web pages over HTTP, and a crawler does the same thing programmatically: it sends HTTP requests and processes the responses. In this article we use Apache HttpClient, an HTTP client library for Java, to fetch page data.

Environment setup

Add the Maven dependencies

    <!-- HttpClient -->
    <dependency>
      <groupId>org.apache.httpcomponents</groupId>
      <artifactId>httpclient</artifactId>
      <version>4.5.3</version>
    </dependency>

    <!-- Logging -->
    <dependency>
      <groupId>org.slf4j</groupId>
      <artifactId>slf4j-log4j12</artifactId>
      <version>1.7.25</version>
    </dependency>

Add a log4j.properties configuration file (typically under src/main/resources):

log4j.rootLogger=DEBUG,A1
log4j.logger.cn.itcast = DEBUG

log4j.appender.A1=org.apache.log4j.ConsoleAppender
log4j.appender.A1.layout=org.apache.log4j.PatternLayout
log4j.appender.A1.layout.ConversionPattern=%-d{yyyy-MM-dd HH:mm:ss,SSS} [%t] [%c]-[%p] %m%n

HTTP GET request

The code is as follows:

public static void getTest() throws Exception {
    // Create an HttpClient instance
    CloseableHttpClient httpClient = HttpClients.createDefault();
    // Create the GET request with the target URL
    HttpGet httpGet = new HttpGet("http://www.itcast.cn?pava=zhangxm");
    CloseableHttpResponse response = httpClient.execute(httpGet);
    try {
        // A status code of 200 means success; read the response body as a UTF-8 string
        if (response.getStatusLine().getStatusCode() == 200) {
            String content = EntityUtils.toString(response.getEntity(), "UTF-8");
            System.out.println(content);
        }
    } finally {
        // Release the connection and the client
        response.close();
        httpClient.close();
    }
}
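
If the query string needs to be built from variables rather than hard-coded into the URL, HttpClient's URIBuilder (in org.apache.http.client.utils) can assemble it. A minimal sketch under that assumption; getWithParamsTest is just an illustrative method name and pava/zhangxm are the placeholder parameter and value from the example above:

public static void getWithParamsTest() throws Exception {
    CloseableHttpClient httpClient = HttpClients.createDefault();
    // Build the URI and attach query parameters without string concatenation
    URI uri = new URIBuilder("http://www.itcast.cn")
            .setParameter("pava", "zhangxm")   // hypothetical parameter, as in the example above
            .build();
    HttpGet httpGet = new HttpGet(uri);
    CloseableHttpResponse response = httpClient.execute(httpGet);
    try {
        if (response.getStatusLine().getStatusCode() == 200) {
            System.out.println(EntityUtils.toString(response.getEntity(), "UTF-8"));
        }
    } finally {
        response.close();
        httpClient.close();
    }
}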

HTTP POST request

/**
 * Send a POST request with form parameters.
 * @throws Exception
 */
public static void postTest() throws Exception {
    // Create the HttpClient instance
    CloseableHttpClient httpClient = HttpClients.createDefault();

    RequestConfig requestConfig = RequestConfig.custom()
            .setConnectTimeout(1000)            // maximum time to establish the connection
            .setConnectionRequestTimeout(500)   // maximum time to obtain a connection from the manager
            .setSocketTimeout(10 * 1000)        // maximum time to wait for data
            .build();

    // Create the HttpPost request
    HttpPost httpPost = new HttpPost("http://www.itcast.cn/");
    httpPost.setConfig(requestConfig);
    CloseableHttpResponse response = null;
    try {
        // Collect the form parameters
        List<NameValuePair> params = new ArrayList<NameValuePair>();
        params.add(new BasicNameValuePair("pava", "zhangxm"));

        // Create the form entity
        UrlEncodedFormEntity formEntity = new UrlEncodedFormEntity(params, "UTF-8");

        // Attach the form entity to the POST request
        httpPost.setEntity(formEntity);
        // Execute the request with HttpClient
        response = httpClient.execute(httpPost);
        // A status code of 200 means the request succeeded
        if (response.getStatusLine().getStatusCode() == 200) {
            // Read and print the response content
            String content = EntityUtils.toString(response.getEntity(), "UTF-8");
            System.out.println(content);
        }

    } catch (Exception e) {
        e.printStackTrace();
    } finally {
        // Release the connection and the client
        if (response != null) {
            try {
                response.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
        httpClient.close();
    }
}
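
If the same timeouts should apply to every request, the RequestConfig can also be registered once as the client's default rather than set on each request. A minimal sketch of that variant, using the same timeout values as above:

RequestConfig defaultConfig = RequestConfig.custom()
        .setConnectTimeout(1000)
        .setConnectionRequestTimeout(500)
        .setSocketTimeout(10 * 1000)
        .build();

// Every request sent through this client uses defaultConfig unless it sets its own config
CloseableHttpClient httpClient = HttpClients.custom()
        .setDefaultRequestConfig(defaultConfig)
        .build();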

HttpClient connection pool

Creating a new HttpClient for every request means constantly constructing and destroying clients. A connection pool (PoolingHttpClientConnectionManager) avoids this repeated setup and teardown.

/**
 * Use an HttpClient connection pool so a client does not have to be
 * created and destroyed for every request.
 * @throws IOException
 */
public static void connPoolTest() throws IOException {
    PoolingHttpClientConnectionManager cm = new PoolingHttpClientConnectionManager();
    // Maximum total number of connections in the pool
    cm.setMaxTotal(200);
    // Maximum number of concurrent connections per host (route)
    cm.setDefaultMaxPerRoute(20);
    for (int i = 0; i < 10; i++) {
        // Each client obtained here shares the pooled connection manager
        CloseableHttpClient httpClient = HttpClients.custom().setConnectionManager(cm).build();
        System.out.println("httpClient:" + httpClient);
        // Note: do not close the client here; closing it would also shut down the
        // shared connection manager. The pool manages the connections instead.
    }
}
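
The loop above only shows that clients can be obtained from the shared pool; to actually fetch a page, a request is executed through a pooled client in the usual way. A minimal sketch reusing the example URL from above (doGetWithPool is just an illustrative helper name):

public static void doGetWithPool(PoolingHttpClientConnectionManager cm) throws IOException {
    // The client borrows connections from the shared pool, so it is not closed here
    CloseableHttpClient httpClient = HttpClients.custom().setConnectionManager(cm).build();
    HttpGet httpGet = new HttpGet("http://www.itcast.cn/");
    CloseableHttpResponse response = httpClient.execute(httpGet);
    try {
        if (response.getStatusLine().getStatusCode() == 200) {
            System.out.println(EntityUtils.toString(response.getEntity(), "UTF-8"));
        }
    } finally {
        // Closing the response returns the underlying connection to the pool
        response.close();
    }
}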

Reposted from blog.csdn.net/zhangxm_qz/article/details/109443783