[Crawler Basics] Extracting web page information in Java with regular expressions

For web crawling, Java is not as convenient as Python. This article extracts information using regular expressions only; to pull information out of an HTML document more precisely, you need an HTML parser, available through third-party libraries such as Jsoup.

Our goal: extract the titles of Douban's Top 250 movies.

Without an HTML parser this is fairly difficult. First we fetch the page. The HttpClient API, incubated in JDK 9 and standardized as java.net.http in JDK 11, makes this much simpler than the old HttpURLConnection approach.
package newHTTP;

import java.io.IOException;
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class HttpClientDoPost
{
	public static void main(String[] args) throws InterruptedException, IOException
	{
		doPost();
	}

	public static void doPost() throws InterruptedException
	{
		try
		{
			// Create the client
			HttpClient client = HttpClient.newHttpClient();
			// Define the request and configure its parameters
			// (the form body below only demonstrates POST; Douban ignores it)
			HttpRequest request = HttpRequest.newBuilder()
					.uri(URI.create("https://movie.douban.com/top250"))
					.header("User-Agent", "HTTPie/0.9.2")
					.header("Content-Type", "application/x-www-form-urlencoded;charset=utf-8")
					.POST(HttpRequest.BodyPublishers.ofString("tAddress=" + URLEncoder.encode("1 Market Street", "UTF-8")
							+ "&tCity=" + URLEncoder.encode("San Francisco", "UTF-8")))
					.build();
			// Send the request and receive the page
			HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
			System.out.println(response.body());
		} catch (IOException e)
		{
			e.printStackTrace();
		}
	}
}
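Since all we really need is to download the page, a plain GET with the same API is simpler still: GET carries no body, so neither the Content-Type header nor the BodyPublisher is needed. A minimal sketch (building the request without sending it, so it runs offline):

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class HttpClientDoGet
{
	public static void main(String[] args)
	{
		// GET is the default method; no body or Content-Type header required
		HttpRequest request = HttpRequest.newBuilder()
				.uri(URI.create("https://movie.douban.com/top250"))
				.header("User-Agent", "HTTPie/0.9.2")
				.GET()
				.build();
		System.out.println(request.method() + " " + request.uri());
		// client.send(request, HttpResponse.BodyHandlers.ofString()) would then fetch the page
	}
}
```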

Once we have the page, we notice that every movie title sits in an img tag's alt attribute, so we use a lookbehind (a zero-width assertion) to pin down its position. Two Java details matter here: inside a string literal \s must be written \\s, and a lookbehind group must have an obvious maximum length, so the open-ended alt.* has to be replaced with a bounded repetition:

Pattern pattern=Pattern.compile("[!\\s·,:()\\u4e00-\\u9fa5]*\\d{0,4}[!\\s·,:()\\u4e00-\\u9fa5]+(?<=alt=\"[^\"]{0,30})");
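To see the lookbehind in isolation, here is a small self-contained demo. The HTML fragment is made up (shaped like Douban's markup after spaces are stripped), and the pattern is reduced to just Chinese characters plus the bounded lookbehind:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class LookbehindDemo
{
	public static void main(String[] args)
	{
		// Hand-written fragment shaped like the Douban markup, spaces already stripped
		String html = "<imgwidth=\"100\"alt=\"肖申克的救赎\"src=\"x.jpg\">";
		// Match a run of Chinese characters, then assert that alt=" precedes it
		// within at most 30 non-quote characters (Java requires a bounded lookbehind)
		Pattern p = Pattern.compile("[\\u4e00-\\u9fa5]+(?<=alt=\"[^\"]{0,30})");
		Matcher m = p.matcher(html);
		while (m.find())
		{
			System.out.println(m.group()); // prints 肖申克的救赎
		}
	}
}
```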

The full program is short:

package spider;

import java.io.IOException;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.net.http.HttpResponse.BodyHandlers;
import java.nio.charset.StandardCharsets;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class DoubanTop250
{
	public static void main(String[] args) throws InterruptedException
	{
		int count = 0;
		for (int i = 0; i < 10; i++)
		{
			// Each page lists 25 movies; strip spaces to simplify matching
			String content = getHtml("https://movie.douban.com/top250?start=" + i * 25).replace(" ", "");
			// Regular expression built from the title characters and their position after alt="
			Pattern pattern = Pattern.compile("[!\\s·,:()\\u4e00-\\u9fa5]*\\d{0,4}[!\\s·,:()\\u4e00-\\u9fa5]+(?<=alt=\"[^\"]{0,30})");
			Matcher matcher = pattern.matcher(content);

			while (matcher.find())
			{
				String match = matcher.group();
				// Skip site boilerplate such as "豆瓣"
				if (!match.contains("豆瓣"))
				{
					System.out.println(match);
					count++;
				}
			}
		}
		System.out.println(count);
	}

	private static String getHtml(String url) throws InterruptedException
	{
		try
		{
			HttpClient client = HttpClient.newHttpClient();
			HttpRequest request = HttpRequest.newBuilder(URI.create(url)).build();
			HttpResponse<String> response = client.send(request, BodyHandlers.ofString(StandardCharsets.UTF_8));
			return response.body();
		} catch (IOException e)
		{
			e.printStackTrace();
			return null;
		}
	}
}
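A sturdier regex alternative, worth knowing even before reaching for a parser, is to sidestep matching the title characters entirely and instead capture whatever sits between alt=" and the closing quote. This sketch runs on a hand-written fragment (not the live page) and handles titles with punctuation that the character-class approach misses:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class AltCapture
{
	public static void main(String[] args)
	{
		// Made-up fragment with spaces already stripped, as in the crawler above
		String content = "<imgalt=\"色,戒\"src=\"a.jpg\"><imgalt=\"肖申克的救赎\"src=\"b.jpg\">";
		// Capture group 1 is everything between alt=" and the next quote
		Pattern p = Pattern.compile("alt=\"([^\"]+)\"");
		Matcher m = p.matcher(content);
		while (m.find())
		{
			System.out.println(m.group(1)); // prints 色,戒 then 肖申克的救赎
		}
	}
}
```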

That said, despite the author's best efforts the program never reached all 250: movie titles take too many forms to match reliably (色,戒 / Lust, Caution, for instance, never shows up). To scrape content faster and more robustly, you need a parser.
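As a taste of the parser route, here is a minimal sketch with the third-party Jsoup library mentioned at the start. It parses a hand-written fragment shaped like Douban's markup rather than the live page, and the CSS selector is an assumption about that markup, not something verified against the real site:

```java
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

public class JsoupSketch
{
	public static void main(String[] args)
	{
		// Hand-written stand-in for the real page; div.item is assumed structure
		String html = "<div class=\"item\"><img alt=\"肖申克的救赎\"></div>"
				+ "<div class=\"item\"><img alt=\"色,戒\"></div>";
		Document doc = Jsoup.parse(html);
		// Select every img with an alt attribute inside an item block
		for (Element img : doc.select("div.item img[alt]"))
		{
			System.out.println(img.attr("alt"));
		}
	}
}
```

Against the live site you would replace Jsoup.parse(html) with Jsoup.connect(url).get(); either way, no amount of punctuation in a title can break attribute-based extraction.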

Reposted from blog.csdn.net/m0_47202518/article/details/108330913