Android study notes: a simple understanding of the principles of OkHttp

OkHttp


0. Foreword

This article mainly covers the following topics:

  • 1. What is the overall flow of an OkHttp request?
  • 2. How does the OkHttp dispatcher work?
  • 3. How do the OkHttp interceptors work?
  • 4. How does OkHttp reuse TCP connections?
  • 5. How does OkHttp clear idle connections?
  • 6. What are the advantages of OkHttp?
  • 7. What design patterns are used in the OkHttp framework?

(Figures omitted: the hierarchical structure diagram of OkHttp's subsystems, and the key roles in the overall OkHttp system.)

1. Request and response process


  • (1) We create a Call through OkHttpClient and initiate a synchronous or asynchronous request (a minimal usage example follows this list);
  • (2) OkHttp manages all of our RealCall instances (the concrete implementation of Call) through the Dispatcher, and handles synchronous or asynchronous requests via the execute() and enqueue() methods;
  • (3) Both execute() and enqueue() eventually call RealCall's getResponseWithInterceptorChain() method to obtain the result from the interceptor chain;
  • (4) In the interceptor chain, the request passes in turn through RetryAndFollowUpInterceptor (the retry and redirect interceptor), BridgeInterceptor (the bridge interceptor), CacheInterceptor (the cache interceptor), ConnectInterceptor (the connect interceptor), and CallServerInterceptor (the network interceptor). After retries, cache handling, and establishing a connection with the server, the response data is obtained and then handed back up through the interceptors in reverse order, and the result is finally returned to the caller.
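As a concrete reference point, here is a minimal usage example (standard OkHttp 3.x API, assuming okhttp3.* imports; the URL is only a placeholder) covering both the synchronous and the asynchronous path:

OkHttpClient client = new OkHttpClient();
Request request = new Request.Builder()
    .url("https://example.com/")   // placeholder URL
    .build();

// Synchronous: execute() blocks the calling thread until the Response is available.
try (Response response = client.newCall(request).execute()) {
  System.out.println(response.code());
} catch (IOException e) {
  e.printStackTrace();
}

// Asynchronous: enqueue() hands the call to the Dispatcher; the callback runs on a worker thread.
client.newCall(request).enqueue(new Callback() {
  @Override public void onFailure(Call call, IOException e) {
    e.printStackTrace();
  }

  @Override public void onResponse(Call call, Response response) throws IOException {
    System.out.println(response.code());
    response.close();
  }
});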

(Figure omitted: sequence diagram of the calling process.)

1.1 Encapsulation of requests

Requests are sent by OkHttp, and the real request is encapsulated in RealCall, the implementation class of the Call interface, as shown below:

The Call interface looks like this:

public interface Call extends Cloneable {

  // Returns the original request that initiated this call
  Request request();

  // Synchronous request: blocks the current thread until the result is returned
  Response execute() throws IOException;

  // Asynchronous request: adds the request to a queue and waits for the callback
  void enqueue(Callback responseCallback);

  // Cancels the request
  void cancel();

  // Returns true once execute() or enqueue(Callback responseCallback) has been called
  boolean isExecuted();

  // Whether the request has been canceled
  boolean isCanceled();

  // Creates an identical new call
  Call clone();

  interface Factory {
    Call newCall(Request request);
  }
}

The construction method of RealCall is as follows:

final class RealCall implements Call {

  private RealCall(OkHttpClient client, Request originalRequest, boolean forWebSocket) {
    // The OkHttpClient we built, used to pass configuration along
    this.client = client;
    this.originalRequest = originalRequest;
    // Whether this is a WebSocket request; WebSocket is used for long-lived connections and is covered later
    this.forWebSocket = forWebSocket;
    // Build the RetryAndFollowUpInterceptor
    this.retryAndFollowUpInterceptor = new RetryAndFollowUpInterceptor(client, forWebSocket);
  }
}

RealCall implements the Call interface and encapsulates the execution of a request. The logic of its constructor is very simple:

  • it stores the OkHttpClient, Request and forWebSocket values passed in from outside;
  • it creates the retry and redirect interceptor, RetryAndFollowUpInterceptor.

1.2 Sending of requests

OkHttp requests are divided into two types, synchronous and asynchronous:

  • 1. A synchronous request returns the Response directly by calling the Call.execute() method
    • Because synchronous requests do not go through the thread pool, there is no concurrency limit, so the dispatcher just records them; subsequent synchronous requests run in the order they were added
synchronized void executed(RealCall call) {
  runningSyncCalls.add(call);
}


final class RealCall implements Call {

  @Override public Response execute() throws IOException {
    synchronized (this) {
      if (executed) throw new IllegalStateException("Already Executed");
      executed = true;
    }
    captureCallStackTrace();
    try {
      client.dispatcher().executed(this);
      Response result = getResponseWithInterceptorChain();
      if (result == null) throw new IOException("Canceled");
      return result;
    } finally {
      client.dispatcher().finished(this);
    }
  }
}


  • 2. An asynchronous request calls the Call.enqueue() method, which adds the request (wrapped as an AsyncCall) to a request queue
    • When the number of running tasks does not exceed the maximum of 64 and the number of requests to the same host does not exceed 5, the call is added to the running queue and submitted to the thread pool at the same time; otherwise it joins the waiting queue first.
synchronized void enqueue(AsyncCall call) {
  // At most 64 running requests overall, and at most 5 requests to the same host
  if (runningAsyncCalls.size() < maxRequests && runningCallsForHost(call) < maxRequestsPerHost) {
    runningAsyncCalls.add(call);
    executorService().execute(call);
  } else {
    readyAsyncCalls.add(call);
  }
}


final class RealCall implements Call {

  @Override public void enqueue(Callback responseCallback) {
    synchronized (this) {
      if (executed) throw new IllegalStateException("Already Executed");
      executed = true;
    }
    captureCallStackTrace();
    client.dispatcher().enqueue(new AsyncCall(responseCallback));
  }
}

AsyncCall is essentially a Runnable; the Dispatcher schedules an ExecutorService to execute these Runnables.

final class AsyncCall extends NamedRunnable {
  private final Callback responseCallback;

  AsyncCall(Callback responseCallback) {
    super("OkHttp %s", redactedUrl());
    this.responseCallback = responseCallback;
  }

  String host() {
    return originalRequest.url().host();
  }

  Request request() {
    return originalRequest;
  }

  RealCall get() {
    return RealCall.this;
  }

  @Override protected void execute() {
    boolean signalledCallback = false;
    try {
      Response response = getResponseWithInterceptorChain();
      if (retryAndFollowUpInterceptor.isCanceled()) {
        signalledCallback = true;
        responseCallback.onFailure(RealCall.this, new IOException("Canceled"));
      } else {
        signalledCallback = true;
        responseCallback.onResponse(RealCall.this, response);
      }
    } catch (IOException e) {
      if (signalledCallback) {
        // Do not signal the callback twice!
        Platform.get().log(INFO, "Callback failure for " + toLoggableString(), e);
      } else {
        responseCallback.onFailure(RealCall.this, e);
      }
    } finally {
      client.dispatcher().finished(this);
    }
  }
}


Whether the request is synchronous or asynchronous, the Response is ultimately obtained from getResponseWithInterceptorChain(); the asynchronous path just adds thread scheduling and asynchronous execution on top.

1.3 Scheduling of requests

public final class Dispatcher {

  private int maxRequests = 64;
  private int maxRequestsPerHost = 5;

  /** Ready async calls in the order they'll be run. */
  private final Deque<AsyncCall> readyAsyncCalls = new ArrayDeque<>();

  /** Running asynchronous calls. Includes canceled calls that haven't finished yet. */
  private final Deque<AsyncCall> runningAsyncCalls = new ArrayDeque<>();

  /** Running synchronous calls. Includes canceled calls that haven't finished yet. */
  private final Deque<RealCall> runningSyncCalls = new ArrayDeque<>();

  /** Used by {@code Call#execute} to signal it is in-flight. */
  synchronized void executed(RealCall call) {
    runningSyncCalls.add(call);
  }

  synchronized void enqueue(AsyncCall call) {
    // No more than 64 running async requests, and no more than 5 per host
    if (runningAsyncCalls.size() < maxRequests && runningCallsForHost(call) < maxRequestsPerHost) {
      runningAsyncCalls.add(call);
      executorService().execute(call);
    } else {
      readyAsyncCalls.add(call);
    }
  }
}

Dispatcher is a task scheduler that internally maintains three double-ended queues:

  • readyAsyncCalls: asynchronous requests waiting to run
  • runningAsyncCalls: asynchronous requests currently running
  • runningSyncCalls: synchronous requests currently running

1. A synchronous request is added directly to the running synchronous queue runningSyncCalls.

2. An asynchronous request is checked first:

  • If fewer than 64 asynchronous requests are running and fewer than 5 requests are running against the same host, the request is added to the running asynchronous queue runningAsyncCalls and executed immediately; otherwise it is added to readyAsyncCalls to wait. When a running call finishes, waiting calls are promoted, as shown in the sketch below.
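A simplified sketch of that promotion step (based on the OkHttp 3.x Dispatcher; newer versions use an equivalent promoteAndExecute() method):

// Called from finished(call): after a call completes, move waiting calls into the running queue.
private void promoteCalls() {
  if (runningAsyncCalls.size() >= maxRequests) return;   // already at the 64-call limit
  if (readyAsyncCalls.isEmpty()) return;                 // nothing is waiting

  for (Iterator<AsyncCall> i = readyAsyncCalls.iterator(); i.hasNext(); ) {
    AsyncCall call = i.next();

    // Only promote a call if its host is below the per-host limit (5).
    if (runningCallsForHost(call) < maxRequestsPerHost) {
      i.remove();
      runningAsyncCalls.add(call);
      executorService().execute(call);
    }

    if (runningAsyncCalls.size() >= maxRequests) return; // reached capacity again
  }
}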

1.4 Processing of requests

getResponseWithInterceptorChain() is where the request is actually initiated and processed.

final class RealCall implements Call {

  Response getResponseWithInterceptorChain() throws IOException {
    // Build a full stack of interceptors.
    List<Interceptor> interceptors = new ArrayList<>();
    // Note: our custom application interceptors are executed first
    interceptors.addAll(client.interceptors());
    // Add the retry and redirect interceptor
    interceptors.add(retryAndFollowUpInterceptor);
    interceptors.add(new BridgeInterceptor(client.cookieJar()));
    interceptors.add(new CacheInterceptor(client.internalCache()));
    interceptors.add(new ConnectInterceptor(client));
    if (!forWebSocket) {
      interceptors.addAll(client.networkInterceptors());
    }
    interceptors.add(new CallServerInterceptor(forWebSocket));

    Interceptor.Chain chain = new RealInterceptorChain(
        interceptors, null, null, null, 0, originalRequest);
    return chain.proceed(originalRequest);
  }
}

Interceptors unify concerns such as network requests, caching, and transparent compression. The implementation uses the chain-of-responsibility pattern: each interceptor performs its own duty, and after an upper-level interceptor finishes its processing it passes the request to the next one; connected together they form an Interceptor.Chain.

Position determines function: the interceptor placed earlier executes first, and the last one is the one that actually communicates with the server. The request is passed down layer by layer from RetryAndFollowUpInterceptor to CallServerInterceptor, each layer processing it as needed; the processed result is then returned layer by layer from CallServerInterceptor back to RetryAndFollowUpInterceptor, and the initiator of the request finally gets the result returned by the server.

A note on the pattern:

A chain of responsibility, as the name suggests, is an execution chain for handling related responsibilities. There are multiple nodes on the chain, and each node has the opportunity (when its condition matches) to process the request, either passing it on to the next node or returning a finished result.
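To make the pattern concrete, here is a minimal custom application interceptor (a common OkHttp usage pattern; the class name and the timing logic are only illustrative) that does some work in the Request phase, calls chain.proceed(), and then does some work in the Response phase:

class TimingInterceptor implements Interceptor {
  @Override public Response intercept(Chain chain) throws IOException {
    Request request = chain.request();                      // 1. Request phase
    long start = System.nanoTime();

    Response response = chain.proceed(request);             // 2. hand off to the next node in the chain

    long tookMs = (System.nanoTime() - start) / 1_000_000;  // 3. Response phase
    System.out.println("Fetched " + request.url() + " in " + tookMs + " ms");
    return response;
  }
}

// Registered when building the client:
// OkHttpClient client = new OkHttpClient.Builder().addInterceptor(new TimingInterceptor()).build();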

2. Interceptor

The chain is assembled in the following order, each interceptor responsible for one concern:

  • Application interceptors (client.interceptors()): user-defined interceptors, executed first
  • RetryAndFollowUpInterceptor: failure retries and redirects
  • BridgeInterceptor: fills in the missing request headers and handles cookies and gzip
  • CacheInterceptor: reads and updates the cache
  • ConnectInterceptor: establishes the connection with the server
  • Network interceptors (client.networkInterceptors()): user-defined, executed just before the request goes onto the network
  • CallServerInterceptor: writes the request to and reads the response from the server
The method of each interceptor follows these rules:

@Override public Response intercept(Chain chain) throws IOException {
  Request request = chain.request();
  // 1. Request phase: do this interceptor's work on the way in

  // 2. Call RealInterceptorChain.proceed(), which recursively invokes the next interceptor's intercept()
  Response response = ((RealInterceptorChain) chain).proceed(request, streamAllocation, null, null);

  // 3. Response phase: do this interceptor's work on the way back, then return to the previous interceptor
  return response;
}

From the above description we can see that:

  • The Request is processed forward in interceptor order, while the Response is processed in reverse. This mirrors the design of the OSI seven-layer model.
  • CallServerInterceptor is equivalent to the lowest, physical layer: requests are wrapped and passed down layer by layer, and responses are wrapped and returned layer by layer from bottom to top. A very nice design.

The execution order of the interceptor: RetryAndFollowUpInterceptor -> BridgeInterceptor ->
CacheInterceptor -> ConnectInterceptor -> CallServerInterceptor.

2.1 RetryAndFollowUpInterceptor

RetryAndFollowUpInterceptor is responsible for failure retries and redirects. A simplified sketch of its main loop follows.
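The sketch below is modeled on the OkHttp 3.x implementation; error handling and the StreamAllocation bookkeeping are omitted, and helpers such as recover() and followUpRequest() come from that version:

// Loop: send the request; on a recoverable failure retry it, on a redirect/auth challenge follow up.
while (true) {
  Response response;
  try {
    response = realChain.proceed(request, streamAllocation, null, null);
  } catch (IOException e) {
    if (!recover(e, streamAllocation, requestSendStarted, request)) throw e;
    continue;                                   // retry the same request
  }

  Request followUp = followUpRequest(response); // builds the redirect / retry request, or null
  if (followUp == null) {
    return response;                            // done, hand the response back up the chain
  }
  if (++followUpCount > MAX_FOLLOW_UPS) {       // MAX_FOLLOW_UPS is 20 in OkHttp
    throw new ProtocolException("Too many follow-up requests: " + followUpCount);
  }
  request = followUp;
}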

2.2 BridgeInterceptor

The retry and redirect interceptor only does real work when it encounters an exception or a redirect during the request. After it receives the request, it passes it straight down the interceptor chain to the next interceptor, i.e. to BridgeInterceptor.

BridgeInterceptor is called the header-building interceptor because the Request we set up usually lacks some header information; at this point BridgeInterceptor fills the missing headers into the Request, as sketched below.
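A trimmed sketch of the headers it fills in (based on the OkHttp 3.x BridgeInterceptor; cookie handling, Content-Length, and the response-side gzip decompression are omitted; hostHeader is a helper in okhttp3.internal.Util):

Request userRequest = chain.request();
Request.Builder requestBuilder = userRequest.newBuilder();

RequestBody body = userRequest.body();
if (body != null && body.contentType() != null) {
  requestBuilder.header("Content-Type", body.contentType().toString());
}

requestBuilder.header("Host", hostHeader(userRequest.url(), false));
requestBuilder.header("Connection", "Keep-Alive");

// If the caller set no Accept-Encoding, OkHttp asks for gzip and transparently
// decompresses the response body on the way back.
if (userRequest.header("Accept-Encoding") == null && userRequest.header("Range") == null) {
  requestBuilder.header("Accept-Encoding", "gzip");
}

requestBuilder.header("User-Agent", Version.userAgent());
return chain.proceed(requestBuilder.build());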


2.3 CacheInterceptor

We know that OkHttp has its own caching mechanism to save traffic, improve response speed, and reduce load on the server; CacheInterceptor is used to read and update the cache.

2.3.1, HTTP cache principle

In the HTTP 1.0 era, the response used the Expires header to identify how long the cache stays valid. Its value is an absolute time, e.g. Expires: Thu, 31 Dec 2020 23:59:59 GMT. When the client sends a network request again, it compares the current time with the Expires time of the last response to decide whether to use the cache or issue a new request.

  • The biggest problem with the Expires header is that it relies on the client's local time; if the user changes the local time, it becomes impossible to determine accurately whether the cache has expired.

Therefore, starting from HTTP 1.1, the Cache-Control header is used to describe the cache state. It takes priority over Expires, and its common values are one or more of the following (a client-side usage example follows the list):

  • 1. private: the default value, used for private business data, such as recommendations generated from user behavior. In this mode intermediate nodes such as proxy servers should not cache the data, because doing so is meaningless.
  • 2. public: the opposite of private, used for general business data, such as a news list that everyone sees the same; both the client and proxy servers may cache it.
  • 3. no-cache: the response may be cached, but before the client uses the cache it must validate the cached resource with the server; this is the negotiated (comparison) cache introduced below.
  • 4. max-age: the cache lifetime in seconds, i.e. a duration such as one year, usually used for static resources that rarely change.
  • 5. no-store: no node is allowed to cache the response.
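On the client side, OkHttp exposes these directives through its CacheControl class. For example (standard OkHttp API; the URL is only a placeholder):

// Accept a cached response no older than 60 seconds; CacheControl.FORCE_NETWORK or
// CacheControl.FORCE_CACHE can be used instead to skip the cache or to use only the cache.
Request request = new Request.Builder()
    .url("https://example.com/api/news")
    .cacheControl(new CacheControl.Builder()
        .maxAge(60, TimeUnit.SECONDS)
        .build())
    .build();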

2.3.2. Mandatory caching

On top of the cache headers above, mandatory (forced) caching means the response headers carry Expires or Cache-Control with max-age information; if the client computes that the cache has not expired, it can use the local cached content directly instead of actually issuing a network request.

2.3.3. Negotiation cache (comparison with cache)

  • The biggest problem with forced caching is that once the server resources are updated, the client cannot obtain the latest resources until the cache time expires (unless the no-store header is manually added when requesting)
  • In addition, in most cases, the resources of the server cannot directly determine the cache expiration time, so it is more flexible to use the comparison cache.

Negotiated caching is implemented with the Last-Modified / If-Modified-Since headers. The server adds a Last-Modified header to the response to identify the last modification time of the resource (with a precision of one second). When the client initiates the request again, it adds an If-Modified-Since header whose value is the Last-Modified value obtained from the previous response.

After receiving the request, the server judges whether the cached resource is still valid. If it is valid, it returns status code 304 and the body is empty, otherwise, it sends the latest resource data. If the client finds that the status code is 304, it will take out the local cached data as a response.

There are some limitations to using the last modification time of a resource file in this scheme:

  • 1. Last-Modified has a precision of one second. If a file is modified several times within one second, the modification time cannot distinguish the changes.
  • 2. The modification time cannot be the only basis for deciding that a resource changed. For example, a resource file may be rebuilt daily (a daily build), so a new file is generated every day even though its actual content has not changed.

Therefore HTTP also provides another pair of headers for caching, ETag / If-None-Match. The process is the same as with Last-Modified, except that the server response header becomes ETag and the header sent by the client becomes If-None-Match. The ETag is a unique identifier of the resource; any change to the resource on the server changes the ETag. How it is generated is controlled by the server, and typical inputs include the file's last modification time, the file size, a file number, and so on.

2.3.4, cache implementation of OKHttp

Everything above is standard HTTP behaviour; OkHttp simply implements this process in code, namely:

  • 1. After getting the response for the first time, decide whether to cache according to the header information.
  • 2. In the next request, determine whether there is a local cache, whether it is necessary to use a comparison cache, encapsulate request header information, and so on.
  • 3. If the cache is invalid or needs to be compared with the cache, a network request is sent, otherwise the local cache is used.
2.3.4.1, cache strategy

The HTTP caching mechanism depends on the parameters in the request and response headers, and the final response is either taken from the cache or fetched from the server, following the flow described above. The two identifiers used for forced caching mentioned earlier are:

  • Expires: the value of Expires is an expiry time returned by the server; on the next request, if the request time is earlier than that expiry time, the cached data is used directly. The expiry time is generated by the server, so the client's and the server's clocks may differ.
  • Cache-Control: because Expires has this clock problem, HTTP 1.1 uses Cache-Control in preference to Expires.

Now let's look at the two identifiers used for negotiated (comparison) caching:

1、Last-Modified/If-Modified-Since

Last-Modified indicates when the resource was last modified.

When the client sends the first request, the server returns the time when the resource was last modified:

Last-Modified: Tue, 12 Jan 2016 09:31:27 GMT

When the client sends the request again, it carries If-Modified-Since in the header, sending the resource time returned by the server last time back to the server.

If-Modified-Since: Tue, 12 Jan 2016 09:31:27 GMT 

The server receives the modification time sent by the client and compares it with the current modification time of its own resource:

  • If the server's own modification time is later than the one sent by the client, the resource has been modified, so it returns 200 to indicate that the resource needs to be fetched again.
  • Otherwise it returns 304, meaning the resource has not been modified and the client can keep using its cache.

The above uses a timestamp to mark whether the resource has been modified; there is also an ETag-based way to mark it. If the identification code changes, the resource has been modified. ETag has a higher priority than Last-Modified.

2、Etag/If-None-Match

ETag is an identification code of a resource file. When the client sends the first request, the server will return the identification code of the current resource:

ETag: "5694c7ef-24dc"

When the client sends the request again, it carries the resource identification code returned by the server last time in the header:

If-None-Match:"5694c7ef-24dc"

When the server receives the resource identification code sent by the client, it will compare it with its own current resource.

  • If it is different, it means that the resource has been modified, then return 200,
  • If they are the same, it means that the resource has not been modified, return 304, and the client can continue to use the cache.
2.3.4.2, Okhttp caching strategy

OkHttp's cache strategy is implemented according to the flow above. The concrete implementation class is CacheStrategy, whose constructor takes two parameters:

CacheStrategy(Request networkRequest, Response cacheResponse) {
  this.networkRequest = networkRequest;
  this.cacheResponse = cacheResponse;
}

The meanings of these two parameters are as follows:

  • networkRequest: network request.
  • cacheResponse: the cached response. The file cache is based on DiskLruCache; the key can be the MD5 of the request URL, and the value is the cached response looked up from the file, which we will discuss below.

CacheStrategy uses these two parameters to generate the final strategy, a bit like a mapping operation: networkRequest and cacheResponse go in, and after processing the same two values come out. Their combinations are interpreted as follows:

  • If networkRequest is null and cacheResponse is null: only-if-cached (no network request is made, and since the cache does not exist or has expired, a 504 error is returned).
  • If networkRequest is null and cacheResponse is non-null: no network request is made; the cache can be used and is returned directly without touching the network.
  • If networkRequest is non-null and cacheResponse is null: a network request is required because the cache does not exist or has expired, so the network is accessed directly.
  • If networkRequest is non-null and cacheResponse is non-null: the headers contain ETag/Last-Modified tags, so a conditional request is made and the network is accessed.

Let's look at how these four situations are determined.

CacheStrategy is constructed using the factory pattern. After a CacheStrategy.Factory object is built, its get() method is called to obtain the concrete CacheStrategy. CacheStrategy.Factory.get() internally calls CacheStrategy.Factory.getCandidate(), which is the core implementation.

public static class Factory {

    private CacheStrategy getCandidate() {
      // 1. If there is no cached response, go straight to the network.
      if (cacheResponse == null) {
        return new CacheStrategy(request, null);
      }

      // 2. If the TLS handshake information is missing, go straight to the network.
      if (request.isHttps() && cacheResponse.handshake() == null) {
        return new CacheStrategy(request, null);
      }

      // 3. Based on the response status code, the Expires time and the cache-control flags,
      //    decide whether this response may be served from cache at all; if not, go to the network.
      if (!isCacheable(cacheResponse, request)) {
        return new CacheStrategy(request, null);
      }

      // 4. If the request header has "no-cache", or it is a conditional GET (carries ETag/Since headers),
      //    go straight to the network.
      CacheControl requestCaching = request.cacheControl();
      if (requestCaching.noCache() || hasConditions(request)) {
        return new CacheStrategy(request, null);
      }

      CacheControl responseCaching = cacheResponse.cacheControl();
      if (responseCaching.immutable()) {
        return new CacheStrategy(null, cacheResponse);
      }

      // Current age of the cached response: now - sent + age
      long ageMillis = cacheResponseAge();
      // Freshness lifetime, usually the server's max-age
      long freshMillis = computeFreshnessLifetime();

      if (requestCaching.maxAgeSeconds() != -1) {
        // Usually takes the max-age value
        freshMillis = Math.min(freshMillis, SECONDS.toMillis(requestCaching.maxAgeSeconds()));
      }

      long minFreshMillis = 0;
      if (requestCaching.minFreshSeconds() != -1) {
        // Usually 0
        minFreshMillis = SECONDS.toMillis(requestCaching.minFreshSeconds());
      }

      long maxStaleMillis = 0;
      if (!responseCaching.mustRevalidate() && requestCaching.maxStaleSeconds() != -1) {
        maxStaleMillis = SECONDS.toMillis(requestCaching.maxStaleSeconds());
      }

      // 5. If the cache is still within its expiry window, it can be used directly: return the cached response.
      if (!responseCaching.noCache() && ageMillis + minFreshMillis < freshMillis + maxStaleMillis) {
        Response.Builder builder = cacheResponse.newBuilder();
        if (ageMillis + minFreshMillis >= freshMillis) {
          builder.addHeader("Warning", "110 HttpURLConnection \"Response is stale\"");
        }
        long oneDayMillis = 24 * 60 * 60 * 1000L;
        if (ageMillis > oneDayMillis && isFreshnessLifetimeHeuristic()) {
          builder.addHeader("Warning", "113 HttpURLConnection \"Heuristic expiration\"");
        }
        return new CacheStrategy(null, builder.build());
      }

      // 6. If the cache has expired and there is an ETag or a date, send a conditional request with
      //    If-None-Match / If-Modified-Since and let the server decide.
      String conditionName;
      String conditionValue;
      if (etag != null) {
        conditionName = "If-None-Match";
        conditionValue = etag;
      } else if (lastModified != null) {
        conditionName = "If-Modified-Since";
        conditionValue = lastModifiedString;
      } else if (servedDate != null) {
        conditionName = "If-Modified-Since";
        conditionValue = servedDateString;
      } else {
        return new CacheStrategy(request, null); // No condition! Make a regular request.
      }

      Headers.Builder conditionalRequestHeaders = request.headers().newBuilder();
      Internal.instance.addLenient(conditionalRequestHeaders, conditionName, conditionValue);

      Request conditionalRequest = request.newBuilder()
          .headers(conditionalRequestHeaders.build())
          .build();
      return new CacheStrategy(conditionalRequest, cacheResponse);
    }
}

The logic of the whole function follows the HTTP cache decision flow described above.

2.3.4.3, cache management

OkHttp's cache management is based on DiskLruCache: responses are written to and read from disk through an LRU cache keyed by the request URL, as sketched below.
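As a rough illustration (based on okhttp3.Cache in OkHttp 3.x), the cache key is the MD5 hex digest of the request URL and the entries go through DiskLruCache:

// okhttp3.Cache derives its DiskLruCache key from the URL like this:
public static String key(HttpUrl url) {
  return ByteString.encodeUtf8(url.toString()).md5().hex();
}

// Reading then roughly means: DiskLruCache.Snapshot snapshot = diskLruCache.get(key(request.url()));
// from which the cached response headers and body are rebuilt for CacheInterceptor.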

2.4 ConnectInterceptor

A StreamAllocation object is initialized in RetryAndFollowUpInterceptor. It prepares what is needed for a Socket connection, but no real connection is made there. Only after the header and cache processing is done does ConnectInterceptor establish the real connection.

public final class ConnectInterceptor implements Interceptor {

  @Override public Response intercept(Chain chain) throws IOException {
    RealInterceptorChain realChain = (RealInterceptorChain) chain;
    Request request = realChain.request();
    StreamAllocation streamAllocation = realChain.streamAllocation();

    boolean doExtensiveHealthChecks = !request.method().equals("GET");
    // Create the codec that will write the request and read the response
    HttpCodec httpCodec = streamAllocation.newStream(client, doExtensiveHealthChecks);
    // Establish (or reuse) the connection
    RealConnection connection = streamAllocation.connection();

    return realChain.proceed(request, streamAllocation, httpCodec, connection);
  }
}

ConnectInterceptor establishes a connection in the Request phase, and the processing method is also very simple. Two objects are created:

  • HttpCodec: used to encode HTTP requests and decode HTTP responses
  • RealConnection: connection object, responsible for initiating a connection with the server.

To summarize:

  • The connect interceptor obtains a real connection to the server.
  • If there is a connection already allocated to the call that can be reused, it is reused.
  • Otherwise a connection is taken from the connection pool; if the pool has none, a new connection is created, put into the pool, and returned.
  • The connection pool in OkHttp holds at most 5 idle connections, and idle connections are reclaimed after five minutes.

2.5 CallServerInterceptor

CallServerInterceptor is responsible for exchanging data with the server: it writes the request and reads the response data.

Having connected to the server through ConnectInterceptor, what remains is writing the request data and reading the returned data. The entire process (sketched after the list) is:

  • write request header
  • write request body
  • read response header
  • read response body
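A condensed sketch of CallServerInterceptor.intercept() (based on OkHttp 3.x; 100-continue handling and error checks are omitted), showing exactly this sequence:

HttpCodec httpCodec = ((RealInterceptorChain) chain).httpStream();
Request request = chain.request();

httpCodec.writeRequestHeaders(request);                       // write request header
if (HttpMethod.permitsRequestBody(request.method()) && request.body() != null) {
  BufferedSink bufferedRequestBody = Okio.buffer(
      httpCodec.createRequestBody(request, request.body().contentLength()));
  request.body().writeTo(bufferedRequestBody);                // write request body
  bufferedRequestBody.close();
}
httpCodec.finishRequest();

Response response = httpCodec.readResponseHeaders(false)      // read response header
    .request(request)
    .build();
response = response.newBuilder()
    .body(httpCodec.openResponseBody(response))                // read response body
    .build();
return response;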

3. Connection mechanism

  1. TCP/IP communication transport stream (figure omitted)

  2. TCP socket programming (figure omitted)
The creation of the connection is completed under the coordination of the StreamAllocation object. We also said that it was created as early as RetryAndFollowUpInterceptor. The StreamAllocation object is mainly used to manage two key roles:

  • RealConnection: The object that actually establishes a connection, using Socket to establish a connection.
  • ConnectionPool: Connection pool, used to manage and reuse connections.


3.1 Establish a connection

As we said in the ConnectInterceptor analysis, ConnectInterceptor is what completes the connection. The real connection is implemented in RealConnection, and connections are managed by the connection pool ConnectionPool. The pool keeps connections to at most 5 addresses alive, each for up to 5 minutes, and an asynchronous thread cleans up invalid connections.
It is mainly done by the following two methods:

HttpCodec httpCodec = streamAllocation.newStream(client, doExtensiveHealthChecks);
RealConnection connection = streamAllocation.connection();

When the above methods run, a RealConnection object is created and its connect() method is called to establish the connection; ultimately this calls connect() on a Java Socket.

3.2 Connection pool

We know that in a complex network environment, frequently establishing Socket connections (TCP three-way handshake) and tearing them down (TCP four-way handshake) consumes network resources and wastes time. HTTP keep-alive connections play a very important role in reducing latency and improving speed.

Multiplexing connections requires connection management, and the concept of connection pool is introduced here.

OkHttp keeps up to 5 idle keep-alive connections, and the default connection life is 5 minutes (the time a connection is kept alive after becoming idle). The connection pool is implemented by ConnectionPool, which recycles and manages connections.

3.2.1 How does OKHttp reuse TCP connections?

The code in ConnectInterceptor that finds a connection eventually calls the ExchangeFinder.findConnection method, as follows:

# ExchangeFinder
// Find a connection to carry a new stream. The search order is: an already-allocated
// connection, the connection pool, and finally a newly created connection.
private RealConnection findConnection(int connectTimeout, int readTimeout, int writeTimeout,
    int pingIntervalMillis, boolean connectionRetryEnabled) throws IOException {

  synchronized (connectionPool) {
    // 1. Try the connection already allocated to this call (e.g. a redirect can reuse the
    //    connection of the previous request).
    releasedConnection = transmitter.connection;
    result = transmitter.connection;

    if (result == null) {
      // 2. No usable allocated connection, so try to acquire one from the connection pool
      //    (the pool is covered in detail later).
      if (connectionPool.transmitterAcquirePooledConnection(address, transmitter, null, false)) {
        result = transmitter.connection;
      }
    }
  }

  synchronized (connectionPool) {
    if (newRouteSelection) {
      // 3. Now that we have IP addresses, try the pool again; a match may be found thanks to
      //    connection coalescing (this time routes is passed in instead of null).
      routes = routeSelection.getAll();
      if (connectionPool.transmitterAcquirePooledConnection(address, transmitter, routes, false)) {
        foundPooledConnection = true;
        result = transmitter.connection;
      }
    }
  }

  // 4. If the second attempt failed, do the TCP + TLS handshake on the newly created connection
  //    and connect to the server. This is a blocking operation.
  result.connect(connectTimeout, readTimeout, writeTimeout, pingIntervalMillis,
      connectionRetryEnabled, call, eventListener);

  synchronized (connectionPool) {
    // 5. One last attempt to acquire from the pool, with the last parameter true, i.e. requiring
    //    multiplexing (HTTP/2). If this call is HTTP/2, then to preserve multiplexing (the handshake
    //    above is not thread-safe) we check whether an equivalent connection appeared in the pool.
    if (connectionPool.transmitterAcquirePooledConnection(address, transmitter, routes, true)) {
      // If one is found, close the connection we just created and return the pooled one
      result = transmitter.connection;
    } else {
      // Otherwise, store the newly created connection in the pool
      connectionPool.put(result);
    }
  }

  return result;
}

Part of the code above has been simplified. As you can see, the connection interceptor goes through the following steps to find a connection:

  • 1. First it tries to use the connection already allocated to the request (an allocated connection exists, for example, when a redirect re-requests on the same call and the previous connection can be reused).
  • 2. If there is no allocated usable connection, it tries to acquire one from the connection pool. Since there is no route information yet, the matching conditions are: the address is the same (same host, port, proxy, etc.) and the matched connection can accept new requests.
  • 3. If nothing is obtained from the pool, it passes in the routes and tries again. This is mainly for HTTP/2 connection coalescing: HTTP/2 can reuse a connection between square.com and square.ca.
  • 4. If the second attempt also fails, a RealConnection instance is created and a TCP + TLS handshake is performed to establish a connection with the server.
  • 5. Then, to guarantee HTTP/2 multiplexing, the pool is checked a third time, because the handshake of the newly created connection is not thread-safe and an equivalent connection may have been put into the pool in the meantime.
  • 6. If the third check matches, the existing connection is used and the newly created one is released; otherwise the new connection is stored in the pool and returned.

The above is how the connection interceptor tries to reuse a connection.

3.2.2 How does OkHttp clear idle connections?

As mentioned above, we will build a TCP connection pool, but if there are no more tasks, the idle connections should be cleared in time. How does OKHttp do it?

  # RealConnectionPool
  private val cleanupQueue: TaskQueue = taskRunner.newQueue()
  private val cleanupTask = object : Task("$okHttpName ConnectionPool") {
    override fun runOnce(): Long = cleanup(System.nanoTime())
  }

  long cleanup(long now) {
    int inUseConnectionCount = 0;                 // number of connections in use
    int idleConnectionCount = 0;                  // number of idle connections
    RealConnection longestIdleConnection = null;  // the connection that has been idle the longest
    long longestIdleDurationNs = Long.MIN_VALUE;  // the longest idle duration

    // Walk the connections: find the one to clean up, or work out when to clean next
    // (i.e. when the maximum idle time will be reached).
    synchronized (this) {
      for (Iterator<RealConnection> i = connections.iterator(); i.hasNext(); ) {
        RealConnection connection = i.next();

        // If the connection is in use, skip it and count it as in use
        if (pruneAndGetAllocationCount(connection, now) > 0) {
          inUseConnectionCount++;
          continue;
        }
        // Otherwise count it as idle
        idleConnectionCount++;

        // Track the longest idle duration and the corresponding connection
        long idleDurationNs = now - connection.idleAtNanos;
        if (idleDurationNs > longestIdleDurationNs) {
          longestIdleDurationNs = idleDurationNs;
          longestIdleConnection = connection;
        }
      }
      // If the longest idle time exceeds 5 minutes, or there are more than 5 idle connections,
      // remove and close that connection
      if (longestIdleDurationNs >= this.keepAliveDurationNs
          || idleConnectionCount > this.maxIdleConnections) {
        connections.remove(longestIdleConnection);
      } else if (idleConnectionCount > 0) {
        // Otherwise, return how long is left until 5 minutes is reached, and wait that long
        // before cleaning again
        return keepAliveDurationNs - longestIdleDurationNs;
      } else if (inUseConnectionCount > 0) {
        // No idle connections: try to clean again in 5 minutes
        return keepAliveDurationNs;
      } else {
        // No connections at all: nothing to clean
        cleanupRunning = false;
        return -1;
      }
    }
    // Close the removed connection
    closeQuietly(longestIdleConnection.socket());

    // After removing and closing, immediately attempt the next cleanup
    return 0;
  }

1. When a connection is added to the pool, the cleanup task is scheduled.

2. If there are idle connections and the longest idle time exceeds 5 minutes or the number of idle connections exceeds 5, the longest-idle connection is removed and closed. Otherwise the task returns how long is left until that connection reaches 5 minutes of idleness, and waits that long before cleaning again.

3. If there are no idle connections but some are in use, it tries to clean up again after 5 minutes.

4. If there are no connections at all, cleanup stops.

4. What are the advantages of OKHttp?

1. It is easy to use. The design uses the facade pattern to hide the complexity of the whole system, exposing the subsystem interfaces uniformly through a single client, OkHttpClient.

2. Strong scalability, you can use custom application interceptors and network interceptors to meet various user-defined needs

3. Powerful functions, supporting various protocols such as Spdy, Http1.X, Http2, and WebSocket

4. Reuse the underlying TCP (Socket) through the connection pool to reduce request delay

5. Seamlessly support GZIP to reduce data traffic

6. Support data caching to reduce repeated network requests

7. If a request fails, it can automatically retry other IP addresses of the host and automatically follow redirects

5. What design patterns are used in the OKHttp framework?

1. Builder pattern: both OkHttpClient and Request are created with builders.

  • The main purpose is to separate the construction of an object from its representation, using a Builder to assemble the various configuration options, for example:
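A short illustration of standard OkHttp usage (the timeout values and header are arbitrary):

OkHttpClient client = new OkHttpClient.Builder()
    .connectTimeout(10, TimeUnit.SECONDS)
    .readTimeout(10, TimeUnit.SECONDS)
    .retryOnConnectionFailure(true)
    .build();

Request request = new Request.Builder()
    .url("https://example.com/")
    .header("User-Agent", "okhttp-demo")
    .get()
    .build();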

2. Facade pattern: OkHttp uses the facade pattern to hide the complexity of the whole system, exposing the subsystem interfaces uniformly through a single client, OkHttpClient.

3. Chain of responsibility pattern: the core of OkHttp is the chain of responsibility, which processes a request through the chain formed by the five default interceptors

4. Flyweight mode: The core of the Flyweight mode is reuse in the pool. OKHttp uses the connection pool when multiplexing TCP connections, and also uses the thread pool in asynchronous requests.

5. Factory pattern: the factory pattern is similar to the builder pattern; the difference is that the factory pattern focuses on the object-creation process, while the builder pattern focuses mainly on the object's configuration.

  • An example is the CacheStrategy object created inside the CacheInterceptor.

6. Observer pattern: OkHttp's WebSocket support uses the observer pattern, because a WebSocket is a long-lived connection whose events need to be listened for.

6. Summary

OkHttp is an open-source networking library, a lightweight framework for Android network requests that supports file upload and download, HTTPS, and more. The overall flow is as follows:

(1) We create a Call through OkHttpClient and initiate a synchronous or asynchronous request;

(2) OkHttp manages all RealCall instances (the implementation class of Call) through the Dispatcher and handles synchronous or asynchronous requests via execute() and enqueue(); asynchronous requests go through the two async queues and are scheduled on the thread pool;

(3) execute() and enqueue() eventually call getResponseWithInterceptorChain() in RealCall to obtain the result from the interceptor chain;

(4) In the interceptor chain, the request passes in turn through RetryAndFollowUpInterceptor (retry and redirect), BridgeInterceptor (bridge), CacheInterceptor (cache), ConnectInterceptor (connect), and CallServerInterceptor (network). After retries, cache handling, and connecting to the server, the response data is obtained and handed back up through the interceptors in reverse order, and the result is finally returned to the caller.

7. Frequently asked questions

7.1 OkHttpClient has several ways of initiating requests; what are the differences?

There are two types, synchronous requests and asynchronous requests.

  • 1. Synchronous request: the request (task) is added to the dispatcher's runningSyncCalls (running) queue, and getResponseWithInterceptorChain() is then called directly.
  • 2. Asynchronous request: the request is added to the Dispatcher, goes through the two stages readyAsyncCalls and runningAsyncCalls, and getResponseWithInterceptorChain() is then called on a thread-pool thread.
    • A call is only moved to the running queue when the number of running tasks does not exceed the maximum of 64 and the number of requests to the same host does not exceed 5.

7.2 What is the Dispatcher in OkHttp, and how does it implement scheduling?

The Dispatcher schedules task execution and maintains three queues:

  • A ready asynchronous execution queue readyAsyncCalls,
  • Two executing queues (asynchronously executing queue: runningAsyncCalls, synchronously executing queue: runningSyncCalls)

When each task finishes, the dispatcher's finished() method is executed, which in turn runs promoteAndExecute() (meaning: promote and execute). As long as there are tasks in the waiting queue, this keeps repeating.

The scheduler also defines two variables that limit concurrency: at most 64 requests may execute at the same time, and at most 5 per host.

An ExecutorService thread pool is also defined, and this pool is used to execute the tasks.

7.3 How is the thread pool in OkHttp implemented?

A thread pool is initialized in OkHttp's dispatcher (see the sketch after this list):

  • The pool has no core threads, the maximum number of threads is Integer.MAX_VALUE (effectively an unbounded pool), and the idle timeout is 60 seconds; threads idle for more than 60 seconds are reclaimed.
  • The threads created by its ThreadFactory are given a descriptive name.
  • Its work queue is a SynchronousQueue.
  • When a task arrives it is offered to the synchronous queue; if there is an idle thread, that thread takes the task from the queue and runs it.
  • When there is no idle thread, a new thread is created to execute the task.
  • The core-thread count is 0 because the client may have no network requests for a while; not keeping minimum threads avoids wasting memory on unused threads.
  • The maximum is set to Integer.MAX_VALUE so that bursts of requests are not rejected, and the 60-second idle timeout reclaims threads that are no longer being used.
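This corresponds to the thread pool created in Dispatcher.executorService() (OkHttp 3.x source, shown slightly reformatted with comments):

public synchronized ExecutorService executorService() {
  if (executorService == null) {
    executorService = new ThreadPoolExecutor(
        0,                                // no core threads
        Integer.MAX_VALUE,                // effectively unbounded maximum
        60, TimeUnit.SECONDS,             // idle threads are reclaimed after 60s
        new SynchronousQueue<Runnable>(), // hand-off queue: a task either meets a free thread or spawns one
        Util.threadFactory("OkHttp Dispatcher", false));
  }
  return executorService;
}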

7.4 What interceptors does OkHttp have, and what are their functions?

OkHttp has five default interceptors, executed in this order:

  • RetryAndFollowUpInterceptor: failure retries and redirects
  • BridgeInterceptor: completes the request headers (Host, Connection, Accept-Encoding, Cookie, User-Agent, etc.) and handles transparent gzip
  • CacheInterceptor: reads and updates the cache according to the cache strategy
  • ConnectInterceptor: obtains a connection to the server, reusing one from the connection pool where possible
  • CallServerInterceptor: writes the request to the server and reads the response

In addition, user-defined application interceptors run before all of them, and user-defined network interceptors run between ConnectInterceptor and CallServerInterceptor.

7.5 The cache interceptor

The cache interceptor mainly does the following things:

  • 1. Obtain the cached response through the cache strategy (CacheStrategy)
  • 2. Determine whether the network may be used. If the network is not allowed and the cache is empty, return 504 directly; if the cache is not empty, return the cache directly
  • 3. When the network may be used, call the subsequent interceptors to continue the network request and obtain its result
  • 4. When the network result comes back and there is a cache to compare against, check whether the status code is 304; if so, the cache is still valid, so return the cache and update it
  • 5. If the cache is unusable, add/update the cache and return the latest network result.

There are a few questions here:

  • (1) How is the cache stored and retrieved?
    • The OkHttp cache uses (the MD5 of) the request URL as the key and stores entries using DiskLruCache.
  • (2) Does every request store and retrieve the cache?
  • (3) How does the cache strategy (CacheStrategy) choose between the network and the cache? When is networkRequest null?

(2) Does every request store and retrieve the cache?

This depends on whether the cache is null: if the cache variable is null, nothing is stored or retrieved. So when is this variable initialized? The cache is only created when we configure one while building the OkHttpClient:

 val client = OkHttpClient.Builder()
     .cache(Cache(cacheDir, 10L * 1024 * 1024))  // 10 MiB cache
     .build()

So the answer to the second question is: only when the developer has configured a cache will responses be stored and retrieved, and the cache size is limited to what was configured.

(3) How does the cache strategy (CacheStrategy) choose between the network and the cache? When is networkRequest null?

class CacheStrategy internal constructor(
  val networkRequest: Request?,
  val cacheResponse: Response?
)

  fun compute(): CacheStrategy {
    val candidate = computeCandidate()
    return candidate
  }

  private fun computeCandidate(): CacheStrategy {
    // With no cached response, return a strategy without a cache
    if (cacheResponse == null) {
      return CacheStrategy(request, null)
    }
    //...
    // Cache control is not no-cache and the cache has not expired
    if (!responseCaching.noCache && ageMillis + minFreshMillis < freshMillis + maxStaleMillis) {
      val builder = cacheResponse.newBuilder()
      return CacheStrategy(null, builder.build())
    }
    return CacheStrategy(conditionalRequest, cacheResponse)
  }

Within this cache strategy there is essentially only one case in which the cache alone is returned: when the cache control is not no-cache and the cache has not expired, the cached response is returned and networkRequest is set to null.

7.6 Why do we need a connection pool, and how is it implemented?

We know that in a complex network environment, frequently establishing Socket connections (TCP three-way handshake) and tearing them down (TCP four-way handshake) consumes network resources and wastes time. HTTP keep-alive connections play a very important role in reducing latency and improving speed.

What is the keep-alive mechanism? It means multiple pieces of data can be sent continuously over one TCP connection without disconnecting. Reusing connections, i.e. multiplexing them, therefore becomes extremely important, and reusing connections requires managing them, hence the concept of a connection pool.

  • OkHttp uses ConnectionPool to implement the connection pool, which by default keeps at most 5 idle keep-alive connections, each with a default life of 5 minutes (a configuration sketch follows).
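Both limits are configurable when constructing the client (standard OkHttp API; the values shown simply restate the defaults):

// 5 idle connections at most, each kept alive for at most 5 minutes while idle.
ConnectionPool pool = new ConnectionPool(5, 5, TimeUnit.MINUTES);

OkHttpClient client = new OkHttpClient.Builder()
    .connectionPool(pool)
    .build();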

How is it implemented?

1) First, ConnectionPool maintains a double-ended queue (Deque), i.e. a queue that can be entered and exited from both ends, to store connections.

2) Then in ConnectInterceptor, the interceptor responsible for establishing the connection, it first looks for an available connection, that is, it tries to obtain a connection from the connection pool, specifically by calling the get method of ConnectionPool.

RealConnection get(Address address, StreamAllocation streamAllocation, Route route) {
    assert (Thread.holdsLock(this));
    for (RealConnection connection : connections) {
      if (connection.isEligible(address, route)) {
        streamAllocation.acquire(connection, true);
        return connection;
      }
    }
    return null;
  }

That is, the double-ended queue is traversed. If the connection is valid, the acquire method will be called to count and return the connection.

3) If no available connection is found, a new connection is created and added to the double-ended queue, and the cleanup task in the pool's executor is started at the same time. This is the put method of ConnectionPool.

public final class ConnectionPool {

    void put(RealConnection connection) {
        if (!cleanupRunning) {
            // Start the cleanup task if it is not already running
            cleanupRunning = true;
            executor.execute(cleanupRunnable);
        }
        connections.add(connection);
    }
}

4) This executor is really only used to run the connection-cleanup task, the cleanupRunnable above.

  • This runnable keeps calling the cleanup method to clean the pool; cleanup returns the interval until the next cleanup, and the runnable then waits that long.
  • When the number of idle connections exceeds maxIdleConnections (5) or a connection's keep-alive time exceeds 5 minutes, the connection is cleaned up.

5) One remaining question: how is a connection determined to be idle?

This comes down to the acquire reference-counting method just mentioned:

  public void acquire(RealConnection connection, boolean reportedAcquired) {
    assert (Thread.holdsLock(connectionPool));
    if (this.connection != null) throw new IllegalStateException();

    this.connection = connection;
    this.reportedAcquired = reportedAcquired;
    connection.allocations.add(new StreamAllocationReference(this, callStackTrace));
  }

In RealConnection there is a list of StreamAllocation references called allocations. Every time a request acquires the connection, a StreamAllocationReference is added to the list; when the stream is released, the reference is removed. A connection whose list is empty is idle.

To summarize:

The connection pool's job is not complicated. It mainly manages the double-ended queue Deque: available connections are reused directly, connections are cleaned up periodically, and automatic recycling is achieved through the StreamAllocation reference counting.
