OkHttp principle analysis and encapsulation

[Flowchart: the overall OkHttp request process]

1. Initialize the OkHttpClient.

2. Create a new Call object:

Call call = client.newCall(request);
public class OkHttpClient implements Cloneable, Call.Factory, WebSocket.Factory {
  @Override
  public Call newCall(Request request) {
    return new RealCall(this, request, false /* for web socket */);
  }
}

OkHttpClient implements the Call.Factory interface, and its newCall method creates a RealCall instance.
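Before diving into the internals, here is a minimal usage sketch of this path from application code (the URL is only a placeholder), showing how newCall() and enqueue() are reached:

import java.io.IOException;
import okhttp3.Call;
import okhttp3.Callback;
import okhttp3.OkHttpClient;
import okhttp3.Request;
import okhttp3.Response;

public class AsyncRequestDemo {
  public static void main(String[] args) {
    // 1. Initialize the OkHttpClient (this also creates its Dispatcher and connection pool).
    OkHttpClient client = new OkHttpClient();

    // 2. Build a request; the URL is only a placeholder.
    Request request = new Request.Builder()
        .url("https://example.com/")
        .build();

    // 3. newCall() returns a RealCall; enqueue() hands it to the Dispatcher.
    client.newCall(request).enqueue(new Callback() {
      @Override public void onFailure(Call call, IOException e) {
        e.printStackTrace();
      }

      @Override public void onResponse(Call call, Response response) throws IOException {
        // Runs on a Dispatcher worker thread, not the caller's thread.
        System.out.println(response.code());
        response.close();
      }
    });
  }
}

The enqueue() implementation that this call reaches inside RealCall is shown next.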

final class RealCall implements Call {
  @Override
  public void enqueue(Callback responseCallback) {
    synchronized (this) {
      if (executed) throw new IllegalStateException("Already Executed");
      executed = true;
    }
    captureCallStackTrace();
    client.dispatcher().enqueue(new AsyncCall(responseCallback));
  }
}

1) It first checks whether the call has already been executed; each Call can only be executed once. If you want an identical new call, create one with Call#clone().

2) Then client.dispatcher().enqueue(new AsyncCall(responseCallback)) hands the call over to the Dispatcher for actual execution.

3) AsyncCall is an inner class of RealCall that extends NamedRunnable.

final class AsyncCall extends NamedRunnable {
    private final Callback responseCallback;

    AsyncCall(Callback responseCallback) {
      super("OkHttp %s", redactedUrl());
      this.responseCallback = responseCallback;
    }

    @Override protected void execute() {
      boolean signalledCallback = false;
      try {
        Response response = getResponseWithInterceptorChain();
        if (retryAndFollowUpInterceptor.isCanceled()) {
          signalledCallback = true;
          responseCallback.onFailure(RealCall.this, new IOException("Canceled"));
        } else {
          signalledCallback = true;
          responseCallback.onResponse(RealCall.this, response);
        }
      } catch (IOException e) {
        ......
        responseCallback.onFailure(RealCall.this, e);
      } finally {
        client.dispatcher().finished(this);
      }
    }
}

AsyncCall extends NamedRunnable and implements its execute() method. It first calls getResponseWithInterceptorChain() to obtain a response; if that succeeds it invokes the onResponse callback, and if it fails it invokes the onFailure callback. Finally it calls the Dispatcher's finished() method.

public abstract class NamedRunnable implements Runnable {
  ......

  @Override 
  public final void run() {
   ......
    try {
      execute();
    }
    ......
  }

  protected abstract void execute();
}

We can see that NamedRunnable is an abstract class implementing the Runnable interface, with an abstract execute() method that is invoked from run(). In other words, a NamedRunnable is a task, and its subclass AsyncCall supplies the execute() implementation.
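To make the pattern concrete, here is a simplified sketch of a NamedRunnable-style task (not the library's exact code): run() renames the thread, delegates the real work to execute(), and restores the name afterwards.

// A simplified sketch of the NamedRunnable pattern.
public abstract class SimpleNamedRunnable implements Runnable {
  protected final String name;

  public SimpleNamedRunnable(String format, Object... args) {
    this.name = String.format(format, args);
  }

  @Override public final void run() {
    String oldName = Thread.currentThread().getName();
    Thread.currentThread().setName(name);
    try {
      // The subclass (e.g. AsyncCall) supplies the actual work here.
      execute();
    } finally {
      Thread.currentThread().setName(oldName);
    }
  }

  protected abstract void execute();
}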

Introduction to the Dispatcher (scheduler)

public synchronized ExecutorService executorService() {
    if (executorService == null) {
      executorService = new ThreadPoolExecutor(
          // corePoolSize: the minimum number of threads; with 0, all threads are destroyed after being idle for a while
          0,
          // maximumPoolSize: the maximum number of threads the pool may grow to; beyond this, the rejection policy applies
          Integer.MAX_VALUE,
          // keepAliveTime: the maximum idle lifetime of surplus threads when the count exceeds corePoolSize
          60,
          // unit: seconds
          TimeUnit.SECONDS,
          // work queue (direct hand-off, first in first out)
          new SynchronousQueue<Runnable>(),
          // thread factory
          Util.threadFactory("OkHttp Dispatcher", false));
    }
    return executorService;
  }

In OkHttp, the ExecutorService above is the thread pool used to run asynchronous calls:

OkHttp constructs a thread pool with a core size of 0 and a maximum size of Integer.MAX_VALUE: it keeps no minimum number of threads and can create new threads at any time; idle threads live for only 60 seconds; it uses a SynchronousQueue, a work queue that does not store elements; and its threads come from a thread factory named "OkHttp Dispatcher".
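These parameters are essentially what Executors.newCachedThreadPool() produces, just with a named thread factory. A rough equivalent, for illustration only:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class DispatcherPoolSketch {
  // Roughly equivalent to the Dispatcher's pool: no core threads, an unbounded
  // maximum, a 60-second idle timeout, and a direct hand-off queue.
  static ExecutorService okHttpLikePool() {
    ThreadFactory factory = new ThreadFactory() {
      @Override public Thread newThread(Runnable runnable) {
        Thread thread = new Thread(runnable, "OkHttp Dispatcher");
        thread.setDaemon(false);
        return thread;
      }
    };
    return new ThreadPoolExecutor(
        0, Integer.MAX_VALUE, 60, TimeUnit.SECONDS,
        new SynchronousQueue<Runnable>(), factory);
  }
}

Back in the Dispatcher, enqueue() decides which queue a new AsyncCall is placed in: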

synchronized void enqueue(AsyncCall call) {
    if (runningAsyncCalls.size() < maxRequests && runningCallsForHost(call) < maxRequestsPerHost) {
      runningAsyncCalls.add(call);
      executorService().execute(call);
    } else {
      readyAsyncCalls.add(call);
    }
  }

In enqueue(), if the number of running asynchronous calls is below maxRequests and the number of calls to the same host is below maxRequestsPerHost, the new call is added to runningAsyncCalls and executed immediately; otherwise it is added to the readyAsyncCalls queue. Both limits can be tuned, as the sketch below shows.
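Both limits are configurable on the Dispatcher before it is handed to the client; a minimal sketch (the values are arbitrary examples):

import okhttp3.Dispatcher;
import okhttp3.OkHttpClient;

public class DispatcherLimitsDemo {
  public static void main(String[] args) {
    Dispatcher dispatcher = new Dispatcher();
    dispatcher.setMaxRequests(128);        // default is 64
    dispatcher.setMaxRequestsPerHost(10);  // default is 5

    OkHttpClient client = new OkHttpClient.Builder()
        .dispatcher(dispatcher)
        .build();
  }
}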

For synchronous requests, the Dispatcher uses one Deque to hold the running synchronous calls; for asynchronous requests it uses two Deques, one holding calls that are ready to execute and one holding calls that are currently executing (the field declarations are sketched after this list):

1. runningAsyncCalls: asynchronous calls that are currently executing.

2. readyAsyncCalls: a FIFO buffer queue of calls waiting to run.
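For reference, the queue fields in the 3.x Dispatcher look roughly like this (simplified excerpt; AsyncCall and RealCall are library-internal types, so this is not standalone code):

// Simplified excerpt of the Dispatcher's queues (OkHttp 3.x).
public final class Dispatcher {
  /** Ready async calls in the order they'll be run. */
  private final Deque<AsyncCall> readyAsyncCalls = new ArrayDeque<>();

  /** Running asynchronous calls. Includes canceled calls that haven't finished yet. */
  private final Deque<AsyncCall> runningAsyncCalls = new ArrayDeque<>();

  /** Running synchronous calls. Includes canceled calls that haven't finished yet. */
  private final Deque<RealCall> runningSyncCalls = new ArrayDeque<>();
}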

 

// The Dispatcher's finished() method
void finished(AsyncCall call) {
    finished(runningAsyncCalls, call, true);
  }

private <T> void finished(Deque<T> calls, T call, boolean promoteCalls) {
    int runningCallsCount;
    Runnable idleCallback;
    synchronized (this) {
      if (!calls.remove(call)) throw new AssertionError("Call wasn't in-flight!");
      if (promoteCalls) promoteCalls();
      runningCallsCount = runningCallsCount();
      idleCallback = this.idleCallback;
    }

    if (runningCallsCount == 0 && idleCallback != null) {
      idleCallback.run();
    }
  }

Looking at the source, after removing the completed Call from the runningAsyncCalls queue, finished() reads the running-call count to determine whether the dispatcher has entered the idle state, and then executes promoteCalls(). Here is the promoteCalls() method:

private void promoteCalls() {
    if (runningAsyncCalls.size() >= maxRequests) return; // Already running max capacity.
    if (readyAsyncCalls.isEmpty()) return; // No ready calls to promote.

    for (Iterator<AsyncCall> i = readyAsyncCalls.iterator(); i.hasNext(); ) {
      AsyncCall call = i.next();

      if (runningCallsForHost(call) < maxRequestsPerHost) {
        i.remove();
        runningAsyncCalls.add(call);
        executorService().execute(call);
      }

      if (runningAsyncCalls.size() >= maxRequests) return; // Reached max capacity.
    }
  }

Interceptor chain

RealCall's execute path contains this line: Response result = getResponseWithInterceptorChain();

[Flowchart: the interceptor chain; custom application interceptors come first]

 


Response getResponseWithInterceptorChain() throws IOException {
    // Build a full stack of interceptors.
    List<Interceptor> interceptors = new ArrayList<>();
    interceptors.addAll(client.interceptors());
    interceptors.add(retryAndFollowUpInterceptor);
    interceptors.add(new BridgeInterceptor(client.cookieJar()));
    interceptors.add(new CacheInterceptor(client.internalCache()));
    interceptors.add(new ConnectInterceptor(client));
    if (!forWebSocket) {
      interceptors.addAll(client.networkInterceptors());
    }
    interceptors.add(new CallServerInterceptor(forWebSocket));
 
    Interceptor.Chain chain = new RealInterceptorChain(interceptors, null, null, null, 0,
        originalRequest, this, eventListener, client.connectTimeoutMillis(),
        client.readTimeoutMillis(), client.writeTimeoutMillis());
 
    return chain.proceed(originalRequest);
  }

In this method, OkHttp adds, in order, the user-defined application interceptors, retryAndFollowUpInterceptor, BridgeInterceptor, CacheInterceptor, ConnectInterceptor, the networkInterceptors, and CallServerInterceptor, and passes the whole list to a RealInterceptorChain. The reason each interceptor can call the next one in turn, and eventually return the Response back up the chain, lies in RealInterceptorChain's proceed method:
 

public Response proceed(Request request, StreamAllocation streamAllocation, HttpCodec httpCodec,
      RealConnection connection) throws IOException {
    if (index >= interceptors.size()) throw new AssertionError();
 
    ......
 
    // Call the next interceptor in the chain.
    RealInterceptorChain next = new RealInterceptorChain(interceptors, streamAllocation, httpCodec,
        connection, index + 1, request, call, eventListener, connectTimeout, readTimeout,
        writeTimeout);
    Interceptor interceptor = interceptors.get(index);
    Response response = interceptor.intercept(next);
 
    ......
 
    return response;
  }

The current interceptor's intercept() method does its own work and then invokes the next (index + 1) interceptor. That next invocation happens because, inside its intercept() method, the current interceptor calls proceed() on the RealInterceptorChain it was given.
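As an illustration of how the chain mechanism is used from application code, here is a minimal custom interceptor (a timing/logging sketch; the class name and log format are just for demonstration). It would be registered with OkHttpClient.Builder#addInterceptor, so it runs before the built-in interceptors:

import java.io.IOException;
import okhttp3.Interceptor;
import okhttp3.OkHttpClient;
import okhttp3.Request;
import okhttp3.Response;

public class TimingInterceptor implements Interceptor {
  @Override public Response intercept(Chain chain) throws IOException {
    Request request = chain.request();
    long start = System.nanoTime();

    // Hand the request to the next interceptor in the chain and wait for its response.
    Response response = chain.proceed(request);

    long tookMs = (System.nanoTime() - start) / 1_000_000;
    System.out.println("Received " + response.code() + " for "
        + request.url() + " in " + tookMs + " ms");
    return response;
  }

  // Registered as an application interceptor, so it sits at the head of the chain.
  static OkHttpClient buildClient() {
    return new OkHttpClient.Builder()
        .addInterceptor(new TimingInterceptor())
        .build();
  }
}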

RetryAndFollowUpInterceptor

@Override
public Response intercept(Chain chain) throws IOException {
    Request request = chain.request(); // get the Request object
    RealInterceptorChain realChain = (RealInterceptorChain) chain; // the interceptor chain, used later by chain.proceed(...)
    Call call = realChain.call();
    EventListener eventListener = realChain.eventListener(); // listener

    streamAllocation = new StreamAllocation(client.connectionPool(), createAddress(request.url()),
            call, eventListener, callStackTrace);

    int followUpCount = 0;
    Response priorResponse = null;
    while (true) { // loop
        if (canceled) {
            streamAllocation.release();
            throw new IOException("Canceled");
        }

        Response response;
        boolean releaseConnection = true;
        try {
            response = realChain.proceed(request, streamAllocation, null, null); // call the next interceptor
            releaseConnection = false;
        } catch (RouteException e) {
            // The attempt to connect via a route failed. The request will not have been sent.
            // Route exception: try to recover; if that also fails, rethrow.
            if (!recover(e.getLastConnectException(), false, request)) {
                throw e.getLastConnectException();
            }
            releaseConnection = false;
            continue; // retry
        } catch (IOException e) {
            // An attempt to communicate with a server failed. The request may have been sent.
            // Connection shutdown / IO exception: try to recover.
            boolean requestSendStarted = !(e instanceof ConnectionShutdownException);
            if (!recover(e, requestSendStarted, request)) throw e;
            releaseConnection = false;
            continue; // retry
        } finally {
            // We're throwing an unchecked exception. Release any resources.
            if (releaseConnection) {
                streamAllocation.streamFailed(null);
                streamAllocation.release();
            }
        }

        // Attach the prior response if it exists. Such responses never have a body.
        if (priorResponse != null) { // the Response obtained from the previous attempt
            response = response.newBuilder()
                    .priorResponse(priorResponse.newBuilder()
                            .body(null)
                            .build())
                    .build();
        }

        // followUpRequest() figures out the HTTP request to make in response to this response:
        // it either adds authentication headers, follows redirects or handles a client request
        // timeout. If a follow-up is unnecessary or not applicable, it returns null.
        Request followUp = followUpRequest(response);

        if (followUp == null) { // e.g. the response code is 200, so followUp is null
            if (!forWebSocket) {
                streamAllocation.release();
            }
            return response;
        }

        closeQuietly(response.body());

        if (++followUpCount > MAX_FOLLOW_UPS) { // exceeded the maximum number of follow-ups: give up
            streamAllocation.release();
            throw new ProtocolException("Too many follow-up requests: " + followUpCount);
        }

        if (followUp.body() instanceof UnrepeatableRequestBody) {
            streamAllocation.release();
            throw new HttpRetryException("Cannot retry streamed HTTP body", response.code());
        }

        if (!sameConnection(response, followUp.url())) {
            streamAllocation.release();
            streamAllocation = new StreamAllocation(client.connectionPool(),
                    createAddress(followUp.url()), call, eventListener, callStackTrace);
        } else if (streamAllocation.codec() != null) {
            throw new IllegalStateException("Closing the body of " + response
                    + " didn't close its backing stream. Bad interceptor?");
        }

        request = followUp;       // continue with the processed follow-up Request, still along the interceptor chain
        priorResponse = response; // held by priorResponse for the next iteration
    }
}

The main role of this interceptor is to retry requests and to issue follow-up requests.

When a request fails, if the failure is a routing or connection problem, recovery is attempted and the request is retried. Otherwise, based on the response code, followUpRequest() reprocesses the Request (adding authentication headers, following redirects, and so on) to obtain a new Request, and the interceptor chain continues with it. If the response code is 200, the process simply ends and the response is returned.
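The connection-failure retry behavior of this interceptor can be switched off on the client if it is not wanted; a minimal configuration sketch:

import okhttp3.OkHttpClient;

public class RetryConfigDemo {
  public static void main(String[] args) {
    // Disable automatic recovery from connection problems; redirects and
    // authentication challenges are still followed by RetryAndFollowUpInterceptor.
    OkHttpClient client = new OkHttpClient.Builder()
        .retryOnConnectionFailure(false)
        .followRedirects(true) // default
        .build();
  }
}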
 

BridgeInterceptor 

The main role of BridgeInterceptor is to add request headers before the request goes out and to process response headers once the response comes back. The source:

@Override public Response intercept(Chain chain) throws IOException {
    Request userRequest = chain.request();
    Request.Builder requestBuilder = userRequest.newBuilder();
//----------------------request----------------------------------------------
    RequestBody body = userRequest.body();
    if (body != null) {
      MediaType contentType = body.contentType();
      if (contentType != null) { // add the Content-Type request header
        requestBuilder.header("Content-Type", contentType.toString());
      }
 
      long contentLength = body.contentLength();
      if (contentLength != -1) {
        requestBuilder.header("Content-Length", Long.toString(contentLength));
        requestBuilder.removeHeader("Transfer-Encoding");
      } else {
        requestBuilder.header("Transfer-Encoding", "chunked");//分块传输
        requestBuilder.removeHeader("Content-Length");
      }
    }
 
    if (userRequest.header("Host") == null) {
      requestBuilder.header("Host", hostHeader(userRequest.url(), false));
    }
 
    if (userRequest.header("Connection") == null) {
      requestBuilder.header("Connection", "Keep-Alive");
    }
 
    // If we add an "Accept-Encoding: gzip" header field we're responsible for also decompressing
    // the transfer stream.
    boolean transparentGzip = false;
    if (userRequest.header("Accept-Encoding") == null && userRequest.header("Range") == null) {
      transparentGzip = true;
      requestBuilder.header("Accept-Encoding", "gzip");
    }
 
    List<Cookie> cookies = cookieJar.loadForRequest(userRequest.url());
    if (!cookies.isEmpty()) {
      requestBuilder.header("Cookie", cookieHeader(cookies));
    }
 
    if (userRequest.header("User-Agent") == null) {
      requestBuilder.header("User-Agent", Version.userAgent());
    }
 
    Response networkResponse = chain.proceed(requestBuilder.build());
//----------------------------------response----------------------------------------------
    HttpHeaders.receiveHeaders(cookieJar, userRequest.url(), networkResponse.headers()); // save cookies
 
    Response.Builder responseBuilder = networkResponse.newBuilder()
        .request(userRequest);
 
    if (transparentGzip
        && "gzip".equalsIgnoreCase(networkResponse.header("Content-Encoding"))
        && HttpHeaders.hasBody(networkResponse)) {
      GzipSource responseBody = new GzipSource(networkResponse.body().source());
      Headers strippedHeaders = networkResponse.headers().newBuilder()
          .removeAll("Content-Encoding")//Content-Encoding、Content-Length不能用于Gzip解压缩
          .removeAll("Content-Length")
          .build();
      responseBuilder.headers(strippedHeaders);
      String contentType = networkResponse.header("Content-Type");
      responseBuilder.body(new RealResponseBody(contentType, -1L, Okio.buffer(responseBody)));
    }
 
    return responseBuilder.build();
  }

CacheInterceptor

A brief look at HTTP caching

When the server receives a request and it supports caching, it returns the resource in a 200 OK response together with Last-Modified and ETag headers. The client stores the resource in its cache and records these two values. The next time the client needs to send the same request, it uses Date plus Cache-Control to judge whether the cached copy has expired. If it has, the client adds If-Modified-Since and If-None-Match headers to the request, whose values are the Last-Modified and ETag values from the earlier response. The server uses these two headers to decide whether the resource has changed; if it has not, it returns a 304 response and the client does not need to download the resource again.

CacheStrategy is the caching-strategy class; it tells CacheInterceptor whether to use the cache or to issue a network request;

Cache encapsulates the actual cache operations;

DiskLruCache: the Cache is backed by DiskLruCache.
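Note that the cache CacheInterceptor reads and writes only exists if the application configures one; a minimal sketch (the directory and size are example values):

import java.io.File;
import okhttp3.Cache;
import okhttp3.OkHttpClient;

public class CacheSetupDemo {
  public static void main(String[] args) {
    // Example values: a local directory and a 10 MiB size limit.
    File cacheDir = new File("okhttp-cache");
    long maxSize = 10L * 1024 * 1024;

    OkHttpClient client = new OkHttpClient.Builder()
        .cache(new Cache(cacheDir, maxSize)) // backed by DiskLruCache internally
        .build();
  }
}

With a cache configured, CacheInterceptor's intercept() proceeds as follows: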


@Override public Response intercept(Chain chain) throws IOException {
    Response cacheCandidate = cache != null
        ? cache.get(chain.request()) // look up the cache, keyed by the request URL
        : null;
 
    long now = System.currentTimeMillis();
    // The cache-strategy class: it decides whether to use the cache or go to the network.
    CacheStrategy strategy = new CacheStrategy.Factory(now, chain.request(), cacheCandidate).get();
    Request networkRequest = strategy.networkRequest; // the network request; null means no network request is needed
    Response cacheResponse = strategy.cacheResponse; // the cached response; null means the cache will not be used
 
    if (cache != null) { // update statistics based on the strategy: request count, network count, cache-hit count
      cache.trackResponse(strategy);
    }
    // The cache candidate is not usable: close it.
    if (cacheCandidate != null && cacheResponse == null) {
      closeQuietly(cacheCandidate.body()); // The cache candidate wasn't applicable. Close it.
    }
    // If neither a network request nor a cached response is available, return a 504 error.
    // If we're forbidden from using the network and the cache is insufficient, fail.
    if (networkRequest == null && cacheResponse == null) {
      return new Response.Builder()
          .request(chain.request())
          .protocol(Protocol.HTTP_1_1)
          .code(504)
          .message("Unsatisfiable Request (only-if-cached)")
          .body(Util.EMPTY_RESPONSE)
          .sentRequestAtMillis(-1L)
          .receivedResponseAtMillis(System.currentTimeMillis())
          .build();
    }
 
    // If we don't need the network, we're done: the cache is usable, return it directly.
    if (networkRequest == null) {
      return cacheResponse.newBuilder()
          .cacheResponse(stripBody(cacheResponse))
          .build();
    }
 
    Response networkResponse = null;
    try {
      networkResponse = chain.proceed(networkRequest); // perform the network request and get the network response
    } finally {
      // If we're crashing on I/O or otherwise, don't leak the cache body.
      if (networkResponse == null && cacheCandidate != null) {
        closeQuietly(cacheCandidate.body());
      }
    }
    // HTTP_NOT_MODIFIED: the cached copy is still valid; merge the network response headers with the cache.
    // If we have a cache response too, then we're doing a conditional get.
    if (cacheResponse != null) {
      if (networkResponse.code() == HTTP_NOT_MODIFIED) {
        Response response = cacheResponse.newBuilder()
            .headers(combine(cacheResponse.headers(), networkResponse.headers()))
            .sentRequestAtMillis(networkResponse.sentRequestAtMillis())
            .receivedResponseAtMillis(networkResponse.receivedResponseAtMillis())
            .cacheResponse(stripBody(cacheResponse))
            .networkResponse(stripBody(networkResponse))
            .build();
        networkResponse.body().close();
 
        // Update the cache after combining headers but before stripping the
        // Content-Encoding header (as performed by initContentStream()).
        cache.trackConditionalCacheHit();
        cache.update(cacheResponse, response); // update the cache
        return response;
      } else {
        closeQuietly(cacheResponse.body());
      }
    }
 
    Response response = networkResponse.newBuilder()
        .cacheResponse(stripBody(cacheResponse))
        .networkResponse(stripBody(networkResponse))
        .build();
 
    if (cache != null) {
      // Has a response body & is cacheable.
      if (HttpHeaders.hasBody(response) && CacheStrategy.isCacheable(response, networkRequest)) {
        // Offer this request to the cache.
        CacheRequest cacheRequest = cache.put(response);
        return cacheWritingResponse(cacheRequest, response); // write to the cache
      }
 
      if (HttpMethod.invalidatesCache(networkRequest.method())) { // this request method invalidates any cached entry
        try {
          cache.remove(networkRequest);
        } catch (IOException ignored) {
          // The cache cannot be written.
        }
      }
    }
 
    return response;
  }

Based on the result returned by the cache-strategy class, CacheInterceptor proceeds as follows:
1. If the network may not be used and there is no valid cache, return a 504 error;
2. otherwise, if no network request is needed, use the cache directly;
3. otherwise, if the network may be used, perform the network request;
4. then, if there is a cached response and the network request returns HTTP_NOT_MODIFIED, the cache is still valid: merge the cached response with the network response and update the cache;
5. otherwise, if there was no usable cache, write the new response to the cache.
As we can see, CacheStrategy plays the key role in CacheInterceptor: it decides whether to use the network or the cache. The key code of this class is the getCandidate() method:
 

 private CacheStrategy getCandidate() {
      // No cached response.
      if (cacheResponse == null) { // no cached response: go straight to the network
        return new CacheStrategy(request, null);
      }
 
      // Drop the cached response if it's missing a required handshake.
      if (request.isHttps() && cacheResponse.handshake() == null) { // HTTPS but no recorded handshake: go to the network
        return new CacheStrategy(request, null);
      }
 
      // If this response shouldn't have been stored, it should never be used
      // as a response source. This check should be redundant as long as the
      // persistence store is well-behaved and the rules are constant.
      if (!isCacheable(cacheResponse, request)) { // not cacheable: go to the network
        return new CacheStrategy(request, null);
      }
 
      CacheControl requestCaching = request.cacheControl();
      if (requestCaching.noCache() || hasConditions(request)) {
        // The request has no-cache, or it carries If-Modified-Since / If-None-Match headers.
        // Conditional headers mean the local cache has expired and the server must verify
        // whether the local copy can still be used.
        return new CacheStrategy(request, null);
      }
 
      CacheControl responseCaching = cacheResponse.cacheControl();
      if (responseCaching.immutable()) { // immutable: force use of the cache
        return new CacheStrategy(null, cacheResponse);
      }
 
      long ageMillis = cacheResponseAge();
      long freshMillis = computeFreshnessLifetime();
 
      if (requestCaching.maxAgeSeconds() != -1) {
        freshMillis = Math.min(freshMillis, SECONDS.toMillis(requestCaching.maxAgeSeconds()));
      }
 
      long minFreshMillis = 0;
      if (requestCaching.minFreshSeconds() != -1) {
        minFreshMillis = SECONDS.toMillis(requestCaching.minFreshSeconds());
      }
 
      long maxStaleMillis = 0;
      if (!responseCaching.mustRevalidate() && requestCaching.maxStaleSeconds() != -1) {
        maxStaleMillis = SECONDS.toMillis(requestCaching.maxStaleSeconds());
      }
      // Cacheable, and ageMillis + minFreshMillis < freshMillis + maxStaleMillis
      // (expired but still usable; a Warning header is merely added to the response).
      if (!responseCaching.noCache() && ageMillis + minFreshMillis < freshMillis + maxStaleMillis) {
        Response.Builder builder = cacheResponse.newBuilder();
        if (ageMillis + minFreshMillis >= freshMillis) {
          builder.addHeader("Warning", "110 HttpURLConnection \"Response is stale\"");
        }
        long oneDayMillis = 24 * 60 * 60 * 1000L;
        if (ageMillis > oneDayMillis && isFreshnessLifetimeHeuristic()) {
          builder.addHeader("Warning", "113 HttpURLConnection \"Heuristic expiration\"");
        }
        return new CacheStrategy(null, builder.build()); // use the cache
      }
 
      // Find a condition to add to the request. If the condition is satisfied, the response body
      // will not be transmitted.
      String conditionName;
      String conditionValue;
      // Reaching this point means the cache has expired.
      // Add a conditional request header: If-Modified-Since or If-None-Match.
      // ETag pairs with If-None-Match; Last-Modified pairs with If-Modified-Since.
      // The response header and its request counterpart carry the same value;
      // the former arrives in a response, the latter is sent in a request.
      // The server uses the request header to check whether the resource has changed;
      // if it has not, the expired local copy can still be used.
      if (etag != null) {
        conditionName = "If-None-Match";
        conditionValue = etag;
      } else if (lastModified != null) {
        conditionName = "If-Modified-Since";
        conditionValue = lastModifiedString;
      } else if (servedDate != null) {
        conditionName = "If-Modified-Since";
        conditionValue = servedDateString;
      } else {
        return new CacheStrategy(request, null); // No condition! Make a regular request.
      }
 
      Headers.Builder conditionalRequestHeaders = request.headers().newBuilder();
      Internal.instance.addLenient(conditionalRequestHeaders, conditionName, conditionValue);
 
      Request conditionalRequest = request.newBuilder()
          .headers(conditionalRequestHeaders.build())
          .build();
      return new CacheStrategy(conditionalRequest, cacheResponse);
    }

The general flow is as follows (an if-else chain):
1. No cache: go straight to the network;
2. HTTPS but no recorded handshake: go straight to the network;
3. Not cacheable: go straight to the network;
4. The request has no-cache, or it carries If-Modified-Since or If-None-Match headers (so the server must verify whether the local cache can still be used): go to the network;
5. Cacheable, and ageMillis + minFreshMillis < freshMillis + maxStaleMillis (expired but still usable; only a Warning header is added to the response): use the cache;
6. The cache has expired: add an If-Modified-Since or If-None-Match request header and go to the network.

A request can also steer this decision explicitly through Cache-Control, as the sketch below shows.
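As an illustration of how an application can steer these decisions, a request may carry an explicit CacheControl (a minimal sketch; the URLs are placeholders):

import java.util.concurrent.TimeUnit;
import okhttp3.CacheControl;
import okhttp3.Request;

public class CacheControlDemo {
  public static void main(String[] args) {
    // Answer only from the cache; if nothing usable is cached, CacheInterceptor returns 504.
    Request cachedOnly = new Request.Builder()
        .url("https://example.com/")
        .cacheControl(CacheControl.FORCE_CACHE)
        .build();

    // Skip the cache entirely and always go to the network.
    Request networkOnly = new Request.Builder()
        .url("https://example.com/")
        .cacheControl(CacheControl.FORCE_NETWORK)
        .build();

    // Accept a cached response up to one hour past expiry (maxStale feeds maxStaleMillis above).
    Request tolerant = new Request.Builder()
        .url("https://example.com/")
        .cacheControl(new CacheControl.Builder()
            .maxStale(1, TimeUnit.HOURS)
            .build())
        .build();
  }
}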


ConnectInterceptor (core, connection pool)

The remainder is unfinished and will be filled in later.

Source: blog.csdn.net/chengzuidongfeng/article/details/91867832