OkHttp Source Code Analysis (Part 2)

Foreword

This is a long article; I hope it helps you.

The book 《Android进阶之光》 and quite a few OkHttp source-analysis articles online cover roughly the following points:
1. The OkHttp request flow

  • Task scheduling
  • Interceptors
  • Caching strategy
  • Retry on failure

2. OkHttp's connection pool (connection reuse)
Once you understand these points, an analysis of the OkHttp source is more or less complete. The bolded items are the ones I personally most wanted to understand, and they are also what most blogs cover.

Source code analysis tools

For my earlier EventBus source analysis I imported the code into IDEA to read it; today I found a few handy tools for macOS.
1. Understand
A pretty good source-analysis tool: besides letting you click a class to jump to it, it can also generate all sorts of relationship diagrams.
For a detailed introduction, see this blog post: https://blog.csdn.net/weixin_42560563/article/details/80887684
Download: https://download.csdn.net/download/aa642531/10576608
This one was shared by someone else and costs points to download.

2. StarUML
If you really want to understand source code thoroughly, it's best to draw your own UML diagrams and work out the class structure, execution order, and so on. For the download link and how to crack it, see the blog post below; I've tried it and it works.
https://blog.csdn.net/jonwu0102/article/details/81387083

3. OmniGraffle
This is actually what I originally set out to search for; the tools above just came up along the way. I had seen a diagram like this in another OkHttp source-analysis blog and thought it was quite nice, so I searched for software to draw that kind of diagram and found this one, which is well rated on macOS.

Download: https://www.jb51.net/softs/581402.html
Note for Windows users:
There are plenty of tools like this on Windows, many with cracked versions; search around for yourself.

OkHttp source code analysis

From this diagram you can see that OkHttp's core class is OkHttpClient; the other classes all build on it.

1. The OkHttp request flow
First, look at a piece of code from before. You can follow along with the flow chart above (drawn by someone else).

/**
 * Asynchronous GET request
 */
public static void get() {
  //1. Create the OkHttpClient
  OkHttpClient okHttpClient = new OkHttpClient.Builder()
          .retryOnConnectionFailure(true)
          .connectTimeout(3000, TimeUnit.SECONDS)
          .build();

  //2. Create the Request: set a URL and the request method
  Request request = new Request.Builder().url("http://wanandroid.com/wxarticle/chapters/json")
          .method("GET", null)
          .build();
  //3. Create a Call; the parameter is the Request
  Call call = okHttpClient.newCall(request);
  //4. Hand the request to the dispatcher and override the callbacks
  call.enqueue(new Callback() {
    //Called when the request fails
    @Override
    public void onFailure(Call call, IOException e) {
      Log.d(TAG, "onFailure: failed ===> " + e.getMessage());
    }

    //Called when the request succeeds
    @Override
    public void onResponse(Call call, Response response) throws IOException {
      Log.d(TAG, "onResponse: " + response.body().string());
    }
  });
}

Every request needs an OkHttpClient, which we can obtain either with new or through the Builder (builder pattern). Clearly the Builder is there to initialize some configuration parameters; to save space I won't paste its source here.
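As a quick aside, here is a sketch of my own (not the OkHttp source): the two ways of obtaining a client come down to the same thing, and the Builder simply exposes the configuration.

// new OkHttpClient() uses all defaults and is equivalent to new OkHttpClient.Builder().build();
// the Builder just lets you tweak the parameters (the values below are only examples).
OkHttpClient defaultClient = new OkHttpClient();

OkHttpClient customClient = new OkHttpClient.Builder()
        .connectTimeout(10, TimeUnit.SECONDS)
        .readTimeout(10, TimeUnit.SECONDS)
        .retryOnConnectionFailure(true)
        .build();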
Next, OkHttpClient's newCall method is called, passing in the request:

/**
 * Prepares the {@code request} to be executed at some point in the future.
 */
@Override public Call newCall(Request request) {
  return RealCall.newRealCall(this, request, false /* for web socket */);
}

This returns a RealCall, which wraps the OkHttpClient and the request parameters; calling execute or enqueue actually starts the request.

enqueue()

@Override public void enqueue(Callback responseCallback) {
  synchronized (this) {
    if (executed) throw new IllegalStateException("Already Executed");
    executed = true;
  }
  captureCallStackTrace();
  eventListener.callStart(this);
  client.dispatcher().enqueue(new AsyncCall(responseCallback));
}

Let's first look at the asynchronous enqueue that we use most often. Inside enqueue, the call is handed to the enqueue of the Dispatcher (the task scheduler) held by the OkHttpClient. So let's look at the Dispatcher class.
Its member fields are listed below; the names already give a rough idea of what each one is for (and of course I also checked the book):

/** Maximum number of concurrent requests. */
private int maxRequests = 64;
/** Maximum number of concurrent requests per host. */
private int maxRequestsPerHost = 5;
private @Nullable Runnable idleCallback;
/** Thread pool. Executes calls. Created lazily. */
private @Nullable ExecutorService executorService;
/** Deque of waiting async calls. Ready async calls in the order they'll be run. */
private final Deque<AsyncCall> readyAsyncCalls = new ArrayDeque<>();
/** Deque of running async calls. Includes canceled calls that haven't finished yet. */
private final Deque<AsyncCall> runningAsyncCalls = new ArrayDeque<>();
/** Deque of running sync calls. Includes canceled calls that haven't finished yet. */
private final Deque<RealCall> runningSyncCalls = new ArrayDeque<>();

Now let's look at the constructors:

public Dispatcher(ExecutorService executorService) {
  this.executorService = executorService;
}

public Dispatcher() {
}

public synchronized ExecutorService executorService() {
  if (executorService == null) {
    executorService = new ThreadPoolExecutor(0, Integer.MAX_VALUE, 60, TimeUnit.SECONDS,
        new SynchronousQueue<>(), Util.threadFactory("OkHttp Dispatcher", false));
  }
  return executorService;
}

If a thread pool has been configured, that one is used; otherwise the default one is created. This feels a lot like the platform's AsyncTask: the default pool suits a large number of short-lived tasks, and whether to supply your own pool depends on your needs, though the default is usually enough.
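If you do want to plug in your own pool, it goes in through the Dispatcher. Here is a minimal sketch of my own (assuming the public OkHttp 3.x API; the pool sizing is invented):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import okhttp3.Dispatcher;
import okhttp3.OkHttpClient;

public class CustomDispatcherExample {
  public static OkHttpClient build() {
    // Hypothetical sizing: a bounded pool instead of OkHttp's default SynchronousQueue-based pool.
    ExecutorService executor = Executors.newFixedThreadPool(8);
    // This is the Dispatcher(ExecutorService) constructor shown above.
    Dispatcher dispatcher = new Dispatcher(executor);
    return new OkHttpClient.Builder()
        .dispatcher(dispatcher)   // async calls will now be scheduled on our pool
        .build();
  }
}

With that aside, back to the Dispatcher's enqueue method: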

void enqueue(AsyncCall call) {
  synchronized (this) {
    readyAsyncCalls.add(call);
  }
  promoteAndExecute();
}

/**
 * Promotes eligible calls from {@link #readyAsyncCalls} to {@link #runningAsyncCalls} and runs
 * them on the executor service. Must not be called with synchronization because executing calls
 * can call into user code.
 *
 * @return true if the dispatcher is currently running calls.
 */
private boolean promoteAndExecute() {
  assert (!Thread.holdsLock(this));

  List<AsyncCall> executableCalls = new ArrayList<>();
  boolean isRunning;
  synchronized (this) {
    for (Iterator<AsyncCall> i = readyAsyncCalls.iterator(); i.hasNext(); ) {
      AsyncCall asyncCall = i.next();

      if (runningAsyncCalls.size() >= maxRequests) break; // Max capacity.
      if (runningCallsForHost(asyncCall) >= maxRequestsPerHost) continue; // Host max capacity.
      i.remove();
      executableCalls.add(asyncCall);
      runningAsyncCalls.add(asyncCall);
    }
    isRunning = runningCallsCount() > 0;
  }

  for (int i = 0, size = executableCalls.size(); i < size; i++) {
    AsyncCall asyncCall = executableCalls.get(i);
    asyncCall.executeOn(executorService());
  }

  return isRunning;
}

The code here differs from earlier versions; according to a blog post I read, the old code looked like this:

synchronized void enqueue(AsyncCall call) {
  if (runningAsyncCalls.size() < maxRequests && runningCallsForHost(call) < maxRequestsPerHost) {
    runningAsyncCalls.add(call);
    executorService().execute(call);
  } else {
    readyAsyncCalls.add(call);
  }
}

Previously, the limits were checked first and a call was parked in the waiting queue only when a limit had been reached; now every call goes into readyAsyncCalls first and the limits are checked at promotion time. Looking at the current code: break stops the promotion once 64 requests are already running, and continue merely skips a call whose host already has maxRequestsPerHost (5) calls running; that call stays in readyAsyncCalls and is promoted on a later pass of promoteAndExecute, which the dispatcher runs again as running calls finish. Each promoted AsyncCall is then handed to executeOn. AsyncCall is a wrapper around the task; for asynchronous execution it is given a thread pool from which to obtain a thread to run on.
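As a side note, those 64 and 5 limits are just defaults; the Dispatcher exposes setters for both. A small fragment (assuming the public OkHttp 3.x Dispatcher API and the okHttpClient from the example at the top):

Dispatcher dispatcher = okHttpClient.dispatcher();
dispatcher.setMaxRequests(128);        // cap on concurrent async calls overall (example value)
dispatcher.setMaxRequestsPerHost(10);  // cap on concurrent async calls per host (example value)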

Now let's look at the AsyncCall source:

final class AsyncCall extends NamedRunnable {
  private final Callback responseCallback;

  AsyncCall(Callback responseCallback) {
    super("OkHttp %s", redactedUrl());
    this.responseCallback = responseCallback;
  }

  String host() {
    return originalRequest.url().host();
  }

  Request request() {
    return originalRequest;
  }

  RealCall get() {
    return RealCall.this;
  }

  /**
   * Attempt to enqueue this async call on {@code executorService}. This will attempt to clean up
   * if the executor has been shut down by reporting the call as failed.
   */
  void executeOn(ExecutorService executorService) {
    assert (!Thread.holdsLock(client.dispatcher()));
    boolean success = false;
    try {
      executorService.execute(this);
      success = true;
    } catch (RejectedExecutionException e) {
      InterruptedIOException ioException = new InterruptedIOException("executor rejected");
      ioException.initCause(e);
      eventListener.callFailed(RealCall.this, ioException);
      responseCallback.onFailure(RealCall.this, ioException);
    } finally {
      if (!success) {
        client.dispatcher().finished(this); // This call is no longer running!
      }
    }
  }

  @Override protected void execute() {
    boolean signalledCallback = false;
    timeout.enter();
    try {
      Response response = getResponseWithInterceptorChain();
      if (retryAndFollowUpInterceptor.isCanceled()) {
        signalledCallback = true;
        responseCallback.onFailure(RealCall.this, new IOException("Canceled"));
      } else {
        signalledCallback = true;
        responseCallback.onResponse(RealCall.this, response);
      }
    } catch (IOException e) {
      e = timeoutExit(e);
      if (signalledCallback) {
        // Do not signal the callback twice!
        Platform.get().log(INFO, "Callback failure for " + toLoggableString(), e);
      } else {
        eventListener.callFailed(RealCall.this, e);
        responseCallback.onFailure(RealCall.this, e);
      }
    } finally {
      client.dispatcher().finished(this);
    }
  }
}

There is actually one more place we haven't looked at: the synchronous request. Reading the two together may make things easier to understand.

@Override public Response execute() throws IOException {
  if (originalRequest.body instanceof DuplexRequestBody) {
    DuplexRequestBody duplexRequestBody = (DuplexRequestBody) originalRequest.body;
    return duplexRequestBody.awaitExecute();
  }
  synchronized (this) {
    if (executed) throw new IllegalStateException("Already Executed");
    executed = true;
  }
  captureCallStackTrace();
  timeout.enter();
  eventListener.callStart(this);
  try {
    client.dispatcher().executed(this);
    Response result = getResponseWithInterceptorChain();
    if (result == null) throw new IOException("Canceled");
    return result;
  } catch (IOException e) {
    e = timeoutExit(e);
    eventListener.callFailed(this, e);
    throw e;
  } finally {
    client.dispatcher().finished(this);
  }
}

The synchronous execute method lives in RealCall; together with the execute in AsyncCall, these are the key pieces of code.
Response response = getResponseWithInterceptorChain()

Let's set this line aside for now; it is the step that both the synchronous and asynchronous paths eventually run. The synchronous execute also goes through the dispatcher, but as the source shows, there the call is simply added to and later removed from a container. The point of doing this is to make it easy to cancel all requests in one place and to keep a count of requests (async + sync).

By this point it is clear that the Dispatcher does exactly what its name suggests: it only schedules. For asynchronous requests it creates the thread pool and hands the AsyncCall tasks out for execution; for synchronous requests it merely adds the call to its deque. RealCall, in turn, is the class that wraps the OkHttpClient and the Request (the request parameters) and actually initiates the request.
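That bookkeeping is also what makes call counting and bulk cancellation possible. A small fragment (assuming the public OkHttp 3.x Dispatcher API and the okHttpClient from the example at the top):

Dispatcher dispatcher = okHttpClient.dispatcher();
int queued  = dispatcher.queuedCallsCount();   // calls still waiting in readyAsyncCalls
int running = dispatcher.runningCallsCount();  // runningAsyncCalls + runningSyncCalls
dispatcher.cancelAll();                        // cancels every queued and running call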

Next, let's look at
Response response = getResponseWithInterceptorChain()

Response getResponseWithInterceptorChain() throws IOException {
  // Add a series of interceptors; note the order in which they are added
  List<Interceptor> interceptors = new ArrayList<>();
  interceptors.addAll(client.interceptors());
  interceptors.add(retryAndFollowUpInterceptor);
  // Bridge interceptor
  interceptors.add(new BridgeInterceptor(client.cookieJar()));
  // Cache interceptor: serves responses from the cache
  interceptors.add(new CacheInterceptor(client.internalCache()));
  // Connect interceptor: establishes the network connection
  interceptors.add(new ConnectInterceptor(client));
  if (!forWebSocket) {
    interceptors.addAll(client.networkInterceptors());
  }
  // Call-server interceptor: sends the request to the server and fetches the data
  interceptors.add(new CallServerInterceptor(forWebSocket));
  // Build the responsibility chain
  Interceptor.Chain chain = new RealInterceptorChain(interceptors, null, null, null, 0,
          originalRequest, this, eventListener, client.connectTimeoutMillis(),
          client.readTimeoutMillis(), client.writeTimeoutMillis());
  // Process the chain
  return chain.proceed(originalRequest);
}

This method adds a whole series of interceptors, and you can see there is a Chain: this is the chain-of-responsibility pattern.

Definition of the chain-of-responsibility pattern: give multiple objects a chance to handle a request, so the sender and the receiver of the request are not coupled; link these objects into a chain and pass the request along it until one of them handles it.

Looking back at the class diagram and use cases I drew back in school while reading 《Android设计模式》:

Typical use cases for the chain-of-responsibility pattern:
1. Several objects can handle the same request, but which one actually does is decided at runtime.
2. You want to submit a request to one of several objects without naming the handler explicitly.
3. The set of objects that handles a request needs to be specified dynamically.

Clearly, once you know the concept, the interceptors here are a very good fit for this pattern, so as long as you understand a basic implementation of chain of responsibility, this part should not be hard.

In fact, many design patterns are just experience that earlier developers distilled from solving a class of problems: when a class of problems keeps getting solved with the same or similar code, we give that code a name and call it the XX pattern. So I think what matters most about design patterns is understanding the concept and the scenarios where they apply.
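To make the pattern concrete before diving back into the source, here is a toy sketch of my own (plain Java, not OkHttp's actual classes) that mimics how a chain hands the request down by building the next link with index + 1:

import java.util.Arrays;
import java.util.List;

// A toy chain of responsibility mimicking RealInterceptorChain: proceed() invokes
// interceptors.get(index) and hands it a chain built with index + 1, so every
// interceptor that calls chain.proceed() automatically passes control to the next one.
public class MiniChainDemo {
  interface MiniInterceptor {
    String intercept(MiniChain chain);
  }

  static final class MiniChain {
    final List<MiniInterceptor> interceptors;
    final int index;
    final String request;

    MiniChain(List<MiniInterceptor> interceptors, int index, String request) {
      this.interceptors = interceptors;
      this.index = index;
      this.request = request;
    }

    String proceed(String request) {
      // Same trick as RealInterceptorChain.proceed(): advance the index, invoke the current interceptor.
      MiniChain next = new MiniChain(interceptors, index + 1, request);
      return interceptors.get(index).intercept(next);
    }
  }

  public static void main(String[] args) {
    MiniInterceptor logging = chain -> {
      System.out.println("-> " + chain.request);
      String response = chain.proceed(chain.request);                  // hand off to the next interceptor
      System.out.println("<- " + response);
      return response;
    };
    MiniInterceptor server = chain -> "response for " + chain.request; // last link: never calls proceed()

    new MiniChain(Arrays.asList(logging, server), 0, "GET /json").proceed("GET /json");
  }
}

OkHttp's RealInterceptorChain does essentially the same thing, just with Request/Response plus the extra connection state.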

Back in the source, let's see what proceed actually does.

public Response proceed(Request request, StreamAllocation streamAllocation, HttpCodec httpCodec,
    RealConnection connection) throws IOException {
...    
  // Call the next interceptor in the chain.
  RealInterceptorChain next = new RealInterceptorChain(interceptors, streamAllocation, httpCodec,
      connection, index + 1, request, call, eventListener, connectTimeout, readTimeout,
      writeTimeout);
  Interceptor interceptor = interceptors.get(index);
  Response response = interceptor.intercept(next);

...


  return response;
}

You can see that each call to proceed moves on to the next interceptor, because index is incremented by 1 every time; when that interceptor calls proceed in turn, execution keeps moving down the chain in the same way. Recall that when we write a custom interceptor, what we ultimately return is the result of proceed:
return chain.proceed(builder.build());
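For instance, a minimal custom application interceptor might look like the sketch below (my own example against the public OkHttp 3.x Interceptor API; the header value is made up). Because of interceptors.addAll(client.interceptors()) in getResponseWithInterceptorChain(), it runs before all the built-in interceptors.

import java.io.IOException;

import okhttp3.Interceptor;
import okhttp3.Request;
import okhttp3.Response;

// Decorates every request with a header, then hands control to the next interceptor.
public class HeaderInterceptor implements Interceptor {
  @Override public Response intercept(Chain chain) throws IOException {
    Request decorated = chain.request().newBuilder()
        .header("User-Agent", "my-app/1.0")   // hypothetical header value
        .build();
    return chain.proceed(decorated);          // without this call the chain stops here
  }
}

It is registered with new OkHttpClient.Builder().addInterceptor(new HeaderInterceptor()).build().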

If we don't dig any deeper into the interceptors, we could actually stop here, because through them we have already obtained the Response to return. After that, the call
responseCallback.onResponse(RealCall.this, response);
takes us to the callback in our own usage code:
Log.d(TAG, "onResponse: " + response.body().string());

Up to this point, OkHttp's overall flow looks roughly like this:

Inside the interceptors


Here is a diagram from another blog. Matching it against the source, you can see that the list of custom interceptors is added first, and the interceptors added after it act, in order, as: retry/redirect -> building the request headers -> caching (if a usable cache entry exists, the call ends there) -> the network request.

Retry and redirect: RetryAndFollowUpInterceptor

@Override public Response intercept(Chain chain) throws IOException {
  // ...
  // Note that a StreamAllocation is initialized here and assigned to a field; its role comes up later
  StreamAllocation streamAllocation = new StreamAllocation(client.connectionPool(),
          createAddress(request.url()), call, eventListener, callStackTrace);
  this.streamAllocation = streamAllocation;
  // Used to count the number of follow-ups (redirects)
  int followUpCount = 0;
  Response priorResponse = null;
  while (true) {
    if (canceled) {
      streamAllocation.release();
      throw new IOException("Canceled");
    }

    Response response;
    boolean releaseConnection = true;
    try {
      // Run the rest of the responsibility chain from here; looping back to this point is the retry logic
      response = realChain.proceed(request, streamAllocation, null, null);
      releaseConnection = false;
    } catch (RouteException e) {
      // recover() tries to recover from the failure: it returns true if recovery is possible, false otherwise
      if (!recover(e.getLastConnectException(), streamAllocation, false, request)) {
        throw e.getLastConnectException();
      }
      releaseConnection = false;
      continue;
    } catch (IOException e) {
      // Retry the connection to the server
      boolean requestSendStarted = !(e instanceof ConnectionShutdownException);
      if (!recover(e, streamAllocation, requestSendStarted, request)) throw e;
      releaseConnection = false;
      continue;
    } finally {
      // If releaseConnection is true an exception occurred along the way, so release the resources
      if (releaseConnection) {
        streamAllocation.streamFailed(null);
        streamAllocation.release();
      }
    }

    // Build a response that carries the previous response priorResponse, whose body is stripped to null
    if (priorResponse != null) {
      response = response.newBuilder()
              .priorResponse(priorResponse.newBuilder().body(null).build())
              .build();
    }

    // Process the response: this may add authentication info, follow a redirect, or handle a timeout
    // If the request cannot or does not need to be processed further, this returns null
    Request followUp = followUpRequest(response, streamAllocation.route());

    // No follow-up needed: return the response as is
    if (followUp == null) {
      if (!forWebSocket) {
        streamAllocation.release();
      }
      return response;
    }

    // Close resources
    closeQuietly(response.body());

    // If the maximum number of follow-ups has been reached, throw
    if (++followUpCount > MAX_FOLLOW_UPS) {
      streamAllocation.release();
      throw new ProtocolException("Too many follow-up requests: " + followUpCount);
    }

    if (followUp.body() instanceof UnrepeatableRequestBody) {
      streamAllocation.release();
      throw new HttpRetryException("Cannot retry streamed HTTP body", response.code());
    }

    // Check whether the new request can reuse the previous connection; if not, create a new StreamAllocation
    if (!sameConnection(response, followUp.url())) {
      streamAllocation.release();
      streamAllocation = new StreamAllocation(client.connectionPool(),
              createAddress(followUp.url()), call, eventListener, callStackTrace);
      this.streamAllocation = streamAllocation;
    } else if (streamAllocation.codec() != null) {
      throw new IllegalStateException("Closing the body of " + response
              + " didn't close its backing stream. Bad interceptor?");
    }

    request = followUp;
    priorResponse = response;
  }
}
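Before moving on: if you don't want silent retries or redirects, the behaviour this interceptor implements can be limited from the Builder. A small fragment (assuming the public OkHttp 3.x Builder API):

OkHttpClient client = new OkHttpClient.Builder()
        .retryOnConnectionFailure(false)  // don't transparently retry connection failures
        .followRedirects(false)           // don't follow 3xx redirects
        .followSslRedirects(false)        // don't follow redirects between HTTP and HTTPS
        .build();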

Using the cache: CacheInterceptor

public final class CacheInterceptor implements Interceptor {
  @Override public Response intercept(Chain chain) throws IOException {
    Response cacheCandidate = cache != null ? cache.get(chain.request()) : null;
    long now = System.currentTimeMillis();
    // Use the request and the cached response to decide whether a usable cached response exists
    CacheStrategy strategy = new CacheStrategy.Factory(now, chain.request(), cacheCandidate).get();
    Request networkRequest = strategy.networkRequest; // null if this request will not use the network
    Response cacheResponse = strategy.cacheResponse; // null if this request will not use the cache
    if (cache != null) {
      cache.trackResponse(strategy);
    }
    if (cacheCandidate != null && cacheResponse == null) {
      closeQuietly(cacheCandidate.body());
    }
    // Neither network nor cache may be used: intercept right here, no need to hand off to the next interceptor (the network one)
    if (networkRequest == null && cacheResponse == null) {
      return new Response.Builder()
              .request(chain.request())
              .protocol(Protocol.HTTP_1_1)
              .code(504)
              .message("Unsatisfiable Request (only-if-cached)")
              .body(Util.EMPTY_RESPONSE)
              .sentRequestAtMillis(-1L)
              .receivedResponseAtMillis(System.currentTimeMillis())
              .build();
    }
    // Cache but no network: return the cached result, no need to hand off to the next interceptor
    if (networkRequest == null) {
      return cacheResponse.newBuilder().cacheResponse(stripBody(cacheResponse)).build();
    }
    Response networkResponse = null;
    try {
      // Call the chain's proceed, i.e. hand the work over to the next interceptor
      networkResponse = chain.proceed(networkRequest);
    } finally {
      if (networkResponse == null && cacheCandidate != null) {
        closeQuietly(cacheCandidate.body());
      }
    }
    // We now have the network response; if a cached response exists, update the cache with the new result where appropriate
    if (cacheResponse != null) {
      // The server returned 304 Not Modified: return the result from the cache
      if (networkResponse.code() == HTTP_NOT_MODIFIED) {
        Response response = cacheResponse.newBuilder()
                .headers(combine(cacheResponse.headers(), networkResponse.headers()))
                .sentRequestAtMillis(networkResponse.sentRequestAtMillis())
                .receivedResponseAtMillis(networkResponse.receivedResponseAtMillis())
                .cacheResponse(stripBody(cacheResponse))
                .networkResponse(stripBody(networkResponse))
                .build();
        networkResponse.body().close();
        cache.trackConditionalCacheHit();
        // Update the cache
        cache.update(cacheResponse, response);
        return response;
      } else {
        closeQuietly(cacheResponse.body());
      }
    }
    Response response = networkResponse.newBuilder()
            .cacheResponse(stripBody(cacheResponse))
            .networkResponse(stripBody(networkResponse))
            .build();
    // Write the result of the request into the cache
    if (cache != null) {
      if (HttpHeaders.hasBody(response) && CacheStrategy.isCacheable(response, networkRequest)) {
        CacheRequest cacheRequest = cache.put(response);
        return cacheWritingResponse(cacheRequest, response);
      }
      if (HttpMethod.invalidatesCache(networkRequest.method())) {
        try {
          cache.remove(networkRequest);
        } catch (IOException ignored) {
          // The cache cannot be written.
        }
      }
    }
    return response;
  }
}

The cache interceptor uses information from the request and from the cached response to decide whether a usable cached response exists. If there is one, it returns that cached response to the user; otherwise it continues along the responsibility chain to fetch a response from the server. When the response arrives, it is then cached to disk.
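Note that CacheInterceptor only has something to work with when a Cache has been configured on the client; there is no disk cache by default. A sketch of my own (assuming the public OkHttp 3.x API; the directory name and size are arbitrary):

import java.io.File;

import android.content.Context;
import okhttp3.Cache;
import okhttp3.CacheControl;
import okhttp3.OkHttpClient;
import okhttp3.Request;

public class CacheConfigExample {
  // Give the client a 10 MiB disk cache so CacheInterceptor has a cache to consult.
  public static OkHttpClient buildCachedClient(Context context) {
    File cacheDir = new File(context.getCacheDir(), "http_cache");
    Cache cache = new Cache(cacheDir, 10L * 1024 * 1024);
    return new OkHttpClient.Builder()
        .cache(cache)
        .build();
  }

  // A per-request policy: FORCE_CACHE means "only-if-cached", which is exactly the
  // branch above that returns the synthetic 504 when nothing usable is cached.
  public static Request onlyIfCached(String url) {
    return new Request.Builder()
        .url(url)
        .cacheControl(CacheControl.FORCE_CACHE)
        .build();
  }
}

CacheControl.FORCE_NETWORK is the opposite policy: skip the cache entirely.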

Connecting: ConnectInterceptor

public Response intercept(Chain chain) throws IOException {
  RealInterceptorChain realChain = (RealInterceptorChain) chain;
  Request request = realChain.request();
  StreamAllocation streamAllocation = realChain.streamAllocation();

  // We need the network to satisfy this request. Possibly for validating a conditional GET.
  boolean doExtensiveHealthChecks = !request.method().equals("GET");
  HttpCodec httpCodec = streamAllocation.newStream(client, chain, doExtensiveHealthChecks);
  RealConnection connection = streamAllocation.connection();

  return realChain.proceed(request, streamAllocation, httpCodec, connection);
}

This interceptor only initiates the connection to the server; the real connection work happens inside RealConnection.
private final ConnectionPool connectionPool;

Inside it there is a connection pool for reuse, a bit like a thread pool. This is the very core of the connection reuse we want to understand.

public final class ConnectionPool {
  /**
   * Background threads are used to cleanup expired connections. There will be at most a single
   * thread running per connection pool. The thread pool executor permits the pool itself to be
   * garbage collected.
   */
  private static final Executor executor = new ThreadPoolExecutor(0 /* corePoolSize */,
      Integer.MAX_VALUE /* maximumPoolSize */, 60L /* keepAliveTime */, TimeUnit.SECONDS,
      new SynchronousQueue<>(), Util.threadFactory("OkHttp ConnectionPool", true));

  /** The maximum number of idle connections for each address. */
  private final int maxIdleConnections;
  private final long keepAliveDurationNs;
  private final Runnable cleanupRunnable = () -> {
    while (true) {
      long waitNanos = cleanup(System.nanoTime());
      if (waitNanos == -1) return;
      if (waitNanos > 0) {
        long waitMillis = waitNanos / 1000000L;
        waitNanos -= (waitMillis * 1000000L);
        synchronized (ConnectionPool.this) {
          try {
            ConnectionPool.this.wait(waitMillis, (int) waitNanos);
          } catch (InterruptedException ignored) {
          }
        }
      }
    }
  };

  private final Deque<RealConnection> connections = new ArrayDeque<>();
  final RouteDatabase routeDatabase = new RouteDatabase();
  boolean cleanupRunning;

  /**
   * Create a new connection pool with tuning parameters appropriate for a single-user application.
   * The tuning parameters in this pool are subject to change in future OkHttp releases. Currently
   * this pool holds up to 5 idle connections which will be evicted after 5 minutes of inactivity.
   */
  public ConnectionPool() {
    this(5, 5, TimeUnit.MINUTES);
  }

  public ConnectionPool(int maxIdleConnections, long keepAliveDuration, TimeUnit timeUnit) {
    this.maxIdleConnections = maxIdleConnections;
    this.keepAliveDurationNs = timeUnit.toNanos(keepAliveDuration);

    // Put a floor on the keep alive duration, otherwise cleanup will spin loop.
    if (keepAliveDuration <= 0) {
      throw new IllegalArgumentException("keepAliveDuration <= 0: " + keepAliveDuration);
    }
  }

  /** Returns the number of idle connections in the pool. */
  public synchronized int idleConnectionCount() {
    int total = 0;
    for (RealConnection connection : connections) {
      if (connection.allocations.isEmpty()) total++;
    }
    return total;
  }

  /** Returns total number of connections in the pool. */
  public synchronized int connectionCount() {
    return connections.size();
  }

  /**
   * Acquires a recycled connection to {@code address} for {@code streamAllocation}. If non-null
   * {@code route} is the resolved route for a connection.
   */
  void acquire(Address address, StreamAllocation streamAllocation, @Nullable Route route) {
    assert (Thread.holdsLock(this));
    for (RealConnection connection : connections) {
      if (connection.isEligible(address, route)) {
        streamAllocation.acquire(connection, true);
        return;
      }
    }
  }

  /**
   * Replaces the connection held by {@code streamAllocation} with a shared connection if possible.
   * This recovers when multiple multiplexed connections are created concurrently.
   */
  @Nullable Socket deduplicate(Address address, StreamAllocation streamAllocation) {
    assert (Thread.holdsLock(this));
    for (RealConnection connection : connections) {
      if (connection.isEligible(address, null)
          && connection.isMultiplexed()
          && connection != streamAllocation.connection()) {
        return streamAllocation.releaseAndAcquire(connection);
      }
    }
    return null;
  }

  void put(RealConnection connection) {
    assert (Thread.holdsLock(this));
    if (!cleanupRunning) {
      cleanupRunning = true;
      executor.execute(cleanupRunnable);
    }
    connections.add(connection);
  }

  /**
   * Notify this pool that {@code connection} has become idle. Returns true if the connection has
   * been removed from the pool and should be closed.
   */
  boolean connectionBecameIdle(RealConnection connection) {
    assert (Thread.holdsLock(this));
    if (connection.noNewStreams || maxIdleConnections == 0) {
      connections.remove(connection);
      return true;
    } else {
      notifyAll(); // Awake the cleanup thread: we may have exceeded the idle connection limit.
      return false;
    }
  }

  /** Close and remove all idle connections in the pool. */
  public void evictAll() {
    List<RealConnection> evictedConnections = new ArrayList<>();
    synchronized (this) {
      for (Iterator<RealConnection> i = connections.iterator(); i.hasNext(); ) {
        RealConnection connection = i.next();
        if (connection.allocations.isEmpty()) {
          connection.noNewStreams = true;
          evictedConnections.add(connection);
          i.remove();
        }
      }
    }

    for (RealConnection connection : evictedConnections) {
      closeQuietly(connection.socket());
    }
  }

  /**
   * Performs maintenance on this pool, evicting the connection that has been idle the longest if
   * either it has exceeded the keep alive limit or the idle connections limit.
   *
   * <p>Returns the duration in nanos to sleep until the next scheduled call to this method. Returns
   * -1 if no further cleanups are required.
   */
  long cleanup(long now) {
    int inUseConnectionCount = 0;
    int idleConnectionCount = 0;
    RealConnection longestIdleConnection = null;
    long longestIdleDurationNs = Long.MIN_VALUE;

    // Find either a connection to evict, or the time that the next eviction is due.
    synchronized (this) {
      for (Iterator<RealConnection> i = connections.iterator(); i.hasNext(); ) {
        RealConnection connection = i.next();

        // If the connection is in use, keep searching.
        if (pruneAndGetAllocationCount(connection, now) > 0) {
          inUseConnectionCount++;
          continue;
        }

        idleConnectionCount++;

        // If the connection is ready to be evicted, we're done.
        long idleDurationNs = now - connection.idleAtNanos;
        if (idleDurationNs > longestIdleDurationNs) {
          longestIdleDurationNs = idleDurationNs;
          longestIdleConnection = connection;
        }
      }

      if (longestIdleDurationNs >= this.keepAliveDurationNs
          || idleConnectionCount > this.maxIdleConnections) {
        // We've found a connection to evict. Remove it from the list, then close it below (outside
        // of the synchronized block).
        connections.remove(longestIdleConnection);
      } else if (idleConnectionCount > 0) {
        // A connection will be ready to evict soon.
        return keepAliveDurationNs - longestIdleDurationNs;
      } else if (inUseConnectionCount > 0) {
        // All connections are in use. It'll be at least the keep alive duration 'til we run again.
        return keepAliveDurationNs;
      } else {
        // No connections, idle or in use.
        cleanupRunning = false;
        return -1;
      }
    }

    closeQuietly(longestIdleConnection.socket());

    // Cleanup again immediately.
    return 0;
  }

  /**
   * Prunes any leaked allocations and then returns the number of remaining live allocations on
   * {@code connection}. Allocations are leaked if the connection is tracking them but the
   * application code has abandoned them. Leak detection is imprecise and relies on garbage
   * collection.
   */
  private int pruneAndGetAllocationCount(RealConnection connection, long now) {
    List<Reference<StreamAllocation>> references = connection.allocations;
    for (int i = 0; i < references.size(); ) {
      Reference<StreamAllocation> reference = references.get(i);

      if (reference.get() != null) {
        i++;
        continue;
      }

      // We've discovered a leaked allocation. This is an application bug.
      StreamAllocation.StreamAllocationReference streamAllocRef =
          (StreamAllocation.StreamAllocationReference) reference;
      String message = "A connection to " + connection.route().address().url()
          + " was leaked. Did you forget to close a response body?";
      Platform.get().logCloseableLeak(message, streamAllocRef.callStackTrace);

      references.remove(i);
      connection.noNewStreams = true;

      // If this was the last allocation, the connection is eligible for immediate eviction.
      if (references.isEmpty()) {
        connection.idleAtNanos = now - keepAliveDurationNs;
        return 0;
      }
    }

    return references.size();
  }
}

Core fields:
executor: the thread pool; connections: a Deque that holds the RealConnection objects (each of which wraps a socket); routeDatabase: a RouteDatabase recording routes that failed to connect.

Constructor parameters:
By default at most 5 idle connections are kept, with a keep-alive time of 5 minutes (a configuration sketch follows the list below).

  • Check whether the current connection can still be used: whether its streams are closed and whether it has been barred from creating new streams;
  • If the current connection cannot be used, get a connection from the pool;
  • If no usable connection is found in the pool either, create a new one, perform the handshake, and put it into the pool.
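If the defaults don't fit your app, the pool can be replaced through the Builder. A small fragment (assuming the public OkHttp 3.x API; the numbers are arbitrary):

// Keep up to 10 idle connections alive for 2 minutes instead of the default 5 connections / 5 minutes.
ConnectionPool pool = new ConnectionPool(10, 2, TimeUnit.MINUTES);

OkHttpClient client = new OkHttpClient.Builder()
        .connectionPool(pool)
        .build();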

Sending the request: CallServerInterceptor

public final class CallServerInterceptor implements Interceptor {

   @Override public Response intercept(Chain chain) throws IOException {
      RealInterceptorChain realChain = (RealInterceptorChain) chain;
      // Get the HttpCodec initialized in ConnectInterceptor
      HttpCodec httpCodec = realChain.httpStream();
      // Get the StreamAllocation initialized in RetryAndFollowUpInterceptor
      StreamAllocation streamAllocation = realChain.streamAllocation();
      // Get the RealConnection initialized in ConnectInterceptor
      RealConnection connection = (RealConnection) realChain.connection();
      Request request = realChain.request();

      long sentRequestMillis = System.currentTimeMillis();

      realChain.eventListener().requestHeadersStart(realChain.call());
      // Write the request headers here
      httpCodec.writeRequestHeaders(request);
      realChain.eventListener().requestHeadersEnd(realChain.call(), request);

      Response.Builder responseBuilder = null;
      if (HttpMethod.permitsRequestBody(request.method()) && request.body() != null) {
         if ("100-continue".equalsIgnoreCase(request.header("Expect"))) {
            httpCodec.flushRequest();
            realChain.eventListener().responseHeadersStart(realChain.call());
            responseBuilder = httpCodec.readResponseHeaders(true);
         }
         // Write the request body here
         if (responseBuilder == null) {
            realChain.eventListener().requestBodyStart(realChain.call());
            long contentLength = request.body().contentLength();
            CountingSink requestBodyOut =
                  new CountingSink(httpCodec.createRequestBody(request, contentLength));
            BufferedSink bufferedRequestBody = Okio.buffer(requestBodyOut);
            // Write the request body
            request.body().writeTo(bufferedRequestBody);
            bufferedRequestBody.close();
            realChain.eventListener()
                  .requestBodyEnd(realChain.call(), requestBodyOut.successfulCount);
         } else if (!connection.isMultiplexed()) {
            streamAllocation.noNewStreams();
         }
      }
      httpCodec.finishRequest();
      if (responseBuilder == null) {
         realChain.eventListener().responseHeadersStart(realChain.call());
         // Read the response headers
         responseBuilder = httpCodec.readResponseHeaders(false);
      }
      Response response = responseBuilder
            .request(request)
            .handshake(streamAllocation.connection().handshake())
            .sentRequestAtMillis(sentRequestMillis)
            .receivedResponseAtMillis(System.currentTimeMillis())
            .build();
      // Read the response (status code first; the body is opened further below)
      int code = response.code();
      if (code == 100) {
         responseBuilder = httpCodec.readResponseHeaders(false);
         response = responseBuilder
               .request(request)
               .handshake(streamAllocation.connection().handshake())
               .sentRequestAtMillis(sentRequestMillis)
               .receivedResponseAtMillis(System.currentTimeMillis())
               .build();
         code = response.code();
      }
      realChain.eventListener().responseHeadersEnd(realChain.call(), response);
      if (forWebSocket && code == 101) {
         response = response.newBuilder()
               .body(Util.EMPTY_RESPONSE)
               .build();
      } else {
         response = response.newBuilder()
               .body(httpCodec.openResponseBody(response))
               .build();
      }
      // ...
      return response;
   }
}

This is the last interceptor in the chain; once it has the result of the request, it hands it back up to the level above.
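One thing worth relating to this ordering: an interceptor registered with addNetworkInterceptor() is inserted just before CallServerInterceptor (see the interceptors.addAll(client.networkInterceptors()) line in getResponseWithInterceptorChain()), so it observes the request exactly as it is about to go on the wire. A small fragment of my own (OkHttp 3.x API; the log output is only illustrative):

OkHttpClient client = new OkHttpClient.Builder()
        .addNetworkInterceptor(chain -> {
          Request request = chain.request();
          System.out.println("sending " + request.url() + " on " + chain.connection());
          Response response = chain.proceed(request);
          System.out.println("received " + response.code() + " for " + request.url());
          return response;
        })
        .build();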

Summary

This took several days, and by the end I was running out of patience writing up the interceptors, but I now have a rough understanding of them. Overall, the things you can learn from OkHttp are the use of the chain-of-responsibility pattern in the interceptors, how HTTP requests are built on top of sockets, and performance techniques such as thread pools and caching. An extra takeaway: re-reading the design-pattern use cases I wrote up years ago gave me a fresh perspective. I'd also suggest drawing plenty of flow charts, class diagrams, sequence diagrams, and the like; they make your thinking more concrete and your logic clearer, and things you've worked through that way are harder to forget.



Reposted from blog.csdn.net/weixin_43902172/article/details/88537074