Analysis of Android's OkHttp framework

OkHttp is a network request framework commonly used in Android development. Below, I divide OkHttp into three main lines and analyze each according to my own understanding.

Usage

OkHttpClient okHttpClient = new OkHttpClient.Builder().build();
Request request = new Request.Builder().url("").build();
Call call = okHttpClient.newCall(request);
call.enqueue(new Callback() {
    @Override
    public void onFailure(@NotNull Call call, @NotNull IOException e) {
    }

    @Override
    public void onResponse(@NotNull Call call, @NotNull Response response) throws IOException {
    }
});

The first main line of OkHttp: Where are the requests sent?

First, look at the call.enqueue method, declared in the Call interface:

/**
   * Schedules the request to be executed at some point in the future.
   *
   * <p>The {@link OkHttpClient#dispatcher dispatcher} defines when the request will run: usually
   * immediately unless there are several other requests currently being executed.
   *
   * <p>This client will later call back {@code responseCallback} with either an HTTP response or a
   * failure exception.
   *
   * @throws IllegalStateException when the call has already been executed.
   */
  void enqueue(Callback responseCallback);

Take another look at the implementation class of the Call interface, RealCall:

  @Override public void enqueue(Callback responseCallback) {
    synchronized (this) {
      if (executed) throw new IllegalStateException("Already Executed");
      executed = true;
    }
    captureCallStackTrace();
    eventListener.callStart(this);
    client.dispatcher().enqueue(new AsyncCall(responseCallback));
  }

Go in and take a look at the client.dispatcher().enqueue(new AsyncCall(responseCallback)) method

  private int maxRequests = 64;
  private int maxRequestsPerHost = 5;

  synchronized void enqueue(AsyncCall call) {
    if (runningAsyncCalls.size() < maxRequests && runningCallsForHost(call) < maxRequestsPerHost) {
      runningAsyncCalls.add(call);
      executorService().execute(call);
    } else {
      readyAsyncCalls.add(call);
    }
  }

You can see that the Dispatcher class holds two queues: runningAsyncCalls, the running queue, and readyAsyncCalls, the waiting queue.
There are also judgment conditions: when the running queue holds fewer than 64 requests and fewer than 5 of them target the same host, the new request goes into the running queue; otherwise it goes into the waiting queue.
But this raises a question. Suppose we have 84 requests: 64 go into the running queue, and the remaining 84 - 64 = 20 into the waiting queue. When does the waiting queue get processed? And what happens when both queues hold data and 20 more requests arrive? To answer this, let's look at the second main line.
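The dispatcher's bookkeeping can be sketched with a simplified, hypothetical stand-in (hosts stand in for AsyncCalls, and the thread-pool hand-off is omitted):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical simplification of Dispatcher: hosts stand in for AsyncCalls,
// and the thread-pool hand-off is omitted.
class MiniDispatcher {
    static final int MAX_REQUESTS = 64;          // mirrors maxRequests
    static final int MAX_REQUESTS_PER_HOST = 5;  // mirrors maxRequestsPerHost

    final Deque<String> runningAsyncCalls = new ArrayDeque<>();
    final Deque<String> readyAsyncCalls = new ArrayDeque<>();

    void enqueue(String host) {
        if (runningAsyncCalls.size() < MAX_REQUESTS
                && runningCallsForHost(host) < MAX_REQUESTS_PER_HOST) {
            runningAsyncCalls.add(host); // real code also submits to executorService()
        } else {
            readyAsyncCalls.add(host);
        }
    }

    int runningCallsForHost(String host) {
        int n = 0;
        for (String h : runningAsyncCalls) if (h.equals(host)) n++;
        return n;
    }
}
```

With 84 requests to distinct hosts, 64 land in the running queue and 20 in the waiting queue, matching the scenario above.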

The second main line of OkHttp: How are requests consumed?

Click to see executorService().execute(call)

public interface Executor {

    /**
     * Executes the given command at some time in the future.  The command
     * may execute in a new thread, in a pooled thread, or in the calling
     * thread, at the discretion of the {@code Executor} implementation.
     *
     * @param command the runnable task
     * @throws RejectedExecutionException if this task cannot be
     * accepted for execution
     * @throws NullPointerException if command is null
     */
    void execute(Runnable command);
}

You can see that the focus is on the call we passed in. Back in the dispatcher's enqueue:


  synchronized void enqueue(AsyncCall call) {
    if (runningAsyncCalls.size() < maxRequests && runningCallsForHost(call) < maxRequestsPerHost) {
      runningAsyncCalls.add(call);
      executorService().execute(call);
    } else {
      readyAsyncCalls.add(call);
    }
  }

Observe AsyncCall again

final class AsyncCall extends NamedRunnable {
    private final Callback responseCallback;

    AsyncCall(Callback responseCallback) {
      super("OkHttp %s", redactedUrl());
      this.responseCallback = responseCallback;
    }

    String host() {
      return originalRequest.url().host();
    }

    Request request() {
      return originalRequest;
    }

    RealCall get() {
      return RealCall.this;
    }

    @Override protected void execute() {
      boolean signalledCallback = false;
      try {
        Response response = getResponseWithInterceptorChain();
        if (retryAndFollowUpInterceptor.isCanceled()) {
          signalledCallback = true;
          responseCallback.onFailure(RealCall.this, new IOException("Canceled"));
        } else {
          signalledCallback = true;
          responseCallback.onResponse(RealCall.this, response);
        }
      } catch (IOException e) {
        if (signalledCallback) {
          // Do not signal the callback twice!
          Platform.get().log(INFO, "Callback failure for " + toLoggableString(), e);
        } else {
          eventListener.callFailed(RealCall.this, e);
          responseCallback.onFailure(RealCall.this, e);
        }
      } finally {
        client.dispatcher().finished(this);
      }
    }
  }

Click in and take a look at the parent class, NamedRunnable:

public abstract class NamedRunnable implements Runnable {
  protected final String name;

  public NamedRunnable(String format, Object... args) {
    this.name = Util.format(format, args);
  }

  @Override public final void run() {
    String oldName = Thread.currentThread().getName();
    Thread.currentThread().setName(name);
    try {
      execute();
    } finally {
      Thread.currentThread().setName(oldName);
    }
  }

  protected abstract void execute();
}

Notice that run() calls execute(), and execute() here is abstract. That means when executorService().execute(call) runs, the thread pool invokes run(), which in turn calls AsyncCall's execute() method. In other words, as soon as a request is placed in the running queue, it is handed to the thread pool for execution.
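This template-method arrangement can be sketched in miniature (NamedTask and its single-string constructor are hypothetical simplifications of NamedRunnable's format/varargs constructor):

```java
// Hypothetical simplification of NamedRunnable: a single ready-made name
// instead of the real class's format/varargs constructor.
abstract class NamedTask implements Runnable {
    protected final String name;

    NamedTask(String name) {
        this.name = name;
    }

    @Override public final void run() {
        String oldName = Thread.currentThread().getName();
        Thread.currentThread().setName(name); // label the pool thread while working
        try {
            execute(); // subclass hook: AsyncCall puts its network work here
        } finally {
            Thread.currentThread().setName(oldName); // restore before thread reuse
        }
    }

    protected abstract void execute();
}
```

The final run() fixes the workflow (rename thread, do work, restore name) while subclasses such as AsyncCall only supply execute().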

 @Override protected void execute() {
      boolean signalledCallback = false;
      try {
        Response response = getResponseWithInterceptorChain();
        if (retryAndFollowUpInterceptor.isCanceled()) {
          signalledCallback = true;
          responseCallback.onFailure(RealCall.this, new IOException("Canceled"));
        } else {
          signalledCallback = true;
          responseCallback.onResponse(RealCall.this, response);
        }
      } catch (IOException e) {
        if (signalledCallback) {
          // Do not signal the callback twice!
          Platform.get().log(INFO, "Callback failure for " + toLoggableString(), e);
        } else {
          eventListener.callFailed(RealCall.this, e);
          responseCallback.onFailure(RealCall.this, e);
        }
      } finally {
        client.dispatcher().finished(this);
      }
    }

This code is what actually performs the request against the server. Click into getResponseWithInterceptorChain, the method that processes the request:

Response getResponseWithInterceptorChain() throws IOException {
    // Build a full stack of interceptors.
    List<Interceptor> interceptors = new ArrayList<>();
    interceptors.addAll(client.interceptors());
    interceptors.add(retryAndFollowUpInterceptor);
    interceptors.add(new BridgeInterceptor(client.cookieJar()));
    interceptors.add(new CacheInterceptor(client.internalCache()));
    interceptors.add(new ConnectInterceptor(client));
    if (!forWebSocket) {
      interceptors.addAll(client.networkInterceptors());
    }
    interceptors.add(new CallServerInterceptor(forWebSocket));

    Interceptor.Chain chain = new RealInterceptorChain(interceptors, null, null, null, 0,
        originalRequest, this, eventListener, client.connectTimeoutMillis(),
        client.readTimeoutMillis(), client.writeTimeoutMillis());

    return chain.proceed(originalRequest);
  }

This section uses the chain of responsibility pattern. Suppose the client's request must pass through three intermediate nodes before reaching the server, and it must satisfy each node's requirements to move on. If the request fails the first node's check, it is intercepted right there and never reaches the second node, let alone the server. This design pattern avoids a lot of wasted work and serves as an optimization.

You can also add your own custom interceptors, which makes the framework more extensible.
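The chain idea itself can be sketched without any OkHttp types (MiniInterceptor and MiniChain are hypothetical stand-ins for Interceptor and RealInterceptorChain):

```java
import java.util.List;

// Hypothetical stand-ins for Interceptor and RealInterceptorChain: the request
// and response are plain strings, and no connection state is carried along.
interface MiniInterceptor {
    String intercept(MiniChain chain, String request);
}

class MiniChain {
    private final List<MiniInterceptor> interceptors;
    private final int index;

    MiniChain(List<MiniInterceptor> interceptors, int index) {
        this.interceptors = interceptors;
        this.index = index;
    }

    String proceed(String request) {
        // Hand the request to the interceptor at this position, giving it a
        // chain pointing at the next link so it can continue (or short-circuit).
        return interceptors.get(index).intercept(
                new MiniChain(interceptors, index + 1), request);
    }
}
```

Each interceptor receives a chain positioned at the next link, so calling chain.proceed() hands the (possibly modified) request onward, while returning without calling it short-circuits the rest of the chain.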

The third main line of OkHttp: How are requests maintained?

    @Override protected void execute() {
      boolean signalledCallback = false;
      try {
        Response response = getResponseWithInterceptorChain();
        if (retryAndFollowUpInterceptor.isCanceled()) {
          signalledCallback = true;
          responseCallback.onFailure(RealCall.this, new IOException("Canceled"));
        } else {
          signalledCallback = true;
          responseCallback.onResponse(RealCall.this, response);
        }
      } catch (IOException e) {
        if (signalledCallback) {
          // Do not signal the callback twice!
          Platform.get().log(INFO, "Callback failure for " + toLoggableString(), e);
        } else {
          eventListener.callFailed(RealCall.this, e);
          responseCallback.onFailure(RealCall.this, e);
        }
      } finally {
        client.dispatcher().finished(this);
      }
    }

In execute(), click into client.dispatcher().finished(this):

  /** Used by {@code AsyncCall#run} to signal completion. */
  void finished(AsyncCall call) {
    finished(runningAsyncCalls, call, true);
  }

Step in again:

  private <T> void finished(Deque<T> calls, T call, boolean promoteCalls) {
    int runningCallsCount;
    Runnable idleCallback;
    synchronized (this) {
      if (!calls.remove(call)) throw new AssertionError("Call wasn't in-flight!");
      if (promoteCalls) promoteCalls();
      runningCallsCount = runningCallsCount();
      idleCallback = this.idleCallback;
    }

    if (runningCallsCount == 0 && idleCallback != null) {
      idleCallback.run();
    }
  }

There is a promoteCalls() method

  private void promoteCalls() {
    if (runningAsyncCalls.size() >= maxRequests) return; // Already running max capacity.
    if (readyAsyncCalls.isEmpty()) return; // No ready calls to promote.

    for (Iterator<AsyncCall> i = readyAsyncCalls.iterator(); i.hasNext(); ) {
      AsyncCall call = i.next();

      if (runningCallsForHost(call) < maxRequestsPerHost) {
        i.remove();
        runningAsyncCalls.add(call);
        executorService().execute(call);
      }

      if (runningAsyncCalls.size() >= maxRequests) return; // Reached max capacity.
    }
  }

If the running queue already holds maxRequests (64) or more calls, the method returns immediately; if the waiting queue is empty, it also returns. Otherwise it iterates over the waiting queue, removing calls (subject to the per-host limit), adding them to the running queue, and submitting them to the thread pool, just as the previous request was consumed.
In other words, every time a request is consumed, the dispatcher comes back and checks whether the waiting queue has entries. If it does, they are removed from the waiting queue, promoted to the running queue, and handed straight to the thread pool.
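The finished-then-promote hand-off can be sketched standalone (a hypothetical simplification that, unlike the real promoteCalls(), skips the per-host check and the thread-pool submission):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Iterator;

// Hypothetical simplification: unlike the real promoteCalls(), this skips the
// per-host limit and the executorService() submission.
class PromoteDemo {
    static final int MAX_REQUESTS = 64;

    final Deque<String> running = new ArrayDeque<>();
    final Deque<String> ready = new ArrayDeque<>();

    void finished(String call) {
        running.remove(call); // the completed call leaves the running queue
        promote();            // then waiting calls get their chance
    }

    void promote() {
        for (Iterator<String> i = ready.iterator(); i.hasNext(); ) {
            if (running.size() >= MAX_REQUESTS) return; // already at capacity
            String call = i.next();
            i.remove();
            running.add(call); // the real code also submits the call to the pool
        }
    }
}
```

After one of 64 running calls finishes, exactly one waiting call is promoted, keeping the running queue full.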

Common questions

Below are a few common questions. If anything is wrong, corrections are welcome!

  1. Why is the maximum size of the running queue 64?
    Because when OkHttp's source was written, its authors drew heavily on browser defaults, where 64 prevents the phone and the server from creating too many threads and overflowing memory. This number can also be changed.
  2. Why is the limit on requests to the same target host 5?
    This is likely a server-pressure consideration: if one phone fires too many requests at the same server, the server's performance and stability may suffer. The number was chosen by OkHttp's developers based on experience and practice; there is no definitive reason or standard.
  3. Why are two queues designed? Would one queue work?
    Waiting queue (readyAsyncCalls): stores asynchronous requests that have not yet been executed, waiting for the dispatcher to allocate threads for them.
    Running queue (runningAsyncCalls): stores asynchronous requests currently executing. A request enters the running queue only when it holds fewer than 64 requests and fewer than 5 of them target the same host, which caps the maximum concurrency.
    The Dispatcher class schedules and distributes requests across these two queues, and reuses and manages the thread pool.
    OkHttp uses two queues so it can handle scheduling and reuse as well as execution control and distribution, ensuring both the ordering and the efficiency of requests while avoiding wasted resources and contention. A single queue could not cleanly meet all of these needs.
  4. Why do the queues use Deque? Could an ArrayList, a HashMap, or a plain array work?
    Deque: a double-ended queue that can read, add, and remove elements at both the head and the tail.
    ArrayList: an array-backed list; elements are ordered and repeatable, with random access.
    HashMap: an array- and key-value-pair-backed map; elements are unordered, keys cannot repeat, and values can be looked up quickly by key.
    Array: a basic data structure; elements are ordered and repeatable, with fast access by index.
    The queues use Deque because it supports first-in-first-out (or last-in-first-out) operations, which suits storing requests waiting for execution.
    If you need ordered elements without key lookup, an ArrayList or array works; if you need unordered key-value pairs with lookup by key, a HashMap works. But neither is as convenient or efficient as a Deque for operations at the head and tail.
  5. Why does the default queue in the thread pool use SynchronousQueue?
    SynchronousQueue is a blocking queue with no capacity. It guarantees that every submitted task is executed rather than buffered or discarded, it can support a fairness policy so the longest-waiting producer or consumer runs first, and it helps the thread pool avoid having too many or too few threads, improving resource utilization and response speed.
  6. What design patterns are used?
    The most direct use of the chain of responsibility pattern in OkHttp is the Interceptor mechanism. It is written simply and elegantly, and it is very convenient to use: just call addInterceptor() on OkHttpClient.Builder and pass in a class that implements the Interceptor interface, which makes extension and customization easy.
OkHttpClient httpClient = new OkHttpClient.Builder()
        .addInterceptor(new HeaderInterceptor())
        .addInterceptor(new LogInterceptor())
        .addInterceptor(new HttpLoggingInterceptor(logger))
        ......
        .readTimeout(30, TimeUnit.SECONDS)
        .cache(cache)
        .build();
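Returning to question 5 above: the shape of OkHttp's dispatcher pool can be reproduced with plain JDK classes. The sketch below is a hypothetical stand-in (it omits OkHttp's named thread factory) showing that, with a SynchronousQueue and zero core threads, every submitted task is handed straight to a thread:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

class PoolDemo {
    // Same shape as the dispatcher's pool: zero core threads, an unbounded
    // maximum, a 60-second keep-alive, and a SynchronousQueue that never
    // buffers tasks, so each submission is handed straight to a thread.
    static ExecutorService newDispatcherStylePool() {
        return new ThreadPoolExecutor(0, Integer.MAX_VALUE,
                60, TimeUnit.SECONDS, new SynchronousQueue<>());
    }

    static int runTasks(int n) {
        ExecutorService pool = newDispatcherStylePool();
        AtomicInteger done = new AtomicInteger();
        CountDownLatch latch = new CountDownLatch(n);
        for (int i = 0; i < n; i++) {
            pool.execute(() -> { done.incrementAndGet(); latch.countDown(); });
        }
        try {
            latch.await(); // wait until every submitted task has run
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        pool.shutdown();
        return done.get();
    }
}
```

Because the queue never buffers, concurrency here is bounded not by the pool but by the dispatcher's own 64-request cap.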

The most direct use of the builder pattern in OkHttp is the use of XXBuilder. OkHttpClient, Request, Response, HttpUrl, Headers, MultipartBody, etc. in OkHttp use a large number of similar builder patterns.

public class OkHttpClient implements Cloneable, Call.Factory, WebSocket.Factory {
......
  public static final class Builder {
    Dispatcher dispatcher;
    @Nullable Proxy proxy;
    int callTimeout;
    int connectTimeout;
......

    public OkHttpClient build() {
      return new OkHttpClient(this);
    }
  }
}

public class Headers {
......
  public static final class Builder {
    final List<String> namesAndValues = new ArrayList<>(20);
......

    public Headers build() {
      return new Headers(this);
    }
  }
}

public class Request {
......

  public static class Builder {
    @Nullable HttpUrl url;
    String method;
    Headers.Builder headers;
    @Nullable RequestBody body;
......

    public Request build() {
      return new Request(this);
    }
  }
}

The builder pattern separates an object's construction from its representation: the Builder assembles the various configuration parameters and produces the object, while the target object exposes its interface to the outside, conforming to the single responsibility principle. Compared with the builder pattern, the factory pattern's object-generation process is more involved and focuses on how the object is produced. For example:

public interface Call extends Cloneable {
  Request request();
  Response execute() throws IOException;
  void enqueue(Callback responseCallback);
  void cancel();
  boolean isExecuted();
  boolean isCanceled();
  Call clone();

  // Factory for creating Call implementation objects
  interface Factory {
    // Creates a new Call wrapping the given Request
    Call newCall(Request request);
  }
}

public class OkHttpClient implements Cloneable, Call.Factory, WebSocket.Factory {
  @Override public Call newCall(Request request) {
    return RealCall.newRealCall(this, request, false /* for web socket */);
  }
}

final class RealCall implements Call {
  ......
}

The Call interface nests a Factory interface. With this arrangement, you only need to:
implement the Call interface with the corresponding functionality (RealCall), and
have some class (OkHttpClient) implement the Call.Factory interface and return the RealCall object from newCall. That's it.
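The same arrangement can be sketched with hypothetical stand-in types (MiniCall, MiniClient, and RealMiniCall mirror Call, OkHttpClient, and RealCall):

```java
// Hypothetical stand-ins mirroring Call, OkHttpClient, and RealCall:
// the interface exposes a nested Factory, and the client returns the
// concrete implementation from newCall().
interface MiniCall {
    String execute();

    interface Factory {
        MiniCall newCall(String request);
    }
}

class MiniClient implements MiniCall.Factory {
    @Override public MiniCall newCall(String request) {
        return new RealMiniCall(request); // concrete type hidden behind the interface
    }
}

class RealMiniCall implements MiniCall {
    private final String request;

    RealMiniCall(String request) {
        this.request = request;
    }

    @Override public String execute() {
        return "response for " + request;
    }
}
```

Callers depend only on MiniCall and MiniCall.Factory; the concrete RealMiniCall stays an implementation detail, which is how RealCall is hidden behind OkHttpClient.newCall().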

  1. Why is it designed like this? If it were you, can you think of any other better design patterns?
    The purpose of these design patterns is to improve the scalability, maintainability and readability of OkHttp. If it were me, I might consider using the observer pattern, which allows users to register some callback functions to monitor the status changes of requests and responses. This makes it easier for users to handle asynchronous requests and exceptions.
  2. What is the significance of each interceptor design?
    RetryAndFollowUpInterceptor: Responsible for implementing the retry redirection function when the request fails.
    BridgeInterceptor: Responsible for converting user-constructed requests into requests sent to the server, adding some necessary header information.
    CacheInterceptor: Responsible for processing caching logic and determining whether to use the cache or update the cache based on the cache policy and response headers.
    ConnectInterceptor: Responsible for establishing connections, selecting routes, negotiating protocols, etc.
    CallServerInterceptor: Responsible for sending requests to the server and receiving responses, handling GZIP compression, etc.
  3. Why use Socket connection pool? What are the benefits?
    The reason why okhttp uses Socket connection pooling is to improve performance and efficiency. The connection pool can reuse existing connections, avoid frequently creating and closing connections, and reduce request delays and network overhead. If the HTTP/2 protocol is used, the connection pool can also multiplex requests from the same host to further improve concurrency capabilities.
  4. Why do we need to maintain the requests in the queue after each request is run? What are the benefits of this design?
    The earlier Volley framework starts a thread with a while (true) loop, so that thread spins forever. If the client keeps sending requests continuously, that is fine; but if no requests are being sent and the thread keeps looping, work is wasted.
    OkHttp is different. When a request is sent, it is placed in the thread pool, which executes it against the server; after it completes, the dispatcher comes back and checks for more work. If no requests are sent, nothing enters the running queue, the thread pool, or the network, and the framework simply sits idle. It never spins in a while (true) loop, so there is no waste or performance degradation.
  5. The OkHttp network request flow, roughly:
    OkHttp is an efficient HTTP request framework.
    It builds OkHttpClient and Request through the builder pattern;
    OkHttpClient initiates a new request through newCall;
    the dispatcher maintains the request queues and thread pool to carry out request deployment;
    five default interceptors complete retrying, caching, connection establishment, and the other operations needed to obtain the network result.
  6. How does the OkHttp dispatcher work?
    The dispatcher's main job is to maintain the request queues and the thread pool. For example, given 100 asynchronous requests, we must not fire them all at once; instead they are queued and divided into a running list and a waiting list. When a request completes, a waiting request is taken from the waiting list, until all requests are done.

Here, synchronous and asynchronous requests are handled slightly differently.

Synchronous requests

synchronized void executed(RealCall call) {
  runningSyncCalls.add(call);
}

Because synchronous requests do not need the thread pool, there are no restrictions. The dispatcher only keeps a record, and requests then execute synchronously in the order they were added to the queue.
Asynchronous requests

synchronized void enqueue(AsyncCall call) {
  // At most 64 requests may be in flight, and at most 5 to the same host
  if (runningAsyncCalls.size() < maxRequests && runningCallsForHost(call) < maxRequestsPerHost) {
    runningAsyncCalls.add(call);
    executorService().execute(call);
  } else {
    readyAsyncCalls.add(call);
  }
}

When the tasks being executed do not exceed the maximum limit of 64 and the number of requests to the same Host does not exceed 5, they will be added to the execution queue and submitted to the thread pool. Otherwise, join the waiting queue first.
After each task is completed, the dispatcher's finished method will be called, which will take out the tasks in the waiting queue and continue execution.


Origin blog.csdn.net/ChenYiRan123456/article/details/131484627