Android network programming (7) Volley principle analysis

1 Introduction

Volley is an open-source network communication framework that Google announced at the 2013 Google I/O conference. Volley is simple to use and well suited to network operations that transfer small amounts of data but communicate frequently. For operations with large payloads, such as downloading files, its performance is poor, because Volley keeps each entire response in memory while parsing it. Volley is convenient for developers: it encapsulates all of the HTTP communication details and thread handling internally, so a simple call is enough to perform a request. How to use Volley was already covered in the earlier article "Use of Volley Framework for Android Network Programming (5)". Today we mainly look at how Volley is implemented internally, to explain why it is only suitable for small, frequent requests and how its threads cooperate.

2 Usage review

The use of Volley is very simple; it boils down to 5 steps:

1. Add the dependency com.android.volley:volley:1.1.1 in Gradle

2. Add the network permission <uses-permission android:name="android.permission.INTERNET" /> to AndroidManifest.xml

3. Create a request queue (RequestQueue) object

4. Create a Request object

5. Add the Request object to the RequestQueue object

Examples:

RequestQueue requestQueue = Volley.newRequestQueue(getApplicationContext());
StringRequest stringRequest = new StringRequest(Request.Method.GET, "http://www.xxx.com",
        new Response.Listener<String>() {
            @Override
            public void onResponse(String s) {
                // Request succeeded
            }
        },
        new Response.ErrorListener() {
            @Override
            public void onErrorResponse(VolleyError volleyError) {
                // Request failed
            }
        });
requestQueue.add(stringRequest);

3 Principle analysis

3.1 Work flow

Before we start, let's take a look at the work flow chart of Volley from the official documentation:

[Figure: Volley request life cycle; blue = main thread, green = cache thread, orange = network threads]

1. The three colors in the figure represent three kinds of threads: blue is the main thread, green is the cache thread, and orange is a network thread.

2. As in the code reviewed above, we add the Request object to the RequestQueue on the main thread, and that is when Volley starts working. When the request queue is created, RequestQueue mainly does two things: it creates and starts one cache thread (CacheDispatcher) and creates and starts N network request threads (NetworkDispatcher).

3. Combined with the figure, after the main thread submits a request it first passes through the green part, the cache thread CacheDispatcher, which checks whether the requested data is already in the cache. If it is, the data is taken out, parsed according to the concrete Request type that was passed in, and returned to the main thread; if it is not, the request is handed over to a network request thread (NetworkDispatcher).

4. There are N NetworkDispatcher threads (4 by default). They are responsible for performing the network request and also decide whether the downloaded data may be cached. When a request succeeds, the response is parsed according to the Request type, just as in the cache thread, and then returned to the main thread.

3.2 The creation and working principle of RequestQueue

As the code above shows, everything starts with Volley.newRequestQueue() creating the RequestQueue object, so let's look at the source of this method:

Volley.java

public static RequestQueue newRequestQueue(Context context) {
    return newRequestQueue(context, (BaseHttpStack) null);
}
public static RequestQueue newRequestQueue(Context context, BaseHttpStack stack) {
    BasicNetwork network;
    if (stack == null) {
        if (Build.VERSION.SDK_INT >= 9) {
            network = new BasicNetwork(new HurlStack());
        } else {
            String userAgent = "volley/0";
            try {
                String packageName = context.getPackageName();
                PackageInfo info =
                        context.getPackageManager().getPackageInfo(packageName, /* flags= */ 0);
                userAgent = packageName + "/" + info.versionCode;
            } catch (NameNotFoundException e) {
            }
            network = new BasicNetwork(new HttpClientStack(AndroidHttpClient.newInstance(userAgent)));
        }
    } else {
        network = new BasicNetwork(stack);
    }

    return newRequestQueue(context, network);
}

newRequestQueue consists of two public static methods; the one-parameter overload ends up calling the two-parameter one. Its main job is to create a BasicNetwork object and pass it on to the private newRequestQueue method. The BasicNetwork is created in one of three ways, depending on the second parameter, BaseHttpStack. If the stack is not null, it is passed directly to the BasicNetwork constructor. If it is null, the code checks whether the Android SDK level is >= 9 (Android 2.3 and above): if so, it creates a HurlStack and wraps it in a BasicNetwork; otherwise it creates an HttpClientStack and wraps that in a BasicNetwork.

If you look further into the source of HurlStack and HttpClientStack, you will find that HurlStack is implemented on top of HttpURLConnection and HttpClientStack on top of HttpClient. The reason for the split is that before Android 2.3 HttpURLConnection was quite unreliable; for example, calling close() on an InputStream that was still being read could corrupt the connection pool, so the usual workaround on those versions was to disable connection pooling. From Android 2.3 onward that bug no longer exists, and because HttpURLConnection has a simple API plus compression and caching mechanisms that reduce network traffic and improve speed, HttpURLConnection is generally preferred over HttpClient; HttpClient itself was removed from the SDK in Android 6.0.
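Since newRequestQueue accepts a BaseHttpStack, the transport can also be supplied explicitly instead of relying on the null-stack branching above. A minimal sketch, assuming Volley 1.1.x and API level 9+ (class name and usage are illustrative, not part of the article's original code):

// Sketch: supplying the HTTP stack explicitly; assumes com.android.volley:volley:1.1.1.
import android.content.Context;

import com.android.volley.RequestQueue;
import com.android.volley.toolbox.HurlStack;
import com.android.volley.toolbox.Volley;

public final class QueueFactory {
    private QueueFactory() {}

    /** Creates a RequestQueue backed by HttpURLConnection via HurlStack. */
    public static RequestQueue createHurlQueue(Context context) {
        // Equivalent to the SDK_INT >= 9 branch above: BasicNetwork(new HurlStack())
        return Volley.newRequestQueue(context.getApplicationContext(), new HurlStack());
    }
}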

Back to the topic: the BasicNetwork object is what actually performs network requests. Once it has been created, it is passed to the private static newRequestQueue overload of the same name:

Volley.java

private static RequestQueue newRequestQueue(Context context, Network network) {
    File cacheDir = new File(context.getCacheDir(), DEFAULT_CACHE_DIR);
    RequestQueue queue = new RequestQueue(new DiskBasedCache(cacheDir), network);
    queue.start();
    return queue;
}

In this method, a RequestQueue is instantiated from a newly created DiskBasedCache object and the Network object that was passed in, and then its start method is called. Let's first look at the RequestQueue constructors:

RequestQueue.java

private static final int DEFAULT_NETWORK_THREAD_POOL_SIZE = 4;
public RequestQueue(Cache cache, Network network) {
    this(cache, network, DEFAULT_NETWORK_THREAD_POOL_SIZE);
}
public RequestQueue(Cache cache, Network network, int threadPoolSize) {
    this(cache, network, threadPoolSize, new ExecutorDelivery(new Handler(Looper.getMainLooper())));
}
public RequestQueue(Cache cache, Network network, int threadPoolSize, ResponseDelivery delivery) {
    mCache = cache;
    mNetwork = network;
    mDispatchers = new NetworkDispatcher[threadPoolSize];
    mDelivery = delivery;
}

The RequestQueue constructor that is normally used eventually delegates to the overload that takes 4 parameters. That overload only assigns member fields and contains no real logic, so let's look at what the parameters mean:

Cache (here a DiskBasedCache): maintains a cache of responses on disk

Network: the network interface that performs the HTTP requests

threadPoolSize: the number of network dispatcher threads to create, 4 by default; this small fixed pool is part of why Volley suits small, frequent requests rather than bulk transfers

ResponseDelivery: the result dispatcher, bound to the Looper of the UI thread
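Because these constructors are public, a queue can also be assembled by hand when the default of 4 network threads or the default cache directory is not appropriate. A minimal sketch, assuming Volley 1.1.x; the directory name "volley-cache" and the pool size 8 are arbitrary illustrative choices:

// Sketch: building a RequestQueue manually with a larger network thread pool.
import java.io.File;

import android.content.Context;

import com.android.volley.Network;
import com.android.volley.RequestQueue;
import com.android.volley.toolbox.BasicNetwork;
import com.android.volley.toolbox.DiskBasedCache;
import com.android.volley.toolbox.HurlStack;

public final class CustomQueue {
    private CustomQueue() {}

    public static RequestQueue create(Context context) {
        File cacheDir = new File(context.getCacheDir(), "volley-cache");
        Network network = new BasicNetwork(new HurlStack());
        // 8 NetworkDispatcher threads instead of DEFAULT_NETWORK_THREAD_POOL_SIZE (4).
        RequestQueue queue = new RequestQueue(new DiskBasedCache(cacheDir), network, 8);
        queue.start(); // newRequestQueue() normally calls this for us
        return queue;
    }
}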

Since start() is the first thing called after the RequestQueue object is created, let's continue with its source code:

RequestQueue.java

public void start() {

    // Make sure any currently running dispatchers are stopped.
    stop();
    // Create the cache dispatcher and start it.
    mCacheDispatcher = new CacheDispatcher(mCacheQueue, mNetworkQueue, mCache, mDelivery);
    mCacheDispatcher.start();

    // Create network dispatchers (and corresponding threads) up to the pool size.
    for (int i = 0; i < mDispatchers.length; i++) {
        NetworkDispatcher networkDispatcher =
                new NetworkDispatcher(mNetworkQueue, mNetwork, mCache, mDelivery);
        mDispatchers[i] = networkDispatcher;
        networkDispatcher.start();
    }
}

public void stop() {
    if (mCacheDispatcher != null) {
        mCacheDispatcher.quit();
    }
    for (final NetworkDispatcher mDispatcher : mDispatchers) {
        if (mDispatcher != null) {
            mDispatcher.quit();
        }
    }
}

The start method does three things: first it calls stop to shut down any dispatchers that are still running, then it creates and starts a CacheDispatcher object, and finally a for loop creates and starts N NetworkDispatcher objects. Both CacheDispatcher and NetworkDispatcher extend Thread; as the comments indicate, CacheDispatcher is the cache scheduling thread and NetworkDispatcher is a network request thread. Let's continue and see what each of them does.
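Before diving into the dispatcher threads, note that every start() call spins up one cache thread plus N network threads, so creating a new RequestQueue per request wastes threads and cache state. Volley itself does not enforce a single queue, but a common pattern (an assumption of this sketch, not something mandated by the library) is to keep one application-wide queue:

// Sketch of an app-wide singleton queue; not part of Volley itself.
import android.content.Context;

import com.android.volley.Request;
import com.android.volley.RequestQueue;
import com.android.volley.toolbox.Volley;

public final class VolleySingleton {
    private static volatile RequestQueue sQueue;

    private VolleySingleton() {}

    public static RequestQueue getQueue(Context context) {
        if (sQueue == null) {
            synchronized (VolleySingleton.class) {
                if (sQueue == null) {
                    // Use the application context so the queue does not hold on to an Activity.
                    sQueue = Volley.newRequestQueue(context.getApplicationContext());
                }
            }
        }
        return sQueue;
    }

    public static <T> void add(Context context, Request<T> request) {
        getQueue(context).add(request);
    }
}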

3.2.1 CacheDispatcher thread

The CacheDispatcher constructor receives mCacheQueue (the cache queue), mNetworkQueue (the network request queue), and mCache and mDelivery, which are the disk cache object and the result dispatcher passed in by the RequestQueue constructor. Let's see what happens inside the thread:

CacheDispatcher.java

@Override
public void run() {
    ……
    while (true) {
        try {
            processRequest();
        } catch (InterruptedException e) {
            ……
        }
    }
}
private void processRequest() throws InterruptedException {
    final Request<?> request = mCacheQueue.take();
    processRequest(request);
}
@VisibleForTesting
void processRequest(final Request<?> request) throws InterruptedException {
    request.addMarker("cache-queue-take");

    // If the request has been cancelled, finish it and stop here
    if (request.isCanceled()) {
        request.finish("cache-discard-canceled");
        return;
    }

    // Attempt to retrieve this item from cache
    Cache.Entry entry = mCache.get(request.getCacheKey());
    if (entry == null) {
        request.addMarker("cache-miss");
        // Cache miss: hand the request over to the network queue
        if (!mWaitingRequestManager.maybeAddToWaitingRequests(request)) {
            mNetworkQueue.put(request);
        }
        return;
    }

    // Cache entry has expired: send the request to the network queue
    if (entry.isExpired()) {
        request.addMarker("cache-hit-expired");
        request.setCacheEntry(entry);
        if (!mWaitingRequestManager.maybeAddToWaitingRequests(request)) {
            mNetworkQueue.put(request);
        }
        return;
    }

    // Cache hit: parse the cached data
    request.addMarker("cache-hit");
    Response<?> response =
            request.parseNetworkResponse(
                    new NetworkResponse(entry.data, entry.responseHeaders));
    request.addMarker("cache-hit-parsed");
    // Check whether the cached entry needs to be refreshed
    if (!entry.refreshNeeded()) {
        // Entry is completely unexpired: deliver the response directly
        mDelivery.postResponse(request, response);
    } else {
        // Soft-expired entry: deliver it, but also send the request to the network
        request.addMarker("cache-hit-refresh-needed");
        request.setCacheEntry(entry);
        // Mark the response as intermediate.
        response.intermediate = true;

        if (!mWaitingRequestManager.maybeAddToWaitingRequests(request)) {
            // Post the intermediate response back to the user and have
            // the delivery then forward the request along to the network.
            mDelivery.postResponse(
                    request,
                    response,
                    new Runnable() {
                        @Override
                        public void run() {
                            try {
                                mNetworkQueue.put(request);
                            } catch (InterruptedException e) {
                                // Restore the interrupted status
                                Thread.currentThread().interrupt();
                            }
                        }
                    });
        } else {
            // request has been added to list of waiting requests
            // to receive the network response from the first request once it returns.
            mDelivery.postResponse(request, response);
        }
    }
}

Inside the CacheDispatcher thread is an infinite loop: a Request is taken from the cache queue mCacheQueue and then processed. As the comments above show, the cached entry goes through three checks: does it exist, has it expired, and does it need to be refreshed. If the cache lookup fails (a miss or an expired entry), the request is put onto the mNetworkQueue network request queue; otherwise the result dispatcher's postResponse is called to deliver the cached result.
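The "expired" and "needs refresh" checks above come from Cache.Entry: isExpired() compares the entry's ttl with the current time, while refreshNeeded() compares softTtl. A small sketch of that distinction, assuming Volley 1.1.x; the one-minute and five-minute windows are made-up illustrative values:

// Sketch: how ttl / softTtl drive the CacheDispatcher branches above.
import com.android.volley.Cache;

public final class CacheEntryDemo {
    private CacheEntryDemo() {}

    public static Cache.Entry entryFor(byte[] data) {
        long now = System.currentTimeMillis();
        Cache.Entry entry = new Cache.Entry();
        entry.data = data;
        entry.softTtl = now + 60 * 1000;     // refreshNeeded() becomes true after 1 minute
        entry.ttl = now + 5 * 60 * 1000;     // isExpired() becomes true after 5 minutes
        return entry;
    }

    public static String describe(Cache.Entry entry) {
        if (entry.isExpired()) {
            return "expired: request goes back to the network queue";
        } else if (entry.refreshNeeded()) {
            return "soft-expired: delivered immediately, then refreshed over the network";
        } else {
            return "fresh: delivered straight from cache";
        }
    }
}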

3.2.2 NetworkDispatcher thread

The NetworkDispatcher constructor receives mNetworkQueue (the network request queue) as well as mNetwork, mCache, and mDelivery, which are the network interface used to perform HTTP requests, the disk cache object, and the result dispatcher passed in by the RequestQueue constructor. Let's see what happens inside this thread:

NetworkDispatcher.java

@Override
public void run() {
    Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND);
    while (true) {
        try {
            processRequest();
        } catch (InterruptedException e) {
            ……

        }
    }
}

private void processRequest() throws InterruptedException {
    Request<?> request = mQueue.take();
    processRequest(request);
}

@VisibleForTesting
void processRequest(Request<?> request) {
    long startTimeMs = SystemClock.elapsedRealtime();
    try {
        request.addMarker("network-queue-take");

        // If the request has been cancelled, finish it and stop here
        if (request.isCanceled()) {
            request.finish("network-discard-cancelled");
            request.notifyListenerResponseNotUsable();
            return;
        }

        addTrafficStatsTag(request);

        // Perform the actual HTTP request via mNetwork.
        NetworkResponse networkResponse = mNetwork.performRequest(request);
        request.addMarker("network-http-complete");

        // If the server returned 304 (Not Modified) and the response was already delivered, stop here
        if (networkResponse.notModified && request.hasHadResponseDelivered()) {
            request.finish("not-modified");
            request.notifyListenerResponseNotUsable();
            return;
        }

        // Parse the response into the matching Response type according to the concrete Request subclass (StringRequest, JsonRequest, ...)
        Response<?> response = request.parseNetworkResponse(networkResponse);
        request.addMarker("network-parse-complete");

        // Write the response to cache if allowed
        if (request.shouldCache() && response.cacheEntry != null) {
            mCache.put(request.getCacheKey(), response.cacheEntry);
            request.addMarker("network-cache-written");
        }

        // Deliver the result
        request.markDelivered();
        mDelivery.postResponse(request, response);
        request.notifyListenerResponseReceived(response);
    } catch (VolleyError volleyError) {
        volleyError.setNetworkTimeMs(SystemClock.elapsedRealtime() - startTimeMs);
        parseAndDeliverNetworkError(request, volleyError);
        request.notifyListenerResponseNotUsable();
    } catch (Exception e) {
        VolleyLog.e(e, "Unhandled exception %s", e.toString());
        VolleyError volleyError = new VolleyError(e);
        volleyError.setNetworkTimeMs(SystemClock.elapsedRealtime() - startTimeMs);
        mDelivery.postError(request, volleyError);
        request.notifyListenerResponseNotUsable();
    }
}

Likewise, the NetworkDispatcher thread runs an infinite loop: it takes a Request from the network request queue mNetworkQueue and processes it. As the comments above show, mNetwork performs the actual network request, the response is converted to the type matching the Request we passed in via parseNetworkResponse, the response is written to the cache where allowed, and finally the result dispatcher's postResponse delivers the result.
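Failures in processRequest surface as VolleyError; how many times the same request is retried before that error is delivered is governed by the request's RetryPolicy (applied inside mNetwork.performRequest), not by NetworkDispatcher itself. A hedged sketch of adjusting it, assuming Volley 1.1.x; the timeout and retry numbers are arbitrary:

// Sketch: tuning timeout/retries on a single request; values are illustrative only.
import com.android.volley.DefaultRetryPolicy;
import com.android.volley.Request;
import com.android.volley.Response;
import com.android.volley.toolbox.StringRequest;

public final class RetryDemo {
    private RetryDemo() {}

    public static Request<String> buildRequest(String url, Response.Listener<String> ok,
                                               Response.ErrorListener err) {
        StringRequest request = new StringRequest(Request.Method.GET, url, ok, err);
        // 10 s initial timeout, up to 2 retries, default back-off multiplier.
        request.setRetryPolicy(new DefaultRetryPolicy(
                10_000, 2, DefaultRetryPolicy.DEFAULT_BACKOFF_MULT));
        return request;
    }
}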

3.3 Creation of Request

Request is an abstract class; its subclasses include StringRequest, JsonRequest, ImageRequest, and so on, and we can also write a custom Request subclass for whatever type our requests actually return. When subclassing Request, the main difference between the various request types is how parseNetworkResponse is overridden, i.e. the method the NetworkDispatcher thread calls after completing the network request to produce a result of the corresponding type. For example, in the StringRequest class:

StringRequest.java

@Override
@SuppressWarnings("DefaultCharset")
protected Response<String> parseNetworkResponse(NetworkResponse response) {
    String parsed;
    try {
        parsed = new String(response.data, HttpHeaderParser.parseCharset(response.headers));
    } catch (UnsupportedEncodingException e) {
        // Since minSdkVersion = 8, we can't call
        // new String(response.data, Charset.defaultCharset())
        // So suppress the warning instead.
        parsed = new String(response.data);
    }
    return Response.success(parsed, HttpHeaderParser.parseCacheHeaders(response));
}
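Following the same pattern, a custom subclass only needs to override parseNetworkResponse and deliverResponse (which is also abstract). A minimal, hypothetical sketch that simply hands back the raw response bytes; the class name and usage are illustrative, not part of Volley:

// Hypothetical custom Request that delivers the raw response body as byte[].
import com.android.volley.NetworkResponse;
import com.android.volley.Request;
import com.android.volley.Response;
import com.android.volley.toolbox.HttpHeaderParser;

public class BytesRequest extends Request<byte[]> {
    private final Response.Listener<byte[]> mListener;

    public BytesRequest(String url, Response.Listener<byte[]> listener,
                        Response.ErrorListener errorListener) {
        super(Method.GET, url, errorListener);
        mListener = listener;
    }

    @Override
    protected Response<byte[]> parseNetworkResponse(NetworkResponse response) {
        // Called on a worker thread (NetworkDispatcher or CacheDispatcher), as shown above.
        return Response.success(response.data, HttpHeaderParser.parseCacheHeaders(response));
    }

    @Override
    protected void deliverResponse(byte[] response) {
        // Called on the main thread by the result dispatcher (ExecutorDelivery).
        mListener.onResponse(response);
    }
}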

3.4 Adding a Request to the RequestQueue

After the RequestQueue and Request objects are created, the last step is to add the Request object to the queue through RequestQueue's add method; see the source:

RequestQueue.java

public <T> Request<T> add(Request<T> request) {
    // Tag the request as belonging to this queue and add it to the set of current requests.
    request.setRequestQueue(this);
    synchronized (mCurrentRequests) {
        mCurrentRequests.add(request);
    }

    // Process requests in the order they are added.
    request.setSequence(getSequenceNumber());
    request.addMarker("add-to-queue");

    // If the request is uncacheable, skip the cache queue and go straight to the network.
    if (!request.shouldCache()) {
        mNetworkQueue.add(request);
        return request;
    }
    mCacheQueue.add(request);
    return request;
}

Once a request has been added to mNetworkQueue or mCacheQueue, the two kinds of threads described in 3.2.1 and 3.2.2 pick it up from their respective queues in their infinite loops and carry out the corresponding processing.
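Which of the two queues a request enters is therefore controlled by Request.setShouldCache(): turning caching off makes add() bypass the cache queue entirely. A short sketch, using the same placeholder URL as the usage example above:

// Sketch: opting a request out of the cache queue.
import com.android.volley.Request;
import com.android.volley.RequestQueue;
import com.android.volley.Response;
import com.android.volley.VolleyError;
import com.android.volley.toolbox.StringRequest;

public final class NoCacheDemo {
    private NoCacheDemo() {}

    public static void send(RequestQueue queue) {
        StringRequest request = new StringRequest(Request.Method.GET, "http://www.xxx.com",
                new Response.Listener<String>() {
                    @Override
                    public void onResponse(String s) { /* handle result */ }
                },
                new Response.ErrorListener() {
                    @Override
                    public void onErrorResponse(VolleyError e) { /* handle error */ }
                });
        // shouldCache() now returns false, so add() puts the request straight on mNetworkQueue.
        request.setShouldCache(false);
        queue.add(request);
    }
}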

4 Summary

That concludes the usage and internals of Volley. Its overall design is quite simple, and although its usage scenarios have obvious limitations and the currently popular network framework is OkHttp, it is still worth learning the design ideas of a well-built framework. For more information about Volley, see its official documentation.
Source: blog.csdn.net/lyz_zyx/article/details/73302109