Principle Analysis of Volley

Foreword

In Android development it is almost inevitable to deal with network programming, so it pays for us as developers to be proficient at it. Haha, let's start today by analyzing Volley!

1.0 What is Volley

Volley is an HTTP library introduced at the Google I/O conference in 2013 that helps Android applications perform network requests more easily. It can be used not only to fetch data over the network, but also to load images over the network.

2.0 Advantages and disadvantages of Volley

Volley has the following advantages:
  • Automatic scheduling of network requests
  • Multiple concurrent network connections
  • Transparent disk and memory response caching with standard HTTP cache coherence
  • Support for request prioritization (see the sketch below)
  • A request cancellation API: you can cancel a single request, or cancel whole blocks of requests in the queue (see the sketch below)
  • Easy customization of the framework, for example with custom retry policies or callbacks
  • Strong ordering, which makes it easy to load network data asynchronously and display it correctly in the UI
  • Debugging and tracing tools
Disadvantages of Volley:
  • Not suitable for large downloads or streaming, since Volley holds the entire response in memory while parsing
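
Two of the advantages above, prioritization and cancellation, are exposed directly on the Request / RequestQueue API. Below is a minimal sketch, not taken from the original article: it assumes a Context variable named context, and the tag "demo" is made up for illustration.

// Sketch: request priority and cancellation (illustrative, not from the original article).
RequestQueue queue = Volley.newRequestQueue(context);

String url = "http://gank.io/api/data/Android/10/1";

// Anonymous subclass overriding getPriority() to raise this request above the default NORMAL.
StringRequest highPriorityRequest = new StringRequest(Request.Method.GET, url,
        new Response.Listener<String>() {
            @Override
            public void onResponse(String response) {
                // handle the response
            }
        },
        new Response.ErrorListener() {
            @Override
            public void onErrorResponse(VolleyError error) {
                // handle the error
            }
        }) {
    @Override
    public Priority getPriority() {
        return Priority.HIGH;
    }
};

// Tag the request so a whole group of requests can be cancelled together.
highPriorityRequest.setTag("demo");
queue.add(highPriorityRequest);

// Later, e.g. in onStop(), cancel every request carrying the tag.
queue.cancelAll("demo");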

3.0 Volley's network request queue

The basic flow of using Volley is to create a RequestQueue object and then submit Request objects to it.

// Code [1]

final TextView textView = (TextView) MainActivity.this.findViewById(R.id.tv);

//[1.0] Create an instance of RequestQueue
RequestQueue requestQueue = Volley.newRequestQueue(MainActivity.this); // 1

String url = "http://gank.io/api/data/Android/10/1";

//[2.0] Construct a request
StringRequest request = new StringRequest(Request.Method.GET, url, new Response.Listener<String>() {
    @Override
    public void onResponse(String response) {
        textView.setText("Response is: " + response.toString());
    }
}, new Response.ErrorListener() {

    @Override
    public void onErrorResponse(VolleyError error) {
        textView.setText("Error is happenning");
    }
});

//[3.0] Add the request to the request queue RequestQueue
requestQueue.add(request);

The logic of the above code is mainly to construct a StringRequest instance and then add it to the request queue RequestQueue. Let's look at the source code of the Volley.newRequestQueue(Context) method called at Note 1:

// Source [2]

public static RequestQueue newRequestQueue(Context context, HttpStack stack) {
        File cacheDir = new File(context.getCacheDir(), DEFAULT_CACHE_DIR);  // 1

        String userAgent = "volley/0";
        try {
            String packageName = context.getPackageName();
            PackageInfo info = context.getPackageManager().getPackageInfo(packageName, 0);
            userAgent = packageName + "/" + info.versionCode;
        } catch (NameNotFoundException e) {
        }

        if (stack == null) {
            if (Build.VERSION.SDK_INT >= 9) {  // 2
                stack = new HurlStack();
            } else {
                stack = new HttpClientStack(AndroidHttpClient.newInstance(userAgent));
            }
        }

        Network network = new BasicNetwork(stack);

        RequestQueue queue = new RequestQueue(new DiskBasedCache(cacheDir), network); // 3
        queue.start(); // 4

        return queue;
    }

public static RequestQueue newRequestQueue(Context context) {
    return newRequestQueue(context, null);
}

As can be seen from the source code above, newRequestQueue(Context) is simply an overload that delegates to newRequestQueue(Context, HttpStack), so let's focus on the latter. At Note 1, new File(File, String) builds the cache directory cacheDir; at Note 3, new DiskBasedCache(cacheDir) gives this cache a default of 5 MB of storage.
Back at Note 2, when the SDK version is greater than or equal to 9 (that is, Android 2.3 or later), a HurlStack based on HttpURLConnection is created to execute requests; otherwise an HttpClientStack based on HttpClient is created. Then, at Note 3, the request queue is created with new RequestQueue(Cache, Network), whose source we will look at in Source [3] below.
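
Before moving on, note that the same objects the factory method wires together can also be assembled by hand, for example to enlarge the disk cache beyond the default 5 MB. This is a minimal sketch, not from the original article; it assumes a Context variable named context, API level 9 or above, and the two-argument DiskBasedCache(File, int) constructor.

// Sketch: building a RequestQueue manually with a larger disk cache (illustrative).
File cacheDir = new File(context.getCacheDir(), "volley");
Cache cache = new DiskBasedCache(cacheDir, 10 * 1024 * 1024); // 10 MB instead of the default 5 MB
Network network = new BasicNetwork(new HurlStack());          // HttpURLConnection-based stack

RequestQueue queue = new RequestQueue(cache, network);
queue.start();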

// Source [3]

    ...

private static final int DEFAULT_NETWORK_THREAD_POOL_SIZE = 4;

    ...

public RequestQueue(Cache cache, Network network) {
        this(cache, network, DEFAULT_NETWORK_THREAD_POOL_SIZE);
    }
    ...

The constructor RequestQueue(Cache, Network) allocates 4 threads for network requests. The RequestQueue class mainly acts as a dispatcher for the requests in its queues: after add(Request) is called, the incoming request is resolved against either the cache (mCacheQueue) or the network (mNetworkQueue), and the result is eventually delivered back to the main thread, i.e. to the onResponse(String) and onErrorResponse(VolleyError) callbacks of Code [1].
At Note [3.0] of Code [1], the add(Request) method is called; its source is:

// Source [4]

public <T> Request<T> add(Request<T> request) {
        request.setRequestQueue(this);
        //Synchronized block: guarantees that mCurrentRequests.add(request) is executed by
        //only one thread at a time within the process
        synchronized (mCurrentRequests) { 
            mCurrentRequests.add(request);
        }

        request.setSequence(getSequenceNumber());
        request.addMarker("add-to-queue");

        //If the request cannot be cached, add it directly to the network dispatch queue
        if (!request.shouldCache()) {
            mNetworkQueue.add(request);  
            return request;
        }

        //If the request can be cached:
        //synchronized block
        synchronized (mWaitingRequests) {  
            String cacheKey = request.getCacheKey();
            //If an identical request is already in flight and has not yet returned, stage this request in mWaitingRequests
            if (mWaitingRequests.containsKey(cacheKey)) {   
                Queue<Request<?>> stagedRequests = mWaitingRequests.get(cacheKey);
                //If there is no staging queue yet, initialize a new Queue<Request<?>> instance
                if (stagedRequests == null) {
                    stagedRequests = new LinkedList<Request<?>>();
                }
                stagedRequests.add(request);
                mWaitingRequests.put(cacheKey, stagedRequests);
                if (VolleyLog.DEBUG) {
                    VolleyLog.v("Request for cacheKey=%s is in flight, putting on hold.", cacheKey);
                }
            } else {
                //Otherwise, record this cacheKey as in flight and add the request to the mCacheQueue
                mWaitingRequests.put(cacheKey, null);
                mCacheQueue.add(request);  
            }
            return request;
        }
    }

As can be seen from the add(Request) source above, the main logic is: if the request cannot be cached, it is added directly to the network queue mNetworkQueue; otherwise it is added to the cache queue mCacheQueue (or staged in mWaitingRequests if an identical request is already in flight), and in both cases the request itself is returned. Back at Note 4 of Source [2], the start() method is called; its source code is:

// Source [5]

public void start() {
        stop();  // Make sure any running cache and network dispatchers are stopped
        mCacheDispatcher = new CacheDispatcher(mCacheQueue, mNetworkQueue, mCache, mDelivery);
        mCacheDispatcher.start();

        for (int i = 0; i < mDispatchers.length; i++) {
            NetworkDispatcher networkDispatcher = new NetworkDispatcher(mNetworkQueue, mNetwork,
                    mCache, mDelivery);
            mDispatchers[i] = networkDispatcher;
            networkDispatcher.start();
        }
    }

start() creates a CacheDispatcher(BlockingQueue<Request<?>>, BlockingQueue<Request<?>>, Cache, ResponseDelivery) instance and starts the cache dispatch thread via mCacheDispatcher.start(), then creates NetworkDispatcher(BlockingQueue<Request<?>>, Network, Cache, ResponseDelivery) instances and starts the network dispatch threads via networkDispatcher.start().
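
Relatedly, the pool size of 4 seen in Source [3] is not fixed: RequestQueue also has a (Cache, Network, int) constructor that lets the caller choose how many NetworkDispatcher threads start() will spin up. A minimal sketch, not from the original article, again assuming a Context variable named context:

// Sketch: a hand-built queue with only two NetworkDispatcher threads (illustrative).
RequestQueue queue = new RequestQueue(
        new DiskBasedCache(new File(context.getCacheDir(), "volley")),
        new BasicNetwork(new HurlStack()),
        2); // network thread pool size instead of DEFAULT_NETWORK_THREAD_POOL_SIZE (4)
queue.start();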

4.0 Network dispatcher thread NetworkDispatcher

The network dispatch thread NetworkDispatcher is a subclass of Thread. Let's look at the source code of its run() method:

// Source [6]

@Override
    public void run() {
        Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND);
        Request<?> request;
        while (true) {
            try {  
                // Take a request from the network queue
                request = mQueue.take();
            } catch (InterruptedException e) {           
                if (mQuit) {
                    return;
                }
                continue;
            }

            try {
                request.addMarker("network-queue-take");

                // If the request has been cancelled
                if (request.isCanceled()) {
                    request.finish("network-discard-cancelled");
                    continue;
                }

                addTrafficStatsTag(request);

                NetworkResponse networkResponse = mNetwork.performRequest(request);
                request.addMarker("network-http-complete");

                if (networkResponse.notModified && request.hasHadResponseDelivered()) {
                    request.finish("not-modified");
                    continue;
                }

                Response<?> response = request.parseNetworkResponse(networkResponse);
                request.addMarker("network-parse-complete");

                // Cache the response entry in the local cache file created at Note 1 of Source [2]
                if (request.shouldCache() && response.cacheEntry != null) {
                    mCache.put(request.getCacheKey(), response.cacheEntry);
                    request.addMarker("network-cache-written");
                }

                request.markDelivered();
                // Deliver the response back to the main thread
                mDelivery.postResponse(request, response);
            } catch (VolleyError volleyError) {
                ...

            } catch (Exception e) {
                ...

            }
        }
    }

The main logic of network dispatching is to first check whether the request has been cancelled. If it has not, mNetwork.performRequest(request) executes the network request; once the response is obtained it is parsed, written to the local cache (if the request is cacheable), and then delivered back to the main thread.
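
Note that the request.shouldCache() check above can be controlled per request. A minimal sketch, not from the original article, reusing the request and requestQueue objects from Code [1]:

// Sketch: disable caching for a single request (illustrative).
// With shouldCache() == false, add(Request) in Source [4] sends the request straight to the
// network queue, and the NetworkDispatcher above skips the mCache.put(...) step.
request.setShouldCache(false); // must be called before adding the request to the queue
requestQueue.add(request);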

5.0 Cache dispatcher thread CacheDispatcher

The run() method of the cache dispatch thread CacheDispatcher is as follows:

@Override
    public void run() {
        if (DEBUG) VolleyLog.v("start new dispatcher");
        Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND);

        mCache.initialize();

        while (true) {
            try {            
                final Request<?> request = mCacheQueue.take();
                request.addMarker("cache-queue-take");

                //If the request has been cancelled
                if (request.isCanceled()) {
                    request.finish("cache-discard-canceled");
                    continue;
                }

                // Try to retrieve the entry from the local cache
                Cache.Entry entry = mCache.get(request.getCacheKey());
                if (entry == null) {
                    //Cache miss: no entry in the local cache
                    request.addMarker("cache-miss");                   
                    mNetworkQueue.put(request);
                    continue;
                }

                // The local cache entry has expired
                if (entry.isExpired()) {
                    request.addMarker("cache-hit-expired");
                    request.setCacheEntry(entry);
                    mNetworkQueue.put(request);
                    continue;
                }

                // Cache hit
                request.addMarker("cache-hit");
                Response<?> response = request.parseNetworkResponse(
                        new NetworkResponse(entry.data, entry.responseHeaders));
                request.addMarker("cache-hit-parsed");

                if (!entry.refreshNeeded()) {
                    // Deliver the response back to the main thread
                    mDelivery.postResponse(request, response);
                } else {

                    request.addMarker("cache-hit-refresh-needed");
                    request.setCacheEntry(entry);

                    response.intermediate = true;

                    mDelivery.postResponse(request, response, new Runnable() {
                        @Override
                        public void run() {
                            try {
                                mNetworkQueue.put(request);
                            } catch (InterruptedException e) {
                                ...
                            }
                        }
                    });
                }

            } catch (InterruptedException e) {
                ...
            }
        }
    }

Similarly, if the request has not been cancelled, the local cache is consulted. When the cached entry is missing or has expired, the request is put onto the network queue. When the cache is hit, the cached response is parsed and delivered back to the main thread (and, if the entry needs refreshing, the request is additionally re-sent over the network).
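
The three branches above (miss, expired, refresh-needed) are driven by the Cache.Entry that the DiskBasedCache returns. Below is a minimal sketch of inspecting such an entry directly; it is not from the original article and reuses the requestQueue and url variables from Code [1]. For a plain GET request the default cache key is simply the URL.

// Sketch: peek at the cached entry for a URL (illustrative).
Cache.Entry entry = requestQueue.getCache().get(url);
if (entry == null) {
    // cache miss: the CacheDispatcher would forward the request to the network queue
} else if (entry.isExpired()) {
    // hard TTL exceeded: the CacheDispatcher also goes to the network
} else if (entry.refreshNeeded()) {
    // soft TTL exceeded: the cached response is delivered first, then refreshed over the network
} else {
    // fully fresh hit: entry.data holds the cached response body
}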

6.0 Volley principle analysis

To send a request, you simply construct one and add it to the RequestQueue via the add() method. Once added, the request moves through the pipeline: it is dispatched, serviced from the cache or the network, and its raw response data is parsed and delivered back.

When the queue is started, Volley runs one cache dispatcher thread and a pool of network dispatcher threads. When add() is called, the request is first picked up by the cache thread: if the request can be served from the cache, the cached response is parsed on the cache thread and delivered to the main thread. If it cannot be served from the cache, the request is placed on the network queue; the first available network thread takes it from the queue, performs the HTTP operation, parses the response on that worker thread, writes it to the cache, and delivers the parsed response back to the main thread.
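
To see where each step of this pipeline runs, it helps to write a custom Request subclass. The following is a minimal sketch, not from the original article; the class name UpperCaseRequest is made up for illustration, while parseNetworkResponse and deliverResponse are the standard Volley hooks.

// Sketch: a custom Request subclass (illustrative; UpperCaseRequest is an invented name).
import com.android.volley.NetworkResponse;
import com.android.volley.Request;
import com.android.volley.Response;
import com.android.volley.toolbox.HttpHeaderParser;

public class UpperCaseRequest extends Request<String> {

    private final Response.Listener<String> mListener;

    public UpperCaseRequest(String url, Response.Listener<String> listener,
                            Response.ErrorListener errorListener) {
        super(Method.GET, url, errorListener);
        mListener = listener;
    }

    @Override
    protected Response<String> parseNetworkResponse(NetworkResponse response) {
        // Runs on a NetworkDispatcher worker thread: parse the raw bytes here.
        String parsed = new String(response.data).toUpperCase();
        return Response.success(parsed, HttpHeaderParser.parseCacheHeaders(response));
    }

    @Override
    protected void deliverResponse(String response) {
        // Runs on the main thread via the ResponseDelivery (ExecutorDelivery).
        mListener.onResponse(response);
    }
}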

[Figure: the life cycle of a request]

Summary

At this point, the analysis of the Volley library comes to an end. This article has mainly traced the process from a request being added to the request queue to its response being delivered, to understand how Volley dispatches a request, obtains a response, and returns it. Haha, thanks for reading...


Source code download TestVolley


Origin blog.csdn.net/HongHua_bai/article/details/78308331