Volley core source code analysis (3)

Volley's task scheduling model

Following on from the RequestQueue discussed in the previous section: when Volley initializes the RequestQueue, the RequestQueue's start() method is executed, and start() creates and starts the cache dispatcher and the network dispatchers.

   public void start() {
        stop(); // Make sure any currently running dispatchers are stopped.
        // Create the cache dispatcher and start it.
        mCacheDispatcher = new CacheDispatcher(mCacheQueue, mNetworkQueue, mCache, mDelivery);
        mCacheDispatcher.start();

        // Create network dispatchers (and corresponding threads) up to the pool size.
        for (int i = 0; i < mDispatchers.length; i++) {
            NetworkDispatcher networkDispatcher = new NetworkDispatcher(mNetworkQueue, mNetwork,
                    mCache, mDelivery);
            mDispatchers[i] = networkDispatcher;
            networkDispatcher.start();
        }
    }


What are NetworkDispatcher and CacheDispatcher?

NetworkDispatcher extends Thread

CacheDispatcher extends Thread


So both of them are thread classes.

Looking at the loop body, it creates several NetworkDispatcher instances, assigns them into the mDispatchers array, and starts each one as soon as it is created.

Find the definition of mDispatchers:
private static final int DEFAULT_NETWORK_THREAD_POOL_SIZE = 4;

mDispatchers = new NetworkDispatcher[threadPoolSize];

So mDispatchers turns out to be a group of threads, four of them by default.

Seeing this, you may be reminded of the basic concept of a thread pool. Yes, this is essentially a thread pool: a fixed set of worker threads consuming from a shared queue.



Now let's see what a NetworkDispatcher actually looks like.

Anyone who has done Java development knows that a class extending Thread must override the run() method. The run() method in NetworkDispatcher is too long to quote in full here.

Because this is a thread pool, each thread has to keep working, so the body of run() is an infinite while (true) { ... } loop. The first thing the loop does is:

  Request<?> request = mQueue.take();

mQueue here is mNetworkQueue, which is a PriorityBlockingQueue. Its take() method is as follows:
   public E take() throws InterruptedException {
        final ReentrantLock lock = this.lock;
        lock.lockInterruptibly();
        E result;
        try {
            while ( (result = dequeue()) == null)
                notEmpty.await();
        } finally {
            lock.unlock();
        }
        return result;
    }

In other words: acquire the lock, dequeue one request (waiting on the notEmpty condition while the queue is empty), release the lock, and return the request, which is removed from the queue. Since Volley's Request implements Comparable, ordered first by priority and then by sequence number, requests of equal priority are taken out in the order they were added, so requests are dispatched in a predictable order.
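The behavior of take() can be observed with a small standalone example. This uses PriorityBlockingQueue directly with plain Integers (natural ordering) rather than Volley's Request objects:

```java
import java.util.concurrent.PriorityBlockingQueue;

public class TakeDemo {
    public static void main(String[] args) throws InterruptedException {
        // With natural ordering, the smallest element is the head of the queue.
        // Volley's requests compare by priority first, then sequence number.
        PriorityBlockingQueue<Integer> queue = new PriorityBlockingQueue<>();
        queue.put(3);
        queue.put(1);
        queue.put(2);

        // take() removes and returns the head, blocking while the queue is empty.
        System.out.println(queue.take()); // 1
        System.out.println(queue.take()); // 2
        System.out.println(queue.take()); // 3
    }
}
```

Insertion order does not matter: the elements come out in priority order, and a dispatcher calling take() on an empty queue simply parks until a request arrives.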





Next, after taking the request from the queue, the dispatcher adds a debug marker to it and checks whether the request has been cancelled; if it has, the request is finished immediately and the loop moves on:

request.addMarker("network-queue-take");

                // If the request was cancelled already, do not perform the
                // network request.
                if (request.isCanceled()) {
                    request.finish("network-discard-cancelled");
                    continue;
                }
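This cancel check can be modelled in isolation. The sketch below is not Volley's real Request class; FakeRequest is a made-up stand-in showing the idea of a flag set by one thread (e.g. when an Activity is destroyed) and read by the dispatcher thread before doing network work:

```java
public class CancelSketch {
    // Minimal model of Request.cancel()/isCanceled(). The volatile keyword is
    // this sketch's choice for cross-thread visibility of the flag.
    static class FakeRequest {
        private volatile boolean canceled = false;

        void cancel() { canceled = true; }

        boolean isCanceled() { return canceled; }
    }

    public static void main(String[] args) {
        FakeRequest request = new FakeRequest();
        request.cancel(); // e.g. the caller no longer wants the result
        if (request.isCanceled()) {
            // The dispatcher skips the network work entirely.
            System.out.println("network-discard-cancelled");
        }
    }
}
```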



Next, execute the request:
  // Perform the network request.
NetworkResponse networkResponse = mNetwork.performRequest(request);

This is followed by some status checks and then parsing of the networkResponse:

Response<?> response = request.parseNetworkResponse(networkResponse);

In Volley, the specific parsing of a NetworkResponse is delegated to the subclasses of Request, such as ImageRequest, JsonRequest, and StringRequest.
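This delegation pattern can be sketched without any Volley classes. FakeNetworkResponse and FakeRequest below are made-up stand-ins: the base class leaves parsing abstract, and each subclass decides how to interpret the raw bytes, the way StringRequest or JsonRequest override parseNetworkResponse:

```java
import java.nio.charset.StandardCharsets;

public class ParseSketch {
    // Stand-in for NetworkResponse: just the raw body bytes.
    static class FakeNetworkResponse {
        final byte[] data;
        FakeNetworkResponse(byte[] data) { this.data = data; }
    }

    // Base "request": parsing is left to subclasses.
    static abstract class FakeRequest<T> {
        abstract T parseNetworkResponse(FakeNetworkResponse response);
    }

    // A StringRequest-like subclass: decode the bytes as UTF-8 text.
    static class FakeStringRequest extends FakeRequest<String> {
        @Override String parseNetworkResponse(FakeNetworkResponse response) {
            return new String(response.data, StandardCharsets.UTF_8);
        }
    }

    public static void main(String[] args) {
        FakeNetworkResponse resp =
                new FakeNetworkResponse("hello".getBytes(StandardCharsets.UTF_8));
        System.out.println(new FakeStringRequest().parseNetworkResponse(resp)); // hello
    }
}
```

The dispatcher only ever sees the abstract type, so adding a new response format means adding a new Request subclass, not touching the dispatch loop.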

At the same time, the response of the executed request is written to the cache:
                if (request.shouldCache() && response.cacheEntry != null) {
                    mCache.put(request.getCacheKey(), response.cacheEntry);
                    request.addMarker("network-cache-written");
                }


Seeing this, we can draw a conclusion: when Volley initializes the RequestQueue, it creates a pool of threads (four by default), and each of those threads repeatedly takes a request from the PriorityBlockingQueue request queue and executes it.
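This whole arrangement can be sketched with plain Java and no Volley classes. The Dispatcher class below is a made-up stand-in for NetworkDispatcher, and "requests" are just strings; the point is the shape: a fixed array of threads sharing one blocking queue, each looping on take():

```java
import java.util.Queue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;

public class DispatchSketch {
    static final int POOL_SIZE = 4; // mirrors DEFAULT_NETWORK_THREAD_POOL_SIZE

    // Stand-in for NetworkDispatcher: a thread that loops forever,
    // blocking on take() until a "request" arrives.
    static class Dispatcher extends Thread {
        final BlockingQueue<String> queue;
        final Queue<String> done;
        final CountDownLatch latch;

        Dispatcher(BlockingQueue<String> queue, Queue<String> done, CountDownLatch latch) {
            this.queue = queue;
            this.done = done;
            this.latch = latch;
            setDaemon(true); // let the JVM exit even though run() never returns
        }

        @Override public void run() {
            while (true) {
                try {
                    String request = queue.take();  // blocks while the queue is empty
                    done.add("handled:" + request); // stand-in for performRequest()
                    latch.countDown();
                } catch (InterruptedException e) {
                    return;
                }
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> networkQueue = new LinkedBlockingQueue<>();
        Queue<String> done = new ConcurrentLinkedQueue<>();
        CountDownLatch latch = new CountDownLatch(3);

        Dispatcher[] dispatchers = new Dispatcher[POOL_SIZE]; // like mDispatchers
        for (int i = 0; i < dispatchers.length; i++) {
            dispatchers[i] = new Dispatcher(networkQueue, done, latch);
            dispatchers[i].start();
        }

        networkQueue.put("req-A");
        networkQueue.put("req-B");
        networkQueue.put("req-C");
        latch.await(); // wait until the pool has picked up all three
        System.out.println(done.size()); // 3
    }
}
```

Note there is no explicit hand-off logic anywhere: the blocking queue is the entire scheduling mechanism, which is exactly what makes Volley's model so compact.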
Next, take a look at CacheDispatcher, the cache task scheduler:




    /** The queue of requests coming in for triage. */
    private final BlockingQueue<Request<?>> mCacheQueue;

    /** The queue of requests going out to the network. */
    private final BlockingQueue<Request<?>> mNetworkQueue;

Unlike NetworkDispatcher, CacheDispatcher holds not only the cache queue but also the network queue. Its run() method is likewise a while (true) { request = mCacheQueue.take(); ... } loop that keeps taking requests out of mCacheQueue:
                // Attempt to retrieve this item from cache.
                Cache.Entry entry = mCache.get(request.getCacheKey());
                if (entry == null) {
                    request.addMarker("cache-miss");
                    // Cache miss; send off to the network dispatcher.
                    mNetworkQueue.put(request);
                    continue;
                }
Here, for each request taken from the cache queue, the dispatcher looks up the cache by the request's cacheKey. If no entry is found, the request is put straight into mNetworkQueue, i.e. handed over to NetworkDispatcher to execute.
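This triage step can be sketched on its own. The code below uses a plain Map as the cache and strings as requests; these are made-up stand-ins, not Volley's Cache or Request types:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class TriageSketch {
    // Route one request: cache hit returns the cached value;
    // cache miss forwards the request to the network queue and returns null.
    static String triage(String cacheKey,
                         Map<String, String> cache,
                         BlockingQueue<String> networkQueue) throws InterruptedException {
        String entry = cache.get(cacheKey);
        if (entry == null) {
            networkQueue.put(cacheKey); // hand the request to the network side
            return null;
        }
        return entry; // served from cache, no network round-trip
    }

    public static void main(String[] args) throws InterruptedException {
        Map<String, String> cache = new HashMap<>();
        cache.put("GET:/user/1", "{\"id\":1}");
        BlockingQueue<String> networkQueue = new LinkedBlockingQueue<>();

        System.out.println(triage("GET:/user/1", cache, networkQueue)); // {"id":1}
        System.out.println(triage("GET:/user/2", cache, networkQueue)); // null
        System.out.println(networkQueue.size()); // 1: only the miss was forwarded
    }
}
```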

If the cached entry has completely expired, the outcome is the same:
                // If it is completely expired, just send it to the network.
                if (entry.isExpired()) {
                    request.addMarker("cache-hit-expired");
                    request.setCacheEntry(entry);
                    mNetworkQueue.put(request);
                    continue;
                }
Then we come to a key piece of code:
Response<?> response = request.parseNetworkResponse(
                        new NetworkResponse(entry.data, entry.responseHeaders));


What does this mean?

Volley caches not only the request but the response data of the request. What happens here is that a NetworkResponse is rebuilt from the cached entry's data and headers and parsed as usual, so there is no need to send the request to the server at all.

Next,

                if (!entry.refreshNeeded()) {
                    // Completely unexpired cache hit. Just deliver the response.
                    mDelivery.postResponse(request, response);
                } else {
                    // Soft-expired cache hit. We can deliver the cached response,
                    // but we need to also send the request to the network for
                    // refreshing.
                    request.addMarker("cache-hit-refresh-needed");
                    request.setCacheEntry(entry);

                    // Mark the response as intermediate.
                    response.intermediate = true;

                    // Post the intermediate response back to the user and have
                    // the delivery then forward the request along to the network.
                    final Request<?> finalRequest = request;
                    mDelivery.postResponse(request, response, new Runnable() {
                        @Override
                        public void run() {
                            try {
                                mNetworkQueue.put(finalRequest);
                            } catch (InterruptedException e) {
                                // Not much we can do about this.
                            }
                        }
                    });
                }

If the cached data does not need refreshing, the cached response is simply delivered; otherwise the cached response is delivered as an intermediate result, and the request is also put into mNetworkQueue to wait for a NetworkDispatcher to execute it.
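The three possible outcomes of these entry checks can be modelled with two timestamps, mirroring the ttl and softTtl fields on Volley's Cache.Entry (isExpired() compares ttl to the current time, refreshNeeded() compares softTtl). The classify method below is a made-up illustration of that decision, not Volley code:

```java
public class ExpirySketch {
    // Models the three-way decision CacheDispatcher makes for a cache hit.
    // ttl: hard expiry; softTtl: "still usable but should be refreshed" expiry.
    static String classify(long ttl, long softTtl, long now) {
        if (ttl < now) {
            return "expired";        // like entry.isExpired(): go to the network
        }
        if (softTtl < now) {
            return "needs-refresh";  // like entry.refreshNeeded(): deliver, then refresh
        }
        return "fresh";              // deliver the cached response, nothing else
    }

    public static void main(String[] args) {
        long now = 1_000L;
        System.out.println(classify(500L, 400L, now));     // expired
        System.out.println(classify(2_000L, 800L, now));   // needs-refresh
        System.out.println(classify(2_000L, 1_500L, now)); // fresh
    }
}
```

The soft-expiry tier is what enables the "show stale data now, update shortly after" behavior that makes Volley feel responsive for image-heavy UIs.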



So far, our analysis of Volley's threading model has come to an end. We have seen how the NetworkDispatcher thread group and the single CacheDispatcher thread cooperate through two BlockingQueues to neatly route network request tasks between the cache and the network.



Next section: Volley's cache http://f303153041.iteye.com/blog/2281360















