Volley core source code analysis (2)

The request queue: RequestQueue

Everyone who has used Volley has called RequestQueue.add(request). Let's look at what this method actually does:

public <T> Request<T> add(Request<T> request) {
        // Tag the request as belonging to this queue and add it to the set of current requests.
        request.setRequestQueue(this);
        synchronized (mCurrentRequests) {
            mCurrentRequests.add(request);
        }

        // Process requests in the order they are added.
        request.setSequence(getSequenceNumber());
        request.addMarker("add-to-queue");

        // If the request is uncacheable, skip the cache queue and go straight to the network.
        if (!request.shouldCache()) {
            mNetworkQueue.add(request);
            return request;
        }

        // Insert request into stage if there's already a request with the same cache key in flight.
        synchronized (mWaitingRequests) {
            String cacheKey = request.getCacheKey();
            if (mWaitingRequests.containsKey(cacheKey)) {
                // There is already a request in flight. Queue up.
                Queue<Request<?>> stagedRequests = mWaitingRequests.get(cacheKey);
                if (stagedRequests == null) {
                    stagedRequests = new LinkedList<Request<?>>();
                }
                stagedRequests.add(request);
                mWaitingRequests.put(cacheKey, stagedRequests);
                if (VolleyLog.DEBUG) {
                    VolleyLog.v("Request for cacheKey=%s is in flight, putting on hold.", cacheKey);
                }
            } else {
                // Insert 'null' queue for this cacheKey, indicating there is now a request in
                // flight.
                mWaitingRequests.put(cacheKey, null);
                mCacheQueue.add(request);
            }
            return request;
        }
    }
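For context, a typical call site looks something like the sketch below. The URL and the two listeners are placeholders, and it assumes the usual Volley classes (RequestQueue, StringRequest, Volley) plus android.util.Log are imported.

RequestQueue queue = Volley.newRequestQueue(context);

StringRequest request = new StringRequest(Request.Method.GET,
        "https://example.com/data",                                   // placeholder URL
        response -> Log.d("Volley", "response length: " + response.length()),
        error -> Log.e("Volley", "request failed", error));

queue.add(request); // the method we are walking through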

Notice that the method contains two synchronized blocks, locking on two objects: mCurrentRequests and mWaitingRequests.

So what are these two fields?
    /**
     * The set of all requests currently being processed by this RequestQueue. A Request
     * will be in this set if it is waiting in any queue or currently being processed by
     * any dispatcher.
     */
    private final Set<Request<?>> mCurrentRequests = new HashSet<Request<?>>();


    /**
     * Staging area for requests that already have a duplicate request in flight.
     *
     * <ul>
     *     <li>containsKey(cacheKey) indicates that there is a request in flight for the given cache
     *         key.</li>
     *     <li>get(cacheKey) returns waiting requests for the given cache key. The in flight request
     *         is not contained in that list. Is null if no requests are staged.</li>
     * </ul>
     */
    private final Map<String, Queue<Request<?>>> mWaitingRequests =
            new HashMap<String, Queue<Request<?>>>();


The English comments say:

mCurrentRequests is the set of all requests currently handled by this queue. Any request that is waiting in one of the queues or being processed by a dispatcher belongs to this set.

mWaitingRequests is a staging area for requests that duplicate a request already in flight.

In plain terms: one is the collection of requests that are executing or waiting to execute; the other is a Map that temporarily holds duplicate requests.
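A concrete hint at why Volley keeps mCurrentRequests around: it is the set the queue walks when you cancel requests. The sketch below mirrors RequestQueue.cancelAll(RequestFilter) from the same source; details may differ between Volley versions.

// RequestFilter is a nested interface of RequestQueue: boolean apply(Request<?> request)
public void cancelAll(RequestFilter filter) {
    synchronized (mCurrentRequests) {
        for (Request<?> request : mCurrentRequests) {
            if (filter.apply(request)) {
                request.cancel();
            }
        }
    }
}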




When add() is called, the request is first added to mCurrentRequests, and this step is synchronized on that set. Volley then assigns the request a sequence number and a marker:
request.setSequence(getSequenceNumber());
request.addMarker("add-to-queue");
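The sequence number comes from an AtomicInteger held by the RequestQueue, so every request gets a unique, monotonically increasing number. A sketch of that part of RequestQueue (taken from the classic Volley source; check your own version):

// Requires java.util.concurrent.atomic.AtomicInteger
private final AtomicInteger mSequenceGenerator = new AtomicInteger();

/** Gets a sequence number for the next request. */
public int getSequenceNumber() {
    return mSequenceGenerator.incrementAndGet();
}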


Next comes this check:
if (!request.shouldCache()) {
    mNetworkQueue.add(request);
    return request;
}

This code is interesting: if a request is not allowed to be cached, it is handed straight to mNetworkQueue and the rest of the method never runs.

mNetworkQueue is defined as follows:

/** The queue of requests that are actually going out to the network. */
    private final PriorityBlockingQueue<Request<?>> mNetworkQueue =
        new PriorityBlockingQueue<Request<?>>();


This is a priority-ordered blocking queue; who consumes it will be discussed later.
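Why a PriorityBlockingQueue rather than a plain FIFO queue? Request implements Comparable, so the queue hands out requests by priority and, within the same priority, by the sequence number assigned in add(). A sketch of that comparison, abridged from Request.compareTo in the same source (it may differ in newer Volley versions):

@Override
public int compareTo(Request<T> other) {
    Priority left = this.getPriority();
    Priority right = other.getPriority();

    // High-priority requests are "lesser", so they are taken from the queue first.
    // Requests with equal priority fall back to FIFO order via the sequence number.
    return left == right
            ? this.mSequence - other.mSequence
            : right.ordinal() - left.ordinal();
}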

In the Request class we see
private boolean mShouldCache = true;
indicating that every request in Volley is cached by default.
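To opt a request out of the cache path, call setShouldCache(false) before adding it. A minimal sketch (the URL is a placeholder, and queue is assumed to be the RequestQueue from earlier):

StringRequest request = new StringRequest(Request.Method.GET, "https://example.com/live",
        response -> { /* handle fresh data */ },
        error -> { /* handle error */ });
request.setShouldCache(false); // shouldCache() now returns false, so add() sends it straight to mNetworkQueue
queue.add(request);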

Next, the request's cacheKey is used to detect duplicates. If a request with the same cacheKey is already in flight, the queue of requests waiting on that key is fetched (or created if it is still null), the new request is appended, and the queue is put back into mWaitingRequests. If there is no entry for this cacheKey yet, a null value is stored under the key to mark the request as in flight, and the request itself goes into the cache queue mCacheQueue:
private final PriorityBlockingQueue<Request<?>> mCacheQueue =
        new PriorityBlockingQueue<Request<?>>();
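The staged duplicates are released when the in-flight request finishes: RequestQueue.finish() removes the cacheKey entry and moves any waiting requests into mCacheQueue, where they can be answered from the cache the first request has just primed. A simplified sketch of that logic (abridged from the same source; details may vary by version):

<T> void finish(Request<T> request) {
    // The request is done: drop it from the set of current requests.
    synchronized (mCurrentRequests) {
        mCurrentRequests.remove(request);
    }

    if (request.shouldCache()) {
        synchronized (mWaitingRequests) {
            String cacheKey = request.getCacheKey();
            Queue<Request<?>> waitingRequests = mWaitingRequests.remove(cacheKey);
            if (waitingRequests != null) {
                // The cache has just been primed by 'request', so the staged
                // duplicates can all be served from the cache queue.
                mCacheQueue.addAll(waitingRequests);
            }
        }
    }
}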



How the requests in mCacheQueue actually get executed will be covered later.






Next section: Volley's task scheduling model: http://f303153041.iteye.com/blog/2281352







