A comprehensive look at Volley: understanding it from the source code perspective

Volley is Google's lightweight asynchronous networking and image loading framework for Android, announced at the Google I/O 2013 conference. It is best suited to scenarios with small payloads and frequent network operations.

Main features:
(1) Extensibility. Volley's design is largely interface-based and highly configurable.
(2) It follows the HTTP specification to a reasonable degree, including handling of response codes (2xx, 3xx, 4xx, 5xx) and request headers, and it supports a caching mechanism as well as retries and request priorities.
(3) It is based on HttpURLConnection on Android 2.3 and above, and on HttpClient below 2.3.
    Why choose between HttpURLConnection and AndroidHttpClient (a wrapper around HttpClient) this way:
Before Froyo (2.2), HttpURLConnection had a major bug: calling close() could affect the connection pool and break connection reuse, so keep-alive had to be disabled when using HttpURLConnection before Froyo (see the sketch after this list). In Gingerbread (2.3), HttpURLConnection enables gzip compression by default and HTTPS performance was improved; since Ice Cream Sandwich (4.0), HttpURLConnection supports caching of responses. Since the HttpURLConnection API is also comparatively simple, the recommendation on Android is to use HttpURLConnection from 2.3 onward and AndroidHttpClient before that.
(4) A simple image loading tool is provided.
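
A minimal sketch of the keep-alive workaround mentioned in point (3), following the commonly cited advice for pre-Froyo devices (this is general Android advice, not part of Volley itself):

if (Build.VERSION.SDK_INT < Build.VERSION_CODES.FROYO) {
    // disable connection reuse so the pre-2.2 close() bug cannot poison pooled connections
    System.setProperty("http.keepAlive", "false");
}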

 

 
(I). Basic usage
     (A) Using the various Request types
Volley is simple to use: we only need to create a RequestQueue and then throw HTTP requests into that queue; Volley will
continuously take requests out of the queue and hand them to a pool of worker threads. An HTTP request is encapsulated by the Request class, so we only have to create a Request object and supply parameters such as the URL. All network operations run on worker threads, so we do not have to worry about blocking the UI thread.
The result of the asynchronous network request is delivered back to us; we just need to handle the Request's callback.
Request itself is an abstract class and cannot be instantiated directly. Volley provides several Request implementations for us: StringRequest for an ordinary HTTP request, JsonRequest, which can carry JSON data and wraps the server's response into a JSONObject, and ImageRequest, which requests an image over the network and wraps the server's response into a Bitmap.
Using a Request takes three steps: 1. create the RequestQueue; 2. create the Request object and add it to the queue; 3. handle the callback.

Creating a RequestQueue is simple: call the static Volley.newRequestQueue method and pass in a Context:

 

private RequestQueue mQueue = null;
// create the request queue...
mQueue = Volley.newRequestQueue(this); // "this" is the current Context

 

   1. StringRequest
StringRequest uses the GET method by default; other HTTP methods can be used through its overloaded constructors.
String url = "http://192.168.56.1:8080";
StringRequest request = new StringRequest(url,new Response.Listener<String>()
		{
			@Override
			public void onResponse(String response)//success callbacks
			{
                //handle it
			}
			
		}, new Response.ErrorListener()//error callbacks
		{
			@Override
			public void onErrorResponse(VolleyError error)
			{
				error.printStackTrace();
			}
		});
		//add request to queue...
		mQueue.add(request);
Response.Listener handles the success callback and Response.ErrorListener handles the failure callback. The code is straightforward and needs little explanation.
         2. JsonRequest
Note that JSONObject here comes from Android's built-in org.json library, not from Gson.
Map<String, String> params = new HashMap<String, String>();
		params.put("name", "zhangsan");
		params.put("age", "17");
		JSONObject jsonRequest = new JSONObject(params);
		Log.i(TAG, jsonRequest.toString());
		// If jsonRequest is null, the request is sent as a GET; otherwise it is a POST.
		// If jsonRequest is not null, Volley sends the JSONObject to the server verbatim as a
		// JSON string; it is NOT converted into key-value pairs, because Volley has no way of
		// knowing how to do that conversion.
		String url = "http://192.168.56.1:8080/volley_test/servlet/JsonServlet";
		JsonObjectRequest request = new JsonObjectRequest(url, jsonRequest, new Response.Listener<JSONObject>()
		{
			@Override
			public void onResponse(JSONObject response)
			{
		      //handle it
			}
			 
		},new Response.ErrorListener()
		{
			@Override
			public void onErrorResponse(VolleyError error)
			{
				error.printStackTrace();
			}
		});
		mQueue.add(request);
3. ImageRequest
        ImageRequest lets you constrain the width, height and decode quality of an image. If the original image's width or height exceeds the specified values, it is scaled down.
ImageRequest request = new ImageRequest("http://192.168.56.1:8080/volley_test/image.jpg",new Response.Listener<Bitmap>()
		{
			@Override
			public void onResponse(Bitmap response)
			{
				mImageView.setImageBitmap(response);
			}
		},0,0, Bitmap.Config.ARGB_8888,new Response.ErrorListener()
		{// maxWidth/maxHeight of 0, 0 means the image is not scaled down
			@Override
			public void onErrorResponse(VolleyError error)
			{
				show(error.getMessage());
				// a default image could be shown here instead
			}
		});
		
		mQueue.add(request);
There are two more ways to load images; we introduce them in a later section.
         4. Adding request headers
Sometimes we need to add custom headers to a Request; to do so, override Request's getHeaders method.
String url = "http://192.168.56.1:8080/volley_test/servlet/JsonServlet";
JsonObjectRequest request = new JsonObjectRequest(url, null,resplistener,errlistener)
		{
			// add custom request headers
			@Override
			public Map<String, String> getHeaders() throws AuthFailureError
			{
				Map<String,String> map = new HashMap<String,String>();
				map.put("header1","header1_val");
				map.put("header2","header2_val");
				return map;
			}
		};
  5. Adding POST parameters
 To add POST parameters to a Request, override its getParams method; also pass Method.POST to the constructor so that the request is sent as a POST.
String url = "http://192.168.56.1:8080/volley_test/servlet/PostServlet";
StringRequest request = new StringRequest(Method.POST,url,listener, errorListener)
		{
			// a POST request needs to override getParams
			@Override
			protected Map<String, String> getParams() throws AuthFailureError
			{
				Map<String,String> map = new HashMap<String,String>();
				map.put("KEY1","value1");
				map.put("KEY2", "value2");
				return map;
			}			
		};
Request also has a number of other getXXX methods; refer to the source code for details.
         6. Cancelling requests
When an Activity is destroyed we may need to cancel some outstanding network requests, which can be done like this:
Request request = ...;
request.setTag("MAIN_ACTIVITY");

@Override
protected void onDestroy()
{
    ...
    mQueue.cancelAll("MAIN_ACTIVITY");
}
Give every request that belongs to the Activity a tag, then pass that tag to cancelAll when the Activity is destroyed.
RequestQueue#cancelAll has another overload that takes a RequestFilter, letting you define your own filtering policy, as in the sketch below.
If you want to drop all requests and make no further network calls at all, you can shut down the RequestQueue itself by calling its stop method.
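A minimal sketch of the RequestFilter overload (the filter below matches every request, which is equivalent to cancelling everything; real code would inspect each request):

mQueue.cancelAll(new RequestQueue.RequestFilter()
{
	@Override
	public boolean apply(Request<?> request)
	{
		return true;// match (and therefore cancel) every pending request
	}
});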
         7. A globally shared RequestQueue
There is no need to create a RequestQueue in every Activity; maintaining a single global one is enough. The natural place for it is the Application class: create the RequestQueue there and expose a method that returns it. The code is very simple; a sketch follows.
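A minimal sketch, assuming a custom Application subclass registered in the manifest (the class and method names below are illustrative, not from the original post):

public class App extends Application
{
	private static RequestQueue sQueue;

	@Override
	public void onCreate()
	{
		super.onCreate();
		// one queue for the whole process
		sQueue = Volley.newRequestQueue(getApplicationContext());
	}

	public static RequestQueue getQueue()
	{
		return sQueue;
	}
}

Any Activity can then call App.getQueue().add(request) without creating its own queue.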
        
     (B) Using the image loading tools
ImageRequest, described above, loads images over the network, but it is not the most concise option; Volley also provides ImageLoader and NetworkImageView. Both of them use ImageRequest internally.
          1.ImageLoader
RequestQueue mQueue = ...;
ImageCache mImageCache = ...;
ImageLoader loader = new ImageLoader(mQueue, mImageCache);
ImageListener listener = ImageLoader.getImageListener(mImageView /* the target ImageView */, R.drawable.ic_launcher /* placeholder shown while loading */, R.drawable.task_icon /* image shown if loading fails */);
loader.get("http://192.168.56.1:8080/volley_test/image.jpg", listener, 0, 0);
Creating the RequestQueue needs no further explanation. Next we create an ImageLoader instance, passing it the request queue and an ImageCache. ImageCache is an interface we have to implement ourselves; it is the image cache, usually backed by LruCache as a memory cache. If you do not want a memory cache, an empty implementation will do. Then we create the ImageListener, which ties the ImageLoader to the target ImageView and specifies the placeholder image and the image shown on failure. Finally we call the get method with the URL to issue the network request.
About the ImageCache:
The ImageCache interface:
 public interface ImageCache {
        public Bitmap getBitmap(String url);
        public void putBitmap(String url, Bitmap bitmap);
    }
As you can see, if we want image caching we only need to implement getBitmap/putBitmap. Every time ImageLoader#get is called, it first looks the image up in the ImageCache (getBitmap); only on a cache miss is a request added to the queue.
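As mentioned above, if you do not want a memory cache at all, an empty implementation is enough; a minimal sketch:

ImageCache noCache = new ImageCache()
{
	@Override
	public Bitmap getBitmap(String url)
	{
		return null;// always a miss, so every get() goes through the request queue
	}
	@Override
	public void putBitmap(String url, Bitmap bitmap)
	{
		// discard; nothing is kept in memory
	}
};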
How should ImageCache be implemented:
Usually by wrapping LruCache. Note that the ImageCache should be made a singleton and shared globally. The following code combines ImageCache with LruCache:
/**
	 * @author Rowandjj
	 * The image cache should be a singleton, shared globally.
	 */
	private static class LruImageCache implements ImageCache
	{
		private LruImageCache(){}
		
		private static LruImageCache instance = new LruImageCache();
		
		public static final LruImageCache getInstance()
		{
			return instance;
		}
		private static final String TAG = "LruImageCache";
		private final int maxSize = (int) (Runtime.getRuntime().maxMemory()/8);
		private LruCache<String,Bitmap> mCacheMap = new LruCache<String,Bitmap>(maxSize)
		{
			protected int sizeOf(String key, Bitmap value)
			{
				return value.getRowBytes()*value.getHeight();
			}
		};
		
		@Override
		public Bitmap getBitmap(String url)
		{
			Bitmap bitmap = mCacheMap.get(url);
			Log.i(TAG, "url = "+url+",cache:"+bitmap);
			return bitmap;
		}
		@Override
		public void putBitmap(String url, Bitmap bitmap)
		{
			Log.i(TAG, "put url = "+url);
			mCacheMap.put(url, bitmap);
		}
		
	}
   2. NetworkImageView
NetworkImageView is a custom view; its usage is similar to ImageView.
<com.android.volley.toolbox.NetworkImageView
            android:id="@+id/niv"
             android:layout_width="0dp"
             android:layout_height="match_parent"
             android:layout_weight="1" 
             />
ImageLoader loader = ...;
mNetImageView = (NetworkImageView) findViewById(R.id.niv);
mNetImageView.setDefaultImageResId(R.drawable.ic_launcher);
mNetImageView.setErrorImageResId(R.drawable.task_icon);
mNetImageView.setImageUrl("http://192.168.56.1:8080/volley_test/image.jpg", loader);
The code is straightforward and needs little explanation.
    (C) Custom requests
Volley is a highly extensible framework; we can build a lot of our own functionality on top of it, such as custom Requests.
We saw above that JsonRequest can transfer and parse JSON data; now let's write a custom XMLRequest to parse XML data.
A custom Request needs to override Request#parseNetworkResponse to parse the network response data, and Request#deliverResponse to call back Listener.onResponse.
public class XMLRequest extends Request<XmlPullParser>
{
	private Listener<XmlPullParser> mListener;
	public XMLRequest(int method, String url, Listener<XmlPullParser> listener,
			ErrorListener errorListener)
	{
		super(method, url, errorListener);
		mListener = listener;
	}
	public XMLRequest(String url, Listener<XmlPullParser> listener,
			ErrorListener errorListener)
	{
		this(Method.GET, url, listener, errorListener);
	}
	@Override
	protected Response<XmlPullParser> parseNetworkResponse(NetworkResponse response)
	{
		try
		{
			String xmlString = new String(response.data,HttpHeaderParser.parseCharset(response.headers));
			XmlPullParser parser = Xml.newPullParser();
			parser.setInput(new StringReader(xmlString));// feed the response data to the parser
			return Response.success(parser,HttpHeaderParser.parseCacheHeaders(response));
		} catch (UnsupportedEncodingException e)
		{
			return Response.error(new VolleyError(e));
		} catch (XmlPullParserException e)
		{
			return Response.error(new VolleyError(e));
		}
	}
	@Override
	protected void deliverResponse(XmlPullParser response)
	{
		mListener.onResponse(response);
	}
}

Usage:

/**
	 * Example usage of XMLRequest
	 */
	void test()
	{
		RequestQueue queue = Volley.newRequestQueue(context);
		String url = "";
		XMLRequest request = new XMLRequest(url,new Response.Listener<XmlPullParser>()
		{
			@Override
			public void onResponse(XmlPullParser response)
			{
				try
				{
					int type = response.getEventType();
					while(type != XmlPullParser.END_DOCUMENT)
					{
						switch (type)
						{
						case XmlPullParser.START_TAG:
							// handle the start of an element here
							break;
						case XmlPullParser.END_TAG:
							// handle the end of an element here
							break;
						default:
							break;
						}
						type = response.next();// advance to the next event, otherwise the loop never ends
					}
				} catch (XmlPullParserException e)
				{
					e.printStackTrace();
				} catch (IOException e)
				{
					e.printStackTrace();
				}
			}
		},new Response.ErrorListener()
		{
			@Override
			public void onErrorResponse(VolleyError error)
			{
			}
		});
		queue.add(request);
	}

In fact, besides custom Requests we can customize many other things, such as the RequestQueue itself. Look at the RequestQueue constructor:

public RequestQueue(Cache cache, Network network, int threadPoolSize,
            ResponseDelivery delivery)
We can see that the constructor lets us specify the caching policy (disk cache by default), the network implementation, the thread pool size, the result delivery policy, and so on. All of these are configurable. Volley really is a well written framework!
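As a sketch, using only the public classes already quoted in this article (the cache directory name and pool size below are just examples), a RequestQueue could be assembled by hand instead of through Volley.newRequestQueue:

File cacheDir = new File(context.getCacheDir(), "volley");
Network network = new BasicNetwork(new HurlStack());
// custom cache, network, pool size and delivery policy
RequestQueue queue = new RequestQueue(new DiskBasedCache(cacheDir), network, 8,
        new ExecutorDelivery(new Handler(Looper.getMainLooper())));
queue.start();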
 
(II). Source analysis
Knowing how to use Volley is not enough; we should analyze the source code and see how it works internally, so that we can learn from it.
    1. The main line
First let's grasp Volley's workflow as a whole and follow its main line.
(1) Creating the request queue (RequestQueue)
Creating the request queue starts from Volley#newRequestQueue. Internally this method calls the RequestQueue constructor and fills in some basic configuration: the caching policy is the disk cache (DiskBasedCache), the HTTP stack is HttpURLConnection (API level >= 9) or HttpClient (API level < 9), and the default thread pool size is 4. Finally RequestQueue#start is called to start the queue.
Let's look at the code in detail:
//Volley.java
 public static RequestQueue newRequestQueue(Context context) {
        return newRequestQueue(context, null);
    }

This calls another factory method:

//volley.java
public static RequestQueue newRequestQueue(Context context, HttpStack stack) {
        File cacheDir = new File(context.getCacheDir(), DEFAULT_CACHE_DIR);
          ... ...
        if (stack == null) {
            if (Build.VERSION.SDK_INT >= 9) {
                stack = new HurlStack();
            } else {
                // Prior to Gingerbread, HttpUrlConnection was unreliable.
                // See: http://android-developers.blogspot.com/2011/09/androids-http-clients.html
                stack = new HttpClientStack(AndroidHttpClient.newInstance(userAgent));
            }
        }
        Network network = new BasicNetwork(stack);
        RequestQueue queue = new RequestQueue(new DiskBasedCache(cacheDir), network);
        queue.start();
        return queue;
    }
This sets the disk cache location to /data/data/<package_name>/cache/volley/, creates the Network object (implementation class BasicNetwork) that encapsulates the actual HTTP request, chooses the HTTP tool according to the current API level, and finally starts the request queue.
Now look at the RequestQueue constructors:
//RequestQueue.java
public RequestQueue(Cache cache, Network network) {
        this(cache, network, DEFAULT_NETWORK_THREAD_POOL_SIZE);
    }

This delegates onward with the default thread pool size of 4.

//RequestQueue.java  
public RequestQueue(Cache cache, Network network, int threadPoolSize) {
        this(cache, network, threadPoolSize,
                new ExecutorDelivery(new Handler(Looper.getMainLooper())));
    }
 public RequestQueue(Cache cache, Network network, int threadPoolSize,
            ResponseDelivery delivery) {
        mCache = cache;
        mNetwork = network;
        mDispatchers = new NetworkDispatcher[threadPoolSize];
        mDelivery = delivery;
    }
ResponseDelivery here is the dispatcher of request results (concrete implementation: ExecutorDelivery). Internally it delivers results back to the main thread (we can guess as much from the Handler built on the UI thread's Looper) and triggers the callbacks.
To see what happens once the request queue is started, look at RequestQueue#start:
 public void start() {
        stop();  // Make sure any currently running dispatchers are stopped.
        // Create the cache dispatcher and start it.
        mCacheDispatcher = new CacheDispatcher(mCacheQueue, mNetworkQueue, mCache, mDelivery);
        mCacheDispatcher.start();
        // Create network dispatchers (and corresponding threads) up to the pool size.
        for (int i = 0; i < mDispatchers.length; i++) {
            NetworkDispatcher networkDispatcher = new NetworkDispatcher(mNetworkQueue, mNetwork,
                    mCache, mDelivery);
            mDispatchers[i] = networkDispatcher;
            networkDispatcher.start();
        }
    }

The logic is simple: create one CacheDispatcher and four NetworkDispatcher objects, then start each of them. Both CacheDispatcher and NetworkDispatcher are subclasses of Thread; the CacheDispatcher handles cache requests while the four NetworkDispatchers handle network requests. The CacheDispatcher is injected, through its constructor, with the cache request queue (mCacheQueue), the network request queue (mNetworkQueue), the disk cache object (DiskBasedCache) and the result dispatcher (mDelivery). The network request queue is injected as well because some cached entries may have expired, in which case the request has to be fetched from the network again. A NetworkDispatcher receives the same dependencies except the cache request queue. At this point the RequestQueue's own work is done; from here on, requests are handled by the dispatcher threads.

 

(2) Adding a request
Adding a request is handled by RequestQueue#add. The process is:
1. add the request to the mCurrentRequests set;
2. assign the request a sequence number;
3. decide whether the request should be cached; if not, add it straight to the network request queue;
4. if an identical request is already in flight, stage this one in the waiting map; otherwise add it to the cache queue.
public Request add(Request request) {
        // Tag the request as belonging to this queue and add it to the set of current requests.
        request.setRequestQueue(this);
        synchronized (mCurrentRequests) {
            mCurrentRequests.add(request);
        }
        // Process requests in the order they are added.
        request.setSequence(getSequenceNumber());
        request.addMarker("add-to-queue");
        // If the request is uncacheable, skip the cache queue and go straight to the network.
        if (!request.shouldCache()) {
            mNetworkQueue.add(request);
            return request;
        }
        // Insert request into stage if there's already a request with the same cache key in flight.
        synchronized (mWaitingRequests) {
            String cacheKey = request.getCacheKey();
            if (mWaitingRequests.containsKey(cacheKey)) {
                // There is already a request in flight. Queue up.
                Queue<Request> stagedRequests = mWaitingRequests.get(cacheKey);
                if (stagedRequests == null) {
                    stagedRequests = new LinkedList<Request>();
                }
                stagedRequests.add(request);
                mWaitingRequests.put(cacheKey, stagedRequests);
                if (VolleyLog.DEBUG) {
                    VolleyLog.v("Request for cacheKey=%s is in flight, putting on hold.", cacheKey);
                }
            } else {
                // Insert 'null' queue for this cacheKey, indicating there is now a request in
                // flight.
                mWaitingRequests.put(cacheKey, null);
                mCacheQueue.add(request);
            }
            return request;
        }
    }

 

Through this method the request is routed to one of the two queues, to be processed by the CacheDispatcher or a NetworkDispatcher.

(3) Processing the request
Requests are processed by the CacheDispatcher and the NetworkDispatchers: their run methods loop endlessly, continuously taking requests from their respective queues, processing them, and handing the results to the ResponseDelivery. Both follow the same idea, though the concrete logic differs slightly; let's look at each in turn.
     1. Processing cache requests
CacheDispatcher.java#run 
@Override
    public void run() {
        if (DEBUG) VolleyLog.v("start new dispatcher");
        Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND);
        // Make a blocking call to initialize the cache.
        mCache.initialize();
        while (true) {
            try {
                // Get a request from the cache triage queue, blocking until
                // at least one is available.
                final Request request = mCacheQueue.take();
                request.addMarker("cache-queue-take");
                // If the request has been canceled, don't bother dispatching it.
                if (request.isCanceled()) {
                    request.finish("cache-discard-canceled");
                    continue;
                }
                // Attempt to retrieve this item from cache.
                Cache.Entry entry = mCache.get(request.getCacheKey());
                if (entry == null) {
                    request.addMarker("cache-miss");
                    // Cache miss; send off to the network dispatcher.
                    mNetworkQueue.put(request);
                    continue;
                }
                // If it is completely expired, just send it to the network.
                if (entry.isExpired()) {
                    request.addMarker("cache-hit-expired");
                    request.setCacheEntry(entry);
                    mNetworkQueue.put(request);
                    continue;
                }
                // We have a cache hit; parse its data for delivery back to the request.
                request.addMarker("cache-hit");
                Response<?> response = request.parseNetworkResponse(
                        new NetworkResponse(entry.data, entry.responseHeaders));
                request.addMarker("cache-hit-parsed");
                if (!entry.refreshNeeded()) {
                    // Completely unexpired cache hit. Just deliver the response.
                    mDelivery.postResponse(request, response);
                } else {
                    // Soft-expired cache hit. We can deliver the cached response,
                    // but we need to also send the request to the network for
                    // refreshing.
                    request.addMarker("cache-hit-refresh-needed");
                    request.setCacheEntry(entry);
                    // Mark the response as intermediate.
                    response.intermediate = true;
                    // Post the intermediate response back to the user and have
                    // the delivery then forward the request along to the network.
                    mDelivery.postResponse(request, response, new Runnable() {
                        @Override
                        public void run() {
                            try {
                                mNetworkQueue.put(request);
                            } catch (InterruptedException e) {
                                // Not much we can do about this.
                            }
                        }
                    });
                }
            } catch (InterruptedException e) {
                // We may have been interrupted because it was time to quit.
                if (mQuit) {
                    return;
                }
                continue;
            }
        }
    }


The logic, roughly: the request taken from the queue is first checked for cancellation; if it was cancelled we finish it and move on, otherwise we continue. Next, the cached entry (Cache.Entry) is looked up in the disk cache by the cache key; if there is none, the request is put on the network queue. Otherwise the entry is checked for expiration (the requested page must have set Cache-Control or Last-Modified/Expires headers, with Cache-Control taking precedence over Expires; otherwise the entry is always considered expired); if it has expired, the request goes to the network queue. If it has not expired, the cached data is wrapped into a Response object via request.parseNetworkResponse (parseNetworkResponse is abstract in Request and overridden by subclasses). Finally a freshness check is made: if no refresh is needed, postResponse is called to hand the result to the ResponseDelivery dispatcher; otherwise the cached result is delivered first and the request is then forwarded to the network queue to be refreshed. [Reading code like this is a pleasure; the Google engineers wrote it really well!] The details of ResponseDelivery are left for a later section.

 2. Processing network requests

@Override
    public void run() {
        Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND);
        Request request;
        while (true) {
            try {
                // Take a request from the queue.
                request = mQueue.take();
            } catch (InterruptedException e) {
                // We may have been interrupted because it was time to quit.
                if (mQuit) {
                    return;
                }
                continue;
            }
            try {
                request.addMarker("network-queue-take");
                // If the request was cancelled already, do not perform the
                // network request.
                if (request.isCanceled()) {
                    request.finish("network-discard-cancelled");
                    continue;
                }
                // Tag the request (if API >= 14)
                if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.ICE_CREAM_SANDWICH) {
                    TrafficStats.setThreadStatsTag(request.getTrafficStatsTag());
                }
                // Perform the network request.
                NetworkResponse networkResponse = mNetwork.performRequest(request);
                request.addMarker("network-http-complete");
                // If the server returned 304 AND we delivered a response already,
                // we're done -- don't deliver a second identical response.
                if (networkResponse.notModified && request.hasHadResponseDelivered()) {
                    request.finish("not-modified");
                    continue;
                }
                // Parse the response here on the worker thread.
                Response<?> response = request.parseNetworkResponse(networkResponse);
                request.addMarker("network-parse-complete");
                // Write to cache if applicable.
                // TODO: Only update cache metadata instead of entire record for 304s.
                if (request.shouldCache() && response.cacheEntry != null) {
                    mCache.put(request.getCacheKey(), response.cacheEntry);
                    request.addMarker("network-cache-written");
                }
                // Post the response back.
                request.markDelivered();
                mDelivery.postResponse(request, response);
            } catch (VolleyError volleyError) {
                parseAndDeliverNetworkError(request, volleyError);
            } catch (Exception e) {
                VolleyLog.e(e, "Unhandled exception %s", e.toString());
                mDelivery.postError(request, new VolleyError(e));
            }
        }
    }

The logic here is similar to the CacheDispatcher: a Response object is built and handed to the ResponseDelivery, but this time the Response is converted from a NetworkResponse, and that NetworkResponse is fetched from the network. The single most important line here is

 NetworkResponse networkResponse = mNetwork.performRequest(request);

mNetwork is a BasicNetwork object; let's look at its performRequest implementation:

 public NetworkResponse performRequest(Request<?> request) throws VolleyError {
        long requestStart = SystemClock.elapsedRealtime();
        while (true) {
            HttpResponse httpResponse = null;
            byte[] responseContents = null;
            Map<String, String> responseHeaders = new HashMap<String, String>();
            try {
                // Gather headers.
                Map<String, String> headers = new HashMap<String, String>();
                addCacheHeaders(headers, request.getCacheEntry());
                httpResponse = mHttpStack.performRequest(request, headers);
                StatusLine statusLine = httpResponse.getStatusLine();
                int statusCode = statusLine.getStatusCode();
                responseHeaders = convertHeaders(httpResponse.getAllHeaders());
                // Handle cache validation.
                if (statusCode == HttpStatus.SC_NOT_MODIFIED) {
                    return new NetworkResponse(HttpStatus.SC_NOT_MODIFIED,
                            request.getCacheEntry().data, responseHeaders, true);
                }
                // Some responses such as 204s do not have content.  We must check.
                if (httpResponse.getEntity() != null) {
                  responseContents = entityToBytes(httpResponse.getEntity());
                } else {
                  // Add 0 byte response as a way of honestly representing a
                  // no-content request.
                  responseContents = new byte[0];
                }
                // if the request is slow, log it.
                long requestLifetime = SystemClock.elapsedRealtime() - requestStart;
                logSlowRequests(requestLifetime, request, responseContents, statusLine);
                if (statusCode < 200 || statusCode > 299) {
                    throw new IOException();
                }
                return new NetworkResponse(statusCode, responseContents, responseHeaders, false);
            } catch (SocketTimeoutException e) {
                attemptRetryOnException("socket", request, new TimeoutError());
            } catch (ConnectTimeoutException e) {
                attemptRetryOnException("connection", request, new TimeoutError());
            } catch (MalformedURLException e) {
                throw new RuntimeException("Bad URL " + request.getUrl(), e);
            } catch (IOException e) {
                int statusCode = 0;
                NetworkResponse networkResponse = null;
                if (httpResponse != null) {
                    statusCode = httpResponse.getStatusLine().getStatusCode();
                } else {
                    throw new NoConnectionError(e);
                }
                VolleyLog.e("Unexpected response code %d for %s", statusCode, request.getUrl());
                if (responseContents != null) {
                    networkResponse = new NetworkResponse(statusCode, responseContents,
                            responseHeaders, false);
                    if (statusCode == HttpStatus.SC_UNAUTHORIZED ||
                            statusCode == HttpStatus.SC_FORBIDDEN) {
                        attemptRetryOnException("auth",
                                request, new AuthFailureError(networkResponse));
                    } else {
                        // TODO: Only throw ServerError for 5xx status codes.
                        throw new ServerError(networkResponse);
                    }
                } else {
                    throw new NetworkError(networkResponse);
                }
            }
        }
    }

The most important line here is:

httpResponse = mHttpStack.performRequest(request, headers);

It calls HttpStack#performRequest, which internally uses HttpURLConnection or HttpClient to perform the actual network request. There is no need to follow the source any deeper here.

 

(4) Dispatching and handling the result
Dispatching of request results is done by ExecutorDelivery, the implementation class of ResponseDelivery. It is created in the RequestQueue constructor and is bound to the UI thread's Looper:
 public RequestQueue(Cache cache, Network network, int threadPoolSize) {
        this(cache, network, threadPoolSize,
                new ExecutorDelivery(new Handler(Looper.getMainLooper())));
    }

ExecutorDelivery holds a custom Executor that simply wraps a Handler; every result to be dispatched is ultimately handed to the UI thread via handler.post.

public ExecutorDelivery(final Handler handler) {
        // Make an Executor that just wraps the handler.
        mResponsePoster = new Executor() {
            @Override
            public void execute(Runnable command) {
                handler.post(command);
            }
        };
    }

Now for the method we care about most, postResponse:

@Override
    public void postResponse(Request<?> request, Response<?> response, Runnable runnable) {
        request.markDelivered();
        request.addMarker("post-response");
        mResponsePoster.execute(new ResponseDeliveryRunnable(request, response, runnable));
    }
@Override
    public void postResponse(Request<?> request, Response<?> response) {
        postResponse(request, response, null);
    }

What ultimately runs is the ResponseDeliveryRunnable:

private class ResponseDeliveryRunnable implements Runnable {
        private final Request mRequest;
        private final Response mResponse;
        private final Runnable mRunnable;
        public ResponseDeliveryRunnable(Request request, Response response, Runnable runnable) {
            mRequest = request;
            mResponse = response;
            mRunnable = runnable;
        }
        @SuppressWarnings("unchecked")
        @Override
        public void run() {
            // If this request has canceled, finish it and don't deliver.
            if (mRequest.isCanceled()) {
                mRequest.finish("canceled-at-delivery");
                return;
            }
            // Deliver a normal response or error, depending.
            if (mResponse.isSuccess()) {
                mRequest.deliverResponse(mResponse.result);
            } else {
                mRequest.deliverError(mResponse.error);
            }
            // If this is an intermediate response, add a marker, otherwise we're done
            // and the request can be finished.
            if (mResponse.intermediate) {
                mRequest.addMarker("intermediate-response");
            } else {
                mRequest.finish("done");
            }
            // If we have been provided a post-delivery runnable, run it.
            if (mRunnable != null) {
                mRunnable.run();
            }
       }
}

Here we see request.deliverResponse being called; this method normally calls back Listener.onResponse. And with that, we have followed the entire main line of the Volley framework! Having read this far, I genuinely admire the Google engineers.

 

 

2. Some side details
    (1) Cancelling a request
Calling Request#cancel cancels a request. The cancel method is trivial: it simply sets the mCanceled flag to true. After taking a request from its queue, the run method of CacheDispatcher/NetworkDispatcher checks whether the request has been cancelled:
 if (request.isCanceled()) {
    request.finish("network-discard-cancelled");
      continue;
       }

If the request has been cancelled, Request#finish is called; internally, finish calls the finish method of the RequestQueue the request is bound to, which removes the request from the queue.
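
Roughly, as a simplified sketch of what the prose above describes (not a verbatim copy of the Volley source):

//Request.java (simplified)
public void cancel() {
    mCanceled = true;
}

void finish(final String tag) {
    if (mRequestQueue != null) {
        // removes this request from the queue's bookkeeping
        mRequestQueue.finish(this);
    }
}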

 

 

(2) Stopping the request queue
Calling RequestQueue#stop shuts down the whole request queue, terminating the cache dispatcher thread and the network dispatcher threads:
public void stop() {
        if (mCacheDispatcher != null) {
            mCacheDispatcher.quit();
        }
        for (int i = 0; i < mDispatchers.length; i++) {
            if (mDispatchers[i] != null) {
                mDispatchers[i].quit();
            }
        }
    }

The quit method of each dispatcher sets the mQuit flag and calls interrupt, which makes the thread throw an InterruptedException; when the dispatcher catches the exception it checks mQuit, the while loop ends and the thread exits. (A sketch of quit itself follows after the catch block below.)

catch (InterruptedException e) {
                // We may have been interrupted because it was time to quit.
                if (mQuit) {
                    return;
                }
                continue;
            }
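
For reference, the quit method of each dispatcher is tiny; roughly (a simplified sketch matching the description above):

//CacheDispatcher.java / NetworkDispatcher.java (simplified)
public void quit() {
    mQuit = true; // tell the run() loop to stop
    interrupt();  // wake the thread if it is blocked on the queue
}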

 

 

  (3) ImageLoader
ImageLoader is a wrapper around ImageRequest. The method to focus on here is get:
public ImageContainer get(String requestUrl, ImageListener imageListener,
            int maxWidth, int maxHeight) {
      ...
        final String cacheKey = getCacheKey(requestUrl, maxWidth, maxHeight);
        // Try to look up the request in the cache of remote images.
        Bitmap cachedBitmap = mCache.getBitmap(cacheKey);
        if (cachedBitmap != null) {
            // Return the cached bitmap.
            ImageContainer container = new ImageContainer(cachedBitmap, requestUrl, null, null);
            imageListener.onResponse(container, true);
            return container;
        }
 ...
        Request<?> newRequest =
            new ImageRequest(requestUrl, new Listener<Bitmap>() {
                @Override
                public void onResponse(Bitmap response) {
                    onGetImageSuccess(cacheKey, response);
                }
            }, maxWidth, maxHeight,
            Config.RGB_565, new ErrorListener() {
                @Override
                public void onErrorResponse(VolleyError error) {
                    onGetImageError(cacheKey, error);
                }
            });
        mRequestQueue.add(newRequest);
        mInFlightRequests.put(cacheKey,
                new BatchedImageRequest(newRequest, imageContainer));
        return imageContainer;
    }
It first tries the cache; if the image is not cached, it builds an ImageRequest and adds it to the request queue.
(4) About caching
The CacheDispatcher needs a caching policy to work with, namely the Cache interface. This interface has two implementations, DiskBasedCache and NoCache; DiskBasedCache is used by default. It writes request results to files so they can be reused. Volley is a highly flexible framework and the cache is configurable; you can even plug in your own caching policy.
Unfortunately, DiskBasedCache often cannot actually be used: even when the CacheDispatcher finds data in the cache files, it still checks whether that data has expired, and expired data is not used. This requires the server-side response to be cacheable, which is determined by headers such as Cache-Control and Expires; the server must set these headers for the data to be cacheable. Otherwise the cache is always considered expired and every request ends up going to the network anyway.
If the server side is written as a servlet, it could do the following:
Servlet#doPost/doGet()
/* set the cache headers */
resp.setDateHeader("Last-Modified",System.currentTimeMillis());
resp.setDateHeader("Expires", System.currentTimeMillis()+10*1000*60);
resp.setHeader("Cache-Control","max-age=10000");
resp.setHeader("Pragma","Pragma");

The Cache-Control header takes precedence over Expires; this can be seen in HttpHeaderParser#parseCacheHeaders:

public static Cache.Entry parseCacheHeaders(NetworkResponse response) {
        long now = System.currentTimeMillis();
        Map<String, String> headers = response.headers;
        long serverDate = 0;
        long serverExpires = 0;
        long softExpire = 0;
        long maxAge = 0;
        boolean hasCacheControl = false;
        String serverEtag = null;
        String headerValue;
        headerValue = headers.get("Date");
        if (headerValue != null) {
            serverDate = parseDateAsEpoch(headerValue);
        }
        headerValue = headers.get("Cache-Control");
        if (headerValue != null) {
            hasCacheControl = true;
            String[] tokens = headerValue.split(",");
            for (int i = 0; i < tokens.length; i++) {
                String token = tokens[i].trim();
                if (token.equals("no-cache") || token.equals("no-store")) {
                    return null;
                } else if (token.startsWith("max-age=")) {
                    try {
                        maxAge = Long.parseLong(token.substring(8));
                    } catch (Exception e) {
                    }
                } else if (token.equals("must-revalidate") || token.equals("proxy-revalidate")) {
                    maxAge = 0;
                }
            }
        }
        headerValue = headers.get("Expires");
        if (headerValue != null) {
            serverExpires = parseDateAsEpoch(headerValue);
        }
        serverEtag = headers.get("ETag");
        // Cache-Control takes precedence over an Expires header, even if both exist and Expires
        // is more restrictive.
        if (hasCacheControl) {
            softExpire = now + maxAge * 1000;
        } else if (serverDate > 0 && serverExpires >= serverDate) {
            // Default semantic for Expire header in HTTP specification is softExpire.
            softExpire = now + (serverExpires - serverDate);
        }
        Cache.Entry entry = new Cache.Entry();
        entry.data = response.data;
        entry.etag = serverEtag;
        entry.softTtl = softExpire;
        entry.ttl = entry.softTtl;
        entry.serverDate = serverDate;
        entry.responseHeaders = headers;
        return entry;
    }

This method is called from the parseNetworkResponse method of Request subclasses:

Response.success(parsed, HttpHeaderParser.parseCacheHeaders(response))
Also, when using the image loading tools there is the ImageCache, which serves as a first-level (memory) cache for images; it has nothing to do with the DiskBasedCache above, so don't confuse the two. ImageCache has to be implemented by ourselves, usually together with LruCache, as covered in the first part.
With this, we have analyzed the entire Volley framework. Finally, Volley's overall architecture diagram:

 

Reposted from: https://www.cnblogs.com/wangzehuaw/p/5583919.html
