Analysis of and Countermeasures for the Performance Bottleneck of a Web Service

1. Introduction 

QoS (Quality of Service) control, as one of the core technologies of the next-generation network, has become a hot topic in computer-network research and development. The basic goal of QoS control is to provide performance assurance and differentiated services for Internet applications. With the explosive growth of Web applications on the Internet and the rapid development of e-commerce, how to give users a satisfactory guarantee of service performance has become a new research topic. Because traditional Web servers can provide neither service differentiation nor performance guarantees for Web applications, Web QoS has emerged, along with the deepening research and application of QoS technology, as an important new branch of the field. Oriented toward Web clients and HTTP requests, Web QoS belongs to the application layer; it is an important precondition for business-to-business transactions and a necessary element of a Web server. It measures the service performance a user experiences when interacting with a Web site, such as transaction time and transaction reliability. A Web service implementation must satisfy a variety of QoS attributes, such as availability, accessibility, integrity, performance, reliability, regulatory compliance, and security [1]. 

At present, research on Web QoS control has attracted increasing attention from academia and industry at home and abroad, and some results have been achieved. In summary, Web QoS control techniques fall into the following categories: Web request classification mechanisms, QoS control in Web server software, QoS control in the operating system, and QoS control in Web server clusters. This paper studies the performance bottleneck of a Web service, analyzes it, and proposes corresponding strategies. 

2. The Impact of HTTP on Web QoS and Its Countermeasures 

Because of limitations in the underlying message transmission and transport protocols, Web services encounter performance bottlenecks. Their reliance on widely accepted protocols such as HTTP and SOAP, however, makes this a burden they must bear for the foreseeable future. This paper therefore analyzes the problem and derives corresponding solutions. 

2.1. HTTP Becomes a Bottleneck Restricting Web Service Performance 

HTTP is a stateless data-forwarding mechanism carried over a best-effort network: the underlying IP network guarantees neither that packets reach their destination nor the order in which they arrive. This creates a serious problem: when no bandwidth is available, packets are simply dropped, so even paying subscribers with high service levels receive no service-level guarantee. For example, business-to-business transactions require more reliable delivery than casual browsing, and online stock trading requires real-time guarantees that an ordinary download does not. As the number of users and the volume of data on the network grow and e-commerce develops rapidly, under limited bandwidth and network resources HTTP clearly becomes a bottleneck restricting the performance of Web services: the HTTP protocol cannot provide differentiated services and performance guarantees for Web servers. 

Newly designed protocols such as "Reliable HTTP" (HTTPR), the "Blocks Extensible Exchange Protocol" (BEEP), and "Direct Internet Message Encapsulation" (DIME) could be used instead [2], but widespread adoption of such protocols for Web service transport will take some time. Application designers using Web services should therefore design their systems with an understanding of Web service performance issues such as latency and availability. Some strategies for improving Web service performance and addressing this problem are given below. 

2.2. Four Solution Strategies 

2.2.1 Using Asynchronous Message Queuing 

Traditionally, many applications use synchronous messaging, which poses no problem when the application runs on a single computer: the delay in component communication is measured in milliseconds at most. Web services, however, communicate over the Internet, which means delays of tens, hundreds, or even thousands of milliseconds. 

Applications that rely on remote Web services can use message queuing to improve reliability, at the expense of response time. Applications and Web services within an enterprise can make Web service calls through a message-queuing system such as the Java Message Service (JMS) or IBM MQSeries [2]. Enterprise messaging provides a reliable, flexible service for the asynchronous exchange of critical data across the enterprise. Message queues have two main advantages: 

(1) It is asynchronous: the messaging provider can deliver messages to requesters as they arrive, and requesters do not have to poll for messages in order to receive them. 

(2) It is reliable: the messaging service can ensure that a message is delivered once, and only once. 

In the future, publish/subscribe messaging systems on the Internet, such as the Utility Services package on alphaWorks, may also be used for Web service calls [3]. 
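The two advantages above can be illustrated with a minimal sketch, assuming an in-process `queue.Queue` stands in for an enterprise message broker such as JMS or MQSeries: the requester enqueues a message and continues working, while a background consumer delivers each message to the (simulated) Web service exactly once, in arrival order.

```python
import queue
import threading

request_queue = queue.Queue()
responses = []

def message_consumer():
    """Deliver queued messages to the service as they arrive."""
    while True:
        msg = request_queue.get()
        if msg is None:              # sentinel: shut the consumer down
            break
        # Simulated Web service call; a real system would issue the
        # HTTP/SOAP request here and persist the message for reliability.
        responses.append(f"processed:{msg}")
        request_queue.task_done()

consumer = threading.Thread(target=message_consumer)
consumer.start()

# The requester does not block on the remote call.
for i in range(3):
    request_queue.put(f"order-{i}")

request_queue.put(None)
consumer.join()
print(responses)
```

A real deployment would replace the in-process queue with a persistent broker so that messages survive crashes, which is what gives enterprise messaging its once-and-only-once delivery guarantee.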

2.2.2 Classifying Incoming HTTP Requests 

An important part of implementing Web QoS is classifying incoming HTTP requests. In a traditional Web server, HTTP requests are monitored directly by the worker process, which handles all requests on a first-come, first-served basis, an approach that clearly ignores client priority. A connection-management module is now commonly used instead: it classifies requests and assigns them priorities, making differentiated service for different users possible. Request classification is the core module of Web differentiated service; it assigns each request a priority and places it in the corresponding queue. Many classification methods exist and can be chosen according to actual needs; the common ones fall into the following categories: 

(1) Classification by user 

To classify clients, a client can supply certain information required by the server, from which the server determines the client's identity. One method distinguishes clients by IP address: in a QoS Web server model, the server assigns different service levels to client requests and responds according to a predefined resource-allocation policy [4]. This method has the advantages of low bandwidth overhead, easy implementation, and little added client latency. Its drawback is that the client's IP address is often masked by a proxy server or firewall, which limits its applicability. 

Another method is classification based on HTTP cookies, which embeds a Web cookie in the HTTP request to indicate the class the client belongs to. A cookie is a unique identifier that the server sends to the browser; embedded in subsequent HTTP requests, it can represent different service levels. A provider can issue a permanent cookie for a particular service for the user to present, so that paying and free users can be given different priorities. Cookies can also identify users and record their browsing tendencies over time, for example which kinds of pages they frequently view or which kinds of goods they often purchase, and users with similar interests can be grouped together. When a user visits the site, the server can then recommend pages likely to interest them and predict their future behavior, thereby improving service quality. 

Similar to HTTP-cookie classification, browser plug-in classification embeds a specific identifier in the HTTP request to indicate the client's class. A plug-in is another identification method embedded on the client side: users who have purchased a given priority level of service download a specific plug-in from the server, and the plug-in inserts the identifier into their HTTP requests. These methods group clients so that higher-class clients can be given better Web QoS guarantees. 

Although these methods are accurate, they are rather cumbersome: they add waiting latency for the client and consume extra bandwidth just to determine the client's identity.
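The cookie-based scheme above could be sketched as follows; the cookie name `svc_class` and the class names are assumptions for illustration, not part of any standard.

```python
from http.cookies import SimpleCookie

# Lower value = served first; mapping is hypothetical.
PRIORITY = {"premium": 0, "paid": 1, "free": 2}

def classify_by_cookie(cookie_header: str) -> int:
    """Return the request priority encoded in the Cookie header."""
    jar = SimpleCookie()
    jar.load(cookie_header)
    svc = jar["svc_class"].value if "svc_class" in jar else "free"
    return PRIORITY.get(svc, PRIORITY["free"])

print(classify_by_cookie("svc_class=premium; session=abc"))  # 0
print(classify_by_cookie("session=abc"))                     # 2
```

The connection-management module would consult such a function once per request and place the request in the corresponding priority queue.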

(2) Classification by request target 

Clients can also be classified by attributes specific to the request's target. The URL request type or the requested file path can distinguish requests, and when several sites share the same Web server node the server can identify the destination IP address. Web QoS control can therefore be implemented in two ways: classification by URL request type or requested file path, and classification by destination IP address or port. This kind of classification can likewise give higher-class users priority service, and it largely removes the bottleneck that the HTTP protocol creates under heavy network traffic [5]. 

Different URL request types or requested file paths indicate requests of different importance; in this case importance is independent of the sender, and classification focuses on the request's action and purpose. By importance, requests generally fall into three classes: mission-critical, delay-sensitive, and best-effort. For example, in an e-commerce application, a user making a purchase should obviously receive higher priority than one who is merely browsing. As for the destination IP address: if several Web sites are hosted on the same network node, the destination address is used to distinguish the importance of requests. 
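The three-class, target-based scheme could be sketched as below; the URL prefixes are illustrative assumptions, and a real deployment would derive its rules from the site's actual layout.

```python
# Importance classes, ordered from most to least urgent.
MISSION_CRITICAL, DELAY_SENSITIVE, BEST_EFFORT = 0, 1, 2

def classify_by_url(path: str) -> int:
    """Map a requested path to one of the three importance classes."""
    if path.startswith("/checkout") or path.startswith("/pay"):
        return MISSION_CRITICAL       # purchase transactions
    if path.startswith("/quote") or path.startswith("/stock"):
        return DELAY_SENSITIVE        # real-time data such as stock quotes
    return BEST_EFFORT                # plain browsing and ordinary downloads

print(classify_by_url("/checkout/cart"))  # 0
print(classify_by_url("/stock/IBM"))      # 1
print(classify_by_url("/index.html"))     # 2
```

Note that nothing here depends on who sent the request, matching the point that target-based importance is independent of the sender.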

(3) Using other network parameters 

A Web server can also integrate into its connection-management module the parameters used to classify packets in transport and routing. For example, in the Internet's differentiated-services architecture, the TOS field of the IP header is often used as the packet's priority marker; the Web server can read the TOS value directly from the IP header and use it as the request's priority [5]. In this way, service for high-class users can likewise be guaranteed. 
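Deriving a priority from the TOS byte could look like the sketch below. The DSCP value occupies the top six bits of the TOS field (the bottom two are ECN); the mapping from DSCP codepoints to queue priorities is an illustrative assumption.

```python
def priority_from_tos(tos: int) -> int:
    """Map the IP header's TOS byte to a request priority (0 = highest)."""
    dscp = tos >> 2                   # drop the 2-bit ECN field
    if dscp == 46:                    # EF (Expedited Forwarding)
        return 0
    if 8 <= dscp <= 38:               # Assured Forwarding / class selectors
        return 1
    return 2                          # default PHB: best-effort

print(priority_from_tos(0xB8))  # EF codepoint (DSCP 46) -> 0
print(priority_from_tos(0x00))  # default -> 2
```

In practice the server would obtain the TOS value from the socket layer (where the operating system exposes it) rather than parse raw IP headers itself.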

2.2.3 Graceful Service Degradation through Content Replicas 

Several copies of the Web content, at different quality levels, are stored on each server. When the server is overloaded, it can selectively deliver content of appropriate quality to each client, degrading service for low-priority clients smoothly and gracefully [6] while ensuring that high-priority clients suffer no degradation. Under overload the server thus adaptively provides continuously degraded content rather than simply rejecting requests, and can therefore offer users better Web QoS. 
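A minimal sketch of this policy, assuming three stored variants per page and illustrative load thresholds: high-priority clients always get the full variant, while others are stepped down as load rises, but no one is refused.

```python
# Variants of decreasing quality stored for each page (assumption).
VARIANTS = ["full", "reduced-images", "text-only"]

def select_variant(load: float, priority: int) -> str:
    """Pick a content variant from server load (0..1) and client priority (0 = highest)."""
    if priority == 0 or load < 0.7:   # high-priority client, or lightly loaded
        return VARIANTS[0]
    if load < 0.9:
        return VARIANTS[1]            # moderate overload: drop heavy images
    return VARIANTS[2]                # severe overload: degrade, never refuse

print(select_variant(0.95, 0))  # full
print(select_variant(0.95, 2))  # text-only
```

The design choice is that degradation replaces admission control: every request is answered, just not always at full quality.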

2.2.4 Other Methods of Guaranteeing Web QoS 

Besides the methods described above, we can also provide proactive Web service QoS, such as caching of service requests and load balancing, whereby the service provider proactively delivers high QoS to service requesters. Caching and load balancing can be performed both at the Web server level and at the Web application-server level. Load balancing prioritizes the various types of traffic and ensures that each request is treated according to the value it represents. 
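Caching and load balancing can be combined in a front end, as in the sketch below; the backend names and the stand-in fetch are hypothetical, with round-robin chosen as the simplest balancing policy.

```python
import itertools

class FrontEnd:
    """Cache responses and spread cache misses across backends round-robin."""

    def __init__(self, backends):
        self.cache = {}
        self.rotation = itertools.cycle(backends)

    def handle(self, url: str) -> str:
        if url in self.cache:             # cache hit: no backend work at all
            return self.cache[url]
        backend = next(self.rotation)     # miss: pick the next backend in turn
        response = f"{backend}:{url}"     # stands in for a real upstream fetch
        self.cache[url] = response
        return response

fe = FrontEnd(["srv1", "srv2"])
print(fe.handle("/a"))   # miss, served by srv1
print(fe.handle("/b"))   # miss, served by srv2
print(fe.handle("/a"))   # hit, served from cache
```

A priority-aware balancer would extend `handle` with the classification functions from section 2.2.2 so that high-value requests are dispatched first.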

3. Conclusion and Outlook 

As Web services spread widely, quality of service will become an important factor in deciding whether a service provider succeeds, since QoS determines a service's availability and usefulness. The methods listed in this paper for eliminating the HTTP bottleneck in Web service performance are well targeted and easy to implement. As Web QoS research develops, QoS control mechanisms based on middleware and Web server clusters will surely give the Web more reliable service guarantees; they await our further study. 
