[The Finagle RPC Framework]

1. Introduction to Finagle

Finagle is a fault-tolerant, protocol-agnostic RPC framework that Twitter built on top of Netty; it powers Twitter's core services.

Twitter's service-oriented architecture grew out of what was once a large Ruby on Rails application. Supporting that shift required a high-performance, fault-tolerant, protocol-agnostic, asynchronous RPC framework. In a service-oriented architecture, a service spends most of its time waiting on responses from upstream services, so an asynchronous library lets a service work on many requests concurrently and exploit the full potential of the hardware. Finagle is built on top of Netty rather than directly on raw NIO because Netty had already solved many of the problems Twitter faced and offers a clean API.
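
As a small illustration of that asynchronous style, here is a sketch using com.twitter.util.Future from Twitter's util library, which Finagle builds on. The upstream call is simulated and all names here are made up for the example:

    import com.twitter.util.{Await, Future, FuturePool}

    object ConcurrentCalls extends App {
      // Stand-in for a call to an upstream service; the pool runs the work
      // off the calling thread and returns immediately with a Future.
      def callUpstream(name: String): Future[String] =
        FuturePool.unboundedPool { Thread.sleep(100); s"$name-response" }

      // Both calls are in flight at the same time; no thread blocks waiting
      // for either of them until we explicitly Await at the very end.
      val both: Future[(String, String)] =
        callUpstream("users").join(callUpstream("timeline"))

      println(Await.result(both))
    }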

Twitter is built on several open-source protocols, including HTTP, Thrift, Memcached, MySQL, and Redis. The network stack therefore has to be flexible enough to speak these protocols and extensible enough to make adding new ones easy. Netty itself is not bound to any particular protocol, and adding support for one is straightforward: create the corresponding event handlers. This extensibility has led to many community-driven protocol implementations, including SPDY, PostgreSQL, WebSockets, IRC, and AWS.

Netty's connection management and protocol independence give Finagle a solid foundation, but Twitter has requirements that Netty does not support natively because they are higher-level concerns. For example, clients need to connect to, and load balance across, a cluster of servers. Every service needs to export metrics (request rates, latencies, and so on), which provide valuable insight for debugging service behavior. In a service-oriented architecture a single request may pass through dozens of services, so debugging performance problems is nearly impossible without a tracing framework. Twitter built Finagle to address these issues. In short, Finagle relies on Netty's I/O multiplexing and provides a transaction-oriented framework on top of Netty's connection-oriented model.

2. How Finagle works

Finagle emphasizes modularity: independent components are composed together, and each component can be swapped out through configuration. For example, all tracers implement the same interface, so a tracer can be built that writes trace data to a local file, keeps it in memory and exposes it through a read endpoint, or sends it out over the network.
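
The pattern can be sketched like this; the trait and class names below are invented for illustration and are not Finagle's actual tracing API (which lives in com.twitter.finagle.tracing):

    import java.nio.file.{Files, Path, StandardOpenOption}
    import scala.collection.mutable.ArrayBuffer

    // One interface, several pluggable implementations -- the component
    // behind it can be swapped by configuration.
    trait TraceSink {
      def record(annotation: String): Unit
    }

    // Keep trace data in memory, e.g. to expose through a read endpoint.
    class InMemorySink extends TraceSink {
      private val buf = ArrayBuffer.empty[String]
      def record(annotation: String): Unit = buf += annotation
      def dump(): Seq[String] = buf.toSeq
    }

    // Append trace data to a local file instead.
    class FileSink(path: Path) extends TraceSink {
      def record(annotation: String): Unit =
        Files.write(path, (annotation + "\n").getBytes("UTF-8"),
          StandardOpenOption.CREATE, StandardOpenOption.APPEND)
    }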

At the bottom of the Finagle stack is the Transport, which represents a stream of objects that can be read from and written to asynchronously. The Transport is implemented as a Netty ChannelHandler and inserted at the end of the ChannelPipeline. When data arrives, Netty reads it off the wire and passes it through the ChannelPipeline, where the codec parses it before it reaches Finagle's Transport. From there, Finagle passes the data up its own stack.
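
A simplified sketch of the shape of that abstraction is below; it is modeled on, but not identical to, com.twitter.finagle.transport.Transport:

    import com.twitter.util.{Future, Time}

    // An asynchronous stream of objects: write enqueues an object to go out
    // on the wire; read resolves when the codec has decoded the next
    // inbound object.
    trait TransportSketch[In, Out] {
      def write(req: In): Future[Unit]
      def read(): Future[Out]
      def close(deadline: Time): Future[Unit]
    }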

For client connections, Finagle maintains a pool of Transports across which it balances load. Depending on the connection-pooling semantics configured, Finagle either requests a new connection from Netty or reuses an idle one. When a new connection is requested, a Netty ChannelPipeline is created from the client's codec, and additional ChannelHandlers are added to it for stats, logging, and SSL. If all connections are busy, requests are queued according to the configured queueing policy.
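
At the API level, a configured client looks roughly like the sketch below. The host names and pool size are placeholders, and withSessionPool is assumed to be available as the stack-client pooling knob in a recent Finagle release:

    import com.twitter.finagle.{Http, Service}
    import com.twitter.finagle.http.{Method, Request, Response}
    import com.twitter.util.{Await, Future}

    object ClientSketch extends App {
      // A comma-separated destination resolves to a cluster of hosts that
      // the client load balances over; the addresses are placeholders.
      val client: Service[Request, Response] =
        Http.client
          .withSessionPool.maxSize(10)   // cap on pooled connections per host
          .newService("host1:8080,host2:8080", "example-client")

      val rsp: Future[Response] = client(Request(Method.Get, "/"))
      println(Await.result(rsp).status)
    }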

On the server side, Netty manages the codec, statistics, timeouts, and logging through a provided ChannelPipelineFactory. In the server's ChannelPipeline, the last ChannelHandler is the Finagle bridge. The bridge watches for new incoming connections and creates a new Transport for each one. The Transport wraps the new channel before it is handed to the server implementation; messages read from the ChannelPipeline are then delivered to the server instance.
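
For completeness, here is a minimal server sketch showing the application-level Service that ultimately receives those decoded requests; the port and echo behavior are arbitrary choices for the example:

    import com.twitter.finagle.{Http, Service}
    import com.twitter.finagle.http.{Request, Response, Status}
    import com.twitter.util.{Await, Future}

    object ServerSketch extends App {
      // The Service is the application-level handler that the Finagle
      // bridge ultimately dispatches decoded requests to.
      val service = new Service[Request, Response] {
        def apply(req: Request): Future[Response] = {
          val rsp = Response(req.version, Status.Ok)
          rsp.contentString = "echo: " + req.uri
          Future.value(rsp)
        }
      }

      val server = Http.serve(":8080", service)  // port chosen arbitrarily
      Await.ready(server)
    }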

3. Bridging Netty and Finagle

The Finagle client uses a ChannelConnector to bridge Finagle and Netty. A ChannelConnector is a function that takes a SocketAddress and returns a Future[Transport]. When a new connection is needed, Finagle uses the ChannelConnector to ask Netty for a Channel and uses that Channel to create the Transport. The connection is established asynchronously: if it succeeds, the Future is satisfied with the newly established Transport; if it cannot be established, the Future fails. The Finagle client then dispatches requests over this Transport.
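
That shape can be written down as a type. The alias below is illustrative, not Finagle's internal class:

    import java.net.SocketAddress
    import com.twitter.finagle.transport.Transport
    import com.twitter.util.Future

    // Given an address, connect asynchronously and yield a Transport once
    // the underlying Netty channel is established; a connection failure
    // surfaces as a failed Future.
    object ConnectorShape {
      type ChannelConnector[In, Out] = SocketAddress => Future[Transport[In, Out]]
    }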

The Finagle server binds to an interface and port through a Listener. When a new connection is established, the Listener creates a Transport and passes it to a supplied function. The Transport is then handed to a Dispatcher, which dispatches requests from the Transport to the Service according to the configured policy.
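
The following sketch shows illustrative shapes for these two pieces; the signatures are simplified and do not match Finagle's internals exactly:

    import java.net.SocketAddress
    import com.twitter.finagle.Service
    import com.twitter.finagle.transport.Transport
    import com.twitter.util.{Closable, Future}

    // A Listener binds an address and hands each accepted connection's
    // Transport to a callback; a dispatcher pulls requests off the
    // Transport and feeds them to the Service, writing back each reply.
    trait ListenerSketch[In, Out] {
      def listen(addr: SocketAddress)(onConnect: Transport[In, Out] => Unit): Closable
    }

    class SerialDispatcherSketch[Req, Rep](
        trans: Transport[Rep, Req],   // server side: writes replies, reads requests
        service: Service[Req, Rep]) {

      // Read one request, let the Service produce a reply, write it back,
      // then loop for the next request on this connection.
      def loop(): Future[Unit] =
        trans.read()
          .flatMap(service)
          .flatMap(rep => trans.write(rep))
          .flatMap(_ => loop())
    }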

4. Management of request failures

In a scalable architecture, failures are common: hardware faults, network congestion, and loss of connectivity can all cause problems. A library that offers high throughput and low latency is of little use if it cannot handle failures. To get better failure management, Finagle trades away some throughput and latency.

Finagle can load balance over a cluster of hosts. The client locally keeps track of every host it knows about by counting the outstanding requests it has sent to each one. This lets Finagle send each new request to the least loaded host, which also tends to yield the lowest latency.
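
As a toy illustration of that "fewest outstanding requests" idea (HostNode and pick are invented names for this sketch; Finagle's real balancers are more sophisticated):

    import java.util.concurrent.atomic.AtomicInteger

    final class HostNode(val address: String) {
      val outstanding = new AtomicInteger(0)  // sent but not yet answered
    }

    object LeastLoaded {
      def pick(hosts: Seq[HostNode]): HostNode =
        hosts.minBy(_.outstanding.get)

      // Callers increment before dispatching a request and decrement when
      // the response (or failure) arrives, so the counter reflects
      // in-flight load on each host.
    }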

When a request fails, Finagle closes the connection to the failing host and removes the host from the load balancer. In the background, Finagle keeps trying to reconnect; once it succeeds, the host is added back to the load balancer.
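
A toy sketch of that remove-and-reprobe behavior is below; the class and method names are invented, and this is not how Finagle implements it internally:

    import java.util.concurrent.{Executors, TimeUnit}

    // A failed host is taken out of rotation, and a background probe keeps
    // trying it until it answers again, at which point it rejoins the pool.
    final class ToyHostSet(initial: Set[String], canConnect: String => Boolean) {
      @volatile private var live = initial
      private val scheduler = Executors.newSingleThreadScheduledExecutor()

      def hosts: Set[String] = live

      def markDead(host: String): Unit = {
        live -= host   // stop routing requests to this host
        probe(host)    // but keep trying to get it back
      }

      // Re-check the host after a delay; once reachable, add it back.
      private def probe(host: String): Unit = {
        scheduler.schedule(new Runnable {
          def run(): Unit =
            if (canConnect(host)) live += host
            else probe(host)
        }, 1, TimeUnit.SECONDS)
        ()
      }
    }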

Client Features

Connection Pooling

Load Balancing

Failure Detection

Failover/Retry

Distributed Tracing (à la Dapper)

Service Discovery (e.g., via Zookeeper)

Rich Statistics

Native OpenSSL Bindings

 

Server Features

Backpressure (to defend against abusive clients)

Service Registration (e.g., via Zookeeper)

Distributed Tracing

Native OpenSSL bindings

 

Supported Protocols

HTTP

HTTP streaming (Comet)

Thrift

Memcached/Kestrel

More to come!