Caton Media Xstream: Redefining Live Content Delivery Services


Editor's note: As the public Internet grows more complex, the best-effort model can no longer satisfy the growing number of real-time content delivery services that require QoS guarantees. Traditional alternatives such as dedicated lines and satellites suffer from high deployment costs and long deployment cycles, and cannot respond quickly to changing needs. LiveVideoStackCon invited Wei Ling from Caton Technology to introduce the Caton Media Xstream platform solution.

Text/Wei Ling

Editor/LiveVideoStack

Hello everyone, I am Wei Ling from Caton Technology. Today I am going to share Caton Media Xstream, a new real-time content delivery service platform launched by Caton Technology.


Caton Technology is a high-tech company with technology as its core driving force. It has long been committed to providing customers with complete solutions for high-quality transmission over next-generation IP networks, including codecs, large file transfer, IP distribution, and many other services. We believe that through the power of IP networks we can realize all-round, high-quality real-time interactions between people, between people and things, and between things, connecting the world together. Today I will introduce the features of the Caton Media Xstream platform and the related technologies behind it from multiple dimensions.

-01-

Background introduction

The first part is a background introduction, covering the current state of the Internet, its challenges, and the characteristics of Caton Media Xstream.


First, let me explain why we built this platform and what problems it is meant to solve; that is the reason for its existence.

Since its inception, the public Internet has become more and more complex. This complexity is manifested in several respects:

1. Today's public Internet is composed of a large number of independently operated ISP networks connected to each other through peering agreements. As of March 2021, there were more than 99,868 ISPs in the world, and the number is still growing. ISPs can be roughly divided into three tiers: Tier 1 ISPs mainly provide long-distance cross-region connectivity, such as GTT in the United States; Tier 2 ISPs provide intra-region coverage, such as Cogent; Tier 3 ISPs provide last-mile coverage, such as FET in Taiwan.

2. Traffic between these many ISPs is routed with BGP, and the peering agreements between different ISPs are opaque. A few principles are well understood: first, an operator will prioritize paid connections; second, with hot-potato routing, providers always want to hand data packets off to other ISP networks as quickly as possible; third, detours are sometimes chosen for greater economic benefit. These issues mean that the basic protocols and facilities of the public Internet cannot be managed globally to guarantee high-quality data transmission.

Let’s look at two typical specific problems that exist in Internet transmission.

One is frequent network failures. Reports show that by the end of 2022 there were on average about 300 network failures worldwide every week, with the United States alone accounting for about 90. These failures may occur at any time along the path a business flow is traversing.

Second, the so-called best path between ISPs is often unreliable and uncontrollable. For example, in a real deployment we found that the transmission delay between two ISP networks in Tokyo, Japan reached nearly 200 ms. Tracing the path with our tools, we found that the peering point of the two ISPs was in the United States, so the traffic made a round trip through the United States, inflating the delay by a factor of nearly 100.


While the problems facing the Internet keep growing, so does our reliance on it. Especially in broadcast-grade audio and video transmission, the requirements on network transmission quality keep rising, which is currently the biggest challenge facing the public Internet.

First, from a business perspective, broadcast-grade high-bitrate streaming scenarios are growing explosively: 24/7 professional media channels, live broadcasts of professional sports events, remote concerts and theaters, telemedicine, and so on.

These services place extremely high demands on network transmission quality: high bandwidth, high reliability, low latency, and jitter resistance on the network side, together with long-distance, high-definition requirements on the business side. For broadcast-grade video streams, what must be guaranteed is not merely that users perceive no problems subjectively, but that the actual packet statistics show no CC errors (the continuity counter error is an analysis metric for TS streams in the broadcast field; in effect it requires zero frame loss). PCR (Program Clock Reference) accuracy requirements are also extremely strict, down to the nanosecond level. PCR is the base clock of DVB streams in broadcasting: one function is to synchronize the 27 MHz clocks of the headend and the terminal, which affects chroma balance and frame rate; another important function is to keep video and audio in sync. Our platform can restore PCR accuracy to within 500 ns.
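To make the CC check concrete, here is a minimal sketch of a continuity-counter scan over 188-byte TS packets per ISO/IEC 13818-1; the helper and the simplifications (no duplicate-packet tolerance) are illustrative, not Caton's implementation:

```python
def make_ts_packet(pid, cc):
    """Build a minimal 188-byte TS packet for illustration:
    sync byte 0x47, given PID, adaptation_field_control=1 (payload only)."""
    pkt = bytearray(188)
    pkt[0] = 0x47
    pkt[1] = (pid >> 8) & 0x1F
    pkt[2] = pid & 0xFF
    pkt[3] = 0x10 | (cc & 0x0F)
    return bytes(pkt)

def check_cc_errors(packets):
    """Count continuity_counter discontinuities per PID across a run of
    188-byte TS packets. Zero frame loss implies zero CC errors; the
    spec's duplicate-packet allowance is omitted in this sketch."""
    last_cc = {}
    errors = 0
    for pkt in packets:
        if len(pkt) != 188 or pkt[0] != 0x47:   # malformed packet
            errors += 1
            continue
        pid = ((pkt[1] & 0x1F) << 8) | pkt[2]
        afc = (pkt[3] >> 4) & 0x3               # adaptation_field_control
        cc = pkt[3] & 0x0F
        if pid == 0x1FFF or afc in (0, 2):      # null packet / no payload: CC frozen
            continue
        if pid in last_cc and cc != (last_cc[pid] + 1) % 16:
            errors += 1
        last_cc[pid] = cc
    return errors
```

A dropped packet shows up as a gap in the 4-bit counter for its PID, which is why CC statistics detect losses that a viewer might not notice subjectively.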

For the above scenarios and requirements, traditional solutions in the broadcast field often rely on circuit dedicated lines, MPLS dedicated lines, and satellite transmission.

Although these solutions can guarantee three-nines to five-nines reliability, their shortcomings are also obvious: the cost is extremely high, more than 100 times that of the public Internet; the deployment cycle is long, sometimes taking months or more, so they cannot respond flexibly to demand; and finally, even dedicated lines and satellites cannot avoid single points of failure.

These are real problems reported by Caton Technology's customers. The broadcast industry is also pushing to carry professional streaming media over IP networks, reducing cost while still achieving dedicated-line transmission quality. Having worked in broadcast-grade professional streaming transmission for decades, Caton Technology is keenly aware that this challenge is the biggest pain point in long-distance, broadcast-grade streaming over IP networks.


In response to these contradictions and pain points, many popular IP solutions try to optimize at the protocol layer to ensure highly reliable transmission over IP networks, such as SRT and the ZIXI protocol. Caton starts from another angle and proposes a broadcast-grade streaming transmission service platform based on the new C3 architecture. It is not limited to transport-protocol optimization; it provides users with a complete low-cost, highly reliable, long-distance transmission solution at the whole-system level.

Let’s briefly list some basic features of the platform, including:

(1) Intelligence: intelligent routing strategies and traffic engineering, plus high throughput, so the platform supports ultra-high-bitrate video streaming;

(2) High reliability: end-to-end transmission reliability of six nines;

(3) Low cost: an overlay core network deployed in hybrid fashion on top of global network infrastructure keeps costs extremely low;

(4) Low latency: support for streaming with ultra-low-latency QoS requirements;

(5) Finally, flexible deployment: the platform actively embraces cloud native, with extremely high scalability and elasticity, and can flexibly meet customers' transmission needs. I will elaborate on the first three characteristics, those related to transmission.

-02-

Technology Sharing

The second part is technology sharing. I will introduce three features of our platform related to transmission quality and reliability: intelligent routing strategies, a self-developed transport protocol, and end-to-end reliable transmission. The intelligent routing strategy is the basic guarantee of highly reliable transmission QoS; the self-developed protocol ensures point-to-point transmission quality; and the last point is the end-to-end reliable transmission mechanism. The three complement each other and jointly achieve the platform's six-nines transmission reliability.


The first is the intelligent routing strategy. The Caton Media Xstream platform adopts an intelligent routing strategy based on two-layer decision-making.

The first layer is centralized path-map planning. If a business data packet is compared to a package that must be delivered to a destination, a car is the carrier that carries the package. This step helps the car plan all the road topologies that could deliver the package on time, setting a weight and priority for each possible path. It contains three parts: a real-time network panorama map, the MoDAG algorithm, and a flow-based QoS classification strategy.

The second layer is distributed routing decision-making. Each node on the path map maintains, in real time, the network quality to its neighboring nodes. When the car arrives at a node, the node selects the best next-hop path based on the currently collected quality of all sub-links and the predicted trend of each link. If there is temporary road construction or a road closure, analogous to a network failure, the car can automatically change routes and take another path to the destination. The goal is for all packages to reach their destinations safely and with the highest efficiency while meeting their respective business needs.

Centralized first, distributed second: the two work together to achieve an efficient and reliable routing strategy.

Below I will expand on the details of these two levels of strategy.


The first is the centralized intelligent routing strategy. To realize globally optimal path planning for traffic, we must first grasp the network status of the entire link, that is, a real-time network panorama map. As shown in the figure, we have deployed multiple servers around the world as PoP nodes of the C3 core network; they probe each other's network quality in real time and report it to the C3 control plane. Many companies can do efficient monitoring and alerting on their own self-built core networks, but we go one step further: we not only collect network data between our self-built core nodes, but also probe and collect data from other detectable ASN networks in each region, building a real-time network panorama map that truly aggregates quality data from many ASNs. This big data also provides important support for the continuous optimization and iterative upgrade of the C3 architecture.


The next step is to input the obtained real-time network panoramic map into the computing algorithm MoDAG (Multi-Source Directed Acyclic Graph Algorithm).

The first step is preprocessing. First, faulty links and faulty nodes are pruned from the original full-mesh map. Defining a faulty node or link is not simply a matter of checking whether it is currently available; the stability of the link or node over a period of time is evaluated, and if it cannot meet the QoS requirements of the flow, it is also excluded.

The second step is to normalize the various probed indicators, such as packet loss rate, RTT, and jitter, and quantify them according to a mathematical model into the weight and cost of each node and link. The QoS classification strategy also needs to be considered here.

Finally, node computing resources and bandwidth are taken into account, the core network is planned as a whole, load balancing is applied, and bandwidth and resource utilization are improved.
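The normalization in the second step might be sketched roughly as follows; the weights and saturation caps here are hypothetical placeholders for illustration, not the platform's actual model:

```python
def link_cost(loss, rtt_ms, jitter_ms, weights=(0.5, 0.3, 0.2),
              caps=(0.05, 400.0, 50.0)):
    """Quantify per-link metrics into a single cost in [0, 1].

    loss: packet loss rate in [0, 1]; rtt_ms / jitter_ms in milliseconds.
    caps are saturation points for normalization; weights would reflect
    the QoS class of the flow (here: a loss-sensitive broadcast flow)."""
    w_loss, w_rtt, w_jit = weights
    c_loss, c_rtt, c_jit = caps
    norm = lambda v, cap: min(v / cap, 1.0)   # clamp each metric to [0, 1]
    return (w_loss * norm(loss, c_loss)
            + w_rtt * norm(rtt_ms, c_rtt)
            + w_jit * norm(jitter_ms, c_jit))
```

A cost like this can then serve directly as the edge weight in the subsequent graph planning.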


After preprocessing, the real planning algorithm begins. It is not convenient to disclose the details of the algorithm, but I can share some of the requirements it must satisfy.

The first is the principle of disjoint paths: there are at least two paths from source to destination that do not pass through the same network nodes. This actually includes explicitly disjoint paths and absolutely disjoint paths. Explicitly disjoint means the nodes visible in the overlay network are disjoint; absolutely disjoint means the paths are disjoint down to the underlying network nodes, which requires a large amount of data on actual network transmission paths. For example, two PoPs in China accessed from abroad may appear disjoint, yet their underlying networks may both enter the country through the same gateway.

The second point involves the concept of the minimum cut from graph theory. The minimum cut is the smallest number of links in a directed graph whose removal disconnects the source from the destination. Path planning must satisfy a minimum cut greater than or equal to 2, which avoids interruption from any single link failure.

The third point is to optimize the classic shortest-path algorithm when all conditions are met, implementing a multi-source, multi-target directed-acyclic-graph shortest-path algorithm and assigning a priority to each sub-path.

Finally, the resulting MoDAG path map is delivered to the edge nodes of the core network to guide the actual transmission of flows.
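The disjoint-path and minimum-cut requirements can be verified with a standard max-flow argument: with unit capacities, the maximum number of edge-disjoint paths equals the minimum cut. Here is a minimal BFS augmenting-path sketch (node-disjointness would follow from the usual node-splitting transform); it is an illustration of the check, not the MoDAG algorithm itself:

```python
from collections import defaultdict, deque

def max_disjoint_paths(edges, s, t):
    """Count edge-disjoint s->t paths (= min cut, by max-flow/min-cut)
    via BFS augmenting paths with unit capacities (Edmonds-Karp)."""
    cap = defaultdict(int)
    adj = defaultdict(set)
    for u, v in edges:
        cap[(u, v)] += 1
        adj[u].add(v)
        adj[v].add(u)                      # residual arcs may run backwards
    flow = 0
    while True:
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:       # BFS for an augmenting path
            u = q.popleft()
            for v in adj[u]:
                if v not in parent and cap[(u, v)] > 0:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        v = t
        while parent[v] is not None:       # push one unit along the path
            u = parent[v]
            cap[(u, v)] -= 1
            cap[(v, u)] += 1
            v = u
        flow += 1
```

A planner enforcing the rule above would simply require `max_disjoint_paths(...) >= 2` before accepting a path map.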


Next, let's introduce the QoS classification strategy mentioned earlier. This policy is based on the QoS defined for each stream, roughly divided as follows: broadcast-grade audio/video streams have extremely high reliability requirements; RTC audio/video streams, especially interactive ones, have high latency requirements; text data streams require completely reliable transmission; and remotely produced audio/video streams have high requirements on both latency and jitter.

According to these types, we abstract each stream into quantified values of loss, delay, jitter, and other indicators. These values guide a series of platform operations, including the selection of edge coverage nodes, calculations in the MoDAG algorithm, load balancing, and cost optimization, all weighed together to achieve the best balance.
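As an illustration of how per-class quantification might guide link scoring (the class names and weight values below are hypothetical, not the platform's actual table):

```python
# Hypothetical QoS classes mapped to metric weights (loss, delay, jitter)
# used when scoring links for a flow of that class.
QOS_CLASSES = {
    "broadcast":   {"loss": 0.7, "delay": 0.1, "jitter": 0.2},  # zero frame loss first
    "rtc":         {"loss": 0.3, "delay": 0.5, "jitter": 0.2},  # interaction latency first
    "data":        {"loss": 1.0, "delay": 0.0, "jitter": 0.0},  # fully reliable transfer
    "remote_prod": {"loss": 0.2, "delay": 0.4, "jitter": 0.4},  # delay and jitter both matter
}

def score_link(qos, loss, delay, jitter):
    """Weighted score of normalized metrics (each already in [0, 1]);
    lower is better for the given QoS class."""
    w = QOS_CLASSES[qos]
    return w["loss"] * loss + w["delay"] * delay + w["jitter"] * jitter
```

The same probed link thus ranks differently for a broadcast flow than for an interactive RTC flow.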

In short, the centralized intelligent routing strategy outputs an optimal directed acyclic graph of paths and sends the result to the edge node at the source of the flow to guide its transmission.


So how does each node use this path map to forward data packets? That is the second layer of our routing: the distributed intelligent routing strategy.

Each node probes the network quality of all candidate sub-paths given in the path map and selects the current best sub-path to forward traffic.

The path map is computed from historical network quality data, but network quality changes constantly. The probing here therefore makes decisions based on the latest network status, while the range of options never escapes the path map planned by the platform. This is the core of the dual-layer intelligent routing strategy.

The node's quality probing yields both basic and higher-order indicators; the node normalizes and quantifies them according to QoS and adaptively adjusts a series of sending parameters, such as ARQ, FEC, and the routing strategy.
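A toy version of this per-node next-hop decision might look like the following; the field names, the dead-link filter, and the priority tie-break are assumptions for illustration:

```python
def pick_next_hop(candidates, qos_weights):
    """Pick the best next hop among the sub-links allowed by the path map.

    candidates: list of dicts with 'node', 'priority' (from the
    centralized plan) and freshly probed normalized metrics in [0, 1]."""
    def score(c):
        s = sum(qos_weights[k] * c[k] for k in ("loss", "delay", "jitter"))
        return (s, c["priority"])          # metrics first, plan priority as tie-break
    usable = [c for c in candidates if c["loss"] < 1.0]   # drop fully dead links
    return min(usable, key=score)["node"] if usable else None
```

The centralized plan bounds the candidate set; the freshest probes decide among them, which mirrors the two-layer split described above.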


In each node, in addition to probing link quality, we added a link prediction model. We use filters to predict the trend of a single link. At the same time, when a link fails, we try to locate the likely cause: a problem with the source's egress network, the intermediate link, or the destination's ingress network. Based on the prediction, the node chooses whether to switch networks, switch the next-hop destination, or switch to another PoP in the same data center.
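As one example of the kind of filter such a prediction model might use, here is a minimal EWMA trend estimator for a single link metric such as RTT; Caton's actual filter is not disclosed, and a Kalman filter could equally fill this role:

```python
class LinkTrend:
    """EWMA-based level-and-slope estimate for one link metric.
    A minimal sketch of trend prediction, not a production filter."""
    def __init__(self, alpha=0.2):
        self.alpha = alpha
        self.level = None      # smoothed metric value
        self.slope = 0.0       # smoothed per-sample change

    def update(self, sample):
        if self.level is None:
            self.level = sample
        else:
            prev = self.level
            self.level = self.alpha * sample + (1 - self.alpha) * self.level
            self.slope = self.alpha * (self.level - prev) + (1 - self.alpha) * self.slope
        return self.level

    def predict(self, steps=1):
        """Extrapolated value `steps` probes ahead; None before any sample."""
        return None if self.level is None else self.level + steps * self.slope
```

A sustained positive slope on RTT, for instance, can flag a degrading link before it actually fails.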

As the figure shows, each self-built PoP of our core network is actually a multi-ISP, multi-server structure, which greatly reduces the probability of a single point of failure. When forwarding to different destinations, it can also choose the best ISP route to achieve the best forwarding effect.


For a node, both link probing and link prediction ultimately guide the forwarding of traffic, that is, path selection. When network fluctuations or failures occur, timely path switching without any frame loss is our ultimate goal.

In short, the distributed intelligent routing strategy delegates decision-making to each node, which independently selects the best next-hop sub-link based on the link probing and prediction described above. The definition of "best" depends on the QoS level of the current business flow; for example, a flow that is sensitive to packet loss will weight the loss rate heavily in the decision. The nodes also work collaboratively: any node can serve as a relay server selected by other nodes, jointly ensuring reliable end-to-end transmission of the whole stream. Finally, this path switching operates at the packet level; each switch takes effect with the very next packet, and the receiving end loses no data frames.


To summarize, our routing strategy rests on two layers of decision-making working together. The centralized strategy plans an optimal path map for the flow based on the panoramic network topology and historical network quality information.

Each node in this path map selects routes based on the latest network status and holds the final forwarding decision. Only in this way can we respond to network emergencies promptly while meeting QoS, achieving adaptive fault avoidance with zero frame loss.


So, given an intelligent routing strategy, how do we ensure reliable packet transmission between two PoPs? This is the second part of my talk: our self-developed transport protocol, CTP. Like many emerging multimedia protocols, CTP is based on UDP with optimized reliability mechanisms. Almost every technology company develops its own protocol and adds the features best suited to its own business, and we are no exception. So I will simply discuss a few of the features optimized for our system; I hope they offer some inspiration, and you are welcome to discuss this area of transport-protocol optimization with us.

Next, I will introduce the main features of CTP from four aspects.


The first is the HARQ algorithm. HARQ (Hybrid Automatic Repeat reQuest) combines FEC with ARQ. Based on collected network big data, we have designed more than 40 HARQ modes and algorithms. The retransmission mechanism uses NACKs, and the FEC combines linear block codes with fountain codes. Different parameters and mechanisms are selected in different scenarios to suit different network states and link types.
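As a toy illustration of the FEC half of such a HARQ scheme, here is the simplest linear block code, a single XOR parity packet per block; losses the parity cannot repair would fall back to NACK-driven retransmission. This is not CTP's actual code:

```python
def xor_parity(block):
    """XOR parity packet over a block of equal-length payloads:
    any single lost packet in the block can be rebuilt from the rest."""
    out = bytearray(len(block[0]))
    for pkt in block:
        for i, b in enumerate(pkt):
            out[i] ^= b
    return bytes(out)

def recover(received, parity):
    """Rebuild the one missing packet (a None entry) from the others."""
    missing = [i for i, p in enumerate(received) if p is None]
    if len(missing) != 1:
        return received                      # intact, or too many losses: use ARQ
    rebuilt = bytearray(parity)
    for p in received:
        if p is not None:
            for i, b in enumerate(p):
                rebuilt[i] ^= b
    received[missing[0]] = bytes(rebuilt)
    return received
```

The hybrid idea is that FEC absorbs sparse losses with zero extra round trips, while NACK retransmission handles whatever the code cannot reconstruct.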

The figure above shows an early performance test by our colleagues, comparing throughput against standard TCP: with 200 ms of network delay, throughput improved 62-fold. Of course, many current UDP-based protocols can achieve very high throughput, but CTP has been applied to streaming transmission for a long time, and we continue to optimize and iterate the protocol based on years of accumulated network experience.


Our first-generation CTP was actually launched earlier than the currently popular SRT protocol, and its multipath function was implemented before SRT's. Our colleagues have shared a performance test of CTP on Bilibili, including a comparison with the SRT/RIST protocols; if you are interested, you can watch it yourself.


The second is prediction of the network packet loss model. Packet loss can be simply divided into three modes: congestion loss, where losses are relatively concentrated; random loss, where the loss distribution is relatively random; and a mixed mode. We use an improved Spike algorithm to identify the loss pattern, apply the result to the HARQ and congestion control algorithms on different physical links, and adaptively correct the algorithms' parameters.

The core idea is that when loss is caused by congestion, increasing the bitrate must be avoided, as it only aggravates the congestion; when the link causes random loss, retransmission and FEC should be accelerated to improve the receiver's reception success rate. For the sender these are two completely different sending strategies, and improving prediction accuracy is a direction we keep optimizing.
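A crude stand-in for this kind of loss-pattern classifier (the improved Spike algorithm itself is not public, so the burst-length heuristic and threshold here are assumptions) could look like:

```python
def classify_loss(loss_trace, burst_threshold=2.0):
    """Label a 0/1 loss trace (1 = packet lost) as 'congestion' when
    losses cluster into bursts, 'random' when they are isolated,
    'clean' when there are none. A heuristic sketch only."""
    bursts, run = [], 0
    for lost in loss_trace:
        if lost:
            run += 1
        elif run:
            bursts.append(run)
            run = 0
    if run:
        bursts.append(run)
    if not bursts:
        return "clean"
    mean_burst = sum(bursts) / len(bursts)
    return "congestion" if mean_burst >= burst_threshold else "random"
```

The sender would then branch: back off the bitrate on "congestion", but lean harder on FEC/retransmission on "random".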


The third point is the classic congestion control algorithm. Referring to the GCC algorithm in WebRTC, we designed an optimized multi-source congestion control algorithm; the general pipeline is similar: probing, analysis, results, and application.

In our self-developed protocol CTP, the main features are, on the one hand, that the result of the aforementioned loss-pattern prediction feeds into the congestion-control analysis module; on the other hand, based on the multi-source, multi-target architecture of the C3 system, some targeted strategy optimizations were made in the application module. I will describe the multi-source, multi-target architecture in the third part. We are also continuing research on congestion control.
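To make the delay-gradient idea behind GCC-style analysis concrete, here is a heavily simplified detector; the threshold is an arbitrary illustrative value, and real GCC uses an adaptive trendline filter rather than a plain average:

```python
def congestion_signal(delays, threshold_ms=2.0):
    """Simplified delay-gradient detector in the spirit of GCC's
    arrival-time analysis: compare the average change in one-way delay
    against a threshold and emit 'overuse' / 'underuse' / 'normal'."""
    if len(delays) < 2:
        return "normal"
    grads = [b - a for a, b in zip(delays, delays[1:])]
    avg = sum(grads) / len(grads)
    if avg > threshold_ms:
        return "overuse"       # queues building up: decrease the rate
    if avg < -threshold_ms:
        return "underuse"      # queues draining: room to probe upward
    return "normal"
```

In a multi-source setting, one such signal per sub-link would then be combined into the aggregate signal described later.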


The last point is network coding. The concept of network coding was proposed as early as 2000: introducing coding functions into traditional routers to improve information transmission efficiency. The figure shows the classic structure used to explain the principle, the butterfly network, which assumes each link can carry only one data packet at a time. With traditional forwarding, because of the bottleneck link in the middle, the two receivers cannot both receive two data packets simultaneously. But if each node performs network coding, the recoding function allows simultaneous reception and improves transmission efficiency. The PoPs of our C3 overlay core network naturally support coding, which is the basis for introducing this technology. We first implemented network coding in business scenarios requiring ultra-low latency and jitter, achieving excellent transmission results, and we are actively exploring its potential to optimize transmission quality in other scenarios.
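The butterfly-network intuition can be shown in a few lines: the bottleneck relay forwards the XOR of the two packets, and each sink combines it with the packet it heard directly to recover both in one time slot:

```python
def butterfly_demo(a, b):
    """Butterfly-network sketch: the bottleneck relay sends a XOR b;
    sink 1 hears (a, a^b) and sink 2 hears (b, a^b), and each
    recovers both original packets."""
    coded = bytes(x ^ y for x, y in zip(a, b))           # relay's single packet
    sink1 = (a, bytes(x ^ y for x, y in zip(a, coded)))  # a, plus recovered b
    sink2 = (bytes(x ^ y for x, y in zip(b, coded)), b)  # recovered a, plus b
    return sink1, sink2
```

Without coding, the bottleneck would have to send a and b in two slots, so one sink always waits; the XOR removes that serialization.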


To briefly summarize this second part on the self-developed protocol: protocols are only a small part of transmission. There are many similar protocols for various scenarios, such as QUIC, SRT, ZIXI and other multimedia transport protocols. Our self-developed protocol applies what we consider the most appropriate optimization strategies for the C3 architecture, and of course we will keep optimizing it.


The last part introduces the mechanisms by which the C3 architecture ensures highly reliable end-to-end transmission. I will describe first/last-mile data access and the middle-mile core-network transmission mechanism.

The first/last mile mainly solves how to connect user-side data to the C3 core network with high reliability. Our strategy rests on three mechanisms: multi-edge-node coverage, dual multipath transmission, and packet-level adaptive path switching.

The middle mile mainly handles network fluctuations and sudden failures during long-distance transmission: how to efficiently route around problematic links with zero frame loss at the receiver and ensure data is delivered reliably.

I will introduce its three main mechanisms: edge aggregation, reflux obstacle avoidance, and lossless online upgrades.


Let's look at the first point of the first/last mile: multi-edge-node coverage. For each ISP access of each client, whether sender or receiver, we select up to three nodes for coverage. The matching principle tries to achieve hybrid PoP coverage, that is, simultaneous coverage by cloud-host PoPs and self-built data-center PoPs: cloud hosts are chosen mainly for stability, and self-built data centers mainly for cost and bandwidth.

The second is the intelligent allocation strategy. It mainly includes the server redundancy principle, i.e., the "two sites, three centers" principle: the three edge nodes are located in at least two sites across three different data centers. It also includes the ISP redundancy principle: on the premise of matching the user-side ISP, multiple ISPs are selected for coverage, taking into account the coverage characteristics of ISPs at different tiers and automatically choosing the best ISP matches based on historical big data. Of course, load balancing must also be considered, balancing efficiency against cost. The last point is support for whitelists and blacklists, mainly to satisfy customized edge coverage when some users bring their own PoPs.


When the user side uses dual network cards for transmission, each card is allocated three edge nodes for coverage, so up to six sub-links are available. When bandwidth is insufficient, the bandwidth aggregation function is enabled automatically to ensure, to the greatest extent, that all user traffic reaches the C3 core network; when network fluctuations or failures occur, the six sub-paths allow adaptive switching with zero frame loss. Finally, the multi-source congestion control mentioned earlier is also optimized for this model: congestion control in CTP no longer reports the signal of a single link, but combines the conditions of all sub-links into a comprehensive congestion signal for the user layer, which guides both the sending strategy of each sub-link and the user-layer strategy, such as bitrate adjustment.
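A toy sketch of bandwidth aggregation across sub-links, assigning packets in proportion to per-link capacity; this credit-based weighted scheduler is a hypothetical illustration, not the platform's scheduler:

```python
def aggregate_schedule(packet_ids, links):
    """Spread packets over sub-links in proportion to available
    bandwidth. links: name -> capacity units (relative weights)."""
    total = sum(links.values())
    plan = {name: [] for name in links}
    credit = {name: 0.0 for name in links}
    for pid in packet_ids:
        for name, capacity in links.items():   # accrue proportional credit
            credit[name] += capacity / total
        best = max(credit, key=credit.get)     # send on the most-credited link
        credit[best] -= 1.0
        plan[best].append(pid)
    return plan
```

With capacities 3:1, three quarters of the packets land on the faster sub-link while both stay busy, which is the essence of aggregating six sub-links into one logical pipe.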


Finally, and most importantly, packet-level adaptive path switching. The sender SDK continuously probes link quality using the original data packets, or dedicated probe packets during idle periods. The first mechanism is adaptive disaster recovery and avoidance: when a sudden failure is detected on a sub-link, transmission quickly switches to another sub-link; combined with loss retransmission, this ensures that as long as one sub-link remains available, all data packets reach the core network without exception. We also implemented QoS-tiered strategies, so different business flows can have different switching strategies. The third is optimization for different physical links: users often have multiple link types, including cable, 5G, and Wi-Fi, and different links get different sending-policy adjustments. Finally, there is connection migration, which is guaranteed by the protocol itself.
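A minimal sketch of the failover part of packet-level switching, with a hypothetical consecutive-failure threshold (real policies would differ per QoS tier and link type):

```python
class PathSwitcher:
    """Failover sketch: keep sending on the active sub-link, and rotate
    to the next candidate after `max_fails` consecutive probe failures."""
    def __init__(self, links, max_fails=3):
        self.links = links          # ordered candidate sub-links
        self.active = 0
        self.fails = 0
        self.max_fails = max_fails

    def on_probe(self, ok):
        """Record one probe result; return the sub-link to use next."""
        if ok:
            self.fails = 0
        else:
            self.fails += 1
            if self.fails >= self.max_fails:
                self.active = (self.active + 1) % len(self.links)
                self.fails = 0
        return self.links[self.active]
```

Because the decision is re-evaluated per packet, a switch takes effect with the very next packet, matching the zero-frame-loss switching goal stated earlier.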


After talking about First-Last-mile, let’s look at the reliability guarantee mechanism of Middle-mile within the core network.

The first is edge aggregation. When data reaches an edge node for forwarding to the receiver and the downlink fails, the edge node automatically triggers the aggregation mechanism and forwards the packets to another edge node that also has a connection established with the receiver, for relay transmission. This guarantees the receiver gets all data in a timely and error-free manner. Implementing this mechanism requires all edge nodes to work in concert.

Obviously, this mechanism increases the receiver's bandwidth consumption. For bandwidth-limited, RTT-insensitive scenarios, we adaptively adjust the relay strategy, switching from simultaneous transmission to sequential transmission.


The second middle-mile mechanism is reflux obstacle avoidance. As shown in the figure, when traffic reaches a node and only one of its sub-links has a problem, the node adaptively switches to another sub-link; but when all sub-links fail, the node automatically notifies its parent node to change paths proactively, and sends the packets that failed to transmit back to the parent node for compensatory transmission. As long as a packet's time in the C3 core network does not exceed the maximum timeout specified by its application, the packet is never actively discarded, and all sub-paths are tried until transmission succeeds.

This mechanism serves as compensation. In our actual business this situation has occurred several times, and in the end users perceived no error.


Finally, there is the lossless online upgrade mechanism for core-network PoPs. As a SaaS platform, feature iteration and upgrades are inevitable. First, we ensure that all traffic is uninterrupted during an upgrade and that no frame-loss errors occur; at the same time, the overall carrying capacity of the core network does not decrease during the upgrade. Some systems, when upgrading a PoP, first switch its traffic to another PoP; the receiver senses this and may even lose frames, and the PoP being upgraded carries no traffic during the process, so the maximum traffic the whole network can carry actually drops. Our upgrade achieves zero frame loss without reducing overall capacity. If the core network is compared to a pipeline carrying water, then during an upgrade, neither the flow in the pipeline nor the pipeline's maximum capacity is affected in any way. This is implemented with an active-standby mechanism on both the control plane and the data plane, so data is synchronized losslessly between active and standby instances.

d8130972a4e72fb6fa0393b0bf76bc59.png

Our company has built an online demo. The video above shows two real-time video streams: one transmitted from Singapore to Nanjing, and one from Singapore to Oregon in the United States. I recorded a short clip in advance; at that moment the stream was being delivered live over the Caton Media Xstream platform. The figure shows the specific one-way delay, and the right side displays the real-time transmission path, so users can intuitively observe how the current traffic is transmitted and switched within the path tree. If you are interested, we can show it to you offline.

48b1e816c40b6050772aead4d86482f0.png

To briefly summarize, the C3 platform can guarantee 99.9999% reliability because, on the one hand, it relies on careful selection and deployment of network resources and years of network transmission experience to build a highly reliable overlay core network (the power of hardware), and on the other hand it relies on various mechanisms and algorithms to ensure end-to-end reliable transmission (the power of software). Both are indispensable.

-03-

Caton Cloud

Finally, I will briefly introduce the services provided by Caton’s newly launched Caton Cloud platform ecosystem.

6b6a25b34a970b8ce72e4c67451612f5.png

Caton Cloud is a next-generation cloud service platform developed by our company based on new technologies and concepts.

  • Technically, it embraces cloud native and makes full use of the capability, reliability, and scalability of modern cloud computing platforms, allowing it to meet customer needs with agility and speed;

  • In terms of service philosophy, it is user-centered and pays close attention to user experience. Through professional service and technological innovation, we meet users' expectations on both quality and price;

  • Ecologically, Caton Cloud is a service system focused on network-related services. The Caton Media Xstream introduced earlier is its most critical network transmission service; in addition, we also provide NetScope, a real-time network performance monitoring and diagnosis service.

3ce782cf93729525d7a8a487457482b3.png

NetScope is a SaaS service for network performance monitoring and diagnostics. Why did we build this product? Let me give an example of a problem I encountered in actual business: during an SRT transmission test from Singapore to Hong Kong, the SRT stream was interrupted for 41 seconds. Without a network monitoring program or regular network monitoring, all you can see is a drop in network bandwidth, and you may need many third-party tools to determine whether the fault lies in the sender's network, the receiver's network, or an intermediate network.

The NetScope analysis platform not only raises alerts automatically but also identifies both the symptom and the cause of a problem. In the example above, the platform determined that the interconnection between the two operators on the intermediate network was interrupted for 41 seconds. NetScope is a platform with genuine network-fault attribution capability.
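At its simplest, attribution of this kind maps observed symptoms to a probable fault location. The sketch below is purely illustrative of the idea, not NetScope's actual logic; the metric names, thresholds, and categories are all invented for the example.

```python
def attribute_fault(metrics):
    """Toy rule-based fault attribution: classify a network event from a
    dict of observed symptoms. Thresholds and categories are illustrative."""
    if metrics["lan_loss"] > 0.01:
        return "internal network (e.g. local cabling or switch)"
    if metrics["first_hop_rtt_ms"] > 100:
        return "access router or last-mile link"
    if metrics["as_path_changed"]:
        return "inter-operator exchange / cross-ISP rerouting"
    return "no fault detected"
```

A real system would of course combine many more signals (and, as described later, AI-based analysis) rather than a fixed rule table, but the input-symptoms-to-attributed-cause shape is the same.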

604bf3eba760b41d67781541f948688c.png

NetScope also provides a variety of functional modules, including a network performance overview, detailed fault analysis, and, most importantly, weekly and monthly reports to guide the maintenance of network equipment and video transmission systems. We can give users a summary report telling them how long network failures lasted that week and which problems frequently cause interruptions at certain locations, such as local cabling problems, router problems, or operator access-line problems, so that operators' maintenance work can be truly targeted.

e0136f4cb8574c35fc3e9115daed674c.png

NetScope can be flexibly deployed on common platforms and devices. Beyond basic real-time monitoring of network status, its core capability is AI-based attribution analysis of network events: it can determine whether line packet loss is caused by an internal network problem, a router problem, a broadband problem, or cross-operator line switching, with no need to hunt for clues with third-party tools, greatly reducing fault-diagnosis time. Everyone is welcome to download and try it; for more products and features, feel free to talk with us offline.

This is all I have to share this time, thank you all.

