anyRTC November SDK update


The anyRTC SDK's November update adds support for multiple cross-platform SDKs (Flutter and uni-app), AI noise reduction and video super-resolution, a new universal player SDK on the Native side, and client-side RTMP co-streaming.

With this release, the anyRTC SDK provides a one-stop service for interactive live streaming and audio/video interaction. There is no need to integrate other SDKs, which greatly reduces the application package size. The media-stream injection feature makes live streaming more extensible, suiting scenarios such as karaoke and watching movies together. The audio 3A algorithms have also been improved, with performance and quality reaching an industry-leading level.

For more detailed documentation and the anyRTC SDK's update history, see "anyRTC official website - Developer Center - Document Center".

The following is a detailed introduction to this month's updates.

uni-app mobile plugin

The anyRTC uni-app mobile plug-in currently supports only the RTC SDK; an RTM plug-in is on the way. uni-app is a front-end framework that uses Vue.js to develop cross-platform applications: developers write one codebase that compiles to iOS, Android, H5, mini-programs, and other platforms. Its strong extensibility and low learning curve have made it popular with developers.

The uni-app cross-platform SDK has a wide range of application scenarios, such as online education, online finance, smart terminals, mobile law enforcement, and transportation logistics.

Flutter mobile plugin

The anyRTC Flutter mobile plug-in currently supports both the RTC and RTM SDKs. Based on the anyRTC Flutter SDK, developers can simply and efficiently implement cross-platform audio, video, and real-time messaging features.

Flutter is Google's SDK for building cross-platform mobile apps: write one codebase that runs on both Android and iOS. Flutter's advantages are rapid development, an expressive and flexible UI, and native performance.

anyRTC Flutter SDK integration guide and sample demo
Reference address: https://github.com/anyRTC/Flutter-SDK

For applications that need real-time messaging, anyRTC also provides a Flutter RTM plug-in, Flutter-RTM
Reference address: https://github.com/anyRTC/Flutter-RTM


anyRTC AI noise reduction

The anyRTC AI noise reduction feature is currently toggled through a private parameter interface. The following takes iOS as an example; the other platforms work the same way:


// iOS: enable AI noise reduction
NSDictionary *dic = [[NSDictionary alloc] initWithObjectsAndKeys:@"SetAudioAiNoise", @"Cmd", [NSNumber numberWithBool:YES], @"Enable", nil];
// Serialize the dictionary to a JSON string
NSString *openAiNoiseStr = [self returnJSONStringWithDictionary:dic];
// Pass the JSON string to the engine's private parameter interface
[_rtcKit setParameters:openAiNoiseStr];
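The string handed to setParameters above is plain JSON, so the same toggle works on any platform that exposes the private parameter interface. A minimal sketch of the payload (Python is used here only to show the JSON; the key names come from the iOS snippet above):

```python
import json

def ai_noise_param(enable):
    """Build the JSON parameter string that toggles AI noise
    reduction through the private setParameters interface."""
    return json.dumps({"Cmd": "SetAudioAiNoise", "Enable": enable})

# The string an engine would receive to turn the feature on:
print(ai_noise_param(True))  # {"Cmd": "SetAudioAiNoise", "Enable": true}
```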

AI noise reduction is an indispensable feature in the audio and video field, with important applications in online education, online meetings, voice chat, and team gaming.

anyRTC AI noise reduction effectively handles these scenarios: it automatically detects the surroundings, separates the speaker's voice from ambient noise, highlights the voice, shields the noise, and guarantees call quality.

The following are anyRTC's achievements with its AI audio models:

  • Intelligent noise reduction: based on computational auditory scene analysis theory and deep learning, it separates human voice from noise without relying on any special hardware and effectively suppresses various environmental noises.

Intelligent noise reduction demo scene

  • DHS deep howling suppression: based on deep learning, it intelligently breaks the acoustic feedback loop to suppress howling, effectively solving howling problems in multi-person real-time conversation scenarios such as real-time games and online meetings.

Howling suppression demo scene

anyRTC's AI noise reduction roadmap focuses on the core audio communication experience, sound-scene classification and processing, audio pain points and difficulties, and a differentiated experience, with the ultimate goal of improving speech intelligibility, naturalness, and comfort.

Video super-resolution

Super-resolution is another anyRTC breakthrough in the AI field. At the receiving end of a real-time video call, it upscales the decoded image to obtain a higher-resolution picture. The sender can therefore transmit at a lower resolution, effectively reducing network transmission bandwidth while delivering the best possible video experience for mobile users.

Super-resolution is used not only for images and video; it is now also applied in virtual reality and in upsampling rendered frames for large-scale games, so the graphics engine only needs to render low-resolution frames while users still see high-resolution output, reducing compute pressure.
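As a rough illustration of the bandwidth saving (generic arithmetic, not anyRTC's implementation): if the sender transmits a frame downscaled by 2x in each dimension and the receiver super-resolves it back up, only a quarter of the pixels cross the network:

```python
def pixel_ratio(src_w, src_h, scale):
    """Fraction of pixels transmitted when a frame is sent at
    1/scale resolution and upscaled at the receiver."""
    sent = (src_w // scale) * (src_h // scale)
    return sent / (src_w * src_h)

# Sending 640x360 and super-resolving to 1280x720 transmits
# only a quarter of the full-resolution pixel count.
print(pixel_ratio(1280, 720, 2))  # 0.25
```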

Bypass push

Anchor PK is currently a very popular live-streaming format: during a broadcast, an anchor can challenge the anchor of another live room. Once the challenge is accepted, the two rooms begin interacting: the live interface splits in two, showing both anchors at the same time, and the fans of both sides enter the same live room.

anyRTC provides the following two streaming methods:

1. Server-side bypass push

  • Single-anchor mode

    Suitable for anchors broadcasting from a web page, or live rooms with no co-hosting requirement

  • Multi-host mode

    When multiple hosts are co-streaming, transcoding must be enabled when pushing to the CDN so that the streams are merged into one. CDN viewers can then watch the co-hosted broadcast through the stream's CDN address (URL).


Advantages

1. The broadcasting side can be a web page with no plug-in required; viewers can watch in a web browser without installing an app.

2. Pushing from the server consumes no extra client bandwidth, so network jitter has little impact on the broadcast.

3. It consumes no extra performance on the device.

2. Client-side push streaming

  • Single-anchor mode

    Simply do not call setLiveTranscoding; the SDK then pushes the stream directly without transcoding or merging.

  • Multi-host mode

    The host calls setLiveTranscoding to transcode locally, merging multiple streams into one before pushing. CDN viewers can watch the co-hosted broadcast through the stream's CDN address (URL).


Advantages

1. Users are not billed for server-side bypass streaming; the stream is pushed directly from the client.

2. Low latency: the host pushes the stream directly, reducing delay along the transmission path.
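In either multi-host mode, the transcoding step assigns each host a region in the merged output canvas. A minimal sketch of a side-by-side PK layout (the region fields here are illustrative, not the actual setLiveTranscoding structure):

```python
def pk_layout(canvas_w, canvas_h, uids):
    """Split the output canvas into equal side-by-side columns,
    one region per co-hosting anchor."""
    col_w = canvas_w // len(uids)
    return [
        {"uid": uid, "x": i * col_w, "y": 0,
         "width": col_w, "height": canvas_h}
        for i, uid in enumerate(uids)
    ]

# Two anchors in a PK on a 1280x720 canvas: each gets a 640x720 half.
regions = pk_layout(1280, 720, [101, 202])
```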

Injecting online media streams

Injecting an online media stream adds an audio/video stream to an ongoing live room as a sender. By feeding the media being played into the live channel, the host and the audience can listen to or watch the stream together and interact in real time.

The application scenarios for injecting online media streams are very broad; some are listed below:

1. Live game sharing

In a game broadcast, the anchor can pull the game's audio/video stream directly, watching and commenting on the match together with the audience and increasing interaction between anchor and viewers.

2. Watch-together entertainment

In the same live room, the host can watch movies, listen to music, and play games with the audience while chatting and discussing in real time, giving users an immersive experience.

3. Video from drones or webcams

Video captured directly by a drone or webcam is injected into the live channel as an online media stream.


3A algorithm optimization

anyRTC optimized its 3A processing algorithms this month, with targeted improvements to echo cancellation, noise suppression, and automatic gain control: first-class double-talk performance; a 20 dB+ signal-to-noise-ratio improvement that suppresses noise without damaging voice quality; and automatic microphone-volume adjustment that enhances the user experience in noisy environments.

That covers the main content of this month's SDK iteration. More detailed documentation and the anyRTC SDK's update history are available at "anyRTC official website - Developer Center - Document Center".


Origin: blog.csdn.net/anyRTC/article/details/110430151