Introduction and use of the C++ library yalantinglibs (serialization, JSON/struct conversion, coroutines)

Introduction to yalantinglibs

Yalantinglibs (雅兰亭库) has an elegant name and is a powerful library. It is a collection of modern C++ foundational utility libraries open-sourced by Alibaba, and it currently includes struct_pack, struct_json, struct_xml, struct_yaml, struct_pb, easylog, coro_rpc, coro_io, coro_http and async_simple, with existing features being continuously optimized and new ones added.

The goal of yalantinglibs is to provide C++ developers with a high-performance, extremely easy-to-use set of modern C++ foundational libraries and to help users build high-performance modern C++ applications.

Yalantinglibs open source address:

https://github.com/alibaba/yalantinglibs

For more information, see the official documentation introduction:

yalantinglibs (雅兰亭库) | A collection of C++20 libraries, including async_simple, coro_rpc and struct_pack.

Compiler requirements

If your compiler only supports C++17, yalantinglibs will build only the serialization libraries (the struct_* series), so you can use features such as fast serialization and conversion between structs and JSON, but not the coroutine-based libraries. If your compiler does not even support C++17, you can read along but cannot use the library; upgrading to C++17 or later is recommended, and C++17 is already very widely available.

Make sure your compiler version is not lower than:

  • clang++ 6 or later (with libstdc++-8 or higher).
  • g++ 9 or later.
  • msvc 14.20 or later.

If your compiler supports C++20, yalantinglibs will compile all libraries.

Make sure your compiler version is not lower than:

  • clang++ 11 or later (with libstdc++-8 or higher).
  • g++ 10 or later.
  • msvc 14.29 or later.

You can control this manually with the CMake option -DENABLE_CPP_20=ON or -DENABLE_CPP_20=OFF.
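If you want a quick compile-time check of which subset your toolchain can build, a minimal sketch like the following (not part of yalantinglibs, just an illustration of the rules above) can be dropped into any translation unit; note that on MSVC the __cplusplus macro only reports the real value with /Zc:__cplusplus.

// Sketch: mirror the rules above at compile time.
// C++17 -> only the struct_* serialization libraries; C++20 -> everything.
#if __cplusplus < 201703L
#error "yalantinglibs needs at least C++17 (struct_* serialization libraries)"
#elif __cplusplus < 202002L
#pragma message("C++17 detected: only the struct_* libraries will be available")
#else
#pragma message("C++20 detected: all yalantinglibs libraries can be built")
#endif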

Install & compile

Yalantinglibs is a header-only library, which means you can simply copy ./include/ylt into your project. However, the recommended way is to install it with CMake.

  • Clone the repository
git clone https://github.com/alibaba/yalantinglibs.git
  • build, test and install

  • It is recommended to build the examples/benchmarks/tests and run the tests before installing:

mkdir build && cd build
cmake ..
cmake --build . --config debug # append -j to speed up the build with parallel compilation
ctest . # run the tests

The test/example/benchmark executables are placed under ./build/output/.

  • You can also skip building them:
# these options skip building the examples/benchmarks/unit tests
cmake .. -DBUILD_EXAMPLES=OFF -DBUILD_BENCHMARK=OFF -DBUILD_UNIT_TESTS=OFF
cmake --build .
  • By default, the library is installed into the system's default include path; you can also customize the installation path via an option:
cmake --install . # --prefix ./user_defined_install_path
  • Start programming
  • Using CMake:

After installation, you can copy the folder src/*/examples, open it, and run the following commands:

mkdir build
cd build
cmake ..
cmake --build .
  • Compile manually:
  1. Add include/ to your header search path (this step can be skipped if the library has been installed to the system default path).
  2. Add include/ylt/thirdparty to your header search path (this step can be skipped if the third-party dependencies have been installed via the CMake option -DINSTALL_INDEPENDENT_THIRDPARTY=ON).
  3. If you use any header file whose name starts with coro_, add the -pthread option on Linux; when compiling with g++ you also need -fcoroutines.
  4. All done. For more details, see example/cmakelist.txt.

Use in CMake

Assuming the library has been built and installed as described above, here is an example of importing ylt into a project with CMake:

cmake_minimum_required(VERSION 3.15)
project(file_transfer)
if(NOT CMAKE_BUILD_TYPE)
    set(CMAKE_BUILD_TYPE "Release")
endif()
set(CMAKE_CXX_STANDARD 20)
set(CMAKE_CXX_STANDARD_REQUIRED ON)
set(CMAKE_INCLUDE_CURRENT_DIR ON)
find_package(Threads REQUIRED)
link_libraries(Threads::Threads)
find_package(yalantinglibs REQUIRED)
link_libraries(yalantinglibs::yalantinglibs)

add_executable(coro_io_example main.cpp)
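For completeness, the main.cpp referenced by add_executable above could be as small as the following sketch (a hypothetical file that simply reuses the struct_json API shown later in this article) to verify that the installed ylt headers are found:

// main.cpp: hypothetical minimal body for the coro_io_example target above,
// only meant to check that the installed ylt headers are usable.
#include <iostream>
#include <string>

#include "ylt/struct_json/json_writer.h"

struct point {
  int x;
  int y;
};
REFLECTION(point, x, y);

int main() {
  point p{.x = 1, .y = 2};
  std::string str;
  struct_json::to_json(p, str); // serialize the struct to JSON
  std::cout << str << "\n";     // expected output: {"x":1,"y":2}
}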

struct_pack

struct_pack is an easy-to-use, high-performance, header-only serialization library based on compile-time reflection. Serialization and deserialization each take one line of code, and its performance far exceeds traditional serialization libraries such as protobuf.

// define a C++ struct
struct person {
  int64_t id;
  std::string name;
  int age;
  double salary;
};
// initialize an object
person person1{.id = 1, .name = "hello struct pack", .age = 20, .salary = 1024.42};

// serialize with one line of code
std::vector<char> buffer = struct_pack::serialize(person1);

// deserialize with one line of code
auto person2 = struct_pack::deserialize<person>(buffer);

struct_pack has excellent performance; in the best cases it is an order of magnitude faster than protobuf. It was originally developed in C++20, so it used to require a C++20 compiler and only supported serialization and deserialization of aggregate types; to let users on older compilers benefit from it as well, struct_pack now also supports C++17.
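In a real program you usually also want to check whether deserialization succeeded. Here is a minimal sketch of that, assuming the expected-style return value of struct_pack::deserialize; verify the exact error-handling API against the version you install.

// Sketch: serialize a person and check the deserialization result.
#include <cassert>
#include <cstdint>
#include <string>
#include <vector>

#include "ylt/struct_pack.hpp"

struct person {
  int64_t id;
  std::string name;
  int age;
  double salary;
};

int main() {
  person person1{.id = 1, .name = "hello struct pack", .age = 20, .salary = 1024.42};
  std::vector<char> buffer = struct_pack::serialize(person1);

  auto result = struct_pack::deserialize<person>(buffer);
  if (result) {                        // deserialization succeeded
    assert(result->name == person1.name);
  } else {
    // result.error() carries the error code on failure
  }
}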

struct_json

struct_json is a reflection-based JSON library that makes it easy to map between C++ structs and JSON.

#include "ylt/struct_json/json_reader.h"
#include "ylt/struct_json/json_writer.h"

struct person {
  std::string name;
  int age;
};
REFLECTION(person, name, age);

int main() {
  person p{.name = "tom", .age = 20};
  std::string str;
  struct_json::to_json(p, str); // {"name":"tom","age":20}

  person p1;
  struct_json::from_json(p1, str);
}

JSON parsing benchmark

struct_json is not only easier to use, but also has higher performance than rapidjson.

[Figure: JSON parsing benchmark, struct_json vs. rapidjson]

In the benchmark, struct_json parses all 10 test files faster than rapidjson; for the gsoc-2018.json file it is 6.3 times faster, showing that struct_json wins not only on ease of use and safety but also on performance.

struct_json, struct_xml and struct_yaml exist to free us from tedious and error-prone handwritten parsing code: serialization and deserialization are automated through the struct definition, which is safer, more reliable and faster. It is time to consider them as replacements for the classic Rapidxxx series.
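As a sketch of what that looks like for XML, assuming the struct_xml API mirrors the struct_json one above (same REFLECTION macro and a to_xml/from_xml pair); see the official struct_xml examples for the exact interface of your version:

// Sketch: struct <-> XML, mirroring the struct_json example above.
#include <string>

#include "ylt/struct_xml/xml_reader.h"
#include "ylt/struct_xml/xml_writer.h"

struct person {
  std::string name;
  int age;
};
REFLECTION(person, name, age);

int main() {
  person p{.name = "tom", .age = 20};
  std::string xml;
  struct_xml::to_xml(p, xml);    // serialize to XML text

  person p1;
  struct_xml::from_xml(p1, xml); // parse the XML back into a struct
}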

coro_rpc

coro_rpc is a high-performance rpc library written in C++20, based on stackless coroutines and compile-time reflection. Its echo-test QPS reaches 20 million on a single machine (see the benchmark section for details), far higher than rpc libraries such as grpc and brpc. High performance is not its main selling point, though: the main feature of coro_rpc is ease of use. There is nothing to install, you just include the headers, and an rpc server and client can be written in a few lines of code.

The design philosophy of coro_rpc is to focus on ease of use and return to the essence of rpc: let users concentrate on business logic rather than framework details, so that rpc development takes only a few lines of code. What is the essence of rpc? An rpc is a remote function; apart from the network IO underneath, it behaves just like an ordinary function. Users should not have to care about lower-level details such as network IO, routing and serialization; they only need to write the business logic of the rpc function. Based on this philosophy, coro_rpc provides a very simple, easy-to-use API. The following example shows just how easy coro_rpc is to use.

rpc usage

1. Define the rpc function

// rpc_service.hpp
inline std::string echo(std::string str) { return str; }

2. Register the rpc function and start the server

#include "rpc_service.hpp"
#include <ylt/coro_rpc/coro_rpc_server.hpp>

int main() {

  // initialize the server
  coro_rpc_server server(/*thread_num =*/10, /*port =*/9000);

  server.register_handler<echo>(); // register the rpc function

  server.start(); // start the server and block waiting
}

For the rpc server, the user only needs to define the rpc function and start the server, without worrying about any other details; five or six lines of code are enough to provide an rpc service. Isn't that simple? Now let's look at how the client calls the echo rpc service.

rpc client side

  1. Connect to the server
  2. rpc call
#include "rpc_service.hpp"
#include <ylt/coro_rpc/coro_rpc_client.hpp>

Lazy<void> test_client() {
  coro_rpc_client client;
  co_await client.connect("localhost", /*port =*/"9000");

  auto r = co_await client.call<echo>("hello coro_rpc"); // call the rpc function with arguments
  std::cout << r.result.value() << "\n"; // will print "hello coro_rpc"
}

int main() {
  syncAwait(test_client());
}

Calling an rpc function from the client is just as simple: five or six lines of code complete the rpc call. Invoking a remote rpc function feels just like calling a local one; you pass the function as the template argument of call together with its arguments, and the remote invocation happens for you.

This simple example already demonstrates the ease of use and character of coro_rpc, and it reflects the essence of rpc: users call remote functions as if they were local ones and only have to care about the business logic inside them.

The ease of use of the coro_rpc interface also shows in the fact that rpc functions have almost no restrictions: an rpc function can take any number of parameters, and their serialization and deserialization are handled automatically by the library, so users never have to deal with them.
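As a sketch of that flexibility (the function and type names below are made up for illustration; the call pattern follows the echo example above):

// Sketch: an rpc function with several parameters, including a user-defined
// struct; struct_pack serializes all of them automatically.
#include <string>

struct dummy {
  int a;
  int b;
};

inline int add(dummy d, int c, std::string tag) {
  return d.a + d.b + c; // 'tag' is unused; it only demonstrates an extra parameter
}

// Server side, as in the echo example above:
//   server.register_handler<add>();
//
// Client side, inside a Lazy<void> coroutine:
//   auto r = co_await client.call<add>(dummy{1, 2}, 3, "demo");
//   std::cout << r.result.value() << "\n"; // prints 6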

Ease of use

rpc usability comparison

| RPC library | Requires a DSL | Supports coroutines | Hello-world example (lines of code) | Dependencies | Header-only |
|---|---|---|---|---|---|
| grpc | Yes | No | 70+ (helloworld) | 16 | No |
| brpc | Yes | No | 40+ (helloworld) | 6 | No |
| coro_rpc | No | Yes | 9 | 3 | Yes |

C++ future evolution

If C++11 was a milestone for C++, then C++20 is the milestone of modern C++, because it introduces several very important features that change how we think about and write C++ and make the language more useful: Concepts, Modules, Coroutines, Ranges, and so on. Take coroutines as an example: they let us write asynchronous code in a synchronous style and finally escape callback hell, so the asynchronous-callback model will gradually be replaced by coroutines. With the new C++20 standard, what we lose is the tangled callback; what we gain is simple, direct coroutine code. Coroutines make asynchrony easier, and over the next three to five years the coroutinization of C++ network libraries will be the general trend.
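As a minimal sketch of that synchronous-looking style, here is a toy pipeline built on async_simple's Lazy coroutine (the same Lazy/syncAwait used in the coro_rpc client above); it assumes the async_simple headers are on the include path:

// Sketch: each step is awaited in order; no nested callbacks anywhere.
#include <iostream>

#include <async_simple/coro/Lazy.h>
#include <async_simple/coro/SyncAwait.h>

using async_simple::coro::Lazy;
using async_simple::coro::syncAwait;

Lazy<int> load_value() { co_return 21; }        // stand-in for an async I/O step
Lazy<int> double_it(int x) { co_return x * 2; } // another async step

Lazy<int> pipeline() {
  int a = co_await load_value(); // reads top to bottom, like synchronous code
  int b = co_await double_it(a);
  co_return b;
}

int main() {
  std::cout << syncAwait(pipeline()) << "\n"; // prints 42
}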

Modules mean we no longer need to #include header files; we simply import libraries the way other languages do, and compilation speed improves greatly. After modularizing async_simple[2], compilation became 45% faster and performance improved slightly as well. It is fair to say that Modules bring the first light of unified package management to C++.

The C++ standard now evolves fairly quickly: since C++11 it has kept a three-year release cadence, with C++23/26/29 to follow. Future standards will bring more good things, such as Executors, Networking and compile-time reflection. Although new standards appear quickly, libraries built on them lag behind; for stackless coroutines, for example, there is hardly anything usable besides async_simple. That remains a weak spot of the C++ ecosystem.

Other resources

  • Conversion between JSON and C++ structures (xyz347's blog, CSDN)
  • Efficient tool recommendation: vcpkg (bzdww)
  • Reflection for non-intrusive serialization
  • GitHub C++ project collection (xupeng1644's blog, CSDN)
  • iguana reflection - Zhihu
  • purecpp - a cool open source modern C++ community


Origin: blog.csdn.net/qq8864/article/details/132223608