[Boost network library, from bronze to king] Part 5: Client and server examples of synchronous reading and writing in asio network programming

1 Introduction

Earlier we introduced the boost::asio API functions for synchronous reading and writing. Now we string those APIs together into a working client and server, which communicate using blocking, synchronous reads and writes.

2. Client design

The basic idea of the client is to create an endpoint from the server's IP and port, create a socket, connect the socket to that endpoint, and then send and receive data with synchronous reads and writes.

  • Create an endpoint (ip + port).
  • Create a socket.
  • Connect the socket to the endpoint.
  • Send and receive data.

client.h:

#pragma once
#ifndef __CLIENT_H_2023_8_16__
#define __CLIENT_H_2023_8_16__

#include<iostream>
#include<boost/asio.hpp>
#include<string>

#define Ip "127.0.0.1"
#define Port 9273
#define Buffer 1024

class Client {
public:
	Client();
	bool StartConnect();

private:
	std::string ip_;
	uint16_t port_;
};

#endif

client.cpp:

#include"client.h"
#include<cstring> //for strlen

Client::Client() {
	ip_ = Ip;
	port_ = Port;
}

bool Client::StartConnect() {
	try {
		//Step1: create endpoint
		boost::asio::ip::tcp::endpoint ep(boost::asio::ip::address::from_string(ip_), port_);

		//Step2: create socket
		boost::asio::io_context context;
		boost::asio::ip::tcp::socket socket(context, ep.protocol());

		//Step3: socket connect endpoint
		boost::system::error_code error = boost::asio::error::host_not_found;
		socket.connect(ep, error);
		if (error) {
			std::cout << "connect failed, error code is: " << error.value() << ", error message is: " << error.message() << std::endl;
			return false;
		}
		else {
			std::cout << "connect succeeded!" << std::endl;
		}

		while (true) {
			//Step4: send message
			std::cout << "Enter message: ";
			char req[Buffer];
			std::cin.getline(req, Buffer);
			size_t req_length = strlen(req);
			socket.send(boost::asio::buffer(req, req_length));

			//Step5: receive message (leave one byte so ack stays null-terminated)
			char ack[Buffer] = { 0 };
			size_t ack_length = socket.receive(boost::asio::buffer(ack, Buffer - 1));
			std::cout << "receive message: " << ack << std::endl;
		}
	}
	catch (boost::system::system_error& e) {
		std::cout << "Error occurred! Error code: " << e.code().value() << ". Message: " << e.what() << std::endl;
		return false;
	}
	return true;
}

  • This code is a C++ program for the client, using the previously defined Client class to establish a connection and communicate with the server. Here's a line-by-line explanation of what the code does:
    • #include "client.h": Include the header file of your client class to use this class in this file.

    • Client::Client(): This is the constructor of the client class, which initializes the ip_ and port_ member variables.

    • bool Client::StartConnect(): This is a member function used to start connecting to the server and perform communication.

    • try: Starts an exception handling block for catching possible exceptions.

    • boost::asio::ip::tcp::endpoint ep(boost::asio::ip::address::from_string(ip_), port_);: create a TCP endpoint from the server IP address and port number to connect to.

    • boost::asio::io_context context;: Create an I/O context object, which provides the I/O services that sockets and other asio objects need.

    • boost::asio::ip::tcp::socket socket(context, ep.protocol());: Create a TCP socket using the previously created I/O context and the specified protocol.

    • boost::system::error_code error = boost::asio::error::host_not_found;: Create an error code object and initialize it to the error code of the host not found as the initial state of the connection.

    • socket.connect(ep, error);: Attempt to connect to the server, if the connection fails, the error code will be updated to reflect the information of the connection error.

    • if (error): Check the error code, if it is not 0, it means the connection failed.

    • std::cout << "connect failed, error code is: " << error.value() << ", error message is: " << error.message() << std::endl;: output the connection-failure error code and error message.

    • else: If the connection is successful, enter this branch.

    • while (true): Infinite loop for sending and receiving messages continuously.

    • std::cout << "Enter message:";: Prompt the user to enter a message.

    • char req[Buffer];: Create a character array for storing the message entered by the user.

    • std::cin.getline(req, Buffer);: Read a line of messages from user input.

    • size_t req_length = strlen(req);: Get the length of the user input message.

    • socket.send(boost::asio::buffer(req, req_length));: Send the message entered by the user to the server.

    • char ack[Buffer];: Create a character array for receiving the message returned by the server.

    • size_t ack_length = socket.receive(boost::asio::buffer(ack, Buffer - 1));: Receive the reply from the server into ack, leaving room for the terminating null so it can be printed as a C string.

    • std::cout << "receive message: " << ack << std::endl;: output the received message.

    • catch (boost::system::system_error& e): Catch an exception, if an exception occurs, enter this branch.

    • std::cout << "Error occurred! Error code: " << e.code().value() << ". Message: " << e.what() << std::endl;: output the exception information, including the error code and error message.

    • return false;: on an exception, return false to signal that communication failed.

    • return true;: If there is no exception, return true, indicating that the communication is successful.

In summary, this code creates a client object, connects to the server and implements a simple loop that allows the user to enter a message and send it to the server, and then receives and displays the message returned by the server. At the same time, it can also handle abnormal situations that may occur during connection and communication.

main.cpp:

#include"client.h"

int main() {
	Client client;
	if (client.StartConnect()) {
		;
	}
	return 0;
}
  • This code is the client's main function, which uses the Client class defined above. Here is what each part does:

    • #include "client.h": This line includes the header file of your client class, so that you can use this class in the main function.

    • int main(): This is the main function of the program and it is the entry point of the program. All code will be executed from here.

    • Client client;: On this line, you create a client object called client, using the constructor of the Client class you defined earlier.

    • if (client.StartConnect()) { … }: This line begins a conditional statement. client.StartConnect() is called, which attempts to establish a connection with the server and perform communication. If the connection is successful and the communication is normal, the StartConnect() function will return true and enter the branch where the condition is met.

    • ;: This is an empty statement that does nothing; the success branch has no extra work to perform here, so it is left as a placeholder.

    • return 0;: This line is the last line of the main function, which tells the program to return a status code of 0 after the main function ends, indicating that the program exits normally.

To sum up, this code creates a client object and calls its StartConnect() function to connect to the server and communicate. The program then exits normally with status code 0. If there is a problem with the connection or communication, you can add error handling code where appropriate.

3. Server design

3.1. Session function

Create a session function that handles one client's requests on the server; it is called once for each accepted client connection. Inside the session, reading and writing follow an echo pattern: whatever the client sends as a request, the server sends back as the response.

void Server::Session(std::shared_ptr<boost::asio::ip::tcp::socket> socket, uint32_t user_id) {
	try {
		for (;;) {
			char ack[Buffer];
			memset(ack, '\0', Buffer);
			boost::system::error_code error;
			//read at most Buffer-1 bytes so ack stays null-terminated for printing
			size_t length = socket->read_some(boost::asio::buffer(ack, Buffer - 1), error);
			if (error == boost::asio::error::eof) {
				std::cout << "the user_id " << user_id << " connection closed by peer!" << std::endl;
				socket->close();
				break;
			}
			else if (error) {
				throw boost::system::system_error(error);
			}
			else {
				if (socket->is_open()) {
					std::cout << "the user_id " << user_id << " ip " << socket->remote_endpoint().address();
					std::cout << " send message: " << ack << std::endl;
					socket->send(boost::asio::buffer(ack, length));
				}
			}
		}
	}
	catch (boost::system::system_error& e) {
		std::cout << "Error occurred! Error code: " << e.code().value() << ". Message: " << e.what() << std::endl;
	}
}

3.2. StartListen function

The StartListen function creates an acceptor from the server IP and port, uses a socket to accept each new connection, and then creates a session for that socket.

bool Server::StartListen(boost::asio::io_context& context) {
	//create endpoint
	boost::asio::ip::tcp::endpoint ep(boost::asio::ip::tcp::v4(), port_);

	//create acceptor (this constructor also binds and listens)
	boost::asio::ip::tcp::acceptor accept(context, ep);

	//explicit bind is not needed; the constructor above already did it
	//accept.bind(ep);

	//explicit listen is not needed; the constructor above already did it
	//accept.listen(30);

	std::cout << "start listen:" << std::endl;
	for (;;) {
		std::shared_ptr<boost::asio::ip::tcp::socket> socket(new boost::asio::ip::tcp::socket(context));
		accept.accept(*socket);
		user_id_ = user_id_ + 1;
		std::cout << "the user_id " << user_id_ << " client connect, the ip: " << socket->remote_endpoint().address() << std::endl;

		//capturing by reference here is unsafe (see section 5.1.2):
		//auto t = std::make_shared<std::thread>([&]() {
		//	this->Session(socket);
		//	});

		auto t = std::make_shared<std::thread>([this, socket]() {
			Session(socket, user_id_);
			});

		thread_set_.insert(t);
	}
	return true;
}

Creating a dedicated thread that runs the session function gives each socket its own thread for reading and writing, so the acceptor loop is never blocked by I/O on any single connection.

3.3. Overall design

server.h:

#pragma once
#ifndef __SERVER_H_2023_8_16__
#define __SERVER_H_2023_8_16__

#include<iostream>
#include<boost/asio.hpp>
#include<string>
#include<set>
#include<thread>  //for std::thread
#include<memory>  //for std::shared_ptr
#include<cstring> //for memset

#define Port 9273
#define Buffer 1024
#define SIZE 30

class Server {
public:
	Server();
	bool StartListen(boost::asio::io_context& context);
	void Session(std::shared_ptr<boost::asio::ip::tcp::socket> socket, uint32_t user_id);

	std::set<std::shared_ptr<std::thread>>& GetSet() {
		return thread_set_;
	}
private:
	uint16_t port_;
	uint32_t user_id_;
	std::set<std::shared_ptr<std::thread>> thread_set_;
};

#endif

server.cpp:

#include"server.h"

Server::Server() {
	port_ = Port;
	user_id_ = 0;
	thread_set_.clear();
}

void Server::Session(std::shared_ptr<boost::asio::ip::tcp::socket> socket, uint32_t user_id) {
	try {
		for (;;) {
			char ack[Buffer];
			memset(ack, '\0', Buffer);
			boost::system::error_code error;
			//read at most Buffer-1 bytes so ack stays null-terminated for printing
			size_t length = socket->read_some(boost::asio::buffer(ack, Buffer - 1), error);
			if (error == boost::asio::error::eof) {
				std::cout << "the user_id " << user_id << " connection closed by peer!" << std::endl;
				socket->close();
				break;
			}
			else if (error) {
				throw boost::system::system_error(error);
			}
			else {
				if (socket->is_open()) {
					std::cout << "the user_id " << user_id << " ip " << socket->remote_endpoint().address();
					std::cout << " send message: " << ack << std::endl;
					socket->send(boost::asio::buffer(ack, length));
				}
			}
		}
	}
	catch (boost::system::system_error& e) {
		std::cout << "Error occurred! Error code: " << e.code().value() << ". Message: " << e.what() << std::endl;
	}
}

bool Server::StartListen(boost::asio::io_context& context) {
	//create endpoint
	boost::asio::ip::tcp::endpoint ep(boost::asio::ip::tcp::v4(), port_);

	//create acceptor (this constructor also binds and listens)
	boost::asio::ip::tcp::acceptor accept(context, ep);

	//explicit bind is not needed; the constructor above already did it
	//accept.bind(ep);

	//explicit listen is not needed; the constructor above already did it
	//accept.listen(30);

	std::cout << "start listen:" << std::endl;
	for (;;) {
		std::shared_ptr<boost::asio::ip::tcp::socket> socket(new boost::asio::ip::tcp::socket(context));
		accept.accept(*socket);
		user_id_ = user_id_ + 1;
		std::cout << "the user_id " << user_id_ << " client connect, the ip: " << socket->remote_endpoint().address() << std::endl;

		//capturing by reference here is unsafe (see section 5.1.2):
		//auto t = std::make_shared<std::thread>([&]() {
		//	this->Session(socket);
		//	});

		auto t = std::make_shared<std::thread>([this, socket]() {
			Session(socket, user_id_);
			});

		thread_set_.insert(t);
	}
	return true;
}

main.cpp:

#include"server.h"

int main() {
    try {
        boost::asio::io_context context;
        Server server;
        server.StartListen(context);
        for (auto& t : server.GetSet()) {
            t->join();
        }
    }
    catch (std::exception& e) {
        std::cerr << "Exception " << e.what() << "\n";
    }
    return 0;
}

Every time a peer connects, the blocking accept call returns and the server creates a session for the new socket. Note that this is purely synchronous code: accept, read_some, and send block the calling thread directly, so asio's underlying multiplexing model is not dispatching callbacks here (that comes with the asynchronous API). The accept loop runs in the main thread, while each session runs in its own worker thread.

Also note that the server does not exit simply because StartListen loops forever accepting new connections. io_context::run() is never called in this synchronous design, so nothing depends on it to keep the process alive.

4. Test run


5. Problems encountered

5.1. Problems encountered by the server

5.1.1. bind and listen are not called explicitly

There are two ways: in earlier Boost versions the acceptor had to be bound to the port explicitly, while later Boost versions were streamlined so that passing the endpoint to the acceptor's constructor performs the bind and listen directly.

StartListen function:

bool Server::StartListen(boost::asio::io_context& context) {
	//create endpoint
	boost::asio::ip::tcp::endpoint ep(boost::asio::ip::tcp::v4(), port_);

	//create acceptor (this constructor also binds and listens)
	boost::asio::ip::tcp::acceptor accept(context, ep);

	//explicit bind is not needed; the constructor above already did it
	//accept.bind(ep);

	//explicit listen is not needed; the constructor above already did it
	//accept.listen(30);

	std::cout << "start listen:" << std::endl;
	for (;;) {
		std::shared_ptr<boost::asio::ip::tcp::socket> socket(new boost::asio::ip::tcp::socket(context));
		accept.accept(*socket);
		user_id_ = user_id_ + 1;
		std::cout << "the user_id " << user_id_ << " client connect, the ip: " << socket->remote_endpoint().address() << std::endl;

		//capturing by reference here is unsafe (see section 5.1.2):
		//auto t = std::make_shared<std::thread>([&]() {
		//	this->Session(socket);
		//	});

		auto t = std::make_shared<std::thread>([this, socket]() {
			Session(socket, user_id_);
			});

		thread_set_.insert(t);
	}
	return true;
}

  • bool Server::StartListen(boost::asio::io_context& context): This is a member function of the Server class, which is used to start the listening process of the server.

  • boost::asio::ip::tcp::endpoint ep(boost::asio::ip::tcp::v4(), port_);: Creates a TCP endpoint using an IPv4 address and the specified port number.

  • boost::asio::ip::tcp::acceptor accept(context, ep);: Creates a TCP receiver using the previously created I/O context and endpoint .

  • std::cout << "start listen:" << std::endl;: Output the message to start listening.

  • for (;; ) { ... }: infinite loop, used to continuously wait for the client to connect and process the session.

  • std::shared_ptr<boost::asio::ip::tcp::socket> socket(new boost::asio::ip::tcp::socket(context));: create a smart pointer to a tcp::socket that will handle the connection with the client.

  • accept.accept(*socket);: Wait for and accept a client connection, associating the new connection with the previously created socket object.

  • user_id_ = user_id_ + 1;: Increment the user ID so that each connection gets its own identifier.

  • std::cout << "the user_id " << user_id_ << " client connect, the ip: " << socket->remote_endpoint().address() << std::endl;: output a message for the client connection, including the user ID and the client's IP address.

  • auto t = std::make_shared<std::thread>([this, socket]() { … });: Create a thread for handling the client session. Inside the lambda, the Session function is called with the captured socket and the current user ID.

  • thread_set_.insert(t);: Add the created thread to the thread set so the main thread can wait for it to finish before exiting.

  • return true;: Return true, indicating that listening started successfully.

  • Notes on accept.bind(ep) and accept.listen(30) :

    • accept.bind(ep) : In the above code, this method is not called, because the accept object has passed the endpoint ep when it was created , so no explicit binding is required. Binding means binding a socket to a specific IP address and port, but in this case the binding is already done when the receiver is created.

    • accept.listen(30): Likewise, this method is not called in the code above. listen() puts the socket into the listening state, and its parameter sets the maximum length of the pending-connection queue. Here the acceptor constructor that takes an endpoint already performs the listen step, so there is no need to call listen() explicitly.

5.1.2. Error occurred! Error code: 10009. Message: The supplied file handle is invalid. [system:10009]

start listen:
have client connect,the ip:127.0.0.1
Error occured!Error code : 10009 .Message: The supplied file handle is invalid. [system:10009]
  • Some issues in the code can still cause the "The supplied file handle is invalid" error after a client connects. Possible causes and solutions include:

    • Resource competition: Since multiple threads access the socket object at the same time, it may cause resource competition and socket state inconsistency. Ensure proper synchronization when reading and writing to the socket, using mechanisms such as mutexes.

    • Socket Lifecycle: Ensures that sockets are properly closed when done using them. Check your code to make sure each thread closes the socket when it's done using it. Don't close a socket in one thread and continue using it in another thread.

    • Handle reuse: Make sure your sockets are not being used or reused more than once. Attempting to use a socket while it is already closed may result in an "Invalid file handle provided" error.

    • Thread Synchronization: Make sure your threads wait for other threads to complete before completing their execution. Use t->join() in the main function to wait for all threads to finish executing.

    • Other error conditions: 10009 errors can have several possible conditions, such as using an invalid socket, socket being closed but still in use, etc. You may need to examine the context of the error code in detail for more information.

Putting it all together, the problem may be caused by properly managing the lifecycle and state of sockets in a multi-threaded environment. Carefully review your code to ensure that sockets are being used and closed properly in each thread, and that proper synchronization mechanisms are used to avoid race conditions. If the problem persists, you may need to examine the code in each thread in more detail to isolate the problem.

	auto t = std::make_shared<std::thread>([this, socket]() {
		Session(socket);
		});

Why does the following version, which passes the socket by reference, not work?

	auto t = std::make_shared<std::thread>([&]() {
		this->Session(socket);
		});

With [&] the lambda captures socket by reference, but because the connection is handled on a separate thread, the referenced variable may already be invalid by the time the background thread runs, so the thread accesses a dead object. That is the likely cause of the error.

The correct approach is to capture socket by value (rather than by reference) in the lambda, which ensures the socket object is still valid when the thread executes. That is what the first version above does.

auto t = std::make_shared<std::thread>([this, socket]() {
    Session(socket);
});

  • The problem might be related to a race condition between threads. In C++, when you access shared variables in a multi-threaded environment, you need to ensure that there is no race condition where one thread modifies a resource while another thread accesses that resource causing undefined behavior.

  • In the above two ways of writing, there may be problems with dangling references. This is because an external variable (socket) is referenced in the Lambda function, but when the Lambda function is executed, the life cycle of the external variable may have ended, resulting in access to invalid resources.

  • In the first way of writing:

    • By capturing socket by value, the shared_ptr is copied into the lambda, so the socket it manages stays alive for as long as the thread needs it:

auto t = std::make_shared<std::thread>([this, socket]() {
    Session(socket);
});

  • In the second way of writing:
    • By capturing the reference, the reference of the socket object is passed into the Lambda function. However, when the background thread is executing, the main thread may have ended or destroyed the socket object, resulting in access to invalid resources.

In order to avoid these problems, it is generally recommended in multi-threaded programming to ensure that when a thread accesses an external resource, the life cycle of the external resource does not end during the execution of the thread. Such problems can be solved by proper synchronization mechanisms, life cycle management and avoiding dangling references.

5.2. Sending plain messages such as numbers or strings works directly; why use protobuf for structs and other protocol objects?

  • In network communication, data transmission needs to consider multiple factors, including data format, serialization and deserialization, network byte order, etc. When you only need to transmit common messages and simple data types (such as integers and strings), you can directly use the original data format for transmission. However, when you need to transfer complex data structures, objects, classes, nested data, etc., using a serialization protocol can be more convenient, safe, and efficient.

  • Protocol Buffers (protobuf) is a popular serialization library developed by Google for serialization and deserialization of structured data on different platforms. protobuf provides a mechanism to serialize structured data into a binary format, which can then be transferred and parsed between different systems. It has the following advantages:

    • Cross-platform and language support: Protocol Buffers supports multiple programming languages, including C++, Java, Python, C#, etc., enabling applications on different platforms to exchange data conveniently.

    • Efficient serialization and deserialization: The serialization and deserialization process of Protocol Buffers is efficient, the generated binary data is small, and the transmission efficiency is high.

    • Version compatibility: When the data structure changes, Protocol Buffers provides a backward and forward compatible mechanism, which makes it easier to evolve and upgrade the protocol.

    • Strong type support: Protocol Buffers uses a well-defined message structure to force users to follow a specific message format when encoding and decoding, avoiding some errors.

If you need to transfer complex data structures, especially if you need to exchange data across platforms and languages, using Protocol Buffers is a good choice. It provides a clear message definition syntax, efficient binary serialization and deserialization, and support for multiple languages.

5.2.1. Changing the string or number message into a class or more complex object

#include"server.h"

Server::Server() {
	port_ = Port;
	user_id_ = 0;
	thread_set_.clear();
}

void Server::Session(std::shared_ptr<boost::asio::ip::tcp::socket> socket, uint32_t user_id) {
	try {
		for (;;) {
			char ack[Buffer];
			memset(ack, '\0', Buffer);
			boost::system::error_code error;
			//read at most Buffer-1 bytes so ack stays null-terminated for printing
			size_t length = socket->read_some(boost::asio::buffer(ack, Buffer - 1), error);
			if (error == boost::asio::error::eof) {
				std::cout << "the user_id " << user_id << " connection closed by peer!" << std::endl;
				socket->close();
				break;
			}
			else if (error) {
				throw boost::system::system_error(error);
			}
			else {
				if (socket->is_open()) {
					std::cout << "the user_id " << user_id << " ip " << socket->remote_endpoint().address();
					std::cout << " send message: " << ack << std::endl;
					socket->send(boost::asio::buffer(ack, length));
				}
			}
		}
	}
	catch (boost::system::system_error& e) {
		std::cout << "Error occurred! Error code: " << e.code().value() << ". Message: " << e.what() << std::endl;
	}
}

bool Server::StartListen(boost::asio::io_context& context) {
	boost::asio::ip::tcp::endpoint ep(boost::asio::ip::tcp::v4(), port_);
	boost::asio::ip::tcp::acceptor accept(context, ep);

	std::cout << "start listen:" << std::endl;
	for (;;) {
		std::shared_ptr<boost::asio::ip::tcp::socket> socket(new boost::asio::ip::tcp::socket(context));
		accept.accept(*socket);
		user_id_ = user_id_ + 1;
		std::cout << "the user_id " << user_id_ << " client connect, the ip: " << socket->remote_endpoint().address() << std::endl;

		//capturing by reference here is unsafe (see section 5.1.2):
		//auto t = std::make_shared<std::thread>([&]() {
		//	this->Session(socket);
		//	});

		auto t = std::make_shared<std::thread>([this, socket]() {
			Session(socket, user_id_);
			});

		thread_set_.insert(t);
	}
	return true;
}

  • To send an instance of a structure or class, you need to use a serialization library such as **Protocol Buffers (protobuf)** to serialize the structure or class into a byte stream and then transmit it over the network. Here's how you can modify your code to support sending struct or class instances:
    • Define the struct or class: First, you need to define the struct or class you want to send. Let's take a sample struct as an example:
struct Message {
    int id;
    std::string content;
};

  • Use Protocol Buffers: When sending and receiving data, use Protocol Buffers for serialization and deserialization. First, define a .proto file to describe the structure of the message:
syntax = "proto3";

message Message {
    int32 id = 1;
    string content = 2;
}

  • Then use the Protocol Buffers compiler to generate C++ code, e.g. protoc --cpp_out=. message.proto:
    • Modify the session function: Modify the Server::Session function to support serialization and deserialization of structured messages.
#include "message.pb.h"  // Generated by the Protocol Buffers compiler

// ...

void Server::Session(std::shared_ptr<boost::asio::ip::tcp::socket> socket) {
    try {
        for (;;) {
            Message received_message;
            char buffer[Buffer];
            memset(buffer, '\0', Buffer);
            boost::system::error_code error;
            size_t length = socket->read_some(boost::asio::buffer(buffer, Buffer), error);

            if (error == boost::asio::error::eof) {
                // the client closed the connection
                std::cout << "connect close by peer!" << std::endl;
                break;
            }
            else if (error) {
                // some other error occurred
                throw boost::system::system_error(error);
            }
            else {
                // successfully read `length` bytes
                received_message.ParseFromArray(buffer, static_cast<int>(length));

                std::cout << "Received message from: " << socket->remote_endpoint().address() << std::endl;
                std::cout << "ID: " << received_message.id() << std::endl;
                std::cout << "Content: " << received_message.content() << std::endl;

                // handle the request
                // ...

                // serialize the message and send it back to the client
                std::string serialized_message;
                received_message.SerializeToString(&serialized_message);
                socket->send(boost::asio::buffer(serialized_message));
            }
        }
    }
    catch (boost::system::system_error& e) {
        std::cout << "Error occurred! Error code: " << e.code().value() << ". Message: " << e.what() << std::endl;
    }
}

This way, your server will parse the received serialized message into a Message struct, and send the corresponding serialized message back to the client after receiving the message.

Note that the sample code above assumes that you have defined the message structure using Protocol Buffers and generated the corresponding C++ code. Make sure to include the correct path to the header file and make appropriate modifications based on your actual structure and message format.

5.3. Error occurred! Error code: 10054. Message: The remote host forcibly closed an existing connection. [system:10054]

Error code 10054 "The remote host forcibly closed an existing connection", usually because the remote host (client) closed the connection with the server. This could be due to the client actively closing the connection, or an unexpected problem on the network that caused the connection to be interrupted unexpectedly.

In the code, when the client closes the connection, it catches the boost::asio::error::eof error in the Session function, then tries to close the socket, and breaks out of the loop. This part of the logic is correct and should cause the server side to close the connection and handle it properly.

However, error code 10054 can be caused by several factors, including network issues, timeouts, operating system configuration, and more. If you are sure that the logic in your code that handles connection closing is correct, then the problem may lie elsewhere.

  • Here are some possible solutions and debugging methods:

    • Check your network connection: Make sure your network connection is stable with no packet loss or other issues.

    • Check the client: If the problem only occurs on a specific client, check the client's network configuration and status to make sure there are no abnormalities.

    • Check firewalls and security software: Firewalls or security software may be interfering with network connections, make sure they are not blocking the connection.

    • Check timeout settings: If the server has a timeout set, make sure it's reasonable and doesn't close connections prematurely.

    • Check server-side resources: If there are too many server-side connections, it may cause resource exhaustion. Make sure the server has enough resources to handle the connection.

    • Catching exceptions: When catching exceptions, print as much detail as possible to better understand the problem. Output both the error code and the error message to make troubleshooting easier.

    • Logging and Debugging: Use logging and debugging tools to monitor network connections and interactions for more detailed insight into why connections were closed.

Ultimately, error code 10054 may have multiple causes that require comprehensive investigation and troubleshooting. If the problem still exists, you may need to further consider network configuration, server-side resources, connection timeout settings, etc. for troubleshooting.

5.4. std::shared_ptr<std::thread> t = std::make_shared<std::thread>(): the difference between constructing with and without a function argument, and how to use it

auto t = std::make_shared<std::thread>();

  • This code snippet creates a std::thread object but does not specify a function to execute, so it does not actually start a new thread.

  • std::make_shared is usually used to create smart pointers such as std::shared_ptr . In this context, std::make_shared constructs a std::thread object and returns a smart pointer to it, but the type to construct and the constructor arguments must be specified.

  • A std::thread needs the function to be executed passed as an argument, so that execution starts when the thread is created. If no function is specified, the std::thread object is still created but carries no work to do.

  • auto t = std::make_shared<std::thread>(); creates a std::shared_ptr<std::thread> object named t , but passes no arguments to std::make_shared , so no function is specified for the thread to execute.

Typically, creating a thread requires a callable (a function pointer, lambda, class member function, ordinary function, etc.) to be executed in the thread. Since no callable is provided in this snippet, the resulting std::thread object is default-constructed and idle: it represents no thread of execution and has no actual work content.

std::shared_ptr<std::thread> t = std::make_shared<std::thread>([this, socket] {
Session(socket, user_id_);
});

  • Each client session (Session) runs in a separate thread created with std::thread . This means client sessions are handled in separate threads and do not block one another.

  • When a client connects and sends a message, the Session function executes, and its loop continuously tries to read data (messages) from the client's socket . If there is no data to read, read_some blocks until data arrives. But since each client's session runs in its own thread, one client blocking does not affect the other clients' sessions.

  • That's why, when a single client keeps its connection open and keeps sending messages, the other clients' sessions are not blocked. Each session runs in a separate thread, unaffected by the others; while one client's session is waiting for data, the other sessions continue executing.

  • Note that although each client's session runs in an independent thread, race conditions and thread-safety issues can still arise between threads. In a multithreaded environment, shared resources must be handled carefully to avoid potential problems.

  • Although the Session function contains an infinite loop, each call runs in its own thread. Whenever a new client connects, a new thread is created and Session is called there, so the loops run in different threads, independent of each other.

  • This is why different clients' sessions do not block each other: although Session is invoked inside the loop in the StartListen function, each invocation runs in a different thread, so their execution is parallel.

  • In this code, a C++11 lambda expression is used to create the new thread; the body of the lambda executes in that thread. The capture list [this, socket] captures the current object pointer and the socket variable so the code in the new thread can use them.

  • In the new thread, the Session function is called to run the client's session logic. Since each client connection executes Session in its own thread, sessions of different clients are processed in parallel without blocking each other.

  • To sum up, the code handles multiple client connections at the same time by creating one thread per client session. This concurrent processing improves the server's performance and responsiveness.

5.5. void Server::Session(std::shared_ptr<boost::asio::ip::tcp::socket> socket, uint32_t user_id): why read_some should be written inside the for loop

  • read_some is a blocking function: when there is no data to read, it waits until data arrives or an error occurs. In this code, read_some is called inside the for loop and blocks until the buffer has data to read. When the client sends a message, read_some returns with the data, and the next iteration of the loop begins.

  • Even inside a loop, read_some blocks only the calling thread while waiting for data; other threads continue executing. This is what lets the server handle multiple client connections concurrently: one connection's blocking wait does not affect the processing of the other connections.

  • Therefore, although read_some is called inside the loop, it does not spin the loop; it blocks while there is no data to read and returns only when data arrives. This enables the server to continuously receive messages from multiple clients.

  • If read_some is not placed inside the loop, the client can effectively send a message only once:

    • Yes: if you put read_some outside the loop, each client connection's message is received and processed only once. After the server reads that first message, no code remains waiting for new data to arrive, so the server will not receive further messages from the client.

    • To receive messages continuously, put the entire message-reading logic inside a loop, so the server waits for and receives new messages from the client on every iteration. In this code, you can uncomment the for (;;) { … } part so that the server keeps receiving messages in a loop until the client disconnects or an error occurs.

    • This is because, with read_some called outside the loop, once the client's message has been successfully read there is no logic left waiting for new messages, so the server does not read or process subsequent ones.

    • To let the client send messages repeatedly and the server keep receiving and processing them, read_some must sit inside the loop: each iteration then attempts to read the next message, achieving continuous communication. The server can keep receiving and processing multiple messages on one connection until the client closes it.

6. std::make_shared and std::shared_ptr

 
shared_ptr<string> p1 = make_shared<string>(10, '9');
shared_ptr<string> p2 = make_shared<string>("hello");
shared_ptr<string> p3 = make_shared<string>();

Smart pointers were introduced in C++11, along with the template function std::make_shared, which returns a std::shared_ptr of the specified type:

// make_shared example
#include <iostream>
#include <memory>
 
int main () {
  std::shared_ptr<int> foo = std::make_shared<int> (10);
  // same as:
  std::shared_ptr<int> foo2 (new int(10));
 
  auto bar = std::make_shared<int> (20);
 
  auto baz = std::make_shared<std::pair<int,int>> (30,40);
 
  std::cout << "*foo: " << *foo << '\n';
  std::cout << "*bar: " << *bar << '\n';
  std::cout << "*baz: " << baz->first << ' ' << baz->second << '\n';
 
  return 0;
}

std::make_shared is a function template in the C++ standard library for creating objects managed by smart pointers (std::shared_ptr). Its role is to combine the creation of objects with the management of smart pointers to manage the life cycle of objects more safely and conveniently.

  • Specifically, the functions and meanings of std::make_shared are as follows:

    • Simplify object creation and management: when a std::shared_ptr is constructed directly from a raw pointer, the managed object and the control block (reference counter) are allocated separately. std::make_shared<T>(args...) allocates memory for both in a single step, which is more efficient and concise.

    • Reduce the number of memory allocations: std::make_shared allocates the memory required by the smart pointer object and the managed object in memory at one time, which can reduce the number of memory allocations, improve performance, and reduce memory fragmentation.

    • Avoid resource leaks: std::make_shared uses smart pointers, which automatically manage the life cycle of objects, ensuring that objects are destroyed in due course when they are no longer needed, and avoid resource leaks.

#include <memory>

int main() {
    // Create a smart pointer initialized to an int object
    std::shared_ptr<int> num_ptr = std::make_shared<int>(42);

    // Create a smart pointer to a dynamically allocated array
    // (note: the array form of std::make_shared requires C++20)
    std::shared_ptr<int[]> array_ptr = std::make_shared<int[]>(10);

    return 0;
}

In summary, std::make_shared is a recommended way to create and manage objects managed by smart pointers, which not only simplifies the code, but also provides better performance and resource management.

6.1, shared_ptr object creation method

  • Usually we have two ways to initialize a std::shared_ptr:
    • ① Through its own constructor.
    • ②Through std::make_shared.

6.1.2. What are the different characteristics of these two methods

shared_ptr is non-intrusive: the counter's value is not stored inside the shared_ptr itself but elsewhere, on the heap. When a shared_ptr is created from a raw pointer to a piece of memory (native memory, i.e. memory that no other shared_ptr yet points to), this counter is allocated, and the counter structure stays alive until all shared_ptr and weak_ptr instances are destroyed. The subtle point: when the last shared_ptr is destroyed, the managed memory is released, but weak_ptrs may still exist. In other words, the destruction of the counter may happen long after the managed object is destroyed.

#include <iostream>
#include <memory>
using namespace std;

class Object
{
private:
	int value;
public:
	Object(int x = 0) : value(x) {}
	~Object() {}
	void Print() const { cout << value << endl; }
};

int main()
{
	std::shared_ptr<Object> op1(new Object(10));                //① constructor from raw pointer
	std::shared_ptr<Object> op2 = std::make_shared<Object>(10); //② make_shared
	return 0;
}

6.1.3. What is the difference between these two creation methods

  • With the first method, op1 has three members: op1._Ptr , op1._Rep , op1._mD . op1._Ptr points to the Object object and op1._Rep points to the reference-counting structure, which itself has three members: _Ptr , _Uses , _Weaks . Its _Ptr points to the Object object, and _Uses and _Weaks are both 1. In effect, the heap is allocated twice: once to construct the Object object and once to construct the reference-counting structure.

  • With the second method, the heap is allocated only once: the size of the reference-counting structure plus the size of the Object object is computed, and one block of that size is opened up. _Ptr points to the Object object; the values of _Uses and _Weaks are 1.

6.1.4. Three advantages of std::make_shared

  • ① The heap is allocated only once, reducing the number of heap allocations and deallocations:

    • The biggest benefit of make_shared is reducing memory allocation to a single operation. If the drawbacks mentioned below are not important to you, this is almost the only reason to use make_shared . Another benefit is improved cache locality: with make_shared , the counter's memory and the object's memory sit next to each other on the heap, so operations that access both will suffer roughly half the cache misses of the two-allocation alternative. If cache misses are a problem for you, make_shared really deserves consideration.
  • ② To improve the hit rate, the object and the reference-counting structure occupy the same block:

    • Accesses hit quickly in the cache block: spatial locality means that after the object is accessed, the memory just before and after it tends to be accessed too, and since the object and the reference-counting structure are adjacent, the hit rate is very high.
    • The theoretical basis for the cache is the principle of program locality, which has temporal and spatial forms: data the CPU has just accessed is likely to be accessed again soon (time), and data near it is likely to be accessed soon as well (space). If recently accessed data is kept in the cache, the next access can be served directly from the cache, roughly an order of magnitude faster. When the data the CPU wants is in the cache it is called a hit (Hit); otherwise, a miss (Miss).

Execution order and exception safety are also issues that should be considered:

struct Object
{
	int i;
	Object(int x = 0) : i(x) {}
};
void doSomething(double d, std::shared_ptr<Object> pt);
double couldThrowException();
int main()
{
	doSomething(couldThrowException(), std::shared_ptr<Object>(new Object(10)));
	return 0;
}

Analyzing the above code, at least three things are done before the dosomething function is called:

  • ① Construct and allocate memory to Object.
  • ② Construct shared_ptr.
  • ③couldThrowException()。

C++17 introduced stricter rules for the order in which function arguments are evaluated, but before that the three steps above could legally be interleaved, for example in this order:

  • ①new Object()。
  • ②Call the couldThrowException() function.
  • ③ Construct shared_ptr and manage the memory opened up in step 1.

The problem with the above is that if step 2 throws an exception, step 3 never happens, so no smart pointer manages the memory allocated in step 1: the memory leaks, and the smart pointer is blameless, since it never came into existence.

This is why we use std::make_shared whenever possible: it welds step 1 and step 3 together, because you never know what might happen in between.

  • ③ Even when the order of evaluation is uncertain, the object is still managed:
    • If you build the call as doSomething(couldThrowException(), std::make_shared<Object>(10)); , the object and the reference-counting structure are constructed together, so even if an exception is thrown afterwards, the object will still be destroyed properly.

6.1.5. Disadvantages of using make_shared

When using make_shared , the most likely problem is that make_shared must be able to call the target type's constructor. Merely declaring make_shared a friend of the class may not be enough, because the target is actually constructed through an internal helper function, not by make_shared itself.

Another problem is the lifetime of the target memory (as distinct from the lifetime of the target object). As mentioned above, even after the object managed by a shared_ptr is released, the counter lives on until the last weak_ptr pointing at the target memory is destroyed. This matters when make_shared was used.

Here lies the problem: the memory occupied by the managed object and the heap memory occupied by the counter are managed as a single block, which means that even after the managed object is destroyed, its space remains allocated. The memory is not returned until all weak_ptrs are gone, at which point it is released together with the counter's memory. If your object is a bit large, a considerable amount of memory stays meaninglessly locked for a while.
(Figure: the shaded area is the memory of the object managed by shared_ptr ; it waits for the weak_ptr count to reach 0 and is then released together with the counter's memory, the light orange area above it.)

7. Summary: the advantages and disadvantages of synchronous reading and writing

  • The disadvantage of synchronous reading and writing is that both operations block. If the client sends no data, the server's read blocks, leaving the server stuck waiting.
  • You can open a new thread to handle reading and writing for each new connection, but the number of threads a process can create is limited, on the order of 2048. On Linux the limit can be raised with ulimit, but too many threads also means more time consumed by context switching.
  • The server and client here work in a request-response pattern, while real scenarios use full-duplex communication; sending and receiving should be separated.
  • The server and client do not take sticky-packet handling (message framing) into account.

To sum up, these are the problems with our server and client. To solve them, I will keep improving the design in the next article, mainly by replacing the scheme above with asynchronous reading and writing.

Of course, the method of synchronous reading and writing also has its advantages. For example, when the number of client connections is small and the concurrency of the server is not high, the method of synchronous reading and writing can be used. Using synchronous reading and writing can simplify the coding difficulty.


Origin blog.csdn.net/qq_44918090/article/details/132341589