Network programming: a simplified single-threaded TCP echo server on a LAN


This post presents an echo program written in C++ on Linux. The echo service sends the data it receives back to the client. The main techniques used are the Reactor pattern, sockets, non-blocking IO, processes, and single-threaded server programming.

1. Basic function of the echo server

The server applies ROT13 encryption to the content the client sends, then sends the result back to the client.

2. The server IO model

The server uses the non-blocking IO + IO multiplexing model (the Reactor pattern), with epoll as the IO multiplexing mechanism under Linux. The five IO models under Linux are introduced first.

2.1 IO model

  • Blocking IO model
    The calling process blocks until the IO operation completes.
  • Non-blocking IO model
    When the kernel data is not ready, the calling process polls instead of being put to sleep.
  • IO multiplexing model
    The process calls the select, poll, or epoll system call; if the kernel data is not ready, the process blocks on one of these three system calls rather than on the IO call itself.
  • Signal-driven IO model
    When the data is ready, the kernel sends SIGIO to notify the calling process; while the data is not ready, the calling process is not blocked.
  • Asynchronous IO model
    The calling process tells the kernel to start an operation and lets the kernel notify it after the entire operation completes (including copying the data from the kernel buffer to user space).

In summary, the difference between blocking and non-blocking is whether the call returns immediately; the difference between synchronous and asynchronous is whether the calling process is blocked while data is copied from kernel space to user space.

2.2 Why combine IO multiplexing with non-blocking IO?

The select manual page gives the answer:

Under Linux, select() may report a socket file descriptor as “ready for reading”, while nevertheless a subsequent read blocks. This could for example happen when data has arrived but upon examination has wrong checksum and is discarded. There may be other circumstances in which a file descriptor is spuriously reported as ready. Thus it may be safer to use O_NONBLOCK on sockets that should not block.

In other words, if a new segment arrives in the socket receive buffer, select reports the socket descriptor as readable. But if the protocol stack then finds that the new segment's checksum is wrong and discards it, there is no readable data by the time read is called. If the socket is not set to non-blocking, read will block the current thread; if the socket is set to non-blocking, read simply returns an error when there is no data to read.

2.3 epoll trigger modes

2.3.1 level trigger and edge trigger

select supports only level trigger; epoll supports both level trigger and edge trigger, and level trigger is the default. So what is the difference between them?
Level trigger: the event fires as long as the condition holds (if the data has not been read, the kernel keeps notifying you).
Edge trigger: the event fires only when the state changes.

Edge trigger has a pitfall: when a file descriptor becomes readable, the buffered data must be read completely, i.e. by calling read() repeatedly until it returns EAGAIN or returns 0. If the data is not read to the end, the kernel considers the descriptor's state unchanged and will not report it again, and the descriptor effectively goes dead.

2.3.2 select, poll, epoll

  • Problems with select
    (1) the size of FD_SET is limited;
    (2) it polls the sockets, traversing the whole FD_SET on every call, which is inefficient;
    (3) it needs to maintain a large fd structure that is copied between user mode and kernel mode, at a large cost.
  • Problems with poll
    (1) a large fd array is copied between user mode and kernel mode, at a large cost;
    (2) the traversal problem, as above.
  • Why epoll is efficient: it separates frequent operations from infrequent ones
    (1) epoll_create creates a new epoll descriptor;
    (2) epoll_ctl adds or removes the connections to be monitored;
    (3) epoll_wait returns the active connections.

epoll has three key elements: a red-black tree, mmap, and a linked list.
Internally, epoll uses mmap to map an address in kernel space and an address in user space to the same physical memory, reducing data copying between kernel mode and user mode.
The red-black tree stores the sockets epoll is monitoring, giving good performance when sockets are added and removed.

2.4 Possible problems and solutions

2.4.1 Server

  1. Consider this situation: a listening server is started, a connection request arrives, and a child process is forked to handle the client request. Then, for some reason, the listening server terminates (the child continues to serve the clients on the existing connections) and the listening server is restarted.

By default, when the restarted server calls socket, bind, and listen again, the bind call fails because it tries to bind a port that still has an existing connection (the one being handled by the previously forked child process).

Solution: set the SO_REUSEADDR socket option between the socket and bind calls; bind will then succeed. (All TCP servers should specify this option so that the server can be restarted in this situation.)

  2. Consider this situation: because the server has a large number of concurrent connections, accept() returns the EMFILE error when a new connection arrives. What should be done?

This means the process has reached its file descriptor limit and cannot create a descriptor for the new socket connection. But because this "connection" never got a file descriptor, we cannot close() it either. If the program keeps running, epoll_wait() returns immediately, because the new connection is still pending and the listening fd is still readable; this drives the server program into a busy loop and affects the service of other connections.

Solution: since the file descriptor hard limit exists, we can set ourselves a slightly lower soft limit; if the number of connections exceeds the soft limit, actively close new connections.

2.4.2 Client

When using non-blocking IO, the application is usually divided into multiple tasks using several processes (via fork) or several threads. Here the client process is split into two: the child process copies text from standard input to the server, and the parent process copies the server's replies from the socket to standard output, as shown in the figure. Parent and child share the same socket: the child writes into the socket and the parent reads from it; the two file descriptors refer to the same socket.
[Figure: the client's parent and child processes sharing one socket]
######## echo.h ############

#ifndef _ECHO_
#define _ECHO_

#include <sys/types.h> // common data type definitions
#include <sys/socket.h>
#include <sys/epoll.h>
#include <fcntl.h>
#include <errno.h>     // errno, EAGAIN (the original included <error.h> by mistake)
#include <arpa/inet.h> // address conversion
#include <unistd.h>    // OS interface: fork, read, write, close, ...
#include <stdlib.h>    // common system functions: free, exit, ...
#include <stdio.h>
#include <string>
#include <cstring>

using namespace std;

#define MAX_EVENT_NUMBER 1024
#define BUF_SIZE 300
#define EPOLL_SIZE 100 // size hint passed to epoll_create

#define PORT "2019"  // atoi(PORT) converts the string to an integer
#define SERVERIP "127.0.0.1"


int setNonblocking(int sockfd){
	int flags = fcntl(sockfd,F_GETFL); // get the old file status flags
	if(flags == -1){
		perror("fcntl F_GETFL error");
		exit(-1);}
	if(fcntl(sockfd,F_SETFL,flags | O_NONBLOCK) == -1){ // the original omitted F_SETFL
		perror("fcntl F_SETFL error");
		exit(-1);}
	return flags;
}

void addfd(int epollfd,int sockfd,int state){
	setNonblocking(sockfd); // non-blocking IO + IO multiplexing (section 2.2)
	struct epoll_event event;
	event.data.fd = sockfd;
	event.events = state;
	epoll_ctl(epollfd,EPOLL_CTL_ADD,sockfd,&event);
}

void handleEvent(struct epoll_event * EventLoop, int eventsNumber,int epollfd,int listenfd){
	int confd = 0;
	struct sockaddr_in clientaddr;
	char buf[BUF_SIZE];
	for(int i = 0; i < eventsNumber; ++i){
		if(EventLoop[i].data.fd == listenfd){ // a new connection is arriving
			socklen_t clientlength = sizeof(clientaddr);
			confd = accept(listenfd,(struct sockaddr*) &clientaddr,&clientlength);
			if(confd == -1)
				continue;
			addfd(epollfd,confd,EPOLLIN);
		}
		else{
			confd = EventLoop[i].data.fd;
			bzero(buf,BUF_SIZE); // the original zeroed strlen() bytes of an uninitialized buffer
			int len = sprintf(buf,"After ROT13: ");
			int nread = read(confd,buf+len,BUF_SIZE-len-1); // leave room for '\0'
			if(nread < 0){
				if(errno == EAGAIN || errno == EWOULDBLOCK)
					continue; // non-blocking socket: no data after all (section 2.2)
				perror("read error");
				exit(-1);}
			if(nread == 0){ // the client closed the connection
				epoll_ctl(epollfd,EPOLL_CTL_DEL,confd,NULL);
				close(confd);
				continue;}
			for(int j = len; j < len + nread; ++j){ // ROT13-encrypt only the received bytes
				if((buf[j] >= 'a' && buf[j] <= 'm') || (buf[j] >= 'A' && buf[j] <= 'M'))
					buf[j] = char(buf[j] + 13);
				else if((buf[j] >= 'n' && buf[j] <= 'z') || (buf[j] >= 'N' && buf[j] <= 'Z'))
					buf[j] = char(buf[j] - 13);
			}
			buf[len+nread] = 0; // null-terminate so the client can printf the reply
			write(confd,buf,len+nread+1);
		}
	}
}


#endif

########## echoserver.cpp #############

#include "echo.h"

int main(){
	int error = 0;
	char buf[BUF_SIZE];
	struct sockaddr_in serveraddr;
	bzero(&serveraddr,sizeof(serveraddr));
	serveraddr.sin_family = AF_INET;
	inet_aton(SERVERIP,&serveraddr.sin_addr);
	serveraddr.sin_port = htons(atoi(PORT));
	int listenfd = -1;
	
	if((listenfd = socket(PF_INET,SOCK_STREAM,0)) == -1){
		perror("socket error");
		exit(-1);
	}

	int optval = 1;
	socklen_t optlen = sizeof(optval);
	// pass the address of optval; the original cast its value to a pointer
	setsockopt(listenfd,SOL_SOCKET,SO_REUSEADDR,(void*) &optval,optlen);
	if((bind(listenfd,(struct sockaddr *)&serveraddr,sizeof(serveraddr))) == -1){
		perror("bind error");
		exit(-1);
	}

	int listenopt = 10;
	char * ptr;
	if((ptr = getenv("LISTENQ")) != NULL)
		listenopt = atoi(ptr);	
	if((listen(listenfd,listenopt)) == -1){
		perror("listen error");
		exit(-1);
	}
	
	int epollfd = epoll_create(EPOLL_SIZE); // size hint for the number of monitored descriptors
	struct epoll_event EventLoop[MAX_EVENT_NUMBER];
	addfd(epollfd,listenfd,EPOLLIN); // register the listening socket via epoll_ctl

	int eventsNumber = 0;
	
	while(1){
		eventsNumber = epoll_wait(epollfd,EventLoop,MAX_EVENT_NUMBER,-1); // -1 means block until an event arrives; the original passed EPOLL_SIZE, smaller than the array
		if(eventsNumber == -1){
			perror("epollwait error");
			exit(-1);
		}
		handleEvent(EventLoop,eventsNumber,epollfd,listenfd);
	}


	close(listenfd);
	close(epollfd);
}


################ echocli.cpp #######################

#include "echo.h"

void readfrom(int clientsocket,char * buf);
void writeto(int clientsocket,char * buf);

int main(){
	int clientsocket;
	char buf[BUF_SIZE];
	if((clientsocket = socket(PF_INET,SOCK_STREAM,0))== -1){
		perror("client socket error");
		exit(-1);
	}

	struct sockaddr_in serveraddr;
	bzero(&serveraddr,sizeof(serveraddr));
	serveraddr.sin_family = AF_INET;
	inet_aton(SERVERIP,&serveraddr.sin_addr);
	serveraddr.sin_port = htons(atoi(PORT));

	if((connect(clientsocket,(struct sockaddr *) & serveraddr,sizeof(serveraddr))) == -1){
		perror("connect error");
		exit(-1);
	}

	pid_t pid = fork();
	if(pid == 0){
		writeto(clientsocket,buf);
	}
	else{
		readfrom(clientsocket,buf);
	}
	close(clientsocket);
	return 0;
}

void readfrom(int clientsocket,char * buf){
	while(1){
		ssize_t n = read(clientsocket,buf,BUF_SIZE);
		if(n == 0) // the server closed the connection: normal exit, not an error
			exit(0);
		if(n < 0){
			perror("client read error");
			exit(-1);
		}
		printf("%s",buf); // the server's reply is null-terminated
	}
}

void writeto(int clientsocket,char * buf){
	while(1){
		fgets(buf,BUF_SIZE,stdin);
		if(!strcmp(buf,"exit\n")){
			shutdown(clientsocket,SHUT_WR); // half-close: stop sending, but keep receiving
			return;
		}
		write(clientsocket,buf,strlen(buf));
	}
}

Summary

This code implements only the basic echo function and is far from perfect; it will be improved in follow-up posts.
This post draws on many bloggers' articles. Because the notes were originally copied into a notebook, the source links can no longer be found. If you feel your rights have been infringed, please message me privately and I will remove the content immediately.

Origin blog.csdn.net/YoungSusie/article/details/91583147