Network programming with sockets (7) - a multithreaded version of a simple TCP network program

Multiple client connections are supported by creating one service thread per connection:

server.c:
#include <stdio.h>
#include <pthread.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>
#include <string.h>
#include <sys/types.h>
#include <arpa/inet.h>
#include <stdlib.h>

#define MAX 128

typedef struct {
    int sock;
    char ip[24];
    int port;
} net_info_t;

// Create a TCP socket, bind it to ip:port and start listening; returns the listening fd.
int Startup(char* ip,int port){
    int sock = socket (AF_INET, SOCK_STREAM, 0);
    if(sock < 0){
        printf("socket error!\n");
        exit(2);
    }
    struct sockaddr_in local;
    memset(&local,0,sizeof(local));
    local.sin_family = AF_INET;
    local.sin_addr.s_addr = inet_addr(ip);
    local.sin_port = htons(port);

    if(bind(sock,(struct sockaddr*)&local,sizeof(local)) < 0){
        printf("bind error!\n");
        exit(3);
    }
    
    if(listen(sock,5) < 0){
        printf("listen error!\n");
        exit(4);
    }

    return sock;
}

// Echo loop for one client: read a message and write it straight back.
void service(int sock,char* ip,int port){
    char buf[MAX];
    while(1){
        buf[0] = 0;
        ssize_t s = read(sock,buf,sizeof(buf)-1);
        if(s > 0){
            buf[s] = 0;
            printf("[%s:%d] say# %s\n",ip,port,buf);
            write(sock,buf,strlen(buf));
        }
        else if(s == 0){
            printf("client [%s:%d] quit!\n",ip,port);
            break;
        }
        else{
            printf("read error!\n");
            break;
        }
    }
}

// Thread entry point: serve one client, then release the connection resources.
void* thread_service(void* arg){
    net_info_t* p = (net_info_t*)arg;
    service(p->sock,p->ip,p->port);
    close(p->sock);
    free(p);
    return NULL;
}

int main(int argc,char* argv[]){
    if(argc != 3){
        printf("Usage:%s [ip] [port]\n",argv[0]);
        return 1;
    }
    int listen_sock = Startup(argv[1],atoi(argv[2]));
    
    struct sockaddr_in peer;
    char ipBuf[24];
    for(;;){
        ipBuf[0] = 0;
        socklen_t len = sizeof(peer);
        int new_sock = accept(listen_sock,(struct sockaddr*)&peer,&len);
        if(new_sock < 0){
            printf("accept error!\n");
            continue;
        }
        inet_ntop(AF_INET,(const void*)&peer.sin_addr,ipBuf,sizeof(ipBuf));
        int p = ntohs(peer.sin_port);
        printf("get a new connect,[%s:%d]\n",ipBuf,p);

        net_info_t *info = (net_info_t*)malloc(sizeof(net_info_t));
        if(info == NULL){
            perror("malloc");
            close(new_sock);
            continue;
        }
        info->sock = new_sock;
        strcpy(info->ip,ipBuf);
        info->port = p;

        pthread_t tid;
        if(pthread_create(&tid,NULL,thread_service,(void*)info) != 0){
            printf("pthread_create error!\n");
            close(new_sock);
            free(info);
            continue;
        }
        pthread_detach(tid);
    }
    return 0;
}
client.c:
#include <stdio.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <unistd.h>
#include <string.h>
#include <sys/types.h>
#include <stdlib.h>

#define MAX 128

int main(int argc,char* argv[]){
    if(argc != 3){
        printf("Usage:%s [ip] [port]\n",argv[0]);
        return 1;
    }
    int sock = socket (AF_INET, SOCK_STREAM, 0);
    if(sock < 0){
        printf("socket error!\n");
        return 2;
    }

    struct sockaddr_in server;
    memset(&server,0,sizeof(server));
    server.sin_family = AF_INET;
    server.sin_port = htons(atoi(argv[2]));
    server.sin_addr.s_addr = inet_addr(argv[1]);

    if(connect(sock,(struct sockaddr*)&server,sizeof(server)) < 0){
        printf("connect error!\n");
        return 3;
    }

    char buf[MAX];
    while(1){
        printf("please Enter# ");
        fflush(stdout);
        ssize_t s = read(0,buf,sizeof(buf)-1);
        if(s > 0){
            buf[s-1] = 0;                // strip the trailing newline
            if(strcmp("quit",buf) == 0){
                printf("client quit!\n");
                break;
            }
            write(sock,buf,strlen(buf));
            s = read(sock,buf,sizeof(buf)-1);
            if(s <= 0){                  // server closed the connection or read failed
                printf("server closed!\n");
                break;
            }
            buf[s] = 0;
            printf("server Echo# %s\n",buf);
        }
    }
    close(sock);
    return 0;
}
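
To build and try the two programs (assuming gcc on Linux; the server must be linked with the pthread library):

gcc server.c -o server -pthread
gcc client.c -o client
./server 127.0.0.1 8080      # terminal 1
./client 127.0.0.1 8080      # terminals 2, 3, ... - each client gets its own service thread
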
  • Advantages and disadvantages of the multithreaded version

Advantages:

    (1) It can handle multiple client requests concurrently;

    (2) The code is relatively simple and the development cycle is short.

Disadvantages:

    (1) A thread is created only when a connection arrives, and creating a thread takes time, which hurts performance;

    (2) Each thread of the multithreaded server occupies resources (stack, kernel structures), so the number of clients it can serve is limited;

    (3) As the number of threads in the multithreaded server grows, the pressure on the CPU increases and scheduling takes longer, which also degrades performance;

    (4) The stability of the multithreaded server is poor: because all threads share one address space, a thread-safety bug in any of them can bring down the whole server.

  • Process pool and thread pool

1. Pool

Since the hardware resources of a server are relatively "abundant", a very direct way to improve its performance is to trade space for time: "waste" some of the server's hardware resources in exchange for running efficiency. This is the idea behind pools.

A pool is a collection of resources that are created and initialized when the server starts; this is called static resource allocation.

When the server enters its normal operating stage, i.e. starts processing client requests, any resources it needs can be taken directly from the pool instead of being allocated dynamically. Obviously, taking a resource straight from the pool is much faster than allocating it on demand, because the system calls that allocate system resources are expensive.

When the server has finished handling a client connection, it puts the associated resources back into the pool instead of making a system call to release them. In effect, the pool is an application-level facility through which the server manages system resources; it avoids frequent trips into the kernel and thereby improves efficiency.

There are many types of pools; the common ones are process pools, thread pools, and memory pools.
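
As a minimal illustration of the idea (not part of the server above), a pool of pre-allocated buffers could look like the sketch below; all names (pool_init, pool_get, pool_put, POOL_SIZE) are hypothetical:

#include <stdlib.h>

#define POOL_SIZE 16
#define BUF_SIZE  4096

// All buffers are allocated once at startup; pool_get/pool_put only move
// pointers, so the hot path never calls malloc/free. Not thread-safe:
// a pool shared by several threads would also need a lock around get/put.
static char *pool[POOL_SIZE];
static int   pool_top;                 // number of free buffers on the stack

void pool_init(void){
    for(pool_top = 0; pool_top < POOL_SIZE; pool_top++)
        pool[pool_top] = malloc(BUF_SIZE);
}

char* pool_get(void){                  // returns NULL when the pool is empty
    return pool_top > 0 ? pool[--pool_top] : NULL;
}

void pool_put(char* buf){              // hand a buffer back for reuse
    if(pool_top < POOL_SIZE)
        pool[pool_top++] = buf;
}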

2. Process pool && thread pool

In object-oriented programming, creating and destroying an object is a relatively complex and time-consuming process, so to improve a program's running efficiency we should create and destroy objects, especially very resource-hungry ones, as few times as possible.

We can therefore create a process pool (or thread pool): put some processes (threads) into it in advance, take one directly when it is needed, and return it to the pool when the work is done. This saves the time of creating and destroying processes (threads), at the cost of some extra bookkeeping overhead.

With a thread pool or process pool, the work of managing the processes and threads is handed over to the pool itself, so the programmer does not have to manage each thread or process individually.

The benefit of process pools and thread pools is that they cut the time spent creating and destroying processes and threads, thereby improving efficiency.

Process pools and thread pools are similar, so here we take the process pool as an example. Unless otherwise specified, the descriptions of process pools below also apply to thread pools.

A process pool is a group of child processes pre-created by the server; the number of these child processes is typically between 3 and 10 (this is only a typical figure). The number of threads in a thread pool should be roughly the same as the number of CPUs.

All child processes in the process pool run the same code and have the same properties, such as priority and process group ID (PGID).
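
A minimal pre-fork sketch (hypothetical names, error handling omitted) showing how such a pool of identical children could be created at startup:

#include <unistd.h>
#include <sys/types.h>

#define POOL 4

// Fork POOL identical children up front; each child loops in worker(),
// inheriting the listening socket (and priority, PGID, ...) from the parent.
void create_process_pool(int listen_sock, void (*worker)(int)){
    int i;
    for(i = 0; i < POOL; i++){
        pid_t pid = fork();
        if(pid == 0){            // child: run the worker loop and never return
            worker(listen_sock);
            _exit(0);
        }
        // pid > 0: parent keeps forking; pid < 0 would need error handling
    }
}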

When a new task arrives, the main process selects one of the child processes in the pool to serve it. Compared with dynamically creating a child process, the cost of picking an already existing one is much smaller. As for how the main process chooses which child will serve the new task, there are two approaches:

1) The main process actively selects a child process with some algorithm. The simplest and most commonly used algorithms are random selection and Round Robin.

2) The main process and all child processes synchronize through a shared work queue, and the child processes sleep on that queue. When a new task arrives, the main process adds it to the work queue. This wakes the child processes waiting for work, but only one of them gets to "take over" the new task; it removes the task from the queue and executes it, while the other children continue to sleep on the queue (see the sketch after this list).
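
For a thread pool, approach 2) maps naturally onto a mutex plus a condition variable. A minimal sketch (not from the original article; queue_push/queue_pop and the plain int "task" are hypothetical, and in a real server the task would typically be an accepted socket fd):

#include <pthread.h>

#define QMAX 64

static int             task_queue[QMAX];
static int             head, tail, count;     // simple ring buffer of tasks
static pthread_mutex_t qlock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  qcond = PTHREAD_COND_INITIALIZER;

// Main thread: append a task and wake exactly one sleeping worker.
// (A full queue is not handled in this sketch.)
void queue_push(int task){
    pthread_mutex_lock(&qlock);
    task_queue[tail] = task;
    tail = (tail + 1) % QMAX;
    count++;
    pthread_cond_signal(&qcond);               // one waiting worker takes over the task
    pthread_mutex_unlock(&qlock);
}

// Worker threads: sleep on the queue until a task is available, then take it.
int queue_pop(void){
    pthread_mutex_lock(&qlock);
    while(count == 0)
        pthread_cond_wait(&qcond, &qlock);     // the other workers keep sleeping here
    int task = task_queue[head];
    head = (head + 1) % QMAX;
    count--;
    pthread_mutex_unlock(&qlock);
    return task;
}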

Once a child process has been selected, the main process still needs some notification mechanism to tell the target child that a new task needs handling, and to pass it the necessary data. The simplest way is to set up a pipe between the parent and each child in advance and run all the inter-process communication over it. Passing data between a parent thread and its child threads is much simpler, because the data can be defined as global variables, which are by nature shared by all threads.
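
A rough sketch of that pipe-based notification (hypothetical helper names; in a real server the connected socket itself would also have to be handed to the child, e.g. over a Unix domain socket, which this sketch does not show):

#include <unistd.h>

// One pipe per child, created with pipe(fd) before fork(): the child keeps
// fd[0] (the read end) open, the parent keeps fd[1] (the write end).

// Parent side: tell the chosen child that a new task is waiting.
void notify_child(int write_fd){
    char token = 'x';
    write(write_fd, &token, 1);
}

// Child side: block until the parent signals, then handle the task.
void wait_for_task(int read_fd){
    char token;
    read(read_fd, &token, 1);        // sleeps until the parent writes a byte
    // ... fetch the new connection and serve it ...
}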

Thread pools are mainly used for:

1) Workloads that need a large number of threads and where each task finishes quickly. For example, a web server serving page requests is a very good fit for a thread pool, because each task is small but the number of tasks is huge. For long-lived tasks, such as a Telnet connection, the advantage of a thread pool is much less obvious, because the session time is far longer than the thread creation time.

2) Performance-critical applications, such as servers that must respond to client requests very quickly.

3) Applications that must absorb sudden bursts of large numbers of requests without the server creating a large number of threads in response.



