Architecture Construction - Practice

Remaining problems:

Afternoon:
1. Thread pool (see the sketch below)
2. Read-write separation: AOP
3. Guava Cache and Spring Cache
4. Send SMS
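
For items 1 and 4, a minimal sketch of a bounded thread pool used to send SMS asynchronously is given below; the pool sizes, queue capacity, and the SMS call are illustrative assumptions, not the project's actual code.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class SmsSender {

    // bounded pool: 4 core threads, at most 8, queue of 1000 waiting tasks
    private final ThreadPoolExecutor pool = new ThreadPoolExecutor(
            4, 8,
            60, TimeUnit.SECONDS,
            new ArrayBlockingQueue<Runnable>(1000),
            new ThreadPoolExecutor.CallerRunsPolicy());  // back-pressure instead of dropping tasks

    public void sendAsync(String phone, String text) {
        pool.execute(() -> {
            // the call to the real SMS gateway would go here (hypothetical)
            System.out.println("send to " + phone + ": " + text);
        });
    }

    public void shutdown() {
        pool.shutdown();
    }
}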


Cache framework, asynchronous queue framework, sub-database/sub-table framework, high-availability framework (combined with studying Dubbo's high-availability implementation)

 

 

1. Based on the characteristics and requirements of different industries, carry out detailed design, system sizing, and traffic estimation

2. Technology selection: be able to provide technical solutions for most requirements

3. Be able to guide test engineers on test plans, test tools, and test techniques

4. Be able to guide operations and maintenance on hardware, the network environment, which base software is required, and optimization

5. Master the installation, configuration, and optimization of the system and each of its components

6. Understand the technical level required by the project, and be able to select staff whose skills match it

7. Keep abreast of the latest technology developments

 

For example: performance measurement, diagnosis, and optimization; tuning capabilities for the JVM, concurrency, Redis, MQ, the network, and so on

 

There are many open-source RPC frameworks; pick one and use it, then gradually add monitoring, rate limiting, service degradation, and so on.
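
As one illustration of the rate-limiting-plus-degradation idea mentioned above, here is a minimal sketch using Guava's RateLimiter in front of a remote call; the 100 permits/second figure and the fallback value are assumptions for illustration, not the project's actual code.

import com.google.common.util.concurrent.RateLimiter;

public class RateLimitedInvoker {

    // allow roughly 100 requests per second through to the remote service
    private final RateLimiter limiter = RateLimiter.create(100.0);

    public String invoke() {
        if (!limiter.tryAcquire()) {
            return "degraded-response";   // simple degradation when over the limit
        }
        return callRemoteService();       // the real RPC call would go here
    }

    private String callRemoteService() {
        return "ok";
    }
}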

 

In business scenarios with very large traffic, a microservice architecture is very beneficial; when traffic is not that large, ordinary RPC and SOA are basically sufficient.

 

 

====================== Construction Practice ======================

1. Summarize the architecture development process and express it fluently.
2. You need to record your own task progress and related achievements.

To do:
- System scale (daily active users) and hardware evaluation
- Detailed design and technology selection (microservices)
- Test tools and test solutions (covering the various technologies, single node and cluster)
- Cutting-edge technology


【high performance】

 

In the early stage of a website, everything is often deployed on one machine: web application, database, and file server. As the number of users grows, the web application server, database server, and file server are separated onto different servers. Then, according to where the performance bottleneck lies, techniques such as caching, horizontal scaling of the application, read-write separation of the data, and asynchronous processing are introduced. When these measures are no longer enough, especially when reads and writes cannot keep up with massive data volumes, it is necessary to split databases and tables (sharding) or adopt a microservice architecture.
Figure: To be added
 

 

The initial structure of the website

Description: (1) To ensure high availability, each component is clustered even in the early stage of the website. (2) OpenResty will be considered later, when the business (web application) Nginx layer is added.

Access layer (Nginx*2 + Keepalived) + Nginx cluster + Tomcat cluster + single MySQL database (read-write separation) + multi-level cache

1. Access layer (traffic load layer + core Nginx layer)

Function:

Forward traffic to the core Nginx layer through LVS + HAProxy to achieve load balancing of traffic.

At the core Nginx layer, general functions such as traffic grouping, content caching, request-header filtering, failover, rate limiting, and firewalling can be implemented.

Here, the traffic load layer is not used; high availability of the core Nginx layer is achieved through Keepalived.

 

Software/Technology Selection: load balancer (Nginx, F5, LVS, HAProxy), high-availability hot-standby solution (Keepalived)

 

Scheme planning: Nginx*2 + Keepalived*2

VIP/listening port   IP              Hostname       Nginx port   Default role
192.168.1.100/88     192.168.1.111   edu-proxy-01   80           Master
                     192.168.1.112   edu-proxy-02   80           Slave

Note: due to limited machines, in the test scenario the load balancer and the Tomcat cluster are configured on the same machines; the actual hostnames are edu-web-01 and edu-web-02.

 

Practice Program/Documentation:

Installation process: Refer to Keepalived+Nginx to achieve high-availability web load balancing.
Installed and configured software: /usr/local/nginx /usr/local/keepalived

Configuration file location: /etc/keepalived
Related components/configuration files: see attachment

Related commands:

# /usr/local/nginx/sbin/nginx                        (start Nginx)

# /usr/local/nginx/sbin/nginx -s reload              (reload the Nginx configuration)

# service keepalived stop/start/restart              (stop/start/restart the Keepalived service)
Supplement: the load balancing and reverse proxy part of "100-million-level traffic", to be added to the Nginx basics (reverse proxy / load balancing / page cache).

 

2. Business Nginx layer

Function: for example, for the product details page, business logic can be implemented in the business Nginx, or requests can be reverse-proxied to a back-end cluster such as Tomcat.

In this layer, content compression (its purpose is to reduce CPU pressure on the core Nginx by distributing it across the business Nginx instances), A/B testing, and degradation can be implemented.

 

Software/Technology Selection: Nginx, GlassFish

 

Program planning:

IP              Hostname       Port
192.168.1.106   edu-nginx-01   80
192.168.1.107   edu-nginx-02   80
192.168.1.108   edu-nginx-03   80

 

Installation and configuration best practices: refer to the relevant content of this blog post

Software installation location: /usr/local/nginx

Configuration file location: /usr/local/nginx/conf/nginx.conf (contains the load-balancing configuration)

Related components/configuration files: see nginx.conf (business Nginx configuration) in the nginx+keepalived attachment

Related commands:

#/usr/local/nginx/sbin/nginx -s reload

# service keepalived stop/start/restart

 

3. Web server cluster

Function:

Software/Technology Selection: Tomcat, GlassFish, WebLogic, WebSphere, JBoss

 

Program planning:

IP              Hostname     Port
192.168.1.111   edu-web-01   8081
192.168.1.112   edu-web-02   8081

 

Installation and configuration best practices: refer to the relevant content of this blog post

Software installation location: /usr/local/src/tomcat7

Configuration file location: /usr/local/src/tomcat7/conf/server.xml (configure the port number, root directory, etc.)

Related components/configuration files: see attachment

Related commands:

 

# /usr/local/src/tomcat7/bin/startup.sh & tail -f /usr/local/src/tomcat7/logs/catalina.out      (start Tomcat and follow the startup log)
Dynamic/static separation: because OpenResty is being considered, static resources are placed on the Nginx web servers; refer to: Nginx+Tomcat load balancing configuration

Additional questions:
3.1 Session consistency issues

Three methods of session sharing in a Tomcat cluster; a sketch of one of them (Redis-backed sessions) is given below.
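
As one possible approach (an assumption, not necessarily the one used in this project), sessions can be stored in the Redis planned below via Spring Session; the host/port and class names are illustrative only.

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.RedisConnectionFactory;
import org.springframework.data.redis.connection.jedis.JedisConnectionFactory;
import org.springframework.session.data.redis.config.annotation.web.http.EnableRedisHttpSession;

// Replaces the container's HttpSession with a Redis-backed one, so any Tomcat
// node in the cluster can serve any request.
@Configuration
@EnableRedisHttpSession
public class SessionConfig {

    @Bean
    public RedisConnectionFactory connectionFactory() {
        JedisConnectionFactory factory = new JedisConnectionFactory();
        factory.setHostName("192.168.1.122");   // Redis node from the cache plan below (assumption)
        factory.setPort(6660);
        return factory;
    }
}

// For a traditional WAR deployment, the springSessionRepositoryFilter must also be
// registered in web.xml via DelegatingFilterProxy.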

3.2 Data Consistency Issues

to be written

 

 Supplement:

Tomcat optimization

 

4. Cache

Role: caching is the first tool to consider and use for performance optimization

Software/technology selection: Redis, Memcached

Scheme planning:

IP              Ports                                                                          Hostname
192.168.1.122   7770(Master) 7771(Slave) 8880(Master) 8881(Slave) 6660(Master) 6661(Slave)    edu-redis-01
192.168.1.123   7770(Master) 7771(Slave) 8880(Master) 8881(Slave) 6660(Master) 6661(Slave)    edu-redis-02
192.168.1.124   7770(Master) 7771(Slave) 8880(Master) 8881(Slave) 6660(Master) 6661(Slave)    edu-redis-03

Description: the 7770/8880 services on the three machines form the master cluster and 7771/8881 form the slave cluster; Redis itself is not sharded: 6660 on 122 is the master, and 6661 on 122/123/124 are the slaves.

Configuration file: see attachment nutcracker.yml

Twemproxy: sharding proxy server
IP
192.168.1.122
Basic commands and configuration file addresses
Need to start on 192.168.1.122, 123, 124 respectively:
SSDB 7770 7771 8880 8881
nohup /usr/local/ssdb/ssdb-server /usr/local/ssdb-master/ssdb_basic_7770.conf &
nohup /usr/local/ssdb/ssdb-server /usr/local/ssdb-master/ssdb_basic_7771.conf &
nohup /usr/local/ssdb/ssdb-server /usr/local/ssdb-master/ssdb_desc_8880.conf &
nohup /usr/local/ssdb/ssdb-server /usr/local/ssdb-master/ssdb_desc_8881.conf &

REDIS
192.168.1.122:6660 6661
/usr/local/redis/bin/redis-server /usr/local/redis/conf/redis_6660.conf &
/usr/local/redis/bin/redis-server /usr/local/redis/conf/redis_6661.conf &
192.168.1.123 6661
/usr/local/redis/bin/redis-server /usr/local/redis/conf/redis_6661.conf &
192.168.1.124 6661
/usr/local/redis/bin/redis-server /usr/local/redis/conf/redis_6661.conf &

Start Twemproxy
nutcracker.init {start|stop|status|restart|reload|condrestart}
nutcracker -d -c /usr/local/twemproxy/conf/nutcracker.yml -p /usr/local/twemproxy/run/redisproxy.pid -o /usr/local/twemproxy/run/redisproxy.log

 

Main reference: cache (5)

Redis introduction, installation and cluster introduction

Introduction and use of SSDB

Twemproxy - caching proxy fragmentation mechanism

Redis syntax, Key value design and introduction to common cases

Redis 3.0 Cluster

 

Multi-level cache practice: combined with account system

1. HTTP caching

 

2. Local full cache

 

3. Distributed cache

    Cache: user information. For specific Java operations, see: (link to be added); a hedged sketch is also given after item 4 below.

 

4. Application-level caching
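
The following minimal sketch illustrates levels 2 and 3 above: a local Guava cache in front of the distributed Redis cache for user information. The key convention, the Redis address (192.168.1.122:6660 from the plan above), and loadUserFromDb() are assumptions for illustration, not the project's actual code.

import java.util.concurrent.TimeUnit;
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import redis.clients.jedis.Jedis;

public class UserCacheService {

    // Level 2: local in-heap cache with a short TTL so stale data is bounded
    private final Cache<String, String> localCache = CacheBuilder.newBuilder()
            .maximumSize(10000)
            .expireAfterWrite(60, TimeUnit.SECONDS)
            .build();

    public String getUserJson(String userId) {
        String key = "user:" + userId;                 // hypothetical key convention

        String value = localCache.getIfPresent(key);   // 1) local cache
        if (value != null) {
            return value;
        }

        try (Jedis jedis = new Jedis("192.168.1.122", 6660)) {   // 2) distributed cache
            value = jedis.get(key);
            if (value == null) {
                value = loadUserFromDb(userId);        // 3) fall back to the database
                jedis.setex(key, 300, value);          // write back with a 5-minute TTL
            }
        }
        localCache.put(key, value);
        return value;
    }

    private String loadUserFromDb(String userId) {
        // placeholder for the real MyBatis/JDBC query
        return "{\"id\":\"" + userId + "\"}";
    }
}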

 

Supplement:

1. 100-million-level traffic - HTTP cache part

2. The first option is asynchronous processing, the second is to use a message queue, and finally binlog changes can be monitored

3. Construction of multi-level cache

 

 

5. MySQL cluster (read-write separation / sub-database and sub-table)

5.1

 

5.2

 

5.3 Read-write separation configuration based on Spring + MyBatis

Two pitfalls:

(1) Configure <aop:aspectj-autoproxy proxy-target-class="true" /> so that the aspect actually takes effect on the target class

(2) The order of the configuration files is very important; otherwise they cannot be read correctly
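
A minimal sketch of the underlying idea (an assumption about the approach, not the project's actual configuration): an AOP aspect picks a "master" or "slave" key per service method, and Spring's AbstractRoutingDataSource routes to the corresponding DataSource. The pointcut expressions, key names, and com.edu package are illustrative only.

import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;
import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;

public class ReadWriteRouting {

    // Holds the chosen data source key for the current thread.
    public static final ThreadLocal<String> KEY = new ThreadLocal<>();

    // Registered in Spring with a targetDataSources map of {"master": ..., "slave": ...}.
    public static class RoutingDataSource extends AbstractRoutingDataSource {
        @Override
        protected Object determineCurrentLookupKey() {
            return KEY.get() == null ? "master" : KEY.get();
        }
    }

    // The aspect enabled by <aop:aspectj-autoproxy proxy-target-class="true"/>.
    @Aspect
    public static class DataSourceAspect {

        @Before("execution(* com.edu..service..*.select*(..)) || execution(* com.edu..service..*.get*(..))")
        public void useSlave() {
            KEY.set("slave");    // reads go to the slave library
        }

        @Before("execution(* com.edu..service..*.insert*(..)) || execution(* com.edu..service..*.update*(..))")
        public void useMaster() {
            KEY.set("master");   // writes go to the master library
        }
    }
}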

 

 

 

6. Dubbo-based distributed service governance

effect:

 

refer to:

Building a distributed project based on Dubbo
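
A minimal sketch of what the edu-service-user provider might look like (the interface, method, and class names are assumptions for illustration); in a classic Dubbo setup the implementation is registered with the ZooKeeper registry planned below via XML such as <dubbo:service interface="..." ref="..."/>.

public interface UserService {
    User getById(Long id);
}

class UserServiceImpl implements UserService {
    @Override
    public User getById(Long id) {
        // would normally query MySQL (192.168.1.122:3306) through MyBatis
        User u = new User();
        u.setId(id);
        return u;
    }
}

// Dubbo serializes parameters and return values, so DTOs must be Serializable.
class User implements java.io.Serializable {
    private Long id;
    public Long getId() { return id; }
    public void setId(Long id) { this.id = id; }
}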

 

Environment planning

Component          Description           IP                              Port
edu-web-boss       consumer              192.168.1.111 + 192.168.1.112   8081
edu-service-user   provider              /usr/server/edu/service/user    -
                   provider              192.168.1.111 + 192.168.1.112   8082
zookeeper-3.4.6    registration center   192.168.1.106/107/108           2181/2182/2183
MySQL 5.6          database              192.168.1.122                   3306

 Here, the original single tomcat7 is split into: tomcat7-server1 and tomcat7-server2

 

[root@edu-web-01 src]# mv tomcat7 tomcat7-server1
[root@edu-web-01 src]# cp -r tomcat7-server1 tomcat7-server2
 Modify the relevant port of tomcat7-server2:
Shutdown port: 8005, mainly responsible for shutting Tomcat down.
AJP port: 8009, used for load balancing via AJP (commonly used for Apache + Tomcat integration).
HTTP port: 8081, can be accessed directly from a browser (Nginx + Tomcat integration).
# Note: if tomcat1's three ports are 8005, 8009, 8081, then tomcat2's ports are +1 on that basis, i.e. 8006, 8010, 8082

 

 

 Deploy Dubbo service independently

cd /usr/server/edu/service/user/  
./service-user.sh start  
./service-user.sh stop  
./service-user.sh restart

 

Dubbo service consumer: the web application (WAR)

 

#start up
/usr/local/src/tomcat7-server1/bin/startup.sh & tail -f /usr/local/src/tomcat7-server1/logs/catalina.out
#stop
/usr/local/src/tomcat7-server1/bin/shutdown.sh & tail -f /usr/local/src/tomcat7-server1/logs/catalina.out

Test:

http://192.168.1.111:8081/edu-web-boss

 

 





Because a single monolithic application runs into a series of problems, build microservices: based on the characteristics of microservices, divide the business modules and improve the services step by step. Also, different applications interact asynchronously, improving data-processing capability.

sub-database sub-table technology

search engine

 


Availability:

 

Including monitoring, degradation, rate limiting, etc., built up gradually;
also includes: dynamic configuration changes

 

 
