[Distributed load balancing example] Ruby service migration: a mongrel multi-server cluster behind lighttpd proxy load balancing

Common commands for the database migration:
mysqldump -uroot -pt --default-character-set=utf8 -d ***_development > create_***_development.sql
mysqldump -uroot -pt --quick --no-create-info --extended-insert --default-character-set=latin1 ***_development > data_***_development.sql
# or dump everything in one file:
mysqldump -uroot --all ***_development > ***_development_dump.sql
# copy the local dump files to the remote server:
scp -P *** -v data_***_development.sql ***@***:/home/***
# you will be prompted to type yes to confirm the host key, then to enter the password
scp -P *** -v create_***_development.sql ***@***:/home/***

# import on the new server:
mysql -uroot -p ***_development < create_***_development.sql
mysql -uroot -p ***_development < data_***_development.sql

# convert the data dump from latin1 to GB2312 if needed:
iconv -c -f latin1 -t gb2312 data_***_development.sql > data_***_developmentGB2312.sql
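The transcoding that the iconv step performs can also be sketched in Ruby. This is only an illustration of the conversion (it is not part of the original migration); the function name is made up for the example:

```ruby
# Minimal sketch of what the iconv step above does: re-encode text from
# latin1 (ISO-8859-1) to GB2312, dropping characters that cannot be
# converted -- the equivalent of iconv's -c flag.
def transcode_latin1_to_gb2312(text)
  text.encode("GB2312", "ISO-8859-1",
              invalid: :replace, undef: :replace, replace: "")
end

transcode_latin1_to_gb2312("SELECT 1;") # plain ASCII passes through unchanged
```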

Start with:
# Migrate from the website department's test server (***.114) to the new server (***.119)
cd /home/***/webapp/***
apt-get install rails
apt-get install ruby
apt-get install lighttpd
gem install -v=*** rails
gem install rchardet
sudo apt-get install ruby***-dev
apt-get install mongrel
# Bound to 0.0.0.0, these services are reachable from the external network.
# -N is the number of processes to start; ports increment from the one set with -p: 3010, 3011, 3012
# -c is the root directory of the rails project; -e production is production mode, development is debug mode
mongrel_rails cluster::configure -e production -p 3010 -N 3 -c /home/***/***/webapp/***/ -a 0.0.0.0 --user *** --group ***
# or
mongrel_rails cluster::configure -e development -p 3010 -N 3 -c /home/***/***/webapp/***/ -a 0.0.0.0 --user *** --group ***
# Bound to 127.0.0.1, the cluster accepts local connections only, so it must be reached through a local proxy:
mongrel_rails cluster::configure -e production -p 3010 -N 3 -c /home/***/webapp/***/ -a 127.0.0.1 --user *** --group ***
# this generates config/mongrel_cluster.yml
# start (or restart) the cluster:
mongrel_rails cluster::restart
# each mongrel process here plays the same role as ruby script/server webrick -d -p 3010
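For reference, the config/mongrel_cluster.yml generated by the 127.0.0.1 command above would look roughly like this. The paths, user and group are the post's elided placeholders, so the concrete values shown are assumptions:

```yaml
# config/mongrel_cluster.yml -- sketch of what cluster::configure writes
# for: -e production -p 3010 -N 3 -a 127.0.0.1 --user *** --group ***
---
cwd: /home/***/webapp/***
environment: production
address: 127.0.0.1
port: "3010"
servers: 3
user: "***"
group: "***"
pid_file: tmp/pids/mongrel.pid
```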

# configure the proxy server
sudo apt-get install lighttpd
vi /etc/lighttpd/lighttpd.conf
# edit as follows (mod_proxy, mod_alias and mod_rewrite are all required):
server.modules = ( "mod_proxy",
            "mod_access",
            "mod_alias",
            "mod_accesslog",
            "mod_compress",
            "mod_rewrite" )
----------------------
## bind to port (default: 80)
# change this here if another port is required
#server.port = 80
----------------------
proxy.debug = 0
#proxy.balance = "fair"        # fill the first server, then move on to the next one
#proxy.balance = "hash"        # a given url is always sent to the same server
proxy.balance = "round-robin"  # each request goes to the next server in turn
proxy.server = ("/" =>
    (
        ("host" => "127.0.0.1", "port" => 8888)
    )
)
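The three balance strategies can be illustrated with a small Ruby sketch. This is a toy model of the selection logic only, not a reproduction of lighttpd's internal implementation (lighttpd uses its own hash function, not MD5):

```ruby
require "digest"

# Toy illustration of lighttpd's proxy.balance strategies over the
# three mongrel backends configured above.
BACKENDS = [3010, 3011, 3012].freeze

# "round-robin": each request goes to the next server in turn.
def round_robin(request_index)
  BACKENDS[request_index % BACKENDS.size]
end

# "hash": a given url is always mapped to the same server
# (MD5 stands in for lighttpd's real hash here).
def url_hash(url)
  BACKENDS[Digest::MD5.hexdigest(url).to_i(16) % BACKENDS.size]
end

round_robin(0) # => 3010
round_robin(3) # => 3010  (wraps around after the last backend)
url_hash("/posts/1") == url_hash("/posts/1") # => true (stable mapping)
```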
$HTTP["host"] == "***.***.com" {
    proxy.balance = "round-robin"
    proxy.server = ("/" =>
        (
            ("host" => "127.0.0.1", "port" => 3010),
            ("host" => "127.0.0.1", "port" => 3011),
            ("host" => "127.0.0.1", "port" => 3012)
        )
    )
}
----------------------
# the server set here corresponds to the cluster started with mongrel_rails above
/etc/init.d/lighttpd restart

In the rails project root directory, modify config/environments/production.rb as follows (there is no need to set up caching):
config.action_controller.perform_caching = false
config.action_view.cache_template_loading = false

At this point, the migration and deployment are complete.
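As a finishing step, each mongrel backend can be smoke-tested from the new server. This is a hypothetical check, not part of the original post; the helper names are made up for the example:

```ruby
require "net/http"

# Hypothetical smoke test for the deployment above: -p 3010 -N 3 means
# the mongrel ports increment from the base port.
def backend_ports(base_port, count)
  (base_port...(base_port + count)).to_a
end

# True if anything answers HTTP on host:port within the timeout.
def responding?(host, port)
  Net::HTTP.start(host, port, open_timeout: 2, read_timeout: 2) do |http|
    http.request_head("/") # any HTTP response counts as "up"
  end
  true
rescue StandardError
  false
end

backend_ports(3010, 3) # => [3010, 3011, 3012]
```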

An illustration of the distributed setup was attached here in the original post (image not preserved).


Origin blog.csdn.net/jrckkyy/article/details/5432483