Installing Ambari on CentOS 7

 

   Previous post — installing CentOS 7: http://username2.iteye.com/admin/blogs/2390323

10. Ambari installation:

	https://ambari.apache.org/
	http://www.infocool.net/kb/OtherCloud/201611/214644.html

	Ambari 2.4.1.0 tarball download URL:
	http://public-repo-1.hortonworks.com/ambari/centos7/2.x/updates/2.4.1.0/ambari-2.4.1.0-centos7.tar.gz
	HDP 2.5.0.0 tarball download URL:
	http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.5.0.0/HDP-2.5.0.0-centos7-rpm.tar.gz
	HDP-UTILS 1.1.0.21 tarball download URL:
	http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos7/HDP-UTILS-1.1.0.21-centos7.tar.gz
	For other versions, see the repository lists:
	Ambari :
	http://docs.hortonworks.com/HDPDocuments/Ambari-2.4.1.0/bk_ambari-installation/content/ambari_repositories.html
	HDP and HDP UTILS:
	http://docs.hortonworks.com/HDPDocuments/Ambari-2.4.1.0/bk_ambari-installation/content/hdp_stack_repositories.html
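If the cluster machines cannot reach the internet, the three tarballs above can be downloaded once and mirrored locally. A sketch of the preparation for step 11 (the path assumes Apache's default DocumentRoot):

```shell
# Download the three tarballs into Apache's DocumentRoot and unpack
# them there, so yum on each node can use this server as a repository.
cd /var/www/html
wget http://public-repo-1.hortonworks.com/ambari/centos7/2.x/updates/2.4.1.0/ambari-2.4.1.0-centos7.tar.gz
wget http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.5.0.0/HDP-2.5.0.0-centos7-rpm.tar.gz
wget http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos7/HDP-UTILS-1.1.0.21-centos7.tar.gz

# Each archive unpacks into its own subdirectory under the web root
for f in *.tar.gz; do tar -xzf "$f"; done
```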

11. Install Apache and extract the tarballs above into the web (DocumentRoot) directory:
   1) yum update
    yum install httpd
    2) Edit the configuration file /etc/httpd/conf/httpd.conf
	Set the server name and port: ServerName 192.168.145.131:80
	The directory being served can be changed via: DocumentRoot "/var/www/html"

    3) Start/stop the service: apachectl -k start|restart|stop (or: systemctl start httpd)
    4) Open 192.168.145.131:80 in a browser; the directory listing of the extracted files should appear.
    5) The two repository directories are then reachable at:
      http://192.168.145.131/HDP/centos7/
      http://192.168.145.131/
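For a fully offline install, the ambari.repo fetched in step 12 can be edited afterwards to point at this local server instead of the public repo. A sketch of such a repo file — the section name and baseurl path are assumptions; browse http://192.168.145.131/ to find the directory the Ambari tarball actually extracted to:

```
# baseurl below is an assumed path -- check the directory listing
# served by Apache for the real extracted location
[Updates-ambari-2.4.1.0]
name=ambari-2.4.1.0 - Updates (local mirror)
baseurl=http://192.168.145.131/ambari/centos7/
gpgcheck=0
enabled=1
```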

12. Install ambari-server

  cd /etc/yum.repos.d/
  wget -nv http://public-repo-1.hortonworks.com/ambari/centos7/2.x/updates/2.4.1.0/ambari.repo -O /etc/yum.repos.d/ambari.repo
  yum install ambari-server
  ambari-server setup
  vi /etc/ambari-server/conf/ambari.properties
  Append client.api.port=18080 at the end to change Ambari's web UI port to 18080
  sudo ambari-server start
  Open http://192.168.145.131:18080 in a browser
  Log in on the page that appears with the default credentials: admin/admin
  At this point the Ambari server installation is complete; the cluster itself is installed next.




 13. Clone the server into 5 copies, set the hosts entries on all machines, and configure passwordless SSH between every pair of them
	ssh-keygen -t rsa

	ssh-copy-id  linux130.cn
	ssh-copy-id  linux132.cn
	ssh-copy-id  linux131.cn
	ssh-copy-id  linux133.cn
	ssh-copy-id  linux134.cn
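The key generation and the five ssh-copy-id calls above have to be repeated on every machine. A loop sketch (run as the root user, since the wizard in step 18 logs in as root):

```shell
# Generate a key pair once (if absent), then push the public key to
# every node so this machine can ssh to all of them without a password.
[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
for h in linux130.cn linux131.cn linux132.cn linux133.cn linux134.cn; do
    ssh-copy-id "$h"   # asks for that host's password once
done
```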



 14. Configure /etc/hosts on all machines
	 sudo vi /etc/hosts

	192.168.145.131 linux131.cn linux131
	192.168.145.130 linux130.cn linux130
	192.168.145.129 linux132.cn linux132
	192.168.145.133 linux133.cn linux133
	192.168.145.134 linux134.cn linux134
The following FQDNs will be used as the host names in Ambari:
	linux131.cn
	linux130.cn
	linux132.cn
	linux133.cn
	linux134.cn
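The host table above can also be generated with a small loop, which avoids typos when the cluster grows. A sketch using the IPs and names from this post:

```shell
# Build /etc/hosts entries from "IP shortname" pairs; the result is
# written to a temp file here -- append it to /etc/hosts as root.
out=$(mktemp)
while read -r ip name; do
    printf '%s %s.cn %s\n' "$ip" "$name" "$name" >> "$out"
done <<'EOF'
192.168.145.131 linux131
192.168.145.130 linux130
192.168.145.129 linux132
192.168.145.133 linux133
192.168.145.134 linux134
EOF
cat "$out"
```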



15. Turn on the NTP service to synchronize the clocks on each machine
 yum -y install ntp
 systemctl is-enabled ntpd
 systemctl enable ntpd
 systemctl start ntpd
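Once ntpd has been running for a few minutes, synchronization can be verified on each node. A quick check:

```shell
# 'ntpq -p' lists the configured time sources; the line starting
# with '*' is the peer the clock is currently synced to.
ntpq -p
systemctl is-active ntpd
```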
16. Set the network domain name (hostname) on each machine; on CentOS 7 this can be done with hostnamectl:
	sudo hostnamectl set-hostname linux132.cn
	The legacy configuration file can also be set:
	sudo vi  /etc/sysconfig/network
	NETWORKING=yes
	HOSTNAME=linux132.cn

17. If the firewall is enabled, it needs to be turned off
	Turn off firewalld and SELinux
	 systemctl disable firewalld
	 systemctl stop firewalld
	Turn SELinux off temporarily (no reboot needed):
		setenforce 0

	Edit the configuration file so it also takes effect after a reboot:
	vi /etc/sysconfig/selinux
	SELINUX=disabled
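After step 17, the state of both can be verified with a quick check on each node:

```shell
# getenforce should print "Permissive" (after setenforce 0) or
# "Disabled" (after a reboot with SELINUX=disabled);
# firewalld should report "inactive".
getenforce
systemctl is-active firewalld
```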


18. Follow the wizard in Ambari to install Hadoop and its related services
    Cluster name: bigdata
    In the stack/repository step, choose a private (local) repository and set the base URLs to the addresses served by the Apache server set up in step 11 (for our 5 machines); select redhat7 as the operating system.
    Use root as the operating user; all machines must allow passwordless login for the root account, and the id_rsa private key from the Ambari server must be uploaded in the wizard.
    In the next step, Ambari checks passwordless SSH to each machine and copies some files (the Ambari agent) onto every one of them.
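For reference, with the layout from step 11 the local base URLs entered in the wizard would look like this (the HDP-UTILS path is inferred from its tarball name — confirm both by browsing the server):

```
HDP:       http://192.168.145.131/HDP/centos7/
HDP-UTILS: http://192.168.145.131/HDP-UTILS-1.1.0.21/repos/centos7/
```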
    

If the virtual machine runs out of disk space, see this guide on expanding the disk:
    http://blog.csdn.net/icycolawater/article/details/6992722

19. Access the installed service
http://192.168.145.131:18080

 
20. If memory is scarce, Hadoop commands may execute slowly; swapping should be stopped or reduced.

1) The following two commands clear and refresh the swap by turning it off and back on: swapoff -a (just leave it off if you don't need swap at all), then swapon -a
2) Lower the kernel's tendency to swap so it only swaps when physical memory is nearly exhausted: echo 10 > /proc/sys/vm/swappiness
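The echo into /proc only lasts until the next reboot; to persist the setting, it can also be written to /etc/sysctl.conf. A sketch (run as root):

```shell
# Apply the lower swap tendency now and persist it across reboots
sysctl -w vm.swappiness=10
grep -q '^vm.swappiness' /etc/sysctl.conf || echo 'vm.swappiness=10' >> /etc/sysctl.conf
```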

21. A few errors may occur when executing hive commands:
1) If HDFS is in safe mode, run: hdfs dfsadmin -safemode leave (the older hadoop dfsadmin form also works but is deprecated)
2) For permission problems, set dfs.permissions=false (dfs.permissions.enabled on newer stacks) in the HDFS configuration in Ambari


 
