Building a big data cluster: JDK, Hadoop, MySQL, Hive, Spark, Sqoop, Flume...

Basic environment:
JDK version:
Hadoop version:

Change the machine's original (DHCP) configuration to a static IP by editing the interface file:
vi /etc/sysconfig/network-scripts/ifcfg-eno16777736

Original configuration:

TYPE=Ethernet
BOOTPROTO=dhcp
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
NAME=eno16777736
UUID=25b39fc8-eace-49ff-bb9b-545a2f9d4533
DEVICE=eno16777736
ONBOOT=yes
PEERDNS=yes
PEERROUTES=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_PRIVACY=no

Modified configuration:

TYPE=Ethernet
BOOTPROTO=static 
IPADDR=192.168.234.100 
NETMASK=255.255.255.0
GATEWAY=192.168.234.2 
DNS1=192.168.234.2
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
NAME=eno16777736
UUID=25b39fc8-eace-49ff-bb9b-545a2f9d4533
DEVICE=eno16777736
ONBOOT=yes
PEERDNS=yes
PEERROUTES=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_PRIVACY=no
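The static-IP edit above can also be scripted. Below is a minimal sketch that generates the key ifcfg settings from variables; the addresses are the ones used in this guide, and OUT defaults to a local file (an assumption for safe testing) so you can inspect the result before copying it into /etc/sysconfig/network-scripts/.

```shell
#!/bin/sh
# Sketch: generate the static-IP ifcfg settings from variables instead of
# editing the file by hand. Addresses match this guide; adjust to your subnet.
# OUT defaults to a local file so the result can be reviewed before use.
OUT="${OUT:-./ifcfg-eno16777736}"
DEV=eno16777736
IPADDR=192.168.234.100
GATEWAY=192.168.234.2

cat > "$OUT" <<EOF
TYPE=Ethernet
BOOTPROTO=static
IPADDR=$IPADDR
NETMASK=255.255.255.0
GATEWAY=$GATEWAY
DNS1=$GATEWAY
NAME=$DEV
DEVICE=$DEV
ONBOOT=yes
EOF
echo "wrote $OUT"
```

On the other cluster nodes only IPADDR needs to change, which is why it is kept in a variable.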

When choosing the static IP, first check which subnet your virtual network card is on:
1. In VMware, click Edit and select Virtual Network Editor.
2. The dialog shows the subnet address; here it is 192.168.234.0, so the cluster uses addresses in that range.
Then restart the machine: reboot

Create the other two machines by cloning

1. Note: make sure the machine is shut down before cloning. After shutting down, select Manage, then Clone, and be sure to choose a full clone.
2. After the clone succeeds, regenerate its MAC address: select Settings, then Network Adapter, and click Generate.
3. Modify the hostname.
View command:
hostnamectl
Modify command:
hostnamectl set-hostname <new-hostname>
4. Modify the static IP address (a different one for each clone) and remove the stale MAC address entry:
vi /etc/sysconfig/network-scripts/ifcfg-eno16777736

To avoid unnecessary trouble while practicing, turn the firewall off on all machines.
Stop it temporarily:
systemctl stop firewalld.service
Disable it at boot:
systemctl disable firewalld.service
You can refer to my article about turning off the firewall:
https://blog.csdn.net/qq_38220334/article/details/105354257

Map host names to IPs on all three machines
Edit the hosts file on hadoop1:
vi /etc/hosts

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.234.100 hadoop1
192.168.234.101 hadoop2
192.168.234.102 hadoop3
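The three mappings above can be sanity-checked with a short loop. This is a sketch that runs against a local demo copy of the entries (an assumption, so it is safe to try anywhere); on a real node, point HOSTS_FILE at /etc/hosts instead and skip the demo-file creation.

```shell
#!/bin/sh
# Sketch: verify that every cluster node has a hostname-to-IP mapping.
# Uses a demo copy of the entries from this guide; on a real node set
# HOSTS_FILE=/etc/hosts.
cat > ./hosts.demo <<'EOF'
192.168.234.100 hadoop1
192.168.234.101 hadoop2
192.168.234.102 hadoop3
EOF
HOSTS_FILE="${HOSTS_FILE:-./hosts.demo}"
for h in hadoop1 hadoop2 hadoop3; do
  if grep -qw "$h" "$HOSTS_FILE"; then
    echo "$h: mapped"
  else
    echo "$h: MISSING"
  fi
done > ./hosts.check
cat ./hosts.check
```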

Test that each machine can ping the others by host name; if the pings succeed, the mapping works.

Configure passwordless SSH login
Command: ssh-keygen -t rsa
Press Enter at each prompt to accept the defaults.

[root@hadoop1 etc]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
78:49:9b:18:94:3f:71:1e:d9:0e:27:49:92:d8:e2:e7 root@hadoop1
The key's randomart image is:
+--[ RSA 2048]----+
|      .+.o.+     |
|     .+ +.B o    |
|     ..o.+ *     |
|      .=++. .    |
|      ooS.       |
|       .E        |
|                 |
|                 |
|                 |
+-----------------+

Copy the public key to the other machines:
ssh-copy-id root@hadoop2
Type yes when asked, then enter that machine's password; the key is then installed.

The authenticity of host 'hadoop2 (192.168.234.101)' can't be established.
ECDSA key fingerprint is ca:4e:1e:9b:bf:dd:40:b3:51:21:41:e6:09:c4:7f:4e.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@hadoop2's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'root@hadoop2'"
and check to make sure that only the key(s) you wanted were added.

Test: log in to hadoop2 directly from hadoop1

[root@hadoop1 etc]# ssh hadoop2
Last login: Sun Sep 20 11:05:36 2020 from 192.168.234.1
[root@hadoop2 ~]# exit
logout
Connection to hadoop2 closed.
[root@hadoop1 etc]# 

Note:
also configure passwordless login from hadoop1 to itself (ssh-copy-id root@hadoop1).
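The key distribution for all three nodes, including hadoop1 itself, can be sketched as one loop. DRY_RUN=1 (the default here, an assumption added for safe review) only prints the commands; set DRY_RUN=0 on the real cluster to execute them.

```shell
#!/bin/sh
# Sketch: push the public key to every node, including hadoop1 itself.
# DRY_RUN=1 only prints each command; DRY_RUN=0 actually runs ssh-copy-id.
DRY_RUN="${DRY_RUN:-1}"
for h in hadoop1 hadoop2 hadoop3; do
  cmd="ssh-copy-id root@$h"
  if [ "$DRY_RUN" = 1 ]; then
    echo "would run: $cmd"
  else
    $cmd
  fi
done > ./keys.plan
cat ./keys.plan
```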


Install the JDK
Extract the JDK installation package, then edit /etc/profile and append the following at the end:

export JAVA_HOME=/root/soft/jdk18/jdk1.8.0_202
export PATH=$PATH:$JAVA_HOME/bin

Let the configuration file take effect
source /etc/profile
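The /etc/profile edit above can be made repeatable with a guard so re-running the setup does not duplicate the exports. A minimal sketch, assuming a local demo file as the target (use PROFILE=/etc/profile on the real machine); the JDK path is the one from this guide.

```shell
#!/bin/sh
# Sketch: append the JAVA_HOME exports only if they are not already present,
# so repeated runs do not duplicate them. PROFILE defaults to a local demo
# file; use PROFILE=/etc/profile on the real machine.
PROFILE="${PROFILE:-./profile.demo}"
touch "$PROFILE"
if ! grep -q '^export JAVA_HOME=' "$PROFILE"; then
  cat >> "$PROFILE" <<'EOF'
export JAVA_HOME=/root/soft/jdk18/jdk1.8.0_202
export PATH=$PATH:$JAVA_HOME/bin
EOF
fi
grep 'JAVA_HOME' "$PROFILE"
```

Running the script twice leaves exactly one pair of export lines in the file.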

All machines must disable SELinux:
in /etc/selinux/config, change SELINUX=enforcing to SELINUX=disabled

[root@hadoop1 ~]# cat /etc/selinux/config 

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
#SELINUX=enforcing
SELINUX=disabled
# SELINUXTYPE= can take one of three values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected. 
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted 
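The SELinux change can also be done with one sed command instead of hand-editing on every node. A sketch, demonstrated on a local copy of the file (an assumption for safe testing); on the real machines, run the sed line against /etc/selinux/config and then reboot for it to take effect.

```shell
#!/bin/sh
# Sketch: flip SELINUX=enforcing to disabled with sed. Demonstrated on a
# local copy; target /etc/selinux/config on the real machines, then reboot.
CFG="${CFG:-./selinux.demo}"
cat > "$CFG" <<'EOF'
SELINUX=enforcing
SELINUXTYPE=targeted
EOF
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' "$CFG"
grep '^SELINUX=' "$CFG"
```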


Origin blog.csdn.net/qq_38220334/article/details/108689738