Build a Linux environment: install the JDK, MySQL 5.7, Nacos, and Zookeeper, then the big data ecosystem: Kafka, Flume, Hadoop, Hive, HBase, Zeppelin, Sqoop, and Elasticsearch

The first step is to install the virtual machine

After installation, log in and check your IP address:

ip a

The second step is to connect with FinalShell (Xshell, etc.): configure the IP and port and connect.

#Enter the /opt directory
cd /opt

The third step is to install the software

##Run the following shell.sh script to install the basic environment

#Modify the machine name
hostnamectl set-hostname $1

#Modify static network
addr=$2 #192.168.64.130
sed -i 's/dhcp/static/' /etc/sysconfig/network-scripts/ifcfg-ens33
echo "IPADDR=$addr" >> /etc/sysconfig/network-scripts/ifcfg-ens33
echo "NETMASK=255.255.255.0" >> /etc/sysconfig/network-scripts/ifcfg-ens33 #Subnet mask
gw=`awk 'BEGIN{split("'"$addr"'",ips,".");print ips[1] "." ips[2] "." ips[3] "." 2 }'`
echo "GATEWAY=$gw" >> /etc/sysconfig/network-scripts/ifcfg-ens33 #Gateway
echo "DNS1=114.114.114.114" >> /etc/sysconfig/network-scripts/ifcfg-ens33
echo "DNS2=8.8.8.8" >> /etc/sysconfig/network-scripts/ifcfg-ens33
systemctl restart network #Restart the network

#Bind address and name
echo "$addr $1" >> /etc/hosts

#Close the firewall
systemctl stop firewalld
systemctl disable firewalld

#Install vim and wget
yum install -y vim wget

#Replace yum source
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.bak3
#Download from Alibaba Cloud
wget -O /etc/yum.repos. d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo 
yum clean all 
yum makecache

#Create software installation folder
mkdir -p /opt/soft

#Configure JDK
mkdir -p /opt/soft/jdk180
jdkPath=`find /opt/ -name 'jdk*.tar.gz*'`
tar -zxf $jdkPath -C /opt/soft/jdk180 --strip-components 1 #Extract into jdk180, stripping the top-level folder

if [ -z "$JAVA_HOME" ]
then
    echo 'export JAVA_HOME=/opt/soft/jdk180' >> /etc/profile
    echo 'export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar' >> /etc/profile
    echo 'export PATH=$PATH:$JAVA_HOME/bin' >> /etc/profile
    source /etc/profile
fi
 

#1 Run the script to install the jdk and base environment

#myseata is a custom host name
#192.168.64.135 is your own IP address
source shell02.sh myseata 192.168.64.135
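
A quick sanity check after the script finishes (paths and names as configured above):

hostname            #should print myseata
tail -1 /etc/hosts  #should show the IP and host name binding
java -version       #verifies that JAVA_HOME and PATH took effect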


#2 Install mysql5.7

Uninstall the original **mariadb**

#Query the package name
rpm -qa | grep mariadb
#Replace xxx with the package name found above, then uninstall
rpm -e --nodeps xxx


Download and install mysql 5.7

wget -c http://dev.mysql.com/get/mysql57-community-release-el7-10.noarch.rpm

yum -y install mysql57-community-release-el7-10.noarch.rpm

rpm --import https://repo.mysql.com/RPM-GPG-KEY-mysql-2022

yum install mysql-server -y


File authorization (the data directory must belong to the mysql user)

chown -R mysql:mysql /var/lib/mysql


Handle garbled Chinese characters in MySQL

# Edit /etc/my.cnf
vim /etc/my.cnf

[mysqld]
character-set-server=utf8
[client]
default-character-set=utf8
[mysql]
default-character-set=utf8

#Save and exit
:wq
#Restart
service mysqld restart


Modify the **mysql** login password and enable remote login

mysql 5.7 login

#View the temporary password in the MySQL log
grep "password" /var/log/mysqld.log

#Copy this temporary password and use it to log in

#Log in to the database; nU5TydlD__a is your own temporary password
mysql -uroot -p

use mysql;
#3090_Cmok is your new password
ALTER USER 'root'@'localhost' IDENTIFIED BY '3090_Cmok';
#Enable remote login
GRANT ALL PRIVILEGES ON *.* TO root@"%" IDENTIFIED BY "3090_Cmok";
flush privileges;
exit;
#(If a password error keeps you from logging in, follow the method below)

The basic idea is to follow these steps:

1. Open the my.cnf configuration file and add a line skip-grant-tables under [mysqld] to skip security verification.
2. systemctl restart mysqld to restart MySQL.
3. mysql -uroot -p to log in to MySQL; enter any password.
4. use mysql; to enter the database named mysql.
5. UPDATE user SET password=PASSWORD("your own password") WHERE user='root'; to change the password.
6. quit; to exit MySQL.
7. Repeat step 1: delete the skip-grant-tables line just added under [mysqld], exit vim, and restart MySQL.
8. mysql -uroot -p and enter the new password you just set to log in.

However, after trying this method many times, the UPDATE in step 5 kept reporting ERROR 1054 (42S22): Unknown column 'password' in 'field list'.

By chance, I found a little-known article on CSDN which pointed out that in MySQL 5.7 the mysql database no longer has a password field; it was renamed to authentication_string. Following that article, change the command in step 5 to:

update mysql.user set authentication_string=password('HJZ@bb1314') where user='root';

and run through the steps again; it succeeds immediately.

PS: when you later log in to MySQL with the newly modified password, you may see

ERROR 1820 (HY000): You must reset your password using ALTER USER statement before executing this statement.

This means the password you just set is treated as an initial password, and you are required to change it once more before you can continue.
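
Putting it together, a consolidated sketch of the recovery procedure described above (the password is a placeholder; use your own):

#1. Skip security verification
vim /etc/my.cnf    #add skip-grant-tables under [mysqld]
systemctl restart mysqld
#2. Log in with any password and reset it (MySQL 5.7 column name)
mysql -uroot -p
update mysql.user set authentication_string=password('YourNewPass') where user='root';
flush privileges;
quit;
#3. Remove skip-grant-tables from /etc/my.cnf, then restart and log in
systemctl restart mysqld
mysql -uroot -p
#4. Clear the ERROR 1820 prompt by setting the final password
ALTER USER 'root'@'localhost' IDENTIFIED BY 'YourNewPass';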

This corresponds to "Step 6: Configure MySQL" of manually deploying the LNMP environment (Alibaba Cloud Linux 2).

mysql -uroot -p
#enter your new password

use mysql;
#3090_Cmok is your password
ALTER USER 'root'@'localhost' IDENTIFIED BY 'password';
#Enable remote login
GRANT ALL PRIVILEGES ON *.* TO root@"%" IDENTIFIED BY "password";
flush privileges;
exit;

#3 Build stand-alone nacos

jps
cd /opt/
ls
tar -zxf nacos-server-1.4.2.tar.gz 
mv nacos soft/nacos8848
cd soft/nacos8848/conf/
vim application.properties
#Uncomment the spring datasource platform and db.* lines; change the IP to your own (192.168.64.135 here)
#account root, password 3090_Cmok (here is my mysql password)


cd /opt/soft/nacos8848/bin/
#Close cluster mode
vim startup.sh
#==============================#
# Modify the following line
export MODE="standalone"
#==============================#

#Modify the environment variables in the /etc/profile file
vim /etc/profile
#Add to the end
===============================
# nacos env
export NACOS_HOME=/opt/soft/nacos8848
export PATH=$PATH:$NACOS_HOME/bin
=============================
#:wq!
source /etc/profile

----------------------------------------
#Re-enter mysql 5.7 to run the SQL
mysql -uroot -p3090_Cmok

show databases;
#Create database 1
create database mydemo;
use mydemo;
#New table (the storenum column type was cut off in the original; int assumed)
create table stocks(id int primary key not null auto_increment,shopid int not null,storenum int);

#Create database 2
create database nacos;
use nacos;
#Run the Nacos schema script
source /opt/soft/nacos8848/conf/nacos-mysql.sql
exit
----------------------------------------
#Start
sh startup.sh
#Open in the browser: 192.168.64.135:8848/nacos/#/login








If the page does not open in the browser:

View the logs.

The nacos startup error was: Constructor threw exception; nested exception is ErrCode:500, ErrMsg:jmenv.tbsite.net exception

Cluster.conf.example in the conf directory is required and must be manually renamed to cluster.conf.
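
In other words (paths as installed above):

cd /opt/soft/nacos8848/conf
cp cluster.conf.example cluster.conf
#then restart nacos
cd ../bin
sh shutdown.sh
sh startup.sh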

The login page now opens successfully!

#4 Install zookeeper

cd /opt/
ls
tar -zxf zookeeper-3.4.5-cdh5.14.2.tar.gz
mv zookeeper-3.4.5-cdh5.14.2 /opt/soft/zk345
cd /opt/soft/zk345/conf/
ls
cp zoo_sample.cfg zoo.cfg
vim zoo.cfg
======================
#Modify this line
dataDir=/opt/soft/zk345/data
#Insert the following line; use your own IP address
server.1=192.168.64.128:2888:3888
======================
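
Because zoo.cfg declares server.1, the dataDir also needs a myid file whose content matches that server number; a quick sketch:

mkdir -p /opt/soft/zk345/data
echo 1 > /opt/soft/zk345/data/myid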
  
  
#Configure environment variables
vim /etc/profile

#zookeeper env
export ZOOKEEPER_HOME=/opt/soft/zk345
export PATH=$PATH:$ZOOKEEPER_HOME/bin
:wq
#Activate configuration
source /etc/profile
#Start
zkServer.sh start
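
To confirm it is running:

zkServer.sh status    #standalone mode shows Mode: standalone
jps                   #should include QuorumPeerMain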

#5 Install kafka

##########Install kafka##########
cd /opt/
ls
tar -zxf kafka_2.11-2.0.0.tgz
ls
mv kafka_2.11-2.0.0 soft/kafka200
cd soft/kafka200/config/
ls
vim server.properties
#Modify the following; use your own IP address!
=========Uncomment> listeners=PLAINTEXT://192.168.64.138:9092
=========Modify> log.dirs=/opt/soft/kafka200/kafka-logs
=========Modify> zookeeper.connect=192.168.64.138:2181
#Save and exit
:wq

Configure environment variables

vim /etc/profile

  ======================

#kafka env
export KAFKA_HOME=/opt/soft/kafka200
export PATH=$PATH:$KAFKA_HOME/bin

 =======================

source /etc/profile

#Run start

cd /opt/soft/kafka200/bin
./kafka-server-start.sh /opt/soft/kafka200/config/server.properties
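
A quick smoke test, assuming the broker address above (the topic name test is arbitrary):

#Create a topic (Kafka 2.0 still registers topics through zookeeper)
kafka-topics.sh --create --zookeeper 192.168.64.138:2181 --replication-factor 1 --partitions 1 --topic test
#Produce a few messages (type lines, then Ctrl+C)
kafka-console-producer.sh --broker-list 192.168.64.138:9092 --topic test
#Consume them in another window
kafka-console-consumer.sh --bootstrap-server 192.168.64.138:9092 --topic test --from-beginning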
 

#6 Install flume

#1 Unzip the folder and move it to the specified location
cd /opt
tar -zxf flume-ng-1.6.0-cdh5.14.2.tar.gz
mv apache-flume-1.6.0-cdh5.14.2-bin/ /opt/soft/flume160
cd /opt/soft/flume160/conf

#2 Configure the flume configuration file
cp flume-env.sh.template flume-env.sh
vim flume-env.sh
<========================================>
export JAVA_HOME=/opt/soft/jdk180
<========================================>

#3 Configure environment variables
vim /etc/profile
=======================================
#flume env
export FLUME_HOME=/opt/soft/flume160
export PATH=$PATH:$FLUME_HOME/bin
=======================================
#Activate configuration
source /etc/profile

#Start flume with the agent defined in the target conf file

./flume-ng agent -n a1 -c /opt/soft/flume160/conf -f /opt/flumeconf/third.conf
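
The tutorial does not show /opt/flumeconf/third.conf itself. A minimal example agent (netcat source to logger sink; the component names r1/c1/k1 are illustrative, but the agent name must match the -n a1 flag):

#/opt/flumeconf/third.conf
a1.sources=r1
a1.channels=c1
a1.sinks=k1
a1.sources.r1.type=netcat
a1.sources.r1.bind=localhost
a1.sources.r1.port=44444
a1.channels.c1.type=memory
a1.channels.c1.capacity=1000
a1.sinks.k1.type=logger
a1.sources.r1.channels=c1
a1.sinks.k1.channel=c1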

#7 Install hadoop

Drag the hadoop tarball into /opt
cd /opt
tar -zxf hadoop-2.6.0-cdh5.14.2.tar.gz
mv hadoop-2.6.0-cdh5.14.2 soft/hadoop260
cd soft/hadoop260
cd etc/hadoop
pwd
vim hadoop-env.sh
1=============================
export JAVA_HOME=/opt/soft/jdk180
:wq
1=============================

vim core-site.xml
2============================
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://192.168.64.128:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/soft/hadoop260/tmp</value>
    </property>
</configuration>
:wq
2============================

vim hdfs-site.xml
3============================
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
</configuration>
:wq
3============================

cp mapred-site.xml.template mapred-site.xml
vim mapred-site.xml
4============================
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
:wq
4============================

vim yarn-site.xml
5============================
<configuration>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>localhost</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>
:wq
5============================

#Configure hadoop environment variables; use your own hadoop260 path
vim /etc/profile
6============================
# Hadoop ENV
export HADOOP_HOME=/opt/soft/hadoop260
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin
export HADOOP_INSTALL=$HADOOP_HOME
:wq
6============================
#Activate the above configuration
source /etc/profile
#Log in without password
ssh-keygen -t rsa -P ''
cd /root/.ssh/
ls
ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.64.128
yes
ok
ls
ll
ssh 192.168.64.128
exit
#Remotely log in by host name (bind it in /etc/hosts, or set it with hostnamectl set-hostname)
ssh hadoop01
yes
exit
#Direct login without password
ssh hadoop01
exit
#Format NameNode
hdfs namenode -format
#Start hadoop

start-all.sh

yes

yes
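
If everything came up, jps should show the five Hadoop daemons plus Jps itself:

jps
#NameNode
#DataNode
#SecondaryNameNode
#ResourceManager
#NodeManager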

 

#8 Install hive

cd /opt

tar -zxf hive-1.1.0-cdh5.14.2.tar.gz

mv hive-1.1.0-cdh5.14.2 /opt/soft/hive110

cd /opt/soft/hive110/conf

vim hive-site.xml  #Add the code below
====================================
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>
<property>
<name>hive.metastore.warehouse.dir</name>
<value>/hive/warehouse</value>
</property>
<property>
<name>hive.metastore.local</name>
<value>false</value>
</property>
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://192.168.64.210:3306/hive?createDatabaseIfNotExist=true</value>
</property>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>root</value>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>3090_Cmok</value>
</property>

<property>
    <name>hive.server2.authentication</name>
    <value>NONE</value>
  </property>
  <property>
    <name>hive.server2.thrift.client.user</name>
    <value>root</value>
  </property>
  <property>
    <name>hive.server2.thrift.client.password</name>
    <value>root</value>
  </property>
</configuration>

<!-- mysql database password: <value>3090_Cmok</value>, use your own -->
<!-- password=root amounts to passwordless login, for easy connection -->
<!-- If the mysql database is remote, write the remote IP or hosts name here -->
====================================

 2. Hadoop configuration: core-site.xml (note: there are 5 properties in total, no more, no less!)

 <property>
                <name>fs.defaultFS</name>
                <value>hdfs://192.168.64.128:9000</value>
    </property>
    <property>
             <name>hadoop.tmp.dir</name>           
             <value>file:/home/hadoop/temp</value>
    </property>
    <property>
        <name>hadoop.proxyuser.root.groups</name>
        <value>*</value>
    </property>
    <property>
        <name>hadoop.proxyuser.root.hosts</name>
        <value>*</value>
    </property>
       <property>
                <name>hadoop.proxyuser.root.users</name>
                <value>*</value>
        </property>

3. Drag the mysql driver jar into /opt/soft/hive110/lib

 4. Configure environment variables

vim /etc/profile

#Hive
export HIVE_HOME=/opt/soft/hive110
export PATH=$PATH:$HIVE_HOME/bin

:wq

source /etc/profile

5. Initialize the database

schematool -dbType mysql -initSchema

6. Start hive

zkServer.sh start
start-all.sh

hive --service metastore &    #run in the background (or use a separate terminal)

hive --service hiveserver2 &  #run in the background (or use a separate terminal)

hive

7. HQL

show databases;
create database mydemo;
use mydemo;
create table userinfos(userid int,username string,birthday string);
insert into userinfos values(1,'zs',30);
select * from userinfos;
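
To verify hiveserver2, you can also connect with beeline (user root matches the hive.server2.thrift.client.user configured above):

beeline -u jdbc:hive2://192.168.64.128:10000 -n root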
 

 

#9 Install hbase

Prerequisite: drag the hbase tarball into /opt!

cd /opt/
tar -zxf hbase-1.2.0-cdh5.14.2.tar.gz
mv hbase-1.2.0-cdh5.14.2 /opt/soft/hbase120
cd /opt/soft/hbase120/conf/
ls

 vim hbase-env.sh

export JAVA_HOME=/opt/soft/jdk180
export HBASE_MANAGES_ZK=false
:wq

 vim hbase-site.xml

<property>
     <name>hbase.rootdir</name>
     <value>hdfs://192.168.64.128:9000/hbase</value>
</property>
<!-- Not required in stand-alone mode; for distributed mode set to true -->
<property>
     <name>hbase.cluster.distributed</name>
     <value>true</value>
</property>
<!-- Not required in stand-alone mode; for distributed mode this is the data path used for zookeeper -->
<property>
     <name>hbase.zookeeper.property.dataDir</name>
     <value>/opt/soft/hbase120/data</value>
</property>
<property>
     <name>hbase.zookeeper.property.clientPort</name>
     <value>2181</value>
</property>

Configure environment variables 

vim /etc/profile
==========================
#hbase env
export HBASE_HOME=/opt/soft/hbase120
export PATH=$PATH:$HBASE_HOME/bin
==========================
:wq
source /etc/profile
 

Start hadoop first, then start hbase

start-all.sh
#jps should now show 7 processes
jps
#Start hbase (HMaster and HRegionServer)
start-hbase.sh
hbase shell
list

Create a namespace and tables

create_namespace 'mydemo'
create 'mydemo:userinfos','base'

create 'events','base'

create 'eventsattends','base'
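
A quick read/write check against one of the tables just created (the row key and value are arbitrary):

put 'events','1','base:name','hello'
get 'events','1'
scan 'events'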

#Drop a table
disable 'mydemo:userinfos'
drop 'mydemo:userinfos'
exit

#10 Install zeppelin

tar -zxf zeppelin-0.8.1-bin-all.tgz -C /opt/soft/

#Check that the hive data is visible on HDFS
hdfs dfs -cat /hive/warehouse/mydemo.db/userinfos/000000_0


cd /opt/soft/

ls

mv zeppelin-0.8.1-bin-all/ zeppelin081

ls

cd /opt/soft/zeppelin081/conf/

ls

cp zeppelin-site.xml.template zeppelin-site.xml

vim zeppelin-site.xml
==============================
<property>
  <name>zeppelin.helium.registry</name>
  <value>helium</value>
</property>
==============================

cp zeppelin-env.sh.template zeppelin-env.sh

vim zeppelin-env.sh
==============================
export JAVA_HOME=/opt/soft/jdk180
export HADOOP_CONF_DIR=/opt/soft/hadoop260/etc/hadoop
==============================

cp /opt/soft/hive110/conf/hive-site.xml /opt/soft/zeppelin081/conf/

cp /opt/soft/hadoop260/share/hadoop/common/hadoop-common-2.6.0-cdh5.14.2.jar /opt/soft/zeppelin081/interpreter/jdbc/

cp /opt/soft/hive110/lib/hive-jdbc-1.1.0-cdh5.14.2-standalone.jar /opt/soft/zeppelin081/interpreter/jdbc/

2. Configure environment variables 

vim /etc/profile

#Zeppelin
export ZEPPELIN_HOME=/opt/soft/zeppelin081
export PATH=$PATH:$ZEPPELIN_HOME/bin

:wq

source /etc/profile

3. Start

cd /opt/soft/zeppelin081/bin/
./zeppelin-daemon.sh start

#Enter the address in the browser to open zeppelin
http://192.168.64.128:8080/
#http://192.168.64.128:50070/  #hadoop web UI check

 

(2) Set the JDBC interpreter properties in the zeppelin web UI

default.driver   org.apache.hive.jdbc.HiveDriver

default.url      jdbc:hive2://192.168.64.128:10000

default.user     hive
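
Hive can then be queried from a notebook paragraph; a minimal example, assuming the interpreter above is bound to the note as jdbc:

%jdbc
show databases;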

#11 Install sqoop

Premise: drag the sqoop tarball into /opt

1. File configuration

cd /opt/

tar -zxf sqoop-1.4.6-cdh5.14.2.tar.gz

ls

mv sqoop-1.4.6-cdh5.14.2 /opt/soft/sqoop146

cd /opt/soft/sqoop146/conf/

cp sqoop-env-template.sh sqoop-env.sh

vim sqoop-env.sh

=====================

export HADOOP_COMMON_HOME=/opt/soft/hadoop260

export HADOOP_MAPRED_HOME=/opt/soft/hadoop260

export HBASE_HOME=/opt/soft/hbase120

export HIVE_HOME=/opt/soft/hive110

export ZOOKEEPER_HOME=/opt/soft/zk345
export ZOOCFGDIR=/opt/soft/zk345

=====================

:wq

2. Configure environment variables

vim /etc/profile

=====================

#Sqoop
export SQOOP_HOME=/opt/soft/sqoop146
export PATH=$PATH:$SQOOP_HOME/bin

=====================

:wq

source /etc/profile

3. Drag in the jar package

 

 

#1

cp /opt/soft/hadoop260/share/hadoop/common/hadoop-common-2.6.0-cdh5.14.2.jar  /opt/soft/sqoop146/lib/
#2

cp /opt/soft/hive110/lib/hive-jdbc-1.1.0-cdh5.14.2-standalone.jar /opt/soft/sqoop146/lib/

 

4. Zeppelin: import the mysql data table to HDFS, then map it with hive

%sh sqoop import --connect jdbc:mysql://192.168.64.128:3306/party --table users --username root --password ok --delete-target-dir --target-dir /party/users --split-by user_id -m 1
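
The hive mapping step might then look like this (illustrative: the column list must match your own users table, and sqoop writes comma-delimited fields by default):

create external table mydemo.users(
  user_id int,
  user_name string
)
row format delimited fields terminated by ','
location '/party/users';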

#12 Install Elasticsearch

Premise: drag the tarballs into /opt

Elasticsearch distributed installation, step 1

#elasticsearch-head is needed for web display, so install nodejs first (mainly for npm)

wget https://npm.taobao.org/mirrors/node/v11.0.0/node-v11.0.0.tar.gz
tar -zxvf node-v11.0.0.tar.gz
mv node-v11.0.0 /opt/soft/
cd /opt/soft/node-v11.0.0
yum install gcc gcc-c++
./configure
make
make install
node -v

Step 2

cd /opt/
tar -zxf elasticsearch-6.7.1.tar.gz
mv elasticsearch-6.7.1 /opt/soft/es671
cd /opt/soft/es671/config/
vim elasticsearch.yml
==============
#Modify
cluster.name: es-app
node.name: es-1
network.host: 192.168.64.128
http.port: 9200
#Insert
http.cors.enabled: true
http.cors.allow-origin: "*"
==============
:wq

#Create a user (es refuses to run as root)
useradd cm
passwd cm
ok
ok

#As root, fix problem 1: the system max open file count is too low
vim /etc/security/limits.conf
#Append to the end of the file
cm soft nofile 65536
cm hard nofile 131072
cm soft nproc 4096
cm hard nproc 4096

#Fix problem 2: virtual memory is too low
vim /etc/sysctl.conf
#Append to the end of the file
vm.max_map_count=655360
#Activate
sysctl -p

#Authorize
chown cm:cm -R /opt/soft/es671/

su cm
cd /opt/soft/es671/bin/
ls
./elasticsearch
#Browser view
192.168.64.128:9200
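
A quick check from the shell as well:

curl http://192.168.64.128:9200
#should return a JSON body with the cluster name and version number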


Step 3 (open a new window)

cd /opt/
#Install zip
yum install -y unzip zip
#Unzip
unzip elasticsearch-head-master.zip
mv elasticsearch-head-master /opt/soft/eshead
cd /opt/soft/eshead/
#Install the npm packages; at the end one file cannot be found and an error is reported (not important, does not affect development)
npm install
#Open a new window
cd /opt/soft/eshead
npm run start
#Browser access (web visualization for es)
http://192.168.64.128:9100


Origin: blog.csdn.net/just_learing/article/details/126333328