Links to previous articles:
Hadoop HA deployment (MINI version): https://blog.csdn.net/m0_54925305/article/details/121566611?spm=1001.2014.3001.5501
Spark component deployment (MINI version): https://blog.csdn.net/m0_54925305/article/details/121615781?spm=1001.2014.3001.5501
Environment preparation:
Numbering | Hostname | Role | User | Password |
---|---|---|---|---|
1 | master1-1 | master node | root | passwd |
2 | slave1-1 | slave node | root | passwd |
3 | slave1-2 | slave node | root | passwd |
Environment deployment:
1. Install the Zookeeper component. The requirements are the same as in the Zookeeper task, adapted to the Kafka environment; start Zookeeper, take a screenshot, and save the result
1. Start Zookeeper on all three machines
bin/zkServer.sh start
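Starting Zookeeper by hand on each node works; with passwordless SSH (set up during the HA deployment linked above), the three starts can also be scripted. A dry-run sketch — the Zookeeper install path is an assumption, and the `echo` prints each command instead of running it:

```shell
# Dry run: print the start command for each node; remove the echo to
# actually execute (assumes passwordless SSH and this Zookeeper path).
ZK_HOME=/usr/local/src/zookeeper   # assumed install location
for host in master1-1 slave1-1 slave1-2; do
  echo ssh "$host" "$ZK_HOME/bin/zkServer.sh start"
done
```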
2. Decompress the Kafka installation package to the "/usr/local/src" path, rename the decompressed folder to kafka, take a screenshot, and save the result
1. Enter the /h3cu directory to find kafka
cd /h3cu
2. Unzip kafka
tar -zxvf kafka_2.11-1.0.0.tgz -C /usr/local/src
3. Rename kafka
mv /usr/local/src/kafka_2.11-1.0.0 /usr/local/src/kafka
3. Set the Kafka environment variable and make the environment variable take effect only for the current root user, take a screenshot and save the result
1. Add environment variables
vi /root/.bashrc
2. Make the environment variable take effect immediately
source /root/.bashrc
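The exact lines to add to `/root/.bashrc` are only shown as a screenshot in the original; a typical setup, assuming Kafka was unpacked to `/usr/local/src/kafka` as in step 2:

```shell
# Assumed values: KAFKA_HOME matches the install path from step 2.
export KAFKA_HOME=/usr/local/src/kafka
export PATH=$PATH:$KAFKA_HOME/bin
```

Because the lines go into `/root/.bashrc` rather than `/etc/profile`, they take effect only for the root user, as the task requires.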
4. Modify the corresponding files of Kafka, take screenshots and save the results
1. Enter the kafka/config directory
cd /usr/local/src/kafka/config
2. Modify the server.properties file
vi server.properties
A. Modify zookeeper.connect and log.dirs (the before/after values were shown as screenshots in the original), then append the following two lines at the end of the file:
host.name=master1-1
delete.topic.enable=true
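The screenshots with the exact before/after values are not available; a plausible server.properties fragment for master1-1, consistent with the hostnames and the logs directory used elsewhere in this guide (treat the exact values as assumptions):

```properties
# Assumed values -- verify against your own cluster.
broker.id=0
host.name=master1-1
zookeeper.connect=master1-1:2181,slave1-1:2181,slave1-2:2181
log.dirs=/usr/local/src/kafka/logs
delete.topic.enable=true
```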
3. Create the logs directory
mkdir logs
Note: Since there is no logs directory by default in the kafka installation directory, create a logs directory under kafka/
4. Cluster distribution
scp -r /usr/local/src/kafka slave1-1:/usr/local/src/
scp -r /usr/local/src/kafka slave1-2:/usr/local/src/
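The two copies can also be generated in a loop; a dry-run sketch where the `echo` prints each command rather than running it (actual copying requires passwordless SSH between the nodes):

```shell
# Dry run: print the distribution command per slave; drop the echo to
# actually copy.
for host in slave1-1 slave1-2; do
  echo scp -r /usr/local/src/kafka "${host}:/usr/local/src/"
done
```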
5. Modify the server.properties files on the slave1-1 and slave1-2 nodes respectively
slave1-1 node:
broker.id=1
host.name=slave1-1
slave1-2 node:
broker.id=2
host.name=slave1-2
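Editing the two properties on each slave can also be scripted with `sed`. A self-contained sketch that demonstrates the substitution on a demo copy of the file (the file name and starting values here are stand-ins; on a real node you would point at /usr/local/src/kafka/config/server.properties):

```shell
# Demo input mimicking the copy distributed from the master.
PROPS=server.properties.demo
printf 'broker.id=0\nhost.name=master1-1\ndelete.topic.enable=true\n' > "$PROPS"
# Overrides as they would be applied on slave1-1:
sed -i 's/^broker\.id=.*/broker.id=1/' "$PROPS"
sed -i 's/^host\.name=.*/host.name=slave1-1/' "$PROPS"
cat "$PROPS"
```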
5. Start Kafka and save the command output results, take screenshots and save the results
Enter the kafka installation directory
1. Start kafka
bin/kafka-server-start.sh -daemon ./config/server.properties
Note: Before starting Kafka, make sure Zookeeper is already running on all three machines
6. Create the specified topic, take screenshots and save the results
1. Create the topic test on the master
./bin/kafka-topics.sh --create --zookeeper master1-1:2181,slave1-1:2181,slave1-2:2181 --replication-factor 3 --partitions 3 --topic test
7. View all topic information, take screenshots and save the results
1. View all topic information
./bin/kafka-topics.sh --list --zookeeper localhost:2181
8. Start the specified producer, take a screenshot and save the result
1. Start the producer on the master
./bin/kafka-console-producer.sh --broker-list master1-1:9092,slave1-1:9092,slave1-2:9092 --topic test
9. Start the consumer, take a screenshot and save the result
1. Start the consumer on the slave
./bin/kafka-console-consumer.sh --bootstrap-server master1-1:9092,slave1-1:9092,slave1-2:9092 --from-beginning --topic test
10. Test the producer, take screenshots and save the results
Note: just type some messages into the producer console
11. Test the consumer, take screenshots and save the results
Note: the consumer will automatically print the content entered by the producer
Kafka component deployment (MINI version) completed
What can't beat you will make you stronger!