Big Data Development - Docker: Use Docker to quickly build a big data environment (Hadoop, Hive, Spark, Hue, Kafka, ElasticSearch...) in 10 minutes

This post just stakes out the topic; the full tutorial will be filled in later. Of course, that is not the most important part. If you just want an environment to test against, you only need the following three steps:

1. `git clone https://github.com/hulichao/docker-bigdata`

2. Install docker and docker-compose, then run `cd docker-bigdata && docker-compose -f docker-compose-dev up -d`

3. Start the cluster: log into the docker environment and run `sh scripts/start-cluster.sh`. There are many scripts for starting and stopping components under `scripts/`; start them on demand and pay attention to the comments. (The whole flow is sketched right after this list.)
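For reference, here are the three steps combined into one shell session. This is a minimal sketch under two assumptions the text does not confirm: that the dev compose file carries a `.yml` extension, and that `scripts/start-cluster.sh` can be run right after the containers come up.

```bash
# Minimal sketch of steps 1-3; verify file names against the repo.
git clone https://github.com/hulichao/docker-bigdata
cd docker-bigdata

# Step 2: bring the containers up in the background.
# Assumption: the dev compose file is named docker-compose-dev.yml.
docker-compose -f docker-compose-dev.yml up -d

# Step 3: start the cluster components on demand.
# There are also stop scripts under scripts/; read their comments first.
sh scripts/start-cluster.sh
```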

The images you pull come from this Docker Hub repository: https://hub.docker.com/repository/hoult
Then you can play happily with the cluster.
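To confirm the containers actually came up, a quick check with standard docker tooling:

```bash
# List running containers with their status and port mappings.
docker ps --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}"
```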
PS: pay attention to the port mapping part as well. When you build this on your own laptop or on another server, a host port may already be taken; if so, change the port mapping and start again.
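For example, checking and remapping a conflicting port might look like this (8888 is a hypothetical example; the actual mappings live in the compose file):

```bash
# Check whether a host port (e.g. 8888) is already in use.
lsof -i :8888

# If it is, edit the corresponding ports entry in the compose file, e.g.
#   "8888:8888"  ->  "18888:8888"
# then recreate the containers so the new mapping takes effect.
docker-compose -f docker-compose-dev.yml up -d --force-recreate
```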

Wu Xie, Xiao San Ye, a little rookie in backend development, big data, and artificial intelligence.
Please follow for more.


Origin: blog.csdn.net/hu_lichao/article/details/112125800