How to install BlueKing's bkdata when SSH is not on the default port 22?

Copyright notice: Thanks for reading my article; please credit the source when republishing: https://blog.csdn.net/haoding205/article/details/82775871


0. Prerequisites:

This article builds on "How to install BlueKing's app_mgr when SSH is not on the default port 22?":
https://blog.csdn.net/haoding205/article/details/82775040
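That article's SSH-port fix is the foundation for everything below. One common way to make SSH-based steps honor a custom port is a per-host entry in the SSH client config. This is a minimal sketch only: the port 36000 is an illustrative assumption (the real port and mechanism come from the app_mgr article), and it assumes OpenSSH and that the installer's ssh/scp calls read ~/.ssh/config.

```shell
# Sketch: declare the custom SSH port once so ssh/scp-based installer
# steps pick it up. Port 36000 is an illustrative assumption.
mkdir -p ~/.ssh && chmod 700 ~/.ssh
cat >> ~/.ssh/config <<'EOF'
Host 192.168.1.101 192.168.1.102 192.168.1.103
    Port 36000
EOF
chmod 600 ~/.ssh/config
```

With this in place, plain `ssh 192.168.1.101` connects on the declared port without a `-p` flag.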

1. Introduction:

From the previous article, we know the quick-deployment commands for BlueKing; for example, installing bkdata looks like this:

cd /data/install
./bk_install bkdata     # install the BlueKing data platform base modules and their dependent services
#  once this module is installed, you can deploy the SaaS apps: BlueKing Monitor and Log Search
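Because this cluster's sshd listens on a non-default port, a quick pre-flight reachability check before running the installer can save a failed run. A sketch, assuming bash: the port 36000 is an illustrative assumption, and the `/dev/tcp` trick is a bash feature, not part of the BlueKing installer.

```shell
# Sketch: confirm each BlueKing host answers on the custom SSH port
# before running ./bk_install. Port 36000 is an illustrative assumption.
check_ssh_port() {
    local host=$1 port=$2
    # /dev/tcp is a bash builtin path; timeout bounds the wait for dead hosts
    if timeout 2 bash -c "echo > /dev/tcp/$host/$port" 2>/dev/null; then
        echo "$host:$port open"
    else
        echo "$host:$port unreachable"
    fi
}

for h in 192.168.1.101 192.168.1.102 192.168.1.103; do
    check_ssh_port "$h" 36000
done
```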

Will anything fail this time? Let's find out:

2. Installation log

[root@paas-1 install]# ./bk_install bkdata

           install es(allproject) on host: 192.168.1.101           

[192.168.1.101]20180919-171248 601   add user: es
useradd: warning: the home directory already exists.
Not copying any file from skel directory into it.
[192.168.1.101]20180919-171248 516   create directories ...
fs.file-max = 242667
fs.nr_open = 242667
vm.max_map_count = 512000
# (a large amount of output omitted here)
init data of snapshot and components
add etl connector of 2_tomcat_cache
add etl connector of 2_tomcat_jsp
add etl connector of 2_tomcat_net
add etl connector of 2_tomcat_servlet
add etl connector of 2_tomcat_thread
add etl connector of 2_uptimecheck_tcp
add etl connector of 2_uptimecheck_udp
add etl connector of 2_uptimecheck_heartbeat
add etl connector of 2_uptimecheck_http
add etl connector of 2_redis_cpu
add etl connector of 2_redis_client
add etl connector of 2_redis_mem
add etl connector of 2_redis_stat
add etl connector of 2_redis_repl
add etl connector of 2_redis_aof
add etl connector of 2_redis_rdb
add etl connector of 2_system_proc
add etl connector of 2_system_cpu_detail
add etl connector of 2_system_cpu_summary
add etl connector of 2_system_mem
add etl connector of 2_system_disk
add etl connector of 2_system_inode
add etl connector of 2_system_mem
add etl connector of 2_system_net
add etl connector of 2_system_swap
add etl connector of 2_system_env
add etl connector of 2_system_io
add etl connector of 2_system_netstat
add etl connector of 2_system_load
add etl connector of 2_nginx_net
add etl connector of 2_mysql_innodb
add etl connector of 2_mysql_net
add etl connector of 2_mysql_performance
add etl connector of 2_mysql_rep
add etl connector of 2_apache_net
add etl connector of 2_apache_performance
add tsdb connector of 2_tomcat_cache
add tsdb connector of 2_tomcat_jsp
add tsdb connector of 2_tomcat_net
add tsdb connector of 2_tomcat_servlet
add tsdb connector of 2_tomcat_thread
add tsdb connector of 2_uptimecheck_tcp
add tsdb connector of 2_uptimecheck_udp
add tsdb connector of 2_uptimecheck_heartbeat
add tsdb connector of 2_uptimecheck_http
add tsdb connector of 2_redis_cpu
add tsdb connector of 2_redis_client
add tsdb connector of 2_redis_mem
add tsdb connector of 2_redis_stat
add tsdb connector of 2_redis_repl
add tsdb connector of 2_redis_aof
add tsdb connector of 2_redis_rdb
add tsdb connector of 2_system_proc
add tsdb connector of 2_system_cpu_detail
add tsdb connector of 2_system_cpu_summary
add tsdb connector of 2_system_mem
add tsdb connector of 2_system_disk
add tsdb connector of 2_system_inode
add tsdb connector of 2_system_mem
add tsdb connector of 2_system_net
add tsdb connector of 2_system_swap
add tsdb connector of 2_system_env
add tsdb connector of 2_system_io
add tsdb connector of 2_system_netstat
add tsdb connector of 2_system_load
add tsdb connector of 2_nginx_net
add tsdb connector of 2_mysql_innodb
add tsdb connector of 2_mysql_net
add tsdb connector of 2_mysql_performance
add tsdb connector of 2_mysql_rep
add tsdb connector of 2_apache_net
add tsdb connector of 2_apache_performance
.
----------------------------------------------------------------------
Ran 1 test in 508.540s

OK

[192.168.1.101] es: RUNNING

[192.168.1.102] es: RUNNING

[192.168.1.103] es: RUNNING

[192.168.1.101] kafka: RUNNING

[192.168.1.102] kafka: RUNNING

[192.168.1.103] kafka: RUNNING

[192.168.1.102] beanstalk: RUNNING

---------------------------------------------------------------------------------------------------------
[192.168.1.101] dataapi     dataapi                          RUNNING   pid 2346, uptime 0:10:06
[192.168.1.101] dataapi     dataapi-celery-1                 RUNNING   pid 2343, uptime 0:10:06
[192.168.1.101] dataapi     dataapi-celery-2                 RUNNING   pid 2344, uptime 0:10:06
[192.168.1.101] dataapi     dataapi-celery-3                 RUNNING   pid 2345, uptime 0:10:06
[192.168.1.101] monitor     collect:collect0                   RUNNING   pid 5095, uptime 0:09:39
[192.168.1.101] monitor     collect:collect1                   RUNNING   pid 5096, uptime 0:09:39
[192.168.1.101] monitor     common:logging                     RUNNING   pid 5097, uptime 0:09:39
[192.168.1.101] monitor     common:scheduler                   RUNNING   pid 5098, uptime 0:09:39
[192.168.1.101] monitor     converge:converge0                 RUNNING   pid 5101, uptime 0:09:39
[192.168.1.101] monitor     converge:converge1                 RUNNING   pid 5102, uptime 0:09:39
[192.168.1.101] monitor     converge:converge2                 RUNNING   pid 5104, uptime 0:09:39
[192.168.1.101] monitor     converge:converge3                 RUNNING   pid 5109, uptime 0:09:39
[192.168.1.101] monitor     converge:converge4                 RUNNING   pid 5100, uptime 0:09:39
[192.168.1.101] monitor     detect_cron                        RUNNING   pid 5086, uptime 0:09:39
[192.168.1.101] monitor     kernel:cron                        RUNNING   pid 5090, uptime 0:09:39
[192.168.1.101] monitor     kernel:match_alarm0                RUNNING   pid 5094, uptime 0:09:39
[192.168.1.101] monitor     kernel:match_alarm1                RUNNING   pid 5093, uptime 0:09:39
[192.168.1.101] monitor     kernel:match_alarm2                RUNNING   pid 5092, uptime 0:09:39
[192.168.1.101] monitor     kernel:match_alarm3                RUNNING   pid 5091, uptime 0:09:39
[192.168.1.101] monitor     kernel:qos                         RUNNING   pid 5087, uptime 0:09:39
[192.168.1.101] monitor     run_data_access:run_data_access0   RUNNING   pid 5083, uptime 0:09:39
[192.168.1.101] monitor     run_data_access:run_data_access1   RUNNING   pid 5082, uptime 0:09:39
[192.168.1.101] monitor     run_data_access:run_data_access2   RUNNING   pid 5085, uptime 0:09:39
[192.168.1.101] monitor     run_data_access:run_data_access3   RUNNING   pid 5084, uptime 0:09:39
[192.168.1.101] monitor     run_detect_new:run_detect_new0     RUNNING   pid 5081, uptime 0:09:39
[192.168.1.101] monitor     run_detect_new:run_detect_new1     RUNNING   pid 5080, uptime 0:09:39
[192.168.1.101] monitor     run_detect_new:run_detect_new2     RUNNING   pid 5079, uptime 0:09:39
[192.168.1.101] monitor     run_detect_new:run_detect_new3     RUNNING   pid 5078, uptime 0:09:39
[192.168.1.101] monitor     run_poll_alarm:run_poll_alarm0     RUNNING   pid 5077, uptime 0:09:39
[192.168.1.101] databus     databus_es                       RUNNING   pid 3682, uptime 0:09:57
[192.168.1.101] databus     databus_etl                      RUNNING   pid 3687, uptime 0:09:57
[192.168.1.101] databus     databus_jdbc                     RUNNING   pid 3680, uptime 0:09:57
[192.168.1.101] databus     databus_redis                    RUNNING   pid 3688, uptime 0:09:57
[192.168.1.101] databus     databus_tsdb                     RUNNING   pid 3683, uptime 0:09:57

[192.168.1.103] influxdb: RUNNING
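The dataapi/monitor/databus lines above are supervisor-style status output (process, state, pid, uptime). A quick way to scan a saved copy of such output for anything that is not RUNNING is a one-line awk filter. A sketch: the file name `status.log` and the FATAL line are fabricated for illustration, not taken from this install.

```shell
# Sketch: flag any supervised process whose state column is not RUNNING.
# status.log holds output shaped like the block above (illustrative data).
cat > status.log <<'EOF'
[192.168.1.101] dataapi     dataapi        RUNNING   pid 2346, uptime 0:10:06
[192.168.1.101] databus     databus_es     FATAL     Exited too quickly
EOF

# Field 2 is the group, field 3 the process name, field 4 the state.
awk '$4 != "RUNNING" {print $2, $3, "->", $4}' status.log
# → databus databus_es -> FATAL
```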

If the steps above completed without errors, the bkdata deployment is now finished, and you can:
 1. deploy the BlueKing Monitor app via ./bk_install saas-o bk_monitor, or
 2. deploy the BlueKing Monitor app through the Developer Center

[root@paas-1 install]#   


3. Verifying a successful installation

See the next article, "How to install BlueKing's fta when SSH is not on the default port 22?"

4. Conclusion

The bkdata installation succeeded. So why was there no error about SSH port 22? Because that pitfall was already dealt with in the earlier article on installing app_mgr, so nothing went wrong in this stage.

5. Other references

http://docs.bk.tencent.com/bkce_install_guide/setup/quick_install.html

And there you have it: you now know how to install BlueKing's bkdata when SSH is not on the default port 22.

If you have any other questions, feel free to leave them in the comments.
