ESXi system manual installation

1. Background

When operating physical servers, resources are generally managed through virtualization. The mainstream hypervisors today are VMware ESXi, Hyper-V, KVM, Xen, etc.

Compared with the alternatives, VMware vSphere ESXi is simpler to configure and manage; the detailed pros and cons are beyond this note, and we use ESXi throughout the test environment. As for the version: the test-environment servers are DELL R710s plus mainly R720/R730 machines, so for hardware compatibility and easy virtual machine migration we use ESXi 6.7 U3. (Builds below U3 have known bugs, and ESXi 7.x does not support the R710/R720.)

2. Out-of-band iDRAC configuration

(screenshots omitted)

Configure the RAID for the system installation.

(screenshot omitted)

Return to the server page and click Launch (启动) to download the jnlp console, then double-click to open it:

(screenshot omitted)

The console requires JDK 1.7 (add the iDRAC address to Java's exception site list and enable TLS 1.0/1.1/1.2, etc.):
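
The TLS side of this can also be scripted: newer JDK builds disable old TLS versions through the jdk.tls.disabledAlgorithms line in java.security, and old iDRACs only speak TLS 1.0/1.1. A demo of the edit on a sample line (the real java.security path varies by JDK install, so it is left as a comment):

```shell
# Demo: strip TLSv1/TLSv1.1 from the disabled-algorithms list so the jnlp
# console can negotiate with an old iDRAC. Point JS at your real file,
# e.g. $JAVA_HOME/jre/lib/security/java.security (path is an assumption).
JS=$(mktemp)
echo 'jdk.tls.disabledAlgorithms=SSLv3, TLSv1, TLSv1.1, RC4, MD5withRSA' > "$JS"
sed -i 's/TLSv1, //; s/TLSv1\.1, //' "$JS"
cat "$JS"   # -> jdk.tls.disabledAlgorithms=SSLv3, RC4, MD5withRSA
```

Back up the real file before editing it; re-tightening the list afterwards is just restoring the backup.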

(screenshots omitted)

3. System installation

(screenshots omitted)

4. System configuration

4.1 Hostname/IP configuration

(screenshots omitted)
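
The DCUI steps shown in the screenshots can also be driven from the ESXi shell with esxcli. A hedged sketch that just prints the command plan for review before pasting it into the host (vmk0 as the management interface and DNS-equals-gateway are assumptions for this environment):

```shell
# esxi_net_plan: emit the esxcli commands for hostname + static IPv4 setup
# (dry-run style -- review the plan, then run the lines on the ESXi host)
esxi_net_plan() {
  local name=$1 ip=$2 mask=$3 gw=$4
  cat <<EOF
esxcli system hostname set --host=$name
esxcli network ip interface ipv4 set -i vmk0 -I $ip -N $mask -t static
esxcli network ip route ipv4 add --gateway $gw --network default
esxcli network ip dns server add --server $gw
EOF
}
esxi_net_plan gs-esxi-1-28 192.168.1.28 255.255.255.0 192.168.1.1
```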

4.2 Configure ansible

Enable SSH:

(screenshots omitted)

Verify the port with tcping/telnet:

(screenshot omitted)
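
If tcping/telnet is not installed on the ansible host, bash's built-in /dev/tcp pseudo-device can run the same reachability check. A minimal sketch:

```shell
# check_port: TCP reachability test using bash's /dev/tcp pseudo-device
check_port() {
  local host=$1 port=$2
  if timeout 3 bash -c "cat < /dev/null > /dev/tcp/$host/$port" 2>/dev/null; then
    echo "$host:$port open"
  else
    echo "$host:$port closed"
  fi
}
# usage:
#   check_port 192.168.1.28 22    # sshd
#   check_port 192.168.1.28 443   # ESXi web UI
```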

Configure SSH mutual trust from the ansible host:

[root@gs-ansible-1-118 ~]# ssh-copy-id 192.168.1.28
Password: 
Now try logging into the machine, with "ssh '192.168.1.28'", and check in:

  .ssh/authorized_keys

to make sure we haven't added extra keys that you weren't expecting.

[root@gs-ansible-1-118 ~]# ssh 192.168.1.28
Password: 
The time and date of this login have been sent to the system logs.
# Logging in again still prompts for a password -- check the configuration

Fix passwordless SSH login:

[root@gs-esxi-1-28:~] grep AuthorizedKeysFile /etc/ssh/sshd_config 
AuthorizedKeysFile /etc/ssh/keys-%u/authorized_keys
[root@gs-esxi-1-28:~] ls -a
.                .mtoolsrc        bin              bootpart4kn.gz   lib              mbr              productLocker    store            tmp              vmfs
..               .ssh             bootbank         dev              lib64            opt              sbin             tardisks         usr              vmimages
.ash_history     altbootbank      bootpart.gz      etc              locker           proc             scratch          tardisks.noauto  var              vmupgrade

[root@gs-esxi-1-28:~] cp .ssh/authorized_keys /etc/ssh/keys-root/
[root@gs-esxi-1-28:~] cat /etc/ssh/keys-root/authorized_keys 
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA3s4pbATnrlwJPVrK52QUSg7dtHXmB1mAujNJt336i0O3uwJys8WylSAldNiqreMGKnaT/MJhSBQT1XOKfGHCBU3VAvtX5sbEs0/sPI0C0y/YWkox/NVldz0E2g+L75Ltj76V8Gbixsmbhz2kj6ozcpR6yHMXipwyFd+oljynnio8AOqivxq3m/hIIK+bihJ3rl+7k+tm6kv++om6VRohplgXuzZnyGIYHM/gErim1q/MNJTMLlXDtsQ9a6bLIJDVfcpt04xujJNbdny2W+4oEt0Ch5y69Knd+n0EtSYh6gIvvvuN9J4dAawVNIOLFkLsGSFWr1q6YSKzGyKxwY94fQ== root@gs-ansible-118

[root@gs-ansible-1-118 ~]# ssh 192.168.1.28 uptime
  8:52:40 up 00:44:11, load average: 0.00, 0.00, 0.00
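
The manual copy above can be wrapped in an idempotent helper, so re-running the fix never duplicates keys. A sketch (the commented remote usage assumes root password login still works once):

```shell
# append_key: add a public-key line to an authorized_keys file only if
# it is not already present (safe to re-run)
append_key() {
  local key=$1 dest=$2
  mkdir -p "$(dirname "$dest")"
  grep -qxF "$key" "$dest" 2>/dev/null || echo "$key" >> "$dest"
}
# Remote usage sketch against the path ESXi's sshd actually reads:
# ssh root@192.168.1.28 "$(declare -f append_key); \
#   append_key '$(cat ~/.ssh/id_rsa.pub)' /etc/ssh/keys-root/authorized_keys"
```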

(screenshot omitted)

Configure the ansible inventory (=> failure pending: ESXi ships without a full Python environment, so interpreter-based modules such as shell may fail against it; the raw module is the usual workaround):

[root@gs-ansible-1-118 ~]# echo -e "[ESXI]\n192.168.1.28" >>/etc/ansible/hosts && ansible ESXI --list-hosts
  hosts (1):
    192.168.1.28
[root@gs-ansible-1-118 ~]# ansible ESXI -m shell -a "uptime"
192.168.1.118 | SUCCESS | rc=0 >>
 16:54:31 up 210 days,  7:50,  4 users,  load average: 0.08, 0.02, 0.01

4.3 Time zone/NTP service configuration

After the network is configured, the ESXi host can be managed via the web UI or SSH.
(screenshots omitted)

Rename the datastores to the standard naming scheme: ip-os / ip-data / ip-data1 / ip-ssd-data (here: 192.168.1.28-os, etc.)
(screenshots omitted)

ESXi defaults to the UTC time zone, which is 8 hours behind our UTC+8 (for example, 16:29:18 on April 20, 2022 local time is recorded as 08:29:18), so logs and other records all carry UTC timestamps, which is inconvenient for daily troubleshooting. For virtual machines, vmtools reloads the hardware clock on every reboot, so guests also end up on UTC time, which is awkward to manage. Below we switch the host from UTC to CST (China Standard Time).

Principle:

Replace the ESXi time zone file /etc/localtime. ESXi ships no Asia/Shanghai zoneinfo, so we copy the file over from a Linux host. But ESXi regenerates /etc/localtime on every reboot, so to keep the replacement effective it is reapplied from the startup script.

# Find the datastore path (system files are overwritten on reboot; datastore volumes are not)
[root@gs-ansible-1-118 ESXI]# ssh 192.168.1.28 df |grep VMFS
VMFS-6     1790464491520 1537212416 1788927279104   0% /vmfs/volumes/192.168.1.28-os
# Copy the Linux Asia/Shanghai time zone file to the ESXi node
[root@gs-ansible-1-118 ESXI]# scp /usr/share/zoneinfo/Asia/Shanghai 192.168.1.28:/vmfs/volumes/192.168.1.28-os/
Shanghai                                                                                                                                                                          100%  388     0.4KB/s   00:00    
[root@gs-ansible-1-118 ESXI]# ls
set-esxi-localtime.sh
# Copy the configuration script to the ESXi datastore
[root@gs-ansible-1-118 ESXI]# scp set-esxi-localtime.sh  192.168.1.28:/vmfs/volumes/192.168.1.28-os/
set-esxi-localtime.sh                                                                                                                                                             100%   87     0.1KB/s   00:00    

# Inspect the script -- essentially a single overwrite command
[root@gs-ansible-1-118 ESXI]# ssh 192.168.1.28 'cat /vmfs/volumes/192.168.1.28-os/set-esxi-localtime.sh' 
#!/bin/sh
base_dir=/vmfs/volumes/192.168.1.28-os
/bin/rm -f /etc/localtime_bak &&/bin/mv /etc/localtime /etc/localtime_bak && cp $base_dir/Shanghai /etc/localtime

# Configure it to run at boot
[root@gs-ansible-1-118 ESXI]# ssh 192.168.1.28 'sed -i "s#exit 0#sh /vmfs/volumes/192.168.1.28-os/set-esxi-localtime.sh#g" /etc/rc.local.d/local.sh' 
[root@gs-ansible-1-118 ESXI]# ssh 192.168.1.28 'echo "exit 0" >> /etc/rc.local.d/local.sh' 


[root@gs-ansible-1-118 ESXI]# ssh 192.168.1.28 'cat /etc/rc.local.d/local.sh' 
#!/bin/sh

# local configuration options

# Note: modify at your own risk!  If you do/use anything in this
# script that is not part of a stable API (relying on files to be in
# specific places, specific tools, specific output, etc) there is a
# possibility you will end up with a broken system after patching or
# upgrading.  Changes are not supported unless under direction of
# VMware support.

# Note: This script will not be run when UEFI secure boot is enabled.

sh /vmfs/volumes/192.168.1.28-os/set-esxi-localtime.sh
exit 0



[root@gs-ansible-1-118 ESXI]# ssh 192.168.1.28 'sh /vmfs/volumes/192.168.1.28-os/set-esxi-localtime.sh' 
[root@gs-ansible-1-118 ESXI]# ssh 192.168.1.28 'date' 
Wed Apr 20 17:42:25 CST 2022

4.4 Add VCSA Unified Management

(screenshots omitted)

4.5 Basic software installation and configuration

4.5.1 OMSA (in progress)

OpenManage Server Administrator (OMSA) is a software agent that provides a comprehensive one-to-one systems-management solution in two ways: through an integrated, web-browser-based graphical user interface (GUI), and through a command line interface (CLI) exposed via the operating system.

It lets system administrators manage systems locally and remotely over the network.
Managed node: installs the agent plus the web components (Windows, Linux).
VIB: the OMSA agent only, without the web components (VMware).

Since the VMware agent has no web component, three packages are involved:

  • the OMSA iDRAC basic module (the dcism VIB)
  • the OMSA system software (the OpenManage VIB)
  • the OMSA Windows agent (to manage the VMware host remotely through the agent)

(screenshot omitted)

# Fetch the packages
[root@gs-ansible-1-118 ESXI]# ll
total 13888
-rw-r--r-- 1 root root 2679338 Apr 20 11:44 ISM-Dell-Web-4.2.0.0-2581.VIB-ESX6i-Live_A00.zip
-rw-r--r-- 1 root root 7113139 Apr 20 11:44 OM-SrvAdmin-Dell-Web-10.1.0.0-4634.VIB-ESX67i_A00.zip
-rw-r--r-- 1 root root 4419566 Apr 19 16:26 PERCCLI_MRXX5_7.1910.0_A12_VMware.tar.gz
-rwxr-xr-x 1 root root      87 Apr 20 17:20 set-esxi-localtime.sh
[root@gs-ansible-1-118 ESXI]# ssh 192.168.1.28 'mkdir /vmfs/volumes/192.168.1.28-os/tools' 
[root@gs-ansible-1-118 ESXI]# scp ISM-Dell-Web-4.2.0.0-2581.VIB-ESX6i-Live_A00.zip 192.168.1.28:/vmfs/volumes/192.168.1.28-os/tools/
ISM-Dell-Web-4.2.0.0-2581.VIB-ESX6i-Live_A00.zip                                                                                                                                  100% 2617KB   2.6MB/s   00:00    
[root@gs-ansible-1-118 ESXI]# scp OM-SrvAdmin-Dell-Web-10.1.0.0-4634.VIB-ESX67i_A00.zip  192.168.1.28:/vmfs/volumes/192.168.1.28-os/tools/
OM-SrvAdmin-Dell-Web-10.1.0.0-4634.VIB-ESX67i_A00.zip                                                                                                                             100% 6946KB   6.8MB/s   00:00    

[root@gs-ansible-1-118 ESXI]#  ssh 192.168.1.28 'cd  /vmfs/volumes/192.168.1.28-os/tools && unzip ISM-Dell-Web-4.2.0.0-2581.VIB-ESX6i-Live_A00.zip '
Archive:  ISM-Dell-Web-4.2.0.0-2581.VIB-ESX6i-Live_A00.zip
  inflating: index.xml
  inflating: vendor-index.xml
  inflating: metadata.zip
  inflating: vib20/dcism/Dell_bootbank_dcism_4.2.0.0.ESXi6-2581.vib
  
 ssh 192.168.1.28 'esxcli software vib install -v /vmfs/volumes/192.168.1.28-os/tools/vib20/dcism/Dell_bootbank_dcism_4.2.0.0.ESXi6-2581.vib'
 

[root@gs-ansible-1-118 ESXI]#  ssh 192.168.1.28 'cd  /vmfs/volumes/192.168.1.28-os/tools && unzip OM-SrvAdmin-Dell-Web-10.1.0.0-4634.VIB-ESX67i_A00.zip '
Archive:  OM-SrvAdmin-Dell-Web-10.1.0.0-4634.VIB-ESX67i_A00.zip
replace index.xml? [y]es, [n]o, [A]ll, [N]one, [r]ename: A  
  inflating: index.xml
  inflating: vendor-index.xml
  inflating: metadata.zip
  inflating: vib20/OpenManage/Dell_bootbank_OpenManage_10.1.0.0.ESXi670-4634.vib
[root@gs-ansible-1-118 ESXI]#  ssh 192.168.1.28 'esxcli software vib install -v /vmfs/volumes/192.168.1.28-os/tools/vib20/OpenManage/Dell_bootbank_OpenManage_10.1.0.0.ESXi670-4634.vib'
Installation Result
   Message: Operation finished successfully.
   Reboot Required: false
   VIBs Installed: Dell_bootbank_OpenManage_10.1.0.0.ESXi670-4634
   VIBs Removed: 
   VIBs Skipped: 
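After installation, both VIBs can be confirmed with `esxcli software vib list`. A small sketch filter for the Dell entries (the filter itself runs locally; the commented ssh line is the intended usage):

```shell
# dell_vibs: keep only the Dell OMSA-related lines from `esxcli software vib list`
dell_vibs() { grep -iE 'dcism|openmanage'; }
# Usage:
# ssh 192.168.1.28 'esxcli software vib list' | dell_vibs
```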
 

4.5.2 perccli tool

# Fetch the package (same listing as above) and copy it to the ESXi datastore
[root@gs-ansible-1-118 ESXI]# scp PERCCLI_MRXX5_7.1910.0_A12_VMware.tar.gz   192.168.1.28:/vmfs/volumes/192.168.1.28-os/tools/
PERCCLI_MRXX5_7.1910.0_A12_VMware.tar.gz                                                                                                                                          100% 4316KB   4.2MB/s   00:00    

# Install
[root@gs-ansible-1-118 ESXI]#  ssh 192.168.1.28 'cd  /vmfs/volumes/192.168.1.28-os/tools && tar -xf PERCCLI_MRXX5_7.1910.0_A12_VMware.tar.gz && esxcli software vib install -v /vmfs/volumes/192.168.1.28-os/tools/PERCCLI_MRXX5_7.1910.0_A12_VMware/PERCCLI_7.1910_VMware/ESXI\ 6.7/vmware-perccli-007.1910.vib'
Installation Result
   Message: Operation finished successfully.
   Reboot Required: false
   VIBs Installed: BCM_bootbank_vmware-perccli_007.1910.0000.0000-01
   VIBs Removed: 
   VIBs Skipped: 

# Verify
[root@gs-ansible-1-118 ESXI]#  ssh 192.168.1.28 '/opt/lsi/perccli/perccli show'
CLI Version = 007.1910.0000.0000 Oct 08, 2021
Operating system = VMkernel 6.7.0
Status Code = 0
Status = Success
Description = None

Number of Controllers = 1
Host Name = gs-esxi-1-28
Operating System  = VMkernel 6.7.0
StoreLib IT Version = 07.2000.0200.0200
StoreLib IR3 Version = 16.14-0

System Overview :
===============

------------------------------------------------------------------------
Ctl Model        Ports PDs DGs DNOpt VDs VNOpt BBU sPR DS EHS ASOs Hlth 
------------------------------------------------------------------------
  0 PERCH730Mini     8   7   2     0   2     0 Opt On  3  N      0 Opt  
------------------------------------------------------------------------

Ctl=Controller Index|DGs=Drive groups|VDs=Virtual drives|Fld=Failed
PDs=Physical drives|DNOpt=Array NotOptimal|VNOpt=VD NotOptimal|Opt=Optimal
Msng=Missing|Dgd=Degraded|NdAtn=Need Attention|Unkwn=Unknown
sPR=Scheduled Patrol Read|DS=DimmerSwitch|EHS=Emergency Spare Drive
Y=Yes|N=No|ASOs=Advanced Software Options|BBU=Battery backup unit/CV
Hlth=Health|Safe=Safe-mode boot|CertProv-Certificate Provision mode
Chrg=Charging | MsngCbl=Cable Failure

[root@gs-ansible-1-118 ~]#  ssh 192.168.1.28 '/opt/lsi/perccli/perccli  /call/eall/sall show'
CLI Version = 007.1910.0000.0000 Oct 08, 2021
Operating system = VMkernel 6.7.0
Controller = 0
Status = Success
Description = Show Drive Information Succeeded.


Drive Information :
=================

----------------------------------------------------------------------------------
EID:Slt DID State DG       Size Intf Med SED PI SeSz Model                Sp Type 
----------------------------------------------------------------------------------
32:0      0 Onln   0 185.750 GB SATA SSD N   N  512B INTEL SSDSC2BX200G4R U  -    
32:1      1 Onln   0 185.750 GB SATA SSD N   N  512B INTEL SSDSC2BX200G4R U  -    
32:2      2 Onln   1 558.375 GB SAS  HDD N   Y  512B ST600MM0088          U  -    
32:3      3 Onln   1 558.375 GB SAS  HDD N   Y  512B ST600MM0088          U  -    
32:4      4 Onln   1 558.375 GB SAS  HDD N   Y  512B ST600MM0088          U  -    
32:5      5 Onln   1 558.375 GB SAS  HDD N   Y  512B ST600MM0088          U  -    
32:7      7 Onln   0 185.750 GB SATA SSD N   N  512B INTEL SSDSC2BX200G4R U  -    
----------------------------------------------------------------------------------

EID=Enclosure Device ID|Slt=Slot No|DID=Device ID|DG=DriveGroup
DHS=Dedicated Hot Spare|UGood=Unconfigured Good|GHS=Global Hotspare
UBad=Unconfigured Bad|Sntze=Sanitize|Onln=Online|Offln=Offline|Intf=Interface
Med=Media Type|SED=Self Encryptive Drive|PI=Protection Info
SeSz=Sector Size|Sp=Spun|U=Up|D=Down|T=Transition|F=Foreign
UGUnsp=UGood Unsupported|UGShld=UGood shielded|HSPShld=Hotspare shielded
CFShld=Configured shielded|Cpybck=CopyBack|CBShld=Copyback Shielded
UBUnsp=UBad Unsupported|Rbld=Rebuild
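
For routine health checks it helps to flag any drive whose State is not Onln. A small filter over the table output above (column positions assume perccli's default layout shown here):

```shell
# bad_drives: print EID:Slt and State for every drive row not in Onln state
bad_drives() {
  awk '/^[0-9]+:[0-9]+ / && $3 != "Onln" {print $1, $3}'
}
# Usage:
# ssh 192.168.1.28 '/opt/lsi/perccli/perccli /call/eall/sall show' | bad_drives
```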

Source: blog.csdn.net/weixin_43423965/article/details/128559963