CephFS on SUSE Enterprise Storage 6

(1) Edit the policy configuration file and add the MDS role definition
# vim /srv/pillar/ceph/proposals/policy.cfg
# MDS (either the wildcard match or the explicit host list is enough)
role-mds/cluster/mds*.sls
role-mds/cluster/node00[234].example.com.sls

(2) Run stage 2 and stage 4
salt-run state.orch ceph.stage.2
salt-run state.orch ceph.stage.4
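
After the stages finish, it is worth confirming that the MDS role actually landed on the intended minions; a minimal check, assuming DeepSea's standard roles pillar key:
# salt 'node00*' pillar.get roles
# ceph mds stat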

(3) Move the file system metadata pool to SSD
# ceph osd pool set cephfs_metadata crush_rule ssd_replicated_rule
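
This assumes a replicated CRUSH rule named ssd_replicated_rule already exists. If it does not, one way to create such a rule constrained to the ssd device class, and to verify the pool picked it up (rule name and failure domain as used above):
# ceph osd crush rule create-replicated ssd_replicated_rule default host ssd
# ceph osd pool get cephfs_metadata crush_rule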

(4) Set the PG and PGP counts
Total raw capacity: 6 nodes * 12 OSDs * 4 TB = 288 TB, i.e. roughly 260 TB of data.
# ceph osd lspools
3 cephfs_data
4 cephfs_metadata

CephFS data is planned at 45% of the HDD capacity, metadata at 45% of the SSD capacity:
12 (HDD OSDs) * 6 (nodes) * 100 (PGs per OSD) * 45% (share) / 3 (replicas) = 1080 ==> round to 1024 PGs
1 (SSD OSD) * 6 (nodes) * 100 (PGs per OSD) * 45% (share) / 3 (replicas) = 90 ==> round to 128 PGs

ceph osd pool set cephfs_data pg_num 1024
ceph osd pool set cephfs_metadata pg_num 128
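
On Nautilus (Storage 6), pgp_num follows pg_num automatically, but on older releases, or to force the remapping immediately, it can be set explicitly:
# ceph osd pool set cephfs_data pgp_num 1024
# ceph osd pool set cephfs_metadata pgp_num 128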

(5) Disable scrub and deep scrub at the pool level (these pool flags are not shown by `ceph -s`)
# ceph osd pool set cephfs_data noscrub 1
# ceph osd pool set cephfs_data nodeep-scrub 1
# ceph osd pool set cephfs_metadata noscrub 1
# ceph osd pool set cephfs_metadata nodeep-scrub 1
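
The flags can be verified on the pool entries in the OSD map:
# ceph osd dump | grep -E 'cephfs_(data|metadata)'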

(6) Enable multiple active MDS daemons
# ceph fs set cephfs max_mds 2      # 'cephfs' here is the fs_name
# ceph mds stat
cephfs-2/2/2 up  {0=node003=up:active,1=node001=up:active}, 1 up:standby
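
A more detailed view of the active ranks and the standby daemon:
# ceph fs status cephfs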

(7) Edit ceph.conf and add the following parameters
# vim /etc/ceph/ceph.conf
mds_beacon_grace = 30                # MDS heartbeat grace period; default 15 s, raised to 30 s
mds_beacon_interval = 4              # default 4 s
mds_bal_fragment_size_max = 200000   # default 100000 files per directory fragment
mds_cache_memory_limit = 2147483648  # default 1073741824 (1 GB), raised to 2 GB; can go higher for a PoC
mds_cache_reservation = 0.050000     # keep 5% of the cache in reserve
mds_session_autoclose = 300          # evict an unresponsive client after 300 s
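
Values edited in ceph.conf only take effect after the MDS daemons are restarted. Most of these options can also be changed at runtime with injectargs, for example:
# ceph tell mds.* injectargs '--mds_cache_memory_limit=2147483648'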

(8) Mount on the client
# mkdir /mnt/cephfs_client/
# mount -t ceph 192.168.2.40,192.168.2.41,192.168.2.42:6789:/ \
/mnt/cephfs_client/ -o name=admin,\
secret=AQAfvWhdAAAAABAAIGnAtjOBdDLE8+t/u2zadQ==,rasize=16384  # rasize is in bytes: 16384 = 16 KB (16 MB would be 16777216)

# df -TH
Filesystem                                    Type      Size  Used Avail Use% Mounted on
192.168.2.40,192.168.2.41,192.168.2.42:6789:/ ceph       20G     0   20G   0% /mnt/cephfs_client
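
To make the mount persistent across reboots, the secret can be stored in a file (which also keeps it off the command line) and an /etc/fstab entry added; a minimal sketch, with /etc/ceph/admin.secret as an assumed path:
# echo 'AQAfvWhdAAAAABAAIGnAtjOBdDLE8+t/u2zadQ==' > /etc/ceph/admin.secret
# vim /etc/fstab
192.168.2.40,192.168.2.41,192.168.2.42:6789:/ /mnt/cephfs_client ceph name=admin,secretfile=/etc/ceph/admin.secret,_netdev,noatime 0 0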

(9) CephFS quotas
Client requirements: quotas are supported by the kernel client since version 4.17, and enforcement is only approximate (a writer may overshoot the limit slightly).
            A SUSE Linux Enterprise 15 client is recommended.
# zypper -n in attr
# mkdir /mnt/cephfs_client/quota_dir
 Limit to 100 MB
# setfattr -n ceph.quota.max_bytes -v 100000000 /mnt/cephfs_client/quota_dir/
 Limit to 10,000 files
# setfattr -n ceph.quota.max_files -v 10000 /mnt/cephfs_client/quota_dir/
Show the quota settings
# getfattr -n ceph.quota.max_files /mnt/cephfs_client/quota_dir
# getfattr -n ceph.quota.max_bytes /mnt/cephfs_client/quota_dir/

Create two test files
dd if=/dev/zero of=/tmp/cephfs_quota_90M bs=90M count=1
dd if=/dev/zero of=/tmp/cephfs_quota_30M bs=30M count=1

Copy the files into the quota directory
admin:/mnt/cephfs_client/quota_dir # cp /tmp/cephfs_quota_90M .
admin:/mnt/cephfs_client/quota_dir # cp /tmp/cephfs_quota_30M .
cp: error writing './cephfs_quota_30M': Disk quota exceeded
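
Setting a quota attribute back to 0 removes the corresponding limit:
# setfattr -n ceph.quota.max_bytes -v 0 /mnt/cephfs_client/quota_dir/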

(10) CephFS per-directory lifecycle policies: this feature is not currently available.

(11) NFS
This can be configured from the graphical interface: select the node hostname under Daemons.

Edit the DeepSea policy file
# vim /srv/pillar/ceph/proposals/policy.cfg
# NFS
role-ganesha/cluster/node003.example.com.sls

Run the Salt orchestration for stage 2 and stage 4 (the pillar.items call in between verifies the role was assigned):
# salt-run state.orch ceph.stage.2
# salt 'node003*' pillar.items
# salt-run state.orch ceph.stage.4


# cat /etc/ganesha/ganesha.conf | grep -v ^# | grep -v ^$
RADOS_URLS {
  # Path to a ceph.conf file for this cluster.
  Ceph_Conf = /etc/ceph/ceph.conf;
  # RADOS_URLS use their own ceph client too. Authenticated access
  # requires a cephx keyring file for the UserId below.
  UserId = "ganesha.node003";
  watch_url = "rados://cephfs_data/ganesha/conf-node003";
}
CACHEINODE {
    # Size the dirent cache down as small as possible.
    Dir_Chunk = 0;
    # size the inode cache as small as possible
    NParts = 1;
    Cache_Size = 1;                 # hash table size per partition
}
# Whether to activate Kerberos 5; default false
NFS_KRB5
{
    Active_krb5 = false;
}
%url rados://cephfs_data/ganesha/conf-node003
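
The object referenced by the %url line holds the actual EXPORT blocks. A minimal sketch of what such an export can look like (Export_Id, Pseudo path, and access settings are illustrative, not taken from this cluster):
EXPORT {
    Export_Id = 1;
    Path = "/";                      # CephFS path to export
    Pseudo = "/cephfs";              # NFSv4 pseudo-filesystem path
    Access_Type = RW;
    Squash = No_Root_Squash;
    FSAL {
        Name = CEPH;
        User_Id = "ganesha.node003";
    }
}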


NFS export configuration (on this cluster the export objects live in the .rgw.root pool under the ganesha namespace):
# rados -p .rgw.root ls --namespace=ganesha
export-1
conf-node003
conf-node004

Relevant rados subcommands (from rados --help):
 get <obj-name> <outfile>                      fetch object
 put <obj-name> <infile> [--offset offset]     write object

# rados -p .rgw.root get export-1 nfs --namespace=ganesha
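
After editing the fetched file, it can be written back with the matching put, and the Ganesha service restarted on the gateway node to pick up the change:
# rados -p .rgw.root put export-1 nfs --namespace=ganesha
# systemctl restart nfs-ganesha.service
A client can then mount the export through its pseudo path, e.g. mount -t nfs node003:/cephfs /mnt/nfs (the path depends on the Pseudo setting in the export).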

(12) CIFS (Samba)

On node002, install the Samba packages:
# zypper in samba-ceph samba-winbind

On the admin node, create a cephx user for the Samba gateway and copy its keyring to node002:
# cd /etc/ceph
# ceph auth get-or-create client.samba.gw mon 'allow r' \
    osd 'allow *' mds 'allow *' -o ceph.client.samba.gw.keyring
# scp ceph.client.samba.gw.keyring node002:/etc/ceph/
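
The new cephx user can be verified from the admin node:
# ceph auth get client.samba.gw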

# cp /etc/samba/smb.conf /etc/samba/smb.conf.bak

Edit the Samba configuration file
# vim /etc/samba/smb.conf
[global]
        workgroup = WORKGROUP
        passdb backend = tdbsam
        printing = cups
        printcap name = cups
        printcap cache time = 750
        cups options = raw
        map to guest = Bad User
        logon path = \\%L\profiles\.msprofile
        logon home = \\%L\%U\.9xprofile
        logon drive = P:
        usershare allow guests = Yes
[suse]
        # export the root of CephFS through the vfs_ceph module
        path = /
        vfs objects = ceph
        ceph: config_file = /etc/ceph/ceph.conf
        ceph: user_id = samba.gw
        read only = no
        # oplocks and kernel share modes are disabled for the CephFS-backed share
        oplocks = no
        kernel share modes = no

Enable and start the services:
# systemctl enable smb.service
# systemctl start smb.service
# systemctl enable nmb.service
# systemctl start nmb.service

Check the listening ports
# netstat -ntulp | grep mbd
tcp        0      0 0.0.0.0:139             0.0.0.0:*               LISTEN      6457/smbd           
tcp        0      0 0.0.0.0:445             0.0.0.0:*               LISTEN      6457/smbd           
tcp6       0      0 :::139                  :::*                    LISTEN      6457/smbd           
tcp6       0      0 :::445                  :::*                    LISTEN      6457/smbd           
udp        0      0 172.200.50.255:137      0.0.0.0:*                           6483/nmbd           
udp        0      0 172.200.50.41:137       0.0.0.0:*                           6483/nmbd           
udp        0      0 192.168.2.255:137       0.0.0.0:*                           6483/nmbd           
udp        0      0 192.168.2.41:137        0.0.0.0:*                           6483/nmbd           
udp        0      0 192.168.3.255:137       0.0.0.0:*                           6483/nmbd           
udp        0      0 192.168.3.41:137        0.0.0.0:*                           6483/nmbd           
udp        0      0 0.0.0.0:137             0.0.0.0:*                           6483/nmbd           
udp        0      0 172.200.50.255:138      0.0.0.0:*                           6483/nmbd           
udp        0      0 172.200.50.41:138       0.0.0.0:*                           6483/nmbd           
udp        0      0 192.168.2.255:138       0.0.0.0:*                           6483/nmbd           
udp        0      0 192.168.2.41:138        0.0.0.0:*                           6483/nmbd           
udp        0      0 192.168.3.255:138       0.0.0.0:*                           6483/nmbd           
udp        0      0 192.168.3.41:138        0.0.0.0:*                           6483/nmbd           
udp        0      0 0.0.0.0:138             0.0.0.0:*                           6483/nmbd  

Validate the configuration file
# testparm

Reload the configuration
# smbcontrol all reload-config

Set a Samba password for the user
# smbpasswd -a root
New SMB password:
Retype new SMB password:
Added user root.

Access the share from Linux
# smbclient -L //172.200.50.41/
Enter WORKGROUP\root's password:

        Sharename       Type      Comment
        ---------       ----      -------
        suse            Disk      
        IPC$            IPC       IPC Service (Samba 4.9.5-git.176.375e1f057883.6.1-SUSE-oS15.0-x86_64)
Reconnecting with SMB1 for workgroup listing.

        Server               Comment
        ---------            -------

        Workgroup            Master
        ---------            -------
        WORKGROUP            NODE002
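
The share can also be mounted from Linux through the kernel CIFS client; a sketch, assuming cifs-utils is available and /mnt/smb is used as the mount point:
# zypper -n in cifs-utils
# mkdir -p /mnt/smb
# mount -t cifs //172.200.50.41/suse /mnt/smb -o username=root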
        
Access the share from Windows
1) Enable the insecure guest logons policy

  • Press Win+R and type "gpedit.msc" in the dialog, then click OK.
  • In the Local Group Policy Editor, navigate to Computer Configuration --> Administrative Templates --> Network.
  • Click "Lanman Workstation".
  • Double-click "Enable insecure guest logons", select "Enabled", and click OK.


2) Then open the share in Explorer:
\\172.200.50.41\suse

Source: www.cnblogs.com/alfiesuse/p/11645652.html