IBM General Parallel File System (GPFS) has been renamed twice in recent years: first to IBM Spectrum Scale, and more recently to IBM Storage Scale.
GPFS in Detail
GPFS offers three main approaches to data redundancy. In the first, the most traditional, an external device provides the protection; in the second and third, GPFS itself provides it. The difference between those two: option 2 resembles centralized storage, where parity is never transmitted over the network, while option 3 resembles distributed storage, where parity does travel over the network.
- The underlying storage is independent of GPFS (disk arrays, software or hardware RAID); the RAID LUNs are mapped to the GPFS server nodes for use as NSDs
- Two servers with redundant connections to a JBOD; GPFS uses the raw disks and implements RAID via GPFS Native RAID (GNR)
- Multiple servers using local disks; GPFS uses the raw disks and implements RAID via Erasure Code
A GPFS file system is built on Network Shared Disks (NSDs). NSDs can be divided into storage pools: the system pool always exists, and additional pools can be added. Different pools can use different types of NSD, with automated data migration and placement rules between them. Within a pool, data is distributed evenly across the NSDs: with the file system's metadata and data replica count set to 1 (the default) this resembles RAID 0; set to 2, it resembles RAID 1. The replica counts for metadata and data can be set independently and may differ, but neither can exceed its maximum, which is fixed when the file system is created (default 2, may be set to 3). Beyond the file-system level, replica counts can also be controlled at a finer granularity via rules. NSDs have four usage types, three of which are commonly used: dataAndMetadata (the default for the system pool), dataOnly (the default for non-system pools), and metadataOnly.
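As a minimal sketch of the file-system-level replica settings and a placement rule described above: `fs1`, `nsd.stanza`, and the pool name `data` are placeholder names, not taken from this DSS-G setup.

```shell
# -m/-M: default and maximum metadata replicas; -r/-R: the same for data.
# The maximums (-M/-R) cannot be changed after creation.
mmcrfs fs1 -F nsd.stanza -m 2 -M 2 -r 1 -R 2

# A placement policy: files not matched by any other rule land in the
# 'data' pool instead of the system pool.
cat > policy.txt <<'EOF'
RULE 'default' SET POOL 'data'
EOF
mmchpolicy fs1 policy.txt
```

Both commands are standard GPFS administration commands; the stanza file and pool layout would of course depend on the actual NSDs.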
A block is the largest contiguous amount of disk space that can be allocated to a file (on one NSD), and also the largest size issued in a single I/O operation. A block consists of a fixed number of subblocks; a subblock is the smallest unit of space that can be allocated to a file. A file larger than one block is stored in one or more full blocks plus one or more additional subblocks holding the remainder; a file smaller than one block is stored in one or more subblocks. When streaming a large file, once one NSD has received a full block the write moves on to the next NSD, balancing performance and space consumption across the NSDs, so a larger block size clearly helps the storage system's throughput. Block and subblock sizes correspond as follows: 64 KiB blocks use 2 KiB subblocks, 128 KiB blocks use 4 KiB subblocks, 256 KiB to 4 MiB blocks use 8 KiB subblocks, and 8 to 16 MiB blocks use 16 KiB subblocks.
Block size is a key GPFS parameter. The file system's block size, subblock size, and subblocks per block are all set at file system creation and cannot be changed afterwards; changing them means creating a new file system and migrating the data yourself, a cost that is usually unacceptable. The file system block size also cannot exceed the global maxblocksize setting, and changing maxblocksize requires a full GPFS outage. All data in a file system shares one block size and subblock size; the metadata block size can be set separately, but data and metadata share the same number of subblocks per block. For example, with a 16 MiB block size and a 1 MiB metadata block size, the data subblock size is 128 KiB and the metadata subblock size is 8 KiB, with 128 subblocks per block. Note that the data subblock here is larger than the standard 16 KiB, because the metadata subblock count dictates the data subblock count.
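The subblock arithmetic in that example can be checked directly; this reproduces the 16 MiB data / 1 MiB metadata case from the text:

```shell
# A 1 MiB metadata block with its standard 8 KiB subblock gives 128
# subblocks per block; a 16 MiB data block divided into the same 128
# subblocks yields 128 KiB data subblocks.
meta_block_kib=1024
meta_subblock_kib=8
nsubblocks=$((meta_block_kib / meta_subblock_kib))
data_block_kib=$((16 * 1024))
data_subblock_kib=$((data_block_kib / nsubblocks))
echo "$nsubblocks $data_subblock_kib"     # prints: 128 128
```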
DSS-G Overview
Lenovo DSS-G is Lenovo's integrated hardware/software appliance built around GPFS. Its hardware, firmware, and software are tightly integrated, and data protection uses GNR or EC, abandoning the traditional RAID architecture. The DSS-G2xy models use the GNR architecture; the DSS-G100 ECE models use EC.
The DSS-G2xy hardware consists of two identically configured Lenovo x86 servers and one or more JBODs. In the model number, x and y encode the JBOD type and count: x is the number of 4U/5U high-density 3.5-inch HDD enclosures, y the number of 2U 2.5-inch SSD enclosures. DSS-G210, for example, has a single high-density 3.5-inch enclosure.
The server configuration is fixed: the only choices are memory capacity (384/768 GB) and InfiniBand adapter (none, single-port NDR, or dual-port NDR200). 25 Gb Ethernet is onboard, the four HBAs are installed in fixed slots, and if IB adapters are chosen, two go into fixed slots as well; in other words, every expansion slot is predetermined. For the JBODs only the drive capacity can be chosen, and every slot is populated with identical drives, except for two fixed slots in the first high-density HDD enclosure that must hold 800 GB SSDs. The cabling between servers and JBODs is likewise fixed: every JBOD is directly attached to both servers with redundant links. During installation, the install script checks the hardware configuration and the entire cabling topology and upgrades all firmware. DSS-G is thus a tightly integrated hardware/firmware/software system, and an upgrade means upgrading all software and firmware, operating system included.
DSS-G Installation and Deployment
Step 1: Use Lenovo's DSS-customized Confluent to install the OS on both servers (dssg-install); this automatically installs the operating system, IB drivers, GPFS, and the other required software.
Step 2: Use dsslsadapters to check the PCIe adapter slot placement, dsschmod-drive to change the HDD configuration, dssgcktopology to verify the cabling topology, and dssgckdisks to test disk performance.
Step 3: Create a cluster or join an existing one; use mmlsconfig to verify that nsdRAIDFirmwareDirectory is /opt/lenovo/dss/firmware, then use mmlsfirmware to check firmware versions.
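The step-3 checks might look like the following; both are standard GPFS commands, and the `--type` filter is just one way to narrow the firmware listing.

```shell
# Show a single configuration attribute rather than the full dump.
mmlsconfig nsdRAIDFirmwareDirectory   # expect: /opt/lenovo/dss/firmware

# List firmware levels; restrict to drives here, or omit --type for all.
mmlsfirmware --type drive
```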
These first three steps simply follow the DSS documentation; apart from the two servers' names and IP addresses, there is essentially nothing to customize.
Once the system is installed, some tuning can be done: configure LACP on the Ethernet interfaces, add IPoIB on the IB network, and install management and monitoring software on the servers (lldpd, node_exporter, and the like). IPoIB is not required by GPFS.
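A sketch of that optional network tuning, assuming a RHEL-style system managed by NetworkManager; the interface names (ens1f0, ens1f1, ib0) and the address are placeholders, not values from this deployment.

```shell
# LACP (802.3ad) bond over the two onboard 25 Gb Ethernet ports.
nmcli con add type bond con-name bond0 ifname bond0 \
    bond.options "mode=802.3ad,miimon=100"
nmcli con add type ethernet con-name bond0-p1 ifname ens1f0 master bond0
nmcli con add type ethernet con-name bond0-p2 ifname ens1f1 master bond0

# IPoIB on the InfiniBand port (optional; GPFS itself does not need it).
nmcli con add type infiniband con-name ib0 ifname ib0 \
    ipv4.method manual ipv4.addresses 10.0.100.1/24
```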
Step 4: Use dssgmkstorage to create the storage. This step turns the disks in the JBODs and servers into pdisks, Recovery Groups, and Declustered Arrays: pdisks are assigned to RGs, each pdisk gets a primary and a backup server, and the pdisks are grouped into DAs.
- node class: the two DSS servers form one node class, named by default as nc_ prefixed to the hostname where dssgmkstorage is run; to change it you must directly edit local -r classnameL= in /opt/lenovo/dss/bin/dssgmkstorage
- pdisk: a block device in the server OS, corresponding to every physical disk in the JBODs plus some of the Virtual Disks on the servers' RAID controllers. The two 800 GB server drives are mirrored (RAID 1) and split into multiple VDs: the first and largest VD (about 700 GB) holds the operating system; of the remaining five small VDs (8000 MB each), two are made into pdisks and the other three appear to be reserved.
- Recovery Group: each pdisk belongs to exactly one RG, and each RG has a primary and a backup server that fail over to each other for high availability. A DSS with two servers has two RGs, and every pdisk is assigned to one of them: RG1 has server 1 as primary and server 2 as backup, RG2 has server 2 as primary and server 1 as backup.
- Declustered Array: each pdisk belongs to one DA; pdisks of the same capacity and performance are grouped into a DA, e.g. the 3.5-inch HDDs form one DA and the 2.5-inch SSDs another. On a DSS-G2x0 three DAs are created: NVR from the servers' local disks (RAID controller VDs), SSD from the two SSDs in the first JBOD, and DA1 from all the HDDs in the JBODs.
- Three vdisks, LOGHOME, LOGTIP, and LOGTIPBACKUP, are created automatically for each RG, placed in DA1, NVR, and SSD respectively; every RG needs these three vdisks to store GNR metadata. The log home lives on the JBOD HDDs with 4-way replication and holds the long-term event log, the short-term event log, the metadata log, and the fast-write log that records small writes. The log tip lives on the servers' local disks with 2-way replication (on top of the underlying RAID 1) and acts as a write cache for the log home: records are written to the log tip first and later migrated to the log home, which improves performance. The log tip backup is an extra copy of the log tip, stored unreplicated on the JBOD SSDs.
If you think of DSS as a storage array, the JBODs are the expansion enclosures, the servers are the controllers, pdisks are the physical drives, an RG defines each drive's primary and backup controller, a DA is a disk pool, and vdisks are the storage pools and LUNs. The three logs are storage space used internally, like a vault drive, except that DSS has no battery, so all write-cache contents must be flushed to disk immediately. It behaves like a dual-controller active-standby array: at any moment each physical drive and each pool/LUN belongs to only one controller.
After creation, list the result; the two RGs are visible:
[root@dss01 ~]# mmvdisk recoverygroup list --declustered-array
declustered needs capacity pdisks
recovery group array service type BER trim total raw free raw free% total spare background task
-------------- ----------- ------- ---- ------- ---- --------- -------- ----- ----- ----- ---------------
dss01 NVR no NVR enable - - - - 2 0 scrub (16%)
dss01 SSD no SSD enable - - - - 1 0 scrub (8%)
dss01 DA1 no HDD enable no 834 TiB 834 TiB 100% 44 2 scrub (0%)
dss02 NVR no NVR enable - - - - 2 0 scrub (16%)
dss02 SSD no SSD enable - - - - 1 0 scrub (8%)
dss02 DA1 no HDD enable no 834 TiB 834 TiB 100% 44 2 scrub (0%)
mmvdisk: Total capacity is the raw space before any vdisk set definitions.
mmvdisk: Free capacity is what remains for additional vdisk set definitions.
[root@dss01 ~]# mmvdisk recoverygroup list --recovery-group dss01 --all
needs user
recovery group node class active current or master server service vdisks remarks
-------------- ---------- ------- -------------------------------- ------- ------ -------
dss01 dssg01 yes dss01 no 0
recovery group format version
recovery group current allowable mmvdisk version
-------------- ------------- ------------- ---------------
dss01 5.1.5.0 5.1.5.0 5.1.9.2
node
number server active remarks
------ -------------------------------- ------- -------
922 dss01 yes primary, serving dss01
923 dss02 yes backup
declustered needs vdisks pdisks capacity
array service type BER trim user log total spare rt total raw free raw background task
----------- ------- ---- ------- ---- ---- --- ----- ----- -- --------- -------- ---------------
NVR no NVR enable - 0 1 2 0 1 - - scrub 14d (16%)
SSD no SSD enable - 0 1 1 0 1 - - scrub 14d (8%)
DA1 no HDD enable no 0 1 44 2 2 834 TiB 834 TiB scrub 14d (2%)
mmvdisk: Total capacity is the raw space before any vdisk set definitions.
mmvdisk: Free capacity is what remains for additional vdisk set definitions.
declustered paths AU
pdisk array active total capacity free space log size state
------------ ----------- ------ ----- -------- ---------- -------- -----
n922v001 NVR 1 1 7992 MiB 7816 MiB 120 MiB ok
n923v001 NVR 1 1 7992 MiB 7816 MiB 120 MiB ok
e1s01ssd SSD 2 4 745 GiB 744 GiB 120 MiB ok
e1s02 DA1 2 4 20 TiB 19 TiB 40 MiB ok
e1s03 DA1 2 4 20 TiB 19 TiB 40 MiB ok
e1s04 DA1 2 4 20 TiB 19 TiB 40 MiB ok
e1s05 DA1 2 4 20 TiB 19 TiB 40 MiB ok
e1s06 DA1 2 4 20 TiB 19 TiB 40 MiB ok
e1s07 DA1 2 4 20 TiB 19 TiB 40 MiB ok
e1s16 DA1 2 4 20 TiB 19 TiB 40 MiB ok
e1s17 DA1 2 4 20 TiB 19 TiB 40 MiB ok
e1s18 DA1 2 4 20 TiB 19 TiB 40 MiB ok
e1s19 DA1 2 4 20 TiB 19 TiB 40 MiB ok
e1s20 DA1 2 4 20 TiB 19 TiB 40 MiB ok
e1s21 DA1 2 4 20 TiB 19 TiB 40 MiB ok
e1s22 DA1 2 4 20 TiB 19 TiB 40 MiB ok
e1s23 DA1 2 4 20 TiB 19 TiB 40 MiB ok
e1s31 DA1 2 4 20 TiB 19 TiB 40 MiB ok
e1s32 DA1 2 4 20 TiB 19 TiB 40 MiB ok
e1s33 DA1 2 4 20 TiB 19 TiB 40 MiB ok
e1s34 DA1 2 4 20 TiB 19 TiB 40 MiB ok
e1s35 DA1 2 4 20 TiB 19 TiB 40 MiB ok
e1s36 DA1 2 4 20 TiB 19 TiB 40 MiB ok
e1s37 DA1 2 4 20 TiB 19 TiB 40 MiB ok
e1s46 DA1 2 4 20 TiB 19 TiB 40 MiB ok
e1s47 DA1 2 4 20 TiB 19 TiB 40 MiB ok
e1s48 DA1 2 4 20 TiB 19 TiB 40 MiB ok
e1s49 DA1 2 4 20 TiB 19 TiB 40 MiB ok
e1s50 DA1 2 4 20 TiB 19 TiB 40 MiB ok
e1s51 DA1 2 4 20 TiB 19 TiB 40 MiB ok
e1s52 DA1 2 4 20 TiB 19 TiB 40 MiB ok
e1s53 DA1 2 4 20 TiB 19 TiB 40 MiB ok
e1s61 DA1 2 4 20 TiB 19 TiB 40 MiB ok
e1s62 DA1 2 4 20 TiB 19 TiB 40 MiB ok
e1s63 DA1 2 4 20 TiB 19 TiB 40 MiB ok
e1s64 DA1 2 4 20 TiB 19 TiB 40 MiB ok
e1s65 DA1 2 4 20 TiB 19 TiB 40 MiB ok
e1s66 DA1 2 4 20 TiB 19 TiB 40 MiB ok
e1s67 DA1 2 4 20 TiB 19 TiB 40 MiB ok
e1s76 DA1 2 4 20 TiB 19 TiB 40 MiB ok
e1s77 DA1 2 4 20 TiB 19 TiB 40 MiB ok
e1s78 DA1 2 4 20 TiB 19 TiB 40 MiB ok
e1s79 DA1 2 4 20 TiB 19 TiB 40 MiB ok
e1s80 DA1 2 4 20 TiB 19 TiB 40 MiB ok
e1s81 DA1 2 4 20 TiB 19 TiB 40 MiB ok
e1s82 DA1 2 4 20 TiB 19 TiB 40 MiB ok
e1s83 DA1 2 4 20 TiB 19 TiB 40 MiB ok
declustered capacity all vdisk sets defined
recovery group array type total raw free raw free% in the declustered array
-------------- ----------- ---- --------- -------- ----- ------------------------
dss01 DA1 HDD 834 TiB 834 TiB 100% -
vdisk set map memory per server
node class available required required per vdisk set
---------- --------- -------- ----------------------
dssg01 90 GiB 387 MiB -
declustered block size and
vdisk array activity capacity RAID code checksum granularity remarks
------------------ ----------- -------- -------- --------------- --------- --------- -------
RG001LOGHOME DA1 normal 48 GiB 4WayReplication 2 MiB 4096 log home
RG001LOGTIP NVR normal 48 MiB 2WayReplication 2 MiB 4096 log tip
RG001LOGTIPBACKUP SSD normal 48 MiB Unreplicated 2 MiB 4096 log tip backup
declustered VCD spares
configuration data array configured actual remarks
------------------ ----------- ---------- ------ -------
relocation space DA1 24 28 must contain VCD
configuration data disk group fault tolerance remarks
------------------ --------------------------------- -------
rg descriptor 4 pdisk limiting fault tolerance
system index 4 pdisk limited by rg descriptor
vdisk RAID code disk group fault tolerance remarks
------------------ --------------- --------------------------------- -------
RG001LOGHOME 4WayReplication 3 pdisk
RG001LOGTIP 2WayReplication 1 pdisk
RG001LOGTIPBACKUP Unreplicated 0 pdisk
[root@dss01 ~]# mmvdisk recoverygroup list --recovery-group dss02 --all
needs user
recovery group node class active current or master server service vdisks remarks
-------------- ---------- ------- -------------------------------- ------- ------ -------
dss02 dssg01 yes dss02 no 0
recovery group format version
recovery group current allowable mmvdisk version
-------------- ------------- ------------- ---------------
dss02 5.1.5.0 5.1.5.0 5.1.9.2
node
number server active remarks
------ -------------------------------- ------- -------
922 dss01 yes backup
923 dss02 yes primary, serving dss02
declustered needs vdisks pdisks capacity
array service type BER trim user log total spare rt total raw free raw background task
----------- ------- ---- ------- ---- ---- --- ----- ----- -- --------- -------- ---------------
NVR no NVR enable - 0 1 2 0 1 - - scrub 14d (16%)
SSD no SSD enable - 0 1 1 0 1 - - scrub 14d (8%)
DA1 no HDD enable no 0 1 44 2 2 834 TiB 834 TiB scrub 14d (2%)
mmvdisk: Total capacity is the raw space before any vdisk set definitions.
mmvdisk: Free capacity is what remains for additional vdisk set definitions.
declustered paths AU
pdisk array active total capacity free space log size state
------------ ----------- ------ ----- -------- ---------- -------- -----
n922v002 NVR 1 1 7992 MiB 7816 MiB 120 MiB ok
n923v002 NVR 1 1 7992 MiB 7816 MiB 120 MiB ok
e1s12ssd SSD 2 4 745 GiB 744 GiB 120 MiB ok
e1s08 DA1 2 4 20 TiB 19 TiB 40 MiB ok
e1s09 DA1 2 4 20 TiB 19 TiB 40 MiB ok
e1s10 DA1 2 4 20 TiB 19 TiB 40 MiB ok
e1s11 DA1 2 4 20 TiB 19 TiB 40 MiB ok
e1s13 DA1 2 4 20 TiB 19 TiB 40 MiB ok
e1s14 DA1 2 4 20 TiB 19 TiB 40 MiB ok
e1s15 DA1 2 4 20 TiB 19 TiB 40 MiB ok
e1s24 DA1 2 4 20 TiB 19 TiB 40 MiB ok
e1s25 DA1 2 4 20 TiB 19 TiB 40 MiB ok
e1s26 DA1 2 4 20 TiB 19 TiB 40 MiB ok
e1s27 DA1 2 4 20 TiB 19 TiB 40 MiB ok
e1s28 DA1 2 4 20 TiB 19 TiB 40 MiB ok
e1s29 DA1 2 4 20 TiB 19 TiB 40 MiB ok
e1s30 DA1 2 4 20 TiB 19 TiB 40 MiB ok
e1s38 DA1 2 4 20 TiB 19 TiB 40 MiB ok
e1s39 DA1 2 4 20 TiB 19 TiB 40 MiB ok
e1s40 DA1 2 4 20 TiB 19 TiB 40 MiB ok
e1s41 DA1 2 4 20 TiB 19 TiB 40 MiB ok
e1s42 DA1 2 4 20 TiB 19 TiB 40 MiB ok
e1s43 DA1 2 4 20 TiB 19 TiB 40 MiB ok
e1s44 DA1 2 4 20 TiB 19 TiB 40 MiB ok
e1s45 DA1 2 4 20 TiB 19 TiB 40 MiB ok
e1s54 DA1 2 4 20 TiB 19 TiB 40 MiB ok
e1s55 DA1 2 4 20 TiB 19 TiB 40 MiB ok
e1s56 DA1 2 4 20 TiB 19 TiB 40 MiB ok
e1s57 DA1 2 4 20 TiB 19 TiB 40 MiB ok
e1s58 DA1 2 4 20 TiB 19 TiB 40 MiB ok
e1s59 DA1 2 4 20 TiB 19 TiB 40 MiB ok
e1s60 DA1 2 4 20 TiB 19 TiB 40 MiB ok
e1s68 DA1 2 4 20 TiB 19 TiB 40 MiB ok
e1s69 DA1 2 4 20 TiB 19 TiB 40 MiB ok
e1s70 DA1 2 4 20 TiB 19 TiB 40 MiB ok
e1s71 DA1 2 4 20 TiB 19 TiB 40 MiB ok
e1s72 DA1 2 4 20 TiB 19 TiB 40 MiB ok
e1s73 DA1 2 4 20 TiB 19 TiB 40 MiB ok
e1s74 DA1 2 4 20 TiB 19 TiB 40 MiB ok
e1s75 DA1 2 4 20 TiB 19 TiB 40 MiB ok
e1s84 DA1 2 4 20 TiB 19 TiB 40 MiB ok
e1s85 DA1 2 4 20 TiB 19 TiB 40 MiB ok
e1s86 DA1 2 4 20 TiB 19 TiB 40 MiB ok
e1s87 DA1 2 4 20 TiB 19 TiB 40 MiB ok
e1s88 DA1 2 4 20 TiB 19 TiB 40 MiB ok
e1s89 DA1 2 4 20 TiB 19 TiB 40 MiB ok
e1s90 DA1 2 4 20 TiB 19 TiB 40 MiB ok
declustered capacity all vdisk sets defined
recovery group array type total raw free raw free% in the declustered array
-------------- ----------- ---- --------- -------- ----- ------------------------
dss02 DA1 HDD 834 TiB 834 TiB 100% -
vdisk set map memory per server
node class available required required per vdisk set
---------- --------- -------- ----------------------
dssg01 90 GiB 387 MiB -
declustered block size and
vdisk array activity capacity RAID code checksum granularity remarks
------------------ ----------- -------- -------- --------------- --------- --------- -------
RG002LOGHOME DA1 normal 48 GiB 4WayReplication 2 MiB 4096 log home
RG002LOGTIP NVR normal 48 MiB 2WayReplication 2 MiB 4096 log tip
RG002LOGTIPBACKUP SSD normal 48 MiB Unreplicated 2 MiB 4096 log tip backup
declustered VCD spares
configuration data array configured actual remarks
------------------ ----------- ---------- ------ -------
relocation space DA1 24 28 must contain VCD
configuration data disk group fault tolerance remarks
------------------ --------------------------------- -------
rg descriptor 4 pdisk limiting fault tolerance
system index 4 pdisk limited by rg descriptor
vdisk RAID code disk group fault tolerance remarks
------------------ --------------- --------------------------------- -------
RG002LOGHOME 4WayReplication 3 pdisk
RG002LOGTIP 2WayReplication 1 pdisk
RG002LOGTIPBACKUP Unreplicated 0 pdisk
Step 5: Use dssServerConfig.sh to tune the GPFS configuration.
Step 6: Define and create the vdisks. vdisks are defined and created on a DA and are used as NSDs; the main decisions are the RAID code, block size, and capacity. A vdisk spans all pdisks in its DA. The RAID code can be a Reed-Solomon code (4+2p/4+3p/8+2p/8+3p) or replication (3-way or 4-way). The block size can be up to 16 MiB for Reed-Solomon codes and up to 1 MiB for replication.
Like most RAID 6 implementations, GNR suffers a write penalty: performance is best when writing a full block, while partial writes must recompute parity and are slower. Replication obviously does not have this problem.
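The full-block arithmetic for the 8+2p / 16 MiB configuration chosen below can be sketched as follows; the per-strip figure is derived from the block size and code, not a value reported by GNR.

```shell
# An 8+2p block is split into 8 data strips plus 2 parity strips, so each
# pdisk receives 2 MiB per full 16 MiB block write; only writes of a full
# block avoid the read-modify-write parity penalty.
block_kib=$((16 * 1024))
data_strips=8
strip_kib=$((block_kib / data_strips))
echo "${strip_kib} KiB per strip"     # prints: 2048 KiB per strip
```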
Metadata occupies little space but is read and written in small chunks, so replication is recommended for performance, e.g. 3WayReplication, with a space efficiency of only 1/3. Data occupies a lot of space, so parity codes improve space efficiency, e.g. 8+2p at 80%. 3WayReplication and 8+2p both tolerate two simultaneous disk failures; tolerating three simultaneous failures requires 4WayReplication and 8+3p, which improves safety at the cost of performance and space efficiency.
Metadata capacity should be at least 1%. Computing the metadata share with 3-way-replicated metadata and 8+2p data:
(0.03/3)/(0.97*0.8) = 1.29%: metadata takes 3% of raw capacity, data 97%, so metadata is 1.29% of usable capacity
(0.05/3)/(0.95*0.8) = 2.19%: metadata takes 5% of raw capacity, data 95%, so metadata is 2.19% of usable capacity
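The two ratios above can be reproduced directly (the 1/3 and 0.8 factors are the space efficiencies of 3WayReplication and 8+2p):

```shell
# metadata share of usable capacity = (meta_raw / 3) / (data_raw * 0.8)
awk 'BEGIN { printf "%.2f%%\n", (0.03/3)/(0.97*0.8)*100 }'   # prints: 1.29%
awk 'BEGIN { printf "%.2f%%\n", (0.05/3)/(0.95*0.8)*100 }'   # prints: 2.19%
```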
We chose metadata at 5% of raw capacity, 3-way replicated, with a 1 MiB block size; a vdisk containing metadata is automatically assigned to the system pool. Data gets 95% of raw capacity with 8+2p and a 16 MiB block size, assigned to the data pool. Define the vdisks first; once the definitions are verified and the memory requirement is satisfied, create them.
[root@dss01 ~]# mmvdisk vdiskset define --vdisk-set mvs01 --recovery-group dss01,dss02 --code 3WayReplication --block-size 1m --set-size 5% --nsd-usage metadataOnly
mmvdisk: Vdisk set 'mvs01' has been defined.
mmvdisk: Recovery group 'dss01' has been defined in vdisk set 'mvs01'.
mmvdisk: Recovery group 'dss02' has been defined in vdisk set 'mvs01'.
member vdisks
vdisk set count size raw size created file system and attributes
-------------- ----- -------- -------- ------- --------------------------
mvs01 2 13 TiB 41 TiB no -, DA1, 3WayReplication, 1 MiB, metadataOnly, system
declustered capacity all vdisk sets defined
recovery group array type total raw free raw free% in the declustered array
-------------- ----------- ---- --------- -------- ----- ------------------------
dss01 DA1 HDD 834 TiB 793 TiB 95% mvs01
dss02 DA1 HDD 834 TiB 793 TiB 95% mvs01
vdisk set map memory per server
node class available required required per vdisk set
---------- --------- -------- ----------------------
dssg01 90 GiB 1080 MiB mvs01 (693 MiB)
[root@dss01 ~]# mmvdisk vdiskset define --vdisk-set dvs01 --recovery-group dss01,dss02 --code 8+2p --block-size 16m --set-size 95% --nsd-usage dataOnly --storage-pool data
mmvdisk: Vdisk set 'dvs01' has been defined.
mmvdisk: Recovery group 'dss01' has been defined in vdisk set 'dvs01'.
mmvdisk: Recovery group 'dss02' has been defined in vdisk set 'dvs01'.
member vdisks
vdisk set count size raw size created file system and attributes
-------------- ----- -------- -------- ------- --------------------------
dvs01 2 631 TiB 793 TiB no -, DA1, 8+2p, 16 MiB, dataOnly, data
declustered capacity all vdisk sets defined
recovery group array type total raw free raw free% in the declustered array
-------------- ----------- ---- --------- -------- ----- ------------------------
dss01 DA1 HDD 834 TiB 144 GiB 0% dvs01, mvs01
dss02 DA1 HDD 834 TiB 144 GiB 0% dvs01, mvs01
vdisk set map memory per server
node class available required required per vdisk set
---------- --------- -------- ----------------------
dssg01 90 GiB 14 GiB dvs01 (13 GiB), mvs01 (693 MiB)
[root@dss01 ~]# mmvdisk vdiskset list
vdisk set created file system recovery groups
---------------- ------- ----------- ---------------
dvs01 no - dss01, dss02
mvs01 no - dss01, dss02
[root@dss01 ~]# mmvdisk vdiskset list --vdisk-set all
member vdisks
vdisk set count size raw size created file system and attributes
-------------- ----- -------- -------- ------- --------------------------
dvs01 2 631 TiB 793 TiB no -, DA1, 8+2p, 16 MiB, dataOnly, data
mvs01 2 13 TiB 41 TiB no -, DA1, 3WayReplication, 1 MiB, metadataOnly, system
declustered capacity all vdisk sets defined
recovery group array type total raw free raw free% in the declustered array
-------------- ----------- ---- --------- -------- ----- ------------------------
dss01 DA1 HDD 834 TiB 144 GiB 0% dvs01, mvs01
dss02 DA1 HDD 834 TiB 144 GiB 0% dvs01, mvs01
vdisk set map memory per server
node class available required required per vdisk set
---------- --------- -------- ----------------------
dssg01 90 GiB 14 GiB dvs01 (13 GiB), mvs01 (693 MiB)
[root@dss01 ~]# mmvdisk vdiskset list --recovery-group all
declustered capacity all vdisk sets defined
recovery group array type total raw free raw free% in the declustered array
-------------- ----------- ---- --------- -------- ----- ------------------------
dss01 DA1 HDD 834 TiB 144 GiB 0% dvs01, mvs01
dss02 DA1 HDD 834 TiB 144 GiB 0% dvs01, mvs01
vdisk set map memory per server
node class available required required per vdisk set
---------- --------- -------- ----------------------
dssg01 90 GiB 14 GiB dvs01 (13 GiB), mvs01 (693 MiB)
[root@dss01 ~]# mmvdisk vdiskset create --vdisk-set mvs01,dvs01
mmvdisk: 2 vdisks and 2 NSDs will be created in vdisk set 'mvs01'.
mmvdisk: 2 vdisks and 2 NSDs will be created in vdisk set 'dvs01'.
mmvdisk: (mmcrvdisk) [I] Processing vdisk RG001VS001
mmvdisk: (mmcrvdisk) [I] Processing vdisk RG002VS001
mmvdisk: (mmcrvdisk) [I] Processing vdisk RG002VS002
mmvdisk: (mmcrvdisk) [I] Processing vdisk RG001VS002
mmvdisk: Created all vdisks in vdisk set 'mvs01'.
mmvdisk: Created all vdisks in vdisk set 'dvs01'.
mmvdisk: (mmcrnsd) Processing disk RG001VS001
mmvdisk: (mmcrnsd) Processing disk RG002VS001
mmvdisk: (mmcrnsd) Processing disk RG001VS002
mmvdisk: (mmcrnsd) Processing disk RG002VS002
mmvdisk: Created all NSDs in vdisk set 'mvs01'.
mmvdisk: Created all NSDs in vdisk set 'dvs01'.
Step 7: Create the file system. Since NSD usage and block size were already fixed in the previous vdisk step, only the vdisk sets need to be specified here.
[root@dss01 ~]# mmvdisk filesystem create --file-system dssfs --vdisk-set mvs01,dvs01 --mmcrfs -A yes -Q yes -n 1024 -T /dssfs --auto-inode-limit
mmvdisk: Creating file system 'dssfs'.
mmvdisk: The following disks of dssfs will be formatted on node dss01:
mmvdisk: RG001VS001: size 14520704 MB
mmvdisk: RG002VS001: size 14520704 MB
mmvdisk: RG001VS002: size 662657024 MB
mmvdisk: RG002VS002: size 662657024 MB
mmvdisk: Formatting file system ...
mmvdisk: Disks up to size 126.40 TB can be added to storage pool system.
mmvdisk: Disks up to size 7.90 PB can be added to storage pool data.
mmvdisk: Creating Inode File
mmvdisk: 97 % complete on Sun Mar 9 19:28:34 2025
mmvdisk: 100 % complete on Sun Mar 9 19:28:34 2025
mmvdisk: Creating Allocation Maps
mmvdisk: Creating Log Files
mmvdisk: 0 % complete on Sun Mar 9 19:28:40 2025
mmvdisk: 18 % complete on Sun Mar 9 19:28:45 2025
mmvdisk: 31 % complete on Sun Mar 9 19:28:50 2025
mmvdisk: 48 % complete on Sun Mar 9 19:28:55 2025
mmvdisk: 63 % complete on Sun Mar 9 19:29:00 2025
mmvdisk: 75 % complete on Sun Mar 9 19:29:05 2025
mmvdisk: 100 % complete on Sun Mar 9 19:29:08 2025
mmvdisk: Clearing Inode Allocation Map
mmvdisk: Clearing Block Allocation Map
mmvdisk: Formatting Allocation Map for storage pool system
mmvdisk: Formatting Allocation Map for storage pool data
mmvdisk: 76 % complete on Sun Mar 9 19:29:16 2025
mmvdisk: 100 % complete on Sun Mar 9 19:29:17 2025
mmvdisk: Completed creation of file system /dev/dssfs.
Now look back at the RG configuration: RG dss01's primary server is dss01 and its backup is dss02, and the free space on each pdisk in DA1 forms the hot-spare space.
[root@dss01 ~]# mmvdisk recoverygroup list --recovery-group dss01 --all
needs user
recovery group node class active current or master server service vdisks remarks
-------------- ---------- ------- -------------------------------- ------- ------ -------
dss01 dssg01 yes dss01 no 2
recovery group format version
recovery group current allowable mmvdisk version
-------------- ------------- ------------- ---------------
dss01 5.1.5.0 5.1.5.0 5.1.9.2
node
number server active remarks
------ -------------------------------- ------- -------
922 dss01 yes primary, serving dss01
923 dss02 yes backup
declustered needs vdisks pdisks capacity
array service type BER trim user log total spare rt total raw free raw background task
----------- ------- ---- ------- ---- ---- --- ----- ----- -- --------- -------- ---------------
NVR no NVR enable - 0 1 2 0 1 - - scrub 14d (66%)
SSD no SSD enable - 0 1 1 0 1 - - scrub 14d (33%)
DA1 no HDD enable no 2 1 44 2 2 834 TiB 144 GiB scrub 14d (45%)
mmvdisk: Total capacity is the raw space before any vdisk set definitions.
mmvdisk: Free capacity is what remains for additional vdisk set definitions.
declustered paths AU
pdisk array active total capacity free space log size state
------------ ----------- ------ ----- -------- ---------- -------- -----
n922v001 NVR 1 1 7992 MiB 7816 MiB 120 MiB ok
n923v001 NVR 1 1 7992 MiB 7816 MiB 120 MiB ok
e1s01ssd SSD 2 4 745 GiB 744 GiB 120 MiB ok
e1s02 DA1 2 4 20 TiB 1024 GiB 40 MiB ok
e1s03 DA1 2 4 20 TiB 1024 GiB 40 MiB ok
e1s04 DA1 2 4 20 TiB 1040 GiB 40 MiB ok
e1s05 DA1 2 4 20 TiB 1024 GiB 40 MiB ok
e1s06 DA1 2 4 20 TiB 1040 GiB 40 MiB ok
e1s07 DA1 2 4 20 TiB 1040 GiB 40 MiB ok
e1s16 DA1 2 4 20 TiB 1040 GiB 40 MiB ok
e1s17 DA1 2 4 20 TiB 1040 GiB 40 MiB ok
e1s18 DA1 2 4 20 TiB 1040 GiB 40 MiB ok
e1s19 DA1 2 4 20 TiB 1040 GiB 40 MiB ok
e1s20 DA1 2 4 20 TiB 1040 GiB 40 MiB ok
e1s21 DA1 2 4 20 TiB 1040 GiB 40 MiB ok
e1s22 DA1 2 4 20 TiB 1040 GiB 40 MiB ok
e1s23 DA1 2 4 20 TiB 1024 GiB 40 MiB ok
e1s31 DA1 2 4 20 TiB 1024 GiB 40 MiB ok
e1s32 DA1 2 4 20 TiB 1024 GiB 40 MiB ok
e1s33 DA1 2 4 20 TiB 1024 GiB 40 MiB ok
e1s34 DA1 2 4 20 TiB 1024 GiB 40 MiB ok
e1s35 DA1 2 4 20 TiB 1024 GiB 40 MiB ok
e1s36 DA1 2 4 20 TiB 1024 GiB 40 MiB ok
e1s37 DA1 2 4 20 TiB 1024 GiB 40 MiB ok
e1s46 DA1 2 4 20 TiB 1040 GiB 40 MiB ok
e1s47 DA1 2 4 20 TiB 1024 GiB 40 MiB ok
e1s48 DA1 2 4 20 TiB 1024 GiB 40 MiB ok
e1s49 DA1 2 4 20 TiB 1024 GiB 40 MiB ok
e1s50 DA1 2 4 20 TiB 1024 GiB 40 MiB ok
e1s51 DA1 2 4 20 TiB 1024 GiB 40 MiB ok
e1s52 DA1 2 4 20 TiB 1040 GiB 40 MiB ok
e1s53 DA1 2 4 20 TiB 1024 GiB 40 MiB ok
e1s61 DA1 2 4 20 TiB 1040 GiB 40 MiB ok
e1s62 DA1 2 4 20 TiB 1024 GiB 40 MiB ok
e1s63 DA1 2 4 20 TiB 1040 GiB 40 MiB ok
e1s64 DA1 2 4 20 TiB 1024 GiB 40 MiB ok
e1s65 DA1 2 4 20 TiB 1024 GiB 40 MiB ok
e1s66 DA1 2 4 20 TiB 1024 GiB 40 MiB ok
e1s67 DA1 2 4 20 TiB 1040 GiB 40 MiB ok
e1s76 DA1 2 4 20 TiB 1024 GiB 40 MiB ok
e1s77 DA1 2 4 20 TiB 1040 GiB 40 MiB ok
e1s78 DA1 2 4 20 TiB 1024 GiB 40 MiB ok
e1s79 DA1 2 4 20 TiB 1040 GiB 40 MiB ok
e1s80 DA1 2 4 20 TiB 1040 GiB 40 MiB ok
e1s81 DA1 2 4 20 TiB 1024 GiB 40 MiB ok
e1s82 DA1 2 4 20 TiB 1024 GiB 40 MiB ok
e1s83 DA1 2 4 20 TiB 1024 GiB 40 MiB ok
declustered capacity all vdisk sets defined
recovery group array type total raw free raw free% in the declustered array
-------------- ----------- ---- --------- -------- ----- ------------------------
dss01 DA1 HDD 834 TiB 144 GiB 0% dvs01, mvs01
vdisk set map memory per server
node class available required required per vdisk set
---------- --------- -------- ----------------------
dssg01 90 GiB 14 GiB dvs01 (13 GiB), mvs01 (693 MiB)
declustered block size and
vdisk array activity capacity RAID code checksum granularity remarks
------------------ ----------- -------- -------- --------------- --------- --------- -------
RG001LOGHOME DA1 normal 48 GiB 4WayReplication 2 MiB 4096 log home
RG001LOGTIP NVR normal 48 MiB 2WayReplication 2 MiB 4096 log tip
RG001LOGTIPBACKUP SSD normal 48 MiB Unreplicated 2 MiB 4096 log tip backup
RG001VS001 DA1 normal 13 TiB 3WayReplication 1 MiB 32 KiB
RG001VS002 DA1 normal 631 TiB 8+2p 16 MiB 32 KiB
declustered VCD spares
configuration data array configured actual remarks
------------------ ----------- ---------- ------ -------
relocation space DA1 24 28 must contain VCD
configuration data disk group fault tolerance remarks
------------------ --------------------------------- -------
rg descriptor 4 pdisk limiting fault tolerance
system index 4 pdisk limited by rg descriptor
vdisk RAID code disk group fault tolerance remarks
------------------ --------------- --------------------------------- -------
RG001LOGHOME 4WayReplication 3 pdisk
RG001LOGTIP 2WayReplication 1 pdisk
RG001LOGTIPBACKUP Unreplicated 0 pdisk
RG001VS001 3WayReplication 2 pdisk
RG001VS002 8+2p 2 pdisk
Looking at the vdisk configuration, the block size, pool, and usage are all recorded on the vdisk:
[root@dss01 ~]# mmvdisk vdiskset list --file-system all
member vdisks
vdisk set count size raw size created file system and attributes
-------------- ----- -------- -------- ------- --------------------------
dvs01 2 631 TiB 793 TiB yes fsb, DA1, 8+2p, 16 MiB, dataOnly, data
mvs01 2 13 TiB 41 TiB yes fsb, DA1, 3WayReplication, 1 MiB, metadataOnly, system
declustered capacity all vdisk sets defined
recovery group array type total raw free raw free% in the declustered array
-------------- ----------- ---- --------- -------- ----- ------------------------
dss01 DA1 HDD 834 TiB 144 GiB 0% dvs01, mvs01
dss02 DA1 HDD 834 TiB 144 GiB 0% dvs01, mvs01
vdisk set map memory per server
node class available required required per vdisk set
---------- --------- -------- ----------------------
dssg01 90 GiB 14 GiB dvs01 (13 GiB), mvs01 (693 MiB)