Lenovo DSS-G with GPFS GNR: In-Depth Overview and Deployment

IBM General Parallel File System (GPFS) has been renamed twice in recent years: first to IBM Spectrum Scale, and more recently to IBM Storage Scale.

GPFS in Detail

GPFS protects data in three main ways. In option 1, data protection is provided entirely by external equipment; in options 2 and 3 it is provided by GPFS itself. The difference between those two: option 2 resembles centralized storage, where parity never travels over the network, while option 3 resembles distributed storage, where parity is transferred over the network.

  1. The underlying storage is independent of GPFS (external arrays, hardware or software RAID); the RAID LUNs are mapped to the GPFS server nodes and used as NSDs
  2. Two servers are redundantly attached to JBODs; GPFS uses the raw disks and implements RAID itself through GPFS Native RAID (GNR)
  3. Multiple servers use their local disks; GPFS uses the raw disks and implements RAID through Erasure Code

A GPFS file system is built on Network Shared Disks (NSDs). NSDs can be divided into storage pools: the system pool always exists, additional pools can be added, and pools backed by different types of NSDs can be combined with automated data migration and placement rules. Within one pool, data is spread evenly across the NSDs; with the file system's metadata and data replica counts set to 1 (the default) this behaves like RAID 0, and with replicas set to 2 it behaves like RAID 1. The metadata and data replica counts can be set independently and can differ, but neither may exceed the maximum replica count, which is fixed at file system creation (default 2, can be set to 3). Besides the file-system-level setting, replica counts can also be controlled at a finer granularity through policy rules. An NSD can have one of four usage types, three of which are commonly used: dataAndMetadata (the default for the system pool), dataOnly (the default for non-system pools), and metadataOnly.
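
To illustrate pool placement and per-file replication rules, here is a minimal policy sketch; fs1, the data pool and the scratch fileset are hypothetical names and not part of the deployment described below:

# Placement policy: 2 data replicas for the scratch fileset, data pool for everything else
cat > /tmp/policy.txt <<'EOF'
/* Files in the scratch fileset get 2 data replicas in the data pool */
RULE 'scratch2copies' SET POOL 'data' REPLICATE(2) FOR FILESET ('scratch')
/* Default placement rule: everything else also goes to the data pool */
RULE 'default' SET POOL 'data'
EOF
mmchpolicy fs1 /tmp/policy.txt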

A block is the largest contiguous amount of disk space that can be allocated to a file (on one NSD) and also the largest size issued in a single I/O operation. A block consists of a number of subblocks, and a subblock is the smallest unit of space that can be allocated to a file. A file larger than one block is stored in one or more full blocks plus one or more subblocks holding the remainder; a file smaller than one block is stored in one or more subblocks. When a large file is written as a stream, GPFS moves on to the next NSD after one block size has been written to the current NSD, balancing performance and space consumption across the NSDs, so a larger block size clearly helps overall throughput. Block size maps to subblock size as follows: 64 KiB blocks use 2 KiB subblocks, 128 KiB blocks use 4 KiB subblocks, 256 KiB to 4 MiB blocks use 8 KiB subblocks, and 8 to 16 MiB blocks use 16 KiB subblocks.

Block size is one of the most important GPFS parameters. The file system's block size, subblock size and number of subblocks per block are fixed at file system creation and cannot be changed later; changing them means creating a new file system and migrating the data yourself, a cost that is usually unacceptable. The file system block size also cannot exceed the cluster-wide maxblocksize setting, and changing maxblocksize requires shutting down the entire GPFS cluster. All data in a file system shares one block size and subblock size; the metadata block size can be set separately, but data and metadata always have the same number of subblocks per block. For example, with a 16 MiB data block size and a 1 MiB metadata block size, the metadata subblock size is 8 KiB, giving 128 subblocks per block, so the data subblock size becomes 128 KiB. Note that this data subblock is larger than the standard 16 KiB, because the metadata subblock count dictates the data subblock count.
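
As a minimal sketch of how these parameters are fixed at creation time, assuming a plain NSD-based file system fs1 and a stanza file /tmp/nsd.stanza (both hypothetical; on DSS-G the block sizes come from the mmvdisk vdisk set definitions shown later):

# maxblocksize is cluster-wide; raising it requires a full GPFS shutdown
mmshutdown -a
mmchconfig maxblocksize=16M
mmstartup -a

# 16 MiB data blocks with 1 MiB metadata blocks => 128 subblocks per block,
# i.e. 8 KiB metadata subblocks and 128 KiB data subblocks, as in the example above
mmcrfs fs1 -F /tmp/nsd.stanza -B 16M --metadata-block-size 1M

# Verify the block size afterwards
mmlsfs fs1 -B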

DSS-G Overview

Lenovo DSS-G is Lenovo's integrated hardware/software appliance built around GPFS. Its hardware, firmware and software are tightly integrated, and it uses GNR or EC for data protection instead of a traditional RAID architecture. The DSS-G2xy models use the GNR architecture, while DSS-G100 ECE uses the EC architecture.

The DSS-G2xy hardware consists of two identically configured Lenovo x86 servers and one or more JBODs. In the model number, x and y describe the type and number of JBODs: x is the number of 4U/5U high-density 3.5-inch HDD enclosures and y is the number of 2U 2.5-inch SSD enclosures. For example, DSS-G210 has a single high-density 3.5-inch enclosure.

The server configuration is essentially fixed: you can only choose the memory capacity (384 or 768 GB) and the type of IB adapter (none, single-port NDR, or dual-port NDR200). 25 Gb Ethernet is on board, the four HBAs go into fixed slots, and if IB adapters are chosen they also occupy two fixed slots, so every expansion slot is predetermined. For the JBODs you can only choose the drive capacity; every slot is populated with identical drives, except for two fixed slots in the first high-density HDD enclosure that hold two 800 GB SSDs. The cabling between servers and JBODs is fixed as well: every JBOD is redundantly direct-attached to both servers. During installation the install scripts check the hardware configuration and the entire cabling topology and upgrade all firmware. DSS-G is therefore a tightly integrated hardware/firmware/software system, and an upgrade means upgrading everything, operating system included.

DSS-G Installation and Deployment

Step 1: Use Lenovo's DSS-customized Confluent to install the operating system on both servers (dssg-install); this automatically installs the OS, the IB driver, GPFS and the other required software.
Step 2: Use dsslsadapters to check the PCIe adapter slot placement, dsschmod-drive to change the HDD configuration, dssgcktopology to check the cabling topology, and dssgckdisks to test disk performance.
Step 3: Create a new cluster or join an existing one, use mmlsconfig to verify that nsdRAIDFirmwareDirectory is /opt/lenovo/dss/firmware, and then use mmlsfirmware to check the firmware versions.

These three steps simply follow the DSS documentation; apart from the hostnames and IP addresses of the two servers there is essentially nothing to customize. A quick check of step 3 is sketched below.
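
A minimal verification sketch for step 3, run on one of the DSS servers (both are standard GPFS/GNR commands; the expected directory comes from the DSS documentation quoted above):

# The GNR firmware directory should point at the Lenovo-supplied firmware bundle
mmlsconfig nsdRAIDFirmwareDirectory

# List the firmware levels of enclosures, drives and adapters as seen by GPFS
mmlsfirmware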

Once the systems are installed, some optional tuning can be done: configure LACP on the Ethernet ports, add IPoIB on the IB network, and install management and monitoring software on the servers (lldpd, node_exporter, and so on). IPoIB is not required by GPFS. A sketch of the network part follows.
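
A minimal sketch with nmcli, assuming hypothetical interface names (ens1f0/ens1f1 for the two 25 Gb Ethernet ports, ib0 for the IB port) and example addresses; adjust to the real hardware and addressing plan:

# 25GbE: LACP (802.3ad) bond across the two on-board ports
nmcli con add type bond con-name bond0 ifname bond0 \
      bond.options "mode=802.3ad,miimon=100" \
      ipv4.method manual ipv4.addresses 10.0.0.11/24
nmcli con add type bond-slave con-name bond0-p1 ifname ens1f0 master bond0
nmcli con add type bond-slave con-name bond0-p2 ifname ens1f1 master bond0

# IPoIB: optional, not required by GPFS itself
nmcli con add type infiniband con-name ipoib0 ifname ib0 \
      ipv4.method manual ipv4.addresses 10.0.1.11/24
nmcli con up bond0
nmcli con up ipoib0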

Step 4: Use dssgmkstorage to create the storage. This step takes the drives in the JBODs and in the servers and creates pdisks, Recovery Groups and Declustered Arrays: the pdisks are assigned to RGs, each pdisk has a primary and a backup server, and the pdisks make up the DAs.

  • node class: the two DSS servers form one node class, whose name defaults to nc_ plus the hostname of the node where dssgmkstorage is run; to change it you have to edit local -r classnameL= directly in /opt/lenovo/dss/bin/dssgmkstorage.
  • pdisk: a block device in the server's view, corresponding to every physical drive in the JBODs plus some Virtual Disks on the servers' RAID adapters. On each server the two 800 GB drives are mirrored (RAID 1) and split into several VDs: the first and largest VD (about 700 GB) holds the operating system, and of the other five small VDs (8000 MB each) two become pdisks; the remaining three are presumably reserved.
  • Recovery Group: every pdisk belongs to exactly one RG, and every RG has a primary and a backup server that fail over to each other for high availability. A DSS with its two servers therefore has two RGs, and every pdisk is assigned to one of them: RG1 has server 1 as primary and server 2 as backup, RG2 has server 2 as primary and server 1 as backup.
  • Declustered array: every pdisk belongs to one DA; pdisks of the same capacity and performance are grouped into one DA, e.g. the 3.5-inch HDDs form one DA and the 2.5-inch SSDs another. A DSS-G2x0 creates three DAs: NVR holds the servers' local disks (RAID adapter VDs), SSD holds the two SSDs in the first JBOD, and DA1 holds all the HDDs in the JBODs.
  • Three vdisks named LOGHOME, LOGTIP and LOGTIPBACKUP are created automatically for each RG, residing in DA1, NVR and SSD respectively; every RG needs these three vdisks to store GNR metadata. The log home lives on the JBOD HDDs with 4-way replication and holds the long-term event log, the short-term event log, the metadata log, and the fast-write log that records small writes. The log tip lives on the servers' local disks with 2-way replication (on top of the underlying RAID 1) and acts as a write cache for the log home: log records are written to the log tip first and then migrated to the log home to improve performance. The log tip backup is an extra copy of the log tip, stored unreplicated on the JBOD SSDs.

If you think of a DSS as a storage array, the JBODs are the expansion enclosures, the servers are the controllers, a pdisk is a physical drive, the RG defines each drive's primary and backup controller, a DA is a disk pool, and a vdisk is a storage pool/LUN. The three logs are storage space used internally, much like a vault drive; but DSS has no battery, so all write caches must be flushed to disk immediately. It resembles an active-standby dual-controller design: at any moment a physical drive or a pool/LUN belongs to exactly one controller.

After creation, list the result; the two RGs are visible:

[root@dss01 ~]# mmvdisk recoverygroup list --declustered-array

                declustered   needs                                capacity             pdisks  
recovery group     array     service  type  BER      trim  total raw free raw free%  total spare  background task
--------------  -----------  -------  ----  -------  ----  --------- -------- -----  ----- -----  ---------------
dss01           NVR          no       NVR   enable   -             -        -     -      2     0  scrub (16%)
dss01           SSD          no       SSD   enable   -             -        -     -      1     0  scrub (8%)
dss01           DA1          no       HDD   enable   no      834 TiB  834 TiB  100%     44     2  scrub (0%)
dss02           NVR          no       NVR   enable   -             -        -     -      2     0  scrub (16%)
dss02           SSD          no       SSD   enable   -             -        -     -      1     0  scrub (8%)
dss02           DA1          no       HDD   enable   no      834 TiB  834 TiB  100%     44     2  scrub (0%)

mmvdisk: Total capacity is the raw space before any vdisk set definitions.
mmvdisk: Free capacity is what remains for additional vdisk set definitions.


[root@dss01 ~]# mmvdisk recoverygroup list --recovery-group dss01 --all

                                                                        needs    user 
recovery group  node class  active   current or master server          service  vdisks  remarks
--------------  ----------  -------  --------------------------------  -------  ------  -------
dss01           dssg01      yes      dss01                             no            0  

                recovery group format version
recovery group     current        allowable    mmvdisk version
--------------  -------------   -------------  ---------------
dss01           5.1.5.0         5.1.5.0        5.1.9.2

 node 
number  server                            active   remarks
------  --------------------------------  -------  -------
   922  dss01                             yes      primary, serving dss01
   923  dss02                             yes      backup

declustered   needs                         vdisks       pdisks           capacity     
   array     service  type    BER    trim  user log  total spare rt  total raw free raw  background task
-----------  -------  ----  -------  ----  ---- ---  ----- ----- --  --------- --------  ---------------
NVR          no       NVR   enable   -        0   1      2     0  1          -        -  scrub 14d (16%)
SSD          no       SSD   enable   -        0   1      1     0  1          -        -  scrub 14d (8%)
DA1          no       HDD   enable   no       0   1     44     2  2    834 TiB  834 TiB  scrub 14d (2%)

mmvdisk: Total capacity is the raw space before any vdisk set definitions.
mmvdisk: Free capacity is what remains for additional vdisk set definitions.

              declustered      paths                               AU   
pdisk            array     active  total  capacity  free space  log size  state  
------------  -----------  ------  -----  --------  ----------  --------  -----  
n922v001      NVR               1      1  7992 MiB    7816 MiB   120 MiB  ok
n923v001      NVR               1      1  7992 MiB    7816 MiB   120 MiB  ok
e1s01ssd      SSD               2      4   745 GiB     744 GiB   120 MiB  ok
e1s02         DA1               2      4    20 TiB      19 TiB    40 MiB  ok
e1s03         DA1               2      4    20 TiB      19 TiB    40 MiB  ok
e1s04         DA1               2      4    20 TiB      19 TiB    40 MiB  ok
e1s05         DA1               2      4    20 TiB      19 TiB    40 MiB  ok
e1s06         DA1               2      4    20 TiB      19 TiB    40 MiB  ok
e1s07         DA1               2      4    20 TiB      19 TiB    40 MiB  ok
e1s16         DA1               2      4    20 TiB      19 TiB    40 MiB  ok
e1s17         DA1               2      4    20 TiB      19 TiB    40 MiB  ok
e1s18         DA1               2      4    20 TiB      19 TiB    40 MiB  ok
e1s19         DA1               2      4    20 TiB      19 TiB    40 MiB  ok
e1s20         DA1               2      4    20 TiB      19 TiB    40 MiB  ok
e1s21         DA1               2      4    20 TiB      19 TiB    40 MiB  ok
e1s22         DA1               2      4    20 TiB      19 TiB    40 MiB  ok
e1s23         DA1               2      4    20 TiB      19 TiB    40 MiB  ok
e1s31         DA1               2      4    20 TiB      19 TiB    40 MiB  ok
e1s32         DA1               2      4    20 TiB      19 TiB    40 MiB  ok
e1s33         DA1               2      4    20 TiB      19 TiB    40 MiB  ok
e1s34         DA1               2      4    20 TiB      19 TiB    40 MiB  ok
e1s35         DA1               2      4    20 TiB      19 TiB    40 MiB  ok
e1s36         DA1               2      4    20 TiB      19 TiB    40 MiB  ok
e1s37         DA1               2      4    20 TiB      19 TiB    40 MiB  ok
e1s46         DA1               2      4    20 TiB      19 TiB    40 MiB  ok
e1s47         DA1               2      4    20 TiB      19 TiB    40 MiB  ok
e1s48         DA1               2      4    20 TiB      19 TiB    40 MiB  ok
e1s49         DA1               2      4    20 TiB      19 TiB    40 MiB  ok
e1s50         DA1               2      4    20 TiB      19 TiB    40 MiB  ok
e1s51         DA1               2      4    20 TiB      19 TiB    40 MiB  ok
e1s52         DA1               2      4    20 TiB      19 TiB    40 MiB  ok
e1s53         DA1               2      4    20 TiB      19 TiB    40 MiB  ok
e1s61         DA1               2      4    20 TiB      19 TiB    40 MiB  ok
e1s62         DA1               2      4    20 TiB      19 TiB    40 MiB  ok
e1s63         DA1               2      4    20 TiB      19 TiB    40 MiB  ok
e1s64         DA1               2      4    20 TiB      19 TiB    40 MiB  ok
e1s65         DA1               2      4    20 TiB      19 TiB    40 MiB  ok
e1s66         DA1               2      4    20 TiB      19 TiB    40 MiB  ok
e1s67         DA1               2      4    20 TiB      19 TiB    40 MiB  ok
e1s76         DA1               2      4    20 TiB      19 TiB    40 MiB  ok
e1s77         DA1               2      4    20 TiB      19 TiB    40 MiB  ok
e1s78         DA1               2      4    20 TiB      19 TiB    40 MiB  ok
e1s79         DA1               2      4    20 TiB      19 TiB    40 MiB  ok
e1s80         DA1               2      4    20 TiB      19 TiB    40 MiB  ok
e1s81         DA1               2      4    20 TiB      19 TiB    40 MiB  ok
e1s82         DA1               2      4    20 TiB      19 TiB    40 MiB  ok
e1s83         DA1               2      4    20 TiB      19 TiB    40 MiB  ok

                declustered                 capacity            all vdisk sets defined
recovery group     array     type  total raw  free raw  free%  in the declustered array
--------------  -----------  ----  ---------  --------  -----  ------------------------
dss01           DA1          HDD     834 TiB   834 TiB   100%  -

                  vdisk set map memory per server      
node class  available  required  required per vdisk set
----------  ---------  --------  ----------------------
dssg01         90 GiB   387 MiB  -

                    declustered                                          block size and     
vdisk                  array     activity  capacity  RAID code        checksum granularity  remarks
------------------  -----------  --------  --------  ---------------  ---------  ---------  -------
RG001LOGHOME        DA1          normal      48 GiB  4WayReplication      2 MiB       4096  log home
RG001LOGTIP         NVR          normal      48 MiB  2WayReplication      2 MiB       4096  log tip
RG001LOGTIPBACKUP   SSD          normal      48 MiB  Unreplicated         2 MiB       4096  log tip backup

                    declustered      VCD spares    
configuration data     array     configured  actual  remarks
------------------  -----------  ----------  ------  -------
relocation space    DA1                  24      28  must contain VCD

configuration data  disk group fault tolerance         remarks
------------------  ---------------------------------  -------
rg descriptor       4 pdisk                            limiting fault tolerance
system index        4 pdisk                            limited by rg descriptor

vdisk               RAID code        disk group fault tolerance         remarks
------------------  ---------------  ---------------------------------  -------
RG001LOGHOME        4WayReplication  3 pdisk                            
RG001LOGTIP         2WayReplication  1 pdisk                            
RG001LOGTIPBACKUP   Unreplicated     0 pdisk                            

[root@dss01 ~]# mmvdisk recoverygroup list --recovery-group dss02 --all

                                                                        needs    user 
recovery group  node class  active   current or master server          service  vdisks  remarks
--------------  ----------  -------  --------------------------------  -------  ------  -------
dss02           dssg01      yes      dss02                             no            0  

                recovery group format version
recovery group     current        allowable    mmvdisk version
--------------  -------------   -------------  ---------------
dss02           5.1.5.0         5.1.5.0        5.1.9.2

 node 
number  server                            active   remarks
------  --------------------------------  -------  -------
   922  dss01                             yes      backup
   923  dss02                             yes      primary, serving dss02

declustered   needs                         vdisks       pdisks           capacity     
   array     service  type    BER    trim  user log  total spare rt  total raw free raw  background task
-----------  -------  ----  -------  ----  ---- ---  ----- ----- --  --------- --------  ---------------
NVR          no       NVR   enable   -        0   1      2     0  1          -        -  scrub 14d (16%)
SSD          no       SSD   enable   -        0   1      1     0  1          -        -  scrub 14d (8%)
DA1          no       HDD   enable   no       0   1     44     2  2    834 TiB  834 TiB  scrub 14d (2%)

mmvdisk: Total capacity is the raw space before any vdisk set definitions.
mmvdisk: Free capacity is what remains for additional vdisk set definitions.

              declustered      paths                               AU   
pdisk            array     active  total  capacity  free space  log size  state  
------------  -----------  ------  -----  --------  ----------  --------  -----  
n922v002      NVR               1      1  7992 MiB    7816 MiB   120 MiB  ok
n923v002      NVR               1      1  7992 MiB    7816 MiB   120 MiB  ok
e1s12ssd      SSD               2      4   745 GiB     744 GiB   120 MiB  ok
e1s08         DA1               2      4    20 TiB      19 TiB    40 MiB  ok
e1s09         DA1               2      4    20 TiB      19 TiB    40 MiB  ok
e1s10         DA1               2      4    20 TiB      19 TiB    40 MiB  ok
e1s11         DA1               2      4    20 TiB      19 TiB    40 MiB  ok
e1s13         DA1               2      4    20 TiB      19 TiB    40 MiB  ok
e1s14         DA1               2      4    20 TiB      19 TiB    40 MiB  ok
e1s15         DA1               2      4    20 TiB      19 TiB    40 MiB  ok
e1s24         DA1               2      4    20 TiB      19 TiB    40 MiB  ok
e1s25         DA1               2      4    20 TiB      19 TiB    40 MiB  ok
e1s26         DA1               2      4    20 TiB      19 TiB    40 MiB  ok
e1s27         DA1               2      4    20 TiB      19 TiB    40 MiB  ok
e1s28         DA1               2      4    20 TiB      19 TiB    40 MiB  ok
e1s29         DA1               2      4    20 TiB      19 TiB    40 MiB  ok
e1s30         DA1               2      4    20 TiB      19 TiB    40 MiB  ok
e1s38         DA1               2      4    20 TiB      19 TiB    40 MiB  ok
e1s39         DA1               2      4    20 TiB      19 TiB    40 MiB  ok
e1s40         DA1               2      4    20 TiB      19 TiB    40 MiB  ok
e1s41         DA1               2      4    20 TiB      19 TiB    40 MiB  ok
e1s42         DA1               2      4    20 TiB      19 TiB    40 MiB  ok
e1s43         DA1               2      4    20 TiB      19 TiB    40 MiB  ok
e1s44         DA1               2      4    20 TiB      19 TiB    40 MiB  ok
e1s45         DA1               2      4    20 TiB      19 TiB    40 MiB  ok
e1s54         DA1               2      4    20 TiB      19 TiB    40 MiB  ok
e1s55         DA1               2      4    20 TiB      19 TiB    40 MiB  ok
e1s56         DA1               2      4    20 TiB      19 TiB    40 MiB  ok
e1s57         DA1               2      4    20 TiB      19 TiB    40 MiB  ok
e1s58         DA1               2      4    20 TiB      19 TiB    40 MiB  ok
e1s59         DA1               2      4    20 TiB      19 TiB    40 MiB  ok
e1s60         DA1               2      4    20 TiB      19 TiB    40 MiB  ok
e1s68         DA1               2      4    20 TiB      19 TiB    40 MiB  ok
e1s69         DA1               2      4    20 TiB      19 TiB    40 MiB  ok
e1s70         DA1               2      4    20 TiB      19 TiB    40 MiB  ok
e1s71         DA1               2      4    20 TiB      19 TiB    40 MiB  ok
e1s72         DA1               2      4    20 TiB      19 TiB    40 MiB  ok
e1s73         DA1               2      4    20 TiB      19 TiB    40 MiB  ok
e1s74         DA1               2      4    20 TiB      19 TiB    40 MiB  ok
e1s75         DA1               2      4    20 TiB      19 TiB    40 MiB  ok
e1s84         DA1               2      4    20 TiB      19 TiB    40 MiB  ok
e1s85         DA1               2      4    20 TiB      19 TiB    40 MiB  ok
e1s86         DA1               2      4    20 TiB      19 TiB    40 MiB  ok
e1s87         DA1               2      4    20 TiB      19 TiB    40 MiB  ok
e1s88         DA1               2      4    20 TiB      19 TiB    40 MiB  ok
e1s89         DA1               2      4    20 TiB      19 TiB    40 MiB  ok
e1s90         DA1               2      4    20 TiB      19 TiB    40 MiB  ok

                declustered                 capacity            all vdisk sets defined
recovery group     array     type  total raw  free raw  free%  in the declustered array
--------------  -----------  ----  ---------  --------  -----  ------------------------
dss02           DA1          HDD     834 TiB   834 TiB   100%  -

                  vdisk set map memory per server      
node class  available  required  required per vdisk set
----------  ---------  --------  ----------------------
dssg01         90 GiB   387 MiB  -

                    declustered                                          block size and     
vdisk                  array     activity  capacity  RAID code        checksum granularity  remarks
------------------  -----------  --------  --------  ---------------  ---------  ---------  -------
RG002LOGHOME        DA1          normal      48 GiB  4WayReplication      2 MiB       4096  log home
RG002LOGTIP         NVR          normal      48 MiB  2WayReplication      2 MiB       4096  log tip
RG002LOGTIPBACKUP   SSD          normal      48 MiB  Unreplicated         2 MiB       4096  log tip backup

                    declustered      VCD spares    
configuration data     array     configured  actual  remarks
------------------  -----------  ----------  ------  -------
relocation space    DA1                  24      28  must contain VCD

configuration data  disk group fault tolerance         remarks
------------------  ---------------------------------  -------
rg descriptor       4 pdisk                            limiting fault tolerance
system index        4 pdisk                            limited by rg descriptor

vdisk               RAID code        disk group fault tolerance         remarks
------------------  ---------------  ---------------------------------  -------
RG002LOGHOME        4WayReplication  3 pdisk                            
RG002LOGTIP         2WayReplication  1 pdisk                            
RG002LOGTIPBACKUP   Unreplicated     0 pdisk                

Step 5: Run dssServerConfig.sh to tune the GPFS configuration.

Step 6: Define and create the vdisks. Vdisks are defined on a DA, created, and then used as NSDs; the main decisions are the RAID code, the block size and the capacity. A vdisk spans all pdisks of its DA. The RAID code can be a Reed-Solomon code (4+2p/4+3p/8+2p/8+3p) or replication (3-way or 4-way); the block size can be up to 16 MiB for Reed-Solomon codes and up to 1 MiB for replication.

Like most RAID 6 implementations, GNR suffers from a write penalty: performance is best when a full block is written, while partial writes require the parity to be recomputed and therefore run slower. Replication obviously does not have this problem.

Metadata takes little space but is read and written in small chunks, so replication is recommended for performance, e.g. 3-way replication (3WayReplication), with a space efficiency of only 1/3. Data takes a lot of space, so a parity code improves space efficiency, e.g. 8+2p with 80% efficiency. Both 3WayReplication and 8+2p tolerate two simultaneous drive failures; tolerating three simultaneous failures requires 4WayReplication and 8+3p, which improves safety at the cost of performance and space efficiency.

The recommendation is to give metadata at least 1% of capacity. With 3-way replicated metadata and 8+2p data, the metadata share works out as follows (see the general formula after the two examples):
(0.03/3)/(0.97*0.8)=1.29%   metadata gets 3% of raw capacity, data gets 97%; metadata is 1.29% of usable capacity
(0.05/3)/(0.95*0.8)=2.19%   metadata gets 5% of raw capacity, data gets 95%; metadata is 2.19% of usable capacity
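
Stated generally: if metadata gets a fraction m of the raw capacity with space efficiency e_m (1/3 for 3WayReplication) and data gets the rest with space efficiency e_d (0.8 for 8+2p), then metadata as a share of usable capacity is

(m * e_m) / ((1 - m) * e_d),   e.g. with m = 0.05: (0.05 * 1/3) / (0.95 * 0.8) = 2.19%

which matches the second line above.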

We choose 5% of raw capacity for metadata, 3-way replicated with a 1 MiB block size; vdisks containing metadata are automatically placed in the system pool. Data gets 95% of raw capacity with 8+2p and a 16 MiB block size, assigned to the data pool. First define the vdisk sets, then create them once the definitions look correct and the memory requirement is satisfied.

[root@dss01 ~]# mmvdisk vdiskset define --vdisk-set mvs01 --recovery-group dss01,dss02 --code 3WayReplication --block-size 1m --set-size 5% --nsd-usage metadataOnly
mmvdisk: Vdisk set 'mvs01' has been defined.
mmvdisk: Recovery group 'dss01' has been defined in vdisk set 'mvs01'.
mmvdisk: Recovery group 'dss02' has been defined in vdisk set 'mvs01'.

                     member vdisks     
vdisk set       count   size   raw size  created  file system and attributes
--------------  ----- -------- --------  -------  --------------------------
mvs01               2   13 TiB   41 TiB  no       -, DA1, 3WayReplication, 1 MiB, metadataOnly, system

                declustered                 capacity            all vdisk sets defined 
recovery group     array     type  total raw  free raw  free%  in the declustered array
--------------  -----------  ----  ---------  --------  -----  ------------------------
dss01           DA1          HDD     834 TiB   793 TiB    95%  mvs01
dss02           DA1          HDD     834 TiB   793 TiB    95%  mvs01

                  vdisk set map memory per server      
node class  available  required  required per vdisk set
----------  ---------  --------  ----------------------
dssg01         90 GiB  1080 MiB  mvs01 (693 MiB)

[root@dss01 ~]# mmvdisk vdiskset define --vdisk-set dvs01 --recovery-group dss01,dss02 --code 8+2p --block-size 16m --set-size 95% --nsd-usage dataOnly --storage-pool data

mmvdisk: Vdisk set 'dvs01' has been defined.
mmvdisk: Recovery group 'dss01' has been defined in vdisk set 'dvs01'.
mmvdisk: Recovery group 'dss02' has been defined in vdisk set 'dvs01'.

                     member vdisks     
vdisk set       count   size   raw size  created  file system and attributes
--------------  ----- -------- --------  -------  --------------------------
dvs01               2  631 TiB  793 TiB  no       -, DA1, 8+2p, 16 MiB, dataOnly, data

                declustered                 capacity            all vdisk sets defined 
recovery group     array     type  total raw  free raw  free%  in the declustered array
--------------  -----------  ----  ---------  --------  -----  ------------------------
dss01           DA1          HDD     834 TiB   144 GiB     0%  dvs01, mvs01
dss02           DA1          HDD     834 TiB   144 GiB     0%  dvs01, mvs01

                  vdisk set map memory per server      
node class  available  required  required per vdisk set
----------  ---------  --------  ----------------------
dssg01         90 GiB    14 GiB  dvs01 (13 GiB), mvs01 (693 MiB)


[root@dss01 ~]# mmvdisk vdiskset list

vdisk set         created  file system  recovery groups
----------------  -------  -----------  ---------------
dvs01             no       -            dss01, dss02
mvs01             no       -            dss01, dss02

[root@dss01 ~]# mmvdisk vdiskset list --vdisk-set all

                     member vdisks     
vdisk set       count   size   raw size  created  file system and attributes
--------------  ----- -------- --------  -------  --------------------------
dvs01               2  631 TiB  793 TiB  no       -, DA1, 8+2p, 16 MiB, dataOnly, data
mvs01               2   13 TiB   41 TiB  no       -, DA1, 3WayReplication, 1 MiB, metadataOnly, system


                declustered                 capacity            all vdisk sets defined 
recovery group     array     type  total raw  free raw  free%  in the declustered array
--------------  -----------  ----  ---------  --------  -----  ------------------------
dss01           DA1          HDD     834 TiB   144 GiB     0%  dvs01, mvs01
dss02           DA1          HDD     834 TiB   144 GiB     0%  dvs01, mvs01

                  vdisk set map memory per server      
node class  available  required  required per vdisk set
----------  ---------  --------  ----------------------
dssg01         90 GiB    14 GiB  dvs01 (13 GiB), mvs01 (693 MiB)

[root@dss01 ~]# mmvdisk vdiskset list --recovery-group all

                declustered                 capacity            all vdisk sets defined 
recovery group     array     type  total raw  free raw  free%  in the declustered array
--------------  -----------  ----  ---------  --------  -----  ------------------------
dss01           DA1          HDD     834 TiB   144 GiB     0%  dvs01, mvs01
dss02           DA1          HDD     834 TiB   144 GiB     0%  dvs01, mvs01

                  vdisk set map memory per server      
node class  available  required  required per vdisk set
----------  ---------  --------  ----------------------
dssg01         90 GiB    14 GiB  dvs01 (13 GiB), mvs01 (693 MiB)

[root@dss01 ~]# mmvdisk vdiskset create --vdisk-set mvs01,dvs01
mmvdisk: 2 vdisks and 2 NSDs will be created in vdisk set 'mvs01'.
mmvdisk: 2 vdisks and 2 NSDs will be created in vdisk set 'dvs01'.
mmvdisk: (mmcrvdisk) [I] Processing vdisk RG001VS001
mmvdisk: (mmcrvdisk) [I] Processing vdisk RG002VS001
mmvdisk: (mmcrvdisk) [I] Processing vdisk RG002VS002
mmvdisk: (mmcrvdisk) [I] Processing vdisk RG001VS002
mmvdisk: Created all vdisks in vdisk set 'mvs01'.
mmvdisk: Created all vdisks in vdisk set 'dvs01'.
mmvdisk: (mmcrnsd) Processing disk RG001VS001
mmvdisk: (mmcrnsd) Processing disk RG002VS001
mmvdisk: (mmcrnsd) Processing disk RG001VS002
mmvdisk: (mmcrnsd) Processing disk RG002VS002
mmvdisk: Created all NSDs in vdisk set 'mvs01'.
mmvdisk: Created all NSDs in vdisk set 'dvs01'.

Step 7: Create the file system. Because the NSD usage and block sizes were already fixed when the vdisks were defined, only the vdisk sets need to be specified here.

[root@dss01 ~]# mmvdisk filesystem create --file-system dssfs --vdisk-set mvs01,dvs01 --mmcrfs -A yes -Q yes -n 1024 -T /dssfs --auto-inode-limit
mmvdisk: Creating file system 'dssfs'.
mmvdisk: The following disks of dssfs will be formatted on node dss01:
mmvdisk: RG001VS001: size 14520704 MB
mmvdisk: RG002VS001: size 14520704 MB
mmvdisk: RG001VS002: size 662657024 MB
mmvdisk: RG002VS002: size 662657024 MB
mmvdisk: Formatting file system ...
mmvdisk: Disks up to size 126.40 TB can be added to storage pool system.
mmvdisk: Disks up to size 7.90 PB can be added to storage pool data.
mmvdisk: Creating Inode File
mmvdisk: 97 % complete on Sun Mar 9 19:28:34 2025
mmvdisk: 100 % complete on Sun Mar 9 19:28:34 2025
mmvdisk: Creating Allocation Maps
mmvdisk: Creating Log Files
mmvdisk: 0 % complete on Sun Mar 9 19:28:40 2025
mmvdisk: 18 % complete on Sun Mar 9 19:28:45 2025
mmvdisk: 31 % complete on Sun Mar 9 19:28:50 2025
mmvdisk: 48 % complete on Sun Mar 9 19:28:55 2025
mmvdisk: 63 % complete on Sun Mar 9 19:29:00 2025
mmvdisk: 75 % complete on Sun Mar 9 19:29:05 2025
mmvdisk: 100 % complete on Sun Mar 9 19:29:08 2025
mmvdisk: Clearing Inode Allocation Map
mmvdisk: Clearing Block Allocation Map
mmvdisk: Formatting Allocation Map for storage pool system
mmvdisk: Formatting Allocation Map for storage pool data
mmvdisk: 76 % complete on Sun Mar 9 19:29:16 2025
mmvdisk: 100 % complete on Sun Mar 9 19:29:17 2025
mmvdisk: Completed creation of file system /dev/dssfs.
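
At this point the file system can be mounted and inspected; a minimal sketch (the mount point /dssfs was set with -T above):

# Mount the new file system on all nodes
mmmount dssfs -a

# File system attributes (block sizes, replication, pools), NSD states and capacity
mmlsfs dssfs
mmlsdisk dssfs
mmdf dssfs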

Now look at the RG configuration again: RG dss01 has dss01 as its primary server and dss02 as backup, and the free space left on each pdisk in DA1 forms the hot-spare space.

[root@dss01 ~]# mmvdisk recoverygroup list --recovery-group dss01 --all

                                                                        needs    user 
recovery group  node class  active   current or master server          service  vdisks  remarks
--------------  ----------  -------  --------------------------------  -------  ------  -------
dss01           dssg01      yes      dss01                             no            2  

                recovery group format version
recovery group     current        allowable    mmvdisk version
--------------  -------------   -------------  ---------------
dss01           5.1.5.0         5.1.5.0        5.1.9.2

 node 
number  server                            active   remarks
------  --------------------------------  -------  -------
   922  dss01                             yes      primary, serving dss01
   923  dss02                             yes      backup

declustered   needs                         vdisks       pdisks           capacity     
   array     service  type    BER    trim  user log  total spare rt  total raw free raw  background task
-----------  -------  ----  -------  ----  ---- ---  ----- ----- --  --------- --------  ---------------
NVR          no       NVR   enable   -        0   1      2     0  1          -        -  scrub 14d (66%)
SSD          no       SSD   enable   -        0   1      1     0  1          -        -  scrub 14d (33%)
DA1          no       HDD   enable   no       2   1     44     2  2    834 TiB  144 GiB  scrub 14d (45%)

mmvdisk: Total capacity is the raw space before any vdisk set definitions.
mmvdisk: Free capacity is what remains for additional vdisk set definitions.

              declustered      paths                               AU   
pdisk            array     active  total  capacity  free space  log size  state  
------------  -----------  ------  -----  --------  ----------  --------  -----  
n922v001      NVR               1      1  7992 MiB    7816 MiB   120 MiB  ok
n923v001      NVR               1      1  7992 MiB    7816 MiB   120 MiB  ok
e1s01ssd      SSD               2      4   745 GiB     744 GiB   120 MiB  ok
e1s02         DA1               2      4    20 TiB    1024 GiB    40 MiB  ok
e1s03         DA1               2      4    20 TiB    1024 GiB    40 MiB  ok
e1s04         DA1               2      4    20 TiB    1040 GiB    40 MiB  ok
e1s05         DA1               2      4    20 TiB    1024 GiB    40 MiB  ok
e1s06         DA1               2      4    20 TiB    1040 GiB    40 MiB  ok
e1s07         DA1               2      4    20 TiB    1040 GiB    40 MiB  ok
e1s16         DA1               2      4    20 TiB    1040 GiB    40 MiB  ok
e1s17         DA1               2      4    20 TiB    1040 GiB    40 MiB  ok
e1s18         DA1               2      4    20 TiB    1040 GiB    40 MiB  ok
e1s19         DA1               2      4    20 TiB    1040 GiB    40 MiB  ok
e1s20         DA1               2      4    20 TiB    1040 GiB    40 MiB  ok
e1s21         DA1               2      4    20 TiB    1040 GiB    40 MiB  ok
e1s22         DA1               2      4    20 TiB    1040 GiB    40 MiB  ok
e1s23         DA1               2      4    20 TiB    1024 GiB    40 MiB  ok
e1s31         DA1               2      4    20 TiB    1024 GiB    40 MiB  ok
e1s32         DA1               2      4    20 TiB    1024 GiB    40 MiB  ok
e1s33         DA1               2      4    20 TiB    1024 GiB    40 MiB  ok
e1s34         DA1               2      4    20 TiB    1024 GiB    40 MiB  ok
e1s35         DA1               2      4    20 TiB    1024 GiB    40 MiB  ok
e1s36         DA1               2      4    20 TiB    1024 GiB    40 MiB  ok
e1s37         DA1               2      4    20 TiB    1024 GiB    40 MiB  ok
e1s46         DA1               2      4    20 TiB    1040 GiB    40 MiB  ok
e1s47         DA1               2      4    20 TiB    1024 GiB    40 MiB  ok
e1s48         DA1               2      4    20 TiB    1024 GiB    40 MiB  ok
e1s49         DA1               2      4    20 TiB    1024 GiB    40 MiB  ok
e1s50         DA1               2      4    20 TiB    1024 GiB    40 MiB  ok
e1s51         DA1               2      4    20 TiB    1024 GiB    40 MiB  ok
e1s52         DA1               2      4    20 TiB    1040 GiB    40 MiB  ok
e1s53         DA1               2      4    20 TiB    1024 GiB    40 MiB  ok
e1s61         DA1               2      4    20 TiB    1040 GiB    40 MiB  ok
e1s62         DA1               2      4    20 TiB    1024 GiB    40 MiB  ok
e1s63         DA1               2      4    20 TiB    1040 GiB    40 MiB  ok
e1s64         DA1               2      4    20 TiB    1024 GiB    40 MiB  ok
e1s65         DA1               2      4    20 TiB    1024 GiB    40 MiB  ok
e1s66         DA1               2      4    20 TiB    1024 GiB    40 MiB  ok
e1s67         DA1               2      4    20 TiB    1040 GiB    40 MiB  ok
e1s76         DA1               2      4    20 TiB    1024 GiB    40 MiB  ok
e1s77         DA1               2      4    20 TiB    1040 GiB    40 MiB  ok
e1s78         DA1               2      4    20 TiB    1024 GiB    40 MiB  ok
e1s79         DA1               2      4    20 TiB    1040 GiB    40 MiB  ok
e1s80         DA1               2      4    20 TiB    1040 GiB    40 MiB  ok
e1s81         DA1               2      4    20 TiB    1024 GiB    40 MiB  ok
e1s82         DA1               2      4    20 TiB    1024 GiB    40 MiB  ok
e1s83         DA1               2      4    20 TiB    1024 GiB    40 MiB  ok

                declustered                 capacity            all vdisk sets defined
recovery group     array     type  total raw  free raw  free%  in the declustered array
--------------  -----------  ----  ---------  --------  -----  ------------------------
dss01           DA1          HDD     834 TiB   144 GiB     0%  dvs01, mvs01

                  vdisk set map memory per server      
node class  available  required  required per vdisk set
----------  ---------  --------  ----------------------
dssg01         90 GiB    14 GiB  dvs01 (13 GiB), mvs01 (693 MiB)

                    declustered                                          block size and     
vdisk                  array     activity  capacity  RAID code        checksum granularity  remarks
------------------  -----------  --------  --------  ---------------  ---------  ---------  -------
RG001LOGHOME        DA1          normal      48 GiB  4WayReplication      2 MiB       4096  log home
RG001LOGTIP         NVR          normal      48 MiB  2WayReplication      2 MiB       4096  log tip
RG001LOGTIPBACKUP   SSD          normal      48 MiB  Unreplicated         2 MiB       4096  log tip backup
RG001VS001          DA1          normal      13 TiB  3WayReplication      1 MiB     32 KiB  
RG001VS002          DA1          normal     631 TiB  8+2p                16 MiB     32 KiB  

                    declustered      VCD spares    
configuration data     array     configured  actual  remarks
------------------  -----------  ----------  ------  -------
relocation space    DA1                  24      28  must contain VCD

configuration data  disk group fault tolerance         remarks
------------------  ---------------------------------  -------
rg descriptor       4 pdisk                            limiting fault tolerance
system index        4 pdisk                            limited by rg descriptor

vdisk               RAID code        disk group fault tolerance         remarks
------------------  ---------------  ---------------------------------  -------
RG001LOGHOME        4WayReplication  3 pdisk                            
RG001LOGTIP         2WayReplication  1 pdisk                            
RG001LOGTIPBACKUP   Unreplicated     0 pdisk                            
RG001VS001          3WayReplication  2 pdisk                            
RG001VS002          8+2p             2 pdisk      

And the vdisk configuration: block size, pool and NSD usage are all recorded on the vdisk.

[root@dss01 ~]# mmvdisk vdiskset list --file-system all

                     member vdisks     
vdisk set       count   size   raw size  created  file system and attributes
--------------  ----- -------- --------  -------  --------------------------
dvs01               2  631 TiB  793 TiB  yes      fsb, DA1, 8+2p, 16 MiB, dataOnly, data
mvs01               2   13 TiB   41 TiB  yes      fsb, DA1, 3WayReplication, 1 MiB, metadataOnly, system

                declustered                 capacity            all vdisk sets defined 
recovery group     array     type  total raw  free raw  free%  in the declustered array
--------------  -----------  ----  ---------  --------  -----  ------------------------
dss01           DA1          HDD     834 TiB   144 GiB     0%  dvs01, mvs01
dss02           DA1          HDD     834 TiB   144 GiB     0%  dvs01, mvs01

                  vdisk set map memory per server      
node class  available  required  required per vdisk set
----------  ---------  --------  ----------------------
dssg01         90 GiB    14 GiB  dvs01 (13 GiB), mvs01 (693 MiB)

dnsmasq: In-Cluster Hostname Resolution for Containers Outside an HPC Cluster

An HPC cluster usually has DNS and local hosts files so that nodes can talk to each other by hostname rather than raw IP address. But if a container on a standalone server outside the cluster needs to communicate with cluster nodes by hostname, a DNS service has to provide that resolution to the container.

Use an automated job to copy the cluster's hosts file into a directory on the standalone server, e.g. /home/hpc/dns/hosts; a sketch follows.
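
A minimal sketch of such a sync job, assuming passwordless SSH/rsync from the standalone server to a hypothetical management node mgmt01 (dnsmasq re-reads files under --hostsdir automatically when they change):

# /etc/cron.d/sync-cluster-hosts (hypothetical): refresh the copy every 5 minutes
*/5 * * * * root rsync -a mgmt01:/etc/hosts /home/hpc/dns/hosts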

Build a small dnsmasq container image:

[yaoge123]$ cat dnsmasq/Dockerfile 
FROM alpine:latest
RUN apk update \
 && apk upgrade \
 && apk add --no-cache \
            dnsmasq \
 && rm -rf /var/cache/apk/*

Write the docker-compose.yml:

  1. dnsmasq provides the DNS service; it needs a fixed IP address so the other containers below can point their dns setting at it
  2. /home/hpc/dns is the host directory holding the hosts file
  3. Use --keep-in-foreground in production; use --no-daemon and --log-queries while debugging
  4. --domain-needed must be set so that dnsmasq does not forward plain hostnames (names without a dot) to the upstream DNS
  5. Set --cache-size= somewhat larger than the number of lines in the hosts file
  6. abc is a container that needs to resolve in-cluster hostnames; its dns entry points at dnsmasq for resolution
  7. Containers that do not need this resolution simply get no dns entry
services:
  dnsmasq:
    build: ./dnsmasq
    image: dnsmasq
    container_name: dnsmasq
    networks:
      default:
        ipv4_address: 192.168.100.200
    volumes:
      - /home/hpc/dns:/etc/dns:ro
    command:
      - dnsmasq
      - --keep-in-foreground
        #- --no-daemon
        #- --log-queries
      - --domain-needed
      - --no-hosts
      - --cache-size=3000
      - --hostsdir=/etc/dns
  abc:
    image: abc
    container_name: abc
    dns:
      - 192.168.100.200
  …………
networks:
  default:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 192.168.100.0/24

Test resolution and inspect the dnsmasq cache statistics; ideally evictions stays at 0:

[yaoge123]# docker run --rm -it --network=docker_default --dns=192.168.100.200 alpine sh
/ # apk add bind-tools
/ # dig +short node_name
/ # for i in cachesize.bind insertions.bind evictions.bind misses.bind hits.bind auth.bind servers.bind; do dig +short chaos txt $i; done

NVMe Hot Removal

  1. Use the BMC to find the serial number (SN) of the NVMe drive in the physical slot to be removed
  2. Use nvme list to find the device name matching that SN
  3. Use mmlsnsd -m | grep $HOSTNAME to find the NSD name for that device
  4. Use mmdeldisk to remove the NSD from the file system
  5. Use mmdelnsd to delete the NSD
  6. ls -l /sys/class/block/ to find the PCI bus ID for that device
  7. Use lspci -vvv | grep -a1 NVMe to find the Physical Slot for that bus ID
  8. cd /sys/bus/pci/slots/$slot (replace $slot with the Physical Slot found above)
  9. cat address to confirm the bus ID is correct
  10. echo 0 > power to power the slot off
  11. lsblk no longer shows the device
  12. Run mmnsddiscover to refresh
  13. The drive's indicator LED should now be off; pull the drive (steps 6-12 are consolidated in the sketch after this list)
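
A consolidated sketch of the PCIe part (steps 6-12), with nvme0n1 as a hypothetical device name; double-check the slot address before powering it off:

dev=nvme0n1                      # hypothetical device to be removed
ls -l /sys/class/block/$dev      # the symlink reveals the PCI bus ID, e.g. 0000:5e:00.0
lspci -vvv | grep -a1 NVMe       # map that bus ID to its "Physical Slot"
slot=12                          # hypothetical slot number taken from lspci
cd /sys/bus/pci/slots/$slot
cat address                      # must match the bus ID found above
echo 0 > power                   # power the slot off
lsblk                            # the device should be gone
mmnsddiscover                    # let GPFS rescan its NSDs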

GPFS CES Installation and Configuration

GPFS offers two ways to provide highly available NFS: Cluster NFS (CNFS) and Cluster Export Services (CES). They are mutually exclusive; only one can be used. CNFS supports only NFS, while CES supports NFS/SMB/Object. CNFS is based on the Linux kernel NFS server, its NFS configuration is not managed by GPFS, and it has better metadata performance; CES is based on the user-space Ganesha NFS server, GPFS manages the NFS configuration, and streaming data access performs better. Note that switching between the two always means an NFS outage.

Set the CES shared root directory. Every CES node must be able to access this directory, and this step requires shutting down the entire GPFS cluster:
mmshutdown -a
mmchconfig cesSharedRoot=/share/ces
mmstartup -a

Add the CES nodes:
mmchnode --ces-enable -N ces1,ces2

Configure the CES IPs. A CES IP is a virtual IP dedicated to serving NFS/SMB/Object; it cannot be used for internal GPFS communication and must be resolvable through DNS or /etc/hosts. Each CES node should already have a network interface with an address in the same subnet as the CES IPs, because GPFS can only add the CES IPs as secondary addresses on those interfaces. For example, with ces1 at 192.168.1.101/24 and ces2 at 192.168.1.102/24, the CES IPs are 192.168.1.11 and 192.168.1.12:
mmces address add --ces-ip 192.168.1.11,192.168.1.12

Verify the CES IPs:
[root@ces1 ~]# mmces address list --full-list
cesAddress    cesNode  attributes  cesGroup  preferredNode  unhostableNodes
192.168.1.11  ces2     none        none      none           none
192.168.1.12  ces1     none        none      none           none

Install NFS:
yum install pyparsing pygobject2 libwbclient
rpm -ivh gpfs.nfs-ganesha-2.7.5-ibm058.12.el7.x86_64.rpm gpfs.nfs-ganesha-gpfs-2.7.5-ibm058.12.el7.x86_64.rpm gpfs.nfs-ganesha-utils-2.7.5-ibm058.12.el7.x86_64.rpm

Install SMB:
yum install libarchive gdb
rpm -ivh gpfs.smb-4.11.16_gpfs_19-2.el7.x86_64.rpm

Enable CES NFS:
mmces service enable nfs

Start the NFS service on all CES nodes:
mmces service start NFS -a

Verify CES NFS:
[root@ces1 ~]# mmces service list -a
Enabled services: NFS
ces1: NFS is running
ces2: NFS is running

It is recommended to create a dedicated fileset for NFS:
mmcrfileset share data --inode-space new
mmlinkfileset share data -J /share/data

Configure the user authentication method:
mmuserauth service create --data-access-method file --type userdefined

Create an NFS export:
mmnfs export add /share/data --client "192.168.1.100/32(Access_Type=RW)"

Check the NFS export:
[root@ces1 ~]# mmnfs export list

Path         Delegations  Clients
-----------  -----------  ----------------
/share/data  NONE         192.168.1.100/32
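
From a client listed in the export (192.168.1.100 here), the share can then be mounted through either CES IP; a minimal sketch:

# Mount via a CES IP; on failover the IP moves to a surviving CES node
mount -t nfs -o hard 192.168.1.11:/share/data /mnt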

 

Monitoring NVIDIA GPUs with Prometheus + Grafana

1. First install the NVIDIA Data Center GPU Manager (DCGM); download it from https://developer.nvidia.com/dcgm and install:

nv-hostengine -t
yum erase -y datacenter-gpu-manager
rpm -ivh datacenter-gpu-manager*
systemctl enable --now dcgm.service

2. Install the NVIDIA DCGM exporter for Prometheus; download it from https://github.com/NVIDIA/gpu-monitoring-tools/tree/master/exporters/prometheus-dcgm and install it manually:

wget -q -O /usr/local/bin/dcgm-exporter https://raw.githubusercontent.com/NVIDIA/gpu-monitoring-tools/master/exporters/prometheus-dcgm/dcgm-exporter/dcgm-exporter
chmod +x /usr/local/bin/dcgm-exporter
mkdir /run/prometheus 
wget -q -O /etc/systemd/system/prometheus-dcgm.service https://raw.githubusercontent.com/NVIDIA/gpu-monitoring-tools/master/exporters/prometheus-dcgm/bare-metal/prometheus-dcgm.service
systemctl daemon-reload
systemctl enable --now prometheus-dcgm.service

3. Download node_exporter from https://prometheus.io/download/#node_exporter, install it manually as a service, and add the dcgm-exporter textfile directory:

tar xf node_exporter*.tar.gz
mv node_exporter-*/node_exporter /usr/local/bin/
chown root:root /usr/local/bin/node_exporter
chmod +x /usr/local/bin/node_exporter

cat > /etc/systemd/system/node_exporter.service <<EOF
[Unit]
Description=Prometheus Node Exporter
Wants=network-online.target
After=network-online.target

[Service]
Type=simple
ExecStart=/usr/local/bin/node_exporter

[Install]
WantedBy=multi-user.target
EOF

sed -i '/ExecStart=\/usr\/local\/bin\/node_exporter/c\ExecStart=\/usr\/local\/bin\/node_exporter --collector.textfile.directory=\/run\/prometheus' /etc/systemd/system/node_exporter.service

systemctl daemon-reload
systemctl enable --now node_exporter.service
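
On the Prometheus server a scrape job for node_exporter on every GPU node is all that is needed, since dcgm-exporter publishes its metrics through node_exporter's textfile collector. A minimal sketch, with gpu01/gpu02 as hypothetical node names and /etc/prometheus/prometheus.yml as an assumed config path:

# Append under scrape_configs: (indentation must match the existing file)
cat >> /etc/prometheus/prometheus.yml <<'EOF'
  - job_name: 'gpu-nodes'
    static_configs:
      - targets: ['gpu01:9100', 'gpu02:9100']
EOF
# Reload or restart Prometheus afterwards, depending on how it is run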

4. Add this dashboard to Grafana:
https://grafana.com/grafana/dashboards/11752

Memory Performance of the HPE ProLiant DL380 Gen10 with Different BIOS Settings

Hardware Environment

2*Intel(R) Xeon(R) Gold 5122 CPU @ 3.60GHz
12*HPE SmartMemory DDR4-2666 RDIMM 16GiB

iLO 5 1.37 Oct 25 2018
System ROM U30 v1.46 (10/02/2018)
Intelligent Platform Abstraction Data 7.2.0 Build 30
System Programmable Logic Device 0x2A
Power Management Controller Firmware 1.0.4
NVMe Backplane Firmware 1.20
Power Supply Firmware 1.00
Power Supply Firmware 1.00
Innovation Engine (IE) Firmware 0.1.6.1
Server Platform Services (SPS) Firmware 4.0.4.288
Redundant System ROM U30 v1.42 (06/20/2018)
Intelligent Provisioning 3.20.154
Power Management Controller FW Bootloader 1.1
HPE Smart Storage Battery 1 Firmware 0.60
HPE Eth 10/25Gb 2p 631FLR-SFP28 Adptr 212.0.103001
HPE Ethernet 1Gb 4-port 331i Adapter - NIC 20.12.41
HPE Smart Array P816i-a SR Gen10 1.65
HPE 100Gb 1p OP101 QSFP28 x16 OPA Adptr 1.5.2.0.0
HPE InfiniBand EDR/Ethernet 100Gb 2-port 840QSF 12.22.40.30
Embedded Video Controller 2.5

Software Environment

CentOS Linux release 7.6.1810 (Core)
Linux yaoge123 3.10.0-957.el7.x86_64 #1 SMP Thu Nov 8 23:39:32 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
Intel(R) Memory Latency Checker - v3.6


Installing the GPFS Management GUI

  1. On the GUI node install gpfs.gss.pmcollector-*.rpm gpfs.gss.pmsensors-*.rpm gpfs.gui-*.noarch.rpm gpfs.java-*.x86_64.rpm
  2. On all nodes install gpfs.gss.pmsensors-*.rpm
  3. Initialize the collector nodes with mmperfmon config generate --collectors [node list]; the GUI node must be a collector node
  4. Enable the sensor nodes with mmchnode --perfmon -N [SENSOR_NODE_LIST]
  5. Set the capacity-monitoring node and interval: mmperfmon config update GPFSDiskCap.restrict=[node] GPFSDiskCap.period=86400
  6. Set the fileset-capacity-monitoring node and interval: mmperfmon config update GPFSFilesetQuota.restrict=[node] GPFSFilesetQuota.period=3600
  7. Make the GUI start automatically on the GUI node: systemctl enable gpfsgui

Removal

  1. On the GUI node: systemctl stop gpfsgui; systemctl disable gpfsgui;
  2. Use mmlscluster | grep perfmon to see which nodes have the perfmon designation, then mmchnode --noperfmon -N [SENSOR_NODE_LIST]
  3. mmperfmon config delete --all
  4. Empty the database: psql postgres postgres -c "drop schema fscc cascade"
  5. Remove the related RPMs: yum erase gpfs.gss.pmcollector gpfs.gss.pmsensors gpfs.gui gpfs.java
  6. Use mmlsnodeclass to see which nodes remain, then remove them with mmchnodeclass GUI_SERVERS delete -N <……> and mmchnodeclass GUI_MGMT_SERVERS delete -N <……>

Some Problems with Inspur Blades and Rack Servers

After four years with Inspur NF5270M3 rack servers, the I8000 blade chassis and NX5440 blade servers, here is a summary of the management problems encountered:

  1. Inspur expects the blade BMC IPs to follow immediately after the chassis management module's IP; e.g. if the management module is 192.168.1.10, the first blade should be 192.168.1.11. Deviating from this causes problems, e.g. pressing the KVM button on a blade lights a red LED and the console will not switch; fixing it requires SSHing into the blade BMC and editing configuration files from the command line
  2. The blade BMC web interface must be accessed by IP; when accessed by hostname, opening the IP KVM fails with an error
  3. Opening the blade IP KVM fails with an error on Linux but works on Windows; newer blades have fixed this
  4. The chassis management module cannot send alert mail, its NTP settings cannot be saved and it does not sync time, and it has no syslog forwarding
  5. The rack server BMC sends test alert mails fine, but when a real fault occurs (e.g. a power supply is removed) no mail is sent
  6. When a chassis power supply/fan module fails, the front-panel fault LED does not light; only the LED on the module itself at the back turns red
  7. When the rack server's RAID adapter reports a fault, such as a failed disk, the front-panel fault LED does not light

Setting up GPFS CNFS

GPFS has two ways to export NFS: Cluster Export Services (CES) NFS and clustered NFS (CNFS). CNFS uses the Linux kernel nfsd and gives better small-file performance, but of course supports only NFS. CES uses the user-space Ganesha nfsd and performs better for sequential reads and writes; CES also supports SMB and Object storage.

CNFS provides NFS HA by dynamically moving IP addresses between nodes; it offers failover only, not load balancing. The following configures CNFS on two nodes, nfs1 and nfs2:

  1. Make sure every server and client node keeps its clock strictly synchronized to the same time source; the server nodes need mmchlicense server
  2. To avoid write problems during failover, the file system exported by CNFS must use the syncnfs mount option
    mmchfs fsyaoge123 -o syncnfs
  3. Configure an identical /etc/exports on every server node; different directories must use different fsid values, and the same directory must use the same fsid on every server; 1.1.1.10/1.1.1.11 are client IPs
    /fsyaoge123/nfs 1.1.1.10(ro,fsid=11) 1.1.1.11(rw,fsid=11)
  4. Enable nfsd to start automatically on every server node
    systemctl enable nfs-server
  5. Define the CNFS shared root; ideally a separate small file system that is not itself exported over NFS
    mmchconfig cnfsSharedRoot=/fs2yaoge123/cnfs
  6. Give every server node an extra static IP (onboot=no) dedicated to NFS exports; this IP must not be used by GPFS; bring the interface up
  7. Configure each server node, where ip_address_list is the NFS-only IP configured above and node is this node's hostname in GPFS
    mmchnode --cnfs-interface=ip_address_list -N node
    mmchnode --cnfs-interface=1.1.1.1 -N nfs1  // configure nfs1 to use 1.1.1.1 as its dedicated NFS IP
  8. Look up the port rpc.mountd is bound to and configure it
    mmchconfig cnfsMountdPort=mountd_port -N node
  9. NFS clients must mount with -o sync,hard,intr; nfs1 is the primary
    mount -o sync,hard,intr 1.1.1.1:/fsyaoge123/nfs /mnt
  10. Test that failover happens automatically in three cases: node shutdown, nfsd stopped, and GPFS stopped (a sketch follows this list)
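
A minimal sketch of the failover test in step 10, run while the client keeps writing (the log file name is hypothetical; repeat for the nfsd-stop and GPFS-stop cases):

# On the client: write continuously through the mount while nfs1 is taken down
( while true; do date >> /mnt/failover.log; sleep 1; done ) &

# On nfs1: simulate a failure, e.g. stop GPFS on this node only
mmshutdown

# On nfs2: the CNFS IP 1.1.1.1 should have been taken over
ip addr | grep 1.1.1.1
mmlscluster --cnfs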

List the CNFS nodes:

mmlscluster --cnfs

Remove CNFS nodes:

mmchnode --cnfs-interface=DELETE -N "nfs1,nfs2"