Monday, September 7, 2009

add storage node on GPFS

Add a new storage node to the file system.

※ Before adding the node, make sure its ssh key has been created and that it and the existing nodes can log in to each other without a password.
※ Make sure the firewall allows the nodes to connect to each other.
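
A quick way to check both prerequisites from one node (a minimal sketch, assuming the four hostnames used in this cluster; 1191/tcp is the default GPFS daemon port, so the second loop only makes sense once mmfsd is up):

$ for h in c149 c150 c152 c153; do ssh -o BatchMode=yes $h hostname; done   (must print every hostname with no password prompt)
$ for h in c149 c150 c152 c153; do nc -z $h 1191 && echo "$h reachable"; done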

$ mmaddnode c153 (the node can be designated quorum here, or changed later with mmchconfig designation=quorum or mmchnode --quorum)
Fri Sep 4 17:12:58 CST 2009: mmaddnode: Processing node c153
mmaddnode: Command successfully completed
mmaddnode: Propagating the cluster configuration data to all
affected nodes. This is an asynchronous process.
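
If the node was added without the quorum designation, it can be changed afterwards; a hedged example (mmchnode exists from GPFS 3.3 on, older releases use the mmchconfig form mentioned above):

$ mmchnode --quorum -N c153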

$ mmstartup -N c153
Fri Sep 4 17:13:11 CST 2009: mmstartup: Starting GPFS ...

$ cat add_new_disk (this example puts the new disks in FailureGroup 2)
/dev/sda:c153::dataAndMetadata:2::
/dev/sdb:c153::dataAndMetadata:2::
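
Each descriptor line uses the colon-separated GPFS 3.x disk descriptor format; empty fields fall back to their defaults:

# DiskName:PrimaryServer:BackupServer:DiskUsage:FailureGroup:DesiredName:StoragePool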

$ mmcrnsd -F add_new_disk -v no
mmcrnsd: Processing disk sda
mmcrnsd: Processing disk sdb
mmcrnsd: Propagating the cluster configuration data to all
affected nodes. This is an asynchronous process.

$ mmlsnsd
File system   Disk name   NSD servers
---------------------------------------------------------------------------
gpfs0         gpfs1nsd    c149
gpfs0         gpfs2nsd    c150
gpfs0         gpfs3nsd    c149
gpfs0         gpfs4nsd    c150
gpfs0         gpfs5nsd    c152
gpfs0         gpfs6nsd    c152
(free disk)   gpfs7nsd    c153
(free disk)   gpfs8nsd    c153

$ mmadddisk gpfs0 -F add_new_disk -r
The following disks of gpfs0 will be formatted on node c149.twaren.net:
gpfs7nsd: size 1220994560 KB
gpfs8nsd: size 1220984320 KB
Extending Allocation Map
Checking Allocation Map for storage pool 'system'
79 % complete on Fri Sep 4 17:16:19 2009
100 % complete on Fri Sep 4 17:16:20 2009
Completed adding disks to file system gpfs0.
mmadddisk: Propagating the cluster configuration data to all
affected nodes. This is an asynchronous process.
Restriping gpfs0 ...
Scanning file system metadata, phase 1 ...
2 % complete on Fri Sep 4 17:16:27 2009
22 % complete on Fri Sep 4 17:16:30 2009
40 % complete on Fri Sep 4 17:16:33 2009
59 % complete on Fri Sep 4 17:16:36 2009
80 % complete on Fri Sep 4 17:16:39 2009
100 % complete on Fri Sep 4 17:16:42 2009
Scan completed successfully.
Scanning file system metadata, phase 2 ...
15 % complete on Fri Sep 4 17:16:45 2009
33 % complete on Fri Sep 4 17:16:48 2009
51 % complete on Fri Sep 4 17:16:51 2009
69 % complete on Fri Sep 4 17:16:54 2009
86 % complete on Fri Sep 4 17:16:57 2009
100 % complete on Fri Sep 4 17:17:00 2009
Scan completed successfully.
Scanning file system metadata, phase 3 ...
Scan completed successfully.
Scanning file system metadata, phase 4 ...
Scan completed successfully.
Scanning user file metadata ...
Scan completed successfully.
Done
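
The -r flag restripes existing data onto the new disks right away. If you omit it to avoid the extra I/O load, the same rebalancing can be run later (a hedged note; -b rebalances all files across all disks):

$ mmrestripefs gpfs0 -b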

$ mmlsdisk gpfs0 -L
disk         driver sector failure holds    holds                              storage
name         type   size   group   metadata data  status availability disk id  pool     remarks
------------ ------ ------ ------- -------- ----- ------ ------------ -------- -------- -------
gpfs1nsd     nsd    512    1       yes      yes   ready  up           1        system   desc
gpfs2nsd     nsd    512    2       yes      yes   ready  up           2        system   desc
gpfs3nsd     nsd    512    1       yes      yes   ready  up           3        system   desc
gpfs4nsd     nsd    512    2       yes      yes   ready  up           4        system
gpfs5nsd     nsd    512    1       yes      yes   ready  up           5        system
gpfs6nsd     nsd    512    1       yes      yes   ready  up           6        system
gpfs7nsd     nsd    512    2       yes      yes   ready  up           7        system
gpfs8nsd     nsd    512    2       yes      yes   ready  up           8        system
Number of quorum disks: 3
Read quorum value: 2
Write quorum value: 2

$ df -h (the added capacity shows up dynamically)
Filesystem Size Used Avail Use% Mounted on
/dev/gpfs0 9.1T 4.6T 4.6T 51% /gpfs
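
df only shows the aggregate numbers; for a per-NSD breakdown of how the new disks contribute to the capacity, mmdf can be used:

$ mmdf gpfs0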

GPFS install on CentOS

Continuing from the previous post, GPFS install on Gentoo Linux: the installation itself goes fine on Gentoo, but actually running it is another story. The GPFS tools trigger kernel panics at the slightest provocation, so if you intend to use it in production, stick with an OS that IBM officially supports.

The following installs GPFS v3.2.1 on CentOS 5.3.

$ cat /etc/hosts
10.0.0.149 c149
10.0.0.150 c150
10.0.0.152 c152
10.0.0.153 c153

$ ssh-keygen -t rsa (create a key pair on every node)

$ copy each node's public key (id_rsa.pub) into authorized_keys on every node, including itself

$ test full-mesh login without any password prompt (also check tcp-wrappers); see the sketch below
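
A runnable version of the two manual steps above (a sketch, assuming the same four hostnames and that password logins still work while bootstrapping):

$ for h in c149 c150 c152 c153; do ssh-copy-id -i ~/.ssh/id_rsa.pub $h; done   (run this on every node)
$ for h in c149 c150 c152 c153; do ssh -o BatchMode=yes $h true && echo "$h ok"; done   (must print ok for all nodes, with no password prompt)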

$ cat /etc/redhat-release (fool the GPFS install script :o)
Red Hat Enterprise Linux Server release 5.3 (Tikanga)
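
(On a stock CentOS box this file says "CentOS release 5.3 (Final)", so presumably it was overwritten first; a hedged guess at the step:)

$ echo 'Red Hat Enterprise Linux Server release 5.3 (Tikanga)' > /etc/redhat-release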

$ touch /usr/lpp/mmfs/lib/libgpfslum.so (fool the GPFS install script :o)

$ cd /usr/src/gpfs
$ wget ftp://ftp.software.ibm.com/software/server/gpfs/gpfs-3.2.1-14.x86_64.update.tar.gz
$ tar zxvf gpfs-3.2.1-14.x86_64.update.tar.gz
$ rpm -ivh gpfs.msg.en_US-3.2.1-14.noarch.rpm
$ rpm -ivh gpfs.gpl-3.2.1-14.noarch.rpm
$ rpm -ivh gpfs.docs-3.2.1-14.noarch.rpm
$ yum install imake.x86_64
$ yum install compat-libstdc++-33.x86_64 (gpfs needs libstdc++.so.5)
$ rpm -ivh gpfs.base-3.2.1-14.x86_64.update.rpm --nodeps

$ cd /usr/lpp/mmfs/src
$ export SHARKCLONEROOT=/usr/lpp/mmfs/src
$ make Autoconfig (or configure site.mcr yourself)
$ cat config/site.mcr
#define GPFS_ARCH_X86_64
LINUX_DISTRIBUTION = REDHAT_AS_LINUX
#define LINUX_DISTRIBUTION_LEVEL 53
#define LINUX_KERNEL_VERSION 2061899

$ make clean
$ make World
$ make InstallImages
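
A quick sanity check that the modules were built and installed (a hedged sketch; the module names mmfs, mmfslinux and tracedev match the mmshutdown log later in this post):

$ find /lib/modules/$(uname -r) -name 'mmfs*' -o -name 'tracedev*'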

$ cat ~/.bashrc
PATH=$PATH:/usr/lpp/mmfs/bin

$ cat gpfs.nodes
c149:manager-quorum:
c150:manager-quorum:
c152:manager-quorum:
c153:manager-quorum:
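
Each line follows the node descriptor format; here every node is both a quorum node and a file-system manager candidate:

# NodeName:NodeDesignations:AdminNodeName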

$ mmcrcluster -C TWAREN_FTP -N gpfs.nodes -p c149 -R /usr/bin/scp -r /usr/bin/ssh -s c150 (-p/-s: primary and secondary configuration servers; -r/-R: remote shell and remote copy commands)
Tue Sep 15 20:09:52 CST 2009: mmcrcluster: Processing node c149
Tue Sep 15 20:09:52 CST 2009: mmcrcluster: Processing node c150
Tue Sep 15 20:09:53 CST 2009: mmcrcluster: Processing node c152
Tue Sep 15 20:09:54 CST 2009: mmcrcluster: Processing node c153
mmcrcluster: Command successfully completed
mmcrcluster: Propagating the cluster configuration data to all
affected nodes. This is an asynchronous process.

$ mmlscluster
GPFS cluster information
========================
GPFS cluster name: TWAREN_FTP.c149
GPFS cluster id: 720576581582423056
GPFS UID domain: TWAREN_FTP.c149
Remote shell command: /usr/bin/ssh
Remote file copy command: /usr/bin/scp

GPFS cluster configuration servers:
-----------------------------------
Primary server: c149
Secondary server: c150

 Node  Daemon node name  IP address   Admin node name  Designation
-----------------------------------------------------------------------------------------------
   1   c149              10.0.0.149   c149             quorum-manager
   2   c150              10.0.0.150   c150             quorum-manager
   3   c152              10.0.0.152   c152             quorum-manager
   4   c153              10.0.0.153   c153             quorum-manager

$ mmstartup -a
Tue Sep 15 20:10:37 CST 2009: mmstartup: Starting GPFS ...

$ mmgetstate -a -L
 Node number  Node name  Quorum  Nodes up  Total nodes  GPFS state  Remarks
------------------------------------------------------------------------------------
      1       c149       3       4         4            active      quorum node
      2       c150       3       4         4            active      quorum node
      3       c152       3       4         4            active      quorum node
      4       c153       3       4         4            active      quorum node

$ cat gpfs.disks (FailureGroup 1 & 2)
/dev/sda:c149::dataAndMetadata:1::
/dev/sdb:c149::dataAndMetadata:1::
/dev/sda:c152::dataAndMetadata:1::
/dev/sdb:c152::dataAndMetadata:1::
/dev/sda:c150::dataAndMetadata:2::
/dev/sdb:c150::dataAndMetadata:2::
/dev/sda:c153::dataAndMetadata:2::
/dev/sdb:c153::dataAndMetadata:2::

$ mmcrnsd -F gpfs.disks
mmcrnsd: Processing disk sda
mmcrnsd: Processing disk sdb
mmcrnsd: Processing disk sda
mmcrnsd: Processing disk sdb
mmcrnsd: Processing disk sda
mmcrnsd: Processing disk sdb
mmcrnsd: Processing disk sda
mmcrnsd: Processing disk sdb
mmcrnsd: Propagating the cluster configuration data to all
affected nodes. This is an asynchronous process.

$ cat gpfs.disks
# /dev/sda:c149::dataAndMetadata:1::
gpfs1nsd:::dataAndMetadata:1::
# /dev/sdb:c149::dataAndMetadata:1::
gpfs2nsd:::dataAndMetadata:1::
# /dev/sda:c152::dataAndMetadata:1::
gpfs3nsd:::dataAndMetadata:1::
# /dev/sdb:c152::dataAndMetadata:1::
gpfs4nsd:::dataAndMetadata:1::
# /dev/sda:c150::dataAndMetadata:2::
gpfs5nsd:::dataAndMetadata:2::
# /dev/sdb:c150::dataAndMetadata:2::
gpfs6nsd:::dataAndMetadata:2::
# /dev/sda:c153::dataAndMetadata:2::
gpfs7nsd:::dataAndMetadata:2::
# /dev/sdb:c153::dataAndMetadata:2::
gpfs8nsd:::dataAndMetadata:2::

$ mmlsnsd -a -L
File system   Disk name   NSD volume ID      NSD servers
---------------------------------------------------------------------------------------------
(free disk)   gpfs1nsd    000000004AAF8496   c149
(free disk)   gpfs2nsd    000000004AAF8497   c149
(free disk)   gpfs3nsd    000000004AAF8496   c152
(free disk)   gpfs4nsd    000000004AAF8497   c152
(free disk)   gpfs5nsd    000000004AAF8499   c150
(free disk)   gpfs6nsd    000000004AAF849A   c150
(free disk)   gpfs7nsd    000000004AAF849B   c153
(free disk)   gpfs8nsd    000000004AAF849C   c153

$ mmcrfs /gpfs gpfs0 -F gpfs.disks -B 256K -m 2 -M 2 -r 2 -R 2 -v no (-B block size; -m/-M default and maximum metadata replicas; -r/-R default and maximum data replicas; -v no skips checking whether the disks already belong to a file system)
The following disks of gpfs0 will be formatted on node c150.twaren.net:
gpfs1nsd: size 1220994560 KB
gpfs2nsd: size 1220984320 KB
gpfs3nsd: size 1220994560 KB
gpfs4nsd: size 1220984320 KB
gpfs5nsd: size 1220994560 KB
gpfs6nsd: size 1220994560 KB
gpfs7nsd: size 1220994560 KB
gpfs8nsd: size 1220984320 KB
Formatting file system ...
Disks up to size 11 TB can be added to storage pool 'system'.
Creating Inode File
Creating Allocation Maps
Clearing Inode Allocation Map
Clearing Block Allocation Map
Formatting Allocation Map for storage pool 'system'
Completed creation of file system /dev/gpfs0.
mmcrfs: Propagating the cluster configuration data to all
affected nodes. This is an asynchronous process.

$ mmlsdisk gpfs0 -L
disk         driver sector failure holds    holds                              storage
name         type   size   group   metadata data  status availability disk id  pool     remarks
------------ ------ ------ ------- -------- ----- ------ ------------ -------- -------- -------
gpfs1nsd     nsd    512    1       yes      yes   ready  up           1        system   desc
gpfs2nsd     nsd    512    1       yes      yes   ready  up           2        system   desc
gpfs3nsd     nsd    512    1       yes      yes   ready  down         3        system
gpfs4nsd     nsd    512    1       yes      yes   ready  down         4        system
gpfs5nsd     nsd    512    2       yes      yes   ready  up           5        system   desc
gpfs6nsd     nsd    512    2       yes      yes   ready  up           6        system
gpfs7nsd     nsd    512    2       yes      yes   ready  up           7        system
gpfs8nsd     nsd    512    2       yes      yes   ready  up           8        system
Number of quorum disks: 3
Read quorum value: 2
Write quorum value: 2

$ mount /gpfs

$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/gpfs0 9.1T 1.4G 9.1T 1% /gpfs

$ dd if=/dev/zero of=/gpfs/test bs=1024k count=10k
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB) copied, 91.5395 seconds, 117 MB/s
$ dd if=/dev/zero of=/gpfs/test bs=512k count=10k
10240+0 records in
10240+0 records out
5368709120 bytes (5.4 GB) copied, 45.7509 seconds, 117 MB/s
$ dd if=/dev/zero of=/gpfs/test bs=256k count=10k
10240+0 records in
10240+0 records out
2684354560 bytes (2.7 GB) copied, 22.8762 seconds, 117 MB/s
$ dd if=/dev/zero of=/gpfs/test bs=128k count=10k
10240+0 records in
10240+0 records out
1342177280 bytes (1.3 GB) copied, 11.4205 seconds, 118 MB/s
$ dd if=/dev/zero of=/gpfs/test bs=64k count=10k
10240+0 records in
10240+0 records out
671088640 bytes (671 MB) copied, 5.68368 seconds, 118 MB/s
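
The whole sweep can be scripted; a sketch that reproduces the runs above (note count stays at 10k, so the total bytes written shrink with the block size, and dd from /dev/zero only exercises sequential writes):

$ for bs in 1024k 512k 256k 128k 64k; do dd if=/dev/zero of=/gpfs/test bs=$bs count=10k; done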


$ mmlsnsd -a -L
File system   Disk name   NSD volume ID      NSD servers
---------------------------------------------------------------------------------------------
(free disk)   gpfs10nsd   000000004AAF899D   c149
(free disk)   gpfs11nsd   000000004AAF899D   c152
(free disk)   gpfs12nsd   000000004AAF899E   c152
(free disk)   gpfs13nsd   000000004AAF899F   c150
(free disk)   gpfs14nsd   000000004AAF89A0   c150
(free disk)   gpfs15nsd   000000004AAF89A1   c153
(free disk)   gpfs16nsd   000000004AAF89A2   c153
(free disk)   gpfs9nsd    000000004AAF899C   c149

$ mmcrfs /gpfs gpfs0 -F gpfs.disks -B 128K -m 2 -M 2 -r 2 -R 2 -v no
The following disks of gpfs0 will be formatted on node c150.twaren.net:
gpfs9nsd: size 1220994560 KB
gpfs10nsd: size 1220984320 KB
gpfs11nsd: size 1220994560 KB
gpfs12nsd: size 1220984320 KB
gpfs13nsd: size 1220994560 KB
gpfs14nsd: size 1220994560 KB
gpfs15nsd: size 1220994560 KB
gpfs16nsd: size 1220984320 KB
Formatting file system ...
Disks up to size 11 TB can be added to storage pool 'system'.
Creating Inode File
Creating Allocation Maps
Clearing Inode Allocation Map
Clearing Block Allocation Map
Formatting Allocation Map for storage pool 'system'
59 % complete on Tue Sep 15 20:35:04 2009
100 % complete on Tue Sep 15 20:35:07 2009
Completed creation of file system /dev/gpfs0.
mmcrfs: Propagating the cluster configuration data to all
affected nodes. This is an asynchronous process.

$ mmlsdisk gpfs0 -L
disk         driver sector failure holds    holds                              storage
name         type   size   group   metadata data  status availability disk id  pool     remarks
------------ ------ ------ ------- -------- ----- ------ ------------ -------- -------- -------
gpfs9nsd     nsd    512    1       yes      yes   ready  up           1        system   desc
gpfs10nsd    nsd    512    1       yes      yes   ready  up           2        system   desc
gpfs11nsd    nsd    512    1       yes      yes   ready  down         3        system
gpfs12nsd    nsd    512    1       yes      yes   ready  up           4        system
gpfs13nsd    nsd    512    2       yes      yes   ready  up           5        system   desc
gpfs14nsd    nsd    512    2       yes      yes   ready  up           6        system
gpfs15nsd    nsd    512    2       yes      yes   ready  up           7        system
gpfs16nsd    nsd    512    2       yes      yes   ready  up           8        system
Number of quorum disks: 3
Read quorum value: 2
Write quorum value: 2

$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/gpfs0 9.1T 2.1G 9.1T 1% /gpfs

$ dd if=/dev/zero of=/gpfs/test bs=1024k count=10k
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB) copied, 114.629 seconds, 93.7 MB/s
$ dd if=/dev/zero of=/gpfs/test bs=512k count=10k
10240+0 records in
10240+0 records out
5368709120 bytes (5.4 GB) copied, 57.2698 seconds, 93.7 MB/s
$ dd if=/dev/zero of=/gpfs/test bs=256k count=10k
10240+0 records in
10240+0 records out
2684354560 bytes (2.7 GB) copied, 28.6111 seconds, 93.8 MB/s
$ dd if=/dev/zero of=/gpfs/test bs=128k count=10k
10240+0 records in
10240+0 records out
1342177280 bytes (1.3 GB) copied, 14.2832 seconds, 94.0 MB/s
$ dd if=/dev/zero of=/gpfs/test bs=64k count=10k
10240+0 records in
10240+0 records out
671088640 bytes (671 MB) copied, 7.12948 seconds, 94.1 MB/s


$ mmlsnsd -a -L
File system   Disk name   NSD volume ID      NSD servers
---------------------------------------------------------------------------------------------
(free disk)   gpfs17nsd   000000004AAF8CF3   c149
(free disk)   gpfs18nsd   000000004AAF8CF4   c149
(free disk)   gpfs19nsd   000000004AAF8CF4   c152
(free disk)   gpfs20nsd   000000004AAF8CF5   c152
(free disk)   gpfs21nsd   000000004AAF8CF6   c150
(free disk)   gpfs22nsd   000000004AAF8CF8   c150
(free disk)   gpfs23nsd   000000004AAF8CF9   c153
(free disk)   gpfs24nsd   000000004AAF8CFA   c153

$ mmcrfs /gpfs gpfs0 -F gpfs.disks -B 64K -m 2 -M 2 -r 2 -R 2 -v no
The following disks of gpfs0 will be formatted on node c150.twaren.net:
gpfs17nsd: size 1220994560 KB
gpfs18nsd: size 1220984320 KB
gpfs19nsd: size 1220994560 KB
gpfs20nsd: size 1220984320 KB
gpfs21nsd: size 1220994560 KB
gpfs22nsd: size 1220994560 KB
gpfs23nsd: size 1220994560 KB
gpfs24nsd: size 1220984320 KB
Formatting file system ...
Disks up to size 11 TB can be added to storage pool 'system'.
Creating Inode File
Creating Allocation Maps
Clearing Inode Allocation Map
Clearing Block Allocation Map
Formatting Allocation Map for storage pool 'system'
29 % complete on Tue Sep 15 20:49:24 2009
57 % complete on Tue Sep 15 20:49:29 2009
86 % complete on Tue Sep 15 20:49:34 2009
100 % complete on Tue Sep 15 20:49:36 2009
Completed creation of file system /dev/gpfs0.
mmcrfs: Propagating the cluster configuration data to all
affected nodes. This is an asynchronous process.

$ mmlsdisk gpfs0 -L
disk         driver sector failure holds    holds                              storage
name         type   size   group   metadata data  status availability disk id  pool     remarks
------------ ------ ------ ------- -------- ----- ------ ------------ -------- -------- -------
gpfs17nsd    nsd    512    1       yes      yes   ready  up           1        system   desc
gpfs18nsd    nsd    512    1       yes      yes   ready  up           2        system   desc
gpfs19nsd    nsd    512    1       yes      yes   ready  down         3        system
gpfs20nsd    nsd    512    1       yes      yes   ready  up           4        system
gpfs21nsd    nsd    512    2       yes      yes   ready  up           5        system   desc
gpfs22nsd    nsd    512    2       yes      yes   ready  up           6        system
gpfs23nsd    nsd    512    2       yes      yes   ready  up           7        system
gpfs24nsd    nsd    512    2       yes      yes   ready  up           8        system
Number of quorum disks: 3
Read quorum value: 2
Write quorum value: 2

$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/gpfs0 9.1T 3.3G 9.1T 1% /gpfs

$ dd if=/dev/zero of=/gpfs/test bs=1024k count=10k
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB) copied, 116.784 seconds, 91.9 MB/s
$ dd if=/dev/zero of=/gpfs/test bs=512k count=10k
10240+0 records in
10240+0 records out
5368709120 bytes (5.4 GB) copied, 58.4151 seconds, 91.9 MB/s
$ dd if=/dev/zero of=/gpfs/test bs=256k count=10k
10240+0 records in
10240+0 records out
2684354560 bytes (2.7 GB) copied, 28.9273 seconds, 92.8 MB/s
$ dd if=/dev/zero of=/gpfs/test bs=128k count=10k
10240+0 records in
10240+0 records out
1342177280 bytes (1.3 GB) copied, 14.3859 seconds, 93.3 MB/s
$ dd if=/dev/zero of=/gpfs/test bs=64k count=10k
10240+0 records in
10240+0 records out
671088640 bytes (671 MB) copied, 7.43935 seconds, 90.2 MB/s


$ mmlsnsd -a -L
File system   Disk name   NSD volume ID      NSD servers
---------------------------------------------------------------------------------------------
(free disk)   gpfs25nsd   000000004AAF91F1   c149
(free disk)   gpfs26nsd   000000004AAF91F2   c149
(free disk)   gpfs27nsd   000000004AAF91F1   c152
(free disk)   gpfs28nsd   000000004AAF91F3   c152
(free disk)   gpfs29nsd   000000004AAF91F4   c150
(free disk)   gpfs30nsd   000000004AAF91F5   c150
(free disk)   gpfs31nsd   000000004AAF91F6   c153
(free disk)   gpfs32nsd   000000004AAF91F7   c153

$ mmcrfs /gpfs gpfs0 -F gpfs.disks -B 512K -m 2 -M 2 -r 2 -R 2 -v no
The following disks of gpfs0 will be formatted on node c150.twaren.net:
gpfs25nsd: size 1220994560 KB
gpfs26nsd: size 1220984320 KB
gpfs27nsd: size 1220994560 KB
gpfs28nsd: size 1220984320 KB
gpfs29nsd: size 1220994560 KB
gpfs30nsd: size 1220994560 KB
gpfs31nsd: size 1220994560 KB
gpfs32nsd: size 1220984320 KB
Formatting file system ...
Disks up to size 18 TB can be added to storage pool 'system'.
Creating Inode File
Creating Allocation Maps
Clearing Inode Allocation Map
Clearing Block Allocation Map
Formatting Allocation Map for storage pool 'system'
Completed creation of file system /dev/gpfs0.
mmcrfs: Propagating the cluster configuration data to all
affected nodes. This is an asynchronous process.

$ mmlsdisk gpfs0 -L
disk         driver sector failure holds    holds                              storage
name         type   size   group   metadata data  status availability disk id  pool     remarks
------------ ------ ------ ------- -------- ----- ------ ------------ -------- -------- -------
gpfs25nsd    nsd    512    1       yes      yes   ready  up           1        system   desc
gpfs26nsd    nsd    512    1       yes      yes   ready  up           2        system   desc
gpfs27nsd    nsd    512    1       yes      yes   ready  down         3        system
gpfs28nsd    nsd    512    1       yes      yes   ready  up           4        system
gpfs29nsd    nsd    512    2       yes      yes   ready  up           5        system   desc
gpfs30nsd    nsd    512    2       yes      yes   ready  up           6        system
gpfs31nsd    nsd    512    2       yes      yes   ready  up           7        system
gpfs32nsd    nsd    512    2       yes      yes   ready  up           8        system
Number of quorum disks: 3
Read quorum value: 2
Write quorum value: 2

$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/gpfs0 9.1T 1.3G 9.1T 1% /gpfs

$ dd if=/dev/zero of=/gpfs/test bs=1024k count=10k
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB) copied, 114.431 seconds, 93.8 MB/s
$ dd if=/dev/zero of=/gpfs/test bs=512k count=10k
10240+0 records in
10240+0 records out
5368709120 bytes (5.4 GB) copied, 57.4142 seconds, 93.5 MB/s
$ dd if=/dev/zero of=/gpfs/test bs=256k count=10k
10240+0 records in
10240+0 records out
2684354560 bytes (2.7 GB) copied, 28.5306 seconds, 94.1 MB/s
$ dd if=/dev/zero of=/gpfs/test bs=128k count=10k
10240+0 records in
10240+0 records out
1342177280 bytes (1.3 GB) copied, 14.2473 seconds, 94.2 MB/s
$ dd if=/dev/zero of=/gpfs/test bs=64k count=10k
10240+0 records in
10240+0 records out
671088640 bytes (671 MB) copied, 7.10075 seconds, 94.5 MB/s


$ mmlsnsd -a -L
File system   Disk name   NSD volume ID      NSD servers
---------------------------------------------------------------------------------------------
(free disk)   gpfs33nsd   000000004AAF9486   c149
(free disk)   gpfs34nsd   000000004AAF9487   c149
(free disk)   gpfs35nsd   000000004AAF9486   c152
(free disk)   gpfs36nsd   000000004AAF9488   c152
(free disk)   gpfs37nsd   000000004AAF9489   c150
(free disk)   gpfs38nsd   000000004AAF948A   c150
(free disk)   gpfs39nsd   000000004AAF948B   c153
(free disk)   gpfs40nsd   000000004AAF948C   c153

$ mmcrfs /gpfs gpfs0 -F gpfs.disks -B 1024K -m 2 -M 2 -r 2 -R 2 -v no
The following disks of gpfs0 will be formatted on node c150.twaren.net:
gpfs33nsd: size 1220994560 KB
gpfs34nsd: size 1220984320 KB
gpfs35nsd: size 1220994560 KB
gpfs36nsd: size 1220984320 KB
gpfs37nsd: size 1220994560 KB
gpfs38nsd: size 1220994560 KB
gpfs39nsd: size 1220994560 KB
gpfs40nsd: size 1220984320 KB
Formatting file system ...
Disks up to size 18 TB can be added to storage pool 'system'.
Creating Inode File
Creating Allocation Maps
Clearing Inode Allocation Map
Clearing Block Allocation Map
Formatting Allocation Map for storage pool 'system'
Completed creation of file system /dev/gpfs0.
mmcrfs: Propagating the cluster configuration data to all
affected nodes. This is an asynchronous process.

$ mmlsdisk gpfs0 -L
disk         driver sector failure holds    holds                              storage
name         type   size   group   metadata data  status availability disk id  pool     remarks
------------ ------ ------ ------- -------- ----- ------ ------------ -------- -------- -------
gpfs33nsd    nsd    512    1       yes      yes   ready  up           1        system   desc
gpfs34nsd    nsd    512    1       yes      yes   ready  up           2        system   desc
gpfs35nsd    nsd    512    1       yes      yes   ready  down         3        system
gpfs36nsd    nsd    512    1       yes      yes   ready  up           4        system
gpfs37nsd    nsd    512    2       yes      yes   ready  up           5        system   desc
gpfs38nsd    nsd    512    2       yes      yes   ready  up           6        system
gpfs39nsd    nsd    512    2       yes      yes   ready  up           7        system
gpfs40nsd    nsd    512    2       yes      yes   ready  up           8        system
Number of quorum disks: 3
Read quorum value: 2
Write quorum value: 2

$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/gpfs0 9.1T 1.1G 9.1T 1% /gpfs

$ dd if=/dev/zero of=/gpfs/test bs=1024k count=10k
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB) copied, 114.311 seconds, 93.9 MB/s
$ dd if=/dev/zero of=/gpfs/test bs=512k count=10k
10240+0 records in
10240+0 records out
5368709120 bytes (5.4 GB) copied, 56.9555 seconds, 94.3 MB/s
$ dd if=/dev/zero of=/gpfs/test bs=256k count=10k
10240+0 records in
10240+0 records out
2684354560 bytes (2.7 GB) copied, 28.5401 seconds, 94.1 MB/s
$ dd if=/dev/zero of=/gpfs/test bs=128k count=10k
10240+0 records in
10240+0 records out
1342177280 bytes (1.3 GB) copied, 14.2041 seconds, 94.5 MB/s
$ dd if=/dev/zero of=/gpfs/test bs=64k count=10k
10240+0 records in
10240+0 records out
671088640 bytes (671 MB) copied, 7.02471 seconds, 95.5 MB/s


$ mmlsnsd -a -L
File system   Disk name   NSD volume ID      NSD servers
---------------------------------------------------------------------------------------------
(free disk)   gpfs41nsd   000000004AAF9746   c149
(free disk)   gpfs42nsd   000000004AAF9747   c149
(free disk)   gpfs43nsd   000000004AAF9747   c152
(free disk)   gpfs44nsd   000000004AAF9748   c152
(free disk)   gpfs45nsd   000000004AAF9749   c150
(free disk)   gpfs46nsd   000000004AAF974A   c150
(free disk)   gpfs47nsd   000000004AAF974B   c153
(free disk)   gpfs48nsd   000000004AAF974D   c153

$ mmcrfs /gpfs gpfs0 -F gpfs.disks -B 2048K -m 2 -M 2 -r 2 -R 2 -v no
mmcrfs: The specified block size (2048K) exceeds the maximum
allowed block size currently in effect (1024K).
Either specify a smaller value for the -B parameter,
or increase the maximum block size by issuing:
mmchconfig maxblocksize=2048K
and restarting the GPFS daemon
mmcrfs: Command failed. Examine previous error messages to determine cause.

$ mmchconfig maxblocksize=2048K
Verifying GPFS is stopped on all nodes ...
mmchconfig: GPFS is still active on c153
mmchconfig: GPFS is still active on c150
mmchconfig: GPFS is still active on c152
mmchconfig: GPFS is still active on c149
mmchconfig: Command failed. Examine previous error messages to determine cause.

$ mmshutdown -a
Tue Sep 15 21:33:37 CST 2009: mmshutdown: Starting force unmount of GPFS file systems
Tue Sep 15 21:33:42 CST 2009: mmshutdown: Shutting down GPFS daemons
c149: Shutting down!
c150: Shutting down!
c152: Shutting down!
c153: Shutting down!
c149: 'shutdown' command about to kill process 2905
c149: Unloading modules from /usr/lpp/mmfs/bin
c149: Unloading module mmfs
c150: 'shutdown' command about to kill process 16590
c150: Unloading modules from /usr/lpp/mmfs/bin
c150: Unloading module mmfs
c153: 'shutdown' command about to kill process 31397
c153: Unloading modules from /usr/lpp/mmfs/bin
c152: 'shutdown' command about to kill process 17482
c152: Unloading modules from /usr/lpp/mmfs/bin
c153: Unloading module mmfs
c152: Unloading module mmfs
c149: Unloading module mmfslinux
c149: Unloading module tracedev
c150: Unloading module mmfslinux
c150: Unloading module tracedev
c153: Unloading module mmfslinux
c153: Unloading module tracedev
c152: Unloading module mmfslinux
c152: Unloading module tracedev
Tue Sep 15 21:33:52 CST 2009: mmshutdown: Finished

$ mmchconfig maxblocksize=2048K
Verifying GPFS is stopped on all nodes ...
mmchconfig: Command successfully completed
mmchconfig: Propagating the cluster configuration data to all
affected nodes. This is an asynchronous process.

$ mmstartup -a
Tue Sep 15 21:34:06 CST 2009: mmstartup: Starting GPFS ...

$ mmcrfs /gpfs gpfs0 -F gpfs.disks -B 2048K -m 2 -M 2 -r 2 -R 2 -v no
The following disks of gpfs0 will be formatted on node c149.twaren.net:
gpfs41nsd: size 1220994560 KB
gpfs42nsd: size 1220984320 KB
gpfs43nsd: size 1220984320 KB
gpfs44nsd: size 1220984320 KB
gpfs45nsd: size 1220994560 KB
gpfs46nsd: size 1220994560 KB
gpfs47nsd: size 1220994560 KB
gpfs48nsd: size 1220984320 KB
Formatting file system ...
Disks up to size 18 TB can be added to storage pool 'system'.
Creating Inode File
Creating Allocation Maps
Clearing Inode Allocation Map
Clearing Block Allocation Map
Formatting Allocation Map for storage pool 'system'
Completed creation of file system /dev/gpfs0.
mmcrfs: Propagating the cluster configuration data to all
affected nodes. This is an asynchronous process.

$ mmlsdisk gpfs0 -L
disk         driver sector failure holds    holds                              storage
name         type   size   group   metadata data  status availability disk id  pool     remarks
------------ ------ ------ ------- -------- ----- ------ ------------ -------- -------- -------
gpfs41nsd    nsd    512    1       yes      yes   ready  up           1        system   desc
gpfs42nsd    nsd    512    1       yes      yes   ready  down         2        system
gpfs43nsd    nsd    512    1       yes      yes   ready  up           3        system   desc
gpfs44nsd    nsd    512    1       yes      yes   ready  up           4        system
gpfs45nsd    nsd    512    2       yes      yes   ready  up           5        system   desc
gpfs46nsd    nsd    512    2       yes      yes   ready  up           6        system
gpfs47nsd    nsd    512    2       yes      yes   ready  up           7        system
gpfs48nsd    nsd    512    2       yes      yes   ready  up           8        system
Number of quorum disks: 3
Read quorum value: 2
Write quorum value: 2

$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/gpfs0 9.1T 936M 9.1T 1% /gpfs

$ dd if=/dev/zero of=/gpfs/test bs=1024k count=10k
dd: writing `/gpfs/test': Stale NFS file handle
9+0 records in
8+0 records out
8388608 bytes (8.4 MB) copied, 0.054915 seconds, 153 MB/s
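
Note that every mmlsdisk listing above shows one disk with availability "down" (gpfs42nsd in this last run). Whether or not that explains the stale-handle error here, a reasonable first recovery step would be to start the down disks and let GPFS re-protect the replicated data:

$ mmchdisk gpfs0 start -a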