Friday, October 23, 2009

upgrade GPFS on CentOS 5.4

CentOS 5.4 was just released, so while updating the OS I also updated GPFS (3.2.1-14 -> 3.3.0-1).
The upgrade went through without any real problems.
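
The transcript below goes straight to the RPM update; normally you would first stop GPFS on the node being updated, roughly like this (a sketch, not the official upgrade procedure; substitute your own node name):

$ mmumount gpfs0 -N c149
$ mmshutdown -N c149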

$ cat /etc/redhat-release
Red Hat Enterprise Linux Server release 5.4 (Tikanga)

$ tar xvf gpfs-3.3.0-1.x86_64.update.tar.gz
README
gpfs.base-3.3.0-1.x86_64.update.rpm
gpfs.docs-3.3.0-1.noarch.rpm
gpfs.gpl-3.3.0-1.noarch.rpm
gpfs.gui-3.3.0-1.x86_64.rpm
gpfs.msg.en_US-3.3.0-1.noarch.rpm

$ rpm -U gpfs.base-3.3.0-1.x86_64.update.rpm
$ rpm -U gpfs.docs-3.3.0-1.noarch.rpm
$ rpm -U gpfs.msg.en_US-3.3.0-1.noarch.rpm
$ rpm -U gpfs.gui-3.3.0-1.x86_64.rpm
You may start the GPFS GUI now by typing : /etc/init.d/gpfsgui start
Alternatively, the GPFS GUI will start on reboot.
$ rpm -ivh --force gpfs.gpl-3.3.0-1.noarch.rpm
Preparing... ########################################### [100%]
1:gpfs.gpl ########################################### [100%]

$ cd /usr/lpp/mmfs/src
$ export SHARKCLONEROOT=/usr/lpp/mmfs/src

$ make Autoconfig
cd /usr/lpp/mmfs/src/config; ./configure --genenvonly; /usr/bin/cpp -P def.mk.proto > ./def.mk; exit $? || exit 1

$ cat config/env.mcr
#define GPFS_ARCH_X86_64
#define GPFS_LINUX
LINUX_DISTRIBUTION := REDHAT_AS_LINUX
#define LINUX_DISTRIBUTION_LEVEL 54
#define LINUX_KERNEL_VERSION 2061899
KERNEL_BUILD_DIR := /lib/modules/2.6.18-164.el5/build

$ make clean
$ make World
$ make InstallImages
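
With the new portability layer installed, GPFS can be brought back up and the result checked; a quick sketch:

$ mmstartup -N c149 (or mmstartup -a once every node has been updated)
$ mmgetstate -a
$ rpm -qa | grep gpfs (should now list the 3.3.0-1 packages)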

Good reference: http://www.ibm.com/developerworks/wikis/display/hpccentral/General+Parallel+File+System+(GPFS)

And a GPFS tuning link as well: http://www.ibm.com/developerworks/wikis/display/hpccentral/GPFS+Tuning+Parameters

Monday, September 7, 2009

add storage node on GPFS

Adding a new storage node to the file system.

※ Before adding the node, make sure its ssh keys are already set up and that all other nodes can log in to it without a password (a quick check is sketched right below).
※ Make sure the firewall allows the nodes to connect to each other.
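
A quick sanity check before running mmaddnode (a rough sketch; c153 is the new node, and the GPFS daemon port, 1191/tcp by default, also has to be open between all nodes):

$ ssh c153 date (run from every existing node; must not prompt for a password)
$ ssh c149 date (run from c153 back to each existing node)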

$ mmaddnode c153 (the node can be designated quorum here, or changed later with mmchconfig designation=quorum or mmchnode --quorum; see the example below)
Fri Sep 4 17:12:58 CST 2009: mmaddnode: Processing node c153
mmaddnode: Command successfully completed
mmaddnode: Propagating the cluster configuration data to all
affected nodes. This is an asynchronous process.
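
If the designation needs to change later, one of the commands mentioned above will do it; for example (a sketch, mmchnode being the syntax on newer releases):

$ mmchnode --quorum -N c153
$ mmlscluster (confirm the Designation column)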

$ mmstartup -N c153
Fri Sep 4 17:13:11 CST 2009: mmstartup: Starting GPFS ...

$ cat add_new_disk (this example puts the new disks in failure group 2)
/dev/sda:c153::dataAndMetadata:2::
/dev/sdb:c153::dataAndMetadata:2::

$ mmcrnsd -F add_new_disk -v no
mmcrnsd: Processing disk sda
mmcrnsd: Processing disk sdb
mmcrnsd: Propagating the cluster configuration data to all
affected nodes. This is an asynchronous process.

$ mmlsnsd
File system Disk name NSD servers
---------------------------------------------------------------------------
gpfs0 gpfs1nsd c149
gpfs0 gpfs2nsd c150
gpfs0 gpfs3nsd c149
gpfs0 gpfs4nsd c150
gpfs0 gpfs5nsd c152
gpfs0 gpfs6nsd c152
(free disk) gpfs7nsd c153
(free disk) gpfs8nsd c153

$ mmadddisk gpfs0 -F add_new_disk -r
The following disks of gpfs0 will be formatted on node c149.twaren.net:
gpfs7nsd: size 1220994560 KB
gpfs8nsd: size 1220984320 KB
Extending Allocation Map
Checking Allocation Map for storage pool 'system'
79 % complete on Fri Sep 4 17:16:19 2009
100 % complete on Fri Sep 4 17:16:20 2009
Completed adding disks to file system gpfs0.
mmadddisk: Propagating the cluster configuration data to all
affected nodes. This is an asynchronous process.
Restriping gpfs0 ...
Scanning file system metadata, phase 1 ...
2 % complete on Fri Sep 4 17:16:27 2009
22 % complete on Fri Sep 4 17:16:30 2009
40 % complete on Fri Sep 4 17:16:33 2009
59 % complete on Fri Sep 4 17:16:36 2009
80 % complete on Fri Sep 4 17:16:39 2009
100 % complete on Fri Sep 4 17:16:42 2009
Scan completed successfully.
Scanning file system metadata, phase 2 ...
15 % complete on Fri Sep 4 17:16:45 2009
33 % complete on Fri Sep 4 17:16:48 2009
51 % complete on Fri Sep 4 17:16:51 2009
69 % complete on Fri Sep 4 17:16:54 2009
86 % complete on Fri Sep 4 17:16:57 2009
100 % complete on Fri Sep 4 17:17:00 2009
Scan completed successfully.
Scanning file system metadata, phase 3 ...
Scan completed successfully.
Scanning file system metadata, phase 4 ...
Scan completed successfully.
Scanning user file metadata ...
Scan completed successfully.
Done
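
To see how the data is now spread over the NSDs, including the two new disks, mmdf gives a per-disk breakdown; for example:

$ mmdf gpfs0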

$ mmlsdisk gpfs0 -L
disk driver sector failure holds holds storage
name type size group metadata data status availability disk id pool remarks
------------ -------- ------ ------- -------- ----- ------------- ------------ ------- ------------ ---------
gpfs1nsd nsd 512 1 yes yes ready up 1 system desc
gpfs2nsd nsd 512 2 yes yes ready up 2 system desc
gpfs3nsd nsd 512 1 yes yes ready up 3 system desc
gpfs4nsd nsd 512 2 yes yes ready up 4 system
gpfs5nsd nsd 512 1 yes yes ready up 5 system
gpfs6nsd nsd 512 1 yes yes ready up 6 system
gpfs7nsd nsd 512 2 yes yes ready up 7 system
gpfs8nsd nsd 512 2 yes yes ready up 8 system
Number of quorum disks: 3
Read quorum value: 2
Write quorum value: 2

$ df -h (the new disk space is added on the fly, no remount needed)
Filesystem Size Used Avail Use% Mounted on
/dev/gpfs0 9.1T 4.6T 4.6T 51% /gpfs

GPFS install on CentOS

Following up on the previous post, GPFS install on Gentoo Linux.
The install on Gentoo itself was fine, but actually using it was another story: the related tools would trigger a kernel panic at the slightest provocation. If you intend to run this in production, stick to an OS that IBM supports.

Below is an installation of GPFS v3.2.1 on CentOS 5.3.

$ cat /etc/hosts
10.0.0.149 c149
10.0.0.150 c150
10.0.0.152 c152
10.0.0.153 c153

$ ssh-keygen -t rsa (create key pair on every node)

$ copy each node's public key (id_rsa.pub) into authorized_keys (on every node, including itself)

$ test full-mesh login without any password prompt (also check tcp-wrappers); a sketch follows below
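
A minimal sketch of the key distribution, run on every node (assuming the host list above; the first loop will still ask for passwords once):

$ for h in c149 c150 c152 c153; do cat ~/.ssh/id_rsa.pub | ssh $h 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'; done
$ for h in c149 c150 c152 c153; do ssh $h date; done (must complete without any password prompt)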

$ cat /etc/redhat-release (fool gpfs install script :o)
Red Hat Enterprise Linux Server release 5.3 (Tikanga)
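
On CentOS this file normally reads "CentOS release 5.3 (Final)"; overwriting it is enough to get past the installer's OS check, e.g. (back it up first):

$ cp /etc/redhat-release /etc/redhat-release.orig
$ echo "Red Hat Enterprise Linux Server release 5.3 (Tikanga)" > /etc/redhat-release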

$ touch /usr/lpp/mmfs/lib/libgpfslum.so (fool gpfs install script :o)

$ cd /usr/src/gpfs
$ wget ftp://ftp.software.ibm.com/software/server/gpfs/gpfs-3.2.1-14.x86_64.update.tar.gz
$ tar zxvf gpfs-3.2.1-14.x86_64.update.tar.gz
$ rpm -ivh gpfs.msg.en_US-3.2.1-14.noarch.rpm
$ rpm -ivh gpfs.gpl-3.2.1-14.noarch.rpm
$ rpm -ivh gpfs.docs-3.2.1-14.noarch.rpm
$ yum install imake.x86_64
$ yum install compat-libstdc++-33.x86_64 (gpfs needs libstdc++.so.5)
$ rpm -ivh gpfs.base-3.2.1-14.x86_64.update.rpm --nodeps

$ cd /usr/lpp/mmfs/src
$ export SHARKCLONEROOT=/usr/lpp/mmfs/src
$ make Autoconfig (or configure it by hand)
$ cat config/site.mcr
#define GPFS_ARCH_X86_64
LINUX_DISTRIBUTION = REDHAT_AS_LINUX
#define LINUX_DISTRIBUTION_LEVEL 53
#define LINUX_KERNEL_VERSION 2061899

$ make clean
$ make World
$ make InstallImages

$ cat ~/.bashrc
PATH=$PATH:/usr/lpp/mmfs/bin

$ cat gpfs.nodes
c149:manager-quorum:
c150:manager-quorum:
c152:manager-quorum:
c153:manager-quorum:

$ mmcrcluster -C TWAREN_FTP -N gpfs.nodes -p c149 -R /usr/bin/scp -r /usr/bin/ssh -s c150
Tue Sep 15 20:09:52 CST 2009: mmcrcluster: Processing node c149
Tue Sep 15 20:09:52 CST 2009: mmcrcluster: Processing node c150
Tue Sep 15 20:09:53 CST 2009: mmcrcluster: Processing node c152
Tue Sep 15 20:09:54 CST 2009: mmcrcluster: Processing node c153
mmcrcluster: Command successfully completed
mmcrcluster: Propagating the cluster configuration data to all
affected nodes. This is an asynchronous process.

$ mmlscluster
GPFS cluster information
========================
GPFS cluster name: TWAREN_FTP.c149
GPFS cluster id: 720576581582423056
GPFS UID domain: TWAREN_FTP.c149
Remote shell command: /usr/bin/ssh
Remote file copy command: /usr/bin/scp

GPFS cluster configuration servers:
-----------------------------------
Primary server: c149
Secondary server: c150

Node Daemon node name IP address Admin node name Designation
-----------------------------------------------------------------------------------------------
1 c149 10.0.0.149 c149 quorum-manager
2 c150 10.0.0.150 c150 quorum-manager
3 c152 10.0.0.152 c152 quorum-manager
4 c153 10.0.0.153 c153 quorum-manager

$ mmstartup -a
Tue Sep 15 20:10:37 CST 2009: mmstartup: Starting GPFS ...

$ mmgetstate -a -L
Node number Node name Quorum Nodes up Total nodes GPFS state Remarks
------------------------------------------------------------------------------------
1 c149 3 4 4 active quorum node
2 c150 3 4 4 active quorum node
3 c152 3 4 4 active quorum node
4 c153 3 4 4 active quorum node

$ cat gpfs.disks (FailureGroup 1 & 2)
/dev/sda:c149::dataAndMetadata:1::
/dev/sdb:c149::dataAndMetadata:1::
/dev/sda:c152::dataAndMetadata:1::
/dev/sdb:c152::dataAndMetadata:1::
/dev/sda:c150::dataAndMetadata:2::
/dev/sdb:c150::dataAndMetadata:2::
/dev/sda:c153::dataAndMetadata:2::
/dev/sdb:c153::dataAndMetadata:2::
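
For reference, each descriptor line has the form (as far as I recall for GPFS 3.2) DiskName:PrimaryServer:BackupServer:DiskUsage:FailureGroup:DesiredName:StoragePool, so the file above defines two disks per server, no backup NSD server, failure group 1 or 2, and the default NSD names and 'system' storage pool.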

$ mmcrnsd -F gpfs.disks
mmcrnsd: Processing disk sda
mmcrnsd: Processing disk sdb
mmcrnsd: Processing disk sda
mmcrnsd: Processing disk sdb
mmcrnsd: Processing disk sda
mmcrnsd: Processing disk sdb
mmcrnsd: Processing disk sda
mmcrnsd: Processing disk sdb
mmcrnsd: Propagating the cluster configuration data to all
affected nodes. This is an asynchronous process.

$ cat gpfs.disks
# /dev/sda:c149::dataAndMetadata:1::
gpfs1nsd:::dataAndMetadata:1::
# /dev/sdb:c149::dataAndMetadata:1::
gpfs2nsd:::dataAndMetadata:1::
# /dev/sda:c152::dataAndMetadata:1::
gpfs3nsd:::dataAndMetadata:1::
# /dev/sdb:c152::dataAndMetadata:1::
gpfs4nsd:::dataAndMetadata:1::
# /dev/sda:c150::dataAndMetadata:2::
gpfs5nsd:::dataAndMetadata:2::
# /dev/sdb:c150::dataAndMetadata:2::
gpfs6nsd:::dataAndMetadata:2::
# /dev/sda:c153::dataAndMetadata:2::
gpfs7nsd:::dataAndMetadata:2::
# /dev/sdb:c153::dataAndMetadata:2::
gpfs8nsd:::dataAndMetadata:2::

$ mmlsnsd -a -L
File system Disk name NSD volume ID NSD servers
---------------------------------------------------------------------------------------------
(free disk) gpfs1nsd 000000004AAF8496 c149
(free disk) gpfs2nsd 000000004AAF8497 c149
(free disk) gpfs3nsd 000000004AAF8496 c152
(free disk) gpfs4nsd 000000004AAF8497 c152
(free disk) gpfs5nsd 000000004AAF8499 c150
(free disk) gpfs6nsd 000000004AAF849A c150
(free disk) gpfs7nsd 000000004AAF849B c153
(free disk) gpfs8nsd 000000004AAF849C c153

$ mmcrfs /gpfs gpfs0 -F gpfs.disks -B 256K -m 2 -M 2 -r 2 -R 2 -v no
The following disks of gpfs0 will be formatted on node c150.twaren.net:
gpfs1nsd: size 1220994560 KB
gpfs2nsd: size 1220984320 KB
gpfs3nsd: size 1220994560 KB
gpfs4nsd: size 1220984320 KB
gpfs5nsd: size 1220994560 KB
gpfs6nsd: size 1220994560 KB
gpfs7nsd: size 1220994560 KB
gpfs8nsd: size 1220984320 KB
Formatting file system ...
Disks up to size 11 TB can be added to storage pool 'system'.
Creating Inode File
Creating Allocation Maps
Clearing Inode Allocation Map
Clearing Block Allocation Map
Formatting Allocation Map for storage pool 'system'
Completed creation of file system /dev/gpfs0.
mmcrfs: Propagating the cluster configuration data to all
affected nodes. This is an asynchronous process.
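
For context: -B sets the file system block size, -m/-r set the default number of metadata/data replicas, -M/-R set the maximum number of replicas, and -v no skips the check that the disks have never belonged to another file system. The resulting settings can be read back with mmlsfs, e.g.:

$ mmlsfs gpfs0 -B -m -r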

$ mmlsdisk gpfs0 -L
disk driver sector failure holds holds storage
name type size group metadata data status availability disk id pool remarks
------------ -------- ------ ------- -------- ----- ------------- ------------ ------- ------------ ---------
gpfs1nsd nsd 512 1 yes yes ready up 1 system desc
gpfs2nsd nsd 512 1 yes yes ready up 2 system desc
gpfs3nsd nsd 512 1 yes yes ready down 3 system
gpfs4nsd nsd 512 1 yes yes ready down 4 system
gpfs5nsd nsd 512 2 yes yes ready up 5 system desc
gpfs6nsd nsd 512 2 yes yes ready up 6 system
gpfs7nsd nsd 512 2 yes yes ready up 7 system
gpfs8nsd nsd 512 2 yes yes ready up 8 system
Number of quorum disks: 3
Read quorum value: 2
Write quorum value: 2

$ mount /gpfs

$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/gpfs0 9.1T 1.4G 9.1T 1% /gpfs

$ dd if=/dev/zero of=/gpfs/test bs=1024k count=10k
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB) copied, 91.5395 seconds, 117 MB/s
$ dd if=/dev/zero of=/gpfs/test bs=512k count=10k
10240+0 records in
10240+0 records out
5368709120 bytes (5.4 GB) copied, 45.7509 seconds, 117 MB/s
$ dd if=/dev/zero of=/gpfs/test bs=256k count=10k
10240+0 records in
10240+0 records out
2684354560 bytes (2.7 GB) copied, 22.8762 seconds, 117 MB/s
$ dd if=/dev/zero of=/gpfs/test bs=128k count=10k
10240+0 records in
10240+0 records out
1342177280 bytes (1.3 GB) copied, 11.4205 seconds, 118 MB/s
$ dd if=/dev/zero of=/gpfs/test bs=64k count=10k
10240+0 records in
10240+0 records out
671088640 bytes (671 MB) copied, 5.68368 seconds, 118 MB/s


$ mmlsnsd -a -L
File system Disk name NSD volume ID NSD servers
---------------------------------------------------------------------------------------------
(free disk) gpfs10nsd 000000004AAF899D c149
(free disk) gpfs11nsd 000000004AAF899D c152
(free disk) gpfs12nsd 000000004AAF899E c152
(free disk) gpfs13nsd 000000004AAF899F c150
(free disk) gpfs14nsd 000000004AAF89A0 c150
(free disk) gpfs15nsd 000000004AAF89A1 c153
(free disk) gpfs16nsd 000000004AAF89A2 c153
(free disk) gpfs9nsd 000000004AAF899C c149

$ mmcrfs /gpfs gpfs0 -F gpfs.disks -B 128K -m 2 -M 2 -r 2 -R 2 -v no
The following disks of gpfs0 will be formatted on node c150.twaren.net:
gpfs9nsd: size 1220994560 KB
gpfs10nsd: size 1220984320 KB
gpfs11nsd: size 1220994560 KB
gpfs12nsd: size 1220984320 KB
gpfs13nsd: size 1220994560 KB
gpfs14nsd: size 1220994560 KB
gpfs15nsd: size 1220994560 KB
gpfs16nsd: size 1220984320 KB
Formatting file system ...
Disks up to size 11 TB can be added to storage pool 'system'.
Creating Inode File
Creating Allocation Maps
Clearing Inode Allocation Map
Clearing Block Allocation Map
Formatting Allocation Map for storage pool 'system'
59 % complete on Tue Sep 15 20:35:04 2009
100 % complete on Tue Sep 15 20:35:07 2009
Completed creation of file system /dev/gpfs0.
mmcrfs: Propagating the cluster configuration data to all
affected nodes. This is an asynchronous process.

$ mmlsdisk gpfs0 -L
disk driver sector failure holds holds storage
name type size group metadata data status availability disk id pool remarks
------------ -------- ------ ------- -------- ----- ------------- ------------ ------- ------------ ---------
gpfs9nsd nsd 512 1 yes yes ready up 1 system desc
gpfs10nsd nsd 512 1 yes yes ready up 2 system desc
gpfs11nsd nsd 512 1 yes yes ready down 3 system
gpfs12nsd nsd 512 1 yes yes ready up 4 system
gpfs13nsd nsd 512 2 yes yes ready up 5 system desc
gpfs14nsd nsd 512 2 yes yes ready up 6 system
gpfs15nsd nsd 512 2 yes yes ready up 7 system
gpfs16nsd nsd 512 2 yes yes ready up 8 system
Number of quorum disks: 3
Read quorum value: 2
Write quorum value: 2

$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/gpfs0 9.1T 2.1G 9.1T 1% /gpfs

$ dd if=/dev/zero of=/gpfs/test bs=1024k count=10k
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB) copied, 114.629 seconds, 93.7 MB/s
$ dd if=/dev/zero of=/gpfs/test bs=512k count=10k
10240+0 records in
10240+0 records out
5368709120 bytes (5.4 GB) copied, 57.2698 seconds, 93.7 MB/s
$ dd if=/dev/zero of=/gpfs/test bs=256k count=10k
10240+0 records in
10240+0 records out
2684354560 bytes (2.7 GB) copied, 28.6111 seconds, 93.8 MB/s
$ dd if=/dev/zero of=/gpfs/test bs=128k count=10k
10240+0 records in
10240+0 records out
1342177280 bytes (1.3 GB) copied, 14.2832 seconds, 94.0 MB/s
$ dd if=/dev/zero of=/gpfs/test bs=64k count=10k
10240+0 records in
10240+0 records out
671088640 bytes (671 MB) copied, 7.12948 seconds, 94.1 MB/s


$ mmlsnsd -a -L
File system Disk name NSD volume ID NSD servers
---------------------------------------------------------------------------------------------
(free disk) gpfs17nsd 000000004AAF8CF3 c149
(free disk) gpfs18nsd 000000004AAF8CF4 c149
(free disk) gpfs19nsd 000000004AAF8CF4 c152
(free disk) gpfs20nsd 000000004AAF8CF5 c152
(free disk) gpfs21nsd 000000004AAF8CF6 c150
(free disk) gpfs22nsd 000000004AAF8CF8 c150
(free disk) gpfs23nsd 000000004AAF8CF9 c153
(free disk) gpfs24nsd 000000004AAF8CFA c153

$ mmcrfs /gpfs gpfs0 -F gpfs.disks -B 64K -m 2 -M 2 -r 2 -R 2 -v no
The following disks of gpfs0 will be formatted on node c150.twaren.net:
gpfs17nsd: size 1220994560 KB
gpfs18nsd: size 1220984320 KB
gpfs19nsd: size 1220994560 KB
gpfs20nsd: size 1220984320 KB
gpfs21nsd: size 1220994560 KB
gpfs22nsd: size 1220994560 KB
gpfs23nsd: size 1220994560 KB
gpfs24nsd: size 1220984320 KB
Formatting file system ...
Disks up to size 11 TB can be added to storage pool 'system'.
Creating Inode File
Creating Allocation Maps
Clearing Inode Allocation Map
Clearing Block Allocation Map
Formatting Allocation Map for storage pool 'system'
29 % complete on Tue Sep 15 20:49:24 2009
57 % complete on Tue Sep 15 20:49:29 2009
86 % complete on Tue Sep 15 20:49:34 2009
100 % complete on Tue Sep 15 20:49:36 2009
Completed creation of file system /dev/gpfs0.
mmcrfs: Propagating the cluster configuration data to all
affected nodes. This is an asynchronous process.

$ mmlsdisk gpfs0 -L
disk driver sector failure holds holds storage
name type size group metadata data status availability disk id pool remarks
------------ -------- ------ ------- -------- ----- ------------- ------------ ------- ------------ ---------
gpfs17nsd nsd 512 1 yes yes ready up 1 system desc
gpfs18nsd nsd 512 1 yes yes ready up 2 system desc
gpfs19nsd nsd 512 1 yes yes ready down 3 system
gpfs20nsd nsd 512 1 yes yes ready up 4 system
gpfs21nsd nsd 512 2 yes yes ready up 5 system desc
gpfs22nsd nsd 512 2 yes yes ready up 6 system
gpfs23nsd nsd 512 2 yes yes ready up 7 system
gpfs24nsd nsd 512 2 yes yes ready up 8 system
Number of quorum disks: 3
Read quorum value: 2
Write quorum value: 2

$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/gpfs0 9.1T 3.3G 9.1T 1% /gpfs

$ dd if=/dev/zero of=/gpfs/test bs=1024k count=10k
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB) copied, 116.784 seconds, 91.9 MB/s
$ dd if=/dev/zero of=/gpfs/test bs=512k count=10k
10240+0 records in
10240+0 records out
5368709120 bytes (5.4 GB) copied, 58.4151 seconds, 91.9 MB/s
$ dd if=/dev/zero of=/gpfs/test bs=256k count=10k
10240+0 records in
10240+0 records out
2684354560 bytes (2.7 GB) copied, 28.9273 seconds, 92.8 MB/s
$ dd if=/dev/zero of=/gpfs/test bs=128k count=10k
10240+0 records in
10240+0 records out
1342177280 bytes (1.3 GB) copied, 14.3859 seconds, 93.3 MB/s
$ dd if=/dev/zero of=/gpfs/test bs=64k count=10k
10240+0 records in
10240+0 records out
671088640 bytes (671 MB) copied, 7.43935 seconds, 90.2 MB/s


$ mmlsnsd -a -L
File system Disk name NSD volume ID NSD servers
---------------------------------------------------------------------------------------------
(free disk) gpfs25nsd 000000004AAF91F1 c149
(free disk) gpfs26nsd 000000004AAF91F2 c149
(free disk) gpfs27nsd 000000004AAF91F1 c152
(free disk) gpfs28nsd 000000004AAF91F3 c152
(free disk) gpfs29nsd 000000004AAF91F4 c150
(free disk) gpfs30nsd 000000004AAF91F5 c150
(free disk) gpfs31nsd 000000004AAF91F6 c153
(free disk) gpfs32nsd 000000004AAF91F7 c153

$ mmcrfs /gpfs gpfs0 -F gpfs.disks -B 512K -m 2 -M 2 -r 2 -R 2 -v no
The following disks of gpfs0 will be formatted on node c150.twaren.net:
gpfs25nsd: size 1220994560 KB
gpfs26nsd: size 1220984320 KB
gpfs27nsd: size 1220994560 KB
gpfs28nsd: size 1220984320 KB
gpfs29nsd: size 1220994560 KB
gpfs30nsd: size 1220994560 KB
gpfs31nsd: size 1220994560 KB
gpfs32nsd: size 1220984320 KB
Formatting file system ...
Disks up to size 18 TB can be added to storage pool 'system'.
Creating Inode File
Creating Allocation Maps
Clearing Inode Allocation Map
Clearing Block Allocation Map
Formatting Allocation Map for storage pool 'system'
Completed creation of file system /dev/gpfs0.
mmcrfs: Propagating the cluster configuration data to all
affected nodes. This is an asynchronous process.

$ mmlsdisk gpfs0 -L
disk driver sector failure holds holds storage
name type size group metadata data status availability disk id pool remarks
------------ -------- ------ ------- -------- ----- ------------- ------------ ------- ------------ ---------
gpfs25nsd nsd 512 1 yes yes ready up 1 system desc
gpfs26nsd nsd 512 1 yes yes ready up 2 system desc
gpfs27nsd nsd 512 1 yes yes ready down 3 system
gpfs28nsd nsd 512 1 yes yes ready up 4 system
gpfs29nsd nsd 512 2 yes yes ready up 5 system desc
gpfs30nsd nsd 512 2 yes yes ready up 6 system
gpfs31nsd nsd 512 2 yes yes ready up 7 system
gpfs32nsd nsd 512 2 yes yes ready up 8 system
Number of quorum disks: 3
Read quorum value: 2
Write quorum value: 2

$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/gpfs0 9.1T 1.3G 9.1T 1% /gpfs

$ dd if=/dev/zero of=/gpfs/test bs=1024k count=10k
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB) copied, 114.431 seconds, 93.8 MB/s
$ dd if=/dev/zero of=/gpfs/test bs=512k count=10k
10240+0 records in
10240+0 records out
5368709120 bytes (5.4 GB) copied, 57.4142 seconds, 93.5 MB/s
$ dd if=/dev/zero of=/gpfs/test bs=256k count=10k
10240+0 records in
10240+0 records out
2684354560 bytes (2.7 GB) copied, 28.5306 seconds, 94.1 MB/s
$ dd if=/dev/zero of=/gpfs/test bs=128k count=10k
10240+0 records in
10240+0 records out
1342177280 bytes (1.3 GB) copied, 14.2473 seconds, 94.2 MB/s
$ dd if=/dev/zero of=/gpfs/test bs=64k count=10k
10240+0 records in
10240+0 records out
671088640 bytes (671 MB) copied, 7.10075 seconds, 94.5 MB/s


$ mmlsnsd -a -L
File system Disk name NSD volume ID NSD servers
---------------------------------------------------------------------------------------------
(free disk) gpfs33nsd 000000004AAF9486 c149
(free disk) gpfs34nsd 000000004AAF9487 c149
(free disk) gpfs35nsd 000000004AAF9486 c152
(free disk) gpfs36nsd 000000004AAF9488 c152
(free disk) gpfs37nsd 000000004AAF9489 c150
(free disk) gpfs38nsd 000000004AAF948A c150
(free disk) gpfs39nsd 000000004AAF948B c153
(free disk) gpfs40nsd 000000004AAF948C c153

$ mmcrfs /gpfs gpfs0 -F gpfs.disks -B 1024K -m 2 -M 2 -r 2 -R 2 -v no
The following disks of gpfs0 will be formatted on node c150.twaren.net:
gpfs33nsd: size 1220994560 KB
gpfs34nsd: size 1220984320 KB
gpfs35nsd: size 1220994560 KB
gpfs36nsd: size 1220984320 KB
gpfs37nsd: size 1220994560 KB
gpfs38nsd: size 1220994560 KB
gpfs39nsd: size 1220994560 KB
gpfs40nsd: size 1220984320 KB
Formatting file system ...
Disks up to size 18 TB can be added to storage pool 'system'.
Creating Inode File
Creating Allocation Maps
Clearing Inode Allocation Map
Clearing Block Allocation Map
Formatting Allocation Map for storage pool 'system'
Completed creation of file system /dev/gpfs0.
mmcrfs: Propagating the cluster configuration data to all
affected nodes. This is an asynchronous process.

$ mmlsdisk gpfs0 -L
disk driver sector failure holds holds storage
name type size group metadata data status availability disk id pool remarks
------------ -------- ------ ------- -------- ----- ------------- ------------ ------- ------------ ---------
gpfs33nsd nsd 512 1 yes yes ready up 1 system desc
gpfs34nsd nsd 512 1 yes yes ready up 2 system desc
gpfs35nsd nsd 512 1 yes yes ready down 3 system
gpfs36nsd nsd 512 1 yes yes ready up 4 system
gpfs37nsd nsd 512 2 yes yes ready up 5 system desc
gpfs38nsd nsd 512 2 yes yes ready up 6 system
gpfs39nsd nsd 512 2 yes yes ready up 7 system
gpfs40nsd nsd 512 2 yes yes ready up 8 system
Number of quorum disks: 3
Read quorum value: 2
Write quorum value: 2

$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/gpfs0 9.1T 1.1G 9.1T 1% /gpfs

$ dd if=/dev/zero of=/gpfs/test bs=1024k count=10k
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB) copied, 114.311 seconds, 93.9 MB/s
$ dd if=/dev/zero of=/gpfs/test bs=512k count=10k
10240+0 records in
10240+0 records out
5368709120 bytes (5.4 GB) copied, 56.9555 seconds, 94.3 MB/s
$ dd if=/dev/zero of=/gpfs/test bs=256k count=10k
10240+0 records in
10240+0 records out
2684354560 bytes (2.7 GB) copied, 28.5401 seconds, 94.1 MB/s
$ dd if=/dev/zero of=/gpfs/test bs=128k count=10k
10240+0 records in
10240+0 records out
1342177280 bytes (1.3 GB) copied, 14.2041 seconds, 94.5 MB/s
$ dd if=/dev/zero of=/gpfs/test bs=64k count=10k
10240+0 records in
10240+0 records out
671088640 bytes (671 MB) copied, 7.02471 seconds, 95.5 MB/s


$ mmlsnsd -a -L
File system Disk name NSD volume ID NSD servers
---------------------------------------------------------------------------------------------
(free disk) gpfs41nsd 000000004AAF9746 c149
(free disk) gpfs42nsd 000000004AAF9747 c149
(free disk) gpfs43nsd 000000004AAF9747 c152
(free disk) gpfs44nsd 000000004AAF9748 c152
(free disk) gpfs45nsd 000000004AAF9749 c150
(free disk) gpfs46nsd 000000004AAF974A c150
(free disk) gpfs47nsd 000000004AAF974B c153
(free disk) gpfs48nsd 000000004AAF974D c153

$ mmcrfs /gpfs gpfs0 -F gpfs.disks -B 2048K -m 2 -M 2 -r 2 -R 2 -v no
mmcrfs: The specified block size (2048K) exceeds the maximum
allowed block size currently in effect (1024K).
Either specify a smaller value for the -B parameter,
or increase the maximum block size by issuing:
mmchconfig maxblocksize=2048K
and restarting the GPFS daemon
mmcrfs: Command failed. Examine previous error messages to determine cause.

$ mmchconfig maxblocksize=2048K
Verifying GPFS is stopped on all nodes ...
mmchconfig: GPFS is still active on c153
mmchconfig: GPFS is still active on c150
mmchconfig: GPFS is still active on c152
mmchconfig: GPFS is still active on c149
mmchconfig: Command failed. Examine previous error messages to determine cause.

$ mmshutdown -a
Tue Sep 15 21:33:37 CST 2009: mmshutdown: Starting force unmount of GPFS file systems
Tue Sep 15 21:33:42 CST 2009: mmshutdown: Shutting down GPFS daemons
c149: Shutting down!
c150: Shutting down!
c152: Shutting down!
c153: Shutting down!
c149: 'shutdown' command about to kill process 2905
c149: Unloading modules from /usr/lpp/mmfs/bin
c149: Unloading module mmfs
c150: 'shutdown' command about to kill process 16590
c150: Unloading modules from /usr/lpp/mmfs/bin
c150: Unloading module mmfs
c153: 'shutdown' command about to kill process 31397
c153: Unloading modules from /usr/lpp/mmfs/bin
c152: 'shutdown' command about to kill process 17482
c152: Unloading modules from /usr/lpp/mmfs/bin
c153: Unloading module mmfs
c152: Unloading module mmfs
c149: Unloading module mmfslinux
c149: Unloading module tracedev
c150: Unloading module mmfslinux
c150: Unloading module tracedev
c153: Unloading module mmfslinux
c153: Unloading module tracedev
c152: Unloading module mmfslinux
c152: Unloading module tracedev
Tue Sep 15 21:33:52 CST 2009: mmshutdown: Finished

$ mmchconfig maxblocksize=2048K
Verifying GPFS is stopped on all nodes ...
mmchconfig: Command successfully completed
mmchconfig: Propagating the cluster configuration data to all
affected nodes. This is an asynchronous process.

$ mmstartup -a
Tue Sep 15 21:34:06 CST 2009: mmstartup: Starting GPFS ...

$ mmcrfs /gpfs gpfs0 -F gpfs.disks -B 2048K -m 2 -M 2 -r 2 -R 2 -v no
The following disks of gpfs0 will be formatted on node c149.twaren.net:
gpfs41nsd: size 1220994560 KB
gpfs42nsd: size 1220984320 KB
gpfs43nsd: size 1220984320 KB
gpfs44nsd: size 1220984320 KB
gpfs45nsd: size 1220994560 KB
gpfs46nsd: size 1220994560 KB
gpfs47nsd: size 1220994560 KB
gpfs48nsd: size 1220984320 KB
Formatting file system ...
Disks up to size 18 TB can be added to storage pool 'system'.
Creating Inode File
Creating Allocation Maps
Clearing Inode Allocation Map
Clearing Block Allocation Map
Formatting Allocation Map for storage pool 'system'
Completed creation of file system /dev/gpfs0.
mmcrfs: Propagating the cluster configuration data to all
affected nodes. This is an asynchronous process.

$ mmlsdisk gpfs0 -L
disk driver sector failure holds holds storage
name type size group metadata data status availability disk id pool remarks
------------ -------- ------ ------- -------- ----- ------------- ------------ ------- ------------ ---------
gpfs41nsd nsd 512 1 yes yes ready up 1 system desc
gpfs42nsd nsd 512 1 yes yes ready down 2 system
gpfs43nsd nsd 512 1 yes yes ready up 3 system desc
gpfs44nsd nsd 512 1 yes yes ready up 4 system
gpfs45nsd nsd 512 2 yes yes ready up 5 system desc
gpfs46nsd nsd 512 2 yes yes ready up 6 system
gpfs47nsd nsd 512 2 yes yes ready up 7 system
gpfs48nsd nsd 512 2 yes yes ready up 8 system
Number of quorum disks: 3
Read quorum value: 2
Write quorum value: 2

$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/gpfs0 9.1T 936M 9.1T 1% /gpfs

$ dd if=/dev/zero of=/gpfs/test bs=1024k count=10k
dd: writing `/gpfs/test': Stale NFS file handle
9+0 records in
8+0 records out
8388608 bytes (8.4 MB) copied, 0.054915 seconds, 153 MB/s

Friday, June 12, 2009

POHMELFS install on Gentoo Linux

POHMELFS reportedly made it into the 2.6.30 kernel only recently. The name stands for Parallel Optimized Host Message Exchange Layered File System, and it is a parallel network filesystem.

kernel: 2.6.30


Step1.
Get the source.
pohmelfs can be fetched directly from its official site:
$ git clone http://www.ioremap.net/git/pohmelfs.git
Or from the latest Linux kernel (2.6.30):
$ wget http://www.kernel.org/pub/linux/kernel/v2.6/linux-2.6.30.tar.bz2


Step2.
Build the kernel.
If you build against the official 2.6.30 kernel, enable POHMELFS filesystem support under Device Drivers -> Staging drivers.

If instead you cloned its own source tree, enable POHMELFS filesystem support under File systems -> Network File Systems.

Remember to reboot once the new kernel is built.
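
If you prefer to flip the option directly in .config instead of going through menuconfig, the relevant symbols should be the following (an assumption based on the 2.6.30 staging tree; double-check your own Kconfig):

CONFIG_CONNECTOR=y
CONFIG_POHMELFS=m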


Step3.
Get the userspace utils and build them.
$ cd /usr/src
$ git clone http://www.ioremap.net/git/pohmelfs-server.git
$ cd pohmelfs-server
3.1. Patch the relevant files (not needed if you use the official pohmelfs.git kernel)
============================================
--- cfg/cfg.c.orig      2009-06-11 20:32:32.000000000 +0800
+++ cfg/cfg.c   2009-06-11 20:32:45.000000000 +0800
@@ -39,7 +39,7 @@

 #include "swab.h"

-#include <fs/pohmelfs/netfs.h>
+#include <netfs.h>
 #include <linux/connector.h>

 #include "fserver.h"
==================================================
--- include/fserver.h.orig      2009-06-11 20:33:13.000000000 +0800
+++ include/fserver.h   2009-06-11 20:33:23.000000000 +0800
@@ -36,7 +36,7 @@
 };
 #endif

-#include <fs/pohmelfs/netfs.h>
+#include <netfs.h>

 #include "list.h"
 #include "rbtree.h"
=============================================
--- utils/flush.c.orig  2009-06-11 20:33:43.000000000 +0800
+++ utils/flush.c       2009-06-11 20:33:53.000000000 +0800
@@ -43,7 +43,7 @@

 #include "swab.h"

-#include <fs/pohmelfs/netfs.h>
+#include <netfs.h>

 #include <openssl/hmac.h>
 #include <openssl/evp.h>
=============================================
3.2. Build.
$ ./autogen.sh
$ ./configure --with-kdir-path=/usr/src/linux/drivers/staging/pohmelfs (points at the location of the netfs.h header)
$ make
$ make install
3.3. Load the module.
$ modprobe pohmelfs
$ lsmod
Module                  Size  Used by
pohmelfs               66116  0
$ dmesg
pohmelfs: module is from the staging directory, the quality is unknown, you have been warned.


Step4.
Start the storage servers and test mounting!
4.1. Start the servers
$ fserver -r /mnt/pohmelfs -w1 (on node26)
Server is now listening at 0.0.0.0:1025.
$ fserver -r /mnt/pohmelfs -w1 (on node27)
Server is now listening at 0.0.0.0:1025.
4.2. Mount on the client
$ cfg -A add -a 140.110.x.26 -p 1025 -i1 (on node28)
$ mount -t pohmel -o "idx=1" none /mnt/pohmelfs (on node28)
$ df -h
Filesystem            Size  Used Avail Use% Mounted on
none                   34G  2.9G   32G   9% /mnt/pohmelfs
$ cfg -A add -a 140.110.x.27 -p 1025 -i1 (on node28)
This lets node28 access the export dirs on node26 and node27 at the same time.
Note: add one node first, mount, and only then add the remaining nodes one by one; otherwise the mount will fail. See here for the explanation.
4.3. Verify
$ cfg -A show -i1 (on node28)
Config Index = 1
Family    Server IP                                            Port
AF_INET   140.110.x.26                                          1025
AF_INET   140.110.x.27                                          1025


Notes:
When a file is written on the client node (node28), it is written to both server nodes (node26 and node27).
When a file is read on the client node (node28), the data is read from both server nodes.
For now I would not recommend using this in a production environment XD
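
A quick way to watch the mirroring happen (a rough sketch using the paths from the setup above): write a file on the client and check that space usage grows under the export directory on both servers.

$ dd if=/dev/zero of=/mnt/pohmelfs/test bs=1M count=100 (on node28)
$ du -sh /mnt/pohmelfs (on node26 and on node27)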


References:
The README in the pohmelfs-server source.

Tuesday, June 9, 2009

GPFS install on Gentoo Linux

IBM GPFS is a commercial cluster file system; it costs money.
For GPFS pricing, see: How is GPFS priced?
My installation environment:
kernel: 2.6.29-gentoo-r5
gpfs: 3.2.1-12

Step1.
Install the basic required packages.
1.1. ksh
$ emerge -uD app-shells/ksh
1.2. rsh
$ emerge -uD net-misc/netkit-rsh
1.3. imake
$ emerge -uD x11-misc/imake
1.4. libstdc++.so.5
$ mkdir /usr/src/gpfs/
$ cd /usr/src/gpfs/
$ wget ftp://rpmfind.net/linux/fedora/releases/10/Everything/i386/os/Packages/compat-libstdc++-33-3.2.3-64.i386.rpm
$ rpm -ivh compat-libstdc++-33-3.2.3-64.i386.rpm

Step2.
Download and install GPFS.
$ wget ftp://ftp.software.ibm.com/software/server/gpfs/gpfs-3.2.1-12.i386.update.tar.gz
$ tar zxf gpfs-3.2.1-12.i386.update.tar.gz
$ rpm -ivh gpfs.msg.en_US-3.2.1-12.noarch.rpm
$ rpm -ivh gpfs.gpl-3.2.1-12.noarch.rpm
$ rpm -ivh gpfs.docs-3.2.1-12.noarch.rpm
$ rpm -ivh gpfs.base-3.2.1-12.i386.update.rpm --noscripts
Extract the install scripts:
$ rpm -qip --scripts gpfs.base-3.2.1-12.i386.update.rpm > scripts.sh
Build the related binaries:
$ export SHARKCLONEROOT=/usr/lpp/mmfs/src
$ cd /usr/lpp/mmfs/src/config
$ cp site.mcr.proto site.mcr
Make sure these settings are in site.mcr:
#define GPFS_ARCH_I386
LINUX_DISTRIBUTION = KERNEL_ORG_LINUX
#define LINUX_KERNEL_VERSION 2062999
$ cd /usr/lpp/mmfs/src
$ make World
$ make InstallImages
Add the GPFS commands to the search PATH:
export PATH=/usr/lpp/mmfs/bin:$PATH

Step3.
Set up the public keys.
$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
aa:bb:cc:dd:ee:ff:11:22:33:44:55:66:77:88:99:00 root@node147
The key's randomart image is:
+--[ RSA 2048]----+
| .o=*|
| o .=E+|
| o . .* |
| . +... .|
| S o .=. |
| . +. |
| o. |
| ... |
| .. |
+-----------------+
Upload id_rsa.pub to every host and save it there as authorized_keys.

Step4.
Edit the node configuration file.
$ cat mycluster.allnodes
node147:quorum
node148:quorum

Step5.
Create cluster.
$ mmcrcluster -N mycluster.allnodes -p node147 -r /usr/bin/ssh -R /usr/bin/scp -C mycluster
Mon Jun 8 22:19:24 CST 2009: mmcrcluster: Processing node node147
Mon Jun 8 22:19:24 CST 2009: mmcrcluster: Processing node node148
mmcrcluster: Command successfully completed
mmcrcluster: Propagating the cluster configuration data to all
affected nodes. This is an asynchronous process.

Step6.
Display and confirm the cluster/node status.
$ mmlscluster
GPFS cluster information
========================
GPFS cluster name: mycluster.node147
GPFS cluster id: 15226457667488521708
GPFS UID domain: mycluster.node147
Remote shell command: /usr/bin/ssh
Remote file copy command: /usr/bin/scp

GPFS cluster configuration servers:
-----------------------------------
Primary server: node147
Secondary server: (none)

Node Daemon node name IP address Admin node name Designation
-----------------------------------------------------------------------------------------------
1 node147 211.79.x.147 node147 quorum
2 node148 211.79.x.148 node148 quorum

$ mmlsnode -a
GPFS nodeset Node list
------------- -------------------------------------------------------
mycluster node147 node148

Step7.
Start GPFS.
$ mmstartup -a
Mon Jun 8 22:20:20 CST 2009: mmstartup: Starting GPFS ...

Step8.
Check the loaded modules.
$ lsmod
Module Size Used by
mmfs 1048096 1
mmfslinux 174468 4 mmfs
tracedev 9888 3 mmfs,mmfslinux

Step9.
Create the Network Shared Disk (NSD) descriptor file.
$ cat nodes.descfile
/dev/sda1:node147::dataAndMetadata::
/dev/sda1:node148::dataAndMetadata::
In my tests, HP /dev/cciss/c0d0p1 devices are not supported.

Step10.
Create the NSDs.
$ mmcrnsd -F nodes.descfile
mmcrnsd: Processing disk sda1
mmcrnsd: Processing disk sda1
mmcrnsd: Propagating the cluster configuration data to all
affected nodes. This is an asynchronous process.
After the NSDs are created, nodes.descfile is rewritten automatically:
$ cat nodes.descfile
# /dev/sda1:node147::dataAndMetadata::
gpfs1nsd:::dataAndMetadata:4001::
# /dev/sda1:node148::dataAndMetadata::
gpfs2nsd:::dataAndMetadata:4002::

Step11.
Check the disk status on each node.
$ mmlsnsd -m
Disk name NSD volume ID Device Node name Remarks
---------------------------------------------------------------------------------------
gpfs1nsd D34F3E934A2D1FD6 /dev/sda1 node147 server node
gpfs2nsd D34F3E944A2D1FD7 /dev/sda1 node148 server node

Step12.
Format the file system.
$ mmcrfs /gpfs gpfs0 -F nodes.descfile
The following disks of gpfs0 will be formatted on node node147:
gpfs1nsd: size 1220698993 KB
gpfs2nsd: size 1221269301 KB
Formatting file system ...
Disks up to size 10 TB can be added to storage pool 'system'.
Creating Inode File
Creating Allocation Maps
Clearing Inode Allocation Map
Clearing Block Allocation Map
Formatting Allocation Map for storage pool 'system'
Completed creation of file system /dev/gpfs0.
mmcrfs: Propagating the cluster configuration data to all
affected nodes. This is an asynchronous process.
The command updates /etc/fstab automatically:
$ cat /etc/fstab
/dev/gpfs0 /gpfs gpfs rw,mtime,atime,dev=gpfs0,autostart 0 0

Step13.
Mount it and test!
$ mount /gpfs
$ time dd if=/dev/zero of=/gpfs/test1 bs=5120 count=1024000
1024000+0 records in
1024000+0 records out
5242880000 bytes (5.2 GB) copied, 23.5701 s, 222 MB/s

real 0m23.612s
user 0m0.329s
sys 0m11.902s
$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/cciss/c0d0p3 136G 1.8G 134G 2% /
udev 10M 44K 10M 1% /dev
shm 8.0G 0 8.0G 0% /dev/shm
/dev/gpfs0 2.3T 5.6G 2.3T 1% /gpfs
$ md5sum /gpfs/test1 (on node147)
f0c4910bd1b40aecaad309d2a8999e66 test1
$ md5sum /gpfs/test1 (on node148)
f0c4910bd1b40aecaad309d2a8999e66 test1

Other operations:
Adding a new disk to an existing GPFS file system.
$ mmcrnsd -F descfile-node1-sda2
$ mmadddisk gpfs0 -F descfile-node1-sda2 -r -a
Delete a disk.
$ mmdeldisk gpfs0 gpfs6nsd -r -a
Delete all file systems with mmdelfs.
$ mmdelfs /dev/gpfs0
Delete all NSDs with mmdelnsd.
$ mmdelnsd 'gpfs1nsd;gpfs2nsd;gpfs3nsd'
Shut down GPFS on all nodes.
$ mmshutdown -a
Tue Jun 9 15:18:09 CST 2009: mmshutdown: Starting force unmount of GPFS file systems
Tue Jun 9 15:18:14 CST 2009: mmshutdown: Shutting down GPFS daemons
node148: Shutting down!
node147: Shutting down!
node148: 'shutdown' command about to kill process 3461
node148: Unloading modules from /usr/lpp/mmfs/bin
node148: Unloading module mmfs
node148: Unloading module mmfslinux
node148: Unloading module tracedev
node147: 'shutdown' command about to kill process 3643
node147: Unloading modules from /usr/lpp/mmfs/bin
node147: Unloading module mmfs
node147: Unloading module mmfslinux
node147: Unloading module tracedev
Tue Jun 9 15:18:21 CST 2009: mmshutdown: Finished
Remove all nodes from nodeset.
$ mmdelnode -a

Notes:
node147 currently contributes 1.2 TB (sda1) and node148 contributes 1.2 TB (sda1); both nodes see the full 2.3 TB.
A few things fail to build, but I'm not sure what they affect XD, since the modules do load fine @_@
ERROR: "struct_module" [/usr/lpp/mmfs/src/gpl-linux/tracedev.ko] undefined!
ERROR: "struct_module" [/usr/lpp/mmfs/src/gpl-linux/mmfslinux.ko] undefined!
ERROR: "struct_module" [/usr/lpp/mmfs/src/gpl-linux/mmfs26.ko] undefined!

References: