Friday, June 12, 2009

POHMELFS install on Gentoo Linux

POHMELFS was reportedly merged into the mainline kernel only recently, in 2.6.30. The name is short for Parallel Optimized Host Message Exchange Layered File System; it is a parallel network filesystem.

kernel: 2.6.30


Step1.
Get the source.
pohmelfs can be fetched directly from its official site:
$ git clone http://www.ioremap.net/git/pohmelfs.git
or taken from the latest Linux kernel (2.6.30):
$ wget http://www.kernel.org/pub/linux/kernel/v2.6/linux-2.6.30.tar.bz2
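If you take the kernel tarball route, unpack it under /usr/src and point /usr/src/linux at it. A minimal sketch (the symlink is the usual Gentoo convention, not something shown in the original post):
$ tar jxf linux-2.6.30.tar.bz2 -C /usr/src
$ ln -sfn /usr/src/linux-2.6.30 /usr/src/linux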


Step2.
Build the kernel.
If you build from the official 2.6.30 kernel, enable POHMELFS filesystem support under Device Drivers -> Staging drivers.

If you cloned the upstream pohmelfs.git source instead, enable POHMELFS filesystem support under File systems -> Network File Systems.

Remember to reboot after the kernel build.
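The build itself is the usual routine; a minimal sketch, assuming /usr/src/linux points at the 2.6.30 tree (the staging config symbol should be POHMELFS, but double-check in menuconfig):
$ cd /usr/src/linux
$ make menuconfig (enable POHMELFS filesystem support as described above)
$ make && make modules_install
$ make install (or copy arch/x86/boot/bzImage into /boot by hand)
$ reboot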


Step3.
Fetch and build the userspace utils.
$ cd /usr/src
$ git clone http://www.ioremap.net/git/pohmelfs-server.git
$ cd pohmelfs-server
3.1. Patch a few files (not needed if you are running the upstream pohmelfs.git kernel):
============================================
--- cfg/cfg.c.orig      2009-06-11 20:32:32.000000000 +0800
+++ cfg/cfg.c   2009-06-11 20:32:45.000000000 +0800
@@ -39,7 +39,7 @@

 #include "swab.h"

-#include <fs/pohmelfs/netfs.h>
+#include <netfs.h>
 #include <linux/connector.h>

 #include "fserver.h"
==================================================
--- include/fserver.h.orig      2009-06-11 20:33:13.000000000 +0800
+++ include/fserver.h   2009-06-11 20:33:23.000000000 +0800
@@ -36,7 +36,7 @@
 };
 #endif

-#include <fs/pohmelfs/netfs.h>
+#include <netfs.h>

 #include "list.h"
 #include "rbtree.h"
=============================================
--- utils/flush.c.orig  2009-06-11 20:33:43.000000000 +0800
+++ utils/flush.c       2009-06-11 20:33:53.000000000 +0800
@@ -43,7 +43,7 @@

 #include "swab.h"

-#include <fs/pohmelfs/netfs.h>
+#include <netfs.h>

 #include <openssl/hmac.h>
 #include <openssl/evp.h>
=============================================
3.2. Build.
$ ./autogen.sh
$ ./configure --with-kdir-path=/usr/src/linux/drivers/staging/pohmelfs (points at the directory containing the netfs.h header)
$ make
$ make install
3.3. Load the module.
$ modprobe pohmelfs
$ lsmod
Module                  Size  Used by
pohmelfs               66116  0
$ dmesg
pohmelfs: module is from the staging directory, the quality is unknown, you have been warned.


Step4.
Start the storage servers and run a mount test!
4.1. Start the servers
$ fserver -r /mnt/pohmelfs -w1 (on node26)
Server is now listening at 0.0.0.0:1025.
$ fserver -r /mnt/pohmelfs -w1 (on node27)
Server is now listening at 0.0.0.0:1025.
4.2. Mount on the client
$ cfg -A add -a 140.110.x.26 -p 1025 -i1 (on node28)
$ mount -t pohmel -o "idx=1" none /mnt/pohmelfs (on node28)
$ df -h
Filesystem            Size  Used Avail Use% Mounted on
none                   34G  2.9G   32G   9% /mnt/pohmelfs
$ cfg -A add -a 140.110.x.27 -p 1025 -i1 (on node28)
With this, node28 can access the exported directories on both node26 and node27.
Note: add one node first, mount, and only then add the remaining nodes one by one; otherwise the mount fails. See here for an explanation.
4.3. Verify
$ cfg -A show -i1 (on node28)
Config Index = 1
Family    Server IP                                            Port
AF_INET   140.110.x.26                                          1025
AF_INET   140.110.x.27                                          1025


Notes:
When a file is written on the client node (node28), it is written to both server nodes (node26 and node27).
When a file is read on the client node (node28), data is read from both server nodes.
I would not recommend using this in production yet XD


References:
The README in the pohmelfs-server source.

Tuesday, June 9, 2009

GPFS install on Gentoo Linux

IBM GPFS is a commercial cluster file system; it costs money.
For GPFS pricing, see: How is GPFS priced?
My install environment:
kernel: 2.6.29-gentoo-r5
gpfs: 3.2.1-12

Step1.
Install the basic prerequisite packages.
1.1. ksh
$ emerge -uD app-shells/ksh
1.2. rsh
$ emerge -uD net-misc/netkit-rsh
1.3. imake
$ emerge -uD x11-misc/imake
1.4. libstdc++.so.5
$ mkdir /usr/src/gpfs/
$ cd /usr/src/gpfs/
$ wget ftp://rpmfind.net/linux/fedora/releases/10/Everything/i386/os/Packages/compat-libstdc++-33-3.2.3-64.i386.rpm
$ rpm -ivh compat-libstdc++-33-3.2.3-64.i386.rpm

Step2.
Download and install GPFS.
$ wget ftp://ftp.software.ibm.com/software/server/gpfs/gpfs-3.2.1-12.i386.update.tar.gz
$ tar zxf gpfs-3.2.1-12.i386.update.tar.gz
$ rpm -ivh gpfs.msg.en_US-3.2.1-12.noarch.rpm
$ rpm -ivh gpfs.gpl-3.2.1-12.noarch.rpm
$ rpm -ivh gpfs.docs-3.2.1-12.noarch.rpm
$ rpm -ivh gpfs.base-3.2.1-12.i386.update.rpm --noscripts
Extract the RPM scripts:
$ rpm -qip --scripts gpfs.base-3.2.1-12.i386.update.rpm > scripts.sh
Build the related binaries.
$ export SHARKCLONEROOT=/usr/lpp/mmfs/src
$ cd /usr/lpp/mmfs/src/config
$ cp site.mcr.proto site.mcr
Verify these settings in site.mcr:
#define GPFS_ARCH_I386
LINUX_DISTRIBUTION = KERNEL_ORG_LINUX
#define LINUX_KERNEL_VERSION 2062999
$ cd /usr/lpp/mmfs/src
$ make World
$ make InstallImages
Add the GPFS bin directory to the search PATH:
$ export PATH=/usr/lpp/mmfs/bin:$PATH

Step3.
Set up public keys.
$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
aa:bb:cc:dd:ee:ff:11:22:33:44:55:66:77:88:99:00 root@node147
The key's randomart image is:
(RSA 2048 randomart image omitted)
Upload id_rsa.pub to every host and rename it to authorized_keys.
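One way to do that, as a sketch (assuming /root/.ssh already exists on the target; append instead of overwriting if an authorized_keys file is already there):
$ scp /root/.ssh/id_rsa.pub node148:/root/.ssh/authorized_keys
$ ssh node148 hostname (should now log in without a password)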

Step4.
Edit the cluster node file.
$ cat mycluster.allnodes
node147:quorum
node148:quorum

Step5.
Create cluster.
$ mmcrcluster -N mycluster.allnodes -p node147 -r /usr/bin/ssh -R /usr/bin/scp -C mycluster
Mon Jun 8 22:19:24 CST 2009: mmcrcluster: Processing node node147
Mon Jun 8 22:19:24 CST 2009: mmcrcluster: Processing node node148
mmcrcluster: Command successfully completed
mmcrcluster: Propagating the cluster configuration data to all
affected nodes. This is an asynchronous process.
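For reference, my reading of the mmcrcluster options used above (double-check against the GPFS documentation):
-N   node descriptor file
-p   primary configuration server
-r   remote shell command
-R   remote file copy command
-C   cluster name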

Step6.
Display and confirm the cluster/node status.
$ mmlscluster
GPFS cluster information
========================
GPFS cluster name: mycluster.node147
GPFS cluster id: 15226457667488521708
GPFS UID domain: mycluster.node147
Remote shell command: /usr/bin/ssh
Remote file copy command: /usr/bin/scp

GPFS cluster configuration servers:
-----------------------------------
Primary server: node147
Secondary server: (none)

 Node  Daemon node name  IP address     Admin node name  Designation
-----------------------------------------------------------------------
    1  node147           211.79.x.147   node147          quorum
    2  node148           211.79.x.148   node148          quorum

$ mmlsnode -a
GPFS nodeset    Node list
-------------   -------------------------------------------------------
 mycluster      node147 node148

Step7.
Start GPFS.
$ mmstartup -a
Mon Jun 8 22:20:20 CST 2009: mmstartup: Starting GPFS ...

Step8.
Check the loaded modules.
$ lsmod
Module                  Size  Used by
mmfs                 1048096  1
mmfslinux             174468  4 mmfs
tracedev                9888  3 mmfs,mmfslinux

Step9.
Create the Network Shared Disks (NSDs) descriptor file.
$ cat nodes.descfile
/dev/sda1:node147::dataAndMetadata::
/dev/sda1:node148::dataAndMetadata::
In my testing, HP /dev/cciss/c0d0p1 devices are not supported.
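For reference, my understanding of the descriptor line format in this GPFS release (verify against the mmcrnsd documentation):
DiskName:PrimaryNSDServer:BackupNSDServer:DiskUsage:FailureGroup:DesiredName:StoragePool
So /dev/sda1:node147::dataAndMetadata:: means: device sda1 served by node147, no backup server, holding both data and metadata, with the remaining fields left at their defaults.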

Step10.
Create the NSDs.
$ mmcrnsd -F nodes.descfile
mmcrnsd: Processing disk sda1
mmcrnsd: Processing disk sda1
mmcrnsd: Propagating the cluster configuration data to all
affected nodes. This is an asynchronous process.
After the NSDs are created, nodes.descfile is rewritten automatically:
$ cat nodes.descfile
# /dev/sda1:node147::dataAndMetadata::
gpfs1nsd:::dataAndMetadata:4001::
# /dev/sda1:node148::dataAndMetadata::
gpfs2nsd:::dataAndMetadata:4002::

Step11.
Confirm the disk status on the nodes.
$ mmlsnsd -m
 Disk name    NSD volume ID      Device      Node name   Remarks
---------------------------------------------------------------------------------------
 gpfs1nsd     D34F3E934A2D1FD6   /dev/sda1   node147     server node
 gpfs2nsd     D34F3E944A2D1FD7   /dev/sda1   node148     server node

Step12.
Format the file system.
$ mmcrfs /gpfs gpfs0 -F nodes.descfile
The following disks of gpfs0 will be formatted on node node147:
gpfs1nsd: size 1220698993 KB
gpfs2nsd: size 1221269301 KB
Formatting file system ...
Disks up to size 10 TB can be added to storage pool 'system'.
Creating Inode File
Creating Allocation Maps
Clearing Inode Allocation Map
Clearing Block Allocation Map
Formatting Allocation Map for storage pool 'system'
Completed creation of file system /dev/gpfs0.
mmcrfs: Propagating the cluster configuration data to all
affected nodes. This is an asynchronous process.
The tool updates /etc/fstab automatically:
$ cat /etc/fstab
/dev/gpfs0 /gpfs gpfs rw,mtime,atime,dev=gpfs0,autostart 0 0

Step13.
Mount it and test!
$ mount /gpfs
$ time dd if=/dev/zero of=/gpfs/test1 bs=5120 count=1024000
1024000+0 records in
1024000+0 records out
5242880000 bytes (5.2 GB) copied, 23.5701 s, 222 MB/s

real 0m23.612s
user 0m0.329s
sys 0m11.902s
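Quick sanity check on the numbers: 5120 bytes × 1,024,000 = 5,242,880,000 bytes, and 5,242,880,000 B / 23.57 s ≈ 222 MB/s, which matches dd's report.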
$ df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/cciss/c0d0p3     136G  1.8G  134G   2% /
udev                   10M   44K   10M   1% /dev
shm                   8.0G     0  8.0G   0% /dev/shm
/dev/gpfs0            2.3T  5.6G  2.3T   1% /gpfs
$ md5sum /gpfs/test1 (on node147)
f0c4910bd1b40aecaad309d2a8999e66 test1
$ md5sum /gpfs/test1 (on node148)
f0c4910bd1b40aecaad309d2a8999e66 test1

Miscellaneous:
Adding a new disk to an existing GPFS file system.
$ mmcrnsd -F descfile-node1-sda2
$ mmadddisk gpfs0 -F descfile-node1-sda2 -r -a
Delete a disk.
$ mmdeldisk gpfs0 gpfs6nsd -r -a
Delete all file systems with mmdelfs.
$ mmdelfs /dev/gpfs0
Delete all NSDs with mmdelnsd.
$ mmdelnsd 'gpfs1nsd;gpfs2nsd;gpfs3nsd'
Shutdown GPFS on all nodes.
$ mmshutdown -a
Tue Jun 9 15:18:09 CST 2009: mmshutdown: Starting force unmount of GPFS file systems
Tue Jun 9 15:18:14 CST 2009: mmshutdown: Shutting down GPFS daemons
node148: Shutting down!
node147: Shutting down!
node148: 'shutdown' command about to kill process 3461
node148: Unloading modules from /usr/lpp/mmfs/bin
node148: Unloading module mmfs
node148: Unloading module mmfslinux
node148: Unloading module tracedev
node147: 'shutdown' command about to kill process 3643
node147: Unloading modules from /usr/lpp/mmfs/bin
node147: Unloading module mmfs
node147: Unloading module mmfslinux
node147: Unloading module tracedev
Tue Jun 9 15:18:21 CST 2009: mmshutdown: Finished
Remove all nodes from nodeset.
$ mmdelnode -a

Notes:
node147 currently contributes 1.2 TB (sda1) and node148 contributes 1.2 TB (sda1); both nodes see the full 2.3 TB.
Some things fail to build, but I don't know what they affect XD, since the modules load fine @_@:
ERROR: "struct_module" [/usr/lpp/mmfs/src/gpl-linux/tracedev.ko] undefined!
ERROR: "struct_module" [/usr/lpp/mmfs/src/gpl-linux/mmfslinux.ko] undefined!
ERROR: "struct_module" [/usr/lpp/mmfs/src/gpl-linux/mmfs26.ko] undefined!

References:

Tuesday, June 2, 2009

GFS install on Gentoo Linux

GFS is a shared-disk file system that started out GPL, went commercial, and then became GPL again after Red Hat bought it.

I used to think GFS itself cost money. After a senior colleague (z1x) corrected me, I learned that what actually costs money is RHCS (Red Hat Cluster Suite), which ships all the related binaries pre-built so you can click through a GUI and create the cluster environment you want (it seems that colleague needs this too); the interface is actually built with Conga.

I originally assumed GFS would behave like the Gluster, PVFS, and Lustre setups covered in earlier posts, but it turned out to be quite different, and I spent a lot of time here.

Test environment:
kernel: 2.6.29-gentoo-r5
gfs: 2.03.09
openais: 0.80.3
Both cluster-2 and cluster-3 require a minimum Linux kernel version; see the GFS official website.
Otherwise you may get an error like this:
cluster-3.0.0.rc2 # ./configure
Configuring Makefiles for your system...
Checking tree: nothing to do
Checking kernel:
 Current kernel version: 2.6.28
 Minimum kernel version: 2.6.29
 FAILED!


Step1.
GFS runs on top of the OpenAIS framework, so, well, you have no choice but to install it.
You can build it from source yourself, but it depends on Corosync and even the nss (Network Security Services) library, plus header files for ldap, slang, and so on. I'm lazy, so I install it straight from Gentoo portage :D
$ emerge -uD sys-cluster/openais


Step2.
Build the kernel; remember to build multicast, nbd (Network block device support), gfs2, lock_dlm, and dlm as modules.

Reboot into the new kernel after the build.
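The config symbols most likely involved, as a best guess for a 2.6.29 kernel (confirm the exact names in menuconfig):
CONFIG_IP_MULTICAST=y
CONFIG_BLK_DEV_NBD=m
CONFIG_GFS2_FS=m
CONFIG_GFS2_FS_LOCKING_DLM=m
CONFIG_DLM=m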


Step3.
Install the related userspace packages.
$ emerge -uD sys-libs/slang (needed by rgmanager)
$ emerge -uD sys-cluster/rgmanager
$ emerge -uD sys-fs/gfs2

These are the packages that would be merged, in order:

Calculating dependencies... done!
[ebuild  N    ] sys-cluster/cman-lib-2.03.09  1,743 kB
[ebuild  N    ] dev-python/pexpect-2.3  USE="-doc -examples" 148 kB
[ebuild  N    ] sys-cluster/ccs-2.03.09  0 kB
[ebuild  N    ] sys-cluster/openais-0.80.3-r1  USE="-debug" 468 kB
[ebuild  N    ] sys-cluster/dlm-lib-2.03.09  0 kB
[ebuild  N    ] sys-cluster/dlm-2.03.09  0 kB
[ebuild  N    ] sys-cluster/cman-2.03.09-r1  0 kB
[ebuild  N    ] perl-core/libnet-1.22  USE="-sasl" 67 kB
[ebuild  N    ] dev-perl/Net-SSLeay-1.35  130 kB
[ebuild  N    ] virtual/perl-libnet-1.22  0 kB
[ebuild  N    ] dev-perl/Net-Telnet-3.03-r1  35 kB
[ebuild  N    ] sys-cluster/fence-2.03.09-r1  0 kB
[ebuild  N    ] sys-fs/gfs2-2.03.09  USE="-doc" 0 kB

Total: 13 packages (13 new), Size of downloads: 2,588 kB

$ emerge -uD gnbd-kernel
$ emerge -uD sys-block/nbd (if gnbd-kernel builds for you, prefer gnbd instead Orz...)
or
$ emerge -uD sys-cluster/gnbd (if gnbd-kernel does not build for you, use nbd = =")


Step4.
Load the modules.
$ depmod -a
$ modprobe gfs2
$ modprobe configfs
$ modprobe dlm
$ modprobe lock_dlm
$ modprobe nbd
$ lsmod
Module                  Size  Used by
nbd                    10084  0
lock_dlm               14116  0
dlm                   112656  10 lock_dlm
configfs               22668  2 dlm
gfs2                  332196  1 lock_dlm
$ dmesg
GFS2 (built Jun  1 2009 17:46:14) installed
DLM (built Jun  1 2009 17:45:59) installed
Lock_DLM (built Jun  1 2009 17:46:25) installed
nbd: registered device at major 43


Step5.
Configuration.
$ cat /etc/cluster/cluster.conf (only needs to be placed on one node; it is copied to the other nodes automatically when the cluster starts)
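The cluster.conf contents are not shown in the post; below is only a minimal sketch of what one for this three-node cluster might look like, with the cluster name and node IDs taken from the cman_tool output further down and fencing left empty:
<?xml version="1.0"?>
<cluster name="mycluster" config_version="1">
  <clusternodes>
    <clusternode name="node26" nodeid="2"><fence/></clusternode>
    <clusternode name="node27" nodeid="3"><fence/></clusternode>
    <clusternode name="node28" nodeid="4"><fence/></clusternode>
  </clusternodes>
  <fencedevices/>
</cluster>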

$ cat /etc/ais/openais.conf
totem {
        version: 2
        secauth: off
        threads: 0
        nodeid: 2
        interface {
                ringnumber: 0
                bindnetaddr: 140.110.x.0
                mcastaddr: 226.94.1.1
                mcastport: 5405
        }
}


Step6.
Start the services.
You can simply use /etc/init.d/gfs2:
$ /etc/init.d/gfs2 start
 * Loading dlm kernel module ... [ ok ]
 * Loading lock_dlm kernel module ... [ ok ]
 * Mounting ConfigFS ... [ ok ]
 * Starting ccsd ... [ ok ]
 * Starting cman ... [ ok ]
 * Waiting for quorum (300 secs) ... [ ok ]
 * Starting groupd ... [ ok ]
 * Starting fenced ... [ ok ]
 * Joining fence domain ... [ ok ]
 * Starting dlm_controld ... [ ok ]
 * Starting gfs_controld ... [ ok ]
 * Starting gfs2 cluster:
 * Loading gfs2 kernel module ... [ ok ]
Or start everything by hand in debug mode:
$ mount -t configfs none /sys/kernel/config
$ ccsd -n
$ cman_tool join -d
$ groupd -D
$ fenced -D
$ dlm_controld -D
$ gfs_controld -D
$ fence_tool join


Step7.
Test!
$ ccs_test connect
Connect successful.
 Connection descriptor = 1950
$ cman_tool status
Version: 6.2.0
Config Version: 1
Cluster Name: mycluster
Cluster Id: 56756
Cluster Member: Yes
Cluster Generation: 216
Membership state: Cluster-Member
Nodes: 3
Expected votes: 1
Total votes: 3
Node votes: 1
Quorum: 2
Active subsystems: 7
Flags: Dirty
Ports Bound: 0
Node name: node26
Node ID: 2
Multicast addresses: 226.94.1.1
Node addresses: 140.110.x.26
$ cman_tool services
type             level name     id       state
fence            0     default  00010002 none
[2 3 4]
$ cman_tool nodes
Node  Sts   Inc   Joined               Name
   2   M    208   2009-06-02 19:00:44  node26
   3   M    212   2009-06-02 19:03:57  node27
   4   M    216   2009-06-02 19:06:37  node28


Step8.
Format the partition and mount it.
$ mkfs -t gfs2 -p lock_dlm -t mycluster:testgfs2 -j 4 /dev/cciss/c0d1p1
This will destroy any data on /dev/cciss/c0d1p1.
  It appears to contain a LVM2_member raid.

Are you sure you want to proceed? [y/n] y

Device:                    /dev/cciss/c0d1p1
Blocksize:                 4096
Device Size                33.91 GB (8890316 blocks)
Filesystem Size:           33.91 GB (8890316 blocks)
Journals:                  4
Resource Groups:           136
Locking Protocol:          "lock_dlm"
Lock Table:                "mycluster:testgfs2"
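For reference, my reading of the mkfs options used above (the first -t selects the filesystem type for the mkfs wrapper; the second is mkfs.gfs2's lock-table option):
-p lock_dlm              use the cluster-wide DLM locking protocol
-t mycluster:testgfs2    lock table, <clustername>:<fsname>; the cluster name must match the one in cluster.conf
-j 4                     number of journals, one for each node that will mount the filesystem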
$ mount -t gfs2 -v /dev/cciss/c0d1p1 /mnt/gfs
/sbin/mount.gfs2: mount /dev/cciss/c0d1p1 /mnt/gfs
/sbin/mount.gfs2: parse_opts: opts = "rw"
/sbin/mount.gfs2:   clear flag 1 for "rw", flags = 0
/sbin/mount.gfs2: parse_opts: flags = 0
/sbin/mount.gfs2: parse_opts: extra = ""
/sbin/mount.gfs2: parse_opts: hostdata = ""
/sbin/mount.gfs2: parse_opts: lockproto = ""
/sbin/mount.gfs2: parse_opts: locktable = ""
/sbin/mount.gfs2: message to gfs_controld: asking to join mountgroup:
/sbin/mount.gfs2: write "join /mnt/gfs gfs2 lock_dlm mycluster:testgfs2 rw /dev/cciss/c0d1p1"
/sbin/mount.gfs2: message from gfs_controld: response to join request:
/sbin/mount.gfs2: lock_dlm_join: read "0"
/sbin/mount.gfs2: message from gfs_controld: mount options:
/sbin/mount.gfs2: lock_dlm_join: read "hostdata=jid=0:id=262146:first=1"
/sbin/mount.gfs2: lock_dlm_join: hostdata: "hostdata=jid=0:id=262146:first=1"
/sbin/mount.gfs2: lock_dlm_join: extra_plus: "hostdata=jid=0:id=262146:first=1"
/sbin/mount.gfs2: mount(2) ok
/sbin/mount.gfs2: lock_dlm_mount_result: write "mount_result /mnt/gfs gfs2 0"
/sbin/mount.gfs2: read_proc_mounts: device = "/dev/cciss/c0d1p1"
/sbin/mount.gfs2: read_proc_mounts: opts = "rw,hostdata=jid=0:id=262146:first=1"
$ df -h
/dev/cciss/c0d1p1      34G  518M   34G   2% /mnt/gfs


Step9.
Disk sharing.
I'm using the native kernel nbd module here; gnbd is recommended instead.
9.1. nbd server configuration.
$ cat /etc/nbd-server/config (on node26)
[generic]
[export]
    exportname = /dev/cciss/c0d1p1
        port = 2000
        authfile = /etc/nbd-server/allow
$ cat /etc/nbd-server/allow (on node26)
140.110.x.26
140.110.x.27
140.110.x.28
140.110.x.0/24
9.2. nbd server export.
$ nbd-server (on node26)
9.3. nbd client import.
$ nbd-client node26 2000 /dev/nbd0 (on node27)
Negotiation: ..size = 35561264KB
bs=1024, sz=35561264
9.4. mount!
$ mount -t gfs2 /dev/nbd0 /mnt/gfs (on node27)
$ df -h (on node27)
/dev/nbd0              34G  518M   34G   2% /mnt/gfs
$ cman_tool services (on node27)
type             level name      id       state
fence            0     default   00010002 none
[2 3 4]
dlm              1     testgfs2  00020003 none
[3]
gfs              2     testgfs2  00010003 none
[3]


Step10.
Mount from another client.
$ nbd-client node26 2000 /dev/nbd0 (on node28)
Negotiation: ..size = 35561264KB
bs=1024, sz=35561264
$ mount -t gfs2 /dev/nbd0 /mnt/gfs/ (on node28)
$ df -h (on node28)
Filesystem            Size  Used Avail Use% Mounted on
/dev/nbd0              34G  518M   34G   2% /mnt/gfs
$ cman_tool services (on node28)
type             level name      id       state
fence            0     default   00010002 none
[2 3 4]
dlm              1     testgfs2  00020003 none
[3 4]
gfs              2     testgfs2  00010003 none
[3 4]
$ cman_tool services (on node27)
type             level name      id       state
fence            0     default   00010002 none
[2 3 4]
dlm              1     testgfs2  00020003 none
[3 4]
gfs              2     testgfs2  00010003 none
[3 4]


Step11.
Concurrent write test.
$ vim /mnt/gfs/concurrent_test.txt (on node28)
$ vim /mnt/gfs/concurrent_test.txt (on node27)
E325: ATTENTION
Found a swap file by the name ".concurrent_test.txt.swp"
          owned by: root   dated: Wed Jun  3 07:31:38 2009
         file name: /mnt/gfs/concurrent_test.txt
          modified: YES
         user name: root   host name: node28
        process ID: 4454
While opening file "concurrent_test.txt"

(1) Another program may be editing the same file.
    If this is the case, be careful not to end up with two
    different instances of the same file when making changes.
    Quit, or continue with caution.

(2) An edit session for this file crashed.
    If this is the case, use ":recover" or "vim -r concurrent_test.txt"
    to recover the changes (see ":help recovery").
    If you did this already, delete the swap file ".concurrent_test.txt.swp"
    to avoid this message.

Swap file ".concurrent_test.txt.swp" already exists!
"concurrent_test.txt" [New File]


Notes:
* Stopping the services:
umount [-v] "mountpoint"
nbd-client -d /dev/nbd0
fence_tool leave
cman_tool leave
* Updating cluster.conf:
ccs_tool update foo.conf (remember to bump config_version)

The revolution is not yet complete; the full stack would be gfs2 + gnbd + clvm...

Update:
gnbd is no longer supported... see here and there.