For the last few days I had been looking into implementing a cluster on my single laptop, mainly to become more familiar with clustering. The folks at LinuxQuestions.org pointed me in the right direction, and I decided to use VMware Server 2.1 to build a cluster among guest nodes. My laptop runs Ubuntu 9.1, and I installed RHEL 5.1 as a guest under VMware Server. Going through the docs on clustering with VMware, I concluded that I had to add a virtual SCSI disk on a separate bus to my virtual guests. My main concern early on was how to implement a disk that could be shared among guests, so I added a second SCSI controller (scsi1) and changed some settings in each guest's .vmx file.
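The shared disk image itself has to exist before the .vmx file can reference it. It can be created with the vmware-vdiskmanager tool that ships with VMware Server; a minimal sketch, assuming a hypothetical path and a 2 GB size (preallocated disks are the safer choice for sharing):

```shell
# Create a 2 GB preallocated virtual disk for the LSI Logic adapter.
# -c = create, -s = size, -a = adapter type, -t 2 = preallocated single file.
# The path and size here are examples, not taken from my actual setup.
vmware-vdiskmanager -c -s 2GB -a lsilogic -t 2 /vmware/shareddisk.vmdk
```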
The following modifications were made in the .vmx file:
disk.locking = "FALSE"
scsi1.present = "TRUE"
scsi1.sharedBus = "virtual"
scsi1.virtualDev = "lsilogic"
scsi1:0.present = "TRUE"
scsi1:0.fileName = "d:virtualshareddisk"
scsi1:0.mode = "independent-persistent"
scsi1:0.deviceType = "disk"
After that I restarted the guests, and I was delighted to find that 'fdisk -l' listed the new disk.
So now I have a disk that can be shared among my individual guests.
Since I had already decided to use OCFS2 as the cluster file system, I installed ocfs2-`uname -r` and ocfs2-tools. For managing it graphically I also installed ocfs2console (remember to replace `uname -r` with your kernel version).
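On RHEL 5 the installation can be done with rpm, assuming the OCFS2 RPMs matching your kernel have already been downloaded from Oracle (the exact file names below are illustrative):

```shell
# The kernel-module package must match the running kernel exactly.
KVER=$(uname -r)
# Install the kernel module, the userspace tools, and the GUI console.
rpm -Uvh ocfs2-$KVER-*.rpm ocfs2-tools-*.rpm ocfs2console-*.rpm
```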
After installation, I noticed two new init scripts inside /etc/init.d: o2cb and ocfs2. Now it was time to configure OCFS2, so I ran:
root# cd /etc/init.d
root# ./o2cb configure
The command above complained that cluster.conf was not found, so I created /etc/ocfs2/cluster.conf with the following contents:
cluster:
	node_count = 2
	name = ocfs2

node:
	ip_port = 7777
	ip_address = 192.168.11.90
	number = 1
	name = node1
	cluster = ocfs2

node:
	ip_port = 7777
	ip_address = 192.168.11.100
	number = 2
	name = node2
	cluster = ocfs2
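One detail worth stressing: /etc/ocfs2/cluster.conf must be identical on every node in the cluster, so the same file has to be copied to the second guest as well (the hostname below is hypothetical):

```shell
# The cluster configuration must match on all nodes.
scp /etc/ocfs2/cluster.conf root@node2:/etc/ocfs2/cluster.conf
```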
Now run the configure step again:
root# cd /etc/init.d
root# ./o2cb configure
This time it completed successfully.
After that, create a new partition on /dev/sdb.
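Partitioning is normally done interactively with fdisk; a non-interactive sketch of the same keystrokes (n = new, p = primary, 1 = partition number, two blank lines accept the default boundaries, w = write) would be:

```shell
# Create one primary partition spanning the whole shared disk.
printf 'n\np\n1\n\n\nw\n' | fdisk /dev/sdb
# Re-read the partition table on the other guest so /dev/sdb1 shows up there too.
partprobe /dev/sdb
```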
Then format it:
root# mkfs.ocfs2 -b 4k -C 32k -N4 -L shareddata /dev/sdb1 --fs-feature-level=max-compat
The clustered file system is now created on /dev/sdb1; mount it on each guest:
#mount -t ocfs2 /dev/sdb1 /mnt/shared
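To make the mount survive reboots, a common approach (sketched here, not part of my original steps) is an fstab entry plus enabling the init scripts; the _netdev option delays mounting until the network and cluster stack are up:

```shell
# Mount automatically at boot; _netdev waits for the network/cluster stack.
echo '/dev/sdb1 /mnt/shared ocfs2 _netdev,defaults 0 0' >> /etc/fstab
# Start the cluster stack and OCFS2 mounts at boot (RHEL-style init).
chkconfig o2cb on
chkconfig ocfs2 on
```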
Great, it all worked!