GlusterFS is a software-only, highly available, scalable, centrally managed storage pool.
Download & Install rpm from
OR build rpm from tar
yum -y groupinstall "Development tools"
yum -y install fuse fuse-devel rpcbind readline-devel libibverbs-devel rpm-devel
Download the latest tar from http://download.gluster.com/pub/gluster/glusterfs/LATEST/
rpmbuild -ta glusterfs-3.2.5.tar.gz
cd ./rpmbuild/RPMS/x86_64
rpm -ivh *.rpm
#or: yum -y install *.rpm
Enable service on boot & start
chkconfig glusterd on
service glusterd start
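To confirm the daemon actually started and will come up on boot, a quick sanity check on RHEL/CentOS-style systems looks like this:

```shell
# verify glusterd is running
service glusterd status
# verify glusterd is enabled for the default runlevels
chkconfig --list glusterd
```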
#if you downloaded the RPMs instead of building from tar, install them with:
yum -y install glusterfs-core* glusterfs-fuse* glusterfs-rdma*
Here we are using the following two servers, 192.168.3.132 (gserver1) and 192.168.3.141 (gserver2), plus a client.
Probe the peer server: run the command below on server1 to probe server2.
#on gserver1
gluster peer probe 192.168.3.141
#get peer / server status
gluster peer status
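On gserver1, `gluster peer status` should report output along these lines (illustrative; the UUID will differ on your system):

```shell
# illustrative output of: gluster peer status (run on gserver1)
# Number of Peers: 1
#
# Hostname: 192.168.3.141
# Uuid: <peer-uuid>
# State: Peer in Cluster (Connected)
```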
#create volume (create the /storage directory on both servers first)
mkdir /storage
gluster volume create storevol replica 2 transport tcp 192.168.3.132:/storage 192.168.3.141:/storage
Creation of volume storevol has been successful. Please start the volume to access data.
#storevol is the volume name used
#replica 2 configures two bricks as a replica; you can add more bricks later, but at least two at a time
#here the /storage directory is used to store GlusterFS files / folders
#start volume
gluster volume start storevol
Starting volume storevol has been successful
#get volume info
gluster volume info
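For the volume created above, `gluster volume info` should print something along these lines (illustrative output for a started replica-2 volume):

```shell
# illustrative output of: gluster volume info
# Volume Name: storevol
# Type: Replicate
# Status: Started
# Number of Bricks: 2
# Transport-type: tcp
# Bricks:
# Brick1: 192.168.3.132:/storage
# Brick2: 192.168.3.141:/storage
```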
#By default, all clients can connect to the volume. To grant access only to the client (192.168.2.102):
gluster volume set storevol auth.allow 192.168.2.102
mkdir /home1
mount -t glusterfs 192.168.3.132:/storevol /home1/
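To make the client remount the volume automatically after a reboot, an /etc/fstab entry along these lines can be added (a sketch, assuming you mount from server1; the `_netdev` option delays mounting until the network is up):

```shell
# /etc/fstab entry on the client
192.168.3.132:/storevol  /home1  glusterfs  defaults,_netdev  0  0
```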
Here we have mounted the GlusterFS volume from server1 at the mount point /home1.
Now even if server1 fails, the client can still access files stored on /home1 via server2; there is no need to remount from another server. While mounting from server1, the client fetches the volume info for the other servers too, so no mount point or server IP change is required.
Files / folders copied to /home1 will be replicated on both servers.
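A quick way to verify replication is to write a file through the client mount and check the brick directory on both servers; `testfile.txt` is just an illustrative name:

```shell
# on the client: write a file through the GlusterFS mount
echo "replication test" > /home1/testfile.txt

# on gserver1 AND gserver2: the file should appear in the brick directory
ls -l /storage/testfile.txt
```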
If server1 fails, files will be stored on server2 and synced back to server1 once it is restored.
Add / Remove bricks from GlusterFS
While adding / removing bricks on this replica-2 volume, you need to add / remove at least two bricks at a time. If a single brick is given, the following error will appear:
gluster volume add-brick storevol 192.168.2.104:/storage
Incorrect number of bricks supplied 1 for type REPLICATE with count 2
Additional servers for demo
#add peers
gluster peer probe 192.168.2.104
gluster peer probe 192.168.2.223
#add bricks to the volume
gluster volume add-brick storevol 192.168.2.104:/storage 192.168.2.223:/storage
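After adding bricks, existing data is not spread onto them automatically; a rebalance can be started and monitored as sketched below (run on any server in the pool):

```shell
# redistribute existing data across all bricks
gluster volume rebalance storevol start
# check rebalance progress
gluster volume rebalance storevol status
```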
BELOW COMMAND WILL CAUSE DATA LOSS.
#remove bricks
gluster volume remove-brick storevol 192.168.2.223:/storage 192.168.2.104:/storage
If you find any missing points here, please let us know in the comments section or tweet us at @linuxreaders. To get more articles like this, subscribe to our RSS feed / mails.