How to install GlusterFS on RHEL/CentOS and Fedora
Merits
- High performance
- High availability
- Simple
- Innovative
- Scales linearly
- Elasticity
What makes Gluster stand out among other distributed file systems
- Scalable
- Affordable
- Open Source
- Flexible
Storage concepts in GlusterFS
- Brick – Basically any directory that is shared among the trusted storage pool.
- Trusted Storage Pool – A collection of shared files/directories, based on the designed protocol.
- Block Storage – Devices through which data is moved across systems in the form of blocks.
- Cluster – In Red Hat Storage, both cluster and trusted storage pool convey the same meaning: a collaboration of servers based on a defined protocol.
- Distributed File System – A file system in which data is spread over different nodes.
- FUSE – Filesystem in Userspace; it allows creating file systems above the kernel without involving any kernel code.
- glusterd – The GlusterFS management daemon, the backbone of the file system, which runs the whole time the servers are in an active state.
- POSIX – The family of standards defined by the IEEE to solve compatibility between Unix variants, in the form of an Application Programming Interface (API).
- RAID – A technology that gives increased storage reliability through redundancy.
- Subvolume – A brick after being processed by at least one translator.
- Translator – Performs the basic actions initiated by the user from the mount point. It connects one or more subvolumes.
- Volume – A logical collection of bricks. All operations are based on the different types of volumes created by the user.
The different types of volumes
- Replicated volume
- Distributed volume
- Striped volume
- Distributed replicated volume (example create commands are sketched below)
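For reference, the create command for each volume type looks roughly like the following. This is a sketch only: the host names (node1, node2), volume names, and brick paths are placeholders, and the number of bricks must be a multiple of the replica or stripe count.
\[root@server ~\]# gluster volume create rep-vol replica 2 node1:/brick/data node2:/brick/data
\[root@server ~\]# gluster volume create dist-vol node1:/brick/data node2:/brick/data
\[root@server ~\]# gluster volume create str-vol stripe 2 node1:/brick/data node2:/brick/data
\[root@server ~\]# gluster volume create dist-rep-vol replica 2 node1:/brick1 node2:/brick1 node1:/brick2 node2:/brick2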
To install GlusterFS on RHEL/CentOS and Fedora
Requirements
Install CentOS 7 on both nodes.
Set the host names as "server" and "client".
Create a working network.
Name the storage disk on both nodes as "/brick/data".
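If host name resolution is not already in place, it can be set up with a minimal sketch like the following (the IP addresses are assumed to match the ones used later in this guide: 192.168.5.191 for the server and 192.168.5.190 for the client; adjust them to your network).
\[root@server ~\]# hostnamectl set-hostname server
\[root@client ~\]# hostnamectl set-hostname client
Then append entries such as these to /etc/hosts on both nodes:
192.168.5.191 server
192.168.5.190 client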
Here, we are installing and configuring GlusterFS for the first time for high availability of storage. Now let's use the two servers to create volumes and replicate data between them.
Follow the installation and configuration process below on both servers
To Enable EPEL and GlusterFS Repository
Before installing GlusterFS on both servers, enable the EPEL repository.
\[root@server ~\]# yum install epel-release -y
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
\* base: centos.webwerks.com
\* epel: ftp.cuhk.edu.hk
\* extras: centos.webwerks.com
\* updates: centos.excellmedia.net
Resolving Dependencies
--> Running transaction check
---> Package epel-release.noarch 0:7-5 will be updated
.
.
.
Updated:
epel-release.noarch 0:7-6
Complete!
To enable the GlusterFS repository, use the following command.
\[root@server ~\]# wget -P /etc/yum.repos.d http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/glusterfs-epel.repo
--2016-05-10 10:55:22-- http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/glusterfs-epel.repo
Resolving download.gluster.org (download.gluster.org)... 23.253.208.221, 2001:4801:7824:104:be76:4eff:fe10:23d8
Connecting to download.gluster.org (download.gluster.org)|23.253.208.221|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1049 (1.0K)
Saving to: '/etc/yum.repos.d/glusterfs-epel.repo'
100%\[====================================================> \] 1,049 --.-K/s in 0s
2016-05-10 10:55:23 (53.1 MB/s) - '/etc/yum.repos.d/glusterfs-epel.repo' saved \[1049/1049\]
To install GlusterFS
Run the following command to install GlusterFS.
\[root@server ~\]# yum install glusterfs-server -y
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
\* base: centos.webwerks.com
\* epel: ftp.cuhk.edu.hk
\* extras: centos.webwerks.com
\* updates: centos.excellmedia.net
Resolving Dependencies
--> Running transaction check
---> Package glusterfs-server.x86\_64 0:3.7.11-1.el7 will be installed
.
.
.
Dependency Updated:
glusterfs-api.x86\_64 0:3.7.11-1.el7 glusterfs-libs.x86\_64 0:3.7.11-1.el7
Complete!
Start the GlusterFS management daemon
Execute the command below to start the GlusterFS service.
\[root@server ~\]# systemctl start glusterd.service
Check the status of the daemon.
\[root@server ~\]# systemctl status glusterd.service
glusterd.service - GlusterFS, a clustered file-system server
Loaded: loaded (/usr/lib/systemd/system/glusterd.service; disabled)
Active: active (running) since Tue 2016-05-10 10:57:32 IST; 8s ago
Process: 7261 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG\_LEVEL $GLUSTERD\_OPTIONS (code=exited, status=0/SUCCESS)
Main PID: 7262 (glusterd)
CGroup: /system.slice/glusterd.service
└─7262 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
May 10 10:57:32 client systemd\[1\]: Started GlusterFS, a clustered file-system server.
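Note that the status output above reports the unit as disabled, so glusterd will not start automatically after a reboot. To have it start at every boot, enable the unit on both servers:
\[root@server ~\]# systemctl enable glusterd.service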
To Configure SELinux and iptables
Open '/etc/sysconfig/selinux' on both servers. Edit the SELinux mode to "permissive" or "disabled", then save and close the file.
\[root@server ~\]# vim /etc/sysconfig/selinux
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of these two values:
# targeted - Targeted processes are protected,
# mls - Multi Level Security protection.
SELINUXTYPE=targeted
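The setting in this file only takes effect after a reboot. To switch SELinux to permissive mode immediately without rebooting, additionally run:
\[root@server ~\]# setenforce 0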
Flush the iptables rules on both nodes.
\[root@server ~\]# iptables -F
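Keep in mind that iptables -F only clears the rules until the next rule reload. A more durable alternative on CentOS 7 (a sketch, assuming firewalld is in use) is to open the ports GlusterFS needs: 24007-24008 for the management daemon and 49152 onward for the bricks, one port per brick.
\[root@server ~\]# firewall-cmd --permanent --add-port=24007-24008/tcp
\[root@server ~\]# firewall-cmd --permanent --add-port=49152/tcp
\[root@server ~\]# firewall-cmd --reload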
To Configure the Trusted Pool
On " Server"
\[root@server ~\]# gluster peer probe 192.168.5.190 peer probe: success.
On " Client"
\[root@client ~\]# gluster peer probe 192.168.5.191 peer probe: success. Host 192.168.5.191 port 24007 already in peer list
Important: Once this pool has been established, only already-trusted members can probe new servers into it; a new server cannot add itself to the pool.
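Conversely, a server can later be removed from the pool from any trusted member, provided none of its bricks are still part of a volume:
\[root@server ~\]# gluster peer detach 192.168.5.190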
To check the gluster peer status
On "server"
\[root@server ~\]# gluster peer status
Number of Peers: 1
Hostname: 192.168.5.190
Uuid: b6c2fd86-af1b-420b-b111-dee1e16d2c3b
State: Peer in Cluster (Connected)
On " client"
\[root@client ~\]# gluster peer status
Number of Peers: 1
Hostname: 192.168.5.191
Uuid: a4f7053b-35fe-425b-9dab-0b761b32e770
State: Peer in Cluster (Connected)
To list the gluster pool
On "server"
\[root@server ~\]# gluster pool list
UUID Hostname State
b6c2fd86-af1b-420b-b111-dee1e16d2c3b 192.168.5.190 Connected
a4f7053b-35fe-425b-9dab-0b761b32e770 localhost Connected
On " Client"
\[root@client ~\]# gluster pool list
UUID Hostname State
a4f7053b-35fe-425b-9dab-0b761b32e770 192.168.5.191 Connected
b6c2fd86-af1b-420b-b111-dee1e16d2c3b localhost Connected
To Set up a Volume
Create a volume on any single server.
\[root@server ~\]# gluster volume create dist-volume 192.168.5.190:/brick/data force
volume create: dist-volume: success: please start the volume to access data
\[root@server ~\]# gluster volume start dist-volume
volume start: dist-volume: success
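The volume created above is a single-brick distributed volume. Since the stated goal is to replicate data between the two nodes, a replicated volume spanning a brick on each node could be created instead (a sketch, assuming /brick/data exists on both nodes; rep-volume is a placeholder name):
\[root@server ~\]# gluster volume create rep-volume replica 2 192.168.5.190:/brick/data 192.168.5.191:/brick/data force
\[root@server ~\]# gluster volume start rep-volume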
Run the following command to check the volume info.
\[root@server ~\]# gluster volume info
Volume Name: dist-volume
Type: Distribute
Volume ID: fd8e2ad9-bb89-4b3d-9e22-743b39f131ad
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: 192.168.5.190:/brick/data
Options Reconfigured:
performance.readdir-ahead: on
To Check the volume status on the "server"
\[root@server ~\]# gluster volume status
Status of volume: dist-volume
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick 192.168.5.190:/brick/data 49152 0 Y 7816
NFS Server on localhost 2049 0 Y 4289
NFS Server on 192.168.5.190 2049 0 Y 7844
Task Status of Volume dist-volume
------------------------------------------------------------------------------
There are no active volume tasks
To Check the volume info on the "client" server
\[root@client ~\]# gluster volume info
Volume Name: dist-volume
Type: Distribute
Volume ID: fd8e2ad9-bb89-4b3d-9e22-743b39f131ad
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: 192.168.5.190:/brick/data
Options Reconfigured:
performance.readdir-ahead: on
Now verify the volume status on the "client" server
\[root@client ~\]# gluster volume status
Status of volume: dist-volume
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick 192.168.5.190:/brick/data 49152 0 Y 7816
NFS Server on localhost 2049 0 Y 7844
NFS Server on 192.168.5.191 2049 0 Y 4289
Task Status of Volume dist-volume
------------------------------------------------------------------------------
There are no active volume tasks
Important: If the volume does not start, the error messages will be stored under '/var/log/glusterfs' on one or both of the servers.
To Verify the Volume
Now mount the gluster volume under the directory "/mnt/gluster".
\[root@client ~\]# mkdir /mnt/gluster
\[root@client ~\]# mount.glusterfs 192.168.5.191:/dist-volume /mnt/gluster
\[root@client ~\]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda3 18G 5.1G 13G 30% /
devtmpfs 486M 0 486M 0% /dev
tmpfs 494M 140K 494M 1% /dev/shm
tmpfs 494M 7.1M 487M 2% /run
tmpfs 494M 0 494M 0% /sys/fs/cgroup
/dev/sda1 497M 116M 382M 24% /boot
192.168.5.191:/dist-volume 18G 5.1G 13G 30% /mnt/gluster
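To confirm that data written through the mount point actually lands on the brick, a quick sketch: create a few test files under the mount, then list the brick directory on the node hosting the brick (192.168.5.190 in this setup).
\[root@client ~\]# touch /mnt/gluster/file{1..5}
\[root@client ~\]# ls /brick/data
file1 file2 file3 file4 file5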
To make the mount permanent across reboots, add the following entry to the fstab file, then remount.
#
# /etc/fstab
.
.
.
192.168.5.191:/dist-volume /mnt/gluster glusterfs defaults,\_netdev 0 0
\[root@client ~\]# mount -a
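Confirm that the fstab entry took effect by checking the mounted file system:
\[root@client ~\]# df -h /mnt/gluster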
Rebalance – When bricks are added to a volume where a large amount of data was previously residing, a rebalance operation can be performed to distribute the data among all the bricks, including the newly added brick (see the command sketches after this list).
Geo-replication – It provides backups of data for disaster recovery using the concept of master and slave volumes, so that if the master is down, all of the data can still be accessed via the slave.
Self-heal – If any brick in a replicated volume goes down and users modify files within the other brick, the self-heal daemon comes into action as soon as the brick comes back up, and the transactions that occurred during the downtime are synced accordingly.
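As a rough illustration of the rebalance and self-heal features, the commands look like the following (sketches only: the volume names are placeholders, heal applies to replicated volumes, and geo-replication requires additional master/slave session setup beyond the scope of this guide).
\[root@server ~\]# gluster volume rebalance dist-volume start
\[root@server ~\]# gluster volume rebalance dist-volume status
\[root@server ~\]# gluster volume heal rep-volume info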