GlusterFS
Slow ext4 filesystem formatting
# mkfs.ext4 -T largefile /dev/vdb1
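-T largefile speeds the format up mainly by creating far fewer inodes. On newer e2fsprogs, deferring inode-table initialization gets a similar effect; a hedged alternative (the -E option below is a standard mkfs.ext4 extended option, not from the original note):
# mkfs.ext4 -E lazy_itable_init=1 /dev/vdb1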
______________________________________________
KVM reports permission errors when using a gfs volume
gluster> volume set vm server.allow-insecure on
volume set: success
gluster> volume set vm storage.owner-uid 37
volume set: success
Disabling SELinux resolved the problem.
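The matching group owner usually needs setting as well; a minimal sketch, assuming gid 37 mirrors the uid above (adjust both to whatever user and group your qemu actually runs as):
gluster> volume set vm storage.owner-gid 37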
_____________________________________________
Enable the required ports on the firewall:
firewall-cmd --zone=public --add-port=24007-24008/tcp --permanent
firewall-cmd --zone=public --add-port=49151/tcp --permanent
firewall-cmd --reload
success
https://wiki.centos.org/zh/HowTos/GlusterFSonCentOS
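Note: glusterd itself uses 24007-24008, while each brick listens on its own port starting at 49152 on GlusterFS 3.4+, so the 49151 above may be off by one for that layout. A hedged sketch for a node with up to five bricks (the range width is an assumption):
firewall-cmd --zone=public --add-port=49152-49156/tcp --permanent
firewall-cmd --reload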
_____________________________________________
Reusing a brick directory after removing the brick:
setfattr -x trusted.glusterfs.volume-id /mnt/brick1/data
setfattr -x trusted.gfid /mnt/brick1/data
Of course, you should also ensure that the filesystem you’re adding is cleared, especially the .glusterfs hidden directory.
rm -rf /mnt/brick1/data/.glusterfs
______________________________________________
GlusterFS Operations
1. Volume status
gluster> volume set data server.statedump-path /var/log/glusterfs/
gluster> volume statedump data
volume statedump: success
gluster> volume status data [detail | clients | mem | inode | fd | callpool]
detail - Displays additional information about the bricks.
clients - Displays the list of clients connected to the volume.
mem - Displays the memory usage and memory pool details of the bricks.
inode - Displays the inode tables of the volume.
fd - Displays the open fd (file descriptors) tables of the volume.
callpool - Displays the pending calls of the volume.
2. Volume performance
gluster> volume profile data start
Starting volume profile on data has been successful
gluster> volume profile data info
3. Volume trash
Volume Options
gluster volume set <VOLNAME> features.trash <on / off>
This command enables the trash translator in a volume. If set to on, a trash directory is created in every brick of the volume when the volume starts. By default the translator is loaded at volume start but remains non-functional. Disabling trash with this option will not remove the trash directory or its contents from the volume.
gluster volume set <VOLNAME> features.trash-dir <name>
This command reconfigures the trash directory to a user-specified name. The argument must be a valid directory name; a directory is created under this name inside every brick. If not specified, the trash translator creates the trash directory with the default name ".trashcan". This can be used only when the trash translator is on.
gluster volume set <VOLNAME> features.trash-max-filesize <size>
This command filters files entering the trash directory based on their size. Files above trash-max-filesize are deleted/truncated directly. The size value may carry the multiplicative suffixes KB (=1024 bytes), MB (=1024*1024 bytes) and GB (=1024*1024*1024 bytes). The default is 5MB. Because the trash directory consumes volume space, trash always deletes/truncates files larger than 1GB directly, even if this option is set above 1GB.
gluster volume set <VOLNAME> features.trash-eliminate-path <path1> [ , <path2> , . . . ]
This command sets the eliminate pattern for the trash translator. Files matching this pattern are not moved to the trash directory on deletion/truncation. Each path must be a valid one present in the volume.
gluster volume set <VOLNAME> features.trash-internal-op <on / off>
This command can be used to enable trash for internal operations like self-heal and re-balance. By default set to off.
Enable trash translator
gluster> volume set data features.trash on
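A usage sketch building on the options documented above; the directory name and size limit here are arbitrary picks:
gluster> volume set data features.trash-dir .recycle
gluster> volume set data features.trash-max-filesize 200MB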
4. Enabling jumbo frames
When to configure:
any network faster than 1-GbE
workload is sequential large-file reads/writes
LIMITATION: all network switches in the VLAN must be configured to handle jumbo frames; do not configure jumbo frames otherwise.
How to configure (a sample ifcfg sketch follows this list):
edit the network interface file at /etc/sysconfig/network-scripts/ifcfg-your-interface
Ethernet (on ixgbe driver): add "MTU=9000" (MTU means "maximum transfer unit") record to network interface file
Infiniband (on mlx4 driver): add "CONNECTED_MODE=yes" and "MTU=65520" records to network interface file
ifdown your-interface; ifup your-interface
test with "ping -s 16384 other-host-on-VLAN"
the switch must allow a max frame size larger than the MTU because of protocol headers, usually 9216 bytes
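A minimal ifcfg sketch for the Ethernet case; the interface name eth0 and boot settings are assumptions, the MTU line is the point. Contents of /etc/sysconfig/network-scripts/ifcfg-eth0:
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=none
MTU=9000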
_____________________________________________
[root@node4 hudson.jobs.config.back]# crontab -l
2 18 * * * /mnt/hudson.jobs.config.back/init.sh >> /mnt/hudson.jobs.config.back/backup.log 2>&1
[root@node4 hudson.jobs.config.back]# more init.sh
#!/bin/sh
_SOURCE_DIR=/mnt/hudson/jobs
_DISTEN_DIR=/mnt/hudson.jobs.config.back
_DATE=\.`date +%F`
echo "=======`date`======="
# iterate over every Hudson job
for i in `ls $_SOURCE_DIR`;do
echo "++++++: "$i
# create the per-job directory under the backup root if it is missing
if [ ! -d $_DISTEN_DIR/$i ]; then
mkdir -p $_DISTEN_DIR/$i
fi
# keep one dated copy of each job's config.xml
cp $_SOURCE_DIR/$i/config.xml $_DISTEN_DIR/$i/config.xml$_DATE
done
tree $_DISTEN_DIR
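To restore a given day's copy, reverse the copy; jobname and the date suffix below are hypothetical placeholders:
cp $_DISTEN_DIR/jobname/config.xml.2015-06-01 $_SOURCE_DIR/jobname/config.xml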
_____________________________________________
GlusterFS install packages
http://download.gluster.org/pub/gluster/glusterfs/3.7/3.7.2/CentOS/
______________________________________________
http://blog.gluster.org/2014/07/web-interface-to-manage-gluster-nodes/
http://resources.ovirt.org/pub/ovirt-3.4/rpm/el6Server/x86_64/
_____________________________________________
GlusterFS troubleshooting
Manually rotating volume logs
Command format:
# gluster volume log rotate VOLNAME
For example:
# gluster volume log rotate testvolume
# Log path: /var/log/glusterfs/bricks/brickPath.log.epoch-time-stamp
Clearing file locks
When a file's locks need to be cleared, it can be done manually.
Command format:
# gluster volume clear-locks VOLNAME PATH kind {blocked | granted | all}
{inode [RANGE] | entry [BASENAME] | posix [RANGE]}
How do you find which files are locked?
# gluster volume statedump VOLNAME
# This generates a statedump; analyzing it shows which files hold locks. The dump files are written to the directory set by server.statedump-path, named BRICK-PATH.BRICK-PID.dump.EPOCH-TIME-STAMP.
Examples:
1. Clearing a granted entry lock:
# cat /tmp/dump/data.20282.dump.1409039654
===========================================================
[xlator.features.locks.vol-locks.inode]
path=/
mandatory=0
entrylk-count=1
lock-dump.domain.domain=vol-replicate-0
xlator.feature.locks.lock-dump.domain.entrylk.entrylk[0](ACTIVE)=type=ENTRYLK_WRLCK on basename=file1, pid = 714782904,
=====================================
# gluster volume clear-locks test-volume / kind granted entry file1
2. Clearing a granted inode lock:
# cat /tmp/dump/data1.20283.dump.1409039676
=====================================
[xlator.features.locks.vol-locks.inode]
path=/file1
mandatory=0
inodelk-count=1
lock-dump.domain.domain=vol-replicate-0
inodelk.inodelk[0](ACTIVE)=type=WRITE, whence=0, start=0, len=0, pid = 714787072,
=====================================
# gluster volume clear-locks test-volume /file1 kind granted inode 0,0-0
3. Clearing a granted POSIX lock:
# cat /tmp/dump/data1.20284.dump.1409049672
=====================================
[xlator.features.locks.vol1-locks.inode]
path=/file1
mandatory=0
posixlk-count=15
posixlk.posixlk[0](ACTIVE)=type=WRITE, whence=0, start=8, len=1, pid = 23848,
...
posixlk.posixlk[1](BLOCKED)=type=WRITE, whence=0, start=0, len=1, pid = 1, owner=30404146522d436c-69656e7432, transport=0x1206980, , blocked
=====================================
# gluster volume clear-locks test-volume /file1 kind granted posix 0,8-1
4. Clearing a blocked POSIX lock:
# gluster volume clear-locks test-volume /file1 kind blocked posix 0,0-1
_____________________________________________
Space bridge
Transferring files on Linux with nc and tar
Machine A has a database directory that needs to be transferred to machine B. Machine A's IP: 10.204.3.175
Machine A: tar -cf - /home/database | nc -l 5677
Machine B: nc 10.204.3.175 5677 | tar -xf -
Tested with a 750 MB Oracle installer directory: the transfer took only about 8 seconds on the office LAN.
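Note that the pair above streams an uncompressed tar despite the heading; adding -z gzips the stream, which may or may not pay off on a fast LAN (a sketch, same host and port as above):
Machine A: tar -zcf - /home/database | nc -l 5677
Machine B: nc 10.204.3.175 5677 | tar -zxf -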
Use --exclude to skip a directory:
tar -zcvf dir.tar.gz --exclude=dir3 dir
On Alpine, nc needs the -p flag, e.g.: nc -l -p 2222
_____________________________________________
# yum --disablerepo=\* --enablerepo=c6-media -y install gtk* gisc* xfs*
# rpm -ivh gp*.rpm
# modprobe xfs
# lsmod |grep xfs
# parted
(parted) mklabel gpt [yes]
(parted) mkpart primary 0 -1 [ignore]
(parted) p
# mkfs.xfs -f /dev/sdb1
# mkdir -p /gfs
# mount -t xfs /dev/sdb1 /gfs
# blkid /dev/sdb1 >> /etc/fstab
# vi /etc/fstab
UUID=xxx.xxx.xxxx /gfs xfs defaults 1 1
# init 6
parted /dev/sdb
mklabel gpt
mkpart primary 2048s 100%
mkfs.xfs -f /dev/sdb1
mount -t xfs /dev/sdb1 /data
______________________________________________
8 wget -c http://192.168.168.206/iso/CentOS-6.4-x86_64-bin-DVD1.iso
9 wget -c http://192.168.168.206/iso/CentOS-6.4-x86_64-bin-DVD2.iso
10 vi /etc/yum.repos.d/CentOS-Media.repo
11 mkdir -p /media/CentOS
12 mkdir -p /media/cdrom
13 mount -o loop CentOS-6.4-x86_64-bin-DVD1.iso /media/CentOS/
14 mount -o loop CentOS-6.4-x86_64-bin-DVD2.iso /media/cdrom/
15 df -ah
16 wget -c http://download.gluster.org/pub/gluster/glusterfs/3.4/3.4.2/glusterfs-3.4.2.tar.gz
17 yum --disablerepo=\* --enablerepo=c6-media -y groupinstall "General Purpose Desktop" "X Window System"
18 yum --disablerepo=\* --enablerepo=c6-media -y install gcc make tigervnc* flex* bison* openssl* lvm* python* systemtap-sdt-devel.x86_64 libaio* readline* fuse* xfsdump.x86_64
19 fdisk /dev/sdb
20 mkfs.xfs -f -i size=512 -l size=128m,lazy-count=1 -d agcount=16 /dev/sdb1
21 mkdir -p /gfs
22 mount -t xfs /dev/sdb1 /gfs
23 blkid /dev/sdb1 >> /etc/fstab
24 vi /etc/fstab
______________________________________________
yum --disablerepo=\* --enablerepo=c6-media -y install gcc make tigervnc* flex* bison* openssl* lvm* python* systemtap-sdt-devel.x86_64 libaio* readline* fuse*
______________________________________________
setfattr -x trusted.glusterfs.volume-id $brick_path
setfattr -x trusted.gfid $brick_path
rm -rf $brick_path/.glusterfs
setfattr -x trusted.glusterfs.volume-id /brick
setfattr -x trusted.gfid /brick
rm -rf /brick/.glusterfs
[root@node4 17k]# more rmv.sh
#!/bin/sh
# strip gluster metadata from the brick path given as $1 so it can be reused
setfattr -x trusted.glusterfs.volume-id $1
setfattr -x trusted.gfid $1
rm -rf $1/.glusterfs
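Usage, with a brick path matching the earlier example:
# sh rmv.sh /mnt/brick1/data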
______________________________________________
# yum -y install gcc make xfsdump.x86_64 flex* bison* openssl* lvm* python* systemtap-sdt-devel.x86_64 libaio* readline* fuse* sqlite-devel acl-devel libxml2-devel
# ./configure --enable-bd-xlator --enable-debug --enable-systemtap
# make && make install
_____________________________________________
GlusterFS 3.7.2 build options
# rpm -ivh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
# yum -y install userspace-rcu-devel.x86_64
# yum -y install gcc make xfsdump.x86_64 flex* bison* openssl* lvm* python* systemtap-sdt-devel.x86_64 libaio* readline* fuse* sqlite-devel acl-devel libxml2-devel glib2-devel.x86_64 userspace-rcu-devel.x86_64
# ./configure --enable-bd-xlator --enable-debug --enable-systemtap --enable-qemu-block
_____________________________________________
[Test case 1] Client automatic mount test
Steps:
1. Mount volume mail with mount.glusterfs
2. Stop volume mail on the cluster
3. Check the mount point on the client
4. Start volume mail on the cluster
5. Check the mount point on the client
[Test case 2] Node failure test
Steps:
1. Mount volume mail with mount.glusterfs
2. Disconnect node 3 from the network
3. Check the mount point on the client
4. Write/read files from the client
[Test case 3] Node failure recovery test
Steps:
1. Mount volume mail with mount.glusterfs
2. Disconnect node 3 from the network
3. Check the mount point on the client
4. Write/read files from the client
5. Reconnect node 3 to the network
6. Verify file integrity from the client
[Test case 4] Node removal test
Steps:
1. Mount volume mail with mount.glusterfs
2. Force-remove the node with peer detach glusterfs003 force
3. Check the mount point on the client
4. Write/read files from the client
[Test case 5] Node removal recovery test
Steps:
1. Mount volume mail with mount.glusterfs
2. Force-remove the node with peer detach glusterfs003 force
3. Check the mount point on the client
4. Write/read files from the client
5. Rejoin the node with peer probe glusterfs003
6. Verify file integrity from the client
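All five cases start from the same client mount; a minimal sketch (the server name glusterfs001 is an assumption based on the glusterfs003 naming above):
# mkdir -p /mnt/mail
# mount -t glusterfs glusterfs001:/mail /mnt/mail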
______________________________________________
sudo blkid
1. Install parted; mount DVD1 and DVD2
2. Partitioning with parted (GPT label) before mkfs gets past the 16 TB limit; then edit fstab to mount automatically
3. Use blkid to get the UUID
4. When a source build fails for missing packages, yum install <package>-devel and <package>-headers
5. A minimal install lacks scp; installing openssh-client fixes it
6. (parted) mklabel gpt
(parted) mkpart primary 0KB 2999GB
_____________________________________________
http://blog.csdn.net/liuhong1123/article/details/8118230
http://yyri.blog.163.com/blog/static/148943951201212710572679/
http://gluster.org/community/documentation/index.php/GlusterFS_Technical_FAQ
_____________________________________________
http://download.gluster.org/pub/gluster/glusterfs/qa-releases/
_____________________________________________
# wget http://pkgs.repoforge.org/rpmforge-release/rpmforge-release-0.5.3-1.el6.rf.x86_64.rpm
# yum -y install glusterfs-server.x86_64 glusterfs.x86_64 glusterfs-fuse.x86_64 glusterfs-devel.x86_64
_____________________________________________
# ./configure --enable-bd-xlator --disable-fusermount --enable-debug --enable-systemtap
# make && make install
_____________________________________________
mkfs.xfs -f -i size=512 -l size=128m,lazy-count=1 -d agcount=16 /dev/sdb1
_____________________________________________
# wget http://download.gluster.org/pub/gluster/glusterfs/qa-releases/3.4.0alpha/CentOS/glusterfs-alpha-epel.repo
# cp glusterfs-alpha-epel.repo /etc/yum.repos.d/
To use this repo:
+ download the glusterfs-alpha-epel.repo file to your system
+ Install the file in your /etc/yum.repos.d/ directory
+ run yum with --enablerepo=glusterfs-alpha-epel, e.g.
`yum --enablerepo=glusterfs-alpha-epel install glusterfs`
RPMs in this repo are built on the Fedora Project's Koji build
system
The RPMs are signed with a 2048-bit gpg key. You may edit your repo
file and set gpgcheck=1 at your discretion.
______________________________________________
Appendix:
Self-Heal
Currently AFR doesn't do active self-heal; that is, it won't fix all inconsistencies automatically. Instead, it fixes inconsistencies when a file is opened. Hence, to make sure all AFR'd copies are in sync, the following command may help.
$ find /mnt/glusterfs -type f -exec head -n 1 {} \;
A faster healing solution could be
$ find /mnt/glusterfs -type f -exec head -c 1 {} \; >/dev/null
Reposted from: http://anotherbug.blog.chinajavaworld.com/entry/4356/0/
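Newer GlusterFS releases (3.3 and later) ship a self-heal daemon, so assuming such a version the crawl can be triggered server-side instead of from a client:
# gluster volume heal test-volume full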
_____________________________________________
Notes on attaching disk images stored on the GlusterFS distributed filesystem via libvirt
2013-04-02 16:05
Goal: put one of a virtual machine's disks on a distributed filesystem to keep its data safe.
Participants: a virtual machine, a physical host, and a distributed-filesystem cluster.
The distributed filesystem part:
We use GlusterFS as the distributed filesystem here.
First, download the GlusterFS package on every machine:
wget http://download.gluster.org/pub/gluster/glusterfs/qa-releases/3.4.0alpha/glusterfs-3.4.0alpha.tar.gz
Then run the install script:
The script's contents:
#!/bin/sh
echo Install GlusterFs
apt-get update --fix-missing
apt-get install -y build-essential flex bison libssl-dev libreadline6-dev
echo Configure
tar zxf glusterfs-3.4.0alpha.tar.gz
cd glusterfs-3.4.0alpha
./configure
echo Make
make
echo Make Install
make install
echo Fix library
cp /usr/local/lib/* /usr/lib
echo Finish
echo If you want to start GlusterFS, you can type:
echo "# service glusterd start"
Run it:
./install.sh
Then start the glusterd service on every server:
service glusterd start
Once that is done, join the four machines into one storage pool:
gluster peer probe SERVER   (uses port 24007)
Then create the distributed volume:
gluster volume create test-volume replica 2 transport tcp server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4
After that, mount the volume on a client and the configuration below can proceed:
mount -t glusterfs server1:/test-volume /test
Create a virtual disk on the server (provided the image path is under the GlusterFS mount point). Command:
qemu-img create -f qcow2 add.img 10G
Then run:
virsh attach-disk test /test/add.img vdb
In this command, test is the VM's name and vdb is the target device name.
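Note that attach-disk only changes the running domain by default; virsh's --persistent flag (a real flag, though not used in the original) also writes the change into the stored XML:
virsh attach-disk test /test/add.img vdb --persistent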
The above can also be done by editing the domain's XML configuration:
# virsh edit test, then add the following lines after the existing disk element in the XML:
<disk type='file' device='disk'>
<driver name='qemu' type='raw' cache='none'/>
<source file='/test/test.img'/>
<target dev='vdb' bus='virtio'/>
</disk>
Now log in to the VM and run
# fdisk -l to check whether the newly added disk /dev/vdb shows up.
Then format vdb:
mkfs.ext4 /dev/vdb
Here ext4 is the filesystem type; change it to ext3 if you prefer.
Next, create a directory on which to mount the new disk:
#mkdir /test
#mount /dev/vdb /test
______________________________________________
http://blog.chinaunix.net/uid-29312110-id-4241647.html
gluster 2014-05-07 16:12:06
Set up the gluster servers with rdma transport type following the steps below:
Prepare two machines running RHEL 6 as gluster servers:
serverA: 10.66.106.25
serverB: 10.66.106.39
Install the following packages
# rpm -qa|grep gluster
glusterfs-api-3.4.0.59rhs-1.el6rhs.x86_64
glusterfs-server-3.4.0.59rhs-1.el6rhs.x86_64
glusterfs-api-devel-3.4.0.59rhs-1.el6rhs.x86_64
glusterfs-libs-3.4.0.59rhs-1.el6rhs.x86_64
glusterfs-3.4.0.59rhs-1.el6rhs.x86_64
glusterfs-geo-replication-3.4.0.59rhs-1.el6rhs.x86_64
glusterfs-rdma-3.4.0.59rhs-1.el6rhs.x86_64
glusterfs-debuginfo-3.4.0.59rhs-1.el6rhs.x86_64
glusterfs-fuse-3.4.0.59rhs-1.el6rhs.x86_64
glusterfs-devel-3.4.0.59rhs-1.el6rhs.x86_64
1. Create the gluster volume on gluster servers A and B.
# more /etc/glusterfs/glusterd.vol
volume management
type mgmt/glusterd
option working-directory /var/lib/glusterd
option transport-type socket,rdma
option transport.socket.keepalive-time 10
option transport.socket.keepalive-interval 2
option transport.socket.read-fail-log off
option rpc-auth-allow-insecure on
end-volume
# service glusterd restart
Stopping glusterd: [ OK ]
Starting glusterd: [ OK ]
# mkdir /br1
# chmod -R 777 /br1
# setenforce 0
# iptables -F
On gluster serverA:
# gluster peer probe 10.66.106.39
peer probe: success.
# gluster peer status
Number of Peers: 1
Hostname: 10.66.106.39
Uuid: 40f4b505-0765-4a6b-906b-db68c078c1dd
State: Peer in Cluster (Connected)
# gluster volume create gluster-vol1 10.66.106.25:/br1 10.66.106.39:/br1 force
volume create: gluster-vol1: success: please start the volume to access data
# gluster volume set gluster-vol1 server.allow-insecure on
volume set: success
# gluster volume info
Volume Name: gluster-vol1
Type: Distribute
Volume ID: ea32fcf6-3b6e-43ed-9e87-862a35fa0ddf
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: 10.66.106.25:/br1
Brick2: 10.66.106.39:/br1
Options Reconfigured:
server.allow-insecure: on
# gluster volume start gluster-vol1
# gluster volume status
Status of volume: gluster-vol1
Gluster process Port Online Pid
--------------------------------------------------------------------------
Brick 10.66.106.25:/br1 49152 Y 32446
Brick 10.66.106.39:/br1 49152 Y 452
NFS Server on localhost N/A N N/A
NFS Server on 10.66.106.39 N/A N N/A
Task Status of Volume gluster-vol1
-------------------------------------------------------------------------
There are no active volume tasks
2. Prepare a gluster client running RHEL 7
Install the following packages on the client:
# rpm -qa|grep gluster
glusterfs-libs-3.4.0.59rhs-1.el7.x86_64
glusterfs-fuse-3.4.0.59rhs-1.el7.x86_64
glusterfs-3.4.0.59rhs-1.el7.x86_64
glusterfs-api-devel-3.4.0.59rhs-1.el7.x86_64
glusterfs-rdma-3.4.0.59rhs-1.el7.x86_64
glusterfs-debuginfo-3.4.0.59rhs-1.el7.x86_64
glusterfs-api-3.4.0.59rhs-1.el7.x86_64
glusterfs-devel-3.4.0.59rhs-1.el7.x86_64
3. Create a qcow2 image on the volume through the gluster:// protocol:
# qemu-img create -f qcow2 -o lazy_refcounts=on gluster://10.66.106.25/gluster-vol1/qcow3-vol1 8G
Formatting 'gluster://10.66.106.25/gluster-vol1/qcow3-vol1', fmt=qcow2 size=8589934592 encryption=off cluster_size=65536 lazy_refcounts=on
4. Set nfs.disable=on on the gluster server:
# gluster volume set gluster-vol1 nfs.disable on
# gluster volume info gluster-vol1 | grep nfs.disable
nfs.disable: on
5. # qemu-img info gluster://10.66.106.25/gluster-vol1/qcow3-vol1
image: gluster://10.66.106.25/gluster-vol1/qcow3-vol1
file format: qcow2
virtual size: 8.0G (8589934592 bytes)
disk size: 193K
cluster_size: 65536
Format specific information:
    compat: 1.1
    lazy refcounts: true
6. qemu can reach glusterfs through several URI formats; the port is always the fixed 24007:
gluster://1.2.3.4/testvol/a.img
gluster+tcp://1.2.3.4/testvol/a.img
gluster+tcp://1.2.3.4:24007/testvol/dir/a.img
gluster+tcp://[1:2:3:4:5:6:7:8]/testvol/dir/a.img
gluster+tcp://[1:2:3:4:5:6:7:8]:24007/testvol/dir/a.img
gluster+tcp://server.domain.com:24007/testvol/dir/a.img
gluster+unix:///testvol/dir/a.img?socket=/tmp/glusterd.socket
gluster+rdma://1.2.3.4:24007/testvol/a.img
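A hedged usage sketch attaching one of these URIs straight to a guest (qemu 1.3+ supports the gluster block driver; the memory size and virtio choice here are arbitrary):
qemu-system-x86_64 -m 1024 -drive file=gluster://10.66.106.25/gluster-vol1/qcow3-vol1,if=virtio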
Testing shows that qemu's glusterfs connection supports high availability: with gluster://1.2.3.4/testvol/a.img, the VM keeps running even if 1.2.3.4 goes down.
Testing also found gluster's compatibility with xfs bricks is not great; reported space usage can be wrong. The fix:
gluster volume set <volname> cluster.stripe-coalesce enable
_____________________________________________
myron