Ceph has become one of the hottest open-source distributed storage projects. I recently needed to investigate how to integrate OpenStack with Ceph, so I started learning Ceph, beginning naturally with building a Ceph cluster.
My setup uses six virtual machines: one admin node, one monitor node, one MDS node, two OSD nodes, and one client node. The machines are configured as follows:
> lsb_release -a
LSB Version:    :core-4.1-amd64:core-4.1-noarch
Distributor ID: CentOS
Description:    CentOS Linux release 7.1.1503 (Core)
Release:        7.1.1503
Codename:       Core
> uname -r
3.10.0-229.el7.x86_64
The steps are described below.
> df -T
> umount -l /data          (detach the existing mount point)
> mkfs.xfs -f /dev/vdb     (force-format the device as XFS)
> mount /dev/vdb /data     (remount)
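Note that a mount made this way does not survive a reboot. A minimal /etc/fstab entry to make it persistent (a sketch assuming the same /dev/vdb device and /data mount point as above) would be:

# /etc/fstab — persist the XFS data mount across reboots (assumed device and mount point)
/dev/vdb    /data    xfs    defaults    0 0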
> hostnamectl set-hostname {name}
c) Add entries to the hosts file so that all cluster nodes can resolve one another by hostname (do not use FQDNs in the hosts file).
> lsb_release -a
> uname -r
[ceph-noarch]
name=Ceph noarch packages
baseurl=http://download.ceph.com/rpm-{ceph-release}/{distro}/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
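With the placeholders filled in for this environment — {distro} is el7 for CentOS 7, and {ceph-release} would be the release current at the time, e.g. hammer (an assumption, since the guide does not name the release) — the baseurl becomes:

baseurl=http://download.ceph.com/rpm-hammer/el7/noarch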
> sudo yum update && sudo yum install ceph-deploy
Note: many guides say to use ssh-copy-id, but that did not work for me; simply appending the admin node's public key to the other machines does the job, as sketched below.
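A minimal sketch of doing this by hand, assuming the admin node's key pair lives at ~/.ssh/id_rsa and password login still works:

# On the admin node: generate a key pair if none exists yet
> ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
# Append the public key to each node's authorized_keys (repeat for every node)
> cat ~/.ssh/id_rsa.pub | ssh root@ceph-mon-node "mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys"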
# yum -y install ntp ntpdate ntp-doc
# ntpdate 0.us.pool.ntp.org
# hwclock --systohc
# systemctl enable ntpd.service
# systemctl start ntpd.service
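To confirm that time synchronization is actually working, ntpd's peer list can be checked (a quick verification step, not part of the original walkthrough):

# ntpq -p        (a peer prefixed with '*' is the selected sync source)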
0.4 Disable the firewall (or open ports 6789 and 6800-6900) and disable SELinux
# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
# setenforce 0
Open the ports Ceph needs:
# yum -y install firewalld
# firewall-cmd --zone=public --add-port=6789/tcp --permanent
# firewall-cmd --zone=public --add-port=6800-7100/tcp --permanent
# firewall-cmd --reload
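You can confirm the rules took effect by listing the open ports in the zone:

# firewall-cmd --zone=public --list-ports
6789/tcp 6800-7100/tcp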
10.221.83.246 ceph-adm-node
10.221.83.247 ceph-mon-node
10.221.83.248 ceph-mds-node
10.221.83.249 ceph-osd-node1
10.221.83.250 ceph-osd-node2
10.221.83.251 ceph-client-node
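After editing /etc/hosts on every machine, a quick resolution check from any node:

> ping -c 1 ceph-mon-node    (every node should resolve every other node's hostname)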
> mkdir -p ~/my-cluster
> cd ~/my-cluster
Hint: ceph-deploy writes its output files to the current directory, so make sure you run ceph-deploy from this directory.
> ceph-deploy new ceph-mon-node
Hint: an ls in the current directory now shows a Ceph configuration file, a monitor keyring, and a log file.
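Concretely, the directory should contain something like the following (the exact log file name varies by ceph-deploy version):

> ls ~/my-cluster
ceph.conf  ceph.mon.keyring  ceph-deploy-ceph.log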
Since this cluster has only two OSDs, add the following line to ceph.conf so that pools default to two replicas and the cluster can reach an active+clean state:
> osd pool default size = 2
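After ceph-deploy new plus this edit, the [global] section of ceph.conf looks roughly like this (the fsid is generated per cluster; the value below is a placeholder):

[global]
fsid = <generated-by-ceph-deploy>
mon_initial_members = ceph-mon-node
mon_host = 10.221.83.247
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd pool default size = 2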
2.4 Install Ceph on all nodes
> ceph-deploy install ceph-adm-node ceph-mon-node ceph-mds-node ceph-osd-node1 ceph-osd-node2 ceph-client-node
2.5 Install the Ceph monitor
> ceph-deploy mon create ceph-mon-node
> ceph-deploy gatherkeys ceph-mon-node
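gatherkeys pulls the bootstrap keyrings from the monitor into the working directory, which should afterwards contain entries like (names may vary slightly by release):

> ls ~/my-cluster
ceph.client.admin.keyring  ceph.bootstrap-osd.keyring  ceph.bootstrap-mds.keyring  ...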
> ssh ceph-osd-node{id}
> mkdir -p /var/local/osd{id}
> exit
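Concretely, for the two OSD nodes in this setup:

> ssh ceph-osd-node1 "mkdir -p /var/local/osd1"
> ssh ceph-osd-node2 "mkdir -p /var/local/osd2"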
# Create the OSDs
> ceph-deploy osd prepare ceph-osd-node1:/var/local/osd1 ceph-osd-node2:/var/local/osd2
# Activate the OSDs
> ceph-deploy osd activate ceph-osd-node1:/var/local/osd1 ceph-osd-node2:/var/local/osd2
Alternatively, osd create combines the prepare and activate steps into a single command:
> ceph-deploy osd create ceph-osd-node1:/var/local/osd1 ceph-osd-node2:/var/local/osd2
Verify the OSDs with ceph-deploy disk list:
> ceph-deploy disk list ceph-osd-node1 ceph-osd-node2
[ceph-osd-node1][INFO ] Running command: /usr/sbin/ceph-disk list
[ceph-osd-node1][INFO ] ----------------------------------------
[ceph-osd-node1][INFO ] ceph-0
[ceph-osd-node1][INFO ] ----------------------------------------
[ceph-osd-node1][INFO ] Path            /var/lib/ceph/osd/ceph-0
[ceph-osd-node1][INFO ] ID              0
[ceph-osd-node1][INFO ] Name            osd.0
[ceph-osd-node1][INFO ] Status          up
[ceph-osd-node1][INFO ] Reweight        1.0
[ceph-osd-node1][INFO ] Active          ok
[ceph-osd-node1][INFO ] Magic           ceph osd volume v026
[ceph-osd-node1][INFO ] Whoami          0
[ceph-osd-node1][INFO ] Journal path    /var/local/osd1/journal
[ceph-osd-node1][INFO ] ----------------------------------------
[ceph-osd-node2][INFO ] Running command: /usr/sbin/ceph-disk list
[ceph-osd-node2][INFO ] ----------------------------------------
[ceph-osd-node2][INFO ] ceph-1
[ceph-osd-node2][INFO ] ----------------------------------------
[ceph-osd-node2][INFO ] Path            /var/lib/ceph/osd/ceph-1
[ceph-osd-node2][INFO ] ID              1
[ceph-osd-node2][INFO ] Name            osd.1
[ceph-osd-node2][INFO ] Status          up
[ceph-osd-node2][INFO ] Reweight        1.0
[ceph-osd-node2][INFO ] Active          ok
[ceph-osd-node2][INFO ] Magic           ceph osd volume v026
[ceph-osd-node2][INFO ] Whoami          1
[ceph-osd-node2][INFO ] Journal path    /var/local/osd2/journal
[ceph-osd-node2][INFO ] ----------------------------------------
> ceph-deploy admin ceph-*        (all Ceph nodes)
For example:
> ceph-deploy admin ceph-osd-node1 ceph-osd-node2 ceph-mon-node ceph-mds-node ceph-client-node
3.4 Fix the permissions of ceph.client.admin.keyring:
> sudo chmod +r /etc/ceph/ceph.client.admin.keyring
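With the admin keyring distributed and readable, the cluster health can be checked from any node (a standard verification step, not shown in the original):

> ceph -s        (look for HEALTH_OK and all placement groups active+clean)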
> ceph-deploy mds create ceph-mds-node
This step reported an error:
> ceph-deploy install ceph-client-node
> ceph osd pool create cephfs_data 128        (a pg_num argument is required when creating a pool; 128 is an example value)
> ceph osd pool create cephfs_metadata 128
> ceph fs new cephfs cephfs_metadata cephfs_data
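Verify that the filesystem was created:

> ceph fs ls
name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]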
> ceph-deploy admin ceph-client-node
Fix the keyring file permissions:
> sudo chmod +r /etc/ceph/ceph.client.admin.keyring
monitor ip: 10.221.83.247
> sudo mkdir -p /mnt/mycephfs2
7.2.2 Install ceph-fuse
> sudo yum install -y ceph-fuse
> sudo ceph-fuse -m 10.221.83.247:6789 /mnt/mycephfs2
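A quick check that the mount succeeded:

> df -h /mnt/mycephfs2        (the Filesystem column should show ceph-fuse)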
To unmount:
> fusermount -u /mnt/mycephfs2/
Author:忆之独秀
Email:leaguenew@qq.com
Please credit the source: