Kubernetes Storage: GlusterFS Clusters in Detail

Contents
  • 1. GlusterFS overview
    • 1.1. GlusterFS introduction
    • 1.2. GlusterFS features
    • 1.3. GlusterFS volume modes
  • 2. Heketi overview
  • 3. Deploying heketi + GlusterFS
    • 3.1. Preparation
      • 3.1.1. Install the GlusterFS client on all nodes
      • 3.1.2. Label the nodes
      • 3.1.3. Load the required kernel modules on all nodes
    • 3.2. Create the GlusterFS cluster
      • 3.2.1. Download the installation files
      • 3.2.2. Create the cluster
      • 3.2.3. Check the gfs pods
    • 3.3. Create the heketi service
      • 3.3.1. Create the heketi service account
      • 3.3.2. Create the heketi permissions and secret
      • 3.3.3. Bootstrap heketi
    • 3.4. Create the gfs cluster
      • 3.4.1. Copy the heketi-cli binary
      • 3.4.2. Configure topology-sample
      • 3.4.3. Get the current heketi ClusterIP
      • 3.4.4. Create the gfs cluster with heketi
      • 3.4.5. Persist the heketi configuration
  • 4. Create the StorageClass
  • 5. Test dynamic provisioning through gfs
  • 6. How k8s creates the PV and PVC through heketi
  • 7. Test the data
  • 8. Test a Deployment
  • References
  • Summary

1. GlusterFS overview

1.1. GlusterFS introduction

GlusterFS is a scalable, distributed file system that aggregates disk storage from multiple servers into a single global namespace to provide shared file storage.

1.2. GlusterFS features

  • Scales to several petabytes of capacity
  • Handles thousands of clients
  • POSIX-compatible
  • Runs on commodity hardware; ordinary servers are enough
  • Works on top of any file system that supports extended attributes, such as ext4 or XFS
  • Supports industry-standard protocols such as NFS and SMB
  • Provides many advanced features such as replication, quotas, geo-replication, snapshots and bitrot detection
  • Can be tuned for different workloads

1.3. GlusterFS volume modes

GlusterFS supports several volume modes, including the following (example creation commands follow the list):

  • Distributed volume (the default mode), i.e. DHT: files are distributed by hash across the server nodes, each file stored on a single node.
  • Replicated mode, i.e. AFR: created with replica x; each file is replicated to x nodes.
  • Striped mode, i.e. Striped: created with stripe x; each file is split into chunks spread across x nodes (similar to RAID 0).
  • Distributed striped mode: requires at least 4 servers; created with stripe 2 across 4 nodes; a combination of DHT and Striped.
  • Distributed replicated mode: requires at least 4 servers; created with replica 2 across 4 nodes; a combination of DHT and AFR.
  • Striped replicated mode: requires at least 4 servers; created with stripe 2 replica 2 across 4 nodes; a combination of Striped and AFR.
  • All three combined: requires at least 8 servers; stripe 2 replica 2, with every 4 nodes forming one group.
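
For orientation, here is a minimal sketch of how these modes map to gluster volume create commands when GlusterFS is managed by hand (later in this article heketi issues the equivalent commands for you); the hostnames and brick paths are placeholders, not part of this article's environment:

# Replicated volume: every file is stored on all 3 bricks
gluster volume create vol_replica replica 3 \
  server1:/data/brick1 server2:/data/brick1 server3:/data/brick1
# Distributed-replicated volume: 4 bricks form 2 replica pairs, files are hashed across the pairs
gluster volume create vol_dist_replica replica 2 \
  server1:/data/brick1 server2:/data/brick1 server3:/data/brick1 server4:/data/brick1
gluster volume start vol_replica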

2. Heketi overview

Heketi is a framework that provides a RESTful API for managing gfs volumes. It enables dynamic storage provisioning on cloud platforms such as Kubernetes, OpenShift and OpenStack, supports managing multiple gfs clusters, and makes gfs easier for administrators to operate. In a Kubernetes cluster, pods send their storage requests to heketi, and heketi drives the gfs cluster to create the corresponding volumes.

Heketi dynamically selects bricks across the cluster to build the requested volumes, ensuring that replicas are spread across different failure domains.

Heketi also supports any number of GlusterFS clusters, so the consuming cloud servers are not tied to a single GlusterFS cluster.
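
As a preview of the interface used throughout this article, heketi can be driven either over plain REST or with the heketi-cli client. A minimal sketch, assuming the server URL and admin key have already been set up as in section 3.4:

# List the clusters heketi manages
heketi-cli --server $HEKETI_CLI_SERVER --user admin --secret 'My Secret' cluster list
# Ask heketi for a 1 GiB volume replicated across 3 bricks
heketi-cli --server $HEKETI_CLI_SERVER --user admin --secret 'My Secret' volume create --size=1 --replica=3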

3. Deploying heketi + GlusterFS

Environment: the latest k8s, 1.16.2, installed with kubeadm, consisting of 1 master + 2 nodes, with flannel as the network plugin. kubeadm taints the master node by default; to let the gfs cluster span all three machines, the taint is removed manually first.

The GlusterFS volume mode used in this article is the replicated mode.

Also, GlusterFS needs to run privileged inside the Kubernetes cluster, which requires the --allow-privileged=true flag on kube-apiserver; with this kubeadm version it is already enabled by default.

[root@k8s-master-01 ~]# kubectl describe nodes k8s-master-01 |grep Taint
Taints:             node-role.kubernetes.io/master:NoSchedule
[root@k8s-master-01 ~]# kubectl taint node k8s-master-01 node-role.kubernetes.io/master-
node/k8s-master-01 untainted
[root@k8s-master-01 ~]# kubectl describe nodes k8s-master-01 |grep Taint
Taints:             <none>

3.1. Preparation

For pods to use gfs as backend storage, the gfs client tools must be installed in advance on every node that may run such pods; other storage backends work similarly.

3.1.1. Install the GlusterFS client on all nodes

$ yum install -y glusterfs glusterfs-fuse -y

3.1.2. Label the nodes

Label the Kubernetes nodes that will run gfs, because gfs is installed via a DaemonSet in the Kubernetes cluster.

By default a DaemonSet schedules a pod on every node, unless a node selector is set beforehand so that only nodes carrying the chosen label are used.

The DaemonSet in the installation manifest is set to run on nodes labeled storagenode=glusterfs, so that label has to be applied to the nodes first.

[root@k8s-master-01 ~]# kubectl get nodes
NAME            STATUS   ROLES    AGE     VERSION
k8s-master-01   Ready    master   5d      v1.16.2
k8s-node-01     Ready    <none>   4d23h   v1.16.2
k8s-node-02     Ready    <none>   4d23h   v1.16.2
[root@k8s-master-01 ~]# kubectl label node k8s-master-01 storagenode=glusterfs
node/k8s-master-01 labeled
[root@k8s-master-01 ~]# kubectl label node k8s-node-01 storagenode=glusterfs
node/k8s-node-01 labeled
[root@k8s-master-01 ~]# kubectl label node k8s-node-02 storagenode=glusterfs
node/k8s-node-02 labeled
[root@k8s-master-01 ~]# kubectl get nodes --show-labels
NAME            STATUS   ROLES    AGE     VERSION   LABELS
k8s-master-01   Ready    master   5d      v1.16.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master-01,kubernetes.io/os=linux,node-role.kubernetes.io/master=,storagenode=glusterfs
k8s-node-01     Ready    <none>   4d23h   v1.16.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node-01,kubernetes.io/os=linux,storagenode=glusterfs
k8s-node-02     Ready    <none>   4d23h   v1.16.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node-02,kubernetes.io/os=linux,storagenode=glusterfs

3.1.3. Load the required kernel modules on all nodes

$ modprobe dm_snapshot
$ modprobe dm_mirror
$ modprobe dm_thin_pool

Check that they are loaded:

$ lsmod | grep dm_snapshot
$ lsmod | grep dm_mirror
$ lsmod | grep dm_thin_pool
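
These modprobe calls do not survive a reboot; on systemd-based distributions such as CentOS 7, a small config file makes the modules load at boot (a sketch; the file name is chosen freely):

cat <<EOF > /etc/modules-load.d/glusterfs.conf
dm_snapshot
dm_mirror
dm_thin_pool
EOF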

3.2. Create the GlusterFS cluster

The gfs cluster is deployed here in containers; it can also be deployed the traditional way. In production the gfs cluster is best deployed outside the Kubernetes cluster, in which case you only need to create the corresponding endpoints (see the sketch below). A DaemonSet is used here so that every labeled node runs one gfs service, and each of those nodes has a disk available to provide storage.
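
For reference, when the gfs cluster lives outside Kubernetes, the endpoints mentioned above would look roughly like the sketch below; the IPs are placeholders for the external gluster servers and the port value is only a dummy required by the API:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs-cluster
subsets:
- addresses:
  - ip: 192.168.2.100
  - ip: 192.168.2.101
  ports:
  - port: 1
---
apiVersion: v1
kind: Service
metadata:
  name: glusterfs-cluster
spec:
  ports:
  - port: 1
EOF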

3.2.1. Download the installation files

[root@k8s-master-01 glusterfs]# pwd
/root/manifests/glusterfs
[root@k8s-master-01 glusterfs]# wget https://github.com/heketi/heketi/releases/download/v7.0.0/heketi-client-v7.0.0.linux.amd64.tar.gz
[root@k8s-master-01 glusterfs]# tar xf heketi-client-v7.0.0.linux.amd64.tar.gz
[root@k8s-master-01 glusterfs]# cd heketi-client/share/heketi/kubernetes/
[root@k8s-master-01 kubernetes]# pwd
/root/manifests/glusterfs/heketi-client/share/heketi/kubernetes

In this cluster, the apiVersion of the DaemonSet controller used below (and of the Deployment controllers used later) has moved to apps/v1, so the downloaded JSON files must be edited before they are applied, and each manifest must declare a selector. Otherwise you will hit the errors shown below:

[root@k8s-master-01 kubernetes]# kubectl apply -f glusterfs-daemonset.json
error: unable to recognize "glusterfs-daemonset.json": no matches for kind "DaemonSet" in version "extensions/v1beta1"

Change the apiVersion (a sed one-liner for this edit is shown after the selector snippet below) from

"apiVersion": "extensions/v1beta1"

to

"apiVersion": "apps/v1",

Then declare the selector; without it you get:

[root@k8s-master-01 kubernetes]# kubectl apply -f glusterfs-daemonset.json
error: error validating "glusterfs-daemonset.json": error validating data: ValidationError(DaemonSet.spec): missing required field "selector" in io.k8s.api.apps.v1.DaemonSetSpec; if you choose to ignore these errors, turn validation off with --validate=false

The selector is tied to the pod template labels further down in the file via matchLabels:

"spec": {
    "selector": {
        "matchLabels": {
            "glusterfs-node": "daemonset"
        }
    },
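
The apiVersion change can be scripted if you prefer; the selector block still has to be added by hand. A sketch, assuming GNU sed (a .bak backup is kept):

sed -i.bak 's#"extensions/v1beta1"#"apps/v1"#' glusterfs-daemonset.json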

3.2.2. Create the cluster

[root@k8s-master-01 kubernetes]# kubectl apply -f glusterfs-daemonset.json
daemonset.apps/glusterfs created

Note:

  • The default mount layout is used here; another disk can be used as the gfs working directory
  • Everything is created in the default namespace here; a different namespace can be specified manually

3.2.3. Check the gfs pods

[root@k8s-master-01 kubernetes]# kubectl get pods
NAME              READY   STATUS    RESTARTS   AGE
glusterfs-9tttf   1/1     Running   0          1m10s
glusterfs-gnrnr   1/1     Running   0          1m10s
glusterfs-v92j5   1/1     Running   0          1m10s

3.3. Create the heketi service

3.3.1. Create the heketi service account

[root@k8s-master-01 kubernetes]# cat heketi-service-account.json
{
  "apiVersion": "v1",
  "kind": "ServiceAccount",
  "metadata": {
    "name": "heketi-service-account"
  }
}
[root@k8s-master-01 kubernetes]# kubectl apply -f heketi-service-account.json
serviceaccount/heketi-service-account created
[root@k8s-master-01 kubernetes]# kubectl get sa
NAME                     SECRETS   AGE
default                  1         71m
heketi-service-account   1         5s

3.3.2. Create the heketi permissions and secret

[root@k8s-master-01 kubernetes]# kubectl create clusterrolebinding heketi-gluster-admin --clusterrole=edit --serviceaccount=default:heketi-service-account
clusterrolebinding.rbac.authorization.k8s.io/heketi-gluster-admin created
[root@k8s-master-01 kubernetes]# kubectl create secret generic heketi-config-secret --from-file=./heketi.json
secret/heketi-config-secret created
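
The admin key that heketi-cli will need later comes from the jwt section of this heketi.json, so it is worth checking (or changing) it before packaging the file into the secret; a quick look, assuming jq is installed:

jq '.jwt.admin.key' ./heketi.json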

3.3.3. Bootstrap heketi

As before, the API version has to be changed and a selector declaration added.

[root@k8s-master-01 kubernetes]# vim heketi-bootstrap.json
...
      "kind": "Deployment",
      "apiVersion": "apps/v1"
...
      "spec": {
        "selector": {
          "matchLabels": {
            "name": "deploy-heketi"
          }
        },
...
[root@k8s-master-01 kubernetes]# kubectl create -f heketi-bootstrap.json
service/deploy-heketi created
deployment.apps/deploy-heketi created
[root@k8s-master-01 kubernetes]# vim heketi-deployment.json
...
      "kind": "Deployment",
      "apiVersion": "apps/v1",
...
      "spec": {
        "selector": {
          "matchLabels": {
            "name": "heketi"
          }
        },
        "replicas": 1,
...
[root@k8s-master-01 kubernetes]# kubectl apply -f heketi-deployment.json
secret/heketi-db-backup created
service/heketi created
deployment.apps/heketi created
[root@k8s-master-01 kubernetes]# kubectl get pods
NAME                             READY   STATUS    RESTARTS   AGE
deploy-heketi-6c687b4b84-p7mcr   1/1     Running   0          72s
heketi-68795ccd8-9726s           0/1     ContainerCreating   0          50s
glusterfs-9tttf                  1/1     Running   0          48m
glusterfs-gnrnr                  1/1     Running   0          48m
glusterfs-v92j5                  1/1     Running   0          48m

3.4. Create the gfs cluster

3.4.1. Copy the heketi-cli binary

Copy heketi-cli to /usr/local/bin:

[root@k8s-master-01 heketi-client]# pwd
/root/manifests/glusterfs/heketi-client
[root@k8s-master-01 heketi-client]# cp bin/heketi-cli /usr/local/bin/
[root@k8s-master-01 heketi-client]# heketi-cli -v
heketi-cli v7.0.0

3.4.2. Configure topology-sample

Edit topology-sample: manage is the hostname of the node running the gfs management service, storage is the node's IP address, and device is a raw block device on that node; the disks used to provide storage should preferably be raw devices without partitions.
A new disk therefore has to be prepared on every gfs node in advance; here a 10 GB /dev/sdb disk has been added to each of the three nodes.
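
Heketi will refuse a device that already carries a file system or LVM signature, so if /dev/sdb has been used before, wipe it first (destructive; a sketch only):

wipefs -a /dev/sdb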

[root@k8s-master-01 ~]# lsblk
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0   50G  0 disk
├─sda1            8:1    0    2G  0 part /boot
└─sda2            8:2    0   48G  0 part
  ├─centos-root 253:0    0   44G  0 lvm  /
  └─centos-swap 253:1    0    4G  0 lvm
sdb               8:16   0   10G  0 disk
sr0              11:0    1 1024M  0 rom
[root@k8s-node-01 ~]# lsblk
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0   50G  0 disk
├─sda1            8:1    0    2G  0 part /boot
└─sda2            8:2    0   48G  0 part
  ├─centos-root 253:0    0   44G  0 lvm  /
  └─centos-swap 253:1    0    4G  0 lvm
sdb               8:16   0   10G  0 disk
sr0              11:0    1 1024M  0 rom
[root@k8s-node-02 ~]# lsblk
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0   50G  0 disk
├─sda1            8:1    0    2G  0 part /boot
└─sda2            8:2    0   48G  0 part
  ├─centos-root 253:0    0   44G  0 lvm  /
  └─centos-swap 253:1    0    4G  0 lvm
sdb               8:16   0   10G  0 disk
sr0              11:0    1 1024M  0 rom

Configure topology-sample:

{
    "clusters": [
        {
            "nodes": [
                {
                    "node": {
                        "hostnames": {
                            "manage": [
                                "k8s-master-01"
                            ],
                            "storage": [
                                "192.168.2.10"
                            ]
                        },
                        "zone": 1
                    },
                    "devices": [
                        {
                            "name": "/dev/sdb",
                            "destroydata": false
                        }
                    ]
                },
                {
                    "node": {
                        "hostnames": {
                            "manage": [
                                "k8s-node-01"
                            ],
                            "storage": [
                                "192.168.2.11"
                            ]
                        },
                        "zone": 1
                    },
                    "devices": [
                        {
                            "name": "/dev/sdb",
                            "destroydata": false
                        }
                    ]
                },
                {
                    "node": {
                        "hostnames": {
                            "manage": [
                                "k8s-node-02"
                            ],
                            "storage": [
                                "192.168.2.12"
                            ]
                        },
                        "zone": 1
                    },
                    "devices": [
                        {
                            "name": "/dev/sdb",
                            "destroydata": false
                        }
                    ]
                }
            ]
        }
    ]
}

3.4.3. Get the current heketi ClusterIP

Look up heketi's current ClusterIP and export it as an environment variable:

[root@k8s-master-01 kubernetes]# kubectl get svc|grep heketi
deploy-heketi   ClusterIP   10.1.241.99   <none>        8080/TCP   3m18s
[root@k8s-master-01 kubernetes]# curl http://10.1.241.99:8080/hello
Hello from Heketi
[root@k8s-master-01 kubernetes]# export HEKETI_CLI_SERVER=http://10.1.241.99:8080
[root@k8s-master-01 kubernetes]# echo $HEKETI_CLI_SERVER
http://10.1.241.99:8080

3.4.4. Create the gfs cluster with heketi

Running the following command to create the gfs cluster fails with Invalid JWT token: Token missing iss claim:

[root@k8s-master-01 kubernetes]# heketi-cli topology load --json=topology-sample.json
Error: Unable to get topology information: Invalid JWT token: Token missing iss claim

This is because newer heketi versions require the username and the admin key to be supplied when creating the gfs cluster; the values are configured in heketi.json (packaged into heketi-config-secret earlier), i.e.:

[root@k8s-master-01 kubernetes]# heketi-cli -s $HEKETI_CLI_SERVER --user admin --secret 'My Secret' topology load --json=topology-sample.json
Creating cluster ... ID: 1c5ffbd86847e5fc1562ef70c033292e
        Allowing file volumes on cluster.
        Allowing block volumes on cluster.
        Creating node k8s-master-01 ... ID: b6100a5af9b47d8c1f19be0b2b4d8276
                Adding device /dev/sdb ... OK
        Creating node k8s-node-01 ... ID: 04740cac8d42f56e354c94bdbb7b8e34
                Adding device /dev/sdb ... OK
        Creating node k8s-node-02 ... ID: 1b33ad0dba20eaf23b5e3a4845e7cdb4
                Adding device /dev/sdb ... OK

After heketi-cli topology load has run, heketi has done roughly the following on the servers (quick verification commands follow the list):

  • Entering any glusterfs pod and running gluster peer status shows that the peers have been added to the trusted storage pool (TSP).
  • On every node running a gluster pod, a VG has been created automatically from the raw disk device listed in topology-sample.json.
  • Each disk device yields one VG; PVCs created later are carved out of that VG as LVs.
  • heketi-cli topology info shows the topology, including each device's ID, the corresponding VG ID, and the total, used and free space.
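
A quick way to verify these effects by hand (glusterfs-lrdz7 is one of the glusterfs pods in this environment; any of them will do):

kubectl exec -it glusterfs-lrdz7 -- gluster peer status
kubectl exec -it glusterfs-lrdz7 -- vgs
kubectl exec -it glusterfs-lrdz7 -- pvs
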
Part of the heketi log output:
[root@k8s-master-01 manifests]# kubectl logs -f deploy-heketi-6c687b4b84-l5b6j
...
[kubeexec] DEBUG 2019/10/23 02:17:44 heketi/pkg/remoteexec/log/commandlog.go:46:log.(*CommandLogger).Success: Ran command [pvs -o pv_name,pv_uuid,vg_name --reportformat=json /dev/sdb] on [pod:glusterfs-lrdz7 c:glusterfs ns:default (from host:k8s-node-01 selector:glusterfs-node)]: Stdout [  {
      "report": [
          {
              "pv": [
                  {"pv_name":"/dev/sdb", "pv_uuid":"1UkSIV-RYt1-QBNw-KyAR-Drm5-T9NG-UmO313", "vg_name":"vg_398329cc70361dfd4baa011d811de94a"}
              ]
          }
      ]
  }
]: Stderr [  WARNING: Device /dev/sdb not initialized in udev database even after waiting 10000000 microseconds.
  WARNING: Device /dev/centos/root not initialized in udev database even after waiting 10000000 microseconds.
  WARNING: Device /dev/sda1 not initialized in udev database even after waiting 10000000 microseconds.
  WARNING: Device /dev/centos/swap not initialized in udev database even after waiting 10000000 microseconds.
  WARNING: Device /dev/sda2 not initialized in udev database even after waiting 10000000 microseconds.
  WARNING: Device /dev/sdb not initialized in udev database even after waiting 10000000 microseconds.
]
[kubeexec] DEBUG 2019/10/23 02:17:44 heketi/pkg/remoteexec/log/commandlog.go:34:log.(*CommandLogger).Before: Will run command [udevadm info --query=symlink --name=/dev/sdb] on [pod:glusterfs-lrdz7 c:glusterfs ns:default (from host:k8s-node-01 selector:glusterfs-node)]
[kubeexec] DEBUG 2019/10/23 02:17:44 heketi/pkg/remoteexec/kube/exec.go:72:kube.ExecCommands: Current kube connection count: 0
[kubeexec] DEBUG 2019/10/23 02:17:44 heketi/pkg/remoteexec/log/commandlog.go:46:log.(*CommandLogger).Success: Ran command [udevadm info --query=symlink --name=/dev/sdb] on [pod:glusterfs-lrdz7 c:glusterfs ns:default (from host:k8s-node-01 selector:glusterfs-node)]: Stdout [
]: Stderr []
[kubeexec] DEBUG 2019/10/23 02:17:44 heketi/pkg/remoteexec/log/commandlog.go:34:log.(*CommandLogger).Before: Will run command [vgdisplay -c vg_398329cc70361dfd4baa011d811de94a] on [pod:glusterfs-lrdz7 c:glusterfs ns:default (from host:k8s-node-01 selector:glusterfs-node)]
[kubeexec] DEBUG 2019/10/23 02:17:44 heketi/pkg/remoteexec/kube/exec.go:72:kube.ExecCommands: Current kube connection count: 0
[negroni] 2019-10-23T02:17:44Z | 200 |   93.868µs | 10.1.241.99:8080 | GET /queue/3d0b6edb0faa67e8efd752397f314a6f
[kubeexec] DEBUG 2019/10/23 02:17:44 heketi/pkg/remoteexec/log/commandlog.go:46:log.(*CommandLogger).Success: Ran command [vgdisplay -c vg_398329cc70361dfd4baa011d811de94a] on [pod:glusterfs-lrdz7 c:glusterfs ns:default (from host:k8s-node-01 selector:glusterfs-node)]: Stdout [  vg_398329cc70361dfd4baa011d811de94a:r/w:772:-1:0:0:0:-1:0:1:1:10350592:4096:2527:0:2527:YCPG9X-b270-1jf2-VwKX-ycpZ-OI9u-7ZidOc
]: Stderr []
[cmdexec] DEBUG 2019/10/23 02:17:44 heketi/executors/cmdexec/device.go:273:cmdexec.(*CmdExecutor).getVgSizeFromNode: /dev/sdb in k8s-node-01 has TotalSize:10350592, FreeSize:10350592, UsedSize:0
[heketi] INFO 2019/10/23 02:17:44 Added device /dev/sdb
[asynchttp] INFO 2019/10/23 02:17:44 Completed job 3d0b6edb0faa67e8efd752397f314a6f in 3m2.694238221s
[negroni] 2019-10-23T02:17:45Z | 204 |   105.23µs | 10.1.241.99:8080 | GET /queue/3d0b6edb0faa67e8efd752397f314a6f
[cmdexec] INFO 2019/10/23 02:17:45 Check Glusterd service status in node k8s-node-01
[kubeexec] DEBUG 2019/10/23 02:17:45 heketi/pkg/remoteexec/log/commandlog.go:34:log.(*CommandLogger).Before: Will run command [systemctl status glusterd] on [pod:glusterfs-lrdz7 c:glusterfs ns:default (from host:k8s-node-01 selector:glusterfs-node)]
[kubeexec] DEBUG 2019/10/23 02:17:45 heketi/pkg/remoteexec/kube/exec.go:72:kube.ExecCommands: Current kube connection count: 0
[kubeexec] DEBUG 2019/10/23 02:17:45 heketi/pkg/remoteexec/log/commandlog.go:41:log.(*CommandLogger).Success: Ran command [systemctl status glusterd] on [pod:glusterfs-lrdz7 c:glusterfs ns:default (from host:k8s-node-01 selector:glusterfs-node)]: Stdout filtered, Stderr filtered
[heketi] INFO 2019/10/23 02:17:45 Adding node k8s-node-02
[negroni] 2019-10-23T02:17:45Z | 202 |   146.998544ms | 10.1.241.99:8080 | POST /nodes
[asynchttp] INFO 2019/10/23 02:17:45 Started job 8da70b6fd6fec1d61c4ba1cd0fe27fe5
[cmdexec] INFO 2019/10/23 02:17:45 Probing: k8s-node-01 -> 192.168.2.12
[negroni] 2019-10-23T02:17:45Z | 200 |   74.577µs | 10.1.241.99:8080 | GET /queue/8da70b6fd6fec1d61c4ba1cd0fe27fe5
[kubeexec] DEBUG 2019/10/23 02:17:45 heketi/pkg/remoteexec/log/commandlog.go:34:log.(*CommandLogger).Before: Will run command [gluster --mode=script --timeout=600 peer probe 192.168.2.12] on [pod:glusterfs-lrdz7 c:glusterfs ns:default (from host:k8s-node-01 selector:glusterfs-node)]
[kubeexec] DEBUG 2019/10/23 02:17:45 heketi/pkg/remoteexec/kube/exec.go:72:kube.ExecCommands: Current kube connection count: 0
[negroni] 2019-10-23T02:17:46Z | 200 |   79.893µs | 10.1.241.99:8080 | GET /queue/8da70b6fd6fec1d61c4ba1cd0fe27fe5
[kubeexec] DEBUG 2019/10/23 02:17:46 heketi/pkg/remoteexec/log/commandlog.go:46:log.(*CommandLogger).Success: Ran command [gluster --mode=script --timeout=600 peer probe 192.168.2.12] on [pod:glusterfs-lrdz7 c:glusterfs ns:default (from host:k8s-node-01 selector:glusterfs-node)]: Stdout [peer probe: success.
]: Stderr []
[cmdexec] INFO 2019/10/23 02:17:46 Setting snapshot limit
[kubeexec] DEBUG 2019/10/23 02:17:46 heketi/pkg/remoteexec/log/commandlog.go:34:log.(*CommandLogger).Before: Will run command [gluster --mode=script --timeout=600 snapshot config snap-max-hard-limit 14] on [pod:glusterfs-lrdz7 c:glusterfs ns:default (from host:k8s-node-01 selector:glusterfs-node)]
[kubeexec] DEBUG 2019/10/23 02:17:46 heketi/pkg/remoteexec/kube/exec.go:72:kube.ExecCommands: Current kube connection count: 0
[kubeexec] DEBUG 2019/10/23 02:17:46 heketi/pkg/remoteexec/log/commandlog.go:46:log.(*CommandLogger).Success: Ran command [gluster --mode=script --timeout=600 snapshot config snap-max-hard-limit 14] on [pod:glusterfs-lrdz7 c:glusterfs ns:default (from host:k8s-node-01 selector:glusterfs-node)]: Stdout [snapshot config: snap-max-hard-limit for System set successfully
]: Stderr []
[heketi] INFO 2019/10/23 02:17:46 Added node 1b33ad0dba20eaf23b5e3a4845e7cdb4
[asynchttp] INFO 2019/10/23 02:17:46 Completed job 8da70b6fd6fec1d61c4ba1cd0fe27fe5 in 488.404011ms
[negroni] 2019-10-23T02:17:46Z | 303 |   80.712µs | 10.1.241.99:8080 | GET /queue/8da70b6fd6fec1d61c4ba1cd0fe27fe5
[negroni] 2019-10-23T02:17:46Z | 200 |   242.595µs | 10.1.241.99:8080 | GET /nodes/1b33ad0dba20eaf23b5e3a4845e7cdb4
[heketi] INFO 2019/10/23 02:17:46 Adding device /dev/sdb to node 1b33ad0dba20eaf23b5e3a4845e7cdb4
[negroni] 2019-10-23T02:17:46Z | 202 |   696.018µs | 10.1.241.99:8080 | POST /devices
[asynchttp] INFO 2019/10/23 02:17:46 Started job 21af2069b74762a5521a46e2b52e7d6a
[negroni] 2019-10-23T02:17:46Z | 200 |   82.354µs | 10.1.241.99:8080 | GET /queue/21af2069b74762a5521a46e2b52e7d6a
[kubeexec] DEBUG 2019/10/23 02:17:46 heketi/pkg/remoteexec/log/commandlog.go:34:log.(*CommandLogger).Before: Will run command [pvcreate -qq --metadatasize=128M --dataalignment=256K '/dev/sdb'] on [pod:glusterfs-l2lsv c:glusterfs ns:default (from host:k8s-node-02 selector:glusterfs-node)]
[kubeexec] DEBUG 2019/10/23 02:17:46 heketi/pkg/remoteexec/kube/exec.go:72:kube.ExecCommands: Current kube connection count: 0
...

3.4.5. Persist the heketi configuration

The heketi deployed above has no persistent volume configured, so its configuration could be lost if the heketi pod restarts. A persistent volume is therefore created now to persist heketi's data, using the dynamic storage provided by gfs itself; other persistence methods would also work.
Install device-mapper* on all nodes:

yum install -y device-mapper*

Save the configuration as a file and create the persistence-related resources:

[root@k8s-master-01 kubernetes]# heketi-cli -s $HEKETI_CLI_SERVER --user admin --secret 'My Secret' setup-openshift-heketi-storage
Saving heketi-storage.json
[root@k8s-master-01 kubernetes]# kubectl apply -f heketi-storage.json
secret/heketi-storage-secret created
endpoints/heketi-storage-endpoints created
service/heketi-storage-endpoints created
job.batch/heketi-storage-copy-job created

Delete the intermediate bootstrap resources:

[root@k8s-master-01 kubernetes]# kubectl delete all,svc,jobs,deployment,secret --selector="deploy-heketi"
pod "deploy-heketi-6c687b4b84-l5b6j" deleted
service "deploy-heketi" deleted
deployment.apps "deploy-heketi" deleted
replicaset.apps "deploy-heketi-6c687b4b84" deleted
job.batch "heketi-storage-copy-job" deleted
secret "heketi-storage-secret" deleted

Create the persistent heketi:

[root@k8s-master-01 kubernetes]# kubectl apply -f heketi-deployment.json
secret/heketi-db-backup created
service/heketi created
deployment.apps/heketi created
[root@k8s-master-01 kubernetes]# kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
glusterfs-cqw5d          1/1     Running   0          41m
glusterfs-l2lsv          1/1     Running   0          41m
glusterfs-lrdz7          1/1     Running   0          41m
heketi-68795ccd8-m8x55   1/1     Running   0          32s

Check the persistent heketi's svc and re-export the environment variable:

[root@k8s-master-01 kubernetes]# kubectl get svc
NAME                       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
heketi                     ClusterIP   10.1.45.61   <none>        8080/TCP   2m9s
heketi-storage-endpoints   ClusterIP   10.1.26.73   <none>        1/TCP      4m58s
kubernetes                 ClusterIP   10.1.0.1     <none>        443/TCP    14h
[root@k8s-master-01 kubernetes]# export HEKETI_CLI_SERVER=http://10.1.45.61:8080
[root@k8s-master-01 kubernetes]# curl http://10.1.45.61:8080/hello
Hello from Heketi

Check the gfs cluster information; see the official documentation for more operations:

[root@k8s-master-01 kubernetes]# heketi-cli -s $HEKETI_CLI_SERVER --user admin --secret 'My Secret' topology info

Cluster Id: 1c5ffbd86847e5fc1562ef70c033292e

    File:  true
    Block: true

    Volumes:

        Name: heketidbstorage
        Size: 2
        Id: b25f4b627cf66279bfe19e8a01e9e85d
        Cluster Id: 1c5ffbd86847e5fc1562ef70c033292e
        Mount: 192.168.2.11:heketidbstorage
        Mount Options: backup-volfile-servers=192.168.2.12,192.168.2.10
        Durability Type: replicate
        Replica: 3
        Snapshot: Disabled

                Bricks:
                        Id: 3ab6c19b8fe0112575ba04d58573a404
                        Path: /var/lib/heketi/mounts/vg_703e3662cbd8ffb24a6401bb3c3c41fa/brick_3ab6c19b8fe0112575ba04d58573a404/brick
                        Size (GiB): 2
                        Node: b6100a5af9b47d8c1f19be0b2b4d8276
                        Device: 703e3662cbd8ffb24a6401bb3c3c41fa

                        Id: d1fa386f2ec9954f4517431163f67dea
                        Path: /var/lib/heketi/mounts/vg_398329cc70361dfd4baa011d811de94a/brick_d1fa386f2ec9954f4517431163f67dea/brick
                        Size (GiB): 2
                        Node: 04740cac8d42f56e354c94bdbb7b8e34
                        Device: 398329cc70361dfd4baa011d811de94a

                        Id: d2b0ae26fa3f0eafba407b637ca0d06b
                        Path: /var/lib/heketi/mounts/vg_7c791bbb90f710123ba431a7cdde8d0b/brick_d2b0ae26fa3f0eafba407b637ca0d06b/brick
                        Size (GiB): 2
                        Node: 1b33ad0dba20eaf23b5e3a4845e7cdb4
                        Device: 7c791bbb90f710123ba431a7cdde8d0b

    Nodes:

        Node Id: 04740cac8d42f56e354c94bdbb7b8e34
        State: online
        Cluster Id: 1c5ffbd86847e5fc1562ef70c033292e
        Zone: 1
        Management Hostnames: k8s-node-01
        Storage Hostnames: 192.168.2.11
        Devices:
                Id:398329cc70361dfd4baa011d811de94a   Name:/dev/sdb            State:online    Size (GiB):9       Used (GiB):2       Free (GiB):7
                        Bricks:
                                Id:d1fa386f2ec9954f4517431163f67dea   Size (GiB):2       Path: /var/lib/heketi/mounts/vg_398329cc70361dfd4baa011d811de94a/brick_d1fa386f2ec9954f4517431163f67dea/brick

        Node Id: 1b33ad0dba20eaf23b5e3a4845e7cdb4
        State: online
        Cluster Id: 1c5ffbd86847e5fc1562ef70c033292e
        Zone: 1
        Management Hostnames: k8s-node-02
        Storage Hostnames: 192.168.2.12
        Devices:
                Id:7c791bbb90f710123ba431a7cdde8d0b   Name:/dev/sdb            State:online    Size (GiB):9       Used (GiB):2       Free (GiB):7
                        Bricks:
                                Id:d2b0ae26fa3f0eafba407b637ca0d06b   Size (GiB):2       Path: /var/lib/heketi/mounts/vg_7c791bbb90f710123ba431a7cdde8d0b/brick_d2b0ae26fa3f0eafba407b637ca0d06b/brick

        Node Id: b6100a5af9b47d8c1f19be0b2b4d8276
        State: online
        Cluster Id: 1c5ffbd86847e5fc1562ef70c033292e
        Zone: 1
        Management Hostnames: k8s-master-01
        Storage Hostnames: 192.168.2.10
        Devices:
                Id:703e3662cbd8ffb24a6401bb3c3c41fa   Name:/dev/sdb            State:online    Size (GiB):9       Used (GiB):2       Free (GiB):7
                        Bricks:
                                Id:3ab6c19b8fe0112575ba04d58573a404   Size (GiB):2       Path: /var/lib/heketi/mounts/vg_703e3662cbd8ffb24a6401bb3c3c41fa/brick_3ab6c19b8fe0112575ba04d58573a404/brick

4. Create the StorageClass

[root@k8s-master-01 kubernetes]# vim storageclass-gfs-heketi.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gluster-heketi
provisioner: kubernetes.io/glusterfs
reclaimPolicy: Retain
parameters:
  resturl: "http://10.1.45.61:8080"
  restauthenabled: "true"
  restuser: "admin"
  restuserkey: "My Secret"
  gidMin: "40000"
  gidMax: "50000"
  volumetype: "replicate:3"
allowVolumeExpansion: true
[root@k8s-master-01 kubernetes]# kubectl apply -f storageclass-gfs-heketi.yaml
storageclass.storage.k8s.io/gluster-heketi created

Parameter notes (a secret-based alternative to restuserkey is sketched after the list):

  • reclaimPolicy: Retain is the reclaim policy (the default is Delete); with Retain, deleting the PVC does not delete the PV or the volume and bricks (LVM) created on the backend.
  • gidMin and gidMax: the smallest and largest GIDs that may be used
  • volumetype: the volume type and count; a replicated volume is used here, and the count must be greater than 1
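
Putting restuserkey into the StorageClass in clear text works but is discouraged; the kubernetes.io/glusterfs provisioner can instead read the key from a Secret of type kubernetes.io/glusterfs via the secretName/secretNamespace parameters. A sketch, assuming the heketi admin key is still 'My Secret' (the StorageClass and Secret names here are illustrative):

kubectl create secret generic heketi-admin-secret \
  --type=kubernetes.io/glusterfs --from-literal=key='My Secret'
cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gluster-heketi-secret
provisioner: kubernetes.io/glusterfs
reclaimPolicy: Retain
parameters:
  resturl: "http://10.1.45.61:8080"
  restauthenabled: "true"
  restuser: "admin"
  secretNamespace: "default"
  secretName: "heketi-admin-secret"
  gidMin: "40000"
  gidMax: "50000"
  volumetype: "replicate:3"
allowVolumeExpansion: true
EOF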

5. Test dynamic provisioning through gfs

Create a pod that uses a dynamically provisioned PV; storageClassName references the StorageClass created above, gluster-heketi:

[root@k8s-master-01 kubernetes]# vim pod-use-pvc.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-use-pvc
spec:
  containers:
  - name: pod-use-pvc
    image: busybox
    command:
      - sleep
      - "3600"
    volumeMounts:
    - name: gluster-volume
      mountPath: "/pv-data"
      readOnly: false
  volumes:
  - name: gluster-volume
    persistentVolumeClaim:
      claimName: pvc-gluster-heketi

---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-gluster-heketi
spec:
  accessModes: [ "ReadWriteOnce" ]
  storageClassName: "gluster-heketi"
  resources:
    requests:
      storage: 1Gi

Create the pod and check the resulting PV and PVC:

[root@k8s-master-01 kubernetes]# kubectl apply -f pod-use-pvc.yaml
pod/pod-use-pvc created
persistentvolumeclaim/pvc-gluster-heketi created
[root@k8s-master-01 kubernetes]# kubectl get pv,pvc
NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                        STORAGECLASS     REASON   AGE
persistentvolume/pvc-0fb9b246-4da4-491c-b6a2-4f38489ab11c   1Gi        RWO            Retain           Bound    default/pvc-gluster-heketi   gluster-heketi            57s

NAME                                       STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS     AGE
persistentvolumeclaim/pvc-gluster-heketi   Bound    pvc-0fb9b246-4da4-491c-b6a2-4f38489ab11c   1Gi        RWO            gluster-heketi   62s

6. How k8s creates the PV and PVC through heketi

The PVC requests a PV from the StorageClass; the details can be followed in the heketi pod's logs.
First, heketi receives the request and runs a job that creates three bricks, creating the corresponding directories on the three gfs nodes:

[heketi] INFO 2019/10/23 03:08:36 Allocating brick set #0
[negroni] 2019-10-23T03:08:36Z | 202 |   56.193603ms | 10.1.45.61:8080 | POST /volumes
[asynchttp] INFO 2019/10/23 03:08:36 Started job 3ec932315085609bc54ead6e3f6851e8
[heketi] INFO 2019/10/23 03:08:36 Started async operation: Create Volume
[heketi] INFO 2019/10/23 03:08:36 Trying Create Volume (attempt #1/5)
[heketi] INFO 2019/10/23 03:08:36 Creating brick 289fe032c1f4f9f211480e24c5d74a44
[heketi] INFO 2019/10/23 03:08:36 Creating brick a3172661ba1b849d67b500c93c3dd652
[heketi] INFO 2019/10/23 03:08:36 Creating brick 917e27a9dbc5395ebf08dff8d3401b43
[negroni] 2019-10-23T03:08:36Z | 200 |   72.083µs | 10.1.45.61:8080 | GET /queue/3ec932315085609bc54ead6e3f6851e8
[kubeexec] DEBUG 2019/10/23 03:08:36 heketi/pkg/remoteexec/log/commandlog.go:34:log.(*CommandLogger).Before: Will run command [mkdir -p /var/lib/heketi/mounts/vg_703e3662cbd8ffb24a6401bb3c3c41fa/brick_a3172661ba1b849d67b500c93c3dd652] on [pod:glusterfs-cqw5d c:glusterfs ns:default (from host:k8s-master-01 selector:glusterfs-node)]
[kubeexec] DEBUG 2019/10/23 03:08:36 heketi/pkg/remoteexec/kube/exec.go:72:kube.ExecCommands: Current kube connection count: 0
[kubeexec] DEBUG 2019/10/23 03:08:36 heketi/pkg/remoteexec/log/commandlog.go:34:log.(*CommandLogger).Before: Will run command [mkdir -p /var/lib/heketi/mounts/vg_398329cc70361dfd4baa011d811de94a/brick_289fe032c1f4f9f211480e24c5d74a44] on [pod:glusterfs-lrdz7 c:glusterfs ns:default (from host:k8s-node-01 selector:glusterfs-node)]
[kubeexec] DEBUG 2019/10/23 03:08:36 heketi/pkg/remoteexec/kube/exec.go:72:kube.ExecCommands: Current kube connection count: 1
[kubeexec] DEBUG 2019/10/23 03:08:36 heketi/pkg/remoteexec/log/commandlog.go:34:log.(*CommandLogger).Before: Will run command [mkdir -p /var/lib/heketi/mounts/vg_7c791bbb90f710123ba431a7cdde8d0b/brick_917e27a9dbc5395ebf08dff8d3401b43] on [pod:glusterfs-l2lsv c:glusterfs ns:default (from host:k8s-node-02 selector:glusterfs-node)]
[kubeexec] DEBUG 2019/10/23 03:08:36 heketi/pkg/remoteexec/kube/exec.go:72:kube.ExecCommands: Current kube connection count: 2

Create the LVs and set up automatic mounting:

[kubeexec] DEBUG 2019/10/23 03:08:37 heketi/pkg/remoteexec/kube/exec.go:72:kube.ExecCommands: Current kube connection count: 2
[kubeexec] DEBUG 2019/10/23 03:08:37 heketi/pkg/remoteexec/log/commandlog.go:46:log.(*CommandLogger).Success: Ran command [mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_703e3662cbd8ffb24a6401bb3c3c41fa-brick_a3172661ba1b849d67b500c93c3dd652] on [pod:glusterfs-cqw5d c:glusterfs ns:default (from host:k8s-master-01 selector:glusterfs-node)]: Stdout [meta-data=/dev/mapper/vg_703e3662cbd8ffb24a6401bb3c3c41fa-brick_a3172661ba1b849d67b500c93c3dd652 isize=512    agcount=8, agsize=32768 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=262144, imaxpct=25
         =                       sunit=64     swidth=64 blks
naming   =version 2              bsize=8192   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=64 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
]: Stderr []
[kubeexec] DEBUG 2019/10/23 03:08:37 heketi/pkg/remoteexec/log/commandlog.go:34:log.(*CommandLogger).Before: Will run command [awk "BEGIN {print \"/dev/mapper/vg_703e3662cbd8ffb24a6401bb3c3c41fa-brick_a3172661ba1b849d67b500c93c3dd652 /var/lib/heketi/mounts/vg_703e3662cbd8ffb24a6401bb3c3c41fa/brick_a3172661ba1b849d67b500c93c3dd652 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}"] on [pod:glusterfs-cqw5d c:glusterfs ns:default (from host:k8s-master-01 selector:glusterfs-node)]

Create the bricks and set permissions:

[kubeexec] DEBUG 2019/10/23 03:08:38 heketi/pkg/remoteexec/log/commandlog.go:34:log.(*CommandLogger).Before: Will run command [chmod 2775 /var/lib/heketi/mounts/vg_703e3662cbd8ffb24a6401bb3c3c41fa/brick_a3172661ba1b849d67b500c93c3dd652/brick] on [pod:glusterfs-cqw5d c:glusterfs ns:default (from host:k8s-master-01 selector:glusterfs-node)]
[kubeexec] DEBUG 2019/10/23 03:08:38 heketi/pkg/remoteexec/kube/exec.go:72:kube.ExecCommands: Current kube connection count: 2
[kubeexec] DEBUG 2019/10/23 03:08:38 heketi/pkg/remoteexec/log/commandlog.go:46:log.(*CommandLogger).Success: Ran command [chown :40000 /var/lib/heketi/mounts/vg_398329cc70361dfd4baa011d811de94a/brick_289fe032c1f4f9f211480e24c5d74a44/brick] on [pod:glusterfs-lrdz7 c:glusterfs ns:default (from host:k8s-node-01 selector:glusterfs-node)]: Stdout []: Stderr []
[kubeexec] DEBUG 2019/10/23 03:08:38 heketi/pkg/remoteexec/log/commandlog.go:34:log.(*CommandLogger).Before: Will run command [chmod 2775 /var/lib/heketi/mounts/vg_398329cc70361dfd4baa011d811de94a/brick_289fe032c1f4f9f211480e24c5d74a44/brick] on [pod:glusterfs-lrdz7 c:glusterfs ns:default (from host:k8s-node-01 selector:glusterfs-node)]
[kubeexec] DEBUG 2019/10/23 03:08:38 heketi/pkg/remoteexec/kube/exec.go:72:kube.ExecCommands: Current kube connection count: 2
[negroni] 2019-10-23T03:08:38Z | 200 |   83.159µs | 10.1.45.61:8080 | GET /queue/3ec932315085609bc54ead6e3f6851e8
[kubeexec] DEBUG 2019/10/23 03:08:38 heketi/pkg/remoteexec/log/commandlog.go:46:log.(*CommandLogger).Success: Ran command [chmod 2775 /var/lib/heketi/mounts/vg_7c791bbb90f710123ba431a7cdde8d0b/brick_917e27a9dbc5395ebf08dff8d3401b43/brick] on [pod:glusterfs-l2lsv c:glusterfs ns:default (from host:k8s-node-02 selector:glusterfs-node)]: Stdout []: Stderr []
[kubeexec] DEBUG 2019/10/23 03:08:38 heketi/pkg/remoteexec/log/commandlog.go:46:log.(*CommandLogger).Success: Ran command [chmod 2775 /var/lib/heketi/mounts/vg_703e3662cbd8ffb24a6401bb3c3c41fa/brick_a3172661ba1b849d67b500c93c3dd652/brick] on [pod:glusterfs-cqw5d c:glusterfs ns:default (from host:k8s-master-01 selector:glusterfs-node)]: Stdout []: Stderr []
[kubeexec] DEBUG 2019/10/23 03:08:38 heketi/pkg/remoteexec/log/commandlog.go:46:log.(*CommandLogger).Success: Ran command [chmod 2775 /var/lib/heketi/mounts/vg_398329cc70361dfd4baa011d811de94a/brick_289fe032c1f4f9f211480e24c5d74a44/brick] on [pod:glusterfs-lrdz7 c:glusterfs ns:default (from host:k8s-node-01 selector:glusterfs-node)]: Stdout []: Stderr []
[cmdexec] INFO 2019/10/23 03:08:38 Creating volume vol_08e8447256de2598952dcb240e615d0f replica 3

Create the corresponding volume:

[asynchttp] INFO 2019/10/23 03:08:41 Completed job 3ec932315085609bc54ead6e3f6851e8 in 5.007631648s
[negroni] 2019-10-23T03:08:41Z | 303 |   78.335µs | 10.1.45.61:8080 | GET /queue/3ec932315085609bc54ead6e3f6851e8
[negroni] 2019-10-23T03:08:41Z | 200 |   5.751689ms | 10.1.45.61:8080 | GET /volumes/08e8447256de2598952dcb240e615d0f
[negroni] 2019-10-23T03:08:41Z | 200 |   139.05µs | 10.1.45.61:8080 | GET /clusters/1c5ffbd86847e5fc1562ef70c033292e
[negroni] 2019-10-23T03:08:41Z | 200 |   660.249µs | 10.1.45.61:8080 | GET /nodes/04740cac8d42f56e354c94bdbb7b8e34
[negroni] 2019-10-23T03:08:41Z | 200 |   270.334µs | 10.1.45.61:8080 | GET /nodes/1b33ad0dba20eaf23b5e3a4845e7cdb4
[negroni] 2019-10-23T03:08:41Z | 200 |   345.528µs | 10.1.45.61:8080 | GET /nodes/b6100a5af9b47d8c1f19be0b2b4d8276
[heketi] INFO 2019/10/23 03:09:39 Starting Node Health Status refresh
[cmdexec] INFO 2019/10/23 03:09:39 Check Glusterd service status in node k8s-node-01
[kubeexec] DEBUG 2019/10/23 03:09:39 heketi/pkg/remoteexec/log/commandlog.go:34:log.(*CommandLogger).Before: Will run command [systemctl status glusterd] on [pod:glusterfs-lrdz7 c:glusterfs ns:default (from host:k8s-node-01 selector:glusterfs-node)]
[kubeexec] DEBUG 2019/10/23 03:09:39 heketi/pkg/remoteexec/kube/exec.go:72:kube.ExecCommands: Current kube connection count: 0
[kubeexec] DEBUG 2019/10/23 03:09:39 heketi/pkg/remoteexec/log/commandlog.go:41:log.(*CommandLogger).Success: Ran command [systemctl status glusterd] on [pod:glusterfs-lrdz7 c:glusterfs ns:default (from host:k8s-node-01 selector:glusterfs-node)]: Stdout filtered, Stderr filtered
[heketi] INFO 2019/10/23 03:09:39 Periodic health check status: node 04740cac8d42f56e354c94bdbb7b8e34 up=true
[cmdexec] INFO 2019/10/23 03:09:39 Check Glusterd service status in node k8s-node-02
[kubeexec] DEBUG 2019/10/23 03:09:39 heketi/pkg/remoteexec/log/commandlog.go:34:log.(*CommandLogger).Before: Will run command [systemctl status glusterd] on [pod:glusterfs-l2lsv c:glusterfs ns:default (from host:k8s-node-02 selector:glusterfs-node)]
[kubeexec] DEBUG 2019/10/23 03:09:39 heketi/pkg/remoteexec/kube/exec.go:72:kube.ExecCommands: Current kube connection count: 0
[kubeexec] DEBUG 2019/10/23 03:09:39 heketi/pkg/remoteexec/log/commandlog.go:41:log.(*CommandLogger).Success: Ran command [systemctl status glusterd] on [pod:glusterfs-l2lsv c:glusterfs ns:default (from host:k8s-node-02 selector:glusterfs-node)]: Stdout filtered, Stderr filtered
[heketi] INFO 2019/10/23 03:09:39 Periodic health check status: node 1b33ad0dba20eaf23b5e3a4845e7cdb4 up=true
[cmdexec] INFO 2019/10/23 03:09:39 Check Glusterd service status in node k8s-master-01
[kubeexec] DEBUG 2019/10/23 03:09:39 heketi/pkg/remoteexec/log/commandlog.go:34:log.(*CommandLogger).Before: Will run command [systemctl status glusterd] on [pod:glusterfs-cqw5d c:glusterfs ns:default (from host:k8s-master-01 selector:glusterfs-node)]
[kubeexec] DEBUG 2019/10/23 03:09:39 heketi/pkg/remoteexec/kube/exec.go:72:kube.ExecCommands: Current kube connection count: 0
[kubeexec] DEBUG 2019/10/23 03:09:39 heketi/pkg/remoteexec/log/commandlog.go:41:log.(*CommandLogger).Success: Ran command [systemctl status glusterd] on [pod:glusterfs-cqw5d c:glusterfs ns:default (from host:k8s-master-01 selector:glusterfs-node)]: Stdout filtered, Stderr filtered
[heketi] INFO 2019/10/23 03:09:39 Periodic health check status: node b6100a5af9b47d8c1f19be0b2b4d8276 up=true
[heketi] INFO 2019/10/23 03:09:39 Cleaned 0 nodes from health cache
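
To see what the provisioner actually produced, inspect the PV object itself: its spec carries a glusterfs source pointing at the volume and at a per-PVC endpoints object whose name typically starts with glusterfs-dynamic. The PV name below is the one created in section 5:

kubectl get pv pvc-0fb9b246-4da4-491c-b6a2-4f38489ab11c -o yaml
kubectl get endpoints | grep glusterfs-dynamic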

7. Test the data

Test whether pods using this PV can share data; exec into the pod and create a file:

[root@k8s-master-01 kubernetes]# kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
glusterfs-cqw5d          1/1     Running   0          90m
glusterfs-l2lsv          1/1     Running   0          90m
glusterfs-lrdz7          1/1     Running   0          90m
heketi-68795ccd8-m8x55   1/1     Running   0          49m
pod-use-pvc              1/1     Running   0          20m
[root@k8s-master-01 kubernetes]# kubectl exec -it pod-use-pvc /bin/sh
/ # cd /pv-data/
/pv-data # echo "hello world">a.txt
/pv-data # cat a.txt
hello world

List the created volumes:

[root@k8s-master-01 kubernetes]# heketi-cli -s $HEKETI_CLI_SERVER --user admin --secret 'My Secret' volume list
Id:08e8447256de2598952dcb240e615d0f    Cluster:1c5ffbd86847e5fc1562ef70c033292e    Name:vol_08e8447256de2598952dcb240e615d0f
Id:b25f4b627cf66279bfe19e8a01e9e85d    Cluster:1c5ffbd86847e5fc1562ef70c033292e    Name:heketidbstorage

Mount the volume to check the data in it; vol_08e8447256de2598952dcb240e615d0f is the volume name:

[root@k8s-master-01 kubernetes]# mount -t glusterfs 192.168.2.10:vol_08e8447256de2598952dcb240e615d0f /mnt
[root@k8s-master-01 kubernetes]# ll /mnt/
total 1
-rw-r--r-- 1 root 40000 12 Oct 23 11:29 a.txt
[root@k8s-master-01 kubernetes]# cat /mnt/a.txt
hello world

8. Test a Deployment

Test whether a workload deployed through a Deployment controller can use the StorageClass normally by creating an nginx Deployment:

[root@k8s-master-01 kubernetes]# vim nginx-deployment-gluster.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-gfs
spec:
  selector:
    matchLabels:
      name: nginx
  replicas: 2
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
          volumeMounts:
            - name: nginx-gfs-html
              mountPath: "/usr/share/nginx/html"
            - name: nginx-gfs-conf
              mountPath: "/etc/nginx/conf.d"
      volumes:
      - name: nginx-gfs-html
        persistentVolumeClaim:
          claimName: glusterfs-nginx-html
      - name: nginx-gfs-conf
        persistentVolumeClaim:
          claimName: glusterfs-nginx-conf

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: glusterfs-nginx-html
spec:
  accessModes: [ "ReadWriteMany" ]
  storageClassName: "gluster-heketi"
  resources:
    requests:
      storage: 500Mi

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: glusterfs-nginx-conf
spec:
  accessModes: [ "ReadWriteMany" ]
  storageClassName: "gluster-heketi"
  resources:
    requests:
      storage: 10Mi

Check the resources:

[root@k8s-master-01 kubernetes]# kubectl get pod,pv,pvc|grep nginx
pod/nginx-gfs-7d66cccf76-mkc76   1/1     Running   0          2m45s
pod/nginx-gfs-7d66cccf76-zc8n2   1/1     Running   0          2m45s
persistentvolume/pvc-87481e3a-9b7e-43aa-a0b9-4028ce0a1abb   1Gi        RWX            Retain           Bound    default/glusterfs-nginx-conf   gluster-heketi            2m34s
persistentvolume/pvc-f954a4ca-ea1c-458d-8490-a49a0a001ab5   1Gi        RWX            Retain           Bound    default/glusterfs-nginx-html   gluster-heketi            2m34s
persistentvolumeclaim/glusterfs-nginx-conf   Bound    pvc-87481e3a-9b7e-43aa-a0b9-4028ce0a1abb   1Gi        RWX            gluster-heketi   2m45s
persistentvolumeclaim/glusterfs-nginx-html   Bound    pvc-f954a4ca-ea1c-458d-8490-a49a0a001ab5   1Gi        RWX            gluster-heketi   2m45s

Check the mounts:

[root@k8s-master-01 kubernetes]# kubectl exec -it nginx-gfs-7d66cccf76-mkc76 -- df -Th
Filesystem                                        Type            Size  Used Avail Use% Mounted on
overlay                                           overlay          44G  3.2G   41G   8% /
tmpfs                                             tmpfs            64M     0   64M   0% /dev
tmpfs                                             tmpfs           2.0G     0  2.0G   0% /sys/fs/cgroup
/dev/mapper/centos-root                           xfs              44G  3.2G   41G   8% /etc/hosts
shm                                               tmpfs            64M     0   64M   0% /dev/shm
192.168.2.10:vol_adf6fc08c8828fdda27c8aa5ce99b50c fuse.glusterfs 1014M   43M  972M   5% /etc/nginx/conf.d
192.168.2.10:vol_454e14ae3184122ff9a14d77e02b10b9 fuse.glusterfs 1014M   43M  972M   5% /usr/share/nginx/html
tmpfs                                             tmpfs           2.0G   12K  2.0G   1% /run/secrets/kubernetes.io/serviceaccount
tmpfs                                             tmpfs           2.0G     0  2.0G   0% /proc/acpi
tmpfs                                             tmpfs           2.0G     0  2.0G   0% /proc/scsi
tmpfs                                             tmpfs           2.0G     0  2.0G   0% /sys/firmware

Mount the volume on the host and create a file:

[root@k8s-master-01 kubernetes]# mount -t glusterfs 192.168.2.10:vol_454e14ae3184122ff9a14d77e02b10b9 /mnt/
[root@k8s-master-01 kubernetes]# cd /mnt/
[root@k8s-master-01 mnt]# echo "hello world">index.html
[root@k8s-master-01 mnt]# kubectl exec -it nginx-gfs-7d66cccf76-mkc76 -- cat /usr/share/nginx/html/index.html
hello world

Scale up the nginx replicas and check that the new pod mounts the volume correctly:

[root@k8s-master-01 mnt]# kubectl scale deployment nginx-gfs --replicas=3
deployment.apps/nginx-gfs scaled
[root@k8s-master-01 mnt]# kubectl get pods
NAME                         READY   STATUS    RESTARTS   AGE
glusterfs-cqw5d              1/1     Running   0          129m
glusterfs-l2lsv              1/1     Running   0          129m
glusterfs-lrdz7              1/1     Running   0          129m
heketi-68795ccd8-m8x55       1/1     Running   0          88m
nginx-gfs-7d66cccf76-mkc76   1/1     Running   0          8m55s
nginx-gfs-7d66cccf76-qzqnv   1/1     Running   0          23s
nginx-gfs-7d66cccf76-zc8n2   1/1     Running   0          8m55s
[root@k8s-master-01 mnt]# kubectl exec -it nginx-gfs-7d66cccf76-qzqnv -- cat /usr/share/nginx/html/index.html
hello world

This completes deploying heketi + GlusterFS in a k8s cluster to provide dynamic storage.

References:

https://github.com/heketi/heketi

https://github.com/gluster/gluster-kubernetes

https://www.jb51.net/article/244019.htm

Summary

This concludes this walkthrough of Kubernetes storage with a GlusterFS cluster, from deploying GlusterFS and heketi to dynamic provisioning through a StorageClass.
