Deploying a Kubernetes Cluster with Ansible

Contents
  • Network check: k8s-check.yaml verifies that every k8s host is reachable
  • Check that each k8s host's OS version meets the requirements
  • Connection setup: k8s-conn-cfg.yaml
  • Cluster DNS resolution: k8s-hosts-cfg.yaml
  • Yum repository setup: k8s-yum-cfg.yaml
  • Time synchronization: k8s-time-sync.yaml
  • Disable the iptables, firewalld, and NetworkManager services: k8s-net-service.yaml
  • Disable SELinux and swap: k8s-SE-swap-disable.yaml
  • Kernel parameters: k8s-kernel-cfg.yaml
  • ipvs configuration: k8s-ipvs-cfg.yaml
  • Docker installation: k8s-docker-install.yaml
  • k8s components [kubeadm/kubelet/kubectl]: k8s-install-kubepkgs.yaml
  • Cluster images: k8s-apps-images.yaml
  • Cluster initialization: k8s-cluster-init.yaml

Environment:

Host     IP address       Components
ansible  192.168.175.130  ansible
master   192.168.175.140  docker, kubectl, kubeadm, kubelet
node1    192.168.175.141  docker, kubectl, kubeadm, kubelet
node2    192.168.175.142  docker, kubectl, kubeadm, kubelet

Commands for checking and debugging:

$ ansible-playbook -v k8s-time-sync.yaml --syntax-check
$ ansible-playbook -v k8s-*.yaml -C
$ ansible-playbook -v k8s-yum-cfg.yaml -C --start-at-task="Clean origin dir" --step
$ ansible-playbook -v k8s-kernel-cfg.yaml --step

Host inventory file:

/root/ansible/hosts

[k8s_cluster]
master ansible_host=192.168.175.140
node1  ansible_host=192.168.175.141
node2  ansible_host=192.168.175.142

[k8s_cluster:vars]
ansible_port=22
ansible_user=root
ansible_password=hello123	
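
Before running any of the playbooks, the inventory and credentials can be smoke-tested with an ad-hoc ping (a quick check suggested here, not part of the original article):

$ ANSIBLE_HOST_KEY_CHECKING=False ansible -i /root/ansible/hosts k8s_cluster -m ping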

Network check: k8s-check.yaml verifies that every k8s host is reachable and that each host's operating system version meets the requirements.

- name: step01_check
  hosts: k8s_cluster
  gather_facts: no
  tasks:
    - name: check network
      shell:
        cmd: "ping -c 3 -m 2 {{ansible_host}}"
      delegate_to: localhost

    - name: get system version
      shell: cat /etc/system-release
      register: system_release

    - name: check system version
      vars:
        system_version: "{{ system_release.stdout | regex_search('([7-9].[0-9]+).*?') }}"
        suitable_version: 7.5
      debug:
        msg: "{{ 'The version of the operating system is '+ system_version +', suitable!' if (system_version | float >= suitable_version) else 'The version of the operating system is unsuitable' }}"

Debug commands:

$ ansible-playbook --ssh-extra-args '-o StrictHostKeyChecking=no' -v -C k8s-check.yaml
$ ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook -v -C k8s-check.yaml
$ ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook -v k8s-check.yaml --start-at-task="get system version"
Connection setup: k8s-conn-cfg.yaml

  • Add k8s hostname resolution entries to the /etc/hosts file on the ansible server
  • Generate a key pair and configure passwordless login from ansible to every k8s host
- name: step02_conn_cfg
  hosts: k8s_cluster
  gather_facts: no
  vars_prompt:
    - name: RSA
      prompt: Generate RSA or not(Yes/No)?
      default: "no"
      private: no

    - name: password
      prompt: input your login password?
      default: "hello123"
  tasks:
    - name: Add DNS of k8s to ansible
      delegate_to: localhost
      lineinfile:
        path: /etc/hosts
        line: "{{ansible_host}}  {{inventory_hostname}}"
        backup: yes
    - name: Generate RSA
      run_once: true
      delegate_to: localhost      # the key pair is generated on the ansible control node
      shell:
        cmd: ssh-keygen -t rsa -f ~/.ssh/id_rsa -N ''
        creates: /root/.ssh/id_rsa
      when: RSA | bool
    - name: Configure password free login
      delegate_to: localhost      # ssh-keyscan/ssh-copy-id must run from the control node
      shell: |
        /usr/bin/ssh-keyscan {{ ansible_host }} >> /root/.ssh/known_hosts 2> /dev/null
        /usr/bin/ssh-keyscan {{ inventory_hostname }} >> /root/.ssh/known_hosts 2> /dev/null
        /usr/bin/sshpass -p'{{ password }}' ssh-copy-id root@{{ ansible_host }}
        #/usr/bin/sshpass -p'{{ password }}' ssh-copy-id root@{{ inventory_hostname }}
    - name: Test ssh
      shell: hostname

Run:

$ ansible-playbook k8s-conn-cfg.yaml
Generate RSA or not(Yes/No)? [no]: yes
input your login password? [hello123]:

PLAY [step02_conn_cfg] **********************************************************************************************************
TASK [Add DNS of k8s to ansible] ************************************************************************************************
ok: [master -> localhost]
ok: [node1 -> localhost]
ok: [node2 -> localhost]
TASK [Generate RSA] *************************************************************************************************************
changed: [master -> localhost]
TASK [Configure password free login] ********************************************************************************************
changed: [node1 -> localhost]
changed: [node2 -> localhost]
TASK [Test ssh] *****************************************************************************************************************
changed: [master]
changed: [node1]
changed: [node2]
PLAY RECAP **********************************************************************************************************************
master                     : ok=4    changed=3    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
node1                      : ok=3    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
node2                      : ok=3    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

Cluster DNS resolution: k8s-hosts-cfg.yaml

  • Set each host's hostname
  • Add name resolution entries for all cluster hosts to every /etc/hosts
- name: step03_cfg_host
  hosts: k8s_cluster
  gather_facts: no
  tasks:
    - name: set hostname
      hostname:
        name: "{{ inventory_hostname }}"
        use: systemd
    - name: Add dns to each other
      lineinfile:
        path: /etc/hosts
        backup: yes
        line: "{{item.value.ansible_host}}  {{item.key}}"
      loop: "{{ hostvars | dict2items }}"
      loop_control:
        label: "{{ item.key }} {{ item.value.ansible_host }}"

Run:

$ ansible-playbook k8s-hosts-cfg.yaml

PLAY [step03_cfg_host] **********************************************************************************************************
TASK [set hostname] *************************************************************************************************************
ok: [master]
ok: [node1]
ok: [node2]
TASK [Add dns to each other] ****************************************************************************************************
ok: [node2] => (item=node1 192.168.175.141)
ok: [master] => (item=node1 192.168.175.141)
ok: [node1] => (item=node1 192.168.175.141)
ok: [node2] => (item=node2 192.168.175.142)
ok: [master] => (item=node2 192.168.175.142)
ok: [node1] => (item=node2 192.168.175.142)
ok: [node2] => (item=master 192.168.175.140)
ok: [master] => (item=master 192.168.175.140)
ok: [node1] => (item=master 192.168.175.140)
PLAY RECAP **********************************************************************************************************************
master                     : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
node1                      : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
node2                      : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

Yum repository setup: k8s-yum-cfg.yaml

- name: step04_yum_cfg
  hosts: k8s_cluster
  gather_facts: no
  tasks:

    - name: Create back-up directory
      file:
        path: /etc/yum.repos.d/org/
        state: directory
    - name: Back-up old Yum files
      shell:
        cmd: mv -f /etc/yum.repos.d/*.repo /etc/yum.repos.d/org/
        removes: /etc/yum.repos.d/org/
    - name: Add new Yum files
      copy:
        src: ./files_yum/
        dest: /etc/yum.repos.d/
    - name: Check yum.repos.d
      shell:
        cmd: ls /etc/yum.repos.d/*
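
The playbook copies ready-made repo files from ./files_yum/, whose contents the article does not show. As a sketch of what such a file might hold, the task below inlines a Kubernetes repo pointing at the Aliyun mirror; the URL and gpgcheck setting are assumptions modeled on common CentOS 7 setups, not the author's actual files:

    - name: Add kubernetes repo (sketch with an assumed mirror URL)
      copy:
        dest: /etc/yum.repos.d/kubernetes.repo
        content: |
          [kubernetes]
          name=Kubernetes
          baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
          enabled=1
          gpgcheck=0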

Time synchronization: k8s-time-sync.yaml

- name: step05_time_sync
  hosts: k8s_cluster
  gather_facts: no
  tasks:

    - name: Start chronyd.service
      systemd:
        name: chronyd.service
        state: started
        enabled: yes
    - name: Modify time zone & clock
      shell: |
        cp -f /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
        clock -w
        hwclock -w
    - name: Check time now
      command: date

Disable the iptables, firewalld, and NetworkManager services: k8s-net-service.yaml

- name: step06_net_service
  hosts: k8s_cluster
  gather_facts: no
  tasks:

    - name: Stop some services for net
      systemd:
        name: "{{ item }}"
        state: stopped
        enabled: no
      loop:
        - firewalld
        - iptables
        - NetworkManager

Run:

$ ansible-playbook -v k8s-net-service.yaml
... ...
failed: [master] (item=iptables) => {
    "ansible_loop_var": "item",
    "changed": false,
    "item": "iptables"
}

MSG:
Could not find the requested service iptables: host
PLAY RECAP **********************************************************************************************************************
master                     : ok=0    changed=0    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0
node1                      : ok=0    changed=0    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0
node2                      : ok=0    changed=0    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0
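
The failure above is expected on a minimal CentOS 7 install: there is no iptables service unit unless the iptables-services package is installed. A tolerant variant of the task, sketched here rather than taken from the original article, simply ignores units that do not exist:

    - name: Stop some services for net (tolerant sketch)
      systemd:
        name: "{{ item }}"
        state: stopped
        enabled: no
      loop:
        - firewalld
        - iptables
        - NetworkManager
      ignore_errors: yes   # skip units that are not installed, e.g. iptables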

Disable SELinux and swap: k8s-SE-swap-disable.yaml

- name: step07_SE_swap_disable
  hosts: k8s_cluster
  gather_facts: no
  tasks:

    - name: SElinux disabled
      lineinfile:
        path: /etc/selinux/config
        line: SELINUX=disabled
        regexp: ^SELINUX=
        state: present
        backup: yes
    - name: Swap disabled
      lineinfile:
        path: /etc/fstab
        line: '#\1'
        regexp: '(^/dev/mapper/centos-swap.*$)'
        backrefs: yes
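
Both edits above only take effect after a reboot. To apply them to the running system as well, a follow-up task along these lines could be appended (a sketch, not part of the original playbook):

    - name: Apply immediately without a reboot (sketch)
      shell: |
        setenforce 0 || true   # ignore the error if SELinux is already disabled
        swapoff -a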

Kernel parameters: k8s-kernel-cfg.yaml

- name: step08_kernel_cfg
  hosts: k8s_cluster
  gather_facts: no
  tasks:

    - name: Create /etc/sysctl.d/kubernetes.conf
      copy:
        content: ''
        dest: /etc/sysctl.d/kubernetes.conf
        force: yes
    - name: Cfg bridge and ip_forward
      lineinfile:
        path: /etc/sysctl.d/kubernetes.conf
        line: "{{ item }}"
        state: present
      loop:
        - 'net.bridge.bridge-nf-call-ip6tables = 1'
        - 'net.bridge.bridge-nf-call-iptables = 1'
        - 'net.ipv4.ip_forward = 1'
    - name: Load cfg
      shell:
        cmd: |
          modprobe br_netfilter                    # the bridge sysctls only exist once this module is loaded
          sysctl -p /etc/sysctl.d/kubernetes.conf  # plain 'sysctl -p' reads only /etc/sysctl.conf
        removes: /etc/sysctl.d/kubernetes.conf
    - name: Check cfg
      shell:
        cmd: '[ $(lsmod | grep br_netfilter | wc -l) -ge 2 ] && exit 0 || exit 3'
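
modprobe only loads br_netfilter into the running kernel. To survive a reboot, the module name can additionally be dropped into /etc/modules-load.d/; this extra task is a suggestion, not part of the original playbook:

    - name: Persist br_netfilter across reboots (sketch)
      copy:
        dest: /etc/modules-load.d/br_netfilter.conf
        content: "br_netfilter\n"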

Run:

$ ansible-playbook -v k8s-kernel-cfg.yaml --step

TASK [Check cfg] ****************************************************************************************************************
changed: [master] => {
    "changed": true,
    "cmd": "[ $(lsmod | grep br_netfilter | wc -l) -ge 2 ] && exit 0 || exit 3",
    "delta": "0:00:00.011574",
    "end": "2022-02-27 04:26:01.332896",
    "rc": 0,
    "start": "2022-02-27 04:26:01.321322"
}
changed: [node2] => {
    "delta": "0:00:00.016331",
    "end": "2022-02-27 04:26:01.351208",
    "start": "2022-02-27 04:26:01.334877"
changed: [node1] => {
    "delta": "0:00:00.016923",
    "end": "2022-02-27 04:26:01.355983",
    "start": "2022-02-27 04:26:01.339060"
PLAY RECAP **********************************************************************************************************************
master                     : ok=4    changed=4    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
node1                      : ok=4    changed=4    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
node2                      : ok=4    changed=4    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

ipvs configuration: k8s-ipvs-cfg.yaml

- name: step09_ipvs_cfg
  hosts: k8s_cluster
  gather_facts: no
  tasks:

    - name: Install ipset and ipvsadm
      yum:
        name: "{{ item }}"
        state: present
      loop:
        - ipset
        - ipvsadm
    - name: Load modules
      shell: |
        modprobe -- ip_vs
        modprobe -- ip_vs_rr
        modprobe -- ip_vs_wrr
        modprobe -- ip_vs_sh
        modprobe -- nf_conntrack_ipv4
    - name: Check cfg
      shell:
        cmd: '[ $(lsmod | grep -e ip_vs -e nf_conntrack_ipv4 | wc -l) -ge 2 ] && exit 0 || exit 3'
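
Note that nf_conntrack_ipv4 exists on CentOS 7's 3.10 kernel; on kernels 4.19 and later it was merged into nf_conntrack. As with br_netfilter above, the modules can be persisted across reboots with a modules-load.d file (again a suggestion, not in the original playbook):

    - name: Persist ipvs modules across reboots (sketch)
      copy:
        dest: /etc/modules-load.d/ipvs.conf
        content: |
          ip_vs
          ip_vs_rr
          ip_vs_wrr
          ip_vs_sh
          nf_conntrack_ipv4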

Docker installation: k8s-docker-install.yaml

- name: step10_docker_install
  hosts: k8s_cluster
  gather_facts: no
  tasks:

    - name: Install docker-ce
      yum:
        name: docker-ce-18.06.3.ce-3.el7
        state: present
    - name: Cfg docker
      copy:
        src: ./files_docker/daemon.json
        dest: /etc/docker/
    - name: Start docker
      systemd:
        name: docker.service
        state: started
        enabled: yes

    - name: Check docker version
      shell:
        cmd: docker --version
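
The contents of ./files_docker/daemon.json are not shown in the article. A plausible stand-in, inlined here as an assumption rather than the author's actual file, sets the systemd cgroup driver that kubeadm expects and a registry mirror:

    - name: Cfg docker (sketch of an assumed daemon.json)
      copy:
        dest: /etc/docker/daemon.json
        content: |
          {
            "exec-opts": ["native.cgroupdriver=systemd"],
            "registry-mirrors": ["https://registry.docker-cn.com"]
          }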

k8s components [kubeadm/kubelet/kubectl]: k8s-install-kubepkgs.yaml

- name: step11_k8s_install_kubepkgs
  hosts: k8s_cluster
  gather_facts: no
  tasks:

    - name: Install k8s components
      yum:
        name: "{{ item }}"
        state: present
      loop:
        - kubeadm-1.17.4-0
        - kubelet-1.17.4-0
        - kubectl-1.17.4-0
    - name: Cfg k8s
      copy:
        src: ./files_k8s/kubelet
        dest: /etc/sysconfig/
        force: no
        backup: yes
    - name: Start kubelet
      systemd:
        name: kubelet.service
        state: started
        enabled: yes
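
Likewise, ./files_k8s/kubelet is not shown. A guess at its contents, modeled on common CentOS 7 + kubeadm 1.17 tutorials (treat both lines as assumptions), would match the systemd cgroup driver configured for docker and switch kube-proxy to ipvs:

    - name: Cfg k8s (sketch of an assumed /etc/sysconfig/kubelet)
      copy:
        dest: /etc/sysconfig/kubelet
        content: |
          KUBELET_CGROUP_ARGS="--cgroup-driver=systemd"
          KUBE_PROXY_MODE="ipvs"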

Cluster images: k8s-apps-images.yaml

- name: step12_apps_images
  hosts: k8s_cluster
  gather_facts: no

  vars:
    apps:
      - kube-apiserver:v1.17.4
      - kube-controller-manager:v1.17.4
      - kube-scheduler:v1.17.4
      - kube-proxy:v1.17.4
      - pause:3.1
      - etcd:3.4.3-0
      - coredns:1.6.5
  vars_prompt:
      - name: cfg_python
        prompt: Do you need to install docker pkg for python(Yes/No)?
        default: "no"
        private: no
  tasks:
    - block:
        - name: Install python-pip
          yum:
            name: python-pip
            state: present
        - name: Install docker pkg for python
          shell:
            cmd: |
              pip install docker==4.4.4
              pip install websocket-client==0.32.0
            creates: /usr/lib/python2.7/site-packages/docker/
      when: cfg_python | bool
    - name: Pull images
      community.docker.docker_image:
        name: "registry.cn-hangzhou.aliyuncs.com/google_containers/{{ item }}"
        source: pull
      loop: "{{ apps }}"
    - name: Tag images
      community.docker.docker_image:
        name: "registry.cn-hangzhou.aliyuncs.com/google_containers/{{ item }}"
        repository: "k8s.gcr.io/{{ item }}"
        force_tag: yes
        source: local
      loop: "{{ apps }}"
    - name: Remove images for ali
      community.docker.docker_image:
        name: "registry.cn-hangzhou.aliyuncs.com/google_containers/{{ item }}"
        state: absent
      loop: "{{ apps }}"

Run:

$ ansible-playbook k8s-apps-images.yaml
Do you need to install docker pkg for python(Yes/No)? [no]:

PLAY [step12_apps_images] *******************************************************************************************************
TASK [Install python-pip] *******************************************************************************************************
skipping: [node1]
skipping: [master]
skipping: [node2]
TASK [Install docker pkg for python] ********************************************************************************************
TASK [Pull images] **************************************************************************************************************
changed: [node1] => (item=kube-apiserver:v1.17.4)
changed: [node2] => (item=kube-apiserver:v1.17.4)
changed: [master] => (item=kube-apiserver:v1.17.4)
changed: [node1] => (item=kube-controller-manager:v1.17.4)
changed: [master] => (item=kube-controller-manager:v1.17.4)
changed: [node1] => (item=kube-scheduler:v1.17.4)
changed: [master] => (item=kube-scheduler:v1.17.4)
changed: [node1] => (item=kube-proxy:v1.17.4)
changed: [node2] => (item=kube-controller-manager:v1.17.4)
changed: [master] => (item=kube-proxy:v1.17.4)
changed: [node1] => (item=pause:3.1)
changed: [master] => (item=pause:3.1)
changed: [node2] => (item=kube-scheduler:v1.17.4)
changed: [node1] => (item=etcd:3.4.3-0)
changed: [master] => (item=etcd:3.4.3-0)
changed: [node2] => (item=kube-proxy:v1.17.4)
changed: [node1] => (item=coredns:1.6.5)
changed: [master] => (item=coredns:1.6.5)
changed: [node2] => (item=pause:3.1)
changed: [node2] => (item=etcd:3.4.3-0)
changed: [node2] => (item=coredns:1.6.5)
TASK [Tag images] ***************************************************************************************************************
ok: [node1] => (item=kube-apiserver:v1.17.4)
ok: [master] => (item=kube-apiserver:v1.17.4)
ok: [node2] => (item=kube-apiserver:v1.17.4)
ok: [node1] => (item=kube-controller-manager:v1.17.4)
ok: [master] => (item=kube-controller-manager:v1.17.4)
ok: [node2] => (item=kube-controller-manager:v1.17.4)
ok: [master] => (item=kube-scheduler:v1.17.4)
ok: [node1] => (item=kube-scheduler:v1.17.4)
ok: [node2] => (item=kube-scheduler:v1.17.4)
ok: [master] => (item=kube-proxy:v1.17.4)
ok: [node1] => (item=kube-proxy:v1.17.4)
ok: [node2] => (item=kube-proxy:v1.17.4)
ok: [master] => (item=pause:3.1)
ok: [node1] => (item=pause:3.1)
ok: [node2] => (item=pause:3.1)
ok: [master] => (item=etcd:3.4.3-0)
ok: [node1] => (item=etcd:3.4.3-0)
ok: [node2] => (item=etcd:3.4.3-0)
ok: [master] => (item=coredns:1.6.5)
ok: [node1] => (item=coredns:1.6.5)
ok: [node2] => (item=coredns:1.6.5)
TASK [Remove images for ali] ****************************************************************************************************
PLAY RECAP **********************************************************************************************************************
master                     : ok=3    changed=2    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0
node1                      : ok=3    changed=2    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0
node2                      : ok=3    changed=2    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0

Cluster initialization: k8s-cluster-init.yaml

- name: step13_cluster_init
  hosts: master
  gather_facts: no
  tasks:
    - block:
        - name: Kubeadm init
          shell:
            cmd:
              kubeadm init
              --apiserver-advertise-address={{ ansible_host }}
              --kubernetes-version=v1.17.4
              --service-cidr=10.96.0.0/12
              --pod-network-cidr=10.244.0.0/16
              --image-repository registry.aliyuncs.com/google_containers

        - name: Create /root/.kube
          file:
            path: /root/.kube/
            state: directory
            owner: root
            group: root
        - name: Copy /root/.kube/config
          copy:
            src: /etc/kubernetes/admin.conf
            dest: /root/.kube/config
            remote_src: yes
            backup: yes
        - name: Copy kube-flannel
          copy:
            src: ./files_k8s/kube-flannel.yml
            dest: /root/
        - name: Apply kube-flannel
          shell:
            cmd: kubectl apply -f /root/kube-flannel.yml
        - name: Get token
          shell:
            cmd: kubeadm token create --print-join-command
          register: join_token
        - name: debug join_token
          debug:
            var: join_token.stdout
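
The registered join_token holds the kubeadm join command that the worker nodes still need to run. A follow-up play along these lines, sketched here and not part of the original article, could be appended to the same playbook so that hostvars still carries the result registered on master:

- name: step14_nodes_join (sketch)
  hosts: node1,node2
  gather_facts: no
  tasks:
    - name: Join the cluster using the token generated on master
      shell:
        cmd: "{{ hostvars['master'].join_token.stdout }}"
        creates: /etc/kubernetes/kubelet.conf   # skip if this node already joined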

This concludes this article on deploying a Kubernetes cluster with Ansible. For more on the topic, search our earlier articles, and thank you for your continued support!
