Building a Highly Available Nginx Cluster (Keepalived + HAProxy + Nginx)

1. Components and Their Roles

Keepalived: provides high availability for the HAProxy service, configured in a dual-master (active/active) model;

HAProxy: load-balances requests across Nginx and separates static from dynamic traffic;

Nginx: handles the HTTP requests themselves at high speed;

2. Architecture Diagram
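The original diagram image is not reproduced here; the topology implied by the configurations below can be sketched as follows (all addresses are taken from the configs in this article):

```
                    client requests
                  /                \
        VIP 172.16.25.10      VIP 172.16.25.11
           (eth0:0)              (eth0:1)
              |                     |
     +-----------------+   +-----------------+
     |  172.16.25.109  |   |  172.16.25.110  |
     |  keepalived     |   |  keepalived     |
     |  haproxy :80    |   |  haproxy :80    |
     +-----------------+   +-----------------+
              \                     /
        static backend       dynamic backend
     192.168.0.25:80 nginx  192.168.0.35:80 nginx
```

Under normal operation each node holds one VIP (dual-master); if a node or its haproxy fails, both VIPs converge on the surviving node.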

3. Keepalived Deployment

Install keepalived on both nodes:

$ yum -y install keepalived

Edit the keepalived.conf file on node 172.16.25.109:

$ vim /etc/keepalived/keepalived.conf

The modified file should read:

! Configuration File for keepalived
global_defs {
   notification_email {
         root@localhost
   }
   notification_email_from admin@lnmmp.com
   smtp_connect_timeout 3
   smtp_server 127.0.0.1
   router_id LVS_DEVEL
}
vrrp_script chk_maintaince_down {
   script "[[ -f /etc/keepalived/down ]] && exit 1 || exit 0"
   interval 1
   weight 2
}
vrrp_script chk_haproxy {
    script "killall -0 haproxy"
    interval 1
    weight 2
}
vrrp_instance VI_1 {
    interface eth0
    state MASTER
    priority 100
    virtual_router_id 125
    garp_master_delay 1
    authentication {
        auth_type PASS
        auth_pass 1e3459f77aba4ded
    }
    track_interface {
       eth0
    }
    virtual_ipaddress {
        172.16.25.10/16 dev eth0 label eth0:0
    }
    track_script {
        chk_haproxy
        chk_maintaince_down
    }
    notify_master "/etc/keepalived/notify.sh master 172.16.25.10"
    notify_backup "/etc/keepalived/notify.sh backup 172.16.25.10"
    notify_fault "/etc/keepalived/notify.sh fault 172.16.25.10"
}
vrrp_instance VI_2 {
    interface eth0
    state BACKUP
    priority 99
    virtual_router_id 126
    garp_master_delay 1
    authentication {
        auth_type PASS
        auth_pass 7615c4b7f518cede
    }
    track_interface {
       eth0
    }
    virtual_ipaddress {
        172.16.25.11/16 dev eth0 label eth0:1
    }
    track_script {
        chk_haproxy
        chk_maintaince_down
    }
    notify_master "/etc/keepalived/notify.sh master 172.16.25.11"
    notify_backup "/etc/keepalived/notify.sh backup 172.16.25.11"
    notify_fault "/etc/keepalived/notify.sh fault 172.16.25.11"
}
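The chk_maintaince_down script above gives you a manual failover switch: when /etc/keepalived/down exists the script exits non-zero, the instance loses the script's priority bonus, and the peer takes over the VIP. The test expression can be exercised on its own (using a scratch file under /tmp for illustration):

```shell
# Same test expression as chk_maintaince_down, pointed at a scratch file.
f=/tmp/keepalived_down_demo
rm -f "$f"
[[ -f $f ]] && echo "down file present: script fails (exit 1)" \
            || echo "down file absent: script succeeds (exit 0)"
touch "$f"
[[ -f $f ]] && echo "down file present: script fails (exit 1)" \
            || echo "down file absent: script succeeds (exit 0)"
rm -f "$f"
```

In production, `touch /etc/keepalived/down` on the active node is enough to push its VIP to the peer for maintenance; removing the file lets it reclaim the address.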

Similarly, edit keepalived.conf on node 172.16.25.110:

! Configuration File for keepalived
global_defs {
   notification_email {
         root@localhost
   }
   notification_email_from admin@lnmmp.com
   smtp_connect_timeout 3
   smtp_server 127.0.0.1
   router_id LVS_DEVEL
}
vrrp_script chk_maintaince_down {
   script "[[ -f /etc/keepalived/down ]] && exit 1 || exit 0"
   interval 1
   weight 2
}
vrrp_script chk_haproxy {
    script "killall -0 haproxy"
    interval 1
    weight 2
}
vrrp_instance VI_1 {
    interface eth0
    state BACKUP
    priority 99
    virtual_router_id 125
    garp_master_delay 1
    authentication {
        auth_type PASS
        auth_pass 1e3459f77aba4ded
    }
    track_interface {
       eth0
    }
    virtual_ipaddress {
        172.16.25.10/16 dev eth0 label eth0:0
    }
    track_script {
        chk_haproxy
        chk_maintaince_down
    }
    notify_master "/etc/keepalived/notify.sh master 172.16.25.10"
    notify_backup "/etc/keepalived/notify.sh backup 172.16.25.10"
    notify_fault "/etc/keepalived/notify.sh fault 172.16.25.10"
}
vrrp_instance VI_2 {
    interface eth0
    state MASTER
    priority 100
    virtual_router_id 126
    garp_master_delay 1
    authentication {
        auth_type PASS
        auth_pass 7615c4b7f518cede
    }
    track_interface {
       eth0
    }
    virtual_ipaddress {
        172.16.25.11/16 dev eth0 label eth0:1
    }
    track_script {
        chk_haproxy
        chk_maintaince_down
    }
    notify_master "/etc/keepalived/notify.sh master 172.16.25.11"
    notify_backup "/etc/keepalived/notify.sh backup 172.16.25.11"
    notify_fault "/etc/keepalived/notify.sh fault 172.16.25.11"
}
Create the notification script on both nodes:

$ vim /etc/keepalived/notify.sh
#!/bin/bash
# Author: Jason.Yu <admin@lnmmp.com>
# description: An example of notify script
#
contact='root@localhost'
notify() {
    mailsubject="`hostname` to be $1: $2 floating"
    mailbody="`date '+%F %H:%M:%S'`: vrrp transition, `hostname` changed to be $1"
    echo "$mailbody" | mail -s "$mailsubject" "$contact"
}
case "$1" in
    master)
        notify master $2
        /etc/rc.d/init.d/haproxy restart
        exit 0
    ;;
    backup)
        notify backup $2 # no need to stop haproxy explicitly when switching to BACKUP; this keeps chk_maintaince_down and chk_haproxy from operating on the haproxy service repeatedly
        exit 0
    ;;
    fault)
        notify fault $2 # same as above
        exit 0
    ;;
    *)
        echo "Usage: $(basename $0) {master|backup|fault}"
        exit 1
    ;;
esac
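The script's message formatting can be dry-run without a mail server by reproducing what notify() builds (a sketch; mail delivery is replaced with echo):

```shell
# Build the same subject/body strings notify() produces, without sending mail.
notify_dryrun() {
    mailsubject="$(hostname) to be $1: $2 floating"
    mailbody="$(date '+%F %H:%M:%S'): vrrp transition, $(hostname) changed to be $1"
    echo "subject: $mailsubject"
    echo "body: $mailbody"
}
notify_dryrun master 172.16.25.10
```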

Start keepalived on both nodes:

$ service keepalived start

4. HAProxy Deployment

Install HAProxy on both nodes:

$ yum -y install haproxy

Edit haproxy.cfg on nodes 172.16.25.109 and 172.16.25.110 (the file is identical on both):

$ vim /etc/haproxy/haproxy.cfg

The configuration file reads:

global
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user         haproxy
    group       haproxy
    daemon # run as a background process
defaults
    mode                    http # HTTP mode enables layer-7 (content-based) processing
    log                     global
    option                  httplog # richer HTTP-level log output
    option                  dontlognull
    option http-server-close # close the server-side connection after each response
    option forwardfor except 127.0.0.0/8 # pass the client IP to the backend in the "X-Forwarded-For" header
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 30000
listen stats
    mode http
    bind 0.0.0.0:1080 # statistics page listens on port 1080
    stats enable # enable the statistics page
    stats hide-version # hide the HAProxy version
    stats uri     /haproxyadmin?stats # custom URI for the statistics page
    stats realm   Haproxy\ Statistics # prompt shown when authentication is requested
    stats auth    admin:admin # require a login for the statistics page
    stats admin if TRUE # grant admin functions to authenticated users
frontend http-in
    bind *:80
    mode http
    log global
    option httpclose
    option logasap
    option dontlognull
    capture request  header Host len 20
    capture request  header Referer len 60
    acl url_static       path_beg       -i /static /images /javascript /stylesheets
    acl url_static       path_end       -i .jpg .jpeg .gif .png .css .js .html
    use_backend static_servers if url_static # requests matching the ACLs go to the static backend
    default_backend dynamic_servers # all other requests go to the dynamic backend
backend static_servers
    balance roundrobin
    server imgsrv1 192.168.0.25:80 check maxconn 6000 # static server; more can be added, optionally with a weight
backend dynamic_servers
    balance source # the source algorithm gives a degree of session persistence for dynamic requests; cookie-based binding is the better way to keep sessions
    server websrv1 192.168.0.35:80 check maxconn 1000 # dynamic server; more can be added, optionally with a weight
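The two url_static ACLs are OR-ed: a request is "static" if its path begins with one of the listed prefixes or ends with one of the listed extensions (the -i flag makes matching case-insensitive). The same decision can be sketched in shell, with hypothetical request paths for illustration:

```shell
# Classify a request path the way the frontend's ACLs would.
is_static() {
    local p
    p=$(echo "$1" | tr 'A-Z' 'a-z')   # -i: case-insensitive matching
    case "$p" in
        /static*|/images*|/javascript*|/stylesheets*) return 0 ;;  # path_beg
        *.jpg|*.jpeg|*.gif|*.png|*.css|*.js|*.html)   return 0 ;;  # path_end
    esac
    return 1
}
is_static /images/logo.PNG   && echo "static_servers"  || echo "dynamic_servers"
is_static /cart/checkout.php && echo "static_servers"  || echo "dynamic_servers"
```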

Start the service on both nodes:

$ service haproxy start

5. Nginx Deployment

Install the build dependencies, create the nginx user, and build Nginx 1.4.7 from source:

yum -y groupinstall "Development tools"
yum -y groupinstall "Server Platform Development"
yum -y install gcc openssl-devel pcre-devel zlib-devel
groupadd -r nginx
useradd -r -g nginx -s /sbin/nologin -M nginx
tar xf nginx-1.4.7.tar.gz
cd nginx-1.4.7
mkdir -pv /var/tmp/nginx
./configure \
  --prefix=/usr \
  --sbin-path=/usr/sbin/nginx \
  --conf-path=/etc/nginx/nginx.conf \
  --error-log-path=/var/log/nginx/error.log \
  --http-log-path=/var/log/nginx/access.log \
  --pid-path=/var/run/nginx/nginx.pid  \
  --lock-path=/var/lock/nginx.lock \
  --user=nginx \
  --group=nginx \
  --with-http_ssl_module \
  --with-http_flv_module \
  --with-http_stub_status_module \
  --with-http_gzip_static_module \
  --http-client-body-temp-path=/var/tmp/nginx/client/ \
  --http-proxy-temp-path=/var/tmp/nginx/proxy/ \
  --http-fastcgi-temp-path=/var/tmp/nginx/fcgi/ \
  --http-uwsgi-temp-path=/var/tmp/nginx/uwsgi \
  --http-scgi-temp-path=/var/tmp/nginx/scgi \
  --with-pcre
make && make install

Create the service script:

vi /etc/init.d/nginx
#!/bin/sh
#
# nginx - this script starts and stops the nginx daemon
#
# chkconfig:   - 85 15
# description:  Nginx is an HTTP(S) server, HTTP(S) reverse \
#               proxy and IMAP/POP3 proxy server
# processname: nginx
# config:      /etc/nginx/nginx.conf
# config:      /etc/sysconfig/nginx
# pidfile:     /var/run/nginx.pid
# Source function library.
. /etc/rc.d/init.d/functions
# Source networking configuration.
. /etc/sysconfig/network
# Check that networking is up.
[ "$NETWORKING" = "no" ] && exit 0
nginx="/usr/sbin/nginx"
prog=$(basename $nginx)
NGINX_CONF_FILE="/etc/nginx/nginx.conf"
[ -f /etc/sysconfig/nginx ] && . /etc/sysconfig/nginx
lockfile=/var/lock/subsys/nginx
make_dirs() {
   # make required directories
   user=`nginx -V 2>&1 | grep "configure arguments:" | sed 's/[^*]*--user=\([^ ]*\).*/\1/g' -`
   options=`$nginx -V 2>&1 | grep 'configure arguments:'`
   for opt in $options; do
       if [ `echo $opt | grep '.*-temp-path'` ]; then
           value=`echo $opt | cut -d "=" -f 2`
           if [ ! -d "$value" ]; then
               # echo "creating" $value
               mkdir -p $value && chown -R $user $value
           fi
       fi
   done
}
start() {
    [ -x $nginx ] || exit 5
    [ -f $NGINX_CONF_FILE ] || exit 6
    make_dirs
    echo -n $"Starting $prog: "
    daemon $nginx -c $NGINX_CONF_FILE
    retval=$?
    echo
    [ $retval -eq 0 ] && touch $lockfile
    return $retval
}
stop() {
    echo -n $"Stopping $prog: "
    killproc $prog -QUIT
    retval=$?
    echo
    [ $retval -eq 0 ] && rm -f $lockfile
    return $retval
}
restart() {
    configtest || return $?
    stop
    sleep 1
    start
}
reload() {
    configtest || return $?
    echo -n $"Reloading $prog: "
    killproc $nginx -HUP
    RETVAL=$?
    echo
}
force_reload() {
    restart
}
configtest() {
  $nginx -t -c $NGINX_CONF_FILE
}
rh_status() {
    status $prog
}
rh_status_q() {
    rh_status >/dev/null 2>&1
}
case "$1" in
    start)
        rh_status_q && exit 0
        $1
        ;;
    stop)
        rh_status_q || exit 0
        $1
        ;;
    restart|configtest)
        $1
        ;;
    reload)
        rh_status_q || exit 7
        $1
        ;;
    force-reload)
        force_reload
        ;;
    status)
        rh_status
        ;;
    condrestart|try-restart)
        rh_status_q || exit 0
            ;;
    *)
        echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart|reload|force-reload|configtest}"
        exit 2
esac
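The make_dirs helper in the script above parses the configure arguments out of `nginx -V` and pre-creates every `*-temp-path` directory before the first start. Its parsing step can be exercised against a sample option string (the paths are the ones passed to ./configure earlier):

```shell
# Extract the *-temp-path values the same way make_dirs does.
options='--prefix=/usr --http-client-body-temp-path=/var/tmp/nginx/client/ --http-proxy-temp-path=/var/tmp/nginx/proxy/ --with-pcre'
for opt in $options; do
    if echo "$opt" | grep -q -- '-temp-path'; then
        value=$(echo "$opt" | cut -d "=" -f 2)
        echo "would mkdir -p $value"
    fi
done
```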
chmod +x /etc/init.d/nginx # grant the service script execute permission
vi /etc/nginx/nginx.conf # edit the main configuration file
worker_processes  2;
error_log  /var/log/nginx/nginx.error.log;
pid        /var/run/nginx.pid;
events {
    worker_connections  1024;
}
http {
    include       mime.types;
    default_type  application/octet-stream;
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';
    sendfile        on;
    keepalive_timeout  65;
    server {
        listen       80;
        server_name  xxrenzhe.lnmmp.com;
        access_log  /var/log/nginx/nginx.access.log  main;
        location / {
            root   /www/lnmmp.com;
            index  index.php index.html index.htm;
        }
        error_page  404              /404.html;
        error_page  500 502 503 504  /50x.html;
        location = /50x.html {
            root   /www/lnmmp.com;
        }
        location ~ \.php$ {
            root           /www/lnmmp.com;
            fastcgi_pass   127.0.0.1:9000;
            fastcgi_index  index.php;
            fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
            include        fastcgi_params;
        }
    }
}
vi /etc/nginx/fastcgi_params # edit the FastCGI parameter file
fastcgi_param  GATEWAY_INTERFACE  CGI/1.1;
fastcgi_param  SERVER_SOFTWARE    nginx;
fastcgi_param  QUERY_STRING       $query_string;
fastcgi_param  REQUEST_METHOD     $request_method;
fastcgi_param  CONTENT_TYPE       $content_type;
fastcgi_param  CONTENT_LENGTH     $content_length;
fastcgi_param  SCRIPT_FILENAME    $document_root$fastcgi_script_name;
fastcgi_param  SCRIPT_NAME        $fastcgi_script_name;
fastcgi_param  REQUEST_URI        $request_uri;
fastcgi_param  DOCUMENT_URI       $document_uri;
fastcgi_param  DOCUMENT_ROOT      $document_root;
fastcgi_param  SERVER_PROTOCOL    $server_protocol;
fastcgi_param  REMOTE_ADDR        $remote_addr;
fastcgi_param  REMOTE_PORT        $remote_port;
fastcgi_param  SERVER_ADDR        $server_addr;
fastcgi_param  SERVER_PORT        $server_port;
fastcgi_param  SERVER_NAME        $server_name;

Start the service

service nginx configtest # validate the configuration file before starting
service nginx start
ps -ef | grep nginx # check the nginx processes; the number of worker processes should match worker_processes
ss -antupl | grep 80 # confirm the service port is listening
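The worker-count check can be illustrated with a captured ps listing (sample text, not from a live system): the number of "worker process" lines should equal worker_processes, which is 2 in the config above.

```shell
# Count worker processes in a sample "ps -ef" capture.
ps_sample='root   1234     1  0 10:00 ?  nginx: master process /usr/sbin/nginx
nginx  1235  1234  0 10:00 ?  nginx: worker process
nginx  1236  1234  0 10:00 ?  nginx: worker process'
workers=$(printf '%s\n' "$ps_sample" | grep -c 'worker process')
echo "worker processes: $workers"
```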

6. Verification

HAProxy statistics page test: browse to http://172.16.25.10:1080/haproxyadmin?stats (or the other VIP) and log in with admin/admin as configured in the stats section.

Static/dynamic separation test: request a static resource such as a .jpg or .html file and confirm it is served by 192.168.0.25, then request a .php page and confirm it is served by 192.168.0.35.

High-availability test: stop haproxy on one node (or create /etc/keepalived/down on it) and confirm both VIPs move to the surviving node; restore the service and confirm the VIP returns.

This completes the walkthrough of building a highly available Nginx cluster with Keepalived, HAProxy and Nginx.

