MySQL High Availability with MMM: Installation and Deployment Notes

1. Install MySQL

Please refer to http://www.jb51.net/article/47094.htm (installation is not covered here).

2. Basic configuration of master 1

3. Create users
GRANT REPLICATION CLIENT ON *.* TO 'mmm_monitor'@'%' IDENTIFIED BY 'mmm_monitor';
GRANT SUPER, REPLICATION CLIENT, PROCESS ON *.* TO 'mmm_agent'@'%' IDENTIFIED BY 'mmm_agent';
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'10.88.49.%' IDENTIFIED BY 'repl';
GRANT INSERT, CREATE, DELETE, UPDATE, SELECT ON *.* TO 'tim'@'%' IDENTIFIED BY 'tim';
Note: do not use a replication password longer than 32 characters.

4. Synchronisation of data between both databases
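The article gives no commands for this step. One common approach is a mysqldump-based copy (a sketch only; the credentials and option choices here are assumptions, not from the original text):

```shell
# On the current master: take a consistent dump that also records the
# binlog coordinates (--master-data=2 writes them as a comment, which
# gives you the master_log_file/master_log_pos values for step 5).
mysqldump -uroot -p --all-databases --single-transaction \
          --master-data=2 > /tmp/full_dump.sql

# On the other master and the slave: load the dump BEFORE
# configuring replication.
mysql -uroot -p < /tmp/full_dump.sql
```

This requires a running MySQL server on each host, so it is shown as a command sketch rather than a tested script.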

5. Set up replication
Configure master→slave replication:
change master to master_host='10.88.49.119',master_log_file='mysql56-bin.000026',master_log_pos=332, master_user='repl',master_password='repl';
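After CHANGE MASTER TO, the slave threads still have to be started and verified (standard MySQL statements, run on the slave):

```sql
START SLAVE;
SHOW SLAVE STATUS\G
-- Slave_IO_Running and Slave_SQL_Running should both be Yes,
-- and Seconds_Behind_Master should eventually reach 0.
```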

6. Install MMM
6.1 Download the MMM tarball
wget http://mysql-mmm.org/_media/:mmm2:mysql-mmm-2.2.1.tar.gz
6.2 Rename the file (the download keeps the ':mmm2:' prefix) and unpack it:
mv :mmm2:mysql-mmm-2.2.1.tar.gz mysql-mmm-2.2.1.tar.gz
tar -xvf mysql-mmm-2.2.1.tar.gz
cd mysql-mmm-2.2.1
make
Note: make and make install are not strictly required here; the *.conf files already exist in the /etc/mysql-mmm folder.

7. Install library packages
yum install -y perl-*
yum install -y libart_lgpl.x86_64
yum install -y mysql-mmm.noarch    (this one failed here)
yum install -y rrdtool.x86_64
yum install -y rrdtool-perl.x86_64
7.1 Alternatively, when the host has network access, install the Perl dependencies via CPAN:
cpan -i Algorithm::Diff Class::Singleton DBI DBD::mysql Log::Dispatch Log::Log4perl Mail::Send Net::Ping Proc::Daemon Time::HiRes Params::Validate Net::ARP
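Before starting the daemons it is worth verifying that the required modules actually load; a minimal sketch (the module list is a subset of the cpan command above):

```shell
#!/bin/sh
# Check that each Perl module MMM depends on can be loaded.
# Prints "<module> ok" or "<module> MISSING" for each one.
for m in Algorithm::Diff Class::Singleton DBI Log::Log4perl \
         Proc::Daemon Net::ARP; do
    if perl -M"$m" -e1 2>/dev/null; then
        echo "$m ok"
    else
        echo "$m MISSING"
    fi
done
```

Any line reporting MISSING points at exactly the kind of "Can't locate ... in @INC" startup failure shown in steps 10 and 11 below.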

8. Configure MMM on the database hosts
vim /etc/mysql-mmm/mmm_common.conf
Do not forget to copy this file to all other hosts (including the monitoring host):
scp /etc/mysql-mmm/mmm_common.conf 10.88.49.119:/etc/mysql-mmm/
scp /etc/mysql-mmm/mmm_common.conf 10.88.49.122:/etc/mysql-mmm/
scp /etc/mysql-mmm/mmm_common.conf 10.88.49.123:/etc/mysql-mmm/
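For reference, a mmm_common.conf matching this article's hosts might look as follows. This is a sketch assembled from the IPs, users, and roles that appear throughout the post (the layout follows the standard MMM configuration format); verify every value against your own setup:

```
active_master_role      writer

<host default>
    cluster_interface       eth0
    pid_path                /var/run/mmm_agentd.pid
    bin_path                /usr/lib/mysql-mmm/
    replication_user        repl
    replication_password    repl
    agent_user              mmm_agent
    agent_password          mmm_agent
</host>

<host db1>
    ip      10.88.49.118
    mode    master
    peer    db2
</host>

<host db2>
    ip      10.88.49.119
    mode    master
    peer    db1
</host>

<host db3>
    ip      10.88.49.122
    mode    slave
</host>

<role writer>
    hosts   db1, db2
    ips     10.88.49.130
    mode    exclusive
</role>

<role reader>
    hosts   db1, db2, db3
    ips     10.88.49.131, 10.88.49.132, 10.88.49.133, 10.88.49.134, 10.88.49.135
    mode    balanced
</role>
```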

On the database hosts we need to edit /etc/mysql-mmm/mmm_agent.conf, changing "db1" accordingly on each host. Then register the agent service:
chkconfig --add mysql-mmm-agent
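Per the standard MMM guide, mmm_agent.conf is only two lines; on db1 it reads (change the last line to db2 or db3 on the other hosts):

```
include mmm_common.conf
this db1
```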

9. Config Monitor
On the monitor host(10.88.49.123) we need to edit /etc/mysql-mmm/mmm_mon.conf:
include mmm_common.conf
<monitor>
ip 127.0.0.1
pid_path /var/run/mmm_mond.pid
bin_path /usr/lib/mysql-mmm/
status_path /var/lib/misc/mmm_mond.status
auto_set_online 5
ping_ips 10.88.49.254,10.88.49.130,10.88.49.131,10.88.49.132,10.88.49.133,10.88.49.134
</monitor>
<host default>
monitor_user mmm_monitor
monitor_password mmm_monitor
</host>
debug 0
ping_ips lists addresses that are pinged to determine whether the monitor's own network connection is OK. I used the gateway of my switch (10.88.49.254) and the cluster IPs of the database servers. The gateway can be read from the interface configuration:
[root@oracle mysql-mmm]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
IPADDR=10.88.49.118
NETMASK=255.255.254.0
GATEWAY=10.88.49.254
DNS1=10.106.185.143
DNS2=10.106.185.138
ONBOOT=yes
BOOTPROTO=none
TYPE=Ethernet
HWADDR=00:15:5D:01:6A:0C

10. Start the agent on the database hosts
chkconfig --add mysql-mmm-agent
[root@oracle ~]# mysql-mmm-agent start
-bash: mysql-mmm-agent: command not found
[root@oracle ~]# service mysql-mmm-agent start
Daemon bin: '/usr/sbin/mmm_agentd'
Daemon pid: '/var/run/mmm_agentd.pid'
Starting MMM Agent daemon... Can't locate Proc/Daemon.pm in @INC (@INC contains: /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .) at /usr/sbin/mmm_agentd line 7.
BEGIN failed--compilation aborted at /usr/sbin/mmm_agentd line 7.
[root@oracle ~]# cpan Proc::Daemon
[root@oracle ~]# cpan Log::Log4perl
[root@oracle ~]# /etc/init.d/mysql-mmm-agent start
Daemon bin: '/usr/sbin/mmm_agentd'
Daemon pid: '/var/run/mmm_agentd.pid'
Starting MMM Agent daemon... Ok

11. Start the monitor on the monitor host
chkconfig --add mysql-mmm-monitor
[root@localhost mysql-mmm-2.2.1]# service mysql-mmm-monitor start
Daemon bin: '/usr/sbin/mmm_mond'
Daemon pid: '/var/run/mmm_mond.pid'
Starting MMM Monitor daemon: Can't locate Proc/Daemon.pm in @INC (@INC contains: /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .) at /usr/sbin/mmm_mond line 11.
BEGIN failed--compilation aborted at /usr/sbin/mmm_mond line 11.
failed
[root@oracle ~]# cpan Proc::Daemon
[root@oracle ~]# cpan Log::Log4perl
[root@localhost mysql-mmm-2.2.1]# service mysql-mmm-monitor start
Daemon bin: '/usr/sbin/mmm_mond'
Daemon pid: '/var/run/mmm_mond.pid'
Starting MMM Monitor daemon: Ok

12. Failure checks and troubleshooting
[root@oracle mysql-mmm]# ping 10.88.49.130
PING 10.88.49.130 (10.88.49.130) 56(84) bytes of data.
From 10.88.49.118 icmp_seq=2 Destination Host Unreachable
From 10.88.49.118 icmp_seq=3 Destination Host Unreachable
From 10.88.49.118 icmp_seq=4 Destination Host Unreachable
From 10.88.49.118 icmp_seq=6 Destination Host Unreachable
From 10.88.49.118 icmp_seq=7 Destination Host Unreachable
From 10.88.49.118 icmp_seq=8 Destination Host Unreachable

12.1 Enabling debug output
Add debug 1 to mmm_agent.conf and mmm_mon.conf respectively, then watch the logs they produce.

[root@localhost mysql-mmm]# mmm_control show
db1(10.88.49.118) master/AWAITING_RECOVERY. Roles:
db2(10.88.49.119) master/AWAITING_RECOVERY. Roles:
db3(10.88.49.122) slave/AWAITING_RECOVERY. Roles:
[root@localhost mysql-mmm]# mmm_control set_online db1
OK: State of 'db1' changed to ONLINE. Now you can wait some time and check its new roles!
[root@localhost mysql-mmm]# mmm_control set_online db2
OK: State of 'db2' changed to ONLINE. Now you can wait some time and check its new roles!
[root@localhost mysql-mmm]# mmm_control set_online db3
OK: State of 'db3' changed to ONLINE. Now you can wait some time and check its new roles!
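Finding the hosts that still need set_online can be scripted; a small sketch that filters `mmm_control show` output (fed from a here-doc here so it runs without a live monitor — in practice pipe the real output in):

```shell
#!/bin/sh
# Print the names of hosts reported as AWAITING_RECOVERY,
# i.e. the candidates for `mmm_control set_online <host>`.
# Live usage: mmm_control show | awk -F'[( ]' '/AWAITING_RECOVERY/ { print $1 }'
awk -F'[( ]' '/AWAITING_RECOVERY/ { print $1 }' <<'EOF'
db1(10.88.49.118) master/AWAITING_RECOVERY. Roles:
db2(10.88.49.119) master/ONLINE. Roles: writer(10.88.49.130)
db3(10.88.49.122) slave/AWAITING_RECOVERY. Roles:
EOF
```

With the sample input above this prints db1 and db3, one per line; the field separator `[( ]` splits the host name off at the opening parenthesis.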

12.2 Pinging the VIP fails
2013/02/19 10:00:15 FATAL Couldn't configure IP '10.88.49.131' on interface 'eth1': undef
2013/02/19 10:00:15 DEBUG Executing /usr/lib/mysql-mmm//agent/mysql_allow_write
Can't locate Net/ARP.pm in @INC (@INC contains: /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .) at /usr/share/perl5/vendor_perl/MMM/Agent/Helpers/Network.pm line 11.
[1.1] First attempt:
cpan Net/ARP.pm        (note: this module-name syntax is wrong, and the install failed)
yum install libuuid*   (this also errored)
[2.1] If [1.1] fails, use the CPAN shell instead:
[root@localhost mysql-mmm]# perl -MCPAN -e shell
cpan> install Net::ARP
[ok]

12.3 Failure info:
2013/02/19 10:25:23 INFO Added: reader(10.88.49.131), writer(10.88.49.130)
2013/02/19 10:25:23 DEBUG Executing /usr/lib/mysql-mmm//agent/configure_ip eth1 10.88.49.131
Device "eth1" does not exist.
2013/02/19 10:25:23 FATAL Couldn't configure IP '10.88.49.131' on interface 'eth1': ERROR: Could not check if ip 10.88.49.131 is configured on eth1:
2013/02/19 10:25:23 DEBUG Executing /usr/lib/mysql-mmm//agent/sync_with_master
2013/02/19 10:25:23 DEBUG Executing /usr/lib/mysql-mmm//agent/mysql_allow_write
2013/02/19 10:25:23 DEBUG Executing /usr/lib/mysql-mmm//agent/configure_ip eth1 10.88.49.130
Device "eth1" does not exist.
2013/02/19 10:25:23 FATAL Couldn't configure IP '10.88.49.130' on interface 'eth1': ERROR: Could not check if ip 10.88.49.130 is configured on eth1:
2013/02/19 10:25:23 DEBUG Fetching uptime from /proc/uptime
2013/02/19 10:25:23 DEBUG Uptime is 158489.10
2013/02/19 10:25:23 DEBUG Daemon: Answer = 'OK: Status applied successfully!'
[ok] cluster_interface in mmm_common.conf must be set to an interface that actually exists on the host: eth0 here, as shown in the ifcfg-eth0 file above, not eth1.

12.4 Packet loss when connecting to a reader VIP (note that only every third icmp_seq gets a reply):
[root@localhost mysql-mmm]# ping 10.88.49.134
PING 10.88.49.134 (10.88.49.134) 56(84) bytes of data.
64 bytes from 10.88.49.134: icmp_seq=3 ttl=64 time=0.265 ms
64 bytes from 10.88.49.134: icmp_seq=6 ttl=64 time=0.699 ms
64 bytes from 10.88.49.134: icmp_seq=9 ttl=64 time=0.482 ms
64 bytes from 10.88.49.134: icmp_seq=12 ttl=64 time=0.405 ms
64 bytes from 10.88.49.134: icmp_seq=15 ttl=64 time=0.430 ms

14. Check all
[root@localhost ~]# mmm_control checks
db2 ping [last change: 2013/02/19 12:41:45] OK
db2 mysql [last change: 2013/02/19 12:41:45] OK
db2 rep_threads [last change: 2013/02/19 12:41:45] OK
db2 rep_backlog [last change: 2013/02/19 12:41:45] OK: Backlog is null
db3 ping [last change: 2013/02/19 12:41:45] OK
db3 mysql [last change: 2013/02/19 12:41:45] OK
db3 rep_threads [last change: 2013/02/19 12:41:45] OK
db3 rep_backlog [last change: 2013/02/19 12:41:45] OK: Backlog is null
db1 ping [last change: 2013/02/19 12:41:45] OK
db1 mysql [last change: 2013/02/19 12:41:45] OK
db1 rep_threads [last change: 2013/02/19 12:41:45] OK
db1 rep_backlog [last change: 2013/02/19 12:41:45] OK: Backlog is null
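When one of these checks fails, the trailing OK turns into an error message. A small filter can surface only the failing checks (the ERROR line below is a hypothetical example; MMM's exact wording may differ):

```shell
#!/bin/sh
# Print only the checks whose status after the "] " separator is not OK.
# Live usage: mmm_control checks | awk -F'] ' '$2 !~ /^OK/'
awk -F'] ' '$2 !~ /^OK/' <<'EOF'
db1 ping [last change: 2013/02/19 12:41:45] OK
db1 mysql [last change: 2013/02/19 12:41:45] ERROR: Connect error (host = 10.88.49.118:3306)!
db1 rep_threads [last change: 2013/02/19 12:41:45] OK
EOF
```

Splitting on "] " makes the status the second field, so "OK: Backlog is null" lines are also correctly treated as healthy.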

15. Verify master<->master<->slave failover
15.1 Change the writer from 10.88.49.118 to 10.88.49.119 by stopping mysqld on db1 (10.88.49.118).
Before the switch, SHOW SLAVE STATUS on 10.88.49.122 reports Master_Host = '10.88.49.118'.
[root@oracle ~]# service mysqld56 stop
Shutting down MySQL... SUCCESS!

15.1.1 The monitor host log shows the failover:
[root@localhost ~]# tail -f /var/log/mysql-mmm/mmm_mond.log
2013/02/20 10:34:42 INFO Removing all roles from host 'db1':
2013/02/20 10:34:42 INFO Removed role 'reader(10.88.49.134)' from host 'db1'
2013/02/20 10:34:42 INFO Removed role 'writer(10.88.49.130)' from host 'db1'
2013/02/20 10:34:42 INFO Orphaned role 'writer(10.88.49.130)' has been assigned to 'db2'
2013/02/20 10:34:42 INFO Orphaned role 'reader(10.88.49.134)' has been assigned to 'db3'
15.1.2 On the slave host (10.88.49.122), the slave has switched its master_host:
[root@localhost ~]# mysql -P3307 -S /data56/mysql.sock -p123456
mysql> show slave status\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 10.88.49.119

15.1.3 However, on the monitor host (10.88.49.123) things can go wrong: 'mmm_control show' may hang, with the log stuck at:
2013/02/20 10:37:25 DEBUG Listener: Waiting for connection...
2013/02/20 10:37:28 DEBUG Listener: Waiting for connection...

15.1.4 [] Why?
I have solved this problem; it can occur when:
(1) the 'peer' parameter is wrong;
(2) 'ping_ips' and the role 'ips' are wrong.

15.2 Change the writer back from 10.88.49.119 to 10.88.49.118
15.2.1 Start mysqld on 10.88.49.118, stop mysqld on 10.88.49.119, then run 'mmm_control set_online db1' on the monitor host.
15.2.2 Slave status on the slave host (10.88.49.122):
[root@localhost ~]# mysql -P3307 -S /data56/mysql.sock -p123456
mysql> show slave status\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 10.88.49.118
15.3 Stop mysqld on the slave host (10.88.49.122)
15.3.1 Status on the monitor host:
[root@localhost mysql-mmm]# mmm_control show
db1(10.88.49.118) master/ONLINE. Roles: reader(10.88.49.134), reader(10.88.49.135), writer(10.88.49.130)
db2(10.88.49.119) master/ONLINE. Roles: reader(10.88.49.131), reader(10.88.49.132), reader(10.88.49.133)
db3(10.88.49.122) slave/HARD_OFFLINE. Roles:
[] db3's reader IPs have been redistributed across the two masters.

15.3.2 Start mysqld on the slave host (10.88.49.122) again

15.3.3 Status on the monitor host:
[root@localhost mysql-mmm]# mmm_control show
db1(10.88.49.118) master/ONLINE. Roles: reader(10.88.49.134), reader(10.88.49.135), writer(10.88.49.130)
db2(10.88.49.119) master/ONLINE. Roles: reader(10.88.49.131), reader(10.88.49.132), reader(10.88.49.133)
db3(10.88.49.122) slave/AWAITING_RECOVERY. Roles:
[] The host must be set online manually:
[root@localhost mysql-mmm]# mmm_control set_online db3
OK: State of 'db3' changed to ONLINE. Now you can wait some time and check its new roles!
[root@localhost mysql-mmm]# mmm_control show
db1(10.88.49.118) master/ONLINE. Roles: reader(10.88.49.135), writer(10.88.49.130)
db2(10.88.49.119) master/ONLINE. Roles: reader(10.88.49.131), reader(10.88.49.133)
db3(10.88.49.122) slave/ONLINE. Roles: reader(10.88.49.132), reader(10.88.49.134)

15.4 Stop mysqld on master db2 (10.88.49.119)
15.4.1 Status on the monitor host:
[root@localhost mysql-mmm]# mmm_control show
db1(10.88.49.118) master/ONLINE. Roles: reader(10.88.49.131), reader(10.88.49.135), writer(10.88.49.130)
db2(10.88.49.119) master/HARD_OFFLINE. Roles:
db3(10.88.49.122) slave/ONLINE. Roles: reader(10.88.49.132), reader(10.88.49.133), reader(10.88.49.134)
Note that db2 is now HARD_OFFLINE.

15.4.2 Start mysqld on db2 (10.88.49.119) again
[root@localhost mysql-mmm]# mmm_control show
db1(10.88.49.118) master/ONLINE. Roles: reader(10.88.49.131), reader(10.88.49.135), writer(10.88.49.130)
db2(10.88.49.119) master/AWAITING_RECOVERY. Roles:
db3(10.88.49.122) slave/ONLINE. Roles: reader(10.88.49.132), reader(10.88.49.133), reader(10.88.49.134)
db2 comes back as AWAITING_RECOVERY, so it must be set online:
[root@localhost mysql-mmm]# mmm_control set_online db2
OK: State of 'db2' changed to ONLINE. Now you can wait some time and check its new roles!
[root@localhost mysql-mmm]# mmm_control show
db1(10.88.49.118) master/ONLINE. Roles: reader(10.88.49.135), writer(10.88.49.130)
db2(10.88.49.119) master/ONLINE. Roles: reader(10.88.49.131), reader(10.88.49.132)
db3(10.88.49.122) slave/ONLINE. Roles: reader(10.88.49.133), reader(10.88.49.134)
db2 is ONLINE again.

15.5 Move the writer role to db2 manually:
mmm_control move_role writer db2

Related articles:

http://blog.csdn.net/hguisu/article/details/7349562
http://blog.chinaunix.net/uid-28437434-id-3471237.html
http://mysql-mmm.org/downloads
http://mysql-mmm.org/mmm2:guide
http://dev.mysql.com/doc/internals/en/optimizer-primary-optimizations.html
