MySQL-MMM Installation Guide (Multi-Master Replication Manager for MySQL)

1. Initial situation

A basic MMM setup requires at least two database servers and one monitoring server. The MySQL cluster environment configured below consists of four database servers and one monitoring server, as follows:

function         ip             hostname   server id
monitoring host  192.168.0.10   mon        -
master 1         192.168.0.11   db1        1
master 2         192.168.0.12   db2        2
slave 1          192.168.0.13   db3        3
slave 2          192.168.0.14   db4        4

If you are installing this for personal study, finding five machines at once is not easy; the whole setup can be completed with virtual machines.

After the configuration is complete, applications access the MySQL cluster through the following virtual IPs; MMM assigns them to different servers.

ip              role     description
192.168.0.100   writer   applications should connect to this IP for write operations
192.168.0.101   reader   applications should connect to one of these IPs for read operations
192.168.0.102   reader
192.168.0.103   reader
192.168.0.104   reader
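
For example, an application can use an ordinary MySQL client against these VIPs once MMM is running. A minimal sketch, where app_user and app_db are hypothetical placeholders for your own credentials and schema:

The code is as follows:

app$ mysql -h 192.168.0.100 -u app_user -p app_db    # writer VIP: all write traffic
app$ mysql -h 192.168.0.101 -u app_user -p app_db    # reader VIP: read-only traffic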

Architecture diagram: [figure not included in this text]

2. Basic configuration of master 1

First we install MySQL on all hosts:
aptitude install mysql-server

Then we edit the configuration file /etc/mysql/my.cnf and add the following lines - be sure to use different server ids for all hosts:

The code is as follows:

server_id = 1
log_bin = /var/log/mysql/mysql-bin.log
log_bin_index = /var/log/mysql/mysql-bin.log.index
relay_log = /var/log/mysql/mysql-relay-bin
relay_log_index = /var/log/mysql/mysql-relay-bin.index
expire_logs_days = 10
max_binlog_size = 100M
log_slave_updates = 1

Then remove the following entry:

bind-address = 127.0.0.1

Set auto_increment_increment to the number of masters:

auto_increment_increment = 2

Set auto_increment_offset to a unique, incremented number, less than auto_increment_increment, on each server:

auto_increment_offset = 1

Do not bind to any specific IP; use 0.0.0.0 instead:

bind-address = 0.0.0.0

Afterwards we need to restart MySQL for our changes to take effect:

/etc/init.d/mysql restart
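
Putting the pieces together, the corresponding fragment on db2 would look like the sketch below (db3 and db4 use server_id 3 and 4; the auto_increment settings matter on the two masters):

The code is as follows:

# /etc/mysql/my.cnf on db2 (illustrative)
server_id                = 2
log_bin                  = /var/log/mysql/mysql-bin.log
log_bin_index            = /var/log/mysql/mysql-bin.log.index
relay_log                = /var/log/mysql/mysql-relay-bin
relay_log_index          = /var/log/mysql/mysql-relay-bin.index
expire_logs_days         = 10
max_binlog_size          = 100M
log_slave_updates        = 1
auto_increment_increment = 2
auto_increment_offset    = 2
bind-address             = 0.0.0.0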

3. Create users

Now we can create the required users. We'll need 3 different users:

function          description                                                                privileges
monitor user      used by the mmm monitor to check the health of the MySQL servers          REPLICATION CLIENT
agent user        used by the mmm agent to change read-only mode, replication master, etc.  SUPER, REPLICATION CLIENT, PROCESS
replication user  used for replication                                                       REPLICATION SLAVE



The code is as follows:

GRANT REPLICATION CLIENT                 ON *.* TO 'mmm_monitor'@'192.168.0.%' IDENTIFIED BY 'monitor_password';
GRANT SUPER, REPLICATION CLIENT, PROCESS ON *.* TO 'mmm_agent'@'192.168.0.%'   IDENTIFIED BY 'agent_password';
GRANT REPLICATION SLAVE                  ON *.* TO 'replication'@'192.168.0.%' IDENTIFIED BY 'replication_password';

Note: We could be more restrictive here regarding the hosts from which the users are allowed to connect: mmm_monitor is used from 192.168.0.10. mmm_agent and replication are used from 192.168.0.11 - 192.168.0.14.
Note: Don't use a replication_password longer than 32 characters.
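
To verify the grants took effect, you can connect as each user from an allowed host; a quick check from the monitoring host (assuming the mysql client is installed there):

The code is as follows:

mon$ mysql -h 192.168.0.11 -u mmm_monitor -p -e "SHOW GRANTS;"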

4. Synchronisation of data between both databases

I'll assume that db1 contains the correct data. If you have an empty database, you still have to synchronize the accounts we have just created.
First make sure that no one is altering the data while we create a backup.

The code is as follows:

(db1) mysql> FLUSH TABLES WITH READ LOCK;

Then get the current position in the binary log. We will need these values when we set up the replication on db2, db3 and db4.

The code is as follows:

(db1) mysql> SHOW MASTER STATUS;
+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000002 |      374 |              |                  |
+------------------+----------+--------------+------------------+
1 row in set (0.00 sec)

DON'T CLOSE this mysql-shell. If you close it, the database lock will be removed. Open a second console and type:

db1$ mysqldump -u root -p --all-databases > /tmp/database-backup.sql

Now we can remove the database-lock. Go to the first shell:

(db1) mysql> UNLOCK TABLES;

Copy the database backup to db2, db3 and db4.

The code is as follows:

db1$ scp /tmp/database-backup.sql <user>@192.168.0.12:/tmp
db1$ scp /tmp/database-backup.sql <user>@192.168.0.13:/tmp
db1$ scp /tmp/database-backup.sql <user>@192.168.0.14:/tmp

Then import this into db2, db3 and db4:

The code is as follows:

db2$ mysql -u root -p < /tmp/database-backup.sql
db3$ mysql -u root -p < /tmp/database-backup.sql
db4$ mysql -u root -p < /tmp/database-backup.sql

Then flush the privileges on db2, db3 and db4. We have altered the user-table and mysql has to reread this table.

The code is as follows:

(db2) mysql> FLUSH PRIVILEGES;
(db3) mysql> FLUSH PRIVILEGES;
(db4) mysql> FLUSH PRIVILEGES;

On Debian and Ubuntu, copy the password in /etc/mysql/debian.cnf from db1 to db2, db3 and db4. This password is used for starting and stopping MySQL.
Both databases now contain the same data. We now can setup replication to keep it that way.
Note: the import only adds records from the dump file. You should drop all existing databases before importing the dump file.
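
A sketch of such a cleanup on db2, where app_db stands for any hypothetical leftover database (never drop the mysql system schema):

The code is as follows:

db2$ mysql -u root -p -e "SHOW DATABASES;"                    # list what is currently there
db2$ mysql -u root -p -e "DROP DATABASE IF EXISTS app_db;"    # repeat per leftover database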

5. Setup replication

Configure replication on db2, db3 and db4 with the following commands:

The code is as follows:

(db2) mysql> CHANGE MASTER TO master_host='192.168.0.11', master_port=3306, master_user='replication',
              master_password='replication_password', master_log_file='<file>', master_log_pos=<position>;
(db3) mysql> CHANGE MASTER TO master_host='192.168.0.11', master_port=3306, master_user='replication',
              master_password='replication_password', master_log_file='<file>', master_log_pos=<position>;
(db4) mysql> CHANGE MASTER TO master_host='192.168.0.11', master_port=3306, master_user='replication',
              master_password='replication_password', master_log_file='<file>', master_log_pos=<position>;

Please insert the values returned by “SHOW MASTER STATUS” on db1 at the <file> and <position> tags.
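
With the example output from step 4 (file mysql-bin.000002, position 374), the command on db2 would read:

The code is as follows:

(db2) mysql> CHANGE MASTER TO master_host='192.168.0.11', master_port=3306, master_user='replication',
              master_password='replication_password', master_log_file='mysql-bin.000002', master_log_pos=374;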
Start the slave-process on all 3 hosts:

The code is as follows:

(db2) mysql> START SLAVE;
(db3) mysql> START SLAVE;
(db4) mysql> START SLAVE;

Now check if the replication is running correctly on all hosts:

The code is as follows:

(db2) mysql> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
             Slave_IO_State: Waiting for master to send event
                Master_Host: 192.168.0.11
                Master_User: replication
                Master_Port: 3306
              Connect_Retry: 60

(db3) mysql> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
             Slave_IO_State: Waiting for master to send event
                Master_Host: 192.168.0.11
                Master_User: replication
                Master_Port: 3306
              Connect_Retry: 60

(db4) mysql> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
             Slave_IO_State: Waiting for master to send event
                Master_Host: 192.168.0.11
                Master_User: replication
                Master_Port: 3306
              Connect_Retry: 60

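Note that the output above is truncated; in particular, verify that Slave_IO_Running and Slave_SQL_Running both report Yes. A quick check from the shell:

The code is as follows:

db2$ mysql -u root -p -e "SHOW SLAVE STATUS\G" | grep -E "Slave_(IO|SQL)_Running"
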
Now we have to make db1 replicate from db2. First we have to determine the values for master_log_file and master_log_pos:

The code is as follows:

(db2) mysql> SHOW MASTER STATUS;
+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000001 |       98 |              |                  |
+------------------+----------+--------------+------------------+
1 row in set (0.00 sec)

Now we configure replication on db1 with the following command:

The code is as follows:

(db1) mysql> CHANGE MASTER TO master_host = '192.168.0.12', master_port=3306, master_user='replication',
              master_password='replication_password', master_log_file='<file>', master_log_pos=<position>;

Now insert the values returned by “SHOW MASTER STATUS” on db2 at the <file> and <position> tags.

Start the slave-process:

(db1) mysql> START SLAVE;

Now check if the replication is running correctly on db1:

The code is as follows:

(db1) mysql> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
             Slave_IO_State: Waiting for master to send event
                Master_Host: 192.168.0.12
                Master_User: replication
                Master_Port: 3306
              Connect_Retry: 60

Replication between the nodes should now be complete. Try it by inserting some data into both db1 and db2 and checking that the data appears on all other nodes.
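
A minimal smoke test, assuming a throwaway database named mmm_test that you drop afterwards:

The code is as follows:

(db1) mysql> CREATE DATABASE mmm_test;
(db1) mysql> CREATE TABLE mmm_test.t (id INT AUTO_INCREMENT PRIMARY KEY, v VARCHAR(16));
(db1) mysql> INSERT INTO mmm_test.t (v) VALUES ('from db1');
(db2) mysql> INSERT INTO mmm_test.t (v) VALUES ('from db2');
(db3) mysql> SELECT v FROM mmm_test.t;    -- both rows should appear here and on db4
(db1) mysql> DROP DATABASE mmm_test;      -- clean up; the drop replicates as well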

6. Install MMM

Create user
Optional: Create a user that will be the owner of the MMM scripts and configuration files. This provides an easier way to securely manage the monitor scripts.

useradd --comment "MMM Script owner" --shell /sbin/nologin mmmd

Monitoring host
First install dependencies:

The code is as follows:

aptitude install liblog-log4perl-perl libmailtools-perl liblog-dispatch-perl libclass-singleton-perl libproc-daemon-perl libalgorithm-diff-perl libdbi-perl libdbd-mysql-perl

Then fetch the latest mysql-mmm-common*.deb and mysql-mmm-monitor*.deb and install them:

dpkg -i mysql-mmm-common_*.deb mysql-mmm-monitor*.deb

Database hosts
On Ubuntu, first install the dependencies:

aptitude install liblog-log4perl-perl libmailtools-perl liblog-dispatch-perl iproute libnet-arp-perl libproc-daemon-perl libalgorithm-diff-perl libdbi-perl libdbd-mysql-perl

Then fetch the latest mysql-mmm-common*.deb and mysql-mmm-agent*.deb and install them:

dpkg -i mysql-mmm-common_*.deb mysql-mmm-agent_*.deb

On RedHat:

yum install -y mysql-mmm-agent

This will take care of all the dependencies, which may include:

Installed:

mysql-mmm-agent.noarch 0:2.2.1-1.el5

Dependency Installed:

The code is as follows:

libart_lgpl.x86_64 0:2.3.17-4                                                
mysql-mmm.noarch 0:2.2.1-1.el5                                               
perl-Algorithm-Diff.noarch 0:1.1902-2.el5                                    
perl-DBD-mysql.x86_64 0:4.008-1.rf                                           
perl-DateManip.noarch 0:5.44-1.2.1                                           
perl-IPC-Shareable.noarch 0:0.60-3.el5                                       
perl-Log-Dispatch.noarch 0:2.20-1.el5                                        
perl-Log-Dispatch-FileRotate.noarch 0:1.16-1.el5                             
perl-Log-Log4perl.noarch 0:1.13-2.el5                                        
perl-MIME-Lite.noarch 0:3.01-5.el5                                           
perl-Mail-Sender.noarch 0:0.8.13-2.el5.1                                     
perl-Mail-Sendmail.noarch 0:0.79-9.el5.1                                     
perl-MailTools.noarch 0:1.77-1.el5                                           
perl-Net-ARP.x86_64 0:1.0.6-2.1.el5                                          
perl-Params-Validate.x86_64 0:0.88-3.el5                                     
perl-Proc-Daemon.noarch 0:0.03-1.el5                                         
perl-TimeDate.noarch 1:1.16-5.el5                                            
perl-XML-DOM.noarch 0:1.44-2.el5                                             
perl-XML-Parser.x86_64 0:2.34-6.1.2.2.1                                      
perl-XML-RegExp.noarch 0:0.03-2.el5                                          
rrdtool.x86_64 0:1.2.27-3.el5                                                
rrdtool-perl.x86_64 0:1.2.27-3.el5

Configure MMM

All generic configuration-options are grouped in a separate file called /etc/mysql-mmm/mmm_common.conf. This file will be the same on all hosts in the system:

The code is as follows:

active_master_role          writer

<host default>
    cluster_interface       eth0
    pid_path                /var/run/mmmd_agent.pid
    bin_path                /usr/lib/mysql-mmm/
    replication_user        replication
    replication_password    replication_password
    agent_user              mmm_agent
    agent_password          agent_password
</host>
<host db1>
    ip                      192.168.0.11
    mode                    master
    peer                    db2
</host>
<host db2>
    ip                      192.168.0.12
    mode                    master
    peer                    db1
</host>
<host db3>
    ip                      192.168.0.13
    mode                    slave
</host>
<host db4>
    ip                      192.168.0.14
    mode                    slave
</host>

<role writer>
    hosts                   db1, db2
    ips                     192.168.0.100
    mode                    exclusive
</role>
<role reader>
    hosts                   db1, db2, db3, db4
    ips                     192.168.0.101, 192.168.0.102, 192.168.0.103, 192.168.0.104
    mode                    balanced
</role>

Don't forget to copy this file to all other hosts (including the monitoring host).
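
For example, from db1 (reusing the scp pattern from section 4; <user> is your login on each host):

The code is as follows:

db1$ scp /etc/mysql-mmm/mmm_common.conf <user>@192.168.0.10:/etc/mysql-mmm/
db1$ scp /etc/mysql-mmm/mmm_common.conf <user>@192.168.0.12:/etc/mysql-mmm/
db1$ scp /etc/mysql-mmm/mmm_common.conf <user>@192.168.0.13:/etc/mysql-mmm/
db1$ scp /etc/mysql-mmm/mmm_common.conf <user>@192.168.0.14:/etc/mysql-mmm/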

On the database hosts we need to edit /etc/mysql-mmm/mmm_agent.conf. Change “db1” accordingly on the other hosts:

The code is as follows:

include mmm_common.conf
this db1

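For instance, on db2 the same file would read:

The code is as follows:

include mmm_common.conf
this db2
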
On the monitor host we need to edit /etc/mysql-mmm/mmm_mon.conf:

The code is as follows:

include mmm_common.conf

<monitor>
    ip                      127.0.0.1
    pid_path                /var/run/mmmd_mon.pid
    bin_path                /usr/lib/mysql-mmm/
    status_path             /var/lib/misc/mmmd_mon.status
    ping_ips                192.168.0.1, 192.168.0.11, 192.168.0.12, 192.168.0.13, 192.168.0.14
</monitor>

<host default>
    monitor_user            mmm_monitor
    monitor_password        monitor_password
</host>

debug 0

ping_ips are a few IPs that are pinged to determine whether the network connection of the monitor is OK. I used my switch (192.168.0.1) and the four database servers.

7. Start MMM

Start the agents
(On the database hosts)

Debian/Ubuntu
Edit /etc/default/mysql-mmm-agent to enable the agent:

ENABLED=1

Red Hat
RHEL/Fedora does not enable packages to start at boot time per default policy, so you might have to turn it on manually so the agents will start automatically when server is rebooted:

chkconfig mysql-mmm-agent on

Then start it:

/etc/init.d/mysql-mmm-agent start

Start the monitor
(On the monitoring host) Edit /etc/default/mysql-mmm-monitor to enable the monitor:

ENABLED=1

Then start it:

/etc/init.d/mysql-mmm-monitor start

Wait a few seconds for mmmd_mon to start up; then you can use mmm_control to check the status of the cluster:

The code is as follows:

mon$ mmm_control show
  db1(192.168.0.11) master/AWAITING_RECOVERY. Roles:
  db2(192.168.0.12) master/AWAITING_RECOVERY. Roles:
  db3(192.168.0.13) slave/AWAITING_RECOVERY. Roles:
  db4(192.168.0.14) slave/AWAITING_RECOVERY. Roles:

Because it's the first startup, the monitor does not know our hosts, so it sets all hosts to state AWAITING_RECOVERY and logs a warning message:

The code is as follows:

mon$ tail /var/log/mysql-mmm/mmm_mon.warn

2009/10/28 23:15:28  WARN Detected new host 'db1': Setting its initial state to 'AWAITING_RECOVERY'. Use 'mmm_control set_online db1' to switch it online.
2009/10/28 23:15:28  WARN Detected new host 'db2': Setting its initial state to 'AWAITING_RECOVERY'. Use 'mmm_control set_online db2' to switch it online.
2009/10/28 23:15:28  WARN Detected new host 'db3': Setting its initial state to 'AWAITING_RECOVERY'. Use 'mmm_control set_online db3' to switch it online.
2009/10/28 23:15:28  WARN Detected new host 'db4': Setting its initial state to 'AWAITING_RECOVERY'. Use 'mmm_control set_online db4' to switch it online.

Now we set our hosts online (db1 first, because the slaves replicate from this host):

The code is as follows:

mon$ mmm_control set_online db1
OK: State of 'db1' changed to ONLINE. Now you can wait some time and check its new roles!
mon$ mmm_control set_online db2
OK: State of 'db2' changed to ONLINE. Now you can wait some time and check its new roles!
mon$ mmm_control set_online db3
OK: State of 'db3' changed to ONLINE. Now you can wait some time and check its new roles!
mon$ mmm_control set_online db4
OK: State of 'db4' changed to ONLINE. Now you can wait some time and check its new roles!
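
Check the status once more; the exact role distribution can vary, but after everything is online the output should resemble this illustrative example:

The code is as follows:

mon$ mmm_control show
  db1(192.168.0.11) master/ONLINE. Roles: writer(192.168.0.100), reader(192.168.0.101)
  db2(192.168.0.12) master/ONLINE. Roles: reader(192.168.0.102)
  db3(192.168.0.13) slave/ONLINE. Roles: reader(192.168.0.103)
  db4(192.168.0.14) slave/ONLINE. Roles: reader(192.168.0.104)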

Reference: http://mysql-mmm.org/mmm2:guide
