MySQL: A Detailed Guide to Implementing MySQL High Availability

1. Environment Overview and Basic Configuration

Node 1: node1.hulala.com  192.168.1.35  centos6.5_64  with an added 8 GB disk
Node 2: node2.hulala.com  192.168.1.36  centos6.5_64  with an added 8 GB disk
VIP: 192.168.1.39

The following must be configured on both node1 and node2.
Change the hostname:

vim /etc/sysconfig/network
HOSTNAME=node1.hulala.com

Configure hosts resolution:

vim /etc/hosts
192.168.1.35    node1.hulala.com node1
192.168.1.36    node2.hulala.com node2

Synchronize the system time:

ntpdate cn.pool.ntp.org
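ntpdate does a one-shot correction; to keep the two nodes' clocks from drifting apart again, the sync can be scheduled via cron. A sketch (the 30-minute interval and the NTP server choice are assumptions, not part of the original setup):

```shell
# Append a root cron entry that re-syncs the clock every 30 minutes
echo "*/30 * * * * /usr/sbin/ntpdate cn.pool.ntp.org >/dev/null 2>&1" >> /var/spool/cron/root
# Reload cron so the new entry takes effect
service crond restart
```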

Disable the firewall and SELinux:

service iptables stop
chkconfig iptables off
vim /etc/sysconfig/selinux
SELINUX=disabled

Apply all of the above on both nodes, then reboot both of them.

2. Configure SSH Mutual Trust

[root@node1~]# ssh-keygen -t rsa -b 1024
[root@node1~]# ssh-copy-id root@192.168.1.36
[root@node2~]# ssh-keygen -t rsa -b 1024
[root@node2~]# ssh-copy-id root@192.168.1.35

3. Install and Configure DRBD (perform the same steps on node1 and node2)

[root@node1~]# wget -c http://www.php.cn/
[root@node1~]# wget -c http://www.php.cn/
[root@node1~]# rpm -ivh *.rpm

Obtain a SHA1 hash to use as the shared secret:

[root@node1~]# sha1sum /etc/drbd.conf
8a6c5f3c21b84c66049456d34b4c4980468bcfb3  /etc/drbd.conf
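Hashing drbd.conf is just a convenient way to obtain an opaque string; the only requirement is that the secret be identical on both nodes. Any random hex string works equally well, for example (assuming OpenSSL is installed, as it is by default on CentOS):

```shell
# Generate 20 random bytes, printed as 40 hex characters,
# suitable for use as the DRBD shared-secret
openssl rand -hex 20
```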

Create and edit the resource file /etc/drbd.d/dbcluster.res:

[root@node1~]# vim /etc/drbd.d/dbcluster.res
resource dbcluster {
    protocol C;
    net {
        cram-hmac-alg sha1;
        shared-secret "8a6c5f3c21b84c66049456d34b4c4980468bcfb3";
        after-sb-0pri discard-zero-changes;
        after-sb-1pri discard-secondary;
        after-sb-2pri disconnect;
        rr-conflict disconnect;
    }
    device    /dev/drbd0;
    disk      /dev/sdb1;
    meta-disk internal;
    on node1.hulala.com {
        address   192.168.1.35:7789;
    }
    on node2.hulala.com {
        address   192.168.1.36:7789;
    }
}

Explanation of the parameters used above:
- RESOURCE: the resource name.
- PROTOCOL: protocol "C" means "synchronous": a write counts as complete only after the remote write is confirmed.
- NET: both nodes use the same SHA1 key (shared-secret).
- after-sb-0pri: when split brain occurs and no data has changed, reconnect the two nodes normally.
- after-sb-1pri: if data has changed, discard the secondary's data and resync from the primary.
- rr-conflict: if the preceding settings cannot be applied and DRBD has a role conflict, automatically disconnect the nodes.
- META-DISK: metadata is kept on the same disk (sdb1), i.e. "internal".
- ON: the nodes that make up the cluster.
Copy the DRBD configuration to the other node:

[root@node1~]# scp /etc/drbd.d/dbcluster.res root@192.168.1.36:/etc/drbd.d/

Create the DRBD resource. First create a partition (leave it unformatted) on node1 and node2:

[root@node1~]# fdisk /dev/sdb
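fdisk is interactive; for reference, the keystroke sequence to create a single primary partition spanning the whole disk can be scripted with a here-document. A sketch (destructive, and it assumes /dev/sdb is the new, empty 8 GB disk added earlier):

```shell
# n = new partition, p = primary, 1 = partition number,
# the two blank lines accept the default first/last sectors,
# w = write the partition table and exit
fdisk /dev/sdb <<EOF
n
p
1


w
EOF
```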

Create the metadata for the resource (dbcluster) on node1 and node2:

[root@node1 drbd]# drbdadm create-md dbcluster

Activate the resource (check on both node1 and node2). First make sure the drbd kernel module is loaded:

# lsmod | grep drbd

If it is not loaded, load it:

# modprobe drbd
# lsmod | grep drbd
drbd                  317261  0
libcrc32c               1246  1 drbd

Bring up the DRBD resource on both nodes:

[root@node1 drbd]# drbdadm up dbcluster
[root@node2 drbd]# drbdadm up dbcluster

Check the DRBD status (on node1 and node2):

[root@node2 drbd]# /etc/init.d/drbd status
GIT-hash: 7ad5f850d711223713d6dcadc3dd48860321070c build by dag@Build64R6, 2016-10-23 08:16:10
m:res        cs         ro                   ds                         p  mounted  fstype
0:dbcluster  Connected  Secondary/Secondary  Inconsistent/Inconsistent  C

The output shows that the DRBD service is running on both machines, but neither machine is the "primary" host, so the resource (block device) cannot be accessed yet. Start the initial synchronization:

Run this on the primary node only (node1 here):

[root@node1 drbd]# drbdadm -- --overwrite-data-of-peer primary dbcluster

Check the synchronization status:

[root@node1 drbd]# cat /proc/drbd

Notes on the fields in that output:
- cs (connection state): the network connection status.
- ro (roles): the nodes' roles (the local node's role is listed first).
- ds (disk states): the state of the disks.
- replication protocol: A, B or C (this configuration uses C).
When the status shows "cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate", the synchronization is finished.
You can also check the DRBD status like this:

[root@centos193 drbd]# drbd-overview
  0:dbcluster/0  Connected Secondary/Primary UpToDate/UpToDate C r-----
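For scripting (e.g. a monitoring check), the cs/ro/ds fields described above can be extracted from /proc/drbd. A minimal sketch against a captured sample line (the sample text is illustrative; exact output varies by DRBD version, and on a live node you would read /proc/drbd instead):

```shell
# A sample status line in the /proc/drbd format;
# on a live node: sample=$(grep 'cs:' /proc/drbd)
sample='0: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r-----'

# Pull out each "key:value" token and keep only the value
cs=$(echo "$sample" | grep -o 'cs:[^ ]*' | cut -d: -f2)
ro=$(echo "$sample" | grep -o 'ro:[^ ]*' | cut -d: -f2)
ds=$(echo "$sample" | grep -o 'ds:[^ ]*' | cut -d: -f2)
echo "connection=$cs roles=$ro disks=$ds"
# -> connection=SyncSource roles=Primary/Secondary disks=UpToDate/Inconsistent
```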

Create the file system on the primary node (node1):

[root@node1 drbd]# mkfs -t ext4 /dev/drbd0
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
.......
180 days, whichever comes first.  Use tune2fs -c or -i to override.

Note: there is no need to repeat this on the secondary node (node2); DRBD takes care of syncing the raw disk data. Also, this DRBD file system should not be permanently mounted on either machine (apart from a temporary mount to install MySQL), because the cluster management software will handle that. Make sure the replicated file system is only ever mounted on the active primary server.

4. Install MySQL

For installing MySQL you can also refer to the post 《MySQL之——CentOS6.5 編譯安裝MySQL5.6.16》.

1. Install MySQL on node1 and node2:

yum install mysql* -y

2. Stop the MySQL service on both node1 and node2:

[root@node1~]# service mysql stop
Shutting down MySQL.        [  OK  ]

3. On both node1 and node2, create the database directory and change its owner to mysql:

[root@host1 /]# mkdir -p /mysql/data
[root@host1 /]# chown -R mysql:mysql /mysql

4. With MySQL stopped, temporarily mount the DRBD file system on the primary node (node1):

[root@node1 ~]# mount /dev/drbd0 /mysql/

5. On both node1 and node2, edit my.cnf and add the new data path under [mysqld]:

datadir=/mysql/data
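In context, the [mysqld] section would look roughly like this (only the datadir line comes from this guide; the socket and user settings are common defaults shown here as assumptions, to be adjusted to your installation):

```ini
[mysqld]
# Data now lives on the DRBD-backed mount
datadir=/mysql/data
# Typical companion settings (assumed, not from the original)
socket=/var/lib/mysql/mysql.sock
user=mysql
```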

6. Copy all files and directories from the default data path to the new directory (node1 only; node2 does not need this):

[root@host1 mysql]# cd /var/lib/mysql
[root@host1 mysql]# cp -R * /mysql/data/

On both node1 and node2, make sure the copied files end up owned by mysql; changing the owner of the /mysql directory recursively is enough:

[root@host1 mysql]# chown -R mysql:mysql /mysql

7. Start MySQL on node1 and do a login test:

[root@host1 mysql]# mysql

8. Unmount the DRBD file system on node1:

[root@node1 ~]# umount /mysql/
[root@node1 ~]# drbdadm secondary dbcluster

9. Mount the DRBD file system on node2:

[root@node2 ~]# drbdadm primary dbcluster
[root@node2 ~]# mount /dev/drbd0 /mysql/

10. Configure MySQL on node2 and test it:

[root@node1 ~]# scp /etc/my.cnf node2:/etc/my.cnf
[root@node2 ~]# chown mysql /etc/my.cnf
[root@node2 ~]# chmod 644 /etc/my.cnf

11. Do a MySQL login test on node2:

[root@node2 ~]# mysql

12. Unmount the DRBD file system on node2 and hand it over to the cluster management software, Pacemaker:

[root@node2~]# umount /mysql/
[root@node2~]# drbdadm secondary dbcluster
[root@node2~]# drbd-overview
  0:dbcluster/0  Connected Secondary/Secondary UpToDate/UpToDate C r-----
[root@node2~]#

5. Install and Configure Corosync and Pacemaker (install on both node1 and node2)

Install Pacemaker's required dependencies:

[root@node1~]# yum -y install automake autoconf libtool-ltdl-devel pkgconfig python glib2-devel libxml2-devel \
  libxslt-devel python-devel gcc-c++ bzip2-devel gnutls-devel pam-devel libqb-devel

Install the Cluster Stack dependencies:

[root@node1~]# yum -y install clusterlib-devel corosynclib-devel

Install Pacemaker's optional dependencies:

[root@node1~]# yum -y install ncurses-devel openssl-devel cluster-glue-libs-devel docbook-style-xsl

Install Pacemaker:

[root@node1~]# yum -y install pacemaker

Install crmsh:

[root@node1~]# wget http://www.php.cn/:/ha-clustering:/Stable/CentOS_CentOS-6/network:ha-clustering:Stable.repo
[root@node1~]# yum -y install crmsh

1. Configure corosync. Generate the Corosync key used for secure communication between the nodes:

[root@node1~]# corosync-keygen

Copy authkey to node2 (keep its permissions set to 400):

[root@node~]# scp /etc/corosync/authkey node2:/etc/corosync/

2. Create corosync.conf from the shipped example:

[root@node1~]# cp /etc/corosync/corosync.conf.example /etc/corosync/corosync.conf

Edit /etc/corosync/corosync.conf:

# Please read the corosync.conf.5 manual page
compatibility: whitetank
aisexec {
        user: root
        group: root
}
totem {
        version: 2
        secauth: off
        threads: 0
        interface {
                ringnumber: 0
                bindnetaddr: 192.168.1.0
                mcastaddr: 226.94.1.1
                mcastport: 4000
                ttl: 1
        }
}
logging {
        fileline: off
        to_stderr: no
        to_logfile: yes
        to_syslog: yes
        logfile: /var/log/cluster/corosync.log
        debug: off
        timestamp: on
        logger_subsys {
                subsys: AMF
                debug: off
        }
}
amf {
        mode: disabled
}

Create and edit /etc/corosync/service.d/pcmk to add the "pacemaker" service:

[root@node1~]# cat /etc/corosync/service.d/pcmk
service {
        # Load the Pacemaker Cluster Resource Manager
        name: pacemaker
        ver: 1
}

Copy both configuration files to the other node:

[root@node1]# scp /etc/corosync/corosync.conf node2:/etc/corosync/corosync.conf
[root@node1]# scp /etc/corosync/service.d/pcmk node2:/etc/corosync/service.d/pcmk

3. Start corosync and Pacemaker. Start corosync on each of the two nodes and check it:

[root@node1]# /etc/init.d/corosync start
Starting Corosync Cluster Engine (corosync):               [  OK  ]
[root@node1~]# corosync-cfgtool -s
Printing ring status.
Local node ID -1123964736
RING ID 0
id = 192.168.1.189
status = ring 0 active with no faults
[root@node2]# /etc/init.d/corosync start
Starting Corosync Cluster Engine (corosync):               [  OK  ]

Start Pacemaker on both nodes:

[root@node1~]# /etc/init.d/pacemaker start
Starting Pacemaker Cluster Manager:                        [  OK  ]
[root@node2~]# /etc/init.d/pacemaker start
Starting Pacemaker Cluster Manager:

6. Resource Configuration

Configure the resources and constraints, starting with the default properties.
View the existing configuration:

[root@node1 ~]# crm configure show

Disable STONITH to suppress STONITH errors:

[root@node1 ~]# crm configure property stonith-enabled=false
[root@node1 ~]# crm_verify -L

Make the cluster ignore quorum:

[root@node1~]# crm configure property no-quorum-policy=ignore

Prevent resources from moving back after a node recovers:

[root@node1~]# crm configure rsc_defaults resource-stickiness=100

Set the default operation timeout:

[root@node1~]# crm configure property default-action-timeout="180s"

Set whether a start failure is fatal by default:

[root@node1~]# crm configure property start-failure-is-fatal="false"

Configure the DRBD resource. Stop DRBD before configuring:

[root@node1~]# /etc/init.d/drbd stop
[root@node2~]# /etc/init.d/drbd stop

Configure the DRBD resource:

[root@node1~]# crm configure
crm(live)configure# primitive p_drbd_mysql ocf:linbit:drbd params drbd_resource="dbcluster" op monitor interval="15s" \
  op start timeout="240s" op stop timeout="100s"

Configure the DRBD master/slave relationship (only one master node is allowed):

crm(live)configure# ms ms_drbd_mysql p_drbd_mysql meta master-max="1" master-node-max="1" \
  clone-max="2" clone-node-max="1" notify="true"

Configure the file system resource and define the mount point:

crm(live)configure# primitive p_fs_mysql ocf:heartbeat:Filesystem params device="/dev/drbd0" directory="/mysql/" fstype="ext4"

Configure the VIP resource:

crm(live)configure# primitive p_ip_mysql ocf:heartbeat:IPaddr2 params ip="192.168.1.39" cidr_netmask="24" \
  op monitor interval="30s"

Configure the MySQL resource:

crm(live)configure# primitive p_mysql lsb:mysql op monitor interval="20s" timeout="30s" \
  op start interval="0" timeout="180s" op stop interval="0" timeout="240s"

7. Resource Group and Constraints

A "group" ensures that DRBD, MySQL and the VIP stay together on the same (master) node, and fixes the order in which the resources start and stop.

Start order: p_fs_mysql -> p_ip_mysql -> p_mysql
Stop order:  p_mysql -> p_ip_mysql -> p_fs_mysql

crm(live)configure# group g_mysql p_fs_mysql p_ip_mysql p_mysql

The group g_mysql must always run on the master node:

crm(live)configure# colocation c_mysql_on_drbd inf: g_mysql ms_drbd_mysql:Master

MySQL must always start after DRBD is promoted to master:

crm(live)configure# order o_drbd_before_mysql inf: ms_drbd_mysql:promote g_mysql:start

Verify the configuration and commit:

crm(live)configure# verify
crm(live)configure# commit
crm(live)configure# quit

Check the cluster status and test failover. Status check:

[root@node1 mysql]# crm_mon -1r

Failover test: put node1 into standby mode:

[root@node1 ~]# crm node standby

After a few minutes, check the cluster status again (if the failover succeeded, the resources are now running on node2):

[root@node1 ~]# crm status

Bring node1 back online:

[root@node1 mysql]# crm node online
[root@node1 mysql]# crm status
