Atlas + LVS + keepalived + MySQL multi-master replication load balancing deployment document
2015-8-31 (V1.0)

一、Deployment background

There are many ways to load-balance MySQL, such as HAProxy (introduced in a previous post); an even more efficient option is LVS. Below is a setup that combines a three-node Percona XtraDB Cluster (multi-master replication) with LVS load balancing of Atlas. Strictly speaking, Atlas is not needed here: Atlas provides connection pooling and read/write splitting, and a multi-master architecture needs no read/write splitting (it would be needed on top of plain MySQL replication). The experiment was done anyway to test whether Atlas itself can be load-balanced by LVS.
1. Node planning

1.1 MySQL data nodes: db169, db172 and db173, the three XtraDB Cluster nodes.
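This document assumes the three-node XtraDB Cluster is already up before the load-balancing layer is added. For context only, a minimal sketch of the wsrep settings such a cluster typically uses; the provider path, cluster name and SST method below are assumptions, not taken from this document:

```
[mysqld]
# Assumed values for illustration; adjust to the actual environment.
wsrep_provider = /usr/lib64/libgalera_smm.so
wsrep_cluster_address = gcomm://192.168.1.169,192.168.1.172,192.168.1.173
wsrep_cluster_name = pxc_cluster
wsrep_sst_method = xtrabackup-v2
binlog_format = ROW
default_storage_engine = InnoDB
innodb_autoinc_lock_mode = 2
```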
1.2 keepalived nodes: db162 and db163; the virtual IP is 192.168.1.201.
HAProxy node (installed only to compare its performance against LVS): db169 (deployed on one of the XtraDB Cluster nodes).

1.3 Atlas nodes: co-located with the XtraDB Cluster nodes, so also three nodes.
Note: Atlas and MySQL must be deployed on the same node; otherwise LVS DR-mode load balancing cannot be used.

1.4 Client test node: db55.
IP addresses are 192.168.1.*; each node is named "db" plus the last octet of its IP.

2. Install LVS and keepalived (on db162 and db163)

2.1 Install the dependency packages:

yum -y install kernel-devel make gcc openssl-devel libnl*

Create a symlink to the Linux kernel source; the version must match the running kernel (check with uname -r):

[root@db163 ~]# ln -s /usr/src/kernels/2.6.32-358.el6.x86_64/ /usr/src/linux

Install ipvsadm and keepalived:

[root@db162 ~]# yum install ipvsadm
[root@db162 ~]# yum install keepalived
[root@db163 ~]# yum install ipvsadm
[root@db163 ~]# yum install keepalived

2.2 Configure keepalived. Note that LVS needs no separate configuration; it is configured entirely through keepalived.

[root@db162 ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id MySQL_LB1
}
vrrp_sync_group VSG {
    group {
        MySQL_Loadblancing
    }
}
vrrp_instance MySQL_Loadblancing {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 101
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    virtual_ipaddress {
        192.168.1.201
    }
}
virtual_server 192.168.1.201 1234 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    # nat_mask 255.255.255.0
    # persistence_timeout 50
    protocol TCP
    real_server 192.168.1.169 1234 {
        weight 3
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 1234
        }
    }
    real_server 192.168.1.172 1234 {
        weight 3
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 1234
        }
    }
    real_server 192.168.1.173 1234 {
        weight 3
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 1234
        }
    }
}

The keepalived configuration on the backup node:

[root@db163 ~]# cat /etc/keepalived/keepalived.conf
! 
Configuration File for keepalived
global_defs {
    router_id MySQL_LB2
}
vrrp_sync_group VSG {
    group {
        MySQL_Loadblancing
    }
}
vrrp_instance MySQL_Loadblancing {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    virtual_ipaddress {
        192.168.1.201
    }
}
virtual_server 192.168.1.201 1234 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    # nat_mask 255.255.255.0
    # persistence_timeout 50
    protocol TCP
    real_server 192.168.1.169 1234 {
        weight 3
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 1234
        }
    }
    real_server 192.168.1.172 1234 {
        weight 3
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 1234
        }
    }
    real_server 192.168.1.173 1234 {
        weight 3
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 1234
        }
    }
}

3. Configuration on the real servers (data nodes)

Install the following script on each of the three data nodes (db169, db172, db173):

[root@db172 ~]# cat /etc/init.d/lvsdr.sh
#!/bin/bash
# Bind the VIP on lo:0 and suppress ARP replies for it, as required by LVS DR mode.
VIP=192.168.1.201
. /etc/rc.d/init.d/functions
case "$1" in
start)
    /sbin/ifconfig lo down
    /sbin/ifconfig lo up
    echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
    echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
    echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
    echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce
    /sbin/sysctl -p >/dev/null 2>&1
    /sbin/ifconfig lo:0 $VIP netmask 255.255.255.255 up
    /sbin/route add -host $VIP dev lo:0
    echo "LVS-DR real server started successfully."
    ;;
stop)
    /sbin/ifconfig lo:0 down
    /sbin/route del $VIP >/dev/null 2>&1
    echo "0" >/proc/sys/net/ipv4/conf/lo/arp_ignore
    echo "0" >/proc/sys/net/ipv4/conf/lo/arp_announce
    echo "0" >/proc/sys/net/ipv4/conf/all/arp_ignore
    echo "0" >/proc/sys/net/ipv4/conf/all/arp_announce
    echo "LVS-DR real server stopped."
    ;;
status)
    isLoOn=`/sbin/ifconfig lo:0 | grep "$VIP"`
    isRoOn=`/bin/netstat -rn | grep "$VIP"`
    if [ "$isLoOn" == "" -a "$isRoOn" == "" ]; then
        echo "LVS-DR real server is not running."
        exit 3
    else
        echo "LVS-DR real server is running."
    fi
    ;;
*)
    echo "Usage: $0 {start|stop|status}"
    exit 1
esac
exit 0

Make the script executable:

chmod +x /etc/init.d/lvsdr.sh

Enable it at boot:

echo 
"/etc/init.d/lvsdr.sh start" >> /etc/rc.local

4. Install Atlas on the three data nodes (db169, db172, db173)

Download Atlas and install it with yum:

yum install -y Atlas-2.1.el6.x86_64.rpm

Configure Atlas:

[root@db172 ~]# cat /usr/local/mysql-proxy/conf/test.cnf
[mysql-proxy]
# Items whose lines start with # are optional.

# Username for the admin interface
admin-username = admin

# Password for the admin interface
admin-password = 123456

# IP and port of the MySQL master(s) Atlas connects to; multiple entries are comma-separated
proxy-backend-addresses = 192.168.1.173:3306

# IP and port of the MySQL slaves Atlas connects to; the number after @ is the load-balancing weight (default 1 if omitted); multiple entries are comma-separated
proxy-read-only-backend-addresses = 192.168.1.169:3306@1,192.168.1.172:3306@1

# Usernames and their encrypted MySQL passwords; encrypt passwords with the encrypt program under PREFIX/bin. Replace the examples below with your own MySQL usernames and encrypted passwords!
pwds = usr_test:/iZxz+0GRoA=, usr_test2:/iZxz+0GRoA=, root:/iZxz+0GRoA=

# Run mode: true = daemon, false = foreground. Use false for development/debugging, true in production
daemon = true

# With keepalive = true Atlas runs two processes, a monitor and a worker; the monitor restarts the worker if it exits unexpectedly. With false there is only a worker. Use false for development/debugging, true in production
keepalive = true

# Number of worker threads; has a large impact on Atlas performance, tune as appropriate
event-threads = 10

# Log level: message, warning, critical, error or debug
log-level = message

# Log directory
log-path = /usr/local/mysql-proxy/log

# SQL log switch: OFF (no SQL log), ON (log SQL), REALTIME (log SQL and flush to disk in real time); default OFF
#sql-log = OFF

# Instance name, used to distinguish multiple Atlas instances on one machine
#instance = test

# IP and port of the Atlas working interface
proxy-address = 0.0.0.0:1234

# IP and port of the Atlas admin interface
admin-address = 0.0.0.0:2345

# Sharding setup. In this example person is the database, mt the table, id the sharding column and 3 the number of child tables; multiple entries are comma-separated. Leave unset if not sharding
#tables = person.mt.id.3

# Default character set; once set, clients no longer need to issue SET NAMES
#charset = utf8

# Client IPs allowed to connect to Atlas, exact IPs or IP prefixes, comma-separated. If unset, all IPs may connect; otherwise only the listed ones
#client-ips = 127.0.0.1, 192.168.1

# IP of the physical NIC (not the virtual IP) of the LVS in front of Atlas. Required if there is an LVS and client-ips is set; otherwise optional
#lvs-ips = 192.168.1.1

5. Start the data nodes (on each of db169, db172, db173)

5.1 Start the MySQL database.
5.2 Start Atlas: /usr/local/mysql-proxy/bin/mysql-proxyd test start
5.3 Start the LVS real-server script: /etc/init.d/lvsdr.sh start

6. Start keepalived (on db162 and db163)

/etc/init.d/keepalived start

7. Verification

After keepalived starts, the master node is db162. Check that the VIP is up:

[root@db162 ~]# ip ad
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 
00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:1d:7d:a8:40:d9 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.162/24 brd 192.168.1.255 scope global eth0
    inet 192.168.1.201/32 scope global eth0
    inet6 fe80::21d:7dff:fea8:40d9/64 scope link
       valid_lft forever preferred_lft forever

Verify that nothing on this node listens on port 1234:

[root@db162 ~]# netstat -anp | grep 1234
(no output)

From 192.168.1.55 (db55), connect to 192.168.1.201. Note that the director itself has no listener on port 1234; incoming connections are routed to the real data nodes.

[root@db55 ~]# mysql -h 192.168.1.201 -P1234 -uroot -p123456
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 1871354501
Server version: 5.0.81-log Percona XtraDB Cluster binary (GPL) 5.6.19-25.6, Revision 824, wsrep_25.6.r4111

Copyright (c) 2000, 2014, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. 
Type '\c' to clear the current input statement.

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| dd                 |
| mcldb              |
| mysql              |
| mysqlslap          |
| performance_schema |
| test               |
+--------------------+
7 rows in set (0.00 sec)

The data is correct.

8. Monitoring LVS

Run a sysbench stress test, then watch how connections are distributed:

[root@topdb soft]# sysbench --test=oltp --num-threads=100 --max-requests=100000 --oltp-table-size=1000000 --oltp-test-mode=nontrx --db-driver=mysql --mysql-db=dd --mysql-host=192.168.1.201 --mysql-port=1234 --mysql-user=root --mysql-password=123456 --oltp-nontrx-mode=select --oltp-read-only=on --db-ps-mode=disable run
sysbench 0.4.12:  multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 100

Doing OLTP test.
Running non-transactional test
Doing read-only test
Using Special distribution (12 iterations, 1 pct of values are returned in 75 pct cases)
Using "BEGIN" for starting transactions
Using auto_inc on the id column
Maximum number of requests for OLTP test is limited to 100000
Threads started!
Done.

OLTP test statistics:
    queries performed:
        read:                            100033
        write:                           0
        other:                           0
        total:                           100033
    transactions:                        100033 (13416.81 per sec.)
    deadlocks:                           0      (0.00 per sec.)
    read/write requests:                 100033 (13416.81 per sec.)
    other operations:                    0      (0.00 per sec.)

Test execution summary:
    total time:                          7.4558s
    total number of events:              100033
    total time taken by event execution: 744.5136
    per-request statistics:
        min:                             0.71ms
        avg:                             7.44ms
        max:                             407.23ms
        approx. 95 percentile:           28.56ms

Threads fairness:
    events (avg/stddev):           1000.3300/831.91
    execution time (avg/stddev):   7.4451/0.00

[root@db162 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.1.201:1234 rr
  -> 192.168.1.169:1234           Route   3      0          33
  -> 192.168.1.172:1234           Route   3      0          34
  -> 192.168.1.173:1234           Route   3      0          34

The load is balanced across all three nodes.
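The nearly equal InActConn counts above are exactly what the rr (round-robin) scheduler produces: each new connection is handed to the next real server in turn. A minimal shell sketch of that selection logic (illustrative only, not actual LVS code; the pick_server helper is a hypothetical name):

```shell
# pick_server N: print the real server the Nth incoming connection is
# routed to under plain round-robin over the three data nodes.
# (Hypothetical helper for illustration; not part of LVS or this setup.)
pick_server() {
    servers="192.168.1.169 192.168.1.172 192.168.1.173"
    idx=$(( ($1 - 1) % 3 + 1 ))          # cycle 1 -> 2 -> 3 -> 1 ...
    echo "$servers" | cut -d' ' -f"$idx"
}

pick_server 1   # 192.168.1.169
pick_server 2   # 192.168.1.172
pick_server 3   # 192.168.1.173
pick_server 4   # 192.168.1.169
```

Note that rr ignores the weight values; with all weights equal to 3 that makes no difference here, but unequal weights would require lb_algo wrr instead.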