Mysql Scalability(4)Amoeba - Separate Reads and Writes
Separate Reads and Writes by Rules
Comment out the writePool/readPool properties in the MysqlQueryRouter configuration:
<property name="sqlFunctionFile">${amoeba.home}/conf/functionMap.xml</property><property name="LRUMapSize">1500</property><property name="defaultPool">master1</property><!--<property name="writePool">master1</property><property name="readPool">slavePool1</property>--><property name="needParse">true</property>
A very simple rule in conf/rule.xml:
<amoeba:rule xmlns:amoeba="http://amoeba.meidusa.com/">
    <tableRule name="message" schema="sillycat" readPools="slavePool1" writePools="master1"/>
</amoeba:rule>
This means that reads and writes on the table message in the database sillycat are split: reads go to slavePool1 and writes go to master1, while all other tables go through master1, the defaultPool.
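With this rule in place, I would expect routing roughly like this (a sketch; the example values are made up):

-- read: routed to the readPools (slavePool1)
select * from message;
-- write: routed to the writePools (master1)
insert into message(id, title, message) values (1, 'hello', 'world');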
Sharding Across 4 Machines
4 machines, 2 masters, 2 slaves
Create Table
create table message(id int primary key, title char(255), message char(255) );
create table EVENTS (ID INT primary key, EVENT_REQUEST char(255), EVENT_RESPONSE char(255));
Sharding rules based on the message and EVENTS tables did not work for me, so in the end I used a table named branduser instead.
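The branduser DDL is not listed here; a minimal sketch inferred from the insert statements in the router log below (the column types are assumptions):

create table branduser(id int primary key, username char(255), age int);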
This rule works.
<tableRule name="branduser"schema="sillycat"defaultPools="slavePool1,slavePool2"> <rule name="rule1"> <parameters>id</parameters> <expression><![CDATA[ id <= 100 ]]> </expression> <defaultPools>master1</defaultPools> <readPools>slavePool1</readPools> <writePools>master1</writePools> </rule> <rule name="rule2"> <parameters>id</parameters> <expression><![CDATA[ id > 100 ]]> </expression> <defaultPools>master2</defaultPools> <readPools>slavePool2</readPools> <writePools>master2</writePools> </rule> </tableRule>
Log output from router.log:
2014-07-29 18:29:27,702 DEBUG route.AbstractQueryRouter - query={sql=[select * from branduser], parameter=[]}, no Column rule, using table:sillycat.branduser default rules:[slavePool1, slavePool2], route to pools:[slavePool1, slavePool2]
2014-07-29 18:30:22,282 DEBUG route.AbstractQueryRouter - query={sql=[insert into branduser(id, username, age) values (15,"manager", 32)], parameter=[]}, matched table:sillycat.branduser, rule:rule1, route to pools:[master2]
2014-07-29 18:30:47,326 DEBUG route.AbstractQueryRouter - query={sql=[select * from branduser], parameter=[]}, no Column rule, using table:sillycat.branduser default rules:[slavePool1, slavePool2], route to pools:[slavePool1, slavePool2]
2014-07-29 18:31:07,936 DEBUG route.AbstractQueryRouter - query={sql=[insert into branduser(id, username, age) values (5,"developer", 28)], parameter=[]}, matched table:sillycat.branduser, rule:rule1, route to pools:[master1]
This rule also works.
<tableRule name="branduser"schema="sillycat"defaultPools="slavePool1,slavePool2"> <rule name="rule1"ruleResult="POOLNAME"> <parameters>id</parameters> <expression><![CDATA[ var hashid = abs(id) % 16; case hashid when range(0,8,1,1) then (isReadStatement ? 'slavePool1' : 'master1'); when range(9,15,1,1) then (isReadStatement ? 'slavePool2' : 'master2'); END CASE; ]]></expression> </rule> </tableRule>
I run an Amoeba proxy on each of the 4 machines, so the proxy endpoints are:
ubuntu-master:8066
ubuntu-client1:8066
ubuntu-client2:8066
ubuntu-client3:8066
Then I use HAProxy, running on the machine ubuntu-client4, to load balance these proxies at the TCP level. The configuration file is as follows:
#logging options
global
log 127.0.0.1 local0 info
maxconn 5120
chroot /opt/haproxy
uid 99
gid 99
daemon
quiet
nbproc 2
pidfile /opt/haproxy/haproxy.pid
#load balancing defaults
defaults
log global
#use layer-4 (TCP) proxy mode; "mode http" is layer-7 proxy mode
mode tcp
#if you set mode to tcp, then you must change httplog into tcplog
option tcplog
option dontlognull
retries 3
option redispatch
maxconn 2000
contimeout 5s
clitimeout 120s
srvtimeout 120s
#front-end IP for consumers and producers
listen rabbitmq_local_cluster ubuntu-client4:8066
#configure TCP mode
mode tcp
#balance url_param userid
#balance url_param session_id check_post 64
#balance hdr(User-Agent)
#balance hdr(host)
#balance hdr(Host) use_domain_only
#balance rdp-cookie
#balance leastconn
#balance source //ip
#simple round robin
balance roundrobin
#amoeba cluster node configuration
server amoeba1 ubuntu-master:8066 check inter 5000 rise 2 fall 2
server amoeba2 ubuntu-client1:8066 check inter 5000 rise 2 fall 2
server amoeba3 ubuntu-client2:8066 check inter 5000 rise 2 fall 2
server amoeba4 ubuntu-client3:8066 check inter 5000 rise 2 fall 2
#configure the haproxy web monitor to view statistics
listen private_monitoring :80
mode http
option httplog
stats enable
#set the haproxy monitor URL to http://ubuntu-client4/haproxy-status
stats uri /haproxy-status
stats refresh 5s
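After saving the file, the configuration can be checked and HAProxy started; a sketch, assuming the file is saved as /opt/haproxy/haproxy.cfg (the path is an assumption):

haproxy -f /opt/haproxy/haproxy.cfg -c     # only check the configuration syntax
haproxy -f /opt/haproxy/haproxy.cfg        # start; it daemonizes per the global section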
Then I can connect to the MySQL server through HAProxy as follows:
mysql -h ubuntu-client4 -P 8066 -u root -ppassword
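Once connected, a quick sanity check that queries really go through Amoeba to the shards (assuming the branduser table exists on all backends):

use sillycat;
select * from branduser;

The routing of each statement can then be verified in router.log on the Amoeba side, as shown above.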
References:
http://sillycat.iteye.com/blog/2098033
http://docs.hexnova.com/amoeba/rule-configuration.html
http://docs.hexnova.com/amoeba/rw-splitting.html