A Small Exercise in Apache ShardingSphere Proxy Load Balancing

We start 4 MySQL instances in containers; the docker-compose.yml looks like this:

version: '3.7'
services:
    mysql8_1:
        image: "mysql:8.0.19"
        container_name: mysql8_1
        ports:
            - "33080:3306"
        environment:
            MYSQL_ROOT_PASSWORD: 12345678

    mysql8_2:
        image: "mysql:8.0.19"
        container_name: mysql8_2
        ports:
            - "33081:3306"
        environment:
            MYSQL_ROOT_PASSWORD: 12345678

    mysql8_3:
        image: "mysql:8.0.19"
        container_name: mysql8_3
        ports:
            - "33082:3306"
        environment:
            MYSQL_ROOT_PASSWORD: 12345678

    mysql8_4:
        image: "mysql:8.0.19"
        container_name: mysql8_4
        ports:
            - "33083:3306"
        environment:
            MYSQL_ROOT_PASSWORD: 12345678

Then set up master-slave replication for the databases by following my posts 《mysql5-主从同步配置》 (MySQL 5 master-slave replication) and 《mysql8多主一从配置》 (MySQL 8 multi-master single-slave replication). Once that is done, continue below. If you already have a database cluster with master-slave replication in place, you can skip the steps above.

Starting ShardingProxy with Docker

1. Pull the sharding-proxy Docker image:

docker pull apache/sharding-proxy:4.0.1

2. Inside the container the configuration files live at /opt/sharding-proxy/conf, so when starting the container you can map this path to a directory on the host, which makes editing the configuration easier:

docker run -d -v /${your_work_dir}/conf:/opt/sharding-proxy/conf -e PORT=3308 -p13308:3308 --name shardingproxy apache/sharding-proxy:4.0.1

You can pass JVM options through the -e JVM_OPTS="" environment variable, and map /opt/sharding-proxy/ext-lib to a host directory to make adding extension jars easier.

On my machine, the command I ran was:

docker run --name shardingProxy -d -v /home/yangyan/conf/sharding-proxy:/opt/sharding-proxy/conf -v /home/yangyan/conf/sharding-proxy/ext-lib:/opt/sharding-proxy/ext-lib -e PORT=3308 -p13308:3308 apache/sharding-proxy:4.0.1

After the container started, the logs showed it had failed to come up: the MySQL JDBC driver jar was missing. We need to copy the jar into the /opt/sharding-proxy/lib/ directory. Looking at the service's start.sh script, the paths added to the classpath include lib but not ext-lib, so I put the jar under lib; I specifically tried moving it to ext-lib, and the driver could not be found there.

Then add the configuration file server.yaml to the ShardingProxy conf directory:

authentication:
    users:
      root:
        password: root

props:
  executor.size: 16
  sql.show: true

Start simple: configure a master-slave cluster

Add config-test.yaml to the ShardingProxy conf directory:

schemaName: master_slave_db

dataSources:
    ds_master1:
        url: jdbc:mysql://mysql8_1:3306/test?serverTimezone=UTC&characterEncoding=utf8&useUnicode=true&useSSL=false
        username: root
        password: 12345678
    ds_slave1:
        url: jdbc:mysql://mysql8_2:3306/test?serverTimezone=UTC&characterEncoding=utf8&useUnicode=true&useSSL=false
        username: root
        password: 12345678
    ds_slave2:
        url: jdbc:mysql://mysql8_3:3306/test?serverTimezone=UTC&characterEncoding=utf8&useUnicode=true&useSSL=false
        username: root
        password: 12345678

masterSlaveRule:
    name: ds_ms
    masterDataSourceName: ds_master1
    slaveDataSourceNames:
        - ds_slave1
        - ds_slave2
    loadBalanceAlgorithmType: ROUND_ROBIN

The container failed to start, and I ran into two problems:

Problem 1: the MySQL JDBC jar I was using is compiled for Java 8, but the Java environment inside the ShardingProxy container is 1.7, so the MySQL JDBC driver could not be loaded. The container's Java environment therefore needs to be replaced with Java 8.

You can copy a locally downloaded Java 8 directory into the container with the docker cp command:

docker cp ~/Downloads/java-1.8.0-openjdk-amd64 08343b40ab39:/usr/lib/jvm/

For wiring up the Java paths, the normal approach would be the update-java-alternatives command, but I could never get it to run successfully here, so I created the links one by one:

update-alternatives  --install /usr/bin/idlj idlj /usr/lib/jvm/java-1.8.0-openjdk-amd64/bin/idlj 999
update-alternatives  --install /usr/bin/wsimport wsimport /usr/lib/jvm/java-1.8.0-openjdk-amd64/bin/wsimport 999
update-alternatives  --install /usr/bin/rmic rmic /usr/lib/jvm/java-1.8.0-openjdk-amd64/bin/rmic 999
update-alternatives  --install /usr/bin/jinfo jinfo /usr/lib/jvm/java-1.8.0-openjdk-amd64/bin/jinfo 999
update-alternatives  --install /usr/bin/jsadebugd jsadebugd /usr/lib/jvm/java-1.8.0-openjdk-amd64/bin/jsadebugd 999
update-alternatives  --install /usr/bin/native2ascii native2ascii /usr/lib/jvm/java-1.8.0-openjdk-amd64/bin/native2ascii 999
update-alternatives  --install /usr/bin/jstat jstat /usr/lib/jvm/java-1.8.0-openjdk-amd64/bin/jstat 999
update-alternatives  --install /usr/bin/javac javac /usr/lib/jvm/java-1.8.0-openjdk-amd64/bin/javac 999
update-alternatives  --install /usr/bin/javah javah /usr/lib/jvm/java-1.8.0-openjdk-amd64/bin/javah 999
update-alternatives  --install /usr/bin/jps jps /usr/lib/jvm/java-1.8.0-openjdk-amd64/bin/jps 999
update-alternatives  --install /usr/bin/jstack jstack /usr/lib/jvm/java-1.8.0-openjdk-amd64/bin/jstack 999
update-alternatives  --install /usr/bin/jrunscript jrunscript /usr/lib/jvm/java-1.8.0-openjdk-amd64/bin/jrunscript 999
update-alternatives  --install /usr/bin/javadoc javadoc /usr/lib/jvm/java-1.8.0-openjdk-amd64/bin/javadoc 999
update-alternatives  --install /usr/bin/jhat jhat /usr/lib/jvm/java-1.8.0-openjdk-amd64/bin/jhat 999
update-alternatives  --install /usr/bin/javap javap /usr/lib/jvm/java-1.8.0-openjdk-amd64/bin/javap 999
update-alternatives  --install /usr/bin/jconsole jconsole /usr/lib/jvm/java-1.8.0-openjdk-amd64/bin/jconsole 999
update-alternatives  --install /usr/bin/jar jar /usr/lib/jvm/java-1.8.0-openjdk-amd64/bin/jar 999
update-alternatives  --install /usr/bin/xjc xjc /usr/lib/jvm/java-1.8.0-openjdk-amd64/bin/xjc 999
update-alternatives  --install /usr/bin/schemagen schemagen /usr/lib/jvm/java-1.8.0-openjdk-amd64/bin/schemagen 999
update-alternatives  --install /usr/bin/extcheck extcheck /usr/lib/jvm/java-1.8.0-openjdk-amd64/bin/extcheck 999
update-alternatives  --install /usr/bin/jmap jmap /usr/lib/jvm/java-1.8.0-openjdk-amd64/bin/jmap 999
update-alternatives  --install /usr/bin/appletviewer appletviewer /usr/lib/jvm/java-1.8.0-openjdk-amd64/bin/appletviewer 999
update-alternatives  --install /usr/bin/jstatd jstatd /usr/lib/jvm/java-1.8.0-openjdk-amd64/bin/jstatd 999
update-alternatives  --install /usr/bin/jdb jdb /usr/lib/jvm/java-1.8.0-openjdk-amd64/bin/jdb 999
update-alternatives  --install /usr/bin/serialver serialver /usr/lib/jvm/java-1.8.0-openjdk-amd64/bin/serialver 999
update-alternatives  --install /usr/bin/wsgen wsgen /usr/lib/jvm/java-1.8.0-openjdk-amd64/bin/wsgen 999
update-alternatives  --install /usr/bin/jcmd jcmd /usr/lib/jvm/java-1.8.0-openjdk-amd64/bin/jcmd 999
update-alternatives  --install /usr/bin/jarsigner jarsigner /usr/lib/jvm/java-1.8.0-openjdk-amd64/bin/jarsigner 999

I didn't find which file sets the JAVA_HOME environment variable, but at this point running java and javac already executes the binaries under the JDK 8 bin directory. This isn't the focus of this exercise; for production, just make sure you never end up with a Java version that is too old. You can install Java 8 into your environment any other way you prefer.

Problem 2: since I created the MySQL cluster from its own docker-compose file and started the ShardingProxy container separately, the networks of the MySQL cluster and the ShardingProxy container are not connected, so the ShardingProxy container must be joined to the MySQL cluster's network:

docker network connect mysql-cluster_default shardingProxy

After running this, entering the ShardingProxy container and pinging the MySQL cluster machines works. With everything above in order, we tail the logs so we can watch what the cluster is doing:

docker logs -f shardingProxy

Then connect to ShardingProxy with a MySQL client:

 mycli -uroot -h 127.0.0.1 -P 13308  --database=master_slave_db

Then query the database and insert a row:

mysql root@127.0.0.1:master_slave_db> show tables;
+------------------+
| Tables_in_test   |
|------------------|
| user             |
+------------------+
1 row in set
Time: 0.009s
mysql root@127.0.0.1:master_slave_db> select * from user;
+------+-----------+
|   id | name      |
|------+-----------|
|    1 | xiaoming  |
|    2 | xiaohong  |
|    3 | xiaoling  |
|    4 | xiaolizi2 |
|    5 | xiaowang  |
+------+-----------+
5 rows in set
Time: 0.029s
mysql root@127.0.0.1:master_slave_db> insert into `user`(name) VALUES('dawang')
Query OK, 1 row affected
Time: 0.031s

The Docker log output looks like this:

[INFO ] 13:42:53.324 [main] c.a.icatch.provider.imp.AssemblerImp - USING: com.atomikos.icatch.oltp_retry_interval = 10000
[INFO ] 13:42:53.324 [main] c.a.icatch.provider.imp.AssemblerImp - USING: java.naming.provider.url = rmi://localhost:1099
[INFO ] 13:42:53.324 [main] c.a.icatch.provider.imp.AssemblerImp - USING: com.atomikos.icatch.force_shutdown_on_vm_exit = false
[INFO ] 13:42:53.324 [main] c.a.icatch.provider.imp.AssemblerImp - USING: com.atomikos.icatch.default_jta_timeout = 300000
[INFO ] 13:42:53.325 [main] c.a.icatch.provider.imp.AssemblerImp - Using default (local) logging and recovery...
[INFO ] 13:42:53.347 [main] c.a.d.xa.XATransactionalResource - resource-1-ds_master1: refreshed XAResource
[INFO ] 13:42:53.366 [main] c.a.d.xa.XATransactionalResource - resource-2-ds_slave1: refreshed XAResource
[INFO ] 13:42:53.536 [epollEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0x0a369872] REGISTERED
[INFO ] 13:42:53.537 [epollEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0x0a369872] BIND: 0.0.0.0/0.0.0.0:3308
[INFO ] 13:42:53.539 [epollEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0x0a369872, L:/0.0.0.0:3308] ACTIVE
tail: unrecognized file system type 0x794c7630 for ‘/opt/sharding-proxy/logs/stdout.log’. please report this to bug-coreutils@gnu.org. reverting to polling
[INFO ] 13:44:22.871 [epollEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0x0a369872, L:/0.0.0.0:3308] READ: [id: 0xa355d100, L:/172.17.0.5:3308 - R:/172.17.0.1:58402]
[INFO ] 13:44:22.873 [epollEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0x0a369872, L:/0.0.0.0:3308] READ COMPLETE
[INFO ] 13:44:26.566 [epollEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0x0a369872, L:/0.0.0.0:3308] READ: [id: 0xf8e8f9e6, L:/172.17.0.5:3308 - R:/172.17.0.1:58442]
[INFO ] 13:44:26.566 [epollEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0x0a369872, L:/0.0.0.0:3308] READ COMPLETE
[INFO ] 13:44:26.580 [epollEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0x0a369872, L:/0.0.0.0:3308] READ: [id: 0x820865ab, L:/172.17.0.5:3308 - R:/172.17.0.1:58446]
[INFO ] 13:44:26.580 [epollEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0x0a369872, L:/0.0.0.0:3308] READ COMPLETE
[INFO ] 13:44:27.209 [ShardingSphere-Command-2] ShardingSphere-SQL - Rule Type: master-slave
[INFO ] 13:44:27.209 [ShardingSphere-Command-2] ShardingSphere-SQL - SQL: SHOW TABLES ::: DataSources: ds_master1
[INFO ] 13:44:27.325 [ShardingSphere-Command-4] ShardingSphere-SQL - Rule Type: master-slave
[INFO ] 13:44:27.325 [ShardingSphere-Command-4] ShardingSphere-SQL - SQL: select TABLE_NAME, COLUMN_NAME from information_schema.columns
                                   where table_schema = 'None'
                                   order by table_name,ordinal_position ::: DataSources: ds_slave1
[INFO ] 13:48:26.249 [epollEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0x0a369872, L:/0.0.0.0:3308] READ: [id: 0xea03ccca, L:/172.17.0.5:3308 - R:/172.17.0.1:60580]
[INFO ] 13:48:26.250 [epollEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0x0a369872, L:/0.0.0.0:3308] READ COMPLETE
[INFO ] 13:48:29.239 [epollEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0x0a369872, L:/0.0.0.0:3308] READ: [id: 0x8c998abc, L:/172.17.0.5:3308 - R:/172.17.0.1:60610]
[INFO ] 13:48:29.239 [epollEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0x0a369872, L:/0.0.0.0:3308] READ COMPLETE
[INFO ] 13:48:30.602 [epollEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0x0a369872, L:/0.0.0.0:3308] READ: [id: 0xc666fe46, L:/172.17.0.5:3308 - R:/172.17.0.1:60622]
[INFO ] 13:48:30.603 [epollEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0x0a369872, L:/0.0.0.0:3308] READ COMPLETE
[INFO ] 13:48:31.742 [epollEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0x0a369872, L:/0.0.0.0:3308] READ: [id: 0x2f66e703, L:/172.17.0.5:3308 - R:/172.17.0.1:60636]
[INFO ] 13:48:31.743 [epollEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0x0a369872, L:/0.0.0.0:3308] READ COMPLETE
[INFO ] 13:48:31.759 [epollEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0x0a369872, L:/0.0.0.0:3308] READ: [id: 0x62df3a93, L:/172.17.0.5:3308 - R:/172.17.0.1:60640]
[INFO ] 13:48:31.759 [epollEventLoopGroup-2-1] i.n.handler.logging.LoggingHandler - [id: 0x0a369872, L:/0.0.0.0:3308] READ COMPLETE
[INFO ] 13:48:31.782 [ShardingSphere-Command-6] ShardingSphere-SQL - Rule Type: master-slave
[INFO ] 13:48:31.782 [ShardingSphere-Command-6] ShardingSphere-SQL - SQL: SHOW TABLES ::: DataSources: ds_master1
[INFO ] 13:48:31.800 [ShardingSphere-Command-7] ShardingSphere-SQL - Rule Type: master-slave
[INFO ] 13:48:31.800 [ShardingSphere-Command-7] ShardingSphere-SQL - SQL: SELECT @@VERSION ::: DataSources: ds_slave1
[INFO ] 13:48:31.816 [ShardingSphere-Command-9] ShardingSphere-SQL - Rule Type: master-slave
[INFO ] 13:48:31.816 [ShardingSphere-Command-9] ShardingSphere-SQL - SQL: SELECT @@VERSION_COMMENT ::: DataSources: ds_slave1
[INFO ] 13:48:31.818 [ShardingSphere-Command-8] ShardingSphere-SQL - Rule Type: master-slave
[INFO ] 13:48:31.818 [ShardingSphere-Command-8] ShardingSphere-SQL - SQL: select TABLE_NAME, COLUMN_NAME from information_schema.columns
                                   where table_schema = 'master_slave_db'
                                   order by table_name,ordinal_position ::: DataSources: ds_slave1
[INFO ] 13:48:31.940 [ShardingSphere-Command-10] ShardingSphere-SQL - Rule Type: master-slave
[INFO ] 13:48:31.940 [ShardingSphere-Command-10] ShardingSphere-SQL - SQL: SELECT CONCAT("'", user, "'@'",host,"'") FROM mysql.user ::: DataSources: ds_slave1
[INFO ] 13:48:31.979 [ShardingSphere-Command-11] ShardingSphere-SQL - Rule Type: master-slave
[INFO ] 13:48:31.979 [ShardingSphere-Command-11] ShardingSphere-SQL - SQL: SELECT ROUTINE_NAME FROM INFORMATION_SCHEMA.ROUTINES
   WHERE ROUTINE_TYPE="FUNCTION" AND ROUTINE_SCHEMA = "master_slave_db" ::: DataSources: ds_slave1
[INFO ] 13:48:32.010 [ShardingSphere-Command-12] ShardingSphere-SQL - Rule Type: master-slave
[INFO ] 13:48:32.010 [ShardingSphere-Command-12] ShardingSphere-SQL - SQL: SELECT name from mysql.help_topic WHERE name like "SHOW %" ::: DataSources: ds_slave1
[INFO ] 13:48:34.702 [ShardingSphere-Command-13] ShardingSphere-SQL - Rule Type: master-slave
[INFO ] 13:48:34.702 [ShardingSphere-Command-13] ShardingSphere-SQL - SQL: show tables ::: DataSources: ds_master1
[INFO ] 13:48:54.065 [ShardingSphere-Command-14] ShardingSphere-SQL - Rule Type: master-slave
[INFO ] 13:48:54.065 [ShardingSphere-Command-14] ShardingSphere-SQL - SQL: select * from user ::: DataSources: ds_slave1
[INFO ] 13:50:54.826 [ShardingSphere-Command-15] ShardingSphere-SQL - Rule Type: master-slave
[INFO ] 13:50:54.826 [ShardingSphere-Command-15] ShardingSphere-SQL - SQL: insert into `user`(name) VALUES('dawang') ::: DataSources: ds_master1
[INFO ] 14:04:00.439 [ShardingSphere-Command-0] ShardingSphere-SQL - Rule Type: master-slave
[INFO ] 14:04:00.439 [ShardingSphere-Command-0] ShardingSphere-SQL - SQL: select * from user ::: DataSources: ds_slave1
[INFO ] 14:04:08.809 [ShardingSphere-Command-1] ShardingSphere-SQL - Rule Type: master-slave
[INFO ] 14:04:08.809 [ShardingSphere-Command-1] ShardingSphere-SQL - SQL: select * from user ::: DataSources: ds_slave2
[INFO ] 14:05:15.457 [ShardingSphere-Command-2] ShardingSphere-SQL - Rule Type: master-slave
[INFO ] 14:05:15.457 [ShardingSphere-Command-2] ShardingSphere-SQL - SQL: select * from user ::: DataSources: ds_slave1
[INFO ] 14:05:15.833 [ShardingSphere-Command-3] ShardingSphere-SQL - Rule Type: master-slave
[INFO ] 14:05:15.833 [ShardingSphere-Command-3] ShardingSphere-SQL - SQL: select * from user ::: DataSources: ds_slave2
[INFO ] 14:05:16.270 [ShardingSphere-Command-4] ShardingSphere-SQL - Rule Type: master-slave
[INFO ] 14:05:16.270 [ShardingSphere-Command-4] ShardingSphere-SQL - SQL: select * from user ::: DataSources: ds_slave1
[INFO ] 14:05:16.851 [ShardingSphere-Command-5] ShardingSphere-SQL - Rule Type: master-slave
[INFO ] 14:05:16.851 [ShardingSphere-Command-5] ShardingSphere-SQL - SQL: select * from user ::: DataSources: ds_slave2
[INFO ] 14:05:17.151 [ShardingSphere-Command-6] ShardingSphere-SQL - Rule Type: master-slave
[INFO ] 14:05:17.151 [ShardingSphere-Command-6] ShardingSphere-SQL - SQL: select * from user ::: DataSources: ds_slave1

You can see that queries go to a slave while inserts go to the master. I then ran many more queries; every one of them hit a slave node, round-robining across the multiple slaves.
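The strict alternation between ds_slave1 and ds_slave2 in the logs is plain round-robin. A minimal sketch of the idea (my own illustration, not ShardingSphere's actual source):

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Round-robin selection over the slave data source names, mirroring the
// ds_slave1 / ds_slave2 alternation seen in the proxy logs above.
public class RoundRobinDemo {
    private final AtomicInteger count = new AtomicInteger(0);

    public String getDataSource(List<String> slaveNames) {
        // Each call advances the counter and wraps around the slave list.
        return slaveNames.get(Math.abs(count.getAndIncrement() % slaveNames.size()));
    }

    public static void main(String[] args) {
        RoundRobinDemo lb = new RoundRobinDemo();
        List<String> slaves = List.of("ds_slave1", "ds_slave2");
        for (int i = 0; i < 4; i++) {
            System.out.println(lb.getDataSource(slaves));
        }
        // prints ds_slave1, ds_slave2, ds_slave1, ds_slave2
    }
}
```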

When we stop one of the slaves and run queries again, they alternate between success and failure. That's because this setup is only load balancing: when a slave goes down, it is not automatically removed. When round-robin lands on the unreachable slave, the query fails; when it lands on the live slave, it succeeds. So the next step is to add failover, to keep requests from being routed to a MySQL instance that has already failed.

Configuring a registry center

First we start a ZooKeeper instance; for convenience, again using Docker:

docker run -p 2181:2181 --name zk --restart unless-stopped -d zookeeper

Since ZooKeeper is commonly used and consumes very few resources, I used --restart unless-stopped, which means the container restarts automatically every time unless it is manually stopped.

Then add the following to server.yaml:

orchestration:
  name: orchestration_ds
  overwrite: true
  registry:
    type: zookeeper
    serverLists: 172.17.0.2:2181
    namespace: orchestration

The value of serverLists is the ZooKeeper address. Start the service, and you'll see that our proxy instance has registered itself with ZooKeeper.

sharding-ui (optional)

sharding-ui is an official ShardingSphere web UI for data governance. The GitHub repository is https://github.com/apache/incubator-shardingsphere. After downloading the code, switch to the 4.0.1 branch (tag), build the backend and frontend projects (sharding-ui-backend and sharding-ui-frontend) with Maven, and start them. The default credentials are admin/admin; after logging in you need to add the registry center. We switch branches because the ShardingProxy we installed via Docker earlier is version 4.0.1. This process is fairly straightforward, so I won't go into detail here.

zkui (optional)

To browse the information in ZooKeeper conveniently, you can use the zkui tool. GitHub repository: https://github.com/DeemOpen/zkui.git

Download the code, enter the project directory, and run mvn clean package to produce target/zkui-2.0-SNAPSHOT-jar-with-dependencies.jar. Copy the project's config.cfg into the same directory as the jar, run java -jar zkui-2.0-SNAPSHOT-jar-with-dependencies.jar, then open the page on port 9090 to see the zkui interface. The default credentials are:

Username: admin
Password: manager

You can also inspect it directly with zkCli.sh:

[zk: localhost:2181(CONNECTED) 9] ls /orchestration/orchestration_ds
[config, state]

When we enable or disable a DataSource through the UI, ZooKeeper fires a WatchedEvent:

[zk: localhost:2181(CONNECTED) 18] ls /orchestration/orchestration_ds/state/datasources
[master_slave_db.ds_slave1, master_slave_db.ds_slave2]
WatchedEvent state:SyncConnected type:NodeDataChanged path:/orchestration/orchestration_ds/state/datasources/xxx

Inside the application this is picked up by the org.apache.shardingsphere.orchestration.internal.registry.state.listener.DataSourceStateChangedListener listener, which then publishes an internal DisabledStateChangedEvent object using Guava's simple event publish/subscribe component (EventBus). A method annotated with com.google.common.eventbus.Subscribe handles the event and adds the disabled data source name to the disabled-DataSource list of the MasterSlaveRule object, so subsequent queries exclude the disabled slave nodes.

The name master_slave_db.ds_slave1 is actually made up of two parts: before the dot is the logical schema name, and after the dot is the data source name.
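Splitting such a qualified name apart is a one-liner; a tiny illustration:

```java
// Split a "schema.dataSource" name on the first dot.
public class QualifiedNameDemo {
    public static String[] split(String qualified) {
        return qualified.split("\\.", 2);
    }

    public static void main(String[] args) {
        String[] parts = split("master_slave_db.ds_slave1");
        System.out.println(parts[0]); // master_slave_db (logical schema)
        System.out.println(parts[1]); // ds_slave1 (data source name)
    }
}
```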

The master-slave support in ShardingSphere Proxy still has some rough edges. For example, when all slaves are DISABLED, you get a division-by-zero exception (at org/apache/shardingsphere/core/strategy/masterslave/RoundRobinMasterSlaveLoadBalanceAlgorithm.java:52). Another issue: a DataSource you obtain may very well already be unreachable, yet its state is not DISABLED; so far I haven't seen any logic that disables it automatically.
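The division-by-zero is easy to reproduce in isolation: a round-robin pick has no guard for an empty slave list (again my own sketch, not the actual source):

```java
import java.util.Collections;
import java.util.List;

// With every slave disabled, the candidate list is empty and the
// modulo in the round-robin pick divides by zero.
public class AllSlavesDisabledDemo {
    public static String pick(List<String> slaveNames, int count) {
        return slaveNames.get(count % slaveNames.size()); // size() == 0 -> ArithmeticException
    }

    public static void main(String[] args) {
        try {
            pick(Collections.emptyList(), 0);
        } catch (ArithmeticException e) {
            System.out.println("all slaves disabled: ArithmeticException " + e.getMessage());
        }
    }
}
```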

My thought is to automatically probe each DataSource at some interval: take a connection, run the isValid method on it, and if it is not valid, add that DataSource to the DISABLED set.
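That idea can be sketched as a scheduled task that borrows one connection per data source and calls Connection.isValid (the class and method names here, like DataSourceHealthChecker, are my own invention, not ShardingSphere API):

```java
import java.sql.Connection;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import javax.sql.DataSource;

// Hypothetical periodic health check: any data source whose probe fails
// is added to a disabled set that a load balancer could consult.
public class DataSourceHealthChecker {
    private final Set<String> disabled = ConcurrentHashMap.newKeySet();
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    // Record one probe result: broken sources join the disabled set,
    // recovered ones leave it.
    public void record(String name, boolean healthy) {
        if (healthy) {
            disabled.remove(name);
        } else {
            disabled.add(name);
        }
    }

    public void start(Map<String, DataSource> dataSources, long periodSeconds) {
        scheduler.scheduleAtFixedRate(() -> dataSources.forEach((name, ds) -> {
            boolean healthy;
            try (Connection conn = ds.getConnection()) {
                healthy = conn.isValid(3); // 3-second validation timeout
            } catch (Exception e) {
                healthy = false;           // cannot even obtain a connection
            }
            record(name, healthy);
        }), 0, periodSeconds, TimeUnit.SECONDS);
    }

    public Set<String> getDisabled() {
        return disabled;
    }
}
```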
