Apache ShardingSphere sharding-jdbc: A Small Exercise in Distributed Transactions

XA-standard distributed transaction management (Atomikos implementation)

A distributed transaction under the XA specification consists of three roles: AP, RM, and TM.

  • Application Program (AP): defines transaction boundaries (where a transaction begins and ends) and accesses the resources inside those boundaries.
  • Resource Manager (RM): manages shared resources that many programs can access, such as databases, file systems, and print servers.
  • Transaction Manager (TM): manages global transactions, assigns each transaction a unique identifier, monitors its progress, and is responsible for commit, rollback, and failure recovery.

[Specification]  https://pubs.opengroup.org/onlinepubs/009680699/toc.pdf 

We demonstrate with plain JDBC, inserting 100 order rows into three different data sources (ds4/ds5/ds6). XA transactions are all based on two-phase commit. In XA terms, our JDBC code is the AP, each MySQL instance is an RM, and Atomikos acts as the TM.
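To make the two-phase commit flow concrete before reading the logs, here is a minimal, self-contained sketch of a 2PC coordinator. The `Participant` interface and `twoPhaseCommit` method are hypothetical illustrations, not ShardingSphere or Atomikos APIs: the coordinator asks every participant to prepare first, commits only if all of them vote OK, and otherwise rolls everything back.

```java
import java.util.List;

public class TwoPhaseCommitSketch {

    // Hypothetical participant, standing in for an XA resource (e.g. one MySQL data source).
    interface Participant {
        boolean prepare();  // phase 1: vote OK (true) or abort (false)
        void commit();      // phase 2, success path
        void rollback();    // phase 2, abort path
    }

    // Phase 1: every participant must vote OK; phase 2: commit all, or roll back all.
    static boolean twoPhaseCommit(List<Participant> participants) {
        for (Participant p : participants) {
            if (!p.prepare()) {
                participants.forEach(Participant::rollback);
                return false;
            }
        }
        participants.forEach(Participant::commit);
        return true;
    }

    public static void main(String[] args) {
        Participant ok = new Participant() {
            public boolean prepare() { return true; }
            public void commit() { }
            public void rollback() { }
        };
        Participant failing = new Participant() {
            public boolean prepare() { return false; } // simulates an unreachable data source
            public void commit() { }
            public void rollback() { }
        };
        System.out.println(twoPhaseCommit(List.of(ok, ok)));       // all prepared -> committed: true
        System.out.println(twoPhaseCommit(List.of(ok, failing)));  // one vote fails -> rolled back: false
    }
}
```

This is exactly the shape visible in the Atomikos logs further down: a `prepare` per resource returning OK, followed by a `commit` per resource.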

public class BasedJavaCodeTransaction extends BasedJavaCodeConfig {

    private int count = 100;

    public void doInsertTransactionWithSharding() throws SQLException {
        // Must be set before fetching the connection so ShardingSphere opens an XA transaction.
        TransactionTypeHolder.set(TransactionType.XA);
        DataSource dataSource = this.shardingDataSource();
        try (Connection connection = dataSource.getConnection();
             PreparedStatement ps = connection.prepareStatement("insert into `order` (user_id,fee) values(?,?)")) {
            connection.setAutoCommit(false);
            try {
                for (int i = 0; i < count; i++) {
                    ps.setLong(1, count + i);
                    ps.setBigDecimal(2, BigDecimal.valueOf(new Random().nextDouble()).setScale(2, RoundingMode.HALF_UP));
                    ps.executeUpdate();
                }
                connection.commit();
            } catch (SQLException sqlException) {
                connection.rollback(); // undo the partial work on all data sources
                throw sqlException;
            }
        }
    }
}

The method below lives in the parent class; it builds the ShardingDataSource configuration:

public DataSource shardingDataSource() throws SQLException {
    Map<String, DataSource> dataSourceMap = dataSourceMap();
    TableRuleConfiguration orderTableRuleConfig = new TableRuleConfiguration("order", "ds${4..6}.order${0..1}");
    orderTableRuleConfig.setDatabaseShardingStrategyConfig(new InlineShardingStrategyConfiguration("user_id", "ds$->{user_id%3 + 4}"));
    orderTableRuleConfig.setTableShardingStrategyConfig(new InlineShardingStrategyConfiguration("id", "order$->{id%2}"));
    ShardingRuleConfiguration shardingRuleConfig = new ShardingRuleConfiguration();
    shardingRuleConfig.getTableRuleConfigs().add(orderTableRuleConfig);
    KeyGeneratorConfiguration defaultKeyGeneratorConfig = new KeyGeneratorConfiguration("snowflake", "id");
    shardingRuleConfig.setDefaultKeyGeneratorConfig(defaultKeyGeneratorConfig);
    shardingRuleConfig.setMasterSlaveRuleConfigs(Arrays.asList(
            new MasterSlaveRuleConfiguration("ds4", "ds4", Arrays.asList("ds4_slave1"), new LoadBalanceStrategyConfiguration("ROUND_ROBIN")),
            new MasterSlaveRuleConfiguration("ds5", "ds5", Arrays.asList("ds5_slave1"), new LoadBalanceStrategyConfiguration("ROUND_ROBIN")),
            new MasterSlaveRuleConfiguration("ds6", "ds6", Arrays.asList("ds6_slave1"), new LoadBalanceStrategyConfiguration("ROUND_ROBIN"))
    ));
    return ShardingDataSourceFactory.createDataSource(dataSourceMap, shardingRuleConfig, props);
}
protected Map<String, DataSource> dataSourceMap() {
    // Configure the actual data sources
    Map<String, DataSource> dataSourceMap = new HashMap<>();
    dataSourceMap.put("ds1", createBasicDataSource("jdbc:mysql://localhost:33080/test"));
    dataSourceMap.put("ds2", createBasicDataSource("jdbc:mysql://localhost:33081/test"));
    dataSourceMap.put("ds3", createBasicDataSource("jdbc:mysql://localhost:33082/test"));

    dataSourceMap.put("ds4", createBasicDataSource("jdbc:mysql://localhost:33083/test"));
    dataSourceMap.put("ds5", createBasicDataSource("jdbc:mysql://localhost:33084/test"));
    dataSourceMap.put("ds6", createBasicDataSource("jdbc:mysql://localhost:33085/test"));

    dataSourceMap.put("ds4_slave1", createBasicDataSource("jdbc:mysql://localhost:33083/test"));
    dataSourceMap.put("ds5_slave1", createBasicDataSource("jdbc:mysql://localhost:33084/test"));
    dataSourceMap.put("ds6_slave1", createBasicDataSource("jdbc:mysql://localhost:33085/test"));
    return dataSourceMap;
}
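The inline expressions in the table rule decide the routing. As a sanity check, the same arithmetic can be done by hand: for the first inserted row with user_id = 100, the database expression ds$->{user_id % 3 + 4} evaluates to ds5, which is exactly what the "Actual SQL: ds5" log line shows. This sketch only mirrors the arithmetic; it is not ShardingSphere's expression evaluator.

```java
public class InlineRoutingSketch {

    // ds$->{user_id % 3 + 4} done by hand
    static String databaseFor(long userId) {
        return "ds" + (userId % 3 + 4);
    }

    // order$->{id % 2} done by hand (id is the generated snowflake key)
    static String tableFor(long id) {
        return "order" + (id % 2);
    }

    public static void main(String[] args) {
        System.out.println(databaseFor(100));                 // ds5
        System.out.println(tableFor(459797814231171072L));    // order0
        System.out.println(databaseFor(101));                 // ds6
        System.out.println(tableFor(459797814633824257L));    // order1
    }
}
```

Both pairs match the "Actual SQL" lines in the log: [100, 0.26, 459797814231171072] routes to ds5.order0 and [101, 0.32, 459797814633824257] routes to ds6.order1.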

Only ds4, ds5, and ds6 are actually used for sharding. After running the code, the console prints:

19:12:21 com.atomikos.logging.Slf4jLogger.logDebug:32 [DEBUG] createCompositeTransaction ( 300000 ): created new ROOT transaction with id 127.0.1.1.tm158755394106700001
19:12:21 org.apache.shardingsphere.core.route.SQLLogger.log:99 [INFO ] Rule Type: sharding
19:12:21 org.apache.shardingsphere.core.route.SQLLogger.log:99 [INFO ] Logic SQL: insert into `order` (user_id,fee) values(?,?)
19:12:21 org.apache.shardingsphere.core.route.SQLLogger.log:99 [INFO ] SQLStatement: InsertSQLStatementContext(super=CommonSQLStatementContext(sqlStatement=org.apache.shardingsphere.sql.parser.sql.statement.dml.InsertStatement@47fba5ef, tablesContext=TablesContext(tables=[Table(name=order, alias=Optional.absent())], schema=Optional.absent())), columnNames=[user_id, fee], insertValueContexts=[InsertValueContext(parametersCount=2, valueExpressions=[ParameterMarkerExpressionSegment(startIndex=41, stopIndex=41, parameterMarkerIndex=0), ParameterMarkerExpressionSegment(startIndex=43, stopIndex=43, parameterMarkerIndex=1), DerivedParameterMarkerExpressionSegment(super=ParameterMarkerExpressionSegment(startIndex=0, stopIndex=0, parameterMarkerIndex=2))], parameters=[100, 0.26])])
19:12:21 org.apache.shardingsphere.core.route.SQLLogger.log:99 [INFO ] Actual SQL: ds5 ::: insert into `order0` (user_id,fee, id) values(?, ?, ?) ::: [100, 0.26, 459797814231171072]
19:12:21 com.atomikos.logging.Slf4jLogger.logDebug:32 [DEBUG] enlistResource ( org.apache.shardingsphere.transaction.xa.spi.SingleXAResource@5b24aea6 ) with transaction 127.0.1.1.tm158755394106700001
19:12:21 com.atomikos.logging.Slf4jLogger.logDebug:32 [DEBUG] addParticipant ( XAResourceTransaction: 3132372E302E312E312E746D313538373535333934313036373030303031:3132372E302E312E312E746D31 ) for transaction 127.0.1.1.tm158755394106700001
19:12:21 com.atomikos.logging.Slf4jLogger.logDebug:32 [DEBUG] XAResource.start ( 3132372E302E312E312E746D313538373535333934313036373030303031:3132372E302E312E312E746D31 , XAResource.TMNOFLAGS ) on resource resource-6-ds5 represented by XAResource instance org.apache.shardingsphere.transaction.xa.spi.SingleXAResource@5b24aea6
19:12:21 com.atomikos.logging.Slf4jLogger.logDebug:32 [DEBUG] registerSynchronization ( com.atomikos.icatch.jta.Sync2Sync@453455dd ) for transaction 127.0.1.1.tm158755394106700001
19:12:21 org.apache.shardingsphere.core.route.SQLLogger.log:99 [INFO ] Rule Type: sharding
19:12:21 org.apache.shardingsphere.core.route.SQLLogger.log:99 [INFO ] Logic SQL: insert into `order` (user_id,fee) values(?,?)
19:12:21 org.apache.shardingsphere.core.route.SQLLogger.log:99 [INFO ] SQLStatement: InsertSQLStatementContext(super=CommonSQLStatementContext(sqlStatement=org.apache.shardingsphere.sql.parser.sql.statement.dml.InsertStatement@47fba5ef, tablesContext=TablesContext(tables=[Table(name=order, alias=Optional.absent())], schema=Optional.absent())), columnNames=[user_id, fee], insertValueContexts=[InsertValueContext(parametersCount=2, valueExpressions=[ParameterMarkerExpressionSegment(startIndex=41, stopIndex=41, parameterMarkerIndex=0), ParameterMarkerExpressionSegment(startIndex=43, stopIndex=43, parameterMarkerIndex=1), DerivedParameterMarkerExpressionSegment(super=ParameterMarkerExpressionSegment(startIndex=0, stopIndex=0, parameterMarkerIndex=2))], parameters=[101, 0.32])])
19:12:21 org.apache.shardingsphere.core.route.SQLLogger.log:99 [INFO ] Actual SQL: ds6 ::: insert into `order1` (user_id,fee, id) values(?, ?, ?) ::: [101, 0.32, 459797814633824257]
19:12:21 com.atomikos.logging.Slf4jLogger.logDebug:32 [DEBUG] enlistResource ( org.apache.shardingsphere.transaction.xa.spi.SingleXAResource@7cf1767c ) with transaction 127.0.1.1.tm158755394106700001
19:12:21 com.atomikos.logging.Slf4jLogger.logDebug:32 [DEBUG] addParticipant ( XAResourceTransaction: 3132372E302E312E312E746D313538373535333934313036373030303031:3132372E302E312E312E746D32 ) for transaction 127.0.1.1.tm158755394106700001
19:12:21 com.atomikos.logging.Slf4jLogger.logDebug:32 [DEBUG] XAResource.start ( 3132372E302E312E312E746D313538373535333934313036373030303031:3132372E302E312E312E746D32 , XAResource.TMNOFLAGS ) on resource resource-5-ds6 represented by XAResource instance org.apache.shardingsphere.transaction.xa.spi.SingleXAResource@7cf1767c
19:12:21 com.atomikos.logging.Slf4jLogger.logDebug:32 [DEBUG] registerSynchronization ( com.atomikos.icatch.jta.Sync2Sync@42a565eb ) for transaction 127.0.1.1.tm158755394106700001
........
........
19:12:21 com.atomikos.logging.Slf4jLogger.logDebug:32 [DEBUG] beforeCompletion() called on Synchronization: org.apache.shardingsphere.transaction.xa.jta.datasource.XATransactionDataSource$2@4b7c803f
19:12:21 com.atomikos.logging.Slf4jLogger.logDebug:32 [DEBUG] beforeCompletion() called on Synchronization: org.apache.shardingsphere.transaction.xa.jta.datasource.XATransactionDataSource$2@6ddec86a
19:12:21 com.atomikos.logging.Slf4jLogger.logDebug:32 [DEBUG] beforeCompletion() called on Synchronization: org.apache.shardingsphere.transaction.xa.jta.datasource.XATransactionDataSource$2@2e99311a
19:12:21 com.atomikos.logging.Slf4jLogger.logDebug:32 [DEBUG] commit() done (by application) of transaction 127.0.1.1.tm158755394106700001
19:12:21 com.atomikos.logging.Slf4jLogger.logDebug:32 [DEBUG] XAResource.end ( 3132372E302E312E312E746D313538373535333934313036373030303031:3132372E302E312E312E746D31 , XAResource.TMSUCCESS ) on resource resource-6-ds5 represented by XAResource instance org.apache.shardingsphere.transaction.xa.spi.SingleXAResource@5b24aea6
19:12:21 com.atomikos.logging.Slf4jLogger.logDebug:32 [DEBUG] XAResource.prepare ( 3132372E302E312E312E746D313538373535333934313036373030303031:3132372E302E312E312E746D31 ) returning OK on resource resource-6-ds5 represented by XAResource instance org.apache.shardingsphere.transaction.xa.spi.SingleXAResource@5b24aea6
19:12:21 com.atomikos.logging.Slf4jLogger.logDebug:32 [DEBUG] XAResource.end ( 3132372E302E312E312E746D313538373535333934313036373030303031:3132372E302E312E312E746D32 , XAResource.TMSUCCESS ) on resource resource-5-ds6 represented by XAResource instance org.apache.shardingsphere.transaction.xa.spi.SingleXAResource@7cf1767c
19:12:21 com.atomikos.logging.Slf4jLogger.logDebug:32 [DEBUG] XAResource.prepare ( 3132372E302E312E312E746D313538373535333934313036373030303031:3132372E302E312E312E746D32 ) returning OK on resource resource-5-ds6 represented by XAResource instance org.apache.shardingsphere.transaction.xa.spi.SingleXAResource@7cf1767c
19:12:21 com.atomikos.logging.Slf4jLogger.logDebug:32 [DEBUG] XAResource.end ( 3132372E302E312E312E746D313538373535333934313036373030303031:3132372E302E312E312E746D33 , XAResource.TMSUCCESS ) on resource resource-3-ds4 represented by XAResource instance org.apache.shardingsphere.transaction.xa.spi.SingleXAResource@6bd84c05
19:12:21 com.atomikos.logging.Slf4jLogger.logDebug:32 [DEBUG] XAResource.prepare ( 3132372E302E312E312E746D313538373535333934313036373030303031:3132372E302E312E312E746D33 ) returning OK on resource resource-3-ds4 represented by XAResource instance org.apache.shardingsphere.transaction.xa.spi.SingleXAResource@6bd84c05
19:12:21 com.atomikos.logging.Slf4jLogger.logDebug:32 [DEBUG] XAResource.commit ( 3132372E302E312E312E746D313538373535333934313036373030303031:3132372E302E312E312E746D31 , false ) on resource resource-6-ds5 represented by XAResource instance org.apache.shardingsphere.transaction.xa.spi.SingleXAResource@5b24aea6
19:12:21 com.atomikos.logging.Slf4jLogger.logDebug:32 [DEBUG] XAResource.commit ( 3132372E302E312E312E746D313538373535333934313036373030303031:3132372E302E312E312E746D32 , false ) on resource resource-5-ds6 represented by XAResource instance org.apache.shardingsphere.transaction.xa.spi.SingleXAResource@7cf1767c
19:12:21 com.atomikos.logging.Slf4jLogger.logDebug:32 [DEBUG] XAResource.commit ( 3132372E302E312E312E746D313538373535333934313036373030303031:3132372E302E312E312E746D33 , false ) on resource resource-3-ds4 represented by XAResource instance org.apache.shardingsphere.transaction.xa.spi.SingleXAResource@6bd84c05
19:12:21 com.atomikos.logging.Slf4jLogger.logDebug:32 [DEBUG] afterCompletion ( STATUS_COMMITTED ) called  on Synchronization: org.apache.shardingsphere.transaction.xa.jta.datasource.XATransactionDataSource$2@2e99311a
19:12:21 com.atomikos.logging.Slf4jLogger.logDebug:32 [DEBUG] afterCompletion ( STATUS_COMMITTED ) called  on Synchronization: org.apache.shardingsphere.transaction.xa.jta.datasource.XATransactionDataSource$2@6ddec86a
19:12:21 com.atomikos.logging.Slf4jLogger.logDebug:32 [DEBUG] afterCompletion ( STATUS_COMMITTED ) called  on Synchronization: org.apache.shardingsphere.transaction.xa.jta.datasource.XATransactionDataSource$2@4b7c803f

To keep things short, I omitted all the insert log lines after the first few. From the log you can see that a global transaction id is created at execution time (id = 127.0.1.1.tm158755394106700001), a branch transaction is started on each data source and associated with that global id, and once all rows are inserted, prepare() runs first on every resource and returns OK to confirm the data can be committed; only then does commit() run to commit each data source's transaction.

If some data node (for example ds6) becomes unreachable before connection.commit(), the other data sources involved in the global transaction are rolled back as well. To test the rollback, enable DEBUG logging, set a breakpoint on the connection.commit() line, stop ds6 while the breakpoint is held, and then resume execution. The transaction rolls back, and the console shows a rollback log entry for each data source:

[WARN ] XA resource 'resource-5-ds6': rollback for XID '3132372E302E312E312E746D313538373535333934313036373030303031:3132372E302E312E312E746D31' raised -7: the XA resource has become unavailable
[DEBUG] XAResource.rollback ( 3132372E302E312E312E746D313538373935333833353839373030303031:3132372E302E312E312E746D31 ) on resource resource-3-ds4 represented by XAResource instance org.apache.shardingsphere.transaction.xa.spi.SingleXAResource@4142a84c
[DEBUG] XAResource.rollback ( 3132372E302E312E312E746D313538373935333833353839373030303031:3132372E302E312E312E746D31 ) on resource resource-3-ds5 represented by XAResource instance org.apache.shardingsphere.transaction.xa.spi.SingleXAResource@4142a84c

Integrating Seata for distributed transaction management

Preparation

Create the undo_log table

Seata is an open-source distributed transaction framework developed by Alibaba. Before using it, each of the sharded databases needs a table for storing rollback records:

CREATE TABLE IF NOT EXISTS `undo_log`
(
  `id`            BIGINT(20)   NOT NULL AUTO_INCREMENT COMMENT 'increment id',
  `branch_id`     BIGINT(20)   NOT NULL COMMENT 'branch transaction id',
  `xid`           VARCHAR(100) NOT NULL COMMENT 'global transaction id',
  `context`       VARCHAR(128) NOT NULL COMMENT 'undo_log context,such as serialization',
  `rollback_info` LONGBLOB     NOT NULL COMMENT 'rollback info',
  `log_status`    INT(11)      NOT NULL COMMENT '0:normal status,1:defense status',
  `log_created`   DATETIME     NOT NULL COMMENT 'create datetime',
  `log_modified`  DATETIME     NOT NULL COMMENT 'modify datetime',
  PRIMARY KEY (`id`),
  UNIQUE KEY `ux_undo_log` (`xid`, `branch_id`)
) ENGINE = InnoDB
  AUTO_INCREMENT = 1
  DEFAULT CHARSET = utf8 COMMENT ='AT transaction mode undo table';

The ShardingSphere version used here is 4.0.1, whose matching Seata version is 1.0.0. The latest Seata release is currently 1.2.0, but in my tests both 1.1.0 and 1.2.0 fail to integrate with 4.0.1.

While using it I also ran into a bug: after adding a new column, queries against the sharded tables throw a NullPointerException. Restarting the proxy avoids the problem; the likely cause is that the MetaData held in the application's LogicSchema is not refreshed when the database schema changes. The maintainers say this bug is fixed in 4.1.0, though I have not had a chance to verify that yet.

Configure seata.conf

Next, a seata.conf configuration file must exist on the classpath, with the following content:

client {
    application.id=study-sharding-jdbc
    transaction.service.group = my_test_tx_group
}

A detailed explanation of these settings is still to be filled in; the official documentation is not very clear either. In my tests, the value of transaction.service.group has to be my_test_tx_group: changing it to anything else causes a registration failure. Posts online say it must match the xxx in vgroup_mapping.xxx on the Seata server side, but I could not find any my_test_tx_group entry in my local Seata server configuration.
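For reference, the way this mapping is conventionally expressed in Seata's file.conf is a service block like the one below. This is a sketch based on Seata's documented convention, not copied from my actual setup; "default" names the server cluster that the group maps to, and default.grouplist points at that cluster's address:

```
service {
  # maps the transaction service group to a Seata server cluster name
  vgroup_mapping.my_test_tx_group = "default"
  # address list of the "default" cluster
  default.grouplist = "127.0.0.1:8091"
}
```

If no such entry exists, the client falls back to defaults, which may explain why registration still succeeds locally with my_test_tx_group but fails with other names.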

Switch the code to BASE transactions

The JDBC-based code above is also modified: a method that uses TransactionType.BASE for the distributed transaction is added, as shown below. Everything else stays the same.

public class BasedJavaCodeTransaction extends BasedJavaCodeConfig {

    private int count = 100;

    public void distributionTransactionWithXA() throws SQLException {
        TransactionTypeHolder.set(TransactionType.XA);
        doInsertOrders();
    }

    public void distributionTransactionWithSEATA() throws SQLException {
        TransactionTypeHolder.set(TransactionType.BASE);
        doInsertOrders();
    }

    private void doInsertOrders() throws SQLException {
        DataSource dataSource = this.shardingDataSource();
        try (Connection connection = dataSource.getConnection();
             PreparedStatement ps = connection.prepareStatement("insert into `order` (user_id,fee) values(?,?)")) {
            connection.setAutoCommit(false);
            try {
                for (int i = 0; i < count; i++) {
                    ps.setLong(1, count + i);
                    ps.setBigDecimal(2, BigDecimal.valueOf(new Random().nextDouble()).setScale(2, RoundingMode.HALF_UP));
                    ps.executeUpdate();
                }
                connection.commit();
            } catch (SQLException sqlException) {
                connection.rollback(); // undo the partial work on all data sources
                throw sqlException;
            }
        }
    }
}

Then run the method:

basedJavaCodeTransaction.distributionTransactionWithSEATA();
Thread.sleep(1000); // pause one second so the application does not exit right after the DB work, which would leave the Seata server in an inconsistent state

Console log

14:01:29 io.seata.config.FileConfiguration.<init>:101 [INFO ] The file name of the operation is registry.conf
14:01:29 io.seata.config.ConfigurationFactory.<clinit>:68 [WARN ] failed to load extConfiguration:not found service provider for : io.seata.config.ExtConfigurationProvider[null] and classloader : sun.misc.Launcher$AppClassLoader@42a57993
io.seata.common.loader.EnhancedServiceNotFoundException: not found service provider for : io.seata.config.ExtConfigurationProvider[null] and classloader : sun.misc.Launcher$AppClassLoader@42a57993
14:01:29 io.seata.config.FileConfiguration.<init>:101 [INFO ] The file name of the operation is file.conf
14:01:29 io.seata.config.ConfigurationFactory.buildConfiguration:118 [WARN ] failed to load extConfiguration:not found service provider for : io.seata.config.ExtConfigurationProvider[null] and classloader : sun.misc.Launcher$AppClassLoader@42a57993
io.seata.common.loader.EnhancedServiceNotFoundException: not found service provider for : io.seata.config.ExtConfigurationProvider[null] and classloader : sun.misc.Launcher$AppClassLoader@42a57993
14:01:29 io.seata.config.FileConfiguration$ConfigOperateRunnable.run:266 [WARN ] Could not found property transport.thread-factory.boss-thread-prefix, try to use default value instead.
14:01:29 io.seata.config.FileConfiguration$ConfigOperateRunnable.run:266 [WARN ] Could not found property transport.thread-factory.worker-thread-prefix, try to use default value instead.
14:01:29 io.seata.config.FileConfiguration$ConfigOperateRunnable.run:266 [WARN ] Could not found property transport.thread-factory.share-boss-worker, try to use default value instead.
14:01:29 io.seata.config.FileConfiguration$ConfigOperateRunnable.run:266 [WARN ] Could not found property transport.type, try to use default value instead.
14:01:29 io.seata.config.FileConfiguration$ConfigOperateRunnable.run:266 [WARN ] Could not found property transport.thread-factory.worker-thread-size, try to use default value instead.
14:01:29 io.netty.util.internal.logging.InternalLoggerFactory.newDefaultFactory:45 [DEBUG] Using SLF4J as the default logging framework
14:01:29 io.seata.config.FileConfiguration$ConfigOperateRunnable.run:266 [WARN ] Could not found property transport.server, try to use default value instead.
14:01:29 io.seata.config.FileConfiguration$ConfigOperateRunnable.run:266 [WARN ] Could not found property transport.heartbeat, try to use default value instead.
14:01:29 io.seata.config.FileConfiguration$ConfigOperateRunnable.run:266 [WARN ] Could not found property transport.enable-client-batch-send-request, try to use default value instead.
14:01:29 io.seata.config.FileConfiguration$ConfigOperateRunnable.run:266 [WARN ] Could not found property transport.thread-factory.client-selector-thread-size, try to use default value instead.
14:01:29 io.netty.channel.MultithreadEventLoopGroup.<clinit>:44 [DEBUG] -Dio.netty.eventLoopThreads: 24
14:01:29 io.seata.config.FileConfiguration$ConfigOperateRunnable.run:266 [WARN ] Could not found property transport.thread-factory.client-selector-thread-prefix, try to use default value instead.
14:01:29 io.netty.util.internal.InternalThreadLocalMap.<clinit>:56 [DEBUG] -Dio.netty.threadLocalMap.stringBuilder.initialSize: 1024
14:01:29 io.netty.util.internal.InternalThreadLocalMap.<clinit>:59 [DEBUG] -Dio.netty.threadLocalMap.stringBuilder.maxSize: 4096
14:01:29 io.netty.channel.nio.NioEventLoop.<clinit>:106 [DEBUG] -Dio.netty.noKeySetOptimization: false
14:01:29 io.netty.channel.nio.NioEventLoop.<clinit>:107 [DEBUG] -Dio.netty.selectorAutoRebuildThreshold: 512
14:01:29 io.netty.util.internal.PlatformDependent0.explicitNoUnsafeCause0:396 [DEBUG] -Dio.netty.noUnsafe: false
14:01:29 io.netty.util.internal.PlatformDependent0.javaVersion0:852 [DEBUG] Java version: 8
14:01:29 io.netty.util.internal.PlatformDependent0.<clinit>:121 [DEBUG] sun.misc.Unsafe.theUnsafe: available
14:01:29 io.netty.util.internal.PlatformDependent0.<clinit>:145 [DEBUG] sun.misc.Unsafe.copyMemory: available
14:01:29 io.netty.util.internal.PlatformDependent0.<clinit>:183 [DEBUG] java.nio.Buffer.address: available
14:01:29 io.netty.util.internal.PlatformDependent0.<clinit>:244 [DEBUG] direct buffer constructor: available
14:01:29 io.netty.util.internal.PlatformDependent0.<clinit>:314 [DEBUG] java.nio.Bits.unaligned: available, true
14:01:29 io.netty.util.internal.PlatformDependent0.<clinit>:379 [DEBUG] jdk.internal.misc.Unsafe.allocateUninitializedArray(int): unavailable prior to Java9
14:01:29 io.netty.util.internal.PlatformDependent0.<clinit>:386 [DEBUG] java.nio.DirectByteBuffer.<init>(long, int): available
14:01:29 io.netty.util.internal.PlatformDependent.unsafeUnavailabilityCause0:1046 [DEBUG] sun.misc.Unsafe: available
14:01:29 io.netty.util.internal.PlatformDependent.tmpdir0:1165 [DEBUG] -Dio.netty.tmpdir: /tmp (java.io.tmpdir)
14:01:29 io.netty.util.internal.PlatformDependent.bitMode0:1244 [DEBUG] -Dio.netty.bitMode: 64 (sun.arch.data.model)
14:01:29 io.netty.util.internal.PlatformDependent.<clinit>:177 [DEBUG] -Dio.netty.maxDirectMemory: 477626368 bytes
14:01:29 io.netty.util.internal.PlatformDependent.<clinit>:184 [DEBUG] -Dio.netty.uninitializedArrayAllocationThreshold: -1
14:01:29 io.netty.util.internal.CleanerJava6.<clinit>:92 [DEBUG] java.nio.ByteBuffer.cleaner(): available
14:01:29 io.netty.util.internal.PlatformDependent.<clinit>:204 [DEBUG] -Dio.netty.noPreferDirect: false
14:01:29 io.netty.util.internal.PlatformDependent$Mpsc.<clinit>:907 [DEBUG] org.jctools-core.MpscChunkedArrayQueue: available
14:01:29 io.seata.config.FileConfiguration$ConfigOperateRunnable.run:266 [WARN ] Could not found property service.enableDegrade, try to use default value instead.
14:01:29 io.seata.config.FileConfiguration$ConfigOperateRunnable.run:266 [WARN ] Could not found property transport.thread-factory.client-worker-thread-prefix, try to use default value instead.
14:01:29 io.seata.core.rpc.netty.RpcClientBootstrap.start:157 [INFO ] RpcClientBootstrap has started
14:01:29 io.seata.config.FileConfiguration$ConfigOperateRunnable.run:266 [WARN ] Could not found property transport.thread-factory.client-selector-thread-size, try to use default value instead.
14:01:29 io.seata.config.FileConfiguration$ConfigOperateRunnable.run:266 [WARN ] Could not found property transport.thread-factory.client-selector-thread-prefix, try to use default value instead.
14:01:29 io.seata.rm.datasource.AsyncWorker.init:126 [INFO ] Async Commit Buffer Limit: 10000
14:01:29 io.seata.config.FileConfiguration$ConfigOperateRunnable.run:266 [WARN ] Could not found property client.rm.async.commit.buffer.limit, try to use default value instead.
14:01:29 io.seata.config.FileConfiguration$ConfigOperateRunnable.run:266 [WARN ] Could not found property transport.thread-factory.client-worker-thread-prefix, try to use default value instead.
14:01:29 io.seata.core.rpc.netty.RpcClientBootstrap.start:157 [INFO ] RpcClientBootstrap has started
14:01:29 io.seata.config.FileConfiguration$ConfigOperateRunnable.run:266 [WARN ] Could not found property client.rm.table.meta.check.enable, try to use default value instead.
14:01:29 io.seata.core.rpc.netty.NettyClientChannelManager.acquireChannel:99 [INFO ] will connect to 127.0.0.1:8091
14:01:29 io.seata.core.rpc.netty.RmRpcClient.lambda$getPoolKeyFunction$0:149 [INFO ] RM will register :jdbc:mysql://localhost:33081/test
14:01:29 io.seata.core.rpc.netty.NettyPoolableFactory.makeObject:56 [INFO ] NettyPool create channel to transactionRole:RMROLE,address:127.0.0.1:8091,msg:< RegisterRMRequest{resourceIds='jdbc:mysql://localhost:33081/test', applicationId='study-sharding-jdbc', transactionServiceGroup='my_test_tx_group'} >
14:01:29 io.netty.channel.DefaultChannelId.<clinit>:79 [DEBUG] -Dio.netty.processId: 8550 (auto-detected)
14:01:29 io.netty.util.NetUtil.<clinit>:139 [DEBUG] -Djava.net.preferIPv4Stack: false
14:01:29 io.netty.util.NetUtil.<clinit>:140 [DEBUG] -Djava.net.preferIPv6Addresses: false
14:01:29 io.netty.util.NetUtil.<clinit>:224 [DEBUG] Loopback interface: lo (lo, 0:0:0:0:0:0:0:1%lo)
14:01:29 io.netty.util.NetUtil$1.run:271 [DEBUG] /proc/sys/net/core/somaxconn: 128
14:01:29 io.netty.channel.DefaultChannelId.<clinit>:101 [DEBUG] -Dio.netty.machineId: 18:56:80:ff:fe:6a:b8:dc (auto-detected)
14:01:29 io.netty.util.ResourceLeakDetector.<clinit>:130 [DEBUG] -Dio.netty.leakDetection.level: simple
14:01:29 io.netty.util.ResourceLeakDetector.<clinit>:131 [DEBUG] -Dio.netty.leakDetection.targetRecords: 4
14:01:29 io.netty.buffer.PooledByteBufAllocator.<clinit>:156 [DEBUG] -Dio.netty.allocator.numHeapArenas: 4
14:01:29 io.netty.buffer.PooledByteBufAllocator.<clinit>:157 [DEBUG] -Dio.netty.allocator.numDirectArenas: 4
14:01:29 io.netty.buffer.PooledByteBufAllocator.<clinit>:159 [DEBUG] -Dio.netty.allocator.pageSize: 8192
14:01:29 io.netty.buffer.PooledByteBufAllocator.<clinit>:164 [DEBUG] -Dio.netty.allocator.maxOrder: 11
14:01:29 io.netty.buffer.PooledByteBufAllocator.<clinit>:168 [DEBUG] -Dio.netty.allocator.chunkSize: 16777216
14:01:29 io.netty.buffer.PooledByteBufAllocator.<clinit>:169 [DEBUG] -Dio.netty.allocator.tinyCacheSize: 512
14:01:29 io.netty.buffer.PooledByteBufAllocator.<clinit>:170 [DEBUG] -Dio.netty.allocator.smallCacheSize: 256
14:01:29 io.netty.buffer.PooledByteBufAllocator.<clinit>:171 [DEBUG] -Dio.netty.allocator.normalCacheSize: 64
14:01:29 io.netty.buffer.PooledByteBufAllocator.<clinit>:172 [DEBUG] -Dio.netty.allocator.maxCachedBufferCapacity: 32768
14:01:29 io.netty.buffer.PooledByteBufAllocator.<clinit>:173 [DEBUG] -Dio.netty.allocator.cacheTrimInterval: 8192
14:01:29 io.netty.buffer.PooledByteBufAllocator.<clinit>:174 [DEBUG] -Dio.netty.allocator.cacheTrimIntervalMillis: 0
14:01:29 io.netty.buffer.PooledByteBufAllocator.<clinit>:175 [DEBUG] -Dio.netty.allocator.useCacheForAllThreads: true
14:01:29 io.netty.buffer.PooledByteBufAllocator.<clinit>:176 [DEBUG] -Dio.netty.allocator.maxCachedByteBuffersPerChunk: 1023
14:01:29 io.netty.buffer.ByteBufUtil.<clinit>:86 [DEBUG] -Dio.netty.allocator.type: pooled
14:01:29 io.netty.buffer.ByteBufUtil.<clinit>:95 [DEBUG] -Dio.netty.threadLocalDirectBufferSize: 0
14:01:29 io.netty.buffer.ByteBufUtil.<clinit>:98 [DEBUG] -Dio.netty.maxThreadLocalCharBufferSize: 16384
14:01:30 io.seata.config.FileConfiguration$ConfigOperateRunnable.run:266 [WARN ] Could not found property transport.serialization, try to use default value instead.
14:01:30 io.seata.config.FileConfiguration$ConfigOperateRunnable.run:266 [WARN ] Could not found property transport.compressor, try to use default value instead.
14:01:30 io.netty.util.Recycler.<clinit>:97 [DEBUG] -Dio.netty.recycler.maxCapacityPerThread: 4096
14:01:30 io.netty.util.Recycler.<clinit>:98 [DEBUG] -Dio.netty.recycler.maxSharedCapacityFactor: 2
14:01:30 io.netty.util.Recycler.<clinit>:99 [DEBUG] -Dio.netty.recycler.linkCapacity: 16
14:01:30 io.netty.util.Recycler.<clinit>:100 [DEBUG] -Dio.netty.recycler.ratio: 8
14:01:30 io.netty.buffer.AbstractByteBuf.<clinit>:63 [DEBUG] -Dio.netty.buffer.checkAccessible: true
14:01:30 io.netty.buffer.AbstractByteBuf.<clinit>:64 [DEBUG] -Dio.netty.buffer.checkBounds: true
14:01:30 io.netty.util.ResourceLeakDetectorFactory$DefaultResourceLeakDetectorFactory.newResourceLeakDetector:195 [DEBUG] Loaded default ResourceLeakDetector: io.netty.util.ResourceLeakDetector@671190ae
14:01:30 io.seata.common.loader.EnhancedServiceLoader.loadFile:236 [INFO ] load Codec[SEATA] extension by class[io.seata.codec.seata.SeataCodec]
14:01:30 io.seata.core.rpc.netty.AbstractRpcRemoting.channelRead:408 [DEBUG] io.seata.core.rpc.netty.RmRpcClient@55514bee msgId:1, future :io.seata.core.protocol.MessageFuture@4c8ac5b7, body:version=1.2.0,extraData=null,identified=true,resultCode=null,msg=null
14:01:30 io.seata.core.rpc.netty.RmRpcClient.onRegisterMsgSuccess:167 [INFO ] register RM success. server version:1.2.0,channel:[id: 0x923d19bd, L:/127.0.0.1:55016 - R:/127.0.0.1:8091]
14:01:30 io.seata.core.rpc.netty.NettyPoolableFactory.makeObject:81 [INFO ] register success, cost 107 ms, version:1.2.0,role:RMROLE,channel:[id: 0x923d19bd, L:/127.0.0.1:55016 - R:/127.0.0.1:8091]
14:01:30 io.seata.core.rpc.netty.RmRpcClient.registerResource:206 [INFO ] will register resourceId:jdbc:mysql://localhost:33080/test
14:01:30 io.seata.core.rpc.netty.RmRpcClient.registerResource:206 [INFO ] will register resourceId:jdbc:mysql://localhost:33083/test
14:01:30 io.seata.core.rpc.netty.RmRpcClient.registerResource:206 [INFO ] will register resourceId:jdbc:mysql://localhost:33082/test
14:01:30 io.seata.core.rpc.netty.AbstractRpcRemoting.channelRead:408 [DEBUG] io.seata.core.rpc.netty.RmRpcClient@55514bee msgId:2, future :io.seata.core.protocol.MessageFuture@c010bd8, body:version=1.2.0,extraData=null,identified=true,resultCode=null,msg=null
14:01:30 io.seata.core.rpc.netty.RmRpcClient.registerResource:206 [INFO ] will register resourceId:jdbc:mysql://localhost:33085/test
14:01:30 io.seata.core.rpc.netty.RmRpcClient.registerResource:206 [INFO ] will register resourceId:jdbc:mysql://localhost:33084/test
14:01:30 io.seata.core.rpc.netty.AbstractRpcRemoting.channelRead:408 [DEBUG] io.seata.core.rpc.netty.RmRpcClient@55514bee msgId:3, future :io.seata.core.protocol.MessageFuture@66788f55, body:version=1.2.0,extraData=null,identified=true,resultCode=null,msg=null
14:01:30 io.seata.core.rpc.netty.AbstractRpcRemoting.channelRead:408 [DEBUG] io.seata.core.rpc.netty.RmRpcClient@55514bee msgId:4, future :io.seata.core.protocol.MessageFuture@39a9a64f, body:version=1.2.0,extraData=null,identified=true,resultCode=null,msg=null
14:01:30 io.seata.core.rpc.netty.RmRpcClient.registerResource:206 [INFO ] will register resourceId:jdbc:mysql://localhost:33085/test
14:01:30 io.seata.core.rpc.netty.RmRpcClient.registerResource:206 [INFO ] will register resourceId:jdbc:mysql://localhost:33084/test
14:01:30 io.seata.core.rpc.netty.RmRpcClient.registerResource:206 [INFO ] will register resourceId:jdbc:mysql://localhost:33083/test
14:01:30 io.seata.core.rpc.netty.AbstractRpcRemoting.channelRead:408 [DEBUG] io.seata.core.rpc.netty.RmRpcClient@55514bee msgId:5, future :io.seata.core.protocol.MessageFuture@23947bb0, body:version=1.2.0,extraData=null,identified=true,resultCode=null,msg=null
14:01:30 io.seata.core.rpc.netty.AbstractRpcRemoting.channelRead:408 [DEBUG] io.seata.core.rpc.netty.RmRpcClient@55514bee msgId:6, future :io.seata.core.protocol.MessageFuture@27cffbd1, body:version=1.2.0,extraData=null,identified=true,resultCode=null,msg=null
14:01:30 io.seata.core.rpc.netty.AbstractRpcRemoting.channelRead:408 [DEBUG] io.seata.core.rpc.netty.RmRpcClient@55514bee msgId:7, future :io.seata.core.protocol.MessageFuture@5efca5a, body:version=1.2.0,extraData=null,identified=true,resultCode=null,msg=null
14:01:30 io.seata.core.rpc.netty.AbstractRpcRemoting.channelRead:408 [DEBUG] io.seata.core.rpc.netty.RmRpcClient@55514bee msgId:8, future :io.seata.core.protocol.MessageFuture@71009f76, body:version=1.2.0,extraData=null,identified=true,resultCode=null,msg=null
14:01:30 io.seata.core.rpc.netty.AbstractRpcRemoting.channelRead:408 [DEBUG] io.seata.core.rpc.netty.RmRpcClient@55514bee msgId:9, future :io.seata.core.protocol.MessageFuture@7149331d, body:version=1.2.0,extraData=null,identified=true,resultCode=null,msg=null
14:01:30 io.seata.common.loader.EnhancedServiceLoader.loadFile:236 [INFO ] load ContextCore[null] extension by class[io.seata.core.context.ThreadLocalContextCore]
14:01:30 io.seata.config.FileConfiguration$ConfigOperateRunnable.run:266 [WARN ] Could not found property client.tm.commit.retry.count, try to use default value instead.
14:01:30 io.seata.config.FileConfiguration$ConfigOperateRunnable.run:266 [WARN ] Could not found property client.tm.rollback.retry.count, try to use default value instead.
14:01:30 io.seata.common.loader.EnhancedServiceLoader.loadFile:236 [INFO ] load TransactionManager[null] extension by class[io.seata.tm.DefaultTransactionManager]
14:01:30 io.seata.tm.TransactionManagerHolder$SingletonHolder.<clinit>:40 [INFO ] TransactionManager Singleton io.seata.tm.DefaultTransactionManager@31099fd9
14:01:30 io.seata.common.loader.EnhancedServiceLoader.loadFile:236 [INFO ] load LoadBalance[null] extension by class[io.seata.discovery.loadbalance.RandomLoadBalance]
14:01:30 io.seata.core.rpc.netty.NettyClientChannelManager.acquireChannel:99 [INFO ] will connect to 127.0.0.1:8091
14:01:30 io.seata.core.rpc.netty.NettyPoolableFactory.makeObject:56 [INFO ] NettyPool create channel to transactionRole:TMROLE,address:127.0.0.1:8091,msg:< RegisterTMRequest{applicationId='study-sharding-jdbc', transactionServiceGroup='my_test_tx_group'} >
14:01:30 io.seata.core.rpc.netty.AbstractRpcRemoting.channelRead:408 [DEBUG] io.seata.core.rpc.netty.TmRpcClient@55694f7c msgId:1, future :io.seata.core.protocol.MessageFuture@279959c8, body:version=1.2.0,extraData=null,identified=true,resultCode=null,msg=null
14:01:30 io.seata.core.rpc.netty.NettyPoolableFactory.makeObject:81 [INFO ] register success, cost 6 ms, version:1.2.0,role:TMROLE,channel:[id: 0x49ce9bb7, L:/127.0.0.1:55018 - R:/127.0.0.1:8091]
14:01:30 io.seata.core.rpc.netty.AbstractRpcRemoting.sendAsyncRequest:246 [DEBUG] offer message: timeout=60000,transactionName=default
14:01:30 io.seata.core.rpc.netty.AbstractRpcRemoting.sendRequest:317 [DEBUG] write message:SeataMergeMessage timeout=60000,transactionName=default
, channel:[id: 0x49ce9bb7, L:/127.0.0.1:55018 - R:/127.0.0.1:8091],active?true,writable?true,isopen?true
14:01:30 io.seata.core.context.RootContext.bind:81 [DEBUG] bind 172.18.0.1:8091:2009900080
14:01:30 io.seata.tm.api.DefaultGlobalTransaction.begin:106 [INFO ] Begin new global transaction [172.18.0.1:8091:2009900080]
14:01:30 org.apache.shardingsphere.core.route.SQLLogger.log:99 [INFO ] Rule Type: sharding
14:01:30 org.apache.shardingsphere.core.route.SQLLogger.log:99 [INFO ] Logic SQL: insert into `order` (user_id,fee) values(?,?)
14:01:30 org.apache.shardingsphere.core.route.SQLLogger.log:99 [INFO ] SQLStatement: InsertSQLStatementContext(super=CommonSQLStatementContext(sqlStatement=org.apache.shardingsphere.sql.parser.sql.statement.dml.InsertStatement@79857f5c, tablesContext=TablesContext(tables=[Table(name=order, alias=Optional.absent())], schema=Optional.absent())), columnNames=[user_id, fee], insertValueContexts=[InsertValueContext(parametersCount=2, valueExpressions=[ParameterMarkerExpressionSegment(startIndex=41, stopIndex=41, parameterMarkerIndex=0), ParameterMarkerExpressionSegment(startIndex=43, stopIndex=43, parameterMarkerIndex=1), DerivedParameterMarkerExpressionSegment(super=ParameterMarkerExpressionSegment(startIndex=0, stopIndex=0, parameterMarkerIndex=2))], parameters=[3, 0.75])])
14:01:30 org.apache.shardingsphere.core.route.SQLLogger.log:99 [INFO ] Actual SQL: ds4 ::: insert into `order0` (user_id,fee, id) values(?, ?, ?) ::: [3, 0.75, 460444362493394944]
14:01:30 io.seata.config.FileConfiguration$ConfigOperateRunnable.run:266 [WARN ] Could not found property client.rm.report.retry.count, try to use default value instead.
14:01:30 io.seata.config.FileConfiguration$ConfigOperateRunnable.run:266 [WARN ] Could not found property client.report.success.enable, try to use default value instead.
14:01:30 io.seata.config.FileConfiguration$ConfigOperateRunnable.run:266 [WARN ] Could not found property client.rm.lock.retry.policy.branch-rollback-on-conflict, try to use default value instead.
14:01:30 io.seata.config.FileConfiguration$ConfigOperateRunnable.run:266 [WARN ] Could not found property client.rm.lock.retry.internal, try to use default value instead.
14:01:30 io.seata.config.FileConfiguration$ConfigOperateRunnable.run:266 [WARN ] Could not found property client.rm.lock.retry.times, try to use default value instead.
14:01:30 io.seata.core.rpc.netty.AbstractRpcRemoting.sendAsyncRequest:246 [DEBUG] offer message: xid=172.18.0.1:8091:2009900080,branchType=AT,resourceId=jdbc:mysql://localhost:33083/test,lockKey=order0:460444362493394944
14:01:30 io.seata.core.rpc.netty.AbstractRpcRemoting.sendRequest:317 [DEBUG] write message:SeataMergeMessage xid=172.18.0.1:8091:2009900080,branchType=AT,resourceId=jdbc:mysql://localhost:33083/test,lockKey=order0:460444362493394944
, channel:[id: 0x923d19bd, L:/127.0.0.1:55016 - R:/127.0.0.1:8091],active?true,writable?true,isopen?true
14:01:30 io.seata.config.FileConfiguration$ConfigOperateRunnable.run:266 [WARN ] Could not found property client.undo.log.table, try to use default value instead.
14:01:30 io.seata.config.FileConfiguration$ConfigOperateRunnable.run:266 [WARN ] Could not found property client.undo.log.serialization, try to use default value instead.
14:01:31 io.seata.common.loader.EnhancedServiceLoader.loadFile:236 [INFO ] load UndoLogParser[jackson] extension by class[io.seata.rm.datasource.undo.parser.JacksonUndoLogParser]
14:01:31 io.seata.rm.datasource.undo.AbstractUndoLogManager.flushUndoLogs:234 [DEBUG] Flushing UNDO LOG: {"@class":"io.seata.rm.datasource.undo.BranchUndoLog","xid":"172.18.0.1:8091:2009900080","branchId":2009900081,"sqlUndoLogs":["java.util.ArrayList",[{"@class":"io.seata.rm.datasource.undo.SQLUndoLog","sqlType":"INSERT","tableName":"`order0`","beforeImage":{"@class":"io.seata.rm.datasource.sql.struct.TableRecords$EmptyTableRecords","tableName":"order0","rows":["java.util.ArrayList",[]]},"afterImage":{"@class":"io.seata.rm.datasource.sql.struct.TableRecords","tableName":"order0","rows":["java.util.ArrayList",[{"@class":"io.seata.rm.datasource.sql.struct.Row","fields":["java.util.ArrayList",[{"@class":"io.seata.rm.datasource.sql.struct.Field","name":"id","keyType":"PrimaryKey","type":-5,"value":["java.lang.Long",460444362493394944]},{"@class":"io.seata.rm.datasource.sql.struct.Field","name":"user_id","keyType":"NULL","type":4,"value":3},{"@class":"io.seata.rm.datasource.sql.struct.Field","name":"fee","keyType":"NULL","type":3,"value":["java.math.BigDecimal",0.75]},{"@class":"io.seata.rm.datasource.sql.struct.Field","name":"create_time","keyType":"NULL","type":93,"value":["java.sql.Timestamp",[1587708090000,0]]},{"@class":"io.seata.rm.datasource.sql.struct.Field","name":"note","keyType":"NULL","type":12,"value":null},{"@class":"io.seata.rm.datasource.sql.struct.Field","name":"address","keyType":"NULL","type":12,"value":null},{"@class":"io.seata.rm.datasource.sql.struct.Field","name":"mobile","keyType":"NULL","type":12,"value":null}]]}]]}}]]}
14:01:31 io.seata.core.rpc.netty.AbstractRpcRemoting.sendAsyncRequest:246 [DEBUG] offer message: xid=172.18.0.1:8091:2009900080,branchId=2009900081,resourceId=null,status=PhaseOne_Done,applicationData=null
14:01:31 io.seata.core.rpc.netty.AbstractRpcRemoting.sendRequest:317 [DEBUG] write message:SeataMergeMessage xid=172.18.0.1:8091:2009900080,branchId=2009900081,resourceId=null,status=PhaseOne_Done,applicationData=null
, channel:[id: 0x923d19bd, L:/127.0.0.1:55016 - R:/127.0.0.1:8091],active?true,writable?true,isopen?true
14:01:31 org.apache.shardingsphere.core.route.SQLLogger.log:99 [INFO ] Rule Type: sharding
14:01:31 org.apache.shardingsphere.core.route.SQLLogger.log:99 [INFO ] Logic SQL: insert into `order` (user_id,fee) values(?,?)
14:01:31 org.apache.shardingsphere.core.route.SQLLogger.log:99 [INFO ] SQLStatement: InsertSQLStatementContext(super=CommonSQLStatementContext(sqlStatement=org.apache.shardingsphere.sql.parser.sql.statement.dml.InsertStatement@79857f5c, tablesContext=TablesContext(tables=[Table(name=order, alias=Optional.absent())], schema=Optional.absent())), columnNames=[user_id, fee], insertValueContexts=[InsertValueContext(parametersCount=2, valueExpressions=[ParameterMarkerExpressionSegment(startIndex=41, stopIndex=41, parameterMarkerIndex=0), ParameterMarkerExpressionSegment(startIndex=43, stopIndex=43, parameterMarkerIndex=1), DerivedParameterMarkerExpressionSegment(super=ParameterMarkerExpressionSegment(startIndex=0, stopIndex=0, parameterMarkerIndex=2))], parameters=[4, 0.20])])
14:01:31 org.apache.shardingsphere.core.route.SQLLogger.log:99 [INFO ] Actual SQL: ds5 ::: insert into `order1` (user_id,fee, id) values(?, ?, ?) ::: [4, 0.20, 460444364385026049]
14:01:31 io.seata.core.rpc.netty.AbstractRpcRemoting.sendRequest:317 [DEBUG] write message:SeataMergeMessage xid=172.18.0.1:8091:2009900080,branchType=AT,resourceId=jdbc:mysql://localhost:33084/test,lockKey=order1:460444364385026049
, channel:[id: 0x923d19bd, L:/127.0.0.1:55016 - R:/127.0.0.1:8091],active?true,writable?true,isopen?true
14:01:31 io.seata.core.rpc.netty.AbstractRpcRemoting.sendAsyncRequest:246 [DEBUG] offer message: xid=172.18.0.1:8091:2009900080,branchType=AT,resourceId=jdbc:mysql://localhost:33084/test,lockKey=order1:460444364385026049
14:01:31 io.seata.rm.datasource.undo.AbstractUndoLogManager.flushUndoLogs:234 [DEBUG] Flushing UNDO LOG: {"@class":"io.seata.rm.datasource.undo.BranchUndoLog","xid":"172.18.0.1:8091:2009900080","branchId":2009900082,"sqlUndoLogs":["java.util.ArrayList",[{"@class":"io.seata.rm.datasource.undo.SQLUndoLog","sqlType":"INSERT","tableName":"`order1`","beforeImage":{"@class":"io.seata.rm.datasource.sql.struct.TableRecords$EmptyTableRecords","tableName":"order1","rows":["java.util.ArrayList",[]]},"afterImage":{"@class":"io.seata.rm.datasource.sql.struct.TableRecords","tableName":"order1","rows":["java.util.ArrayList",[{"@class":"io.seata.rm.datasource.sql.struct.Row","fields":["java.util.ArrayList",[{"@class":"io.seata.rm.datasource.sql.struct.Field","name":"id","keyType":"PrimaryKey","type":-5,"value":["java.lang.Long",460444364385026049]},{"@class":"io.seata.rm.datasource.sql.struct.Field","name":"user_id","keyType":"NULL","type":4,"value":4},{"@class":"io.seata.rm.datasource.sql.struct.Field","name":"fee","keyType":"NULL","type":3,"value":["java.math.BigDecimal",0.20]},{"@class":"io.seata.rm.datasource.sql.struct.Field","name":"create_time","keyType":"NULL","type":93,"value":["java.sql.Timestamp",[1587708091000,0]]},{"@class":"io.seata.rm.datasource.sql.struct.Field","name":"note","keyType":"NULL","type":12,"value":null},{"@class":"io.seata.rm.datasource.sql.struct.Field","name":"address","keyType":"NULL","type":12,"value":null},{"@class":"io.seata.rm.datasource.sql.struct.Field","name":"mobile","keyType":"NULL","type":12,"value":null}]]}]]}}]]}
14:01:31 io.seata.core.rpc.netty.AbstractRpcRemoting.sendAsyncRequest:246 [DEBUG] offer message: xid=172.18.0.1:8091:2009900080,branchId=2009900082,resourceId=null,status=PhaseOne_Done,applicationData=null
14:01:31 io.seata.core.rpc.netty.AbstractRpcRemoting.sendRequest:317 [DEBUG] write message:SeataMergeMessage xid=172.18.0.1:8091:2009900080,branchId=2009900082,resourceId=null,status=PhaseOne_Done,applicationData=null
, channel:[id: 0x923d19bd, L:/127.0.0.1:55016 - R:/127.0.0.1:8091],active?true,writable?true,isopen?true
14:01:31 org.apache.shardingsphere.core.route.SQLLogger.log:99 [INFO ] Rule Type: sharding
14:01:31 org.apache.shardingsphere.core.route.SQLLogger.log:99 [INFO ] Logic SQL: insert into `order` (user_id,fee) values(?,?)
14:01:31 org.apache.shardingsphere.core.route.SQLLogger.log:99 [INFO ] SQLStatement: InsertSQLStatementContext(super=CommonSQLStatementContext(sqlStatement=org.apache.shardingsphere.sql.parser.sql.statement.dml.InsertStatement@79857f5c, tablesContext=TablesContext(tables=[Table(name=order, alias=Optional.absent())], schema=Optional.absent())), columnNames=[user_id, fee], insertValueContexts=[InsertValueContext(parametersCount=2, valueExpressions=[ParameterMarkerExpressionSegment(startIndex=41, stopIndex=41, parameterMarkerIndex=0), ParameterMarkerExpressionSegment(startIndex=43, stopIndex=43, parameterMarkerIndex=1), DerivedParameterMarkerExpressionSegment(super=ParameterMarkerExpressionSegment(startIndex=0, stopIndex=0, parameterMarkerIndex=2))], parameters=[5, 0.15])])
14:01:31 org.apache.shardingsphere.core.route.SQLLogger.log:99 [INFO ] Actual SQL: ds6 ::: insert into `order0` (user_id,fee, id) values(?, ?, ?) ::: [5, 0.15, 460444364544409600]
14:01:31 io.seata.core.rpc.netty.AbstractRpcRemoting.sendRequest:317 [DEBUG] write message:SeataMergeMessage xid=172.18.0.1:8091:2009900080,branchType=AT,resourceId=jdbc:mysql://localhost:33085/test,lockKey=order0:460444364544409600
, channel:[id: 0x923d19bd, L:/127.0.0.1:55016 - R:/127.0.0.1:8091],active?true,writable?true,isopen?true
14:01:31 io.seata.core.rpc.netty.AbstractRpcRemoting.sendAsyncRequest:246 [DEBUG] offer message: xid=172.18.0.1:8091:2009900080,branchType=AT,resourceId=jdbc:mysql://localhost:33085/test,lockKey=order0:460444364544409600
14:01:31 io.seata.rm.datasource.undo.AbstractUndoLogManager.flushUndoLogs:234 [DEBUG] Flushing UNDO LOG: {"@class":"io.seata.rm.datasource.undo.BranchUndoLog","xid":"172.18.0.1:8091:2009900080","branchId":2009900083,"sqlUndoLogs":["java.util.ArrayList",[{"@class":"io.seata.rm.datasource.undo.SQLUndoLog","sqlType":"INSERT","tableName":"`order0`","beforeImage":{"@class":"io.seata.rm.datasource.sql.struct.TableRecords$EmptyTableRecords","tableName":"order0","rows":["java.util.ArrayList",[]]},"afterImage":{"@class":"io.seata.rm.datasource.sql.struct.TableRecords","tableName":"order0","rows":["java.util.ArrayList",[{"@class":"io.seata.rm.datasource.sql.struct.Row","fields":["java.util.ArrayList",[{"@class":"io.seata.rm.datasource.sql.struct.Field","name":"id","keyType":"PrimaryKey","type":-5,"value":["java.lang.Long",460444364544409600]},{"@class":"io.seata.rm.datasource.sql.struct.Field","name":"user_id","keyType":"NULL","type":4,"value":5},{"@class":"io.seata.rm.datasource.sql.struct.Field","name":"fee","keyType":"NULL","type":3,"value":["java.math.BigDecimal",0.15]},{"@class":"io.seata.rm.datasource.sql.struct.Field","name":"create_time","keyType":"NULL","type":93,"value":["java.sql.Timestamp",[1587708091000,0]]},{"@class":"io.seata.rm.datasource.sql.struct.Field","name":"note","keyType":"NULL","type":12,"value":null},{"@class":"io.seata.rm.datasource.sql.struct.Field","name":"address","keyType":"NULL","type":12,"value":null},{"@class":"io.seata.rm.datasource.sql.struct.Field","name":"mobile","keyType":"NULL","type":12,"value":null}]]}]]}}]]}
14:01:31 io.seata.core.rpc.netty.AbstractRpcRemoting.sendAsyncRequest:246 [DEBUG] offer message: xid=172.18.0.1:8091:2009900080,branchId=2009900083,resourceId=null,status=PhaseOne_Done,applicationData=null
14:01:31 io.seata.core.rpc.netty.AbstractRpcRemoting.sendRequest:317 [DEBUG] write message:SeataMergeMessage xid=172.18.0.1:8091:2009900080,branchId=2009900083,resourceId=null,status=PhaseOne_Done,applicationData=null
, channel:[id: 0x923d19bd, L:/127.0.0.1:55016 - R:/127.0.0.1:8091],active?true,writable?true,isopen?true
14:01:31 io.seata.core.rpc.netty.AbstractRpcRemoting.sendAsyncRequest:246 [DEBUG] offer message: xid=172.18.0.1:8091:2009900080,extraData=null
14:01:31 io.seata.core.rpc.netty.AbstractRpcRemoting.sendRequest:317 [DEBUG] write message:SeataMergeMessage xid=172.18.0.1:8091:2009900080,extraData=null
, channel:[id: 0x49ce9bb7, L:/127.0.0.1:55018 - R:/127.0.0.1:8091],active?true,writable?true,isopen?true
14:01:31 io.seata.core.context.RootContext.unbind:137 [DEBUG] unbind 172.18.0.1:8091:2009900080 
14:01:31 io.seata.tm.api.DefaultGlobalTransaction.commit:145 [INFO ] [172.18.0.1:8091:2009900080] commit status: Committed
14:01:31 io.seata.core.rpc.netty.AbstractRpcRemoting.channelRead:377 [DEBUG] io.seata.core.rpc.netty.RmRpcClient@55514bee msgId:1, body:xid=172.18.0.1:8091:2009900080,branchId=2009900081,branchType=AT,resourceId=jdbc:mysql://localhost:33083/test,applicationData=null
14:01:31 io.seata.core.rpc.netty.RmMessageListener.onMessage:65 [INFO ] onMessage:xid=172.18.0.1:8091:2009900080,branchId=2009900081,branchType=AT,resourceId=jdbc:mysql://localhost:33083/test,applicationData=null
14:01:31 io.seata.rm.AbstractRMHandler.doBranchCommit:95 [INFO ] Branch committing: 172.18.0.1:8091:2009900080 2009900081 jdbc:mysql://localhost:33083/test null
14:01:31 io.seata.rm.AbstractRMHandler.doBranchCommit:103 [INFO ] Branch commit result: PhaseTwo_Committed
14:01:31 io.seata.core.rpc.netty.AbstractRpcRemoting.sendResponse:341 [DEBUG] send response:xid=172.18.0.1:8091:2009900080,branchId=2009900081,branchStatus=PhaseTwo_Committed,result code =Success,getMsg =null,channel:[id: 0x923d19bd, L:/127.0.0.1:55016 - R:/127.0.0.1:8091]
14:01:31 io.seata.core.rpc.netty.AbstractRpcRemoting.channelRead:377 [DEBUG] io.seata.core.rpc.netty.RmRpcClient@55514bee msgId:2, body:xid=172.18.0.1:8091:2009900080,branchId=2009900082,branchType=AT,resourceId=jdbc:mysql://localhost:33084/test,applicationData=null
14:01:31 io.seata.core.rpc.netty.RmMessageListener.onMessage:65 [INFO ] onMessage:xid=172.18.0.1:8091:2009900080,branchId=2009900082,branchType=AT,resourceId=jdbc:mysql://localhost:33084/test,applicationData=null
14:01:31 io.seata.rm.AbstractRMHandler.doBranchCommit:95 [INFO ] Branch committing: 172.18.0.1:8091:2009900080 2009900082 jdbc:mysql://localhost:33084/test null
14:01:31 io.seata.rm.AbstractRMHandler.doBranchCommit:103 [INFO ] Branch commit result: PhaseTwo_Committed
14:01:31 io.seata.core.rpc.netty.AbstractRpcRemoting.sendResponse:341 [DEBUG] send response:xid=172.18.0.1:8091:2009900080,branchId=2009900082,branchStatus=PhaseTwo_Committed,result code =Success,getMsg =null,channel:[id: 0x923d19bd, L:/127.0.0.1:55016 - R:/127.0.0.1:8091]
14:01:31 io.seata.core.rpc.netty.AbstractRpcRemoting.channelRead:377 [DEBUG] io.seata.core.rpc.netty.RmRpcClient@55514bee msgId:3, body:xid=172.18.0.1:8091:2009900080,branchId=2009900083,branchType=AT,resourceId=jdbc:mysql://localhost:33085/test,applicationData=null
14:01:31 io.seata.core.rpc.netty.RmMessageListener.onMessage:65 [INFO ] onMessage:xid=172.18.0.1:8091:2009900080,branchId=2009900083,branchType=AT,resourceId=jdbc:mysql://localhost:33085/test,applicationData=null
14:01:31 io.seata.rm.AbstractRMHandler.doBranchCommit:95 [INFO ] Branch committing: 172.18.0.1:8091:2009900080 2009900083 jdbc:mysql://localhost:33085/test null
14:01:31 io.seata.rm.AbstractRMHandler.doBranchCommit:103 [INFO ] Branch commit result: PhaseTwo_Committed
14:01:31 io.seata.core.rpc.netty.AbstractRpcRemoting.sendResponse:341 [DEBUG] send response:xid=172.18.0.1:8091:2009900080,branchId=2009900083,branchStatus=PhaseTwo_Committed,result code =Success,getMsg =null,channel:[id: 0x923d19bd, L:/127.0.0.1:55016 - R:/127.0.0.1:8091]
14:01:31 io.seata.rm.datasource.undo.AbstractUndoLogManager.batchDeleteUndoLog:163 [DEBUG] batch delete undo log size 1
14:01:31 io.seata.rm.datasource.undo.AbstractUndoLogManager.batchDeleteUndoLog:163 [DEBUG] batch delete undo log size 1
14:01:31 io.seata.rm.datasource.undo.AbstractUndoLogManager.batchDeleteUndoLog:163 [DEBUG] batch delete undo log size 1
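The client log above traces Seata's AT-mode two-phase flow end to end: the TM begins a global transaction, each branch performs its local insert plus undo-log write in one local transaction and reports `PhaseOne_Done`, the TM commits globally, and phase two asynchronously commits each branch and deletes its undo log. A condensed replay of that sequence, using event strings and branch/xid values taken from the log (none of the names here are Seata APIs, this is purely illustrative):

```java
import java.util.ArrayList;
import java.util.List;

/** Illustrative replay of the AT-mode flow visible in the log above. */
public class AtFlowSketch {

    public static List<String> replay() {
        List<String> events = new ArrayList<>();
        String xid = "172.18.0.1:8091:2009900080";        // xid from the log
        events.add("begin global tx " + xid);             // TM begins global tx
        long[] branchIds = {2009900081L, 2009900082L, 2009900083L};
        String[] ds = {"ds4", "ds5", "ds6"};
        for (int i = 0; i < 3; i++) {
            // phase one, per branch: business insert + undo-log row are
            // written in ONE local transaction, then the branch registers
            // with the TC and reports PhaseOne_Done
            events.add("branch " + branchIds[i] + " on " + ds[i]
                    + ": insert + undo log");
            events.add("branch " + branchIds[i] + " report PhaseOne_Done");
        }
        events.add("global commit " + xid);               // TM commits
        for (long b : branchIds) {
            // phase two runs asynchronously: the TC tells each RM to commit
            // the branch, which amounts to deleting the undo log
            events.add("branch " + b + " PhaseTwo_Committed, undo log deleted");
        }
        return events;
    }

    public static void main(String[] args) {
        replay().forEach(System.out::println);
    }
}
```

Note that because phase one already committed the local transactions, phase-two "commit" is cheap (just undo-log cleanup), which is why Seata can afford to do it asynchronously after the global commit returns.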

Seata server log:

2020-04-24 14:01:30.098 INFO [ServerHandlerThread_1_500]io.seata.core.rpc.DefaultServerMessageListenerImpl.onRegRmMessage:127 -RM register success,message:RegisterRMRequest{resourceIds='jdbc:mysql://localhost:33081/test', applicationId='study-sharding-jdbc', transactionServiceGroup='my_test_tx_group'},channel:[id: 0xd3e16a98, L:/127.0.0.1:8091 - R:/127.0.0.1:55016]
2020-04-24 14:01:30.127 INFO [ServerHandlerThread_1_500]io.seata.core.rpc.DefaultServerMessageListenerImpl.onRegRmMessage:127 -RM register success,message:RegisterRMRequest{resourceIds='jdbc:mysql://localhost:33080/test', applicationId='study-sharding-jdbc', transactionServiceGroup='my_test_tx_group'},channel:[id: 0xd3e16a98, L:/127.0.0.1:8091 - R:/127.0.0.1:55016]
2020-04-24 14:01:30.129 INFO [ServerHandlerThread_1_500]io.seata.core.rpc.DefaultServerMessageListenerImpl.onRegRmMessage:127 -RM register success,message:RegisterRMRequest{resourceIds='jdbc:mysql://localhost:33083/test', applicationId='study-sharding-jdbc', transactionServiceGroup='my_test_tx_group'},channel:[id: 0xd3e16a98, L:/127.0.0.1:8091 - R:/127.0.0.1:55016]
2020-04-24 14:01:30.130 INFO [ServerHandlerThread_1_500]io.seata.core.rpc.DefaultServerMessageListenerImpl.onRegRmMessage:127 -RM register success,message:RegisterRMRequest{resourceIds='jdbc:mysql://localhost:33082/test', applicationId='study-sharding-jdbc', transactionServiceGroup='my_test_tx_group'},channel:[id: 0xd3e16a98, L:/127.0.0.1:8091 - R:/127.0.0.1:55016]
2020-04-24 14:01:30.131 INFO [ServerHandlerThread_1_500]io.seata.core.rpc.DefaultServerMessageListenerImpl.onRegRmMessage:127 -RM register success,message:RegisterRMRequest{resourceIds='jdbc:mysql://localhost:33085/test', applicationId='study-sharding-jdbc', transactionServiceGroup='my_test_tx_group'},channel:[id: 0xd3e16a98, L:/127.0.0.1:8091 - R:/127.0.0.1:55016]
2020-04-24 14:01:30.132 INFO [ServerHandlerThread_1_500]io.seata.core.rpc.DefaultServerMessageListenerImpl.onRegRmMessage:127 -RM register success,message:RegisterRMRequest{resourceIds='jdbc:mysql://localhost:33084/test', applicationId='study-sharding-jdbc', transactionServiceGroup='my_test_tx_group'},channel:[id: 0xd3e16a98, L:/127.0.0.1:8091 - R:/127.0.0.1:55016]
2020-04-24 14:01:30.137 INFO [ServerHandlerThread_1_500]io.seata.core.rpc.DefaultServerMessageListenerImpl.onRegRmMessage:127 -RM register success,message:RegisterRMRequest{resourceIds='jdbc:mysql://localhost:33085/test', applicationId='study-sharding-jdbc', transactionServiceGroup='my_test_tx_group'},channel:[id: 0xd3e16a98, L:/127.0.0.1:8091 - R:/127.0.0.1:55016]
2020-04-24 14:01:30.137 INFO [ServerHandlerThread_1_500]io.seata.core.rpc.DefaultServerMessageListenerImpl.onRegRmMessage:127 -RM register success,message:RegisterRMRequest{resourceIds='jdbc:mysql://localhost:33084/test', applicationId='study-sharding-jdbc', transactionServiceGroup='my_test_tx_group'},channel:[id: 0xd3e16a98, L:/127.0.0.1:8091 - R:/127.0.0.1:55016]
2020-04-24 14:01:30.138 INFO [ServerHandlerThread_1_500]io.seata.core.rpc.DefaultServerMessageListenerImpl.onRegRmMessage:127 -RM register success,message:RegisterRMRequest{resourceIds='jdbc:mysql://localhost:33083/test', applicationId='study-sharding-jdbc', transactionServiceGroup='my_test_tx_group'},channel:[id: 0xd3e16a98, L:/127.0.0.1:8091 - R:/127.0.0.1:55016]
2020-04-24 14:01:30.154 INFO [NettyServerNIOWorker_1_24]io.seata.core.rpc.DefaultServerMessageListenerImpl.onRegTmMessage:153 -TM register success,message:RegisterTMRequest{applicationId='study-sharding-jdbc', transactionServiceGroup='my_test_tx_group'},channel:[id: 0xc8f80470, L:/127.0.0.1:8091 - R:/127.0.0.1:55018]
2020-04-24 14:01:30.164 INFO [batchLoggerPrint_1]io.seata.core.rpc.DefaultServerMessageListenerImpl.run:214 -SeataMergeMessage timeout=60000,transactionName=default
,clientIp:127.0.0.1,vgroup:my_test_tx_group
2020-04-24 14:01:30.197 INFO [ServerHandlerThread_1_500]io.seata.server.coordinator.DefaultCoordinator.doGlobalBegin:159 -Begin new global transaction applicationId: study-sharding-jdbc,transactionServiceGroup: my_test_tx_group, transactionName: default,timeout:60000,xid:172.18.0.1:8091:2009900080
2020-04-24 14:01:30.868 INFO [batchLoggerPrint_1]io.seata.core.rpc.DefaultServerMessageListenerImpl.run:214 -SeataMergeMessage xid=172.18.0.1:8091:2009900080,branchType=AT,resourceId=jdbc:mysql://localhost:33083/test,lockKey=order0:460444362493394944
,clientIp:127.0.0.1,vgroup:my_test_tx_group
2020-04-24 14:01:30.877 INFO [ServerHandlerThread_1_500]io.seata.server.coordinator.AbstractCore.lambda$branchRegister$0:87 -Register branch successfully, xid = 172.18.0.1:8091:2009900080, branchId = 2009900081, resourceId = jdbc:mysql://localhost:33083/test ,lockKeys = order0:460444362493394944
2020-04-24 14:01:31.107 INFO [batchLoggerPrint_1]io.seata.core.rpc.DefaultServerMessageListenerImpl.run:214 -SeataMergeMessage xid=172.18.0.1:8091:2009900080,branchId=2009900081,resourceId=null,status=PhaseOne_Done,applicationData=null
,clientIp:127.0.0.1,vgroup:my_test_tx_group
2020-04-24 14:01:31.108 INFO [ServerHandlerThread_1_500]io.seata.server.coordinator.AbstractCore.branchReport:138 -Report branch status successfully, xid = 172.18.0.1:8091:2009900080, branchId = 2009900081
2020-04-24 14:01:31.131 INFO [batchLoggerPrint_1]io.seata.core.rpc.DefaultServerMessageListenerImpl.run:214 -SeataMergeMessage xid=172.18.0.1:8091:2009900080,branchType=AT,resourceId=jdbc:mysql://localhost:33084/test,lockKey=order1:460444364385026049
,clientIp:127.0.0.1,vgroup:my_test_tx_group
2020-04-24 14:01:31.132 INFO [ServerHandlerThread_1_500]io.seata.server.coordinator.AbstractCore.lambda$branchRegister$0:87 -Register branch successfully, xid = 172.18.0.1:8091:2009900080, branchId = 2009900082, resourceId = jdbc:mysql://localhost:33084/test ,lockKeys = order1:460444364385026049
2020-04-24 14:01:31.147 INFO [batchLoggerPrint_1]io.seata.core.rpc.DefaultServerMessageListenerImpl.run:214 -SeataMergeMessage xid=172.18.0.1:8091:2009900080,branchId=2009900082,resourceId=null,status=PhaseOne_Done,applicationData=null
,clientIp:127.0.0.1,vgroup:my_test_tx_group
2020-04-24 14:01:31.147 INFO [ServerHandlerThread_1_500]io.seata.server.coordinator.AbstractCore.branchReport:138 -Report branch status successfully, xid = 172.18.0.1:8091:2009900080, branchId = 2009900082
2020-04-24 14:01:31.168 INFO [batchLoggerPrint_1]io.seata.core.rpc.DefaultServerMessageListenerImpl.run:214 -SeataMergeMessage xid=172.18.0.1:8091:2009900080,branchType=AT,resourceId=jdbc:mysql://localhost:33085/test,lockKey=order0:460444364544409600
,clientIp:127.0.0.1,vgroup:my_test_tx_group
2020-04-24 14:01:31.168 INFO [ServerHandlerThread_1_500]io.seata.server.coordinator.AbstractCore.lambda$branchRegister$0:87 -Register branch successfully, xid = 172.18.0.1:8091:2009900080, branchId = 2009900083, resourceId = jdbc:mysql://localhost:33085/test ,lockKeys = order0:460444364544409600
2020-04-24 14:01:31.188 INFO [batchLoggerPrint_1]io.seata.core.rpc.DefaultServerMessageListenerImpl.run:214 -SeataMergeMessage xid=172.18.0.1:8091:2009900080,branchId=2009900083,resourceId=null,status=PhaseOne_Done,applicationData=null
,clientIp:127.0.0.1,vgroup:my_test_tx_group
2020-04-24 14:01:31.189 INFO [ServerHandlerThread_1_500]io.seata.server.coordinator.AbstractCore.branchReport:138 -Report branch status successfully, xid = 172.18.0.1:8091:2009900080, branchId = 2009900083
2020-04-24 14:01:31.195 INFO [batchLoggerPrint_1]io.seata.core.rpc.DefaultServerMessageListenerImpl.run:214 -SeataMergeMessage xid=172.18.0.1:8091:2009900080,extraData=null
,clientIp:127.0.0.1,vgroup:my_test_tx_group
2020-04-24 14:01:31.556 INFO [AsyncCommitting_1]io.seata.server.coordinator.DefaultCore.doGlobalCommit:240 -Committing global transaction is successfully done, xid = 172.18.0.1:8091:2009900080.
2020-04-24 14:01:32.555 INFO [NettyServerNIOWorker_1_24]io.seata.core.rpc.netty.AbstractRpcRemotingServer.handleDisconnect:254 -127.0.0.1:55016 to server channel inactive.
2020-04-24 14:01:32.555 INFO [NettyServerNIOWorker_1_24]io.seata.core.rpc.netty.AbstractRpcRemotingServer.handleDisconnect:254 -127.0.0.1:55018 to server channel inactive.
2020-04-24 14:01:32.555 INFO [NettyServerNIOWorker_1_24]io.seata.core.rpc.netty.AbstractRpcRemotingServer.handleDisconnect:259 -remove channel:[id: 0xd3e16a98, L:/127.0.0.1:8091 ! R:/127.0.0.1:55016]context:RpcContext{applicationId='study-sharding-jdbc', transactionServiceGroup='my_test_tx_group', clientId='study-sharding-jdbc:127.0.0.1:55016', channel=[id: 0xd3e16a98, L:/127.0.0.1:8091 ! R:/127.0.0.1:55016], resourceSets=[]}
2020-04-24 14:01:32.555 INFO [NettyServerNIOWorker_1_24]io.seata.core.rpc.netty.AbstractRpcRemotingServer.handleDisconnect:259 -remove channel:[id: 0xc8f80470, L:/127.0.0.1:8091 ! R:/127.0.0.1:55018]context:RpcContext{applicationId='study-sharding-jdbc', transactionServiceGroup='my_test_tx_group', clientId='study-sharding-jdbc:127.0.0.1:55018', channel=[id: 0xc8f80470, L:/127.0.0.1:8091 ! R:/127.0.0.1:55018], resourceSets=null}

I noticed that if the application exits immediately after the run, the Seata console keeps scrolling errors complaining that the transaction has not completed. But when I deliberately sleep for one second after the run finishes, the error no longer appears. This is exactly the difference between eventual consistency and strong consistency: under eventual consistency, the point at which the transaction is fully settled lags behind the point at which the application code finishes executing.

2020-04-24 13:57:10.485 ERROR[AsyncCommitting_1]io.seata.server.coordinator.DefaultCore.error:61 -Committing branch transaction exception: BR:2009895142/2009895132
2020-04-24 13:57:10.485 INFO [AsyncCommitting_1]io.seata.server.coordinator.DefaultCore.doGlobalCommit:228 -Committing global transaction is NOT done, xid = 172.18.0.1:8091:2009895132.
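
A minimal sketch of the workaround described above, assuming the entry point simply calls the demo's insert routine (the `runDemo` name is hypothetical): give Seata's asynchronous phase-two commit a short grace period before the JVM exits, so the RM channel stays open until the branch commits and undo-log cleanup finish.

```java
public class GracefulExitSketch {

    /** Hypothetical stand-in for the demo's doInsertTransactionWithSharding(). */
    static void runDemo() {
        // business work inside the Seata global transaction would go here
    }

    public static void main(String[] args) throws InterruptedException {
        runDemo();
        // commit() has already returned, but phase two (branch commits and
        // undo-log deletion) is still running asynchronously on the server.
        // A short grace period before exit lets it complete; one second was
        // enough in this local experiment, not a guarantee in general.
        Thread.sleep(1000L);
    }
}
```

In a real service this is usually unnecessary, since the process keeps running; it only matters for short-lived demo programs like this one that exit right after commit.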
