Apache Artemis client failover discovery

Problem description

I am using Apache Artemis v2.12.0 and have started two broker instances on two VMs.

broker.xml (myhost1) [the broker.xml of myhost2 is similar; only the port differs: 61616]

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<configuration xmlns="urn:activemq"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xsi:schemaLocation="urn:activemq /schema/artemis-configuration.xsd">
   <core xmlns="urn:activemq:core">
      <bindings-directory>./data/bindings</bindings-directory>
      <journal-directory>./data/journal</journal-directory>
      <large-messages-directory>./data/largemessages</large-messages-directory>
      <paging-directory>./data/paging</paging-directory>

      <!-- Connectors -->
      <connectors>
         <connector name="netty-connector">tcp://10.64.60.100:61617</connector> <!-- direct IP address of host myhost1 -->
         <connector name="broker2-connector">tcp://myhost2:61616</connector> <!-- ip 10.64.60.101 <- mocked up ip for security reasons -->
      </connectors>

      <!-- Acceptors -->
      <acceptors>
         <acceptor name="amqp">tcp://0.0.0.0:61617?amqpIdleTimeout=0;tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=CORE,AMQP;useEpoll=true</acceptor>
      </acceptors>

      <cluster-connections>
         <cluster-connection name="myhost1-cluster">
            <connector-ref>netty-connector</connector-ref>
            <retry-interval>500</retry-interval>
            <use-duplicate-detection>true</use-duplicate-detection>
            <message-load-balancing>ON_DEMAND</message-load-balancing>
            <max-hops>1</max-hops>
            <static-connectors>
               <connector-ref>broker2-connector</connector-ref> <!-- defined in the connectors -->
            </static-connectors>
         </cluster-connection>
      </cluster-connections>

      <security-settings>
         <security-setting match="#">
            <permission type="createNonDurableQueue" roles="amq"/>
            <permission type="deleteNonDurableQueue" roles="amq"/>
            <permission type="createDurableQueue" roles="amq"/>
            <permission type="deleteDurableQueue" roles="amq"/>
            <permission type="createAddress" roles="amq"/>
            <permission type="deleteAddress" roles="amq"/>
            <permission type="consume" roles="amq"/>
            <permission type="browse" roles="amq"/>
            <permission type="send" roles="amq"/>
            <permission type="manage" roles="amq"/>
         </security-setting>
      </security-settings>

      <address-settings>
         <!-- default for catch all -->
         <address-setting match="#">
            <dead-letter-address>DLQ</dead-letter-address>
            <expiry-address>ExpiryQueue</expiry-address>
            <redelivery-delay>0</redelivery-delay>
            <!-- with -1 only the global-max-size is in use for limiting -->
            <max-size-bytes>-1</max-size-bytes>
            <message-counter-history-day-limit>10</message-counter-history-day-limit>
            <address-full-policy>PAGE</address-full-policy>
            <auto-create-queues>true</auto-create-queues>
            <auto-create-addresses>true</auto-create-addresses>
            <auto-delete-queues>false</auto-delete-queues>
            <auto-delete-created-queues>false</auto-delete-created-queues>
            <auto-delete-addresses>false</auto-delete-addresses>
         </address-setting>
      </address-settings>
   </core>
</configuration>

After starting the broker instances on the two nodes they joined the cluster, which I can see in the logs.

2020-06-03 23:59:17,874 INFO  [org.apache.activemq.artemis.core.server] AMQ221020: Started EPOLL Acceptor at 0.0.0.0:61617 for protocols [CORE,AMQP]
2020-06-03 23:59:17,910 INFO  [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live
2020-06-03 23:59:17,910 INFO  [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Message Broker version 2.12.0 [localhost, nodeID=e6c6eab6-a456-11ea-94cf-000d3a306e31]
2020-06-03 23:59:18,240 INFO  [org.apache.activemq.artemis.core.server] AMQ221027: Bridge ClusterConnectionBridge@5e9820f4 [name=$.artemis.internal.sf.myhost1-cluster.bd39cc41-a201-11ea-abaa-000d3a315d06, queue=QueueImpl[name=$.artemis.internal.sf.devmq1-cluster.bd39cc41-a201-11ea-abaa-000d3a315d06, postOffice=PostOfficeImpl [server=ActiveMQServerImpl::serverUUID=e6c6eab6-a456-11ea-94cf-000d3a306e31], temp=false]@2b0263f3 targetConnector=ServerLocatorImpl (identity=(Cluster-connection-bridge::ClusterConnectionBridge@5e9820f4 [name=$.artemis.internal.sf.devmq1-cluster.bd39cc41-a201-11ea-abaa-000d3a315d06, queue=QueueImpl[name=$.artemis.internal.sf.devmq1-cluster.bd39cc41-a201-11ea-abaa-000d3a315d06, postOffice=PostOfficeImpl [server=ActiveMQServerImpl::serverUUID=e6c6eab6-a456-11ea-94cf-000d3a306e31], temp=false]@2b0263f3 targetConnector=ServerLocatorImpl [initialConnectors=[TransportConfiguration(name=netty-connector, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?port=61616&host=10-64-60-100], discoveryGroupConfiguration=null]]::ClusterConnectionImpl@24293395[nodeUUID=e6c6eab6-a456-11ea-94cf-000d3a306e31, connector=TransportConfiguration(name=netty-connector, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?port=61617&host=10-64-60-101, address=, server=ActiveMQServerImpl::serverUUID=e6c6eab6-a456-11ea-94cf-000d3a306e31])) [initialConnectors=[TransportConfiguration(name=netty-connector, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?port=61616&host=10-64-60-100], discoveryGroupConfiguration=null]] is connected
2020-06-03 23:59:18,364 INFO  [org.apache.activemq.hawtio.branding.PluginContextListener] Initialized activemq-branding plugin

The Java code below sends messages to the clustered broker.

  • Step 1: Both brokers were running.

Step 2: The Java client was started to send messages to the broker.

Step 3: From the console of myhost1, I see messages pushed to the queue.

Step 4: I stop the broker instance on myhost1.

Step 5: The Java client log shows it retrying the connection to the other server; after n attempts it throws an exception. (My expectation is that it should NOT throw any exception.)

  • The Java code also has a JNDI approach (which I commented out); even in that case the messages were pushed, but a similar exception occurred.

    I tried JmsPoolConnectionFactory; even then, the same issue: when one of the broker instances is stopped, it throws an exception after a few retries. (The logs for this are at the bottom of the code.)

  • Question: Using the Java code on the client side, how can I achieve auto-discovery/failover/reconnect without any exception? I am using a static-connector under the cluster-connections configuration.

    package com.demo.artemis.clients;

    import java.util.Properties;

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.MessageConsumer;
    import javax.jms.MessageProducer;
    import javax.jms.Queue;
    import javax.jms.Session;
    import javax.jms.TextMessage;
    import javax.naming.InitialContext;

    import org.apache.activemq.artemis.jms.client.ActiveMQJMSConnectionFactory;
    import org.messaginghub.pooled.jms.JmsPoolConnectionFactory;

    public class ArtemisClientClustered {

       public static void main(final String[] args) throws Exception {
          // only produces the message
          new ArtemisClientClustered().runProducer(true, false);
       }

       public boolean runProducer(boolean produceMessage, boolean consumeMessage) throws Exception {
          Connection connection = null;
          InitialContext initialContext = null;
          int i = 0;
          try {
             Properties jndiProp = new Properties();
             jndiProp.put("java.naming.factory.initial", "org.apache.activemq.artemis.jndi.ActiveMQInitialContextFactory");
             //jndiProp.put("connectionFactory.ConnectionFactory", "tcp://localhost:61616?producerMaxRate=50");
             jndiProp.put("connectionFactory.ConnectionFactory", "(tcp://myhost2:61616,tcp://myhost1:61617)?ha=true;reconnectAttempts=-1;");
             jndiProp.put("queue.queue/myExampleQ.queue", "myExampleQ.queue"); // binding name must match the lookup below
             initialContext = new InitialContext(jndiProp);

             // Step 2. Perform a lookup on the queue
             Queue queue = (Queue) initialContext.lookup("queue/myExampleQ.queue");

             // Step 3. Perform a lookup on the Connection Factory
             //ConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616?producerMaxRate=50");
             ConnectionFactory cf = (ConnectionFactory) initialContext.lookup("ConnectionFactory");
             //ConnectionFactory cf = new ActiveMQJMSConnectionFactory("(tcp://myhost2:61616,tcp://myhost1:61617)?ha=true;reconnectAttempts=-1;");

             // using the JmsPoolConnectionFactory
             JmsPoolConnectionFactory jmsPoolConnectionFactory = new JmsPoolConnectionFactory();
             jmsPoolConnectionFactory.setMaxConnections(8);
             jmsPoolConnectionFactory.setConnectionFactory(cf);

             // Step 4. Create a JMS Connection
             connection = jmsPoolConnectionFactory.createConnection("admin", "admin");

             // Step 5. Create a JMS Session
             Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

             if (produceMessage) {
                // Step 6. Create a JMS Message Producer
                MessageProducer producer = session.createProducer(queue);
                System.out.println("Will now send as many messages as we can in few seconds...");

                // Step 7. Send as many messages as we can in N milliseconds
                final long duration = 1200000;
                i = 0;
                long start = System.currentTimeMillis();
                while (System.currentTimeMillis() - start <= duration) {
                   TextMessage message = session.createTextMessage("This is text message: " + i++);
                   producer.send(message);
                }
                long end = System.currentTimeMillis();
                double rate = 1000 * (double) i / (end - start);
                System.out.println("We sent " + i + " messages in " + (end - start) + " milliseconds");
                System.out.println("Actual send rate was " + rate + " messages per second");
             }

             if (consumeMessage) {
                // Step 8. For good measure we consume the messages we produced.
                MessageConsumer messageConsumer = session.createConsumer(queue);
                connection.start();
                System.out.println("Now consuming the messages...");
                i = 0;
                while (true) {
                   TextMessage messageReceived = (TextMessage) messageConsumer.receive(5000);
                   if (messageReceived == null) {
                      break;
                   }
                   i++;
                }
                System.out.println("Received " + i + " messages");
             }
             return true;
          } finally {
             // Step 9. Be sure to close our resources!
             if (connection != null) {
                connection.close();
             }
          }
       }
    }

    Log messages from the client code execution: when the client started, both myhost1 and myhost2 were running. After some time I manually stopped the myhost1 broker, expecting the client to automatically discover myhost2.

    ....
    2020-06-03 23:58:48 DEBUG ClientSessionFactoryImpl:1102 - Trying to connect with connectorFactory = org.apache.activemq.artemis.core.remoting.impl.netty.NettyConnectorFactory@45d84a20, connectorConfig=TransportConfiguration(name=ConnectionFactory, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?port=61616&host=myhost2&reconnectAttempts=-1&ha=true
    2020-06-03 23:58:48 DEBUG NettyConnector:486 - Connector NettyConnector [host=myhost2, port=61616, httpEnabled=false, httpUpgradeEnabled=false, useServlet=false, servletPath=/messaging/ActiveMQServlet, sslEnabled=false, useNio=true] using native epoll
    2020-06-03 23:58:48 DEBUG client:668 - AMQ211002: Started EPOLL Netty Connector version 4.1.48.Final to myhost2:61616
    2020-06-03 23:58:48 DEBUG NettyConnector:815 - Remote destination: myhost2/10.64.60.101:61616
    2020-06-03 23:58:48 DEBUG NettyConnector:659 - Added ActiveMQClientChannelHandler to Channel with id = cf33ff23
    2020-06-03 23:58:48 DEBUG Recycler:97 - -Dio.netty.recycler.maxCapacityPerThread: 4096
    2020-06-03 23:58:48 DEBUG Recycler:98 - -Dio.netty.recycler.maxSharedCapacityFactor: 2
    2020-06-03 23:58:48 DEBUG Recycler:99 - -Dio.netty.recycler.linkCapacity: 16
    2020-06-03 23:58:48 DEBUG Recycler:100 - -Dio.netty.recycler.ratio: 8
    2020-06-03 23:58:48 DEBUG AbstractByteBuf:63 - -Dio.netty.buffer.checkAccessible: true
    2020-06-03 23:58:48 DEBUG AbstractByteBuf:64 - -Dio.netty.buffer.checkBounds: true
    2020-06-03 23:58:48 DEBUG ResourceLeakDetectorFactory:195 - Loaded default ResourceLeakDetector: io.netty.util.ResourceLeakDetector@6933b6c6
    2020-06-03 23:58:48 DEBUG ClientSessionFactoryImpl:809 - Reconnection successful
    2020-06-03 23:58:48 DEBUG NettyConnector:1269 - NettyConnector [host=myhost2, port=61616, httpEnabled=false, httpUpgradeEnabled=false, useServlet=false, servletPath=/messaging/ActiveMQServlet, sslEnabled=false, useNio=true] host 1: 10.44.6.85 ip address: 10.44.6.85 host 2: myhost2 ip address: 10.44.6.85
    2020-06-03 23:58:48 DEBUG ClientSessionFactoryImpl:277 - ClientSessionFactoryImpl received backup update for live/backup pair = TransportConfiguration(name=ConnectionFactory, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?port=61616&host=myhost2&reconnectAttempts=-1&ha=true / null but it didn't belong to TransportConfiguration(name=ConnectionFactory, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?port=61616&host=myhost2&reconnectAttempts=-1&ha=true
    Will now send as many messages as we can in few seconds...
    ...
    ...
    2020-06-04 00:01:09 WARN  client:210 - AMQ212037: Connection failure to myhost2/10.64.60.101:61616 has been detected: AMQ219015: The connection was disconnected because of server shutdown [code=DISCONNECTED]
    2020-06-04 00:01:09 DEBUG ClientSessionFactoryImpl:800 - Trying reconnection attempt 0/-1
    2020-06-04 00:01:09 DEBUG ClientSessionFactoryImpl:1102 - Trying to connect with connectorFactory = org.apache.activemq.artemis.core.remoting.impl.netty.NettyConnectorFactory@45d84a20, connectorConfig=TransportConfiguration(name=ConnectionFactory, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?port=61616&host=myhost2&reconnectAttempts=-1&ha=true
    2020-06-04 00:01:09 DEBUG NettyConnector:486 - Connector NettyConnector [host=myhost2, port=61616, httpEnabled=false, httpUpgradeEnabled=false, useServlet=false, servletPath=/messaging/ActiveMQServlet, sslEnabled=false, useNio=true] using native epoll
    2020-06-04 00:01:09 DEBUG client:668 - AMQ211002: Started EPOLL Netty Connector version 4.1.48.Final to myhost2:61616
    2020-06-04 00:01:09 DEBUG NettyConnector:815 - Remote destination: myhost2/10.64.60.101:61616
    2020-06-04 00:01:09 DEBUG NettyConnector:659 - Added ActiveMQClientChannelHandler to Channel with id = d4ed884e
    2020-06-04 00:01:09 DEBUG ClientSessionFactoryImpl:1063 - Connector towards NettyConnector [host=myhost2, port=61616, httpEnabled=false, httpUpgradeEnabled=false, useServlet=false, servletPath=/messaging/ActiveMQServlet, sslEnabled=false, useNio=true] failed
    2020-06-04 00:01:09 DEBUG ClientSessionFactoryImpl:1140 - Backup is not active, trying original connection configuration now.
    2020-06-04 00:01:11 DEBUG ClientSessionFactoryImpl:800 - Trying reconnection attempt 1/-1
    2020-06-04 00:01:11 DEBUG ClientSessionFactoryImpl:1102 - Trying to connect with connectorFactory = org.apache.activemq.artemis.core.remoting.impl.netty.NettyConnectorFactory@45d84a20, connectorConfig=TransportConfiguration(name=ConnectionFactory, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?port=61616&host=myhost2&reconnectAttempts=-1&ha=true
    2020-06-04 00:01:11 DEBUG NettyConnector:486 - Connector NettyConnector [host=myhost2, port=61616, httpEnabled=false, httpUpgradeEnabled=false, useServlet=false, servletPath=/messaging/ActiveMQServlet, sslEnabled=false, useNio=true] using native epoll
    2020-06-04 00:01:11 DEBUG client:668 - AMQ211002: Started EPOLL Netty Connector version 4.1.48.Final to myhost2:61616
    2020-06-04 00:01:11 DEBUG NettyConnector:815 - Remote destination: myhost2/10.64.60.101:61616
    2020-06-04 00:01:11 DEBUG NettyConnector:659 - Added ActiveMQClientChannelHandler to Channel with id = 1530857a
    2020-06-04 00:01:11 DEBUG ClientSessionFactoryImpl:1063 - Connector towards NettyConnector [host=myhost2, port=61616, httpEnabled=false, httpUpgradeEnabled=false, useServlet=false, servletPath=/messaging/ActiveMQServlet, sslEnabled=false, useNio=true] failed
    2020-06-04 00:01:37 DEBUG NettyConnector:659 - Added ActiveMQClientChannelHandler to Channel with id = d886a84e
    2020-06-04 00:01:37 DEBUG ClientSessionFactoryImpl:1063 - Connector towards NettyConnector [host=myhost2, port=61616, httpEnabled=false, httpUpgradeEnabled=false, useServlet=false, servletPath=/messaging/ActiveMQServlet, sslEnabled=false, useNio=true] failed
    2020-06-04 00:01:37 DEBUG ClientSessionFactoryImpl:1140 - Backup is not active, trying original connection configuration now.
    Exception in thread "main" javax.jms.JMSException: AMQ219014: Timed out after waiting 30,000 ms for response when sending packet 71
        at org.apache.activemq.artemis.core.protocol.core.impl.ChannelImpl.sendBlocking(ChannelImpl.java:457)
        at org.apache.activemq.artemis.core.protocol.core.impl.ChannelImpl.sendBlocking(ChannelImpl.java:361)
        at org.apache.activemq.artemis.core.protocol.core.impl.ActiveMQSessionContext.sendFullMessage(ActiveMQSessionContext.java:552)
        at org.apache.activemq.artemis.core.client.impl.ClientProducerImpl.sendRegularMessage(ClientProducerImpl.java:296)
        at org.apache.activemq.artemis.core.client.impl.ClientProducerImpl.doSend(ClientProducerImpl.java:268)
        at org.apache.activemq.artemis.core.client.impl.ClientProducerImpl.send(ClientProducerImpl.java:143)
        at org.apache.activemq.artemis.core.client.impl.ClientProducerImpl.send(ClientProducerImpl.java:125)
        at org.apache.activemq.artemis.jms.client.ActiveMQMessageProducer.doSendx(ActiveMQMessageProducer.java:483)
        at org.apache.activemq.artemis.jms.client.ActiveMQMessageProducer.send(ActiveMQMessageProducer.java:220)
        at org.messaginghub.pooled.jms.JmsPoolMessageProducer.sendMessage(JmsPoolMessageProducer.java:182)
        at org.messaginghub.pooled.jms.JmsPoolMessageProducer.send(JmsPoolMessageProducer.java:90)
        at org.messaginghub.pooled.jms.JmsPoolMessageProducer.send(JmsPoolMessageProducer.java:79)
        at com.demo.artemis.clients.ArtemisClientClustered.runProducer(ArtemisClientClustered.java:77)
        at com.demo.artemis.clients.ArtemisClientClustered.main(ArtemisClientClustered.java:26)
    Caused by: ActiveMQConnectionTimedOutException[errorType=CONNECTION_TIMEDOUT message=AMQ219014: Timed out after waiting 30,000 ms for response when sending packet 71]
        ... 14 more
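    As the trace shows, once the reconnect attempts are exhausted the blocking send surfaces a JMSException to the caller. Independent of the broker-side fix discussed in the answer below, applications often guard sends with their own retry loop so a broker restart does not abort the producer. The sketch below is not from the original post and is not an Artemis API; `withRetry` is a hypothetical helper, and the simulated action stands in for `producer.send(message)`:

    ```java
    import java.util.concurrent.Callable;

    public class RetryingSender {

        // Runs the action, retrying up to maxAttempts times with a fixed pause
        // between attempts; rethrows the last failure if every attempt fails.
        public static <T> T withRetry(Callable<T> action, int maxAttempts, long delayMillis) throws Exception {
            Exception last = null;
            for (int attempt = 1; attempt <= maxAttempts; attempt++) {
                try {
                    return action.call();
                } catch (Exception e) {
                    last = e;
                    if (attempt < maxAttempts) {
                        Thread.sleep(delayMillis);
                    }
                }
            }
            throw last;
        }

        public static void main(String[] args) throws Exception {
            // Simulated "send" that fails twice before succeeding, standing in
            // for producer.send(...) against a broker that is restarting.
            final int[] calls = {0};
            String result = withRetry(() -> {
                calls[0]++;
                if (calls[0] < 3) {
                    throw new IllegalStateException("broker unavailable");
                }
                return "sent";
            }, 5, 10);
            System.out.println(result + " after " + calls[0] + " attempts"); // prints "sent after 3 attempts"
        }
    }
    ```

    In a real client the wrapped action would recreate the session/producer if the pooled connection was invalidated; the helper only illustrates the retry shape, not the resource management.
    
    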

    NOTE: I also used a Camel consumer to consume messages from this queue, transform them, and send them to another queue. In that setup, when I stopped a broker the consumers were automatically redirected to the other broker instance; from the console I could see the consumer counts move from one broker to the other.

    <?xml version="1.0" encoding="UTF-8"?>
    <beans xmlns="http://www.springframework.org/schema/beans"
           xmlns:aop="http://www.springframework.org/schema/aop"
           xmlns:context="http://www.springframework.org/schema/context"
           xmlns:jee="http://www.springframework.org/schema/jee"
           xmlns:tx="http://www.springframework.org/schema/tx"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xsi:schemaLocation="http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop-3.1.xsd
               http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.1.xsd
               http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.1.xsd
               http://www.springframework.org/schema/jee http://www.springframework.org/schema/jee/spring-jee-3.1.xsd
               http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx-3.1.xsd
               http://camel.apache.org/schema/spring http://camel.apache.org/schema/spring/camel-spring.xsd">

       <bean id="jmsConnectionFactory" class="org.apache.activemq.artemis.jms.client.ActiveMQJMSConnectionFactory">
          <constructor-arg index="0" value="(tcp://myhost2:61616,tcp://myhost1:61617)?ha=true;reconnectAttempts=-1;"/>
       </bean>

       <bean id="jmsPooledConnectionFactory" class="org.messaginghub.pooled.jms.JmsPoolConnectionFactory" init-method="start" destroy-method="stop">
          <property name="maxConnections" value="10" />
          <property name="connectionFactory" ref="jmsConnectionFactory" />
       </bean>

       <bean id="jmsConfig" class="org.apache.camel.component.jms.JmsConfiguration">
          <property name="connectionFactory" ref="jmsPooledConnectionFactory" />
          <property name="concurrentConsumers" value="10" />
       </bean>

       <bean id="jms" class="org.apache.camel.component.jms.JmsComponent">
          <property name="configuration" ref="jmsConfig" />
       </bean>

       <camelContext id="camel" xmlns="http://camel.apache.org/schema/spring">
          <endpoint id="queue1" uri="jms:queue:myExampleQ" />
          <endpoint id="queue2" uri="jms:queue:myExampleQ2" />
          <route>
             <from uri="ref:queue1" />
             <convertBodyTo type="java.lang.String" />
             <transform>
                <simple>MSG FRM queue1 TO queue2 : ${bodyAs(String)}</simple>
             </transform>
             <to uri="ref:queue2" />
          </route>
       </camelContext>
    </beans>

    Answer

    You've configured an active/active cluster of 2 nodes. This supports both connection and message load-balancing, but it doesn't support transparent failover. In order to get transparent failover you need to configure an active/passive HA pair. Check the ActiveMQ Artemis documentation as well as the HA examples shipped with the broker for more details on how to do that.
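    For orientation, an active/passive pair is declared with an <ha-policy> element inside <core> in each broker.xml. The fragment below is a minimal sketch of a replication-based pair in the master/slave terminology used by Artemis 2.12; it is not from the original post, and the exact settings (shared-store vs. replication, group names, failback) should be taken from the documentation and the shipped HA examples:

    ```xml
    <!-- broker.xml on the live (active) broker -->
    <ha-policy>
       <replication>
          <master>
             <check-for-live-server>true</check-for-live-server>
          </master>
       </replication>
    </ha-policy>

    <!-- broker.xml on the backup (passive) broker -->
    <ha-policy>
       <replication>
          <slave>
             <allow-failback>true</allow-failback>
          </slave>
       </replication>
    </ha-policy>
    ```

    With such a pair, a client connecting with `ha=true` and `reconnectAttempts=-1` receives the backup's topology from the live broker, so when the live broker stops the session fails over to the backup instead of retrying the dead node.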
