Tuesday, January 26, 2016

ActiveMQ_004: Configuring High Availability with Shared-File Storage on a Single Machine

Environment: OS X El Capitan 10.11.3 + ActiveMQ 5.13.3

On a single machine, configure three ActiveMQ nodes for high availability based on shared-file storage.

The principle behind ActiveMQ high availability is simple: all brokers use the same shared store, which can be KahaDB, LevelDB, or a relational database.
Because database performance is comparatively poor, only the shared-file storage approach is covered here.

1. Installation
 (1)mkdir activemq-cluster
 (2)cd activemq-cluster
 (3)tar xvf apache-activemq-5.13.3-bin.tar.gz
 (4)cp -r apache-activemq-5.13.3  activemq-node1
 (5)cp -r apache-activemq-5.13.3  activemq-node2
 (6)cp -r apache-activemq-5.13.3  activemq-node3

2. Configuration
All three nodes point their store at the same location, here /tmp/kahadb.
Since everything runs on one machine, change each node's TCP port and comment out the other protocols to avoid port conflicts; the web console port must change as well.
 
(1)vim ./activemq-node1/conf/activemq-node1-master-slave.xml
Edit as follows:
<persistenceAdapter>
            <!-- <kahaDB directory="${activemq.data}/kahadb"/> -->
            <kahaDB directory="/tmp/kahadb"/>
            <!-- <levelDB directory="/tmp/leveldb"/> -->
</persistenceAdapter>
Since this is a single-machine setup, change the TCP port and comment out the other protocols to avoid port conflicts.
<transportConnectors>
            <!-- DOS protection, limit concurrent connections to 1000 and frame size to 100MB -->
            <transportConnector name="openwire" uri="tcp://0.0.0.0:61617?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
            <!--
            <transportConnector name="amqp" uri="amqp://0.0.0.0:5672?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
            <transportConnector name="stomp" uri="stomp://0.0.0.0:61613?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
            <transportConnector name="mqtt" uri="mqtt://0.0.0.0:1883?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
            <transportConnector name="ws" uri="ws://0.0.0.0:61614?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
             -->
</transportConnectors>

(2)vim ./activemq-node2/conf/activemq-node2-master-slave.xml
Edit as follows:
<persistenceAdapter>
            <!-- <kahaDB directory="${activemq.data}/kahadb"/> -->
            <kahaDB directory="/tmp/kahadb"/>
            <!-- <levelDB directory="/tmp/leveldb"/> -->
</persistenceAdapter>

<transportConnectors>
            <!-- DOS protection, limit concurrent connections to 1000 and frame size to 100MB -->
            <transportConnector name="openwire" uri="tcp://0.0.0.0:61618?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
            <!--
            <transportConnector name="amqp" uri="amqp://0.0.0.0:5672?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
            <transportConnector name="stomp" uri="stomp://0.0.0.0:61613?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
            <transportConnector name="mqtt" uri="mqtt://0.0.0.0:1883?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
            <transportConnector name="ws" uri="ws://0.0.0.0:61614?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
             -->
</transportConnectors>

(3)vim ./activemq-node3/conf/activemq-node3-master-slave.xml
Edit as follows:
<persistenceAdapter>
            <!-- <kahaDB directory="${activemq.data}/kahadb"/> -->
            <kahaDB directory="/tmp/kahadb"/>
            <!-- <levelDB directory="/tmp/leveldb"/> -->
</persistenceAdapter>
Since this is a single-machine setup, change the TCP port and comment out the other protocols to avoid port conflicts.
<transportConnectors>
            <!-- DOS protection, limit concurrent connections to 1000 and frame size to 100MB -->
            <transportConnector name="openwire" uri="tcp://0.0.0.0:61619?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
            <!--
            <transportConnector name="amqp" uri="amqp://0.0.0.0:5672?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
            <transportConnector name="stomp" uri="stomp://0.0.0.0:61613?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
            <transportConnector name="mqtt" uri="mqtt://0.0.0.0:1883?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
            <transportConnector name="ws" uri="ws://0.0.0.0:61614?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
             -->
</transportConnectors>

(4)vim ./activemq-node1/conf/jetty.xml
Since this is a single-machine setup, change the web console port to avoid conflicts.
 <bean id="jettyPort" class="org.apache.activemq.web.WebConsolePort" init-method="start">
             <!-- the default port number for the web console -->
        <property name="host" value="0.0.0.0"/>
        <property name="port" value="8162"/>
 </bean>

(5)vim ./activemq-node2/conf/jetty.xml
Since this is a single-machine setup, change the web console port to avoid conflicts.
 <bean id="jettyPort" class="org.apache.activemq.web.WebConsolePort" init-method="start">
             <!-- the default port number for the web console -->
        <property name="host" value="0.0.0.0"/>
        <property name="port" value="8163"/>
 </bean>

(6)vim ./activemq-node3/conf/jetty.xml
Since this is a single-machine setup, change the web console port to avoid conflicts.
 <bean id="jettyPort" class="org.apache.activemq.web.WebConsolePort" init-method="start">
             <!-- the default port number for the web console -->
        <property name="host" value="0.0.0.0"/>
        <property name="port" value="8164"/>
 </bean>

3. Starting and Stopping
(1)chmod -R a+x activemq-node1/bin
./activemq-node1/bin/activemq start/stop/restart xbean:activemq-node1-master-slave.xml
(2)chmod -R a+x activemq-node2/bin
./activemq-node2/bin/activemq start/stop/restart xbean:activemq-node2-master-slave.xml
(3)chmod -R a+x activemq-node3/bin
./activemq-node3/bin/activemq start/stop/restart xbean:activemq-node3-master-slave.xml
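The three start commands above can be condensed into one loop (a sketch, assuming the directory layout from step 1; substitute stop or restart for start as needed):

```shell
# Start all three nodes; the first one to lock /tmp/kahadb/lock becomes master.
for n in 1 2 3; do
    ./activemq-node$n/bin/activemq start "xbean:activemq-node$n-master-slave.xml"
done
```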

Once started, only one node holds the master role at any given time; the others are slaves.

4. Checking the Logs
(1)tail -f activemq-node3/data/activemq.log
The log shows that node3 is a slave:
2016-07-06 09:33:54,518 | INFO  | Database /tmp/kahadb/lock is locked by another server. This broker is now in slave mode waiting a lock to be acquired | org.apache.activemq.store.SharedFileLocker | main
Raise the log level to DEBUG in conf/log4j.properties:
log4j.rootLogger=DEBUG, console, logfile
The output then shows:
2016-07-06 10:27:38,167 | DEBUG | Database /tmp/kahadb/lock is locked... waiting 10 seconds for the database to be unlocked. Reason: java.io.IOException: File '/tmp/kahadb/lock' could not be locked. | org.apache.activemq.store.SharedFileLocker | main
In other words, a slave node re-checks the lock every 10 seconds.
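The master election seen in these logs boils down to an exclusive lock on a shared file. The idea can be demonstrated with plain flock(1) (a generic sketch assuming Linux's util-linux flock, not ActiveMQ code):

```shell
# The first process to lock the file plays "master"; others fail to acquire it.
LOCKFILE=$(mktemp)

# "Master": hold an exclusive lock on the file for 2 seconds in the background.
flock -x "$LOCKFILE" -c 'sleep 2' &
sleep 0.5   # give the master time to grab the lock

# "Slave": a non-blocking attempt fails while the master still holds the lock.
if flock -n -x "$LOCKFILE" -c 'true'; then
    echo "lock acquired: this node would become master"
else
    echo "lock held elsewhere: this node stays slave"
fi
wait   # master releases the lock when its command exits
```

An ActiveMQ slave does the same check in a loop, which is why it notices a dead master only on its next poll (up to 10 seconds later by default).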

Stop the master and node3 takes over as the new master:
2016-07-06 09:35:54,760 | INFO  | KahaDB is version 6 | org.apache.activemq.store.kahadb.MessageDatabase | main
2016-07-06 09:35:54,779 | INFO  | Recovering from the journal @1:21278 | org.apache.activemq.store.kahadb.MessageDatabase | main
2016-07-06 09:35:54,782 | INFO  | Recovery replayed 25 operations from the journal in 0.014 seconds. | org.apache.activemq.store.kahadb.MessageDatabase | main
2016-07-06 09:35:54,790 | INFO  | PListStore:[/Users/maping/Apache/activemq-cluster/activemq-node3/data/localhost/tmp_storage] started | org.apache.activemq.store.kahadb.plist.PListStoreImpl | main
2016-07-06 09:35:54,935 | INFO  | Apache ActiveMQ 5.13.3 (localhost, ID:MaPingdeMacBook-Pro.local-63726-1467768954809-0:1) is starting | org.apache.activemq.broker.BrokerService | main
2016-07-06 09:35:54,954 | INFO  | Listening for connections at: tcp://MaPingdeMacBook-Pro.local:61619?maximumConnections=1000&wireFormat.maxFrameSize=104857600 | org.apache.activemq.transport.TransportServerThreadSupport | main
2016-07-06 09:35:54,958 | INFO  | Connector openwire started | org.apache.activemq.broker.TransportConnector | main
2016-07-06 09:35:54,962 | INFO  | Apache ActiveMQ 5.13.3 (localhost, ID:MaPingdeMacBook-Pro.local-63726-1467768954809-0:1) started | org.apache.activemq.broker.BrokerService | main
2016-07-06 09:35:54,964 | INFO  | For help or more information please see: http://activemq.apache.org | org.apache.activemq.broker.BrokerService | main
2016-07-06 09:35:55,553 | INFO  | No Spring WebApplicationInitializer types detected on classpath | /admin | main
2016-07-06 09:35:55,670 | INFO  | ActiveMQ WebConsole available at http://0.0.0.0:8164/ | org.apache.activemq.web.WebConsoleStarter | main
2016-07-06 09:35:55,670 | INFO  | ActiveMQ Jolokia REST API available at http://0.0.0.0:8164/api/jolokia/ | org.apache.activemq.web.WebConsoleStarter | main
2016-07-06 09:35:55,712 | INFO  | Initializing Spring FrameworkServlet 'dispatcher' | /admin | main
2016-07-06 09:35:55,930 | INFO  | No Spring WebApplicationInitializer types detected on classpath | /api | main
2016-07-06 09:35:56,021 | INFO  | jolokia-agent: Using policy access restrictor classpath:/jolokia-access.xml | /api | main


5. Client
Clients connect through the failover protocol: failover:(tcp://127.0.0.1:61617,tcp://127.0.0.1:61618,tcp://127.0.0.1:61619)
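For a quick smoke test without writing any client code, the producer/consumer tools bundled with ActiveMQ can use the same failover URL (a sketch; paths follow the layout above, and the queue name TEST is just an example):

```shell
# Send 10 test messages through whichever node is currently master.
./activemq-node1/bin/activemq producer \
    --brokerUrl "failover:(tcp://127.0.0.1:61617,tcp://127.0.0.1:61618,tcp://127.0.0.1:61619)" \
    --destination queue://TEST \
    --messageCount 10

# Consume them back; if the master dies mid-stream, the failover transport
# reconnects to the new master transparently.
./activemq-node1/bin/activemq consumer \
    --brokerUrl "failover:(tcp://127.0.0.1:61617,tcp://127.0.0.1:61618,tcp://127.0.0.1:61619)" \
    --destination queue://TEST \
    --messageCount 10
```

Killing the master while the consumer runs is a simple way to verify that failover actually works end to end.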
