Wednesday, October 14, 2015
AMQ_005: A-MQ Feature Demo 5: Master/Slave + Network of Brokers, a Highly Available and Scalable Architecture
Environment: JBoss A-MQ 6.0.0
Combining Master/Slave failover with a network of brokers yields an architecture that is both highly available and horizontally scalable.
(1) Create two child containers under the root container
fabric:container-create-child root AMQ-East 2
Output:
The following containers have been created successfully:
Container: AMQ-East1.
Container: AMQ-East2.
(2) Create two more child containers under the root container
fabric:container-create-child root AMQ-West 2
Output:
The following containers have been created successfully:
Container: AMQ-West1.
Container: AMQ-West2.
(3) Create the East brokers: a master/slave pair in the amq-east group, connected by a network connector to the amq-west brokers
fabric:mq-create --group amq-east --networks amq-west --networks-username admin --networks-password admin --assign-container AMQ-East1,AMQ-East2 amq-east-profile
Output:
MQ profile amq-east-profile ready
Profile successfully assigned to AMQ-East1
Profile successfully assigned to AMQ-East2
(4) Create the West brokers: a master/slave pair in the amq-west group, connected by a network connector to the amq-east brokers
fabric:mq-create --group amq-west --networks amq-east --networks-username admin --networks-password admin --assign-container AMQ-West1,AMQ-West2 amq-west-profile
Output:
MQ profile amq-west-profile ready
Profile successfully assigned to AMQ-West1
Profile successfully assigned to AMQ-West2
(5) Check the fabric cluster state
fabric:cluster-list
Output:
[cluster] [masters] [slaves] [services]
fusemq/amq-east
amq-east-profile AMQ-East2 AMQ-East1 tcp://MaPingdeMacBook-Pro.local:57047
fusemq/amq-west
amq-west-profile AMQ-West1 AMQ-West2 tcp://MaPingdeMacBook-Pro.local:57050
fusemq/a-mq-east
a-mq-east-broker A-MQ-East - tcp://MaPingdeMacBook-Pro.local:55602
fusemq/a-mq-west
a-mq-west-broker A-MQ-West - tcp://MaPingdeMacBook-Pro.local:55617
(6) Producer
java -jar mq-client.jar producer --user admin --password admin --brokerUrl "discovery:(fabric:amq-east)"
(7) Consumer
java -jar mq-client.jar consumer --user admin --password admin --brokerUrl "discovery:(fabric:amq-west)"
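For an application that needs the same group-based discovery outside of mq-client.jar, here is a minimal JMS producer sketch. It is an illustration under assumptions, not part of the original demo: the mq-fabric client libraries must be on the classpath so that the discovery:(fabric:...) transport is registered, and the zookeeper.url system property is only needed when the fabric registry is not local.

import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class FabricDiscoveryProducer {
    public static void main(String[] args) throws JMSException {
        // Assumption: point the discovery agent at a remote registry;
        // with a local registry this property can be omitted.
        System.setProperty("zookeeper.url", "localhost:2181");

        // discovery:(fabric:amq-east) resolves to whichever broker is
        // currently master in the amq-east group.
        ConnectionFactory factory = new ActiveMQConnectionFactory(
                "admin", "admin", "discovery:(fabric:amq-east)");
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer = session.createProducer(
                session.createQueue("summit.test"));
        producer.send(session.createTextMessage("hello from the east group"));
        connection.close();
    }
}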
References:
1. https://github.com/FuseByExample/external-mq-fabric-client/blob/master/docs/fabric-ha-setup-master-slave.md
AMQ_004: A-MQ Feature Demo 4: Network of Brokers for Load Balancing
Environment: JBoss A-MQ 6.0.0
The Master/Slave architecture addresses fault tolerance and high availability, but not load balancing or horizontal scalability.
1. Network of Brokers
Two brokers are connected by a network connector; each broker uses its own persistence store.
Brokers joined in a network can forward messages for a given destination between each other,
so a client only needs to connect to any one of the brokers.
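To make the store-and-forward behavior concrete, below is a minimal sketch (not taken from this demo; the broker URLs are placeholder assumptions) in which a consumer attached to one broker receives a message produced on the other. The consumer is created first, because network connectors forward messages based on consumer demand. The demo itself achieves the same thing with mq-client.jar in the steps that follow.

import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class NetworkOfBrokersSketch {
    public static void main(String[] args) throws JMSException {
        // Placeholder URLs for the two networked brokers.
        ConnectionFactory east = new ActiveMQConnectionFactory(
                "admin", "admin", "tcp://east-host:61616");
        ConnectionFactory west = new ActiveMQConnectionFactory(
                "admin", "admin", "tcp://west-host:61616");

        // Consumer first: its presence on "west" creates the demand that
        // makes the network connector forward messages from "east".
        Connection westConn = west.createConnection();
        westConn.start();
        Session westSession = westConn.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageConsumer consumer = westSession.createConsumer(
                westSession.createQueue("summit.test"));

        // Producer on the other broker.
        Connection eastConn = east.createConnection();
        eastConn.start();
        Session eastSession = eastConn.createSession(false, Session.AUTO_ACKNOWLEDGE);
        eastSession.createProducer(eastSession.createQueue("summit.test"))
                   .send(eastSession.createTextMessage("routed across the network"));

        TextMessage received = (TextMessage) consumer.receive(5000);
        System.out.println("Received on west: " + received.getText());
        eastConn.close();
        westConn.close();
    }
}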
(1) Create the A-MQ-East container in the a-mq-east group, with a network connector to the a-mq-west broker
fabric:mq-create --group a-mq-east --networks a-mq-west --networks-username admin --networks-password admin --create-container A-MQ-East a-mq-east-broker
Output:
MQ profile a-mq-east-broker ready
Successfully created container A-MQ-East
(2) Create the A-MQ-West container in the a-mq-west group, with a network connector to the a-mq-east broker
fabric:mq-create --group a-mq-west --networks a-mq-east --networks-username admin --networks-password admin --create-container A-MQ-West a-mq-west-broker
Output:
MQ profile a-mq-west-broker ready
Successfully created container A-MQ-West
The two steps above created two containers, A-MQ-East and A-MQ-West, plus their two corresponding profiles, a-mq-east-broker and a-mq-west-broker.
Each container runs an embedded ActiveMQ instance, and the two instances are connected by a network connector.
(3) Producer
java -jar mq-client.jar producer --brokerUrl discovery:fabric:a-mq-east --user admin --password admin --destination queue://summit.test --count 10
Output:
Using destination: queue://summit.test, on broker: discovery:fabric:a-mq-east
[org.fusesource.mq.fabric.FabricDiscoveryAgent] : Using local ZKClient
[org.fusesource.fabric.zookeeper.internal.AbstractZKClient] : Starting StateChangeDispatcher
[org.apache.zookeeper.ZooKeeper] : Client environment:zookeeper.version=3.4.3-1240972, built on 02/06/2012 10:48 GMT
[org.apache.zookeeper.ZooKeeper] : Client environment:host.name=192.168.56.1
[org.apache.zookeeper.ZooKeeper] : Client environment:java.version=1.7.0_80
[org.apache.zookeeper.ZooKeeper] : Client environment:java.vendor=Oracle Corporation
[org.apache.zookeeper.ZooKeeper] : Client environment:java.home=/Library/Java/JavaVirtualMachines/jdk1.7.0_80.jdk/Contents/Home/jre
[org.apache.zookeeper.ZooKeeper] : Client environment:java.class.path=mq-client.jar
[org.apache.zookeeper.ZooKeeper] : Client environment:java.library.path=/Users/maping/Library/Java/Extensions:/Library/Java/Extensions:/Network/Library/Java/Extensions:/System/Library/Java/Extensions:/usr/lib/java:.
[org.apache.zookeeper.ZooKeeper] : Client environment:java.io.tmpdir=/var/folders/3w/hlhsjmns5m9g6xv6xqk2864h0000gn/T/
[org.apache.zookeeper.ZooKeeper] : Client environment:java.compiler=
[org.apache.zookeeper.ZooKeeper] : Client environment:os.name=Mac OS X
[org.apache.zookeeper.ZooKeeper] : Client environment:os.arch=x86_64
[org.apache.zookeeper.ZooKeeper] : Client environment:os.version=10.10.5
[org.apache.zookeeper.ZooKeeper] : Client environment:user.name=maping
[org.apache.zookeeper.ZooKeeper] : Client environment:user.home=/Users/maping
[org.apache.zookeeper.ZooKeeper] : Client environment:user.dir=/Users/maping/Redhat/amq/demo/jboss-a-mq-6.0.0.redhat-024/extras
[org.apache.zookeeper.ZooKeeper] : Initiating client connection, connectString=localhost:2181 sessionTimeout=10000 watcher=org.fusesource.fabric.zookeeper.internal.ZKClient@72cf2e49
[org.apache.zookeeper.ClientCnxn] : Opening socket connection to server /127.0.0.1:2181
[org.apache.zookeeper.client.ZooKeeperSaslClient] : Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
[org.apache.zookeeper.ClientCnxn] : Socket connection established to localhost/127.0.0.1:2181, initiating session
[org.apache.zookeeper.ClientCnxn] : Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x150656962ea0009, negotiated timeout = 10000
[org.apache.activemq.transport.discovery.DiscoveryTransport] : Adding new broker connection URL: tcp://MaPingdeMacBook-Pro.local:55602
[org.apache.activemq.transport.failover.FailoverTransport] : Successfully connected to tcp://MaPingdeMacBook-Pro.local:55602
[org.fusesource.mq.ProducerThread] : Sent: test message: 0
[org.fusesource.mq.ProducerThread] : Sent: test message: 1
[org.fusesource.mq.ProducerThread] : Sent: test message: 2
[org.fusesource.mq.ProducerThread] : Sent: test message: 3
[org.fusesource.mq.ProducerThread] : Sent: test message: 4
[org.fusesource.mq.ProducerThread] : Sent: test message: 5
[org.fusesource.mq.ProducerThread] : Sent: test message: 6
[org.fusesource.mq.ProducerThread] : Sent: test message: 7
[org.fusesource.mq.ProducerThread] : Sent: test message: 8
[org.fusesource.mq.ProducerThread] : Sent: test message: 9
[org.fusesource.mq.ProducerThread] : Producer thread finished
Produced: 10
[org.fusesource.mq.ActiveMQService] : Closed JMS connection
[org.fusesource.fabric.zookeeper.internal.AbstractZKClient] : StateChangeDispatcher terminated.
[org.apache.activemq.transport.failover.FailoverTransport] : Transport (tcp://192.168.56.1:55602) failed, reason: java.io.EOFException, attempting to automatically reconnect
[org.apache.zookeeper.ZooKeeper] : Session: 0x150656962ea0009 closed
[org.apache.zookeeper.ClientCnxn] : EventThread shut down
[org.apache.activemq.transport.failover.FailoverTransport] : Successfully reconnected to tcp://MaPingdeMacBook-Pro.local:55602
(4) Consumer
java -jar mq-client.jar consumer --brokerUrl discovery:fabric:a-mq-west --user admin --password admin --destination queue://summit.test --count 10
Output:
Using destination: queue://summit.test, on broker: discovery:fabric:a-mq-west
[org.fusesource.mq.fabric.FabricDiscoveryAgent] : Using local ZKClient
[org.fusesource.fabric.zookeeper.internal.AbstractZKClient] : Starting StateChangeDispatcher
[org.apache.zookeeper.ZooKeeper] : Client environment:zookeeper.version=3.4.3-1240972, built on 02/06/2012 10:48 GMT
[org.apache.zookeeper.ZooKeeper] : Client environment:host.name=192.168.56.1
[org.apache.zookeeper.ZooKeeper] : Client environment:java.version=1.7.0_80
[org.apache.zookeeper.ZooKeeper] : Client environment:java.vendor=Oracle Corporation
[org.apache.zookeeper.ZooKeeper] : Client environment:java.home=/Library/Java/JavaVirtualMachines/jdk1.7.0_80.jdk/Contents/Home/jre
[org.apache.zookeeper.ZooKeeper] : Client environment:java.class.path=mq-client.jar
[org.apache.zookeeper.ZooKeeper] : Client environment:java.library.path=/Users/maping/Library/Java/Extensions:/Library/Java/Extensions:/Network/Library/Java/Extensions:/System/Library/Java/Extensions:/usr/lib/java:.
[org.apache.zookeeper.ZooKeeper] : Client environment:java.io.tmpdir=/var/folders/3w/hlhsjmns5m9g6xv6xqk2864h0000gn/T/
[org.apache.zookeeper.ZooKeeper] : Client environment:java.compiler=
[org.apache.zookeeper.ZooKeeper] : Client environment:os.name=Mac OS X
[org.apache.zookeeper.ZooKeeper] : Client environment:os.arch=x86_64
[org.apache.zookeeper.ZooKeeper] : Client environment:os.version=10.10.5
[org.apache.zookeeper.ZooKeeper] : Client environment:user.name=maping
[org.apache.zookeeper.ZooKeeper] : Client environment:user.home=/Users/maping
[org.apache.zookeeper.ZooKeeper] : Client environment:user.dir=/Users/maping/Redhat/amq/demo/jboss-a-mq-6.0.0.redhat-024/extras
[org.apache.zookeeper.ZooKeeper] : Initiating client connection, connectString=localhost:2181 sessionTimeout=10000 watcher=org.fusesource.fabric.zookeeper.internal.ZKClient@51a669d5
[org.apache.zookeeper.ClientCnxn] : Opening socket connection to server /fe80:0:0:0:0:0:0:1%1:2181
[org.apache.zookeeper.client.ZooKeeperSaslClient] : Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
[org.apache.zookeeper.ClientCnxn] : Socket connection established to fe80:0:0:0:0:0:0:1%1/fe80:0:0:0:0:0:0:1%1:2181, initiating session
[org.apache.zookeeper.ClientCnxn] : Session establishment complete on server fe80:0:0:0:0:0:0:1%1/fe80:0:0:0:0:0:0:1%1:2181, sessionid = 0x150656962ea000a, negotiated timeout = 10000
[org.apache.activemq.transport.discovery.DiscoveryTransport] : Adding new broker connection URL: tcp://MaPingdeMacBook-Pro.local:55617
[org.apache.activemq.transport.failover.FailoverTransport] : Successfully connected to tcp://MaPingdeMacBook-Pro.local:55617
Waiting for: 10 messages
[org.fusesource.mq.ConsumerThread] : Received test message: 0
[org.fusesource.mq.ConsumerThread] : Received test message: 1
[org.fusesource.mq.ConsumerThread] : Received test message: 2
[org.fusesource.mq.ConsumerThread] : Received test message: 3
[org.fusesource.mq.ConsumerThread] : Received test message: 4
[org.fusesource.mq.ConsumerThread] : Received test message: 5
[org.fusesource.mq.ConsumerThread] : Received test message: 6
[org.fusesource.mq.ConsumerThread] : Received test message: 7
[org.fusesource.mq.ConsumerThread] : Received test message: 8
[org.fusesource.mq.ConsumerThread] : Received test message: 9
[org.fusesource.mq.ConsumerThread] : Consumer thread finished
Consumed: 10 messages
[org.fusesource.mq.ActiveMQService] : Closed JMS connection
[org.fusesource.fabric.zookeeper.internal.AbstractZKClient] : StateChangeDispatcher terminated.
[org.apache.activemq.transport.failover.FailoverTransport] : Transport (tcp://192.168.56.1:55617) failed, reason: java.io.EOFException, attempting to automatically reconnect
[org.apache.zookeeper.ZooKeeper] : Session: 0x150656962ea000a closed
[org.apache.zookeeper.ClientCnxn] : EventThread shut down
[org.apache.activemq.transport.failover.FailoverTransport] : Successfully reconnected to tcp://MaPingdeMacBook-Pro.local:55617
Summary: the producer sends messages to the A-MQ-East broker, and the consumer receives them from the A-MQ-West broker.
AMQ_003: A-MQ Feature Demo 3: Master/Slave Failover for High Availability
Environment: JBoss A-MQ 6.0.0
1. Master/Slave Failover
Master/Slave failover is a high-availability solution; the key principle is that the master and slave nodes must point at the same persistence store.
A-MQ guards access to the persistence store with an exclusive lock: the node that acquires the lock first becomes the master, and the remaining nodes are slaves. Each slave keeps polling for the lock and does not accept client connections until it obtains it.
At any moment a message lives on only one node; if that node goes down, the persisted message remains safely in the persistence store until a surviving node picks it up and delivers it.
Besides a shared persistence store (for example on a SAN), Master/Slave failover can also be implemented with RHEL HA or with database replication.
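On the client side, failover is meant to be transparent. As a hedged illustration (the host names are placeholders, and this plain failover: URL is the non-fabric way of listing both members of the pair), a client simply enumerates master and slave, and the transport reconnects to whichever instance currently holds the store lock:

import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class FailoverClientSketch {
    public static void main(String[] args) throws JMSException {
        // failover:(...) retries and reconnects automatically, so a master
        // outage only pauses the client until the slave acquires the lock.
        ConnectionFactory factory = new ActiveMQConnectionFactory(
                "admin", "admin",
                "failover:(tcp://broker1-host:61616,tcp://broker2-host:61616)");
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        session.createProducer(session.createQueue("summit.test"))
               .send(session.createTextMessage("survives a master failover"));
        connection.close();
    }
}

In the fabric demo below, the discovery:fabric:default URL plays the same role, tracking the current master through the registry instead of a static broker list.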
(1) Create the master/slave containers
mq-create --create-container broker1,broker2 tolerant
Output:
MQ profile tolerant ready
Jmx Login for root: admin
Jmx Password for root:
Successfully created container broker1
Successfully created container broker2
(2) Open the Fabric Management Console and inspect the configuration of the tolerant profile
Locate org.fusesource.mq.fabric.server-tolerant.properties; its contents are:
#Wed Oct 14 08:46:10 CST 2015
connectors=openwire
broker-name=tolerant
config=zk\:/fabric/configs/versions/1.0/profiles/mq-base/broker.xml
data=/Users/maping/Redhat/amq/demo/jboss-a-mq-6.0.0.redhat-024/data/tolerant
group=default
standby.pool=default
(3) Producer
java -jar mq-client.jar producer --brokerUrl discovery:fabric:default --user admin --password admin --destination queue://summit.test --count 10
The brokerUrl value discovery:fabric:default tells the JBoss A-MQ client to use the discovery mechanism to look up the connection details of the ActiveMQ nodes.
Because the fabric registry is local (on this machine), -Dzookeeper.url=<host>:<port> does not need to be supplied; if the registry is remote, this parameter is required.
Output:
Using destination: queue://summit.test, on broker: discovery:fabric:default
[org.fusesource.mq.fabric.FabricDiscoveryAgent] : Using local ZKClient
[org.fusesource.fabric.zookeeper.internal.AbstractZKClient] : Starting StateChangeDispatcher
[org.apache.zookeeper.ZooKeeper] : Client environment:zookeeper.version=3.4.3-1240972, built on 02/06/2012 10:48 GMT
[org.apache.zookeeper.ZooKeeper] : Client environment:host.name=192.168.56.1
[org.apache.zookeeper.ZooKeeper] : Client environment:java.version=1.7.0_80
[org.apache.zookeeper.ZooKeeper] : Client environment:java.vendor=Oracle Corporation
[org.apache.zookeeper.ZooKeeper] : Client environment:java.home=/Library/Java/JavaVirtualMachines/jdk1.7.0_80.jdk/Contents/Home/jre
[org.apache.zookeeper.ZooKeeper] : Client environment:java.class.path=mq-client.jar
[org.apache.zookeeper.ZooKeeper] : Client environment:java.library.path=/Users/maping/Library/Java/Extensions:/Library/Java/Extensions:/Network/Library/Java/Extensions:/System/Library/Java/Extensions:/usr/lib/java:.
[org.apache.zookeeper.ZooKeeper] : Client environment:java.io.tmpdir=/var/folders/3w/hlhsjmns5m9g6xv6xqk2864h0000gn/T/
[org.apache.zookeeper.ZooKeeper] : Client environment:java.compiler=
[org.apache.zookeeper.ZooKeeper] : Client environment:os.name=Mac OS X
[org.apache.zookeeper.ZooKeeper] : Client environment:os.arch=x86_64
[org.apache.zookeeper.ZooKeeper] : Client environment:os.version=10.10.5
[org.apache.zookeeper.ZooKeeper] : Client environment:user.name=maping
[org.apache.zookeeper.ZooKeeper] : Client environment:user.home=/Users/maping
[org.apache.zookeeper.ZooKeeper] : Client environment:user.dir=/Users/maping/Redhat/amq/demo/jboss-a-mq-6.0.0.redhat-024/extras
[org.apache.zookeeper.ZooKeeper] : Initiating client connection, connectString=localhost:2181 sessionTimeout=10000 watcher=org.fusesource.fabric.zookeeper.internal.ZKClient@7fe7f581
[org.apache.zookeeper.ClientCnxn] : Opening socket connection to server /fe80:0:0:0:0:0:0:1%1:2181
[org.apache.zookeeper.client.ZooKeeperSaslClient] : Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
[org.apache.zookeeper.ClientCnxn] : Socket connection established to fe80:0:0:0:0:0:0:1%1/fe80:0:0:0:0:0:0:1%1:2181, initiating session
[org.apache.zookeeper.ClientCnxn] : Session establishment complete on server fe80:0:0:0:0:0:0:1%1/fe80:0:0:0:0:0:0:1%1:2181, sessionid = 0x150656962ea0003, negotiated timeout = 10000
[org.apache.activemq.transport.discovery.DiscoveryTransport] : Adding new broker connection URL: tcp://MaPingdeMacBook-Pro.local:55469
[org.apache.activemq.transport.failover.FailoverTransport] : Successfully connected to tcp://MaPingdeMacBook-Pro.local:55469
[org.fusesource.mq.ProducerThread] : Sent: test message: 0
[org.fusesource.mq.ProducerThread] : Sent: test message: 1
[org.fusesource.mq.ProducerThread] : Sent: test message: 2
[org.fusesource.mq.ProducerThread] : Sent: test message: 3
[org.fusesource.mq.ProducerThread] : Sent: test message: 4
[org.fusesource.mq.ProducerThread] : Sent: test message: 5
[org.fusesource.mq.ProducerThread] : Sent: test message: 6
[org.fusesource.mq.ProducerThread] : Sent: test message: 7
[org.fusesource.mq.ProducerThread] : Sent: test message: 8
[org.fusesource.mq.ProducerThread] : Sent: test message: 9
[org.fusesource.mq.ProducerThread] : Producer thread finished
Produced: 10
[org.fusesource.mq.ActiveMQService] : Closed JMS connection
[org.fusesource.fabric.zookeeper.internal.AbstractZKClient] : StateChangeDispatcher terminated.
[org.apache.zookeeper.ZooKeeper] : Session: 0x150656962ea0003 closed
[org.apache.zookeeper.ClientCnxn] : EventThread shut down
[org.apache.activemq.transport.failover.FailoverTransport] : Transport (tcp://192.168.56.1:55469) failed, reason: java.io.EOFException, attempting to automatically reconnect
[org.apache.activemq.transport.failover.FailoverTransport] : Successfully reconnected to tcp://MaPingdeMacBook-Pro.local:55469
(4) Check the fabric cluster state
fabric:cluster-list
Output:
[cluster] [masters] [slaves] [services]
stats/default
fusemq/default
tolerant broker1 broker2 tcp://MaPingdeMacBook-Pro.local:55469
(5) Producer again, with more messages and a delay between sends
java -jar mq-client.jar producer --brokerUrl discovery:fabric:default --user admin --password admin --destination queue://summit.test --count 1000 --sleep 1000
While it is running, stop and start the containers with fabric:container-{start,stop} <container name>:
fabric:container-stop broker1
Watch the fabric cluster change: fabric:cluster-list
fabric:container-start broker1
Watch the fabric cluster change: fabric:cluster-list
fabric:container-stop broker2
Watch the fabric cluster change: fabric:cluster-list
fabric:container-start broker2
Watch the fabric cluster change: fabric:cluster-list
AMQ_002: A-MQ Feature Demo 2: Fabric
Environment: JBoss A-MQ 6.0.0
1. Create a Fabric
Fabric is a feature included with JBoss A-MQ: a centralized management environment for provisioning and managing multiple A-MQ instances. It also includes a runtime registry that lets clients discover A-MQ instances automatically.
(1) fabric:create --clean --zookeeper-password admin --profile fmc
Output:
Using specified zookeeper password:admin
JBossA-MQ:karaf@root>
_ ____ __ __ ____
| | _ \ /\ | \/ |/ __ \
| | |_) | ___ ___ ___ / \ ______| \ / | | | |
_ | | _ < / _ \/ __/ __| / /\ \______| |\/| | | | |
| |__| | |_) | (_) \__ \__ \ / ____ \ | | | | |__| |
\____/|____/ \___/|___/___/ /_/ \_\ |_| |_|\___\_\
JBoss A-MQ (6.0.0.redhat-024)
http://www.redhat.com/products/jbossenterprisemiddleware/amq/
Hit '<tab>' for a list of available commands
and '[cmd] --help' for help on a specific command.
Hit '<ctrl-d>' or 'osgi:shutdown' to shutdown JBoss A-MQ.
This creates the fabric, including the Fabric Management Console. The command does the following:
• Stops the ActiveMQ instance
• Starts the Fabric Registry; once it is up, it can be checked with fabric:status
• Adds new commands; pressing Tab now shows 354 commands
• Starts the Fabric Management Console
(2) fabric:status
[profile] [instances] [health]
fabric 1 100%
fabric-ensemble-0000-1 1 100%
fmc 1 100%
(3) fabric:create command parameters
• --clean – resets the container to its initial state.
• --zookeeper-password <password> – this password is needed later when other containers join this fabric, e.g. fabric:join --zookeeper-password <password> <your host>:2181
• --profile fmc – Fabric uses the concept of profiles, bundles of deployable code and configuration that are easy to create and version, which simplifies rollout.
2. Open the Fabric Management Console at http://localhost:8181/ with username/password admin/admin
(1) Containers – lists all container instances that have joined this fabric (local and remote)
(2) Cloud – manages containers in the cloud; currently only Amazon EC2 is supported
(3) Profiles – lists all profiles
(4) Patching – Fabric supports patching containers
(5) Users – manages users (etc/users.properties); in production, users are typically configured in LDAP.
AMQ_001: A-MQ Feature Demo 1: Installation and Startup
Environment: JBoss A-MQ 6.0.0
1. Installation and startup
(1) unzip jboss-a-mq-6.0.0.redhat-024.zip
(2) vim jboss-a-mq-6.0.0.redhat-024/etc/users.properties and uncomment the admin entry
(3) vim jboss-a-mq-6.0.0.redhat-024/etc/system.properties and uncomment the admin properties:
activemq.jmx.user=admin
activemq.jmx.password=admin
Note: these two properties are not present in jboss-a-mq-6.2.0.redhat-133.
(4) cd jboss-a-mq-6.0.0.redhat-024/bin
(5) ./amq starts the server together with the interactive console
or ./amq server to start only the server
or ./amq client to start only the interactive console
or ./start to start the broker in the background
or ./stop to stop the background broker
(6) log:display | grep fuse
(7) Press Tab to list all commands, or run <command> --help for usage
(8) Open http://localhost:8181/
(9) Connect to a local or remote instance: ./client --help
2. Sending and receiving messages through a queue
(1) cd extras
(2) java -jar mq-client.jar producer --brokerUrl failover://tcp://localhost:61616 --user admin --password admin --destination queue://summit.test --count 10
(3) java -jar mq-client.jar consumer --brokerUrl failover://tcp://localhost:61616 --user admin --password admin --destination queue://summit.test --count 10
Notes:
(1) A queue is point-to-point communication.
(2) A queue lets producers and consumers communicate asynchronously; they do not both need to be running at the same time.
(3) The failover transport reconnects automatically when the connection drops.
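The same point-to-point flow, written as a self-contained JMS sketch (an equivalent of the two mq-client.jar commands above, assuming the ActiveMQ client jar is on the classpath; not part of the original demo):

import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class QueueDemoSketch {
    public static void main(String[] args) throws JMSException {
        ConnectionFactory factory = new ActiveMQConnectionFactory(
                "admin", "admin", "failover://tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Queue queue = session.createQueue("summit.test");

        // Producer: the message is persisted by the broker, so the
        // consumer may attach later (asynchronous communication).
        session.createProducer(queue)
               .send(session.createTextMessage("test message"));

        // Consumer: receives the message even though it attached after the send.
        Message received = session.createConsumer(queue).receive(5000);
        System.out.println(((TextMessage) received).getText());
        connection.close();
    }
}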
3. Round-robin load balancing with queues
(1) cd extras
(2) java -jar mq-client.jar consumer --brokerUrl failover://tcp://localhost:61616 --user admin --password admin --destination queue://summit.test --count 50
(3) java -jar mq-client.jar consumer --brokerUrl failover://tcp://localhost:61616 --user admin --password admin --destination queue://summit.test --count 50
(4) java -jar mq-client.jar producer --brokerUrl failover://tcp://localhost:61616 --user admin --password admin --destination queue://summit.test --count 100 --sleep 100
Notes:
(1) By default a queue dispatches messages to its consumers in turn, so one consumer receives the odd-numbered messages and the other the even-numbered ones.
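A sketch of the same dispatch behavior in code (again an illustration, not part of the demo): two competing consumers on one queue, with the broker alternating between them.

import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class RoundRobinSketch {
    public static void main(String[] args) throws JMSException {
        ConnectionFactory factory = new ActiveMQConnectionFactory(
                "admin", "admin", "failover://tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Queue queue = session.createQueue("summit.test");

        // Two competing consumers on the same queue.
        MessageConsumer c1 = session.createConsumer(queue);
        MessageConsumer c2 = session.createConsumer(queue);

        MessageProducer producer = session.createProducer(queue);
        for (int i = 0; i < 4; i++) {
            producer.send(session.createTextMessage("msg " + i));
        }

        // With the default round-robin dispatch, c1 gets msg 0 and 2,
        // while c2 gets msg 1 and 3.
        System.out.println(((TextMessage) c1.receive(5000)).getText());
        System.out.println(((TextMessage) c2.receive(5000)).getText());
        connection.close();
    }
}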
4. Sending and receiving messages through a topic
(1) cd extras
(2) java -jar mq-client.jar consumer --brokerUrl failover://tcp://localhost:61616 --user admin --password admin --destination topic://summit.test --count 10
(3) java -jar mq-client.jar consumer --brokerUrl failover://tcp://localhost:61616 --user admin --password admin --destination topic://summit.test --count 10
(4) java -jar mq-client.jar producer --brokerUrl failover://tcp://localhost:61616 --user admin --password admin --destination topic://summit.test --count 10
Notes:
(1) A topic is one-to-many communication.
(2) By default a topic requires producers and consumers to be running at the same time.
(3) The failover transport reconnects automatically when the connection drops.
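The publish/subscribe variant differs only in the destination type. A minimal sketch (also not from the demo) in which both subscribers are created before the message is published, since non-durable subscribers only see messages sent while they are active:

import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class TopicDemoSketch {
    public static void main(String[] args) throws JMSException {
        ConnectionFactory factory = new ActiveMQConnectionFactory(
                "admin", "admin", "failover://tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Topic topic = session.createTopic("summit.test");

        // Both subscribers exist before publishing: a non-durable topic
        // subscriber only sees messages sent while it is active.
        MessageConsumer sub1 = session.createConsumer(topic);
        MessageConsumer sub2 = session.createConsumer(topic);

        session.createProducer(topic)
               .send(session.createTextMessage("broadcast"));

        // Unlike the queue case, every subscriber gets its own copy.
        System.out.println(((TextMessage) sub1.receive(5000)).getText());
        System.out.println(((TextMessage) sub2.receive(5000)).getText());
        connection.close();
    }
}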
5. Notes on etc/activemq.xml
<beans
xmlns="http://www.springframework.org/schema/beans"
xmlns:amq="http://activemq.apache.org/schema/core"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.0.xsd
http://activemq.apache.org/schema/core http://activemq.apache.org/schema/core/activemq-core.xsd">
<!-- Allows us to use system properties and fabric as variables in this configuration file -->
<bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
<property name="properties">
<bean class="org.fusesource.mq.fabric.ConfigurationProperties"/>
</property>
</bean>
<broker xmlns="http://activemq.apache.org/schema/core"
brokerName="${broker-name}"
dataDirectory="${data}"
start="false">
<destinationPolicy>
<policyMap>
<policyEntries>
<policyEntry topic=">" producerFlowControl="true">
<pendingMessageLimitStrategy>
<constantPendingMessageLimitStrategy limit="1000"/>
</pendingMessageLimitStrategy>
</policyEntry>
<policyEntry queue=">" producerFlowControl="true" memoryLimit="1mb">
</policyEntry>
</policyEntries>
</policyMap>
</destinationPolicy>
<managementContext>
<managementContext createConnector="false"/>
</managementContext>
<persistenceAdapter>
<kahaDB directory="${data}/kahadb"/>
</persistenceAdapter>
<plugins>
<jaasAuthenticationPlugin configuration="karaf" />
</plugins>
<systemUsage>
<systemUsage>
<memoryUsage>
<memoryUsage limit="64 mb"/>
</memoryUsage>
<storeUsage>
<storeUsage limit="100 gb"/>
</storeUsage>
<tempUsage>
<tempUsage limit="50 gb"/>
</tempUsage>
</systemUsage>
</systemUsage>
<transportConnectors>
<transportConnector name="openwire" uri="tcp://0.0.0.0:0?maximumConnections=1000"/>
</transportConnectors>
</broker>
</beans>
Notes:
(1) activemq.xml is a Spring configuration file and uses Spring's dependency-injection mechanism.
(2) DestinationPolicy – configures limits for queues/topics using wildcard matching.
For example, the configuration above requires:
• No topic may have more than 1000 pending messages; when that limit is reached, producers are blocked from sending further messages to that topic.
• No queue may use more than 1mb of memory; when that limit is reached, producers are blocked from sending further messages to that queue.
(3) ManagementContext – configures how JMX connects to ActiveMQ; by default the parent container's JMX connector is used.
(4) PersistenceAdapter – controls how persistent messages are stored. By default ActiveMQ uses KahaDB as the persistence store and persists each message before acknowledging it to the producer.
(5) JaasAuthenticationPlugin – ActiveMQ uses JAAS for authentication and authorization. By default, user identities and permissions are kept in etc/users.properties.
(6) SystemUsage – limits the memory and disk space ActiveMQ may use. When a limit is exceeded, ActiveMQ "politely" stops accepting messages from producers and resumes once consumers have drained enough messages.
(7) TransportConnectors – defines which protocols clients may use to connect to ActiveMQ. ActiveMQ supports AMQP 1.0, MQTT, STOMP, and OpenWire, optionally combined with SSL.
Tuesday, October 6, 2015
EAP_042: Role-Based Access Control for the EAP 6 Management Console
Environment: JBoss EAP 6.4.0
The requirement comes from a customer: assume three roles and four users:
(1) Andy: system administrator, can do anything.
(2) Bob: development project manager, responsible for deploying applications and their configuration.
(3) Clare and Dave: developers, may only view application configuration, not modify it.
1. Create the users
(1) Run ./add-user.sh and create Andy, Bob, Clare, and Dave as management users.
(2) cp mgmt-user.properties mgmt-group.properties myeap/configuration
(3) cp standalone.xml standalone-rbac.xml
(4) ./standalone.sh -Djboss.server.base.dir=/Users/maping/Redhat/eap/demo/myeap -c standalone-rbac.xml -Djboss.socket.binding.port-offset=10000
(5) Open http://localhost:19990/ and log in with the newly created users; every user can do anything, which does not meet the requirement.
2. Modify standalone-rbac.xml as follows
<management>
<access-control provider="rbac">
<role-mapping>
<role name="SuperUser">
<include>
<user name="$local"/>
<user name="Andy"/>
</include>
</role>
<role name="Deployer">
<include>
<group alias="group-lead-devs" name="lead-developers"/>
</include>
</role>
<role name="Monitor">
<include>
<group alias="group-standard-devs" name="developers"/>
</include>
</role>
</role-mapping>
</access-control>
</management>
Restart the server; this time each user exercises only the permissions of their role.
Log in as Andy and click Administration -> Role Assignment to see the definitions of all users and groups.
References:
1. http://blog.c2b2.co.uk/2014/09/configuring-rbac-in-jboss-eap-and.html
2. http://blog.c2b2.co.uk/2015/01/configuring-rbac-in-jboss-eap-and.html
3. http://blog.arungupta.me/role-based-access-control-wildfly-8/
4. https://access.redhat.com/documentation/en-US/JBoss_Enterprise_Application_Platform/6.2/html/Security_Guide/Supported_Roles.html
EAP_041: EAP Feature Demo 8: Load Balancing
Environment: JBoss EAP 6.4.0 + RHEL 6.6 + EWS Httpd 2.1.0
With Apache Httpd Server and mod_cluster installed and configured, load balancing can be set up for JBoss EAP.
1. Load balancing for a standalone-mode cluster
(1) Determine the IP address of the machine running Apache Httpd Server, e.g. 192.168.56.101
(2) Modify node1's configuration file as follows:
<subsystem xmlns="urn:jboss:domain:modcluster:1.2">
<mod-cluster-config advertise-socket="modcluster" connector="ajp" proxy-list="192.168.56.101:80" balancer="myBalancer">
<dynamic-load-provider>
<load-metric type="busyness"/>
</dynamic-load-provider>
</mod-cluster-config>
</subsystem>
(3) Apply the same change to node2's configuration file
(4) Start node1, explicitly binding to the IP address of node1's machine, e.g. 192.168.56.1
./standalone.sh -Djboss.server.base.dir=/Users/maping/Redhat/eap/demo/node1 -c standalone-ha-mod_cluster.xml -b 192.168.56.1 -bmanagement 192.168.56.1 -u 239.255.100.100 -Djboss.socket.binding.port-offset=100 -Djboss.node.name=node1
(5) Start node2, explicitly binding to the IP address of node2's machine, e.g. 192.168.56.1
./standalone.sh -Djboss.server.base.dir=/Users/maping/Redhat/eap/demo/node2 -c standalone-ha-mod_cluster.xml -b 192.168.56.1 -bmanagement 192.168.56.1 -u 239.255.100.100 -Djboss.socket.binding.port-offset=200 -Djboss.node.name=node2
(6) Open http://192.168.56.1:8180/cluster_test
(7) Open http://192.168.56.1:8280/cluster_test
(8) Open http://192.168.56.101/cluster_test to test the load balancing.
2. Load balancing for a domain-mode cluster
(1) cp machine1/domain/configuration/domain.xml machine1/domain/configuration/domain-mod_cluster.xml
(2) Modify domain-mod_cluster.xml:
<subsystem xmlns="urn:jboss:domain:modcluster:1.2">
<mod-cluster-config advertise-socket="modcluster" connector="ajp" proxy-list="192.168.56.101:80" balancer="myBalancer">
<dynamic-load-provider>
<load-metric type="busyness"/>
</dynamic-load-provider>
</mod-cluster-config>
</subsystem>
(3) Start machine1, explicitly binding to the IP address of machine1's machine, e.g. 192.168.56.1
./domain.sh -Djboss.domain.base.dir=/Users/maping/Redhat/eap/demo/machine1/domain/ --domain-config=domain-mod_cluster.xml --host-config=host-master.xml -b 192.168.56.1 -bmanagement 192.168.56.1
(4) Start machine2, explicitly binding to the IP address of machine2's machine, e.g. 192.168.56.1
./domain.sh -Djboss.domain.base.dir=/Users/maping/Redhat/eap/demo/machine2/domain/ --host-config=host-slave.xml -b 192.168.56.1 -Djboss.management.native.port=19999 -Djboss.domain.master.address=192.168.56.1
(5) Start machine3, explicitly binding to the IP address of machine3's machine, e.g. 192.168.56.1
./domain.sh -Djboss.domain.base.dir=/Users/maping/Redhat/eap/demo/machine3/domain/ --host-config=host-slave.xml -b 192.168.56.1 -Djboss.management.native.port=29999 -Djboss.domain.master.address=192.168.56.1
(6) Open http://192.168.56.1:8230/cluster_test
(7) Open http://192.168.56.1:9080/cluster_test
(8) Open http://192.168.56.101/cluster_test to test the load balancing.
Monday, October 5, 2015
EAP_040: Installing and Configuring mod_cluster
Environment: RHEL 6.6 + EWS Httpd 2.1.0
Note: this installs the mod_cluster shipped with JBoss EAP 6.4.0, not the mod_cluster downloaded from the Apache community.
1. Go to access.redhat.com and locate JBoss EAP 6.4.0
(1) Check the operating system environment:
# uname -r
2.6.32-504.el6.x86_64
(2) Pick the matching download
Click Red Hat JBoss Enterprise Application Platform 6.4.0 Webserver Connector Natives for RHEL 6 x86_64
The downloaded file is named jboss-eap-native-webserver-connectors-6.4.0-RHEL6-x86_64.zip
2. Install mod_cluster
(1) # cd /opt/jbshome
(2) # unzip jboss-eap-native-webserver-connectors-6.4.0-RHEL6-x86_64.zip
(3) # cd jboss-eap-6.4/modules/system/layers/base/native/lib64/httpd/modules
(4) # cp *.so /opt/jbshome/jboss-ews-2.1/httpd/modules/
(5) # cd /opt/jbshome/jboss-ews-2.1/httpd/modules
(6) # rm mod_jk.so
(7) # cp /opt/jbshome/software/jboss-eap-6.4/modules/system/layers/base/native/etc/httpd/conf/mod_cluster.conf /opt/jbshome/jboss-ews-2.1/httpd/conf.d
(8) # vim /opt/jbshome/jboss-ews-2.1/httpd/conf.d/mod_cluster.conf
Edit it as follows:
# mod_proxy_balancer should be disabled when mod_cluster is used
LoadModule proxy_cluster_module modules/mod_proxy_cluster.so
LoadModule slotmem_module modules/mod_slotmem.so
LoadModule manager_module modules/mod_manager.so
LoadModule advertise_module modules/mod_advertise.so
MemManagerFile /var/cache/mod_cluster
<IfModule manager_module>
  #Listen 80
  <VirtualHost *:80>
    <Directory />
      Order deny,allow
      Allow from all
    </Directory>
    ServerAdvertise Off
    EnableMCPMReceive On
    ManagerBalancerName myBalancer
    KeepAliveTimeout 60
    MaxKeepAliveRequests 0
    <Location /mod_cluster-manager>
      SetHandler mod_cluster-manager
      Order deny,allow
      Allow from all
    </Location>
  </VirtualHost>
</IfModule>
(9) # cd /opt/jbshome/jboss-ews-2.1
(10) # chown -R apache:apache httpd
After the change, run ls -l to verify the ownership was updated.
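Before restarting, it is worth validating the edited file; apachectl's config-test flag catches syntax errors in mod_cluster.conf early and should report Syntax OK:
# /opt/jbshome/jboss-ews-2.1/httpd/sbin/apachectl -t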
3. Restart the Apache Httpd Server
(1) # cd /opt/jbshome/jboss-ews-2.1/httpd/sbin
(2) # ./apachectl restart
(3) Visit http://localhost/mod_cluster-manager
4. To let other machines reach the machine running the Apache Httpd Server, turn off the firewall:
(1) # iptables -F
(2) # iptables -L -n
(3) Stop the firewall temporarily: /etc/init.d/iptables stop
(4) Disable the firewall permanently: chkconfig iptables off
(5) Visit http://<IP_ADDRESS>/mod_cluster-manager
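Disabling the firewall outright is fine for a demo. A less drastic alternative (a sketch, assuming the stock RHEL 6 iptables service) is to open only port 80, which in this setup carries both the HTTP traffic and the MCPM traffic from the EAP nodes:
# iptables -I INPUT -p tcp --dport 80 -j ACCEPT
# service iptables save
The second command persists the rule across reboots.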
5. If http://<IP_ADDRESS>/mod_cluster-manager is still unreachable, relax SELinux
(1) vim /etc/sysconfig/selinux; its content is:
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=enforcing
# SELINUXTYPE= can take one of these two values:
# targeted - Targeted processes are protected,
# mls - Multi Level Security protection.
SELINUXTYPE=targeted
Run # setenforce 0 to switch SELinux into permissive mode.
Note: running # setenforce 1 switches SELinux back into enforcing mode.
Run # getenforce to confirm that SELinux is now permissive.
To save effort, you can set SELINUX=disabled directly; after a reboot you then no longer need to run setenforce 0 every time.
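The permanent change can also be scripted instead of edited by hand. A sketch, targeting /etc/selinux/config directly (/etc/sysconfig/selinux is a symlink to it):
# sed -i.bak 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
# setenforce 0
sed keeps a .bak copy of the original file, and setenforce 0 applies the relaxed mode immediately for the current boot.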
EAP_039: Installing the Apache HTTP Server
Environment: RHEL 6.6 + EWS Httpd 2.1.0
Note: this installs the Apache HTTP Server bundled with JBoss EAP 6.4.0, not an Httpd Server downloaded from the Apache community.
The EAP-bundled Apache HTTP Server is used here because it will later serve as the load balancer.
A community Apache Httpd would also work as the load balancer, but the server bundled with JBoss EAP is generally recommended: the installation is self-contained, simpler, and matched to the EAP release.
It is also easier to manage than the Apache Httpd that ships with Linux: everything lives in the unzipped directory, nothing is scattered elsewhere, and it is easy to install several Apache Httpd instances on one machine.
1. Visit access.redhat.com and locate JBoss EAP 6.4.0
(1) Check the operating system environment:
# uname -r
2.6.32-504.el6.x86_64
(2) Select the matching download
Click Red Hat JBoss Enterprise Application Platform 6.4.0 Apache HTTP Server for RHEL 6 x86_64
The downloaded file is named jboss-ews-httpd-2.1.0-RHEL6-x86_64.zip
2. The apr-util package is required
# yum install apr-util
3. Install the Apache HTTP Server
(1) # cd /opt/jbshome
(2) # unzip jboss-ews-httpd-2.1.0-RHEL6-x86_64.zip
(3) # cd jboss-ews-2.1/httpd
(4) Since Kerberos authentication is not needed, delete the following files to avoid errors at startup.
# rm conf.d/auth_kerb.conf
# rm modules/mod_auth_kerb.so
(5) # ./.postinstall
(6) Check whether the apache user and group exist
# id apache
uid=48(apache) gid=48(apache) groups=48(apache)
If they do not, create them:
# getent group apache >/dev/null || groupadd -g 48 -r apache
# getent passwd apache >/dev/null || useradd -r -u 48 -g apache -s /sbin/nologin -d /opt/jbshome/httpd/www -c "Apache" apache
Change the owner of the httpd directory to apache:
# chown -R apache:apache httpd
After the change, run ls -l to verify the ownership was updated.
4. Starting and stopping the Apache HTTP Server
(1) # cd /opt/jbshome/jboss-ews-2.1/httpd/sbin/
(2) # ./apachectl start
Note: apachectl must be prefixed with ./ here; otherwise you start the system's own Apache Httpd Server.
(3) # ./apachectl stop
Note: apachectl must be prefixed with ./ here; otherwise you stop the system's own Apache Httpd Server.
(4) Check whether the Apache HTTP Server is up
# netstat -anp | grep httpd
tcp 0 0 :::80 :::* LISTEN 5298/httpd
(5) Visit http://localhost
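A quick header check from the shell also confirms that it is this instance answering and not the system httpd; the Server response header should show the Apache 2.2.x build bundled with EWS:
# curl -I http://localhost/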
EAP_038: EAP 6 Feature Demo 7: Clustering (Domain mode)
Environment: JBoss EAP 6.4.0
1. Configure a Domain Mode cluster
(1) mkdir machine1
(2) cp -r ./jboss-eap-6.4/domain/ machine1/domain
(3) mkdir machine2
(4) cp -r ./jboss-eap-6.4/domain/ machine2/domain
(5) mkdir machine3
(6) cp -r ./jboss-eap-6.4/domain/ machine3/domain
(7) vim machine1/domain/configuration/domain.xml, find the full-ha profile, and edit:
<subsystem xmlns="urn:jboss:domain:messaging:1.4">
  <hornetq-server>
    <clustered>true</clustered>
    <persistence-enabled>true</persistence-enabled>
    <cluster-password>welcome@1</cluster-password>
Because the messaging subsystem uses its own clustering mechanism rather than JGroups, its cluster traffic needs these extra security credentials (the cluster password).
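The same change can be made through the management CLI instead of editing domain.xml by hand. A sketch, run against a started domain controller on the default native management port; the hornetq-server name default matches the stock full-ha profile:
./jboss-cli.sh --connect controller=127.0.0.1:9999 --command="/profile=full-ha/subsystem=messaging/hornetq-server=default:write-attribute(name=cluster-password,value=welcome@1)"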
(8) ./domain.sh --host-config=host-master.xml -Djboss.domain.base.dir=/Users/maping/Redhat/eap/demo/machine1/domain/
(9) ./domain.sh --host-config=host-slave.xml -Djboss.domain.base.dir=/Users/maping/Redhat/eap/demo/machine2/domain/ -Djboss.management.native.port=19999 -Djboss.domain.master.address=127.0.0.1
Alternatively, if this host controller should be able to start even while the domain controller is shut down, add --backup on the first start so that it caches a local copy of the domain configuration:
./domain.sh --host-config=host-slave.xml -Djboss.domain.base.dir=/Users/maping/Redhat/eap/demo/machine2/domain/ -Djboss.management.native.port=19999 --backup -Djboss.domain.master.address=127.0.0.1
and afterwards start it from that cached copy:
./domain.sh --host-config=host-slave.xml -Djboss.domain.base.dir=/Users/maping/Redhat/eap/demo/machine2/domain/ -Djboss.management.native.port=19999 --cached-dc -Djboss.domain.master.address=127.0.0.1
(10) ./domain.sh -Djboss.domain.base.dir=/Users/maping/Redhat/eap/demo/machine3/domain/ --host-config=host-slave.xml -Djboss.management.native.port=29999 -Djboss.domain.master.address=127.0.0.1
Alternatively, as above, add --backup on the first start to cache the domain configuration locally:
./domain.sh -Djboss.domain.base.dir=/Users/maping/Redhat/eap/demo/machine3/domain/ --host-config=host-slave.xml -Djboss.management.native.port=29999 --backup -Djboss.domain.master.address=127.0.0.1
and afterwards start it from that cached copy:
./domain.sh -Djboss.domain.base.dir=/Users/maping/Redhat/eap/demo/machine3/domain/ --host-config=host-slave.xml -Djboss.management.native.port=29999 --cached-dc -Djboss.domain.master.address=127.0.0.1
(11) Open http://localhost:9990/console/ to walk through the Domain configuration, start/stop Servers, and deploy cluster_test.war to the other-server-group.
Note: in Domain Mode, applications are deployed to Server Groups, not to individual Servers. You cannot deploy to just one Server in a group; a deployment always goes to every Server in that Server Group (see the CLI sketch after step (13)).
(12) http://localhost:8230/cluster_test/
(13) http://localhost:9080/cluster_test/
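The console deployment from step (11) can also be scripted with the management CLI. A sketch, assuming cluster_test.war sits in the current directory:
./jboss-cli.sh --connect controller=127.0.0.1:9999 --command="deploy cluster_test.war --server-groups=other-server-group"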
2. To undeploy an application in Domain Mode:
(1) In Server Group Deployments, first disable the application, then remove it.
(2) In Deployment Content, remove it from the Content Repository.
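A sketch of the CLI equivalent, which collapses both console steps into one command; --all-relevant-server-groups undeploys from every group the application was assigned to and then removes the content from the repository:
./jboss-cli.sh --connect controller=127.0.0.1:9999 --command="undeploy cluster_test.war --all-relevant-server-groups"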
Thursday, October 1, 2015
EAP_037: EAP 6 Feature Demo 6: Clustering (Standalone mode)
Environment: JBoss EAP 6.4.0
1. What clustering provides
(1) High Availability (HA): the service remains available
(2) Scalability: the service can grow with demand
(3) Failover: work shifts to surviving nodes when one fails
(4) Fault Tolerance: the service keeps working despite faults
2. In EAP 6, once clustering is configured, the clustering services are not permanently active; they start and stop on demand, depending on whether an application declares the distributable feature.
The web.xml in cluster_test.war reads:
<?xml version="1.0" encoding="UTF-8"?>
<web-app id="WebApp_ID" version="2.5"
    xmlns="http://java.sun.com/xml/ns/javaee"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd">
  <display-name>cluster_test</display-name>
  <distributable/>
  <welcome-file-list>
    <welcome-file>index.html</welcome-file>
    <welcome-file>index.htm</welcome-file>
    <welcome-file>index.jsp</welcome-file>
    <welcome-file>default.html</welcome-file>
    <welcome-file>default.htm</welcome-file>
    <welcome-file>default.jsp</welcome-file>
  </welcome-file-list>
</web-app>
3. In EAP 6, clustering is implemented by three components:
(1) org.jboss.as.clustering.jgroups: communication, diagnostics, and discovery between servers.
(2) org.jboss.as.clustering.infinispan: caching and object replication.
(3) org.jboss.as.modcluster: load balancing, when Apache httpd is used.
4. Configure a Standalone Mode cluster
(1) mkdir node1
(2) cp -r ./jboss-eap-6.4/standalone/configuration ./jboss-eap-6.4/standalone/deployments ./jboss-eap-6.4/standalone/lib node1
(3) mkdir node2
(4) cp -r ./jboss-eap-6.4/standalone/configuration ./jboss-eap-6.4/standalone/deployments ./jboss-eap-6.4/standalone/lib node2
(5) ./standalone.sh -Djboss.server.base.dir=/Users/maping/Redhat/eap/demo/node1 -c standalone-ha.xml -b 127.0.0.1 -bmanagement 127.0.0.1 -u 239.255.100.100 -Djboss.socket.binding.port-offset=100 -Djboss.node.name=node1
Or, with the TCP-based configuration:
./standalone.sh -Djboss.server.base.dir=/Users/maping/Redhat/eap/demo/node1 -c standalone-ha-tcp.xml -b 127.0.0.1 -bmanagement 127.0.0.1 -u 239.255.100.100 -Djboss.socket.binding.port-offset=100 -Djboss.node.name=node1
(6) ./standalone.sh -Djboss.server.base.dir=/Users/maping/Redhat/eap/demo/node2 -c standalone-ha.xml -b 127.0.0.1 -bmanagement 127.0.0.1 -u 239.255.100.100 -Djboss.socket.binding.port-offset=200 -Djboss.node.name=node2
Or, with the TCP-based configuration:
./standalone.sh -Djboss.server.base.dir=/Users/maping/Redhat/eap/demo/node2 -c standalone-ha-tcp.xml -b 127.0.0.1 -bmanagement 127.0.0.1 -u 239.255.100.100 -Djboss.socket.binding.port-offset=200 -Djboss.node.name=node2
(7) At http://localhost:10090/console/, deploy cluster_test.war
(8) At http://localhost:10190/console/, deploy cluster_test.war
(9) http://localhost:8180/cluster_test/
(10) http://localhost:8280/cluster_test/
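To verify that sessions really replicate between the two nodes, a cookie-jar test with curl is a quick check. A sketch, assuming the cluster_test page stores something (for example a counter) in the HTTP session:
curl -c /tmp/jsession.txt http://localhost:8180/cluster_test/
curl -b /tmp/jsession.txt http://localhost:8280/cluster_test/
If replication works, the second node recognizes the session created on the first instead of starting a new one.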
EAP_036: EAP 6 JVM Settings
Environment: JBoss EAP 6.4.0
1. Standalone mode
In Standalone mode there is only one process, the EAP Server; its JVM configuration lives in standalone.conf:
if [ "x$JAVA_OPTS" = "x" ]; then
JAVA_OPTS="-Xms1303m -Xmx1303m -XX:MaxPermSize=256m -Djava.net.preferIPv4Stack=true"
JAVA_OPTS="$JAVA_OPTS -Djboss.modules.system.pkgs=$JBOSS_MODULES_SYSTEM_PKGS -Djava.awt.headless=true"
JAVA_OPTS="$JAVA_OPTS -Djboss.modules.policy-permissions=true"
2. Domain mode
In Domain mode, the Host Controller starts and stops the EAP Servers, so the EAP Servers' JVM configuration is also the Host Controller's responsibility.
The JVM settings in domain.conf configure the Host Controller itself, not the EAP Servers:
if [ "x$JAVA_OPTS" = "x" ]; then
JAVA_OPTS="-Xms64m -Xmx512m -XX:MaxPermSize=256m -Djava.net.preferIPv4Stack=true"
JAVA_OPTS="$JAVA_OPTS -Djboss.modules.system.pkgs=$JBOSS_MODULES_SYSTEM_PKGS -Djava.awt.headless=true"
JAVA_OPTS="$JAVA_OPTS -Djboss.modules.policy-permissions=true"
In Domain mode, an EAP Server's JVM configuration is determined in three places:
(1) the <jvm> definitions in host.xml
(2) the <server-group> definitions in domain.xml
(3) the <server> definitions in host.xml
From most to least specific: JVM settings on a <server> override those on its <server-group>, and those on a <server-group> override the named <jvm> definition.
Example 1: <jvm> definitions in host.xml
<jvms>
  <jvm name="small_jvm">
    <heap size="64m" max-size="128m"/>
  </jvm>
  <jvm name="production_jvm">
    <heap size="2048m" max-size="2048m"/>
    <permgen size="128m" max-size="512m"/>
    <stack size="1024k"/>
    <jvm-options>
      <option value="-XX:-UseParallelGC"/>
    </jvm-options>
  </jvm>
</jvms>
Example 2: both <server-group> and <server> definitions may reference a jvm definition and, while doing so, override parts of its configuration.
<server-group name="groupA" profile="default">
  <jvm name="production_jvm"/>
  <socket-binding-group ref="standard-sockets"/>
</server-group>
<server-group name="groupB" profile="default">
  <jvm name="production_jvm">
    <heap size="1024m" max-size="1024m"/>
  </jvm>
  <socket-binding-group ref="standard-sockets"/>
</server-group>
<server name="test_server" group="groupB" auto-start="true">
  <socket-binding-group ref="standard-sockets" port-offset="300"/>
  <jvm name="production_jvm">
    <heap size="256m"/>
  </jvm>
</server>
3. To check the memory settings a given EAP Server actually runs with: ps -eaf | grep test_server
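The JDK's jps tool is a handy alternative, since it prints each JVM's startup arguments and so shows which heap settings actually won after the overrides; in Domain mode the server name appears among the process arguments:
jps -v | grep test_server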