Sunday, August 31, 2014

MAC_021: Common Eclipse Keyboard Shortcuts

Environment: Mac OS X 10.9.4 + JBoss Developer Studio 5.0.2

1. Command + O: Show the outline

2. Command + 1: Quick fix

3. Command + D: Delete the current line

4. Command + Option + ↓: Copy the current line to the line below

5. Command + Option + ↑: Copy the current line to the line above

6. Option + ↓: Swap the current line with the line below

7. Option + ↑: Swap the current line with the line above

8. Option + ←: Previous edited page

9. Option + →: Next edited page

10. Option + Return: Show the properties of the currently selected resource *

11. Shift + Return: Insert a blank line below the current line

12. Shift + Control + Return: Insert a blank line above the current line

13. Control + Q: Go to the last edit location

14. Control + M: Maximize the current editor or view (press again to restore it) *

15. Command + /: Comment the current line; press again to uncomment it

16. Command + T: Quickly show the type hierarchy of the current class

17. Command + W: Close the current editor

18. Command + K: Jump to the next occurrence of the currently selected word

19. Command + E: Quickly show the drop-down list of open editors (editors not currently visible are shown in bold)

20. Option + /: Content assist, which completes code for you (commonly known as "IntelliSense")

21. Command + Shift + E: Show the manager for all currently open editors and views

22. Command + J: Incremental find forward (after pressing Command + J, every letter you type is matched immediately and the editor jumps to the matching word; if nothing matches, the status line reports that nothing was found)

23. Command + Shift + J: Incremental find backward

24. Command + Shift + W: Close all open editors

25. Command + Shift + X: Convert the selected text to uppercase

26. Command + Shift + Y: Convert the selected text to lowercase

27. Command + Shift + F: Format the current code *

28. Command + Shift + P: Jump to the matching bracket (for example {}). Going from the opening bracket to the closing one, the cursor must be inside the brackets; going back, the opposite applies. *

29. Option + Command + R: Rename (particularly effective for renaming variables and classes)

30. Option + Shift + M: Extract method (one of the most commonly used refactorings, especially handy for untangling a big ball of mud)

31. Option + Command + C: Change method signature (if N callers invoke the method, one change updates them all)

32. Option + Command + L: Extract local variable (turn magic numbers and strings into a variable, especially useful when they are used in several places)

33. Option + Shift + F: Convert a local variable of a class into a field (quite practical)

34. Command + Z: Undo

35. Command + F: Find and replace

Tuesday, August 26, 2014

Linux_083: Installing PostgreSQL on RHEL

Environment: RHEL 6.5 + PostgreSQL 9.3.5

PostgreSQL can be installed from rpm, via yum, from source, or with a graphical installer; this post covers the yum approach.

1. Download PostgreSQL 9.3.5
Download page: http://www.postgresql.org/download/; open it and choose Red Hat Linux.
Then locate "PostgreSQL Yum Repository", click "repository RPM listing", and under the latest release, PostgreSQL 9.3, download "Red Hat Enterprise Linux 6 - x86_64".

2. Install the PostgreSQL yum repository: yum install pgdg-redhat93-9.3-1.noarch.rpm

3. Install the PostgreSQL server: yum install postgresql93-server

4. Initialize the database: service postgresql-9.3 initdb

5. Enable start on boot: chkconfig postgresql-9.3 on

6. Verify the installation: rpm -qa | grep postgresql. The output looks like:
postgresql93-libs-9.3.5-1PGDG.rhel6.x86_64
postgresql93-server-9.3.5-1PGDG.rhel6.x86_64
postgresql93-9.3.5-1PGDG.rhel6.x86_64

7. Start/stop the database: service postgresql-9.3 start|stop|restart|status
To see all options, run service postgresql-9.3 without an argument, which prints:
Usage: /etc/init.d/postgresql-9.3 {start|stop|status|restart|upgrade|condrestart|try-restart|reload|force-reload|initdb|promote}

8. Confirm that the database is accessible locally
# su - postgres
-bash-4.1$ psql
psql (9.3.5)
postgres-# \q
-bash-4.1$ psql -l
                                  List of databases
   Name    |  Owner   | Encoding |   Collate   |    Ctype    |   Access privileges
-----------+----------+----------+-------------+-------------+-----------------------
 postgres  | postgres | UTF8     | zh_CN.UTF-8 | zh_CN.UTF-8 |
 template0 | postgres | UTF8     | zh_CN.UTF-8 | zh_CN.UTF-8 | =c/postgres          +
           |          |          |             |             | postgres=CTc/postgres
 template1 | postgres | UTF8     | zh_CN.UTF-8 | zh_CN.UTF-8 | =c/postgres          +
           |          |          |             |             | postgres=CTc/postgres
(3 rows)

9. Allow remote access to the PostgreSQL database
(1) Edit /var/lib/pgsql/9.3/data/postgresql.conf
Set the listen_addresses entry to "*".
(2) Edit /var/lib/pgsql/9.3/data/pg_hba.conf

Below the line
host    all             all             127.0.0.1/32            ident
add a new line:
host    all             all             192.168.0.0/24          trust

If your network segment is different, adjust the 192.168.0.0/24 part.
Note that the method here must be trust, otherwise SQuirreL SQL cannot connect.
SQuirreL SQL connects over JDBC, which is why trust is needed here.
pgAdmin III, on the other hand, connects without trouble; the difference comes from the password authentication method, since PostgreSQL defaults to md5 password authentication after installation.
The fields mean the following:
(1) TYPE: possible values are local and host.
local only allows local users to log in to PostgreSQL; host also accepts remote clients.
(2) DATABASE: the database the connecting user may use.
It can be a specific PostgreSQL database name, or "all" to allow the user to access every database.
(3) USER: a specific user allowed to connect to PostgreSQL (combined with the address field that follows).
"all" allows every user to connect.
(4) CIDR-ADDRESS: an alternative notation for an IP address plus netmask.
PostgreSQL uses this field to decide which IPs or IP ranges may connect to this server.
Its format is: IP address/mask length.
The mask length works like a subnet mask, except it is written as a number no larger than 32, giving how many high-order bits of the subnet mask are 1.
For example, 255.255.255.0 is "24", meaning the top 24 bits are 1.
192.168.0.1/32 is equivalent to IP 192.168.0.1 with subnet mask 255.255.255.255;
obviously, that matches only the single address 192.168.0.1.
(5) METHOD: the authentication method. Possible values include:
  • reject: refuse access from this IP
  • md5: the password is hashed with md5
  • password: the password is sent in clear text
  • krb5: the password is authenticated via krb5
Two examples:
(1) host all all 192.168.0.1/32 md5
Allows every user on the host 192.168.0.1 to log in to any database on the PostgreSQL server, using md5 authentication.
(2) host testdb testuser 192.168.0.1/24 md5
Allows the user testuser to log in from any machine on the 192.168.0.x segment, restricted to the database testdb, using md5 authentication.

10. Connect to PostgreSQL with the SQuirreL SQL client
Download the PostgreSQL JDBC driver from http://jdbc.postgresql.org/download.html.
Following the instructions there, choose "JDBC41 Postgresql Driver, Version 9.3-1102".
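
The same connection that SQuirreL SQL makes can be tested from a few lines of plain JDBC. This is only a minimal sketch, not part of the original post; the host, database, user and password are placeholders for the setup described above, and the downloaded PostgreSQL JDBC driver jar must be on the classpath.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class PgConnectionTest {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection settings: adjust the host, database, user and password
        String url = "jdbc:postgresql://192.168.0.10:5432/postgres";
        // With the trust entry in pg_hba.conf above, the password value is not actually checked
        try (Connection conn = DriverManager.getConnection(url, "postgres", "");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT version()")) {
            while (rs.next()) {
                System.out.println(rs.getString(1)); // prints the PostgreSQL version string
            }
        }
    }
}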

11. How do I list the schemas?
SELECT nspname FROM pg_namespace;

12. How do I list the tables in a given schema?
SELECT * FROM pg_tables WHERE schemaname ='camel';

13. How do I show the structure of a table?
\d camel.orders

References:
1. http://www.cnblogs.com/mchina/archive/2012/06/06/2539003.html
2. http://blog.chinaunix.net/uid-354915-id-3498734.html
3. http://www.cnblogs.com/jevonsea/archive/2013/01/24/2874184.html
4. http://www.cnblogs.com/shineqiujuan/archive/2010/08/14/1799755.html
5. http://www.blogjava.net/hengic/articles/217873.html
6. http://www.360doc.com/content/13/0822/10/10384031_309039914.shtml

Sunday, August 24, 2014

Linux_082: Common Commands, Part 23: mount and umount

Environment: RHEL 6.5

mount command syntax: mount option [-o mount_options] device dir

1. -t vfstype specifies the file system type. Common types:
(1) CD-ROM or ISO image: iso9660
(2) DOS FAT16 file system: msdos
(3) Windows 9x FAT32 file system: vfat
(4) Windows NT NTFS file system: ntfs
(5) Windows network file share: smbfs
(6) UNIX (Linux) network file share: nfs

2. -a mounts every file system in /etc/fstab that matches the given conditions
Syntax: mount -a -t type -o mount_options (no device or directory needs to be specified)

3. -f performs a dry run: it only checks the device and directory and does not actually mount the file system

4. -o options specifies how the device or file is mounted. Common options:
(1) loop: mount a file as if it were a disk partition
(2) ro: mount the device read-only
(3) rw: mount the device read-write
(4) iocharset: specify the character set used to access the file system

5. device is the device to mount

6. dir is the mount point on the system

Examples of common usage:

Before mounting another device, first run fdisk -l to check the system's disks and partitions.

1. Mount a CD-ROM or an ISO image file
(1) Create a directory as the mount point: # mkdir /mnt/iso
(2) # mount -o loop -t iso9660 /home/maping/cl280-rhel-6.4-r18156.iso /mnt/iso
(3) Use df -lh to check whether /mnt/iso is mounted.
If you have several ISO image files, first unmount the one that is already mounted (umount /home/maping/cl280-rhel-6.4-r18156.iso), then mount the new ISO image.

2. Mount an NTFS disk
(1) Create a directory as the mount point: # mkdir /mnt/ntfs
(2) # mount -t ntfs /dev/hda6 /mnt/ntfs

3. Mount a USB flash drive
(1) Create a directory as the mount point: # mkdir /mnt/udisk
(2) # mount -o iocharset=utf8 /dev/sdb1 /mnt/udisk
The -o iocharset=utf8 option prevents Chinese file and directory names from showing up as question marks.

4. Mount a CD-ROM drive
Usually the CD-ROM drive's device file is /dev/hdc.
(1) Create a directory as the mount point: # mkdir /mnt/cdrom
(2) # mount -o iocharset=utf8 /dev/hdc /mnt/cdrom

5. Mount a floppy drive
Usually the floppy drive's device file is /dev/fd0.
(1) Create a directory as the mount point: # mkdir /mnt/floopy
(2) # mount -o iocharset=utf8 /dev/fd0 /mnt/floopy

6. Mount an NFS share
(1) Create a directory as the mount point: # mkdir /mnt/nfs
(2) # mount -t nfs -o rw 10.140.133.9:/export/home/sunky /mnt/nfs

7. Mount an SMB (smbfs) share
(1) Create a directory as the mount point: # mkdir /mnt/samba
(2) # mount -t smbfs -o username=administrator,password=pldy123 //10.140.133.23/c$ /mnt/samba
Note: administrator and pldy123 are a user name and password on the Windows machine at 10.140.133.23, and c$ is a disk share on that machine.

References:
1. http://blog.csdn.net/hancunai0017/article/details/6995284
2. http://taisongts08.blog.163.com/blog/static/78134350201076111728689/
3. http://www.cnblogs.com/BloodAndBone/archive/2010/10/14/1851598.html
4. http://adelphos.blog.51cto.com/2363901/1613483/

Linux_081: Common Commands, Part 22: dd

Environment: RHEL 6.5

1. Fill the first disk (/dev/sda) of the machines 192.168.0.1~6 with zeros, 1 MB at a time, 10 times, then reboot.

$ sudo -i
# for i in {1..6}
>do ssh 192.168.0.$i "dd if=/dev/zero of=/dev/sda bs=1M count=10;reboot"
>done

Because /dev/sda is normally the boot disk, running the command above destroys the MBR; on reboot there is no boot partition left, so the machine can only boot some other way (CD, USB drive, or network).
This is handy when re-initializing a classroom environment: after all machines reboot, choose network installation and set up many machines in one pass.

2. Copy the contents of a CD into a file, saved as cd.iso
# dd if=/dev/cdrom of=/root/cd.iso   (if /dev/cdrom is not present, use the drive's device file, e.g. /dev/hdc)

The advantages of doing this are:
(1) It reduces wear on the CD-ROM drive.
(2) Modern disks are huge, so storing dozens of ISO image files is no problem, and mount/umount makes them available on demand.
(3) Disks read far faster than optical drives, and CPU usage drops considerably.

References:
1. http://blog.sina.com.cn/s/blog_8b5bb24f01016y3o.html
2. http://bbs.chinaunix.net/forum.php?mod=viewthread&tid=2325561&highlight=

Saturday, August 23, 2014

Linux_080: Connecting to a VPN with vpnc on RHEL

Environment: RHEL 6.5

1. Download EPEL
Download page: http://dl.fedoraproject.org/pub/epel/6/i386/
Find epel-release-6-8.noarch.rpm and download it,
or fetch it with wget http://dl.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm.

2. Install EPEL
# chmod a+x epel-release-6-8.noarch.rpm
# rpm -ivh epel-release-6-8.noarch.rpm

3. Install vpnc
# yum clean all
# yum install vpnc

4. Configure vpnc
# vi /etc/vpnc/default.conf
After editing, the file contents are:

#IPSec gateway my.vpn.gateway
#IPSec ID my.ipsec.id
#IPSec secret mysecret
# your username goes here:
#Xauth username

IPSec gateway 203.114.244.92
IKE Authmode psk
IPSec ID RH-standard
IPSec secret nodnerip
IKE DH Group dh2
NAT Traversal Mode natt
Xauth username pma

5. Start/stop vpnc
(1) Start: # vpnc
(2) Stop: # vpnc-disconnect

References:
1. http://www.cncentos.com/forum.php?mod=viewthread&tid=1751&extra=

MAC_020: Connecting to a VPN with vpnc

Environment: Mac OS X 10.9.4

I use vpnc instead of the Mac's built-in VPN client because the built-in client sometimes does not work reliably.

1. Install vpnc: run sudo port install vpnc

Note: MacPorts must be installed first; see "Installing MacPorts on the Mac" for the steps.
Password:
--->  Computing dependencies for vpnc
--->  Dependencies to be installed: tuntaposx
--->  Fetching archive for tuntaposx
--->  Attempting to fetch tuntaposx-20111101_0.darwin_13.x86_64.tbz2 from http://packages.macports.org/tuntaposx
--->  Attempting to fetch tuntaposx-20111101_0.darwin_13.x86_64.tbz2.rmd160 from http://packages.macports.org/tuntaposx
--->  Installing tuntaposx @20111101_0
--->  Activating tuntaposx @20111101_0
--->  Cleaning tuntaposx
--->  Fetching archive for vpnc
--->  Attempting to fetch vpnc-0.5.3_0.darwin_13.x86_64.tbz2 from http://packages.macports.org/vpnc
--->  Attempting to fetch vpnc-0.5.3_0.darwin_13.x86_64.tbz2.rmd160 from http://packages.macports.org/vpnc
--->  Installing vpnc @0.5.3_0
--->  Activating vpnc @0.5.3_0
--->  Cleaning vpnc
--->  Updating database of binaries
--->  Scanning binaries for linking errors
--->  No broken files found.

2. Download and install TunTap
Download page: http://tuntaposx.sourceforge.net/index.xhtml

3. Edit /opt/local/etc/vpnc/default.conf; after editing it looks like this:
#IPSec gateway
#IPSec ID
#IPSec secret
#IKE Authmode hybrid
#Xauth username
#Xauth password

IPSec gateway 203.114.244.92
IKE Authmode psk
IPSec ID RH-standard
IPSec secret nodnerip
IKE DH Group dh2
NAT Traversal Mode natt
Xauth username pma


4. Start/stop vpnc
(1) Start: sudo vpnc
If you hit the following error:
MaPingdeMacBook-Pro:vpnc root# vpnc
Enter password for pma@203.114.244.92:
Error binding to source port. Try '--local-port 0'
Failed to bind to 0.0.0.0:500: Address already in use
then, as the message suggests, start vpnc with sudo vpnc --local-port 0, which prints:
Enter password for pma@203.114.244.92:
Connect Banner:
| Unauthorized Access to this or any other Red Hat Inc. device
| is strictly prohibited. Violators will be prosecuted.
|

route: writing to routing socket: Can't assign requested address
add net 10.66.114.94: gateway 10.66.114.94: Can't assign requested address
add host 203.114.244.92: gateway 192.168.0.1
add net 172.16.0.0: gateway 10.66.114.94
add net 10.0.0.0: gateway 10.66.114.94
add net 10.66.127.17: gateway 10.66.114.94
add net 10.68.5.26: gateway 10.66.114.94
VPNC started in background (pid: 24992)...

(2) Stop: sudo vpnc-disconnect, which prints:
Terminating vpnc daemon (pid: 24894)       

References:
1. http://vlinux.iteye.com/blog/593613
2. http://jingyan.baidu.com/article/48b37f8d4c44cb1a646488a1.html

Monday, August 18, 2014

JDG_007: JDG 6.3 Quick Start examples: GUI Demo

Environment: JBoss Data Grid 6.3.0

The GUI Demo is an example shipped with Infinispan 6.0.2 and runs in library mode.

Download Infinispan 6.0.2 from http://infinispan.org/.

1. Several GUI Demo instances in a single local cluster

1.1 Unpack the archive to infinispan-6.0.2.Final-all, go into the bin directory, and run ./runGuiDemo.sh.

1.2 Run ./runGuiDemo.sh again.

1.3 Modify data in either GUI and you will see the data show up in the other GUI as well.

1.4 By default the GUI Demo uses the configuration file etc/config-samples/gui-demo-cache-config.xml
<?xml version="1.0" encoding="UTF-8"?>
<infinispan
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xsi:schemaLocation="urn:infinispan:config:6.0 http://www.infinispan.org/schemas/infinispan-config-6.0.xsd"
      xmlns="urn:infinispan:config:6.0">

   <global>
      <transport clusterName="demoCluster"/>
      <globalJmxStatistics enabled="true"/>
   </global>

   <default>
      <jmxStatistics enabled="true"/>
      <clustering mode="distribution">
         <l1 enabled="true" lifespan="60000"/>
         <hash numOwners="2" />
         <sync/>
      </clustering>
   </default>
</infinispan>

Notes:
(1) You can run ./runGuiDemo.sh as many times as you like; all GUI Demo instances join the same local cluster.
(2) You can also point it at your own configuration file: -Dinfinispan.demo.cfg=file:/path/to/config.xml (a sketch of how such a file is consumed programmatically follows below)
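
For reference, the same configuration file can be loaded directly by an embedded cache manager. This is only an illustrative sketch, not part of the Infinispan distribution; the relative path is an assumption and must point at the actual XML file, with the Infinispan jars on the classpath.

import org.infinispan.Cache;
import org.infinispan.manager.DefaultCacheManager;

public class GuiDemoConfigSketch {
    public static void main(String[] args) throws Exception {
        // Assumed path: adjust it to wherever gui-demo-cache-config.xml lives on your machine
        DefaultCacheManager manager =
                new DefaultCacheManager("etc/config-samples/gui-demo-cache-config.xml");
        Cache<String, String> cache = manager.getCache();   // the <default> distributed cache
        cache.put("hello", "world");
        System.out.println(cache.get("hello"));
        manager.stop();
    }
}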

2. Several GUI Demo instances across multiple local clusters

2.1 Go into the bin directory and run ./runGuiDemo.sh ../etc/config-samples/relay1.xml twice

<?xml version="1.0" encoding="UTF-8"?>
<infinispan
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xsi:schemaLocation="urn:infinispan:config:6.0 http://www.infinispan.org/schemas/infinispan-config-6.0.xsd"
      xmlns="urn:infinispan:config:6.0">
  
   <global>
      <transport clusterName="demoCluster1">
         <properties>
            <property name="configurationFile" value="config-samples/jgroups-relay1.xml" />
         </properties>
      </transport>
      <globalJmxStatistics enabled="true"/>
   </global>

   <default>
      <jmxStatistics enabled="true"/>
      <clustering mode="distribution">
         <l1 enabled="false" lifespan="10000"/>
         <hash numOwners="2" />
         <!--<hash numOwners="2" class="org.infinispan.distribution.ch.TopologyAwareConsistentHash"/>-->
         <async/>
      </clustering>
   </default>
</infinispan>

2.2 Go into the bin directory and run ./runGuiDemo.sh ../etc/config-samples/relay2.xml twice

<?xml version="1.0" encoding="UTF-8"?>
<infinispan
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xsi:schemaLocation="urn:infinispan:config:6.0 http://www.infinispan.org/schemas/infinispan-config-6.0.xsd"
      xmlns="urn:infinispan:config:6.0">
  
   <global>
      <transport clusterName="demoCluster2">
         <properties>
            <property name="configurationFile" value="config-samples/jgroups-relay2.xml" />
         </properties>
      </transport>
      <globalJmxStatistics enabled="true"/>
   </global>

   <default>
      <jmxStatistics enabled="true"/>
      <clustering mode="distribution">
         <l1 enabled="false" lifespan="10000"/>
         <hash numOwners="2" />
         <!--<hash numOwners="2" class="org.infinispan.distribution.ch.TopologyAwareConsistentHash"/>-->
         <async/>
      </clustering>
   </default>
</infinispan>

References:
1. http://infinispan.org/docs/6.0.x/getting_started/getting_started.html

JDG_006: JDG 6.3 Quick Start examples: Clustered Cache

Environment: JBoss Data Grid 6.3.0

Clustered Cache is one of the Infinispan Quick Start examples and runs in library mode.
Source: https://github.com/infinispan/infinispan-quickstart/tree/master/clustered-cache.

After downloading, go into the clustered-cache directory, then:

1. Build
mvn clean compile dependency:copy-dependencies -DstripVersion

2. Run in replication mode
(1) java -cp "target/classes:target/dependency/*" org.infinispan.quickstart.clusteredcache.Node -r A
(2) java -cp "target/classes:target/dependency/*" org.infinispan.quickstart.clusteredcache.Node -r B
(3) java -cp "target/classes:target/dependency/*" org.infinispan.quickstart.clusteredcache.Node -r C

You can keep starting more nodes; every node fully replicates all cache entries to every other node.

3. Run in distribution mode
(1) java -cp "target/classes:target/dependency/*" org.infinispan.quickstart.clusteredcache.Node -d A
(2) java -cp "target/classes:target/dependency/*" org.infinispan.quickstart.clusteredcache.Node -d B
(3) java -cp "target/classes:target/dependency/*" org.infinispan.quickstart.clusteredcache.Node -d C
You can keep starting more nodes; data is stored across the nodes in a distributed fashion, with two copies of each cache entry.

4. Node.java
package org.infinispan.quickstart.clusteredcache;

import org.infinispan.Cache;
import org.infinispan.configuration.cache.CacheMode;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.configuration.global.GlobalConfigurationBuilder;
import org.infinispan.manager.DefaultCacheManager;
import org.infinispan.manager.EmbeddedCacheManager;
import org.infinispan.quickstart.clusteredcache.util.LoggingListener;
import org.infinispan.util.logging.BasicLogFactory;
import org.jboss.logging.BasicLogger;
import org.jboss.logging.Logger;

import java.io.IOException;
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.Map;
import java.util.Set;
import java.util.TreeSet;

public class Node {

    private static final BasicLogger log = Logger.getLogger(Node.class);

    private final boolean useXmlConfig;
    private final String cacheName;
    private final String nodeName;
    private volatile boolean stop = false;

    public Node(boolean useXmlConfig, String cacheName, String nodeName) {
        this.useXmlConfig = useXmlConfig;
        this.cacheName = cacheName;
        this.nodeName = nodeName;
    }

    public static void main(String[] args) throws Exception {
        boolean useXmlConfig = false;
        String cache = "repl";
        String nodeName = null;

        for (String arg : args) {
            if ("-x".equals(arg)) {
                useXmlConfig = true;
            } else if ("-p".equals(arg)) {
                useXmlConfig = false;
            } else if ("-d".equals(arg)) {
                cache = "dist";
            } else if ("-r".equals(arg)) {
                cache = "repl";
            } else {
                nodeName = arg;
            }
        }
        new Node(useXmlConfig, cache, nodeName).run();
    }

    public void run() throws IOException, InterruptedException {
        EmbeddedCacheManager cacheManager = createCacheManager();
        final Cache<String, String> cache = cacheManager.getCache(cacheName);
        System.out.printf("Cache %s started on %s, cache members are now %s\n", cacheName, cacheManager.getAddress(),
                cache.getAdvancedCache().getRpcManager().getMembers());

        // Add a listener so that we can see the puts to this node
        cache.addListener(new LoggingListener());

        printCacheContents(cache);

        Thread putThread = new Thread() {
            @Override
            public void run() {
                int counter = 0;
                while (!stop) {
                    try {
                        cache.put("key-" + counter, "" + cache.getAdvancedCache().getRpcManager().getAddress() + "-" + counter);
                    } catch (Exception e) {
                        log.warnf("Error inserting key into the cache", e);
                    }
                    counter++;

                    try {
                        Thread.sleep(1000);
                    } catch (InterruptedException e) {
                        break;
                    }
                }
            }
        };
        putThread.start();

        System.out.println("Press Enter to print the cache contents, Ctrl+D/Ctrl+Z to stop.");
        while (System.in.read() > 0) {
            printCacheContents(cache);
        }

        stop = true;
        putThread.join();
        cacheManager.stop();
        System.exit(0);
    }

    /**
     * {@link org.infinispan.Cache#entrySet()}
     *
     * @param cache
     */
    private void printCacheContents(Cache<String, String> cache) {
        System.out.printf("Cache contents on node %s\n", cache.getAdvancedCache().getRpcManager().getAddress());

        ArrayList<Map.Entry<String, String>> entries = new ArrayList<Map.Entry<String, String>>(cache.entrySet());
        Collections.sort(entries, new Comparator<Map.Entry<String, String>>() {
            @Override
            public int compare(Map.Entry<String, String> o1, Map.Entry<String, String> o2) {
                return o1.getKey().compareTo(o2.getKey());
            }
        });
        for (Map.Entry<String, String> e : entries) {
            System.out.printf("\t%s = %s\n", e.getKey(), e.getValue());
        }
        System.out.println();
    }

    private EmbeddedCacheManager createCacheManager() throws IOException {
        if (useXmlConfig) {
            return createCacheManagerFromXml();
        } else {
            return createCacheManagerProgrammatically();
        }
    }

    private EmbeddedCacheManager createCacheManagerProgrammatically() {
        System.out.println("Starting a cache manager with a programmatic configuration");
        DefaultCacheManager cacheManager = new DefaultCacheManager(
                GlobalConfigurationBuilder.defaultClusteredBuilder()
                .transport().nodeName(nodeName).addProperty("configurationFile", "jgroups.xml")
                .build(),
                new ConfigurationBuilder()
                .clustering()
                .cacheMode(CacheMode.REPL_SYNC)
                .build()
        );
        // The only way to get the "repl" cache to be exactly the same as the default cache is to not define it at all
        cacheManager.defineConfiguration("dist", new ConfigurationBuilder()
                .clustering()
                .cacheMode(CacheMode.DIST_SYNC)
                .hash().numOwners(2)
                .build()
        );
        return cacheManager;
    }

    private EmbeddedCacheManager createCacheManagerFromXml() throws IOException {
        System.out.println("Starting a cache manager with an XML configuration");
        System.setProperty("nodeName", nodeName);
        return new DefaultCacheManager("infinispan.xml");
    }

}
 

5. infinispan.xml

<?xml version="1.0" encoding="UTF-8"?>
<infinispan
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="urn:infinispan:config:6.0 http://www.infinispan.org/schemas/infinispan-config-6.0.xsd"
    xmlns="urn:infinispan:config:6.0">
    <global>
        <transport nodeName="${nodeName}">
            <properties>
                <property name="configurationFile" value="jgroups.xml"/>
            </properties>
        </transport>
    </global>

    <default>
        <!-- Configure a synchronous replication cache -->
        <clustering mode="replication">
            <sync/>
        </clustering>
    </default>

    <namedCache name="repl">
        <!-- Use the configuration of the default cache as it is -->
    </namedCache>

    <namedCache name="dist">
        <!-- Configure a synchronous distribution cache -->
        <clustering mode="distribution">
            <sync/>
            <hash numOwners="2"/>
        </clustering>
    </namedCache>

</infinispan>

Sunday, August 17, 2014

MAC_019: Installing and Using PostgreSQL

Environment: Mac OS X 10.9.4 + PostgreSQL 9.3.4

1. Install with brew: brew install postgresql -v
Running it prints the following message:
Warning: postgresql-9.3.4 already installed
So PostgreSQL 9.3.4 is already on my Mac; let's just use it.

2. Start PostgreSQL

pg_ctl -D /usr/local/var/postgres -l /usr/local/var/postgres/server.log start

3. Stop PostgreSQL
pg_ctl -D /usr/local/var/postgres stop -s -m fast

4. Create a user rhqadmin
createuser rhqadmin -P, which prompts:
Enter password for new role:
Enter it again:

5. Create a database rhqdb owned by rhqadmin
createdb rhqdb -O rhqadmin -E UTF8 -e, which prints:
CREATE DATABASE rhqdb OWNER rhqadmin ENCODING 'UTF8';

Run "createdb --help" for more options on creating databases.

6. Connect to the database rhqdb as the user rhqadmin
psql -U rhqadmin -d rhqdb -h 127.0.0.1, which prints:
psql (9.3.4)
Type "help" for help.

rhqdb=>

7. Once connected to PostgreSQL, work with the database
(1) List the existing databases: \l
Without connecting to PostgreSQL, you can also list them from the terminal: psql -l
(2) Connect to (switch to) a database: \c rhqdb
(3) Create a table named test: CREATE TABLE test(id int, text VARCHAR(50));
(4) Insert a row: INSERT INTO test(id, text) VALUES(1, 'sdfsfsfsdfsdfdf');
(5) Query rows: SELECT * FROM test WHERE id = 1;
(6) Update a row: UPDATE test SET text = 'aaaaaaaaaaaaa' WHERE id = 1;
(7) Delete a specific row: DELETE FROM test WHERE id = 1;
(8) Drop the table: DROP TABLE test;
(9) Drop a database: DROP DATABASE dbname;
Without connecting to PostgreSQL, you can also drop a database from the terminal: dropdb -U rhqadmin rhqdb

References:
1. http://dhq.me/mac-postgresql-install-usage

MAC_018: Creating a Bootable USB Flash Drive

Environment: Mac OS X 10.9.4

1. Convert the ISO file to dmg format
The syntax is:
hdiutil convert -format UDRW -o /path/to/generate/img/file /path/to/your/iso/file
For example:
hdiutil convert -format UDRW -o ./rhel-server-6.5-x86_64-dvd.img ./rhel-server-6.5-x86_64-dvd.iso
The command produces an .img disk image; Mac OS X appends .dmg by default, so the resulting file name ends in .img.dmg.

2. Find the USB drive's device identifier: diskutil list
/dev/disk0
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        *500.3 GB   disk0
   1:                        EFI EFI                     209.7 MB   disk0s1
   2:                  Apple_HFS Macintosh HD            499.4 GB   disk0s2
   3:                 Apple_Boot Recovery HD             650.0 MB   disk0s3
/dev/disk1
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:                            RHEL_6.4 x86_64        *3.7 GB     disk1
/dev/disk2
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:     Apple_partition_scheme                        *7.3 MB     disk2
   1:        Apple_partition_map                         32.3 KB    disk2s1
   2:                  Apple_HFS Adobe Flash Player I... 7.3 MB     disk2s2
/dev/disk3
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:     FDisk_partition_scheme                        *16.0 GB    disk3
   1:             Windows_FAT_32 雨林木风 GH             16.0 GB    disk3s4

As you can see, my USB drive is /dev/disk3.

3. Unmount the USB drive: diskutil unmountDisk /dev/disk3

4. Write the image to the USB drive: sudo dd if=rhel-server-6.5-x86_64-dvd.img.dmg of=/dev/rdisk3 bs=1m
Be very careful here: do not get the device name in the of parameter wrong, or you will regret it.
The of parameter can use either the /dev/disk3 found above or /dev/rdisk3; the "r" is said to make writing faster.

5. Eject the USB drive: diskutil eject /dev/disk3

Now you can boot from the USB drive and install RHEL 6.5!

References:
1. http://jiangbo.me/blog/2011/11/09/create_ubuntu_usb_startdisk_on_mac/
2. http://blog.csdn.net/jiangbo_hit/article/details/6952151
3. http://www.linuxidc.com/Linux/2013-04/82973.htm

Friday, August 15, 2014

JDG_005: JDG 6.3 Quick Start examples: Football

Environment: JBoss Data Grid 6.3.0

Football runs in remote client-server mode and shows how to access a cache from Hot Rod, Memcached, and REST clients.

Key points:

1. Configure the cache
(1) Configure the datasource
<subsystem xmlns="urn:jboss:domain:datasources:1.1">
            <!-- Define this Datasource with jndi name java:jboss/datasources/ExampleDS -->
            <datasources>
                <datasource jndi-name="java:jboss/datasources/ExampleDS"
                            pool-name="ExampleDS" enabled="true" use-java-context="true">
                    <!-- The connection URL uses H2 Database Engine with in-memory database called test -->
                    <connection-url>jdbc:h2:mem:test;DB_CLOSE_DELAY=-1</connection-url>
                    <!-- JDBC driver name -->
                    <driver>h2</driver>
                    <!-- Credentials -->
                    <security>
                        <user-name>sa</user-name>
                        <password>sa</password>
                    </security>
                </datasource>
                <!-- Define the JDBC driver called 'h2' -->
                <drivers>
                    <driver name="h2" module="com.h2database.h2">
                        <xa-datasource-class>org.h2.jdbcx.JdbcDataSource</xa-datasource-class>
                    </driver>
                </drivers>
            </datasources>        
      </subsystem>

(2) Configure the cache
        <subsystem xmlns="urn:infinispan:server:core:6.1" default-cache-container="local">
            <cache-container name="local" default-cache="default" statistics="true">
                <local-cache name="default" start="EAGER">
                    <locking isolation="NONE" acquire-timeout="30000" concurrency-level="1000" striping="false"/>
                    <transaction mode="NONE"/>
                </local-cache>
                <local-cache name="memcachedCache" start="EAGER" statistics="true">
                    <locking isolation="NONE" acquire-timeout="30000" concurrency-level="1000" striping="false"/>
                    <transaction mode="NONE"/>
                </local-cache>
                <local-cache name="namedCache" start="EAGER" statistics="true"/>
                <!-- ADD a local cache called 'teams' -->
                <local-cache name="teams" start="EAGER"
                             batching="false" statistics="true">
                    <!-- Disable transactions for this cache -->
                    <transaction mode="NONE" />
                    <!-- Define the JdbcBinaryStores to point to the ExampleDS previously defined -->
                    <string-keyed-jdbc-store datasource="java:jboss/datasources/ExampleDS" passivation="false"
                                             preload="false" purge="false">
                        <!-- Define the database dialect -->
                        <property name="databaseType">H2</property>
                        <!-- specifies information about database table/column names and data types -->
                        <string-keyed-table prefix="JDG">
                            <id-column name="id" type="VARCHAR"/>
                            <data-column name="datum" type="BINARY"/>
                            <timestamp-column name="version" type="BIGINT"/>
                        </string-keyed-table>
                    </string-keyed-jdbc-store>
                </local-cache>
                <!-- End of local cache called 'teams' definition -->
            </cache-container>
            <cache-container name="security"/>
</subsystem>

As you can see, the Hot Rod and REST endpoints use the cache named teams, while the memcached endpoint uses memcachedCache by default.

(3) Configure the rest-connector and remove authentication (a quick HTTP test sketch follows the snippet below)
 <subsystem xmlns="urn:infinispan:server:endpoint:6.1">
            <hotrod-connector socket-binding="hotrod" cache-container="local">
                <topology-state-transfer lazy-retrieval="false" lock-timeout="1000" replication-timeout="5000"/>
            </hotrod-connector>
            <memcached-connector socket-binding="memcached" cache-container="local"/>
            <rest-connector virtual-server="default-host" cache-container="local" />
 </subsystem>
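
Once the server is started (step 2 below), the cache can also be exercised with plain HTTP, since the rest-connector above has no authentication. The following sketch is not part of the quick start; it assumes the default REST URL layout http://localhost:8080/rest/{cacheName}/{key} and the teams cache defined above.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class RestClientSketch {
    public static void main(String[] args) throws Exception {
        String base = "http://localhost:8080/rest/teams/";   // assumed default REST URL layout

        // PUT an entry into the "teams" cache
        HttpURLConnection put = (HttpURLConnection) new URL(base + "myKey").openConnection();
        put.setRequestMethod("PUT");
        put.setDoOutput(true);
        put.setRequestProperty("Content-Type", "text/plain");
        try (OutputStream out = put.getOutputStream()) {
            out.write("myValue".getBytes("UTF-8"));
        }
        System.out.println("PUT status: " + put.getResponseCode());

        // GET the entry back
        HttpURLConnection get = (HttpURLConnection) new URL(base + "myKey").openConnection();
        try (BufferedReader in = new BufferedReader(new InputStreamReader(get.getInputStream(), "UTF-8"))) {
            System.out.println("GET value: " + in.readLine());
        }
    }
}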

2. Start JBoss Data Grid 6.3
(1) cd /Users/maping/Redhat/Datagrid/jboss-datagrid-6.3.0-server/bin
(2) ./standalone.sh -c standalone-football.xml

3. Run the clients
Go into hotrod-endpoint, memcached-endpoint, and rest-endpoint respectively, then run: mvn exec:java (for orientation, a stripped-down Hot Rod client is sketched below)
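
The heart of the Hot Rod client boils down to a few lines like the following. This is only a minimal sketch, not the quick start's own source; the server address and the teams cache name are assumptions matching the configuration above (the Hot Rod connector listens on port 11222 by default). The memcached client works along the same lines through a standard memcached library such as spymemcached.

import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

public class HotRodClientSketch {
    public static void main(String[] args) {
        // Assumed server address: the JDG server started above, on the default Hot Rod port
        ConfigurationBuilder builder = new ConfigurationBuilder();
        builder.addServer().host("127.0.0.1").port(11222);
        RemoteCacheManager manager = new RemoteCacheManager(builder.build());

        // "teams" is the cache defined in standalone-football.xml
        RemoteCache<String, String> cache = manager.getCache("teams");
        cache.put("champion", "FC Barcelona");
        System.out.println("champion = " + cache.get("champion"));

        manager.stop();
    }
}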

JDG_004: JDG 6.3 Quick Start examples: Carmart (Transactional)

Environment: JBoss Data Grid 6.3.0

Carmart (Transactional) is basically the same as Carmart but adds transactions; unlike database transactions, these are in-memory transactions.
Carmart (Transactional) can only run in library mode, because at the moment only library mode supports transactions.

Key points:

1. How are transactions configured and enabled?

@ApplicationScoped
public class JBossASCacheContainerProvider implements CacheContainerProvider {
    private Logger log = Logger.getLogger(this.getClass().getName());

    private BasicCacheContainer manager;

    public BasicCacheContainer getCacheContainer() {
        if (manager == null) {
            GlobalConfiguration glob = new GlobalConfigurationBuilder()
                .nonClusteredDefault() //Helper method that gets you a default constructed GlobalConfiguration, preconfigured for use in LOCAL mode
                .globalJmxStatistics().enable() //This method allows enables the jmx statistics of the global configuration.
                .jmxDomain("org.infinispan.carmart.tx")  //prevent collision with non-transactional carmart
                .build(); //Builds  the GlobalConfiguration object
            Configuration loc = new ConfigurationBuilder()
                .jmxStatistics().enable() //Enable JMX statistics
                .clustering().cacheMode(CacheMode.LOCAL) //Set Cache mode to LOCAL - Data is not replicated.
                .transaction().transactionMode(TransactionMode.TRANSACTIONAL).autoCommit(false) //Enable Transactional mode with autocommit false
                .lockingMode(LockingMode.OPTIMISTIC).transactionManagerLookup(new GenericTransactionManagerLookup()) //uses GenericTransactionManagerLookup - This is a lookup class that locate transaction managers in the most  popular Java EE application servers. If no transaction manager can be found, it defaults on the dummy transaction manager.
                .locking().isolationLevel(IsolationLevel.REPEATABLE_READ) //Sets the isolation level of locking
                .eviction().maxEntries(4).strategy(EvictionStrategy.LIRS) //Sets  4 as maximum number of entries in a cache instance and uses the LIRS strategy - an efficient low inter-reference recency set replacement policy to improve buffer cache performance
                .persistence().passivation(false).addSingleFileStore().purgeOnStartup(true) //Disable passivation and adds a SingleFileStore that is purged on Startup
                .build(); //Builds the Configuration object
            manager = new DefaultCacheManager(glob, loc, true);
            log.info("=== Using DefaultCacheManager (library mode) ===");
        }
        return manager;
    }

    @PreDestroy
    public void cleanUp() {
        manager.stop();
        manager = null;
    }
}


2. How are transactions guaranteed?

@Model
public class CarManager {

    private Logger log = Logger.getLogger(this.getClass().getName());

    public static final String CACHE_NAME = "carcache";

    public static final String CAR_NUMBERS_KEY = "carnumbers";

    @Inject
    private CacheContainerProvider provider;

    /*
     * Injects the javax.transaction.UserTransaction - The TransactionManager lookup is configured on
     * JBossASCacheContainerProvider/TomcatCacheContainerProvider impl classes for CacheContainerProvider
     */
    @Inject
    private UserTransaction utx;

    private BasicCache<String, Object> carCache;

    private String carId;
    private Car car = new Car();

    public CarManager() {
    }

    public String addNewCar() {
        carCache = provider.getCacheContainer().getCache(CACHE_NAME);
        try {
            utx.begin();
            List<String> carNumbers = getNumberPlateList(carCache);
            carNumbers.add(car.getNumberPlate());
            carCache.put(CAR_NUMBERS_KEY, carNumbers);
            carCache.put(CarManager.encode(car.getNumberPlate()), car);
            utx.commit();
        } catch (Exception e) {
            if (utx != null) {
                try {
                    utx.rollback();
                } catch (Exception e1) {
                }
            }
        }
        return "home";
    }

    public String addNewCarWithRollback() {
        boolean throwInducedException = true;
        carCache = provider.getCacheContainer().getCache(CACHE_NAME);
        try {
            utx.begin();
            List<String> carNumbers = getNumberPlateList(carCache);
            carNumbers.add(car.getNumberPlate());
            // store the new list of car numbers and then throw an exception -> roll-back
            // the car number list should not be stored in the cache
            carCache.put(CAR_NUMBERS_KEY, carNumbers);
            if (throwInducedException)
                throw new RuntimeException("Induced exception");
            carCache.put(CarManager.encode(car.getNumberPlate()), car);
            utx.commit();
        } catch (Exception e) {
            if (utx != null) {
                try {
                    utx.rollback();
                    log.info("Rolled back due to: " + e.getMessage());
                } catch (Exception e1) {
                }
            }
        }
        return "home";
    }

    /**
     * Operate on a clone of car number list so that we can demonstrate transaction roll-back.
     */
    @SuppressWarnings("unchecked")
    private List<String> getNumberPlateList(BasicCache<String, Object> carCacheLoc) {
        List<String> result = null;
        List<String> carNumberList = (List<String>) carCacheLoc.get(CAR_NUMBERS_KEY);
        if (carNumberList == null) {
            result = new LinkedList<String>();
        } else {
            result = new LinkedList<String>(carNumberList);
        }
        return result;
    }

    public String showCarDetails(String numberPlate) {
        carCache = provider.getCacheContainer().getCache(CACHE_NAME);
        try {
            utx.begin();
            this.car = (Car) carCache.get(encode(numberPlate));
            utx.commit();
        } catch (Exception e) {
            if (utx != null) {
                try {
                    utx.rollback();
                } catch (Exception e1) {
                }
            }
        }
        return "showdetails";
    }

    public List<String> getCarList() {
        List<String> result = null;
        try {
            utx.begin();
            // retrieve a cache
            carCache = provider.getCacheContainer().getCache(CACHE_NAME);
            // retrieve a list of number plates from the cache
            result = getNumberPlateList(carCache);
            utx.commit();
        } catch (Exception e) {
            if (utx != null) {
                try {
                    utx.rollback();
                } catch (Exception e1) {
                }
            }
        }
        return result;
    }

    public String removeCar(String numberPlate) {
        carCache = provider.getCacheContainer().getCache(CACHE_NAME);
        try {
            utx.begin();
            carCache.remove(encode(numberPlate));
            List<String> carNumbers = getNumberPlateList(carCache);
            carNumbers.remove(numberPlate);
            carCache.put(CAR_NUMBERS_KEY, carNumbers);
            utx.commit();
        } catch (Exception e) {
            if (utx != null) {
                try {
                    utx.rollback();
                } catch (Exception e1) {
                }
            }
        }
        return null;
    }

    public void setCarId(String carId) {
        this.carId = carId;
    }

    public String getCarId() {
        return carId;
    }

    public void setCar(Car car) {
        this.car = car;
    }

    public Car getCar() {
        return car;
    }

    public static String encode(String key) {
        try {
            return URLEncoder.encode(key, "UTF-8");
        } catch (UnsupportedEncodingException e) {
            throw new RuntimeException(e);
        }
    }

    public static String decode(String key) {
        try {
            return URLDecoder.decode(key, "UTF-8");
        } catch (UnsupportedEncodingException e) {
            throw new RuntimeException(e);
        }
    }
}

JDG_003: JDG 6.3 Quick Start examples: Carmart

Environment: JBoss Data Grid 6.3.0

Carmart can run either in library mode or in remote client-server mode.

Key points:

1. How do you switch between library mode and remote client-server mode at build time?
(1) Before building, select the library-jbossas profile; see pom.xml for the profile details.

@ApplicationScoped
public class LocalCacheContainerProvider extends CacheContainerProvider {
    private Logger log = Logger.getLogger(this.getClass().getName());

    private BasicCacheContainer manager;

    public BasicCacheContainer getCacheContainer() {
        if (manager == null) {
            GlobalConfiguration glob = new GlobalConfigurationBuilder()
                .nonClusteredDefault() //Helper method that gets you a default constructed GlobalConfiguration, preconfigured for use in LOCAL mode
                .globalJmxStatistics().enable() //This method allows enables the jmx statistics of the global configuration.
                .build(); //Builds  the GlobalConfiguration object
            Configuration loc = new ConfigurationBuilder()
                .jmxStatistics().enable() //Enable JMX statistics
                .clustering().cacheMode(CacheMode.LOCAL) //Set Cache mode to LOCAL - Data is not replicated.
                .locking().isolationLevel(IsolationLevel.REPEATABLE_READ) //Sets the isolation level of locking
                .eviction().maxEntries(4).strategy(EvictionStrategy.LIRS) //Sets  4 as maximum number of entries in a cache instance and uses the LIRS strategy - an efficient low inter-reference recency set replacement policy to improve buffer cache performance
                .persistence().passivation(false).addSingleFileStore().purgeOnStartup(true) //Disable passivation and adds a SingleFileStore that is Purged on Startup
                .build(); //Builds the Configuration object
            manager = new DefaultCacheManager(glob, loc, true);
            log.info("=== Using DefaultCacheManager (library mode) ===");
        }
        return manager;
    }

    @PreDestroy
    public void cleanUp() {
        manager.stop();
        manager = null;
    }
}

(2) Before building, select the remote-jbossas profile; see pom.xml for the profile details.
@ApplicationScoped
public class RemoteCacheContainerProvider extends CacheContainerProvider {

    private Logger log = Logger.getLogger(this.getClass().getName());

    private BasicCacheContainer manager;

    public BasicCacheContainer getCacheContainer() {
        if (manager == null) {
            ConfigurationBuilder builder = new ConfigurationBuilder();
            builder.addServer()
                 .host(jdgProperty(DATAGRID_HOST))
                 .port(Integer.parseInt(jdgProperty(HOTROD_PORT)));
            manager = new RemoteCacheManager(builder.build());
            log.info("=== Using RemoteCacheManager (Hot Rod) ===");
        }
        return manager;
    }

    @PreDestroy
    public void cleanUp() {
        manager.stop();
        manager = null;
    }
}

2. Which implementation actually backs the CacheContainerProvider injected into CarManager?
@Inject
private CacheContainerProvider provider;
Depending on which profile you build with, only one implementation is on the classpath, either the library-mode one or the remote client-server one, so there is no conflict.

3. EmbeddedCacheManager returns cache instances of type Cache, while RemoteCacheManager returns instances of type RemoteCache. Do they share a common interface?
Yes: BasicCache.
private BasicCache<String, Object> carCache;
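
Programming against BasicCache is what lets the rest of CarManager stay the same in both modes. The sketch below is only an illustration (the class and cache name are hypothetical, not taken from the quick start): the same method works whether the container is a DefaultCacheManager (library mode) or a RemoteCacheManager (Hot Rod), because both implement BasicCacheContainer.

import org.infinispan.commons.api.BasicCache;
import org.infinispan.commons.api.BasicCacheContainer;

public class BasicCacheSketch {
    // Works with either a DefaultCacheManager (library mode)
    // or a RemoteCacheManager (Hot Rod), since both are BasicCacheContainers
    static void storeCar(BasicCacheContainer container) {
        BasicCache<String, Object> cache = container.getCache("carcache");
        cache.put("ABC-1234", "Ford Focus");
        System.out.println(cache.get("ABC-1234"));
    }
}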

4. Build, deploy, run

4.1 Library mode
(1) Build with the library-jbossas profile
(2) Start JBoss EAP 6.3
cd /Users/maping/Redhat/Eap/jboss-eap-6.3-jdg-carmart/bin
./standalone.sh
(3) Open http://localhost:8080/jboss-carmart.

4.2 Remote client-server mode
(1) Build with the remote-jbossas profile
(2) Start JBoss EAP 6.3
cd /Users/maping/Redhat/Eap/jboss-eap-6.3-jdg-carmart/bin
./standalone.sh
(3) Edit standalone.xml in JBoss Data Grid 6.3 and add a carcache cache:
<subsystem xmlns="urn:infinispan:server:core:6.1" default-cache-container="local">
            <cache-container name="local" default-cache="default" statistics="true">
                <local-cache name="carcache" start="EAGER"
                             batching="false"
                             statistics="true">
                    <eviction strategy="LIRS" max-entries="4"/>
                </local-cache>
                <local-cache name="default" start="EAGER">
                    <locking isolation="NONE" acquire-timeout="30000" concurrency-level="1000" striping="false"/>
                    <transaction mode="NONE"/>
                </local-cache>
                <local-cache name="memcachedCache" start="EAGER">
                    <locking isolation="NONE" acquire-timeout="30000" concurrency-level="1000" striping="false"/>
                    <transaction mode="NONE"/>
                </local-cache>
                <local-cache name="namedCache" start="EAGER"/>
            </cache-container>
            <cache-container name="security"/>
</subsystem>

(4) Start JBoss Data Grid 6.3
cd /Users/maping/Redhat/Datagrid/jboss-datagrid-6.3.0-server/bin
./standalone.sh -c standalone-carmart.xml

(5) Open http://localhost:8080/jboss-carmart.

JDG_002: JDG 6.3 Quick Start examples: Hello World

Environment: JBoss Data Grid 6.3.0

Hello World can only run in library mode.

Key points:

1. How do you access the cache from a servlet?
Inject the cache manager into the servlet with CDI, as in the sketch below.
@Inject
DefaultCacheManager m;
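
To put that injection into context, a servlet using the cache might look roughly like this. It is only an illustration under assumed names (the GreetServlet class and the "hello" cache are hypothetical); the quick start's real servlet differs in detail.

import java.io.IOException;
import javax.inject.Inject;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.infinispan.Cache;
import org.infinispan.manager.DefaultCacheManager;

@WebServlet("/greet")
public class GreetServlet extends HttpServlet {

    @Inject
    DefaultCacheManager m;   // provided by the @Produces method shown further below

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        Cache<String, String> cache = m.getCache("hello");   // assumed cache name
        cache.put(req.getParameter("key"), req.getParameter("value"));
        resp.getWriter().println("stored " + cache.size() + " entries");
    }
}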

2. How do you access the cache from a JSF page?
Inject the cache manager into a managed bean with CDI; the JSF page then accesses the cache through the managed bean.
 @Inject
 DefaultCacheManager m;

3. How is the DefaultCacheManager configured?

@ApplicationScoped
public class MyCacheManagerProvider {

    private static final long ENTRY_LIFESPAN = 60 * 1000; // 60 seconds

    @Inject
    private Logger log;

    private DefaultCacheManager manager;

    public DefaultCacheManager getCacheManager() {
        if (manager == null) {
            log.info("\n\n DefaultCacheManager does not exist - constructing a new one\n\n");

            GlobalConfiguration glob = new GlobalConfigurationBuilder().clusteredDefault() // Builds a default clustered configuration
                    .transport().addProperty("configurationFile", "jgroups-udp.xml") // provide a specific JGroups configuration
                    .globalJmxStatistics().allowDuplicateDomains(true).enable() // This method enables the jmx statistics of the global configuration and allows for duplicate JMX domains
                    .build(); // Builds the GlobalConfiguration object
            Configuration loc = new ConfigurationBuilder().jmxStatistics().enable() // Enable JMX statistics
                    .clustering().cacheMode(CacheMode.DIST_SYNC) // Set Cache mode to DISTRIBUTED with SYNCHRONOUS replication
                    .hash().numOwners(2) // Keeps two copies of each key/value pair
                    .expiration().lifespan(ENTRY_LIFESPAN) // Set expiration - cache entries expire after some time (given by the lifespan parameter) and are removed from the cache (cluster-wide).
                    .build();
            manager = new DefaultCacheManager(glob, loc, true);
        }
        return manager;
    }

    @PreDestroy
    public void cleanUp() {
        manager.stop();
        manager = null;
    }

}

Note that this configuration uses clustering.

4. How is the DefaultCacheManager that is injected into the servlet and the managed bean wired to MyCacheManagerProvider?

public class Resources {

    @Inject
    MyCacheManagerProvider cacheManagerProvider;

    @Produces
    Logger getLogger(InjectionPoint ip) {
        String category = ip.getMember().getDeclaringClass().getName();
        return Logger.getLogger(category);
    }

    @Produces
    DefaultCacheManager getDefaultCacheManager() {
        return cacheManagerProvider.getCacheManager();
    }

}
This class uses @Produces so that Logger and DefaultCacheManager objects can be injected wherever they are needed.

5. Start JBoss EAP 6.3
(1) Start JBoss EAP Server 1
cd /Users/maping/Redhat/Eap/jboss-eap-6.3-jdg-helloworld-1/bin
./standalone.sh
(2) Start JBoss EAP Server 2
cd /Users/maping/Redhat/Eap/jboss-eap-6.3-jdg-helloworld-2/bin
./standalone.sh -Djboss.socket.binding.port-offset=100

6. Build jboss-helloworld-jdg.war and deploy it to both JBoss EAP Server 1 and Server 2.

7. Run
(1) http://localhost:8080/jboss-helloworld-jdg
(2) http://localhost:8180/jboss-helloworld-jdg
Whichever server you operate on, the other server "sees" the result of the operation.

Saturday, August 9, 2014

EAP_020: Configuring an EAP Cluster (Domain Mode)

Environment: JBoss EAP 6.3.0

1. Domain Controller (192.168.0.100) configuration
(1) Create a management user
 ./add-user.sh
What type of user do you wish to add?
 a) Management User (mgmt-users.properties)
 b) Application User (application-users.properties)
(a):

Enter the details of the new user to add.
Using realm 'ManagementRealm' as discovered from the existing property files.
Username : admin
The username 'admin' is easy to guess
Are you sure you want to add user 'admin' yes/no? yes
The password requirements are listed below. To modify these restrictions edit the add-user.properties configuration file.
 - The password must not be one of the following restricted values {root, admin, administrator}
 - The password must contain at least 8 characters, 1 alphabetic character(s), 1 digit(s), 1 non-alphanumeric symbol(s)
 - The password must be different from the username
Password :
Re-enter Password :
What groups do you want this user to belong to? (Please enter a comma separated list, or leave blank for none)[  ]:
About to add user 'admin' for realm 'ManagementRealm'
Is this correct yes/no? yes
Added user 'admin' to file '/Users/maping/Redhat/Eap/jboss-eap-6.3-domain-cluster/standalone/configuration/mgmt-users.properties'
Added user 'admin' to file '/Users/maping/Redhat/Eap/jboss-eap-6.3-domain-cluster/domain/configuration/mgmt-users.properties'
Added user 'admin' with groups to file '/Users/maping/Redhat/Eap/jboss-eap-6.3-domain-cluster/standalone/configuration/mgmt-groups.properties'
Added user 'admin' with groups to file '/Users/maping/Redhat/Eap/jboss-eap-6.3-domain-cluster/domain/configuration/mgmt-groups.properties'
Is this new user going to be used for one AS process to connect to another AS process?
 e.g. for a slave host controller connecting to the master or for a Remoting connection for server to server EJB calls.
yes/no? yes
To represent the user add the following to the server-identities definition: <secret value="d2VsY29tZUAx" />

(2) Edit domain.xml and replace the server-groups section with:
 <server-groups>
        <server-group name="cluster-ha" profile="ha">
            <jvm name="default"/>
            <socket-binding-group ref="ha-sockets" />
        </server-group>
 </server-groups>

(3) Start the Domain Controller
./domain.sh --host-config=host-master.xml -b 192.168.0.100 -bmanagement=192.168.0.100

2. Host Controller (192.168.0.105) configuration
(1) Edit host-slave.xml and set the host name to host1
 <host name="host1" xmlns="urn:jboss:domain:1.6">

(2) Edit host-slave.xml and add a username attribute to the remote element inside domain-controller:
<remote host="${jboss.domain.master.address}" port="${jboss.domain.master.port:9999}" username="admin" security-realm="ManagementRealm"/>

(3) Edit host-slave.xml and replace the secret value with the Base64-encoded password of the admin user:
<security-realms>
            <security-realm name="ManagementRealm">
                <server-identities>
                     <!-- Replace this with either a base64 password of your own, or use a vault with a vault expression -->
                     <secret value="d2VsY29tZUAx"/>
                </server-identities>

                <authentication>
                    <local default-user="$local" skip-group-loading="true"/>
                    <properties path="mgmt-users.properties" relative-to="jboss.domain.config.dir"/>
                </authentication>
                <authorization map-groups-to-roles="false">
                    <properties path="mgmt-groups.properties" relative-to="jboss.domain.config.dir"/>
                </authorization>
</security-realm>

(4) Edit host-slave.xml and replace the servers section with:
<servers>
        <server name="server-one" group="cluster-ha"/>
        <server name="server-two" group="cluster-ha">
            <!-- server-two avoids port conflicts by incrementing the ports in
                 the default socket-group declared in the server-group -->
            <socket-bindings port-offset="150"/>
        </server>
</servers>

(5) Start Host1
./domain.sh --host-config=host-slave.xml -Djboss.domain.master.address=192.168.0.100 -Djboss.bind.address=192.168.0.105 -Djboss.bind.address.management=192.168.0.105

3. Host Controller (192.168.0.107) configuration
(1) Edit host-slave.xml and set the host name to host2
 <host name="host2" xmlns="urn:jboss:domain:1.6">
(2)(3)(4) are the same as (2)(3)(4) of step 2.
(5) Start Host2
./domain.sh --host-config=host-slave.xml -Djboss.domain.master.address=192.168.0.100 -Djboss.bind.address=192.168.0.107 -Djboss.bind.address.management=192.168.0.107

4. Verify the cluster configuration
(1) The Domain Controller log shows messages like:
[Host Controller] 16:23:44,594 INFO  [org.jboss.as.domain] (Host Controller Service Threads - 18) JBAS010918: Registered remote slave host "host1", JBoss EAP 6.3.0.GA (AS 7.4.0.Final-redhat-19)
[Host Controller] 16:25:39,032 INFO  [org.jboss.as.domain] (Host Controller Service Threads - 30) JBAS010918: Registered remote slave host "host2", JBoss EAP 6.3.0.GA (AS 7.4.0.Final-redhat-19)
(2) Open http://192.168.0.100:9990/console and switch between hosts; you can see the servers of master, host1, and host2 respectively.

References:
1. http://blog.akquinet.de/2012/06/29/managing-cluster-nodes-in-domain-mode-of-jboss-as-7-eap-6/
2. http://www.jbossauthority.com/jboss-tutorial-jboss-clustered-server/

EAP_019: Managing Multiple EAP Servers in Domain Mode

Environment: JBoss EAP 6.3.0

The purpose of domain mode is to group several servers into a server group and to give the hosts in that group:
(1) A single point of configuration (one Domain Controller applies a uniform configuration to all hosts in the group).
(2) A single point of deployment: the Domain Controller deploys an application to every host in the group in one operation.
Without domain mode, keeping a uniform configuration across many servers, or several groups of servers, means maintaining each one by hand, which is very tedious.

Lab environment: two machines, with IPs 192.168.0.100 and 192.168.0.105, each running JBoss EAP 6.3.0 and forming one server group.
192.168.0.100 acts as the Domain Controller and is named "master"; 192.168.0.105 acts as a Host Controller and is named "slave".
Note that master/slave is only used to tell the two machines apart; it does not imply a master/standby relationship, and a real setup can have several Host Controllers.

1. Configuration on the Domain Controller (192.168.0.100)
(1) Add a management user named slave. This account is created for the Host Controller (192.168.0.105), so that the Host Controller and the Domain Controller can communicate.

 ./add-user.sh
What type of user do you wish to add?
 a) Management User (mgmt-users.properties)
 b) Application User (application-users.properties)
(a):

Enter the details of the new user to add.
Using realm 'ManagementRealm' as discovered from the existing property files.
Username (slave) :
The password requirements are listed below. To modify these restrictions edit the add-user.properties configuration file.
 - The password must not be one of the following restricted values {root, admin, administrator}
 - The password must contain at least 8 characters, 1 alphabetic character(s), 1 digit(s), 1 non-alphanumeric symbol(s)
 - The password must be different from the username
Password :
Re-enter Password :
What groups do you want this user to belong to? (Please enter a comma separated list, or leave blank for none)[  ]:
About to add user 'slave' for realm 'ManagementRealm'
Is this correct yes/no? yes
Added user 'slave' to file '/Users/maping/Redhat/Eap/jboss-eap-6.3-domain-controller/standalone/configuration/mgmt-users.properties'
Added user 'slave' to file '/Users/maping/Redhat/Eap/jboss-eap-6.3-domain-controller/domain/configuration/mgmt-users.properties'
Added user 'slave' with groups to file '/Users/maping/Redhat/Eap/jboss-eap-6.3-domain-controller/standalone/configuration/mgmt-groups.properties'
Added user 'slave' with groups to file '/Users/maping/Redhat/Eap/jboss-eap-6.3-domain-controller/domain/configuration/mgmt-groups.properties'
Is this new user going to be used for one AS process to connect to another AS process?
 e.g. for a slave host controller connecting to the master or for a Remoting connection for server to server EJB calls.
yes/no? yes
To represent the user add the following to the server-identities definition: <secret value="d2VsY29tZUAx" />

(2) Start the Domain Controller
./domain.sh -b 192.168.0.100 -bmanagement=192.168.0.100

2. Configuration on the Host Controller (192.168.0.105)
(1) Edit host.xml and change the host name to slave
This name must match the management user created on the Domain Controller, otherwise authentication fails.
 <host name="slave" xmlns="urn:jboss:domain:1.6">
(2) Edit host.xml and add server-identities to the ManagementRealm security-realm
Note that the secret value is the Base64-encoded password of the slave management user created on the Domain Controller.
<security-realm name="ManagementRealm">
             <server-identities>
                   <secret value="d2VsY29tZUAx" />
             </server-identities>
                <authentication>
                    <local default-user="$local" skip-group-loading="true" />
                    <properties path="mgmt-users.properties" relative-to="jboss.domain.config.dir"/>
                </authentication>
                <authorization map-groups-to-roles="false">
                    <properties path="mgmt-groups.properties" relative-to="jboss.domain.config.dir"/>
                </authorization>
            </security-realm>
(3) Edit host.xml; the domain-controller element afterwards looks like this:
 <domain-controller>
       <!-- Alternative remote domain controller configuration with a host and port -->
        <remote host="${jboss.domain.master.address}" port="${jboss.domain.master.port:9999}" security-realm="ManagementRealm"/>
    </domain-controller>
(4) Edit host.xml and rename all the servers to avoid name clashes; the servers section afterwards looks like this:
<servers>
        <server name="server-four" group="main-server-group">
            <!-- Remote JPDA debugging for a specific server
            <jvm name="default">
              <jvm-options>
                <option value="-agentlib:jdwp=transport=dt_socket,address=8787,server=y,suspend=n"/>
              </jvm-options>
           </jvm>
           -->
        </server>
        <server name="server-five" group="main-server-group" auto-start="true">
            <!-- server-two avoids port conflicts by incrementing the ports in
                 the default socket-group declared in the server-group -->
            <socket-bindings port-offset="150"/>
        </server>
        <server name="server-six" group="other-server-group" auto-start="false">
            <!-- server-three avoids port conflicts by incrementing the ports in
                 the default socket-group declared in the server-group -->
            <socket-bindings port-offset="250"/>
        </server>
</servers>
(5) Start the Host Controller
./domain.sh -Djboss.domain.master.address=192.168.0.100

3. Confirm that the Host Controller has joined the Domain Controller
(1) The Domain Controller log shows a message like:
[Host Controller] 10:40:30,852 INFO  [org.jboss.as.domain] (Host Controller Service Threads - 30) JBAS010918: Registered remote slave host "slave", JBoss EAP 6.3.0.GA (AS 7.4.0.Final-redhat-19)
(2) Open http://192.168.0.100:9990/console and switch between hosts; you can see the servers of both master and slave.

4. Domain mode architecture diagram


References:
1. http://blog.csdn.net/kylinsoong/article/details/12683745
2. http://wenku.baidu.com/link?url=wOf_twvNLN6xluIDNXIeXYGk_SOkko1bJp5kfBrcBqrg8kD1AQS2K5z29MXZrZTxms8DqIfEfGoZYPRsEIzK5scLDp6ZcNBthzxA_8vhfpe
3. http://blog.csdn.net/lzzyok/article/details/7895092
4. https://access.redhat.com/solutions/218053
5. https://docs.jboss.org/author/display/AS72/Operating+modes
6. http://jbosscn.iteye.com/blog/1045347