Monday, January 30, 2017

OpenShift_050: Deploying a .war to Tomcat 8 using a binary build

Environment: OCP 3.4

Binary builds are supported starting with OpenShift 3.2, which means a war file can be deployed directly.

1. First, confirm that the jboss-webserver30-tomcat8-openshift Image Stream was imported successfully
oc get is jboss-webserver30-tomcat8-openshift -n openshift
The output is as follows:
NAME                                  DOCKER REPO                                                                 TAGS      UPDATED
jboss-webserver30-tomcat8-openshift   registry.example.com:5000/jboss-webserver-3/webserver30-tomcat8-openshift

There is a problem with the imported Image Stream: it has no Tags, so the corresponding Image still needs to be imported.
docker tag registry.access.redhat.com/jboss-webserver-3/webserver30-tomcat8-openshift:latest registry.example.com:5000/jboss-webserver-3/webserver30-tomcat8-openshift:latest (done previously)

docker push registry.example.com:5000/jboss-webserver-3/webserver30-tomcat8-openshift (done previously)

oc import-image jboss-webserver30-tomcat8-openshift --insecure -n openshift

Check again; this time the tag is there.
oc get is jboss-webserver30-tomcat8-openshift -n openshift
The output is as follows:
NAME                                  DOCKER REPO                                                                 TAGS      UPDATED
jboss-webserver30-tomcat8-openshift   registry.example.com:5000/jboss-webserver-3/webserver30-tomcat8-openshift   latest    7 seconds ago

2. Create a new build from the image stream, specifying binary=true
oc new-build --image-stream=jboss-webserver30-tomcat8-openshift --binary=true --name=myapp

The --binary=true flag means the build will use binary input rather than source code.
The output is as follows:
--> Found image d8f7cc1 (6 weeks old) in image stream "openshift/jboss-webserver30-tomcat8-openshift" under tag "latest" for "jboss-webserver30-tomcat8-openshift"

    JBoss Web Server 3.0
    --------------------
    Platform for building and running web applications on JBoss Web Server 3.0 - Tomcat v8

    Tags: builder, java, tomcat8

    * A source build using binary input will be created
      * The resulting image will be pushed to image stream "myapp:latest"
      * A binary build was created, use 'start-build --from-dir' to trigger a new build

--> Creating resources with label build=myapp ...
    imagestream "myapp" created
    buildconfig "myapp" created
--> Success
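
To double-check that the BuildConfig really was created as a binary build (a quick sanity check, not part of the original steps), print its source type; it should output Binary:
oc get bc myapp -o jsonpath='{.spec.source.type}'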

3. Start the build
You can either point directly at the war file or at a directory containing the .war file; choose one of the two approaches.

3.1 Specifying the war file
oc start-build myapp --from-file=cluster_test_repl.war --follow=true --wait=true
The output is as follows:
Uploading file "cluster_test_repl.war" as binary input for the build ...
build "myapp-1" started
Receiving source from STDIN as file cluster_test_repl.war



Pushing image 172.30.98.122:5000/applications/myapp:latest ...
Pushed 6/7 layers, 89% complete
Pushed 7/7 layers, 100% complete
Push successful

Note that this does not actually work: the .war file is never copied into /opt/webserver/webapps.
You can see this in the build log above: no copy step is performed. This appears to be a bug.

After some experimentation, the following works:
oc start-build myapp --from-file=. --follow=true --wait=true
Note that with this approach the specified directory must contain a deployments directory, and the .war file goes inside that deployments directory (see the layout sketch after the output below). In other words, --from-file and --from-dir behave identically here.
The output is as follows:
Uploading directory "." as binary input for the build ...
build "myapp-1" started
Receiving source from STDIN as archive ...

Copying all deployments war artifacts from /home/jboss/source/deployments directory into /opt/webserver/webapps for later deployment...
'/home/jboss/source/deployments/cluster_test_repl.war' -> '/opt/webserver/webapps/cluster_test_repl.war'


Pushing image 172.30.98.122:5000/applications/myapp:latest ...
Pushed 6/7 layers, 86% complete
Pushed 7/7 layers, 100% complete
Push successful
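
For reference, the local directory layout assumed by the command above looks like this (cluster_test_repl.war being the example artifact used in this post):
.
└── deployments
    └── cluster_test_repl.war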

3.2 Specifying the directory containing the .war file
This approach can deploy multiple war files; here cluster_test_repl.war is placed in the deployments directory under the current directory.
Note that the specified directory must contain a deployments directory, and the .war files go inside it.
oc start-build myapp --from-dir=. --follow=true --wait=true

The output is as follows:
Uploading directory "." as binary input for the build ...
build "myapp-1" started
Receiving source from STDIN as archive ...

Copying all deployments war artifacts from /home/jboss/source/deployments directory into /opt/webserver/webapps for later deployment...
'/home/jboss/source/deployments/cluster_test_repl.war' -> '/opt/webserver/webapps/cluster_test_repl.war'


Pushing image 172.30.98.122:5000/applications/myapp:latest ...
Pushed 6/7 layers, 86% complete
Pushed 7/7 layers, 100% complete
Push successful

4. Create the application
oc new-app myapp --insecure-registry=true
The output is as follows:
--> Found image 6304e7c (42 seconds old) in image stream "applications/myapp" under tag "latest" for "myapp"

    applications/myapp-1:5c9b5f29
    -----------------------------
    Platform for building and running web applications on JBoss Web Server 3.0 - Tomcat v8

    Tags: builder, java, tomcat8

    * This image will be deployed in deployment config "myapp"
    * Ports 8080/tcp, 8443/tcp, 8778/tcp will be load balanced by service "myapp"
      * Other containers can access this service through the hostname "myapp"

--> Creating resources ...
    deploymentconfig "myapp" created
    service "myapp" created
--> Success
    Run 'oc status' to view your app.

5. Create the route
oc expose svc myapp

6. Access http://myapp-applications.apps.example.com/cluster_test_repl/index.jsp

7. Cleanup
If something goes wrong, run the following command to clean up, then start over:
oc delete bc/myapp is/myapp dc/myapp routes/myapp svc/myapp

References:
1. https://blog.openshift.com/binary-input-sources-openshift-3-2/
2. http://playbooks-rhtconsulting.rhcloud.com/playbooks/app_dev/binary_deployment_howto.html

Sunday, January 29, 2017

OpenShift_049: How does the Master schedule pods onto Nodes?

1. The default scheduling algorithm
The steps are as follows:
1.1 Run the predicate functions in order to filter nodes
The order is:
(1) Match the pod's node selector
If the pod's node selector is empty, every node matches.
(2) Enough resources to run the pod
If neither the project nor the pod defines default resource limits, every node matches.
(3) The ports the pod requests are available on the node
Ports defined in the pod map to TCP or UDP ports on the node.
(4) Enough host volume space
PVCs defined in the pod claim space on the node's filesystem, so there must be enough free space.
(5) Match the region label

Nodes that satisfy all of the above (every predicate must return true) move on to the next step.

1.2 Run the priority functions in order to score the remaining nodes
Each qualifying node is scored by every priority function, and each score is multiplied by that function's weight. Each score ranges from 0 to 10; the higher a node's total, the more likely it is to be scheduled.
The order is:
(1) Nodes running fewer pods score higher.
(2) Nodes not already running a pod of the same service score higher.
(3) Nodes with a different zone label score higher.

1.3 Pick the most suitable node
The final score is the sum of (each priority score * its weight).
Nodes are sorted by score; the highest-scoring node is chosen, and ties are broken by picking one of the tied nodes at random.
Also, as 1.1 and 1.2 show, the labels on nodes and pods have a big influence on which node is selected.
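Restating 1.1 to 1.3 as a formula (a restatement of the steps above, not taken from the scheduler source): for every node that passes all the predicates,
final_score(node) = Σ over all priority functions i of ( score_i(node) × weight_i ), where each score_i(node) is between 0 and 10,
and the node with the highest final_score(node) is chosen.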

Note: if no node ever qualifies, the pod is still created, but it stays in the Pending state until a qualifying node becomes available.



2. region and zone

2.1 zone
A zone represents a set of nodes that share the same physical infrastructure, such as a rack, power supply, network, or switch, or even an entire data center.
If that infrastructure fails, every node in the zone fails, and so do the pods running on them.
By default, zones are anti-affine: serviceAntiAffinity. The intent is to spread pods of the same service across different zones, so that the failure of one zone does not take down all of the application's pods.

2.2 region
A region represents a set of nodes reserved for a particular purpose, for example for a specific customer, for certain critical services, or for special hardware.
By default, regions are affine: serviceAffinity. The intent is to keep pods of the same service in the same region, reducing network traffic and latency.

Generally a region contains several zones; think of one geographic area with multiple data centers, so that a single physical failure cannot take down all of the nodes.

3. Suggested region and zone settings for a multi-master environment
oc label node master1.paascloud.example.com region=infra zone=east
oc label node master2.paascloud.example.com region=infra zone=west
oc label node node1.paascloud.example.com region=primary zone=east
oc label node node2.paascloud.example.com region=primary zone=north
oc label node node3.paascloud.example.com region=backup zone=west
oc label node node4.paascloud.example.com region=backup zone=south
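
Once the nodes are labeled, a DeploymentConfig can be pinned to a region through a nodeSelector. A minimal sketch (myapp is a placeholder name; primary is one of the regions labeled above):
oc patch dc/myapp -p '{"spec":{"template":{"spec":{"nodeSelector":{"region":"primary"}}}}}'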

4. The scheduler configuration file
cat /etc/origin/master/scheduler.json
{
   "apiVersion": "v1",
   "kind": "Policy",
   "predicates": [
       {
           "name": "NoDiskConflict"
       },
       {
           "name": "NoVolumeZoneConflict"
       },
       {
           "name": "MaxEBSVolumeCount"
       },
       {
           "name": "MaxGCEPDVolumeCount"
       },
       {
           "name": "GeneralPredicates"
       },
       {
           "name": "PodToleratesNodeTaints"
       },
       {
           "name": "CheckNodeMemoryPressure"
       },
       {
           "name": "CheckNodeDiskPressure"
       },
       {
           "name": "MatchInterPodAffinity"
       },
       {
           "argument": {
               "serviceAffinity": {
                   "labels": [
                       "region"
                   ]
               }
           },
           "name": "Region"
       }
   ],
   "priorities": [
       {
           "name": "LeastRequestedPriority",
           "weight": 1
       },
       {
           "name": "BalancedResourceAllocation",
           "weight": 1
       },
       {
           "name": "SelectorSpreadPriority",
           "weight": 1
       },
       {
           "name": "NodePreferAvoidPodsPriority",
           "weight": 10000
       },
       {
           "name": "NodeAffinityPriority",
           "weight": 1
       },
       {
           "name": "TaintTolerationPriority",
           "weight": 1
       },
       {
           "name": "InterPodAffinityPriority",
           "weight": 1
       },
       {
           "argument": {
               "serviceAntiAffinity": {
                   "label": "zone"
               }
           },
           "name": "Zone",
           "weight": 2
       }
   ]
}

5. View the default node selector
cat /etc/origin/master/master-config.yaml | grep defaultNodeSelector

6. Modify a Project's openshift.io/node-selector
oc edit -o json namespace default
The output is as follows:
{
    "kind": "Namespace",
    "apiVersion": "v1",
    "metadata": {
        "name": "default",
        "selfLink": "/api/v1/namespaces/default",
        "uid": "505e20bc-dee6-11e6-9eb6-080027fc450b",
        "resourceVersion": "875",
        "creationTimestamp": "2017-01-20T07:59:05Z",
        "annotations": {
            "openshift.io/node-selector": "",
            "openshift.io/sa.initialized-roles": "true",
            "openshift.io/sa.scc.mcs": "s0:c6,c0",
            "openshift.io/sa.scc.supplemental-groups": "1000030000/10000",
            "openshift.io/sa.scc.uid-range": "1000030000/10000"
        }
    },
    "spec": {
        "finalizers": [
            "kubernetes",
            "openshift.io/origin"
        ]
    },
    "status": {
        "phase": "Active"
    }
}

You can also set or remove it from the command line (the trailing '-' in the second command removes the annotation):
oc annotate namespace applications openshift.io/node-selector='app=yes' --overwrite
oc annotate namespace applications openshift.io/node-selector- --overwrite
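
To see what the namespace currently uses, checking the annotation is enough:
oc get namespace applications -o yaml | grep node-selector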

References:
1. https://my.oschina.net/jxcdwangtao/blog/824965
2. https://github.com/kubernetes/kubernetes/blob/3fd14d97fb13ba2849e0c908aaff18efcece70c1/pkg/api/types.go#L2146

OpenShift_048: Deploying a .war to a JBoss EAP 7 cluster using a binary build

Environment: OCP 3.4

Binary builds are supported starting with OpenShift 3.2, which means a war file can be deployed directly.

1. Create a new build from the image stream, specifying binary=true
oc new-build --image-stream=jboss-eap70-openshift --binary=true --name=myapp

The --binary=true flag means the build will use binary input rather than source code.
The output is as follows:
--> Found image 92138ab (6 weeks old) in image stream "openshift/jboss-eap70-openshift" under tag "latest" for "jboss-eap70-openshift"

    JBoss EAP 7.0
    -------------
    Platform for building and running JavaEE applications on JBoss EAP 7.0

    Tags: builder, javaee, eap, eap7

    * A source build using binary input will be created
      * The resulting image will be pushed to image stream "myapp:latest"
      * A binary build was created, use 'start-build --from-dir' to trigger a new build

--> Creating resources with label build=myapp ...
    imagestream "myapp" created
    buildconfig "myapp" created
--> Success

2. Start the build
You can either point directly at the war file or at a directory containing the .war file; choose one of the two approaches.

2.1 Specifying the war file
oc start-build myapp --from-file=cluster_test_repl.war --follow=true --wait=true
The output is as follows:
Uploading file "cluster_test_repl.war" as binary input for the build ...
build "myapp-1" started
Receiving source from STDIN as file cluster_test_repl.war

Copying all war artifacts from /home/jboss/source/. directory into /opt/eap/standalone/deployments for later deployment...
'/home/jboss/source/./cluster_test_repl.war' -> '/opt/eap/standalone/deployments/cluster_test_repl.war'
Copying all ear artifacts from /home/jboss/source/. directory into /opt/eap/standalone/deployments for later deployment...
Copying all rar artifacts from /home/jboss/source/. directory into /opt/eap/standalone/deployments for later deployment...
Copying all jar artifacts from /home/jboss/source/. directory into /opt/eap/standalone/deployments for later deployment...
Copying all war artifacts from /home/jboss/source/deployments directory into /opt/eap/standalone/deployments for later deployment...
Copying all ear artifacts from /home/jboss/source/deployments directory into /opt/eap/standalone/deployments for later deployment...
Copying all rar artifacts from /home/jboss/source/deployments directory into /opt/eap/standalone/deployments for later deployment...
Copying all jar artifacts from /home/jboss/source/deployments directory into /opt/eap/standalone/deployments for later deployment...


Pushing image 172.30.98.122:5000/applications/myapp:latest ...
Pushed 6/7 layers, 87% complete
Pushed 7/7 layers, 100% complete
Push successful

2.2 Specifying the directory containing the .war file
This approach can deploy multiple war files; here cluster_test_repl.war is placed directly in the current directory.
Note that the .war file can sit directly in the specified directory; a deployments subdirectory is not required, unlike with Tomcat 8.
oc start-build myapp --from-dir=. --follow=true --wait=true
The output is as follows:
Uploading directory "." as binary input for the build ...
build "myapp-1" started
Receiving source from STDIN as archive ...

Copying all war artifacts from /home/jboss/source/. directory into /opt/eap/standalone/deployments for later deployment...
'/home/jboss/source/./cluster_test_repl.war' -> '/opt/eap/standalone/deployments/cluster_test_repl.war'
Copying all ear artifacts from /home/jboss/source/. directory into /opt/eap/standalone/deployments for later deployment...
Copying all rar artifacts from /home/jboss/source/. directory into /opt/eap/standalone/deployments for later deployment...
Copying all jar artifacts from /home/jboss/source/. directory into /opt/eap/standalone/deployments for later deployment...
Copying all war artifacts from /home/jboss/source/deployments directory into /opt/eap/standalone/deployments for later deployment...
Copying all ear artifacts from /home/jboss/source/deployments directory into /opt/eap/standalone/deployments for later deployment...
Copying all rar artifacts from /home/jboss/source/deployments directory into /opt/eap/standalone/deployments for later deployment...
Copying all jar artifacts from /home/jboss/source/deployments directory into /opt/eap/standalone/deployments for later deployment...


Pushing image 172.30.98.122:5000/applications/myapp:latest ...
Pushed 6/7 layers, 87% complete
Pushed 7/7 layers, 100% complete
Push successful

3. Create the application
oc new-app myapp
The output is as follows:
--> Found image c3032f8 (About a minute old) in image stream "applications/myapp" under tag "latest" for "myapp"

    applications/myapp-1:4f873094
    -----------------------------
    Platform for building and running JavaEE applications on JBoss EAP 7.0

    Tags: builder, javaee, eap, eap7

    * This image will be deployed in deployment config "myapp"
    * Ports 8080/tcp, 8443/tcp, 8778/tcp will be load balanced by service "myapp"
      * Other containers can access this service through the hostname "myapp"

--> Creating resources ...
    deploymentconfig "myapp" created
    service "myapp" created
--> Success
    Run 'oc status' to view your app.

4. Create the route
oc expose svc myapp

5. Access http://myapp-applications.apps.example.com/cluster_test_repl/index.jsp

6. Configure clustering
Scale myapp out:
oc scale dc/myapp --replicas=2

Grant the view role to the default sa:
oc policy add-role-to-user view system:serviceaccount:$(oc project -q):default -n $(oc project -q)

Create the eap7-service-account sa:
oc create serviceaccount eap7-service-account -n $(oc project -q)

Grant the view role to the eap7-service-account sa:
oc policy add-role-to-user view system:serviceaccount:$(oc project -q):eap7-service-account -n $(oc project -q)

Add environment variables to the myapp DeploymentConfig:
oc env dc/myapp -e OPENSHIFT_KUBE_PING_NAMESPACE=$(oc project -q) OPENSHIFT_KUBE_PING_LABELS=app=myapp
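
To confirm the two variables landed on the DeploymentConfig (a quick check, not part of the original steps):
oc env dc/myapp --list | grep KUBE_PING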

Check the app pod logs:
oc logs -f < app pod1 name > 
oc logs -f < app pod2 name >

Refresh the page, delete the pod that is handling the requests, and keep refreshing; the counter should keep growing.
// TODO: the counter did not keep growing; it started counting from scratch again.

7. Cleanup
If something goes wrong, run the following command to clean up, then start over:
oc delete bc/myapp is/myapp dc/myapp routes/myapp svc/myapp

References:
1. https://blog.openshift.com/binary-input-sources-openshift-3-2/
2. http://labs.openshift3roadshow.com/roadshow/clustering
3. https://access.redhat.com/documentation/en/red-hat-jboss-middleware-for-openshift/3/single/red-hat-jboss-enterprise-application-platform-for-openshift/#clustering

Saturday, January 28, 2017

OpenShift_047: RoadShow Recap, Part 7: JBoss EAP 7 clustering

Environment: OCP 3.4

An OpenShift JBoss EAP cluster discovers the other JBoss EAP containers through Kubernetes and forms a cluster with them.
The relevant configuration is the JGroups protocol stack in the openshift.KUBE_PING element of the standalone-openshift.xml file.
For KUBE_PING to work correctly, the following conditions must be met:
  • The OPENSHIFT_KUBE_PING_NAMESPACE environment variable must be set
  • The OPENSHIFT_KUBE_PING_LABELS environment variable must be set
  • The sa must be granted access to the Kubernetes REST API
The default sa was already granted access to the Kubernetes REST API earlier.

Scale mlbparks to 2 pods:
oc scale dc/mlbparks --replicas=2
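
To watch the second mlbparks pod come up (assuming the pods carry the usual deploymentconfig=mlbparks label, which is what the services in these labs select on):
oc get pods -l deploymentconfig=mlbparks -w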

Clicking Open Java Console should bring up the Java console screen.
However, when I clicked Open Java Console, the following error appeared:

References:
1. http://labs.openshift3roadshow.com/roadshow/

Friday, January 27, 2017

OpenShift_046: RoadShow Recap, Part 6: Automated deployment with a CI/CD Pipeline

Environment: OCP 3.4

This lab uses a Pipeline for automated deployment, simulating development and live (production) environments; it is more involved than the earlier plain S2I flow and closer to a real-world setup.

When the configuration or source code changes, the Pipeline performs the following steps to decide whether the change is fit to move from development to production:

  • Clone the Git repository
  • Compile the code and run unit tests
  • Run S2I to build the application image
  • Deploy the application image to the development environment
  • Run automated tests against the application image in the development environment
  • Test the application image in the development environment manually
  • Wait for the development manager to approve or reject promoting this build of the application image to production

1. Because node2 is short on resources, add the label app=yes to node1:
oc label node node1.example.com app=yes

2. Create the jenkins application

Choose the Jenkins (Ephemeral) template and deploy it with the default parameters.
After the deployment succeeds, a jenkins sa is created automatically; grant it the edit role:
oc policy add-role-to-user edit -z jenkins

3. Remove the label type=parksmap-backend from the nationalparks service
The goal is to let nationalparks-live take over from nationalparks later on.
oc describe svc nationalparks
oc label svc nationalparks type-

4. Create the Live environment

4.1 Create the Live MongoDB
Choose the MongoDB (Ephemeral) template with the following parameters:

4.2 Deploy nationalparks:live
Create the nationalparks:live image stream tag:
oc tag nationalparks:latest nationalparks:live
Note: this creates a tag, nationalparks:live, pointing at the same image that nationalparks:latest currently points to.
From now on, automated builds only update the nationalparks:latest tag; only the manual command (a promotion triggered after human approval) updates the nationalparks:live tag.

After creating the nationalparks:live tag, the corresponding image still has to be deployed.


Create a route for nationalparks-live.

Add environment variables to the nationalparks-live DeploymentConfig:
oc env dc nationalparks-live -e MONGODB_USER=mongodb -e MONGODB_PASSWORD=mongodb -e MONGODB_DATABASE=mongodb -e MONGODB_SERVER_HOST=mongodb-live

Add the label type=parksmap-backend to the nationalparks-live Service:
oc describe svc nationalparks-live
oc label svc nationalparks-live type=parksmap-backend

Load data into the Live MongoDB:
http://nationalparks-live-applications.apps.example.com/ws/data/load
Confirm the Live MongoDB data loaded successfully:
http://nationalparks-live-applications.apps.example.com/ws/data/all

Access and refresh http://parksmap-applications.apps.example.com/; you should now see data coming from nationalparks-live.

5. Create the OpenShift Pipeline
Disable automatic deployment of nationalparks (dev).
By default, every build of nationalparks redeploys the nationalparks:latest image.
From now on builds go through the Pipeline, so automatic deployment of nationalparks (dev) is no longer wanted:
oc set triggers dc/nationalparks --from-image=nationalparks:latest --remove

Create the dev-live-pipeline template:
cd /tmp;
git clone http://git.example.com/git/nationalparks.git;
cd /tmp/nationalparks/ose3
oc create -f pipeline-template.yaml -n openshift

Create nationalparks-pipeline from the dev-live-pipeline template.

View the nationalparks-pipeline configuration:
node {
  stage 'Build'
  openshiftBuild(buildConfig: 'nationalparks', namespace: 'applications', showBuildLogs: 'true')
  
  stage 'Deploy Dev'
  openshiftDeploy(deploymentConfig: 'nationalparks', namespace: 'applications')

  stage 'Test Dev'
  sh 'curl -s http://nationalparks.applications.svc.cluster.local:8080/ws/data/load'
  sh 'curl -s http://nationalparks.applications.svc.cluster.local:8080/ws/data/all | grep -q "Grand Canyon National Park"'
  
  stage 'Deploy Live'
  input 'Promote the Dev image to Live?'
  openshiftTag(srcStream: 'nationalparks', srcTag: 'latest', destStream: 'nationalparks', destTag: 'live', namespace: 'applications', destinationNamespace: 'applications')
  // openshiftDeploy(deploymentConfig: 'nationalparks-live', namespace: 'applications')
}

Click Start Pipeline to start the build; jenkins reports the following error:
......
Proceeding
[Pipeline] sh
[workspace] Running shell script
+ curl -s http://nationalparks.applications.svc.cluster.local:8080/ws/data/load
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: script returned exit code 7
Finished: FAILURE
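
curl returns exit code 7 when it cannot connect to the host at all, so the Test Dev stage most likely failed to reach nationalparks.applications.svc.cluster.local:8080 from the Jenkins pod. One way to check from inside the cluster (assuming curl is available in the Jenkins image; replace the pod name with whatever oc get pods shows):
oc rsh < jenkins pod name > curl -sv http://nationalparks.applications.svc.cluster.local:8080/ws/data/load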

References:
1. http://labs.openshift3roadshow.com/roadshow/

OpenShift_045: RoadShow Recap, Part 5: Replacing environment variables with a ConfigMap

Environment: OCP 3.4

When there are many environment variables, setting and maintaining them becomes tedious; this is where a ConfigMap comes in handy.
Another benefit of a ConfigMap is that it is platform-independent, so the image can be moved around freely.

1. Clone the nationalparks source code (on the Master machine)
cd /tmp;
git clone http://git.example.com/git/nationalparks.git;

2. Create the ConfigMap nationalparks
cd /tmp/nationalparks/ose3
oc create configmap nationalparks --from-file=application.properties=./application-dev.properties
The contents of application-dev.properties are as follows:
# NationalParks MongoDB
mongodb.server.host=mongodb
mongodb.user=mongodb
mongodb.password=mongodb
mongodb.database=mongodb

3. Mount the ConfigMap nationalparks into the container
oc set volumes dc/nationalparks --add -m /opt/openshift/config --configmap-name=nationalparks
The command above places the content of the ConfigMap nationalparks, the file application.properties, into the /opt/openshift/config directory inside the container.
Once the DeploymentConfig nationalparks detects the configuration change, it redeploys automatically.
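To verify the file really shows up inside the running container (a quick check; replace the pod name with whatever oc get pods shows):
oc rsh < nationalparks pod name > cat /opt/openshift/config/application.properties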

4. Remove the environment variable configuration
Once the ConfigMap is in place, the previously configured environment variables can be removed:
oc env dc/nationalparks MONGODB_USER- MONGODB_PASSWORD- MONGODB_DATABASE- MONGODB_SERVER_HOST-

Confirm that, with the environment variables removed, the application can still connect to the database through the ConfigMap:
http://nationalparks-applications.apps.example.com/ws/data/all

After removing them, the application could not connect to the database. // TODO

References:
1. http://labs.openshift3roadshow.com/roadshow/


OpenShift_044: RoadShow Recap, Part 4: Adding MongoDB to the nationalparks application

Environment: OCP 3.4
1. Choose the MongoDB (Ephemeral) template and enter the following parameter values:

2. Add environment variables to the nationalparks DeploymentConfig
oc env dc nationalparks -e MONGODB_USER=mongodb -e MONGODB_PASSWORD=mongodb -e MONGODB_DATABASE=mongodb -e MONGODB_SERVER_HOST=mongodb

Confirm the environment variables were set successfully:
oc env dc/nationalparks --list
The output is as follows:
# deploymentconfigs nationalparks, container nationalparks
MONGODB_USER=mongodb
MONGODB_PASSWORD=mongodb
MONGODB_DATABASE=mongodb
MONGODB_SERVER_HOST=mongodb

Because the DC's environment variables changed, nationalparks redeploys automatically.
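You can watch the automatic redeployment with:
oc get pods -w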

3. Load data into the database
Query the data:
http://nationalparks-applications.apps.example.com/ws/data/all
The output is as follows:
[]

Load the data:
http://nationalparks-applications.apps.example.com/ws/data/load
The output is as follows:
Items inserted in database: 2740

Query the data again; this time there is plenty of data.

So how does nationalparks connect to the database?
cd ~/mygit
vim nationalparks/src/main/java/com/openshift/evg/roadshow/parks/db/MongoDBConnection.java

@PostConstruct
    public void initConnection() {
        String mongoHost = env.getProperty("mongodb.server.host", "127.0.0.1"); // env var MONGODB_SERVER_HOST takes precedence
        String mongoPort = env.getProperty("mongodb.server.port", "27017"); // env var MONGODB_SERVER_PORT takes precedence
        String mongoUser = env.getProperty("mongodb.user", "mongodb"); // env var MONGODB_USER takes precedence
        String mongoPassword = env.getProperty("mongodb.password", "mongodb"); // env var MONGODB_PASSWORD takes precedence
        String mongoDBName = env.getProperty("mongodb.database", "mongodb"); // env var MONGODB_DATABASE takes precedence

        try {
            String mongoURI = "mongodb://" + mongoUser + ":" + mongoPassword + "@" + mongoHost + ":" + mongoPort + "/" + mongoDBName;
            System.out.println("[INFO] Connection string: " + mongoURI);
            MongoClient mongoClient = new MongoClient(new MongoClientURI(mongoURI));
            mongoDB = mongoClient.getDatabase(mongoDBName);
        } catch (Exception e) {
            System.out.println("[ERROR] Creating the mongoDB. " + e.getMessage());
            mongoDB = null;
        }
    }


4. Add the label type=parksmap-backend to the nationalparks service
Note: the course material says to add the label type=parksmap-backend to the nationalparks route, which is wrong; it should be added to the nationalparks service.

oc label svc nationalparks type=parksmap-backend

Confirm the label was added successfully:
oc describe svc nationalparks
The output is as follows:
Name: nationalparks
Namespace: applications
Labels: app=nationalparks
type=parksmap-backend
Selector: deploymentconfig=nationalparks
Type: ClusterIP
IP: 172.30.69.152
Port: 8080-tcp 8080/TCP
Endpoints: 10.129.0.76:8080
Session Affinity: None
No events.

5. Check http://parksmap-applications.apps.example.com/ again

6. Open question
After rebooting all of the machines, only mlbparks is displayed, as if only one parksmap-backend can be picked up.
The exact cause still needs investigation.
When the labels are changed without a reboot, both mlbparks and nationalparks are displayed at the same time.

oc describe route nationalparks
oc describe svc nationalparks
oc label route nationalparks type-
oc label svc nationalparks type-
oc label route nationalparks type=parksmap-backend
oc label svc nationalparks type=parksmap-backend

oc describe route mlbparks
oc describe svc mlbparks
oc label route mlbparks type-
oc label svc mlbparks type-
oc label route mlbparks type=parksmap-backend
oc label svc mlbparks type=parksmap-backend

References:
1. http://labs.openshift3roadshow.com/roadshow/

Thursday, January 26, 2017

OpenShift_043: RoadShow Recap, Part 3: Deploying nationalparks

Environment: OCP 3.4


1. Clone the nationalparks source code (on the MAC machine)
cd ~/mygit
git clone -b 1.2.1 https://github.com/openshift-roadshow/nationalparks.git
scp -r nationalparks/ root@registry.example.com:/opt/

2. Initialize the nationalparks git repository (on the Registry machine)
mkdir -p /opt/git/repo/nationalparks.git;
cd /opt/git/repo/nationalparks.git;
git init --bare;
git update-server-info;
mv hooks/post-update.sample hooks/post-update;

3. Copy the nationalparks code and commit it (on the Registry machine)
cd /opt;
mv nationalparks nationalparks-demo

git clone file:///opt/git/repo/nationalparks.git/;

cp nationalparks-demo/* nationalparks -rf;
cp nationalparks-demo/.sti nationalparks -rf;
cp nationalparks-demo/.htaccess nationalparks -rf;
cp nationalparks-demo/.gitignore nationalparks -rf;
cd nationalparks;
git add .;
git commit -m 'initial upload';
git push origin master;

4. Download the Builder Image (on the MAC machine)
docker pull jorgemoralespou/s2i-java
docker save -o s2i-java.tar.gz jorgemoralespou/s2i-java
scp s2i-java.tar.gz root@192.168.56.112:/opt/ose/images/
Note: jorgemoralespou/s2i-java is downloaded because it is referenced in nationalparks/ose/application-template.json.

5. Load the Builder Image (on the Registry machine)
docker load -i s2i-java.tar.gz
docker tag jorgemoralespou/s2i-java:latest registry.example.com:5000/jorgemoralespou/s2i-java:latest
docker push registry.example.com:5000/jorgemoralespou/s2i-java:latest

6. Create an Image Stream for the Builder Image (on the Master machine)
cd /opt
oc create -f simple-s2i-java-is.json -n openshift
The contents of simple-s2i-java-is.json are as follows:
{
    "kind": "ImageStream",
    "apiVersion": "v1",
    "metadata": {
        "name": "simple-java-s2i",
        "namespace": "openshift",
        "creationTimestamp": null
    },
    "spec": {
        "dockerImageRepository": "registry.example.com:5000/jorgemoralespou/s2i-java",
        "tags": [
            {
                "name": "latest",
                "annotations": {
                    "description": "Simple Java 1.8 S2I builder",
                    "iconClass": "icon-jboss",
                    "supports": "java:8",
                    "tags": "builder,java",
                    "version": "1.0"
                },
                "from": {
                    "kind": "DockerImage",
                    "name": "registry.example.com:5000/jorgemoralespou/s2i-java"
                },
                "generation": 1,
                "importPolicy": {
                    "insecure": true
                }
            }
        ]
    }
}
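
After creating it, a quick check that the image stream is in place (the same kind of check used in the Tomcat 8 post):
oc get is simple-java-s2i -n openshift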

7. Deploy the nationalparks application

Don't forget to add this environment variable in the Build Config:
MAVEN_MIRROR_URL = http://192.168.56.1:8081/nexus/content/groups/public/

8. Access the nationalparks application
http://nationalparks-applications.apps.example.com/ws/info/
The output is as follows:
{"id":"nationalparks","displayName":"National Parks","center":{"latitude":"47.039304","longitude":"14.505178"},"zoom":4}

9. Cleanup
If something goes wrong, run the following command to clean up, then start over:
oc delete is/nationalparks bc/nationalparks dc/nationalparks svc/nationalparks routes/nationalparks

References:
1. http://labs.openshift3roadshow.com/roadshow/
2. https://github.com/jorgemoralespou/s2i-java
3. https://github.com/openshift-roadshow/nationalparks

OpenShift_042: RoadShow Recap, Part 2: Deploying mlbparks

Environment: OCP 3.4

The abbreviation MLB stands for Major League Baseball.

1. Download the mlbparks source code (on the MAC machine)
cd ~/mygit
git clone -b 1.0.0 https://github.com/openshift-roadshow/mlbparks.git
scp -r mlbparks/ root@registry.example.com:/opt/

2. Initialize the mlbparks git repository (on the Registry machine)
mkdir -p /opt/git/repo/mlbparks.git;
cd /opt/git/repo/mlbparks.git;
git init --bare;
git update-server-info;
mv hooks/post-update.sample hooks/post-update;

3. Copy the mlbparks code and commit it (on the Registry machine)
cd /opt;
mv mlbparks mlbparks-demo

git clone file:///opt/git/repo/mlbparks.git/;

cp mlbparks-demo/* mlbparks -rf;
cp mlbparks-demo/.sti mlbparks -rf;
cp mlbparks-demo/.htaccess mlbparks -rf;
cp mlbparks-demo/.gitignore mlbparks -rf;
cd mlbparks;
git add .;
git commit -m 'initial upload';
git push origin master;

4. Clone the mlbparks git repository and deploy the mlbparks application (on the Master)
cd /tmp;
git clone http://git.example.com/git/mlbparks.git;

Edit /tmp/mlbparks/ose3/application-template-eap.json:
Change the GIT_URI parameter value to http://git.example.com/git/mlbparks.git
Change the MAVEN_MIRROR_URL parameter value to http://192.168.56.1:8081/nexus/content/groups/public/

Create the template:
cd /tmp/mlbparks/ose3
oc create -f application-template-eap.json
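
Before instantiating the template, you can list the parameters it accepts (and confirm the edited GIT_URI and MAVEN_MIRROR_URL defaults) with:
oc process --parameters -f application-template-eap.json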

Create the jboss-eap70-openshift Image Stream
Because the template uses the jboss-eap70-openshift Image Stream, that IS must exist; otherwise the following error is reported:
An error occurred while starting the build.Error resolving ImageStreamTag jboss-eap70-openshift:1.4 in namespace openshift: imagestreams "jboss-eap70-openshift" not found
For how to create the jboss-eap70-openshift IS, see the earlier post 《部署 MyBank 到 JBoss EAP 7》 (Deploying MyBank to JBoss EAP 7).

Create the application:
oc new-app mlbparks --name=mlbparks
The output is as follows:
--> Deploying template "applications/mlbparks" to project applications

     mlbparks
     ---------
     Application template MLBParks backend running on JBoss EAP and using MongoDB

     * With parameters:
        * Application Name=mlbparks
        * Application route=
        * Mongodb App=mongodb-mlbparks
        * Git source repository=http://git.example.com/git/mlbparks.git
        * Git branch/tag reference=master
        * Maven mirror url=http://192.168.56.1:8081/nexus/content/groups/public/
        * Database name=mongodb
        * MONGODB_NOPREALLOC=
        * MONGODB_SMALLFILES=
        * MONGODB_QUIET=
        * Database user name=userdEl # generated
        * Database user password=aa7rm6gw # generated
        * Database admin password=h7adaF7C # generated
        * GitHub Trigger=5oAIAQI4 # generated
        * Generic Trigger=IcsEXf8s # generated

--> Creating resources ...
    configmap "mlbparks" created
    service "mongodb-mlbparks" created
    deploymentconfig "mongodb-mlbparks" created
    imagestream "mlbparks" created
    buildconfig "mlbparks" created
    deploymentconfig "mlbparks" created
    service "mlbparks" created
    route "mlbparks" created
--> Success
    Build scheduled, use 'oc logs -f bc/mlbparks' to track its progress.
    Run 'oc status' to view your app.

OpenShift did the following:
  • Configured and started a build
  • Cloned the git repository
  • Configured and deployed MongoDB
  • Auto-generated the user, password, and database name
  • Configured environment variables so the application can connect to MongoDB
  • Created the services
  • Added the label type=parksmap-backend to the application's route
If you don't want to edit the template, pass the GIT_URI parameter when creating the application instead:
oc new-app mlbparks --name=mlbparks -p GIT_URI=http://git.example.com/git/mlbparks.git

Even with that setting, though, the build will still fail: by default it tries to reach the public Maven repository on the internet, which is unreachable here, so it errors out.
You can add the parameter MAVEN_MIRROR_URL=http://192.168.56.1:8081/nexus/content/groups/public/ to the BuildConfig from the console,
or rebuild from the command line:
oc new-app mlbparks --name=mlbparks -p GIT_URI=http://git.example.com/git/mlbparks.git -p MAVEN_MIRROR_URL=http://192.168.56.1:8081/nexus/content/groups/public/

Note that the build takes a while, roughly 10 minutes; this time it succeeded!


5. Cleanup (on the Master)
If something goes wrong, run the following command to clean up, then start over:
oc delete configmaps/mlbparks is/mlbparks bc/mlbparks dc/mlbparks svc/mlbparks routes/mlbparks dc/mongodb-mlbparks svc/mongodb-mlbparks 

References:
1. http://labs.openshift3roadshow.com/roadshow/
2. https://github.com/openshift-roadshow/mlbparks

OpenShift_041: RoadShow Recap, Part 1: Deploying parksmap

Environment: OCP 3.4

On March 22, 2017, the OpenShift 3.4 RoadShow was held in Shanghai and was very well received.
I plan to write a series of posts that deploy all of the labs offline and summarize the key takeaways.
This is the first post.
1. First, download the image (on the MAC machine)
docker pull openshiftroadshow/parksmap:1.2.0
docker save -o parksmap-1.2.0.tar.gz openshiftroadshow/parksmap:1.2.0
scp parksmap-1.2.0.tar.gz root@192.168.56.112:/opt/ose/images/

2. Load the image (on the Registry machine)
cd /opt/ose/images/
docker load -i parksmap-1.2.0.tar.gz
docker tag openshiftroadshow/parksmap:1.2.0 registry.example.com:5000/openshiftroadshow/parksmap:1.2.0
docker push registry.example.com:5000/openshiftroadshow/parksmap:1.2.0

3. Create the parksmap application (on the Master machine)
oc project applications
oc new-app registry.example.com:5000/openshiftroadshow/parksmap:1.2.0 --name=parksmap --insecure-registry=true
Note: this step fails when done from the console, with the following error:
Could not load image metadata.
Internal error occurred: Get https://registry.example.com:5000/v2/: EOF
So instead, deploy manually and create the route by hand.

Checking the pod log then shows an error:
io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: GET at: https://kubernetes.default.svc/api/v1/namespaces/applications/pods/parksmap-1-6pnrv. Message: Forbidden!Configured service account doesn't have access. Service account may have been revoked..

Grant the view role to the default sa:
oc project applications
oc policy add-role-to-user view -z default

Redeploy manually and check the pod log again; the error above is gone.

4. Access http://parksmap-applications.apps.example.com/

5. Cleanup
If something goes wrong, run the following command to clean up, then start over:
oc delete is/parksmap dc/parksmap svc/parksmap

References:
1. http://labs.openshift3roadshow.com/roadshow/
2. https://hub.docker.com/r/openshiftroadshow/parksmap/

OpenShift_040: One-click deployment of a kafka cluster

Environment: OCP 3.4

OpenShift_039: PV still unusable after deleting all of an application's objects

Environment: OCP 3.4

While working on 《OpenShift_036:一键部署 mysql 主从集群》 (OpenShift_036: One-click deployment of a mysql master-slave cluster), I ran into a problem: after deleting all of the application's objects (route, service, dc, bc, pod, pvc), the pv is still unusable.
At this point, running oc get pv gives the following output:
NAME           CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS     CLAIM                       REASON    AGE
mysql-volume   1Gi        RWX           Retain          Released   applications/mysql-master             1h
The status is Released. Even though the pv has been released, the next time the application is created the pod stays in the Pending state, because the pvc cannot bind to the pv.

A closer look at the PV definition shows a persistentVolumeReclaimPolicy parameter, so I changed Retain to Recycle.
echo '{
  "apiVersion": "v1",
  "kind": "PersistentVolume",
  "metadata": {
    "name": "mysql-volume"
  },
  "spec": {
    "capacity": {
        "storage": "1Gi"
    },
    "accessModes": [ "ReadWriteMany" ],
    "nfs": {
        "path": "/srv/nfs/mysql-vol",
        "server": "registry.example.com"
    },
    "persistentVolumeReclaimPolicy": "Recycle"
  }
}' | oc create -f -
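
Alternatively, instead of recreating the PV, the reclaim policy of an existing PV can be patched in place (a sketch, assuming the PV mysql-volume already exists):
oc patch pv mysql-volume -p '{"spec":{"persistentVolumeReclaimPolicy":"Recycle"}}'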

After recreating the PV, it still did not work.
Running oc describe pv mysql-volume shows that a pod has to run to do the "recycling" work, and that pod needs to pull the openshift3/ose-recycler:v3.4.0.39 image.
So first download the image (on the MAC machine):
docker pull registry.access.redhat.com/openshift3/ose-recycler:v3.4.0.39 
docker save -o ose-recycler-v3.4.0.39.tar.gz registry.access.redhat.com/openshift3/ose-recycler:v3.4.0.39
scp ose-recycler-v3.4.0.39.tar.gz root@192.168.56.112:/opt/ose/images/

Load the image (on the Registry machine):
cd /opt/ose/images/
docker load -i ose-recycler-v3.4.0.39.tar.gz
docker tag registry.access.redhat.com/openshift3/ose-recycler:v3.4.0.39 registry.example.com:5000/openshift3/ose-recycler:v3.4.0.39
docker push registry.example.com:5000/openshift3/ose-recycler:v3.4.0.39

Run oc describe pv mysql-volume again; this time the Volume is recycled.


Run oc get pv again; the output is as follows:
NAME           CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS      CLAIM     REASON    AGE
mysql-volume   1Gi        RWX           Recycle         Available                       25m
This time the status has changed to Available.

Recreating the application now succeeds!

Tuesday, January 24, 2017

OpenShift_038: One-click deployment of a hadoop master-slave cluster

Environment: OCP 3.4

All of the material for this lab comes from KiwenLau and my colleague 陈耿; many thanks to both!
The hadoop cluster images built by KiwenLau are excellent!



1. Download the kiwenlau/hadoop-master:0.1.0 and kiwenlau/hadoop-slave:0.1.0 images
docker pull kiwenlau/hadoop-master:0.1.0
docker pull kiwenlau/hadoop-slave:0.1.0

2. Build the image
git clone https://github.com/nichochen/openshift3-demo-hadoop

cd openshift3-demo-hadoop/nic-hadoop-master/
make

vim Dockerfile
The Dockerfile contents are as follows:
FROM kiwenlau/hadoop-master:0.1.0

# move all configuration files into the container
ADD files/* /root/
RUN chmod +x /root/*.sh

EXPOSE 22 7373 7946 9000 50010 50020 50070 50075 50090 50475 8030 8031 8032 8033 8040 8042 8060 8088 50060

3. Export the nic-hadoop-master and hadoop-slave images (on the MAC machine)
docker save -o nic-hadoop-master.tar.gz nic-hadoop-master
scp nic-hadoop-master.tar.gz root@registry.example.com:/opt/ose/images/

docker save -o hadoop-slave.tar.gz kiwenlau/hadoop-slave:0.1.0 
scp hadoop-slave.tar.gz root@registry.example.com:/opt/ose/images/

4. Load the nic-hadoop-master and hadoop-slave images (on the Registry machine)
docker load -i nic-hadoop-master.tar.gz
docker tag nic-hadoop-master:latest registry.example.com:5000/nic-hadoop-master:latest
docker push registry.example.com:5000/nic-hadoop-master

docker load -i hadoop-slave.tar.gz
docker tag kiwenlau/hadoop-slave:0.1.0 registry.example.com:5000/kiwenlau/hadoop-slave:0.1.0
docker push registry.example.com:5000/kiwenlau/hadoop-slave:0.1.0 

5. Create the templates

5.1 Create the hadoop-master template
oc create -f hadoop-master.json -n openshift
The contents of hadoop-master.json are as follows:
{
  "kind": "Template",
  "apiVersion": "v1",
  "metadata": {
    "name": "hadoop-master",
    "creationTimestamp": null,
    "annotations": {
      "description": "Hadoop Master",
      "iconClass": "icon-mysql-database",
      "tags": "hadoop"
    }
  },
  "objects": [
          {
      "kind": "Service",
      "apiVersion": "v1",
      "metadata": {
        "name": "${APP_SERVICE_NAME}",
        "creationTimestamp": null
      },
      "spec": {
        "ports": [
          {
            "name": "app1",
            "protocol": "TCP",
            "port": 22,
            "targetPort": 22,
            "nodePort": 0
          },{
            "name": "app112",
            "protocol": "TCP",
            "port": 7373,
            "targetPort": 7373,
            "nodePort": 0
          },{
            "name": "app2",
            "protocol": "TCP",
            "port": 7946,
            "targetPort": 7946,
            "nodePort": 0
          },{
            "name": "app3",
            "protocol": "TCP",
            "port": 9000,
            "targetPort": 9000,
            "nodePort": 0
          },{
            "name": "app4",
            "protocol": "TCP",
            "port": 50010,
            "targetPort": 50010,
            "nodePort": 0
          },{
            "name": "app5",
            "protocol": "TCP",
            "port": 50020,
            "targetPort": 50020,
            "nodePort": 0
          },{
            "name": "app6",
            "protocol": "TCP",
            "port": 50070,
            "targetPort": 50070,
            "nodePort": 0
          },{
            "name": "app7",
            "protocol": "TCP",
            "port": 50075,
            "targetPort": 50075,
            "nodePort": 0
          },{
            "name": "app8",
            "protocol": "TCP",
            "port": 50090,
            "targetPort": 50090,
            "nodePort": 0
          },{
            "name": "app9",
            "protocol": "TCP",
            "port": 50475,
            "targetPort": 50475,
            "nodePort": 0
          },{
            "name": "app10",
            "protocol": "TCP",
            "port": 8030,
            "targetPort": 8030,
            "nodePort": 0
          },{
            "name": "app11",
            "protocol": "TCP",
            "port": 8031,
            "targetPort": 8031,
            "nodePort": 0
          },{
            "name": "app12",
            "protocol": "TCP",
            "port": 8032,
            "targetPort": 8032,
            "nodePort": 0
          },{
            "name": "app13",
            "protocol": "TCP",
            "port": 8033,
            "targetPort": 8033,
            "nodePort": 0
          },{
            "name": "app14",
            "protocol": "TCP",
            "port": 8040,
            "targetPort": 8040,
            "nodePort": 0
          },{
            "name": "app15",
            "protocol": "TCP",
            "port": 8042,
            "targetPort": 8042,
            "nodePort": 0
          },{
            "name": "app16",
            "protocol": "TCP",
            "port": 8060,
            "targetPort": 8060,
            "nodePort": 0
          },{
            "name": "app17",
            "protocol": "TCP",
            "port": 8088,
            "targetPort": 8088,
            "nodePort": 0
          },{
            "name": "app18",
            "protocol": "TCP",
            "port": 50060,
            "targetPort": 50060,
            "nodePort": 0
          }
        ],
        "selector": {
          "name": "${APP_SERVICE_NAME}"
        },
        "portalIP": "",
        "type": "ClusterIP",
        "sessionAffinity": "None"
      },
      "status": {
        "loadBalancer": {}
      }
    },
    {
      "kind": "DeploymentConfig",
      "apiVersion": "v1",
      "metadata": {
        "name": "${APP_SERVICE_NAME}",
        "creationTimestamp": null
      },
      "spec": {
        "strategy": {
          "type": "Recreate",
          "resources": {}
        },
        "triggers": [
                    {
            "type": "ConfigChange"
          }
        ],
        "replicas": 1,
        "selector": {
          "name": "${APP_SERVICE_NAME}"
        },
        "template": {
          "metadata": {
            "creationTimestamp": null,
            "labels": {
              "name": "${APP_SERVICE_NAME}"
            }
          },
          "spec": {
            "containers": [
              {
                "name": "hadoop-master",
                "image": "nic-hadoop-master",
                "command": ["bash","-c","/root/start-ssh-serf.sh && sleep 365d"],
                "ports": [
                  {
                    "containerPort": 22,
                    "protocol": "TCP"
                  },{
                    "containerPort": 7373,
                    "protocol": "TCP"
                  },{
                    "containerPort": 7946,
                    "protocol": "TCP"
                  },{
                    "containerPort": 9000,
                    "protocol": "TCP"
                  },{
                    "containerPort": 50010,
                    "protocol": "TCP"
                  },{
                    "containerPort": 50020,
                    "protocol": "TCP"
                  },{
                    "containerPort": 50070,
                    "protocol": "TCP"
                  },{
                    "containerPort": 50075,
                    "protocol": "TCP"
                  },{
                    "containerPort": 50090,
                    "protocol": "TCP"
                  },{
                    "containerPort": 50475,
                    "protocol": "TCP"
                  },{
                    "containerPort": 8030,
                    "protocol": "TCP"
                  },{
                    "containerPort": 8031,
                    "protocol": "TCP"
                  },{
                    "containerPort": 8032,
                    "protocol": "TCP"
                  },{
                    "containerPort": 8033,
                    "protocol": "TCP"
                  },{
                    "containerPort": 8040,
                    "protocol": "TCP"
                  },{
                    "containerPort": 8042,
                    "protocol": "TCP"
                  },{
                    "containerPort": 8060,
                    "protocol": "TCP"
                  },{
                    "containerPort": 8088,
                    "protocol": "TCP"
                  },{
                    "containerPort": 50060,
                    "protocol": "TCP"
                  }
                ],
                "env": [
                    {
                    "name": "WAR_URI",
                    "value": "${WAR_URI}"
                  } ],
                "resources": {},
                "volumeMounts": [
                                 ],
                "terminationMessagePath": "/dev/termination-log",
                "imagePullPolicy": "IfNotPresent",
                "capabilities": {},
                "securityContext": {
                  "capabilities": {},
                  "privileged": false
                }
              }
            ],
            "volumes": [
              {
                "name": "${APP_SERVICE_NAME}-data",
                "emptyDir": {
                  "medium": ""
                }
              }
            ],
            "restartPolicy": "Always",
            "dnsPolicy": "ClusterFirst"
          }
        }
      },
      "status": {}
    }
  ],
  "parameters": [
    {
      "name": "APP_SERVICE_NAME",
      "description": "Application service name",
      "value": "hadoop-master",
      "required": true
    }
  ],
  "labels": {
    "template": "app-template"
  }
}

5.2 Create the hadoop-slave template
oc create -f hadoop-slave.json -n openshift

The contents of hadoop-slave.json are as follows:
{
  "kind": "Template",
  "apiVersion": "v1",
  "metadata": {
    "name": "hadoop-slave",
    "creationTimestamp": null,
    "annotations": {
      "description": "Hadoop Slave",
      "iconClass": "icon-mysql-database",
      "tags": "hadoop"
    }
  },
  "objects": [
      {
      "kind": "Service",
      "apiVersion": "v1",
      "metadata": {
        "name": "${APP_SERVICE_NAME}",
        "creationTimestamp": null
      },
      "spec": {
        "ports": [
          {
            "name": "app1",
            "protocol": "TCP",
            "port": 22,
            "targetPort": 7373,
            "nodePort": 0
          },{
            "name": "app2",
            "protocol": "TCP",
            "port": 7946,
            "targetPort": 7946,
            "nodePort": 0
          },{
            "name": "app3",
            "protocol": "TCP",
            "port": 9000,
            "targetPort": 9000,
            "nodePort": 0
          },{
            "name": "app4",
            "protocol": "TCP",
            "port": 50010,
            "targetPort": 50010,
            "nodePort": 0
          },{
            "name": "app5",
            "protocol": "TCP",
            "port": 50020,
            "targetPort": 50020,
            "nodePort": 0
          },{
            "name": "app6",
            "protocol": "TCP",
            "port": 50070,
            "targetPort": 50070,
            "nodePort": 0
          },{
            "name": "app7",
            "protocol": "TCP",
            "port": 50075,
            "targetPort": 50075,
            "nodePort": 0
          },{
            "name": "app8",
            "protocol": "TCP",
            "port": 50090,
            "targetPort": 50090,
            "nodePort": 0
          },{
            "name": "app9",
            "protocol": "TCP",
            "port": 50475,
            "targetPort": 50475,
            "nodePort": 0
          },{
            "name": "app10",
            "protocol": "TCP",
            "port": 8030,
            "targetPort": 8030,
            "nodePort": 0
          },{
            "name": "app11",
            "protocol": "TCP",
            "port": 8031,
            "targetPort": 8031,
            "nodePort": 0
          },{
            "name": "app12",
            "protocol": "TCP",
            "port": 8032,
            "targetPort": 8032,
            "nodePort": 0
          },{
            "name": "app13",
            "protocol": "TCP",
            "port": 8033,
            "targetPort": 8033,
            "nodePort": 0
          },{
            "name": "app14",
            "protocol": "TCP",
            "port": 8040,
            "targetPort": 8040,
            "nodePort": 0
          },{
            "name": "app15",
            "protocol": "TCP",
            "port": 8042,
            "targetPort": 8042,
            "nodePort": 0
          },{
            "name": "app16",
            "protocol": "TCP",
            "port": 8060,
            "targetPort": 8060,
            "nodePort": 0
          },{
            "name": "app17",
            "protocol": "TCP",
            "port": 8088,
            "targetPort": 8088,
            "nodePort": 0
          },{
            "name": "app18",
            "protocol": "TCP",
            "port": 50060,
            "targetPort": 50060,
            "nodePort": 0
          }
        ],
        "selector": {
          "name": "${APP_SERVICE_NAME}"
        },
        "portalIP": "",
        "type": "ClusterIP",
        "sessionAffinity": "None"
      },
      "status": {
        "loadBalancer": {}
      }
    },
    {
      "kind": "DeploymentConfig",
      "apiVersion": "v1",
      "metadata": {
        "name": "${APP_SERVICE_NAME}",
        "creationTimestamp": null
      },
      "spec": {
        "strategy": {
          "type": "Recreate",
          "resources": {}
        },
        "triggers": [
                    {
            "type": "ConfigChange"
          }
        ],
        "replicas": 1,
        "selector": {
          "name": "${APP_SERVICE_NAME}"
        },
        "template": {
          "metadata": {
            "creationTimestamp": null,
            "labels": {
              "name": "${APP_SERVICE_NAME}"
            }
          },
          "spec": {
            "containers": [
              {
                "name": "hadoop-slave",
                "image": "kiwenlau/hadoop-slave:0.1.0",
                "command": ["bash","-c","export JOIN_IP=$HADOOP_MASTER_SERVICE_HOST;/root/start-ssh-serf.sh ; ssh -o StrictHostKeyChecking=no $JOIN_IP \"/root/config.sh;/root/restart.sh\"; sleep 365d"],
                "ports": [
                  {
                    "containerPort": 22,
                    "protocol": "TCP"
                  },{
                    "containerPort": 7373,
                    "protocol": "TCP"
                  },{
                    "containerPort": 7946,
                    "protocol": "TCP"
                  },{
                    "containerPort": 9000,
                    "protocol": "TCP"
                  },{
                    "containerPort": 50010,
                    "protocol": "TCP"
                  },{
                    "containerPort": 50020,
                    "protocol": "TCP"
                  },{
                    "containerPort": 50070,
                    "protocol": "TCP"
                  },{
                    "containerPort": 50075,
                    "protocol": "TCP"
                  },{
                    "containerPort": 50090,
                    "protocol": "TCP"
                  },{
                    "containerPort": 50475,
                    "protocol": "TCP"
                  },{
                    "containerPort": 8030,
                    "protocol": "TCP"
                  },{
                    "containerPort": 8031,
                    "protocol": "TCP"
                  },{
                    "containerPort": 8032,
                    "protocol": "TCP"
                  },{
                    "containerPort": 8033,
                    "protocol": "TCP"
                  },{
                    "containerPort": 8040,
                    "protocol": "TCP"
                  },{
                    "containerPort": 8042,
                    "protocol": "TCP"
                  },{
                    "containerPort": 8060,
                    "protocol": "TCP"
                  },{
                    "containerPort": 8088,
                    "protocol": "TCP"
                  },{
                    "containerPort": 50060,
                    "protocol": "TCP"
                  }
                ],
                "env": [
                  ],
                "resources": {},
                "volumeMounts": [
                                 ],
                "terminationMessagePath": "/dev/termination-log",
                "imagePullPolicy": "IfNotPresent",
                "capabilities": {},
                "securityContext": {
                  "capabilities": {},
                  "privileged": false
                }
              }
            ],
            "volumes": [
              {
                "name": "${APP_SERVICE_NAME}-data",
                "emptyDir": {
                  "medium": ""
                }
              }
            ],
            "restartPolicy": "Always",
            "dnsPolicy": "ClusterFirst"
          }
        }
      },
      "status": {}
    }
  ],
  "parameters": [
    {
      "name": "APP_SERVICE_NAME",
      "description": "Application service name",
      "value": "hadoop-slave",
      "required": true
    }
  ],
  "labels": {
    "template": "hadoop-slave-template"
  }
}


6. Create the applications

6.1 Create an application from the hadoop-master template:
oc new-app --template=hadoop-master

6.2 Create an application from the hadoop-slave template:
oc new-app --template=hadoop-slave
 
7. Go into the hadoop-master pod and run:
bash
serf members
cd
./start-hadoop.sh
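
Once start-hadoop.sh finishes, one way to confirm the slave joined HDFS (assuming the hadoop client tools are on the PATH inside the master container, which appears to be the case for the upstream kiwenlau image):
hdfs dfsadmin -report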

8. Tear down the hadoop master-slave cluster (on the Master machine):
oc delete svc $(oc get svc | grep hadoop |awk '{print $1}')
oc delete dc $(oc get dc | grep hadoop |awk '{print $1}')
oc delete rc $(oc get rc | grep hadoop |awk '{print $1}')
oc delete routes $(oc get routes | grep hadoop |awk '{print $1}')
oc delete pod $(oc get pod | grep hadoop |awk '{print $1}')

References:
1. https://github.com/nichochen/openshift3-demo-hadoop
2. http://kiwenlau.com/2016/06/12/160612-hadoop-cluster-docker-update/