Friday, February 24, 2017

OpenShift_060: Building an Application with a Jenkins Pipeline

Environment: OCP 3.4

1. Download the Jenkins images (on the Mac machine)
See "Offline OCP 3.4 Installation: Downloading the Installation Media".

docker save -o jenkins-slave.tar.gz registry.access.redhat.com/openshift3/jenkins-slave-base-rhel7 registry.access.redhat.com/openshift3/jenkins-slave-maven-rhel7 registry.access.redhat.com/openshift3/jenkins-slave-nodejs-rhel7

2. Load the Jenkins images (on the Registry machine)

docker load -i jenkins-slave.tar.gz

docker tag registry.access.redhat.com/openshift3/jenkins-slave-base-rhel7 registry.example.com:5000/openshift3/jenkins-slave-base-rhel7
docker push registry.example.com:5000/openshift3/jenkins-slave-base-rhel7

docker tag registry.access.redhat.com/openshift3/jenkins-slave-maven-rhel7 registry.example.com:5000/openshift3/jenkins-slave-maven-rhel7
docker push registry.example.com:5000/openshift3/jenkins-slave-maven-rhel7

docker tag registry.access.redhat.com/openshift3/jenkins-slave-nodejs-rhel7 registry.example.com:5000/openshift3/jenkins-slave-nodejs-rhel7
docker push registry.example.com:5000/openshift3/jenkins-slave-nodejs-rhel7

3. Create the myphp application, but disable the automatic build and automatic deploy (in the Console)
See "Offline OCP 3.4 Installation: Testing a PHP Application".
Click Show advanced routing, build, deployment and source options to expand it, and uncheck the following four items (see the CLI sketch after this list):
  • Automatically build a new image when the builder image changes
  • Launch the first build when the build configuration is created
  • New image is available
  • Deployment configuration changes
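
If the application objects were already created with the default triggers, roughly the same effect can be achieved from the CLI (a sketch, assuming both the BuildConfig and the DeploymentConfig are named myphp):

oc set triggers bc/myphp --remove-all
oc set triggers dc/myphp --remove-all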

4. Create the pipeline from a .yml file

The pipeline.yml file contains:

apiVersion: v1
kind: BuildConfig
metadata:
  name: myfirstpipeline
  labels:
    name: myfirstpipeline
  annotations:
    pipeline.alpha.openshift.io/uses: '[{"name": "myphp", "namespace": "", "kind": "DeploymentConfig"}]'
spec:
  triggers:
    -
      type: GitHub
      github:
        secret: secret101
    -
      type: Generic
      generic:
        secret: secret101
  runPolicy: Serial
  source:
    type: None
  strategy:
    type: JenkinsPipeline
    jenkinsPipelineStrategy:
      jenkinsfile: "node('maven') {\nstage 'build'\nopenshiftBuild(buildConfig: 'myphp', showBuildLogs: 'true')\nstage 'deploy'\nopenshiftDeploy(deploymentConfig: 'myphp')\nopenshiftScale(deploymentConfig: 'myphp',replicaCount: '2')\n}"
  output:
  resources:
  postCommit:
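
For readability, the inline jenkinsfile above is the following scripted pipeline (identical content, just unescaped):

node('maven') {
  stage 'build'
  openshiftBuild(buildConfig: 'myphp', showBuildLogs: 'true')
  stage 'deploy'
  openshiftDeploy(deploymentConfig: 'myphp')
  openshiftScale(deploymentConfig: 'myphp', replicaCount: '2')
}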

After this is created, you will see that jenkins and jenkins-jnlp pods have been created and started in the current project.

5. Select Builds -> Pipeline -> Start Build


Watch the pipeline build progress.

Click View Log and log in at https://jenkins-applications.apps.example.com to see the build log.



6. Visit http://myphp-applications.apps.example.com/

References:
1. https://blog.openshift.com/cicd-with-openshift/
2. https://github.com/VeerMuchandi/pipeline-example
3. https://github.com/OpenShiftDemos/openshift-cd-demo

Tuesday, February 21, 2017

OpenShift_059: Deploying helloworld-msa to OpenShift Origin

Environment: Mac OS X 10.12.3 + OpenShift Origin 1.4.1

Note: the installation needs Internet access, and you should not be connected to the VPN.

1. Download the images
docker pull fabric8/java-jboss-openjdk8-jdk:1.2.1
docker pull ryanj/centos7-nodejs:6.4.0
docker pull jboss/base-jdk:8
docker pull registry.access.redhat.com/openshift3/nodejs-010-rhel7
docker pull registry.access.redhat.com/jboss-eap-7/eap70-openshift

2. Make sure each application builds successfully
See "OpenShift_057: Building Each Microservice and Its Image Manually".
Note that every base image referenced by FROM in the Dockerfiles must already exist locally.


3. Start OpenShift Origin
oc cluster up --host-data-dir=/Users/maping/mygit/redhat-helloworld-msa --use-existing-config
oc login -u system:admin https://127.0.0.1:8443
oc adm policy add-cluster-role-to-user admin admin

4. Installation

4.1 Modify helloworld-msa.yml
Because the Dockerfiles had already been modified and every application rebuilt, I removed the following tasks:
- name: Create Workdir
- name: Checkout source code from Github
- name: Compile Java Projects
- name: NPM install NodeJS Projects
 
4.2 Modify vars.yml
I changed the following entries for my environment:
  • workdir: /Users/maping/mygit/redhat-helloworld-msa
  • openshift: 127.0.0.1:8443
  • username: developer
  • password: developer
  • admin_username: admin
  • admin_password: admin
  • domain: 192.168.56.1.xip.io
  • adjust_scc: false
  • project_name: helloworld-msa
  • clone_modules: true
  • deploy_jenkins: false
  • deploy_hystrix: true
  • deploy_zipkin: true
  • deploy_hawkular_apm: false
  • deploy_keycloak: false
4.3 Run the playbook to install
ansible-playbook helloworld-msa.yml
This takes around 20 minutes, so be patient.

oc get pod | grep Running
Output:
aloha-2-teufc             1/1       Running     0          23m
api-gateway-1-crkct       1/1       Running     0          23m
bonjour-2-f7o8e           1/1       Running     0          23m
frontend-3-p8ord          1/1       Running     0          20m
hola-1-nn2gj              1/1       Running     0          23m
hystrix-dashboard-8xiz5   1/1       Running     0          18m
ola-2-s5q3c               1/1       Running     0          23m
turbine-server-ofboi      1/1       Running     0          23m
zipkin-mysql-owh4f        1/1       Running     0          23m
zipkin-query-sc6ln        1/1       Running     0          23m

The hystrix-dashboard-8xiz5 log shows an error caused by a permission problem:
2017-02-21 13:34:20.325:WARN:oejuc.FileNoticeLifeCycleListener:main:
java.io.FileNotFoundException: /opt/jetty/jetty.state (Permission denied)
at java.io.FileOutputStream.open0(Native Method)
at java.io.FileOutputStream.open(FileOutputStream.java:270)
at java.io.FileOutputStream.<init>(FileOutputStream.java:213)
at java.io.FileOutputStream.<init>(FileOutputStream.java:133)
at java.io.FileWriter.<init>(FileWriter.java:78)
at org.eclipse.jetty.util.component.FileNoticeLifeCycleListener.writeState(FileNoticeLifeCycleListener.java:45)
at org.eclipse.jetty.util.component.FileNoticeLifeCycleListener.lifeCycleStarted(FileNoticeLifeCycleListener.java:62)

oc login -u system:admin https://127.0.0.1:8443
oc adm policy add-scc-to-user anyuid -z ribbon
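
To confirm the grant took effect, the ribbon service account should now appear in the anyuid SCC (a quick check):
oc get scc anyuid -o yaml | grep ribbon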

Delete the hystrix-dashboard pod; a new one is created automatically. Confirm it starts without errors.
oc delete pod  hystrix-dashboard-8xiz5

5. Run







Monday, February 20, 2017

Jenkins_008: Building a Maven Project with a Pipeline

Environment: Mac OS X 10.12.3 + Jenkins 2.47

1. Create a new item and choose the Pipeline type

2. Configure the Pipeline
https://github.com/TTFHW/jenkins_pipeline_java_maven.git





3. Build

4. Review the Jenkinsfile
node {
   // Mark the code checkout 'stage'....
   stage 'Checkout'
  
   git url: 'https://github.com/TTFHW/jenkins_pipeline_java_maven.git'

   // Get the maven tool.
   // ** NOTE: This 'M3' maven tool must be configured
   // **       in the global configuration.          
   def mvnHome = tool 'M3'

   // Mark the code build 'stage'....
   stage 'Build'
   // Run the maven build
   sh "${mvnHome}/bin/mvn -Dmaven.test.failure.ignore clean package"
   step([$class: 'JUnitResultArchiver', testResults: '**/target/surefire-reports/TEST-*.xml'])
}
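
Note that the Jenkinsfile above uses the older scripted stage syntax (stage 'Build'); on newer Jenkins 2.x versions the block form is preferred. A rough equivalent sketch (same repository, same 'M3' tool name, and the JUnit plugin's junit step assumed):

node {
   stage('Checkout') {
      git url: 'https://github.com/TTFHW/jenkins_pipeline_java_maven.git'
   }
   stage('Build') {
      // 'M3' must still be configured under Global Tool Configuration
      def mvnHome = tool 'M3'
      sh "${mvnHome}/bin/mvn -Dmaven.test.failure.ignore clean package"
      junit '**/target/surefire-reports/TEST-*.xml'
   }
}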

References:
1.  https://github.com/TTFHW

Jenkins_007: Getting Started with a Pipeline

Environment: Mac OS X 10.12.3 + Jenkins 2.47

1. Create a new item and choose the Pipeline type

2. Configure the Pipeline
https://github.com/TTFHW/jenkins_pipeline_hello.git


3. Build

4. Review the Jenkinsfile

node {
   stage 'Stage 1'
           echo 'Hello World 1'
   stage 'Stage 2'
           echo 'Hello World 2'
}

References:
1.  https://github.com/TTFHW

Jenkins_006: Configuring Global Tool Configuration

Environment: Mac OS X 10.12.3 + Jenkins 2.47

First update to the latest Jenkins 2 release and the latest plugins, to avoid problems caused by outdated versions.

1. Configure JDK


2. Configure Git

3. Configure Gradle

4. Configure Ant



5. Configure Maven



Sunday, February 19, 2017

OpenShift_058: Deploying Hawkular APM to OpenShift Origin (Dockerfile approach)

Environment: Mac OS X 10.12.3 + OpenShift Origin 1.4.1 + Hawkular APM 0.14.0.Final

1. Download the base image
docker pull jboss/base-jdk:8

2. Start OpenShift Origin
oc cluster up
oc login -u system:admin https://127.0.0.1:8443
oc adm policy add-cluster-role-to-user admin admin

3. Deploy Hawkular APM
git clone https://github.com/jboss-dockerfiles/hawkular-apm.git (already done)
cd hawkular-apm/hawkular-apm-server

Modify the Dockerfile:
  change ENV HAWKULAR_APM_VERSION 0.14.1.Final
  to ENV HAWKULAR_APM_VERSION 0.14.0.Final
  and replace the download of hawkular-apm-dist-0.14.0.Final.zip from the Internet with a local copy:
  # Download Hawkular-APM from github
  COPY hawkular-apm-dist-$HAWKULAR_APM_VERSION.zip $HOME

  RUN cd $HOME \
  #    && curl -O -L https://github.com/hawkular/hawkular-apm/releases/download/$HAWKULAR_APM_VERSION/hawkular-apm-dist-$HAWKULAR_APM_VERSION.zip \
      && unzip -d $JBOSS_HOME hawkular-apm-dist-$HAWKULAR_APM_VERSION.zip \
      && rm hawkular-apm-dist-$HAWKULAR_APM_VERSION.zip

cp  ~/Tools/hawkular/apm/hawkular-apm-dist-0.14.0.Final.zip .

oc new-build --binary --name=hawkular-apm-server
oc start-build hawkular-apm-server --from-dir=. --follow
oc new-app hawkular-apm-server
oc expose service hawkular-apm-server

4. Verify the Hawkular APM deployment
oc get pod
Output:
NAME                          READY     STATUS      RESTARTS   AGE
hawkular-apm-server-1-build   0/1       Completed   0          35m
hawkular-apm-server-1-so6c1   1/1       Running     0          34m
 

oc logs  hawkular-apm-server-1-so6c1
Look for the following credentials in the log:
Username: adminN77iFln
Password: ijbkLNCGbYI2b27cv

oc get route
Output:
NAME                  HOST/PORT                                            PATH      SERVICES              PORT       TERMINATION
hawkular-apm-server   hawkular-apm-server-myproject.192.168.1.109.xip.io             hawkular-apm-server   8080-tcp

5. Start JBoss EAP 7 (jboss-helloworld and jboss-helloworld-mdb are already deployed)
Before starting JBoss EAP 7, set the Hawkular APM environment variables. This step is essential!
. ~/Tools/hawkular/apm/dist/apm/setenv.sh 9411
 

export HAWKULAR_APM_URI=http://hawkular-apm-server-myproject.192.168.1.109.xip.io:80
export HAWKULAR_APM_USERNAME=adminN77iFln
export HAWKULAR_APM_PASSWORD=ijbkLNCGbYI2b27cv

cd /Users/maping/Redhat/eap/demo/7.0/2017-02-04
./jboss-eap-7.0/bin/standalone.sh -Djboss.server.base.dir=./myeap -c standalone-full.xml -Djboss.socket.binding.port-offset=10000

6. Access Hawkular APM
Refresh http://localhost:18080/jboss-helloworld/HelloWorld
Refresh http://localhost:18080/jboss-helloworld-mdb/HelloWorldMDBServletClient
Refresh http://hawkular-apm-server-myproject.192.168.1.109.xip.io/



References:
1. http://www.hawkular.org/blog/2016/11/25/hawkular-apm-on-openshift.html
2. http://www.hawkular.org/blog/2016/07/14/hawkular-apm-openshift.html
3. https://hawkular.gitbooks.io/hawkular-apm-user-guide/content/quickstart/

Saturday, February 18, 2017

Docker_017: Configuring a Proxy to Pull Images

Environment: Mac OS X 10.12.3 + Docker 1.13.1

Pulling images from Docker Hub is slow; after connecting to the VPN, you can configure a proxy to speed up downloads.

Note: once the downloads are finished, set the proxy back to No proxy, otherwise oc cluster up will not start.

Thursday, February 16, 2017

OpenShift_057: Building Each Microservice and Its Image Manually

Environment: OCP 3.4

Note: this microservices example uses a lot of memory, so give the Node that runs the microservices at least 4 GB of RAM.

cd ~/mygit/redhat-helloworld-msa

1. Create an admin user and grant it privileges (on the Master machine)
htpasswd -b /etc/origin/master/htpasswd admin admin
oadm policy add-cluster-role-to-user admin admin

oc adm policy add-cluster-role-to-user admin admin

2. Download the microservice base images (on the Mac machine)
docker pull fabric8/java-jboss-openjdk8-jdk:1.2.1
docker pull ryanj/centos7-nodejs:6.4.0
docker pull jboss/base-jdk:8
docker pull registry.access.redhat.com/openshift3/nodejs-010-rhel7 (already downloaded earlier)
docker pull registry.access.redhat.com/jboss-eap-7/eap70-openshift (already downloaded earlier)

docker save -o redhat-msa-basic.tar.gz fabric8/java-jboss-openjdk8-jdk:1.2.1 ryanj/centos7-nodejs:6.4.0 jboss/base-jdk:8

scp redhat-msa-basic.tar.gz root@192.168.56.112:/opt/ose/images/

3. Load the microservice base images (on the Registry machine)
cd /opt/ose/images
docker load -i redhat-msa-basic.tar.gz

docker tag fabric8/java-jboss-openjdk8-jdk:1.2.1 registry.example.com:5000/fabric8/java-jboss-openjdk8-jdk:1.2.1
docker push registry.example.com:5000/fabric8/java-jboss-openjdk8-jdk:1.2.1

docker tag ryanj/centos7-nodejs:6.4.0 registry.example.com:5000/ryanj/centos7-nodejs:6.4.0
docker push registry.example.com:5000/ryanj/centos7-nodejs:6.4.0

docker tag jboss/base-jdk:8 registry.example.com:5000/jboss/base-jdk:8
docker push registry.example.com:5000/jboss/base-jdk:8

4. Log in and create the helloworld-msa project (on the Mac machine)
oc login -u admin -p admin https://master.example.com:8443
oc new-project helloworld-msa

5. Build each microservice and its image manually (on the Mac machine)
 
5.1 Build the hola microservice
. ~/setJdk8Env.sh
git clone https://github.com/redhat-helloworld-msa/hola
cd hola/
mvn clean package
Modify the Dockerfile:
  change FROM fabric8/java-jboss-openjdk8-jdk:1.2.1
  to FROM registry.example.com:5000/fabric8/java-jboss-openjdk8-jdk:1.2.1
oc new-build --binary --name=hola -l app=hola
oc start-build hola --from-dir=. --follow
oc new-app hola -l app=hola,hystrix.enabled=true
oc expose service hola

Enable the readiness probe
oc set probe dc/hola --readiness --get-url=http://:8080/api/health
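
The same health endpoint the probe uses can also be checked by hand (assuming the route above resolves from the Mac):
curl http://hola-helloworld-msa.apps.example.com/api/health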

Test
http://hola-helloworld-msa.apps.example.com/api/hola
Output:
Hola de hola-1-molbq

5.2 Build the aloha microservice
. ~/setJdk8Env.sh
git clone https://github.com/redhat-helloworld-msa/aloha
cd aloha/
mvn clean package
Modify the Dockerfile:
  change FROM fabric8/java-jboss-openjdk8-jdk:1.2.1
  to FROM registry.example.com:5000/fabric8/java-jboss-openjdk8-jdk:1.2.1
oc new-build --binary --name=aloha -l app=aloha
oc start-build aloha --from-dir=. --follow
oc new-app aloha -l app=aloha,hystrix.enabled=true
oc expose service aloha

Enable jolokia and the readiness probe
oc env dc/aloha AB_ENABLED=jolokia; oc patch dc/aloha -p '{"spec":{"template":{"spec":{"containers":[{"name":"aloha","ports":[{"containerPort": 8778,"name":"jolokia"}]}]}}}}'
oc set probe dc/aloha --readiness --get-url=http://:8080/api/health
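
The oc patch one-liner above just adds the jolokia container port to the deployment config; expanded for readability, the JSON patch is:
{
  "spec": {
    "template": {
      "spec": {
        "containers": [
          {"name": "aloha", "ports": [{"containerPort": 8778, "name": "jolokia"}]}
        ]
      }
    }
  }
}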

Test
http://aloha-helloworld-msa.apps.example.com/api/aloha
Output:
Aloha mai aloha-3-grw22

5.3 Build the ola microservice
. ~/setJdk8Env.sh
git clone https://github.com/redhat-helloworld-msa/ola
cd ola/
mvn clean package
Modify the Dockerfile:
  change FROM fabric8/java-jboss-openjdk8-jdk:1.2.1
  to FROM registry.example.com:5000/fabric8/java-jboss-openjdk8-jdk:1.2.1
oc new-build --binary --name=ola -l app=ola
oc start-build ola --from-dir=. --follow
oc new-app ola -l app=ola,hystrix.enabled=true
oc expose service ola

Enable jolokia and the readiness probe
oc env dc/ola AB_ENABLED=jolokia; oc patch dc/ola -p '{"spec":{"template":{"spec":{"containers":[{"name":"ola","ports":[{"containerPort": 8778,"name":"jolokia"}]}]}}}}'
oc set probe dc/ola --readiness --get-url=http://:8080/api/health

Test
http://ola-helloworld-msa.apps.example.com/api/ola
Output:
Olá de ola-3-28zue

5.4 Build the bonjour microservice
git clone https://github.com/redhat-helloworld-msa/bonjour
cd bonjour/
npm install
Modify the Dockerfile:
  change FROM ryanj/centos7-nodejs:6.4.0
  to FROM registry.example.com:5000/ryanj/centos7-nodejs:6.4.0
oc new-build --binary --name=bonjour -l app=bonjour
oc start-build bonjour --from-dir=. --follow
oc new-app bonjour -l app=bonjour
oc expose service bonjour

Enable the readiness probe
oc set probe dc/bonjour --readiness --get-url=http://:8080/api/health

Test
http://bonjour-helloworld-msa.apps.example.com/api/bonjour
Output:
Bonjour de bonjour-2-p57za

5.5 Build the api-gateway microservice
. ~/setJdk8Env.sh 
git clone https://github.com/redhat-helloworld-msa/api-gateway
cd api-gateway/
mvn clean package
oc new-build --binary --name=api-gateway -l app=api-gateway
oc start-build api-gateway --from-dir=. --follow
oc new-app api-gateway -l app=api-gateway,hystrix.enabled=true
oc expose service api-gateway

Enable jolokia and the readiness probe
oc env dc/api-gateway AB_ENABLED=jolokia; oc patch dc/api-gateway -p '{"spec":{"template":{"spec":{"containers":[{"name":"api-gateway","ports":[{"containerPort": 8778,"name":"jolokia"}]}]}}}}'
oc set probe dc/api-gateway --readiness --get-url=http://:8080/health

Test
http://api-gateway-helloworld-msa.apps.example.com/api
Output:
["Aloha mai aloha-3-1p2ox","Bonjour de bonjour-2-p57za","Hola de hola-2-mdu7z","Olá de ola-3-50nsy"]

5.6 Build the frontend microservice
git clone https://github.com/redhat-helloworld-msa/frontend
cd frontend/
npm install
Modify the Dockerfile:
  change FROM registry.access.redhat.com/openshift3/nodejs-010-rhel7
  to FROM registry.example.com:5000/openshift3/nodejs-010-rhel7
  and change ENV OS_SUBDOMAIN='rhel-cdk.10.1.2.2.xip.io' \
  to ENV OS_SUBDOMAIN='apps.example.com' \
oc new-build --binary --name=frontend -l app=frontend
oc start-build frontend --from-dir=. --follow
oc new-app frontend -l app=frontend
oc expose service frontend

Test
http://frontend-helloworld-msa.apps.example.com/
Output:

5.7 Build the kubeflix microservice
oc create -f http://central.maven.org/maven2/io/fabric8/kubeflix/packages/kubeflix/1.0.17/kubeflix-1.0.17-kubernetes.yml
oc new-app kubeflix
oc expose service hystrix-dashboard --port=8080
oc policy add-role-to-user admin system:serviceaccount:helloworld-msa:turbine

Enable the hystrix dashboard
oc env dc/frontend ENABLE_HYSTRIX=true

Test
http://hystrix-dashboard-helloworld-msa.apps.example.com/
Output:

5.8 Build the zipkin microservice
oc create -f http://repo1.maven.org/maven2/io/fabric8/zipkin/zipkin-starter-minimal/0.0.8/zipkin-starter-minimal-0.0.8-kubernetes.yml
oc expose service zipkin-query

Enable the zipkin dashboard in the frontend
oc env dc/frontend ENABLE_ZIPKIN=true

Test
http://zipkin-query-helloworld-msa.apps.example.com/

References:
1. https://youtu.be/SPATMHP-xw8
2. http://bit.ly/helloworldmsa
3. https://github.com/redhat-helloworld-msa
4. http://developers.redhat.com/products/cdk/docs-and-apis/

OpenShift_056: Deploying the redhat-helloworld-msa Microservices from Existing Images

Environment: OCP 3.4

https://github.com/redhat-helloworld-msa/ is an excellent Red Hat example of microservices running on OpenShift.
This post deploys the microservices from the images already published on Docker Hub.

1. Download the microservice images (on the Mac machine)
docker pull redhatmsa/hola
docker pull redhatmsa/frontend
docker pull redhatmsa/api-gateway
docker pull redhatmsa/namaste
docker pull redhatmsa/ola
docker pull redhatmsa/aloha
docker pull redhatmsa/hello
docker pull redhatmsa/bonjour
docker pull fabric8/hystrix-dashboard:1.0.17
docker pull fabric8/turbine-server:1.0.17
docker pull fabric8/zipkin-mysql:0.1.1
docker pull openzipkin/zipkin-query:1.40.2

docker save -o redhat-msa.tar.gz redhatmsa/hola redhatmsa/frontend redhatmsa/api-gateway redhatmsa/namaste redhatmsa/ola redhatmsa/aloha redhatmsa/hello redhatmsa/bonjour fabric8/hystrix-dashboard:1.0.17 fabric8/turbine-server:1.0.17 fabric8/zipkin-mysql:0.1.1 openzipkin/zipkin-query:1.40.2

scp redhat-msa.tar.gz root@192.168.56.112:/opt/ose/images/

2. Load the microservice images (on the Registry machine)
cd /opt/ose/images
docker load -i redhat-msa.tar.gz

docker tag redhatmsa/hola:latest registry.example.com:5000/redhatmsa/hola:latest
docker push registry.example.com:5000/redhatmsa/hola

docker tag redhatmsa/frontend:latest registry.example.com:5000/redhatmsa/frontend:latest
docker push registry.example.com:5000/redhatmsa/frontend

docker tag redhatmsa/api-gateway:latest registry.example.com:5000/redhatmsa/api-gateway:latest
docker push registry.example.com:5000/redhatmsa/api-gateway

docker tag redhatmsa/namaste:latest registry.example.com:5000/redhatmsa/namaste:latest
docker push registry.example.com:5000/redhatmsa/namaste

docker tag redhatmsa/ola:latest registry.example.com:5000/redhatmsa/ola:latest
docker push registry.example.com:5000/redhatmsa/ola

docker tag redhatmsa/aloha:latest registry.example.com:5000/redhatmsa/aloha:latest
docker push registry.example.com:5000/redhatmsa/aloha

docker tag redhatmsa/hello:latest registry.example.com:5000/redhatmsa/hello:latest
docker push registry.example.com:5000/redhatmsa/hello

docker tag redhatmsa/bonjour:latest registry.example.com:5000/redhatmsa/bonjour:latest
docker push registry.example.com:5000/redhatmsa/bonjour

docker tag fabric8/hystrix-dashboard:1.0.17 registry.example.com:5000/fabric8/hystrix-dashboard:1.0.17
docker push registry.example.com:5000/fabric8/hystrix-dashboard:1.0.17

docker tag fabric8/turbine-server:1.0.17 registry.example.com:5000/fabric8/turbine-server:1.0.17
docker push registry.example.com:5000/fabric8/turbine-server:1.0.17

docker tag fabric8/zipkin-mysql:0.1.1 registry.example.com:5000/fabric8/zipkin-mysql:0.1.1
docker push registry.example.com:5000/fabric8/zipkin-mysql:0.1.1

docker tag openzipkin/zipkin-query:1.40.2 registry.example.com:5000/openzipkin/zipkin-query:1.40.2
docker push registry.example.com:5000/openzipkin/zipkin-query:1.40.2

3. Create an admin user and grant it privileges (on the Master machine)
htpasswd -b /etc/origin/master/htpasswd admin admin
oadm policy add-cluster-role-to-user admin admin

4. Install and configure each microservice (on the Master machine)

oc new-project helloworld-msa

(1)hola
oc new-app registry.example.com:5000/redhatmsa/hola --insecure-registry
oc expose svc hola --hostname=hola-helloworld-msa.rhel-cdk.10.1.2.2.xip.io
curl http://hola-helloworld-msa.rhel-cdk.10.1.2.2.xip.io/api/hola (on the Mac)
(2)namaste
oc new-app registry.example.com:5000/redhatmsa/namaste --insecure-registry
oc expose svc namaste --hostname=namaste-helloworld-msa.rhel-cdk.10.1.2.2.xip.io
curl http://namaste-helloworld-msa.rhel-cdk.10.1.2.2.xip.io/api/namaste (on the Mac)

The service created by new-app exposes the wrong port, 8778 (the image's jolokia port), instead of 8080; recreate the service and route on 8080:
oc get svc -o wide
oc delete svc namaste

oc expose dc namaste --port=8080

oc delete route namaste
oc expose svc namaste --hostname=namaste-helloworld-msa.rhel-cdk.10.1.2.2.xip.io
(3)ola
oc new-app registry.example.com:5000/redhatmsa/ola --insecure-registry
oc expose svc ola --hostname=ola-helloworld-msa.rhel-cdk.10.1.2.2.xip.io
curl http://ola-helloworld-msa.rhel-cdk.10.1.2.2.xip.io/api/ola (on the Mac)

Again the service exposes the wrong port, 8778, instead of 8080; fix it the same way:
oc get svc -o wide
oc delete svc ola

oc expose dc ola --port=8080
oc delete route ola
oc expose svc ola --hostname=ola-helloworld-msa.rhel-cdk.10.1.2.2.xip.io
(4)aloha
oc new-app registry.example.com:5000/redhatmsa/aloha --insecure-registry
oc expose svc aloha --hostname=aloha-helloworld-msa.rhel-cdk.10.1.2.2.xip.io
curl http://aloha-helloworld-msa.rhel-cdk.10.1.2.2.xip.io/api/aloha (on the Mac)

Again the service exposes the wrong port, 8778, instead of 8080:
oc get svc -o wide
oc delete svc aloha
oc expose dc aloha --port=8080
oc delete route aloha 
oc expose svc aloha --hostname=aloha-helloworld-msa.rhel-cdk.10.1.2.2.xip.io

The aloha pod fails to start:
I> No access restrictor found, access to all MBean is allowed
Jolokia: Agent started with URL http://10.128.0.22:8778/jolokia/
Exception in thread "main" java.lang.IllegalStateException: Failed to create cache dir
    at io.vertx.core.impl.FileResolver.setupCacheDir(FileResolver.java:256)
    at io.vertx.core.impl.FileResolver.<init>(FileResolver.java:79)
    at io.vertx.core.impl.VertxImpl.<init>(VertxImpl.java:138)
    at io.vertx.core.impl.VertxImpl.<init>(VertxImpl.java:114)
    at io.vertx.core.impl.VertxImpl.<init>(VertxImpl.java:110)
    at io.vertx.core.impl.VertxFactoryImpl.vertx(VertxFactoryImpl.java:34)
    at io.vertx.core.Vertx.vertx(Vertx.java:79)
    at com.redhat.developers.msa.aloha.AlohaApplication.main(AlohaApplication.java:24)


Modify the SCC (Security Context Constraints): grant the anyuid SCC to the default service account, so its containers may run as any user.
If the Dockerfile defines a USER, that USER is used; if it does not, root is used.
oc login -u system:admin
oadm policy add-scc-to-user anyuid -z default
oc edit scc anyuid
You should see the following:
users:
- system:serviceaccount:helloworld-msa:default

Delete the failed pod; a new pod is created automatically, and this time it starts successfully.
(5)hello
oc new-app registry.example.com:5000/redhatmsa/hello --insecure-registry
oc expose svc hello --hostname=hello-helloworld-msa.rhel-cdk.10.1.2.2.xip.io
curl http://hello-helloworld-msa.rhel-cdk.10.1.2.2.xip.io/api/hello (on the Mac)
(6)bonjour
oc new-app registry.example.com:5000/redhatmsa/bonjour --insecure-registry
oc expose svc bonjour --hostname=bonjour-helloworld-msa.rhel-cdk.10.1.2.2.xip.io
curl http://bonjour-helloworld-msa.rhel-cdk.10.1.2.2.xip.io/api/bonjour (on the Mac)
(7)frontend
oc new-app registry.example.com:5000/redhatmsa/frontend --insecure-registry
oc expose svc frontend --hostname=frontend-helloworld-msa.rhel-cdk.10.1.2.2.xip.io



(8)api-gateway
oc new-app registry.example.com:5000/redhatmsa/api-gateway --insecure-registry
oc expose svc api-gateway --hostname=api-gateway-helloworld-msa.rhel-cdk.10.1.2.2.xip.io

Again the service exposes the wrong port, 8778, instead of 8080:
oc get svc -o wide
oc delete svc api-gateway
oc expose dc api-gateway --port=8080
oc delete route api-gateway
oc expose svc api-gateway --hostname=api-gateway-helloworld-msa.rhel-cdk.10.1.2.2.xip.io


(9)hystrix-dashboard
Download http://central.maven.org/maven2/io/fabric8/kubeflix/packages/kubeflix/1.0.17/kubeflix-1.0.17-kubernetes.yml

Change image: "hystrix-dashboard:1.0.17"
to image: "registry.example.com:5000/fabric8/hystrix-dashboard:1.0.17"
and change image: "fabric8/turbine-server:1.0.17"
to image: "registry.example.com:5000/fabric8/turbine-server:1.0.17"
scp kubeflix-1.0.17-kubernetes.yml root@192.168.56.111:/root/

oc create -f kubeflix-1.0.17-kubernetes.yml
oc new-app kubeflix --insecure-registry=true
oc expose service hystrix-dashboard --port=8080 --hostname=hystrix-dashboard-helloworld-msa.rhel-cdk.10.1.2.2.xip.io
oc policy add-role-to-user admin system:serviceaccount:helloworld-msa:turbine
oc expose svc turbine-server --port=8080 --hostname=turbine-server-helloworld-msa.rhel-cdk.10.1.2.2.xip.io

The hystrix-dashboard pod starts, but its log contains an error:
java.io.FileNotFoundException: /opt/jetty/jetty.state (Permission denied)

Modify the SCC (Security Context Constraints): grant the anyuid SCC to the ribbon service account, so its containers may run as any user.
If the Dockerfile defines a USER, that USER is used; if it does not, root is used.
oc login -u system:admin
oadm policy add-scc-to-user anyuid -z ribbon
oc edit scc anyuid
You should see the following:
users:
- system:serviceaccount:helloworld-msa:ribbon

Delete the failed pod; a new pod is created automatically, and this time it starts without errors.

If something goes wrong, clean up with:
oc delete template kubeflix
oc delete sa ribbon
oc delete sa turbine
oc delete svc hystrix-dashboard
oc delete svc turbine-server
oc delete rc hystrix-dashboard
oc delete rc turbine-server
oc delete route hystrix-dashboard
oc delete deployments.extensions turbine-server

5. Add the following entries to /etc/hosts (on the Mac machine)
192.168.56.113 hola-helloworld-msa.rhel-cdk.10.1.2.2.xip.io
192.168.56.113 namaste-helloworld-msa.rhel-cdk.10.1.2.2.xip.io
192.168.56.113 ola-helloworld-msa.rhel-cdk.10.1.2.2.xip.io
192.168.56.113 aloha-helloworld-msa.rhel-cdk.10.1.2.2.xip.io
192.168.56.113 hello-helloworld-msa.rhel-cdk.10.1.2.2.xip.io
192.168.56.113 frontend-helloworld-msa.rhel-cdk.10.1.2.2.xip.io
192.168.56.113 api-gateway-helloworld-msa.rhel-cdk.10.1.2.2.xip.io
192.168.56.113 hystrix-dashboard-helloworld-msa.rhel-cdk.10.1.2.2.xip.io
192.168.56.113 turbine-server-helloworld-msa.rhel-cdk.10.1.2.2.xip.io

OpenShift_055: Letting the Host SSH to the VMs While the VMs Can Reach the Internet

Environment: OCP 3.4

Building the microservice containers requires Internet access, so the OpenShift VMs (Master, Nodes, Registry) must be able to reach the Internet.
To get Internet access while keeping the internal network working, leave the existing host-only NIC alone and add a second NIC in NAT mode.
In my environment the Master also acts as the internal DNS server: the other machines set DNS1 to themselves, and their own DNS then forwards to the Master.
Node1, Node2 and Registry are configured similarly; the Master just gains an extra nameserver, 8.8.8.8. The details follow:

1. Master configuration
(1) Add a new NIC in NAT mode
Run ip a to find the name of the new NIC; here it is enp0s8.
(2) DNS configuration (dnsmasq on the Master)
vim /etc/dnsmasq.d/openshift-cluster.conf
Contents:
local=/example.com/
address=/.apps.example.com/192.168.56.113
server=8.8.8.8
Notes:
192.168.56.113 is the IP address of the Node that runs the router.
server=8.8.8.8 adds an upstream DNS server for Internet access.
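
After editing the file, restart dnsmasq and do a quick sanity check (a sketch; this assumes dnsmasq is managed by systemd and re-reads /etc/dnsmasq.d/ on restart):
systemctl restart dnsmasq
nslookup www.baidu.com 127.0.0.1
nslookup test.apps.example.com 127.0.0.1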
(3) ifcfg-enp0s3 configuration
vim /etc/sysconfig/network-scripts/ifcfg-enp0s3
Contents:
TYPE=Ethernet
BOOTPROTO=dhcp
DEFROUTE=yes
#PEERDNS=yes
#PEERROUTES=yes
IPV4_FAILURE_FATAL=no
#IPV6INIT=yes
#IPV6_AUTOCONF=yes
#IPV6_DEFROUTE=yes
#IPV6_PEERDNS=yes
#IPV6_PEERROUTES=yes
#IPV6_FAILURE_FATAL=no
#IPV6_ADDR_GEN_MODE=stable-privacy
NAME=enp0s3
UUID=036c9bda-7cbd-4740-9bc3-f066042afde9
DEVICE=enp0s3
ONBOOT=yes
DNS1=192.168.56.111
IPADDR=192.168.56.111
PREFIX=24
GATEWAY=192.168.56.1
(4) ifcfg-enp0s8 configuration
If there is no ifcfg-enp0s8 file, create one; to avoid a duplicate UUID, generate a new one with uuidgen.
vim /etc/sysconfig/network-scripts/ifcfg-enp0s8
Contents:
TYPE=Ethernet
BOOTPROTO=dhcp
DEFROUTE=yes
PEERDNS=no
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=enp0s8
UUID=00cb8299-feb9-55b6-a378-3fdc720e0bc6
DEVICE=enp0s8
ONBOOT=yes
(5) Lock /etc/resolv.conf so no service can modify it (optional)
vim /etc/resolv.conf
Contents (nameserver points to the machine itself):
# Generated by NetworkManager
search example.com
# nameserver updated by /etc/NetworkManager/dispatcher.d/99-origin-dns.sh
nameserver 192.168.56.111

Lock the file (optional)
chattr +i /etc/resolv.conf
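
If the file ever needs to be edited again, the attribute can be inspected and removed with:
lsattr /etc/resolv.conf
chattr -i /etc/resolv.conf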

2. Registry configuration
(1) Add a new NIC in NAT mode
Run ip a to find the name of the new NIC; here it is enp0s8.
(2) Point DNS at the Master
vim /etc/dnsmasq.d/openshift-cluster-node.conf
Contents:
server=192.168.56.111
(3) ifcfg-enp0s3 configuration
vim /etc/sysconfig/network-scripts/ifcfg-enp0s3
Contents:
TYPE=Ethernet
BOOTPROTO=dhcp
DEFROUTE=yes
#PEERDNS=yes
#PEERROUTES=yes
IPV4_FAILURE_FATAL=no
#IPV6INIT=yes
#IPV6_AUTOCONF=yes
#IPV6_DEFROUTE=yes
#IPV6_PEERDNS=yes
#IPV6_PEERROUTES=yes
#IPV6_FAILURE_FATAL=no
#IPV6_ADDR_GEN_MODE=stable-privacy
NAME=enp0s3
UUID=625caf3e-78d3-4075-9256-8c5924bedda4
DEVICE=enp0s3
ONBOOT=yes
DNS1=192.168.56.112
IPV6INIT=no
IPADDR=192.168.56.112
PREFIX=24
GATEWAY=192.168.56.1
(4) ifcfg-enp0s8 configuration
If there is no ifcfg-enp0s8 file, create one; to avoid a duplicate UUID, generate a new one with uuidgen.
vim /etc/sysconfig/network-scripts/ifcfg-enp0s8
Contents:
TYPE=Ethernet
BOOTPROTO=dhcp
DEFROUTE=yes
PEERDNS=no
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=enp0s8
UUID=7a4c54a9-f606-4bab-b818-3e884d896b7d
DEVICE=enp0s8
ONBOOT=yes
(5) Lock /etc/resolv.conf so no service can modify it (optional)
vim /etc/resolv.conf
Contents (nameserver points to the machine itself):
# Generated by NetworkManager
search example.com
# nameserver updated by /etc/NetworkManager/dispatcher.d/99-origin-dns.sh
nameserver 192.168.56.112

Lock the file (optional)
chattr +i /etc/resolv.conf

3. Node1 configuration
(1) Add a new NIC in NAT mode
Run ip a to find the name of the new NIC; here it is enp0s8.
(2) Point DNS at the Master
vim /etc/dnsmasq.d/openshift-cluster-node.conf
Contents:
server=192.168.56.111
(3) ifcfg-enp0s3 configuration
vim /etc/sysconfig/network-scripts/ifcfg-enp0s3
Contents:
TYPE=Ethernet
BOOTPROTO=dhcp
DEFROUTE=yes
#PEERDNS=yes
#PEERROUTES=yes
IPV4_FAILURE_FATAL=no
#IPV6INIT=yes
#IPV6_AUTOCONF=yes
#IPV6_DEFROUTE=yes
#IPV6_PEERDNS=yes
#IPV6_PEERROUTES=yes
#IPV6_FAILURE_FATAL=no
#IPV6_ADDR_GEN_MODE=stable-privacy
NAME=enp0s3
UUID=036c9bda-7cbd-4740-9bc3-f066042afde9
DEVICE=enp0s3
ONBOOT=yes
DNS1=192.168.56.113
IPV6INIT=no
IPADDR=192.168.56.113
PREFIX=24
GATEWAY=192.168.56.1
(4) ifcfg-enp0s8 configuration
If there is no ifcfg-enp0s8 file, create one; to avoid a duplicate UUID, generate a new one with uuidgen.
vim /etc/sysconfig/network-scripts/ifcfg-enp0s8
Contents:
TYPE=Ethernet
BOOTPROTO=dhcp
DEFROUTE=yes
PEERDNS=no
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=enp0s8
UUID=2659ef37-b614-4f81-bcf5-ff7703a81440
DEVICE=enp0s8
ONBOOT=yes
(5) Lock /etc/resolv.conf so no service can modify it (optional)
vim /etc/resolv.conf
Contents (nameserver points to the machine itself):
# Generated by NetworkManager
search example.com
# nameserver updated by /etc/NetworkManager/dispatcher.d/99-origin-dns.sh
nameserver 192.168.56.113

Lock the file (optional)
chattr +i /etc/resolv.conf

4. Node2 configuration
(1) Add a new NIC in NAT mode
Run ip a to find the name of the new NIC; here it is enp0s8.
(2) Point DNS at the Master
vim /etc/dnsmasq.d/openshift-cluster-node.conf
Contents:
server=192.168.56.111
(3) ifcfg-enp0s3 configuration
vim /etc/sysconfig/network-scripts/ifcfg-enp0s3
Contents:
TYPE=Ethernet
BOOTPROTO=dhcp
#PEERDNS=yes
#DEFROUTE=yes
#PEERROUTES=yes
IPV4_FAILURE_FATAL=no
#IPV6INIT=yes
#IPV6_AUTOCONF=yes
#IPV6_DEFROUTE=yes
#IPV6_PEERDNS=yes
#IPV6_PEERROUTES=yes
#IPV6_FAILURE_FATAL=no
#IPV6_ADDR_GEN_MODE=stable-privacy
NAME=enp0s3
UUID=036c9bda-7cbd-4740-9bc3-f066042afde9
DEVICE=enp0s3
ONBOOT=yes
DNS1=192.168.56.114
IPV6INIT=no
IPADDR=192.168.56.114
PREFIX=24
GATEWAY=192.168.56.1
(4) ifcfg-enp0s8 configuration
If there is no ifcfg-enp0s8 file, create one; to avoid a duplicate UUID, generate a new one with uuidgen.
vim /etc/sysconfig/network-scripts/ifcfg-enp0s8
Contents:
TYPE=Ethernet
BOOTPROTO=dhcp
DEFROUTE=yes
PEERDNS=no
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=enp0s8
UUID=9569f670-485f-4738-816e-c963a610da71
DEVICE=enp0s8
ONBOOT=yes
(5) Lock /etc/resolv.conf so no service can modify it (optional)
vim /etc/resolv.conf
Contents (nameserver points to the machine itself):
# Generated by NetworkManager
search example.com
# nameserver updated by /etc/NetworkManager/dispatcher.d/99-origin-dns.sh
nameserver 192.168.56.114

Lock the file (optional)
chattr +i /etc/resolv.conf

5. Restart all machines
Boot order: start the Master first; once the Master is up and you have confirmed it can reach the Internet, start Node1, Node2 and Registry.

(1) On the Master, run nslookup www.baidu.com
Output:
Server:        192.168.56.111
Address:    192.168.56.111#53

Non-authoritative answer:
www.baidu.com    canonical name = www.a.shifen.com.
Name:    www.a.shifen.com
Address: 119.75.218.70
Name:    www.a.shifen.com
Address: 119.75.217.109

(2) On the Registry machine, run nslookup www.baidu.com
Output:
Server:        192.168.56.112
Address:    192.168.56.112#53

Non-authoritative answer:
www.baidu.com    canonical name = www.a.shifen.com.
Name:    www.a.shifen.com
Address: 119.75.218.70
Name:    www.a.shifen.com
Address: 119.75.217.109

(3) On Node1, run nslookup www.baidu.com
Output:
Server:        192.168.56.113
Address:    192.168.56.113#53

Non-authoritative answer:
www.baidu.com    canonical name = www.a.shifen.com.
Name:    www.a.shifen.com
Address: 119.75.217.109
Name:    www.a.shifen.com
Address: 119.75.218.70

(4) On Node2, run nslookup www.baidu.com
Output:
Server:        192.168.56.114
Address:    192.168.56.114#53

Non-authoritative answer:
www.baidu.com    canonical name = www.a.shifen.com.
Name:    www.a.shifen.com
Address: 119.75.218.70
Name:    www.a.shifen.com
Address: 119.75.217.109

Run ping www.baidu.com on every machine to confirm Internet access.

Sunday, February 12, 2017

OpenShift_054: Offline OCP 3.4 Installation - Installing and Configuring the Registry Console

Environment: OCP 3.4

1. Download the registry-console image (on the Mac machine)
docker pull registry.access.redhat.com/openshift3/registry-console
docker save -o registry-console.tar.gz registry.access.redhat.com/openshift3/registry-console
scp registry-console.tar.gz root@192.168.56.112:/opt/ose/images/

2. Load the registry-console image (on the Registry machine)
cd /opt/ose/images
docker load -i registry-console.tar.gz
docker tag registry.access.redhat.com/openshift3/registry-console:latest registry.example.com:5000/openshift3/registry-console:latest
docker push registry.example.com:5000/openshift3/registry-console

3. Install and configure the registry-console (on the Master machine)
Create an admin user and grant it privileges
htpasswd -b /etc/origin/master/htpasswd admin admin
oadm policy add-cluster-role-to-user admin admin

Check the docker-registry service
oc project default
oc get svc
Output:
NAME              CLUSTER-IP       EXTERNAL-IP   PORT(S)                   AGE
docker-registry   172.30.98.122    <none>        5000/TCP                  22d
kubernetes        172.30.0.1       <none>        443/TCP,53/UDP,53/TCP     22d
ose-router        172.30.184.252   <none>        80/TCP,443/TCP,1936/TCP   22d

Expose the existing docker-registry service by creating a passthrough route for it
oc create route passthrough --service=docker-registry --hostname=docker-registry.apps.example.com -n default

Check the docker-registry route definition
oc get route/docker-registry -o yaml
Output:
apiVersion: v1
kind: Route
metadata:
  creationTimestamp: 2017-02-12T08:49:55Z
  labels:
    docker-registry: default
  name: docker-registry
  namespace: default
  resourceVersion: "6918"
  selfLink: /oapi/v1/namespaces/default/routes/docker-registry
  uid: 3a2bc2f4-f100-11e6-82df-080027fc450b
spec:
  host: docker-registry.apps.example.com
  port:
    targetPort: 5000-tcp
  tls:
    termination: passthrough
  to:
    kind: Service
    name: docker-registry
    weight: 100
  wildcardPolicy: None
status:
  ingress:
  - conditions:
    - lastTransitionTime: 2017-02-12T08:49:55Z
      status: "True"
      type: Admitted
    host: docker-registry.apps.example.com
    routerName: ose-router
    wildcardPolicy: None

Create the registry-console route
oc create route passthrough --service registry-console --port registry-console -n default

Deploy the registry console application
oc new-app -n default --template=registry-console \
    -p IMAGE_PREFIX="registry.example.com:5000/openshift3/" \
    -p IMAGE_VERSION="latest" \
    -p OPENSHIFT_OAUTH_PROVIDER_URL="https://master.example.com:8443" \
    -p REGISTRY_HOST=$(oc get route docker-registry -n default --template='{{ .spec.host }}') \
    -p COCKPIT_KUBE_URL=$(oc get route registry-console -n default --template='https://{{ .spec.host }}')

4. Confirm the registry console was created and is running
oc get pod
Output:
NAME                       READY     STATUS    RESTARTS   AGE
docker-registry-1-krvma    1/1       Running   3          23d
ose-router-1-cc24j         1/1       Running   3          23d
registry-console-1-43ky0   1/1       Running   0          44s

oc get route
Output:
NAME               HOST/PORT                                   PATH      SERVICES           PORT               TERMINATION
docker-registry    docker-registry.apps.example.com                      docker-registry    5000-tcp           passthrough
registry-console   registry-console-default.apps.example.com             registry-console   registry-console   passthrough

https://registry-console-default.apps.example.com/registry


5. If something goes wrong, run the following to clean up, then repeat the steps above
oc delete dc registry-console
oc delete svc registry-console
oc delete is registry-console
oc delete oauthclients cockpit-oauth-client

References:
1. 《OpenShift_Container_Platform-3.4-Installation_and_Configuration-en-US.pdf》 3.2.6.1 P87

OpenShift_053: Offline OCP 3.4 Installation - Installing and Configuring Jenkins

Environment: OCP 3.4

1. Install and configure Jenkins (on the Master machine)
Create an admin user and grant it privileges
htpasswd -b /etc/origin/master/htpasswd admin admin
oadm policy add-cluster-role-to-user admin admin

Confirm the jenkins templates exist
oc get template -n openshift |grep jenkins
Output:
jenkins-ephemeral                               Jenkins service, without persistent storage....                                    6 (all set)       6
jenkins-persistent                              Jenkins service, with persistent storage....                                       7 (all set)       7

Confirm the jenkins image stream exists
oc get is -n openshift |grep jenkins
Output:
jenkins      172.30.98.122:5000/openshift/jenkins      latest,2,1                   3 weeks ago

Confirm the jenkins image stream tag exists
oc edit template jenkins-persistent -n openshift
Look for JENKINS_IMAGE_STREAM_TAG
- description: Name of the ImageStreamTag to be used for the Jenkins image.
  displayName: Jenkins ImageStreamTag
  name: JENKINS_IMAGE_STREAM_TAG
  value: jenkins:latest
Note: value: jenkins:latest means the latest Jenkins version is used, which here is Jenkins 2.

Review the jenkins image stream contents:
oc edit is jenkins -n openshift

2. Create persistent storage for Jenkins (on the Registry machine)
yum -y install nfs-utils
export volname=jenkins
mkdir -p /srv/nfs/${volname}
chown nfsnobody:nfsnobody /srv/nfs/${volname}
chmod 700 /srv/nfs/${volname}
echo "/srv/nfs/${volname} *(rw,sync,all_squash)" >> /etc/exports
systemctl restart rpcbind nfs-server nfs-lock nfs-idmap
showmount -e

3. Verify the NFS server works (on the Node1 machine)
mkdir -p /mnt/nfs
mount -t nfs nfs.example.com:/srv/nfs/jenkins /mnt/nfs
umount /mnt/nfs

4. Create the PV (on the Master machine; if something goes wrong, this step must be redone)
echo '{
  "apiVersion": "v1",
  "kind": "PersistentVolume",
  "metadata": {
    "name": "jenkins-volume"
  },
  "spec": {
    "capacity": {
        "storage": "5Gi"
        },
    "accessModes": [ "ReadWriteOnce" ],
    "nfs": {
        "path": "/srv/nfs/jenkins",
        "server": "nfs.example.com"
    }
  }
}' | oc create -f -
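
Confirm the PV is registered and Available before creating the Jenkins app:
oc get pv jenkins-volume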

5. Create the jenkins project
oc new-project sharedjenkins
oc project sharedjenkins
oc annotate namespace sharedjenkins openshift.io/node-selector='infra=yes' --overwrite

On OCP 3.3, run:
oc new-app jenkins-persistent -n sharedjenkins -p JENKINS_PASSWORD=admin

On OCP 3.4, run:
oc new-app jenkins-persistent -n sharedjenkins
Note: starting with OCP 3.4, Jenkins authentication is integrated with OCP authentication, so the -p parameter is no longer needed.

Confirm the PVC was created and bound
oc get pvc
Output:
NAME      STATUS    VOLUME           CAPACITY   ACCESSMODES   AGE
jenkins   Bound     jenkins-volume   5Gi        RWO           1m

6. Confirm Jenkins was installed and configured successfully
oc get pod
Output:
NAME              READY     STATUS    RESTARTS   AGE
jenkins-1-lxdwn   1/1       Running   0          6m

http://jenkins-sharedjenkins.apps.example.com






7. If something goes wrong, run the following to clean up, wait a minute, then repeat the steps above
oc delete dc  jenkins
oc delete route jenkins
oc delete pvc jenkins
oc delete sa jenkins
oc delete rolebinding jenkins_edit
oc delete service jenkins-jnlp
oc delete service jenkins
oc delete project sharedjenkins

OpenShift_052: Offline OCP 3.4 Installation - Installing and Configuring Logging

Environment: OCP 3.4

1. Install and configure Logging (on the Master machine)
Create an admin user and grant it privileges
htpasswd -b /etc/origin/master/htpasswd admin admin
oadm policy add-cluster-role-to-user admin admin

Switch to the logging project
oc project logging

If you do not want the fluentd collectors to gather logs from the infra=yes nodes, run the following command; otherwise skip it
oc annotate namespace logging openshift.io/node-selector='infra=yes' --overwrite

Create the deployer accounts
oc new-app logging-deployer-account-template

Grant permissions to the service accounts
oadm policy add-cluster-role-to-user oauth-editor system:serviceaccount:logging:logging-deployer
oadm policy add-scc-to-user privileged system:serviceaccount:logging:aggregated-logging-fluentd
oadm policy add-cluster-role-to-user cluster-reader system:serviceaccount:logging:aggregated-logging-fluentd
oadm policy add-cluster-role-to-user rolebinding-reader system:serviceaccount:logging:aggregated-logging-elasticsearch

Note: if the fourth grant is skipped, you will hit this exception: openshift Discover: [security_exception] no permissions for indices:data/read/msearch

Create the configmap
oc create configmap logging-deployer \
   --from-literal kibana-hostname=kibana.apps.example.com \
   --from-literal public-master-url=https://master.example.com:8443 \
   --from-literal es-cluster-size=3 \
   --from-literal es-instance-ram=4G

Confirm the configmap was created
oc edit configmap logging-deployer
Output:
......
apiVersion: v1
data:
  es-cluster-size: "3"
  es-instance-ram: 4G
  kibana-hostname: kibana.apps.example.com
  public-master-url: https://master.example.com:8443
kind: ConfigMap
metadata:
  creationTimestamp: 2017-02-12T03:07:51Z
  name: logging-deployer
  namespace: logging
  resourceVersion: "6549"
  selfLink: /api/v1/namespaces/logging/configmaps/logging-deployer
  uid: 70d238a1-f0d0-11e6-a94b-080027fc450b

If logging-deployer was installed before, uninstall it first; if not, skip this step
oc new-app logging-deployer-template --param MODE=uninstall --param IMAGE_VERSION=v3.4 --param IMAGE_PREFIX=registry.example.com:5000/openshift3/

Deploy the EFK stack
oadm policy add-cluster-role-to-user cluster-admin system:serviceaccount:openshift:logging-deployer

oc new-app logging-deployer-template \
             --param IMAGE_VERSION=v3.4 \
             --param MODE=install \
--param IMAGE_PREFIX=registry.example.com:5000/openshift3/ \
--param KIBANA_HOSTNAME=kibana.apps.example.com \
--param KIBANA_OPS_HOSTNAME=kibana-ops.apps.example.com \
--param PUBLIC_MASTER_URL=https://master.example.com:8443 \
--param ES_INSTANCE_RAM=4G \
--param ES_OPS_INSTANCE_RAM=4G

Wait until all pods are created and running
oc get pod
Final output:
NAME                          READY     STATUS      RESTARTS   AGE
logging-curator-1-i818h       1/1       Running     0          1m
logging-deployer-39dv4        0/1       Completed   0          2m
logging-es-6uawm5uj-1-fgz78   1/1       Running     0          1m
logging-es-8gt7nc7t-1-x1694   1/1       Running     0          1m
logging-es-wxgw7qs2-1-hpcwz   1/1       Running     0          1m
logging-kibana-1-ddd2n        2/2       Running     0          1m

oc get svc
Final output:
NAME                     CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
logging-es               172.30.172.188   <none>        9200/TCP   12m
logging-es-cluster       172.30.247.145   <none>        9300/TCP   12m
logging-es-ops           172.30.31.22     <none>        9200/TCP   12m
logging-es-ops-cluster   172.30.95.70     <none>        9300/TCP   12m
logging-kibana           172.30.18.121    <none>        443/TCP    12m
logging-kibana-ops       172.30.154.22    <none>        443/TCP    12m

oc get route
Final output:
NAME                 HOST/PORT                     PATH      SERVICES             PORT      TERMINATION
logging-kibana       kibana.apps.example.com                 logging-kibana       <all>     reencrypt
logging-kibana-ops   kibana-ops.apps.example.com             logging-kibana-ops   <all>     reencrypt

Let the fluentd collectors gather logs from all nodes
oc label node --all logging-infra-fluentd=true
Output:
node "master.example.com" labeled
node "node1.example.com" labeled
node "node2.example.com" labeled

oc get node --show-labels
Output:
NAME                 STATUS                     AGE       LABELS
master.example.com   Ready,SchedulingDisabled   67d       beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=master.example.com,logging-infra-fluentd=true
node1.example.com    Ready                      67d       beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,infra=yes,kubernetes.io/hostname=node1.example.com,logging-infra-fluentd=true
node2.example.com    Ready                      67d       app=yes,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=node2.example.com,logging-infra-fluentd=true

oc get pod -o wide
Several logging-fluentd pods have appeared; output:
NAME                          READY     STATUS      RESTARTS   AGE       IP            NODE
logging-curator-1-qyzox       1/1       Running     0          2h        10.128.0.11   node1.example.com
logging-deployer-8er59        0/1       Completed   0          2h        10.129.0.3    node2.example.com
logging-es-gh05jndn-1-p1hfp   1/1       Running     0          2h        10.129.0.7    node2.example.com
logging-es-gh7kvmwl-1-hl9wm   1/1       Running     0          2h        10.128.0.10   node1.example.com
logging-es-o3nrrfcx-1-c2cab   1/1       Running     0          2h        10.129.0.8    node2.example.com
logging-fluentd-230am         1/1       Running     0          2h        10.128.0.13   node1.example.com
logging-fluentd-fite6         1/1       Running     0          2h        10.129.0.9    node2.example.com
logging-fluentd-lgn1f         1/1       Running     0          2h        10.130.0.2    master.example.com
logging-kibana-1-3gj0a        2/2       Running     0          2h        10.128.0.12   node1.example.com

2. Confirm the installation succeeded
https://kibana.apps.example.com
Note: not every project's logs can be queried. For example, the Management Infrastructure project has no pods, so there is no data and the UI just keeps showing the Searching state.

Refresh the application page a few times, then switch to that project; some log data will show up:

3. If the installation or configuration has problems, run the following to clean up, then redo it
On the Master machine:
oc delete sa logging-deployer
oc delete sa aggregated-logging-kibana
oc delete sa aggregated-logging-elasticsearch
oc delete sa aggregated-logging-fluentd
oc delete sa aggregated-logging-curator
oc delete clusterrole oauth-editor
oc delete clusterrole daemonset-admin
oc delete rolebinding logging-deployer-edit-role
oc delete rolebinding logging-elasticsearch-view-role
oc delete clusterrole rolebinding-reader
oc delete rolebinding logging-deployer-dsadmin-role
oc delete configmaps logging-deployer

Thursday, February 9, 2017

OpenShift_051: Offline OCP 3.4 Installation - Installing and Configuring Metrics

Environment: OCP 3.4

Note: Metrics is very memory hungry, so give the Node that runs Metrics at least 4 GB of RAM.

1. Install and configure Metrics (on the Master machine)
Create an admin user and grant it privileges
htpasswd -b /etc/origin/master/htpasswd admin admin
oadm policy add-cluster-role-to-user admin admin

Log in as the system:admin user and switch to the openshift-infra project
oc login -u system:admin
oc project openshift-infra
oc get node --show-labels

Pods in the openshift-infra project should only be scheduled onto nodes labeled infra=yes
oc project openshift-infra
oc annotate namespace openshift-infra openshift.io/node-selector='infra=yes' --overwrite

Adjust the version and registry parameters in metrics-deployer.yaml
cp /usr/share/ansible/openshift-ansible/roles/openshift_hosted_templates/files/v1.4/enterprise/metrics-deployer.yaml ~/

vim metrics-deployer.yaml
  name: IMAGE_VERSION
  value: "3.4.0"                                    ==>  value: "v3.4"
  name: IMAGE_PREFIX
  value: "registry.access.redhat.com/openshift3/"   ==>  value: "registry.example.com:5000/openshift3/"

These changes are needed because docker images | grep metrics-deployer
outputs:
registry.example.com:5000/openshift3/metrics-deployer                        v3.4

Modify or add the metricsPublicURL parameter
vim /etc/origin/master/master-config.yaml
Find assetConfig: and add the following line after it:
metricsPublicURL: "https://metrics.apps.example.com/hawkular/metrics"

Restart the master and the node
systemctl restart atomic-openshift-{master,node};

Create the service account
oc create -f - <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: metrics-deployer
secrets:
- name: metrics-deployer
EOF


Grant permissions to the service accounts
oadm policy add-role-to-user edit system:serviceaccount:openshift-infra:metrics-deployer

oadm policy add-cluster-role-to-user cluster-reader system:serviceaccount:openshift-infra:heapster

Confirm the permissions were set
oc get rolebinding
oc get clusterrolebinding

Create the metrics-deployer secret (if something goes wrong, this step must be redone)
oc secrets new metrics-deployer nothing=/dev/null

Modify iptables
vim /etc/sysconfig/iptables
-A OS_FIREWALL_ALLOW -p tcp -m state --state NEW -m tcp --dport 2049 -j ACCEPT
-A OS_FIREWALL_ALLOW -p tcp -m state --state NEW -m tcp --dport 111 -j ACCEPT

Restart iptables
systemctl restart iptables
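
To confirm the two NFS-related rules (port 2049 for NFS, port 111 for rpcbind) are active after the restart, something like:
iptables -L OS_FIREWALL_ALLOW -n | grep -E '2049|111'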

2. Create the NFS server (on the Registry machine)
yum -y install nfs-utils
export volname=cassandra
mkdir -p /srv/nfs/${volname}
chown nfsnobody:nfsnobody /srv/nfs/${volname}
chmod 700 /srv/nfs/${volname}
echo "/srv/nfs/${volname} *(rw,sync,all_squash)" >> /etc/exports
systemctl restart rpcbind nfs-server nfs-lock nfs-idmap
systemctl enable nfs-server
showmount -e

3. Verify the NFS server works (on the Node1 machine)
mkdir -p /mnt/nfs
mount -t nfs nfs.example.com:/srv/nfs/cassandra /mnt/nfs
umount /mnt/nfs

4. Create the PV (on the Master machine; if something goes wrong, this step must be redone)
echo '{
  "apiVersion": "v1",
  "kind": "PersistentVolume",
  "metadata": {
    "name": "cassandra-volume"
  },
  "spec": {
    "capacity": {
        "storage": "10Gi"
        },
    "accessModes": [ "ReadWriteOnce","ReadWriteMany" ],
    "nfs": {
        "path": "/srv/nfs/cassandra",
        "server": "nfs.example.com"
    }
  }
}' | oc create -f -

5. Fill in the metrics-deployer.yaml parameters and create the deployment (on the Master machine)
oc process -f metrics-deployer.yaml -v HAWKULAR_METRICS_HOSTNAME=metrics.apps.example.com -v CASSANDRA_PV_SIZE=10Gi | oc create -f -
This step takes quite a while, so be patient...

oc get pod -w
NAME                         READY     STATUS             RESTARTS   AGE
hawkular-cassandra-1-2asw6   0/1       Running            0          2m
hawkular-metrics-p3wvl       0/1       CrashLoopBackOff   2          2m
heapster-we30w               0/1       Running            0          2m
metrics-deployer-0gv9q       1/1       Running            0          2m
oc logs hawkular-metrics-p3wvl
Output:
Error: the service account for Hawkular Metrics does not have permission to view resources in this namespace. View permissions are required for Hawkular Metrics to function properly.
Usually this can be resolved by running: oc adm policy add-role-to-user view system:serviceaccount:openshift-infra:hawkular -n openshift-infra

So, following the hint, run:
oc adm policy add-role-to-user view system:serviceaccount:openshift-infra:hawkular -n openshift-infra

Run the following to clean up, wait a minute, then repeat the steps above
On the Master machine:
oc delete all --selector="metrics-infra"
oc delete sa --selector="metrics-infra"
oc delete templates --selector="metrics-infra"
oc delete secrets --selector="metrics-infra"
oc delete pvc --selector="metrics-infra"
oc delete pv cassandra-volume
oc delete sa metrics-deployer
oc delete secret metrics-deployer
On the Registry machine:
cd /srv/nfs/cassandra/
rm -rf *

6. Verify that Metrics works
Run in validate mode; this command deploys a new metrics-deployer pod
oc process -f metrics-deployer.yaml -v HAWKULAR_METRICS_HOSTNAME=metrics.apps.example.com -v CASSANDRA_PV_SIZE=10Gi -v MODE=validate | oc create -f -

Check the pod log to make sure there are no errors and the validation passes; if it fails, the log shows the details
oc logs metrics-deployer-sa2cq
Output:
......
  Will retry in 5 seconds.
========================
--- validate_deployment_artifacts ---
--- validate_deployed_project ---

VALIDATION SUCCEEDED
validate_nodes_accessible: ok
validate_deployment_artifacts: ok
validate_deployed_project:
Success!

Run the MetricsApiProxy diagnostic
oadm diagnostics MetricsApiProxy

curl -k -X GET https://`oc get pod $(oc get pods | grep -i hawkular-metrics | awk '{print $1}') -o template --template='{{.status.podIP}}'`:8443/hawkular/metrics/status
Output:
{"MetricsService":"STARTED","Implementation-Version":"0.21.5.Final-redhat-1","Built-From-Git-SHA1":"632f908a52d3e45b3a0bafa84e117ec6ca87bb19"}

oc describe pod hawkular-metrics-mwn9i | grep -i IP
Output:
IP:            10.128.0.19

curl -k https://10.128.0.19:8443/hawkular/metrics/status
Output:
{"MetricsService":"STARTED","Implementation-Version":"0.21.5.Final-redhat-1","Built-From-Git-SHA1":"632f908a52d3e45b3a0bafa84e117ec6ca87bb19"}

curl -k https://metrics.apps.example.com/hawkular/metrics/status
Output:
{"MetricsService":"STARTED","Implementation-Version":"0.21.5.Final-redhat-1","Built-From-Git-SHA1":"632f908a52d3e45b3a0bafa84e117ec6ca87bb19"}




Wednesday, February 8, 2017

Tips_028: Fixing Firefox and Chrome Refusing to Connect to Certain Ports

Environment: Mac OS X 10.12.3

While installing Apache Httpd, port 80 was reachable, but after changing it to port 6666 the browser refused to connect.

1. Firefox
Enter about:config in the address bar, then right-click and create a new string key network.security.ports.banned.override whose value is the port of the site you need to reach: 6666.
For multiple ports, separate them with commas, for example: 87,6666,556,6667.
If you can guarantee it is safe, you can even set it to 0-65535 to allow sites on any port.

2. Chrome
open -n "/Applications/Google Chrome.app" --args --explicitly-allowed-ports=6666 'http://192.168.56.112:6666'

References:
1. https://zhidao.baidu.com/question/748566990124327372.html
2. http://blog.csdn.net/jerryvon/article/details/7236508

Monday, February 6, 2017

VirtualBox_011: Letting the Host SSH into the Guest While the Guest Can Reach the Internet

Environment: Mac OS X 10.12.3 (host) + RHEL 7.3 (guest)

1. Do a minimal install of RHEL 7.3

2. Make the NIC start automatically
The NIC uses NAT mode by default, but it is not brought up at boot, so set it to start automatically:

Change ONBOOT=no to ONBOOT=yes.
After the change, the file looks like this:

# vi /etc/sysconfig/network-scripts/ifcfg-enp0s3
TYPE=Ethernet
BOOTPROTO=dhcp
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=enp0s3
UUID=5415f95c-3d27-404f-a1cc-909d3a53c0b9
DEVICE=enp0s3
ONBOOT=yes

3. Reboot the VM; it should now be able to reach the Internet

4. Shut down the VM and add a second NIC, choosing Host-only Adapter



5. Reboot the VM and run ip a; a new NIC appears

# ip a 
Output:
1: lo: mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp0s3: mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:1a:85:36 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic enp0s3
       valid_lft 84934sec preferred_lft 84934sec
    inet6 fe80::d4c:6a78:da33:b565/64 scope link
       valid_lft forever preferred_lft forever
3: enp0s8: mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:61:6f:94 brd ff:ff:ff:ff:ff:ff
    inet 192.168.56.119/24 brd 192.168.56.255 scope global enp0s8
       valid_lft forever preferred_lft forever
    inet 192.168.56.100/24 brd 192.168.56.255 scope global secondary dynamic enp0s8
       valid_lft 977sec preferred_lft 977sec
    inet6 fe80::a00:27ff:fe61:6f94/64 scope link
       valid_lft forever preferred_lft forever


6. Edit the new NIC's configuration file
If the system generated a new NIC configuration file automatically, use that one.
If it did not, copy the existing one:
# cp /etc/sysconfig/network-scripts/ifcfg-enp0s3 /etc/sysconfig/network-scripts/ifcfg-enp0s8
After the change, the file looks like this:
# vi /etc/sysconfig/network-scripts/ifcfg-enp0s8
TYPE=Ethernet
BOOTPROTO=dhcp
DEFROUTE=yes
#PEERDNS=yes
#PEERROUTES=yes
IPV4_FAILURE_FATAL=no
#IPV6INIT=yes
#IPV6_AUTOCONF=yes
#IPV6_DEFROUTE=yes
#IPV6_PEERDNS=yes
#IPV6_PEERROUTES=yes
#IPV6_FAILURE_FATAL=no
#IPV6_ADDR_GEN_MODE=stable-privacy
NAME=enp0s8
#UUID=5415f95c-3d27-404f-a1cc-909d3a53c0b9
DEVICE=enp0s8
ONBOOT=yes
IPADDR=192.168.56.119
PREFIX=24
GATEWAY=192.168.56.1
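
Instead of a full reboot, the new interface can usually be brought up by restarting networking on RHEL 7 (a sketch):
systemctl restart network
ip a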


7. Reboot the VM; the host can now SSH into the guest
MaPingdeMacBook-Pro:~ maping$ ssh root@192.168.56.119
Output:
root@192.168.56.119's password:
Last login: Mon Feb  6 15:56:55 2017 from gateway