
Impressive: Building a Full CI/CD Toolchain on a Disconnected OCP Install | OpenShift 3.9 Learning Series, Part 5

魏新宇
Published 2018-07-30 14:44:38

Preface

This article represents only the author's personal views.

The content is intended for technical discussion only and should not be used directly as guidance for production environments.

The material is based on Red Hat product technology and documentation.

This article is part of a six-part series; this is the fifth installment.

Part 1:

OpenShift 3.9 High-Availability Deployment Considerations, Part 1

Part 2:

The Definitive Guide to OpenShift 3.9 Network Management (Extended Edition): OpenShift 3.9 Learning Series, Part 2

Part 3:

Authentication and Authorization Management: OpenShift 3.9 Learning Series, Part 3

Part 4:

Container Compute Resource Management & Network QoS: OpenShift 3.9 Learning Series, Part 4

1. Offline Installation

In real customer environments, OCP is usually installed offline (disconnected). In this article we walk through a disconnected OCP installation and then deploy a complete CI/CD toolchain on top of it.

Our lab environment contains seven kinds of hosts; some sit on the public network and some on the private network.

Bastion: the bastion host, used to drive the OCP installation and for day-to-day OCP operations afterwards.

isolated: sits on the public network and is used to fetch images and other resources from the Internet.

loadbalancer: sits on the public network and accepts external requests.

The OCP components themselves, masters, nodes, infra nodes, and the NFS server, all sit on the private network.

On the network side there are two networks: the cluster (private) network and the public network.

The topology of the environment is as follows:

In the diagram, the components in the upper box can reach the Internet; the components in the lower box cannot.

  • External clients access the loadbalancer (a software load balancer, HAProxy) on ports 80 and 443.
  • The loadbalancer can talk to the OCP masters and infra nodes and is used to forward requests to them.
  • The bastion host can reach every OCP node and is used to install and operate OCP.
  • The isolated node can reach the Internet and can also be reached by OCP. It fetches packages and images from the Internet and then serves as the yum repo for OCP as well as the staging point for source code and dependencies (relayed into Maven/Nexus inside the OCP environment).
  • The OCP masters, nodes, infra nodes, and the NFS node cannot reach the Internet.

This lab is fairly long, so let's outline the overall flow first.

Throughout the lab the isolated node plays a key role:

  1. It serves yum repositories to the OCP nodes so that OCP can be installed.
  2. Images from the Red Hat registry are pulled into a registry running on the isolated node and then imported into OCP's docker-registry as image streams via oc import-image.
  3. After OCP is installed, we deploy Nexus on OCP and create a maven2 repository in it. The isolated node acts as a relay: it downloads the build dependencies from the Internet, and we then import them into the maven2 repository.
  4. After OCP is installed, we deploy Gogs on OCP as the source code repository. Again via the isolated node, we clone the source code from GitHub and push it into the Gogs repository.

When we finally run the S2I build, the builder image lives in OCP's docker-registry, the source code lives in the local Gogs, and the Maven dependencies live in the Nexus maven2 repository, so the entire CI/CD flow works completely offline.

For reasons of space, this article does not cover deploying Jenkins or building through Jenkins. OCP can deploy a containerized Jenkins directly, and the principle is the same; for that experiment, see the separate article:

A Complete Guide to Application Build and Deployment Methods on OpenShift

Now let's get started.

First, log in to the bastion host and confirm that every system other than the isolated node is reachable:
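A quick way to check this, assuming the Ansible inventory for the lab is already in place on the bastion, is Ansible's ping module:

ansible all -m ping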

Log in to the isolated node and confirm the RPM repository configuration (the cache location is /var/cache/httpd/proxy/):
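One way to verify this, assuming the repository is served by an httpd caching proxy as in this lab, is to look for the cache settings in the httpd configuration and confirm that the cache directory exists:

grep -ri cacheroot /etc/httpd/conf.d/
ls -ld /var/cache/httpd/proxy/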

2. Install and Populate the Image Registry

On the isolated node, install the Docker image registry (docker-distribution) together with skopeo and jq:

rpm -i https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm

yum makecache

yum install -y docker-distribution skopeo jq

Edit /etc/docker-distribution/registry/config.yml and change the rootdirectory line so that it points to /srv/repohost/registry. This can be done with a single sed command:

sed -i 's/^.*rootdirectory.*$/ rootdirectory: \/srv\/repohost\/registry/' /etc/docker-distribution/registry/config.yml

Configure access logging:

cat << EOF >> /etc/docker-distribution/registry/config.yml
log:
  accesslog:
    disabled: false
  level: info
  formatter: text
  fields:
    service: registry
    environment: staging
EOF

Create the directory that will hold the images in the registry, then enable, start, and check the docker-distribution service with systemd:

mkdir -p /srv/repohost/registry

systemctl enable docker-distribution

systemctl start docker-distribution

systemctl status docker-distribution
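If the service is healthy, the registry API answers on port 5000. A simple sanity check (the catalog is still empty at this point):

curl -s http://localhost:5000/v2/_catalog | jq .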

3. Copy Images from the Public Internet to the Isolated Registry

Copy all of the required images from the Red Hat registry (registry.access.redhat.com) into the local Docker registry, making sure they are written to the local registry with tags that match the OCP version being installed.

We do this with four shell scripts; splitting the work into four scripts allows them to run concurrently and saves time.

Script 1: ensure that images already tagged v3.9.14 keep the v3.9.14 tag:

RHT_TAG=v3.9.14

LOCAL_TAG=v3.9.14

IMAGES_SAME_PATCH="ose-ansible ose-cluster-capacity ose-deployer ose-docker-builder ose-docker-registry ose-egress-http-proxy ose-egress-router ose-haproxy-router ose-pod ose-sti-builder ose container-engine efs-provisioner node openvswitch oauth-proxy logging-auth-proxy logging-curator logging-elasticsearch logging-fluentd logging-kibana metrics-cassandra metrics-hawkular-metrics metrics-heapster oauth-proxy ose ose-service-catalog prometheus-alert-buffer prometheus-alertmanager prometheus registry-console ose-web-console ose-template-service-broker ose-ansible-service-broker logging-deployer metrics-deployer ose-service-catalog mediawiki-apb postgresql-apb mariadb-apb mysql-apb"

time for image in ${IMAGES_SAME_PATCH}

do

latest_version=`skopeo inspect --tls-verify=false docker://registry.access.redhat.com/openshift3/$image | jq ".RepoTags | map(select(startswith((\"${RHT_TAG}\")))) |.[] "| sort -V | tail -1 | tr -d '"'`

if [[ "$latest_version" == "" ]]; then latest_version='latest';fi

echo "Copying image: $image version: $latest_version"

skopeo copy --dest-tls-verify=false docker://registry.access.redhat.com/openshift3/${image}:${latest_version} docker://localhost:5000/openshift3/${image}:${LOCAL_TAG}

echo "Copied image: $image version: $latest_version"

done

Run script 1:

Script 2: copy images carrying the latest tag to the v3.9.14 tag:

# 7 minutes

RHT_TAG='latest'

LOCAL_TAG='v3.9.14'

IMAGES_LATEST_TO_PATCH="ose-recycler prometheus-node-exporter"

time for image in ${IMAGES_LATEST_TO_PATCH}

do

latest_version=`skopeo inspect --tls-verify=false docker://registry.access.redhat.com/openshift3/$image | jq ".RepoTags | map(select(startswith((\"${RHT_TAG}\")))) |.[] "| sort -V | tail -1 | tr -d '"'`

if [[ "$latest_version" == "" ]]; then latest_version='latest';fi

echo "Copying image: $image version: $latest_version"

skopeo copy --dest-tls-verify=false docker://registry.access.redhat.com/openshift3/${image}:${latest_version} docker://localhost:5000/openshift3/${image}:${LOCAL_TAG} &

done

Run script 2:

Script 3: copy the v3.9 tag to the latest tag:

RHT_TAG='v3.9'

LOCAL_TAG='latest'

# Latest tags point to older releases. Need to use version-specific tag::

IMAGES_MAJOR_LATEST="jenkins-2-rhel7 jenkins-slave-base-rhel7 jenkins-slave-maven-rhel7 jenkins-slave-nodejs-rhel7"

time for image in ${IMAGES_MAJOR_LATEST}

do

latest_version=`skopeo inspect --tls-verify=false docker://registry.access.redhat.com/openshift3/$image | jq ".RepoTags | map(select(startswith((\"${RHT_TAG}\")))) |.[] "| sort -V | tail -1 | tr -d '"'`

if [[ "$latest_version" == "" ]]; then latest_version='latest';fi

echo "Copying image: $image version: $latest_version"

skopeo copy --dest-tls-verify=false docker://registry.access.redhat.com/openshift3/${image}:${latest_version} docker://localhost:5000/openshift3/${image}:${LOCAL_TAG}

done

Run script 3:

Script 4: copy the application images (including etcd for the OpenShift Ansible Broker):

# Nexus and Gogs (latest) from docker.io

for image in sonatype/nexus3 wkulhanek/gogs

do

skopeo copy --dest-tls-verify=false docker://docker.io/${image}:latest docker://localhost:5000/${image}:latest

done

# from registry.access.redhat.com

for image in rhel7/etcd rhscl/postgresql-96-rhel7 jboss-eap-7/eap70-openshift

do

skopeo copy --dest-tls-verify=false docker://registry.access.redhat.com/$image:latest docker://localhost:5000/${image}:latest

done

Run script 4:

After a while:

Script 1 finishes:

Script 2 finishes:

Script 3 finishes:

Script 4 finishes:
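To spot-check the result, the registry's v2 API can be queried for the catalog and for the tags of an individual image, for example:

curl -s http://localhost:5000/v2/_catalog | jq .
curl -s http://localhost:5000/v2/openshift3/ose/tags/list | jq .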

4. Prepare for the OCP Installation

Confirm that the docker daemon is running on the masters and nodes:

ansible nodes -mshell -a'systemctl status docker| grep Active'

Confirm that all masters and nodes can reach the yum repositories on the isolated node, which are used to install and deploy OCP:

ansible all -mshell -a'yum repolist -v| grep baseurl'

Log in to the NFS server and prepare directories for the PVs:

mkdir -p /srv/nfs/user-vols/pv{1..200}

for pvnum in {1..50} ; do

echo "/srv/nfs/user-vols/pv${pvnum} *(rw,root_squash)" >> /etc/exports.d/openshift-uservols.exports

chown -R nfsnobody.nfsnobody /srv/nfs

chmod -R 777 /srv/nfs

done

systemctl restart nfs-server

The NFS server now exports 50 directories:
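The exports can be verified on the NFS server, for example:

showmount -e localhost
wc -l /etc/exports.d/openshift-uservols.exports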

5. Configure the Ansible Inventory File

The target configuration for the installation is:

  • Deploy a three-master HA OpenShift deployment with a load balancer for API and web console load balancing.
  • Make sure the load balancer listens on port 443 (and not 8443).
  • Set a default node selector of env=app.
  • Set two nodes to be infra nodes by setting the env=infra tag on them.
  • Deploy OpenShift routers with wildcard domains on both of the infra nodes.
  • Deploy an integrated OpenShift image registry on your infra nodes.
  • Deploy Fluentd logging and Hawkular metrics with persistent storage for all nodes in the cluster.
  • Deploy Prometheus-based metrics and alerting.
  • Deploy the template service broker and service catalog.
  • Deploy the OpenShift Ansible Broker and storage for the etcd pod.
  • Set up HTTP authentication via an htpasswd file.
  • Set up ovs-networkpolicy as SDN plug-in.

In addition, the installed OCP is configured to use the following:

  • Additional registries
  • Custom image prefixes for: metrics, logging, the service catalog, the cockpit deployer, the template service broker, the OpenShift Ansible Broker, Prometheus, Prometheus Alertmanager, Prometheus alertbuffer, and the Prometheus OAuth proxy

Review the prepared inventory file:

[root@bastion ~]# cat /var/preserve/hosts

[OSEv3:vars]

###########################################################################

### Ansible Vars

###########################################################################

timeout=60

ansible_become=yes

ansible_ssh_user=ec2-user

###########################################################################

### OpenShift Basic Vars

###########################################################################

deployment_type=openshift-enterprise

containerized=false

openshift_disable_check="disk_availability,memory_availability,docker_image_availability"

# default project node selector

osm_default_node_selector='env=app'

openshift_hosted_infra_selector="env=infra"

# Configure node kubelet arguments. pods-per-core is valid in OpenShift Origin 1.3 or OpenShift Container Platform 3.3 and later.

openshift_node_kubelet_args={'pods-per-core': ['10'], 'max-pods': ['250'], 'image-gc-high-threshold': ['85'], 'image-gc-low-threshold': ['75']}

# Configure logrotate scripts

# See: https://github.com/nickhammond/ansible-logrotate

logrotate_scripts=[{"name": "syslog", "path": "/var/log/cron\n/var/log/maillog\n/var/log/messages\n/var/log/secure\n/var/log/spooler\n", "options": ["daily", "rotate 7","size 500M", "compress", "sharedscripts", "missingok"], "scripts": {"postrotate": "/bin/kill -HUP `cat /var/run/syslogd.pid 2> /dev/null` 2> /dev/null || true"}}]

###########################################################################

### OpenShift Optional Vars

###########################################################################

# Enable cockpit

osm_use_cockpit=true

osm_cockpit_plugins=['cockpit-kubernetes']

###########################################################################

### OpenShift Master Vars

###########################################################################

openshift_master_api_port=443

openshift_master_console_port=443

openshift_master_cluster_method=native

openshift_master_cluster_hostname=loadbalancer1.a6d9.internal

openshift_master_cluster_public_hostname=loadbalancer1.a6d9.example.opentlc.com

openshift_master_default_subdomain=apps.a6d9.example.opentlc.com

#openshift_master_ca_certificate={'certfile': '/root/intermediate_ca.crt', 'keyfile': '/root/intermediate_ca.key'}

openshift_master_overwrite_named_certificates=True

###########################################################################

### OpenShift Network Vars

###########################################################################

osm_cluster_network_cidr=10.1.0.0/16

openshift_portal_net=172.30.0.0/16

#os_sdn_network_plugin_name='redhat/openshift-ovs-multitenant'

os_sdn_network_plugin_name='redhat/openshift-ovs-subnet'

##########################################################################

### Disconnected Install Vars

### Requires a docker registry at isolated1.a6d9.internal:5000

###########################################################################

# sets the debug level for all OpenShift components. Default is 2

#debug_level=8

# used for container-based install, not RPM

system_images_registry=isolated1.a6d9.internal:5000

# https://bugzilla.redhat.com/show_bug.cgi?id=1461465 target release 3.9

#the enterprise registry will not be added to the docker registries.

#also enables insecure registries, somehow.

openshift_docker_ent_reg=''

# https://bugzilla.redhat.com/show_bug.cgi?id=1516534 target release 3.10

oreg_url=isolated1.a6d9.internal:5000/openshift3/ose-${component}:${version}

openshift_examples_modify_imagestreams=true

openshift_docker_additional_registries=isolated1.a6d9.internal:5000

openshift_docker_insecure_registries=isolated1.a6d9.internal:5000

openshift_docker_blocked_registries=registry.access.redhat.com,docker.io

openshift_metrics_image_prefix=isolated1.a6d9.internal:5000/openshift3/

openshift_metrics_image_version=v3.9.14

openshift_logging_image_prefix=isolated1.a6d9.internal:5000/openshift3/

openshift_logging_image_version=v3.9.14

ansible_service_broker_image_prefix=isolated1.a6d9.internal:5000/openshift3/ose-

ansible_service_broker_image_tag=v3.9.14

ansible_service_broker_etcd_image_prefix=isolated1.a6d9.internal:5000/rhel7/

ansible_service_broker_etcd_image_tag=latest

openshift_service_catalog_image_prefix=isolated1.a6d9.internal:5000/openshift3/ose-

openshift_service_catalog_image_version=v3.9.14

openshift_cockpit_deployer_prefix=isolated1.a6d9.internal:5000/openshift3/

openshift_cockpit_deployer_version=v3.9.14

template_service_broker_prefix=isolated1.a6d9.internal:5000/openshift3/ose-

template_service_broker_version=v3.9.14

openshift_web_console_prefix=isolated1.a6d9.internal:5000/openshift3/ose-

openshift_web_console_version=v3.9.14

# PROMETHEUS SETTINGS

openshift_prometheus_image_prefix=isolated1.a6d9.internal:5000/openshift3/

openshift_prometheus_image_version=v3.9.14

openshift_prometheus_alertmanager_image_prefix=isolated1.a6d9.internal:5000/openshift3/

openshift_prometheus_alertmanager_image_version=v3.9.14

openshift_prometheus_alertbuffer_image_prefix=isolated1.a6d9.internal:5000/openshift3/

openshift_prometheus_alertbuffer_image_version=v3.9.14

openshift_prometheus_oauth_proxy_image_prefix=isolated1.a6d9.internal:5000/openshift3/

openshift_prometheus_oauth_proxy_image_version=v3.9.14

openshift_prometheus_node_exporter_image_prefix=isolated1.a6d9.internal:5000/openshift3/

openshift_prometheus_node_exporter_image_version=v3.9.14

##########################################################################

## OpenShift Authentication Vars

###########################################################################

# htpasswd auth

openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]

# Defining htpasswd users

#openshift_master_htpasswd_users={'user1': '<pre-hashed password>', 'user2': '<pre-hashed password>'}

# or

openshift_master_htpasswd_file=/root/htpasswd.openshift

###########################################################################

### OpenShift Metrics and Logging Vars

###########################################################################

# Enable cluster metrics

openshift_metrics_install_metrics=True

openshift_metrics_storage_kind=nfs

openshift_metrics_storage_access_modes=['ReadWriteOnce']

openshift_metrics_storage_nfs_directory=/srv/nfs

openshift_metrics_storage_nfs_options='*(rw,root_squash)'

openshift_metrics_storage_volume_name=metrics

openshift_metrics_storage_volume_size=10Gi

openshift_metrics_storage_labels={'storage': 'metrics'}

openshift_metrics_cassandra_nodeselector={"env":"infra"}

openshift_metrics_hawkular_nodeselector={"env":"infra"}

openshift_metrics_heapster_nodeselector={"env":"infra"}

## Add Prometheus Metrics:

openshift_hosted_prometheus_deploy=true

openshift_prometheus_node_selector={"env":"infra"}

openshift_prometheus_namespace=openshift-metrics

# Prometheus

openshift_prometheus_storage_kind=nfs

openshift_prometheus_storage_access_modes=['ReadWriteOnce']

openshift_prometheus_storage_nfs_directory=/srv/nfs

openshift_prometheus_storage_nfs_options='*(rw,root_squash)'

openshift_prometheus_storage_volume_name=prometheus

openshift_prometheus_storage_volume_size=10Gi

openshift_prometheus_storage_labels={'storage': 'prometheus'}

openshift_prometheus_storage_type='pvc'

# For prometheus-alertmanager

openshift_prometheus_alertmanager_storage_kind=nfs

openshift_prometheus_alertmanager_storage_access_modes=['ReadWriteOnce']

openshift_prometheus_alertmanager_storage_nfs_directory=/srv/nfs

openshift_prometheus_alertmanager_storage_nfs_options='*(rw,root_squash)'

openshift_prometheus_alertmanager_storage_volume_name=prometheus-alertmanager

openshift_prometheus_alertmanager_storage_volume_size=10Gi

openshift_prometheus_alertmanager_storage_labels={'storage': 'prometheus-alertmanager'}

openshift_prometheus_alertmanager_storage_type='pvc'

# For prometheus-alertbuffer

openshift_prometheus_alertbuffer_storage_kind=nfs

openshift_prometheus_alertbuffer_storage_access_modes=['ReadWriteOnce']

openshift_prometheus_alertbuffer_storage_nfs_directory=/srv/nfs

openshift_prometheus_alertbuffer_storage_nfs_options='*(rw,root_squash)'

openshift_prometheus_alertbuffer_storage_volume_name=prometheus-alertbuffer

openshift_prometheus_alertbuffer_storage_volume_size=10Gi

openshift_prometheus_alertbuffer_storage_labels={'storage': 'prometheus-alertbuffer'}

openshift_prometheus_alertbuffer_storage_type='pvc'

# Already set in the disconnected section

# openshift_prometheus_node_exporter_image_version=v3.9

# Enable cluster logging

openshift_logging_install_logging=True

openshift_logging_storage_kind=nfs

openshift_logging_storage_access_modes=['ReadWriteOnce']

openshift_logging_storage_nfs_directory=/srv/nfs

openshift_logging_storage_nfs_options='*(rw,root_squash)'

openshift_logging_storage_volume_name=logging

openshift_logging_storage_volume_size=10Gi

openshift_logging_storage_labels={'storage': 'logging'}

# openshift_logging_kibana_hostname=kibana.apps.a6d9.example.opentlc.com

openshift_logging_es_cluster_size=1

openshift_logging_es_nodeselector={"env":"infra"}

openshift_logging_kibana_nodeselector={"env":"infra"}

openshift_logging_curator_nodeselector={"env":"infra"}

###########################################################################

### OpenShift Project Management Vars

###########################################################################

# Configure additional projects

openshift_additional_projects={'openshift-template-service-broker': {'default_node_selector': ''}}

###########################################################################

### OpenShift Router and Registry Vars

###########################################################################

openshift_hosted_router_replicas=2

#openshift_hosted_router_certificate={"certfile": "/path/to/router.crt", "keyfile": "/path/to/router.key", "cafile": "/path/to/router-ca.crt"}

openshift_hosted_registry_replicas=1

openshift_hosted_registry_storage_kind=nfs

openshift_hosted_registry_storage_access_modes=['ReadWriteMany']

openshift_hosted_registry_storage_nfs_directory=/srv/nfs

openshift_hosted_registry_storage_nfs_options='*(rw,root_squash)'

openshift_hosted_registry_storage_volume_name=registry

openshift_hosted_registry_storage_volume_size=20Gi

openshift_hosted_registry_pullthrough=true

openshift_hosted_registry_acceptschema2=true

openshift_hosted_registry_enforcequota=true

###########################################################################

### OpenShift Service Catalog Vars

###########################################################################

openshift_enable_service_catalog=true

template_service_broker_install=true

openshift_template_service_broker_namespaces=['openshift']

ansible_service_broker_install=true

ansible_service_broker_local_registry_whitelist=['.*-apb$']

openshift_hosted_etcd_storage_kind=nfs

openshift_hosted_etcd_storage_nfs_options="*(rw,root_squash,sync,no_wdelay)"

openshift_hosted_etcd_storage_nfs_directory=/srv/nfs

openshift_hosted_etcd_storage_labels={'storage': 'etcd-asb'}

openshift_hosted_etcd_storage_volume_name=etcd-asb

openshift_hosted_etcd_storage_access_modes=['ReadWriteOnce']

openshift_hosted_etcd_storage_volume_size=10G

###########################################################################

### OpenShift Hosts

###########################################################################

[OSEv3:children]

lb

masters

etcd

nodes

nfs

[lb]

loadbalancer1.a6d9.internal host_zone=ap-southeast-1a

[masters]

master1.a6d9.internal host_zone=ap-southeast-1a

master2.a6d9.internal host_zone=ap-southeast-1a

master3.a6d9.internal host_zone=ap-southeast-1a

[etcd]

master1.a6d9.internal host_zone=ap-southeast-1a

master2.a6d9.internal host_zone=ap-southeast-1a

master3.a6d9.internal host_zone=ap-southeast-1a

[nodes]

## These are the masters

master1.a6d9.internal openshift_hostname=master1.a6d9.internal openshift_node_labels="{'logging':'true','openshift_schedulable':'False','cluster': 'a6d9', 'zone': 'ap-southeast-1a'}"

master2.a6d9.internal openshift_hostname=master2.a6d9.internal openshift_node_labels="{'logging':'true','openshift_schedulable':'False','cluster': 'a6d9', 'zone': 'ap-southeast-1a'}"

master3.a6d9.internal openshift_hostname=master3.a6d9.internal openshift_node_labels="{'logging':'true','openshift_schedulable':'False','cluster': 'a6d9', 'zone': 'ap-southeast-1a'}"

## These are infranodes

infranode2.a6d9.internal openshift_hostname=infranode2.a6d9.internal openshift_node_labels="{'logging':'true','cluster': 'a6d9', 'env':'infra', 'zone': 'ap-southeast-1a'}"

infranode1.a6d9.internal openshift_hostname=infranode1.a6d9.internal openshift_node_labels="{'logging':'true','cluster': 'a6d9', 'env':'infra', 'zone': 'ap-southeast-1a'}"

## These are regular nodes

node2.a6d9.internal openshift_hostname=node2.a6d9.internal openshift_node_labels="{'logging':'true','cluster': 'a6d9', 'env':'app', 'zone': 'ap-southeast-1a'}"

node3.a6d9.internal openshift_hostname=node3.a6d9.internal openshift_node_labels="{'logging':'true','cluster': 'a6d9', 'env':'app', 'zone': 'ap-southeast-1a'}"

node1.a6d9.internal openshift_hostname=node1.a6d9.internal openshift_node_labels="{'logging':'true','cluster': 'a6d9', 'env':'app', 'zone': 'ap-southeast-1a'}"

[nfs]

support1.a6d9.internal openshift_hostname=support1.a6d9.internal

6. Install OCP

ansible-playbook -f 20 /usr/share/ansible/openshift-ansible/playbooks/prerequisites.yml

ansible-playbook -f 20 /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml

After a while, the installation completes:

Check the nodes:

Check the pods that have been deployed:
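The checks behind these screenshots can be reproduced from a master (or from the bastion once the cluster-admin kubeconfig is copied over), for example:

oc get nodes --show-labels
oc get pods --all-namespaces -o wide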

7. Create PVs

On a master:

mkdir /root/pvs

for volume in pv{1..200} ; do

cat << EOF > /root/pvs/${volume}
{
  "apiVersion": "v1",
  "kind": "PersistentVolume",
  "metadata": {
    "name": "${volume}"
  },
  "spec": {
    "capacity": {
      "storage": "10Gi"
    },
    "accessModes": [ "ReadWriteOnce" ],
    "nfs": {
      "path": "/srv/nfs/user-vols/${volume}",
      "server": "support1.$GUID.internal"
    },
    "persistentVolumeReclaimPolicy": "Recycle"
  }
}
EOF

echo "Created def file for ${volume}";

done;

cat /root/pvs/* | oc create -f -
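A quick check that the PVs were created:

oc get pv | head
oc get pv | wc -l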

8. Configure the Load Balancer

Log in to the load balancer; it runs HAProxy.

Open port 80:

iptables -A OS_FIREWALL_ALLOW -p tcp -m tcp --dport 80 -j ACCEPT

service iptables save

Rewrite /etc/haproxy/haproxy.cfg so that HAProxy forwards API traffic to the masters and application traffic to the infra nodes:

export GUID=`hostname | awk -F. '{print $2}'`

MASTER1=`host master1.$GUID.internal | cut -f4 -d" "`

MASTER2=`host master2.$GUID.internal | cut -f4 -d" "`

MASTER3=`host master3.$GUID.internal | cut -f4 -d" "`

INFRANODE1=`host infranode1.$GUID.internal | cut -f4 -d" "`

INFRANODE2=`host infranode2.$GUID.internal | cut -f4 -d" "`

cat <<EOF > /etc/haproxy/haproxy.cfg

# Global settings

#---------------------------------------------------------------------

global

maxconn 20000

log /dev/log local0 info

chroot /var/lib/haproxy

pidfile /var/run/haproxy.pid

user haproxy

group haproxy

daemon

# turn on stats unix socket

stats socket /var/lib/haproxy/stats

#---------------------------------------------------------------------

# common defaults that all the 'listen' and 'backend' sections will

# use if not designated in their block

#---------------------------------------------------------------------

defaults

mode http

log global

option httplog

option dontlognull

# option http-server-close

option forwardfor except 127.0.0.0/8

option redispatch

retries 3

timeout http-request 10s

timeout queue 1m

timeout connect 10s

timeout client 300s

timeout server 300s

timeout http-keep-alive 10s

timeout check 10s

maxconn 20000

listen stats :9000

mode http

stats enable

stats uri /

frontend atomic-openshift-all-the-things-http

bind *:80

mode tcp

option tcplog

default_backend atomic-openshift-apps-http

frontend atomic-openshift-all-the-things-https

bind *:443

mode tcp

option tcplog

tcp-request inspect-delay 5s

tcp-request content accept if { req_ssl_hello_type 1 }

acl host_masters req_ssl_sni -i loadbalancer1.${GUID}.example.opentlc.com loadbalancer1.${GUID}.internal

use_backend atomic-openshift-api if host_masters

default_backend atomic-openshift-apps-https


backend atomic-openshift-api

balance source

mode tcp

server master0 $MASTER1:443 check

server master1 $MASTER2:443 check

server master2 $MASTER3:443 check

backend atomic-openshift-apps-https

balance source

mode tcp

server infranode1 $INFRANODE1:443 check

server infranode2 $INFRANODE2:443 check

backend atomic-openshift-apps-http

balance source

mode tcp

server infranode1 $INFRANODE1:80 check

server infranode2 $INFRANODE2:80 check

EOF

Review the generated file:

The section below shows which requests are forwarded to the backend master nodes:

The next section shows which requests are forwarded to the infra nodes, i.e. the nodes where the routers run:

systemctl restart haproxy ; systemctl status haproxy

ss -lntp

Open the web console in a browser to test:
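The same test can be done from the command line; for example, with the hostnames used in this lab, the first request should be answered by the master API and the second by the routers on the infra nodes (typically a 503 page if no route matches that host):

curl -k https://loadbalancer1.$GUID.example.opentlc.com/healthz
curl -I http://test.apps.$GUID.example.opentlc.com/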

9. Deploy the CI/CD Tools

Gogs is a source code repository manager with a web front end.

Nexus is an artifact repository used to store build dependencies (among other capabilities).

As root, copy the .kube directory from one of the masters to the bastion host; this lets us log in to OCP as system:admin from the bastion later on:
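A minimal way to do this, assuming the bastion can SSH to master1 as root (otherwise copy it via the ansible user):

scp -r master1.$GUID.internal:/root/.kube /root/
oc whoami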

Import all of the images that may be needed (for example PostgreSQL, Gogs, and Nexus3) from the isolated1.$GUID.internal host into the integrated OpenShift docker-registry, whose pods run on the infra nodes:

oc import-image docker-registry.default.svc:5000/gogs:latest --from=isolated1.$GUID.internal:5000/wkulhanek/gogs:latest --confirm --insecure=true -n openshift

oc import-image docker-registry.default.svc:5000/sonatype/nexus3:latest --from=isolated1.$GUID.internal:5000/sonatype/nexus3:latest --confirm --insecure=true -n openshift

oc import-image docker-registry.default.svc:5000/rhscl/postgresql:latest --from=isolated1.$GUID.internal:5000/rhscl/postgresql-96-rhel7:latest --confirm --insecure=true -n openshift

oc tag postgresql:latest postgresql:9.6 -n openshift

oc import-image docker-registry.default.svc:5000/openshift/jboss-eap70-openshift:latest --from=isolated1.$GUID.internal:5000/jboss-eap-7/eap70-openshift:latest --confirm --insecure=true -n openshift

oc tag jboss-eap70-openshift:latest jboss-eap70-openshift:1.7 -n openshift

Check the image streams; the imports have succeeded:
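The same check from the command line, for example:

oc get is -n openshift | egrep 'gogs|nexus3|postgresql|jboss-eap70'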

Deploy Nexus3:

oc new-project cicd

echo "apiVersion: v1

kind: PersistentVolumeClaim

metadata:

name: nexus-pvc

spec:

accessModes:

- ReadWriteOnce

resources:

requests:

storage: 10Gi" | oc create -f -

oc new-app openshift/nexus3:latest

oc rollout pause dc nexus3

oc patch dc nexus3 --patch='{ "spec": { "strategy": { "type": "Recreate" }}}'

oc set resources dc nexus3 --limits=memory=2Gi --requests=memory=1Gi

oc set volume dc/nexus3 --add --overwrite --name=nexus3-volume-1 --mount-path=/nexus-data/ --type persistentVolumeClaim --claim-name=nexus-pvc

oc set probe dc/nexus3 --liveness --failure-threshold 3 --initial-delay-seconds 60 -- echo ok

oc set probe dc/nexus3 --readiness --failure-threshold 3 --initial-delay-seconds 60 --get-url=http://:8081/repository/maven-public/

oc rollout resume dc nexus3

oc expose svc nexus3

Nexus3 during deployment:

Deployment complete:
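The rollout can also be followed from the CLI, for example:

oc get pods -n cicd
oc get route nexus3 -n cicd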

10. Configure a Repository in Nexus

In this section we use the Nexus web console to prepare a repository for the build artifacts.

Because this is a completely offline environment, Nexus cannot act as a proxy repository. It is therefore necessary to create a hosted Maven2 repository and copy into it every artifact required to build any given application. All of the necessary artifacts are provided in a zip file.

Log in to the Nexus instance created in the previous section at http://nexus3-cicd.apps.$GUID.example.opentlc.com and create a hosted Maven2 repository named offline:

Click the gear icon:

Click Create repository:

Click Create:

Log in to the isolated node, download the dependency zip file, and extract it into the $HOME/repository directory (only this machine can reach the Internet to download resources):

cd $HOME

wget http://admin.na.shared.opentlc.com/openshift-tasks-repository.zip

unzip -o openshift-tasks-repository.zip -d $HOME

Create the following nexusimport.sh script in $HOME/repository; it simplifies loading the dependencies into Nexus.

cd $HOME/repository

cat << EOF > ./nexusimport.sh
#!/bin/bash
# copy and run this script to the root of the repository directory containing files
# this script attempts to exclude uploading itself explicitly so the script name is important

# Get command line params
while getopts ":r:u:p:" opt; do
  case \$opt in
    r) REPO_URL="\$OPTARG"
    ;;
    u) USERNAME="\$OPTARG"
    ;;
    p) PASSWORD="\$OPTARG"
    ;;
  esac
done

find . -type f \\
  -not -path './nexusimport\.sh*' \\
  -not -path '*/\.*' \\
  -not -path '*/\^archetype\-catalog\.xml*' \\
  -not -path '*/\^maven\-metadata\-local*\.xml' \\
  -not -path '*/\^maven\-metadata\-deployment*\.xml' | \\
  sed "s|^\./||" | \\
  xargs -t -I '{}' curl -s -S -u "\$USERNAME:\$PASSWORD" -X PUT -T {} \${REPO_URL}/{} ;
EOF

chmod +x $HOME/repository/nexusimport.sh

Now run the nexusimport.sh script to upload the entire repository to Nexus:

cd $HOME/repository

./nexusimport.sh -u admin -p admin123 -r http://nexus3-cicd.apps.a6d9.example.opentlc.com/repository/offline/

(The same command with the route looked up dynamically: ./nexusimport.sh -u admin -p admin123 -r http://$(oc get route nexus3 --template='{{ .spec.host }}' -n cicd)/repository/offline/)

After a while, the import completes:

Browse this Maven repository in a web browser; it now contains a large number of artifacts, all of which were just imported:

11. Deploy Gogs

Create a PostgreSQL database for Gogs:

oc project cicd

oc new-app postgresql-persistent --param POSTGRESQL_DATABASE=gogs --param POSTGRESQL_USER=gogs --param POSTGRESQL_PASSWORD=gogs --param VOLUME_CAPACITY=4Gi -lapp=postgresql_gogs

The deployment succeeds:

Create a Gogs template file named $HOME/gogs.yaml with the following content:

echo 'kind: Template
apiVersion: v1
metadata:
  annotations:
    description: The Gogs git server. Requires a PostgreSQL database.
    tags: instant-app,gogs,datastore
    iconClass: icon-github
  name: gogs
objects:
- kind: Service
  apiVersion: v1
  metadata:
    annotations:
      description: The Gogs servers http port
    labels:
      app: ${APPLICATION_NAME}
    name: ${APPLICATION_NAME}
  spec:
    ports:
    - name: 3000-tcp
      port: 3000
      protocol: TCP
      targetPort: 3000
    selector:
      app: ${APPLICATION_NAME}
      deploymentconfig: ${APPLICATION_NAME}
    sessionAffinity: None
    type: ClusterIP
- kind: Route
  apiVersion: v1
  id: ${APPLICATION_NAME}-http
  metadata:
    annotations:
      description: Route for applications http service.
    labels:
      app: ${APPLICATION_NAME}
    name: ${APPLICATION_NAME}
  spec:
    host: ${GOGS_ROUTE}
    to:
      name: ${APPLICATION_NAME}
- kind: DeploymentConfig
  apiVersion: v1
  metadata:
    labels:
      app: ${APPLICATION_NAME}
    name: ${APPLICATION_NAME}
  spec:
    replicas: 1
    selector:
      app: ${APPLICATION_NAME}
      deploymentconfig: ${APPLICATION_NAME}
    strategy:
      rollingParams:
        intervalSeconds: 1
        maxSurge: 25%
        maxUnavailable: 25%
        timeoutSeconds: 600
        updatePeriodSeconds: 1
      type: Rolling
    template:
      metadata:
        labels:
          app: ${APPLICATION_NAME}
          deploymentconfig: ${APPLICATION_NAME}
      spec:
        containers:
        - image: \"\"
          imagePullPolicy: Always
          name: ${APPLICATION_NAME}
          ports:
          - containerPort: 3000
            protocol: TCP
          resources: {}
          terminationMessagePath: /dev/termination-log
          volumeMounts:
          - name: gogs-data
            mountPath: /data
          - name: gogs-config
            mountPath: /opt/gogs/custom/conf
          readinessProbe:
            httpGet:
              path: /
              port: 3000
              scheme: HTTP
            initialDelaySeconds: 3
            timeoutSeconds: 1
            periodSeconds: 20
            successThreshold: 1
            failureThreshold: 3
          livenessProbe:
            httpGet:
              path: /
              port: 3000
              scheme: HTTP
            initialDelaySeconds: 3
            timeoutSeconds: 1
            periodSeconds: 10
            successThreshold: 1
            failureThreshold: 3
        dnsPolicy: ClusterFirst
        restartPolicy: Always
        terminationGracePeriodSeconds: 30
        volumes:
        - name: gogs-data
          persistentVolumeClaim:
            claimName: gogs-data
        - name: gogs-config
          configMap:
            name: gogs-config
            items:
            - key: app.ini
              path: app.ini
    test: false
    triggers:
    - type: ConfigChange
    - imageChangeParams:
        automatic: true
        containerNames:
        - ${APPLICATION_NAME}
        from:
          kind: ImageStreamTag
          name: ${GOGS_IMAGE}
          namespace: openshift
      type: ImageChange
- kind: PersistentVolumeClaim
  apiVersion: v1
  metadata:
    name: gogs-data
  spec:
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: ${GOGS_VOLUME_CAPACITY}
- kind: ConfigMap
  apiVersion: v1
  metadata:
    name: gogs-config
  data:
    app.ini: |
      APP_NAME = Gogs
      RUN_MODE = prod
      RUN_USER = gogs
      [database]
      DB_TYPE = postgres
      HOST = postgresql:5432
      NAME = ${DATABASE_NAME}
      USER = ${DATABASE_USER}
      PASSWD = ${DATABASE_PASSWORD}
      SSL_MODE = disable
      [repository]
      ROOT = /data/repositories
      [server]
      ROOT_URL=http://${GOGS_ROUTE}
      [security]
      INSTALL_LOCK = true
      [mailer]
      ENABLED = false
      [service]
      ENABLE_CAPTCHA = false
      REGISTER_EMAIL_CONFIRM = false
      ENABLE_NOTIFY_MAIL = false
      DISABLE_REGISTRATION = false
      REQUIRE_SIGNIN_VIEW = false
      [picture]
      DISABLE_GRAVATAR = false
      ENABLE_FEDERATED_AVATAR = true
      [webhook]
      SKIP_TLS_VERIFY = true
parameters:
- name: APPLICATION_NAME
  description: The name for the application.
  required: true
  value: gogs
- name: GOGS_ROUTE
  description: The route for the Gogs Application
  required: true
- name: GOGS_VOLUME_CAPACITY
  description: Volume space available for data, e.g. 512Mi, 2Gi
  required: true
  value: 4Gi
- name: DATABASE_USER
  displayName: Database Username
  required: true
  value: gogs
- name: DATABASE_PASSWORD
  displayName: Database Password
  required: true
  value: gogs
- name: DATABASE_NAME
  displayName: Database Name
  required: true
  value: gogs
- name: GOGS_IMAGE
  displayName: Gogs Image and tag
  required: true
  value: gogs:latest' > $HOME/gogs.yaml

Create Gogs from the template:

oc process -f $HOME/gogs.yaml --param GOGS_ROUTE=gogs-cicd.apps.a6d9.example.opentlc.com|oc create -f -

Gogs is deployed successfully:

Next, log in to Gogs:

Register an account:

After registering and logging in, create an organization named CICDLabs.

Under the CICDLabs organization, create a repository named openshift-tasks.

This repository must be public; leave the visibility option unchanged.

At this point we have created an empty source code repository in a Gogs instance that OpenShift can reach. Next, we push code into this repository and build from that code and the dependencies in Nexus.

12. Push the openshift-tasks Source Code to Gogs

Clone the openshift-tasks repository from GitHub and push it to the Gogs repository.

On the isolated node:

cd $HOME

git clone https://github.com/wkulhanek/openshift-tasks.git

cd $HOME/openshift-tasks

Add the Gogs server as a remote in the local Git repository and push to it with the commands below. When the push command runs, git prompts for a Gogs username and password; use the account that was just registered in Gogs.

git remote add gogs http://gogs-cicd.apps.a6d9.example.opentlc.com/CICDLabs/openshift-tasks.git

git push -u gogs master
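An optional sanity check from the same clone before opening the web UI:

git remote -v
git ls-remote gogs | head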

Log in to Gogs and check the contents of the repository:

At this point we have accomplished the following:

  • The correct image stream for the builder image (EAP 7.0) is in the OCP environment, which has no Internet access.
  • The source code is in a Gogs repository that has no Internet access.
  • All of the Maven build dependencies are in Nexus.

13. Run the Build from the Bastion Host

In this section we use the eap70-basic-s2i template to create the openshift-tasks application.

As root on the bastion host, create the tasks project and then create the application.

We pass the Nexus repository to the builder image as a parameter; every Red Hat xPaaS builder image understands the MAVEN_MIRROR_URL variable. In addition, because the template's defaults for the branch and context directory are not what we want here, the branch is explicitly set to master and the context directory to empty:

oc new-project tasks

oc new-app eap70-basic-s2i --param MAVEN_MIRROR_URL=http://nexus3.cicd.svc.cluster.local:8081/repository/offline --param SOURCE_REPOSITORY_URL=http://gogs.cicd.svc.cluster.local:3000/CICDLabs/openshift-tasks --param SOURCE_REPOSITORY_REF=master --param CONTEXT_DIR= --param APPLICATION_NAME=tasks

Check the build logs:
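The build can be followed from the bastion, for example:

oc logs -f bc/tasks -n tasks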

First the dependencies are downloaded:

Then the build runs:

After a successful build, the image is pushed:

Next the DeploymentConfig is triggered and the pod is deployed:

Open the application in a browser to check it:

The application is deployed successfully and is reachable:
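The route created by the template can also be checked from the command line (the exact hostname comes from the route object in the tasks project):

oc get route tasks -n tasks
curl -I http://$(oc get route tasks -n tasks --template='{{ .spec.host }}')/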

魏新宇

  • Senior Solution Architect at Red Hat
  • Focused on promoting open source cloud computing, containers, and automated operations in the financial industry
  • Holds MBA, ITIL V3, COBIT 5, C-STAR, and TOGAF 9.1 (Certified) management certifications
  • Holds Red Hat RHCE/RHCA, VMware VCP-DCV, VCP-DT, VCP-Network, VCP-Cloud, AIX, and HP-UX technical certifications
