Kubernetes Learning Path (26): Deploying a Cluster with kubeasz + ansible
Published: 2019-06-13



Reference documentation: https://github.com/gjmzj/kubeasz/

1. Environment

IP              Hostname      Role                          VM spec
192.168.56.11   k8s-master    deploy, master1, lb1, etcd    4c4g
192.168.56.12   k8s-master2   master2, lb2                  4c4g
192.168.56.13   k8s-node01    etcd, node                    2c2g
192.168.56.14   k8s-node02    etcd, node                    2c2g
192.168.56.110  (VIP)

Kernel: 3.10    Docker: 18.09    Kubernetes: 1.13    etcd: 3.0

2. Preparation

  • On all four machines, run:
yum install -y epel-release
yum update -y
yum install python -y
  • On the deploy node, install ansible and set up SSH key authentication:
yum install -y ansible
ssh-keygen
for ip in 11 12 13 14;do ssh-copy-id 192.168.56.$ip;done
  • On the deploy node, fetch the kubeasz playbooks:
[root@k8s-master ~]# git clone https://github.com/gjmzj/kubeasz.git
[root@k8s-master ~]# mv kubeasz/* /etc/ansible/

Download the binaries from the Baidu Cloud share: https://pan.baidu.com/s/1c4RFaA#list/path=%2F

Pick the tarball that matches the version you want; here I use 1.13.
After some back and forth, the k8s.1-13-5.tar.gz tarball ended up on the deploy node.

[root@k8s-master ~]# tar -zxf k8s.1-13-5.tar.gz
[root@k8s-master ~]# mv bin/* /etc/ansible/bin/
  • Configure the cluster parameters:
[root@k8s-master ~]# cd /etc/ansible/
[root@k8s-master ansible]# cp example/hosts.m-masters.example hosts
cp: overwrite ‘hosts’? y
[root@k8s-master ansible]# vim hosts  # adjust the IPs to match your environment
[deploy]
192.168.56.11 NTP_ENABLED=no    # whether to install chrony time sync for the cluster
[etcd]  # provide a NODE_NAME for each etcd member; the etcd cluster must have an odd number of nodes (1,3,5,7...)
192.168.56.11 NODE_NAME=etcd1
192.168.56.13 NODE_NAME=etcd2
192.168.56.14 NODE_NAME=etcd3
[kube-master]
192.168.56.11
192.168.56.12
[kube-node]
192.168.56.13
192.168.56.14
[lb]    # load balancers running haproxy+keepalived (more than 2 nodes are now supported, but 2 are usually enough)
192.168.56.12 LB_ROLE=backup
192.168.56.11 LB_ROLE=master
## The cluster MASTER IP is the VIP on the LB nodes; the VIP listens on service port 8443 to distinguish it from the default apiserver port
# On public clouds, use the cloud load balancer's internal address and listener port instead
[all:vars]
DEPLOY_MODE=multi-master
MASTER_IP="192.168.56.110"  # the VIP
KUBE_APISERVER="https://{{ MASTER_IP }}:8443"
CLUSTER_NETWORK="flannel"
SERVICE_CIDR="10.68.0.0/16"
CLUSTER_CIDR="172.20.0.0/16"
NODE_PORT_RANGE="20000-40000"
CLUSTER_KUBERNETES_SVC_IP="10.68.0.1"
CLUSTER_DNS_SVC_IP="10.68.0.2"
CLUSTER_DNS_DOMAIN="cluster.local."
bin_dir="/opt/kube/bin"
ca_dir="/etc/kubernetes/ssl"
base_dir="/etc/ansible"

# After editing, test connectivity to the hosts
[root@k8s-master ansible]# ansible all -m ping
192.168.56.12 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
192.168.56.13 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
192.168.56.14 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
192.168.56.11 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}

3. Step-by-Step Installation

3.1 Create certificates and prepare for installation

[root@k8s-master ansible]# ansible-playbook 01.prepare.yml
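01.prepare.yml generates the cluster CA and related certificates; with the inventory above they land under ca_dir, i.e. /etc/kubernetes/ssl (the same ca.pem is referenced by the etcd health check in the next step). An optional sketch for inspecting the CA with openssl:

ls /etc/kubernetes/ssl/
openssl x509 -in /etc/kubernetes/ssl/ca.pem -noout -subject -dates   # CA subject and validity period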

3.2 Install the etcd cluster

[root@k8s-master ansible]# ansible-playbook 02.etcd.yml
[root@k8s-master ansible]# bash
# Verify the etcd cluster status
[root@k8s-master ansible]# systemctl status etcd
# Run the following on any etcd cluster node
[root@k8s-master ansible]# for ip in 11 13 14;do ETCDCTL_API=3 etcdctl --endpoints=https://192.168.56.$ip:2379 --cacert=/etc/kubernetes/ssl/ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem endpoint health;done
https://192.168.56.11:2379 is healthy: successfully committed proposal: took = 7.967375ms
https://192.168.56.13:2379 is healthy: successfully committed proposal: took = 12.557643ms
https://192.168.56.14:2379 is healthy: successfully committed proposal: took = 9.70078ms
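Besides endpoint health, you can also list the cluster members to confirm all three etcd nodes joined; a sketch using the same TLS flags as the health check above:

ETCDCTL_API=3 etcdctl \
  --endpoints=https://192.168.56.11:2379 \
  --cacert=/etc/kubernetes/ssl/ca.pem \
  --cert=/etc/etcd/ssl/etcd.pem \
  --key=/etc/etcd/ssl/etcd-key.pem \
  member list    # should print three members: etcd1, etcd2, etcd3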

3.3 Install Docker

[root@k8s-master ansible]# ansible-playbook 03.docker.yml
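To confirm Docker was installed and is running on every host, a quick ad-hoc check from the deploy node can help (a sketch; the shell module simply runs the given command on each inventory host):

ansible all -m shell -a 'docker version'
ansible all -m shell -a 'systemctl is-active docker'   # expect "active" on every host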

3.4 Install the master nodes

[root@k8s-master ansible]# ansible-playbook 04.kube-master.yml
# Check the service status
[root@k8s-master ansible]# systemctl status kube-apiserver
[root@k8s-master ansible]# systemctl status kube-controller-manager
[root@k8s-master ansible]# systemctl status kube-scheduler
# Check the cluster component status
[root@k8s-master ansible]# kubectl get componentstatus
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}
etcd-1               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}
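Because the inventory above runs haproxy+keepalived on the lb nodes and exposes the apiserver through the VIP on port 8443 (KUBE_APISERVER), it is also worth confirming on the active lb node (192.168.56.11, LB_ROLE=master) that the VIP and the listener are in place; a sketch:

ip addr | grep 192.168.56.110    # the VIP should be bound on the active lb node
ss -tnlp | grep 8443             # haproxy should be listening on the VIP service port
systemctl status haproxy keepalived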

3.5 Install the worker nodes

[root@k8s-master ansible]# ansible-playbook 05.kube-node.yml
[root@k8s-master ansible]# systemctl status kubelet
[root@k8s-master ansible]# systemctl status kube-proxy
[root@k8s-master ansible]# kubectl get nodes
NAME            STATUS                     ROLES    AGE     VERSION
192.168.56.11   Ready,SchedulingDisabled   master   6m56s   v1.13.5
192.168.56.12   Ready,SchedulingDisabled   master   6m57s   v1.13.5
192.168.56.13   Ready                      node     40s     v1.13.5
192.168.56.14   Ready                      node     40s     v1.13.5
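If a node shows up as NotReady here, the usual first stop is the kubelet and kube-proxy logs on that node, plus the node's conditions and events as seen from the deploy node; a sketch (192.168.56.13 is just one node name taken from the output above):

journalctl -u kubelet -n 50 --no-pager      # run on the affected node
journalctl -u kube-proxy -n 50 --no-pager
kubectl describe node 192.168.56.13         # run on the deploy node: conditions, taints, recent events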

3.6 Deploy the cluster network

[root@k8s-master ansible]# ansible-playbook 06.network.yml
# Check the flannel pods
[root@k8s-master ansible]# kubectl get pod -n kube-system
NAME                          READY   STATUS    RESTARTS   AGE
kube-flannel-ds-amd64-856rg   1/1     Running   0          115s
kube-flannel-ds-amd64-j4542   1/1     Running   0          115s
kube-flannel-ds-amd64-q9cmh   1/1     Running   0          115s
kube-flannel-ds-amd64-rhg66   1/1     Running   0          115s
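Running flannel pods do not by themselves prove that cross-node pod traffic works, so a simple connectivity test can be useful. A sketch, using a hypothetical net-test deployment (two busybox replicas that the scheduler will usually spread across the two worker nodes):

kubectl run net-test --image=busybox --replicas=2 -- sleep 3600
kubectl get pod -o wide                                        # note each pod's IP and the node it runs on
kubectl exec <net-test-pod-name> -- ping -c 3 <other-pod-ip>   # replace the placeholders with real values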

3.7 Deploy cluster add-ons (DNS, dashboard)

[root@k8s-master ansible]# ansible-playbook 07.cluster-addon.yml
# Check the services
[root@k8s-master ansible]# kubectl get svc -n kube-system
NAME                   TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                  AGE
heapster               ClusterIP   10.68.29.48    <none>        80/TCP                   64s
kube-dns               ClusterIP   10.68.0.2      <none>        53/UDP,53/TCP,9153/TCP   71s
kubernetes-dashboard   NodePort    10.68.117.7    <none>        443:24190/TCP            64s
metrics-server         ClusterIP   10.68.107.56   <none>        443/TCP                  69s
# Check the cluster info
[root@k8s-master ansible]# kubectl cluster-info
Kubernetes master is running at https://192.168.56.110:8443
CoreDNS is running at https://192.168.56.110:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
kubernetes-dashboard is running at https://192.168.56.110:8443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
# Check node resource usage
[root@k8s-master ansible]# kubectl top node
NAME            CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
192.168.56.11   523m         13%    2345Mi          76%
192.168.56.12   582m         15%    1355Mi          44%
192.168.56.13   182m         10%    791Mi           70%
192.168.56.14   205m         11%    804Mi           71%
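The dashboard above is exposed as a NodePort service, and logging in normally requires a bearer token from a service account with sufficient RBAC permissions. The exact service accounts created by the add-on playbook vary, so the grep pattern below is only a guess; a generic sketch for locating a token:

kubectl -n kube-system get secret | grep admin        # look for an admin-like service-account token (name varies)
kubectl -n kube-system describe secret <secret-name>  # the "token:" field is the bearer token for dashboard login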

To install the entire K8S cluster in one step with ansible instead, run:

ansible-playbook 90.setup.yml

3.8 Test DNS resolution

[root@k8s-master ansible]# kubectl run nginx --image=nginx --expose --port=80
[root@k8s-master ansible]# kubectl run busybox --rm -it --image=busybox /bin/sh
/ # nslookup nginx.default.svc.cluster.local
Server:     10.68.0.2
Address:    10.68.0.2:53

Name:   nginx.default.svc.cluster.local
Address: 10.68.149.79
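Beyond name resolution, you can also check that the service actually routes traffic, still from inside the busybox pod (busybox ships a minimal wget); a sketch:

/ # wget -q -O - http://nginx.default.svc.cluster.local    # should print the nginx welcome page HTML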

Reposted from: https://www.cnblogs.com/linuxk/p/10762832.html
