Redis issue
MISCONF Redis is configured to save RDB snapshots, but is currently not able to persist on disk. Commands that may modify the data set are disabled. Please check Redis logs for details about the error.
In short: Redis is configured to save RDB snapshots but currently cannot persist them to disk, so commands that modify the data set are refused. Check the Redis logs for details.
Cause
The background RDB snapshot was forcibly shut down, so Redis could not persist to disk and started rejecting writes.
Workaround
On the server,
set stop-writes-on-bgsave-error to no:
127.0.0.1:6379> config set stop-writes-on-bgsave-error no
After doing this I refreshed the page again and it actually worked, so I went to bed.
Two days later, though, the site went down again with exactly the same error, so I went searching once more and found the real explanation in this article (https://www.cnblogs.com/qq78292959/p/3994349.html).
The article points out that setting stop-writes-on-bgsave-error to no merely makes Redis ignore the error so that clients can keep writing; the background save to disk still fails.
Looking at the Redis log, there is a warning:
"WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect."
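A minimal sketch (run as root) of applying the fix the log suggests, both persistently and immediately without a reboot:
echo 'vm.overcommit_memory = 1' >> /etc/sysctl.conf   # keeps the setting across reboots
sysctl vm.overcommit_memory=1                          # applies it right away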
————————————————————————
kubectl logs fails with: failed to create fsnotify watcher: too many open files
This happens because the default per-user limit on inotify instances, fs.inotify.max_user_instances=128, is too low. Raise it:
sysctl fs.inotify.max_user_instances=8192
sysctl fs.inotify.max_user_watches=40960
or via /proc (check the current values first):
cat /proc/sys/fs/inotify/max_user_instances
128
cat /proc/sys/fs/inotify/max_user_watches
8192
echo 20480 > /proc/sys/fs/inotify/max_user_instances
echo 40960 > /proc/sys/fs/inotify/max_user_watches
To make the change permanent, add the values to /etc/sysctl.conf:
vim /etc/sysctl.conf
fs.inotify.max_user_instances=20480
fs.inotify.max_user_watches=40960
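The sysctl.conf entries only take effect after a reload; run as root:
sysctl -p   # reload /etc/sysctl.conf so the new inotify limits apply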
————————————————————————
rm /etc/kubernetes/pki/apiserver.*
kubeadm alpha phase certs all --apiserver-advertise-address=0.0.0.0 --apiserver-cert-extra-sans=10.161.233.80,114.215.201.87
docker rm -f `docker ps -q -f 'name=k8s_kube-apiserver*'`
systemctl restart kubelet
It would also be better to add the DNS name to --apiserver-cert-extra-sans to avoid issues like this next time.
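A quick check (my addition, not in the original notes) to confirm the regenerated certificate really contains the extra SANs:
openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text | grep -A1 'Subject Alternative Name'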
————————————————————————
Use kubectl drain to gracefully terminate all pods on the node while marking the node as unschedulable:
kubectl drain $NODENAME
This keeps new pods from landing on the node while you are trying to get them off.
For pods with a replica set, the pod will be replaced by a new pod which will be scheduled to a new node. Additionally, if the pod is part of a service, then clients will automatically be redirected to the new pod.
For pods with no replica set, you need to bring up a new copy of the pod, and assuming it is not part of a service, redirect clients to it.
Perform maintenance work on the node.
Make the node schedulable again:
kubectl uncordon $NODENAME
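In practice drain usually needs a couple of extra flags; a sketch, assuming DaemonSet pods can be left in place and emptyDir data may be discarded:
kubectl drain $NODENAME --ignore-daemonsets --delete-local-data --force
# --ignore-daemonsets: proceed even though DaemonSet pods cannot be evicted
# --delete-local-data: allow evicting pods that use emptyDir (their local data is lost)
# --force: also evict pods not managed by a controller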
————————————————————————
mgr@kali:~$ ku4 -n kube-system get cm coredns -o yaml
apiVersion: v1
data:
Corefile: |
.:53 {
errors
health
kubernetes bigtree.dev in-addr.arpa ip6.arpa {
pods insecure
upstream
fallthrough in-addr.arpa ip6.arpa
}
hosts {
111.30.77.185 www.zhongdengwang.org.cn
111.30.77.184 ws.zhongdengwang.org.cn
111.30.77.181 wsquery.zhongdengwang.org.cn
172.16.104.45 disconf-zk-0.disconf-zk.infra.svc.bigtree.internal
fallthrough
}
prometheus :9153
proxy bigtree.com 172.16.104.200
proxy . /etc/resolv.conf
cache 30
loop
reload
loadbalance
}
kind: ConfigMap
metadata:
creationTimestamp: "2019-07-19T03:07:39Z"
name: coredns
namespace: kube-system
resourceVersion: "41267276"
selfLink: /api/v1/namespaces/kube-system/configmaps/coredns
uid: 5df6957e-a9d2-11e9-a9db-525400200098
————————————————————————
k8s nginx-ingress upload file size limit
In the k8s cluster, uploading an image or file to the file server
fails as soon as it is larger than 1 MB, with the error:
413 Request Entity Too Large
The old annotation was:
# ingress.kubernetes.io/proxy-body-size: "50m"
The current one is:
nginx.ingress.kubernetes.io/proxy-body-size: "50m"
Note that the latest ingress deployment requires a ConfigMap and RBAC.
1. Add to the web server's nginx.conf:
client_body_buffer_size 50m;
client_max_body_size 100m;
2. In each service's ingress, add the annotation:
annotations:
  nginx.ingress.kubernetes.io/proxy-body-size: "50m"
mgr@kali:~/bin$ ku4 -n ecdf-test get ing scfp -o yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
nginx.ingress.kubernetes.io/proxy-body-size: 50m
creationTimestamp: "2019-08-26T06:48:35Z"
generation: 1
name: scfp
namespace: ecdf-test
resourceVersion: "35108388"
selfLink: /apis/extensions/v1beta1/namespaces/ecdf-test/ingresses/scfp
uid: 86f1f8b1-c7cd-11e9-83f5-525400200098
spec:
rules:
- host: platform.scfptest.bigtree.com
http:
paths:
- backend:
serviceName: scfp-svc
servicePort: 80
path: /
status:
loadBalancer: {}
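The same annotation can also be set from the command line instead of editing the YAML; a sketch against the ingress above:
kubectl -n ecdf-test annotate ingress scfp nginx.ingress.kubernetes.io/proxy-body-size=50m --overwrite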
————————————————————————
Certificate renewal procedure for the k2 cluster:
1. Back up
[root@docker10 ~]# tar czvf kubernetes_bak.tar.gz /etc/kubernetes
2. Move the expired certificates aside
mv apiserver.crt apiserver.crt.bak
mv apiserver.key apiserver.key.bak
mv apiserver-kubelet-client.crt apiserver-kubelet-client.crt.bak
mv apiserver-kubelet-client.key apiserver-kubelet-client.key.bak
mv front-proxy-client.crt front-proxy-client.crt.bak
mv front-proxy-client.key front-proxy-client.key.bak
3. Set an HTTP proxy (kubeadm needs to reach dl.k8s.io)
[root@docker10 ~]# kubeadm alpha phase certs apiserver --apiserver-advertise-address 127.0.0.1
unable to get URL "https://dl.k8s.io/release/stable-1.9.txt": Get https://dl.k8s.io/release/stable-1.9.txt: dial tcp 35.201.71.162:443: i/o timeout
[root@docker10 ~]# export https_proxy=http://proxy.bigtree.com:8118
[root@docker10 ~]# kubeadm alpha phase certs apiserver --apiserver-advertise-address 127.0.0.1
[certificates] Using the existing apiserver certificate and key.
4. Issue new certificates
[root@docker10 ~]# kubeadm alpha phase certs apiserver --apiserver-advertise-address=0.0.0.0 --apiserver-cert-extra-sans=172.16.104.41,10.100.1.1,bigtree.internal
[root@docker10 ~]# kubeadm alpha phase certs apiserver --apiserver-advertise-address 127.0.0.1
[certificates] Using the existing apiserver certificate and key.
[root@docker10 ~]# kubeadm alpha phase certs apiserver-kubelet-client
[certificates] Using the existing apiserver-kubelet-client certificate and key.
[root@docker10 ~]# kubeadm alpha phase certs front-proxy-client
[certificates] Using the existing front-proxy-client certificate and key.
5. Regenerate the kubeconfig files
[root@docker10 ~]# kubeadm alpha phase kubeconfig all --apiserver-advertise-address 127.0.0.1
[kubeconfig] Using existing up-to-date KubeConfig file: "admin.conf"
[kubeconfig] Using existing up-to-date KubeConfig file: "kubelet.conf"
[kubeconfig] Using existing up-to-date KubeConfig file: "controller-manager.conf"
[kubeconfig] Using existing up-to-date KubeConfig file: "scheduler.conf"
6. Restart the master components
[root@docker10 ~]# for i in `docker ps |grep -e kube-apiserver -e kube-controller -e kube-scheduler |awk '{print $1}'`; do docker kill -s HUP $i; done
[root@docker10 ~]# systemctl restart kubelet
————————————————————————
Ways to update images in k8s
1. Update the image with kubectl patch
Dual-image (patches the containers array):
kubectl patch deployment gb-scf-web --patch '{"spec": {"template":{"spec":{"containers":[{"name":"spring-jar","image":"registry.bigtree.com:5000/bigtree/test/gb-scf-web:ef1de8f57ee9776dd55425d0ea74715bb5d44a7a"}]}}}}'
Single-image (patches the initContainers array):
kubectl patch deployment gb-scf-web --patch '{"spec": {"template":{"spec":{"initContainers":[{"name":"spring-jar","image":"registry.bigtree.com:5000/bigtree/test/gb-scf-web:ef1de8f57ee9776dd55425d0ea74715bb5d44a7a"}]}}}}'
2. Update a ReplicationController image with kubectl set image
kubectl -n $namespace set image rc scf-batch scf-batch=registry:5000/bigtree/saas/dubbo/service/scf-batch:$BUILD_TIME-$GIT_VERSION
for i in `kubectl -n $namespace get po |grep scf-batch |awk '{print $1}'`; do kubectl -n $namespace delete po $i; done
3. Update a Deployment image with kubectl set image
kubectl -n $namespace set image deployment/scf-nc scf-nc=registry:5000/bigtree/saas/dubbo/service/scf-nc:$BUILD_TIME-$GIT_VERSION
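For the deployment case, the rollout can be watched and rolled back if the new image is bad; a sketch (not in the original notes):
kubectl -n $namespace rollout status deployment/scf-nc   # wait for the new pods to become ready
kubectl -n $namespace rollout undo deployment/scf-nc     # revert to the previous image if needed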
————————————————————————
Cluster certificate renewal
0. Check the certificate expiry date
openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text |grep ' Not '
1. Back up the expired certificates and config files
tar czvf kubernetes_bak.tar.gz /etc/kubernetes
2. Generate new certificates
kubeadm alpha phase certs apiserver --apiserver-advertise-address 127.0.0.1
kubeadm alpha phase certs apiserver-kubelet-client
kubeadm alpha phase certs front-proxy-client
3. Regenerate the kubeconfig files
kubeadm alpha phase kubeconfig all --apiserver-advertise-address 127.0.0.1
4. Restart the master components
for i in `docker ps |grep -e kube-apiserver -e kube-controller -e kube-scheduler |awk '{print $1}'`; do docker kill -s HUP $i; done
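Once the components have picked up the new certificate, re-running the check from step 0 confirms the new expiry; for example:
openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -enddate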
————————————————————————
Adding a user to a Kubernetes cluster
Users in Kubernetes
K8s has two kinds of accounts: service accounts (ServiceAccount) and ordinary users (User).
ServiceAccounts are managed by K8s itself, while Users are managed externally; K8s does not store a user list, so adding/editing/deleting users happens outside the cluster and does not involve the K8s API. Even though K8s does not manage users, it can still identify the user behind an API request: every API request must carry an identity (a User or a ServiceAccount), which means a User can be granted request permissions inside the cluster.
What is the difference?
The main difference was already stated: a ServiceAccount is a K8s resource, while a User is independent of K8s. From that it follows:
A User is normally used by a person, while a ServiceAccount is used by a service/resource/program.
A User lives outside K8s and is therefore global: it is recognized in every namespace and must be unique across the cluster.
A ServiceAccount, being a K8s resource, belongs to a namespace; ServiceAccounts with the same name in different namespaces are different resources.
K8s does not manage Users, so creating/editing/removing them relies on an external mechanism, and all K8s knows about a User is the name. ServiceAccounts are managed by K8s, and operations on them go through K8s.
"Adding a user" here means an ordinary user, i.e. someone outside the cluster who uses k8s.
Strictly speaking we are not adding a user at all; the user already exists. All we do is make K8s recognize this user and control the user's permissions inside the cluster.
User authentication
Although K8s identifies a user only by name, a name alone is obviously not enough to call the K8s API, so the user's identity still has to be verified.
K8s supports the following authentication methods:
X509 client certificates
Client certificate authentication is enabled by passing --client-ca-file=xxx to the API server. The API server uses this CA file to validate the client certificate sent with a request; once validation succeeds, the CN attribute of the certificate's Subject becomes the username for that request.
Static token file
Bearer token authentication is enabled with --token-auth-file=SOMEFILE, where the file is a CSV containing token, username and user ID. A request passes bearer token authentication by sending a header such as Authorization: Bearer 31ada4fd-adec-460c-809a-9e56ceb75269.
Static password file
Basic auth is enabled with --basic-auth-file=SOMEFILE; similarly, the file is a CSV containing password, username and user ID. Requests set the Authorization header to Basic BASE64ENCODED(USER:PASSWORD).
Only client certificate authentication is covered here.
Generating a certificate for the user
Assume the user we are setting up is named tom.
First create a private key for the user:
openssl genrsa -out tom.key 2048
Then use the key to create a CSR (certificate signing request), putting the user's information in the subject (CN is the username, O is the group):
openssl req -new -key tom.key -out tom.csr -subj "/CN=tom/O=MGM"
The /O field may appear more than once, i.e. the user can belong to several groups.
Locate the CA files of the K8s cluster (API server). Their location depends on how the cluster was installed; usually they are under /etc/kubernetes/pki/ as two files: the CA certificate (ca.crt) and the CA private key (ca.key).
Use the cluster CA and the CSR created above to issue the user's certificate:
openssl x509 -req -in tom.csr -CA path/to/ca.crt -CAkey path/to/ca.key -CAcreateserial -out tom.crt -days 365
-CA and -CAkey point at the cluster CA files; -days sets the certificate's validity period, here 365 days.
Finally, keep the certificate (tom.crt) and private key (tom.key); these two files will be used to authenticate API requests.
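A quick sanity check (my addition, not in the original article) that the issued certificate carries the expected CN/O and validity:
openssl x509 -in tom.crt -noout -subject -enddate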
Granting the user role-based access control (RBAC)
Roles
RBAC has two kinds of roles: the ordinary Role and the ClusterRole. A ClusterRole is a special kind of Role; compared with a Role:
A Role belongs to a namespace, while a ClusterRole belongs to the whole cluster, including all namespaces.
A ClusterRole can grant cluster-scoped permissions, e.g. managing node resources, calling non-resource endpoints (such as "/healthz"), or accessing resources across all namespaces (with --all-namespaces).
Binding a role to the user
First create a role:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
namespace: a-1
name: admin
rules:
- apiGroups: [""]
resources: ["*"]
verbs: ["*"]
This creates an admin role in the a-1 namespace. The admin role is only an example; if all you want is to grant a user administrator rights over a namespace, there is no need to create a new role, because K8s already ships with a built-in ClusterRole named admin.
Bind the role to the user:
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: admin-binding
namespace: a-1
subjects:
- kind: User
name: tom
apiGroup: ""
roleRef:
kind: Role
name: admin
apiGroup: ""
As the YAML shows, a RoleBinding creates a relationship between a Role and a User: roleRef names the role being referenced, and subjects lists the recipients of the binding, which can be Users or the ServiceAccounts mentioned earlier; here it contains only the user tom.
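Assuming the two manifests above are saved as role.yaml and rolebinding.yaml (file names are mine), a sketch of applying them and verifying the grant, run as a cluster admin:
kubectl apply -f role.yaml
kubectl apply -f rolebinding.yaml
kubectl auth can-i list pods --namespace a-1 --as tom   # impersonates tom; should print "yes"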
Another way to add a namespace administrator
As mentioned above, K8s ships with a ClusterRole named admin, so we do not actually need to create our own admin Role; we can simply create a RoleBinding against the cluster's default admin ClusterRole:
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: admin-binding
namespace: a-1
subjects:
- kind: User
name: tom
apiGroup: ""
roleRef:
kind: ClusterRole
name: admin
apiGroup: ""
Although the binding references the admin ClusterRole, the granted permissions are limited to the namespace the RoleBinding admin-binding lives in, i.e. a-1.
To add an administrator for all namespaces, i.e. the whole cluster, use the cluster-admin role instead.
At this point we have:
given the user tom X509-certificate-based authentication,
created a role named admin in the a-1 namespace,
and bound the user tom to the admin role.
Configuring the user in kubectl
tom is now an administrator. To operate the cluster through kubectl as tom, tom's credentials have to be added to kubectl's configuration, i.e. ~/.kube/config.
This assumes the k8s cluster itself is already configured in that file.
Run:
kubectl config set-credentials tom --client-certificate=path/to/tom.crt --client-key=path/to/tom.key
to add tom's credentials to the kubectl configuration.
This creates a user entry named tom in the config.
kubectl config set-context tom@aliyun --cluster=aliyun --namespace=a-1 --user=tom
This adds a context: use the aliyun cluster, default to the a-1 namespace, and authenticate as the user tom.
Adding kubectl --context=tom@aliyun ... to a command makes kubectl use the tom@aliyun context just created.
Alternatively, kubectl config use-context tom@aliyun makes it the active context.
Tip: embedding the credentials in the kubectl config
A user added with kubectl config set-credentials references the certificate files by path by default, which looks like this in ~/.kube/config:
users:
- name: tom
user:
client-certificate: path/to/tom.crt
client-key: path/to/tom.key
If always carrying the two certificate files around is inconvenient, their contents can be embedded directly in the config file.
Base64-encode the contents of tom.crt and tom.key:
cat tom.crt | base64 --wrap=0
cat tom.key | base64 --wrap=0
and paste the encoded text into the config file:
users:
- name: ich
user:
client-certificate-data: ...
client-key-data: ...
After this the certificate and key files are no longer needed, although it is still a good idea to keep them somewhere safe.
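kubectl can also do the embedding itself; a sketch using the --embed-certs flag of set-credentials, with the same effect as the manual base64 steps above:
kubectl config set-credentials tom --client-certificate=path/to/tom.crt --client-key=path/to/tom.key --embed-certs=true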
References:
Authenticating - Kubernetes
https://docs.bitnami.com/kubernetes/how-to/configure-rbac-in-your-kubernetes-cluster/
Using RBAC Authorization
Kubectl Reference Docs
https://brancz.com/2017/10/16/kubernetes-auth-x509-client-certificates/
Published on 2018-08-29
————————————————————————
kubectl patch deployment image-deployment --patch '{"spec": {"template": {"spec": {"containers": [{"name": "nginx","image":"registry.cn-beijing.aliyuncs.com/mrvolleyball/nginx:v1"}]}}}}'
————————————————————————
Original config file contents:
# front-end server address
citic.url=http://10.0.32.32:6789
# login username
citic.userName=dashujinrong
# primary account number
citic.accountNo=8110701013901248009
——————————————————————
Defining hosts in YAML (hostAliases)
dnsPolicy: ClusterFirst
hostAliases:
- hostnames:
- www.zhongdengwang.org.cn
ip: 111.30.77.185
- hostnames:
- ws.zhongdengwang.org.cn
ip: 111.30.77.184
- hostnames:
- wsquery.zhongdengwang.org.cn
ip: 111.30.77.181
nodeSelector:
kubernetes.io/hostname: docker18
restartPolicy: Always
——————————————————————
4. Disable firewalld and SELinux
systemctl stop firewalld
systemctl disable firewalld
sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
5. Disable swap
swapoff -a
echo "vm.swappiness=0" >> /etc/sysctl.conf
sysctl -p
6. Adjust kernel parameters
yum install -y bridge-utils.x86_64
modprobe bridge
modprobe br_netfilter
echo "net.bridge.bridge-nf-call-iptables=1" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-ip6tables=1" >> /etc/sysctl.conf
sysctl -p
——————————————————————
## Ops notification group
jobname=$JOB_NAME
for i in $dingkeylist
do
curl "https://oapi.dingtalk.com/robot/send?access_token=$i" \
-H 'Content-Type: application/json' \
-d '
{"msgtype": "text",
"text": {
"content": "金票开发环境开始发布 '$jobname' ..."
}
}'
done
——————————————————————
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: NotIn
values:
- docker11
——————————————————————
sed -i -e '/volumeMounts\:/ r /root/.ssh/volumemount.txt' -e '/volumes\:/ r /root/.ssh/volume.txt' *.yaml
——————————————————————
Troubleshooting high CPU on a Docker host
# for i in `docker ps -q`; do echo processing $i ...; docker inspect $i |grep Pid; done
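A slightly tidier variant (my sketch) that prints the PID and container name in one pass, so a busy PID seen in top can be matched directly:
docker inspect -f '{{.State.Pid}} {{.Name}}' $(docker ps -q)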
——————————————————————
Reference: https://www.huweihuang.com/article/kubernetes/nodeselector-and-taint/
--- add taints to nodes
kubectl taint nodes docker18 kvm=true:NoSchedule
kubectl taint nodes docker19 kvm=true:NoSchedule
kubectl taint nodes docker20 kvm=true:NoSchedule
kubectl taint nodes docker22 kvm=true:NoSchedule
k8b taint nodes node5.veredholdings.cn kvm=false:NoSchedule
ku4 taint nodes v-node28 nodetype=bl:NoSchedule
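To undo a taint later, append a '-' to the key/effect (a usage note, not in the original list); for example:
kubectl taint nodes docker18 kvm:NoSchedule-   # removes the kvm NoSchedule taint from docker18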
--- yaml
spec:
tolerations:
- key: "kvm"
operator: "Equal"
value: "true"
effect: "NoSchedule"
containers:
- env:
- name: ENV
value: jobk8s
- name: APP
value: eps
- name: XMS
——————————————————————
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbi10b2tlbi03ajZyYyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjNiMGQzOWQ3LTAyYjMtMTFlOS04OWM1LTA4OTRlZjM3NTY1YSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTprdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbiJ9.MXElIX3CMra_LDmGk6XMA1ha_J29KnPik_3NqOhL8zp14IJBhyeBDK0kA11cdfC5AG2M3gF6v_TX0iAh2iYMUMQ6LCAaqZQpBGzgD1B-mbHljDjQPUbVfcDTM26jtYj1WP2ZPskIHOAOrP-JPSeewtoOBNhujuFWlsvYPxut3ChgaWiYPd5Osa034vgjVPu8XHL2Q3hfSBrVq6lw2BUh158svTw-7LdAAzlkpZRjfioGYw64t2f84fpesWXSWrn9Qp3mNgGjXQRTgHXU8S3kHuDNPwXUd8GKnZ4H9K5cb44i6IuxHR8KHhyP-3yLUPc8dBjVNSfjViDktohuQ4PLbg
——————————————————————
Public network:
115.182.11.68 NG VIP - 大树
115.182.11.69 NG (external LB-1) - 大树
115.182.11.70 NG (external LB-2) - 大树
Note: the netmask is /28 (255.255.255.240); the gateway is 115.182.11.65.
LB internal interfaces
IP
10.0.36.33 NG (external LB) - 大树
10.0.36.34 NG (external LB) - 大树
Gateway: 10.0.36.1
Pod subnet 10.6.0.0/16
Pod subnet
10.6.0.0/18 10.6.0.0 - 10.6.63.255 (16,382 usable addresses)
Service subnet
10.6.64.0/20 10.6.64.0 - 10.6.79.255 (4,094 usable addresses)
——————————————————————
Command to add a node to the production k8s cluster
sudo /mnt/nfs/v1.9.6/kubeadm join --token b99a00.a144ef80536d4344 --discovery-token-ca-cert-hash sha256:cdd4b7d5e1dc72b742aaf7b54549daf0a57e5e0554462cab2f3669c395071061 10.0.54.26:6443 --skip-preflight-checks
——————————————————————
Deploying to a different k8s cluster
wget http://nginx.bigtree.com/admin.conf_dev -O /bin/.kube/admin.conf
export KUBECONFIG=/bin/.kube/admin.conf
——————————————————————
DNS configuration
/ # cat /etc/coredns/Corefile
.:53 {
errors
log
health
kubernetes cluster.local 10.96.0.0/12 {
pods insecure
}
prometheus
proxy bigtree.com 172.16.51.39
proxy . /etc/resolv.conf
cache 300
}
mgr@kali:~$ ks2 describe cm coredns
Name: coredns
Namespace: kube-system
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","data":{"Corefile":".:53 {\n errors\n health\n kubernetes CLUSTER_DOMAIN REVERSE_CIDRS {\n pods insecure\n upstre...
Data
====
Corefile:
----
.:53 {
errors
health
kubernetes bigtree.internal 10.100.1.0/24 {
pods insecure
}
prometheus
proxy bigtree.com 172.16.51.39
proxy . /etc/resolv.conf
cache 300
}
——————————————————————
Pod network routes
172.17.58.0/24 gw 172.16.104.41
172.17.204.0/24 gw 172.16.104.42
172.17.224.0/24 gw 172.16.104.43
172.17.253.0/24 gw 172.16.104.44
172.17.163.0/24 gw 172.16.104.45
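These entries map directly to static routes on hosts outside the overlay; one example following the pattern above (the others are analogous):
ip route add 172.17.58.0/24 via 172.16.104.41   # reach pods on the node behind 172.16.104.41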
——————————————————————
Allowing the master node to run workloads
In a cluster initialized with kubeadm, pods are not scheduled onto the master node for safety reasons; in other words the master does not carry workloads.
Since this is a test environment, the following commands make the master node schedulable:
kubectl taint nodes docker10 node-role.kubernetes.io/master-
node "docker10" untainted
kubectl taint nodes --all node-role.kubernetes.io/master-
Output:
node "k8s" untainted
If the output is error: taint "node-role.kubernetes.io/master:" not found, the error can be ignored.
To forbid scheduling pods on the master again:
kubectl taint nodes k8s node-role.kubernetes.io/master=true:NoSchedule
Source: 张小凡vip, CSDN, https://blog.csdn.net/zzq900503/article/details/81710319
——————————————————————
k8s CoreDNS: configuring internal DNS forwarding
mgr@kali:~$ ks describe cm coredns
Name: coredns
Namespace: kube-system
Labels: <none>
Annotations: <none>
Data
====
Corefile:
----
.:53 {
errors
log
health
kubernetes cluster.local 10.96.0.0/12 {
pods insecure
}
prometheus
proxy bigtree.com 172.16.51.39
proxy . /etc/resolv.conf
cache 300
}
Events: <none>
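To change the forwarding rules, edit the ConfigMap and then recycle the CoreDNS pods so they reload the Corefile (a sketch; in kubeadm clusters the CoreDNS pods are normally labeled k8s-app=kube-dns):
kubectl -n kube-system edit cm coredns
kubectl -n kube-system delete pod -l k8s-app=kube-dns   # pods are recreated with the new Corefile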
————————————————————
Dev environment cluster initialization
1. Initialize the control plane
kubeadm init --feature-gates=CoreDNS=true --kubernetes-version v1.9.6 --pod-network-cidr 172.17.0.0/16 --service-cidr 10.100.1.0/24 --service-dns-domain bigtree.internal
2. Join nodes to the cluster
# default token (expired): kubeadm join --token 5a2344.7e6d621f0ebdc744 172.16.104.41:6443 --discovery-token-ca-cert-hash sha256:5b8e0b7df48c2c20e5ca3002a2b253ee5ca36e576557d69da232fdcf4442570a
kubeadm join --token 97a820.6fe41d8a7aefee09 --discovery-token-ca-cert-hash sha256:5b8e0b7df48c2c20e5ca3002a2b253ee5ca36e576557d69da232fdcf4442570a 172.16.104.41:6443 --skip-preflight-checks
$ kubeadm token create --print-join-command
kubeadm join 192.168.1.196:6443 --token 5q8yfk.kzf7tw2qrskufw40 --discovery-token-ca-cert-hash sha256:3967c70711c30a458b38df253cc5d60bf2c615cd5868d34348c05b8103429fbb
3. Change the kubelet cgroup driver from systemd to cgroupfs
vi /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
systemctl daemon-reload
4. Disable swap
swapoff -a
vi /etc/fstab
5. Install the RPMs
yum localinstall *.rpm
6. Initialize Docker
Change the storage path /var/lib/docker
Add the registry in /etc/docker/daemon.json (see the daemon.json sketch after this list)
7. Load the images
for i in `ls .`; do docker load < $i; done
8. Start the kubelet service
kubeadm join --token 97a820.6fe41d8a7aefee09 --discovery-token-ca-cert-hash sha256:5b8e0b7df48c2c20e5ca3002a2b253ee5ca36e576557d69da232fdcf4442570a 172.16.104.41:6443 --skip-preflight-checks
systemctl enable docker
systemctl enable kubelet
systemctl start kubelet
9. Add taints to nodes
kubectl taint nodes docker22 kvm=true:NoSchedule
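A minimal daemon.json sketch for step 6. The registry host is taken from elsewhere in these notes; treating it as an insecure (plain HTTP) registry and using /data/docker as the storage path are assumptions, adjust to the actual environment (older Docker releases use "graph" instead of "data-root"):
cat > /etc/docker/daemon.json <<'EOF'
{
  "insecure-registries": ["registry.bigtree.com:5000"],
  "data-root": "/data/docker"
}
EOF
systemctl restart docker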
————————————————————
Adding nodes to the cluster after the kubeadm-generated token has expired
You can now join any number of machines by running the following on each node
as root:
kubeadm join --token 19f284.da47998c9abb01d3 172.16.6.47:6443 --discovery-token-ca-cert-hash sha256:0fd95a9bc67a7bf0ef42da968a0d55d92e52898ec37c971bd77ee501d845b538
---------------------
By default a token is valid for 24 hours; once it has expired it can no longer be used. The fix:
1. Generate a new token
[root@docker10]# kubeadm token create --print-join-command
kubeadm join --token b59ecd.99b79ab7739fb405 172.16.104.41:6443 --discovery-token-ca-cert-hash sha256:5b8e0b7df48c2c20e5ca3002a2b253ee5ca36e576557d69da232fdcf4442570a
[root@docker10]# kubeadm token create
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --ttl 0)
aa78f6.8b4cafc8ed26c34f
[root@docker10]# kubeadm token list
TOKEN TTL EXPIRES USAGES DESCRIPTION EXTRA GROUPS
aa78f6.8b4cafc8ed26c34f 23h 2017-12-26T16:36:29+08:00 authentication,signing <none> system:bootstrappers:kubeadm:default-node-token
2. Get the sha256 hash of the CA certificate
[root@docker10]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
0fd95a9bc67a7bf0ef42da968a0d55d92e52898ec37c971bd77ee501d845b538
3. Join the node to the cluster
[root@docker10]# kubeadm join --token aa78f6.8b4cafc8ed26c34f --discovery-token-ca-cert-hash sha256:0fd95a9bc67a7bf0ef42da968a0d55d92e52898ec37c971bd77ee501d845b538 172.16.6.79:6443 --skip-preflight-checks
kubeadm join --token 97a820.6fe41d8a7aefee09 --discovery-token-ca-cert-hash sha256:5b8e0b7df48c2c20e5ca3002a2b253ee5ca36e576557d69da232fdcf4442570a 172.16.104.41:6443 --skip-preflight-checks
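If regenerating tokens keeps being a nuisance, the WARNING above already hints at the alternative; a sketch combining it with the join-command output:
kubeadm token create --ttl 0 --print-join-command   # --ttl 0 creates a token that never expires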
————————————————————
myron