Installing MetalLB and Ingress-nginx on Bare-Metal Kubernetes for Load Balancing

This article continues from the previous post, "Deploying a Kubernetes Cluster on Bare Metal", and further extends the cluster's functionality.

We will use MetalLB as the LoadBalancer implementation in the Kubernetes cluster.

Ingress provides a unified entry point for external access to the cluster, avoiding exposing cluster ports directly. Functionally it is similar to Nginx: it routes requests to different Services based on domain name and path, achieving load balancing through reverse proxying.

This article uses the Ingress controller recommended by the Kubernetes project, ingress-nginx, to implement load balancing.

How does this differ from a plain LoadBalancer Service? A LoadBalancer Service exposes ports directly (less secure), cannot route traffic to different Services by domain name or path, and cannot be configured for HTTPS.
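As a sketch of what domain-based routing looks like, here is a minimal Ingress with two host rules. The hosts `app1.example.com` / `app2.example.com` and the Service names are hypothetical, not part of this cluster:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: host-routing-demo        # hypothetical example
spec:
  ingressClassName: nginx
  rules:
    # requests for app1.example.com go to app1-svc ...
    - host: app1.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app1-svc   # hypothetical Service
                port:
                  number: 80
    # ... while requests for app2.example.com go to app2-svc
    - host: app2.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app2-svc   # hypothetical Service
                port:
                  number: 80
```

A plain LoadBalancer Service has no equivalent of the `host` field: it forwards everything arriving on its ports to one Service.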

To use Ingress, you need a load balancer (MetalLB works for local testing) plus an Ingress Controller.

Official site: https://metallb.universe.tf/installation/

Installing Ingress-nginx

Official docs: https://kubernetes.github.io/ingress-nginx/deploy
GitHub: https://github.com/kubernetes/ingress-nginx

For the specifics of installing ingress-nginx on bare-metal Kubernetes, see: https://kubernetes.github.io/ingress-nginx/deploy/#bare-metal-clusters

```shell
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.1/deploy/static/provider/baremetal/deploy.yaml
```

However, the manifest above pulls images from Google's Kubernetes registry, which is unreachable without a proxy; the images can be pulled from Docker Hub mirrors instead.
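One workaround is to download the manifest, swap the image references for mirrors, and then apply it. This is only a sketch: `your-dockerhub-mirror` is a placeholder, not a real repository, and the substitution is demonstrated below on a stub file standing in for the real deploy.yaml:

```shell
# Stub standing in for the real deploy.yaml (the actual manifest references
# the controller image from Google's registry in the same way).
cat > deploy.yaml <<'EOF'
        image: k8s.gcr.io/ingress-nginx/controller:v1.1.1
EOF

# Point the image at a Docker Hub mirror. MIRROR_REPO is a placeholder --
# substitute a mirror you trust.
MIRROR_REPO="your-dockerhub-mirror"
sed -i "s#k8s.gcr.io/ingress-nginx/controller#${MIRROR_REPO}/ingress-nginx-controller#" deploy.yaml

# Confirm the image line now points at the mirror.
grep 'image:' deploy.yaml
```

After the swap, `kubectl apply -f deploy.yaml` proceeds as usual.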

After that, you can run kubectl apply -f deploy.yaml.

To uninstall: kubectl delete -f deploy.yaml.

```shell
> kubectl get all -n ingress-nginx

> kubectl get service -n ingress-nginx
NAME                                 TYPE           CLUSTER-IP       EXTERNAL-IP    PORT(S)                      AGE
ingress-nginx-controller             LoadBalancer   10.102.19.97     192.168.2.10   80:31396/TCP,443:30578/TCP   39m
ingress-nginx-controller-admission   ClusterIP      10.106.187.197   <none>         443/TCP                      39m

> kubectl get pods -n ingress-nginx
NAME                                       READY   STATUS      RESTARTS   AGE
ingress-nginx-admission-create-67q5b       0/1     Completed   0          40m
ingress-nginx-admission-patch-mnxk5        0/1     Completed   0          40m
ingress-nginx-controller-b858dc8dd-qsr2g   1/1     Running     0          40m
```

Here 192.168.2.10 is an address that MetalLB allocated from its IP address pool. If MetalLB has not been installed beforehand, the EXTERNAL-IP stays in the pending state, like this:

```shell
NAME                                         TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
service/ingress-nginx-controller             LoadBalancer   10.107.31.136   <pending>     80:30794/TCP,443:31605/TCP   8s
service/ingress-nginx-controller-admission   ClusterIP      10.97.31.152    <none>        443/TCP                      8s
```

As you can see, the ingress-nginx manifest automatically created a LoadBalancer-type Service for us.

```shell
> kubectl get ingress -o wide
NAME       CLASS   HOSTS   ADDRESS        PORTS   AGE
test-k8s   nginx   *       192.168.2.10   80      165m
```

Installing the MetalLB Load Balancer

If kube-proxy is running in ipvs mode, recent Kubernetes versions (v1.14.2 and later) require strict ARP mode to be enabled:

```shell
kubectl edit configmap -n kube-system kube-proxy
```

```yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  strictARP: true
```
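A non-interactive alternative to `kubectl edit` is to pipe the ConfigMap through sed and re-apply it (this sed-pipeline pattern appears in the MetalLB installation docs). The sketch below runs the substitution against a stub file; on a real cluster, replace the `cat` with `kubectl get configmap kube-proxy -n kube-system -o yaml` and pipe the result into `kubectl apply -f - -n kube-system`:

```shell
# Stub standing in for the kube-proxy ConfigMap contents.
cat > kube-proxy-config.yaml <<'EOF'
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  strictARP: false
EOF

# Flip strictARP from false to true. On a real cluster the sed output is
# piped straight into `kubectl apply`; here it is written to a file so the
# result can be inspected.
sed -e 's/strictARP: false/strictARP: true/' kube-proxy-config.yaml > kube-proxy-patched.yaml
cat kube-proxy-patched.yaml
```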

Install MetalLB:

```shell
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/metallb.yaml
```

To remove MetalLB:

```shell
kubectl delete -f https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/namespace.yaml
kubectl delete -f https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/metallb.yaml
```

metallb.ip.yaml

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.2.10-192.168.2.20
```

```shell
kubectl apply -f metallb.ip.yaml
```

The pool above allocates addresses from 192.168.2.10 to 192.168.2.20, on the same subnet as the cluster nodes.
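Since layer2 mode announces the virtual IPs on the nodes' own network, a quick sanity check is that the pool and the node addresses share a network prefix. A small bash sketch (the node address 192.168.2.104 is an assumption for illustration, and a /24 subnet is assumed):

```shell
# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip_to_int() {
  local IFS=.
  read -r a b c d <<< "$1"
  echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

node_ip=192.168.2.104    # a cluster node address (assumption)
pool_start=192.168.2.10  # first address in the MetalLB pool

# Compare the top 24 bits (the /24 network part) of both addresses.
if (( ($(ip_to_int "$node_ip") >> 8) == ($(ip_to_int "$pool_start") >> 8) )); then
  echo "pool is on the node subnet"
else
  echo "pool is NOT on the node subnet"
fi
```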

Check the status:

```shell
> kubectl get pods -n metallb-system
NAME                         READY   STATUS    RESTARTS   AGE
controller-57fd9c5bb-rgmb9   1/1     Running   0          67m
speaker-5krlb                1/1     Running   0          67m
speaker-86vzz                1/1     Running   0          67m
speaker-vdm88                1/1     Running   0          67m
```

Local Testing

test-k8s-deploy-service.yml

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-k8s
spec:
  replicas: 5 # 5 replicas
  selector:
    matchLabels:
      app: test-k8s
  template:
    metadata:
      labels:
        app: test-k8s
    spec:
      containers:
        - name: test-k8s
          image: registry.cn-hangzhou.aliyuncs.com/josexy/test-k8s:1.0.0
          ports:
            - containerPort: 10086
          resources:
            limits:
              memory: "128Mi"
              cpu: "500m"
            requests:
              memory: "128Mi"
              cpu: "500m"

---
apiVersion: v1
kind: Service
metadata:
  name: test-k8s-svc
spec:
  selector:
    app: test-k8s
  ports:
    - port: 8080        # port exposed by the Service
      targetPort: 10086 # port inside the container
```

ingress.yaml

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: / # rewrite
spec:
  rules:
    - http:
        paths:
          # rewrite /k8s to /
          - path: /k8s
            pathType: Prefix
            backend:
              service:
                # forward requests to the test-k8s-svc Service, which then
                # distributes them across its pods
                name: test-k8s-svc
                port:
                  number: 8080
  ingressClassName: nginx
```

```shell
kubectl apply -f test-k8s-deploy-service.yml
kubectl apply -f ingress.yaml
```

```shell
> kubectl get services -o wide
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE     SELECTOR
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP   5d21h   <none>
test-k8s     ClusterIP   10.108.251.159   <none>        80/TCP    170m    app=test-k8s
```

Now test from the host machine: curl 192.168.2.10

On my MacBook, arp -a shows:

```shell
arp -a
? (10.37.129.255) at ff:ff:ff:ff:ff:ff on bridge101 ifscope [bridge]
? (172.18.0.203) at 0:1c:42:4f:fe:16 on bridge100 ifscope [bridge]
pandorabox.lan (192.168.1.1) at 64:9:80:5e:7b:2a on en5 ifscope [ethernet]
? (192.168.1.255) at ff:ff:ff:ff:ff:ff on en5 ifscope [ethernet]
? (192.168.2.104) at 0:1c:42:b0:f6:f8 on bridge100 ifscope [bridge]
? (192.168.2.10) at 0:1c:42:b0:f6:f8 on bridge100 ifscope [bridge]
? (192.168.2.255) at ff:ff:ff:ff:ff:ff on bridge100 ifscope [bridge]
? (224.0.0.251) at 1:0:5e:0:0:fb on en5 ifscope permanent [ethernet]
```

Note the entry ? (192.168.2.10) at 0:1c:42:b0:f6:f8 on bridge100 ifscope [bridge] — this is the virtual IP. It shares a MAC address with node 192.168.2.104, because in layer2 mode one node answers ARP requests for the VIP on its own interface.

To see what changes Kubernetes made to the nginx configuration, exec into the ingress-nginx-controller pod and inspect it:

```shell
> kubectl get pods -n ingress-nginx
NAME                                       READY   STATUS      RESTARTS   AGE
ingress-nginx-admission-create-gzr7h       0/1     Completed   0          2d10h
ingress-nginx-admission-patch-fq22n        0/1     Completed   1          2d10h
ingress-nginx-controller-b858dc8dd-4llsc   1/1     Running     0          2d10h
```

Exec into the pod ingress-nginx-controller-b858dc8dd-4llsc:

```shell
kubectl exec -it ingress-nginx-controller-b858dc8dd-4llsc -n ingress-nginx -- bash
bash-5.1$ cat nginx.conf
```

Then locate location ~* "^/k8s"; a bit further down you can see proxy_pass http://upstream_balancer, which is the key part: requests are reverse-proxied to the upstream load balancer upstream_balancer. Looking at the upstream_balancer block itself, however, we find:

```nginx
249    upstream upstream_balancer {
250        ### Attention!!!
251        #
252        # We no longer create "upstream" section for every backend.
253        # Backends are handled dynamically using Lua. If you would like to debug
254        # and see what backends ingress-nginx has in its memory you can
255        # install our kubectl plugin https://kubernetes.github.io/ingress-nginx/kubectl-plugin.
256        # Once you have the plugin you can use "kubectl ingress-nginx backends" command to
257        # inspect current backends.
258        #
259        ###
260
261        server 0.0.0.1; # placeholder
262
263        balancer_by_lua_block {
264            balancer.balance()
265        }
266
267        keepalive 320;
268
269        keepalive_timeout 60s;
270        keepalive_requests 10000;
271
272    }
```

In other words, debugging and inspecting the upstream backend nodes requires installing the ingress-nginx kubectl plugin; I won't dig further into that here 😂. If you're interested, see the kubectl-plugin documentation linked in the config comment above.


Unless otherwise stated, all articles on this blog are licensed under CC BY-SA 4.0. Please credit the source when reposting!