Hi MV!
I'm going crazy trying to understand and configure access to a service (PostgreSQL) through an LB in Kubernetes. The architecture is this one:
https://github.com/garutilorenzo/k3s-oci-cluster
The configuration applied right now:
HelmChartConfig
This also includes the TLS part for other services (grafana, for example).
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: traefik
  namespace: kube-system
spec:
  valuesContent: |
    ports:
      postgresql:
        expose: true
        port: 5432
        exposedPort: 5432
        protocol: TCP
    additionalArguments:
      - "--log.level=DEBUG"
      - "[email protected]"
      - "--certificatesresolvers.le.acme.storage=/data/acme.json"
      - "--certificatesresolvers.le.acme.tlschallenge=true"
      - "--certificatesresolvers.le.acme.caServer=https://acme-v02.api.letsencrypt.org/directory"
      - "--entryPoints.postgresql.address=:5432/tcp"
Postgres.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-postgresql
  namespace: postgres
spec:
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-prod
  namespace: postgres
  labels:
    app: postgres-prod
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres-prod
  template:
    metadata:
      labels:
        app: postgres-prod
    spec:
      restartPolicy: Always
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: kubernetes.io/hostname
          whenUnsatisfiable: DoNotSchedule
          labelSelector:
            matchLabels:
              app: postgres-prod
      containers:
        - name: postgres
          image: postgres:latest
          ports:
            - containerPort: 5432
              protocol: TCP
          envFrom:
            - configMapRef:
                name: postgres-config
          volumeMounts:
            - name: postgresql-volume
              mountPath: /var/lib/postgresql/data
            - name: postgresql-config-volume
              mountPath: /etc/postgresql/postgresql.conf
              subPath: postgresql.conf
            - name: postgresql-config-volume
              mountPath: /etc/postgresql/pg_hba.conf
              subPath: pg_hba.conf
            - name: postgresql-config-volume
              mountPath: /docker-entrypoint-initdb.d/extra.sh
              subPath: extra.sh
          # args:
          #   - "-c"
          #   - "config_file=/etc/postgresql/postgresql.conf"
          #   - "-c"
          #   - "hba_file=/etc/postgresql/pg_hba.conf"
      volumes:
        - name: postgresql-volume
          persistentVolumeClaim:
            claimName: pvc-postgresql
        - name: postgresql-config-volume
          configMap:
            name: postgres-config
---
apiVersion: v1
kind: Service
metadata:
  name: postgres-service-37846382
  namespace: postgres
spec:
  ports:
    - name: postgres-port
      port: 5432
      targetPort: 5432
  selector:
    app: postgres-prod
  type: ClusterIP
Ingress
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteTCP
metadata:
  name: postgres-ingress
  namespace: postgres
spec:
  entryPoints:
    - postgresql
  routes:
    - match: HostSNI(`*`)
      services:
        - name: postgres-service-37846382
          port: 5432
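One way to isolate where it breaks (a sketch; `postgres-lb-test` is just a throwaway name): expose postgres with its own LoadBalancer Service, bypassing traefik entirely. If this port is reachable from outside but the traefik route is not, the problem sits between the cloud LB and the traefik entrypoint rather than in the IngressRouteTCP itself:

```yaml
# Throwaway Service bypassing traefik: the LB controller should
# publish 5432 straight to the postgres pod. Delete after testing.
apiVersion: v1
kind: Service
metadata:
  name: postgres-lb-test   # hypothetical name
  namespace: postgres
spec:
  type: LoadBalancer
  selector:
    app: postgres-prod
  ports:
    - name: postgres-port
      port: 5432
      targetPort: 5432
      protocol: TCP
```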
Traefik events when applying the ingress:
msg="Creating TCP server 0 at 10.42.1.53:5432" serviceName=postgres-postgres-ingress-673acf455cb2dab0b43a routerName=postgres-postgres-ingress-673acf455cb2dab0b43a@kubernetescrd entryPointName=postgresql serverName=0
msg="Adding route * on TCP" entryPointName=postgresql routerName=postgres-postgres-ingress-673acf455cb2dab0b43a@kubernetescrd
Problem:
- I can't access the DB through the LB. No events in traefik.
- I CAN access the DB through a node. Traefik event:
Handling connection from 10.42.3.0:15535
The contradictory part:
- I CAN access the grafana dashboard through the LB (port 80).
Oracle Cloud:
- All ports are open.
Any ideas? Basically, traffic doesn't reach the service (obviously) through the LB, yet grafana works without any problem...
Grafana's ingress isn't very different; it's just a 'web' route:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: grafana-dashboard
  namespace: monitoring
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`xxxxxxx`)
      kind: Rule
      services:
        - name: kube-prometheus-stack-1662817876-grafana
          port: 80
    - match: Host(`xxxxxxxxxxxx`)
      kind: Rule
      services:
        - name: kube-prometheus-stack-1662817876-grafana
          port: 80
  tls:
    certResolver: le
Grafana's service was created by the kube-prometheus-stack helm chart, and it has the following config:
apiVersion: v1
kind: Service
metadata:
  name: kube-prometheus-stack-1662817876-grafana
  namespace: monitoring
  uid: 1dc5e33e-6d06-4899-88ad-c9a2b2797d21
  resourceVersion: '10239'
  creationTimestamp: '2022-09-10T13:51:42Z'
  labels:
    app.kubernetes.io/instance: kube-prometheus-stack-1662817876
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: grafana
    app.kubernetes.io/version: 9.0.5
    helm.sh/chart: grafana-6.32.10
  annotations:
    meta.helm.sh/release-name: kube-prometheus-stack-1662817876
    meta.helm.sh/release-namespace: monitoring
status:
  loadBalancer: {}
spec:
  ports:
    - name: http-web
      protocol: TCP
      port: 80
      targetPort: 3000
  selector:
    app.kubernetes.io/instance: kube-prometheus-stack-1662817876
    app.kubernetes.io/name: grafana
  clusterIP: 10.43.146.52
  clusterIPs:
    - 10.43.146.52
  type: ClusterIP
  sessionAffinity: None
  ipFamilies:
    - IPv4
  ipFamilyPolicy: SingleStack
  internalTrafficPolicy: Cluster
Sorry for the wall of text!