
Deploying Microservices on Kubernetes: A Step-by-Step Production Guide


Kubernetes has become the de facto standard for orchestrating containerized microservices. Moving from Docker to production K8s involves understanding Deployments, Services, ConfigMaps, Secrets, health probes, resource limits, and scaling strategies. This guide walks through deploying a real microservice — every step, every manifest.

Kubernetes Architecture

A cluster consists of a control plane (API server, etcd, scheduler, controller manager) and worker nodes running the kubelet agent. The fundamental unit is a Pod — one or more containers sharing network and storage. You rarely create Pods directly; instead use Deployments (stateless), StatefulSets (databases), or DaemonSets (one-per-node agents).
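
To make the distinction concrete, here is what a bare Pod manifest looks like — illustrative only (the name is hypothetical), since in practice you would wrap this in a Deployment so the control plane reschedules the Pod if its node fails:

```yaml
# A standalone Pod -- for illustration; prefer a Deployment (Step 2 below)
apiVersion: v1
kind: Pod
metadata:
  name: order-service-pod   # hypothetical name
spec:
  containers:
  - name: order-service
    image: myregistry/order-service:1.2.0
    ports:
    - containerPort: 8080
```

A bare Pod like this is not rescheduled or replaced by anything if it dies, which is exactly why Deployments exist.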

Step 1: Production Dockerfile

# Multi-stage build for a Node.js microservice
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .

FROM node:20-alpine
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
WORKDIR /app
COPY --from=builder /app .
USER appuser
EXPOSE 8080
# Note: Kubernetes ignores Docker HEALTHCHECK and relies on its own probes (Step 2);
# this check still helps when running the image with plain Docker.
HEALTHCHECK --interval=30s --timeout=3s CMD wget -qO- http://localhost:8080/health || exit 1
CMD ["node", "server.js"]

Step 2: Deployment Manifest

apiVersion: apps/v1
kind: Deployment
metadata:
  name: order-service
  labels: { app: order-service, version: v1 }
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate: { maxSurge: 1, maxUnavailable: 0 }
  selector:
    matchLabels: { app: order-service }
  template:
    metadata:
      labels: { app: order-service, version: v1 }
    spec:
      securityContext: { runAsNonRoot: true, runAsUser: 1000 }
      containers:
      - name: order-service
        image: myregistry/order-service:1.2.0
        ports: [{ containerPort: 8080, name: http }]
        resources:
          requests: { cpu: "100m", memory: "128Mi" }
          limits: { cpu: "500m", memory: "512Mi" }
        livenessProbe:
          httpGet: { path: /health, port: 8080 }
          initialDelaySeconds: 15
          periodSeconds: 20
        readinessProbe:
          httpGet: { path: /ready, port: 8080 }
          initialDelaySeconds: 5
          periodSeconds: 10
        startupProbe:
          httpGet: { path: /health, port: 8080 }
          failureThreshold: 30
          periodSeconds: 2
        envFrom:
        - configMapRef: { name: order-service-config }
        - secretRef: { name: order-service-secrets }

Step 3: Service & Networking

apiVersion: v1
kind: Service
metadata: { name: order-service }
spec:
  selector: { app: order-service }
  ports: [{ protocol: TCP, port: 80, targetPort: 8080 }]
  type: ClusterIP  # Internal; use LoadBalancer or Ingress for external

Other services reach yours at http://order-service within the same namespace. For external traffic, use an Ingress controller with path-based routing.
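
An Ingress for path-based routing might look like the sketch below — the hostname, path, and ingress class are hypothetical, and it assumes an NGINX Ingress controller is already installed in the cluster:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: order-service-ingress   # hypothetical name
spec:
  ingressClassName: nginx        # assumes the NGINX ingress controller
  rules:
  - host: api.example.com        # hypothetical hostname
    http:
      paths:
      - path: /orders
        pathType: Prefix
        backend:
          service:
            name: order-service
            port:
              number: 80         # the Service port from Step 3
```

The Ingress routes to the Service's port 80, which in turn forwards to the container's port 8080.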

Step 4: Configuration & Secrets

Use ConfigMaps for non-sensitive settings and Secrets for credentials. For production, consider HashiCorp Vault, AWS Secrets Manager, or Sealed Secrets instead of plain K8s Secrets (which are only base64-encoded by default).
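
A sketch of the two objects the Deployment's envFrom references in Step 2 — the keys and values here are hypothetical placeholders:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: order-service-config
data:
  LOG_LEVEL: "info"                      # hypothetical keys
  ORDER_QUEUE_URL: "amqp://rabbitmq:5672"
---
apiVersion: v1
kind: Secret
metadata:
  name: order-service-secrets
type: Opaque
stringData:                              # stringData avoids manual base64 encoding
  DB_PASSWORD: "change-me"               # placeholder; inject from a vault in production
```

Both are applied with kubectl apply -f, and every key becomes an environment variable in the container via envFrom.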

Step 5: Autoscaling & Observability

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata: { name: order-service-hpa }
spec:
  scaleTargetRef: { apiVersion: apps/v1, kind: Deployment, name: order-service }
  minReplicas: 3
  maxReplicas: 20
  metrics:
  - type: Resource
    resource: { name: cpu, target: { type: Utilization, averageUtilization: 70 } }

Pair the HPA with Prometheus (metrics), Grafana (dashboards), and Loki or ELK (logs). Use OpenTelemetry for distributed tracing across microservices.

Start simple: three replicas, health probes, and resource limits, then iterate. Don't try service mesh, GitOps, and autoscaling all at once.
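
One common way to wire Prometheus up is scrape annotations on the Deployment's pod template — a convention honored by the Prometheus Helm chart's default scrape config, not part of core Kubernetes, and it assumes the service exposes a /metrics endpoint:

```yaml
# Fragment of the Deployment's spec.template from Step 2
template:
  metadata:
    labels: { app: order-service, version: v1 }
    annotations:
      prometheus.io/scrape: "true"
      prometheus.io/port: "8080"
      prometheus.io/path: "/metrics"   # assumes the app serves Prometheus metrics here
```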

Further reading: K8s Architecture | K8s Deployments

