Category: Cloud

  • Deploying Microservices on Kubernetes: A Step-by-Step Production Guide

    Kubernetes has become the de facto standard for orchestrating containerized microservices. Moving from Docker to production K8s involves understanding Deployments, Services, ConfigMaps, Secrets, health probes, resource limits, and scaling strategies. This guide walks through deploying a real microservice — every step, every manifest.

    Kubernetes Architecture

    A cluster consists of a control plane (API server, etcd, scheduler, controller manager) and worker nodes running the kubelet agent. The fundamental unit is a Pod — one or more containers sharing network and storage. You rarely create Pods directly; instead use Deployments (stateless), StatefulSets (databases), or DaemonSets (one-per-node agents).
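    To make the Pod concept concrete, here is a minimal Pod manifest (an illustrative sketch; in practice a Deployment, as in Step 2, creates and manages Pods for you):

```yaml
# Minimal Pod: one container sharing the Pod's network namespace (illustrative)
apiVersion: v1
kind: Pod
metadata:
  name: order-service-pod
  labels: { app: order-service }
spec:
  containers:
  - name: order-service
    image: myregistry/order-service:1.2.0   # image name reused from this guide
    ports:
    - containerPort: 8080
```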

    Step 1: Production Dockerfile

    # Multi-stage build for a Node.js microservice
    FROM node:20-alpine AS builder
    WORKDIR /app
    COPY package*.json ./
    RUN npm ci --omit=dev
    COPY . .
    
    FROM node:20-alpine
    RUN addgroup -S appgroup && adduser -S appuser -G appgroup
    WORKDIR /app
    COPY --from=builder /app .
    USER appuser
    EXPOSE 8080
    # Note: Kubernetes ignores Docker HEALTHCHECK and relies on its own probes (Step 2)
    HEALTHCHECK --interval=30s --timeout=3s CMD wget -qO- http://localhost:8080/health || exit 1
    CMD ["node", "server.js"]

    Step 2: Deployment Manifest

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: order-service
      labels: { app: order-service, version: v1 }
    spec:
      replicas: 3
      strategy:
        type: RollingUpdate
        rollingUpdate: { maxSurge: 1, maxUnavailable: 0 }
      selector:
        matchLabels: { app: order-service }
      template:
        metadata:
          labels: { app: order-service, version: v1 }
        spec:
          securityContext: { runAsNonRoot: true, runAsUser: 1000 }
          containers:
          - name: order-service
            image: myregistry/order-service:1.2.0
            ports: [{ containerPort: 8080, name: http }]
            resources:
              requests: { cpu: "100m", memory: "128Mi" }
              limits: { cpu: "500m", memory: "512Mi" }
            livenessProbe:
              httpGet: { path: /health, port: 8080 }
              initialDelaySeconds: 15
              periodSeconds: 20
            readinessProbe:
              httpGet: { path: /ready, port: 8080 }
              initialDelaySeconds: 5
              periodSeconds: 10
            startupProbe:
              httpGet: { path: /health, port: 8080 }
              failureThreshold: 30
              periodSeconds: 2
            envFrom:
            - configMapRef: { name: order-service-config }
            - secretRef: { name: order-service-secrets }

    Step 3: Service & Networking

    apiVersion: v1
    kind: Service
    metadata: { name: order-service }
    spec:
      selector: { app: order-service }
      ports: [{ protocol: TCP, port: 80, targetPort: 8080 }]
      type: ClusterIP  # Internal; use LoadBalancer or Ingress for external

    Other services in the same namespace reach yours at http://order-service; cluster DNS resolves the Service name (use order-service.<namespace>.svc.cluster.local from other namespaces). For external traffic, use an Ingress controller with path-based routing.
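    A minimal Ingress for path-based routing could look like this (the hostname, path, and ingress class are placeholder assumptions, and an ingress controller such as ingress-nginx must be installed in the cluster):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: order-service-ingress
spec:
  ingressClassName: nginx          # assumes the ingress-nginx controller
  rules:
  - host: api.example.com          # placeholder hostname
    http:
      paths:
      - path: /orders
        pathType: Prefix
        backend:
          service:
            name: order-service
            port: { number: 80 }   # the Service port from Step 3
```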

    Step 4: Configuration & Secrets

    Use ConfigMaps for non-sensitive settings and Secrets for credentials. For production, consider HashiCorp Vault, AWS Secrets Manager, or Sealed Secrets instead of plain K8s Secrets (which are only base64-encoded by default).
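    For example, the order-service-config and order-service-secrets objects consumed via envFrom in Step 2 could be defined like this (keys and values are illustrative placeholders):

```yaml
apiVersion: v1
kind: ConfigMap
metadata: { name: order-service-config }
data:
  LOG_LEVEL: "info"                          # illustrative keys
  PAYMENT_API_URL: "http://payment-service"
---
apiVersion: v1
kind: Secret
metadata: { name: order-service-secrets }
type: Opaque
stringData:                        # stored base64-encoded; stringData avoids encoding by hand
  DB_PASSWORD: "change-me"         # placeholder; inject from a vault in production
```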

    Step 5: Autoscaling & Observability

    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata: { name: order-service-hpa }
    spec:
      scaleTargetRef: { apiVersion: apps/v1, kind: Deployment, name: order-service }
      minReplicas: 3
      maxReplicas: 20
      metrics:
      - type: Resource
        resource: { name: cpu, target: { type: Utilization, averageUtilization: 70 } }

    Pair with Prometheus (metrics), Grafana (dashboards), and Loki or ELK (logs). Use OpenTelemetry for distributed tracing across microservices. Start simple — 3 replicas, health probes, iterate. Don’t try service mesh, GitOps, and autoscaling all at once.

    Further reading: K8s Architecture | K8s Deployments

  • AWS Lambda vs Azure Functions: A Practical Serverless Comparison

    Serverless computing lets you run code without managing servers. AWS Lambda and Azure Functions are the two dominant platforms — same core concept (event-driven, pay-per-execution) but different developer experiences, ecosystems, and operational characteristics. Here’s a grounded comparison.

    How Serverless Works

    Both execute functions in response to events: HTTP requests, queue messages, file uploads, database changes, or scheduled timers. You write a handler, deploy it, and the platform manages scaling, availability, and infrastructure. You pay only for compute time used — measured in milliseconds. When traffic spikes to 10,000 concurrent requests, instances provision automatically. When idle, you pay nothing.

    Handler Patterns

    // AWS Lambda — Node.js
    exports.handler = async (event, context) => {
        const name = event.queryStringParameters?.name || "World";
        return {
            statusCode: 200,
            headers: { "Content-Type": "application/json" },
            body: JSON.stringify({ message: `Hello ${name} from Lambda!` })
        };
    };
    
    // Azure Functions — Node.js v4 model
    const { app } = require('@azure/functions');
    app.http('hello', {
        methods: ['GET'],
        handler: async (request, context) => {
            const name = request.query.get('name') || 'World';
            return { status: 200, jsonBody: { message: `Hello ${name} from Azure!` } };
        }
    });
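    Either handler can be sanity-checked locally before deploying. A quick sketch for the Lambda-style handler, invoked with a stubbed API Gateway-style event:

```javascript
// Re-declare the handler from above, then invoke it with a stubbed proxy event.
const handler = async (event) => {
  const name = event.queryStringParameters?.name || "World";
  return {
    statusCode: 200,
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ message: `Hello ${name} from Lambda!` })
  };
};

// Minimal API Gateway-style event stub
handler({ queryStringParameters: { name: "Ada" } })
  .then((res) => console.log(res.statusCode, JSON.parse(res.body).message));
// prints: 200 Hello Ada from Lambda!
```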

    Ecosystem Integration

    Lambda integrates tightly with AWS: API Gateway, DynamoDB Streams, S3, SQS, SNS, EventBridge, Step Functions, Kinesis. Azure Functions integrates with Cosmos DB, Blob Storage, Service Bus, Event Grid, plus Microsoft 365 and Power Platform. Choose based on your existing cloud ecosystem.

    Cold Starts & Performance

    Cold starts (typically 100 ms to 2 s of added latency when a new instance spins up) affect both. Lambda offers Provisioned Concurrency and SnapStart (Java). Azure offers a Premium Plan with pre-warmed instances. Both have improved dramatically; cold starts are far less impactful than they were three years ago.

    Pricing

    Both offer 1 million free requests and 400,000 GB-seconds of compute per month. Beyond the free tier: ~$0.20 per million requests and ~$0.0000167/GB-second. Lambda's ARM64 (Graviton) option offers up to 34% better price-performance for many workloads. In practice, cost differences come from architecture choices, not per-request pricing.
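    A back-of-the-envelope model of that pricing (rates hard-coded from the figures above; they vary by region and change over time, so treat this as a sketch, not a quote):

```javascript
// Rough monthly cost using the ~$0.20/M-request and ~$0.0000166667/GB-second
// rates and the free tier described above.
function monthlyCost({ millionRequests, avgMs, memoryMB }) {
  const gbSeconds = millionRequests * 1e6 * (avgMs / 1000) * (memoryMB / 1024);
  const billableGbSeconds = Math.max(0, gbSeconds - 400000); // 400k GB-s free
  const billableMillions = Math.max(0, millionRequests - 1); // 1M requests free
  return billableGbSeconds * 0.0000166667 + billableMillions * 0.20;
}

// 10M requests/month at 120 ms average on 256 MB:
// 300,000 GB-s of compute fits inside the free tier, leaving only request fees.
console.log(monthlyCost({ millionRequests: 10, avgMs: 120, memoryMB: 256 })); // → 1.8
```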

    Developer Experience

    Lambda tooling: SAM, CDK, and the Serverless Framework, with sam local invoke for local testing. Azure Functions: deep VS Code integration, the Core Tools CLI with live reload, plus Durable Functions for stateful workflows (function chaining, fan-out/fan-in, human-interaction patterns).

    The Verdict

    Already on AWS? Lambda. Azure/Microsoft shop? Azure Functions. Greenfield? Choose based on which cloud’s broader services fit your needs — the serverless compute layer is comparable. Both are production-ready with massive communities.

    Further reading: AWS Lambda Docs | Azure Functions Docs