Docker Containerization
Overview
Docker is a platform for developing, shipping, and running applications in containers. Containers provide isolation, consistency, and efficiency across different environments.
Core Concepts
Images vs Containers
- Image: Read-only template with application code and dependencies
- Container: Running instance of an image
- Dockerfile: Script to build an image
- Registry: Storage for images (Docker Hub, ECR)
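A minimal Dockerfile ties these concepts together: `docker build` turns the file into an image, and `docker run` starts containers from that image (file and tag names here are illustrative):

```dockerfile
# Build an image from this file:  docker build -t hello:latest .
# Run a container from the image: docker run --rm hello:latest
FROM python:3.11-slim
WORKDIR /app
COPY app.py .
CMD ["python", "app.py"]
```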
Installation
# Ubuntu/Debian
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
# macOS
brew install --cask docker
# Verify
docker --version
docker compose version
Dockerfile
Python Application
# Multi-stage build
FROM python:3.11-slim AS builder
WORKDIR /app
# Install dependencies
COPY requirements.txt .
RUN pip install --user --no-cache-dir -r requirements.txt
# Final stage
FROM python:3.11-slim
WORKDIR /app
# Copy from builder
COPY --from=builder /root/.local /root/.local
# Copy application
COPY . .
# Make sure scripts in .local are usable
ENV PATH=/root/.local/bin:$PATH
# Expose port
EXPOSE 8000
# Run application
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
FastAPI Production
FROM python:3.11-slim
WORKDIR /app
# Install system dependencies
RUN apt-get update && apt-get install -y \
gcc \
&& rm -rf /var/lib/apt/lists/*
# Install Python dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Copy application
COPY . .
# Create non-root user
RUN useradd -m -u 1000 appuser && chown -R appuser:appuser /app
USER appuser
# Expose port
EXPOSE 8000
# Health check
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
  CMD python -c "import requests; requests.get('http://localhost:8000/health').raise_for_status()"
# Run with gunicorn (FastAPI is an ASGI app, so use the uvicorn worker class)
CMD ["gunicorn", "main:app", "--workers", "4", "--worker-class", "uvicorn.workers.UvicornWorker", "--bind", "0.0.0.0:8000"]
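The HEALTHCHECK above assumes the app serves a `/health` route. A minimal stand-in using only the standard library (in the real app this would be a FastAPI route such as `@app.get("/health")`; all names here are illustrative):

```python
# Stand-in /health endpoint using only the standard library.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/health":
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # keep request logging quiet
        pass

# Port 0 lets the OS pick a free port; a container would bind 8000 instead.
server = HTTPServer(("127.0.0.1", 0), HealthHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

with urllib.request.urlopen(f"http://127.0.0.1:{server.server_port}/health") as resp:
    status = json.load(resp)["status"]
print(status)  # prints: ok
server.shutdown()
```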
ML Application
FROM nvidia/cuda:11.8.0-cudnn8-runtime-ubuntu22.04
ENV DEBIAN_FRONTEND=noninteractive
# Install Python
RUN apt-get update && apt-get install -y \
python3.10 \
python3-pip \
&& rm -rf /var/lib/apt/lists/*
WORKDIR /app
# Install ML dependencies
COPY requirements.txt .
RUN pip3 install --no-cache-dir -r requirements.txt
# Copy application
COPY . .
# Expose port
EXPOSE 8000
# Set environment
ENV PYTHONPATH=/app
ENV TORCH_HOME=/app/.cache/torch
# Run
CMD ["python3", "-m", "uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
Best Practices
1. Use Multi-Stage Builds
# Build stage
FROM node:18 AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
# Production stage
FROM node:18-alpine
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY package*.json ./
RUN npm ci --omit=dev
CMD ["node", "dist/server.js"]
2. Minimize Image Size
# Use alpine variants when possible
FROM python:3.11-alpine
# Clean up in the same layer (apk on Alpine; apt-get + rm -rf /var/lib/apt/lists/* on Debian)
RUN apk add --no-cache package
# Use .dockerignore
# node_modules
# .git
# *.md
3. Don't Run as Root
RUN adduser -D -u 1000 appuser
USER appuser
4. Use Specific Versions
# Pin the base image, not just python:3-slim
FROM python:3.11.4-slim
# Pin dependencies, not just pip install requests
RUN pip install requests==2.31.0
5. Leverage Build Cache
# Copy requirements first (changes less often)
COPY requirements.txt .
RUN pip install -r requirements.txt
# Then copy application code
COPY . .
6. Add Health Checks
HEALTHCHECK --interval=30s --timeout=3s \
CMD curl -f http://localhost:8000/health || exit 1
Docker Compose
Multi-Service Application
version: '3.8'
services:
  web:
    build: .
    ports:
      - "8000:8000"
    environment:
      - DATABASE_URL=postgresql://user:password@db:5432/app
      - REDIS_URL=redis://redis:6379
    depends_on:
      - db
      - redis
    volumes:
      - ./app:/app
    networks:
      - app-network
  db:
    image: postgres:15-alpine
    environment:
      POSTGRES_DB: app
      POSTGRES_USER: user
      POSTGRES_PASSWORD: password
    volumes:
      - postgres_data:/var/lib/postgresql/data
    networks:
      - app-network
  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"
    networks:
      - app-network
  worker:
    build: .
    command: celery -A app.tasks worker --loglevel=info
    environment:
      - DATABASE_URL=postgresql://user:password@db:5432/app
      - REDIS_URL=redis://redis:6379
    depends_on:
      - db
      - redis
    networks:
      - app-network
volumes:
  postgres_data:
networks:
  app-network:
    driver: bridge
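Inside the containers, service code reads these settings from the environment Compose injects; a minimal sketch (variable names follow the compose file above, the fallback defaults are illustrative):

```python
import os

# Compose injects DATABASE_URL and REDIS_URL into the container environment;
# the defaults below are only fallbacks for running the service outside Compose.
os.environ.setdefault("DATABASE_URL", "postgresql://user:password@localhost:5432/app")
os.environ.setdefault("REDIS_URL", "redis://localhost:6379")

database_url = os.environ["DATABASE_URL"]
redis_url = os.environ["REDIS_URL"]
print(database_url.startswith("postgresql://"))  # prints: True
```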
Development Configuration
version: '3.8'
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile.dev
    volumes:
      - .:/app
    ports:
      - "8000:8000"
    environment:
      - DEBUG=1
      - RELOAD=1
    command: uvicorn main:app --host 0.0.0.0 --port 8000 --reload
Common Commands
Image Management
# Build image
docker build -t myapp:latest .
# Tag image
docker tag myapp:latest myrepo/myapp:1.0
# Push to registry
docker push myrepo/myapp:1.0
# Pull image
docker pull myrepo/myapp:1.0
# List images
docker images
# Remove image
docker rmi myapp:latest
# Prune unused images
docker image prune -a
Container Management
# Run container
docker run -d -p 8000:8000 --name myapp myapp:latest
# Run with environment variables
docker run -d -e DATABASE_URL=postgresql://... myapp:latest
# Run with volume
docker run -d -v $(pwd)/data:/app/data myapp:latest
# List containers
docker ps -a
# Stop container
docker stop myapp
# Start container
docker start myapp
# Remove container
docker rm myapp
# View logs
docker logs -f myapp
# Execute command in container (alpine-based images ship sh, not bash)
docker exec -it myapp bash
# Inspect container
docker inspect myapp
Docker Compose
# Start services
docker compose up -d
# Build and start
docker compose up -d --build
# Stop services
docker compose down
# View logs
docker compose logs -f
# Execute in service
docker compose exec web bash
# Scale services
docker compose up -d --scale worker=3
# Run one-off command
docker compose run web python manage.py migrate
Networking
Create Network
# Create bridge network
docker network create my-network
# Connect container to network
docker network connect my-network myapp
# Disconnect
docker network disconnect my-network myapp
Compose Networks
services:
  web:
    networks:
      - frontend
      - backend
  db:
    networks:
      - backend
networks:
  frontend:
  backend:
    driver: bridge
Volumes
Create Volume
# Create named volume
docker volume create my-data
# Use volume
docker run -d -v my-data:/app/data myapp
# List volumes
docker volume ls
# Inspect volume
docker volume inspect my-data
# Remove volume
docker volume rm my-data
Compose Volumes
services:
  app:
    volumes:
      - data:/app/data
      - ./config:/app/config:ro  # Read-only
      - /host/path:/container/path  # Bind mount
volumes:
  data:
    driver: local
Optimization
Reduce Image Size
# Use .dockerignore to exclude unnecessary files
**/node_modules
**/.git
**/__pycache__
**/*.pyc
**/.pytest_cache
# Combine RUN statements
RUN apt-get update && apt-get install -y \
package1 \
package2 \
&& rm -rf /var/lib/apt/lists/*
# Use buildx for multi-platform builds
docker buildx build --platform linux/amd64,linux/arm64 -t myapp:latest .
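The .dockerignore patterns above can be sanity-checked locally. A rough Python approximation of the matching (real .dockerignore follows Go's filepath.Match plus `**` extensions, so this is a simplified sketch):

```python
from fnmatch import fnmatch

def ignored(path: str, patterns: list[str]) -> bool:
    # Rough approximation of .dockerignore matching:
    # '**/x' is treated as "x appearing at any depth in the path".
    parts = path.split("/")
    for pat in patterns:
        if pat.startswith("**/"):
            tail = pat[3:]
            if any(fnmatch(part, tail) for part in parts):
                return True
        elif fnmatch(path, pat):
            return True
    return False

patterns = ["**/node_modules", "**/.git", "**/__pycache__", "**/*.pyc", "**/.pytest_cache"]
print(ignored("src/app/__pycache__/mod.pyc", patterns))  # prints: True
print(ignored("src/main.py", patterns))                  # prints: False
```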
Caching and Layer Ordering
# Dependencies change rarely: copy the manifest and install first so the layer is cached
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Application code changes often: copy it last so the cached layers are reused
COPY . .
Security
Scan Images
# Use Trivy
trivy image myapp:latest
# Use Docker Scout
docker scout cves myapp:latest
Security Best Practices
# Use specific version tags
FROM python:3.11.4-slim
# Run as non-root user (useradd on Debian/slim images; adduser -D on Alpine)
RUN useradd -m -u 1000 appuser
USER appuser
# Use COPY instead of ADD (ADD can extract archives)
COPY file.txt /app/
# Don't include secrets
# Use environment variables or secrets management
# Use a minimal init for proper signal handling (Alpine example)
RUN apk add --no-cache dumb-init
ENTRYPOINT ["dumb-init", "--"]
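For the "don't include secrets" rule, application code can prefer mounted secret files over environment variables; Compose and Swarm mount secrets under /run/secrets/<name>. A hedged sketch (the helper name and the DB_PASSWORD fallback are illustrative):

```python
import os
from pathlib import Path
from typing import Optional

def get_secret(name: str) -> Optional[str]:
    # Prefer a mounted secret file (/run/secrets/<name>), as created by
    # Compose/Swarm secrets; fall back to an environment variable.
    secret_file = Path("/run/secrets") / name
    if secret_file.is_file():
        return secret_file.read_text().strip()
    return os.environ.get(name.upper())

os.environ["DB_PASSWORD"] = "example-only"  # stand-in for a runtime-injected value
print(get_secret("db_password"))  # prints: example-only
```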
CI/CD Integration
GitHub Actions
name: Docker
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build image
        # The image must be namespaced (user/repo) for the push to succeed
        run: docker build -t ${{ secrets.USER }}/myapp:${{ github.sha }} .
      - name: Log in to registry
        run: echo "${{ secrets.PASS }}" | docker login -u ${{ secrets.USER }} --password-stdin
      - name: Push image
        run: docker push ${{ secrets.USER }}/myapp:${{ github.sha }}
Production Deployment
ECS/EKS
# Task definition for ECS
{
  "family": "myapp",
  "containerDefinitions": [
    {
      "name": "myapp",
      "image": "myrepo/myapp:latest",
      "memory": 512,
      "cpu": 256,
      "essential": true,
      "portMappings": [
        {"containerPort": 8000}
      ]
    }
  ]
}
Kubernetes Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myrepo/myapp:latest
          ports:
            - containerPort: 8000
          resources:
            requests:
              memory: "256Mi"
              cpu: "250m"
            limits:
              memory: "512Mi"
              cpu: "500m"
Integration
- AWS: ECR, ECS, EKS, Lambda
- CI/CD: GitHub Actions, GitLab CI
- Monitoring: Prometheus, Grafana
- Logging: ELK Stack, CloudWatch
Source
Repository: https://github.com/muhammederem/chief (skill path: .claude/skills/devops/docker/SKILL.md)