DEPLOYMENT GUIDE - ERP Generic
Last updated: 2025-11-24 | Owner: DevOps Team | Status: ✅ Production-Ready
TABLE OF CONTENTS
- Overview
- Prerequisites
- Docker Setup
- PostgreSQL 16 Setup
- Redis Setup
- Environment Variables
- Multi-Environment Strategy
- Deployment Steps
- Zero-Downtime Deployment
- Rollback Procedures
- Troubleshooting
- References
1. OVERVIEW
1.1 System Architecture
┌─────────────────────────────────────────────────────────────┐
│ Load Balancer / Nginx │
│ (SSL Termination) │
└────────────┬────────────────────────────────┬───────────────┘
│ │
┌───────▼───────┐ ┌──────▼──────┐
│ Frontend │ │ Backend │
│ React 18 │ │ NestJS 10 │
│ Vite 5 │ │ Prisma 5 │
└───────────────┘ └──────┬──────┘
│
┌───────────────────┼────────────────┐
│ │ │
┌──────▼──────┐ ┌──────▼─────┐ ┌──────▼──────┐
│ PostgreSQL │ │ Redis │ │ S3 │
│ 16 │ │ 7 │ │ (Storage) │
│ 9 schemas │ │ (Cache) │ │ │
└─────────────┘ └────────────┘ └─────────────┘
1.2 Components
| Component | Version | Purpose | Port |
|---|---|---|---|
| Backend | NestJS 10 + Node.js 20 | REST API + Business Logic | 3000 |
| Frontend | React 18 + Vite 5 | Web UI | 5173 |
| PostgreSQL | 16-alpine | Primary Database (9 schemas) | 5432 |
| Redis | 7-alpine | Session store + Cache | 6379 |
| Nginx | 1.25-alpine | Reverse proxy + SSL | 80, 443 |
1.3 System Requirements
Minimum (Development):
- CPU: 4 cores
- RAM: 8 GB
- Disk: 50 GB SSD
- OS: Linux (Ubuntu 22.04 LTS recommended)
Recommended (Production):
- CPU: 8+ cores
- RAM: 16+ GB
- Disk: 200+ GB SSD (NVMe preferred)
- OS: Ubuntu 22.04 LTS / RHEL 9 / Amazon Linux 2023
Scalability (High Load):
- CPU: 16+ cores
- RAM: 32+ GB
- Disk: 500+ GB SSD NVMe
- Load Balancer: AWS ELB / Azure Load Balancer / Nginx
- Database: PostgreSQL 16 with read replicas
2. PREREQUISITES
2.1 Software Requirements
# Docker & Docker Compose
docker --version # >= 24.0
docker-compose --version # >= 2.20
# Git
git --version # >= 2.40
# Node.js (for local development)
node --version # >= 20.0 LTS
npm --version # >= 10.0
# PostgreSQL Client (for manual operations)
psql --version # >= 16.0
# Redis Client (for manual operations)
redis-cli --version # >= 7.0
2.2 Installation (Ubuntu 22.04)
# Update system
sudo apt update && sudo apt upgrade -y
# Install Docker
curl -fsSL https://get.docker.com | sudo sh
sudo usermod -aG docker $USER
newgrp docker
# Install standalone docker-compose binary (optional: the get.docker.com script
# already installs the "docker compose" plugin; pick one form and use it consistently)
sudo curl -L "https://github.com/docker/compose/releases/download/v2.23.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
# Install PostgreSQL client (Ubuntu 22.04 ships v14; add the PGDG repo for v16)
sudo sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt jammy-pgdg main" > /etc/apt/sources.list.d/pgdg.list'
curl -fsSL https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/pgdg.gpg
sudo apt update && sudo apt install -y postgresql-client-16
# Install Redis client
sudo apt install -y redis-tools
# Install Git
sudo apt install -y git
# Verify installations
docker --version
docker-compose --version
psql --version
redis-cli --version
git --version
2.3 Network Requirements
Firewall Rules (Inbound):
- Port 80 (HTTP) - Allow from Load Balancer only
- Port 443 (HTTPS) - Allow from Load Balancer only
- Port 22 (SSH) - Allow from trusted IPs only (bastion host)
- Port 3000 (Backend API) - Internal only
- Port 5432 (PostgreSQL) - Internal only
- Port 6379 (Redis) - Internal only
Firewall Rules (Outbound):
- Port 80, 443 (HTTPS) - Allow (for package downloads, API calls)
- Port 25, 587 (SMTP) - Allow (for email notifications)
DNS Records:
# Production
erp-generic.com A <LOAD_BALANCER_IP>
api.erp-generic.com A <LOAD_BALANCER_IP>
*.erp-generic.com A <LOAD_BALANCER_IP> # Multi-tenant subdomains
# Staging
staging.erp-generic.com A <STAGING_IP>
# QA
qa.erp-generic.local A <QA_IP>
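Before pointing traffic at a new environment, it helps to confirm the records above actually resolve. A minimal pre-flight sketch (the hostnames are the examples from this section; `getent` is assumed available, as it is on any standard Linux):

```shell
#!/bin/sh
# Resolve each public hostname and print the first address (or UNRESOLVED).
for host in erp-generic.com api.erp-generic.com staging.erp-generic.com; do
  addr=$(getent hosts "$host" | awk '{print $1; exit}')
  if [ -n "$addr" ]; then
    echo "$host -> $addr"
  else
    echo "$host -> UNRESOLVED"
  fi
done
```

Run it once against each environment's resolver before cutover; wildcard records for tenant subdomains can be spot-checked the same way.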
3. DOCKER SETUP
3.1 Directory Structure
erp-generic/
├── backend/
│ ├── Dockerfile
│ ├── .dockerignore
│ ├── src/
│ ├── prisma/
│ └── package.json
├── frontend/
│ ├── Dockerfile
│ ├── .dockerignore
│ ├── src/
│ └── package.json
├── docker-compose.yml
├── docker-compose.prod.yml
├── docker-compose.dev.yml
├── .env.example
└── nginx/
└── nginx.conf
3.2 Backend Dockerfile
File: backend/Dockerfile
# Stage 1: Build
FROM node:20-alpine AS builder
WORKDIR /app
# Copy package files
COPY package*.json ./
COPY prisma ./prisma/
# Install dependencies (including devDependencies for build)
RUN npm ci
# Copy source code
COPY . .
# Generate Prisma Client
RUN npx prisma generate
# Build TypeScript
RUN npm run build
# Stage 2: Production
FROM node:20-alpine AS production
WORKDIR /app
# Install only production dependencies
COPY package*.json ./
COPY prisma ./prisma/
RUN npm ci --omit=dev
# Copy built files from builder stage
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules/.prisma ./node_modules/.prisma
# Create non-root user
RUN addgroup -g 1001 -S nodejs && \
adduser -S nestjs -u 1001
USER nestjs
# Expose port
EXPOSE 3000
# Health check
HEALTHCHECK --interval=30s --timeout=3s --start-period=40s --retries=3 \
CMD node -e "require('http').get('http://localhost:3000/health', (res) => { process.exit(res.statusCode === 200 ? 0 : 1); });"
# Start application
CMD ["node", "dist/main.js"]
3.3 Frontend Dockerfile
File: frontend/Dockerfile
# Stage 1: Build
FROM node:20-alpine AS builder
WORKDIR /app
# Copy package files
COPY package*.json ./
# Install dependencies
RUN npm ci
# Copy source code
COPY . .
# Vite inlines VITE_* variables at build time, so accept them as build args
ARG VITE_API_URL
ARG VITE_APP_ENV=production
ENV VITE_API_URL=$VITE_API_URL VITE_APP_ENV=$VITE_APP_ENV
# Build application
RUN npm run build
# Stage 2: Production with Nginx
FROM nginx:1.25-alpine AS production
# Copy built files
COPY --from=builder /app/dist /usr/share/nginx/html
# Copy custom nginx config
COPY nginx.conf /etc/nginx/conf.d/default.conf
# Expose port
EXPOSE 80
# Health check
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
CMD wget --quiet --tries=1 --spider http://localhost:80/ || exit 1
# Start nginx
CMD ["nginx", "-g", "daemon off;"]
File: frontend/nginx.conf
server {
listen 80;
server_name _;
root /usr/share/nginx/html;
index index.html;
# Security headers
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-Content-Type-Options "nosniff" always;
add_header X-XSS-Protection "1; mode=block" always;
add_header Referrer-Policy "no-referrer-when-downgrade" always;
# Gzip compression
gzip on;
gzip_vary on;
gzip_min_length 1024;
gzip_types text/plain text/css text/xml text/javascript application/javascript application/xml+rss application/json;
# Cache static assets
location ~* \.(js|css|png|jpg|jpeg|gif|ico|svg|woff|woff2|ttf|eot)$ {
expires 1y;
add_header Cache-Control "public, immutable";
}
# SPA routing - all routes go to index.html
location / {
try_files $uri $uri/ /index.html;
}
# API proxy (optional - if frontend and backend on same domain)
location /api {
proxy_pass http://backend:3000;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_cache_bypass $http_upgrade;
}
}
3.4 Docker Compose - Development
File: docker-compose.dev.yml
version: '3.9'
services:
postgres:
image: postgres:16-alpine
container_name: erp-postgres-dev
environment:
POSTGRES_DB: erp_generic_dev
POSTGRES_USER: erp_user
POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-dev_password_change_me}
POSTGRES_INITDB_ARGS: "--encoding=UTF8 --locale=en_US.UTF-8"
volumes:
- postgres_data_dev:/var/lib/postgresql/data
- ./database/init-scripts:/docker-entrypoint-initdb.d:ro
ports:
- "5432:5432"
networks:
- erp-network
healthcheck:
test: ["CMD-SHELL", "pg_isready -U erp_user -d erp_generic_dev"]
interval: 10s
timeout: 5s
retries: 5
restart: unless-stopped
redis:
image: redis:7-alpine
container_name: erp-redis-dev
command: redis-server --requirepass ${REDIS_PASSWORD:-dev_redis_pass} --maxmemory 256mb --maxmemory-policy allkeys-lru
volumes:
- redis_data_dev:/data
ports:
- "6379:6379"
networks:
- erp-network
healthcheck:
      test: ["CMD-SHELL", "redis-cli -a ${REDIS_PASSWORD:-dev_redis_pass} ping | grep PONG"]
interval: 10s
timeout: 3s
retries: 5
restart: unless-stopped
backend:
build:
context: ./backend
dockerfile: Dockerfile
target: builder # Use builder stage for development (includes dev dependencies)
container_name: erp-backend-dev
environment:
NODE_ENV: development
DATABASE_URL: postgresql://erp_user:${POSTGRES_PASSWORD:-dev_password_change_me}@postgres:5432/erp_generic_dev?schema=public
REDIS_URL: redis://default:${REDIS_PASSWORD:-dev_redis_pass}@redis:6379
JWT_SECRET: ${JWT_SECRET:-dev_jwt_secret_change_me}
JWT_EXPIRES_IN: 7d
PORT: 3000
LOG_LEVEL: debug
volumes:
- ./backend:/app
- /app/node_modules
- /app/dist
ports:
- "3000:3000"
- "9229:9229" # Debugger port
depends_on:
postgres:
condition: service_healthy
redis:
condition: service_healthy
networks:
- erp-network
command: npm run start:dev
restart: unless-stopped
frontend:
build:
context: ./frontend
dockerfile: Dockerfile
target: builder
container_name: erp-frontend-dev
environment:
VITE_API_URL: http://localhost:3000/api
VITE_APP_ENV: development
volumes:
- ./frontend:/app
- /app/node_modules
- /app/dist
ports:
- "5173:5173"
depends_on:
- backend
networks:
- erp-network
command: npm run dev -- --host
restart: unless-stopped
volumes:
postgres_data_dev:
name: erp_postgres_data_dev
redis_data_dev:
name: erp_redis_data_dev
networks:
erp-network:
name: erp-network-dev
driver: bridge
3.5 Docker Compose - Production
File: docker-compose.prod.yml
version: '3.9'
services:
postgres:
image: postgres:16-alpine
container_name: erp-postgres
environment:
POSTGRES_DB: ${POSTGRES_DB}
POSTGRES_USER: ${POSTGRES_USER}
POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
POSTGRES_INITDB_ARGS: "--encoding=UTF8 --locale=en_US.UTF-8"
volumes:
- postgres_data:/var/lib/postgresql/data
- ./database/init-scripts:/docker-entrypoint-initdb.d:ro
- /backups/postgres:/backups:rw
# Do NOT expose ports publicly in production
networks:
- erp-network-internal
healthcheck:
test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER} -d ${POSTGRES_DB}"]
interval: 10s
timeout: 5s
retries: 5
restart: always
deploy:
resources:
limits:
cpus: '4'
memory: 8G
reservations:
cpus: '2'
memory: 4G
logging:
driver: "json-file"
options:
max-size: "10m"
max-file: "3"
redis:
image: redis:7-alpine
container_name: erp-redis
command: redis-server --requirepass ${REDIS_PASSWORD} --maxmemory 2gb --maxmemory-policy allkeys-lru --appendonly yes
volumes:
- redis_data:/data
networks:
- erp-network-internal
healthcheck:
      test: ["CMD-SHELL", "redis-cli -a ${REDIS_PASSWORD} ping | grep PONG"]
interval: 10s
timeout: 3s
retries: 5
restart: always
deploy:
resources:
limits:
cpus: '2'
memory: 3G
reservations:
cpus: '1'
memory: 2G
logging:
driver: "json-file"
options:
max-size: "10m"
max-file: "3"
backend:
build:
context: ./backend
dockerfile: Dockerfile
target: production
image: ${DOCKER_REGISTRY}/erp-generic-backend:${VERSION:-latest}
container_name: erp-backend
environment:
NODE_ENV: production
DATABASE_URL: postgresql://${POSTGRES_USER}:${POSTGRES_PASSWORD}@postgres:5432/${POSTGRES_DB}?schema=public&connection_limit=20&pool_timeout=30
REDIS_URL: redis://default:${REDIS_PASSWORD}@redis:6379
JWT_SECRET: ${JWT_SECRET}
JWT_EXPIRES_IN: ${JWT_EXPIRES_IN:-24h}
JWT_REFRESH_EXPIRES_IN: ${JWT_REFRESH_EXPIRES_IN:-7d}
PORT: 3000
LOG_LEVEL: ${LOG_LEVEL:-info}
ALLOWED_ORIGINS: ${ALLOWED_ORIGINS}
SMTP_HOST: ${SMTP_HOST}
SMTP_PORT: ${SMTP_PORT}
SMTP_USER: ${SMTP_USER}
SMTP_PASSWORD: ${SMTP_PASSWORD}
S3_BUCKET: ${S3_BUCKET}
S3_REGION: ${S3_REGION}
AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID}
AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY}
depends_on:
postgres:
condition: service_healthy
redis:
condition: service_healthy
networks:
- erp-network-internal
- erp-network-external
restart: always
deploy:
replicas: 3
update_config:
parallelism: 1
delay: 10s
order: start-first
rollback_config:
parallelism: 1
delay: 5s
resources:
limits:
cpus: '2'
memory: 2G
reservations:
cpus: '1'
memory: 1G
logging:
driver: "json-file"
options:
max-size: "50m"
max-file: "5"
healthcheck:
test: ["CMD", "node", "-e", "require('http').get('http://localhost:3000/health', (r) => process.exit(r.statusCode === 200 ? 0 : 1))"]
interval: 30s
timeout: 10s
retries: 3
start_period: 40s
  frontend:
    build:
      context: ./frontend
      dockerfile: Dockerfile
      target: production
      args:
        # Vite inlines VITE_* values at build time; runtime env vars have no effect
        VITE_API_URL: ${API_URL}
        VITE_APP_ENV: production
    image: ${DOCKER_REGISTRY}/erp-generic-frontend:${VERSION:-latest}
    container_name: erp-frontend
depends_on:
- backend
networks:
- erp-network-external
restart: always
deploy:
replicas: 2
update_config:
parallelism: 1
delay: 10s
resources:
limits:
cpus: '1'
memory: 512M
reservations:
cpus: '0.5'
memory: 256M
logging:
driver: "json-file"
options:
max-size: "10m"
max-file: "3"
nginx:
image: nginx:1.25-alpine
container_name: erp-nginx
volumes:
- ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
- ./nginx/ssl:/etc/nginx/ssl:ro
- ./nginx/logs:/var/log/nginx:rw
ports:
- "80:80"
- "443:443"
depends_on:
- frontend
- backend
networks:
- erp-network-external
restart: always
deploy:
resources:
limits:
cpus: '1'
memory: 512M
logging:
driver: "json-file"
options:
max-size: "20m"
max-file: "5"
volumes:
postgres_data:
name: erp_postgres_data
driver: local
driver_opts:
type: none
o: bind
device: /data/postgres
redis_data:
name: erp_redis_data
driver: local
networks:
erp-network-internal:
name: erp-network-internal
driver: bridge
internal: true
erp-network-external:
name: erp-network-external
driver: bridge
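Both Compose files can be validated before anything is deployed; `docker compose config` fails fast on malformed YAML or missing interpolation variables. A small pre-flight sketch (assumes Docker Compose v2; degrades gracefully when Docker is absent):

```shell
#!/bin/sh
# Validate compose files without starting any containers.
summary=""
for f in docker-compose.dev.yml docker-compose.prod.yml; do
  if ! command -v docker >/dev/null 2>&1; then
    status="SKIPPED (docker not installed)"
  elif docker compose -f "$f" config >/dev/null 2>&1; then
    status="OK"
  else
    status="INVALID"
  fi
  summary="$summary$f: $status\n"
done
printf "%b" "$summary"
```

Wire this into CI so a broken compose file never reaches a deploy step.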
4. POSTGRESQL 16 SETUP
4.1 Initialization Script
File: database/init-scripts/01-init-schemas.sql
-- =====================================================
-- ERP GENERIC - PostgreSQL 16 Initialization
-- Creates 9 schemas + roles + extensions
-- =====================================================
-- Create extensions
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
CREATE EXTENSION IF NOT EXISTS "pgcrypto";
CREATE EXTENSION IF NOT EXISTS "pg_trgm"; -- For text search
CREATE EXTENSION IF NOT EXISTS "btree_gin"; -- For indexing
CREATE EXTENSION IF NOT EXISTS "pg_stat_statements"; -- For query monitoring
-- Create roles
CREATE ROLE erp_readonly;
CREATE ROLE erp_readwrite;
CREATE ROLE erp_admin;
-- Create 9 schemas
CREATE SCHEMA IF NOT EXISTS auth;
CREATE SCHEMA IF NOT EXISTS core;
CREATE SCHEMA IF NOT EXISTS financial;
CREATE SCHEMA IF NOT EXISTS inventory;
CREATE SCHEMA IF NOT EXISTS purchase;
CREATE SCHEMA IF NOT EXISTS sales;
CREATE SCHEMA IF NOT EXISTS analytics;
CREATE SCHEMA IF NOT EXISTS projects;
CREATE SCHEMA IF NOT EXISTS system;
-- Grant permissions to schemas
GRANT USAGE ON SCHEMA auth TO erp_readonly, erp_readwrite, erp_admin;
GRANT USAGE ON SCHEMA core TO erp_readonly, erp_readwrite, erp_admin;
GRANT USAGE ON SCHEMA financial TO erp_readonly, erp_readwrite, erp_admin;
GRANT USAGE ON SCHEMA inventory TO erp_readonly, erp_readwrite, erp_admin;
GRANT USAGE ON SCHEMA purchase TO erp_readonly, erp_readwrite, erp_admin;
GRANT USAGE ON SCHEMA sales TO erp_readonly, erp_readwrite, erp_admin;
GRANT USAGE ON SCHEMA analytics TO erp_readonly, erp_readwrite, erp_admin;
GRANT USAGE ON SCHEMA projects TO erp_readonly, erp_readwrite, erp_admin;
GRANT USAGE ON SCHEMA system TO erp_readonly, erp_readwrite, erp_admin;
-- Grant SELECT to readonly
GRANT SELECT ON ALL TABLES IN SCHEMA auth TO erp_readonly;
GRANT SELECT ON ALL TABLES IN SCHEMA core TO erp_readonly;
GRANT SELECT ON ALL TABLES IN SCHEMA financial TO erp_readonly;
GRANT SELECT ON ALL TABLES IN SCHEMA inventory TO erp_readonly;
GRANT SELECT ON ALL TABLES IN SCHEMA purchase TO erp_readonly;
GRANT SELECT ON ALL TABLES IN SCHEMA sales TO erp_readonly;
GRANT SELECT ON ALL TABLES IN SCHEMA analytics TO erp_readonly;
GRANT SELECT ON ALL TABLES IN SCHEMA projects TO erp_readonly;
GRANT SELECT ON ALL TABLES IN SCHEMA system TO erp_readonly;
-- Grant SELECT, INSERT, UPDATE, DELETE to readwrite
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA auth TO erp_readwrite;
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA core TO erp_readwrite;
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA financial TO erp_readwrite;
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA inventory TO erp_readwrite;
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA purchase TO erp_readwrite;
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA sales TO erp_readwrite;
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA analytics TO erp_readwrite;
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA projects TO erp_readwrite;
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA system TO erp_readwrite;
-- Grant ALL to admin
GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA auth TO erp_admin;
GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA core TO erp_admin;
GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA financial TO erp_admin;
GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA inventory TO erp_admin;
GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA purchase TO erp_admin;
GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA sales TO erp_admin;
GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA analytics TO erp_admin;
GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA projects TO erp_admin;
GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA system TO erp_admin;
-- Default privileges for future tables
ALTER DEFAULT PRIVILEGES IN SCHEMA auth GRANT SELECT ON TABLES TO erp_readonly;
ALTER DEFAULT PRIVILEGES IN SCHEMA core GRANT SELECT ON TABLES TO erp_readonly;
ALTER DEFAULT PRIVILEGES IN SCHEMA financial GRANT SELECT ON TABLES TO erp_readonly;
ALTER DEFAULT PRIVILEGES IN SCHEMA inventory GRANT SELECT ON TABLES TO erp_readonly;
ALTER DEFAULT PRIVILEGES IN SCHEMA purchase GRANT SELECT ON TABLES TO erp_readonly;
ALTER DEFAULT PRIVILEGES IN SCHEMA sales GRANT SELECT ON TABLES TO erp_readonly;
ALTER DEFAULT PRIVILEGES IN SCHEMA analytics GRANT SELECT ON TABLES TO erp_readonly;
ALTER DEFAULT PRIVILEGES IN SCHEMA projects GRANT SELECT ON TABLES TO erp_readonly;
ALTER DEFAULT PRIVILEGES IN SCHEMA system GRANT SELECT ON TABLES TO erp_readonly;
ALTER DEFAULT PRIVILEGES IN SCHEMA auth GRANT SELECT, INSERT, UPDATE, DELETE ON TABLES TO erp_readwrite;
ALTER DEFAULT PRIVILEGES IN SCHEMA core GRANT SELECT, INSERT, UPDATE, DELETE ON TABLES TO erp_readwrite;
ALTER DEFAULT PRIVILEGES IN SCHEMA financial GRANT SELECT, INSERT, UPDATE, DELETE ON TABLES TO erp_readwrite;
ALTER DEFAULT PRIVILEGES IN SCHEMA inventory GRANT SELECT, INSERT, UPDATE, DELETE ON TABLES TO erp_readwrite;
ALTER DEFAULT PRIVILEGES IN SCHEMA purchase GRANT SELECT, INSERT, UPDATE, DELETE ON TABLES TO erp_readwrite;
ALTER DEFAULT PRIVILEGES IN SCHEMA sales GRANT SELECT, INSERT, UPDATE, DELETE ON TABLES TO erp_readwrite;
ALTER DEFAULT PRIVILEGES IN SCHEMA analytics GRANT SELECT, INSERT, UPDATE, DELETE ON TABLES TO erp_readwrite;
ALTER DEFAULT PRIVILEGES IN SCHEMA projects GRANT SELECT, INSERT, UPDATE, DELETE ON TABLES TO erp_readwrite;
ALTER DEFAULT PRIVILEGES IN SCHEMA system GRANT SELECT, INSERT, UPDATE, DELETE ON TABLES TO erp_readwrite;
-- Row Level Security (RLS) is enabled per TABLE, not per schema
-- (ALTER SCHEMA has no RLS clause). After Prisma creates the tables,
-- enable RLS and add a tenant policy on each tenant-scoped table, e.g.:
--   ALTER TABLE core.companies ENABLE ROW LEVEL SECURITY;
--   CREATE POLICY tenant_isolation ON core.companies
--     USING (tenant_id = auth.get_current_tenant_id());
-- Create audit function
CREATE OR REPLACE FUNCTION system.audit_trigger_function()
RETURNS TRIGGER AS $$
BEGIN
IF (TG_OP = 'INSERT') THEN
NEW.created_at = CURRENT_TIMESTAMP;
NEW.created_by = COALESCE(NEW.created_by, current_setting('app.current_user_id', TRUE)::UUID);
ELSIF (TG_OP = 'UPDATE') THEN
NEW.updated_at = CURRENT_TIMESTAMP;
NEW.updated_by = current_setting('app.current_user_id', TRUE)::UUID;
NEW.created_at = OLD.created_at;
NEW.created_by = OLD.created_by;
END IF;
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
-- Create tenant context function
CREATE OR REPLACE FUNCTION auth.get_current_tenant_id()
RETURNS UUID AS $$
BEGIN
RETURN NULLIF(current_setting('app.current_tenant_id', TRUE), '')::UUID;
END;
$$ LANGUAGE plpgsql STABLE;
COMMENT ON FUNCTION auth.get_current_tenant_id() IS 'Returns the current tenant_id from session variable';
-- Success message
DO $$
BEGIN
RAISE NOTICE 'ERP Generic PostgreSQL initialization completed successfully!';
RAISE NOTICE 'Created 9 schemas: auth, core, financial, inventory, purchase, sales, analytics, projects, system';
RAISE NOTICE 'Created 3 roles: erp_readonly, erp_readwrite, erp_admin';
RAISE NOTICE 'Installed extensions: uuid-ossp, pgcrypto, pg_trgm, btree_gin, pg_stat_statements';
END $$;
4.2 PostgreSQL Configuration
File: database/postgresql.conf (Production optimizations)
# Connection Settings
max_connections = 200
superuser_reserved_connections = 3
# Memory Settings
shared_buffers = 4GB # 25% of RAM
effective_cache_size = 12GB # 75% of RAM
maintenance_work_mem = 1GB
work_mem = 20MB # max_connections * work_mem < RAM
temp_buffers = 16MB
# Checkpoint Settings
checkpoint_completion_target = 0.9
wal_buffers = 16MB
# WAL Settings (for PITR - Point In Time Recovery)
wal_level = replica
archive_mode = on
archive_command = 'test ! -f /backups/wal/%f && cp %p /backups/wal/%f'
max_wal_senders = 3
wal_keep_size = 1GB
# Query Planning
default_statistics_target = 100
random_page_cost = 1.1                  # For SSD
effective_io_concurrency = 200          # For SSD
# Logging
log_destination = 'stderr'
logging_collector = on
log_directory = '/var/log/postgresql'
log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'
log_rotation_age = 1d
log_rotation_size = 100MB
log_min_duration_statement = 1000 # Log queries slower than 1s
log_line_prefix = '%m [%p] %q%u@%d '
log_checkpoints = on
log_connections = on
log_disconnections = on
log_duration = off
log_lock_waits = on
log_statement = 'ddl' # Log DDL statements only
log_temp_files = 0
# Performance Monitoring
shared_preload_libraries = 'pg_stat_statements'
pg_stat_statements.track = all
pg_stat_statements.max = 10000
# Locale and Formatting
datestyle = 'iso, mdy'
timezone = 'UTC'
lc_messages = 'en_US.utf8'
lc_monetary = 'en_US.utf8'
lc_numeric = 'en_US.utf8'
lc_time = 'en_US.utf8'
default_text_search_config = 'pg_catalog.english'
4.3 Database Migrations
# Run Prisma migrations (creates tables, indexes, constraints)
docker-compose exec backend npx prisma migrate deploy
# Verify migration status
docker-compose exec backend npx prisma migrate status
# Generate Prisma Client
docker-compose exec backend npx prisma generate
# Seed initial data
docker-compose exec backend npm run seed:initial
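The commands above can be wrapped into a pre-deploy guard: `prisma migrate status` exits non-zero when the database is behind the migration history. A hedged sketch (the `backend` service name matches the Compose files in this guide; the exit-code behaviour is what recent Prisma releases document):

```shell
#!/bin/sh
# Fail a deploy pipeline early if the schema is behind the migration history.
if ! command -v docker-compose >/dev/null 2>&1; then
  result="skipped (docker-compose not available)"
elif docker-compose exec -T backend npx prisma migrate status >/dev/null 2>&1; then
  result="up to date"
else
  result="pending or failed migrations"
fi
echo "schema: $result"
```

Call it before routing traffic to new replicas; a non-"up to date" result means `prisma migrate deploy` still has work to do.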
5. REDIS SETUP
5.1 Redis Configuration
File: redis/redis.conf
# Network
bind 0.0.0.0
protected-mode yes
port 6379
tcp-backlog 511
timeout 0
tcp-keepalive 300
# Security
# NOTE: redis.conf does not expand environment variables. Inject the real
# password at deploy time (docker-compose.prod.yml passes --requirepass on the
# command line) or template this file before mounting it.
requirepass REPLACE_AT_DEPLOY_TIME
# Disable dangerous commands
rename-command FLUSHDB ""
rename-command FLUSHALL ""
rename-command KEYS ""
rename-command CONFIG ""
# Memory Management
maxmemory 2gb
maxmemory-policy allkeys-lru
maxmemory-samples 5
# Persistence (AOF for durability)
appendonly yes
appendfilename "appendonly.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
# RDB Snapshots (backup)
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
# Logging
loglevel notice
logfile "/var/log/redis/redis-server.log"
# Slow log
slowlog-log-slower-than 10000
slowlog-max-len 128
# Performance
databases 16
5.2 Redis Usage in ERP
Session Storage:
// backend/src/common/redis/redis.service.ts
import { Injectable } from '@nestjs/common';
import Redis from 'ioredis';

@Injectable()
export class RedisService {
  private client: Redis;
constructor() {
this.client = new Redis({
host: process.env.REDIS_HOST || 'redis',
      port: parseInt(process.env.REDIS_PORT ?? '6379', 10),
password: process.env.REDIS_PASSWORD,
db: 0, // Sessions DB
retryStrategy: (times) => Math.min(times * 50, 2000),
});
}
// Session management
async setSession(sessionId: string, data: any, ttl: number = 86400) {
await this.client.setex(
`session:${sessionId}`,
ttl,
JSON.stringify(data)
);
}
async getSession(sessionId: string): Promise<any | null> {
const data = await this.client.get(`session:${sessionId}`);
return data ? JSON.parse(data) : null;
}
async deleteSession(sessionId: string): Promise<void> {
await this.client.del(`session:${sessionId}`);
}
// Cache management
async cacheSet(key: string, value: any, ttl: number = 3600) {
await this.client.setex(
`cache:${key}`,
ttl,
JSON.stringify(value)
);
}
async cacheGet(key: string): Promise<any | null> {
const data = await this.client.get(`cache:${key}`);
return data ? JSON.parse(data) : null;
}
  // KEYS is disabled in redis.conf above, so scan incrementally instead
  async cacheInvalidate(pattern: string): Promise<void> {
    const stream = this.client.scanStream({ match: `cache:${pattern}*`, count: 100 });
    for await (const keys of stream) {
      if (keys.length > 0) {
        await this.client.del(...keys);
      }
    }
  }
}
6. ENVIRONMENT VARIABLES
6.1 .env.example
File: .env.example
# =====================================================
# ERP GENERIC - Environment Variables
# =====================================================
# Application
NODE_ENV=production
APP_NAME=ERP Generic
APP_URL=https://erp-generic.com
PORT=3000
VERSION=1.0.0
# Database (PostgreSQL 16)
POSTGRES_HOST=postgres
POSTGRES_PORT=5432
POSTGRES_DB=erp_generic
POSTGRES_USER=erp_user
POSTGRES_PASSWORD=CHANGE_ME_STRONG_PASSWORD_32_CHARS
# NOTE: ${VAR} references are NOT expanded inside .env files themselves;
# docker-compose.prod.yml assembles DATABASE_URL from the components above.
DATABASE_URL=postgresql://${POSTGRES_USER}:${POSTGRES_PASSWORD}@${POSTGRES_HOST}:${POSTGRES_PORT}/${POSTGRES_DB}?schema=public&connection_limit=20&pool_timeout=30
# Redis
REDIS_HOST=redis
REDIS_PORT=6379
REDIS_PASSWORD=CHANGE_ME_REDIS_PASSWORD
REDIS_URL=redis://default:${REDIS_PASSWORD}@${REDIS_HOST}:${REDIS_PORT}
# JWT Authentication
JWT_SECRET=CHANGE_ME_JWT_SECRET_64_CHARS_RANDOM_STRING_SECURE
JWT_EXPIRES_IN=24h
JWT_REFRESH_SECRET=CHANGE_ME_REFRESH_SECRET_64_CHARS
JWT_REFRESH_EXPIRES_IN=7d
# CORS
ALLOWED_ORIGINS=https://erp-generic.com,https://www.erp-generic.com,https://*.erp-generic.com
# Email (SMTP)
SMTP_HOST=smtp.sendgrid.net
SMTP_PORT=587
SMTP_SECURE=false
SMTP_USER=apikey
SMTP_PASSWORD=SENDGRID_API_KEY
SMTP_FROM_EMAIL=noreply@erp-generic.com
SMTP_FROM_NAME=ERP Generic
# AWS S3 (File Storage)
S3_BUCKET=erp-generic-prod
S3_REGION=us-east-1
AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
# Logging
LOG_LEVEL=info
LOG_FORMAT=json
SENTRY_DSN=https://your-sentry-dsn@sentry.io/project-id
# Rate Limiting
RATE_LIMIT_TTL=60
RATE_LIMIT_MAX=100
# Docker Registry
DOCKER_REGISTRY=ghcr.io/your-org
# Monitoring
PROMETHEUS_PORT=9090
GRAFANA_PORT=3001
# Multi-Tenancy
DEFAULT_TENANT_SCHEMA=tenant_default
ENABLE_TENANT_ISOLATION=true
6.2 Environment-Specific Variables
Development (.env.development):
NODE_ENV=development
LOG_LEVEL=debug
DATABASE_URL=postgresql://erp_user:dev_pass@localhost:5432/erp_generic_dev
REDIS_URL=redis://default:dev_redis@localhost:6379
JWT_SECRET=dev_jwt_secret_not_secure
ALLOWED_ORIGINS=http://localhost:5173,http://localhost:3000
QA (.env.qa):
NODE_ENV=qa
LOG_LEVEL=info
DATABASE_URL=postgresql://erp_user:qa_pass@postgres-qa:5432/erp_generic_qa
REDIS_URL=redis://default:qa_redis@redis-qa:6379
JWT_SECRET=qa_jwt_secret_secure_32_chars
ALLOWED_ORIGINS=https://qa.erp-generic.local
Staging (.env.staging):
NODE_ENV=staging
LOG_LEVEL=info
DATABASE_URL=postgresql://erp_user:staging_pass@postgres-staging:5432/erp_generic_staging
REDIS_URL=redis://default:staging_redis@redis-staging:6379
JWT_SECRET=staging_jwt_secret_secure_64_chars
ALLOWED_ORIGINS=https://staging.erp-generic.com
S3_BUCKET=erp-generic-staging
SENTRY_DSN=https://staging-sentry-dsn@sentry.io/project-id
Production (.env.production):
NODE_ENV=production
LOG_LEVEL=warn
DATABASE_URL=postgresql://erp_user:PROD_PASSWORD@postgres-prod:5432/erp_generic
REDIS_URL=redis://default:PROD_REDIS@redis-prod:6379
JWT_SECRET=PROD_JWT_SECRET_SECURE_64_CHARS_RANDOM
ALLOWED_ORIGINS=https://erp-generic.com,https://www.erp-generic.com
S3_BUCKET=erp-generic-prod
SENTRY_DSN=https://prod-sentry-dsn@sentry.io/project-id
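A frequent deployment failure is a `CHANGE_ME` placeholder surviving into production. A minimal pre-flight check, assuming the variables are already exported into the deploying shell (the placeholder convention matches `.env.example` above):

```shell
#!/bin/sh
# Flag required variables that are unset or still carry a placeholder value.
missing=""
for var in POSTGRES_PASSWORD REDIS_PASSWORD JWT_SECRET; do
  eval "val=\${$var:-}"
  case "$val" in
    ""|CHANGE_ME*) missing="$missing $var" ;;
  esac
done
if [ -n "$missing" ]; then
  echo "Unset or placeholder:$missing"
else
  echo "All required variables look set"
fi
```

Extend the list with the SMTP and S3 variables for environments that use them.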
6.3 Secret Management
Using AWS Secrets Manager:
# Fetch secrets from AWS Secrets Manager
aws secretsmanager get-secret-value --secret-id erp-generic/prod/database --query SecretString --output text > .env.secrets
# Or export the value into the shell that runs docker-compose
# (plain Compose has no built-in secrets templating syntax)
export POSTGRES_PASSWORD=$(aws secretsmanager get-secret-value \
  --secret-id erp-generic/prod/database \
  --query 'SecretString' --output text | jq -r '.password')
Using HashiCorp Vault:
# Fetch secrets from Vault
vault kv get -field=password secret/erp-generic/prod/database
7. MULTI-ENVIRONMENT STRATEGY
7.1 Environment Pipeline
Development ──▶ QA ──▶ Staging ──▶ Production
  manual      auto     manual       manual
  deploy      deploy   deploy       deploy
7.2 Deployment Matrix
| Characteristic | Development | QA | Staging | Production |
|---|---|---|---|---|
| Trigger | Manual | Auto (push to develop) | Manual (approved PR) | Manual (release tag) |
| Database | Local PostgreSQL | Anonymized prod | Prod clone | Production |
| Redis | Local Redis | Dedicated QA | Dedicated Staging | Production cluster |
| Replicas | 1 | 1 | 2 | 3+ |
| Resources | Minimal | Low | Medium | High |
| Monitoring | Basic logs | Prometheus + Grafana | Full stack | Full stack + alerts |
| Backups | None | Daily | Hourly | Every 4 hours |
| SSL | Self-signed | Let's Encrypt | Let's Encrypt | Commercial cert |
| Domain | localhost | qa.local | staging.com | erp-generic.com |
7.3 Promotion Criteria
Development → QA:
- All unit tests passing (>80% coverage)
- Code review approved
- No TypeScript errors
- Linter passing
QA → Staging:
- All integration tests passing
- E2E tests passing (critical flows)
- Manual QA sign-off
- Performance tests passing
- Security scan passing (no critical vulnerabilities)
Staging → Production:
- Staging validation complete (48 hours soak test)
- Load testing passed (>1000 req/s)
- Backup verified
- Rollback plan documented
- Change approval from Product Owner + CTO
- Communication sent to stakeholders
8. DEPLOYMENT STEPS
8.1 First-Time Deployment (Fresh Install)
# Step 1: Clone repository
git clone https://github.com/your-org/erp-generic.git
cd erp-generic
# Step 2: Configure environment
cp .env.example .env.production
nano .env.production # Edit with real values
# Step 3: Generate secrets (hex output avoids '/' and '+', which would break the DATABASE_URL)
JWT_SECRET=$(openssl rand -hex 32)
POSTGRES_PASSWORD=$(openssl rand -hex 24)
REDIS_PASSWORD=$(openssl rand -hex 24)
# Update .env.production with generated secrets
# Step 4: Start infrastructure services first
docker-compose -f docker-compose.prod.yml up -d postgres redis
# Wait for databases to be healthy
docker-compose -f docker-compose.prod.yml ps
# Step 5: Database initialization
# (scripts in /docker-entrypoint-initdb.d run automatically on first start with
# an empty volume; run manually only if the volume already existed)
docker-compose -f docker-compose.prod.yml exec postgres psql -U "${POSTGRES_USER}" -d "${POSTGRES_DB}" -f /docker-entrypoint-initdb.d/01-init-schemas.sql
# Step 6: Run Prisma migrations
docker-compose -f docker-compose.prod.yml run --rm backend npx prisma migrate deploy
# Step 7: Seed initial data
docker-compose -f docker-compose.prod.yml run --rm backend npm run seed:initial
# Step 8: Start application services
docker-compose -f docker-compose.prod.yml up -d backend frontend nginx
# Step 9: Verify health
./scripts/health-check.sh
# Step 10: Check logs
docker-compose -f docker-compose.prod.yml logs -f backend
# Step 11: Access application
curl -I https://erp-generic.com
# Expected: HTTP/1.1 200 OK
Total time: 15-20 minutes
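The `./scripts/health-check.sh` referenced above is not reproduced in this guide; a minimal sketch of what it might contain (the endpoints mirror the ports in section 1.2, and `curl` is assumed installed — treat this as a starting point, not the canonical script):

```shell
#!/bin/sh
# Print the HTTP status of each service endpoint; 000 means unreachable.
check() {
  name=$1; url=$2
  code=$(curl -s -o /dev/null -w '%{http_code}' --max-time 5 "$url" 2>/dev/null)
  [ -n "$code" ] || code=000
  echo "$name: $code"
}
check backend  http://localhost:3000/health
check frontend http://localhost/
```

Anything other than a 200 for the backend `/health` endpoint should block the deploy from proceeding.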
8.2 Update Deployment (Rolling Update)
# Step 1: Backup database (safety first!)
./scripts/backup-postgres.sh
# Step 2: Pull latest changes
git fetch origin
git checkout v1.2.0 # Or specific release tag
# Step 3: Update environment variables (if needed)
diff .env.example .env.production
# Add any new variables
# Step 4: Build new images
docker-compose -f docker-compose.prod.yml build backend frontend
# Step 5: Tag images with version
docker tag erp-generic-backend:latest erp-generic-backend:v1.2.0
docker tag erp-generic-frontend:latest erp-generic-frontend:v1.2.0
# Step 6: Run database migrations (zero-downtime)
docker-compose -f docker-compose.prod.yml run --rm backend npx prisma migrate deploy
# Step 7: Rolling update backend (scale up with the new image, then back down)
# Note: the standalone "scale" subcommand is deprecated; use "up --scale"
docker-compose -f docker-compose.prod.yml up -d --no-deps --scale backend=4 backend
# Wait 30s for health checks on the new replicas
sleep 30
docker-compose -f docker-compose.prod.yml up -d --no-deps --scale backend=3 backend
# Step 8: Update frontend
docker-compose -f docker-compose.prod.yml up -d --no-deps frontend
# Step 9: Verify health
./scripts/health-check.sh
# Step 10: Run smoke tests
npm run test:smoke:production
# Step 11: Monitor for 15 minutes
docker-compose -f docker-compose.prod.yml logs -f backend | grep ERROR
Total time: 5-10 minutes
8.3 Hotfix Deployment (Emergency)
# Step 1: Create hotfix branch
git checkout -b hotfix/critical-security-fix
# Step 2: Implement fix + tests
# ... make changes ...
npm run test
# Step 3: Fast-track deployment (skip QA for critical issues)
git commit -m "hotfix: Fix critical security vulnerability"
git push origin hotfix/critical-security-fix
# Step 4: Deploy directly to production (with approval)
./scripts/deploy-hotfix.sh hotfix/critical-security-fix
# Step 5: Monitor closely
tail -f /var/log/erp-generic/backend.log
# Step 6: Communicate to stakeholders
# Send email notification about hotfix deployment
Total time: 5-15 minutes (depending on complexity)
9. ZERO-DOWNTIME DEPLOYMENT
9.1 Blue-Green Deployment Strategy
┌─────────────────────────────────────────────────────┐
│ Load Balancer (Nginx) │
│ (Routes traffic to active env) │
└─────┬──────────────────────────────────────┬────────┘
│ │
│ 100% traffic │ 0% traffic
↓ ↓
┌──────────────┐ ┌──────────────┐
│ Blue (v1.1) │ │ Green (v1.2) │
│ ACTIVE │ │ STANDBY │
│ │ │ │
│ Backend x3 │ │ Backend x3 │
│ Frontend │ │ Frontend │
│ Database │ │ Database │
└──────────────┘ └──────────────┘
After testing Green:
│ 0% traffic │ 100% traffic
↓ ↓
┌──────────────┐ ┌──────────────┐
│ Blue (v1.1) │ │ Green (v1.2) │
│ STANDBY │ ← Rollback │ ACTIVE │
└──────────────┘ └──────────────┘
9.2 Implementation
File: scripts/deploy-blue-green.sh
#!/bin/bash
set -euo pipefail
CURRENT_ENV=${1:-blue}
NEW_VERSION=${2:-latest}
echo "===== Blue-Green Deployment ====="
echo "Current environment: $CURRENT_ENV"
echo "New version: $NEW_VERSION"
# Determine new environment
if [ "$CURRENT_ENV" = "blue" ]; then
NEW_ENV="green"
else
NEW_ENV="blue"
fi
echo "Deploying to: $NEW_ENV"
# Step 1: Deploy to standby environment
echo "1. Deploying to $NEW_ENV environment..."
docker-compose -f docker-compose.$NEW_ENV.yml up -d --build
# Step 2: Wait for health checks
echo "2. Waiting for health checks..."
sleep 30
for i in {1..10}; do
if ./scripts/health-check.sh $NEW_ENV; then
echo "Health check passed!"
break
fi
if [ $i -eq 10 ]; then
echo "Health check failed after 10 attempts. Aborting deployment."
exit 1
fi
echo "Health check failed. Retrying in 10s... ($i/10)"
sleep 10
done
# Step 3: Run smoke tests
echo "3. Running smoke tests on $NEW_ENV..."
# Note: with "set -e" a separate "$?" check never runs; test the command directly
if ! npm run test:smoke -- --env="$NEW_ENV"; then
  echo "Smoke tests failed. Aborting deployment."
  exit 1
fi
# Step 4: Switch traffic to new environment
echo "4. Switching traffic to $NEW_ENV..."
# Update nginx config to point to new environment
sed -i "s/upstream backend_$CURRENT_ENV/upstream backend_$NEW_ENV/g" /etc/nginx/nginx.conf
nginx -s reload
echo "Traffic switched to $NEW_ENV"
# Step 5: Monitor for 5 minutes
echo "5. Monitoring $NEW_ENV for 5 minutes..."
sleep 300
ERROR_COUNT=$(docker-compose -f docker-compose.$NEW_ENV.yml logs backend | grep -c ERROR || true)
if [ "${ERROR_COUNT:-0}" -gt 10 ]; then
echo "Too many errors detected ($ERROR_COUNT). Rolling back..."
./scripts/rollback.sh $CURRENT_ENV
exit 1
fi
# Step 6: Shutdown old environment
echo "6. Shutting down old environment $CURRENT_ENV..."
docker-compose -f docker-compose.$CURRENT_ENV.yml down
echo "===== Deployment Complete ====="
echo "Active environment: $NEW_ENV"
echo "Version: $NEW_VERSION"
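Step 4's in-place `sed` edit of `nginx.conf` works, but an include + symlink swap is easier to reason about and to revert. A sketch, under the assumption that `nginx.conf` contains `include /etc/nginx/upstreams/active.conf;` and that `blue.conf`/`green.conf` each define the same `upstream backend { ... }` block (paths are assumptions):

```shell
# Switch the active environment by repointing a symlink instead of editing nginx.conf.
switch_env() {
  local new_env=$1 dir=${2:-/etc/nginx/upstreams}
  ln -sfn "$dir/${new_env}.conf" "$dir/active.conf"   # atomic pointer swap
  nginx -t && nginx -s reload                          # validate config, then reload
}

# Usage: switch_env green
# Rollback is the same call with the old environment: switch_env blue
```

Because the swap is a single `ln -sfn`, there is no window where the config names a half-renamed upstream, and rolling back never requires reconstructing the original file.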
9.3 Health Check Endpoint
// backend/src/health/health.controller.ts
import { Controller, Get } from '@nestjs/common';
import { HealthCheck, HealthCheckService, PrismaHealthIndicator, MemoryHealthIndicator } from '@nestjs/terminus';
import { RedisHealthIndicator } from './redis.health';
@Controller('health')
export class HealthController {
constructor(
private health: HealthCheckService,
private db: PrismaHealthIndicator,
private redis: RedisHealthIndicator,
private memory: MemoryHealthIndicator,
) {}
@Get()
@HealthCheck()
check() {
return this.health.check([
// Database check
() => this.db.pingCheck('database', { timeout: 3000 }),
// Redis check
() => this.redis.isHealthy('redis'),
// Memory check (heap should not exceed 150MB)
() => this.memory.checkHeap('memory_heap', 150 * 1024 * 1024),
// Memory check (RSS should not exceed 300MB)
() => this.memory.checkRSS('memory_rss', 300 * 1024 * 1024),
]);
}
@Get('live')
liveness() {
// Simple liveness probe for Kubernetes
return { status: 'ok', timestamp: new Date().toISOString() };
}
@Get('ready')
@HealthCheck()
readiness() {
// Readiness probe - checks all dependencies
return this.health.check([
() => this.db.pingCheck('database'),
() => this.redis.isHealthy('redis'),
]);
}
}
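From the deploy host, the three probes can be exercised with `curl`; the helper below parses the Terminus JSON payload. The host/port default is an assumption about where the backend listens:

```shell
# health-probe.sh -- exercise the Terminus endpoints from the deploy host.
# The host/port default is an assumption; match it to your backend service.
HEALTH_URL=${HEALTH_URL:-http://localhost:3000/health}

# Returns 0 when a Terminus payload on stdin reports "status": "ok"
is_healthy() {
  python3 -c 'import json,sys; sys.exit(0 if json.load(sys.stdin).get("status") == "ok" else 1)'
}

# Liveness answers even when dependencies are degraded; readiness returns 503 otherwise:
#   curl -fsS "$HEALTH_URL/live"
#   curl -fsS "$HEALTH_URL/ready" | is_healthy && echo "backend ready"
```

Terminus returns `"status": "ok"` at the top level when every indicator passes, so the helper doubles as the core of a scripted health gate.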
10. ROLLBACK PROCEDURES
10.1 Automatic Rollback
Trigger conditions:
- Health checks failing for >2 minutes
- Error rate >5% for >5 minutes
- Database migration failure
- Critical exception thrown
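The first two triggers can be watched with a small loop on the deploy host. In the sketch below the thresholds mirror the list above; the 30s poll interval and the idea of counting errors from your metrics source are assumptions:

```shell
#!/bin/bash
# rollback-watch.sh -- sketch of watching the health and error-rate triggers.
# Thresholds mirror the trigger list; poll interval and metric source are assumptions.
set -euo pipefail

# Returns 0 when errors/total exceeds the threshold percentage (default 5%)
error_rate_breached() {
  local errors=$1 total=$2 threshold=${3:-5}
  awk -v e="$errors" -v t="$total" -v th="$threshold" \
    'BEGIN { exit (t > 0 && e * 100 / t > th) ? 0 : 1 }'
}

watch_and_rollback() {
  local fails=0
  while true; do
    if ./scripts/health-check.sh >/dev/null 2>&1; then
      fails=0
    else
      fails=$((fails + 1))
    fi
    # Health failing for >2 minutes at a 30s poll = 4 consecutive misses
    if [ "$fails" -ge 4 ]; then
      ./scripts/rollback.sh previous
      return
    fi
    sleep 30
  done
}
```

Feeding `error_rate_breached` from real request counts (e.g. parsed access logs or an APM query) and requiring the breach to persist for the full 5-minute window keeps the automation from rolling back on a transient spike.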
File: scripts/rollback.sh
#!/bin/bash
set -euo pipefail
ROLLBACK_TO_VERSION=${1:-previous}
echo "===== EMERGENCY ROLLBACK ====="
echo "Rolling back to: $ROLLBACK_TO_VERSION"
# Step 1: Identify current and previous versions
CURRENT_VERSION=$(docker inspect erp-backend --format='{{.Config.Labels.version}}')
echo "Current version: $CURRENT_VERSION"
if [ "$ROLLBACK_TO_VERSION" = "previous" ]; then
ROLLBACK_TO_VERSION=$(git describe --tags --abbrev=0 HEAD^)
fi
echo "Rollback to version: $ROLLBACK_TO_VERSION"
# Step 2: Stop current containers
echo "Stopping current containers..."
docker-compose -f docker-compose.prod.yml stop backend frontend
# Step 3: Restore database backup (if needed)
read -p "Restore database backup? (y/N) " -n 1 -r
echo
if [[ $REPLY =~ ^[Yy]$ ]]; then
echo "Restoring database..."
LATEST_BACKUP=$(ls -t /backups/postgres/full_*.dump | head -1)
./scripts/restore-postgres.sh --backup="$LATEST_BACKUP" --no-prompt
fi
# Step 4: Checkout previous version
echo "Checking out version $ROLLBACK_TO_VERSION..."
git fetch --tags
git checkout tags/$ROLLBACK_TO_VERSION
# Step 5: Start services with previous version
echo "Starting services with version $ROLLBACK_TO_VERSION..."
docker-compose -f docker-compose.prod.yml up -d backend frontend
# Step 6: Verify health
echo "Waiting for services to start..."
sleep 30
# Note: with "set -e" a separate "$?" check never runs; test the command directly
if ./scripts/health-check.sh; then
echo "===== ROLLBACK SUCCESSFUL ====="
echo "Rolled back from $CURRENT_VERSION to $ROLLBACK_TO_VERSION"
# Notify team
curl -X POST https://hooks.slack.com/services/YOUR/SLACK/WEBHOOK \
-H 'Content-Type: application/json' \
-d "{\"text\": \"🚨 EMERGENCY ROLLBACK: $CURRENT_VERSION → $ROLLBACK_TO_VERSION\"}"
else
echo "===== ROLLBACK FAILED ====="
echo "Manual intervention required!"
exit 1
fi
10.2 Manual Rollback Steps
# 1. Identify issue and decide to rollback
# Check logs, monitoring dashboards, alerts
# 2. Execute rollback script
./scripts/rollback.sh v1.1.0
# 3. If automated rollback fails, manual steps:
# Stop services
docker-compose -f docker-compose.prod.yml stop
# Restore database from backup
pg_restore -h localhost -U erp_user -d erp_generic /backups/postgres/full_20251124_020000.dump
# Checkout previous stable version
git checkout tags/v1.1.0
# Start services
docker-compose -f docker-compose.prod.yml up -d
# 4. Verify system is stable
./scripts/health-check.sh
npm run test:smoke
# 5. Communicate to stakeholders
# Post-mortem within 48 hours
10.3 Database Rollback
Forward-only migrations (recommended):
# Never use "prisma migrate reset" in production!
# Instead, create a new migration that undoes the change: generate it locally
# with "migrate dev", then ship it to production via "prisma migrate deploy"
npx prisma migrate dev --name revert_feature_x
# Example: If you added a column, create migration to drop it
# migrations/20251124_revert_feature_x/migration.sql:
ALTER TABLE core.products DROP COLUMN IF EXISTS new_field;
Emergency database rollback (last resort):
# 1. Stop application
docker-compose stop backend
# 2. Create safety backup
pg_dump -h localhost -U erp_user -Fc erp_generic > /backups/pre-rollback-$(date +%Y%m%d_%H%M%S).dump
# 3. Restore from backup (point-in-time before bad deployment)
pg_restore -h localhost -U erp_user -d erp_generic -c /backups/postgres/full_20251124_020000.dump
# 4. Verify data integrity
psql -h localhost -U erp_user -d erp_generic -c "SELECT COUNT(*) FROM auth.users;"
# 5. Restart application
docker-compose start backend
11. TROUBLESHOOTING
11.1 Common Issues
Issue: Backend fails to start
# Check logs
docker-compose logs backend
# Common causes:
# 1. Database connection failed
docker-compose exec postgres pg_isready
# Fix: Verify DATABASE_URL, check PostgreSQL is running
# 2. Redis connection failed
docker-compose exec redis redis-cli ping
# Fix: Verify REDIS_URL, check Redis is running
# 3. Missing environment variables
docker-compose exec backend env | grep DATABASE_URL
# Fix: Update .env file
# 4. Port already in use
sudo lsof -i :3000
# Fix: Stop conflicting process or change PORT
Issue: Database migrations fail
# Check migration status
npx prisma migrate status
# Mark a failed migration as rolled back (after manually undoing its changes)
npx prisma migrate resolve --rolled-back "20251124_migration_name"
# Mark a migration as already applied (e.g. it was run manually out of band)
npx prisma migrate resolve --applied "20251124_migration_name"
# Production fix: Create compensating migration
npx prisma migrate dev --name fix_migration_issue
Issue: High memory usage
# Check container stats
docker stats
# If PostgreSQL using too much memory:
# Reduce shared_buffers in postgresql.conf
# Restart: docker-compose restart postgres
# If Node.js using too much memory:
# Add heap limit: NODE_OPTIONS="--max-old-space-size=2048"
# Restart: docker-compose restart backend
Issue: Slow API responses
# Check database query performance
docker-compose exec postgres psql -U erp_user -d erp_generic
# View slow queries
SELECT query, mean_exec_time, calls
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 10;
# View active connections
SELECT count(*) FROM pg_stat_activity;
# View locks
SELECT * FROM pg_locks WHERE NOT granted;
# Fix: Add indexes, optimize queries, increase resources
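Note that the `pg_stat_statements` view queried above only returns rows once the extension is both preloaded and created. A sketch of enabling it, wrapped in a function (the service, user, and database names match the compose setup above; the `COMPOSE` override exists so the command can be swapped out, e.g. for `docker compose`):

```shell
# Enable pg_stat_statements: preload the library (needs a restart), then create it.
enable_pg_stat_statements() {
  local compose=${COMPOSE:-"docker-compose -f docker-compose.prod.yml"}
  # shared_preload_libraries only takes effect after a PostgreSQL restart
  $compose exec postgres psql -U erp_user -d erp_generic \
    -c "ALTER SYSTEM SET shared_preload_libraries = 'pg_stat_statements';"
  $compose restart postgres
  $compose exec postgres psql -U erp_user -d erp_generic \
    -c "CREATE EXTENSION IF NOT EXISTS pg_stat_statements;"
}

# Verify afterwards inside psql:
#   SHOW shared_preload_libraries;  -- should list pg_stat_statements
```

If your image already sets `shared_preload_libraries` via config or command-line flags, only the `CREATE EXTENSION` step is needed.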
11.2 Disaster Recovery Checklist
Database Corruption:
- Stop application immediately
- Identify corruption scope (single table vs full database)
- Restore from latest backup
- Replay WAL logs for point-in-time recovery
- Verify data integrity
- Restart application
Complete System Failure:
- Assess damage (hardware failure, network outage, etc.)
- Provision new infrastructure (if needed)
- Restore database from backup (cloud or offsite)
- Deploy latest stable version
- Restore Redis data (if needed)
- Verify system health
- Resume operations
- Post-mortem analysis
Security Breach:
- Isolate compromised system immediately
- Rotate all credentials (database, API keys, JWT secrets)
- Audit access logs
- Patch vulnerability
- Deploy patched version
- Notify affected users (if data leaked)
- Conduct full security audit
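Step 2 of the breach checklist (rotate all credentials) can be partially scripted. A sketch, in which the variable names are assumed to match `.env.production` from the first-time deployment above; external API keys still need manual rotation, and the new database password must also be applied with `ALTER USER` before redeploying:

```shell
#!/bin/bash
# rotate-credentials.sh -- sketch of the credential-rotation step of the checklist.
# Variable names are assumed to match .env.production; rotate external API keys by hand.
set -euo pipefail

# Replace KEY=... with KEY=<value> on stdin
# (hex/base64 values contain no metacharacters for the "|" sed delimiter)
rotate_line() {
  local key=$1 value=$2
  sed "s|^${key}=.*|${key}=${value}|"
}

rotate_env_file() {
  local file=${1:-.env.production}
  cp "$file" "${file}.bak.$(date +%s)"   # keep a copy of the compromised file for the audit
  rotate_line JWT_SECRET "$(openssl rand -hex 32)" < "$file" \
    | rotate_line POSTGRES_PASSWORD "$(openssl rand -base64 32)" \
    | rotate_line REDIS_PASSWORD "$(openssl rand -base64 32)" > "${file}.new"
  mv "${file}.new" "$file"
}

# After rotating: apply the new password in PostgreSQL (ALTER USER erp_user ...),
# update Redis requirepass, then redeploy all services so they pick up the new env.
```

Rotating `JWT_SECRET` invalidates every outstanding token, which is exactly the intent during a breach: all sessions are forced to re-authenticate.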
12. REFERENCES
Internal Documentation:
- README.md - DevOps overview
- MONITORING-OBSERVABILITY.md - Monitoring setup
- BACKUP-RECOVERY.md - Backup procedures
- SECURITY-HARDENING.md - Security hardening
- CI-CD-PIPELINE.md - CI/CD pipeline
- Database Schemas - Database DDL
- ADR-001: Technology Stack
- ADR-003: Multi-Tenancy
External Resources:
- Docker Documentation
- Docker Compose Documentation
- PostgreSQL 16 Documentation
- Redis Documentation
- NestJS Documentation
- Prisma Documentation
- 12-Factor App
- Blue-Green Deployment Pattern
APPENDIX A: Quick Reference Commands
# Health check
./scripts/health-check.sh
# View logs
docker-compose logs -f backend
docker-compose logs -f frontend
docker-compose logs -f postgres
docker-compose logs -f redis
# Database console
docker-compose exec postgres psql -U erp_user -d erp_generic
# Redis console
docker-compose exec redis redis-cli -a $REDIS_PASSWORD
# Run migrations
docker-compose exec backend npx prisma migrate deploy
# Backup database
./scripts/backup-postgres.sh
# Restore database
./scripts/restore-postgres.sh --backup=full_20251124_020000.dump
# Rollback deployment
./scripts/rollback.sh v1.1.0
# View container stats
docker stats
# Restart service
docker-compose restart backend
# Scale service
docker-compose up -d --scale backend=5
Document: DEPLOYMENT-GUIDE.md Version: 1.0 Last Updated: 2025-11-24 Next Review: Monthly