trading-platform/orchestration/tareas/_archive/2026-01/TASK-2026-01-25-ML-DATA-MIGRATION/03-PLANEACION.md

03-PLANEACION - ML Data Migration & Model Training

Date: 2026-01-25

Phase: PLANEACION (P)

Status: COMPLETED


1. Execution Plan

Phase 1: Python Environment Setup

  1. Create the venv in WSL: ~/venvs/data-service/
  2. Install dependencies: aiohttp, asyncpg, pandas, numpy, python-dotenv
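A quick way to confirm the install step is a stdlib-only check script; a minimal sketch (note that python-dotenv is imported as `dotenv`, and the script name is illustrative):

```python
# check_env.py (sketch) - verify the data-service venv has its dependencies.
import importlib.util
import sys

# Mirrors the dependency list above; python-dotenv's import name is "dotenv".
REQUIRED = ["aiohttp", "asyncpg", "pandas", "numpy", "dotenv"]

def missing_modules(modules):
    """Return the subset of modules that cannot be found in this environment."""
    return [m for m in modules if importlib.util.find_spec(m) is None]

if __name__ == "__main__":
    missing = missing_modules(REQUIRED)
    if missing:
        sys.exit(f"Missing dependencies: {', '.join(missing)}")
    print("venv OK")
```

Running it inside the venv should print `venv OK` once step 2 has completed.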

Phase 2: Data Loading

  1. Create the fetch_polygon_data.py script
  2. Configure the Polygon API key
  3. Run the load for 6 tickers x 365 days
  4. Validate the inserted data
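The fetch script can be sketched as two pure helpers plus an async loop; the endpoint shape follows Polygon's daily-aggregates REST path, while the target table and column names below are illustrative assumptions, not the project's actual schema:

```python
# fetch_polygon_data.py (sketch) - helpers for pulling daily bars from Polygon.
import datetime

POLYGON_BASE = "https://api.polygon.io/v2/aggs/ticker"

def agg_url(ticker, start, end):
    """Daily-aggregates URL for one ticker over [start, end] (YYYY-MM-DD)."""
    return f"{POLYGON_BASE}/{ticker}/range/1/day/{start}/{end}"

def bar_to_row(ticker, agg):
    """Map one Polygon aggregate (o/h/l/c/v/t keys, t in epoch ms) to a DB row."""
    ts = datetime.datetime.fromtimestamp(agg["t"] / 1000, tz=datetime.timezone.utc)
    return (ticker, ts, agg["o"], agg["h"], agg["l"], agg["c"], agg["v"])

# The full script would iterate the 6 tickers, GET each agg_url with aiohttp
# (apiKey passed as a query parameter), bulk-insert the mapped rows with
# asyncpg's executemany, and validate the inserted row counts (step 4).
```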

Phase 3: ML Engine Migration

  1. Create apps/ml-engine/src/data/database.py
  2. Implement PostgreSQLConnection with methods:
    • get_ticker_data()
    • execute_query() with MySQL→PostgreSQL translation
  3. Update config/database.yaml
  4. Create .env with credentials
Phase 4: Model Training

  1. Install ML dependencies: xgboost, scikit-learn, joblib
  2. Run train_attention_models.py
  3. Validate model metrics
  4. Generate the training report
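The metric-validation step could use a helper like the following; directional accuracy and its role as an acceptance check are illustrative assumptions here, not the project's defined metrics:

```python
# Sketch of one validation metric for step 3: directional accuracy of a
# price-movement model (fraction of samples where prediction and actual
# land on the same side of zero; zeros count as the non-positive side).
def directional_accuracy(y_true, y_pred):
    if len(y_true) != len(y_pred) or not y_true:
        raise ValueError("need two equal-length, non-empty sequences")
    hits = sum((t > 0) == (p > 0) for t, p in zip(y_true, y_pred))
    return hits / len(y_true)
```

A training report would compute this per model on a held-out window and flag models below whatever threshold the team sets.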

Phase 5: Documentation

  1. Update DATABASE_INVENTORY.yml
  2. Update ML_INVENTORY.yml
  3. Create the TASK folder with CAPVED

2. Deliverables Estimate

| Deliverable           | Complexity | Files |
|-----------------------|------------|-------|
| fetch_polygon_data.py | MEDIUM     | 1     |
| database.py           | HIGH       | 1     |
| Config files          | LOW        | 3     |
| 12 models             | HIGH       | 36    |
| Documentation         | MEDIUM     | 4     |

3. Execution Order

[1] Python Env → [2] Data → [3] Migration → [4] Training → [5] Docs
       ↓             ↓            ↓              ↓            ↓
    venv OK      469K bars   database.py    12 models    TASK folder