--- id: "PIPELINE-ORQUESTACION" title: "Pipeline de Orquestaci\u00f3n - Conectando los Modelos" type: "Documentation" project: "trading-platform" version: "1.0.0" updated_date: "2026-01-04" --- # Pipeline de Orquestaci\u00f3n - Conectando los Modelos **Versi\u00f3n:** 1.0.0 **Fecha:** 2025-12-05 **M\u00f3dulo:** OQI-006-ml-signals **Autor:** Trading Strategist - OrbiQuant IA --- ## Tabla de Contenidos 1. [Visi\u00f3n General](#visi\u00f3n-general) 2. [Arquitectura del Pipeline](#arquitectura-del-pipeline) 3. [Flujo de Datos](#flujo-de-datos) 4. [Dependencias entre Modelos](#dependencias-entre-modelos) 5. [StrategyOrchestrator](#strategyorchestrator) 6. [Escenarios de Uso](#escenarios-de-uso) 7. [Optimizaci\u00f3n y Performance](#optimizaci\u00f3n-y-performance) 8. [Monitoring y Alertas](#monitoring-y-alertas) --- ## Visi\u00f3n General El pipeline de orquestaci\u00f3n coordina m\u00faltiples modelos ML para generar se\u00f1ales de trading coherentes y de alta calidad. ### Principios de Dise\u00f1o 1. **Modular**: Cada modelo es independiente y reemplazable 2. **Secuencial**: Modelos se ejecutan en orden l\u00f3gico de dependencias 3. **Fault-Tolerant**: Fallo de un modelo no detiene el pipeline completo 4. **Cacheable**: Resultados intermedios se pueden cachear para eficiencia 5. 
5. **Observable**: every stage is monitored and logged

---

## Pipeline Architecture

### High-Level Diagram

```
┌─────────────────────────────────────────────┐
│            ORBIQUANT ML PIPELINE            │
└─────────────────────────────────────────────┘

INPUT: Market Data (OHLCV)
            │
            ▼
   MarketDataFetcher
            │
            ▼
   Feature Engineering (50+ features)
            │
    ┌───────┼────────────────┐
    ▼       ▼                ▼
AMDDetector  LiquidityHunter  OrderFlowAnalyzer
(Phase)      (Sweeps)         (Institutional)
    │       │                │
    └───────┼────────────────┘
            ▼
   Feature Union (70+ features)
            │
       ┌────┴─────┐
       ▼          ▼
RangePredictor  TPSLClassifier
(ΔH/ΔL)         (P[TP first])
       │          │
       └────┬─────┘
            ▼
   StrategyOrchestrator (Meta-Model)
            │
            ▼
   Signal Validator
            │
            ▼
OUTPUT: Trading Signal (BUY/SELL/HOLD)
   - Action
   - Entry Price
   - Stop Loss
   - Take Profit
   - Position Size
   - Confidence
   - Reasoning
```

---

## Data Flow

### The Full Pipeline, Step by Step

```python
import time


class MLPipeline:
    """End-to-end ML pipeline."""

    def __init__(self, models, config=None):
        self.models = models
        self.config = config or {}
        self.cache = {}
        self.logger = \
            setup_logger()

    async def execute(self, symbol: str, timeframe: str = '5m'):
        """Run the full pipeline."""
        pipeline_id = f"{symbol}_{timeframe}_{int(time.time())}"
        self.logger.info(f"Pipeline {pipeline_id} started")

        try:
            # STEP 1: Fetch market data
            start_time = time.time()
            market_data = await self._fetch_market_data(symbol, timeframe)
            self.logger.info(f"Step 1: Market data fetched ({time.time() - start_time:.2f}s)")

            # STEP 2: Feature engineering
            start_time = time.time()
            base_features = self._engineer_base_features(market_data)
            self.logger.info(f"Step 2: Base features engineered ({time.time() - start_time:.2f}s)")

            # STEP 3: AMDDetector
            start_time = time.time()
            amd_prediction = self.models['amd_detector'].predict(base_features)
            self.cache['amd_prediction'] = amd_prediction
            self.logger.info(
                f"Step 3: AMD phase detected: {amd_prediction['phase']} "
                f"({time.time() - start_time:.2f}s)"
            )

            # Early exit on manipulation or neutral phases
            if amd_prediction['phase'] in ['manipulation', 'neutral']:
                return self._create_hold_signal(
                    reason=f"AMD phase is {amd_prediction['phase']}"
                )

            # STEP 4: LiquidityHunter
            start_time = time.time()
            liquidity_prediction = self.models['liquidity_hunter'].predict(market_data)
            self.cache['liquidity_prediction'] = liquidity_prediction
            self.logger.info(f"Step 4: Liquidity analyzed ({time.time() - start_time:.2f}s)")

            # STEP 5: OrderFlowAnalyzer (optional)
            orderflow_prediction = None
            if 'order_flow_analyzer' in self.models:
                start_time = time.time()
                orderflow_prediction = self.models['order_flow_analyzer'].predict(market_data)
                self.cache['orderflow_prediction'] = orderflow_prediction
                self.logger.info(f"Step 5: Order flow analyzed ({time.time() - start_time:.2f}s)")

            # STEP 6: Enhanced features
            start_time = time.time()
            enhanced_features = self._combine_features(
                base_features, amd_prediction, liquidity_prediction, orderflow_prediction
            )
            self.logger.info(f"Step 6: Enhanced features created ({time.time() - start_time:.2f}s)")

            # STEP 7: RangePredictor
            start_time = \
                time.time()
            range_prediction = self.models['range_predictor'].predict(
                enhanced_features,
                current_price=market_data['close'].iloc[-1]
            )
            self.cache['range_prediction'] = range_prediction
            self.logger.info(f"Step 7: Range predicted ({time.time() - start_time:.2f}s)")

            # STEP 8: TPSLClassifier with stacking
            start_time = time.time()
            stacked_features = self._stack_features(enhanced_features, range_prediction)
            tpsl_prediction = self.models['tpsl_classifier'].predict(
                stacked_features,
                current_price=market_data['close'].iloc[-1]
            )
            self.cache['tpsl_prediction'] = tpsl_prediction
            self.logger.info(f"Step 8: TP/SL probabilities calculated ({time.time() - start_time:.2f}s)")

            # STEP 9: StrategyOrchestrator
            start_time = time.time()
            signal = self.models['orchestrator'].generate_signal(
                market_data=market_data,
                current_price=market_data['close'].iloc[-1],
                predictions={
                    'amd': amd_prediction,
                    'liquidity': liquidity_prediction,
                    'orderflow': orderflow_prediction,
                    'range': range_prediction,
                    'tpsl': tpsl_prediction
                }
            )
            self.logger.info(f"Step 9: Signal generated: {signal['action']} ({time.time() - start_time:.2f}s)")

            # STEP 10: Validation
            start_time = time.time()
            validated_signal = self._validate_signal(signal, market_data)
            self.logger.info(f"Step 10: Signal validated ({time.time() - start_time:.2f}s)")

            self.logger.info(f"Pipeline {pipeline_id} completed successfully")
            return validated_signal

        except Exception as e:
            self.logger.error(f"Pipeline {pipeline_id} failed: {str(e)}")
            return self._create_error_signal(str(e))

    def _combine_features(self, base, amd, liquidity, orderflow):
        """Combine features from different sources."""
        combined = base.copy()

        # AMD features
        combined['phase_encoded'] = self._encode_phase(amd['phase'])
        combined['phase_confidence'] = amd['confidence']
        combined['phase_strength'] = amd['strength']
        for phase_name, prob in amd['probabilities'].items():
            combined[f'phase_prob_{phase_name}'] = prob

        # Liquidity features
        if liquidity:
            for liq in liquidity:
                if liq.liquidity_type \
                        == 'BSL':
                    combined['bsl_sweep_prob'] = liq.sweep_probability
                    combined['bsl_distance_pct'] = liq.distance_pct
                elif liq.liquidity_type == 'SSL':
                    combined['ssl_sweep_prob'] = liq.sweep_probability
                    combined['ssl_distance_pct'] = liq.distance_pct

        # OrderFlow features (if available)
        if orderflow:
            combined['flow_type_encoded'] = orderflow.flow_type
            combined['institutional_activity'] = orderflow.institutional_activity

        return combined

    def _stack_features(self, base_features, range_predictions):
        """Stacking: use upstream predictions as features."""
        stacked = base_features.copy()
        for pred in range_predictions:
            horizon = pred.horizon
            stacked[f'pred_delta_high_{horizon}'] = pred.delta_high
            stacked[f'pred_delta_low_{horizon}'] = pred.delta_low
            stacked[f'pred_conf_high_{horizon}'] = pred.confidence_high
            stacked[f'pred_conf_low_{horizon}'] = pred.confidence_low
        return stacked
```

---

## Model Dependencies

### Dependency Graph

```
Market Data
    │
    ├──────────────┬───────────────┬──────────────┐
    ▼              ▼               ▼              ▼
Base Features   AMDDetector   LiquidityHunter   OrderFlow
    │              │               │              │
    └──────────────┴───────────────┴──────────────┘
                        │
                        ▼
               Enhanced Features
                        │
                        ▼
                 RangePredictor
                        │
                        ▼
                Stacked Features
                        │
                        ▼
                 TPSLClassifier
                        │
                        ▼
                All Predictions
                        │
                        ▼
             StrategyOrchestrator
                        │
                        ▼
                 Trading Signal
```

### Dependency Matrix

| Model | Depends on | Output consumed by |
|-------|------------|--------------------|
| **AMDDetector** | Base Features | RangePredictor, TPSLClassifier, Orchestrator |
| **LiquidityHunter** | Base Features | TPSLClassifier, Orchestrator |
| **OrderFlowAnalyzer** | Market Data (granular) | Orchestrator |
| **RangePredictor** | Base + AMD + Liquidity | TPSLClassifier, Orchestrator |
| **TPSLClassifier** | Base + AMD + Range | Orchestrator |
| **StrategyOrchestrator** | All of the above | - (produces the final signal) |

---

## StrategyOrchestrator

### Full Implementation

```python
from datetime import datetime


class StrategyOrchestrator:
    """Meta-model that combines all predictions."""

    def __init__(self,
                 config=None):
        self.config = config or self._default_config()

    def _default_config(self):
        return {
            # Weight of each model
            'weights': {
                'amd': 0.30,
                'range': 0.25,
                'tpsl': 0.25,
                'liquidity': 0.15,
                'orderflow': 0.05
            },
            # Thresholds
            'min_confidence': 0.60,
            'min_tp_probability': 0.55,
            'min_amd_confidence': 0.60,
            # Risk management
            'max_position_size': 0.20,   # at most 20% of capital
            'risk_per_trade': 0.02,      # 2% risk per trade
            # Filters
            'avoid_manipulation': True,
            'require_killzone': False,   # set to True for stricter filtering
            'min_rr_ratio': 1.5,
            # Confluence requirements
            'min_confluence_factors': 3
        }

    def generate_signal(self, market_data, current_price, predictions):
        """Generate the final signal by combining all predictions."""
        signal = {
            'action': 'hold',
            'confidence': 0.0,
            'entry_price': current_price,
            'stop_loss': None,
            'take_profit': None,
            'position_size': 0.0,
            'risk_reward_ratio': 0.0,
            'reasoning': [],
            'confluence_factors': [],
            'model_outputs': predictions,
            'metadata': {}
        }

        # === STEP 1: AMD PHASE CHECK ===
        amd = predictions['amd']

        if amd['confidence'] < self.config['min_amd_confidence']:
            signal['reasoning'].append(f"Low AMD confidence: {amd['confidence']:.2%}")
            return signal

        if self.config['avoid_manipulation'] and amd['phase'] == 'manipulation':
            signal['reasoning'].append("Manipulation phase detected - avoiding entry")
            return signal

        # Determine bias from AMD
        if amd['phase'] == 'accumulation':
            bias = 'bullish'
            signal['confluence_factors'].append('amd_accumulation')
            signal['reasoning'].append(f"✓ AMD: Accumulation (conf: {amd['confidence']:.2%})")
        elif amd['phase'] == 'distribution':
            bias = 'bearish'
            signal['confluence_factors'].append('amd_distribution')
            signal['reasoning'].append(f"✓ AMD: Distribution (conf: {amd['confidence']:.2%})")
        else:
            signal['reasoning'].append(f"AMD: {amd['phase']} - neutral bias")
            return signal

        # === STEP 2: RANGE PREDICTION CHECK ===
        range_pred = predictions['range']

        # Verify the predicted range aligns with the bias
        if bias == 'bullish':
            range_alignment = \
                range_pred['15m'].delta_high > range_pred['15m'].delta_low * 1.5
            if range_alignment:
                signal['confluence_factors'].append('range_bullish')
                signal['reasoning'].append(
                    f"✓ Range: Upside favored (ΔH: {range_pred['15m'].delta_high:.3f}, "
                    f"ΔL: {range_pred['15m'].delta_low:.3f})"
                )
        else:
            range_alignment = range_pred['15m'].delta_low > range_pred['15m'].delta_high * 1.5
            if range_alignment:
                signal['confluence_factors'].append('range_bearish')
                signal['reasoning'].append(
                    f"✓ Range: Downside favored (ΔL: {range_pred['15m'].delta_low:.3f}, "
                    f"ΔH: {range_pred['15m'].delta_high:.3f})"
                )

        if not range_alignment:
            signal['reasoning'].append("✗ Range prediction does not align with bias")
            return signal

        # === STEP 3: TP/SL PROBABILITY CHECK ===
        tpsl_preds = predictions['tpsl']

        # Find the relevant TP/SL prediction for the bias direction
        action = 'long' if bias == 'bullish' else 'short'
        relevant_tpsl = [p for p in tpsl_preds if p.recommended_action == action]

        if not relevant_tpsl:
            signal['reasoning'].append("✗ No suitable TP/SL configuration found")
            return signal

        best_tpsl = max(relevant_tpsl, key=lambda x: x.prob_tp_first)

        if best_tpsl.prob_tp_first < self.config['min_tp_probability']:
            signal['reasoning'].append(
                f"✗ Low TP probability: {best_tpsl.prob_tp_first:.2%} "
                f"(min: {self.config['min_tp_probability']:.2%})"
            )
            return signal

        signal['confluence_factors'].append('high_tp_probability')
        signal['reasoning'].append(
            f"✓ TP Probability: {best_tpsl.prob_tp_first:.2%} "
            f"(R:R {best_tpsl.rr_config})"
        )

        # === STEP 4: LIQUIDITY RISK CHECK ===
        if predictions['liquidity']:
            high_risk_liquidity = any(
                liq.sweep_probability > 0.7 and liq.distance_pct < 0.005
                for liq in predictions['liquidity']
            )
            if high_risk_liquidity:
                signal['reasoning'].append("⚠ High liquidity sweep risk nearby")
                position_multiplier = 0.5  # reduce position size
            else:
                signal['confluence_factors'].append('low_liquidity_risk')
                signal['reasoning'].append("✓ Low liquidity sweep risk")
                position_multiplier = 1.0
        else:
            position_multiplier = 1.0
        # === STEP 5: ORDERFLOW CHECK (if available) ===
        if predictions.get('orderflow'):
            flow = predictions['orderflow']
            if bias == 'bullish' and flow.flow_type == 'accumulation':
                signal['confluence_factors'].append('orderflow_accumulation')
                signal['reasoning'].append("✓ Order flow confirms accumulation")
            elif bias == 'bearish' and flow.flow_type == 'distribution':
                signal['confluence_factors'].append('orderflow_distribution')
                signal['reasoning'].append("✓ Order flow confirms distribution")

        # === STEP 6: CONFLUENCE CHECK ===
        num_factors = len(signal['confluence_factors'])
        if num_factors < self.config['min_confluence_factors']:
            signal['reasoning'].append(
                f"✗ Insufficient confluence: {num_factors}/{self.config['min_confluence_factors']} factors"
            )
            return signal

        signal['reasoning'].append(f"✓ Strong confluence: {num_factors} factors aligned")

        # === STEP 7: CALCULATE CONFIDENCE ===
        weights = self.config['weights']
        confidence = 0.0

        # AMD contribution
        confidence += weights['amd'] * amd['confidence']

        # Range contribution
        range_conf = (
            range_pred['15m'].confidence_high + range_pred['15m'].confidence_low
        ) / 2
        confidence += weights['range'] * range_conf

        # TPSL contribution
        confidence += weights['tpsl'] * best_tpsl.confidence

        # Liquidity contribution (inverse of risk)
        if predictions['liquidity']:
            max_risk = max(liq.risk_score for liq in predictions['liquidity'])
            liq_conf = 1 - max_risk
            confidence += weights['liquidity'] * liq_conf

        # OrderFlow contribution
        if predictions.get('orderflow'):
            confidence += weights['orderflow'] * predictions['orderflow'].confidence

        signal['confidence'] = confidence

        if confidence < self.config['min_confidence']:
            signal['reasoning'].append(
                f"✗ Overall confidence too low: {confidence:.2%} "
                f"(min: {self.config['min_confidence']:.2%})"
            )
            return signal

        # === STEP 8: GENERATE ENTRY ===
        signal['action'] = action
        signal['entry_price'] = current_price
        signal['stop_loss'] = best_tpsl.sl_price
        signal['take_profit'] = best_tpsl.tp_price

        # R:R check
        rr_ratio = abs(
            (best_tpsl.tp_price - current_price) / (current_price - best_tpsl.sl_price)
        )
        signal['risk_reward_ratio'] = rr_ratio

        if rr_ratio < self.config['min_rr_ratio']:
            signal['reasoning'].append(
                f"✗ R:R too low: {rr_ratio:.2f} (min: {self.config['min_rr_ratio']})"
            )
            signal['action'] = 'hold'
            return signal

        # === STEP 9: POSITION SIZING ===
        account_risk = self.config['risk_per_trade']
        price_risk = abs(current_price - best_tpsl.sl_price) / current_price
        position_size = (account_risk / price_risk) * position_multiplier
        position_size = min(position_size, self.config['max_position_size'])
        signal['position_size'] = position_size

        # === STEP 10: FINAL REASONING ===
        signal['reasoning'].append(f"✓ SIGNAL GENERATED: {action.upper()}")
        signal['reasoning'].append(f"Confidence: {confidence:.2%}")
        signal['reasoning'].append(f"R:R: {rr_ratio:.2f}:1")
        signal['reasoning'].append(f"Position Size: {position_size:.2%}")
        signal['reasoning'].append(f"Confluence Factors: {', '.join(signal['confluence_factors'])}")

        # Metadata
        signal['metadata'] = {
            'amd_phase': amd['phase'],
            'range_horizon': '15m',
            'tpsl_config': best_tpsl.rr_config,
            'num_confluence': num_factors,
            'timestamp': datetime.now().isoformat()
        }

        return signal
```

---

## Usage Scenarios

### Scenario 1: Perfect Setup (High Confluence)

```
INPUT:
- Symbol: XAUUSD
- Price: $2,350.00
- Timeframe: 5m
- Killzone: NY AM

PREDICTIONS:
- AMD: Accumulation (0.78 conf)
- Range: ΔHigh=+0.85%, ΔLow=-0.42%
- TPSL: P(TP)=0.68 (RR 2:1)
- Liquidity: Low sweep risk
- OrderFlow: Institutional accumulation

OUTPUT SIGNAL:
{
  "action": "long",
  "confidence": 0.73,
  "entry_price": 2350.00,
  "stop_loss": 2317.00,
  "take_profit": 2416.00,
  "position_size": 0.15,
  "risk_reward_ratio": 2.0,
  "confluence_factors": [
    "amd_accumulation",
    "range_bullish",
    "high_tp_probability",
    "low_liquidity_risk",
    "orderflow_accumulation"
  ],
  "reasoning": [
    "✓ AMD: Accumulation (conf: 78%)",
    "✓ Range: Upside favored",
    "✓ TP Probability: 68%",
    "✓ Low liquidity sweep risk",
    "✓ Order flow confirms accumulation",
    "✓ Strong confluence: 5 factors aligned",
    "✓ SIGNAL GENERATED: LONG",
    "Confidence: 73%",
    "R:R: 2.00:1"
  ]
}
```

### Scenario 2: Manipulation Phase (Hold)

```
PREDICTIONS:
- AMD: Manipulation (0.82 conf)
- Liquidity: High BSL sweep prob (0.85)
- False breakouts detected

OUTPUT:
{
  "action": "hold",
  "confidence": 0.0,
  "reasoning": [
    "Manipulation phase detected - avoiding entry"
  ]
}
```

### Scenario 3: Low Confluence (Hold)

```
PREDICTIONS:
- AMD: Accumulation (0.72 conf)
- Range: ΔHigh=+0.45%, ΔLow=-0.40% (no clear bias)
- TPSL: P(TP)=0.52 (marginal)

OUTPUT:
{
  "action": "hold",
  "confidence": 0.45,
  "reasoning": [
    "✓ AMD: Accumulation (conf: 72%)",
    "✗ Range prediction does not strongly favor upside",
    "✗ TP probability marginal: 52%",
    "✗ Insufficient confluence: 1/3 factors",
    "✗ Overall confidence too low: 45% (min: 60%)"
  ]
}
```

---

## Optimization and Performance

### Caching Strategy

```python
import time


class PipelineCache:
    """Cache for intermediate results."""

    def __init__(self, ttl=300):  # 5-minute TTL
        self.cache = {}
        self.ttl = ttl

    def get(self, key):
        """Return a cached value if it has not expired."""
        if key in self.cache:
            value, timestamp = self.cache[key]
            if time.time() - timestamp < self.ttl:
                return value
            else:
                del self.cache[key]
        return None

    def set(self, key, value):
        """Store a value in the cache."""
        self.cache[key] = (value, time.time())


# Usage
cache = PipelineCache(ttl=300)

# Cache features by symbol + timeframe
cache_key = f"{symbol}_{timeframe}_features"
features = cache.get(cache_key)
if features is None:
    features = engineer_features(market_data)
    cache.set(cache_key, features)
```

### Parallel Execution

```python
import asyncio


async def execute_models_parallel(base_features, market_data):
    """Run independent models in parallel."""
    # These models do not depend on each other; LiquidityHunter and
    # OrderFlowAnalyzer consume market data, as in the pipeline above
    tasks = [
        asyncio.create_task(amd_detector.predict_async(base_features)),
        asyncio.create_task(liquidity_hunter.predict_async(market_data)),
        asyncio.create_task(orderflow_analyzer.predict_async(market_data))
    ]

    # Wait for all of them
    results = await asyncio.gather(*tasks)

    return {
        'amd': results[0],
        'liquidity': results[1],
        'orderflow': results[2]
    }
```

### Batch Processing

```python
import asyncio


def process_multiple_symbols(symbols, timeframe='5m'):
    """Process multiple symbols in batch."""
    results = {}

    # Fetch all data at once
    market_data_batch = {
        symbol: fetch_market_data(symbol, timeframe)
        for symbol in symbols
    }

    # Engineer features in batch
    features_batch = {
        symbol: engineer_features(data)
        for symbol, data in market_data_batch.items()
    }

    # Run each symbol through the pipeline (can be parallelized);
    # pre-computed features are seeded into the pipeline cache
    for symbol in symbols:
        pipeline.cache[f"{symbol}_{timeframe}_features"] = features_batch[symbol]
        results[symbol] = asyncio.run(pipeline.execute(symbol, timeframe))

    return results
```

---

## Monitoring and Alerts

### Key Metrics

```python
from prometheus_client import Counter, Histogram, Gauge

# Counters
signals_generated = Counter(
    'ml_signals_generated_total',
    'Total signals generated',
    ['action', 'symbol']
)

pipeline_errors = Counter(
    'ml_pipeline_errors_total',
    'Total pipeline errors',
    ['stage', 'error_type']
)

# Histograms
pipeline_latency = Histogram(
    'ml_pipeline_latency_seconds',
    'Pipeline execution time',
    ['symbol']
)

model_latency = Histogram(
    'ml_model_latency_seconds',
    'Individual model execution time',
    ['model_name']
)

# Gauges
model_confidence = Gauge(
    'ml_model_confidence',
    'Model confidence',
    ['model_name', 'symbol']
)

active_signals = Gauge(
    'ml_active_signals',
    'Number of active signals',
    ['symbol']
)
```

### Structured Logging

```python
import structlog

logger = structlog.get_logger()


def log_signal(signal):
    """Structured logging for generated signals."""
    logger.info(
        "signal_generated",
        symbol=signal['metadata'].get('symbol', 'unknown'),
        action=signal['action'],
        confidence=signal['confidence'],
        entry_price=signal['entry_price'],
        stop_loss=signal['stop_loss'],
        take_profit=signal['take_profit'],
        rr_ratio=signal['risk_reward_ratio'],
        amd_phase=signal['metadata']['amd_phase'],
        num_confluence=signal['metadata']['num_confluence'],
        timestamp=signal['metadata']['timestamp']
    )
```

### Alerts

```python
def check_alerts(signal, predictions):
    """Generate condition-based alerts."""
    alerts = []

    # Alert 1: high-confidence signal
    if signal['confidence'] > 0.80:
        alerts.append({
            'type': 'high_confidence_signal',
            'severity': 'info',
            'message': f"High confidence {signal['action']} signal: {signal['confidence']:.2%}"
        })

    # Alert 2: manipulation detected
    if predictions['amd']['phase'] == 'manipulation':
        alerts.append({
            'type': 'manipulation_detected',
            'severity': 'warning',
            'message': "Market manipulation phase detected - exercise caution"
        })

    # Alert 3: high liquidity sweep risk
    if predictions['liquidity']:
        high_risk = any(liq.sweep_probability > 0.80 for liq in predictions['liquidity'])
        if high_risk:
            alerts.append({
                'type': 'high_liquidity_risk',
                'severity': 'warning',
                'message': "High probability of liquidity sweep nearby"
            })

    # Alert 4: model disagreement
    if signal['confidence'] < 0.50 and len(signal['confluence_factors']) < 2:
        alerts.append({
            'type': 'model_disagreement',
            'severity': 'info',
            'message': "Low model agreement - holding position"
        })

    return alerts
```

---

## Pipeline Summary

### Execution Checklist

- [ ] Market data fetched successfully
- [ ] Base features engineered (50+)
- [ ] AMDDetector executed (phase detected)
- [ ] LiquidityHunter executed (sweeps analyzed)
- [ ] OrderFlowAnalyzer executed (optional)
- [ ] Enhanced features created (70+)
- [ ] RangePredictor executed (ΔH/ΔL predicted)
- [ ] Stacked features created (80+)
- [ ] TPSLClassifier executed (P[TP] calculated)
- [ ] StrategyOrchestrator executed (signal generated)
- [ ] Signal validated
- [ ] Metrics logged
- [ ] Alerts checked

### Expected Latencies

| Stage | Target Latency | Notes |
|-------|----------------|-------|
| Market Data Fetch | <500ms | API call |
| Feature Engineering | <200ms | CPU-bound |
| AMDDetector | <100ms | GPU-accelerated |
| LiquidityHunter | <50ms | Lightweight |
| OrderFlowAnalyzer | <100ms | Optional |
| RangePredictor | <100ms | GPU-accelerated |
| TPSLClassifier | <100ms | GPU-accelerated |
| StrategyOrchestrator | <50ms | Logic only |
| **TOTAL** | **<1,200ms** | End-to-end |

---

**Document Generated:** 2025-12-05
**Next Review:** 2026-Q1
**Contact:** ml-engineering@orbiquant.ai
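---

### Appendix: Signal Validator Sketch

The Signal Validator stage (Step 10) appears in the diagram and the checklist but is never shown. The sketch below illustrates the kind of sanity checks that stage could perform: it is a minimal, illustrative example, not the production `_validate_signal` implementation, and the function name, thresholds, and the choice to demote failed signals to `hold` are all assumptions.

```python
def validate_signal(signal, max_position_size=0.20, min_rr_ratio=1.5):
    """Sanity-check a signal dict before it is emitted (illustrative sketch)."""
    if signal['action'] == 'hold':
        return signal  # nothing to validate

    issues = []
    entry, sl, tp = signal['entry_price'], signal['stop_loss'], signal['take_profit']

    # SL and TP must sit on the correct side of the entry price
    if signal['action'] == 'long' and not (sl < entry < tp):
        issues.append('long requires stop_loss < entry < take_profit')
    if signal['action'] == 'short' and not (tp < entry < sl):
        issues.append('short requires take_profit < entry < stop_loss')

    # Position size must respect the configured cap
    if not 0 < signal['position_size'] <= max_position_size:
        issues.append(f"position_size out of bounds: {signal['position_size']:.2%}")

    # Recompute R:R independently of the upstream models
    rr = abs((tp - entry) / (entry - sl)) if entry != sl else 0.0
    if rr < min_rr_ratio:
        issues.append(f"R:R below minimum: {rr:.2f}")

    if issues:
        # Demote a suspect signal to hold rather than emitting a trade
        return {**signal,
                'action': 'hold',
                'position_size': 0.0,
                'reasoning': signal.get('reasoning', [])
                + [f"✗ Validation failed: {'; '.join(issues)}"]}
    return signal
```

Because the validator recomputes the risk/reward ratio from the raw prices instead of trusting the value reported upstream, a bug in any earlier stage surfaces here rather than in a live order.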