erp-transportistas-v2/docs/10-arquitectura/SINCRONIZACION-OFFLINE.md
Adrian Flores Cortes 6ed7f9e2ec [BACKUP] Pre-restructure workspace backup 2026-01-29
- Updated docs and inventory files
- Added new architecture docs

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-29 17:35:54 -06:00

# SINCRONIZACION-OFFLINE.md - Implementation Details
**Project:** erp-transportistas
**Module:** MAI-006-tracking / Driver App
**Version:** 1.0.0
**Date:** 2026-01-27
**Related:** ARQUITECTURA-OFFLINE.md
---
## 1) Detailed Synchronization Flow
### 1.1 Sync System States
```
+------------+      +------------+      +------------+
|    IDLE    |----->|  SYNCING   |----->| COMPLETED  |
+------------+      +------------+      +------------+
      ^                   |                   |
      |                   v                   |
      |             +------------+            |
      |             |   ERROR    |            |
      |             +------------+            |
      |                   |                   |
      +-------------------+-------------------+
```
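The transitions in this diagram can be expressed as a small guard function. A minimal sketch, where `SyncStatus` and `canTransition` are illustrative names, not taken from the codebase:

```typescript
type SyncStatus = 'IDLE' | 'SYNCING' | 'COMPLETED' | 'ERROR';

// Legal transitions implied by the state diagram above (illustrative).
const TRANSITIONS: Record<SyncStatus, SyncStatus[]> = {
  IDLE: ['SYNCING'],
  SYNCING: ['COMPLETED', 'ERROR'],
  COMPLETED: ['IDLE'],
  ERROR: ['IDLE']
};

function canTransition(from: SyncStatus, to: SyncStatus): boolean {
  return TRANSITIONS[from].includes(to);
}
```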
### 1.2 Full Push Sync Flow
```
1. TRIGGER
   - Connection restored (network change event)
   - Periodic timer (every 60 seconds while online)
   - User forces a manual sync
   - Critical event captured (POD, signature)
2. PRE-SYNC CHECK
   - Verify real connectivity (ping the server)
   - Verify valid authentication (JWT not expired)
   - Verify queue capacity (do not exceed limits)
3. QUEUE PROCESSING
   - Fetch pending items from the queue
   - Sort by priority (CRITICAL > HIGH > MEDIUM > LOW)
   - Group by type (batch processing)
4. BATCH SEND
   - For each batch:
     a. Serialize data
     b. Compress if applicable (gzip for >10KB)
     c. Send the HTTP request
     d. Await the response
     e. Process the result
5. POST-SYNC
   - Update sync timestamps
   - Remove successfully sent items
   - Record errors for retry
   - Update the UI (badge, notifications)
```
### 1.3 Full Pull Sync Flow
```
1. TRIGGER
   - After a successful push sync
   - Push notification received
   - User opens the app (foreground)
   - Periodic timer (every 5 minutes while online)
2. DELTA FETCH
   - Send last_sync_timestamp to the server
   - Server returns only changes since that timestamp
   - Include the schema version for migrations
3. CONFLICT DETECTION
   - Compare local records against remote ones
   - Identify conflicts field by field
   - Classify the conflict type
4. CONFLICT RESOLUTION
   - Apply the strategy for each data type
   - Record resolutions in the audit log
   - Notify the user when necessary
5. LOCAL UPDATE
   - Apply changes to the local database
   - Refresh the reference cache
   - Fire reactive events for the UI
```
---
## 2) Error Handling and Retries
### 2.1 Error Classification
| Code | Type | Strategy | Example |
|------|------|----------|---------|
| 4xx | Client error | Do not retry (except 408, 429) | 400 Bad Request |
| 408 | Timeout | Retry with backoff | Request Timeout |
| 429 | Rate limit | Retry with long backoff | Too Many Requests |
| 5xx | Server error | Retry with backoff | 500 Internal Error |
| Network | Connectivity | Retry on reconnect | No internet |
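The table above can be collapsed into a single classifier. A hedged sketch, where `classifyError` and the `RetryDecision` labels are illustrative names (the real queue tracks retries per item, as shown later in this document):

```typescript
type RetryDecision =
  | 'RETRY_BACKOFF'       // retry with exponential backoff
  | 'RETRY_BACKOFF_LONG'  // retry with a longer backoff window
  | 'RETRY_ON_RECONNECT'  // hold until connectivity returns
  | 'DROP';               // do not retry

// Map an HTTP status (or null for a network failure) to the
// strategy column of the classification table above.
function classifyError(status: number | null): RetryDecision {
  if (status === null) return 'RETRY_ON_RECONNECT'; // network error
  if (status === 429) return 'RETRY_BACKOFF_LONG';  // rate limited
  if (status === 408) return 'RETRY_BACKOFF';       // request timeout
  if (status >= 500) return 'RETRY_BACKOFF';        // server error
  if (status >= 400) return 'DROP';                 // other client errors
  return 'DROP'; // 2xx/3xx should never reach error handling
}
```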
### 2.2 Exponential Backoff Algorithm
```typescript
interface RetryConfig {
  maxRetries: number;
  baseDelayMs: number;
  maxDelayMs: number;
  jitterFactor: number;
}

const DEFAULT_RETRY_CONFIG: RetryConfig = {
  maxRetries: 8,
  baseDelayMs: 1000,
  maxDelayMs: 60000,
  jitterFactor: 0.2
};

function calculateDelay(attempt: number, config: RetryConfig): number {
  // Exponential: 1s, 2s, 4s, 8s, 16s, 32s, 60s, 60s...
  const exponentialDelay = config.baseDelayMs * Math.pow(2, attempt - 1);
  const cappedDelay = Math.min(exponentialDelay, config.maxDelayMs);
  // Add jitter to avoid a thundering herd
  const jitter = cappedDelay * config.jitterFactor * Math.random();
  return Math.floor(cappedDelay + jitter);
}
```
### 2.3 Error Handling by Operation Type
| Operation | Error | Action | User Notification |
|-----------|-------|--------|-------------------|
| Event | Network | Queue + retry | Discreet toast |
| Event | 400 | Log + drop | None (internal error) |
| Event | 5xx | Queue + retry | Banner after >3 attempts |
| Photo | Network | Queue + retry | Counter badge |
| Photo | 413 | Compress + retry | "Photo too large, compressing" |
| Photo | 507 | Notify + hold | "Server full, retrying" |
| POD signature | Network | Queue + retry | Toast + badge |
| POD signature | 400 | Log + alert | Error modal |
| POD signature | 5xx | Retry indefinitely | Persistent banner |
### 2.4 Dead Letter Queue
```typescript
interface DeadLetterItem {
  id: string;
  originalItem: QueueItem;
  errorHistory: ErrorEntry[];
  failedAt: Date;
  reason: 'MAX_RETRIES' | 'PERMANENT_ERROR' | 'VALIDATION_FAILED';
}

// Items that go to the dead letter queue:
// - Exceeded maxRetries failed attempts
// - 400/422 errors (invalid data)
// - Locally corrupted data

// Actions for dead letter items:
// 1. Notify the user
// 2. Send a report to support
// 3. Keep for manual review
// 4. Offer a manual retry option
```
---
## 3) Data Prioritization
### 3.1 Priority Matrix
| Priority | Level | Data | Max Offline Time |
|----------|-------|------|------------------|
| CRITICAL | 0 | POD signature, serious incident | Immediate sync |
| HIGH | 1 | Trip events, checklist | < 5 min |
| MEDIUM | 2 | GPS positions | < 15 min |
| LOW | 3 | Evidence photos | < 1 hour |
| BACKGROUND | 4 | Logs, analytics | < 24 hours |
### 3.2 Prioritization Algorithm
```typescript
interface QueueItem {
  id: string;
  priority: Priority;
  type: DataType;
  createdAt: Date;
  size: number;
  retryCount: number;
}

function prioritizeQueue(items: QueueItem[]): QueueItem[] {
  return items.sort((a, b) => {
    // 1. By priority (CRITICAL first)
    if (a.priority !== b.priority) {
      return a.priority - b.priority;
    }
    // 2. By age (oldest first)
    if (a.createdAt.getTime() !== b.createdAt.getTime()) {
      return a.createdAt.getTime() - b.createdAt.getTime();
    }
    // 3. By size (smallest first, for quick wins)
    return a.size - b.size;
  });
}
```
### 3.3 Batching by Type
```typescript
const BATCH_CONFIG = {
  events:     { maxItems: 50,  maxSize: 100 * 1024 },      // 100 KB
  positions:  { maxItems: 100, maxSize: 50 * 1024 },       // 50 KB
  photos:     { maxItems: 3,   maxSize: 5 * 1024 * 1024 }, // 5 MB
  signatures: { maxItems: 1,   maxSize: 500 * 1024 }       // 500 KB (one at a time)
};
```
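A batch builder that honors both `maxItems` and `maxSize` might look like the following sketch; `chunkBatches`, `Sized`, and `BatchLimits` are hypothetical names, not from the codebase. An item larger than `maxSize` still gets its own single-item batch rather than being dropped:

```typescript
interface Sized { size: number }
interface BatchLimits { maxItems: number; maxSize: number }

// Split an already-prioritized list into batches that respect both
// the item-count and the byte-size limits of a BATCH_CONFIG entry.
function chunkBatches<T extends Sized>(items: T[], limits: BatchLimits): T[][] {
  const batches: T[][] = [];
  let current: T[] = [];
  let currentSize = 0;
  for (const item of items) {
    const wouldOverflow =
      current.length >= limits.maxItems ||
      (current.length > 0 && currentSize + item.size > limits.maxSize);
    if (wouldOverflow) {
      batches.push(current);
      current = [];
      currentSize = 0;
    }
    current.push(item);
    currentSize += item.size;
  }
  if (current.length > 0) batches.push(current);
  return batches;
}
```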
---
## 4) Local Storage Limits
### 4.1 Limits by Data Type
| Type | Soft Limit | Hard Limit | Action When Exceeded |
|------|------------|------------|----------------------|
| Events | 500 items | 1000 items | Force sync / drop oldest |
| GPS positions | 2000 items | 5000 items | Compress / aggregate |
| Photos | 50 items | 100 items | Block capture |
| Signatures | 10 items | 20 items | Block capture |
| Cache | 100 MB | 200 MB | Purge LRU |
### 4.2 Storage Monitoring
```typescript
interface StorageStatus {
  used: number;
  available: number;
  limit: number;
  percentUsed: number;
  breakdown: {
    events: number;
    positions: number;
    photos: number;
    signatures: number;
    cache: number;
  };
}

async function checkStorage(): Promise<StorageStatus> {
  const estimate = await navigator.storage.estimate();
  return {
    used: estimate.usage || 0,
    available: (estimate.quota || 0) - (estimate.usage || 0),
    limit: estimate.quota || 0,
    percentUsed: ((estimate.usage || 0) / (estimate.quota || 1)) * 100,
    breakdown: await calculateBreakdown()
  };
}
```
### 4.3 Storage Alerts
| Percentage | Level | Action |
|------------|-------|--------|
| < 60% | Normal | None |
| 60-80% | Warning | Informational toast |
| 80-90% | Critical | Banner + suggest sync |
| > 90% | Emergency | Modal + force sync |
| > 95% | Block | Block new captures |
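The thresholds above can be checked with one small helper. A sketch assuming `percentUsed` comes from the storage estimate in 4.2; `storageAlertLevel` is an illustrative name, with the highest threshold checked first so the overlapping bands resolve correctly:

```typescript
type StorageAlertLevel = 'NORMAL' | 'WARNING' | 'CRITICAL' | 'EMERGENCY' | 'BLOCK';

// Map percentUsed to the alert level of the table above.
function storageAlertLevel(percentUsed: number): StorageAlertLevel {
  if (percentUsed > 95) return 'BLOCK';      // block new captures
  if (percentUsed > 90) return 'EMERGENCY';  // modal + force sync
  if (percentUsed >= 80) return 'CRITICAL';  // banner + suggest sync
  if (percentUsed >= 60) return 'WARNING';   // informational toast
  return 'NORMAL';
}
```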
---
## 5) Offline Data Retention Policy
### 5.1 Retention Rules
| Data Type | Local Retention | Deletion Condition |
|-----------|-----------------|--------------------|
| Active trip | Until closed | Trip closed + synced |
| Closed trips | 7 days | Synced + time elapsed |
| Synced events | 24 hours | Synced + time elapsed |
| Pending events | Indefinite | Until synced successfully |
| Synced photos | Immediate | Right after successful sync |
| Pending photos | 30 days | Until synced or timed out |
| Reference cache | 7 days | LRU + time elapsed |
| Synced positions | 1 hour | Synced + time elapsed |
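The time-based rows of this table reduce to a retention-window lookup. A minimal sketch with hypothetical names (`RetentionRecord`, `isExpired`), covering only the rows with a fixed window; the "until synced" and "immediate" rules need queue state and are not modeled here:

```typescript
interface RetentionRecord {
  kind: 'closed_trip' | 'synced_event' | 'synced_position' | 'pending_photo';
  referenceAt: Date; // sync or creation timestamp the rule applies to
}

// Retention windows from the table above, in milliseconds.
const RETENTION_MS: Record<RetentionRecord['kind'], number> = {
  closed_trip: 7 * 24 * 60 * 60 * 1000,    // 7 days
  synced_event: 24 * 60 * 60 * 1000,       // 24 hours
  synced_position: 60 * 60 * 1000,         // 1 hour
  pending_photo: 30 * 24 * 60 * 60 * 1000  // 30 days
};

function isExpired(record: RetentionRecord, now: Date): boolean {
  return now.getTime() - record.referenceAt.getTime() > RETENTION_MS[record.kind];
}
```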
### 5.2 Cleanup Process (Garbage Collection)
```typescript
async function runGarbageCollection(): Promise<void> {
  const now = new Date();
  // 1. Remove synced events older than 24h
  await cleanSyncedEvents(subDays(now, 1));
  // 2. Remove synced positions older than 1h
  await cleanSyncedPositions(subHours(now, 1));
  // 3. Remove synced photos (immediately)
  await cleanSyncedPhotos();
  // 4. Remove closed trips older than 7d
  await cleanClosedTrips(subDays(now, 7));
  // 5. Evict LRU cache entries older than 7d
  await cleanOldCache(subDays(now, 7));
  // 6. Compact the database
  await database.compactDatabase();
}
```
### 5.3 Cleanup Triggers
- After each successful sync
- On app startup
- Every 6 hours in the background
- When storage usage exceeds 70%
---
## 6) TypeScript Code Examples
### 6.1 SyncManager Service
```typescript
// src/services/sync/SyncManager.ts
import { BehaviorSubject, Observable } from 'rxjs';
import { OfflineQueue, QueueItem, Priority } from './OfflineQueue';
import { ConflictResolver } from './ConflictResolver';
import { NetworkService } from '../network/NetworkService';
import { ApiClient } from '../api/ApiClient';
import { Database } from '../database/Database';

export type SyncStatus = 'IDLE' | 'SYNCING' | 'COMPLETED' | 'ERROR';

export interface SyncProgress {
  status: SyncStatus;
  pendingCount: number;
  syncedCount: number;
  errorCount: number;
  lastSyncAt: Date | null;
  currentOperation: string | null;
  error: Error | null;
}

export class SyncManager {
  private queue: OfflineQueue;
  private conflictResolver: ConflictResolver;
  private networkService: NetworkService;
  private apiClient: ApiClient;
  private database: Database;

  private _progress = new BehaviorSubject<SyncProgress>({
    status: 'IDLE',
    pendingCount: 0,
    syncedCount: 0,
    errorCount: 0,
    lastSyncAt: null,
    currentOperation: null,
    error: null
  });

  public progress$: Observable<SyncProgress> = this._progress.asObservable();

  private syncLock = false;
  private syncInterval: ReturnType<typeof setInterval> | null = null;

  constructor(deps: {
    queue: OfflineQueue;
    conflictResolver: ConflictResolver;
    networkService: NetworkService;
    apiClient: ApiClient;
    database: Database;
  }) {
    this.queue = deps.queue;
    this.conflictResolver = deps.conflictResolver;
    this.networkService = deps.networkService;
    this.apiClient = deps.apiClient;
    this.database = deps.database;
    this.setupNetworkListener();
    this.startPeriodicSync();
  }

  private setupNetworkListener(): void {
    this.networkService.isOnline$.subscribe((isOnline) => {
      if (isOnline) {
        this.triggerSync('network_restored');
      }
    });
  }

  private startPeriodicSync(): void {
    this.syncInterval = setInterval(() => {
      if (this.networkService.isOnline()) {
        this.triggerSync('periodic');
      }
    }, 60000); // every 60 seconds
  }

  public async triggerSync(reason: string): Promise<void> {
    if (this.syncLock) {
      console.log('[SyncManager] Sync already in progress, skipping');
      return;
    }
    if (!this.networkService.isOnline()) {
      console.log('[SyncManager] Offline, skipping sync');
      return;
    }
    this.syncLock = true;
    this.updateProgress({ status: 'SYNCING', currentOperation: 'Iniciando sync...' });
    try {
      // 1. Push local changes
      await this.pushChanges();
      // 2. Pull remote changes
      await this.pullChanges();
      // 3. Run garbage collection
      await this.runGarbageCollection();
      this.updateProgress({
        status: 'COMPLETED',
        lastSyncAt: new Date(),
        currentOperation: null,
        error: null
      });
      console.log(`[SyncManager] Sync completed (reason: ${reason})`);
    } catch (error) {
      console.error('[SyncManager] Sync failed:', error);
      this.updateProgress({
        status: 'ERROR',
        error: error as Error,
        currentOperation: null
      });
    } finally {
      this.syncLock = false;
    }
  }

  private async pushChanges(): Promise<void> {
    const pendingItems = await this.queue.getPendingItems();
    if (pendingItems.length === 0) {
      return;
    }
    this.updateProgress({
      pendingCount: pendingItems.length,
      currentOperation: `Enviando ${pendingItems.length} items...`
    });
    // Group by type
    const eventItems = pendingItems.filter(i => i.type === 'EVENT');
    const positionItems = pendingItems.filter(i => i.type === 'POSITION');
    const photoItems = pendingItems.filter(i => i.type === 'PHOTO');
    const signatureItems = pendingItems.filter(i => i.type === 'SIGNATURE');
    // Send in priority order
    await this.pushBatch(signatureItems, '/api/sync/signatures', 1);
    await this.pushBatch(eventItems, '/api/sync/events', 50);
    await this.pushBatch(positionItems, '/api/sync/positions', 100);
    await this.pushBatch(photoItems, '/api/sync/photos', 3);
  }

  private async pushBatch(
    items: QueueItem[],
    endpoint: string,
    batchSize: number
  ): Promise<void> {
    for (let i = 0; i < items.length; i += batchSize) {
      const batch = items.slice(i, i + batchSize);
      try {
        const response = await this.apiClient.post(endpoint, {
          items: batch.map(item => item.data)
        });
        if (response.ok) {
          // Mark as sent
          await Promise.all(batch.map(item => this.queue.markAsSent(item.id)));
          this.updateProgress({
            syncedCount: this._progress.value.syncedCount + batch.length
          });
        }
      } catch (error) {
        // Increment the retry count
        await Promise.all(batch.map(item => this.queue.incrementRetry(item.id)));
        this.updateProgress({
          errorCount: this._progress.value.errorCount + batch.length
        });
        throw error;
      }
    }
  }

  private async pullChanges(): Promise<void> {
    this.updateProgress({ currentOperation: 'Obteniendo cambios del servidor...' });
    const lastSync = await this.database.getLastSyncTimestamp();
    const response = await this.apiClient.get('/api/sync/pull', {
      params: { since: lastSync?.toISOString() }
    });
    if (!response.ok) {
      throw new Error('Failed to pull changes');
    }
    const changes = response.data;
    // Resolve conflicts and apply changes
    for (const change of changes.items) {
      const localRecord = await this.database.findById(change.type, change.id);
      if (localRecord) {
        const resolved = await this.conflictResolver.resolve(localRecord, change);
        await this.database.upsert(change.type, resolved);
      } else {
        await this.database.insert(change.type, change);
      }
    }
    await this.database.setLastSyncTimestamp(new Date());
  }

  private async runGarbageCollection(): Promise<void> {
    this.updateProgress({ currentOperation: 'Limpiando datos antiguos...' });
    await this.database.runGarbageCollection();
  }

  private updateProgress(partial: Partial<SyncProgress>): void {
    this._progress.next({
      ...this._progress.value,
      ...partial
    });
  }

  public getPendingCount(): Promise<number> {
    return this.queue.getCount();
  }

  public getLastSyncTime(): Promise<Date | null> {
    return this.database.getLastSyncTimestamp();
  }

  public forceSync(): Promise<void> {
    return this.triggerSync('manual');
  }

  public destroy(): void {
    if (this.syncInterval) {
      clearInterval(this.syncInterval);
    }
  }
}
```
### 6.2 OfflineQueue
```typescript
// src/services/sync/OfflineQueue.ts
import { Database } from '../database/Database';
import { v4 as uuidv4 } from 'uuid';

export type Priority = 0 | 1 | 2 | 3 | 4;
export type DataType = 'EVENT' | 'POSITION' | 'PHOTO' | 'SIGNATURE' | 'CHECKLIST';
export type QueueStatus = 'PENDING' | 'SENDING' | 'SENT' | 'FAILED' | 'DEAD';

export interface QueueItem {
  id: string;
  type: DataType;
  priority: Priority;
  data: Record<string, unknown>;
  metadata: {
    viajeId: string;
    createdAt: Date;
    updatedAt: Date;
    retryCount: number;
    lastError: string | null;
    size: number;
  };
  status: QueueStatus;
}

export interface RetryConfig {
  maxRetries: number;
  baseDelayMs: number;
  maxDelayMs: number;
}

const PRIORITY_MAP: Record<DataType, Priority> = {
  SIGNATURE: 0,
  EVENT: 1,
  CHECKLIST: 1,
  POSITION: 2,
  PHOTO: 3
};

const DEFAULT_RETRY_CONFIG: RetryConfig = {
  maxRetries: 8,
  baseDelayMs: 1000,
  maxDelayMs: 60000
};

export class OfflineQueue {
  private database: Database;
  private retryConfig: RetryConfig;

  constructor(database: Database, retryConfig?: Partial<RetryConfig>) {
    this.database = database;
    this.retryConfig = { ...DEFAULT_RETRY_CONFIG, ...retryConfig };
  }

  public async enqueue(
    type: DataType,
    data: Record<string, unknown>,
    viajeId: string
  ): Promise<string> {
    const id = uuidv4();
    const now = new Date();
    const item: QueueItem = {
      id,
      type,
      priority: PRIORITY_MAP[type],
      data,
      metadata: {
        viajeId,
        createdAt: now,
        updatedAt: now,
        retryCount: 0,
        lastError: null,
        size: this.calculateSize(data)
      },
      status: 'PENDING'
    };
    await this.database.insert('sync_queue', item);
    console.log(`[OfflineQueue] Enqueued ${type} item: ${id}`);
    return id;
  }

  public async getPendingItems(): Promise<QueueItem[]> {
    const items = await this.database.query<QueueItem>('sync_queue', {
      status: { $in: ['PENDING', 'FAILED'] },
      'metadata.retryCount': { $lt: this.retryConfig.maxRetries }
    });
    // Sort by priority, then by age
    return items.sort((a, b) => {
      if (a.priority !== b.priority) {
        return a.priority - b.priority;
      }
      return a.metadata.createdAt.getTime() - b.metadata.createdAt.getTime();
    });
  }

  public async markAsSent(id: string): Promise<void> {
    await this.database.update('sync_queue', id, {
      status: 'SENT',
      'metadata.updatedAt': new Date()
    });
    // Remove sent items after a short delay
    setTimeout(() => this.removeIfSent(id), 5000);
  }

  public async incrementRetry(id: string, error?: Error): Promise<void> {
    const item = await this.database.findById<QueueItem>('sync_queue', id);
    if (!item) return;
    const newRetryCount = item.metadata.retryCount + 1;
    const newStatus: QueueStatus =
      newRetryCount >= this.retryConfig.maxRetries ? 'DEAD' : 'FAILED';
    await this.database.update('sync_queue', id, {
      status: newStatus,
      'metadata.retryCount': newRetryCount,
      'metadata.lastError': error?.message || null,
      'metadata.updatedAt': new Date()
    });
    if (newStatus === 'DEAD') {
      console.error(`[OfflineQueue] Item ${id} moved to dead letter queue`);
      await this.moveToDeadLetter(item, 'MAX_RETRIES');
    }
  }

  public async getCount(): Promise<number> {
    return this.database.count('sync_queue', {
      status: { $in: ['PENDING', 'FAILED'] }
    });
  }

  public async getByViaje(viajeId: string): Promise<QueueItem[]> {
    return this.database.query<QueueItem>('sync_queue', {
      'metadata.viajeId': viajeId
    });
  }

  public async clear(viajeId?: string): Promise<void> {
    if (viajeId) {
      await this.database.deleteMany('sync_queue', {
        'metadata.viajeId': viajeId,
        status: 'SENT'
      });
    } else {
      await this.database.deleteMany('sync_queue', { status: 'SENT' });
    }
  }

  public async getDeadLetterItems(): Promise<QueueItem[]> {
    return this.database.query<QueueItem>('dead_letter_queue', {});
  }

  public async retryDeadLetter(id: string): Promise<void> {
    const deadItem = await this.database.findById<QueueItem>('dead_letter_queue', id);
    if (!deadItem) return;
    // Restore to the main queue
    await this.database.insert('sync_queue', {
      ...deadItem,
      status: 'PENDING',
      metadata: {
        ...deadItem.metadata,
        retryCount: 0,
        updatedAt: new Date()
      }
    });
    await this.database.delete('dead_letter_queue', id);
  }

  private async removeIfSent(id: string): Promise<void> {
    const item = await this.database.findById<QueueItem>('sync_queue', id);
    if (item?.status === 'SENT') {
      await this.database.delete('sync_queue', id);
    }
  }

  private async moveToDeadLetter(
    item: QueueItem,
    reason: 'MAX_RETRIES' | 'PERMANENT_ERROR' | 'VALIDATION_FAILED'
  ): Promise<void> {
    await this.database.insert('dead_letter_queue', {
      ...item,
      deadLetterReason: reason,
      movedAt: new Date()
    });
    await this.database.delete('sync_queue', item.id);
  }

  private calculateSize(data: Record<string, unknown>): number {
    return new Blob([JSON.stringify(data)]).size;
  }

  public calculateRetryDelay(retryCount: number): number {
    const exponentialDelay = this.retryConfig.baseDelayMs * Math.pow(2, retryCount);
    const cappedDelay = Math.min(exponentialDelay, this.retryConfig.maxDelayMs);
    const jitter = cappedDelay * 0.2 * Math.random();
    return Math.floor(cappedDelay + jitter);
  }
}
```
### 6.3 ConflictResolver
```typescript
// src/services/sync/ConflictResolver.ts
import { Database } from '../database/Database';

export type ConflictStrategy =
  | 'SERVER_WINS'
  | 'CLIENT_WINS'
  | 'MERGE'
  | 'APPEND_ONLY'
  | 'MANUAL';

export interface ConflictRecord {
  id: string;
  type: string;
  localVersion: Record<string, unknown>;
  serverVersion: Record<string, unknown>;
  resolvedVersion: Record<string, unknown>;
  strategy: ConflictStrategy;
  resolvedAt: Date;
  autoResolved: boolean;
}

interface FieldConflict {
  field: string;
  localValue: unknown;
  serverValue: unknown;
}

// Strategy per entity type
const STRATEGY_MAP: Record<string, ConflictStrategy> = {
  'viaje': 'SERVER_WINS',
  'viaje_estado': 'SERVER_WINS',
  'evento_tracking': 'APPEND_ONLY',
  'posicion_gps': 'CLIENT_WINS',
  'foto_evidencia': 'CLIENT_WINS',
  'firma_pod': 'CLIENT_WINS',
  'instruccion': 'SERVER_WINS',
  'operador': 'SERVER_WINS',
  'unidad': 'SERVER_WINS',
  'checklist_respuesta': 'CLIENT_WINS'
};

export class ConflictResolver {
  private database: Database;
  private conflictLog: ConflictRecord[] = [];

  constructor(database: Database) {
    this.database = database;
  }

  public async resolve(
    localRecord: Record<string, unknown>,
    serverRecord: Record<string, unknown>
  ): Promise<Record<string, unknown>> {
    const entityType = serverRecord._type as string;
    const strategy = STRATEGY_MAP[entityType] || 'SERVER_WINS';
    let resolved: Record<string, unknown>;
    switch (strategy) {
      case 'SERVER_WINS':
        resolved = this.resolveServerWins(localRecord, serverRecord);
        break;
      case 'CLIENT_WINS':
        resolved = this.resolveClientWins(localRecord, serverRecord);
        break;
      case 'MERGE':
        resolved = this.resolveMerge(localRecord, serverRecord);
        break;
      case 'APPEND_ONLY':
        resolved = this.resolveAppendOnly(localRecord, serverRecord);
        break;
      default:
        resolved = serverRecord;
    }
    // Log the resolution
    await this.logConflict({
      id: serverRecord.id as string,
      type: entityType,
      localVersion: localRecord,
      serverVersion: serverRecord,
      resolvedVersion: resolved,
      strategy,
      resolvedAt: new Date(),
      autoResolved: true
    });
    return resolved;
  }

  private resolveServerWins(
    localRecord: Record<string, unknown>,
    serverRecord: Record<string, unknown>
  ): Record<string, unknown> {
    // The server always wins, but preserve non-conflicting local fields
    return {
      ...localRecord,
      ...serverRecord,
      _localModifiedAt: localRecord._modifiedAt,
      _conflictResolved: true
    };
  }

  private resolveClientWins(
    localRecord: Record<string, unknown>,
    serverRecord: Record<string, unknown>
  ): Record<string, unknown> {
    // The client wins, but keep the server's version metadata
    return {
      ...serverRecord,
      ...localRecord,
      _serverVersion: serverRecord._version,
      _conflictResolved: true
    };
  }

  private resolveMerge(
    localRecord: Record<string, unknown>,
    serverRecord: Record<string, unknown>
  ): Record<string, unknown> {
    const conflicts = this.detectFieldConflicts(localRecord, serverRecord);
    const merged: Record<string, unknown> = { ...serverRecord };
    for (const conflict of conflicts) {
      // Rule: timestamp fields -> keep the most recent value
      if (conflict.field.includes('_at') || conflict.field.includes('fecha')) {
        merged[conflict.field] = this.mostRecent(
          conflict.localValue as Date,
          conflict.serverValue as Date
        );
        continue;
      }
      // Rule: numeric fields -> keep the larger value (for counters)
      if (typeof conflict.localValue === 'number') {
        merged[conflict.field] = Math.max(
          conflict.localValue as number,
          conflict.serverValue as number
        );
        continue;
      }
      // Default: server wins
      merged[conflict.field] = conflict.serverValue;
    }
    return {
      ...merged,
      _conflictResolved: true,
      _mergedFields: conflicts.map(c => c.field)
    };
  }

  private resolveAppendOnly(
    localRecord: Record<string, unknown>,
    serverRecord: Record<string, unknown>
  ): Record<string, unknown> {
    // For events, both records are valid.
    // The server should already have the event if it was synced;
    // if it does not, the local event is new and must be preserved.
    const localId = localRecord.id || localRecord._localId;
    const serverId = serverRecord.id;
    if (localId === serverId) {
      // Same event: use the server version (already processed)
      return serverRecord;
    }
    // Different events: the local one is new
    return localRecord;
  }

  private detectFieldConflicts(
    localRecord: Record<string, unknown>,
    serverRecord: Record<string, unknown>
  ): FieldConflict[] {
    const conflicts: FieldConflict[] = [];
    const allKeys = new Set([
      ...Object.keys(localRecord),
      ...Object.keys(serverRecord)
    ]);
    for (const key of allKeys) {
      // Ignore internal fields
      if (key.startsWith('_')) continue;
      const localValue = localRecord[key];
      const serverValue = serverRecord[key];
      if (!this.isEqual(localValue, serverValue)) {
        conflicts.push({
          field: key,
          localValue,
          serverValue
        });
      }
    }
    return conflicts;
  }

  private isEqual(a: unknown, b: unknown): boolean {
    if (a === b) return true;
    if (a === null || b === null) return false;
    if (typeof a !== typeof b) return false;
    if (a instanceof Date && b instanceof Date) {
      return a.getTime() === b.getTime();
    }
    if (typeof a === 'object') {
      return JSON.stringify(a) === JSON.stringify(b);
    }
    return false;
  }

  private mostRecent(a: Date | null, b: Date | null): Date | null {
    if (!a) return b;
    if (!b) return a;
    return a > b ? a : b;
  }

  private async logConflict(record: ConflictRecord): Promise<void> {
    this.conflictLog.push(record);
    // Persist for auditing
    await this.database.insert('conflict_log', record);
    console.log(
      `[ConflictResolver] Resolved conflict for ${record.type}:${record.id} ` +
      `using ${record.strategy}`
    );
  }

  public getConflictHistory(): ConflictRecord[] {
    return [...this.conflictLog];
  }

  public async getConflictStats(): Promise<{
    total: number;
    byStrategy: Record<ConflictStrategy, number>;
    byType: Record<string, number>;
  }> {
    const logs = await this.database.query<ConflictRecord>('conflict_log', {});
    const byStrategy: Record<string, number> = {};
    const byType: Record<string, number> = {};
    for (const log of logs) {
      byStrategy[log.strategy] = (byStrategy[log.strategy] || 0) + 1;
      byType[log.type] = (byType[log.type] || 0) + 1;
    }
    return {
      total: logs.length,
      byStrategy: byStrategy as Record<ConflictStrategy, number>,
      byType
    };
  }
}
```
### 6.4 React Hook for the UI
```typescript
// src/hooks/useSyncStatus.ts
import { useState, useEffect } from 'react';
import { SyncManager, SyncProgress } from '../services/sync/SyncManager';

export interface SyncStatusHook {
  status: SyncProgress['status'];
  pendingCount: number;
  lastSyncAt: Date | null;
  isOnline: boolean;
  isSyncing: boolean;
  hasErrors: boolean;
  forceSync: () => Promise<void>;
}

export function useSyncStatus(syncManager: SyncManager): SyncStatusHook {
  const [progress, setProgress] = useState<SyncProgress>({
    status: 'IDLE',
    pendingCount: 0,
    syncedCount: 0,
    errorCount: 0,
    lastSyncAt: null,
    currentOperation: null,
    error: null
  });
  const [isOnline, setIsOnline] = useState<boolean>(navigator.onLine);

  useEffect(() => {
    const subscription = syncManager.progress$.subscribe(setProgress);
    const handleOnline = () => setIsOnline(true);
    const handleOffline = () => setIsOnline(false);
    window.addEventListener('online', handleOnline);
    window.addEventListener('offline', handleOffline);
    return () => {
      subscription.unsubscribe();
      window.removeEventListener('online', handleOnline);
      window.removeEventListener('offline', handleOffline);
    };
  }, [syncManager]);

  return {
    status: progress.status,
    pendingCount: progress.pendingCount,
    lastSyncAt: progress.lastSyncAt,
    isOnline,
    isSyncing: progress.status === 'SYNCING',
    hasErrors: progress.errorCount > 0,
    forceSync: () => syncManager.forceSync()
  };
}
```
---
## 7) Backend Sync Endpoints
### 7.1 Required Endpoints
| Method | Endpoint | Description |
|--------|----------|-------------|
| POST | /api/sync/events | Batch of tracking events |
| POST | /api/sync/positions | Batch of GPS positions |
| POST | /api/sync/photos | Photo upload (multipart) |
| POST | /api/sync/signatures | POD signature upload |
| GET | /api/sync/pull | Delta sync from a timestamp |
| GET | /api/sync/status | Sync status of a trip |
### 7.2 Request/Response Format
```typescript
// POST /api/sync/events
interface SyncEventsRequest {
  items: Array<{
    localId: string;
    tipo: string;
    timestamp: string;
    lat: number;
    lng: number;
    viajeId: string;
    data: Record<string, unknown>;
  }>;
}

interface SyncEventsResponse {
  accepted: string[]; // accepted IDs
  rejected: Array<{
    localId: string;
    reason: string;
  }>;
  serverTimestamp: string;
}

// GET /api/sync/pull?since=2026-01-27T10:00:00Z
interface SyncPullResponse {
  items: Array<{
    type: string;
    id: string;
    action: 'CREATE' | 'UPDATE' | 'DELETE';
    data: Record<string, unknown>;
    timestamp: string;
  }>;
  serverTimestamp: string;
  hasMore: boolean;
}
```
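A client consuming `SyncEventsResponse` has to handle partial acceptance: some items succeed while others are rejected in the same batch. A hedged sketch, where `applySyncEventsResponse` and the two callbacks are assumptions for illustration, not part of the documented API:

```typescript
interface PushResult { sent: string[]; rejected: string[] }

// Split a partial-accept response into per-item outcomes:
// accepted items are marked sent, rejected ones recorded with a reason.
function applySyncEventsResponse(
  response: { accepted: string[]; rejected: Array<{ localId: string; reason: string }> },
  markSent: (id: string) => void,
  markRejected: (id: string, reason: string) => void
): PushResult {
  for (const id of response.accepted) markSent(id);
  for (const r of response.rejected) markRejected(r.localId, r.reason);
  return {
    sent: response.accepted,
    rejected: response.rejected.map(r => r.localId)
  };
}
```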
---
## 8) Offline Functionality Testing
### 8.1 Test Scenarios
| Scenario | Steps | Expected Result |
|----------|-------|-----------------|
| Offline capture | Disconnect -> capture event | Event in local queue |
| Sync on reconnect | Restore network connection | Events sent automatically |
| State conflict | Change state offline while the server also changes it | Server wins, UI updated |
| Retry after error | Simulate a 500 error | Retries with backoff |
| Dead letter | 8 consecutive failures | Item in dead letter queue |
| Automatic cleanup | Wait 24h after sync | Synced items removed |
### 8.2 Debug Tools
```typescript
// Console commands for debugging

// Inspect the sync queue
window.__SYNC_MANAGER__.queue.getPendingItems().then(console.table);

// Force a sync
window.__SYNC_MANAGER__.forceSync();

// Inspect resolved conflicts
window.__SYNC_MANAGER__.conflictResolver.getConflictHistory();

// Simulate offline mode
window.__NETWORK_SERVICE__.simulateOffline(true);

// Check storage usage
navigator.storage.estimate().then(console.log);
```
---
## 9) References
- ARQUITECTURA-OFFLINE.md (main document)
- REQ-GIRO-TRANSPORTISTA.md RF-4.5.2 (functional requirement)
- erp-mecanicas-diesel/Field Service (reference implementation)
- [WatermelonDB Sync](https://watermelondb.dev/docs/Sync/Intro)
- [Background Sync API](https://wicg.github.io/background-sync/spec/)
---
*SINCRONIZACION-OFFLINE.md v1.0.0 - erp-transportistas - Sistema SIMCO v4.0.0*