diff --git a/CLAUDE.md b/CLAUDE.md index d26c3f2..e7c03e8 100644 --- a/CLAUDE.md +++ b/CLAUDE.md @@ -27,89 +27,29 @@ node -e "const main = require('./lib/Main'); main.handleFullWorkflow({ rowNumber ### Testing Commands ```bash -# Test suites -npm run test:all # Complete test suite -npm run test:light # Light test runner -npm run test:smoke # Smoke tests only -npm run test:llm # LLM connectivity tests -npm run test:content # Content generation tests -npm run test:integration # Integration tests -npm run test:systematic # Systematic module testing -npm run test:basic # Basic validation only +# Main test suites +npm run test:all # Complete test suite +npm run test:production-loop # Production ready validation (CI/CD recommended) +npm run test:comprehensive # Exhaustive modular combinations (22 tests) +npm run test:basic # Basic architecture validation -# Individual test categories -npm run test:ai-validation # AI content validation -npm run test:dashboard # Test dashboard server - -# Comprehensive Integration Tests (NEW) -npm run test:comprehensive # Exhaustive modular combinations testing -npm run test:modular # Alias for comprehensive tests - -# Production Ready Tests (NEW) -npm run test:production-workflow # Complete production workflow tests (slow) -npm run test:production-quick # Fast production workflow validation -npm run test:production-loop # Complete production ready loop validation +# Quick tests +npm run test:smoke # Smoke tests +npm run test:llm # LLM connectivity +npm run test:content # Content generation +npm run test:integration # Integration tests ``` -### Google Sheets Integration Tests +### Quick System Tests ```bash -# Test personality loading -node -e "const {getPersonalities} = require('./lib/BrainConfig'); getPersonalities().then(p => console.log(\`\${p.length} personalities loaded\`));" +# Production workflow +node -e "require('./lib/Main').handleFullWorkflow({ rowNumber: 2, source: 'production' });" -# Test CSV data loading -node -e "const 
{readInstructionsData} = require('./lib/BrainConfig'); readInstructionsData(2).then(d => console.log('Data:', d));" +# LLM connectivity +node -e "require('./lib/LLMManager').testLLMManager()" -# Test random personality selection -node -e "const {selectPersonalityWithAI, getPersonalities} = require('./lib/BrainConfig'); getPersonalities().then(p => selectPersonalityWithAI('test', 'test', p)).then(r => console.log('Selected:', r.nom));" -``` - -### LLM Connectivity Tests -```bash -node -e "require('./lib/LLMManager').testLLMManager()" # Basic LLM connectivity -node -e "require('./lib/LLMManager').testLLMManagerComplete()" # Full LLM provider test suite -``` - -### Complete System Test -```bash -node -e " -const main = require('./lib/Main'); -const testData = { - csvData: { - mc0: 'plaque personnalisée', - t0: 'Créer une plaque personnalisée unique', - personality: { nom: 'Marc', style: 'professionnel' }, - tMinus1: 'décoration personnalisée', - mcPlus1: 'plaque gravée,plaque métal,plaque bois,plaque acrylique', - tPlus1: 'Plaque Gravée Premium,Plaque Métal Moderne,Plaque Bois Naturel,Plaque Acrylique Design' - }, - xmlTemplate: Buffer.from(\` -
-

|Titre_Principal{{T0}}{Rédige un titre H1 accrocheur}|

- |Introduction{{MC0}}{Rédige une introduction engageante}| -
\`).toString('base64'), - source: 'node_server_test' -}; -main.handleFullWorkflow(testData); -" -``` - -### Production Ready Loop Validation -```bash -# Complete production ready validation (recommended for CI/CD) -npm run test:production-loop - -# This runs: -# 1. npm run test:basic # Architecture validation -# 2. npm run test:production-quick # Google Sheets connectivity + core functions -# 3. Echo "✅ Production ready loop validated" - -# Expected output: -# ✅ Architecture modulaire selective validée -# ✅ Architecture modulaire adversarial validée -# ✅ Google Sheets connectivity OK -# ✅ 15 personnalités chargées -# ✅ All core modules available -# 🎯 PRODUCTION READY LOOP ✅ +# Google Sheets +node -e "require('./lib/BrainConfig').getPersonalities().then(p => console.log(\`\${p.length} personalities\`))" ``` ## Architecture Overview @@ -120,277 +60,196 @@ The server operates in two mutually exclusive modes controlled by `lib/modes/Mod - **MANUAL Mode** (`lib/modes/ManualServer.js`): Web interface, API endpoints, WebSocket for real-time logs - **AUTO Mode** (`lib/modes/AutoProcessor.js`): Batch processing from Google Sheets without web interface -### 🆕 Flexible Pipeline System (NEW) -**Revolutionary architecture** allowing custom, reusable workflows with complete flexibility: +### 🆕 Advanced Configuration Systems -#### Components -- **Pipeline Builder** (`public/pipeline-builder.html`): Visual drag-and-drop interface -- **Pipeline Runner** (`public/pipeline-runner.html`): Execute saved pipelines with progress tracking -- **Pipeline Executor** (`lib/pipeline/PipelineExecutor.js`): Execution engine -- **Pipeline Templates** (`lib/pipeline/PipelineTemplates.js`): 10 predefined templates -- **Pipeline Definition** (`lib/pipeline/PipelineDefinition.js`): Schemas & validation -- **Config Manager** (`lib/ConfigManager.js`): Extended with pipeline CRUD operations +#### Dynamic Prompt Engine (`lib/prompt-engine/`) +Génération dynamique de prompts adaptatifs avec composition 
multi-niveaux: +- **Templates**: technical, style, adversarial avec variables dynamiques +- **Context analyzers**: Analyse automatique pour adaptation prompts +- **Variable injection**: Remplacement intelligent de variables contextuelles -#### Key Features -✅ **Any module order**: generation → selective → adversarial → human → pattern (fully customizable) -✅ **Multi-pass support**: Apply same module multiple times with different intensities -✅ **Per-step configuration**: mode, intensity (0.1-2.0), custom parameters -✅ **Checkpoint saving**: Optional checkpoints between steps for debugging -✅ **Template-based**: Start from 10 templates or build from scratch -✅ **Complete validation**: Real-time validation with detailed error messages -✅ **Duration estimation**: Estimate total execution time before running +#### Trend Manager (`lib/trend-prompts/`) +Gestion de tendances configurables pour moduler les prompts: +- **Tendances sectorielles**: eco-responsable (durabilité), tech-innovation (digitalisation), artisanal-premium (savoir-faire) +- **Tendances générationnelles**: generation-z (inclusif/viral), millennials (authenticité), seniors (tradition) +- **Configuration**: targetTerms, focusAreas, tone, values appliqués sélectivement -#### Available Templates -- `minimal-test`: 1 step (15s) - Quick testing -- `light-fast`: 2 steps (35s) - Basic generation -- `standard-seo`: 4 steps (75s) - Balanced protection -- `premium-seo`: 6 steps (130s) - High quality + anti-detection -- `heavy-guard`: 8 steps (180s) - Maximum protection -- `personality-focus`: 4 steps (70s) - Enhanced personality style -- `fluidity-master`: 4 steps (73s) - Natural transitions focus -- `adaptive-smart`: 5 steps (105s) - Intelligent adaptive modes -- `gptzero-killer`: 6 steps (155s) - GPTZero-specific bypass -- `originality-bypass`: 6 steps (160s) - Originality.ai-specific bypass +#### Workflow Engine (`lib/workflow-configuration/`) +Séquences modulaires configurables - 5 workflows prédéfinis: +- 
**default**: Selective → Adversarial → Human → Pattern (standard workflow)
+- **human-first**: Human → Pattern → Selective → Pattern (humanization first)
+- **stealth-intensive**: Pattern → Adversarial → Human → Pattern → Adversarial (maximum anti-detection)
+- **quality-first**: Selective → Human → Selective → Pattern (quality first)
+- **balanced**: Selective → Human → Adversarial → Pattern → Selective (balanced)
-#### API Endpoints
-```
-POST /api/pipeline/save # Save pipeline definition
-GET /api/pipeline/list # List all saved pipelines
-GET /api/pipeline/:name # Load specific pipeline
-DELETE /api/pipeline/:name # Delete pipeline
-POST /api/pipeline/execute # Execute pipeline
-GET /api/pipeline/templates # Get all templates
-GET /api/pipeline/templates/:name # Get specific template
-GET /api/pipeline/modules # Get available modules
-POST /api/pipeline/validate # Validate pipeline structure
-POST /api/pipeline/estimate # Estimate duration/cost
-```
+Supports multi-pass (the same module applied several times) and variable intensity per step.
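Under the conventions above, a custom sequence with multi-pass and per-step intensity can be sketched as plain data. The module names and the helper below are illustrative assumptions, not the actual `lib/workflow-configuration/` API:

```javascript
// Hypothetical workflow sequence following the multi-pass /
// per-step-intensity conventions described above. Not the real
// WorkflowEngine API — a data-shape sketch only.
const customSequence = [
  { module: "selective", intensity: 1.0 },
  { module: "human", intensity: 1.2 },
  { module: "pattern", intensity: 0.8 },
  { module: "selective", intensity: 0.5 }, // same module twice = multi-pass
];

// Render the arrow notation used throughout this document.
function describeSequence(seq) {
  return seq.map((s) => s.module).join(" → ");
}

console.log(describeSequence(customSequence)); // selective → human → pattern → selective
```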
-#### Example Pipeline Definition -```javascript -{ - name: "Custom Premium Pipeline", - description: "Multi-pass anti-detection with personality focus", - pipeline: [ - { step: 1, module: "generation", mode: "simple", intensity: 1.0 }, - { step: 2, module: "selective", mode: "fullEnhancement", intensity: 1.0 }, - { step: 3, module: "adversarial", mode: "heavy", intensity: 1.2, - parameters: { detector: "gptZero", method: "regeneration" } }, - { step: 4, module: "human", mode: "personalityFocus", intensity: 1.5 }, - { step: 5, module: "pattern", mode: "syntaxFocus", intensity: 1.1 }, - { step: 6, module: "adversarial", mode: "adaptive", intensity: 1.3, - parameters: { detector: "originality", method: "hybrid" } } - ], - metadata: { - author: "user", - created: "2025-10-08", - version: "1.0", - tags: ["premium", "multi-pass", "anti-detection"] - } -} -``` +#### Batch Processing (`lib/batch/`) +Système complet de traitement batch: +- **BatchController**: API endpoints (config, start, stop, status) +- **BatchProcessor**: Queue management, gestion d'erreurs, progression temps réel +- **DigitalOceanTemplates**: 10+ templates XML prédéfinis +- **Configuration**: rowRange, trendId, workflowSequence, saveIntermediateSteps -#### Backward Compatibility -The flexible pipeline system coexists with the legacy modular workflow system: -- **New way**: Use `pipelineConfig` parameter in `handleFullWorkflow()` -- **Old way**: Use `selectiveStack`, `adversarialMode`, `humanSimulationMode`, `patternBreakingMode` -- Both are fully supported and can be used interchangeably +### 🆕 Flexible Pipeline System +Architecture révolutionnaire permettant des workflows personnalisés et réutilisables: -### Core Workflow Pipeline (lib/Main.js) -1. **Data Preparation** - Read from Google Sheets (CSV data + XML templates) -2. **Element Extraction** - Parse XML elements with embedded instructions -3. **Missing Keywords Generation** - Auto-complete missing data using LLMs -4. 
**Direct Content Generation** - Generate all content elements in parallel -5. **Multi-LLM Enhancement** - 4-stage processing pipeline across different LLM providers -6. **Content Assembly** - Inject generated content back into XML structure -7. **Organic Compilation & Storage** - Save clean text to Google Sheets +**Composants**: +- `public/pipeline-builder.html` - Interface drag-and-drop visuelle +- `public/pipeline-runner.html` - Exécution avec tracking progressif +- `lib/pipeline/PipelineExecutor.js` - Moteur d'exécution +- `lib/pipeline/PipelineTemplates.js` - 10 templates prédéfinis -### Google Sheets Integration -- **Authentication**: Via `GOOGLE_SERVICE_ACCOUNT_EMAIL` and `GOOGLE_PRIVATE_KEY` environment variables -- **Data Sources**: - - `Instructions` sheet: Columns A-I (slug, T0, MC0, T-1, L-1, MC+1, T+1, L+1, XML template) - - `Personnalites` sheet: 15 AI personalities for content variety - - `Generated_Articles` sheet: Final compiled text output with metadata +**10 Templates disponibles**: +- `minimal-test` (1 step, 15s) - Tests rapides +- `light-fast` (2 steps, 35s) - Génération basique +- `standard-seo` (4 steps, 75s) - Protection équilibrée +- `premium-seo` (6 steps, 130s) - Qualité + anti-détection +- `heavy-guard` (8 steps, 180s) - Protection maximale +- `gptzero-killer` (6 steps, 155s) - Spécialisé anti-GPTZero +- `originality-bypass` (6 steps, 160s) - Spécialisé anti-Originality.ai -### Multi-LLM Modular Enhancement System -**Architecture 100% Modulaire** avec sauvegarde versionnée : +**Fonctionnalités clés**: +- Ordre de modules entièrement personnalisable +- Multi-pass support (même module plusieurs fois) +- Configuration par étape (mode, intensity 0.1-2.0, paramètres custom) +- Sauvegarde checkpoints optionnels pour debugging +- Validation temps réel avec messages d'erreur détaillés +- Estimation durée/coût avant exécution -#### **Workflow Principal** (lib/Main.js) -1. 
**Data Preparation** - Read from Google Sheets (CSV data + XML templates) -2. **Element Extraction** - Parse XML elements with embedded instructions -3. **Missing Keywords Generation** - Auto-complete missing data using LLMs -4. **Simple Generation** - Generate base content with Claude -5. **Selective Enhancement** - Couches modulaires configurables -6. **Adversarial Enhancement** - Anti-détection modulaire -7. **Human Simulation** - Erreurs humaines réalistes -8. **Pattern Breaking** - Cassage patterns LLM -9. **Content Assembly & Storage** - Final compilation avec versioning +**Structure Pipeline**: JSON avec steps (module, mode, intensity, parameters optionnels), metadata (author, version, tags) -#### **Couches Modulaires Disponibles** -- **5 Selective Stacks** : lightEnhancement → fullEnhancement → adaptive -- **5 Adversarial Modes** : none → light → standard → heavy → adaptive -- **6 Human Simulation Modes** : none → lightSimulation → personalityFocus → adaptive -- **7 Pattern Breaking Modes** : none → syntaxFocus → connectorsFocus → adaptive +**API Endpoints**: `/api/pipeline/{save,list,execute,validate,estimate}` +**Backward compatible**: `pipelineConfig` (nouveau) et `selectiveStack/adversarialMode` (ancien) supportés -#### **Sauvegarde Versionnée** -- **v1.0** : Génération initiale Claude -- **v1.1** : Post Selective Enhancement -- **v1.2** : Post Adversarial Enhancement -- **v1.3** : Post Human Simulation -- **v1.4** : Post Pattern Breaking -- **v2.0** : Version finale +### Core Workflow Pipeline -Supported LLM providers: Claude, OpenAI, Gemini, Deepseek, Moonshot, Mistral +**7 étapes principales** (lib/Main.js): +1. **Data Preparation** - Lecture Google Sheets (CSV data + XML templates) +2. **Element Extraction** - Parse XML avec instructions {{variables}} vs {prompts} +3. **Missing Keywords Generation** - Auto-complétion données manquantes via LLMs +4. **Content Generation** - Génération base contenu en parallèle +5. 
**Multi-LLM Enhancement** - 4 couches modulaires (Selective → Adversarial → Human → Pattern) +6. **Content Assembly** - Injection contenu dans structure XML +7. **Organic Compilation & Storage** - Sauvegarde texte clean dans Google Sheets -#### **Tests d'Intégration Exhaustifs (Nouveau)** -Les TI exhaustifs (`npm run test:comprehensive`) testent **22 combinaisons modulaires complètes** : +**Google Sheets Integration**: +- **Instructions** (colonnes A-I): slug, T0, MC0, T-1, L-1, MC+1, T+1, L+1, XML template +- **Personnalites** (15 personnalités): Marc, Sophie, Laurent, Julie, Kévin, Amara, Mamadou, Émilie, Pierre-Henri, Yasmine, Fabrice, Chloé, Linh, Minh, Thierry +- **Generated_Articles**: Output texte final + metadata complète -**Selective Stacks Testés (5)** : -- `lightEnhancement` : 1 couche OpenAI technique -- `standardEnhancement` : 2 couches OpenAI + Gemini -- `fullEnhancement` : 3 couches multi-LLM complet -- `personalityFocus` : Style Mistral prioritaire -- `fluidityFocus` : Transitions Gemini prioritaires +**Modular Enhancement Layers** (Architecture 100% modulaire): +- **5 Selective Stacks**: + - `lightEnhancement` (1 couche OpenAI technique) + - `standardEnhancement` (2 couches OpenAI + Gemini) + - `fullEnhancement` (3 couches multi-LLM) + - `personalityFocus` (style Mistral prioritaire) + - `adaptive` (sélection intelligente) -**Adversarial Modes Testés (4)** : -- `general + regeneration` : Anti-détection standard -- `gptZero + regeneration` : Anti-GPTZero spécialisé -- `originality + hybrid` : Anti-Originality.ai -- `general + enhancement` : Méthode douce +- **5 Adversarial Modes**: + - `none` → `light` → `standard` → `heavy` → `adaptive` + - Détecteurs: GPTZero, Originality.ai, général + - Méthodes: enhancement, regeneration, hybrid -**Pipelines Combinés Testés (5)** : -- Light → Adversarial -- Standard → Adversarial Intense -- Full → Multi-Adversarial -- Personality → GPTZero -- Fluidity → Originality +- **6 Human Simulation Modes**: + - `none` → 
`lightSimulation` → `standardSimulation` → `heavySimulation` → `personalityFocus` → `adaptive` + - FatiguePatterns, PersonalityErrors, TemporalStyles -**Tests Performance & Intensités (8)** : -- Intensités variables (0.5 → 1.2) -- Méthodes multiples (enhancement/regeneration/hybrid) -- Benchmark pipeline complet avec métriques +- **7 Pattern Breaking Modes**: + - `none` → `syntaxFocus` → `connectorsFocus` → `structureFocus` → `styleFocus` → `comprehensiveFocus` → `adaptive` + - LLMFingerprints removal, SyntaxVariations, NaturalConnectors -### Personality System (lib/BrainConfig.js:265-340) -**Random Selection Process**: -1. Load 15 personalities from Google Sheets -2. Fisher-Yates shuffle for true randomness -3. Select 60% (9 personalities) per generation -4. AI chooses best match within random subset -5. Temperature = 1.0 for maximum variability +**Versioned Saves**: +v1.0 (génération initiale) → v1.1 (post selective) → v1.2 (post adversarial) → v1.3 (post human) → v1.4 (post pattern) → v2.0 (version finale) -**15 Available Personalities**: Marc (technical), Sophie (déco), Laurent (commercial), Julie (architecture), Kévin (terrain), Amara (engineering), Mamadou (artisan), Émilie (digital), Pierre-Henri (heritage), Yasmine (greentech), Fabrice (metallurgy), Chloé (content), Linh (manufacturing), Minh (design), Thierry (creole) +**LLM Providers**: +Claude (Anthropic), OpenAI (GPT-4), Gemini (Google), Deepseek, Moonshot, Mistral - **5/6 opérationnels** (Gemini peut être géo-bloqué) + +**Personality System**: +Random selection - 60% des 15 personnalités par génération, Fisher-Yates shuffle pour vraie randomisation, Temperature=1.0 pour variabilité maximale ## Centralized Logging System (LogSh) -### Architecture -- **All logging must go through `logSh()` function** in `lib/ErrorReporting.js` -- **Multi-output streams**: Console (formatted) + File (JSON) + WebSocket (real-time) -- **Never use `console.*` or other loggers directly** - -### Log Levels and Usage -- 
**TRACE**: Hierarchical workflow execution with parameters (▶ ✔ ✖ symbols) -- **DEBUG**: Detailed debugging information (visible in files with debug level) -- **INFO**: Standard operational messages -- **WARN**: Warning conditions -- **ERROR**: Error conditions with stack traces - -### File Logging -- **Format**: JSON structured logs in timestamped files -- **Location**: logs/seo-generator-YYYY-MM-DD_HH-MM-SS.log -- **Flush behavior**: Immediate flush on every log call to prevent buffer loss -- **Level**: DEBUG and above (includes all TRACE logs) - -### Trace System -- **Hierarchical execution tracking**: Using AsyncLocalStorage for span context -- **Function parameters**: All tracer.run() calls include relevant parameters -- **Format**: Function names with file prefixes (e.g., "Main.handleFullWorkflow()") -- **Performance timing**: Start/end with duration measurements -- **Error handling**: Automatic stack trace logging on failures - -### Log Consultation (LogViewer) -Les logs ne sont plus envoyés en console.log (trop verbeux). Tous les événements sont enregistrés dans logs/app.log au format **JSONL Pino**. 
- -Un outil `tools/logViewer.js` permet d'interroger facilement ce fichier: +**Architecture**: All logging via `logSh()` (lib/ErrorReporting.js) - Multi-output (Console + File + WebSocket) +**Levels**: TRACE (workflow), DEBUG, INFO, WARN, ERROR +**Format**: JSON structured logs (logs/seo-generator-YYYY-MM-DD_HH-MM-SS.log), JSONL Pino (logs/app.log) +**Trace**: AsyncLocalStorage hierarchical tracking with performance timing +**Log Viewer** (`tools/logViewer.js`): ```bash -# Voir les 200 dernières lignes formatées -node tools/logViewer.js --pretty - -# Rechercher un mot-clé dans les messages -node tools/logViewer.js --search --includes "Claude" --pretty - -# Rechercher par plage de temps (tous les logs du 2 septembre 2025) -node tools/logViewer.js --since 2025-09-02T00:00:00Z --until 2025-09-02T23:59:59Z --pretty - -# Filtrer par niveau d'erreur -node tools/logViewer.js --last 300 --level ERROR --pretty +node tools/logViewer.js --pretty # Dernières 200 lignes +node tools/logViewer.js --includes "Claude" --pretty # Recherche mot-clé +node tools/logViewer.js --level ERROR --pretty # Filtrer erreurs ``` -**Filtres disponibles**: -- `--level`: 30=INFO, 40=WARN, 50=ERROR (ou INFO, WARN, ERROR) -- `--module`: filtre par path ou module -- `--includes`: mot-clé dans msg -- `--regex`: expression régulière sur msg -- `--since / --until`: bornes temporelles (ISO ou YYYY-MM-DD) - -### Real-time Log Viewing -- **WebSocket server** on port 8081 -- **Auto-launched** `tools/logs-viewer.html` in Edge browser -- **Features**: Search, level filtering, scroll preservation +**Real-time**: WebSocket port 8081, auto-launch `tools/logs-viewer.html` in browser ## Key Components -### lib/Main.js -**Architecture Modulaire Complète** - Orchestration workflow avec pipeline configurable et sauvegarde versionnée. 
+### Core Orchestration +- **`lib/Main.js`** - Orchestration workflow complète avec pipeline configurable et sauvegarde versionnée (v1.0 → v2.0) +- **`lib/APIController.js`** - Contrôleur API RESTful centralisant toute la logique métier: + - CRUD articles, projets, templates + - Intégration DynamicPromptEngine, TrendManager, WorkflowEngine + - Endpoints monitoring (health, metrics, personalities) +- **`lib/ConfigManager.js`** - Gestionnaire configurations modulaires et pipelines: + - Sauvegarde/chargement JSON dans `configs/` et `configs/pipelines/` + - Validation et versioning automatique + - API complète pour manipulation configs -### lib/selective-enhancement/ -**Couches Selective Modulaires** : -- `SelectiveCore.js` - Application couche par couche -- `SelectiveLayers.js` - 5 stacks prédéfinis + adaptatif -- `TechnicalLayer.js` - Enhancement technique OpenAI -- `TransitionLayer.js` - Enhancement transitions Gemini -- `StyleLayer.js` - Enhancement style Mistral -- `SelectiveUtils.js` - Utilitaires + génération simple (remplace ContentGeneration.js) +### Enhancement Modules (Architecture Modulaire) +- **`lib/selective-enhancement/`** - Couches enhancement sélectives: + - `SelectiveCore.js` - Application couche par couche + - `SelectiveLayers.js` - 5 stacks prédéfinis + adaptatif + - `TechnicalLayer.js` - Enhancement technique OpenAI + - `TransitionLayer.js` - Enhancement transitions Gemini + - `StyleLayer.js` - Enhancement style Mistral + - `SelectiveUtils.js` - Utilitaires + génération simple -### lib/adversarial-generation/ -**Anti-détection Modulaire** : -- `AdversarialCore.js` - Moteur adversarial principal -- `AdversarialLayers.js` - 5 modes défense configurables -- `DetectorStrategies.js` - Stratégies anti-détection interchangeables +- **`lib/adversarial-generation/`** - Anti-détection modulaire: + - `AdversarialCore.js` - Moteur adversarial principal + - `AdversarialLayers.js` - 5 modes défense configurables + - `DetectorStrategies.js` - Stratégies 
anti-détection interchangeables (GPTZero, Originality.ai) -### lib/human-simulation/ -**Simulation Erreurs Humaines** : -- `HumanSimulationCore.js` - Moteur simulation principal -- `HumanSimulationLayers.js` - 6 modes simulation -- `FatiguePatterns.js` - Patterns fatigue réalistes -- `PersonalityErrors.js` - Erreurs spécifiques personnalité -- `TemporalStyles.js` - Variations temporelles +- **`lib/human-simulation/`** - Simulation erreurs humaines réalistes: + - `HumanSimulationCore.js` - Moteur simulation principal + - `HumanSimulationLayers.js` - 6 modes simulation + - `FatiguePatterns.js` - Patterns fatigue réalistes + - `PersonalityErrors.js` - Erreurs spécifiques personnalité + - `TemporalStyles.js` - Variations temporelles -### lib/pattern-breaking/ -**Cassage Patterns LLM** : -- `PatternBreakingCore.js` - Moteur pattern breaking -- `PatternBreakingLayers.js` - 7 modes cassage -- `LLMFingerprints.js` - Suppression empreintes LLM -- `SyntaxVariations.js` - Variations syntaxiques -- `NaturalConnectors.js` - Connecteurs naturels +- **`lib/pattern-breaking/`** - Cassage patterns LLM: + - `PatternBreakingCore.js` - Moteur pattern breaking + - `PatternBreakingLayers.js` - 7 modes cassage + - `LLMFingerprints.js` - Suppression empreintes LLM + - `SyntaxVariations.js` - Variations syntaxiques + - `NaturalConnectors.js` - Connecteurs naturels -### lib/post-processing/ -**Post-traitement Legacy** (remplacé par modules ci-dessus) +### Advanced Systems (Nouveaux - Sept 2025) +- **`lib/prompt-engine/`** - DynamicPromptEngine: + - Templates modulaires (technical, style, adversarial) + - Context analyzers et adaptive rules + - Composition multi-niveaux avec variables dynamiques -### lib/LLMManager.js -Multi-LLM provider management with retry logic, rate limiting, and provider rotation. 
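The retry-with-rotation behavior described for `lib/LLMManager.js` can be sketched generically. The provider interface below is an assumption for illustration, not the module's real API:

```javascript
// Minimal sketch of retry logic with provider rotation, as described
// for lib/LLMManager.js. The { call } provider shape is hypothetical.
async function callWithRotation(providers, prompt, maxAttempts = providers.length) {
  let lastError;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    // Rotate to the next provider on each retry.
    const provider = providers[attempt % providers.length];
    try {
      return await provider.call(prompt);
    } catch (err) {
      lastError = err; // remember the failure, try the next provider
    }
  }
  throw lastError; // every provider failed within the attempt budget
}
```

A rate limiter would typically wrap `provider.call` as well; it is omitted here to keep the rotation logic visible.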
+- **`lib/trend-prompts/`** - TrendManager: + - 6+ tendances prédéfinies (sectorielles + générationnelles) + - Configuration par tendance (targetTerms, focusAreas, tone, values) -### lib/BrainConfig.js -Google Sheets integration, personality system, and random selection algorithms. +- **`lib/workflow-configuration/`** - WorkflowEngine: + - 5 workflows prédéfinis configurables + - Support iterations multiples et intensité variable -### lib/ElementExtraction.js -XML parsing and element extraction with instruction parsing ({{variables}} vs {instructions}). +- **`lib/batch/`** - Batch Processing System: + - BatchController (API endpoints) + - BatchProcessor (queue, monitoring) + - DigitalOceanTemplates (10+ templates XML) -### lib/ArticleStorage.js -Organic text compilation maintaining natural hierarchy and Google Sheets storage. - -### lib/ErrorReporting.js -Centralized logging system with hierarchical tracing and multi-output streams. +### Utilities +- **`lib/LLMManager.js`** - Gestion multi-LLM providers avec retry logic, rate limiting, provider rotation +- **`lib/BrainConfig.js`** - Intégration Google Sheets + système personnalités (random selection, Fisher-Yates) +- **`lib/ElementExtraction.js`** - Parsing XML avec distinction {{variables}} vs {instructions} +- **`lib/ArticleStorage.js`** - Compilation texte organique + stockage Google Sheets +- **`lib/ErrorReporting.js`** - Logging centralisé via `logSh()`, hierarchical tracing AsyncLocalStorage ## Environment Configuration @@ -435,49 +294,53 @@ node tools/audit-unused.cjs # Report dead files and unused exports ## Important Development Notes -- **Architecture 100% Modulaire**: Ancien système séquentiel supprimé, backup dans `/backup/sequential-system/` -- **Configuration Granulaire**: Chaque couche modulaire indépendamment configurable -- **Sauvegarde Versionnée**: v1.0 → v1.1 → v1.2 → v1.3 → v1.4 → v2.0 pour traçabilité complète -- **Compatibility Layer**: Interface `handleFullWorkflow()` maintenue pour 
rétrocompatibilité -- **Personality system uses randomization**: 60% of 15 personalities selected per generation run -- **All data sourced from Google Sheets**: No hardcoded JSON files or static data -- **Default XML templates**: Auto-generated when column I contains filenames -- **Organic compilation**: Maintains natural text flow in final output -- **Temperature = 1.0**: Ensures maximum variability in AI responses -- **Trace system**: Uses AsyncLocalStorage for hierarchical execution tracking -- **5/6 LLM providers operational**: Gemini may be geo-blocked in some regions +**Architecture**: 100% modulaire, configuration granulaire, versioned saves (v1.0→v2.0), compatibility layer `handleFullWorkflow()` -### **Migration Legacy → Modulaire** -- ❌ **Supprimé**: `lib/ContentGeneration.js` + `lib/generation/` (pipeline séquentiel fixe) -- ✅ **Remplacé par**: Modules selective/adversarial/human-simulation/pattern-breaking -- ✅ **Avantage**: Flexibilité totale, stacks adaptatifs, parallélisation possible +**New Systems (Sept 2025)**: DynamicPromptEngine, TrendManager (6+ trends), WorkflowEngine (5 workflows), BatchProcessing, ConfigManager, APIController, 11 web interfaces + +**Data**: Google Sheets source (no hardcoded JSON), 15 personalities (60% random selection, Fisher-Yates, temp=1.0), organic compilation, XML templates auto-generated + +**Monitoring**: AsyncLocalStorage tracing, 5/6 LLM providers, RESTful API (pagination/filters), WebSocket real-time logs + health/metrics + +**Migration Legacy→Modulaire**: ❌ `lib/ContentGeneration.js` + `lib/generation/` → ✅ selective/adversarial/human-simulation/pattern-breaking modules (flexibilité totale, stacks adaptatifs, parallélisation) + +## Web Interfaces (MANUEL Mode) + +### Interfaces de Production +- **`public/index.html`** - Dashboard principal avec contrôles workflow +- **`public/production-runner.html`** - Exécution workflows production depuis Google Sheets +- **`public/pipeline-builder.html`** - Constructeur visuel 
de pipelines drag-and-drop
+- **`public/pipeline-runner.html`** - Runs saved pipelines with progress tracking
+- **`public/config-editor.html`** - Editor for modular configurations
+
+### Test and Development Interfaces
+- **`public/batch-dashboard.html`** - Batch-processing dashboard with configuration
+- **`public/batch-interface.html`** - Batch interface with granular control
+- **`public/prompt-engine-interface.html`** - DynamicPromptEngine test interface
+- **`public/modular-pipeline-demo.html`** - Modular pipeline system demo
+- **`public/step-by-step.html`** - Step-by-step execution for debugging
+- **`public/test-modulaire.html`** - Manual module tests
+
+## RESTful API Endpoints
+
+**Articles/Projects/Templates**: Full CRUD (GET, POST, PUT, DELETE) - `/api/articles/*`, `/api/projects/*`, `/api/templates/*`
+**Monitoring**: `/api/health`, `/api/metrics`, `/api/config/personalities`
+**Batch**: `/api/batch/{config,start,stop,status}`
+**Pipeline**: `/api/pipeline/{save,list,execute,validate,estimate}`
+**Advanced**: `/api/prompt-engine/generate`, `/api/trends/*`, `/api/workflows/*`
+
+See `API.md` for complete documentation with examples.
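As a rough illustration of the payload shape these pipeline endpoints consume, a client-side pre-check mirroring `/api/pipeline/validate` could look like this. The module list and rules are inferred from this document; the server's actual validation logic may differ:

```javascript
// Hedged sketch: local pre-validation of a pipeline definition before
// POSTing it to /api/pipeline/validate. Rules inferred from this doc
// (known modules, intensity 0.1–2.0) — not the server's real code.
const KNOWN_MODULES = ["generation", "selective", "adversarial", "human", "pattern"];

function prevalidatePipeline(def) {
  const errors = [];
  if (!def.name) errors.push("missing name");
  (def.pipeline || []).forEach((step, i) => {
    if (!KNOWN_MODULES.includes(step.module)) {
      errors.push(`step ${i + 1}: unknown module ${step.module}`);
    }
    if (step.intensity < 0.1 || step.intensity > 2.0) {
      errors.push(`step ${i + 1}: intensity out of range (0.1-2.0)`);
    }
  });
  return errors; // empty array means the definition looks sendable
}
```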
## File Structure -- `server.js` - Express server entry point with mode selection -- `lib/Main.js` - Core workflow orchestration -- `lib/modes/` - Mode management (Manual/Auto) -- `lib/BrainConfig.js` - Google Sheets integration + personality system -- `lib/LLMManager.js` - Multi-LLM provider management -- `lib/ContentGeneration.js` - Content generation and enhancement pipeline -- `lib/ElementExtraction.js` - XML parsing and element extraction -- `lib/ArticleStorage.js` - Content compilation and Google Sheets storage -- `lib/ErrorReporting.js` - Centralized logging and error handling -- `tools/` - Development utilities (log viewer, bundler, audit) -- `tests/` - Comprehensive test suite with multiple categories -- `.env` - Environment configuration (Google credentials, API keys) +**Core**: `server.js`, `lib/Main.js`, `lib/APIController.js`, `lib/ConfigManager.js`, `lib/modes/`, `lib/BrainConfig.js`, `lib/LLMManager.js` +**Enhancement**: `lib/selective-enhancement/`, `lib/adversarial-generation/`, `lib/human-simulation/`, `lib/pattern-breaking/` +**Advanced**: `lib/prompt-engine/`, `lib/trend-prompts/`, `lib/workflow-configuration/`, `lib/batch/`, `lib/pipeline/` +**Utilities**: `lib/ElementExtraction.js`, `lib/ArticleStorage.js`, `lib/ErrorReporting.js` +**Assets**: `public/` (11 web interfaces), `configs/` (saved configs/pipelines), `tools/` (logViewer, bundler, audit), `tests/` (comprehensive test suite), `.env` (credentials) -## Key Dependencies -- `googleapis` - Google Sheets API integration -- `axios` - HTTP client for LLM APIs -- `dotenv` - Environment variable management -- `express` - Web server framework -- `nodemailer` - Email notifications (needs setup) - -## Workflow Sources -- `production` - Real Google Sheets data processing -- `test_random_personality` - Testing with personality randomization -- `node_server` - Direct API processing -- Legacy: make_com, digital_ocean_autonomous +## Dependencies & Workflow Sources +**Deps**: googleapis, axios, dotenv, 
express, nodemailer
+**Sources**: production (Google Sheets), test_random_personality, node_server

## Git Push Configuration
If the push fails with "Connection closed port 22", use SSH over port 443:
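The usual way to do this (the standard GitHub-over-443 workaround, not taken from this repository's config) is an `~/.ssh/config` entry:

```
Host github.com
  HostName ssh.github.com
  User git
  Port 443
```

Afterwards, `ssh -T git@github.com` should confirm connectivity before retrying the push.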