Update and compact CLAUDE.md documentation

- Add documentation for new advanced systems (DynamicPromptEngine, TrendManager, WorkflowEngine, BatchProcessing)
- Document 11 web interfaces and complete RESTful API
- Compact from 726 to 354 lines (-51%) while maintaining clarity
- Restructure sections for better readability and navigation
- Add detailed workflow sequences and module explanations
- Update Key Components with enhanced modules architecture

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
StillHammer 2025-10-09 16:11:04 +08:00
parent 471058f731
commit afc4b9b2ff

CLAUDE.md (523 lines changed)

@@ -27,89 +27,29 @@ node -e "const main = require('./lib/Main'); main.handleFullWorkflow({ rowNumber
### Testing Commands
```bash
# Main test suites
npm run test:all              # Complete test suite
npm run test:production-loop  # Production-ready validation (recommended for CI/CD)
npm run test:comprehensive    # Exhaustive modular combinations (22 tests)
npm run test:basic            # Basic architecture validation

# Quick tests
npm run test:smoke            # Smoke tests
npm run test:llm              # LLM connectivity
npm run test:content          # Content generation
npm run test:integration      # Integration tests
```

### Quick System Tests
```bash
# Production workflow
node -e "require('./lib/Main').handleFullWorkflow({ rowNumber: 2, source: 'production' });"

# LLM connectivity
node -e "require('./lib/LLMManager').testLLMManager()"

# Google Sheets
node -e "require('./lib/BrainConfig').getPersonalities().then(p => console.log(\`\${p.length} personalities\`))"
```
## Architecture Overview
@@ -120,277 +60,196 @@ The server operates in two mutually exclusive modes controlled by `lib/modes/Mod
- **MANUAL Mode** (`lib/modes/ManualServer.js`): Web interface, API endpoints, WebSocket for real-time logs
- **AUTO Mode** (`lib/modes/AutoProcessor.js`): Batch processing from Google Sheets without web interface
### 🆕 Advanced Configuration Systems

#### Dynamic Prompt Engine (`lib/prompt-engine/`)
Dynamic generation of adaptive prompts with multi-level composition:
- **Templates**: technical, style, adversarial with dynamic variables
- **Context analyzers**: Automatic analysis to adapt prompts
- **Variable injection**: Smart replacement of contextual variables

#### Trend Manager (`lib/trend-prompts/`)
Configurable trends that modulate prompts:
- **Sector trends**: eco-responsable (sustainability), tech-innovation (digitalization), artisanal-premium (craftsmanship)
- **Generational trends**: generation-z (inclusive/viral), millennials (authenticity), seniors (tradition)
- **Configuration**: targetTerms, focusAreas, tone, values applied selectively

#### Workflow Engine (`lib/workflow-configuration/`)
Configurable modular sequences - 5 predefined workflows:
- **default**: Selective → Adversarial → Human → Pattern (standard workflow)
- **human-first**: Human → Pattern → Selective → Pattern (humanization first)
- **stealth-intensive**: Pattern → Adversarial → Human → Pattern → Adversarial (maximum anti-detection)
- **quality-first**: Selective → Human → Selective → Pattern (quality first)
- **balanced**: Selective → Human → Adversarial → Pattern → Selective (balanced)

Supports multiple passes (same module several times) and per-step intensity.

#### Batch Processing (`lib/batch/`)
Complete batch-processing system:
- **BatchController**: API endpoints (config, start, stop, status)
- **BatchProcessor**: Queue management, error handling, real-time progress
- **DigitalOceanTemplates**: 10+ predefined XML templates
- **Configuration**: rowRange, trendId, workflowSequence, saveIntermediateSteps
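The batch configuration fields listed above can be sketched as a plain object. The field names (rowRange, trendId, workflowSequence, saveIntermediateSteps) come from this document; the value shapes and the client-side check are assumptions for illustration:

```javascript
// Hypothetical batch configuration for the batch API.
// Field names are documented; value shapes are illustrative assumptions.
const batchConfig = {
  rowRange: { start: 2, end: 10 },        // Google Sheets rows to process
  trendId: 'eco-responsable',             // one of the predefined trends
  workflowSequence: 'stealth-intensive',  // one of the 5 predefined workflows
  saveIntermediateSteps: true             // keep v1.x checkpoints
};

// Minimal client-side sanity check before submitting the config.
function validateBatchConfig(cfg) {
  const errors = [];
  if (!cfg.rowRange || cfg.rowRange.start < 2) {
    errors.push('rowRange must start at row 2 (row 1 is the header)');
  }
  if (cfg.rowRange && cfg.rowRange.end < cfg.rowRange.start) {
    errors.push('rowRange.end must be >= rowRange.start');
  }
  if (typeof cfg.workflowSequence !== 'string') {
    errors.push('workflowSequence is required');
  }
  return errors;
}

console.log(validateBatchConfig(batchConfig)); // prints []
```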
### 🆕 Flexible Pipeline System
Revolutionary architecture enabling custom, reusable workflows:

**Components**:
- `public/pipeline-builder.html` - Visual drag-and-drop interface
- `public/pipeline-runner.html` - Execution with progress tracking
- `lib/pipeline/PipelineExecutor.js` - Execution engine
- `lib/pipeline/PipelineTemplates.js` - 10 predefined templates

**10 available templates**:
- `minimal-test` (1 step, 15s) - Quick testing
- `light-fast` (2 steps, 35s) - Basic generation
- `standard-seo` (4 steps, 75s) - Balanced protection
- `premium-seo` (6 steps, 130s) - Quality + anti-detection
- `heavy-guard` (8 steps, 180s) - Maximum protection
- `personality-focus` (4 steps, 70s) - Enhanced personality style
- `fluidity-master` (4 steps, 73s) - Natural transitions focus
- `adaptive-smart` (5 steps, 105s) - Intelligent adaptive modes
- `gptzero-killer` (6 steps, 155s) - GPTZero-specific bypass
- `originality-bypass` (6 steps, 160s) - Originality.ai-specific bypass

**Key features**:
- Fully customizable module order
- Multi-pass support (same module several times)
- Per-step configuration (mode, intensity 0.1-2.0, custom parameters)
- Optional checkpoint saves between steps for debugging
- Real-time validation with detailed error messages
- Duration/cost estimation before execution

**Pipeline structure**: JSON with steps (module, mode, intensity, optional parameters) and metadata (author, version, tags)

**API Endpoints**: `/api/pipeline/{save,list,execute,validate,estimate}`

**Backward compatible**: both `pipelineConfig` (new) and `selectiveStack`/`adversarialMode` (legacy) are supported
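A pipeline definition is plain JSON following the documented structure (numbered steps with module/mode/intensity plus optional parameters, and a metadata block):

```javascript
// Example pipeline definition (structure documented above).
const pipeline = {
  name: 'Custom Premium Pipeline',
  description: 'Multi-pass anti-detection with personality focus',
  pipeline: [
    { step: 1, module: 'generation', mode: 'simple', intensity: 1.0 },
    { step: 2, module: 'selective', mode: 'fullEnhancement', intensity: 1.0 },
    { step: 3, module: 'adversarial', mode: 'heavy', intensity: 1.2,
      parameters: { detector: 'gptZero', method: 'regeneration' } },
    { step: 4, module: 'human', mode: 'personalityFocus', intensity: 1.5 },
    { step: 5, module: 'pattern', mode: 'syntaxFocus', intensity: 1.1 }
  ],
  metadata: { author: 'user', version: '1.0', tags: ['premium', 'multi-pass'] }
};

// Intensities must stay within the documented 0.1-2.0 range.
const inRange = pipeline.pipeline.every(s => s.intensity >= 0.1 && s.intensity <= 2.0);
console.log(inRange); // prints true
```

This is the shape accepted by `POST /api/pipeline/save` and `POST /api/pipeline/validate`; the adversarial step shows how per-step `parameters` select a detector and method.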
### Core Workflow Pipeline

**7 main steps** (lib/Main.js):
1. **Data Preparation** - Read Google Sheets (CSV data + XML templates)
2. **Element Extraction** - Parse XML with {{variables}} vs {prompts} instructions
3. **Missing Keywords Generation** - Auto-complete missing data via LLMs
4. **Content Generation** - Generate base content in parallel
5. **Multi-LLM Enhancement** - 4 modular layers (Selective → Adversarial → Human → Pattern)
6. **Content Assembly** - Inject generated content back into the XML structure
7. **Organic Compilation & Storage** - Save clean text to Google Sheets

**Google Sheets Integration**:
- **Instructions** (columns A-I): slug, T0, MC0, T-1, L-1, MC+1, T+1, L+1, XML template
- **Personnalites** (15 personalities): Marc, Sophie, Laurent, Julie, Kévin, Amara, Mamadou, Émilie, Pierre-Henri, Yasmine, Fabrice, Chloé, Linh, Minh, Thierry
- **Generated_Articles**: Final text output + full metadata
**Modular Enhancement Layers** (100% modular architecture):
- **5 Selective Stacks**:
  - `lightEnhancement` (1 OpenAI technical layer)
  - `standardEnhancement` (2 layers: OpenAI + Gemini)
  - `fullEnhancement` (3 multi-LLM layers)
  - `personalityFocus` (Mistral style first)
  - `adaptive` (intelligent selection)
- **5 Adversarial Modes**:
  - `none``light``standard``heavy``adaptive`
  - Detectors: GPTZero, Originality.ai, general
  - Methods: enhancement, regeneration, hybrid
- **6 Human Simulation Modes**:
  - `none``lightSimulation``standardSimulation``heavySimulation``personalityFocus``adaptive`
  - FatiguePatterns, PersonalityErrors, TemporalStyles
- **7 Pattern Breaking Modes**:
  - `none``syntaxFocus``connectorsFocus``structureFocus``styleFocus``comprehensiveFocus``adaptive`
  - LLMFingerprints removal, SyntaxVariations, NaturalConnectors

**Versioned Saves**:
v1.0 (initial generation) → v1.1 (post selective) → v1.2 (post adversarial) → v1.3 (post human) → v1.4 (post pattern) → v2.0 (final version)
**LLM Providers**:
Claude (Anthropic), OpenAI (GPT-4), Gemini (Google), Deepseek, Moonshot, Mistral - **5/6 operational** (Gemini may be geo-blocked)

**Personality System**:
Random selection - 60% of the 15 personalities (9) per generation, Fisher-Yates shuffle for true randomness, temperature=1.0 for maximum variability, AI picks the best match within the random subset
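The random-subset step can be sketched as follows (a minimal sketch of the documented Fisher-Yates + 60% selection; the real code in `lib/BrainConfig.js` additionally lets the AI choose within the subset):

```javascript
// Fisher-Yates shuffle a copy of the pool, then keep 60% of it (9 of 15).
function selectRandomSubset(personalities, ratio = 0.6) {
  const pool = [...personalities];
  for (let i = pool.length - 1; i > 0; i--) {      // Fisher-Yates shuffle
    const j = Math.floor(Math.random() * (i + 1));
    [pool[i], pool[j]] = [pool[j], pool[i]];
  }
  return pool.slice(0, Math.round(pool.length * ratio));
}

const names = ['Marc', 'Sophie', 'Laurent', 'Julie', 'Kévin', 'Amara', 'Mamadou',
  'Émilie', 'Pierre-Henri', 'Yasmine', 'Fabrice', 'Chloé', 'Linh', 'Minh', 'Thierry'];
console.log(selectRandomSubset(names).length); // prints 9
```

Shuffling a copy keeps the original pool intact, so repeated generations draw independent 9-personality subsets.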
## Centralized Logging System (LogSh)

**Architecture**: All logging goes through `logSh()` (lib/ErrorReporting.js) - multi-output (console + file + WebSocket); never use `console.*` or other loggers directly
**Levels**: TRACE (hierarchical workflow execution), DEBUG, INFO, WARN, ERROR
**Format**: JSON structured logs (logs/seo-generator-YYYY-MM-DD_HH-MM-SS.log), JSONL Pino (logs/app.log)
**Trace**: AsyncLocalStorage hierarchical tracking with performance timing
**Log Viewer** (`tools/logViewer.js`):
```bash
node tools/logViewer.js --pretty                      # Last 200 lines, formatted
node tools/logViewer.js --includes "Claude" --pretty  # Keyword search
node tools/logViewer.js --level ERROR --pretty        # Filter errors
```
Available filters: `--level` (30=INFO, 40=WARN, 50=ERROR, or level names), `--module` (path or module), `--includes` (keyword in msg), `--regex` (regex on msg), `--since`/`--until` (ISO or YYYY-MM-DD bounds)

**Real-time**: WebSocket server on port 8081, auto-launches `tools/logs-viewer.html` in the browser (search, level filtering, scroll preservation)
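The multi-output rule above can be illustrated with a minimal fan-out sketch; the `logSh` name matches this document, but the signature and internals here are assumptions (the real implementation lives in `lib/ErrorReporting.js` and also streams to file and WebSocket):

```javascript
// Hypothetical minimal sketch of a logSh-style multi-output logger.
const outputs = [];

function logSh(level, msg, meta = {}) {
  const entry = { time: new Date().toISOString(), level, msg, ...meta };
  outputs.forEach(write => write(entry));  // fan out: console, file, WebSocket
  return entry;
}

const seen = [];
outputs.push(e => seen.push(e));           // stand-in for one output stream

logSh('INFO', 'workflow started', { module: 'Main' });
console.log(seen[0].level, seen[0].msg); // prints: INFO workflow started
```

Registering each sink as a writer function is what lets a single `logSh()` call feed the console, the JSON file, and the WebSocket viewer at once.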
## Key Components
### Core Orchestration
- **`lib/Main.js`** - Full workflow orchestration with configurable pipeline and versioned saves (v1.0 → v2.0)
- **`lib/APIController.js`** - RESTful API controller centralizing business logic:
  - CRUD for articles, projects, templates
  - DynamicPromptEngine, TrendManager, WorkflowEngine integration
  - Monitoring endpoints (health, metrics, personalities)
- **`lib/ConfigManager.js`** - Manager for modular configurations and pipelines:
  - JSON save/load in `configs/` and `configs/pipelines/`
  - Automatic validation and versioning
  - Full API for config manipulation

### Enhancement Modules (Modular Architecture)
- **`lib/selective-enhancement/`** - Selective enhancement layers:
  - `SelectiveCore.js` - Layer-by-layer application
  - `SelectiveLayers.js` - 5 predefined stacks + adaptive
  - `TechnicalLayer.js` - OpenAI technical enhancement
  - `TransitionLayer.js` - Gemini transition enhancement
  - `StyleLayer.js` - Mistral style enhancement
  - `SelectiveUtils.js` - Utilities + simple generation
- **`lib/adversarial-generation/`** - Modular anti-detection:
  - `AdversarialCore.js` - Main adversarial engine
  - `AdversarialLayers.js` - 5 configurable defense modes
  - `DetectorStrategies.js` - Interchangeable anti-detection strategies (GPTZero, Originality.ai)
- **`lib/human-simulation/`** - Realistic human-error simulation:
  - `HumanSimulationCore.js` - Main simulation engine
  - `HumanSimulationLayers.js` - 6 simulation modes
  - `FatiguePatterns.js` - Realistic fatigue patterns
  - `PersonalityErrors.js` - Personality-specific errors
  - `TemporalStyles.js` - Temporal variations
- **`lib/pattern-breaking/`** - LLM pattern breaking:
  - `PatternBreakingCore.js` - Pattern-breaking engine
  - `PatternBreakingLayers.js` - 7 breaking modes
  - `LLMFingerprints.js` - LLM fingerprint removal
  - `SyntaxVariations.js` - Syntax variations
  - `NaturalConnectors.js` - Natural connectors
### Advanced Systems (New - Sept 2025)
- **`lib/prompt-engine/`** - DynamicPromptEngine:
  - Modular templates (technical, style, adversarial)
  - Context analyzers and adaptive rules
  - Multi-level composition with dynamic variables
- **`lib/trend-prompts/`** - TrendManager:
  - 6+ predefined trends (sector + generational)
  - Per-trend configuration (targetTerms, focusAreas, tone, values)
- **`lib/workflow-configuration/`** - WorkflowEngine:
  - 5 configurable predefined workflows
  - Support for multiple iterations and variable intensity
- **`lib/batch/`** - Batch Processing System:
  - BatchController (API endpoints)
  - BatchProcessor (queue, monitoring)
  - DigitalOceanTemplates (10+ XML templates)

### Utilities
- **`lib/LLMManager.js`** - Multi-LLM provider management with retry logic, rate limiting, provider rotation
- **`lib/BrainConfig.js`** - Google Sheets integration + personality system (random selection, Fisher-Yates)
- **`lib/ElementExtraction.js`** - XML parsing distinguishing {{variables}} vs {instructions}
- **`lib/ArticleStorage.js`** - Organic text compilation + Google Sheets storage
- **`lib/ErrorReporting.js`** - Centralized logging via `logSh()`, hierarchical AsyncLocalStorage tracing
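The {{variables}} vs {instructions} distinction parsed by ElementExtraction can be sketched with a regex pass; the element syntax comes from this document's templates (e.g. `|Titre_Principal{{T0}}{Rédige un titre H1 accrocheur}|`), but the regexes here are an assumption, not the real parser:

```javascript
// Sketch: split {{variable}} placeholders from {instruction} prompts.
// The real parser lives in lib/ElementExtraction.js.
function extractTokens(template) {
  // {{NAME}} placeholders are data variables (T0, MC0, ...).
  const variables = [...template.matchAll(/\{\{(\w+)\}\}/g)].map(m => m[1]);
  // Strip {{...}} first so the single-brace prompts match cleanly.
  const stripped = template.replace(/\{\{\w+\}\}/g, '');
  const instructions = [...stripped.matchAll(/\{([^{}]+)\}/g)].map(m => m[1]);
  return { variables, instructions };
}

const element = '|Titre_Principal{{T0}}{Rédige un titre H1 accrocheur}|';
console.log(extractTokens(element));
// prints { variables: ['T0'], instructions: ['Rédige un titre H1 accrocheur'] }
```

Removing the double-brace tokens before scanning for single braces avoids the inner `{T0}` of `{{T0}}` being misread as an instruction.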
## Environment Configuration
@@ -435,49 +294,53 @@ node tools/audit-unused.cjs # Report dead files and unused exports
## Important Development Notes
**Architecture**: 100% modular, granular per-layer configuration, versioned saves (v1.0→v2.0), compatibility layer `handleFullWorkflow()`

**New Systems (Sept 2025)**: DynamicPromptEngine, TrendManager (6+ trends), WorkflowEngine (5 workflows), BatchProcessing, ConfigManager, APIController, 11 web interfaces

**Data**: Google Sheets as the only source (no hardcoded JSON), 15 personalities (60% random selection, Fisher-Yates, temp=1.0), organic compilation, XML templates auto-generated when column I contains filenames

**Monitoring**: AsyncLocalStorage tracing, 5/6 LLM providers operational (Gemini may be geo-blocked), RESTful API (pagination/filters), WebSocket real-time logs + health/metrics

**Legacy → Modular migration**: ❌ `lib/ContentGeneration.js` + `lib/generation/` (fixed sequential pipeline, backup in `/backup/sequential-system/`) → ✅ selective/adversarial/human-simulation/pattern-breaking modules (total flexibility, adaptive stacks, parallelization)
## Web Interfaces (MANUAL Mode)
### Production Interfaces
- **`public/index.html`** - Main dashboard with workflow controls
- **`public/production-runner.html`** - Run production workflows from Google Sheets
- **`public/pipeline-builder.html`** - Visual drag-and-drop pipeline builder
- **`public/pipeline-runner.html`** - Run saved pipelines with tracking
- **`public/config-editor.html`** - Modular configuration editor

### Test & Development Interfaces
- **`public/batch-dashboard.html`** - Batch-processing dashboard with configuration
- **`public/batch-interface.html`** - Batch interface with granular control
- **`public/prompt-engine-interface.html`** - DynamicPromptEngine test interface
- **`public/modular-pipeline-demo.html`** - Modular pipeline system demo
- **`public/step-by-step.html`** - Step-by-step execution for debugging
- **`public/test-modulaire.html`** - Manual module tests
## RESTful API Endpoints
**Articles/Projects/Templates**: Full CRUD (GET, POST, PUT, DELETE) - `/api/articles/*`, `/api/projects/*`, `/api/templates/*`
**Monitoring**: `/api/health`, `/api/metrics`, `/api/config/personalities`
**Batch**: `/api/batch/{config,start,stop,status}`
**Pipeline**: `/api/pipeline/{save,list,execute,validate,estimate}`
**Advanced**: `/api/prompt-engine/generate`, `/api/trends/*`, `/api/workflows/*`
See `API.md` for complete documentation with examples.
## File Structure
**Core**: `server.js` (Express entry point with mode selection), `lib/Main.js`, `lib/APIController.js`, `lib/ConfigManager.js`, `lib/modes/`, `lib/BrainConfig.js`, `lib/LLMManager.js`
**Enhancement**: `lib/selective-enhancement/`, `lib/adversarial-generation/`, `lib/human-simulation/`, `lib/pattern-breaking/`
**Advanced**: `lib/prompt-engine/`, `lib/trend-prompts/`, `lib/workflow-configuration/`, `lib/batch/`, `lib/pipeline/`
**Utilities**: `lib/ElementExtraction.js`, `lib/ArticleStorage.js`, `lib/ErrorReporting.js`
**Assets**: `public/` (11 web interfaces), `configs/` (saved configs/pipelines), `tools/` (logViewer, bundler, audit), `tests/` (comprehensive test suite), `.env` (Google credentials, API keys)
## Dependencies & Workflow Sources

**Deps**: googleapis, axios, dotenv, express, nodemailer (email notifications, needs setup)
**Sources**: production (real Google Sheets data), test_random_personality (personality randomization), node_server (direct API); legacy: make_com, digital_ocean_autonomous
## Git Push Configuration
If the push fails with "Connection closed port 22", use SSH over port 443: