CLAUDE.md - Essential Information Backup
Project Overview
Node.js-based SEO content generation server converted from Google Apps Script. Generates SEO-optimized content using multiple LLMs with anti-detection mechanisms and Content DNA Mixing.
Development Commands
Production Workflow Execution
# Execute real production workflow from Google Sheets
node -e "const main = require('./lib/Main'); main.handleFullWorkflow({ rowNumber: 2, source: 'production' });"
# Test with different rows
node -e "const main = require('./lib/Main'); main.handleFullWorkflow({ rowNumber: 3, source: 'production' });"
Basic Operations
- npm start - Start the production server on port 3000
- npm run dev - Start the development server (same as start)
- node server.js - Direct server startup
Testing Commands
Google Sheets Integration Tests
# Test personality loading from Google Sheets
node -e "const {getPersonalities} = require('./lib/BrainConfig'); getPersonalities().then(p => console.log(\`\${p.length} personalities loaded\`));"
# Test CSV data loading
node -e "const {readInstructionsData} = require('./lib/BrainConfig'); readInstructionsData(2).then(d => console.log('Data:', d));"
# Test random personality selection
node -e "const {selectPersonalityWithAI, getPersonalities} = require('./lib/BrainConfig'); getPersonalities().then(p => selectPersonalityWithAI('test', 'test', p)).then(r => console.log('Selected:', r.nom));"
LLM Connectivity Tests
node -e "require('./lib/LLMManager').testLLMManager()"- Test basic LLM connectivitynode -e "require('./lib/LLMManager').testLLMManagerComplete()"- Full LLM provider test suite
Complete System Test
node -e "
const main = require('./lib/Main');
const testData = {
csvData: {
mc0: 'plaque personnalisée',
t0: 'Créer une plaque personnalisée unique',
personality: { nom: 'Marc', style: 'professionnel' },
tMinus1: 'décoration personnalisée',
mcPlus1: 'plaque gravée,plaque métal,plaque bois,plaque acrylique',
tPlus1: 'Plaque Gravée Premium,Plaque Métal Moderne,Plaque Bois Naturel,Plaque Acrylique Design'
},
xmlTemplate: Buffer.from(\`<?xml version='1.0' encoding='UTF-8'?>
<article>
<h1>|Titre_Principal{{T0}}{Rédige un titre H1 accrocheur}|</h1>
<intro>|Introduction{{MC0}}{Rédige une introduction engageante}|</intro>
</article>\`).toString('base64'),
source: 'node_server_test'
};
main.handleFullWorkflow(testData);
"
Architecture Overview
Core Workflow (lib/Main.js)
- Data Preparation - Read from Google Sheets (CSV + XML template)
- Element Extraction - Parse 16+ XML elements with instructions
- Missing Keywords Generation - Auto-complete missing data
- Direct Content Generation - Bypass hierarchy, generate all elements
- Multi-LLM Enhancement - 4-stage processing (Claude → GPT-4 → Gemini → Mistral)
- Content Assembly - Inject content back into XML template
- Organic Compilation & Storage - Save clean text to Google Sheets
Google Sheets Integration (lib/BrainConfig.js, lib/ArticleStorage.js)
Authentication: Environment variables (GOOGLE_SERVICE_ACCOUNT_EMAIL, GOOGLE_PRIVATE_KEY)
Data Sources:
- Instructions Sheet: Columns A-I (slug, T0, MC0, T-1, L-1, MC+1, T+1, L+1, XML)
- Personnalites Sheet: 15 personalities with complete profiles
- Generated_Articles Sheet: Compiled text output with metadata
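A minimal sketch of the env-var authentication pattern with googleapis (the SPREADSHEET_ID variable, sheet tab name and range are illustrative assumptions; the real access code lives in lib/BrainConfig.js and lib/ArticleStorage.js):
// Illustrative JWT auth from environment variables
require('dotenv').config();
const { google } = require('googleapis');

async function readInstructionsRow(rowNumber) {
  const auth = new google.auth.JWT(
    process.env.GOOGLE_SERVICE_ACCOUNT_EMAIL,
    null,
    process.env.GOOGLE_PRIVATE_KEY.replace(/\\n/g, '\n'), // keys stored in .env usually carry escaped newlines
    ['https://www.googleapis.com/auth/spreadsheets']
  );
  const sheets = google.sheets({ version: 'v4', auth });
  const res = await sheets.spreadsheets.values.get({
    spreadsheetId: process.env.SPREADSHEET_ID, // placeholder variable name
    range: `Instructions!A${rowNumber}:I${rowNumber}`, // columns A-I as described above
  });
  return res.data.values ? res.data.values[0] : null;
}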
Personality System (lib/BrainConfig.js:265-340)
Random Selection Process:
- Load 15 personalities from Google Sheets
- Fisher-Yates shuffle for true randomness (see the sketch after this list)
- Select 60% (9 personalities) per generation
- AI chooses best match within random subset
- Temperature = 1.0 for maximum variability
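A minimal sketch of the shuffle-and-subset step, assuming personalities is a plain array of profile objects (the AI match that follows uses selectPersonalityWithAI, shown in the test commands above):
// Fisher-Yates shuffle, then keep 60% of the personalities for the AI to choose from
function pickRandomSubset(personalities, ratio = 0.6) {
  const pool = [...personalities];
  for (let i = pool.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [pool[i], pool[j]] = [pool[j], pool[i]]; // swap
  }
  return pool.slice(0, Math.ceil(pool.length * ratio)); // 15 personalities -> 9
}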
Multi-LLM Pipeline (lib/ContentGeneration.js)
- Base Generation (Claude Sonnet-4) - Initial content creation
- Technical Enhancement (GPT-4o-mini) - Add precision and terminology
- Transition Enhancement (Gemini) - Improve flow (if available)
- Personality Style (Mistral) - Apply personality-specific voice
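A minimal sketch of the sequential hand-off between stages; the stage functions below are stand-ins, not the real provider calls made through lib/LLMManager.js:
// Each stage receives the previous stage's output; the stand-in stages only tag the text
async function enhanceSequentially(baseContent, stages) {
  let content = baseContent;
  for (const stage of stages) {
    content = await stage(content);
  }
  return content;
}

const stages = [
  async (c) => c + ' [technical pass]',   // real pipeline: GPT-4o-mini
  async (c) => c + ' [transition pass]',  // real pipeline: Gemini, skipped if unavailable
  async (c) => c + ' [personality pass]', // real pipeline: Mistral
];

enhanceSequentially('base draft from Claude Sonnet-4', stages).then(console.log);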
LogSh - Centralized Logging System
Architecture
- Centralized logging: All logs must go through LogSh function in ErrorReporting.js
- Multi-output streams: Console (pretty format) + File (JSON) + WebSocket (real-time)
- No console or custom loggers: Do not use console.* or alternate logger modules
Log Levels and Usage
- TRACE: Hierarchical workflow execution with parameters (▶ ✔ ✖ symbols)
- DEBUG: Detailed debugging information (visible in files with debug level)
- INFO: Standard operational messages
- WARN: Warning conditions
- ERROR: Error conditions with stack traces
File Logging
- Format: JSON structured logs in timestamped files
- Location: logs/seo-generator-YYYY-MM-DD_HH-MM-SS.log
- Flush behavior: Immediate flush on every log call to prevent buffer loss (see the sketch below)
- Level: DEBUG and above (includes all TRACE logs)
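A minimal sketch of the timestamped file name and immediate-flush behavior (illustrative only; the entry's field names are an assumption, and the real writer lives in lib/ErrorReporting.js):
// Timestamped log file plus immediate flush on every call
const fs = require('fs');
const path = require('path');

const stamp = new Date().toISOString().slice(0, 19).replace('T', '_').replace(/:/g, '-');
const logFile = path.join('logs', `seo-generator-${stamp}.log`);
fs.mkdirSync(path.dirname(logFile), { recursive: true });

function writeLogLine(entry) {
  // appendFileSync opens, writes and closes on every call, so nothing sits in a buffer if the process crashes
  fs.appendFileSync(logFile, JSON.stringify(entry) + '\n');
}

writeLogLine({ level: 30, time: Date.now(), msg: 'workflow started' }); // 30 = INFO in the numeric scheme used by the LogViewer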
Real-time Logging
- WebSocket server: Port 8081 for live log viewing
- Auto-launch: logs-viewer.html opens in Edge browser automatically
- Features: Search, filtering by level, scroll preservation, compact UI
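A minimal client sketch for tailing the stream outside the bundled viewer (assumes the ws package, which is not in the dependency list below, and that the server pushes one JSON entry per message):
// Illustrative live-log client for the WebSocket server on port 8081
const WebSocket = require('ws');
const socket = new WebSocket('ws://localhost:8081');

socket.on('message', (data) => {
  const entry = JSON.parse(data);
  if (entry.level >= 40) console.log(entry.msg); // WARN and above, numeric levels as in the LogViewer section
});
socket.on('error', (err) => console.error('log stream unavailable:', err.message));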
Trace System
- Hierarchical execution tracking: Using AsyncLocalStorage for span context
- Function parameters: All tracer.run() calls include relevant parameters
- Format: Function names with file prefixes (e.g., "Main.handleFullWorkflow()")
- Performance timing: Start/end with duration measurements
- Error handling: Automatic stack trace logging on failures
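A minimal sketch of the AsyncLocalStorage span pattern (illustrative only; the actual tracer.run() signature lives in lib/ErrorReporting.js and routes output through LogSh rather than console):
// Illustrative hierarchical span tracking with AsyncLocalStorage
const { AsyncLocalStorage } = require('async_hooks');
const spans = new AsyncLocalStorage();

const tracer = {
  async run(name, params, fn) {
    const parent = spans.getStore();
    const span = { name, depth: parent ? parent.depth + 1 : 0, start: Date.now() };
    return spans.run(span, async () => {
      console.log('▶', ' '.repeat(span.depth * 2) + name, params); // the real tracer logs through LogSh
      try {
        const result = await fn();
        console.log('✔', `${name} (${Date.now() - span.start}ms)`); // duration measurement
        return result;
      } catch (err) {
        console.log('✖', `${name}:`, err.stack); // automatic stack trace on failure
        throw err;
      }
    });
  },
};

// Nested calls inherit depth from the surrounding span
tracer.run('Main.handleFullWorkflow()', { rowNumber: 2 }, () =>
  tracer.run('BrainConfig.readInstructionsData()', { rowNumber: 2 }, async () => 'ok')
);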
🔍 Log Consultation (LogViewer)
Context
- Logs are no longer sent to console.log (too verbose).
- All events are recorded in logs/app.log in Pino JSONL format.
Dedicated tool
A tool, tools/logViewer.js, makes it easy to query this file.
Quick commands
- View the last 200 formatted lines
node tools/logViewer.js --pretty
- Search for a keyword in messages
node tools/logViewer.js --search --includes "Claude" --pretty
- Search by time range
# All logs from September 2, 2025
node tools/logViewer.js --since 2025-09-02T00:00:00Z --until 2025-09-02T23:59:59Z --pretty
- Filter by error level
node tools/logViewer.js --last 300 --level ERROR --pretty
Available filters
- --level : 30=INFO, 40=WARN, 50=ERROR (or INFO, WARN, ERROR)
- --module : filter by path or module
- --includes : keyword in msg
- --regex : regular expression on msg
- --since / --until : time bounds (ISO or YYYY-MM-DD)
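Filters can be combined, for example:
# ERROR entries from LLMManager on or after September 2, 2025
node tools/logViewer.js --level ERROR --module LLMManager --since 2025-09-02 --pretty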
📦 Bundling Tool
pack-lib.cjs creates a single code.js from all files in lib/.
Each file is concatenated with an ASCII header showing its path. Imports/exports are kept, so the bundle is for reading/audit only, not execution.
Usage
node pack-lib.cjs # default → code.js
node pack-lib.cjs --out out.js # custom output
node pack-lib.cjs --order alpha
node pack-lib.cjs --entry lib/test-manual.js
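For reference, the core concatenation step boils down to something like this sketch (illustrative only; the header format, default ordering and --entry handling in pack-lib.cjs may differ):
// Illustrative: concatenate lib/*.js with an ASCII header per file
const fs = require('fs');
const path = require('path');

const files = fs.readdirSync('lib').filter((f) => f.endsWith('.js')).sort(); // sort() used here only for determinism
const bundle = files.map((f) => {
  const filePath = path.join('lib', f);
  return `// ===== ${filePath} =====\n` + fs.readFileSync(filePath, 'utf8');
}).join('\n\n');

fs.writeFileSync('code.js', bundle); // for reading/audit only, not execution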
File Structure
- server.js : Express server with basic endpoints
- lib/Main.js : Core workflow orchestration
- lib/BrainConfig.js : Google Sheets integration + personality system
- lib/LLMManager.js : Multi-LLM provider management
- lib/ContentGeneration.js : Content generation and enhancement
- lib/ElementExtraction.js : XML parsing and element extraction
- lib/ArticleStorage.js : Google Sheets storage and compilation
- lib/ErrorReporting.js : Logging and error handling
- .env : Environment configuration (Google credentials, API keys)
Key Dependencies
- googleapis : Google Sheets API integration
- axios : HTTP client for LLM APIs
- dotenv : Environment variable management
- express : Web server framework
- nodemailer : Email notifications (needs setup)
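Since email notifications still need setup, a minimal nodemailer sketch (SMTP host, credentials and recipient are placeholder env vars, not part of the current .env):
// Illustrative notification helper; all connection values are placeholders
const nodemailer = require('nodemailer');

async function sendFailureNotification(subject, text) {
  const transporter = nodemailer.createTransport({
    host: process.env.SMTP_HOST, // placeholder env var names
    port: Number(process.env.SMTP_PORT) || 587,
    auth: { user: process.env.SMTP_USER, pass: process.env.SMTP_PASS },
  });
  await transporter.sendMail({
    from: process.env.SMTP_USER,
    to: process.env.NOTIFY_EMAIL,
    subject,
    text,
  });
}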
Important Notes for Future Development
- Personality system is now random-based: 60% of 15 personalities selected per run
- All data comes from Google Sheets: No more JSON files or hardcoded data
- Default XML template: Auto-generated when column I contains a filename
- Temperature = 1.0: Maximum variability in AI selection
- Direct element generation: Bypasses hierarchy system for reliability
- Organic compilation: Maintains natural text flow in final output
- 5/6 LLM providers operational: Gemini geo-blocked, others fully functional
Workflow Sources
- production - Real Google Sheets data processing
- test_random_personality - Testing with personality randomization
- node_server - Direct API processing
- Legacy: make_com, digital_ocean_autonomous