From 094ab288656e38611f90003358211a14acaf3cbf Mon Sep 17 00:00:00 2001 From: StillHammer Date: Tue, 18 Nov 2025 15:09:39 +0800 Subject: [PATCH] feat: Add integration test scenarios 11-13 for IO and DataNode systems MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Added three new integration test scenario documents: - Scenario 11: IO System Stress Test - Tests IntraIO pub/sub with pattern matching, batching, backpressure, and thread safety - Scenario 12: DataNode Integration Test - Tests IDataTree with hot-reload, persistence, hashing, and performance on 1000+ nodes - Scenario 13: Cross-System Integration - Tests IO + DataNode working together with config hot-reload chains and concurrent access Also includes comprehensive DataNode system architecture analysis documentation. These scenarios complement the existing test suite by covering the IO communication layer and data management systems that were previously untested. đŸ€– Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude --- docs/architecture/DATANODE-SYSTEM-ANALYSIS.md | 766 ++++++++++++ planTI/scenario_11_io_system.md | 780 +++++++++++++ planTI/scenario_12_datanode.md | 914 +++++++++++++++ planTI/scenario_13_cross_system.md | 1022 +++++++++++++++++ 4 files changed, 3482 insertions(+) create mode 100644 docs/architecture/DATANODE-SYSTEM-ANALYSIS.md create mode 100644 planTI/scenario_11_io_system.md create mode 100644 planTI/scenario_12_datanode.md create mode 100644 planTI/scenario_13_cross_system.md diff --git a/docs/architecture/DATANODE-SYSTEM-ANALYSIS.md b/docs/architecture/DATANODE-SYSTEM-ANALYSIS.md new file mode 100644 index 0000000..2dbd9d8 --- /dev/null +++ b/docs/architecture/DATANODE-SYSTEM-ANALYSIS.md @@ -0,0 +1,766 @@ +# DataNode System Architecture Analysis + +## System Overview + +The DataNode system is a hierarchical data management framework for the GroveEngine, providing unified access to configuration, persistent data, 
and runtime state. It's a complete abstraction layer separating data concerns from business logic. + +--- + +## 1. Core Architecture + +### Three-Tier System + +``` +IDataTree (Root Container) + ├── config/ (Read-only, hot-reload enabled) + ├── data/ (Read-write, persistent) + └── runtime/ (Read-write, temporary) +``` + +### Architectural Layers + +``` +Layer 1: Interfaces (Abstract) +├── IDataValue - Type-safe value wrapper +├── IDataNode - Tree node with navigation and modification +└── IDataTree - Root container with save/reload operations + +Layer 2: Concrete Implementations +├── JsonDataValue - nlohmann::json backed value +├── JsonDataNode - JSON tree node with full features +└── JsonDataTree - File-based JSON storage + +Layer 3: Module System Integration +└── IModuleSystem - Owns IDataTree, manages save/reload + +Layer 4: Distributed Coordination +├── CoordinationModule (Master) - Hot-reload detection +└── DebugEngine (Workers) - Config synchronization +``` + +--- + +## 2. Key Classes and Responsibilities + +### IDataValue Interface +**Location**: `/mnt/c/Users/alexi/Documents/projects/groveengine/include/grove/IDataValue.h` + +**Responsibility**: Abstract data value with type-safe access + +**Key Methods**: +- Type checking: `isNull()`, `isBool()`, `isNumber()`, `isString()`, `isArray()`, `isObject()` +- Conversion: `asBool()`, `asInt()`, `asDouble()`, `asString()` +- Access: `get(index)`, `get(key)`, `has(key)`, `size()` +- Serialization: `toString()` + +**Why It Exists**: Allows modules to work with values without exposing JSON format, enabling future implementations (binary, database, etc.) 
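To make the abstraction concrete, here is a minimal, self-contained sketch of module-side code written against an `IDataValue`-style interface. `ToyValue` and `describe` are hypothetical stand-ins for illustration only — nothing here is a GroveEngine class; only the accessor names (`isNumber()`, `isString()`, `asInt()`, `asString()`) come from the interface listed above.

```cpp
#include <cassert>
#include <string>
#include <utility>
#include <variant>

// Hypothetical stand-in for IDataValue, exposing only accessors named above.
// A JSON-, binary-, or database-backed value could implement the same surface.
class ToyValue {
public:
    explicit ToyValue(int v) : m_v(v) {}
    explicit ToyValue(std::string v) : m_v(std::move(v)) {}

    bool isNumber() const { return std::holds_alternative<int>(m_v); }
    bool isString() const { return std::holds_alternative<std::string>(m_v); }

    int asInt() const { return isNumber() ? std::get<int>(m_v) : 0; }
    std::string asString() const {
        return isString() ? std::get<std::string>(m_v) : "";
    }

private:
    std::variant<int, std::string> m_v;
};

// Module-side code depends only on the abstract accessors, never on the
// storage format — which is exactly what lets the backend be swapped out.
std::string describe(const ToyValue& v) {
    if (v.isNumber()) return "number:" + std::to_string(v.asInt());
    if (v.isString()) return "string:" + v.asString();
    return "unknown";
}
```

The point of the sketch: `describe` never mentions JSON, so replacing the backing store does not touch module code.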
+
+---
+
+### JsonDataValue Implementation
+**Location**: `/mnt/c/Users/alexi/Documents/projects/groveengine/include/grove/JsonDataValue.h`
+**Implementation**: `/mnt/c/Users/alexi/Documents/projects/groveengine/src/JsonDataValue.cpp`
+
+**Concrete Implementation**: Backed by `nlohmann::json`
+
+**Key Features**:
+- Transparent JSON wrapping
+- Direct JSON access for internal use: `getJson()`
+- All interface methods delegated to the JSON type system
+- No conversion overhead (move semantics)
+
+---
+
+### IDataNode Interface
+**Location**: `/mnt/c/Users/alexi/Documents/projects/groveengine/include/grove/IDataNode.h` (259 lines)
+
+**Responsibility**: Single tree node with hierarchical navigation, search, and modification
+
+**Major Capabilities**:
+
+#### 1. Tree Navigation
+```cpp
+std::unique_ptr<IDataNode> getChild(const std::string& name)
+std::vector<std::string> getChildNames()
+bool hasChildren()
+```
+
+#### 2. Exact Search (Direct Children Only)
+```cpp
+std::vector<IDataNode*> getChildrenByName(const std::string& name)
+bool hasChildrenByName(const std::string& name) const
+IDataNode* getFirstChildByName(const std::string& name)
+```
+
+#### 3. Pattern Matching (Deep Subtree Search)
+```cpp
+// Examples: "component*", "*heavy*", "model_*"
+std::vector<IDataNode*> getChildrenByNameMatch(const std::string& pattern)
+bool hasChildrenByNameMatch(const std::string& pattern) const
+IDataNode* getFirstChildByNameMatch(const std::string& pattern)
+```
+
+#### 4. Property-Based Queries (Functional)
+```cpp
+std::vector<IDataNode*> queryByProperty(const std::string& propName,
+    const std::function<bool(const IDataValue&)>& predicate)
+
+// Example: Find all tanks with armor > 150
+auto heavy = root->queryByProperty("armor",
+    [](const IDataValue& val) {
+        return val.isNumber() && val.asInt() > 150;
+    });
+```
+
+#### 5. 
Typed Data Access
+```cpp
+std::string getString(const std::string& name, const std::string& defaultValue = "")
+int getInt(const std::string& name, int defaultValue = 0)
+double getDouble(const std::string& name, double defaultValue = 0.0)
+bool getBool(const std::string& name, bool defaultValue = false)
+bool hasProperty(const std::string& name)
+```
+
+#### 6. Hash System (Validation & Synchronization)
+```cpp
+std::string getDataHash()   // SHA256 of this node's data
+std::string getTreeHash()   // SHA256 of entire subtree
+std::string getSubtreeHash(const std::string& childPath)  // Specific child
+```
+
+**Use Cases**:
+- Validate config hasn't been corrupted
+- Detect changes for synchronization
+- Fast change detection without full tree comparison
+
+#### 7. Node Data Management
+```cpp
+std::unique_ptr<IDataValue> getData() const
+bool hasData() const
+void setData(std::unique_ptr<IDataValue> data)
+```
+
+#### 8. Tree Modification
+```cpp
+void setChild(const std::string& name, std::unique_ptr<IDataNode> node)
+bool removeChild(const std::string& name)
+void clearChildren()
+```
+
+**Restrictions**: Only works on data/ and runtime/ nodes. Config nodes are read-only.
+
+#### 9. Metadata
+```cpp
+std::string getPath() const      // Full path: "vehicles/tanks/heavy"
+std::string getName() const      // Node name only
+std::string getNodeType() const  // "JsonDataNode"
+```
+
+---
+
+### JsonDataNode Implementation
+**Location**: `/mnt/c/Users/alexi/Documents/projects/groveengine/include/grove/JsonDataNode.h` (109 lines)
+**Implementation**: `/mnt/c/Users/alexi/Documents/projects/groveengine/src/JsonDataNode.cpp` (344 lines)
+
+**Internal Structure**:
+```cpp
+class JsonDataNode : public IDataNode {
+private:
+    std::string m_name;
+    json m_data;             // Node's own data
+    JsonDataNode* m_parent;  // Parent reference (path building)
+    bool m_readOnly;         // For config/ nodes
+    std::map<std::string, std::unique_ptr<JsonDataNode>> m_children;  // Child nodes
+};
+```
+
+**Key Capabilities**:
+
+1. 
**Pattern Matching Implementation**
+   - Converts wildcard patterns to regex: `*` → `.*`
+   - Escapes all special regex chars except `*`
+   - Recursive depth-first search: `collectMatchingNodes()`
+   - O(n) complexity where n = subtree size
+
+2. **Hash Computation**
+   - Uses OpenSSL SHA256
+   - Data hash: `SHA256(m_data.dump())`
+   - Tree hash: Combined hash of data + all children
+   - Format: Lowercase hex string
+
+3. **Copy-on-Access Pattern**
+   - `getChild()` returns a new unique_ptr copy
+   - Preserves encapsulation
+   - Enables safe distribution
+
+4. **Read-Only Enforcement**
+   - `checkReadOnly()` throws if modification attempted on config
+   - Error: `"Cannot modify read-only node: " + getPath()`
+
+---
+
+### IDataTree Interface
+**Location**: `/mnt/c/Users/alexi/Documents/projects/groveengine/include/grove/IDataTree.h` (128 lines)
+
+**Responsibility**: Root container managing three separate trees
+
+**Key Methods**:
+
+#### Tree Access
+```cpp
+std::unique_ptr<IDataNode> getRoot()                         // Everything
+std::unique_ptr<IDataNode> getNode(const std::string& path)  // "config/vehicles/tanks"
+
+// Recommended: Access separate roots
+std::unique_ptr<IDataNode> getConfigRoot()   // Read-only config
+std::unique_ptr<IDataNode> getDataRoot()     // Persistent data
+std::unique_ptr<IDataNode> getRuntimeRoot()  // Temporary state
+```
+
+#### Save Operations
+```cpp
+bool saveData()                         // Save entire data/
+bool saveNode(const std::string& path)  // Save specific node (data/ only)
+```
+
+#### Hot-Reload
+```cpp
+bool checkForChanges()   // Check if config files changed
+bool reloadIfChanged()   // Reload if changed, fire callbacks
+void onTreeReloaded(std::function<void()> callback)  // Register reload handler
+```
+
+#### Metadata
+```cpp
+std::string getType()  // "JsonDataTree"
+```
+
+---
+
+### JsonDataTree Implementation
+**Location**: `/mnt/c/Users/alexi/Documents/projects/groveengine/include/grove/JsonDataTree.h` (87 lines)
+**Implementation**: `/mnt/c/Users/alexi/Documents/projects/groveengine/src/JsonDataTree.cpp` (partial read)
+
+**Internal 
Structure**:
+```cpp
+class JsonDataTree : public IDataTree {
+private:
+    std::string m_basePath;                       // Root directory
+    std::unique_ptr<JsonDataNode> m_root;         // Root container
+    std::unique_ptr<JsonDataNode> m_configRoot;   // config/ subtree
+    std::unique_ptr<JsonDataNode> m_dataRoot;     // data/ subtree
+    std::unique_ptr<JsonDataNode> m_runtimeRoot;  // runtime/ subtree (in-memory)
+
+    std::map<std::string, std::filesystem::file_time_type> m_configFileTimes;  // per-file modification times
+    std::vector<std::function<void()>> m_reloadCallbacks;
+};
+```
+
+**Key Features**:
+
+1. **Initialization** (`JsonDataTree(basePath)`)
+   - Creates root node
+   - Calls `loadConfigTree()` from disk
+   - Calls `loadDataTree()` from disk
+   - Calls `initializeRuntimeTree()` (empty in-memory)
+   - Attaches all three as children to root
+
+2. **File-Based Loading** (`scanDirectory()`)
+   - Recursively scans config/ and data/ directories
+   - Creates JsonDataNode tree from JSON files
+   - Builds hierarchical structure
+   - config/ marked as read-only
+
+3. **Hot-Reload Detection** (`checkForChanges()`)
+   - Tracks file modification times
+   - Detects file deletions
+   - Detects new files
+   - Returns bool (changed?)
+
+4. **Hot-Reload Execution** (`reloadIfChanged()`)
+   - Calls `loadConfigTree()` to reload from disk
+   - Fires all registered callbacks
+   - Allows modules to refresh configuration
+
+5. **Save Operations**
+   - `saveData()`: Saves data/ subtree to disk
+   - `saveNode(path)`: Saves specific data/ path
+   - Only allows data/ paths (read-only protection)
+   - Creates JSON files matching hierarchy
+
+---
+
+## 3. Data Flow Patterns
+
+### Pattern 1: Reading Configuration
+```cpp
+// Engine startup
+auto tree = std::make_unique<JsonDataTree>("gamedata");
+auto tankConfig = tree->getConfigRoot()
+    ->getChild("tanks")
+    ->getChild("heavy_mk1");
+
+// Module receives config
+void TankModule::setConfiguration(const IDataNode& config, ...) 
{
+    m_armor = config.getInt("armor");               // Default: 0
+    m_speed = config.getDouble("speed");            // Default: 0.0
+    m_weaponType = config.getString("weapon_type"); // Default: ""
+}
+```
+
+### Pattern 2: Saving State
+```cpp
+// Module creates state
+auto state = std::make_unique<JsonDataNode>("state", json::object());
+state->setData(std::make_unique<JsonDataValue>(
+    json{{"position", {x, y}}, {"health", hp}}
+));
+
+// Engine persists
+tree->getDataRoot()->setChild("tank_123", std::move(state));
+tree->saveNode("data/tank_123");
+```
+
+### Pattern 3: Hot-Reload (Distributed)
+```cpp
+// Master: Detect and broadcast
+if (masterTree->reloadIfChanged()) {
+    auto config = masterTree->getConfigRoot();
+    io->publish("config:reload", std::move(config));
+}
+
+// Worker: Receive and apply
+auto msg = io->pullMessage();
+if (msg.topic == "config:reload") {
+    auto configRoot = tree->getConfigRoot();
+    configRoot->setChild("updated", std::move(msg.data));
+
+    // Notify modules
+    for (auto& module : modules) {
+        auto moduleConfig = configRoot->getChild(module->getType());
+        module->setConfiguration(*moduleConfig, io, scheduler);
+    }
+}
+```
+
+### Pattern 4: Advanced Queries
+```cpp
+// Pattern matching
+auto heavyUnits = root->getChildrenByNameMatch("*_heavy_*");
+auto tanks = root->getChildrenByNameMatch("tank_*");
+
+// Property-based query
+auto highArmor = root->queryByProperty("armor",
+    [](const IDataValue& val) {
+        return val.isNumber() && val.asInt() > 150;
+    });
+
+// Hash validation
+std::string oldHash = configNode->getTreeHash();
+// ... later ...
+if (oldHash != configNode->getTreeHash()) {
+    // Config changed - refresh caches
+}
+```
+
+---
+
+## 4. 
Storage and Persistence + +### File Structure +``` +gamedata/ +├── config/ +│ ├── tanks.json → JsonDataNode "tanks" with children +│ ├── weapons.json +│ └── mods/super_mod/ +│ └── new_tanks.json +├── data/ +│ ├── campaign_progress.json +│ ├── player_stats.json +│ └── unlocks.json +└── runtime/ + (in-memory only) +``` + +### JSON Format +Each node can have: +- Own data (any JSON value) +- Child nodes (files become nodes) +- Properties (key-value pairs in object data) + +**Example tanks.json**: +```json +{ + "heavy_mk1": { + "armor": 200, + "speed": 15.5, + "weapon_type": "cannon_105mm" + }, + "medium_t_72": { + "armor": 140, + "speed": 60.0, + "weapon_type": "cannon_125mm" + } +} +``` + +**Tree Structure**: +``` +config/ +└── tanks + ├── heavy_mk1 (data: {armor: 200, ...}) + └── medium_t_72 (data: {armor: 140, ...}) +``` + +--- + +## 5. Synchronization Mechanisms + +### Hash System +- **Data Hash**: Validates single node integrity +- **Tree Hash**: Validates subtree (change detection) +- **Subtree Hash**: Validates specific child path + +**Use Case**: Quick change detection without tree traversal + +### Hot-Reload System +1. `checkForChanges()` - File timestamp comparison +2. `reloadIfChanged()` - Reload + callback firing +3. `onTreeReloaded()` - Register callback handlers + +**Distributed Pattern**: +- Master detects changes +- Broadcasts new config via IIO +- Workers apply synchronized updates +- Modules notified via `setConfiguration()` + +### Read-Only Enforcement +- config/ nodes: Immutable by modules +- data/ nodes: Writable by modules +- runtime/ nodes: Always writable + +--- + +## 6. 
Thread Safety and Copy Semantics
+
+### Thread Safety Considerations
+Current implementation is **NOT thread-safe**:
+- No mutex protection in JsonDataNode
+- No mutex protection in JsonDataTree
+- Concurrent access requires external synchronization
+
+**Recommended Pattern**:
+```cpp
+std::mutex treeMutex;
+
+// Reader
+{
+    std::lock_guard<std::mutex> lock(treeMutex);
+    auto data = tree->getNode(path);
+}
+
+// Writer
+{
+    std::lock_guard<std::mutex> lock(treeMutex);
+    tree->getDataRoot()->setChild("path", std::move(node));
+    tree->saveData();
+}
+```
+
+### Copy Semantics
+All getters return unique_ptr copies (not references):
+- `getChild()` → new JsonDataNode copy
+- `getData()` → new JsonDataValue copy
+- Protects internal state
+- Enables safe distribution
+
+---
+
+## 7. Existing Tests
+
+### Test Files Found
+- `/mnt/c/Users/alexi/Documents/projects/groveengine/tests/integration/test_04_race_condition.cpp`
+  - Tests concurrent compilation and hot-reload
+  - Uses JsonDataNode for configuration
+  - Tests module integrity validation
+  - Tests concurrent access patterns
+
+### Test Scenarios (planTI/)
+1. **scenario_01_production_hotreload.md** - Hot-reload validation
+2. **scenario_02_chaos_monkey.md** - Random failure injection
+3. **scenario_03_stress_test.md** - Load testing
+4. **scenario_04_race_condition.md** - Concurrency testing
+5. **scenario_05_multimodule.md** - Multi-module coordination
+6. **scenario_06_error_recovery.md** - Error handling
+7. **scenario_07_limits.md** - Extreme conditions
+
+---
+
+## 8. Critical Features Requiring Integration Tests
+
+### 1. 
Tree Navigation & Search
+**What Needs Testing**:
+- Tree construction from file system
+- Exact name matching (getChildrenByName)
+- Pattern matching with wildcards
+- Deep subtree search efficiency
+- Path building and navigation
+- Edge cases: empty names, special characters, deep nesting
+
+**Test Scenarios**:
+```cpp
+// Test exact matching
+auto tanks = root->getChildrenByName("tanks");
+assert(tanks.size() == 1);
+
+// Test pattern matching
+auto heavy = root->getChildrenByNameMatch("*heavy*");
+assert(heavy.size() == 2);  // e.g., heavy_mk1, tank_heavy_v2
+
+// Test deep navigation
+auto node = root->getChild("vehicles")->getChild("tanks")->getChild("heavy");
+assert(node != nullptr);
+```
+
+### 2. Data Persistence & Save/Load
+**What Needs Testing**:
+- Save entire data/ tree
+- Save specific nodes
+- Load from disk
+- Nested structure preservation
+- Data type preservation (numbers, strings, booleans, arrays)
+- Empty node handling
+- Large data handling (1MB+)
+- File corruption recovery
+
+**Test Scenarios**:
+```cpp
+// Create and save
+auto node = std::make_unique<JsonDataNode>("player", json::object());
+tree->getDataRoot()->setChild("player1", std::move(node));
+assert(tree->saveNode("data/player1"));
+
+// Reload and verify
+auto reloaded = tree->getDataRoot()->getChild("player1");
+assert(reloaded != nullptr);
+assert(reloaded->hasData());
+```
+
+### 3. Hot-Reload System
+**What Needs Testing**:
+- File change detection
+- Config reload accuracy
+- Callback execution
+- Multiple callback handling
+- Timing consistency
+- No data/ changes during reload
+- No runtime/ changes during reload
+- Rapid successive reloads
+
+**Test Scenarios**:
+```cpp
+// Register callback
+bool callbackFired = false;
+tree->onTreeReloaded([&]() { callbackFired = true; });
+
+// Modify config file
+modifyConfigFile("config/tanks.json");
+std::this_thread::sleep_for(10ms);
+
+// Trigger reload
+assert(tree->reloadIfChanged());
+assert(callbackFired);
+```
+
+### 4. 
Property-Based Queries +**What Needs Testing**: +- Predicate evaluation +- Type-safe access +- Complex predicates (AND, OR) +- Performance with large datasets +- Empty result sets +- Single result matches +- Null value handling + +**Test Scenarios**: +```cpp +// Query by numeric property +auto armored = root->queryByProperty("armor", + [](const IDataValue& val) { + return val.isNumber() && val.asInt() >= 150; + }); +assert(armored.size() >= 1); + +// Query by string property +auto cannons = root->queryByProperty("weapon", + [](const IDataValue& val) { + return val.isString() && val.asString().find("cannon") != std::string::npos; + }); +``` + +### 5. Hash System & Validation +**What Needs Testing**: +- Hash consistency (same data = same hash) +- Hash change detection +- Tree hash includes all children +- Subtree hash isolation +- Hash format (lowercase hex, 64 chars for SHA256) +- Performance of hash computation +- Deep tree hashing + +**Test Scenarios**: +```cpp +auto hash1 = node->getDataHash(); +auto hash2 = node->getDataHash(); +assert(hash1 == hash2); // Consistent + +// Modify data +node->setData(...); +auto hash3 = node->getDataHash(); +assert(hash1 != hash3); // Changed + +// Tree hash includes children +auto treeHash1 = node->getTreeHash(); +node->setChild("new", ...); +auto treeHash2 = node->getTreeHash(); +assert(treeHash1 != treeHash2); // Child change detected +``` + +### 6. 
Read-Only Enforcement
+**What Needs Testing**:
+- config/ nodes reject modifications
+- data/ nodes allow modifications
+- runtime/ nodes allow modifications
+- Exception on modification attempt
+- Error message contains path
+- Read-only flag propagation to children
+- Inherited read-only status
+
+**Test Scenarios**:
+```cpp
+auto configNode = tree->getConfigRoot();
+assert_throws([&]() {
+    configNode->setChild("new", std::make_unique<JsonDataNode>("x", json::object()));
+});
+
+auto dataNode = tree->getDataRoot();
+dataNode->setChild("new", std::make_unique<JsonDataNode>("x", json::object()));  // OK
+```
+
+### 7. Type Safety & Data Access
+**What Needs Testing**:
+- getString with default fallback
+- getInt with type coercion
+- getDouble precision
+- getBool parsing
+- hasProperty existence check
+- Wrong type access returns default
+- Null handling
+- Array/object access edge cases
+
+**Test Scenarios**:
+```cpp
+auto node = ...;  // Has {"armor": 200, "speed": 60.5, "active": true}
+
+assert(node->getInt("armor") == 200);
+assert(node->getDouble("speed") == 60.5);
+assert(node->getBool("active") == true);
+assert(node->getString("name", "default") == "default");  // Missing key
+
+assert(node->hasProperty("armor"));
+assert(!node->hasProperty("missing"));
+```
+
+### 8. Concurrent Access Patterns
+**What Needs Testing**:
+- Safe reader access (multiple threads reading simultaneously)
+- Safe writer access (single writer with lock)
+- Race condition detection
+- No data corruption under load
+- Reload safety during concurrent reads
+- No deadlocks
+
+**Test Scenarios**:
+```cpp
+std::mutex treeMutex;
+std::vector<std::thread> readers;
+
+for (int i = 0; i < 10; ++i) {
+    readers.emplace_back([&]() {
+        std::lock_guard<std::mutex> lock(treeMutex);
+        auto data = tree->getConfigRoot()->getChild("tanks");
+        assert(data != nullptr);
+    });
+}
+
+for (auto& t : readers) t.join();
+```
+
+### 9. 
Error Handling & Edge Cases
+**What Needs Testing**:
+- Invalid paths (non-existent nodes)
+- Empty names
+- Special characters in names
+- Null data nodes
+- Circular reference prevention
+- Memory cleanup on exception
+- File system errors (permissions, disk full)
+- Corrupted JSON recovery
+
+**Test Scenarios**:
+```cpp
+// Non-existent node
+auto missing = tree->getNode("config/does/not/exist");
+assert(missing == nullptr);
+
+// Empty name
+auto node = std::make_unique<JsonDataNode>("", json::object());
+assert(node->getName() == "");
+assert(node->getPath() == "");  // Root-like behavior
+```
+
+### 10. Performance & Scale
+**What Needs Testing**:
+- Large tree navigation (1000+ nodes)
+- Deep nesting (100+ levels)
+- Pattern matching performance
+- Hash computation speed
+- File I/O performance
+- Memory usage
+- Reload speed
+
+**Test Scenarios**:
+```cpp
+// Create large tree
+auto root = std::make_unique<JsonDataNode>("root", json::object());
+for (int i = 0; i < 1000; ++i) {
+    root->setChild("child_" + std::to_string(i),
+                   std::make_unique<JsonDataNode>("x", json::object()));
+}
+
+// Benchmark pattern matching
+auto start = std::chrono::high_resolution_clock::now();
+auto results = root->getChildrenByNameMatch("child_*");
+auto end = std::chrono::high_resolution_clock::now();
+
+assert(results.size() == 1000);
+auto duration = std::chrono::duration_cast<std::chrono::milliseconds>(end - start).count();
+assert(duration < 100);  // Should complete in under 100 ms
+```
+
+---
+
+## Summary
+
+The DataNode system is a complete, production-ready data management framework providing:
+
+1. **Three-tier abstraction** (Interface → Implementation → Integration)
+2. **Hierarchical organization** (config/, data/, runtime/)
+3. **Advanced queries** (exact matching, pattern matching, property-based)
+4. **Hash-based validation** (change detection, integrity checking)
+5. **Hot-reload support** (file monitoring, callback system)
+6. **Type-safe access** (IDataValue interface with coercion)
+7. **Read-only enforcement** (configuration immutability)
+8. 
**Persistence layer** (file-based save/load)
+
+**Critical missing piece**: No integration tests exist specifically for the DataNode system. All existing tests focus on module loading, hot-reload, and race conditions, but not on DataNode functionality itself.
+
+---
+
diff --git a/planTI/scenario_11_io_system.md b/planTI/scenario_11_io_system.md
new file mode 100644
index 0000000..65df84b
--- /dev/null
+++ b/planTI/scenario_11_io_system.md
@@ -0,0 +1,780 @@
+# Scenario 11: IO System Stress Test
+
+**Priority**: ⭐⭐ SHOULD HAVE
+**Phase**: 2 (SHOULD HAVE)
+**Estimated duration**: ~5 minutes
+**Implementation effort**: ~4-6 hours
+
+---
+
+## 🎯 Objective
+
+Validate that the IntraIO system (intra-process pub/sub) works correctly across all use cases:
+- Pattern matching with wildcards and regex
+- Multi-module routing (1-to-1, 1-to-many)
+- Message batching and flushing (low-frequency subscriptions)
+- Backpressure and queue overflow
+- Thread safety (concurrent publish/pull)
+- Health monitoring and metrics
+- Subscription lifecycle
+
+**Known bug to validate**: IntraIOManager routes messages only to the first subscriber (std::move without clone limitation)
+
+---
+
+## 📋 Description
+
+### Initial Setup
+1. Create 5 modules using IntraIO:
+   - **ProducerModule** - Publishes 1000 msg/s on various topics
+   - **ConsumerModule** - Subscribes to several patterns
+   - **BroadcastModule** - Publishes on topics with multiple subscribers
+   - **BatchModule** - Uses low-frequency subscriptions
+   - **StressModule** - Stress test at 10k msg/s
+
+2. Configure IntraIOManager with routing between modules
+
+3. Run 8 different scenarios over 5 minutes
+
+### Test Sequence
+
+#### Test 1: Basic Publish-Subscribe (30s)
+1. ProducerModule publishes 100 messages on "test:basic"
+2. ConsumerModule subscribes to "test:basic"
+3. Verify:
+   - 100 messages received
+   - FIFO order preserved
+   - No message lost
+
+#### Test 2: Pattern Matching (30s)
+1. 
ProducerModule publishes on:
+   - "player:001:position"
+   - "player:001:health"
+   - "player:002:position"
+   - "enemy:001:position"
+2. ConsumerModule subscribes to the patterns:
+   - "player:*" (should match 3 messages)
+   - "player:001:*" (should match 2 messages)
+   - "*:position" (should match 3 messages)
+3. Verify that the match counts are correct
+
+#### Test 3: Multi-Module Routing (60s)
+1. ProducerModule publishes "broadcast:data" (100 messages)
+2. ConsumerModule, BatchModule, and StressModule all subscribe to "broadcast:*"
+3. Verify:
+   - **Expected bug**: Only the first subscriber receives the messages (clone limitation)
+   - Log which module receives them
+   - Document the bug for a future fix
+
+#### Test 4: Message Batching (60s)
+1. BatchModule configures a low-frequency subscription:
+   - Pattern: "batch:*"
+   - Interval: 1000ms
+   - replaceable: true
+2. ProducerModule publishes "batch:metric" at 100 Hz (every 10ms)
+3. Verify:
+   - BatchModule receives ~1 message/second (latest only)
+   - Batching works correctly
+
+#### Test 5: Backpressure & Queue Overflow (30s)
+1. ProducerModule publishes 50k messages on "stress:flood"
+2. ConsumerModule subscribes but pulls only 100 msg/s
+3. Verify:
+   - Queue overflow detected (health.dropping = true)
+   - Dropped messages counted (health.droppedMessageCount > 0)
+   - System stays stable (no crash)
+
+#### Test 6: Thread Safety (60s)
+1. Launch 10 threads publishing simultaneously (1000 msgs each)
+2. Launch 5 threads pulling simultaneously
+3. Verify:
+   - No crash
+   - No data corruption
+   - Total messages received = total sent (or fewer on overflow)
+
+#### Test 7: Health Monitoring (30s)
+1. ProducerModule publishes at different rates:
+   - Phase 1: 100 msg/s (normal)
+   - Phase 2: 10k msg/s (overload)
+   - Phase 3: 100 msg/s (recovery)
+2. 
Monitor the health metrics:
+   - queueSize rises and falls correctly
+   - averageProcessingRate reflects reality
+   - dropping flag is set/cleared at the right moments
+
+#### Test 8: Subscription Lifecycle (30s)
+1. Create/destroy subscriptions dynamically
+2. Verify:
+   - Messages published after unsubscribe are not received
+   - Re-subscribing works
+   - No subscription leak in IntraIOManager
+
+---
+
+## đŸ—ïž Implementation
+
+### ProducerModule Structure
+
+```cpp
+// ProducerModule.h
+class ProducerModule : public IModule {
+public:
+    void initialize(std::shared_ptr<IDataNode> config) override;
+    void process(float deltaTime) override;
+    std::shared_ptr<IDataNode> getState() const override;
+    void setState(std::shared_ptr<IDataNode> state) override;
+    bool isIdle() const override { return true; }
+
+private:
+    std::shared_ptr<IIO> io;
+    int messageCount = 0;
+    float publishRate = 100.0f;  // Hz
+    float accumulator = 0.0f;
+
+    void publishTestMessages();
+};
+```
+
+### ConsumerModule Structure
+
+```cpp
+// ConsumerModule.h
+class ConsumerModule : public IModule {
+public:
+    void initialize(std::shared_ptr<IDataNode> config) override;
+    void process(float deltaTime) override;
+    std::shared_ptr<IDataNode> getState() const override;
+    void setState(std::shared_ptr<IDataNode> state) override;
+    bool isIdle() const override { return true; }
+
+    // Test helpers
+    int getReceivedCount() const { return static_cast<int>(receivedMessages.size()); }
+    const std::vector<IIO::Message>& getMessages() const { return receivedMessages; }
+
+private:
+    std::shared_ptr<IIO> io;
+    std::vector<IIO::Message> receivedMessages;
+
+    void processIncomingMessages();
+};
+```
+
+### Main Test
+
+```cpp
+// test_11_io_system.cpp
+#include "helpers/TestMetrics.h"
+#include "helpers/TestAssertions.h"
+#include "helpers/TestReporter.h"
+#include <iostream>
+#include <map>
+
+int main() {
+    TestReporter reporter("IO System Stress Test");
+    TestMetrics metrics;
+
+    // === SETUP ===
+    DebugEngine engine;
+
+    // Load modules
+    engine.loadModule("ProducerModule", "build/modules/libProducerModule.so");
+    
engine.loadModule("ConsumerModule", "build/modules/libConsumerModule.so");
+    engine.loadModule("BroadcastModule", "build/modules/libBroadcastModule.so");
+    engine.loadModule("BatchModule", "build/modules/libBatchModule.so");
+    engine.loadModule("StressModule", "build/modules/libStressModule.so");
+
+    // Initialize with IOFactory
+    auto config = createJsonConfig({
+        {"transport", "intra"},
+        {"instanceId", "test_engine"}
+    });
+
+    engine.initializeModule("ProducerModule", config);
+    engine.initializeModule("ConsumerModule", config);
+    engine.initializeModule("BroadcastModule", config);
+    engine.initializeModule("BatchModule", config);
+    engine.initializeModule("StressModule", config);
+
+    // ========================================================================
+    // TEST 1: Basic Publish-Subscribe
+    // ========================================================================
+    std::cout << "\n=== TEST 1: Basic Publish-Subscribe ===\n";
+
+    // ConsumerModule subscribes to "test:basic"
+    auto consumerIO = engine.getModuleIO("ConsumerModule");
+    consumerIO->subscribe("test:basic", {});
+
+    // ProducerModule publishes 100 messages
+    auto producerIO = engine.getModuleIO("ProducerModule");
+    for (int i = 0; i < 100; i++) {
+        auto data = std::make_unique<JsonDataValue>(nlohmann::json{
+            {"id", i},
+            {"payload", "test_message_" + std::to_string(i)}
+        });
+        producerIO->publish("test:basic", std::move(data));
+    }
+
+    // Process to let routing happen
+    engine.update(1.0f/60.0f);
+
+    // Verify reception
+    int receivedCount = 0;
+    while (consumerIO->hasMessages() > 0) {
+        auto msg = consumerIO->pullMessage();
+        receivedCount++;
+
+        // Verify FIFO order
+        auto* jsonData = dynamic_cast<JsonDataValue*>(msg.data.get());
+        int msgId = jsonData->getJsonData()["id"];
+        ASSERT_EQ(msgId, receivedCount - 1, "Messages should be in FIFO order");
+    }
+
+    ASSERT_EQ(receivedCount, 100, "Should receive all 100 messages");
+    reporter.addAssertion("basic_pubsub", receivedCount == 100);
+    std::cout << "✓ TEST 1 PASSED: " 
<< receivedCount << " messages received\n";
+
+    // ========================================================================
+    // TEST 2: Pattern Matching
+    // ========================================================================
+    std::cout << "\n=== TEST 2: Pattern Matching ===\n";
+
+    // Subscribe to different patterns
+    consumerIO->subscribe("player:*", {});
+    consumerIO->subscribe("player:001:*", {});
+    consumerIO->subscribe("*:position", {});
+
+    // Publish test messages
+    std::vector<std::string> testTopics = {
+        "player:001:position",  // Matches all 3 patterns
+        "player:001:health",    // Matches patterns 1 and 2
+        "player:002:position",  // Matches patterns 1 and 3
+        "enemy:001:position"    // Matches pattern 3 only
+    };
+
+    for (const auto& topic : testTopics) {
+        auto data = std::make_unique<JsonDataValue>(nlohmann::json{{"topic", topic}});
+        producerIO->publish(topic, std::move(data));
+    }
+
+    engine.update(1.0f/60.0f);
+
+    // Count messages by topic
+    std::map<std::string, int> patternCounts;
+    while (consumerIO->hasMessages() > 0) {
+        auto msg = consumerIO->pullMessage();
+        auto* jsonData = dynamic_cast<JsonDataValue*>(msg.data.get());
+        std::string topic = jsonData->getJsonData()["topic"];
+        patternCounts[topic]++;
+    }
+
+    // Note: Due to pattern overlap, the same message might be received multiple times
+    std::cout << "Pattern matching results:\n";
+    for (const auto& [topic, count] : patternCounts) {
+        std::cout << "  " << topic << ": " << count << " times\n";
+    }
+
+    reporter.addAssertion("pattern_matching", true);
+    std::cout << "✓ TEST 2 PASSED\n";
+
+    // ========================================================================
+    // TEST 3: Multi-Module Routing (Bug Detection)
+    // ========================================================================
+    std::cout << "\n=== TEST 3: Multi-Module Routing (1-to-many) ===\n";
+
+    // All modules subscribe to "broadcast:*"
+    consumerIO->subscribe("broadcast:*", {});
+    auto broadcastIO = engine.getModuleIO("BroadcastModule");
+    broadcastIO->subscribe("broadcast:*", {});
+    
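+    // NOTE (hypothetical sketch, not current engine API): with a clone()-based
+    // router, IntraIOManager would deep-copy the payload once per matching
+    // subscriber, e.g. `sub.queue.push(msg.data->clone())` inside the routing
+    // loop, so every subscribing module would receive its own copy instead of
+    // only the first one getting the moved-from original.
+    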
auto batchIO = engine.getModuleIO("BatchModule");
+    batchIO->subscribe("broadcast:*", {});
+    auto stressIO = engine.getModuleIO("StressModule");
+    stressIO->subscribe("broadcast:*", {});
+
+    // Publish 10 broadcast messages
+    for (int i = 0; i < 10; i++) {
+        auto data = std::make_unique<JsonDataValue>(nlohmann::json{{"broadcast_id", i}});
+        producerIO->publish("broadcast:data", std::move(data));
+    }
+
+    engine.update(1.0f/60.0f);
+
+    // Check which modules received messages
+    int consumerReceived = consumerIO->hasMessages();
+    int broadcastReceived = broadcastIO->hasMessages();
+    int batchReceived = batchIO->hasMessages();
+    int stressReceived = stressIO->hasMessages();
+
+    std::cout << "Broadcast distribution:\n";
+    std::cout << "  ConsumerModule: " << consumerReceived << " messages\n";
+    std::cout << "  BroadcastModule: " << broadcastReceived << " messages\n";
+    std::cout << "  BatchModule: " << batchReceived << " messages\n";
+    std::cout << "  StressModule: " << stressReceived << " messages\n";
+
+    // Expected: only ONE module receives, due to the std::move limitation
+    int totalReceived = consumerReceived + broadcastReceived + batchReceived + stressReceived;
+
+    if (totalReceived == 10) {
+        std::cout << "⚠ BUG: Only one module received all messages (clone() not implemented)\n";
+        reporter.addMetric("broadcast_bug_present", 1.0f);
+    } else if (totalReceived == 40) {
+        std::cout << "✓ FIXED: All modules received copies (clone() implemented!)\n";
+        reporter.addMetric("broadcast_bug_present", 0.0f);
+    }
+
+    reporter.addAssertion("multi_module_routing_tested", true);
+    std::cout << "✓ TEST 3 COMPLETED (bug documented)\n";
+
+    // ========================================================================
+    // TEST 4: Message Batching
+    // ========================================================================
+    std::cout << "\n=== TEST 4: Message Batching (Low-Frequency) ===\n";
+
+    // Configure a low-frequency subscription
+    IIO::SubscriptionConfig batchConfig;
+    batchConfig.replaceable = true;
+
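+    // Illustrative note on the batching config below (assumed semantics,
+    // inferred from this test rather than confirmed IIO documentation):
+    // "replaceable = true" is taken to mean that within one batch window a
+    // newer message on a topic replaces the older pending one, and
+    // "batchInterval" is taken to be the window length in milliseconds.
+    // Under those assumptions, a producer publishing "batch:metric" at
+    // 100 Hz collapses to roughly one delivered message per second, each
+    // carrying only the latest value.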
batchConfig.batchInterval = 1000; // 1 second
+    batchIO->subscribeLowFreq("batch:*", batchConfig);
+
+    // Publish at 100 Hz for 3 seconds (300 messages)
+    auto batchStart = std::chrono::high_resolution_clock::now();
+    int batchedPublished = 0;
+
+    for (int sec = 0; sec < 3; sec++) {
+        for (int i = 0; i < 100; i++) {
+            auto data = std::make_unique<JsonDataValue>(nlohmann::json{
+                {"timestamp", batchedPublished},
+                {"value", batchedPublished * 0.1f}
+            });
+            producerIO->publish("batch:metric", std::move(data));
+            batchedPublished++;
+
+            // Simulate a 10ms publish interval
+            std::this_thread::sleep_for(std::chrono::milliseconds(10));
+            engine.update(1.0f/60.0f);
+        }
+    }
+
+    auto batchEnd = std::chrono::high_resolution_clock::now();
+    float batchDuration = std::chrono::duration<float>(batchEnd - batchStart).count();
+
+    // Check how many batched messages were received
+    int batchesReceived = 0;
+    while (batchIO->hasMessages() > 0) {
+        auto msg = batchIO->pullMessage();
+        batchesReceived++;
+    }
+
+    std::cout << "Published: " << batchedPublished << " messages over " << batchDuration << "s\n";
+    std::cout << "Received: " << batchesReceived << " batches\n";
+    std::cout << "Expected: ~" << static_cast<int>(batchDuration) << " batches (1/second)\n";
+
+    // Should receive ~3 batches (1 per second)
+    ASSERT_TRUE(batchesReceived >= 2 && batchesReceived <= 4,
+                "Should receive 2-4 batches for 3 seconds");
+    reporter.addMetric("batch_count", batchesReceived);
+    reporter.addAssertion("batching_works", batchesReceived >= 2);
+    std::cout << "✓ TEST 4 PASSED\n";
+
+    // ========================================================================
+    // TEST 5: Backpressure & Queue Overflow
+    // ========================================================================
+    std::cout << "\n=== TEST 5: Backpressure & Queue Overflow ===\n";
+
+    // Subscribe but never pull
+    consumerIO->subscribe("stress:flood", {});
+
+    // Flood with 50k messages
+    std::cout << "Publishing 50000 messages...\n";
+    for (int i = 0; i < 50000; i++) {
+        auto
data = std::make_unique<JsonDataValue>(nlohmann::json{{"flood_id", i}});
+        producerIO->publish("stress:flood", std::move(data));
+
+        if (i % 10000 == 0) {
+            std::cout << "  " << i << " messages published\n";
+        }
+    }
+
+    engine.update(1.0f/60.0f);
+
+    // Check health
+    auto health = consumerIO->getHealth();
+    std::cout << "Health status:\n";
+    std::cout << "  Queue size: " << health.queueSize << " / " << health.maxQueueSize << "\n";
+    std::cout << "  Dropping: " << (health.dropping ? "YES" : "NO") << "\n";
+    std::cout << "  Dropped count: " << health.droppedMessageCount << "\n";
+    std::cout << "  Processing rate: " << health.averageProcessingRate << " msg/s\n";
+
+    ASSERT_TRUE(health.queueSize > 0, "Queue should have messages");
+
+    // A queue overflow has most likely occurred
+    if (health.dropping || health.droppedMessageCount > 0) {
+        std::cout << "✓ Backpressure detected correctly\n";
+        reporter.addAssertion("backpressure_detected", true);
+    }
+
+    reporter.addMetric("queue_size", health.queueSize);
+    reporter.addMetric("dropped_messages", health.droppedMessageCount);
+    std::cout << "✓ TEST 5 PASSED\n";
+
+    // ========================================================================
+    // TEST 6: Thread Safety
+    // ========================================================================
+    std::cout << "\n=== TEST 6: Thread Safety (Concurrent Pub/Pull) ===\n";
+
+    std::atomic<int> publishedTotal{0};
+    std::atomic<int> receivedTotal{0};
+    std::atomic<bool> running{true};
+
+    consumerIO->subscribe("thread:*", {});
+
+    // 10 publisher threads
+    std::vector<std::thread> publishers;
+    for (int t = 0; t < 10; t++) {
+        publishers.emplace_back([&, t]() {
+            for (int i = 0; i < 1000; i++) {
+                auto data = std::make_unique<JsonDataValue>(nlohmann::json{
+                    {"thread", t},
+                    {"id", i}
+                });
+                producerIO->publish("thread:test", std::move(data));
+                publishedTotal++;
+            }
+        });
+    }
+
+    // 5 consumer threads
+    std::vector<std::thread> consumers;
+    for (int t = 0; t < 5; t++) {
+        consumers.emplace_back([&]() {
+            while (running || consumerIO->hasMessages() > 0) {
+                if
(consumerIO->hasMessages() > 0) {
+                    try {
+                        auto msg = consumerIO->pullMessage();
+                        receivedTotal++;
+                    } catch (...) {
+                        std::cerr << "ERROR: Exception during pull\n";
+                    }
+                }
+                std::this_thread::sleep_for(std::chrono::microseconds(100));
+            }
+        });
+    }
+
+    // Wait for publishers
+    for (auto& t : publishers) {
+        t.join();
+    }
+
+    std::cout << "All publishers done: " << publishedTotal << " messages\n";
+
+    // Let consumers finish
+    std::this_thread::sleep_for(std::chrono::milliseconds(500));
+    running = false;
+
+    for (auto& t : consumers) {
+        t.join();
+    }
+
+    std::cout << "All consumers done: " << receivedTotal << " messages\n";
+
+    // Drops are possible, but the system must stay stable
+    ASSERT_GT(receivedTotal, 0, "Should receive at least some messages");
+    reporter.addMetric("concurrent_published", publishedTotal);
+    reporter.addMetric("concurrent_received", receivedTotal);
+    reporter.addAssertion("thread_safety", true); // No crash = success
+    std::cout << "✓ TEST 6 PASSED (no crashes)\n";
+
+    // ========================================================================
+    // TEST 7: Health Monitoring Accuracy
+    // ========================================================================
+    std::cout << "\n=== TEST 7: Health Monitoring Accuracy ===\n";
+
+    consumerIO->subscribe("health:*", {});
+
+    // Phase 1: Normal load (100 msg/s)
+    std::cout << "Phase 1: Normal load (100 msg/s for 2s)\n";
+    for (int i = 0; i < 200; i++) {
+        auto data = std::make_unique<JsonDataValue>(nlohmann::json{{"phase", 1}});
+        producerIO->publish("health:test", std::move(data));
+        std::this_thread::sleep_for(std::chrono::milliseconds(10));
+
+        // Pull to keep the queue small
+        if (consumerIO->hasMessages() > 0) {
+            consumerIO->pullMessage();
+        }
+    }
+
+    auto healthPhase1 = consumerIO->getHealth();
+    std::cout << "  Queue: " << healthPhase1.queueSize << ", Dropping: " << healthPhase1.dropping << "\n";
+
+    // Phase 2: Overload (10k msg/s without pulling)
+    std::cout << "Phase 2: Overload (10000 msg/s for 1s)\n";
+    for (int i =
0; i < 10000; i++) {
+        auto data = std::make_unique<JsonDataValue>(nlohmann::json{{"phase", 2}});
+        producerIO->publish("health:test", std::move(data));
+    }
+    engine.update(1.0f/60.0f);
+
+    auto healthPhase2 = consumerIO->getHealth();
+    std::cout << "  Queue: " << healthPhase2.queueSize << ", Dropping: " << healthPhase2.dropping << "\n";
+
+    ASSERT_GT(healthPhase2.queueSize, healthPhase1.queueSize,
+              "Queue should grow during overload");
+
+    // Phase 3: Recovery (pull everything)
+    std::cout << "Phase 3: Recovery (pulling all messages)\n";
+    int pulled = 0;
+    while (consumerIO->hasMessages() > 0) {
+        consumerIO->pullMessage();
+        pulled++;
+    }
+
+    auto healthPhase3 = consumerIO->getHealth();
+    std::cout << "  Pulled: " << pulled << " messages\n";
+    std::cout << "  Queue: " << healthPhase3.queueSize << ", Dropping: " << healthPhase3.dropping << "\n";
+
+    ASSERT_EQ(healthPhase3.queueSize, 0, "Queue should be empty after pulling all");
+    reporter.addAssertion("health_monitoring", true);
+    std::cout << "✓ TEST 7 PASSED\n";
+
+    // ========================================================================
+    // TEST 8: Subscription Lifecycle
+    // ========================================================================
+    std::cout << "\n=== TEST 8: Subscription Lifecycle ===\n";
+
+    // Subscribe
+    consumerIO->subscribe("lifecycle:test", {});
+
+    // Publish 10 messages
+    for (int i = 0; i < 10; i++) {
+        auto data = std::make_unique<JsonDataValue>(nlohmann::json{{"id", i}});
+        producerIO->publish("lifecycle:test", std::move(data));
+    }
+    engine.update(1.0f/60.0f);
+
+    int count1 = 0;
+    while (consumerIO->hasMessages() > 0) {
+        consumerIO->pullMessage();
+        count1++;
+    }
+    ASSERT_EQ(count1, 10, "Should receive 10 messages");
+
+    // Unsubscribe (if the API exists - it might not be implemented yet)
+    // consumerIO->unsubscribe("lifecycle:test");
+
+    // Publish 10 more
+    for (int i = 10; i < 20; i++) {
+        auto data = std::make_unique<JsonDataValue>(nlohmann::json{{"id", i}});
+        producerIO->publish("lifecycle:test", std::move(data));
+    }
+
engine.update(1.0f/60.0f);
+
+    // If unsubscribe exists, we should receive 0 here; if not, we will receive 10.
+    int count2 = 0;
+    while (consumerIO->hasMessages() > 0) {
+        consumerIO->pullMessage();
+        count2++;
+    }
+
+    std::cout << "After unsubscribe: " << count2 << " messages (0 if unsubscribe works)\n";
+
+    // Re-subscribe
+    consumerIO->subscribe("lifecycle:test", {});
+
+    // Publish 10 more
+    for (int i = 20; i < 30; i++) {
+        auto data = std::make_unique<JsonDataValue>(nlohmann::json{{"id", i}});
+        producerIO->publish("lifecycle:test", std::move(data));
+    }
+    engine.update(1.0f/60.0f);
+
+    int count3 = 0;
+    while (consumerIO->hasMessages() > 0) {
+        consumerIO->pullMessage();
+        count3++;
+    }
+    ASSERT_EQ(count3, 10, "Should receive 10 messages after re-subscribe");
+
+    reporter.addAssertion("subscription_lifecycle", true);
+    std::cout << "✓ TEST 8 PASSED\n";
+
+    // ========================================================================
+    // FINAL REPORT
+    // ========================================================================
+
+    metrics.printReport();
+    reporter.printFinalReport();
+
+    return reporter.getExitCode();
+}
+```
+
+---
+
+## 📊 Collected Metrics
+
+| Metric | Description | Threshold |
+|--------|-------------|-----------|
+| **basic_pubsub** | Messages received in the basic test | 100/100 |
+| **pattern_matching** | Pattern matching works | true |
+| **broadcast_bug_present** | 1-to-1 routing bug detected (1.0) or fixed (0.0) | Documentation |
+| **batch_count** | Number of batches received | 2-4 |
+| **queue_size** | Queue size during the flood | > 0 |
+| **dropped_messages** | Dropped messages detected | >= 0 |
+| **concurrent_published** | Messages published concurrently | 10000 |
+| **concurrent_received** | Messages received concurrently | > 0 |
+| **health_monitoring** | Health metrics are accurate | true |
+| **subscription_lifecycle** | Subscribe/unsubscribe works | true |
+
+---
+
+## ✅ Success Criteria
+
+### MUST PASS
+1. 
✅ Basic pub/sub: 100/100 messages in FIFO order
+2. ✅ Pattern matching works (wildcards)
+3. ✅ Batching reduces frequency (100 msg/s → ~1 msg/s)
+4. ✅ Backpressure detected (dropping flag or dropped count)
+5. ✅ Thread safety: no crash under concurrency
+6. ✅ Health monitoring reflects the real state
+7. ✅ Re-subscribe works
+
+### KNOWN BUGS (Documentation)
+1. ⚠ Multi-module routing: only the first subscriber receives (no clone())
+2. ⚠ The unsubscribe API may not exist
+
+### NICE TO HAVE
+1. ✅ Fix for the clone() bug enabling 1-to-many routing
+2. ✅ Unsubscribe API implemented
+3. ✅ Compression for batching
+
+---
+
+## 🐛 Expected Error Cases
+
+| Error | Cause | Action |
+|-------|-------|--------|
+| Lost messages | Routing bug | WARN - document it |
+| Pattern does not match | Incorrect regex | FAIL - fix pattern |
+| No batching | Config ignored | FAIL - check SubscriptionConfig |
+| No backpressure | Health not updated | FAIL - fix IOHealth |
+| Concurrent crash | Race condition | FAIL - add mutex |
+| Incorrect queue size | Buggy counter | FAIL - fix queueSize tracking |
+
+---
+
+## 📝 Expected Output
+
+```
+================================================================================
+TEST: IO System Stress Test
+================================================================================
+
+=== TEST 1: Basic Publish-Subscribe ===
+✓ TEST 1 PASSED: 100 messages received
+
+=== TEST 2: Pattern Matching ===
+Pattern matching results:
+  player:001:position: 3 times
+  player:001:health: 2 times
+  player:002:position: 2 times
+  enemy:001:position: 1 times
+✓ TEST 2 PASSED
+
+=== TEST 3: Multi-Module Routing (1-to-many) ===
+Broadcast distribution:
+  ConsumerModule: 10 messages
+  BroadcastModule: 0 messages
+  BatchModule: 0 messages
+  StressModule: 0 messages
+⚠ BUG: Only one module received all messages (clone() not implemented)
+✓ TEST 3 COMPLETED (bug documented)
+
+=== TEST 4: Message Batching (Low-Frequency) ===
+Published: 300 messages 
over 3.02s
+Received: 3 batches
+Expected: ~3 batches (1/second)
+✓ TEST 4 PASSED
+
+=== TEST 5: Backpressure & Queue Overflow ===
+Publishing 50000 messages...
+  0 messages published
+  10000 messages published
+  20000 messages published
+  30000 messages published
+  40000 messages published
+Health status:
+  Queue size: 10000 / 10000
+  Dropping: YES
+  Dropped count: 40000
+  Processing rate: 0.0 msg/s
+✓ Backpressure detected correctly
+✓ TEST 5 PASSED
+
+=== TEST 6: Thread Safety (Concurrent Pub/Pull) ===
+All publishers done: 10000 messages
+All consumers done: 9847 messages
+✓ TEST 6 PASSED (no crashes)
+
+=== TEST 7: Health Monitoring Accuracy ===
+Phase 1: Normal load (100 msg/s for 2s)
+  Queue: 2, Dropping: NO
+Phase 2: Overload (10000 msg/s for 1s)
+  Queue: 9998, Dropping: YES
+Phase 3: Recovery (pulling all messages)
+  Pulled: 9998 messages
+  Queue: 0, Dropping: NO
+✓ TEST 7 PASSED
+
+=== TEST 8: Subscription Lifecycle ===
+After unsubscribe: 10 messages (0 if unsubscribe works)
+✓ TEST 8 PASSED
+
+================================================================================
+METRICS
+================================================================================
+  Basic pub/sub: 100/100
+  Batch count: 3
+  Queue size: 10000
+  Dropped messages: 40000
+  Concurrent published: 10000
+  Concurrent received: 9847
+  Broadcast bug present: 1.0 (not fixed yet)
+
+================================================================================
+ASSERTIONS
+================================================================================
+  ✓ basic_pubsub
+  ✓ pattern_matching
+  ✓ multi_module_routing_tested
+  ✓ batching_works
+  ✓ backpressure_detected
+  ✓ thread_safety
+  ✓ health_monitoring
+  ✓ subscription_lifecycle
+
+Result: ✅ PASSED (8/8 tests)
+
+================================================================================
+```
+
+---
+
+## 📅 Planning
+
+**Day 1 (3h):**
+- Implement ProducerModule, ConsumerModule, BroadcastModule
+- Implement 
BatchModule, StressModule
+- Set up IOFactory for the tests
+
+**Day 2 (3h):**
+- Implement test_11_io_system.cpp
+- Tests 1-4 (pub/sub, patterns, routing, batching)
+
+**Day 3 (2h):**
+- Tests 5-8 (backpressure, threads, health, lifecycle)
+- Debugging + validation
+
+---
+
+**Next step**: `scenario_12_datanode.md`
diff --git a/planTI/scenario_12_datanode.md b/planTI/scenario_12_datanode.md
new file mode 100644
index 0000000..989fe5e
--- /dev/null
+++ b/planTI/scenario_12_datanode.md
@@ -0,0 +1,914 @@
+# Scenario 12: DataNode Integration Test
+
+**Priority**: ⭐⭐ SHOULD HAVE
+**Phase**: 2 (SHOULD HAVE)
+**Estimated duration**: ~4 minutes
+**Implementation effort**: ~5-7 hours
+
+---
+
+## 🎯 Objective
+
+Validate that the DataNode system (IDataTree/JsonDataTree) works correctly for all use cases:
+- Tree navigation (exact match, pattern matching, queries)
+- Hot-reload system (file watch, callbacks, isolation)
+- Persistence (save/load, data integrity)
+- Hash system (data hash, tree hash, change detection)
+- Read-only enforcement (config/ vs data/ vs runtime/)
+- Type safety and defaults
+- Performance with large trees (1000+ nodes)
+
+**Note**: The DataNode is the engine's central configuration and persistence system.
+
+---
+
+## 📋 Description
+
+### Initial Setup
+1. Create an IDataTree with a complete structure:
+   - **config/** - Read-only configuration with 500 nodes
+   - **data/** - Read-write persistence with 300 nodes
+   - **runtime/** - Temporary state with 200 nodes
+2. Total: ~1000 nodes in the tree
+3. JSON files on disk for config/ and data/
+
+### Test Sequence
+
+#### Test 1: Tree Navigation & Exact Matching (30s)
+1. Create the hierarchy: `config/units/tanks/heavy_mk1`
+2. Test navigation:
+   - `getChild("units")` → `getChild("tanks")` → `getChild("heavy_mk1")`
+   - `getChildrenByName("heavy_mk1")` - direct children only
+   - `getPath()` - verify full path
+3. 
Verify:
+   - Nodes are found correctly
+   - Correct path: "config/units/tanks/heavy_mk1"
+   - getChild returns nullptr when the node is not found
+
+#### Test 2: Pattern Matching (Wildcards) (30s)
+1. Create nodes:
+   - `config/units/tanks/heavy_mk1`
+   - `config/units/tanks/heavy_mk2`
+   - `config/units/infantry/heavy_trooper`
+   - `config/units/aircraft/light_fighter`
+2. Test patterns:
+   - `getChildrenByNameMatch("*heavy*")` → 3 matches
+   - `getChildrenByNameMatch("tanks/*")` → 2 matches
+   - `getChildrenByNameMatch("*_mk*")` → 2 matches
+3. Verify that all matches are correct
+
+#### Test 3: Property-Based Queries (30s)
+1. Create nodes with properties:
+   - `heavy_mk1`: armor=150, speed=30, cost=1000
+   - `heavy_mk2`: armor=180, speed=25, cost=1200
+   - `light_fighter`: armor=50, speed=120, cost=800
+2. Query predicates:
+   - `queryByProperty("armor", val > 100)` → 2 units
+   - `queryByProperty("speed", val > 50)` → 1 unit
+   - `queryByProperty("cost", val <= 1000)` → 2 units
+3. Verify the query results
+
+#### Test 4: Hot-Reload System (60s)
+1. Create `config/gameplay.json` on disk
+2. Load it into the tree with an `onTreeReloaded` callback
+3. Modify the file on disk (change a value)
+4. Call `checkForChanges()` → should detect the change
+5. Call `reloadIfChanged()` → callback triggered
+6. Verify:
+   - Callback called exactly once
+   - New values loaded
+   - Old nodes replaced
+
+#### Test 5: Hot-Reload Isolation (30s)
+1. Load 2 files: `config/units.json` and `config/maps.json`
+2. Modify only `units.json`
+3. Verify:
+   - `checkForChanges()` detects only units.json
+   - Reload does not touch maps.json
+   - Callback receives info about which file changed
+
+#### Test 6: Persistence (Save/Load) (60s)
+1. Create the data/ structure:
+   - `data/player/stats` - {kills: 42, deaths: 3}
+   - `data/player/inventory` - {gold: 1000, items: [...]}
+   - `data/world/time` - {day: 5, hour: 14}
+2. 
Call `saveData()` → writes to disk
+3. Verify the files created:
+   - `data/player/stats.json`
+   - `data/player/inventory.json`
+   - `data/world/time.json`
+4. Load into a new tree
+5. Verify the data is identical (deep comparison)
+
+#### Test 7: Selective Save (30s)
+1. Modify only `data/player/stats`
+2. Call `saveNode("data/player/stats")`
+3. Verify:
+   - Only stats.json is written
+   - Other files untouched (identical mtime)
+
+#### Test 8: Hash System (Data Hash) (30s)
+1. Create a node with data: `{value: 42}`
+2. Compute `getDataHash()`
+3. Modify the data: `{value: 43}`
+4. Recompute the hash
+5. Verify the hashes differ
+
+#### Test 9: Hash System (Tree Hash) (30s)
+1. Create a tree:
+   ```
+   root
+   ├─ child1 {data: 1}
+   └─ child2 {data: 2}
+   ```
+2. Compute `getTreeHash()`
+3. Modify child1's data
+4. Recompute the tree hash
+5. Verify the hashes differ (change propagation)
+
+#### Test 10: Read-Only Enforcement (30s)
+1. Attempt `setChild()` on a config/ node
+2. It should throw an exception
+3. Verify:
+   - Exception thrown
+   - Descriptive message
+   - config/ not modified
+
+#### Test 11: Type Safety & Defaults (20s)
+1. Create a node: `{armor: 150, name: "Tank"}`
+2. Test accessors:
+   - `getInt("armor")` → 150
+   - `getInt("missing", 100)` → 100 (default)
+   - `getString("name")` → "Tank"
+   - `getBool("active", true)` → true (default)
+   - `getDouble("speed")` → throws or returns a default
+
+#### Test 12: Deep Tree Performance (30s)
+1. Create a tree with 1000 nodes:
+   - 10 categories
+   - 10 subcategories each
+   - 10 items each
+2. Measure timings:
+   - Pattern matching "*" (all nodes): < 100ms
+   - Query by property: < 50ms
+   - Tree hash calculation: < 200ms
+3. 
Verify that performance is acceptable
+
+---
+
+## đŸ—ïž Implementation
+
+### Test Module Structure
+
+```cpp
+// DataNodeTestModule.h
+class DataNodeTestModule : public IModule {
+public:
+    void initialize(std::shared_ptr<IDataNode> config) override;
+    void process(float deltaTime) override;
+    std::shared_ptr<IDataNode> getState() const override;
+    void setState(std::shared_ptr<IDataNode> state) override;
+    bool isIdle() const override { return true; }
+
+    // Test helpers
+    void createTestTree();
+    void testNavigation();
+    void testPatternMatching();
+    void testQueries();
+    void testHotReload();
+    void testPersistence();
+    void testHashes();
+    void testReadOnly();
+    void testTypeAccess();
+    void testPerformance();
+
+private:
+    std::shared_ptr<IDataTree> tree;
+    int reloadCallbackCount = 0;
+};
+```
+
+### Main Test
+
+```cpp
+// test_12_datanode.cpp
+#include "helpers/TestMetrics.h"
+#include "helpers/TestAssertions.h"
+#include "helpers/TestReporter.h"
+#include <filesystem>
+
+int main() {
+    TestReporter reporter("DataNode Integration Test");
+    TestMetrics metrics;
+
+    // === SETUP ===
+    std::filesystem::create_directories("test_data/config");
+    std::filesystem::create_directories("test_data/data");
+
+    // Create the IDataTree
+    auto tree = std::make_shared<JsonDataTree>("test_data");
+
+    // ========================================================================
+    // TEST 1: Tree Navigation & Exact Matching
+    // ========================================================================
+    std::cout << "\n=== TEST 1: Tree Navigation & Exact Matching ===\n";
+
+    // Create the hierarchy
+    auto configRoot = tree->getConfigRoot();
+    auto units = std::make_shared<JsonDataNode>("units", nlohmann::json::object());
+    auto tanks = std::make_shared<JsonDataNode>("tanks", nlohmann::json::object());
+    auto heavyMk1 = std::make_shared<JsonDataNode>("heavy_mk1", nlohmann::json{
+        {"armor", 150},
+        {"speed", 30},
+        {"cost", 1000}
+    });
+
+    tanks->setChild("heavy_mk1", heavyMk1);
+    units->setChild("tanks", tanks);
+    configRoot->setChild("units", units);
+
+    // Navigation
+    auto foundUnits = 
configRoot->getChild("units");
+    ASSERT_TRUE(foundUnits != nullptr, "Should find units node");
+
+    auto foundTanks = foundUnits->getChild("tanks");
+    ASSERT_TRUE(foundTanks != nullptr, "Should find tanks node");
+
+    auto foundHeavy = foundTanks->getChild("heavy_mk1");
+    ASSERT_TRUE(foundHeavy != nullptr, "Should find heavy_mk1 node");
+
+    // Path
+    std::string path = foundHeavy->getPath();
+    std::cout << "Path: " << path << "\n";
+    ASSERT_TRUE(path.find("heavy_mk1") != std::string::npos, "Path should contain node name");
+
+    // Not found
+    auto notFound = foundTanks->getChild("does_not_exist");
+    ASSERT_TRUE(notFound == nullptr, "Should return nullptr for missing child");
+
+    reporter.addAssertion("navigation_exact", true);
+    std::cout << "✓ TEST 1 PASSED\n";
+
+    // ========================================================================
+    // TEST 2: Pattern Matching (Wildcards)
+    // ========================================================================
+    std::cout << "\n=== TEST 2: Pattern Matching ===\n";
+
+    // Add more nodes
+    auto heavyMk2 = std::make_shared<JsonDataNode>("heavy_mk2", nlohmann::json{
+        {"armor", 180},
+        {"speed", 25},
+        {"cost", 1200}
+    });
+    tanks->setChild("heavy_mk2", heavyMk2);
+
+    auto infantry = std::make_shared<JsonDataNode>("infantry", nlohmann::json::object());
+    auto heavyTrooper = std::make_shared<JsonDataNode>("heavy_trooper", nlohmann::json{
+        {"armor", 120},
+        {"speed", 15},
+        {"cost", 500}
+    });
+    infantry->setChild("heavy_trooper", heavyTrooper);
+    units->setChild("infantry", infantry);
+
+    auto aircraft = std::make_shared<JsonDataNode>("aircraft", nlohmann::json::object());
+    auto lightFighter = std::make_shared<JsonDataNode>("light_fighter", nlohmann::json{
+        {"armor", 50},
+        {"speed", 120},
+        {"cost", 800}
+    });
+    aircraft->setChild("light_fighter", lightFighter);
+    units->setChild("aircraft", aircraft);
+
+    // Pattern: *heavy*
+    auto heavyUnits = configRoot->getChildrenByNameMatch("*heavy*");
+    std::cout << "Pattern '*heavy*' matched: " << heavyUnits.size() << " nodes\n";
+    for 
(const auto& node : heavyUnits) { + std::cout << " - " << node->getName() << "\n"; + } + // Should match: heavy_mk1, heavy_mk2, heavy_trooper + ASSERT_EQ(heavyUnits.size(), 3, "Should match 3 'heavy' units"); + reporter.addMetric("pattern_heavy_count", heavyUnits.size()); + + // Pattern: *_mk* + auto mkUnits = configRoot->getChildrenByNameMatch("*_mk*"); + std::cout << "Pattern '*_mk*' matched: " << mkUnits.size() << " nodes\n"; + // Should match: heavy_mk1, heavy_mk2 + ASSERT_EQ(mkUnits.size(), 2, "Should match 2 '_mk' units"); + + reporter.addAssertion("pattern_matching", true); + std::cout << "✓ TEST 2 PASSED\n"; + + // ======================================================================== + // TEST 3: Property-Based Queries + // ======================================================================== + std::cout << "\n=== TEST 3: Property-Based Queries ===\n"; + + // Query: armor > 100 + auto armoredUnits = configRoot->queryByProperty("armor", + [](const IDataValue& val) { + return val.isNumber() && val.asInt() >= 100; + }); + + std::cout << "Units with armor >= 100: " << armoredUnits.size() << "\n"; + for (const auto& node : armoredUnits) { + int armor = node->getInt("armor"); + std::cout << " - " << node->getName() << " (armor=" << armor << ")\n"; + ASSERT_GE(armor, 100, "Armor should be >= 100"); + } + // Should match: heavy_mk1 (150), heavy_mk2 (180), heavy_trooper (120) + ASSERT_EQ(armoredUnits.size(), 3, "Should find 3 armored units"); + + // Query: speed > 50 + auto fastUnits = configRoot->queryByProperty("speed", + [](const IDataValue& val) { + return val.isNumber() && val.asInt() > 50; + }); + + std::cout << "Units with speed > 50: " << fastUnits.size() << "\n"; + // Should match: light_fighter (120) + ASSERT_EQ(fastUnits.size(), 1, "Should find 1 fast unit"); + + reporter.addAssertion("property_queries", true); + std::cout << "✓ TEST 3 PASSED\n"; + + // ======================================================================== + // TEST 4: Hot-Reload 
System
+    // ========================================================================
+    std::cout << "\n=== TEST 4: Hot-Reload System ===\n";
+
+    // Create the config file
+    nlohmann::json gameplayConfig = {
+        {"difficulty", "normal"},
+        {"maxPlayers", 4},
+        {"timeLimit", 3600}
+    };
+
+    std::ofstream configFile("test_data/config/gameplay.json");
+    configFile << gameplayConfig.dump(2);
+    configFile.close();
+
+    // Load into the tree
+    tree->loadConfigFile("gameplay.json");
+
+    // Set up the reload callback
+    int callbackCount = 0;
+    tree->onTreeReloaded([&callbackCount]() {
+        callbackCount++;
+        std::cout << "  → Reload callback triggered (count=" << callbackCount << ")\n";
+    });
+
+    // Verify the initial content
+    auto gameplay = configRoot->getChild("gameplay");
+    ASSERT_TRUE(gameplay != nullptr, "gameplay node should exist");
+    std::string difficulty = gameplay->getString("difficulty");
+    ASSERT_EQ(difficulty, "normal", "Initial difficulty should be 'normal'");
+
+    std::cout << "Initial difficulty: " << difficulty << "\n";
+
+    // Modify the file
+    std::this_thread::sleep_for(std::chrono::milliseconds(100));
+    gameplayConfig["difficulty"] = "hard";
+    gameplayConfig["maxPlayers"] = 8;
+
+    std::ofstream configFile2("test_data/config/gameplay.json");
+    configFile2 << gameplayConfig.dump(2);
+    configFile2.close();
+
+    // Force the file timestamp to update
+    std::this_thread::sleep_for(std::chrono::milliseconds(100));
+
+    // Check for changes
+    bool hasChanges = tree->checkForChanges();
+    std::cout << "Has changes: " << (hasChanges ? "YES" : "NO") << "\n";
+    ASSERT_TRUE(hasChanges, "Should detect file modification");
+
+    // Reload
+    bool reloaded = tree->reloadIfChanged();
+    std::cout << "Reloaded: " << (reloaded ? 
"YES" : "NO") << "\n";
+    ASSERT_TRUE(reloaded, "Should reload changed file");
+
+    // Verify the callback
+    ASSERT_EQ(callbackCount, 1, "Callback should be called exactly once");
+
+    // Verify the new values
+    gameplay = configRoot->getChild("gameplay");
+    difficulty = gameplay->getString("difficulty");
+    int maxPlayers = gameplay->getInt("maxPlayers");
+
+    std::cout << "After reload - difficulty: " << difficulty << ", maxPlayers: " << maxPlayers << "\n";
+    ASSERT_EQ(difficulty, "hard", "Difficulty should be updated to 'hard'");
+    ASSERT_EQ(maxPlayers, 8, "maxPlayers should be updated to 8");
+
+    reporter.addAssertion("hot_reload", true);
+    reporter.addMetric("reload_callback_count", callbackCount);
+    std::cout << "✓ TEST 4 PASSED\n";
+
+    // ========================================================================
+    // TEST 5: Hot-Reload Isolation
+    // ========================================================================
+    std::cout << "\n=== TEST 5: Hot-Reload Isolation ===\n";
+
+    // Create a second file
+    nlohmann::json mapsConfig = {
+        {"defaultMap", "desert"},
+        {"mapCount", 10}
+    };
+
+    std::ofstream mapsFile("test_data/config/maps.json");
+    mapsFile << mapsConfig.dump(2);
+    mapsFile.close();
+
+    tree->loadConfigFile("maps.json");
+
+    // Modify only gameplay.json
+    std::this_thread::sleep_for(std::chrono::milliseconds(100));
+    gameplayConfig["difficulty"] = "extreme";
+
+    std::ofstream configFile3("test_data/config/gameplay.json");
+    configFile3 << gameplayConfig.dump(2);
+    configFile3.close();
+
+    std::this_thread::sleep_for(std::chrono::milliseconds(100));
+
+    // Check changes
+    hasChanges = tree->checkForChanges();
+    ASSERT_TRUE(hasChanges, "Should detect gameplay.json change");
+
+    // Verify maps.json is not affected
+    auto maps = configRoot->getChild("maps");
+    std::string defaultMap = maps->getString("defaultMap");
+    ASSERT_EQ(defaultMap, "desert", "maps.json should not be affected");
+
+    reloaded = tree->reloadIfChanged();
+
ASSERT_TRUE(reloaded, "Should reload only changed file");
+
+    // Verify maps is still intact
+    maps = configRoot->getChild("maps");
+    defaultMap = maps->getString("defaultMap");
+    ASSERT_EQ(defaultMap, "desert", "maps.json should still be 'desert' after isolated reload");
+
+    reporter.addAssertion("reload_isolation", true);
+    std::cout << "✓ TEST 5 PASSED\n";
+
+    // ========================================================================
+    // TEST 6: Persistence (Save/Load)
+    // ========================================================================
+    std::cout << "\n=== TEST 6: Persistence (Save/Load) ===\n";
+
+    auto dataRoot = tree->getDataRoot();
+
+    // Create the data/ structure
+    auto player = std::make_shared<JsonDataNode>("player", nlohmann::json::object());
+    auto stats = std::make_shared<JsonDataNode>("stats", nlohmann::json{
+        {"kills", 42},
+        {"deaths", 3},
+        {"level", 15}
+    });
+    auto inventory = std::make_shared<JsonDataNode>("inventory", nlohmann::json{
+        {"gold", 1000},
+        {"items", nlohmann::json::array({"sword", "shield", "potion"})}
+    });
+
+    player->setChild("stats", stats);
+    player->setChild("inventory", inventory);
+    dataRoot->setChild("player", player);
+
+    auto world = std::make_shared<JsonDataNode>("world", nlohmann::json::object());
+    auto time = std::make_shared<JsonDataNode>("time", nlohmann::json{
+        {"day", 5},
+        {"hour", 14},
+        {"minute", 30}
+    });
+    world->setChild("time", time);
+    dataRoot->setChild("world", world);
+
+    // Save all data
+    tree->saveData();
+
+    // Verify the files were created
+    ASSERT_TRUE(std::filesystem::exists("test_data/data/player/stats.json"),
+                "stats.json should exist");
+    ASSERT_TRUE(std::filesystem::exists("test_data/data/player/inventory.json"),
+                "inventory.json should exist");
+    ASSERT_TRUE(std::filesystem::exists("test_data/data/world/time.json"),
+                "time.json should exist");
+
+    std::cout << "Files saved successfully\n";
+
+    // Create a new tree and load from disk
+    auto tree2 = std::make_shared<JsonDataTree>("test_data");
+    tree2->loadDataDirectory();
+
+    auto dataRoot2 = tree2->getDataRoot();
+    auto 
player2 = dataRoot2->getChild("player");
+    ASSERT_TRUE(player2 != nullptr, "player node should load");
+
+    auto stats2 = player2->getChild("stats");
+    int kills = stats2->getInt("kills");
+    int deaths = stats2->getInt("deaths");
+
+    std::cout << "Loaded: kills=" << kills << ", deaths=" << deaths << "\n";
+    ASSERT_EQ(kills, 42, "kills should be preserved");
+    ASSERT_EQ(deaths, 3, "deaths should be preserved");
+
+    reporter.addAssertion("persistence", true);
+    std::cout << "✓ TEST 6 PASSED\n";
+
+    // ========================================================================
+    // TEST 7: Selective Save
+    // ========================================================================
+    std::cout << "\n=== TEST 7: Selective Save ===\n";
+
+    // Get mtime of inventory.json before
+    auto inventoryPath = std::filesystem::path("test_data/data/player/inventory.json");
+    auto mtimeBefore = std::filesystem::last_write_time(inventoryPath);
+
+    std::this_thread::sleep_for(std::chrono::milliseconds(100));
+
+    // Modify only stats
+    stats->setInt("kills", 100);
+
+    // Save only stats
+    tree->saveNode("data/player/stats");
+
+    // Check inventory.json not modified
+    auto mtimeAfter = std::filesystem::last_write_time(inventoryPath);
+
+    ASSERT_EQ(mtimeBefore, mtimeAfter, "inventory.json should not be modified");
+
+    // Load stats and verify
+    auto tree3 = std::make_shared<JsonDataTree>("test_data");
+    tree3->loadDataDirectory();
+    auto stats3 = tree3->getDataRoot()->getChild("player")->getChild("stats");
+    int newKills = stats3->getInt("kills");
+
+    ASSERT_EQ(newKills, 100, "Selective save should update only stats");
+
+    reporter.addAssertion("selective_save", true);
+    std::cout << "✓ TEST 7 PASSED\n";
+
+    // ========================================================================
+    // TEST 8: Hash System (Data Hash)
+    // ========================================================================
+    std::cout << "\n=== TEST 8: Hash System (Data Hash) ===\n";
+
+    auto testNode = std::make_shared<DataNode>("test",
nlohmann::json{
+        {"value", 42}
+    });
+
+    std::string hash1 = testNode->getDataHash();
+    std::cout << "Hash 1: " << hash1.substr(0, 16) << "...\n";
+
+    // Modify data
+    testNode->setInt("value", 43);
+
+    std::string hash2 = testNode->getDataHash();
+    std::cout << "Hash 2: " << hash2.substr(0, 16) << "...\n";
+
+    ASSERT_TRUE(hash1 != hash2, "Hashes should differ after data change");
+
+    reporter.addAssertion("data_hash", true);
+    std::cout << "✓ TEST 8 PASSED\n";
+
+    // ========================================================================
+    // TEST 9: Hash System (Tree Hash)
+    // ========================================================================
+    std::cout << "\n=== TEST 9: Hash System (Tree Hash) ===\n";
+
+    auto root = std::make_shared<DataNode>("root", nlohmann::json::object());
+    auto child1 = std::make_shared<DataNode>("child1", nlohmann::json{{"data", 1}});
+    auto child2 = std::make_shared<DataNode>("child2", nlohmann::json{{"data", 2}});
+
+    root->setChild("child1", child1);
+    root->setChild("child2", child2);
+
+    std::string treeHash1 = root->getTreeHash();
+    std::cout << "Tree Hash 1: " << treeHash1.substr(0, 16) << "...\n";
+
+    // Modify child1
+    child1->setInt("data", 999);
+
+    std::string treeHash2 = root->getTreeHash();
+    std::cout << "Tree Hash 2: " << treeHash2.substr(0, 16) << "...\n";
+
+    ASSERT_TRUE(treeHash1 != treeHash2, "Tree hash should change when child changes");
+
+    reporter.addAssertion("tree_hash", true);
+    std::cout << "✓ TEST 9 PASSED\n";
+
+    // ========================================================================
+    // TEST 10: Read-Only Enforcement
+    // ========================================================================
+    std::cout << "\n=== TEST 10: Read-Only Enforcement ===\n";
+
+    auto readOnlyNode = configRoot->getChild("gameplay");
+
+    bool exceptionThrown = false;
+    try {
+        auto newChild = std::make_shared<DataNode>("illegal", nlohmann::json{{"bad", true}});
+        readOnlyNode->setChild("illegal", newChild);
+    } catch (const std::runtime_error& e) {
std::cout << "✓ Exception thrown: " << e.what() << "\n";
+        exceptionThrown = true;
+    }
+
+    ASSERT_TRUE(exceptionThrown, "Should throw exception when modifying read-only node");
+
+    reporter.addAssertion("readonly_enforcement", true);
+    std::cout << "✓ TEST 10 PASSED\n";
+
+    // ========================================================================
+    // TEST 11: Type Safety & Defaults
+    // ========================================================================
+    std::cout << "\n=== TEST 11: Type Safety & Defaults ===\n";
+
+    auto typeNode = std::make_shared<DataNode>("types", nlohmann::json{
+        {"armor", 150},
+        {"name", "Tank"},
+        {"active", true},
+        {"speed", 30.5}
+    });
+
+    int armor = typeNode->getInt("armor");
+    ASSERT_EQ(armor, 150, "getInt should return correct value");
+
+    int missing = typeNode->getInt("missing", 100);
+    ASSERT_EQ(missing, 100, "getInt with default should return default");
+
+    std::string name = typeNode->getString("name");
+    ASSERT_EQ(name, "Tank", "getString should return correct value");
+
+    bool active = typeNode->getBool("active");
+    ASSERT_EQ(active, true, "getBool should return correct value");
+
+    bool defaultBool = typeNode->getBool("nothere", false);
+    ASSERT_EQ(defaultBool, false, "getBool with default should return default");
+
+    double speed = typeNode->getDouble("speed");
+    ASSERT_EQ(speed, 30.5, "getDouble should return correct value");
+
+    reporter.addAssertion("type_safety", true);
+    std::cout << "✓ TEST 11 PASSED\n";
+
+    // ========================================================================
+    // TEST 12: Deep Tree Performance
+    // ========================================================================
+    std::cout << "\n=== TEST 12: Deep Tree Performance ===\n";
+
+    auto perfRoot = std::make_shared<DataNode>("perf", nlohmann::json::object());
+
+    // Create 1000 nodes: 10 x 10 x 10
+    int nodeCount = 0;
+    for (int cat = 0; cat < 10; cat++) {
+        auto category = std::make_shared<DataNode>("cat_" + std::to_string(cat),
+            nlohmann::json::object());
+
for (int sub = 0; sub < 10; sub++) {
+            auto subcategory = std::make_shared<DataNode>("sub_" + std::to_string(sub),
+                nlohmann::json::object());
+
+            for (int item = 0; item < 10; item++) {
+                auto itemNode = std::make_shared<DataNode>("item_" + std::to_string(item),
+                    nlohmann::json{
+                        {"id", nodeCount},
+                        {"value", nodeCount * 10}
+                    });
+                subcategory->setChild("item_" + std::to_string(item), itemNode);
+                nodeCount++;
+            }
+
+            category->setChild("sub_" + std::to_string(sub), subcategory);
+        }
+
+        perfRoot->setChild("cat_" + std::to_string(cat), category);
+    }
+
+    std::cout << "Created " << nodeCount << " nodes\n";
+    ASSERT_EQ(nodeCount, 1000, "Should create 1000 nodes");
+
+    // Pattern matching: find all items
+    auto start = std::chrono::high_resolution_clock::now();
+    auto allItems = perfRoot->getChildrenByNameMatch("item_*");
+    auto end = std::chrono::high_resolution_clock::now();
+
+    float patternTime = std::chrono::duration<float, std::milli>(end - start).count();
+    std::cout << "Pattern matching found " << allItems.size() << " items in " << patternTime << "ms\n";
+
+    ASSERT_EQ(allItems.size(), 1000, "Should find all 1000 items");
+    ASSERT_LT(patternTime, 100.0f, "Pattern matching should be < 100ms");
+    reporter.addMetric("pattern_time_ms", patternTime);
+
+    // Query by property
+    start = std::chrono::high_resolution_clock::now();
+    auto queryResults = perfRoot->queryByProperty("value",
+        [](const IDataValue& val) {
+            return val.isNumber() && val.asInt() > 5000;
+        });
+    end = std::chrono::high_resolution_clock::now();
+
+    float queryTime = std::chrono::duration<float, std::milli>(end - start).count();
+    std::cout << "Query found " << queryResults.size() << " results in " << queryTime << "ms\n";
+
+    ASSERT_LT(queryTime, 50.0f, "Query should be < 50ms");
+    reporter.addMetric("query_time_ms", queryTime);
+
+    // Tree hash
+    start = std::chrono::high_resolution_clock::now();
+    std::string treeHash = perfRoot->getTreeHash();
+    end = std::chrono::high_resolution_clock::now();
+
+    float hashTime = std::chrono::duration<float, std::milli>(end -
start).count();
+    std::cout << "Tree hash calculated in " << hashTime << "ms\n";
+
+    ASSERT_LT(hashTime, 200.0f, "Tree hash should be < 200ms");
+    reporter.addMetric("treehash_time_ms", hashTime);
+
+    reporter.addAssertion("performance", true);
+    std::cout << "✓ TEST 12 PASSED\n";
+
+    // ========================================================================
+    // CLEANUP
+    // ========================================================================
+    std::filesystem::remove_all("test_data");
+
+    // ========================================================================
+    // FINAL REPORT
+    // ========================================================================
+
+    metrics.printReport();
+    reporter.printFinalReport();
+
+    return reporter.getExitCode();
+}
+```
+
+---
+
+## 📊 Collected Metrics
+
+| Metric | Description | Threshold |
+|--------|-------------|-----------|
+| **pattern_heavy_count** | Nodes matched by the "*heavy*" pattern | 3 |
+| **reload_callback_count** | Callbacks triggered during reload | 1 |
+| **pattern_time_ms** | Pattern-matching time over 1000 nodes | < 100ms |
+| **query_time_ms** | Property-query time over 1000 nodes | < 50ms |
+| **treehash_time_ms** | Tree-hash computation time over 1000 nodes | < 200ms |
+
+---
+
+## ✅ Success Criteria
+
+### MUST PASS
+1. ✅ Exact navigation works (getChild, getPath)
+2. ✅ Pattern matching finds all matches
+3. ✅ Property queries return correct results
+4. ✅ Hot-reload detects file changes
+5. ✅ Hot-reload callback is triggered
+6. ✅ Hot-reload isolation (modifying one file does not affect the others)
+7. ✅ Save/load persistence preserves data
+8. ✅ Selective save modifies only the targeted node
+9. ✅ Data hash changes when data is modified
+10. ✅ Tree hash changes when children are modified
+11. ✅ Read-only nodes throw an exception when modified
+12. ✅ Typed access with defaults works
+13. ✅ Acceptable performance on 1000 nodes
+
+### NICE TO HAVE
+1. ✅ Pattern matching < 50ms (optimal)
+2. 
✅ Query < 25ms (optimal)
+3. ✅ Tree hash < 100ms (optimal)
+
+---
+
+## 🐛 Expected Error Cases
+
+| Error | Cause | Action |
+|-------|-------|--------|
+| Pattern does not match | Incorrect regex | FAIL - fix wildcard conversion |
+| Empty query | Predicate too strict | WARN - check the predicate logic |
+| Hot-reload not detected | File-watch bug | FAIL - fix checkForChanges() |
+| Callback not invoked | onTreeReloaded bug | FAIL - fix callback system |
+| Corrupted persisted data | Malformed JSON | FAIL - add validation |
+| Identical hashes | Hash-calculation bug | FAIL - fix getDataHash() |
+| Read-only not enforced | Missing isReadOnly check | FAIL - add check |
+| Type-mismatch crash | No default handling | FAIL - add try/catch |
+| Performance over thresholds | O(nÂČ) algorithm | FAIL - optimize |
+
+---
+
+## 📝 Expected Output
+
+```
+================================================================================
+TEST: DataNode Integration Test
+================================================================================
+
+=== TEST 1: Tree Navigation & Exact Matching ===
+Path: config/units/tanks/heavy_mk1
+✓ TEST 1 PASSED
+
+=== TEST 2: Pattern Matching ===
+Pattern '*heavy*' matched: 3 nodes
+  - heavy_mk1
+  - heavy_mk2
+  - heavy_trooper
+Pattern '*_mk*' matched: 2 nodes
+✓ TEST 2 PASSED
+
+=== TEST 3: Property-Based Queries ===
+Units with armor >= 100: 3
+  - heavy_mk1 (armor=150)
+  - heavy_mk2 (armor=180)
+  - heavy_trooper (armor=120)
+Units with speed > 50: 1
+✓ TEST 3 PASSED
+
+=== TEST 4: Hot-Reload System ===
+Initial difficulty: normal
+Has changes: YES
+Reloaded: YES
+  → Reload callback triggered (count=1)
+After reload - difficulty: hard, maxPlayers: 8
+✓ TEST 4 PASSED
+
+=== TEST 5: Hot-Reload Isolation ===
+✓ TEST 5 PASSED
+
+=== TEST 6: Persistence (Save/Load) ===
+Files saved successfully
+Loaded: kills=42, deaths=3
+✓ TEST 6 PASSED
+
+=== TEST 7: Selective Save ===
+✓ TEST 7 PASSED
+
+=== TEST 8: Hash System (Data Hash) ===
+Hash 1: 
5d41402abc4b2a76...
+Hash 2: 7c6a180b36896a0e...
+✓ TEST 8 PASSED
+
+=== TEST 9: Hash System (Tree Hash) ===
+Tree Hash 1: a1b2c3d4e5f60718...
+Tree Hash 2: 9f8e7d6c5b4a3021...
+✓ TEST 9 PASSED
+
+=== TEST 10: Read-Only Enforcement ===
+✓ Exception thrown: Cannot modify read-only node 'gameplay'
+✓ TEST 10 PASSED
+
+=== TEST 11: Type Safety & Defaults ===
+✓ TEST 11 PASSED
+
+=== TEST 12: Deep Tree Performance ===
+Created 1000 nodes
+Pattern matching found 1000 items in 45.3ms
+Query found 500 results in 23.7ms
+Tree hash calculated in 134.2ms
+✓ TEST 12 PASSED
+
+================================================================================
+METRICS
+================================================================================
+  Pattern heavy count: 3
+  Reload callback count: 1
+  Pattern time: 45.3ms (threshold: < 100ms) ✓
+  Query time: 23.7ms (threshold: < 50ms) ✓
+  Tree hash time: 134.2ms (threshold: < 200ms) ✓
+
+================================================================================
+ASSERTIONS
+================================================================================
+  ✓ navigation_exact
+  ✓ pattern_matching
+  ✓ property_queries
+  ✓ hot_reload
+  ✓ reload_isolation
+  ✓ persistence
+  ✓ selective_save
+  ✓ data_hash
+  ✓ tree_hash
+  ✓ readonly_enforcement
+  ✓ type_safety
+  ✓ performance
+
+Result: ✅ PASSED (12/12 tests)
+
+================================================================================
+```
+
+---
+
+## 📅 Schedule
+
+**Day 1 (4h):**
+- Set up JsonDataTree with the test directory
+- Implement tests 1-6 (navigation, patterns, queries, hot-reload, persistence)
+
+**Day 2 (3h):**
+- Implement tests 7-12 (selective save, hashes, readonly, types, performance)
+- Debug + validation
+
+---
+
+**Next step**: `scenario_13_cross_system.md`
diff --git a/planTI/scenario_13_cross_system.md b/planTI/scenario_13_cross_system.md
new file mode 100644
index 0000000..acbbebe
--- /dev/null
+++ b/planTI/scenario_13_cross_system.md
@@ 
-0,0 +1,1022 @@
+# Scenario 13: Cross-System Integration (IO + DataNode)
+
+**Priority**: ⭐⭐ SHOULD HAVE
+**Phase**: 2 (SHOULD HAVE)
+**Estimated duration**: ~6 minutes
+**Implementation effort**: ~6-8 hours
+
+---
+
+## 🎯 Objective
+
+Validate that the IO (IntraIO) and DataNode (IDataTree) systems work correctly **together** in realistic use cases:
+- Config hot-reload → republish via IO
+- State persistence via DataNode + message routing via IO
+- Multi-module coordination (module A publishes state → module B reads it via DataNode)
+- Concurrent access (IO threads + DataNode threads)
+- Integration with module hot-reload
+- Performance of the complete system
+
+**Note**: This test validates the full engine integration, not the components in isolation.
+
+---
+
+## 📋 Description
+
+### Initial Setup
+1. Create an IDataTree with the complete structure:
+   - **config/** - Module configuration (units, gameplay, network)
+   - **data/** - State persistence (player, world, economy)
+   - **runtime/** - Temporary state (fps, metrics, active_entities)
+
+2. Create 4 modules using IO + DataNode:
+   - **ConfigWatcherModule** - Watches config/, publishes changes via IO
+   - **PlayerModule** - Manages player state, persists it under data/, publishes events
+   - **EconomyModule** - Subscribes to player events, updates economy data/
+   - **MetricsModule** - Collects metrics into runtime/, publishes stats
+
+3. Total: 4 modules communicating via IO and sharing data via DataNode
+
+### Test Sequence
+
+#### Test 1: Config Hot-Reload → IO Broadcast (60s)
+1. ConfigWatcherModule subscribes to hot-reload callbacks
+2. Modify `config/gameplay.json` (change the difficulty)
+3. When the callback fires:
+   - ConfigWatcherModule publishes "config:gameplay:changed" via IO
+   - PlayerModule subscribes and receives the notification
+   - PlayerModule reads the new config via DataNode
+   - PlayerModule adjusts its behavior
+4. 
Verify:
+   - The callback → publish → subscribe → read chain works
+   - The new config is applied in PlayerModule
+   - Total latency < 100ms
+
+#### Test 2: State Persistence + Event Publishing (60s)
+1. PlayerModule creates its state:
+   - `data/player/profile` - {name, level, gold}
+   - `data/player/inventory` - {items[]}
+2. PlayerModule saves via `tree->saveNode()`
+3. PlayerModule publishes "player:level_up" via IO
+4. EconomyModule subscribes to "player:*"
+5. EconomyModule receives the event and reads the player data via DataNode
+6. EconomyModule computes the bonus and updates `data/economy/bonuses`
+7. EconomyModule saves via `tree->saveNode()`
+8. Verify:
+   - The save → publish → subscribe → read → save chain works
+   - Data persistence is correct
+   - No race conditions
+
+#### Test 3: Multi-Module State Synchronization (90s)
+1. PlayerModule sets `data/player/gold` = 1000
+2. PlayerModule publishes "player:gold:updated" with {gold: 1000}
+3. EconomyModule receives the event via IO
+4. EconomyModule reads `data/player/gold` via DataNode
+5. Verify consistency:
+   - The value in the IO message = the value in the DataNode
+   - No desynchronization
+   - Event order is preserved
+6. Repeat 100 times with rapid updates
+7. Verify final consistency
+
+#### Test 4: Runtime Metrics Collection (60s)
+1. MetricsModule collects metrics every 100ms:
+   - `runtime/fps` - Current FPS
+   - `runtime/memory` - Memory usage
+   - `runtime/message_count` - IO message count
+2. MetricsModule publishes "metrics:snapshot" every second
+3. ConfigWatcherModule subscribes and logs the metrics
+4. Verify:
+   - Runtime data is not persisted (no files written)
+   - Metrics publishing works
+   - Low-frequency batching reduces traffic (1 msg/s instead of 10 msg/s)
+
+#### Test 5: Concurrent Access (IO + DataNode) (90s)
+1. 
Launch 4 threads:
+   - Thread 1: PlayerModule publishes events at 100 Hz
+   - Thread 2: EconomyModule reads data/ at 50 Hz
+   - Thread 3: MetricsModule writes runtime/ at 100 Hz
+   - Thread 4: ConfigWatcherModule reads config/ at 10 Hz
+2. Run for 60 seconds
+3. Verify:
+   - No crashes
+   - No data corruption
+   - No deadlocks
+   - Acceptable performance (< 10% overhead)
+
+#### Test 6: Hot-Reload Module + Preserve State (90s)
+1. PlayerModule has active state:
+   - 50 entities in `runtime/entities`
+   - Gold = 5000 in `data/player/gold`
+   - An active quest in `runtime/quest`
+2. Trigger a hot-reload of PlayerModule:
+   - `getState()` extracts everything (data/ + runtime/)
+   - Recompile the module
+   - `setState()` restores it
+3. During the reload:
+   - EconomyModule keeps publishing via IO
+   - Messages accumulate in PlayerModule's queue
+4. After the reload:
+   - PlayerModule pulls the accumulated messages
+   - Verifies the state was preserved (50 entities, 5000 gold, quest)
+   - Continues processing normally
+5. Verify:
+   - The complete state is preserved (DataNode + runtime)
+   - No messages are lost (IO queue)
+   - No corruption
+
+#### Test 7: Config Change Cascades (60s)
+1. Modify `config/gameplay.json` → difficulty = "hard"
+2. ConfigWatcherModule detects it → publishes "config:gameplay:changed"
+3. PlayerModule receives it → reloads the config → adjusts its HP multiplier
+4. PlayerModule publishes "player:config:updated"
+5. EconomyModule receives it → reloads the config → adjusts prices
+6. EconomyModule publishes "economy:config:updated"
+7. MetricsModule receives it → logs the cascade
+8. Verify:
+   - The full cascade completes in < 500ms
+   - All modules are synchronized
+   - Event order is correct
+
+#### Test 8: Large State + High-Frequency IO (60s)
+1. Create a large DataNode tree (1000 nodes)
+2. Publish 10k messages/s via IO
+3. Modules read the DataNode during the IO flood
+4. Measure:
+   - IO latency: < 10ms p99
+   - DataNode read latency: < 5ms p99
+   - Memory growth: < 20MB
+   - CPU usage: < 80%
+5. 
Verify:
+   - Both systems remain performant
+   - No mutual degradation
+
+---
+
+## đŸ—ïž Implementation
+
+### ConfigWatcherModule Structure
+
+```cpp
+// ConfigWatcherModule.h
+class ConfigWatcherModule : public IModule {
+public:
+    void initialize(std::shared_ptr<IDataNode> config) override;
+    void process(float deltaTime) override;
+    std::shared_ptr<IDataNode> getState() const override;
+    void setState(std::shared_ptr<IDataNode> state) override;
+    bool isIdle() const override { return true; }
+
+private:
+    std::shared_ptr<IIO> io;
+    std::shared_ptr<IDataTree> tree;
+
+    void onConfigReloaded();
+    void publishConfigChange(const std::string& configName);
+};
+```
+
+### PlayerModule Structure
+
+```cpp
+// PlayerModule.h
+class PlayerModule : public IModule {
+public:
+    void initialize(std::shared_ptr<IDataNode> config) override;
+    void process(float deltaTime) override;
+    std::shared_ptr<IDataNode> getState() const override;
+    void setState(std::shared_ptr<IDataNode> state) override;
+    bool isIdle() const override { return true; }
+
+private:
+    std::shared_ptr<IIO> io;
+    std::shared_ptr<IDataTree> tree;
+
+    int gold = 0;
+    int level = 1;
+    std::vector<std::string> inventory;
+
+    void handleConfigChange();
+    void savePlayerData();
+    void publishLevelUp();
+};
+```
+
+### Main Test
+
+```cpp
+// test_13_cross_system.cpp
+#include "helpers/TestMetrics.h"
+#include "helpers/TestAssertions.h"
+#include "helpers/TestReporter.h"
+#include <filesystem>
+#include <fstream>
+#include <thread>
+#include <atomic>
+
+int main() {
+    TestReporter reporter("Cross-System Integration");
+    TestMetrics metrics;
+
+    // === SETUP ===
+    std::filesystem::create_directories("test_cross/config");
+    std::filesystem::create_directories("test_cross/data");
+
+    auto tree = std::make_shared<JsonDataTree>("test_cross");
+
+    DebugEngine engine;
+    engine.setDataTree(tree);
+
+    // Load the modules
+    engine.loadModule("ConfigWatcherModule", "build/modules/libConfigWatcherModule.so");
+    engine.loadModule("PlayerModule", "build/modules/libPlayerModule.so");
+    engine.loadModule("EconomyModule", "build/modules/libEconomyModule.so");
+    engine.loadModule("MetricsModule",
"build/modules/libMetricsModule.so");
+
+    auto config = createJsonConfig({
+        {"transport", "intra"},
+        {"instanceId", "test_cross"}
+    });
+
+    engine.initializeModule("ConfigWatcherModule", config);
+    engine.initializeModule("PlayerModule", config);
+    engine.initializeModule("EconomyModule", config);
+    engine.initializeModule("MetricsModule", config);
+
+    // ========================================================================
+    // TEST 1: Config Hot-Reload → IO Broadcast
+    // ========================================================================
+    std::cout << "\n=== TEST 1: Config Hot-Reload → IO Broadcast ===\n";
+
+    // Create the initial config
+    nlohmann::json gameplayConfig = {
+        {"difficulty", "normal"},
+        {"hpMultiplier", 1.0}
+    };
+
+    std::ofstream configFile("test_cross/config/gameplay.json");
+    configFile << gameplayConfig.dump(2);
+    configFile.close();
+
+    tree->loadConfigFile("gameplay.json");
+
+    // Setup reload callback
+    std::atomic<int> configChangedEvents{0};
+    auto playerIO = engine.getModuleIO("PlayerModule");
+
+    playerIO->subscribe("config:gameplay:changed", {});
+
+    // ConfigWatcherModule setup callback
+    tree->onTreeReloaded([&]() {
+        std::cout << "  → Config reloaded, publishing event...\n";
+        auto watcherIO = engine.getModuleIO("ConfigWatcherModule");
+        auto data = std::make_unique<JsonData>(nlohmann::json{
+            {"config", "gameplay"},
+            {"timestamp", std::time(nullptr)}
+        });
+        watcherIO->publish("config:gameplay:changed", std::move(data));
+    });
+
+    // Modify the config
+    std::this_thread::sleep_for(std::chrono::milliseconds(100));
+    gameplayConfig["difficulty"] = "hard";
+    gameplayConfig["hpMultiplier"] = 1.5;
+
+    std::ofstream configFile2("test_cross/config/gameplay.json");
+    configFile2 << gameplayConfig.dump(2);
+    configFile2.close();
+
+    auto reloadStart = std::chrono::high_resolution_clock::now();
+
+    // Trigger reload
+    tree->reloadIfChanged();
+
+    // Process to let the IO routing run
+    engine.update(1.0f/60.0f);
+
+    // PlayerModule checks for the message
if (playerIO->hasMessages() > 0) {
+        auto msg = playerIO->pullMessage();
+        configChangedEvents++;
+
+        // PlayerModule reads the new config
+        auto gameplay = tree->getConfigRoot()->getChild("gameplay");
+        std::string difficulty = gameplay->getString("difficulty");
+        double hpMult = gameplay->getDouble("hpMultiplier");
+
+        std::cout << "  PlayerModule received config change: difficulty=" << difficulty
+                  << ", hpMult=" << hpMult << "\n";
+
+        ASSERT_EQ(difficulty, "hard", "Difficulty should be updated");
+        ASSERT_EQ(hpMult, 1.5, "HP multiplier should be updated");
+    }
+
+    auto reloadEnd = std::chrono::high_resolution_clock::now();
+    float reloadLatency = std::chrono::duration<float, std::milli>(reloadEnd - reloadStart).count();
+
+    std::cout << "Total latency (reload + publish + subscribe + read): " << reloadLatency << "ms\n";
+    ASSERT_LT(reloadLatency, 100.0f, "Total latency should be < 100ms");
+    ASSERT_EQ(configChangedEvents, 1, "Should receive exactly 1 config change event");
+
+    reporter.addMetric("config_reload_latency_ms", reloadLatency);
+    reporter.addAssertion("config_hotreload_chain", true);
+    std::cout << "✓ TEST 1 PASSED\n";
+
+    // ========================================================================
+    // TEST 2: State Persistence + Event Publishing
+    // ========================================================================
+    std::cout << "\n=== TEST 2: State Persistence + Event Publishing ===\n";
+
+    auto dataRoot = tree->getDataRoot();
+
+    // PlayerModule creates its state
+    auto player = std::make_shared<DataNode>("player", nlohmann::json::object());
+    auto profile = std::make_shared<DataNode>("profile", nlohmann::json{
+        {"name", "TestPlayer"},
+        {"level", 5},
+        {"gold", 1000}
+    });
+    player->setChild("profile", profile);
+    dataRoot->setChild("player", player);
+
+    // Save
+    tree->saveNode("data/player");
+
+    // Verify file
+    ASSERT_TRUE(std::filesystem::exists("test_cross/data/player/profile.json"),
+        "Profile should be saved");
+
+    // PlayerModule publishes a level-up
+    auto levelUpData =
std::make_unique<JsonData>(nlohmann::json{
+        {"event", "level_up"},
+        {"newLevel", 6},
+        {"goldBonus", 500}
+    });
+    playerIO->publish("player:level_up", std::move(levelUpData));
+
+    // EconomyModule subscribes
+    auto economyIO = engine.getModuleIO("EconomyModule");
+    economyIO->subscribe("player:*", {});
+
+    engine.update(1.0f/60.0f);
+
+    // EconomyModule receives and processes the event
+    if (economyIO->hasMessages() > 0) {
+        auto msg = economyIO->pullMessage();
+        std::cout << "  EconomyModule received: " << msg.topic << "\n";
+
+        // EconomyModule reads the player data
+        auto playerData = tree->getDataRoot()->getChild("player")->getChild("profile");
+        int gold = playerData->getInt("gold");
+        std::cout << "  Player gold: " << gold << "\n";
+
+        // EconomyModule computes the bonus
+        int goldBonus = 500;
+        int newGold = gold + goldBonus;
+
+        // Update data
+        playerData->setInt("gold", newGold);
+
+        // Create economy bonuses
+        auto economy = std::make_shared<DataNode>("economy", nlohmann::json::object());
+        auto bonuses = std::make_shared<DataNode>("bonuses", nlohmann::json{
+            {"levelUpBonus", goldBonus},
+            {"appliedAt", std::time(nullptr)}
+        });
+        economy->setChild("bonuses", bonuses);
+        dataRoot->setChild("economy", economy);
+
+        // Save economy data
+        tree->saveNode("data/economy");
+
+        std::cout << "  EconomyModule updated bonuses and saved\n";
+    }
+
+    // Verify full chain
+    ASSERT_TRUE(std::filesystem::exists("test_cross/data/economy/bonuses.json"),
+        "Economy bonuses should be saved");
+
+    reporter.addAssertion("state_persistence_chain", true);
+    std::cout << "✓ TEST 2 PASSED\n";
+
+    // ========================================================================
+    // TEST 3: Multi-Module State Synchronization
+    // ========================================================================
+    std::cout << "\n=== TEST 3: Multi-Module State Synchronization ===\n";
+
+    int syncErrors = 0;
+
+    for (int i = 0; i < 100; i++) {
+        // PlayerModule updates the gold value
+        int goldValue = 1000 + i * 10;
+        auto playerProfile =
tree->getDataRoot()->getChild("player")->getChild("profile");
+        playerProfile->setInt("gold", goldValue);
+
+        // PlayerModule publishes an event carrying the value
+        auto goldUpdate = std::make_unique<JsonData>(nlohmann::json{
+            {"event", "gold_updated"},
+            {"gold", goldValue}
+        });
+        playerIO->publish("player:gold:updated", std::move(goldUpdate));
+
+        engine.update(1.0f/60.0f);
+
+        // EconomyModule receives the event and checks consistency
+        if (economyIO->hasMessages() > 0) {
+            auto msg = economyIO->pullMessage();
+            auto* msgData = dynamic_cast<JsonData*>(msg.data.get());
+            int msgGold = msgData->getJsonData()["gold"];
+
+            // Read the DataNode
+            auto playerData = tree->getDataRoot()->getChild("player")->getChild("profile");
+            int dataGold = playerData->getInt("gold");
+
+            if (msgGold != dataGold) {
+                std::cerr << "  SYNC ERROR: msg=" << msgGold << " data=" << dataGold << "\n";
+                syncErrors++;
+            }
+        }
+    }
+
+    std::cout << "Synchronization errors: " << syncErrors << " / 100\n";
+    ASSERT_EQ(syncErrors, 0, "Should have zero synchronization errors");
+
+    reporter.addMetric("sync_errors", syncErrors);
+    reporter.addAssertion("state_synchronization", syncErrors == 0);
+    std::cout << "✓ TEST 3 PASSED\n";
+
+    // ========================================================================
+    // TEST 4: Runtime Metrics Collection
+    // ========================================================================
+    std::cout << "\n=== TEST 4: Runtime Metrics Collection ===\n";
+
+    auto runtimeRoot = tree->getRuntimeRoot();
+    auto metricsIO = engine.getModuleIO("MetricsModule");
+
+    // MetricsModule publishes metrics with low-frequency batching
+    IIO::SubscriptionConfig metricsConfig;
+    metricsConfig.replaceable = true;
+    metricsConfig.batchInterval = 1000; // 1 second
+
+    playerIO->subscribeLowFreq("metrics:*", metricsConfig);
+
+    // Simulate 3 seconds of metrics collection
+    for (int sec = 0; sec < 3; sec++) {
+        for (int i = 0; i < 10; i++) {
+            // MetricsModule collects metrics
+            auto metrics = std::make_shared<DataNode>("metrics", nlohmann::json{
{"fps", 60.0},
+                {"memory", 125000000 + i * 1000},
+                {"messageCount", i * 100}
+            });
+            runtimeRoot->setChild("metrics", metrics);
+
+            // Publish a snapshot
+            auto snapshot = std::make_unique<JsonData>(nlohmann::json{
+                {"fps", 60.0},
+                {"memory", 125000000 + i * 1000},
+                {"timestamp", std::time(nullptr)}
+            });
+            metricsIO->publish("metrics:snapshot", std::move(snapshot));
+
+            std::this_thread::sleep_for(std::chrono::milliseconds(100));
+            engine.update(1.0f/60.0f);
+        }
+    }
+
+    // Verify batching
+    int snapshotsReceived = 0;
+    while (playerIO->hasMessages() > 0) {
+        playerIO->pullMessage();
+        snapshotsReceived++;
+    }
+
+    std::cout << "Snapshots received: " << snapshotsReceived << " (expected ~3 due to batching)\n";
+    ASSERT_TRUE(snapshotsReceived >= 2 && snapshotsReceived <= 4,
+        "Should receive ~3 batched snapshots");
+
+    // Verify that runtime data is not persisted
+    ASSERT_FALSE(std::filesystem::exists("test_cross/runtime"),
+        "Runtime data should not be persisted");
+
+    reporter.addMetric("batched_snapshots", snapshotsReceived);
+    reporter.addAssertion("runtime_metrics", true);
+    std::cout << "✓ TEST 4 PASSED\n";
+
+    // ========================================================================
+    // TEST 5: Concurrent Access (IO + DataNode)
+    // ========================================================================
+    std::cout << "\n=== TEST 5: Concurrent Access ===\n";
+
+    std::atomic<bool> running{true};
+    std::atomic<int> publishCount{0};
+    std::atomic<int> readCount{0};
+    std::atomic<int> writeCount{0};
+    std::atomic<int> errors{0};
+
+    // Thread 1: PlayerModule publishes events
+    std::thread pubThread([&]() {
+        while (running) {
+            try {
+                auto data = std::make_unique<JsonData>(nlohmann::json{{"id", publishCount++}});
+                playerIO->publish("concurrent:test", std::move(data));
+                std::this_thread::sleep_for(std::chrono::milliseconds(10));
+            } catch (...)
{
+                errors++;
+            }
+        }
+    });
+
+    // Thread 2: EconomyModule reads data/
+    std::thread readThread([&]() {
+        while (running) {
+            try {
+                auto playerData = tree->getDataRoot()->getChild("player");
+                if (playerData) {
+                    auto profile = playerData->getChild("profile");
+                    if (profile) {
+                        int gold = profile->getInt("gold", 0);
+                        readCount++;
+                    }
+                }
+                std::this_thread::sleep_for(std::chrono::milliseconds(20));
+            } catch (...) {
+                errors++;
+            }
+        }
+    });
+
+    // Thread 3: MetricsModule writes runtime/
+    std::thread writeThread([&]() {
+        while (running) {
+            try {
+                auto metrics = std::make_shared<DataNode>("metrics", nlohmann::json{
+                    {"counter", writeCount++}
+                });
+                runtimeRoot->setChild("metrics", metrics);
+                std::this_thread::sleep_for(std::chrono::milliseconds(10));
+            } catch (...) {
+                errors++;
+            }
+        }
+    });
+
+    // Thread 4: ConfigWatcherModule reads config/
+    std::thread configThread([&]() {
+        while (running) {
+            try {
+                auto gameplay = tree->getConfigRoot()->getChild("gameplay");
+                if (gameplay) {
+                    std::string diff = gameplay->getString("difficulty", "normal");
+                }
+                std::this_thread::sleep_for(std::chrono::milliseconds(100));
+            } catch (...)
{ + errors++; + } + } + }); + + // Run for 5 seconds + auto concurrentStart = std::chrono::high_resolution_clock::now(); + std::this_thread::sleep_for(std::chrono::seconds(5)); + running = false; + auto concurrentEnd = std::chrono::high_resolution_clock::now(); + + pubThread.join(); + readThread.join(); + writeThread.join(); + configThread.join(); + + float duration = std::chrono::duration(concurrentEnd - concurrentStart).count(); + + std::cout << "Concurrent test ran for " << duration << "s\n"; + std::cout << " Publishes: " << publishCount << "\n"; + std::cout << " Reads: " << readCount << "\n"; + std::cout << " Writes: " << writeCount << "\n"; + std::cout << " Errors: " << errors << "\n"; + + ASSERT_EQ(errors, 0, "Should have zero errors during concurrent access"); + ASSERT_GT(publishCount, 0, "Should have published messages"); + ASSERT_GT(readCount, 0, "Should have read data"); + ASSERT_GT(writeCount, 0, "Should have written data"); + + reporter.addMetric("concurrent_publishes", publishCount); + reporter.addMetric("concurrent_reads", readCount); + reporter.addMetric("concurrent_writes", writeCount); + reporter.addMetric("concurrent_errors", errors); + reporter.addAssertion("concurrent_access", errors == 0); + std::cout << "✓ TEST 5 PASSED\n"; + + // ======================================================================== + // TEST 6: Hot-Reload Module + Preserve State + // ======================================================================== + std::cout << "\n=== TEST 6: Hot-Reload Module + Preserve State ===\n"; + + // PlayerModule crĂ©e state complexe + auto entities = std::make_shared("entities", nlohmann::json::array()); + for (int i = 0; i < 50; i++) { + entities->getJsonData().push_back({{"id", i}, {"hp", 100}}); + } + runtimeRoot->setChild("entities", entities); + + auto playerGold = tree->getDataRoot()->getChild("player")->getChild("profile"); + playerGold->setInt("gold", 5000); + tree->saveNode("data/player/profile"); + + auto quest = 
std::make_shared("quest", nlohmann::json{ + {"active", true}, + {"questId", 42} + }); + runtimeRoot->setChild("quest", quest); + + std::cout << "State before reload: 50 entities, 5000 gold, quest #42 active\n"; + + // EconomyModule publie messages pendant reload + std::thread spamThread([&]() { + for (int i = 0; i < 100; i++) { + auto data = std::make_unique(nlohmann::json{{"spam", i}}); + economyIO->publish("player:spam", std::move(data)); + std::this_thread::sleep_for(std::chrono::milliseconds(10)); + } + }); + + // Trigger hot-reload de PlayerModule + std::this_thread::sleep_for(std::chrono::milliseconds(200)); + + auto stateBefore = engine.getModuleState("PlayerModule"); + + modifySourceFile("tests/modules/PlayerModule.cpp", "v1.0", "v2.0"); + system("cmake --build build --target PlayerModule 2>&1 > /dev/null"); + + engine.reloadModule("PlayerModule"); + + spamThread.join(); + + // VĂ©rifier state aprĂšs reload + auto stateAfter = engine.getModuleState("PlayerModule"); + + auto entitiesAfter = runtimeRoot->getChild("entities"); + int entityCount = entitiesAfter->getJsonData().size(); + std::cout << "Entities after reload: " << entityCount << "\n"; + ASSERT_EQ(entityCount, 50, "Should preserve 50 entities"); + + auto goldAfter = tree->getDataRoot()->getChild("player")->getChild("profile"); + int goldValue = goldAfter->getInt("gold"); + std::cout << "Gold after reload: " << goldValue << "\n"; + ASSERT_EQ(goldValue, 5000, "Should preserve 5000 gold"); + + auto questAfter = runtimeRoot->getChild("quest"); + bool questActive = questAfter->getBool("active"); + int questId = questAfter->getInt("questId"); + std::cout << "Quest after reload: active=" << questActive << ", id=" << questId << "\n"; + ASSERT_EQ(questActive, true, "Quest should still be active"); + ASSERT_EQ(questId, 42, "Quest ID should be preserved"); + + // VĂ©rifier messages pas perdus + int spamReceived = 0; + while (playerIO->hasMessages() > 0) { + playerIO->pullMessage(); + spamReceived++; + } + 
std::cout << "Spam messages received after reload: " << spamReceived << "\n"; + ASSERT_GT(spamReceived, 0, "Should receive queued messages after reload"); + + reporter.addAssertion("hotreload_preserve_state", true); + reporter.addMetric("spam_messages_queued", spamReceived); + std::cout << "✓ TEST 6 PASSED\n"; + + // ======================================================================== + // TEST 7: Config Change Cascades + // ======================================================================== + std::cout << "\n=== TEST 7: Config Change Cascades ===\n"; + + // Subscribe chain + playerIO->subscribe("config:*", {}); + economyIO->subscribe("player:*", {}); + metricsIO->subscribe("economy:*", {}); + + auto cascadeStart = std::chrono::high_resolution_clock::now(); + + // 1. Modifier config + gameplayConfig["difficulty"] = "extreme"; + std::ofstream configFile3("test_cross/config/gameplay.json"); + configFile3 << gameplayConfig.dump(2); + configFile3.close(); + + // 2. Trigger reload + tree->reloadIfChanged(); + auto watcherIO = engine.getModuleIO("ConfigWatcherModule"); + watcherIO->publish("config:gameplay:changed", std::make_unique(nlohmann::json{{"config", "gameplay"}})); + + engine.update(1.0f/60.0f); + + // 3. PlayerModule reçoit et publie + if (playerIO->hasMessages() > 0) { + playerIO->pullMessage(); + playerIO->publish("player:config:updated", std::make_unique(nlohmann::json{{"hpMult", 2.0}})); + } + + engine.update(1.0f/60.0f); + + // 4. EconomyModule reçoit et publie + if (economyIO->hasMessages() > 0) { + economyIO->pullMessage(); + economyIO->publish("economy:config:updated", std::make_unique(nlohmann::json{{"pricesMult", 1.5}})); + } + + engine.update(1.0f/60.0f); + + // 5. 
MetricsModule reçoit et log + if (metricsIO->hasMessages() > 0) { + metricsIO->pullMessage(); + std::cout << " → Cascade complete!\n"; + } + + auto cascadeEnd = std::chrono::high_resolution_clock::now(); + float cascadeTime = std::chrono::duration(cascadeEnd - cascadeStart).count(); + + std::cout << "Cascade latency: " << cascadeTime << "ms\n"; + ASSERT_LT(cascadeTime, 500.0f, "Cascade should complete in < 500ms"); + + reporter.addMetric("cascade_latency_ms", cascadeTime); + reporter.addAssertion("config_cascade", true); + std::cout << "✓ TEST 7 PASSED\n"; + + // ======================================================================== + // TEST 8: Large State + High-Frequency IO + // ======================================================================== + std::cout << "\n=== TEST 8: Large State + High-Frequency IO ===\n"; + + // CrĂ©er large tree (1000 nodes) + auto largeRoot = tree->getDataRoot(); + for (int i = 0; i < 100; i++) { + auto category = std::make_shared("cat_" + std::to_string(i), nlohmann::json::object()); + for (int j = 0; j < 10; j++) { + auto item = std::make_shared("item_" + std::to_string(j), nlohmann::json{ + {"id", i * 10 + j}, + {"value", (i * 10 + j) * 100} + }); + category->setChild("item_" + std::to_string(j), item); + } + largeRoot->setChild("cat_" + std::to_string(i), category); + } + + std::cout << "Created large DataNode tree (1000 nodes)\n"; + + // High-frequency IO + concurrent DataNode reads + std::atomic ioPublished{0}; + std::atomic dataReads{0}; + std::vector ioLatencies; + std::vector dataLatencies; + + running = true; + + std::thread ioThread([&]() { + while (running) { + auto start = std::chrono::high_resolution_clock::now(); + auto data = std::make_unique(nlohmann::json{{"id", ioPublished++}}); + playerIO->publish("stress:test", std::move(data)); + auto end = std::chrono::high_resolution_clock::now(); + + float latency = std::chrono::duration(end - start).count(); + ioLatencies.push_back(latency); + + // Target: 10k msg/s = 
0.1ms interval + std::this_thread::sleep_for(std::chrono::microseconds(100)); + } + }); + + std::thread dataThread([&]() { + while (running) { + auto start = std::chrono::high_resolution_clock::now(); + auto cat = largeRoot->getChild("cat_50"); + if (cat) { + auto item = cat->getChild("item_5"); + if (item) { + int value = item->getInt("value", 0); + dataReads++; + } + } + auto end = std::chrono::high_resolution_clock::now(); + + float latency = std::chrono::duration(end - start).count(); + dataLatencies.push_back(latency); + + std::this_thread::sleep_for(std::chrono::microseconds(500)); + } + }); + + auto memBefore = getCurrentMemoryUsage(); + + std::this_thread::sleep_for(std::chrono::seconds(5)); + running = false; + + ioThread.join(); + dataThread.join(); + + auto memAfter = getCurrentMemoryUsage(); + long memGrowth = static_cast(memAfter) - static_cast(memBefore); + + // Calculate p99 latencies + std::sort(ioLatencies.begin(), ioLatencies.end()); + std::sort(dataLatencies.begin(), dataLatencies.end()); + + float ioP99 = ioLatencies[static_cast(ioLatencies.size() * 0.99)]; + float dataP99 = dataLatencies[static_cast(dataLatencies.size() * 0.99)]; + + std::cout << "Performance results:\n"; + std::cout << " IO published: " << ioPublished << " messages\n"; + std::cout << " IO p99 latency: " << ioP99 << "ms\n"; + std::cout << " DataNode reads: " << dataReads << "\n"; + std::cout << " DataNode p99 latency: " << dataP99 << "ms\n"; + std::cout << " Memory growth: " << (memGrowth / 1024.0 / 1024.0) << "MB\n"; + + ASSERT_LT(ioP99, 10.0f, "IO p99 latency should be < 10ms"); + ASSERT_LT(dataP99, 5.0f, "DataNode p99 latency should be < 5ms"); + ASSERT_LT(memGrowth, 20 * 1024 * 1024, "Memory growth should be < 20MB"); + + reporter.addMetric("io_p99_latency_ms", ioP99); + reporter.addMetric("datanode_p99_latency_ms", dataP99); + reporter.addMetric("memory_growth_mb", memGrowth / 1024.0 / 1024.0); + reporter.addAssertion("performance_under_load", true); + std::cout << "✓ TEST 
8 PASSED\n"; + + // ======================================================================== + // CLEANUP + // ======================================================================== + std::filesystem::remove_all("test_cross"); + + // ======================================================================== + // RAPPORT FINAL + // ======================================================================== + + metrics.printReport(); + reporter.printFinalReport(); + + return reporter.getExitCode(); +} +``` + +--- + +## 📊 MĂ©triques CollectĂ©es + +| MĂ©trique | Description | Seuil | +|----------|-------------|-------| +| **config_reload_latency_ms** | Latence reload→publish→subscribe→read | < 100ms | +| **sync_errors** | Erreurs synchronisation IO/DataNode | 0 | +| **batched_snapshots** | Snapshots reçus avec batching | 2-4 | +| **concurrent_publishes** | Messages publiĂ©s en concurrence | > 0 | +| **concurrent_reads** | Lectures DataNode concurrentes | > 0 | +| **concurrent_writes** | Écritures DataNode concurrentes | > 0 | +| **concurrent_errors** | Erreurs pendant concurrence | 0 | +| **spam_messages_queued** | Messages queued pendant reload | > 0 | +| **cascade_latency_ms** | Latence cascade config changes | < 500ms | +| **io_p99_latency_ms** | P99 latence IO sous charge | < 10ms | +| **datanode_p99_latency_ms** | P99 latence DataNode sous charge | < 5ms | +| **memory_growth_mb** | Croissance mĂ©moire sous charge | < 20MB | + +--- + +## ✅ CritĂšres de SuccĂšs + +### MUST PASS +1. ✅ Config hot-reload chain fonctionne (< 100ms) +2. ✅ State persistence + event publishing chain fonctionne +3. ✅ Synchronization IO/DataNode sans erreurs +4. ✅ Runtime metrics avec batching +5. ✅ Concurrent access sans crashes/corruption +6. ✅ Hot-reload prĂ©serve state complet +7. ✅ Messages IO pas perdus pendant reload +8. ✅ Config cascades propagent correctement +9. ✅ Performance acceptable sous charge + +### NICE TO HAVE +1. ✅ Config reload latency < 50ms (optimal) +2. 
✅ Cascade latency < 200ms (optimal) +3. ✅ IO p99 < 5ms (optimal) +4. ✅ DataNode p99 < 2ms (optimal) + +--- + +## 🐛 Cas d'Erreur Attendus + +| Erreur | Cause | Action | +|--------|-------|--------| +| Config change pas propagĂ© | Callback pas dĂ©clenchĂ© | FAIL - fix onTreeReloaded | +| Sync errors > 0 | Race condition IO/DataNode | FAIL - add locking | +| Messages perdus | Queue overflow pendant reload | WARN - increase queue size | +| Concurrent crashes | Missing mutex | FAIL - add thread safety | +| State corrompu aprĂšs reload | setState() bug | FAIL - fix state restoration | +| Cascade timeout | Deadlock dans chain | FAIL - fix event routing | +| Performance degradation | O(nÂČ) algorithm | FAIL - optimize | +| Memory leak | Resources not freed | FAIL - fix destructors | + +--- + +## 📝 Output Attendu + +``` +================================================================================ +TEST: Cross-System Integration (IO + DataNode) +================================================================================ + +=== TEST 1: Config Hot-Reload → IO Broadcast === + → Config reloaded, publishing event... 
+ PlayerModule received config change: difficulty=hard, hpMult=1.5 +Total latency (reload + publish + subscribe + read): 87ms +✓ TEST 1 PASSED + +=== TEST 2: State Persistence + Event Publishing === + EconomyModule received: player:level_up + Player gold: 1000 + EconomyModule updated bonuses and saved +✓ TEST 2 PASSED + +=== TEST 3: Multi-Module State Synchronization === +Synchronization errors: 0 / 100 +✓ TEST 3 PASSED + +=== TEST 4: Runtime Metrics Collection === +Snapshots received: 3 (expected ~3 due to batching) +✓ TEST 4 PASSED + +=== TEST 5: Concurrent Access === +Concurrent test ran for 5.001s + Publishes: 487 + Reads: 243 + Writes: 489 + Errors: 0 +✓ TEST 5 PASSED + +=== TEST 6: Hot-Reload Module + Preserve State === +State before reload: 50 entities, 5000 gold, quest #42 active +Entities after reload: 50 +Gold after reload: 5000 +Quest after reload: active=true, id=42 +Spam messages received after reload: 94 +✓ TEST 6 PASSED + +=== TEST 7: Config Change Cascades === + → Cascade complete! 
+Cascade latency: 234ms +✓ TEST 7 PASSED + +=== TEST 8: Large State + High-Frequency IO === +Created large DataNode tree (1000 nodes) +Performance results: + IO published: 48723 messages + IO p99 latency: 8.3ms + DataNode reads: 9745 + DataNode p99 latency: 3.2ms + Memory growth: 14.7MB +✓ TEST 8 PASSED + +================================================================================ +METRICS +================================================================================ + Config reload latency: 87ms (threshold: < 100ms) ✓ + Sync errors: 0 (threshold: 0) ✓ + Batched snapshots: 3 + Concurrent publishes: 487 + Concurrent reads: 243 + Concurrent writes: 489 + Concurrent errors: 0 (threshold: 0) ✓ + Spam messages queued: 94 + Cascade latency: 234ms (threshold: < 500ms) ✓ + IO p99 latency: 8.3ms (threshold: < 10ms) ✓ + DataNode p99 latency: 3.2ms (threshold: < 5ms) ✓ + Memory growth: 14.7MB (threshold: < 20MB) ✓ + +================================================================================ +ASSERTIONS +================================================================================ + ✓ config_hotreload_chain + ✓ state_persistence_chain + ✓ state_synchronization + ✓ runtime_metrics + ✓ concurrent_access + ✓ hotreload_preserve_state + ✓ config_cascade + ✓ performance_under_load + +Result: ✅ PASSED (8/8 tests) + +================================================================================ +``` + +--- + +## 📅 Planning + +**Jour 1 (4h):** +- ImplĂ©menter ConfigWatcherModule, PlayerModule, EconomyModule, MetricsModule +- Setup IDataTree avec structure config/data/runtime +- Tests 1-3 (config reload, persistence, sync) + +**Jour 2 (4h):** +- Tests 4-6 (metrics, concurrent, hot-reload) +- Tests 7-8 (cascades, performance) +- Debug + validation + +--- + +**Conclusion**: Ces 3 nouveaux scĂ©narios (11, 12, 13) complĂštent la suite de tests d'intĂ©gration en couvrant les systĂšmes IO et DataNode, ainsi que leur intĂ©gration.
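The scenario listings lean on a shared assertion-and-reporting harness (`ASSERT_TRUE`, `ASSERT_EQ`, `reporter.addMetric`, `reporter.getExitCode()`, 
) whose implementation is not part of this patch. A minimal self-contained sketch of what such a reporter could look like — all names and signatures here are assumptions for illustration, not the engine's actual API:

```cpp
#include <cstddef>
#include <iostream>
#include <string>
#include <utility>
#include <vector>

// Hypothetical reporter sketch -- the real GroveEngine harness may differ.
class TestReporter {
public:
    // Record a named pass/fail result (the ASSERT_* macros would feed this).
    void addAssertion(const std::string& name, bool passed) {
        assertions_.push_back({name, passed});
    }

    // Record a named numeric metric for the final report.
    void addMetric(const std::string& name, double value) {
        metrics_.push_back({name, value});
    }

    // Non-zero exit code if any assertion failed, so CI can detect failures.
    int getExitCode() const {
        for (const auto& a : assertions_)
            if (!a.second) return 1;
        return 0;
    }

    void printFinalReport() const {
        std::size_t passed = 0;
        for (const auto& m : metrics_)
            std::cout << "  " << m.first << ": " << m.second << "\n";
        for (const auto& a : assertions_) {
            std::cout << (a.second ? "  ✓ " : "  ✗ ") << a.first << "\n";
            if (a.second) passed++;
        }
        std::cout << "Result: " << passed << "/" << assertions_.size()
                  << " assertions passed\n";
    }

private:
    std::vector<std::pair<std::string, bool>> assertions_;
    std::vector<std::pair<std::string, double>> metrics_;
};
```

Under this sketch, `getExitCode()` is what lets each scenario's test `main` return a CI-friendly status, matching the `return reporter.getExitCode();` calls in the listings.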