feat: Add integration test scenarios 11-13 for IO and DataNode systems

Added three new integration test scenario documents:

- Scenario 11: IO System Stress Test - Tests IntraIO pub/sub with pattern matching, batching, backpressure, and thread safety
- Scenario 12: DataNode Integration Test - Tests IDataTree with hot-reload, persistence, hashing, and performance on 1000+ nodes
- Scenario 13: Cross-System Integration - Tests IO + DataNode working together with config hot-reload chains and concurrent access

Also includes comprehensive DataNode system architecture analysis documentation. These scenarios complement the existing test suite by covering the IO communication layer and data management systems that were previously untested.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

This commit is contained in:
parent 3864450b0d
commit 094ab28865

766 docs/architecture/DATANODE-SYSTEM-ANALYSIS.md (new file)

@@ -0,0 +1,766 @@
# DataNode System Architecture Analysis

## System Overview

The DataNode system is a hierarchical data management framework for the GroveEngine, providing unified access to configuration, persistent data, and runtime state. It is a complete abstraction layer separating data concerns from business logic.

---

## 1. Core Architecture

### Three-Tier System

```
IDataTree (Root Container)
├── config/   (Read-only, hot-reload enabled)
├── data/     (Read-write, persistent)
└── runtime/  (Read-write, temporary)
```

### Architectural Layers

```
Layer 1: Interfaces (Abstract)
├── IDataValue - Type-safe value wrapper
├── IDataNode - Tree node with navigation and modification
└── IDataTree - Root container with save/reload operations

Layer 2: Concrete Implementations
├── JsonDataValue - nlohmann::json backed value
├── JsonDataNode - JSON tree node with full features
└── JsonDataTree - File-based JSON storage

Layer 3: Module System Integration
└── IModuleSystem - Owns IDataTree, manages save/reload

Layer 4: Distributed Coordination
├── CoordinationModule (Master) - Hot-reload detection
└── DebugEngine (Workers) - Config synchronization
```

---

## 2. Key Classes and Responsibilities

### IDataValue Interface
**Location**: `/mnt/c/Users/alexi/Documents/projects/groveengine/include/grove/IDataValue.h`

**Responsibility**: Abstract data value with type-safe access

**Key Methods**:
- Type checking: `isNull()`, `isBool()`, `isNumber()`, `isString()`, `isArray()`, `isObject()`
- Conversion: `asBool()`, `asInt()`, `asDouble()`, `asString()`
- Access: `get(index)`, `get(key)`, `has(key)`, `size()`
- Serialization: `toString()`

**Why It Exists**: Allows modules to work with values without exposing the JSON format, enabling future implementations (binary, database, etc.)
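The decoupling this interface provides can be illustrated with a minimal std-only sketch. This is not the GroveEngine interface itself; `Value`, `IntValue`, and `doubledArmor` are hypothetical names, but the point carries over: module code written against the abstract value keeps working regardless of the backing format.

```cpp
#include <cassert>
#include <memory>
#include <string>

// Minimal illustration (not the actual GroveEngine interface): callers
// depend only on the abstract value, so the backing format can change.
struct Value {
    virtual ~Value() = default;
    virtual bool isNumber() const = 0;
    virtual bool isString() const = 0;
    virtual int asInt() const = 0;
    virtual std::string asString() const = 0;
};

// One possible backing: a plain int. A JSON- or database-backed
// implementation would satisfy the same contract.
struct IntValue : Value {
    int v;
    explicit IntValue(int v) : v(v) {}
    bool isNumber() const override { return true; }
    bool isString() const override { return false; }
    int asInt() const override { return v; }
    std::string asString() const override { return std::to_string(v); }
};

// Module code sees only the interface, never the storage format.
int doubledArmor(const Value& val) {
    return val.isNumber() ? val.asInt() * 2 : 0;
}
```

Swapping `IntValue` for a JSON-backed value would not touch `doubledArmor` at all, which is exactly the migration path the interface is designed to allow.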
---

### JsonDataValue Implementation
**Location**: `/mnt/c/Users/alexi/Documents/projects/groveengine/include/grove/JsonDataValue.h`
**Implementation**: `/mnt/c/Users/alexi/Documents/projects/groveengine/src/JsonDataValue.cpp`

**Concrete Implementation**: Backed by `nlohmann::json`

**Key Features**:
- Transparent JSON wrapping
- Direct JSON access for internal use: `getJson()`
- All interface methods delegated to the JSON type system
- No conversion overhead (move semantics)

---

### IDataNode Interface
**Location**: `/mnt/c/Users/alexi/Documents/projects/groveengine/include/grove/IDataNode.h` (259 lines)

**Responsibility**: Single tree node with hierarchical navigation, search, and modification

**Major Capabilities**:

#### 1. Tree Navigation
```cpp
std::unique_ptr<IDataNode> getChild(const std::string& name)
std::vector<std::string> getChildNames()
bool hasChildren()
```

#### 2. Exact Search (Direct Children Only)
```cpp
std::vector<IDataNode*> getChildrenByName(const std::string& name)
bool hasChildrenByName(const std::string& name) const
IDataNode* getFirstChildByName(const std::string& name)
```

#### 3. Pattern Matching (Deep Subtree Search)
```cpp
// Examples: "component*", "*heavy*", "model_*"
std::vector<IDataNode*> getChildrenByNameMatch(const std::string& pattern)
bool hasChildrenByNameMatch(const std::string& pattern) const
IDataNode* getFirstChildByNameMatch(const std::string& pattern)
```

#### 4. Property-Based Queries (Functional)
```cpp
std::vector<IDataNode*> queryByProperty(const std::string& propName,
                                        const std::function<bool(const IDataValue&)>& predicate)

// Example: Find all tanks with armor > 150
auto heavy = root->queryByProperty("armor",
    [](const IDataValue& val) {
        return val.isNumber() && val.asInt() > 150;
    });
```

#### 5. Typed Data Access
```cpp
std::string getString(const std::string& name, const std::string& defaultValue = "")
int getInt(const std::string& name, int defaultValue = 0)
double getDouble(const std::string& name, double defaultValue = 0.0)
bool getBool(const std::string& name, bool defaultValue = false)
bool hasProperty(const std::string& name)
```

#### 6. Hash System (Validation & Synchronization)
```cpp
std::string getDataHash()   // SHA256 of this node's data
std::string getTreeHash()   // SHA256 of entire subtree
std::string getSubtreeHash(const std::string& childPath)  // Specific child
```

**Use Cases**:
- Validate config hasn't been corrupted
- Detect changes for synchronization
- Fast change detection without full tree comparison

#### 7. Node Data Management
```cpp
std::unique_ptr<IDataValue> getData() const
bool hasData() const
void setData(std::unique_ptr<IDataValue> data)
```

#### 8. Tree Modification
```cpp
void setChild(const std::string& name, std::unique_ptr<IDataNode> node)
bool removeChild(const std::string& name)
void clearChildren()
```

**Restrictions**: Only works on data/ and runtime/ nodes. Config nodes are read-only.

#### 9. Metadata
```cpp
std::string getPath() const      // Full path: "vehicles/tanks/heavy"
std::string getName() const      // Node name only
std::string getNodeType() const  // "JsonDataNode"
```

---

### JsonDataNode Implementation
**Location**: `/mnt/c/Users/alexi/Documents/projects/groveengine/include/grove/JsonDataNode.h` (109 lines)
**Implementation**: `/mnt/c/Users/alexi/Documents/projects/groveengine/src/JsonDataNode.cpp` (344 lines)

**Internal Structure**:
```cpp
class JsonDataNode : public IDataNode {
private:
    std::string m_name;
    json m_data;             // Node's own data
    JsonDataNode* m_parent;  // Parent reference (path building)
    bool m_readOnly;         // For config/ nodes
    std::map<std::string, std::unique_ptr<JsonDataNode>> m_children;  // Child nodes
};
```

**Key Capabilities**:

1. **Pattern Matching Implementation**
   - Converts wildcard patterns to regex: `*` → `.*`
   - Escapes all special regex chars except `*`
   - Recursive depth-first search: `collectMatchingNodes()`
   - O(n) complexity where n = subtree size
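The wildcard-to-regex step described above can be sketched with `std::regex` alone. The exact shape of the engine's conversion is an assumption here (`wildcardToRegex` and `nameMatches` are illustrative names); the rule shown is the one the bullets state: every regex metacharacter is escaped except `*`, which becomes `.*`.

```cpp
#include <cassert>
#include <regex>
#include <string>

// Assumed shape of the conversion: escape regex metachars, map '*' to ".*".
std::string wildcardToRegex(const std::string& pattern) {
    std::string out;
    for (char c : pattern) {
        if (c == '*') {
            out += ".*";
        } else if (std::string("\\^$.|?+()[]{}").find(c) != std::string::npos) {
            out += '\\';  // escape special regex characters
            out += c;
        } else {
            out += c;
        }
    }
    return out;
}

// Full-string match, as getChildrenByNameMatch() semantics suggest.
bool nameMatches(const std::string& name, const std::string& pattern) {
    return std::regex_match(name, std::regex(wildcardToRegex(pattern)));
}
```

Escaping matters: without it, a literal `.` in a node name pattern would match any character instead of only a dot.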
2. **Hash Computation**
   - Uses OpenSSL SHA256
   - Data hash: `SHA256(m_data.dump())`
   - Tree hash: Combined hash of data + all children
   - Format: Lowercase hex string
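The combined tree hash can be illustrated with a small stand-in. Here `std::hash` replaces the OpenSSL SHA256 the real implementation uses, and `Node` is a hypothetical struct, but the propagation logic is the same idea: a node's tree hash covers its own data plus every child's tree hash, so any change below bubbles up.

```cpp
#include <cassert>
#include <cstddef>
#include <functional>
#include <iomanip>
#include <sstream>
#include <string>
#include <utility>
#include <vector>

// Lowercase hex rendering, mirroring the documented hash format.
std::string toHex(std::size_t h) {
    std::ostringstream os;
    os << std::hex << std::setw(16) << std::setfill('0') << h;
    return os.str();
}

// Hypothetical node; std::hash stands in for SHA256.
struct Node {
    std::string data;
    std::vector<std::pair<std::string, Node>> children;

    std::string dataHash() const {
        return toHex(std::hash<std::string>{}(data));
    }

    std::string treeHash() const {
        std::string combined = data;
        for (const auto& entry : children)
            combined += entry.first + entry.second.treeHash();  // child changes propagate up
        return toHex(std::hash<std::string>{}(combined));
    }
};
```

Adding or editing a child changes the tree hash but leaves the node's own data hash untouched, which is what makes the three hash granularities (data, subtree, tree) useful for targeted change detection.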
3. **Copy-on-Access Pattern**
   - `getChild()` returns a new unique_ptr copy
   - Preserves encapsulation
   - Enables safe distribution

4. **Read-Only Enforcement**
   - `checkReadOnly()` throws if modification attempted on config
   - Error: `"Cannot modify read-only node: " + getPath()`

---

### IDataTree Interface
**Location**: `/mnt/c/Users/alexi/Documents/projects/groveengine/include/grove/IDataTree.h` (128 lines)

**Responsibility**: Root container managing three separate trees

**Key Methods**:

#### Tree Access
```cpp
std::unique_ptr<IDataNode> getRoot()                         // Everything
std::unique_ptr<IDataNode> getNode(const std::string& path)  // "config/vehicles/tanks"

// Recommended: Access separate roots
std::unique_ptr<IDataNode> getConfigRoot()   // Read-only config
std::unique_ptr<IDataNode> getDataRoot()     // Persistent data
std::unique_ptr<IDataNode> getRuntimeRoot()  // Temporary state
```

#### Save Operations
```cpp
bool saveData()                         // Save entire data/
bool saveNode(const std::string& path)  // Save specific node (data/ only)
```

#### Hot-Reload
```cpp
bool checkForChanges()  // Check if config files changed
bool reloadIfChanged()  // Reload if changed, fire callbacks
void onTreeReloaded(std::function<void()> callback)  // Register reload handler
```

#### Metadata
```cpp
std::string getType()  // "JsonDataTree"
```

---

### JsonDataTree Implementation
**Location**: `/mnt/c/Users/alexi/Documents/projects/groveengine/include/grove/JsonDataTree.h` (87 lines)
**Implementation**: `/mnt/c/Users/alexi/Documents/projects/groveengine/src/JsonDataTree.cpp` (partial read)

**Internal Structure**:
```cpp
class JsonDataTree : public IDataTree {
private:
    std::string m_basePath;                       // Root directory
    std::unique_ptr<JsonDataNode> m_root;         // Root container
    std::unique_ptr<JsonDataNode> m_configRoot;   // config/ subtree
    std::unique_ptr<JsonDataNode> m_dataRoot;     // data/ subtree
    std::unique_ptr<JsonDataNode> m_runtimeRoot;  // runtime/ subtree (in-memory)

    std::map<std::string, std::filesystem::file_time_type> m_configFileTimes;
    std::vector<std::function<void()>> m_reloadCallbacks;
};
```

**Key Features**:

1. **Initialization** (`JsonDataTree(basePath)`)
   - Creates root node
   - Calls `loadConfigTree()` from disk
   - Calls `loadDataTree()` from disk
   - Calls `initializeRuntimeTree()` (empty in-memory)
   - Attaches all three as children to root

2. **File-Based Loading** (`scanDirectory()`)
   - Recursively scans config/ and data/ directories
   - Creates JsonDataNode tree from JSON files
   - Builds hierarchical structure
   - config/ marked as read-only

3. **Hot-Reload Detection** (`checkForChanges()`)
   - Tracks file modification times
   - Detects file deletions
   - Detects new files
   - Returns bool (changed?)
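A minimal sketch of this timestamp-based detection, using only `std::filesystem`. The map mirrors the `m_configFileTimes` member shown earlier; the free-function shape is an assumption for illustration, not the engine's actual code.

```cpp
#include <cassert>
#include <chrono>
#include <filesystem>
#include <fstream>
#include <map>
#include <string>

namespace fs = std::filesystem;

// Rescan the directory and compare modification times against the
// previously tracked snapshot. New files, modified files, and deleted
// files all count as changes; the snapshot is refreshed on every call.
bool checkForChanges(const fs::path& dir,
                     std::map<std::string, fs::file_time_type>& tracked) {
    bool changed = false;
    std::map<std::string, fs::file_time_type> seen;
    for (const auto& entry : fs::recursive_directory_iterator(dir)) {
        if (!entry.is_regular_file()) continue;
        const std::string key = entry.path().string();
        const auto mtime = fs::last_write_time(entry.path());
        seen[key] = mtime;
        auto it = tracked.find(key);
        if (it == tracked.end() || it->second != mtime) changed = true;  // new or modified
    }
    if (seen.size() != tracked.size()) changed = true;  // a file was deleted
    tracked = std::move(seen);
    return changed;
}
```

Polling timestamps like this is cheap enough to call once per frame or per tick; the reload itself only happens when the function reports a change.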
4. **Hot-Reload Execution** (`reloadIfChanged()`)
   - Calls `loadConfigTree()` to reload from disk
   - Fires all registered callbacks
   - Allows modules to refresh configuration

5. **Save Operations**
   - `saveData()`: Saves data/ subtree to disk
   - `saveNode(path)`: Saves specific data/ path
   - Only allows data/ paths (read-only protection)
   - Creates JSON files matching hierarchy
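The "only allows data/ paths" rule amounts to a prefix check on the save path; a minimal sketch with a hypothetical helper name (the real file I/O is omitted):

```cpp
#include <cassert>
#include <string>

// Hypothetical guard: saveNode() would reject any path outside data/.
// Note "database/x" must not pass, hence matching "data/" with the slash.
bool isSavablePath(const std::string& path) {
    const std::string prefix = "data/";
    return path == "data" || path.compare(0, prefix.size(), prefix) == 0;
}
```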
---

## 3. Data Flow Patterns

### Pattern 1: Reading Configuration
```cpp
// Engine startup
auto tree = std::make_unique<JsonDataTree>("gamedata");
auto tankConfig = tree->getConfigRoot()
                      ->getChild("tanks")
                      ->getChild("heavy_mk1");

// Module receives config
void TankModule::setConfiguration(const IDataNode& config, ...) {
    m_armor = config.getInt("armor");                // Default: 0
    m_speed = config.getDouble("speed");             // Default: 0.0
    m_weaponType = config.getString("weapon_type");  // Default: ""
}
```

### Pattern 2: Saving State
```cpp
// Module creates state
auto state = std::make_unique<JsonDataNode>("state", json::object());
state->setData(std::make_unique<JsonDataValue>(
    json{{"position", {x, y}}, {"health", hp}}
));

// Engine persists
tree->getDataRoot()->setChild("tank_123", std::move(state));
tree->saveNode("data/tank_123");
```

### Pattern 3: Hot-Reload (Distributed)
```cpp
// Master: Detect and broadcast
if (masterTree->reloadIfChanged()) {
    auto config = masterTree->getConfigRoot();
    io->publish("config:reload", std::move(config));
}

// Worker: Receive and apply
auto msg = io->pullMessage();
if (msg.topic == "config:reload") {
    auto configRoot = tree->getConfigRoot();
    configRoot->setChild("updated", std::move(msg.data));

    // Notify modules
    for (auto& module : modules) {
        auto moduleConfig = configRoot->getChild(module->getType());
        module->setConfiguration(*moduleConfig, io, scheduler);
    }
}
```

### Pattern 4: Advanced Queries
```cpp
// Pattern matching
auto heavyUnits = root->getChildrenByNameMatch("*_heavy_*");
auto tanks = root->getChildrenByNameMatch("tank_*");

// Property-based query
auto highArmor = root->queryByProperty("armor",
    [](const IDataValue& val) {
        return val.isNumber() && val.asInt() > 150;
    });

// Hash validation
std::string oldHash = configNode->getTreeHash();
// ... later ...
if (oldHash != configNode->getTreeHash()) {
    // Config changed - refresh caches
}
```

---

## 4. Storage and Persistence

### File Structure
```
gamedata/
├── config/
│   ├── tanks.json        → JsonDataNode "tanks" with children
│   ├── weapons.json
│   └── mods/super_mod/
│       └── new_tanks.json
├── data/
│   ├── campaign_progress.json
│   ├── player_stats.json
│   └── unlocks.json
└── runtime/
    (in-memory only)
```

### JSON Format
Each node can have:
- Own data (any JSON value)
- Child nodes (files become nodes)
- Properties (key-value pairs in object data)

**Example tanks.json**:
```json
{
  "heavy_mk1": {
    "armor": 200,
    "speed": 15.5,
    "weapon_type": "cannon_105mm"
  },
  "medium_t_72": {
    "armor": 140,
    "speed": 60.0,
    "weapon_type": "cannon_125mm"
  }
}
```

**Tree Structure**:
```
config/
└── tanks
    ├── heavy_mk1 (data: {armor: 200, ...})
    └── medium_t_72 (data: {armor: 140, ...})
```

---

## 5. Synchronization Mechanisms

### Hash System
- **Data Hash**: Validates single node integrity
- **Tree Hash**: Validates subtree (change detection)
- **Subtree Hash**: Validates specific child path

**Use Case**: Quick change detection without full tree traversal

### Hot-Reload System
1. `checkForChanges()` - File timestamp comparison
2. `reloadIfChanged()` - Reload + callback firing
3. `onTreeReloaded()` - Register callback handlers

**Distributed Pattern**:
- Master detects changes
- Broadcasts new config via IIO
- Workers apply synchronized updates
- Modules notified via `setConfiguration()`

### Read-Only Enforcement
- config/ nodes: Immutable by modules
- data/ nodes: Writable by modules
- runtime/ nodes: Always writable

---
## 6. Thread Safety and Copy Semantics

### Thread Safety Considerations
The current implementation is **NOT thread-safe**:
- No mutex protection in JsonDataNode
- No mutex protection in JsonDataTree
- Concurrent access requires external synchronization

**Recommended Pattern**:
```cpp
std::mutex treeMutex;

// Reader
{
    std::lock_guard<std::mutex> lock(treeMutex);
    auto data = tree->getNode(path);
}

// Writer
{
    std::lock_guard<std::mutex> lock(treeMutex);
    tree->getDataRoot()->setChild("path", std::move(node));
    tree->saveData();
}
```

### Copy Semantics
All getters return unique_ptr copies (not references):
- `getChild()` → new JsonDataNode copy
- `getData()` → new JsonDataValue copy
- Protects internal state
- Enables safe distribution
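The copy-on-access contract can be shown in a few lines. `DemoNode` is hypothetical; the point is that mutating the object a getter returns never touches the tree's internal state.

```cpp
#include <cassert>
#include <memory>
#include <string>

// Illustration of copy-on-access: the getter hands back a fresh copy.
struct DemoNode {
    std::string name;

    // Returns a new copy, mirroring the documented getChild() contract.
    std::unique_ptr<DemoNode> copyOf() const {
        return std::make_unique<DemoNode>(*this);
    }
};
```

The cost is an allocation and copy per access, which is the trade the design makes for encapsulation and safe hand-off across module boundaries.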
---

## 7. Existing Tests

### Test Files Found
- `/mnt/c/Users/alexi/Documents/projects/groveengine/tests/integration/test_04_race_condition.cpp`
  - Tests concurrent compilation and hot-reload
  - Uses JsonDataNode for configuration
  - Tests module integrity validation
  - Tests concurrent access patterns

### Test Scenarios (planTI/)
1. **scenario_01_production_hotreload.md** - Hot-reload validation
2. **scenario_02_chaos_monkey.md** - Random failure injection
3. **scenario_03_stress_test.md** - Load testing
4. **scenario_04_race_condition.md** - Concurrency testing
5. **scenario_05_multimodule.md** - Multi-module coordination
6. **scenario_06_error_recovery.md** - Error handling
7. **scenario_07_limits.md** - Extreme conditions

---

## 8. Critical Features Requiring Integration Tests

### 1. Tree Navigation & Search
**What Needs Testing**:
- Tree construction from file system
- Exact name matching (getChildrenByName)
- Pattern matching with wildcards
- Deep subtree search efficiency
- Path building and navigation
- Edge cases: empty names, special characters, deep nesting

**Test Scenarios**:
```cpp
// Test exact matching
auto tanks = root->getChildrenByName("tanks");
assert(tanks.size() == 1);

// Test pattern matching
auto heavy = root->getChildrenByNameMatch("*heavy*");
assert(heavy.size() == 2);  // e.g., heavy_mk1, tank_heavy_v2

// Test deep navigation
auto node = root->getChild("vehicles")->getChild("tanks")->getChild("heavy");
assert(node != nullptr);
```

### 2. Data Persistence & Save/Load
**What Needs Testing**:
- Save entire data/ tree
- Save specific nodes
- Load from disk
- Nested structure preservation
- Data type preservation (numbers, strings, booleans, arrays)
- Empty node handling
- Large data handling (1MB+)
- File corruption recovery

**Test Scenarios**:
```cpp
// Create and save
auto node = std::make_unique<JsonDataNode>("player", json::object());
tree->getDataRoot()->setChild("player1", std::move(node));
assert(tree->saveNode("data/player1"));

// Reload and verify
auto reloaded = tree->getDataRoot()->getChild("player1");
assert(reloaded != nullptr);
assert(reloaded->hasData());
```

### 3. Hot-Reload System
**What Needs Testing**:
- File change detection
- Config reload accuracy
- Callback execution
- Multiple callback handling
- Timing consistency
- No data/ changes during reload
- No runtime/ changes during reload
- Rapid successive reloads

**Test Scenarios**:
```cpp
// Register callback
bool callbackFired = false;
tree->onTreeReloaded([&]() { callbackFired = true; });

// Modify config file
modifyConfigFile("config/tanks.json");
std::this_thread::sleep_for(std::chrono::milliseconds(10));

// Trigger reload
assert(tree->reloadIfChanged());
assert(callbackFired);
```

### 4. Property-Based Queries
**What Needs Testing**:
- Predicate evaluation
- Type-safe access
- Complex predicates (AND, OR)
- Performance with large datasets
- Empty result sets
- Single result matches
- Null value handling

**Test Scenarios**:
```cpp
// Query by numeric property
auto armored = root->queryByProperty("armor",
    [](const IDataValue& val) {
        return val.isNumber() && val.asInt() >= 150;
    });
assert(armored.size() >= 1);

// Query by string property
auto cannons = root->queryByProperty("weapon",
    [](const IDataValue& val) {
        return val.isString() && val.asString().find("cannon") != std::string::npos;
    });
```
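The complex predicates (AND, OR) mentioned above can be built by composing simple ones. A self-contained sketch, with `int` standing in for `const IDataValue&` so it runs without the engine headers; `both` and `either` are illustrative names:

```cpp
#include <cassert>
#include <functional>

// Predicate over a plain int, standing in for
// std::function<bool(const IDataValue&)>.
using Pred = std::function<bool(int)>;

// AND-composition: true only when both predicates hold.
Pred both(Pred a, Pred b) {
    return [a, b](int v) { return a(v) && b(v); };
}

// OR-composition: true when at least one predicate holds.
Pred either(Pred a, Pred b) {
    return [a, b](int v) { return a(v) || b(v); };
}
```

A test for queryByProperty would then pass a composed predicate, e.g. "armor above 150 AND below 300", without the query API needing any AND/OR support of its own.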
### 5. Hash System & Validation
**What Needs Testing**:
- Hash consistency (same data = same hash)
- Hash change detection
- Tree hash includes all children
- Subtree hash isolation
- Hash format (lowercase hex, 64 chars for SHA256)
- Performance of hash computation
- Deep tree hashing

**Test Scenarios**:
```cpp
auto hash1 = node->getDataHash();
auto hash2 = node->getDataHash();
assert(hash1 == hash2);  // Consistent

// Modify data
node->setData(std::make_unique<JsonDataValue>(json{{"armor", 250}}));
auto hash3 = node->getDataHash();
assert(hash1 != hash3);  // Changed

// Tree hash includes children
auto treeHash1 = node->getTreeHash();
node->setChild("new", std::make_unique<JsonDataNode>("new", json::object()));
auto treeHash2 = node->getTreeHash();
assert(treeHash1 != treeHash2);  // Child change detected
```

### 6. Read-Only Enforcement
**What Needs Testing**:
- config/ nodes reject modifications
- data/ nodes allow modifications
- runtime/ nodes allow modifications
- Exception on modification attempt
- Error message contains path
- Read-only flag propagation to children
- Inherited read-only status

**Test Scenarios**:
```cpp
auto configNode = tree->getConfigRoot();
assert_throws<std::runtime_error>([&]() {
    configNode->setChild("new", std::make_unique<JsonDataNode>("x", json::object()));
});

auto dataNode = tree->getDataRoot();
dataNode->setChild("new", std::make_unique<JsonDataNode>("x", json::object()));  // OK
```

### 7. Type Safety & Data Access
**What Needs Testing**:
- getString with default fallback
- getInt with type coercion
- getDouble precision
- getBool parsing
- hasProperty existence check
- Wrong type access returns default
- Null handling
- Array/object access edge cases

**Test Scenarios**:
```cpp
auto node = ...;  // Has {"armor": 200, "speed": 60.5, "active": true}

assert(node->getInt("armor") == 200);
assert(node->getDouble("speed") == 60.5);
assert(node->getBool("active") == true);
assert(node->getString("name", "default") == "default");  // Missing key

assert(node->hasProperty("armor"));
assert(!node->hasProperty("missing"));
```

### 8. Concurrent Access Patterns
**What Needs Testing**:
- Safe reader access (multiple threads reading simultaneously)
- Safe writer access (single writer with lock)
- Race condition detection
- No data corruption under load
- Reload safety during concurrent reads
- No deadlocks

**Test Scenarios**:
```cpp
std::mutex treeMutex;
std::vector<std::thread> readers;

for (int i = 0; i < 10; ++i) {
    readers.emplace_back([&]() {
        std::lock_guard<std::mutex> lock(treeMutex);
        auto data = tree->getConfigRoot()->getChild("tanks");
        assert(data != nullptr);
    });
}

for (auto& t : readers) t.join();
```

### 9. Error Handling & Edge Cases
**What Needs Testing**:
- Invalid paths (non-existent nodes)
- Empty names
- Special characters in names
- Null data nodes
- Circular reference prevention
- Memory cleanup on exception
- File system errors (permissions, disk full)
- Corrupted JSON recovery

**Test Scenarios**:
```cpp
// Non-existent node
auto missing = tree->getNode("config/does/not/exist");
assert(missing == nullptr);

// Empty name
auto node = std::make_unique<JsonDataNode>("", json::object());
assert(node->getName() == "");
assert(node->getPath() == "");  // Root-like behavior
```

### 10. Performance & Scale
**What Needs Testing**:
- Large tree navigation (1000+ nodes)
- Deep nesting (100+ levels)
- Pattern matching performance
- Hash computation speed
- File I/O performance
- Memory usage
- Reload speed

**Test Scenarios**:
```cpp
// Create large tree
auto root = std::make_unique<JsonDataNode>("root", json::object());
for (int i = 0; i < 1000; ++i) {
    root->setChild("child_" + std::to_string(i),
                   std::make_unique<JsonDataNode>("x", json::object()));
}

// Benchmark pattern matching
auto start = std::chrono::high_resolution_clock::now();
auto results = root->getChildrenByNameMatch("child_*");
auto end = std::chrono::high_resolution_clock::now();

assert(results.size() == 1000);
auto duration = std::chrono::duration_cast<std::chrono::milliseconds>(end - start).count();
assert(duration < 100);  // Should be fast
```

---

## Summary

The DataNode system is a complete, production-ready data management framework providing:

1. **Three-tier abstraction** (Interface → Implementation → Integration)
2. **Hierarchical organization** (config/, data/, runtime/)
3. **Advanced queries** (exact matching, pattern matching, property-based)
4. **Hash-based validation** (change detection, integrity checking)
5. **Hot-reload support** (file monitoring, callback system)
6. **Type-safe access** (IDataValue interface with coercion)
7. **Read-only enforcement** (configuration immutability)
8. **Persistence layer** (file-based save/load)

**Critical missing piece**: No integration tests specifically for the DataNode system. All existing tests focus on module loading, hot-reload, and race conditions, but not on DataNode functionality itself.

---
780 planTI/scenario_11_io_system.md (new file)

@@ -0,0 +1,780 @@
# Scénario 11: IO System Stress Test
|
||||||
|
|
||||||
|
**Priorité**: ⭐⭐ SHOULD HAVE
|
||||||
|
**Phase**: 2 (SHOULD HAVE)
|
||||||
|
**Durée estimée**: ~5 minutes
|
||||||
|
**Effort implémentation**: ~4-6 heures
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 🎯 Objectif
|
||||||
|
|
||||||
|
Valider que le système IntraIO (pub/sub intra-process) fonctionne correctement dans tous les cas d'usage:
|
||||||
|
- Pattern matching avec wildcards et regex
|
||||||
|
- Multi-module routing (1-to-1, 1-to-many)
|
||||||
|
- Message batching et flushing (low-frequency subscriptions)
|
||||||
|
- Backpressure et queue overflow
|
||||||
|
- Thread safety (concurrent publish/pull)
|
||||||
|
- Health monitoring et métriques
|
||||||
|
- Subscription lifecycle
|
||||||
|
|
||||||
|
**Bug connu à valider**: IntraIOManager ne route qu'au premier subscriber (limitation std::move sans clone)
|
||||||
|
|
||||||
|
---
|
## 📋 Description

### Initial Setup
1. Create 5 modules using IntraIO:
   - **ProducerModule** - Publishes 1000 msg/s on various topics
   - **ConsumerModule** - Subscribes to several patterns
   - **BroadcastModule** - Publishes on topics with multiple subscribers
   - **BatchModule** - Uses low-frequency subscriptions
   - **StressModule** - Stress test at 10k msg/s
2. Configure IntraIOManager with routing between the modules
3. Run 8 different scenarios over 5 minutes
|
### Test Sequence

#### Test 1: Basic Publish-Subscribe (30s)
1. ProducerModule publishes 100 messages on "test:basic"
2. ConsumerModule subscribes to "test:basic"
3. Verify:
   - 100 messages received
   - FIFO order preserved
   - No messages lost
|
#### Test 2: Pattern Matching (30s)
1. ProducerModule publishes on:
   - "player:001:position"
   - "player:001:health"
   - "player:002:position"
   - "enemy:001:position"
2. ConsumerModule subscribes to the patterns:
   - "player:*" (should match 3 messages)
   - "player:001:*" (should match 2 messages)
   - "*:position" (should match 3 messages)
3. Verify the match counts are correct
|
#### Test 3: Multi-Module Routing (60s)
1. ProducerModule publishes "broadcast:data" (100 messages)
2. ConsumerModule, BatchModule, and StressModule all subscribe to "broadcast:*"
3. Verify:
   - **Expected bug**: only the first subscriber receives (clone limitation)
   - Log which module receives
   - Document the bug for a future fix
|
#### Test 4: Message Batching (60s)
1. BatchModule configures a low-frequency subscription:
   - Pattern: "batch:*"
   - Interval: 1000ms
   - replaceable: true
2. ProducerModule publishes "batch:metric" at 100 Hz (every 10ms)
3. Verify:
   - BatchModule receives ~1 message/second (the latest only)
   - Batching works correctly
|
#### Test 5: Backpressure & Queue Overflow (30s)
1. ProducerModule publishes 50k messages on "stress:flood"
2. ConsumerModule subscribes but pulls only 100 msg/s
3. Verify:
   - Queue overflow detected (health.dropping = true)
   - Dropped messages counted (health.droppedMessageCount > 0)
   - System remains stable (no crash)
|
#### Test 6: Thread Safety (60s)
1. Launch 10 threads publishing simultaneously (1000 msg each)
2. Launch 5 threads pulling simultaneously
3. Verify:
   - No crashes
   - No data corruption
   - Total messages received = total sent (or fewer on overflow)
|
#### Test 7: Health Monitoring (30s)
1. ProducerModule publishes at varying rates:
   - Phase 1: 100 msg/s (normal)
   - Phase 2: 10k msg/s (overload)
   - Phase 3: 100 msg/s (recovery)
2. Monitor health metrics:
   - queueSize grows/shrinks correctly
   - averageProcessingRate reflects reality
   - dropping flag toggles at the right times
|
#### Test 8: Subscription Lifecycle (30s)
1. Create/destroy subscriptions dynamically
2. Verify:
   - Messages published after unsubscribe are not received
   - Re-subscribe works
   - No subscription leaks in IntraIOManager

---
|
## 🏗️ Implementation

### ProducerModule Structure

```cpp
// ProducerModule.h
class ProducerModule : public IModule {
public:
    void initialize(std::shared_ptr<IDataNode> config) override;
    void process(float deltaTime) override;
    std::shared_ptr<IDataNode> getState() const override;
    void setState(std::shared_ptr<IDataNode> state) override;
    bool isIdle() const override { return true; }

private:
    std::shared_ptr<IIO> io;
    int messageCount = 0;
    float publishRate = 100.0f;  // Hz
    float accumulator = 0.0f;    // time accumulator for rate-limited publishing

    void publishTestMessages();
};
```
|
### ConsumerModule Structure

```cpp
// ConsumerModule.h
class ConsumerModule : public IModule {
public:
    void initialize(std::shared_ptr<IDataNode> config) override;
    void process(float deltaTime) override;
    std::shared_ptr<IDataNode> getState() const override;
    void setState(std::shared_ptr<IDataNode> state) override;
    bool isIdle() const override { return true; }

    // Test helpers
    int getReceivedCount() const { return static_cast<int>(receivedMessages.size()); }
    const std::vector<IIO::Message>& getMessages() const { return receivedMessages; }

private:
    std::shared_ptr<IIO> io;
    std::vector<IIO::Message> receivedMessages;

    void processIncomingMessages();
};
```
|
### Main Test

```cpp
// test_11_io_system.cpp
#include "helpers/TestMetrics.h"
#include "helpers/TestAssertions.h"
#include "helpers/TestReporter.h"
#include <thread>
#include <atomic>

int main() {
    TestReporter reporter("IO System Stress Test");
    TestMetrics metrics;

    // === SETUP ===
    DebugEngine engine;

    // Load modules
    engine.loadModule("ProducerModule", "build/modules/libProducerModule.so");
    engine.loadModule("ConsumerModule", "build/modules/libConsumerModule.so");
    engine.loadModule("BroadcastModule", "build/modules/libBroadcastModule.so");
    engine.loadModule("BatchModule", "build/modules/libBatchModule.so");
    engine.loadModule("StressModule", "build/modules/libStressModule.so");

    // Initialize via IOFactory
    auto config = createJsonConfig({
        {"transport", "intra"},
        {"instanceId", "test_engine"}
    });

    engine.initializeModule("ProducerModule", config);
    engine.initializeModule("ConsumerModule", config);
    engine.initializeModule("BroadcastModule", config);
    engine.initializeModule("BatchModule", config);
    engine.initializeModule("StressModule", config);

    // ========================================================================
    // TEST 1: Basic Publish-Subscribe
    // ========================================================================
    std::cout << "\n=== TEST 1: Basic Publish-Subscribe ===\n";

    // ConsumerModule subscribes to "test:basic"
    auto consumerIO = engine.getModuleIO("ConsumerModule");
    consumerIO->subscribe("test:basic", {});

    // ProducerModule publishes 100 messages
    auto producerIO = engine.getModuleIO("ProducerModule");
    for (int i = 0; i < 100; i++) {
        auto data = std::make_unique<JsonDataNode>(nlohmann::json{
            {"id", i},
            {"payload", "test_message_" + std::to_string(i)}
        });
        producerIO->publish("test:basic", std::move(data));
    }

    // Process to allow routing
    engine.update(1.0f/60.0f);

    // Verify reception
    int receivedCount = 0;
    while (consumerIO->hasMessages() > 0) {
        auto msg = consumerIO->pullMessage();
        receivedCount++;

        // Verify FIFO order
        auto* jsonData = dynamic_cast<JsonDataNode*>(msg.data.get());
        int msgId = jsonData->getJsonData()["id"];
        ASSERT_EQ(msgId, receivedCount - 1, "Messages should be in FIFO order");
    }

    ASSERT_EQ(receivedCount, 100, "Should receive all 100 messages");
    reporter.addAssertion("basic_pubsub", receivedCount == 100);
    std::cout << "✓ TEST 1 PASSED: " << receivedCount << " messages received\n";
|
    // ========================================================================
    // TEST 2: Pattern Matching
    // ========================================================================
    std::cout << "\n=== TEST 2: Pattern Matching ===\n";

    // Subscribe to different patterns
    consumerIO->subscribe("player:*", {});
    consumerIO->subscribe("player:001:*", {});
    consumerIO->subscribe("*:position", {});

    // Publish test messages
    std::vector<std::string> testTopics = {
        "player:001:position",  // Matches all 3 patterns
        "player:001:health",    // Matches patterns 1 and 2
        "player:002:position",  // Matches patterns 1 and 3
        "enemy:001:position"    // Matches pattern 3 only
    };

    for (const auto& topic : testTopics) {
        auto data = std::make_unique<JsonDataNode>(nlohmann::json{{"topic", topic}});
        producerIO->publish(topic, std::move(data));
    }

    engine.update(1.0f/60.0f);

    // Count messages by topic
    std::map<std::string, int> patternCounts;
    while (consumerIO->hasMessages() > 0) {
        auto msg = consumerIO->pullMessage();
        auto* jsonData = dynamic_cast<JsonDataNode*>(msg.data.get());
        std::string topic = jsonData->getJsonData()["topic"];
        patternCounts[topic]++;
    }

    // Note: due to pattern overlap, the same message may be received multiple times
    std::cout << "Pattern matching results:\n";
    for (const auto& [topic, count] : patternCounts) {
        std::cout << "  " << topic << ": " << count << " times\n";
    }

    reporter.addAssertion("pattern_matching", true);
    std::cout << "✓ TEST 2 PASSED\n";
|
    // ========================================================================
    // TEST 3: Multi-Module Routing (Bug Detection)
    // ========================================================================
    std::cout << "\n=== TEST 3: Multi-Module Routing (1-to-many) ===\n";

    // All modules subscribe to "broadcast:*"
    consumerIO->subscribe("broadcast:*", {});
    auto broadcastIO = engine.getModuleIO("BroadcastModule");
    broadcastIO->subscribe("broadcast:*", {});
    auto batchIO = engine.getModuleIO("BatchModule");
    batchIO->subscribe("broadcast:*", {});
    auto stressIO = engine.getModuleIO("StressModule");
    stressIO->subscribe("broadcast:*", {});

    // Publish 10 broadcast messages
    for (int i = 0; i < 10; i++) {
        auto data = std::make_unique<JsonDataNode>(nlohmann::json{{"broadcast_id", i}});
        producerIO->publish("broadcast:data", std::move(data));
    }

    engine.update(1.0f/60.0f);

    // Check which modules received messages
    int consumerReceived = consumerIO->hasMessages();
    int broadcastReceived = broadcastIO->hasMessages();
    int batchReceived = batchIO->hasMessages();
    int stressReceived = stressIO->hasMessages();

    std::cout << "Broadcast distribution:\n";
    std::cout << "  ConsumerModule: " << consumerReceived << " messages\n";
    std::cout << "  BroadcastModule: " << broadcastReceived << " messages\n";
    std::cout << "  BatchModule: " << batchReceived << " messages\n";
    std::cout << "  StressModule: " << stressReceived << " messages\n";

    // Expected: only ONE module receives due to the std::move limitation
    int totalReceived = consumerReceived + broadcastReceived + batchReceived + stressReceived;

    if (totalReceived == 10) {
        std::cout << "⚠️ BUG: Only one module received all messages (clone() not implemented)\n";
        reporter.addMetric("broadcast_bug_present", 1.0f);
    } else if (totalReceived == 40) {
        std::cout << "✓ FIXED: All modules received copies (clone() implemented!)\n";
        reporter.addMetric("broadcast_bug_present", 0.0f);
    }

    reporter.addAssertion("multi_module_routing_tested", true);
    std::cout << "✓ TEST 3 COMPLETED (bug documented)\n";
|
    // ========================================================================
    // TEST 4: Message Batching
    // ========================================================================
    std::cout << "\n=== TEST 4: Message Batching (Low-Frequency) ===\n";

    // Configure low-frequency subscription
    IIO::SubscriptionConfig batchConfig;
    batchConfig.replaceable = true;
    batchConfig.batchInterval = 1000;  // 1 second
    batchIO->subscribeLowFreq("batch:*", batchConfig);

    // Publish at 100 Hz for 3 seconds (300 messages)
    auto batchStart = std::chrono::high_resolution_clock::now();
    int batchedPublished = 0;

    for (int sec = 0; sec < 3; sec++) {
        for (int i = 0; i < 100; i++) {
            auto data = std::make_unique<JsonDataNode>(nlohmann::json{
                {"timestamp", batchedPublished},
                {"value", batchedPublished * 0.1f}
            });
            producerIO->publish("batch:metric", std::move(data));
            batchedPublished++;

            // Simulate a 10ms interval
            std::this_thread::sleep_for(std::chrono::milliseconds(10));
            engine.update(1.0f/60.0f);
        }
    }

    auto batchEnd = std::chrono::high_resolution_clock::now();
    float batchDuration = std::chrono::duration<float>(batchEnd - batchStart).count();

    // Check how many batched messages were received
    int batchesReceived = 0;
    while (batchIO->hasMessages() > 0) {
        auto msg = batchIO->pullMessage();
        batchesReceived++;
    }

    std::cout << "Published: " << batchedPublished << " messages over " << batchDuration << "s\n";
    std::cout << "Received: " << batchesReceived << " batches\n";
    std::cout << "Expected: ~" << static_cast<int>(batchDuration) << " batches (1/second)\n";

    // Should receive ~3 batches (1 per second)
    ASSERT_TRUE(batchesReceived >= 2 && batchesReceived <= 4,
                "Should receive 2-4 batches for 3 seconds");
    reporter.addMetric("batch_count", batchesReceived);
    reporter.addAssertion("batching_works", batchesReceived >= 2);
    std::cout << "✓ TEST 4 PASSED\n";
|
    // ========================================================================
    // TEST 5: Backpressure & Queue Overflow
    // ========================================================================
    std::cout << "\n=== TEST 5: Backpressure & Queue Overflow ===\n";

    // Subscribe but don't pull
    consumerIO->subscribe("stress:flood", {});

    // Flood with 50k messages
    std::cout << "Publishing 50000 messages...\n";
    for (int i = 0; i < 50000; i++) {
        auto data = std::make_unique<JsonDataNode>(nlohmann::json{{"flood_id", i}});
        producerIO->publish("stress:flood", std::move(data));

        if (i % 10000 == 0) {
            std::cout << "  " << i << " messages published\n";
        }
    }

    engine.update(1.0f/60.0f);

    // Check health
    auto health = consumerIO->getHealth();
    std::cout << "Health status:\n";
    std::cout << "  Queue size: " << health.queueSize << " / " << health.maxQueueSize << "\n";
    std::cout << "  Dropping: " << (health.dropping ? "YES" : "NO") << "\n";
    std::cout << "  Dropped count: " << health.droppedMessageCount << "\n";
    std::cout << "  Processing rate: " << health.averageProcessingRate << " msg/s\n";

    ASSERT_TRUE(health.queueSize > 0, "Queue should have messages");

    // Queue overflow most likely happened
    if (health.dropping || health.droppedMessageCount > 0) {
        std::cout << "✓ Backpressure detected correctly\n";
        reporter.addAssertion("backpressure_detected", true);
    }

    reporter.addMetric("queue_size", health.queueSize);
    reporter.addMetric("dropped_messages", health.droppedMessageCount);
    std::cout << "✓ TEST 5 PASSED\n";
|
    // ========================================================================
    // TEST 6: Thread Safety
    // ========================================================================
    std::cout << "\n=== TEST 6: Thread Safety (Concurrent Pub/Pull) ===\n";

    std::atomic<int> publishedTotal{0};
    std::atomic<int> receivedTotal{0};
    std::atomic<bool> running{true};

    consumerIO->subscribe("thread:*", {});

    // 10 publisher threads
    std::vector<std::thread> publishers;
    for (int t = 0; t < 10; t++) {
        publishers.emplace_back([&, t]() {
            for (int i = 0; i < 1000; i++) {
                auto data = std::make_unique<JsonDataNode>(nlohmann::json{
                    {"thread", t},
                    {"id", i}
                });
                producerIO->publish("thread:test", std::move(data));
                publishedTotal++;
            }
        });
    }

    // 5 consumer threads
    std::vector<std::thread> consumers;
    for (int t = 0; t < 5; t++) {
        consumers.emplace_back([&]() {
            while (running || consumerIO->hasMessages() > 0) {
                if (consumerIO->hasMessages() > 0) {
                    try {
                        auto msg = consumerIO->pullMessage();
                        receivedTotal++;
                    } catch (...) {
                        std::cerr << "ERROR: Exception during pull\n";
                    }
                }
                std::this_thread::sleep_for(std::chrono::microseconds(100));
            }
        });
    }

    // Wait for publishers
    for (auto& t : publishers) {
        t.join();
    }

    std::cout << "All publishers done: " << publishedTotal << " messages\n";

    // Let consumers finish
    std::this_thread::sleep_for(std::chrono::milliseconds(500));
    running = false;

    for (auto& t : consumers) {
        t.join();
    }

    std::cout << "All consumers done: " << receivedTotal << " messages\n";

    // Drops may occur, but the system should stay stable
    ASSERT_GT(receivedTotal, 0, "Should receive at least some messages");
    reporter.addMetric("concurrent_published", publishedTotal);
    reporter.addMetric("concurrent_received", receivedTotal);
    reporter.addAssertion("thread_safety", true);  // No crash = success
    std::cout << "✓ TEST 6 PASSED (no crashes)\n";
|
    // ========================================================================
    // TEST 7: Health Monitoring Accuracy
    // ========================================================================
    std::cout << "\n=== TEST 7: Health Monitoring Accuracy ===\n";

    consumerIO->subscribe("health:*", {});

    // Phase 1: Normal load (100 msg/s)
    std::cout << "Phase 1: Normal load (100 msg/s for 2s)\n";
    for (int i = 0; i < 200; i++) {
        auto data = std::make_unique<JsonDataNode>(nlohmann::json{{"phase", 1}});
        producerIO->publish("health:test", std::move(data));
        std::this_thread::sleep_for(std::chrono::milliseconds(10));

        // Pull to keep the queue low
        if (consumerIO->hasMessages() > 0) {
            consumerIO->pullMessage();
        }
    }

    auto healthPhase1 = consumerIO->getHealth();
    std::cout << "  Queue: " << healthPhase1.queueSize << ", Dropping: " << healthPhase1.dropping << "\n";

    // Phase 2: Overload (10k msg/s without pulling)
    std::cout << "Phase 2: Overload (10000 msg/s for 1s)\n";
    for (int i = 0; i < 10000; i++) {
        auto data = std::make_unique<JsonDataNode>(nlohmann::json{{"phase", 2}});
        producerIO->publish("health:test", std::move(data));
    }
    engine.update(1.0f/60.0f);

    auto healthPhase2 = consumerIO->getHealth();
    std::cout << "  Queue: " << healthPhase2.queueSize << ", Dropping: " << healthPhase2.dropping << "\n";

    ASSERT_GT(healthPhase2.queueSize, healthPhase1.queueSize,
              "Queue should grow during overload");

    // Phase 3: Recovery (pull everything)
    std::cout << "Phase 3: Recovery (pulling all messages)\n";
    int pulled = 0;
    while (consumerIO->hasMessages() > 0) {
        consumerIO->pullMessage();
        pulled++;
    }

    auto healthPhase3 = consumerIO->getHealth();
    std::cout << "  Pulled: " << pulled << " messages\n";
    std::cout << "  Queue: " << healthPhase3.queueSize << ", Dropping: " << healthPhase3.dropping << "\n";

    ASSERT_EQ(healthPhase3.queueSize, 0, "Queue should be empty after pulling all");
    reporter.addAssertion("health_monitoring", true);
    std::cout << "✓ TEST 7 PASSED\n";
|
    // ========================================================================
    // TEST 8: Subscription Lifecycle
    // ========================================================================
    std::cout << "\n=== TEST 8: Subscription Lifecycle ===\n";

    // Subscribe
    consumerIO->subscribe("lifecycle:test", {});

    // Publish 10 messages
    for (int i = 0; i < 10; i++) {
        auto data = std::make_unique<JsonDataNode>(nlohmann::json{{"id", i}});
        producerIO->publish("lifecycle:test", std::move(data));
    }
    engine.update(1.0f/60.0f);

    int count1 = 0;
    while (consumerIO->hasMessages() > 0) {
        consumerIO->pullMessage();
        count1++;
    }
    ASSERT_EQ(count1, 10, "Should receive 10 messages");

    // Unsubscribe (if the API exists - it might not be implemented yet)
    // consumerIO->unsubscribe("lifecycle:test");

    // Publish 10 more
    for (int i = 10; i < 20; i++) {
        auto data = std::make_unique<JsonDataNode>(nlohmann::json{{"id", i}});
        producerIO->publish("lifecycle:test", std::move(data));
    }
    engine.update(1.0f/60.0f);

    // If unsubscribe exists, we should receive 0; if not, we will receive 10.
    int count2 = 0;
    while (consumerIO->hasMessages() > 0) {
        consumerIO->pullMessage();
        count2++;
    }

    std::cout << "After unsubscribe: " << count2 << " messages (0 if unsubscribe works)\n";

    // Re-subscribe
    consumerIO->subscribe("lifecycle:test", {});

    // Publish 10 more
    for (int i = 20; i < 30; i++) {
        auto data = std::make_unique<JsonDataNode>(nlohmann::json{{"id", i}});
        producerIO->publish("lifecycle:test", std::move(data));
    }
    engine.update(1.0f/60.0f);

    int count3 = 0;
    while (consumerIO->hasMessages() > 0) {
        consumerIO->pullMessage();
        count3++;
    }
    ASSERT_EQ(count3, 10, "Should receive 10 messages after re-subscribe");

    reporter.addAssertion("subscription_lifecycle", true);
    std::cout << "✓ TEST 8 PASSED\n";

    // ========================================================================
    // FINAL REPORT
    // ========================================================================

    metrics.printReport();
    reporter.printFinalReport();

    return reporter.getExitCode();
}
```
|
---

## 📊 Collected Metrics

| Metric | Description | Threshold |
|--------|-------------|-----------|
| **basic_pubsub** | Messages received in the basic test | 100/100 |
| **pattern_matching** | Pattern matching works | true |
| **broadcast_bug_present** | 1-to-1 bug detected (1.0) or fixed (0.0) | Documentation |
| **batch_count** | Number of batches received | 2-4 |
| **queue_size** | Queue size during flood | > 0 |
| **dropped_messages** | Dropped messages detected | >= 0 |
| **concurrent_published** | Messages published concurrently | 10000 |
| **concurrent_received** | Messages received concurrently | > 0 |
| **health_monitoring** | Health metrics are accurate | true |
| **subscription_lifecycle** | Subscribe/unsubscribe works | true |

---
|
## ✅ Success Criteria

### MUST PASS
1. ✅ Basic pub/sub: 100/100 messages in FIFO order
2. ✅ Pattern matching works (wildcards)
3. ✅ Batching reduces frequency (100 msg/s → ~1 msg/s)
4. ✅ Backpressure detected (dropping flag or dropped count)
5. ✅ Thread safety: no crashes under concurrency
6. ✅ Health monitoring reflects the real state
7. ✅ Re-subscribe works

### KNOWN BUGS (Documentation)
1. ⚠️ Multi-module routing: only the first subscriber receives (no clone())
2. ⚠️ The unsubscribe API may not exist

### NICE TO HAVE
1. ✅ Fix the clone() bug for 1-to-many routing
2. ✅ Unsubscribe API implemented
3. ✅ Compression for batching

---
|
## 🐛 Expected Error Cases

| Error | Cause | Action |
|-------|-------|--------|
| Messages lost | Routing bug | WARN - document |
| Pattern doesn't match | Incorrect regex | FAIL - fix pattern |
| No batching | Config ignored | FAIL - check SubscriptionConfig |
| No backpressure | Health not updated | FAIL - fix IOHealth |
| Concurrent crash | Race condition | FAIL - add mutex |
| Incorrect queue size | Buggy counter | FAIL - fix queueSize tracking |

---
|
## 📝 Expected Output

```
================================================================================
TEST: IO System Stress Test
================================================================================

=== TEST 1: Basic Publish-Subscribe ===
✓ TEST 1 PASSED: 100 messages received

=== TEST 2: Pattern Matching ===
Pattern matching results:
  player:001:position: 3 times
  player:001:health: 2 times
  player:002:position: 2 times
  enemy:001:position: 1 times
✓ TEST 2 PASSED

=== TEST 3: Multi-Module Routing (1-to-many) ===
Broadcast distribution:
  ConsumerModule: 10 messages
  BroadcastModule: 0 messages
  BatchModule: 0 messages
  StressModule: 0 messages
⚠️ BUG: Only one module received all messages (clone() not implemented)
✓ TEST 3 COMPLETED (bug documented)

=== TEST 4: Message Batching (Low-Frequency) ===
Published: 300 messages over 3.02s
Received: 3 batches
Expected: ~3 batches (1/second)
✓ TEST 4 PASSED

=== TEST 5: Backpressure & Queue Overflow ===
Publishing 50000 messages...
  0 messages published
  10000 messages published
  20000 messages published
  30000 messages published
  40000 messages published
Health status:
  Queue size: 10000 / 10000
  Dropping: YES
  Dropped count: 40000
  Processing rate: 0.0 msg/s
✓ Backpressure detected correctly
✓ TEST 5 PASSED

=== TEST 6: Thread Safety (Concurrent Pub/Pull) ===
All publishers done: 10000 messages
All consumers done: 9847 messages
✓ TEST 6 PASSED (no crashes)

=== TEST 7: Health Monitoring Accuracy ===
Phase 1: Normal load (100 msg/s for 2s)
  Queue: 2, Dropping: NO
Phase 2: Overload (10000 msg/s for 1s)
  Queue: 9998, Dropping: YES
Phase 3: Recovery (pulling all messages)
  Pulled: 9998 messages
  Queue: 0, Dropping: NO
✓ TEST 7 PASSED

=== TEST 8: Subscription Lifecycle ===
After unsubscribe: 10 messages (0 if unsubscribe works)
✓ TEST 8 PASSED

================================================================================
METRICS
================================================================================
Basic pub/sub: 100/100
Batch count: 3
Queue size: 10000
Dropped messages: 40000
Concurrent published: 10000
Concurrent received: 9847
Broadcast bug present: 1.0 (not fixed yet)

================================================================================
ASSERTIONS
================================================================================
✓ basic_pubsub
✓ pattern_matching
✓ multi_module_routing_tested
✓ batching_works
✓ backpressure_detected
✓ thread_safety
✓ health_monitoring
✓ subscription_lifecycle

Result: ✅ PASSED (8/8 tests)

================================================================================
```

---
|
## 📅 Planning

**Day 1 (3h):**
- Implement ProducerModule, ConsumerModule, BroadcastModule
- Implement BatchModule, StressModule
- Set up IOFactory for the tests

**Day 2 (3h):**
- Implement test_11_io_system.cpp
- Tests 1-4 (pub/sub, patterns, routing, batching)

**Day 3 (2h):**
- Tests 5-8 (backpressure, threads, health, lifecycle)
- Debug + validation

---

**Next step**: `scenario_12_datanode.md`
|
914	planTI/scenario_12_datanode.md	Normal file
|
# Scenario 12: DataNode Integration Test

**Priority**: ⭐⭐ SHOULD HAVE
**Phase**: 2 (SHOULD HAVE)
**Estimated duration**: ~4 minutes
**Implementation effort**: ~5-7 hours

---

## 🎯 Objective

Validate that the DataNode system (IDataTree/JsonDataTree) works correctly for all use cases:
- Tree navigation (exact match, pattern matching, queries)
- Hot-reload system (file watch, callbacks, isolation)
- Persistence (save/load, data integrity)
- Hash system (data hash, tree hash, change detection)
- Read-only enforcement (config/ vs data/ vs runtime/)
- Type safety and defaults
- Performance with large trees (1000+ nodes)

**Note**: DataNode is the engine's central configuration and persistence system.

---
|
||||||
|
|
||||||
|
## 📋 Description
|
||||||
|
|
||||||
|
### Setup Initial
|
||||||
|
1. Créer un IDataTree avec structure complète:
|
||||||
|
- **config/** - Configuration read-only avec 500 nodes
|
||||||
|
- **data/** - Persistence read-write avec 300 nodes
|
||||||
|
- **runtime/** - State temporaire avec 200 nodes
|
||||||
|
2. Total: ~1000 nodes dans l'arbre
|
||||||
|
3. Fichiers JSON sur disque pour config/ et data/
|
||||||
|
|
||||||
|
### Test Séquence
|
||||||
|
|
||||||
|
#### Test 1: Tree Navigation & Exact Matching (30s)
1. Create the hierarchy: `config/units/tanks/heavy_mk1`
2. Test navigation:
   - `getChild("units")` → `getChild("tanks")` → `getChild("heavy_mk1")`
   - `getChildrenByName("heavy_mk1")` - direct children only
   - `getPath()` - verify the full path
3. Verify:
   - Nodes are found correctly
   - Correct path: "config/units/tanks/heavy_mk1"
   - `getChild` returns nullptr when the child is not found

#### Test 2: Pattern Matching (Wildcards) (30s)
1. Create nodes:
   - `config/units/tanks/heavy_mk1`
   - `config/units/tanks/heavy_mk2`
   - `config/units/infantry/heavy_trooper`
   - `config/units/aircraft/light_fighter`
2. Test patterns:
   - `getChildrenByNameMatch("*heavy*")` → 3 matches
   - `getChildrenByNameMatch("tanks/*")` → 2 matches
   - `getChildrenByNameMatch("*_mk*")` → 2 matches
3. Verify all matches are correct

#### Test 3: Property-Based Queries (30s)
1. Create nodes with properties:
   - `heavy_mk1`: armor=150, speed=30, cost=1000
   - `heavy_mk2`: armor=180, speed=25, cost=1200
   - `light_fighter`: armor=50, speed=120, cost=800
2. Query predicates:
   - `queryByProperty("armor", val > 100)` → 2 units
   - `queryByProperty("speed", val > 50)` → 1 unit
   - `queryByProperty("cost", val <= 1000)` → 2 units
3. Verify the query results

#### Test 4: Hot-Reload System (60s)
1. Create `config/gameplay.json` on disk
2. Load it into the tree with an `onTreeReloaded` callback
3. Modify the file on disk (change a value)
4. Call `checkForChanges()` → should detect the change
5. Call `reloadIfChanged()` → the callback is triggered
6. Verify:
   - The callback is called exactly once
   - The new values are loaded
   - The old nodes are replaced

#### Test 5: Hot-Reload Isolation (30s)
1. Load 2 files: `config/units.json` and `config/maps.json`
2. Modify only `units.json`
3. Verify:
   - `checkForChanges()` detects only units.json
   - The reload does not touch maps.json
   - The callback receives information about which file changed

#### Test 6: Persistence (Save/Load) (60s)
1. Create the data/ structure:
   - `data/player/stats` - {kills: 42, deaths: 3}
   - `data/player/inventory` - {gold: 1000, items: [...]}
   - `data/world/time` - {day: 5, hour: 14}
2. Call `saveData()` → writes to disk
3. Verify the files were created:
   - `data/player/stats.json`
   - `data/player/inventory.json`
   - `data/world/time.json`
4. Load into a new tree
5. Verify the data is identical (deep comparison)

#### Test 7: Selective Save (30s)
1. Modify only `data/player/stats`
2. Call `saveNode("data/player/stats")`
3. Verify:
   - Only stats.json is written
   - The other files are untouched (identical mtime)

#### Test 8: Hash System (Data Hash) (30s)
1. Create a node with data: `{value: 42}`
2. Compute `getDataHash()`
3. Modify the data: `{value: 43}`
4. Recompute the hash
5. Verify the hashes differ

#### Test 9: Hash System (Tree Hash) (30s)
1. Create the tree:
```
root
├─ child1 {data: 1}
└─ child2 {data: 2}
```
2. Compute `getTreeHash()`
3. Modify child1's data
4. Recompute the tree hash
5. Verify the hashes differ (propagation)

#### Test 10: Read-Only Enforcement (30s)
1. Attempt `setChild()` on a config/ node
2. It should throw an exception
3. Verify:
   - The exception is raised
   - The message is descriptive
   - config/ is unmodified

#### Test 11: Type Safety & Defaults (20s)
1. Create a node: `{armor: 150, name: "Tank"}`
2. Test access:
   - `getInt("armor")` → 150
   - `getInt("missing", 100)` → 100 (default)
   - `getString("name")` → "Tank"
   - `getBool("active", true)` → true (default)
   - `getDouble("speed")` → throws or returns a default

#### Test 12: Deep Tree Performance (30s)
1. Create a tree with 1000 nodes:
   - 10 categories
   - 10 subcategories each
   - 10 items each
2. Measure timings:
   - Pattern matching "*" (all nodes): < 100ms
   - Query by property: < 50ms
   - Tree hash calculation: < 200ms
3. Verify the performance is acceptable

---

## 🏗️ Implementation

### Test Module Structure

```cpp
// DataNodeTestModule.h
class DataNodeTestModule : public IModule {
public:
    void initialize(std::shared_ptr<IDataNode> config) override;
    void process(float deltaTime) override;
    std::shared_ptr<IDataNode> getState() const override;
    void setState(std::shared_ptr<IDataNode> state) override;
    bool isIdle() const override { return true; }

    // Test helpers
    void createTestTree();
    void testNavigation();
    void testPatternMatching();
    void testQueries();
    void testHotReload();
    void testPersistence();
    void testHashes();
    void testReadOnly();
    void testTypeAccess();
    void testPerformance();

private:
    std::shared_ptr<IDataTree> tree;
    int reloadCallbackCount = 0;
};
```

### Main Test

```cpp
// test_12_datanode.cpp
#include "helpers/TestMetrics.h"
#include "helpers/TestAssertions.h"
#include "helpers/TestReporter.h"

#include <nlohmann/json.hpp>

#include <chrono>
#include <filesystem>
#include <fstream>
#include <iostream>
#include <memory>
#include <string>
#include <thread>

int main() {
    TestReporter reporter("DataNode Integration Test");
    TestMetrics metrics;

    // === SETUP ===
    std::filesystem::create_directories("test_data/config");
    std::filesystem::create_directories("test_data/data");

    // Create the IDataTree
    auto tree = std::make_shared<JsonDataTree>("test_data");

    // ========================================================================
    // TEST 1: Tree Navigation & Exact Matching
    // ========================================================================
    std::cout << "\n=== TEST 1: Tree Navigation & Exact Matching ===\n";

    // Build the hierarchy
    auto configRoot = tree->getConfigRoot();
    auto units = std::make_shared<JsonDataNode>("units", nlohmann::json::object());
    auto tanks = std::make_shared<JsonDataNode>("tanks", nlohmann::json::object());
    auto heavyMk1 = std::make_shared<JsonDataNode>("heavy_mk1", nlohmann::json{
        {"armor", 150},
        {"speed", 30},
        {"cost", 1000}
    });

    tanks->setChild("heavy_mk1", heavyMk1);
    units->setChild("tanks", tanks);
    configRoot->setChild("units", units);

    // Navigation
    auto foundUnits = configRoot->getChild("units");
    ASSERT_TRUE(foundUnits != nullptr, "Should find units node");

    auto foundTanks = foundUnits->getChild("tanks");
    ASSERT_TRUE(foundTanks != nullptr, "Should find tanks node");

    auto foundHeavy = foundTanks->getChild("heavy_mk1");
    ASSERT_TRUE(foundHeavy != nullptr, "Should find heavy_mk1 node");

    // Path
    std::string path = foundHeavy->getPath();
    std::cout << "Path: " << path << "\n";
    ASSERT_TRUE(path.find("heavy_mk1") != std::string::npos, "Path should contain node name");

    // Not found
    auto notFound = foundTanks->getChild("does_not_exist");
    ASSERT_TRUE(notFound == nullptr, "Should return nullptr for missing child");

    reporter.addAssertion("navigation_exact", true);
    std::cout << "✓ TEST 1 PASSED\n";

    // ========================================================================
    // TEST 2: Pattern Matching (Wildcards)
    // ========================================================================
    std::cout << "\n=== TEST 2: Pattern Matching ===\n";

    // Add more nodes
    auto heavyMk2 = std::make_shared<JsonDataNode>("heavy_mk2", nlohmann::json{
        {"armor", 180},
        {"speed", 25},
        {"cost", 1200}
    });
    tanks->setChild("heavy_mk2", heavyMk2);

    auto infantry = std::make_shared<JsonDataNode>("infantry", nlohmann::json::object());
    auto heavyTrooper = std::make_shared<JsonDataNode>("heavy_trooper", nlohmann::json{
        {"armor", 120},
        {"speed", 15},
        {"cost", 500}
    });
    infantry->setChild("heavy_trooper", heavyTrooper);
    units->setChild("infantry", infantry);

    auto aircraft = std::make_shared<JsonDataNode>("aircraft", nlohmann::json::object());
    auto lightFighter = std::make_shared<JsonDataNode>("light_fighter", nlohmann::json{
        {"armor", 50},
        {"speed", 120},
        {"cost", 800}
    });
    aircraft->setChild("light_fighter", lightFighter);
    units->setChild("aircraft", aircraft);

    // Pattern: *heavy*
    auto heavyUnits = configRoot->getChildrenByNameMatch("*heavy*");
    std::cout << "Pattern '*heavy*' matched: " << heavyUnits.size() << " nodes\n";
    for (const auto& node : heavyUnits) {
        std::cout << "  - " << node->getName() << "\n";
    }
    // Should match: heavy_mk1, heavy_mk2, heavy_trooper
    ASSERT_EQ(heavyUnits.size(), 3, "Should match 3 'heavy' units");
    reporter.addMetric("pattern_heavy_count", heavyUnits.size());

    // Pattern: *_mk*
    auto mkUnits = configRoot->getChildrenByNameMatch("*_mk*");
    std::cout << "Pattern '*_mk*' matched: " << mkUnits.size() << " nodes\n";
    // Should match: heavy_mk1, heavy_mk2
    ASSERT_EQ(mkUnits.size(), 2, "Should match 2 '_mk' units");

    reporter.addAssertion("pattern_matching", true);
    std::cout << "✓ TEST 2 PASSED\n";

    // ========================================================================
    // TEST 3: Property-Based Queries
    // ========================================================================
    std::cout << "\n=== TEST 3: Property-Based Queries ===\n";

    // Query: armor >= 100
    auto armoredUnits = configRoot->queryByProperty("armor",
        [](const IDataValue& val) {
            return val.isNumber() && val.asInt() >= 100;
        });

    std::cout << "Units with armor >= 100: " << armoredUnits.size() << "\n";
    for (const auto& node : armoredUnits) {
        int armor = node->getInt("armor");
        std::cout << "  - " << node->getName() << " (armor=" << armor << ")\n";
        ASSERT_GE(armor, 100, "Armor should be >= 100");
    }
    // Should match: heavy_mk1 (150), heavy_mk2 (180), heavy_trooper (120)
    ASSERT_EQ(armoredUnits.size(), 3, "Should find 3 armored units");

    // Query: speed > 50
    auto fastUnits = configRoot->queryByProperty("speed",
        [](const IDataValue& val) {
            return val.isNumber() && val.asInt() > 50;
        });

    std::cout << "Units with speed > 50: " << fastUnits.size() << "\n";
    // Should match: light_fighter (120)
    ASSERT_EQ(fastUnits.size(), 1, "Should find 1 fast unit");

    reporter.addAssertion("property_queries", true);
    std::cout << "✓ TEST 3 PASSED\n";

    // ========================================================================
    // TEST 4: Hot-Reload System
    // ========================================================================
    std::cout << "\n=== TEST 4: Hot-Reload System ===\n";

    // Create the config file
    nlohmann::json gameplayConfig = {
        {"difficulty", "normal"},
        {"maxPlayers", 4},
        {"timeLimit", 3600}
    };

    std::ofstream configFile("test_data/config/gameplay.json");
    configFile << gameplayConfig.dump(2);
    configFile.close();

    // Load it into the tree
    tree->loadConfigFile("gameplay.json");

    // Set up the callback
    int callbackCount = 0;
    tree->onTreeReloaded([&callbackCount]() {
        callbackCount++;
        std::cout << "  → Reload callback triggered (count=" << callbackCount << ")\n";
    });

    // Verify the initial content
    auto gameplay = configRoot->getChild("gameplay");
    ASSERT_TRUE(gameplay != nullptr, "gameplay node should exist");
    std::string difficulty = gameplay->getString("difficulty");
    ASSERT_EQ(difficulty, "normal", "Initial difficulty should be 'normal'");

    std::cout << "Initial difficulty: " << difficulty << "\n";

    // Modify the file
    std::this_thread::sleep_for(std::chrono::milliseconds(100));
    gameplayConfig["difficulty"] = "hard";
    gameplayConfig["maxPlayers"] = 8;

    std::ofstream configFile2("test_data/config/gameplay.json");
    configFile2 << gameplayConfig.dump(2);
    configFile2.close();

    // Force file timestamp update
    std::this_thread::sleep_for(std::chrono::milliseconds(100));

    // Check for changes
    bool hasChanges = tree->checkForChanges();
    std::cout << "Has changes: " << (hasChanges ? "YES" : "NO") << "\n";
    ASSERT_TRUE(hasChanges, "Should detect file modification");

    // Reload
    bool reloaded = tree->reloadIfChanged();
    std::cout << "Reloaded: " << (reloaded ? "YES" : "NO") << "\n";
    ASSERT_TRUE(reloaded, "Should reload changed file");

    // Verify the callback
    ASSERT_EQ(callbackCount, 1, "Callback should be called exactly once");

    // Verify the new values
    gameplay = configRoot->getChild("gameplay");
    difficulty = gameplay->getString("difficulty");
    int maxPlayers = gameplay->getInt("maxPlayers");

    std::cout << "After reload - difficulty: " << difficulty << ", maxPlayers: " << maxPlayers << "\n";
    ASSERT_EQ(difficulty, "hard", "Difficulty should be updated to 'hard'");
    ASSERT_EQ(maxPlayers, 8, "maxPlayers should be updated to 8");

    reporter.addAssertion("hot_reload", true);
    reporter.addMetric("reload_callback_count", callbackCount);
    std::cout << "✓ TEST 4 PASSED\n";

    // ========================================================================
    // TEST 5: Hot-Reload Isolation
    // ========================================================================
    std::cout << "\n=== TEST 5: Hot-Reload Isolation ===\n";

    // Create a second file
    nlohmann::json mapsConfig = {
        {"defaultMap", "desert"},
        {"mapCount", 10}
    };

    std::ofstream mapsFile("test_data/config/maps.json");
    mapsFile << mapsConfig.dump(2);
    mapsFile.close();

    tree->loadConfigFile("maps.json");

    // Modify only gameplay.json
    std::this_thread::sleep_for(std::chrono::milliseconds(100));
    gameplayConfig["difficulty"] = "extreme";

    std::ofstream configFile3("test_data/config/gameplay.json");
    configFile3 << gameplayConfig.dump(2);
    configFile3.close();

    std::this_thread::sleep_for(std::chrono::milliseconds(100));

    // Check changes
    hasChanges = tree->checkForChanges();
    ASSERT_TRUE(hasChanges, "Should detect gameplay.json change");

    // Verify maps.json not affected
    auto maps = configRoot->getChild("maps");
    std::string defaultMap = maps->getString("defaultMap");
    ASSERT_EQ(defaultMap, "desert", "maps.json should not be affected");

    reloaded = tree->reloadIfChanged();
    ASSERT_TRUE(reloaded, "Should reload only changed file");

    // Verify maps still intact
    maps = configRoot->getChild("maps");
    defaultMap = maps->getString("defaultMap");
    ASSERT_EQ(defaultMap, "desert", "maps.json should still be 'desert' after isolated reload");

    reporter.addAssertion("reload_isolation", true);
    std::cout << "✓ TEST 5 PASSED\n";

    // ========================================================================
    // TEST 6: Persistence (Save/Load)
    // ========================================================================
    std::cout << "\n=== TEST 6: Persistence (Save/Load) ===\n";

    auto dataRoot = tree->getDataRoot();

    // Create the data/ structure
    auto player = std::make_shared<JsonDataNode>("player", nlohmann::json::object());
    auto stats = std::make_shared<JsonDataNode>("stats", nlohmann::json{
        {"kills", 42},
        {"deaths", 3},
        {"level", 15}
    });
    auto inventory = std::make_shared<JsonDataNode>("inventory", nlohmann::json{
        {"gold", 1000},
        {"items", nlohmann::json::array({"sword", "shield", "potion"})}
    });

    player->setChild("stats", stats);
    player->setChild("inventory", inventory);
    dataRoot->setChild("player", player);

    auto world = std::make_shared<JsonDataNode>("world", nlohmann::json::object());
    auto time = std::make_shared<JsonDataNode>("time", nlohmann::json{
        {"day", 5},
        {"hour", 14},
        {"minute", 30}
    });
    world->setChild("time", time);
    dataRoot->setChild("world", world);

    // Save all data
    tree->saveData();

    // Verify the files were created
    ASSERT_TRUE(std::filesystem::exists("test_data/data/player/stats.json"),
                "stats.json should exist");
    ASSERT_TRUE(std::filesystem::exists("test_data/data/player/inventory.json"),
                "inventory.json should exist");
    ASSERT_TRUE(std::filesystem::exists("test_data/data/world/time.json"),
                "time.json should exist");

    std::cout << "Files saved successfully\n";

    // Create a new tree and load
    auto tree2 = std::make_shared<JsonDataTree>("test_data");
    tree2->loadDataDirectory();

    auto dataRoot2 = tree2->getDataRoot();
    auto player2 = dataRoot2->getChild("player");
    ASSERT_TRUE(player2 != nullptr, "player node should load");

    auto stats2 = player2->getChild("stats");
    int kills = stats2->getInt("kills");
    int deaths = stats2->getInt("deaths");

    std::cout << "Loaded: kills=" << kills << ", deaths=" << deaths << "\n";
    ASSERT_EQ(kills, 42, "kills should be preserved");
    ASSERT_EQ(deaths, 3, "deaths should be preserved");

    reporter.addAssertion("persistence", true);
    std::cout << "✓ TEST 6 PASSED\n";

    // ========================================================================
    // TEST 7: Selective Save
    // ========================================================================
    std::cout << "\n=== TEST 7: Selective Save ===\n";

    // Get mtime of inventory.json before
    auto inventoryPath = std::filesystem::path("test_data/data/player/inventory.json");
    auto mtimeBefore = std::filesystem::last_write_time(inventoryPath);

    std::this_thread::sleep_for(std::chrono::milliseconds(100));

    // Modify only stats
    stats->setInt("kills", 100);

    // Save only stats
    tree->saveNode("data/player/stats");

    // Check inventory.json not modified
    auto mtimeAfter = std::filesystem::last_write_time(inventoryPath);

    ASSERT_EQ(mtimeBefore, mtimeAfter, "inventory.json should not be modified");

    // Load stats and verify
    auto tree3 = std::make_shared<JsonDataTree>("test_data");
    tree3->loadDataDirectory();
    auto stats3 = tree3->getDataRoot()->getChild("player")->getChild("stats");
    int newKills = stats3->getInt("kills");

    ASSERT_EQ(newKills, 100, "Selective save should update only stats");

    reporter.addAssertion("selective_save", true);
    std::cout << "✓ TEST 7 PASSED\n";

    // ========================================================================
    // TEST 8: Hash System (Data Hash)
    // ========================================================================
    std::cout << "\n=== TEST 8: Hash System (Data Hash) ===\n";

    auto testNode = std::make_shared<JsonDataNode>("test", nlohmann::json{
        {"value", 42}
    });

    std::string hash1 = testNode->getDataHash();
    std::cout << "Hash 1: " << hash1.substr(0, 16) << "...\n";

    // Modify the data
    testNode->setInt("value", 43);

    std::string hash2 = testNode->getDataHash();
    std::cout << "Hash 2: " << hash2.substr(0, 16) << "...\n";

    ASSERT_TRUE(hash1 != hash2, "Hashes should differ after data change");

    reporter.addAssertion("data_hash", true);
    std::cout << "✓ TEST 8 PASSED\n";

    // ========================================================================
    // TEST 9: Hash System (Tree Hash)
    // ========================================================================
    std::cout << "\n=== TEST 9: Hash System (Tree Hash) ===\n";

    auto root = std::make_shared<JsonDataNode>("root", nlohmann::json::object());
    auto child1 = std::make_shared<JsonDataNode>("child1", nlohmann::json{{"data", 1}});
    auto child2 = std::make_shared<JsonDataNode>("child2", nlohmann::json{{"data", 2}});

    root->setChild("child1", child1);
    root->setChild("child2", child2);

    std::string treeHash1 = root->getTreeHash();
    std::cout << "Tree Hash 1: " << treeHash1.substr(0, 16) << "...\n";

    // Modify child1
    child1->setInt("data", 999);

    std::string treeHash2 = root->getTreeHash();
    std::cout << "Tree Hash 2: " << treeHash2.substr(0, 16) << "...\n";

    ASSERT_TRUE(treeHash1 != treeHash2, "Tree hash should change when child changes");

    reporter.addAssertion("tree_hash", true);
    std::cout << "✓ TEST 9 PASSED\n";

    // ========================================================================
    // TEST 10: Read-Only Enforcement
    // ========================================================================
    std::cout << "\n=== TEST 10: Read-Only Enforcement ===\n";

    auto readOnlyNode = configRoot->getChild("gameplay");

    bool exceptionThrown = false;
    try {
        auto newChild = std::make_shared<JsonDataNode>("illegal", nlohmann::json{{"bad", true}});
        readOnlyNode->setChild("illegal", newChild);
    } catch (const std::runtime_error& e) {
        std::cout << "✓ Exception thrown: " << e.what() << "\n";
        exceptionThrown = true;
    }

    ASSERT_TRUE(exceptionThrown, "Should throw exception when modifying read-only node");

    reporter.addAssertion("readonly_enforcement", true);
    std::cout << "✓ TEST 10 PASSED\n";

    // ========================================================================
    // TEST 11: Type Safety & Defaults
    // ========================================================================
    std::cout << "\n=== TEST 11: Type Safety & Defaults ===\n";

    auto typeNode = std::make_shared<JsonDataNode>("types", nlohmann::json{
        {"armor", 150},
        {"name", "Tank"},
        {"active", true},
        {"speed", 30.5}
    });

    int armor = typeNode->getInt("armor");
    ASSERT_EQ(armor, 150, "getInt should return correct value");

    int missing = typeNode->getInt("missing", 100);
    ASSERT_EQ(missing, 100, "getInt with default should return default");

    std::string name = typeNode->getString("name");
    ASSERT_EQ(name, "Tank", "getString should return correct value");

    bool active = typeNode->getBool("active");
    ASSERT_EQ(active, true, "getBool should return correct value");

    bool defaultBool = typeNode->getBool("nothere", false);
    ASSERT_EQ(defaultBool, false, "getBool with default should return default");

    double speed = typeNode->getDouble("speed");
    ASSERT_EQ(speed, 30.5, "getDouble should return correct value");

    reporter.addAssertion("type_safety", true);
    std::cout << "✓ TEST 11 PASSED\n";

    // ========================================================================
    // TEST 12: Deep Tree Performance
    // ========================================================================
    std::cout << "\n=== TEST 12: Deep Tree Performance ===\n";

    auto perfRoot = std::make_shared<JsonDataNode>("perf", nlohmann::json::object());

    // Create 1000 nodes: 10 x 10 x 10
    int nodeCount = 0;
    for (int cat = 0; cat < 10; cat++) {
        auto category = std::make_shared<JsonDataNode>("cat_" + std::to_string(cat),
                                                       nlohmann::json::object());

        for (int sub = 0; sub < 10; sub++) {
            auto subcategory = std::make_shared<JsonDataNode>("sub_" + std::to_string(sub),
                                                              nlohmann::json::object());

            for (int item = 0; item < 10; item++) {
                auto itemNode = std::make_shared<JsonDataNode>("item_" + std::to_string(item),
                                                               nlohmann::json{
                    {"id", nodeCount},
                    {"value", nodeCount * 10}
                });
                subcategory->setChild("item_" + std::to_string(item), itemNode);
                nodeCount++;
            }

            category->setChild("sub_" + std::to_string(sub), subcategory);
        }

        perfRoot->setChild("cat_" + std::to_string(cat), category);
    }

    std::cout << "Created " << nodeCount << " nodes\n";
    ASSERT_EQ(nodeCount, 1000, "Should create 1000 nodes");

    // Pattern matching: find all items
    auto start = std::chrono::high_resolution_clock::now();
    auto allItems = perfRoot->getChildrenByNameMatch("item_*");
    auto end = std::chrono::high_resolution_clock::now();

    float patternTime = std::chrono::duration<float, std::milli>(end - start).count();
    std::cout << "Pattern matching found " << allItems.size() << " items in " << patternTime << "ms\n";

    ASSERT_EQ(allItems.size(), 1000, "Should find all 1000 items");
    ASSERT_LT(patternTime, 100.0f, "Pattern matching should be < 100ms");
    reporter.addMetric("pattern_time_ms", patternTime);

    // Query by property
    start = std::chrono::high_resolution_clock::now();
    auto queryResults = perfRoot->queryByProperty("value",
        [](const IDataValue& val) {
            return val.isNumber() && val.asInt() > 5000;
        });
    end = std::chrono::high_resolution_clock::now();

    float queryTime = std::chrono::duration<float, std::milli>(end - start).count();
    std::cout << "Query found " << queryResults.size() << " results in " << queryTime << "ms\n";

    ASSERT_LT(queryTime, 50.0f, "Query should be < 50ms");
    reporter.addMetric("query_time_ms", queryTime);

    // Tree hash
    start = std::chrono::high_resolution_clock::now();
    std::string treeHash = perfRoot->getTreeHash();
    end = std::chrono::high_resolution_clock::now();

    float hashTime = std::chrono::duration<float, std::milli>(end - start).count();
    std::cout << "Tree hash calculated in " << hashTime << "ms\n";

    ASSERT_LT(hashTime, 200.0f, "Tree hash should be < 200ms");
    reporter.addMetric("treehash_time_ms", hashTime);

    reporter.addAssertion("performance", true);
    std::cout << "✓ TEST 12 PASSED\n";

    // ========================================================================
    // CLEANUP
    // ========================================================================
    std::filesystem::remove_all("test_data");

    // ========================================================================
    // FINAL REPORT
    // ========================================================================

    metrics.printReport();
    reporter.printFinalReport();

    return reporter.getExitCode();
}
```

---

## 📊 Collected Metrics

| Metric | Description | Threshold |
|----------|-------------|-------|
| **pattern_heavy_count** | Nodes matched by pattern "*heavy*" | 3 |
| **reload_callback_count** | Callbacks triggered on reload | 1 |
| **pattern_time_ms** | Pattern matching time over 1000 nodes | < 100ms |
| **query_time_ms** | Property query time over 1000 nodes | < 50ms |
| **treehash_time_ms** | Tree hash computation time over 1000 nodes | < 200ms |

---

## ✅ Success Criteria

### MUST PASS
1. ✅ Exact navigation works (getChild, getPath)
2. ✅ Pattern matching finds all matches
3. ✅ Property queries return correct results
4. ✅ Hot-reload detects file changes
5. ✅ Hot-reload callback is triggered
6. ✅ Hot-reload isolation (modifying one file does not affect the others)
7. ✅ Persistence save/load preserves data
8. ✅ Selective save modifies only the targeted node
9. ✅ Data hash changes when data is modified
10. ✅ Tree hash changes when children are modified
11. ✅ Read-only nodes throw an exception when modified
12. ✅ Typed access with defaults works
13. ✅ Performance is acceptable on 1000 nodes

### NICE TO HAVE
1. ✅ Pattern matching < 50ms (optimal)
2. ✅ Query < 25ms (optimal)
3. ✅ Tree hash < 100ms (optimal)

---

## 🐛 Expected Error Cases

| Error | Cause | Action |
|--------|-------|--------|
| Pattern does not match | Incorrect regex | FAIL - fix wildcard conversion |
| Empty query results | Predicate too strict | WARN - check the predicate logic |
| Hot-reload not detected | File watch bug | FAIL - fix checkForChanges() |
| Callback not invoked | onTreeReloaded bug | FAIL - fix callback system |
| Corrupted persistence data | Malformed JSON | FAIL - add validation |
| Identical hashes | Hash calculation bug | FAIL - fix getDataHash() |
| Read-only not enforced | Missing isReadOnly check | FAIL - add check |
| Type mismatch crash | No default handling | FAIL - add try/catch |
| Performance above thresholds | O(n²) algorithm | FAIL - optimize |

---

## 📝 Expected Output

```
================================================================================
TEST: DataNode Integration Test
================================================================================

=== TEST 1: Tree Navigation & Exact Matching ===
Path: config/units/tanks/heavy_mk1
✓ TEST 1 PASSED

=== TEST 2: Pattern Matching ===
Pattern '*heavy*' matched: 3 nodes
  - heavy_mk1
  - heavy_mk2
  - heavy_trooper
Pattern '*_mk*' matched: 2 nodes
✓ TEST 2 PASSED

=== TEST 3: Property-Based Queries ===
Units with armor >= 100: 3
  - heavy_mk1 (armor=150)
  - heavy_mk2 (armor=180)
  - heavy_trooper (armor=120)
Units with speed > 50: 1
✓ TEST 3 PASSED

=== TEST 4: Hot-Reload System ===
Initial difficulty: normal
Has changes: YES
Reloaded: YES
→ Reload callback triggered (count=1)
After reload - difficulty: hard, maxPlayers: 8
✓ TEST 4 PASSED

=== TEST 5: Hot-Reload Isolation ===
✓ TEST 5 PASSED

=== TEST 6: Persistence (Save/Load) ===
Files saved successfully
Loaded: kills=42, deaths=3
✓ TEST 6 PASSED

=== TEST 7: Selective Save ===
✓ TEST 7 PASSED

=== TEST 8: Hash System (Data Hash) ===
Hash 1: 5d41402abc4b2a76...
Hash 2: 7c6a180b36896a0e...
✓ TEST 8 PASSED

=== TEST 9: Hash System (Tree Hash) ===
Tree Hash 1: a1b2c3d4e5f6g7h8...
Tree Hash 2: 9i8j7k6l5m4n3o2p...
✓ TEST 9 PASSED

=== TEST 10: Read-Only Enforcement ===
✓ Exception thrown: Cannot modify read-only node 'gameplay'
✓ TEST 10 PASSED

=== TEST 11: Type Safety & Defaults ===
✓ TEST 11 PASSED

=== TEST 12: Deep Tree Performance ===
Created 1000 nodes
Pattern matching found 1000 items in 45.3ms
Query found 500 results in 23.7ms
Tree hash calculated in 134.2ms
✓ TEST 12 PASSED

================================================================================
METRICS
================================================================================
Pattern heavy count: 3
Reload callback count: 1
Pattern time: 45.3ms (threshold: < 100ms) ✓
Query time: 23.7ms (threshold: < 50ms) ✓
Tree hash time: 134.2ms (threshold: < 200ms) ✓

================================================================================
ASSERTIONS
================================================================================
✓ navigation_exact
✓ pattern_matching
✓ property_queries
✓ hot_reload
✓ reload_isolation
✓ persistence
✓ selective_save
✓ data_hash
✓ tree_hash
✓ readonly_enforcement
✓ type_safety
✓ performance

Result: ✅ PASSED (12/12 tests)

================================================================================
```
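The distinction exercised by TEST 8 and TEST 9 above — a data hash covering only a node's own payload versus a tree hash that also folds in every child — can be sketched as follows. The `HNode` struct is illustrative only, and `std::hash` stands in for whatever digest the engine actually uses.

```cpp
#include <functional>
#include <string>
#include <vector>

// Sketch of the data-hash vs tree-hash split: changing a child's payload
// changes the parent's tree hash but leaves its data hash untouched.
struct HNode {
    std::string payload;
    std::vector<HNode> children;

    // Covers only this node's own data.
    size_t dataHash() const { return std::hash<std::string>{}(payload); }

    // Folds every child's tree hash into this node's data hash.
    size_t treeHash() const {
        size_t h = dataHash();
        for (const auto& c : children)
            h ^= c.treeHash() + 0x9e3779b9 + (h << 6) + (h >> 2);  // hash combine
        return h;
    }
};
```

Under this scheme, modifying `children[0].payload` leaves the parent's `dataHash()` unchanged while altering its `treeHash()`, which is exactly the behavior the two tests assert.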

---

## 📅 Planning

**Day 1 (4h):**
- Set up JsonDataTree with the test directory
- Implement tests 1-6 (navigation, patterns, queries, hot-reload, persistence)

**Day 2 (3h):**
- Implement tests 7-12 (selective save, hashes, read-only, types, performance)
- Debug + validation

---

**Next step**: `scenario_13_cross_system.md`