This commit implements a complete test infrastructure for validating hot-reload stability and robustness across multiple scenarios.

## New Test Infrastructure

### Test Helpers (tests/helpers/)
- TestMetrics: FPS, memory, and reload-time tracking with statistics
- TestReporter: Assertion tracking and formatted test reports
- SystemUtils: Memory usage monitoring via /proc/self/status (sketched below)
- TestAssertions: Macro-based assertion framework

### Test Modules
- TankModule: Realistic module with 50 tanks for production testing
- ChaosModule: Crash-injection module for robustness validation (sketched below)
- StressModule: Lightweight module for long-duration stability tests

## Integration Test Scenarios

### Scenario 1: Production Hot-Reload (test_01_production_hotreload.cpp) ✅ PASSED
- End-to-end hot-reload validation
- 30-second simulation (1800 frames @ 60 FPS)
- TankModule with 50 tanks and realistic state
- Source modification (v1.0 → v2.0), recompilation, reload
- State preservation: positions, velocities, frameCount
- Metrics: ~163 ms reload time, 0.88 MB memory growth

### Scenario 2: Chaos Monkey (test_02_chaos_monkey.cpp) ✅ PASSED
- Extreme robustness testing
- 150+ random crashes per run (5% crash probability per frame)
- 5 crash types: runtime_error, logic_error, out_of_range, domain_error, state corruption
- 100% recovery rate via automatic hot-reload
- Corrupted-state detection and rejection
- Random seed for unpredictable crash patterns
- Proof of a real reload: temporary files in /tmp/grove_module_*.so

### Scenario 3: Stress Test (test_03_stress_test.cpp) ✅ PASSED
- Long-duration stability validation
- 10-minute simulation (36000 frames @ 60 FPS)
- 120 hot-reloads (one every 5 seconds)
- 100% reload success rate (120/120)
- Memory growth: 2 MB (threshold: 50 MB)
- Average reload time: 160 ms (threshold: 500 ms)
- No memory leaks, no file-descriptor leaks

## Core Engine Enhancements

### ModuleLoader (src/ModuleLoader.cpp)
- Temporary file copy to /tmp/ to bypass the Linux dlopen cache
- Robust reload() method: getState() → unload() → load() → setState() (sketched below)
- Automatic cleanup of temporary files
- Comprehensive error handling and logging

### DebugEngine (src/DebugEngine.cpp)
- Automatic recovery in processModuleSystems()
- Exception catching → logging → module reload → continue
- Module state dump utilities for debugging

### SequentialModuleSystem (src/SequentialModuleSystem.cpp)
- extractModule() for safe module extraction
- registerModule() for module re-registration
- Enhanced processModules() with error handling

## Build System
- CMake configuration for the test infrastructure
- Shared-library compilation for test modules (.so)
- CTest integration for all scenarios
- PIC flag management for spdlog compatibility

## Documentation (planTI/)
- Complete test architecture documentation
- Detailed scenario specifications with success criteria
- Global test plan and validation thresholds

## Validation Results
All 3 integration scenarios pass successfully:
- Production hot-reload: State preservation validated
- Chaos Monkey: 100% recovery from 150+ crashes
- Stress Test: Stable across 120 reloads with minimal memory growth

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
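For orientation, here is a minimal sketch of the reload path and the recovery chain described under "Core Engine Enhancements" above. It is not the project's code: only the reload() sequence, the /tmp/grove_module_*.so copy, and the catch → log → reload → continue chain come from the commit message; the IModule interface, the createModule factory symbol, and every other name below are assumptions made for illustration.

```cpp
// Sketch only, under the assumptions stated above.
#include <dlfcn.h>
#include <unistd.h>

#include <exception>
#include <filesystem>
#include <iostream>
#include <stdexcept>
#include <string>
#include <utility>

// Assumed minimal module interface.
struct IModule {
    virtual ~IModule() = default;
    virtual void process(float dt) = 0;                   // per-frame update
    virtual std::string getState() const = 0;             // serialize module state
    virtual void setState(const std::string& state) = 0;  // restore module state
};

class ModuleLoader {
public:
    explicit ModuleLoader(std::string sourcePath) : sourcePath_(std::move(sourcePath)) {}
    ~ModuleLoader() { unload(); }

    IModule* module() const { return module_; }

    // Reload sequence named in the commit message:
    // getState() -> unload() -> load() -> setState()
    void reload() {
        std::string saved;
        if (module_) saved = module_->getState();  // 1. capture state before unloading
        unload();                                  // 2. drop the old library
        load();                                    // 3. load the freshly rebuilt library
        if (module_ && !saved.empty()) module_->setState(saved);  // 4. restore state
    }

    void load() {
        // Copy the .so to a unique path under /tmp so dlopen() maps the new build
        // instead of returning its cached handle for the original path.
        tempPath_ = "/tmp/grove_module_" + std::to_string(::getpid()) + "_" +
                    std::to_string(generation_++) + ".so";
        std::filesystem::copy_file(sourcePath_, tempPath_,
                                   std::filesystem::copy_options::overwrite_existing);

        handle_ = dlopen(tempPath_.c_str(), RTLD_NOW);
        if (!handle_) {
            const char* err = dlerror();
            throw std::runtime_error(err ? err : "dlopen failed");
        }

        // Assumed factory symbol exported by each test module.
        auto create = reinterpret_cast<IModule* (*)()>(dlsym(handle_, "createModule"));
        if (!create) throw std::runtime_error("createModule symbol not found");
        module_ = create();
    }

    void unload() {
        delete module_;  // destroy the instance before closing its library
        module_ = nullptr;
        if (handle_) { dlclose(handle_); handle_ = nullptr; }
        if (!tempPath_.empty()) std::filesystem::remove(tempPath_);  // temp-file cleanup
        tempPath_.clear();
    }

private:
    std::string sourcePath_;
    std::string tempPath_;
    void* handle_ = nullptr;
    IModule* module_ = nullptr;
    int generation_ = 0;
};

// Recovery chain described for DebugEngine::processModuleSystems(): catch the
// exception, log it, hot-reload the module, and keep the frame loop running.
void processModuleWithRecovery(ModuleLoader& loader, float dt) {
    try {
        if (loader.module()) loader.module()->process(dt);
    } catch (const std::exception& e) {
        std::cerr << "[recovery] module crashed: " << e.what() << "\n";
        loader.reload();  // fresh instance with its last good state restored
    }
}
```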
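The crash injection described for ChaosModule (5% per-frame probability, five failure kinds) could look roughly like the following; the function name and the NaN-based form of "state corruption" are assumptions, not the module's actual logic.

```cpp
// Illustrative sketch of ChaosModule-style crash injection, per the commit message.
#include <limits>
#include <random>
#include <stdexcept>

void maybeInjectCrash(std::mt19937& rng, float& moduleState) {
    std::uniform_real_distribution<float> chance(0.0f, 1.0f);
    if (chance(rng) >= 0.05f) return;  // 95% of frames run normally

    switch (std::uniform_int_distribution<int>(0, 4)(rng)) {
        case 0: throw std::runtime_error("injected runtime_error");
        case 1: throw std::logic_error("injected logic_error");
        case 2: throw std::out_of_range("injected out_of_range");
        case 3: throw std::domain_error("injected domain_error");
        default:
            // Silent state corruption, which the engine must detect and reject.
            moduleState = std::numeric_limits<float>::quiet_NaN();
    }
}
```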
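SystemUtils is described as reading memory usage from /proc/self/status. A common way to do that on Linux is to parse the VmRSS field, as in this sketch; the helper name getResidentMemoryKb() is hypothetical.

```cpp
// Sketch: read resident memory (VmRSS) from /proc/self/status.
#include <fstream>
#include <sstream>
#include <string>

// Returns VmRSS in kilobytes, or 0 if the field cannot be read (e.g. non-Linux).
long getResidentMemoryKb() {
    std::ifstream status("/proc/self/status");
    std::string line;
    while (std::getline(status, line)) {
        if (line.rfind("VmRSS:", 0) == 0) {  // line looks like "VmRSS:   12345 kB"
            std::istringstream fields(line.substr(6));
            long kb = 0;
            fields >> kb;
            return kb;
        }
    }
    return 0;
}
```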
TestReporter implementation (59 lines, 2.1 KiB, C++):
#include "TestReporter.h"
|
|
#include <iostream>
|
|
|
|
namespace grove {
|
|
|
|
TestReporter::TestReporter(const std::string& name) : scenarioName(name) {}
|
|
|
|
void TestReporter::addMetric(const std::string& name, float value) {
|
|
metrics[name] = value;
|
|
}
|
|
|
|
void TestReporter::addAssertion(const std::string& name, bool passed) {
|
|
assertions.push_back({name, passed});
|
|
}
|
|
|
|
void TestReporter::printFinalReport() const {
|
|
std::cout << "\n";
|
|
std::cout << "════════════════════════════════════════════════════════════════\n";
|
|
std::cout << "FINAL REPORT: " << scenarioName << "\n";
|
|
std::cout << "════════════════════════════════════════════════════════════════\n\n";
|
|
|
|
// Metrics
|
|
if (!metrics.empty()) {
|
|
std::cout << "Metrics:\n";
|
|
for (const auto& [name, value] : metrics) {
|
|
std::cout << " " << name << ": " << value << "\n";
|
|
}
|
|
std::cout << "\n";
|
|
}
|
|
|
|
// Assertions
|
|
if (!assertions.empty()) {
|
|
std::cout << "Assertions:\n";
|
|
bool allPassed = true;
|
|
for (const auto& [name, passed] : assertions) {
|
|
std::cout << " " << (passed ? "✓" : "✗") << " " << name << "\n";
|
|
if (!passed) allPassed = false;
|
|
}
|
|
std::cout << "\n";
|
|
|
|
if (allPassed) {
|
|
std::cout << "Result: ✅ PASSED\n";
|
|
} else {
|
|
std::cout << "Result: ❌ FAILED\n";
|
|
}
|
|
}
|
|
|
|
std::cout << "════════════════════════════════════════════════════════════════\n";
|
|
}
|
|
|
|
int TestReporter::getExitCode() const {
|
|
for (const auto& [name, passed] : assertions) {
|
|
if (!passed) return 1; // FAIL
|
|
}
|
|
return 0; // PASS
|
|
}
|
|
|
|
} // namespace grove
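A possible way for a test scenario to drive this reporter, e.g. at the end of its main(); the metric names and values below are placeholders, not real results.

```cpp
#include "TestReporter.h"

int main() {
    grove::TestReporter reporter("Example scenario");

    // Record whatever the scenario measured.
    reporter.addMetric("avg_reload_ms", 160.0f);
    reporter.addMetric("memory_growth_mb", 2.0f);

    // Record pass/fail checks.
    reporter.addAssertion("all reloads succeeded", true);
    reporter.addAssertion("memory growth under threshold", true);

    reporter.printFinalReport();    // metrics, per-assertion ✓/✗, PASSED/FAILED banner
    return reporter.getExitCode();  // 0 only if every assertion passed (CTest-friendly)
}
```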