From 446512fdcbfe9b13bad14a63480a245559d6a630 Mon Sep 17 00:00:00 2001
From: prplV
Date: Mon, 23 Jun 2025 10:13:27 -0400
Subject: [PATCH] readme update
---
 README.md | 95 +++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 95 insertions(+)
 create mode 100644 README.md

# ML-API: Metrics Analysis & Recommendations Service

ML-API is a high-performance Rust service that processes system metrics, analyzes them with machine learning models, and provides actionable recommendations. The service transforms raw metric arrays into structured prompts, sends them to the configured ML model, and returns system insights.

## Key Features

- **Dual Interface Support**: Choose between WebSocket and REST API for your integration needs
- **Flexible ML Backend Integration**: Easily connect to any ML model API (Ollama, OpenAI, etc.)
- **Smart Prompt Generation**: Automatically transforms metrics into optimized ML prompts
- **Comprehensive Analysis**: Returns system health insights and improvement recommendations
- **Configurable Logging**: Adjustable log levels for debugging and monitoring

## Quick Start

### Prerequisites

- Rust 1.70 or later
- A configured ML backend (Ollama, OpenAI, etc.)
### Configuration

Create a `.env` file in your project root:

```env
# ML API target URL (e.g., Ollama would use: http://localhost:11434/api/generate)
ML_TARGET_URL="http://your-ml-backend/api"

# Log level (TRACE|DEBUG|INFO|WARN|ERROR|OFF)
ML_LOG_LEVEL="INFO"
```

### Installation

```bash
git clone https://github.com/your-repo/ml-api.git
cd ml-api
cargo build --release
```

### Running the Service

```bash
cargo run --release
```

## API Endpoints

### WebSocket Interface

- **Endpoint**: `ws://your-server/api/method/ws`
- **Protocol**: Real-time metric processing with continuous feedback

### REST API

- **POST Endpoint**: `http://your-server/api/method/rest`
- **GET Endpoint**: `http://your-server/api/swagger` (API documentation)

## Request Format

Send your metrics as a JSON array in the following format:

```json
{
  "service_name": "zvks",
  "metrics": [
    {
      "id": "10001",
      "name": "cpu_utilization",
      "type": "i64",
      "addr": "enode.monitoring.api",
      "value": null,
      "description": "cpu_utilization",
      "status": 0,
      "device": 18,
      "source": "module$11"
    },
    // ... metrics ...
  ]
}
```

Note: the `// ... metrics ...` comment marks where additional metric objects go; since JSON does not allow comments, remove it in real requests.

## Response Format

The API returns its analysis and recommendations as **plain text**, not JSON.

## Advanced Configuration

### Logging Levels

Adjust log granularity by changing `ML_LOG_LEVEL` in your `.env`:

- `TRACE`: Full debug information (verbose)
- `DEBUG`: Development-level logging
- `INFO`: Standard operational messages (default)
- `WARN`: Only warnings and errors
- `ERROR`: Critical errors only
- `OFF`: Disable logging completely
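## Example Client

As a usage sketch, the request and response formats above can be exercised with a small Python client. The host and port (`localhost:8080`) are assumptions — substitute the address where your ML-API instance is running; the endpoint path and JSON body follow the formats documented above.

```python
import json
from urllib import request, error

# Assumed host/port for a local deployment; adjust to your setup.
URL = "http://localhost:8080/api/method/rest"

# Request body mirroring the documented format.
payload = {
    "service_name": "zvks",
    "metrics": [
        {
            "id": "10001",
            "name": "cpu_utilization",
            "type": "i64",
            "addr": "enode.monitoring.api",
            "value": None,
            "description": "cpu_utilization",
            "status": 0,
            "device": 18,
            "source": "module$11",
        }
    ],
}

body = json.dumps(payload).encode("utf-8")
req = request.Request(URL, data=body, headers={"Content-Type": "application/json"})

try:
    with request.urlopen(req, timeout=5) as resp:
        # The service responds with plain-text analysis, not JSON.
        print(resp.read().decode("utf-8"))
except (error.URLError, OSError) as exc:
    print(f"request failed (is the service running?): {exc}")
```

Because the response body is plain text, the client prints it directly rather than parsing it as JSON.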