# ML-API: Metrics Analysis & Recommendations Service
ML-API is a high-performance Rust service designed to process system metrics, analyze them using machine learning models, and provide actionable recommendations. The service transforms raw metric arrays into structured prompts, sends them to configured ML models, and returns system insights.

## Key Features

- **Dual Interface Support**: Choose between WebSocket or REST API for your integration needs
- **Flexible ML Backend Integration**: Easily connect to any ML model API (Ollama, OpenAI, etc.)
- **Smart Prompt Generation**: Automatically transforms metrics into optimized ML prompts
- **Comprehensive Analysis**: Returns system health insights and improvement recommendations
- **Configurable Logging**: Adjustable log levels for optimal debugging and monitoring

## Quick Start

### Prerequisites

- Rust 1.70 or later
- Configured ML backend (Ollama, OpenAI, etc.)

### Configuration

Create a `.env` file in your project root:

```env
ML_TARGET_URL="http://url.to/ml/api"
ML_MODEL_NAME="kis-test"
ML_REQUEST_TIMEOUT="10"
ML_API_LOG_LEVEL="INFO"
ML_API_PORT="5134"
```

### Installation

```bash
git clone https://github.com/your-repo/ml-api.git
cd ml-api
cargo build --release
```

### Running the Service

```bash
cargo run --release
```

## API Endpoints

### WebSocket Interface

- **Endpoint**: `ws://your-server/api/method/ws`
- **Protocol**: Real-time metric processing with continuous feedback

### REST API

- **POST Endpoint**: `http://your-server/api/method/rest`
- **GET Endpoint**: `http://your-server/api/swagger` (API documentation)

## Request Format

Send your metrics as a JSON object containing a `metrics` array, in the following format:

```json
{
  "service_name": "zvks",
  "metrics": [
    {
      "id": "10001",
      "name": "cpu_utilization",
      "description": "cpu_utilization",
      "value": 1.23,
      "status": 0,
      "device": 18,
      "source": "module$11"
    }
    // ... metrics ...
  ]
}
```

## Response Format

The API returns the complete analysis and recommendations as **plain text**.

## Advanced Configuration

### Logging Levels

Adjust the log granularity by changing `ML_API_LOG_LEVEL` in your `.env` file:

- `TRACE`: Full debug information (verbose)
- `DEBUG`: Development-level logging
- `INFO`: Standard operational messages (default)
- `WARN`: Only warnings and errors
- `ERROR`: Critical errors only
- `OFF`: Disable logging completely