First commit
.github/copilot-instructions.md (new file, vendored, 82 lines)
@@ -0,0 +1,82 @@
# Copilot / AI Agent Instructions for system2mqtt

Short, actionable guidance so an AI coding agent can be immediately productive.

1. Purpose

- This repo collects host metrics and publishes them to Home Assistant over MQTT.
- Main entry point: `system2mqtt` console script (defined in `setup.py`).

2. How to run (development)

- Create a venv and install deps:

```bash
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
pip install -e .
```

- Run the service with the provided entry point:

```bash
system2mqtt
```

- Environment override: set `SYSTEM2MQTT_CONFIG` to point to a `config.yaml` path.

3. Big-picture architecture

- `system2mqtt.main` (console entry) loads configuration, connects to MQTT, discovers collector modules under `src/system2mqtt/collectors/`, and loops to:
  - call each collector's `collect_metrics()`
  - publish Home Assistant discovery payloads and state/attributes topics
  - publish availability to `system2mqtt/{HOSTNAME}/status`
- Collectors are pure Python modules (examples: `collectors/system_metrics.py`, `collectors/cpu_temperature.py`, `collectors/zfs_pools.py`). The main program imports them dynamically via `collectors.<name>` and expects a `collect_metrics()` function.

4. Collector conventions (exact; follow these)

- Each collector should expose `collect_metrics() -> Dict`, which returns a dict with an `entities` list.
- Each entity must include at least: `sensor_id`, `name`, `value`, `state_class`, `unit_of_measurement`, `device_class`, and `attributes`.
- Optional: a module-level `DEFAULT_INTERVAL` may be present (collectors include this today), but note: the current `main` does not use per-collector intervals (see "Notable quirks").
- Example entity (from `system_metrics.py`):

```py
{
    "sensor_id": "cpu_usage",
    "name": "CPU Usage",
    "value": "12.3",
    "state_class": "measurement",
    "unit_of_measurement": "%",
    "device_class": "power_factor",
    "attributes": {"friendly_name": "CPU Usage"}
}
```

5. Configuration and secrets

- Default config path: `~/.config/system2mqtt/config.yaml` (created from `config.yaml.example` on first run).
- The `SYSTEM2MQTT_CONFIG` env var can override the config file location.
- Required config keys under `mqtt`: `host`, `port`, `username`, `password`, `client_id`, `discovery_prefix`.

6. MQTT topics & discovery (concrete examples)

- Discovery topic format published by `main`:
  `{discovery_prefix}/sensor/system2mqtt_{HOSTNAME}_{sensor_id}/config`
- State topic format: `system2mqtt/{HOSTNAME}/{sensor_id}/state`
- Attributes topic: `system2mqtt/{HOSTNAME}/{sensor_id}/attributes` (JSON)
- Availability: `system2mqtt/{HOSTNAME}/status` with `online` / `offline` payloads (retained)

7. Integration points & dependencies

- MQTT client: `paho-mqtt` (callback handlers in `main.py`).
- System metrics: `psutil`, used by `collectors/system_metrics.py`.
- ZFS collectors call the `zpool` binary via `subprocess` (requires ZFS tools present on the host).

8. Notable quirks & places to be careful

- The README mentions per-collector update intervals, and collector modules define `DEFAULT_INTERVAL`, but `main.py` currently uses a fixed `time.sleep(60)` and does not schedule collectors individually. If changing scheduling, update `main.py` to read the collector's `DEFAULT_INTERVAL` or config `collectors.intervals`.
- `main.py` lists collectors from the relative `collectors` directory using `os.listdir('collectors')`. For correct imports:
  - run from a package-installed environment (`pip install -e .`) and call `system2mqtt` (recommended), or
  - ensure the working directory / `PYTHONPATH` is set so the `collectors` import works.
- On first run the code copies `config.yaml.example` to the user config dir and exits; tests or CI should populate a config before invoking `system2mqtt`.

9. Suggested tasks for AI agents (concrete, small changes)

- Implement per-collector scheduling using `DEFAULT_INTERVAL` or `config['collectors']['intervals']`.
- Add unit tests around the collector `collect_metrics()` return schema (validate required keys).
- Improve error handling around dynamic imports (log which path was attempted).

10. Where to look in the code

- Entry point & runtime: `src/system2mqtt/main.py`
- Collector examples: `src/system2mqtt/collectors/*.py` (`system_metrics.py`, `cpu_temperature.py`, `zfs_pools.py`)
- Example config: `config.yaml.example`
- Packaging / console script: `setup.py` (entry point `system2mqtt`)

If anything here is unclear or you want the instructions to emphasize other areas (tests, CI, packaging), tell me which part to expand or correct.
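The schema test suggested in task 9 can be sketched as a small validator. This is a hypothetical helper (`validate_metrics` does not exist in the repo yet); it only checks the required keys listed in section 4:

```python
# Hypothetical schema check for collector output; not part of the repo yet.
REQUIRED_KEYS = {"sensor_id", "name", "value", "state_class",
                 "unit_of_measurement", "device_class", "attributes"}

def validate_metrics(metrics: dict) -> list:
    """Return a list of problems; an empty list means the payload is valid."""
    entities = metrics.get("entities")
    if not isinstance(entities, list):
        return ["'entities' must be a list"]
    problems = []
    for i, entity in enumerate(entities):
        missing = REQUIRED_KEYS - entity.keys()
        if missing:
            problems.append(f"entity {i} missing keys: {sorted(missing)}")
    return problems

example = {"entities": [{
    "sensor_id": "cpu_usage", "name": "CPU Usage", "value": "12.3",
    "state_class": "measurement", "unit_of_measurement": "%",
    "device_class": "power_factor",
    "attributes": {"friendly_name": "CPU Usage"},
}]}
print(validate_metrics(example))  # empty list for a valid payload
```

A test like this can run against every module under `collectors/` without mocking MQTT at all.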
.gitignore (new file, vendored, 51 lines)
@@ -0,0 +1,51 @@
# Python
__pycache__/
*.py[cod]
*$py.class
*.so
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
*.egg-info/
.installed.cfg
*.egg

# Virtual Environment
venv/
.env/
.envrc
.env.*
.env.local
.venv/
env/
ENV/

# IDE
.idea/
.vscode/
*.swp
*.swo

# Testing
.coverage
htmlcov/
.pytest_cache/

# Distribution
dist/
build/
*.egg-info/

# Local configuration
config.yaml
.DS_Store
README.md (new file, 174 lines)
@@ -0,0 +1,174 @@
# system2mqtt

A system for monitoring hosts by collecting metrics and sending them to Home Assistant via MQTT.

## Features

- Modular structure for metric collectors
- Easy extensibility through new collectors
- Automatic discovery in Home Assistant
- Encrypted MQTT communication
- Detailed device information in Home Assistant
- Individual update intervals per collector

## Installation

1. Clone the repository:
```bash
git clone https://github.com/yourusername/system2mqtt.git
cd system2mqtt
```

2. Install Python dependencies:
```bash
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
```

3. Configure the system:
```yaml
mqtt:
  host: "mqtt.example.com"  # MQTT broker address
  port: 1883                # MQTT port
  username: "your_username"
  password: "your_password"
  client_id: "system2mqtt"
  discovery_prefix: "homeassistant"  # Home Assistant discovery prefix

collectors:
  # Default interval for all collectors (in seconds)
  default_interval: 60

  # Specific intervals for individual collectors
  intervals:
    zfs_pools: 300        # ZFS pools every 5 minutes
    cpu_temperature: 30   # CPU temperature every 30 seconds
    system_metrics: 60    # System metrics every minute
```

## Configuration via environment variables

You can override configuration values using environment variables.

Precedence: **defaults (code) < config file (YAML) < environment variables (ENV)**.

Recognized environment variables include:

- `SYSTEM2MQTT_CONFIG` — path to a YAML config file (overrides the default lookup)
- `MQTT_HOST` — MQTT server host (default: `localhost`)
- `MQTT_PORT` — MQTT server port (default: `1883`)
- `MQTT_USERNAME`, `MQTT_PASSWORD` — MQTT credentials
- `MQTT_CLIENT_ID` — MQTT client id template (supports `{hostname}`)
- `MQTT_DISCOVERY_PREFIX` — Home Assistant discovery prefix (default: `homeassistant`)
- `COLLECTORS_DEFAULT_INTERVAL` — override the global collectors default interval
- `COLLECTOR_<NAME>_INTERVAL` — override a per-collector interval (e.g. `COLLECTOR_system_metrics_INTERVAL=30`)

## Usage

Run the system directly or as a systemd service (see [SYSTEMD_SETUP.md](SYSTEMD_SETUP.md)):

```bash
# Direct execution
python3 system2mqtt.py

# Or use the run script
./run.sh
```

## Collectors

### System Metrics

Collects basic system metrics:
- Last boot time
- Load average (1, 5, 15 minutes)
- Memory usage (total, available, used)
- Swap usage (total, available, used)
- CPU usage (%)
- Memory usage (%)
- Swap usage (%)

Default update interval: 60 seconds

### CPU Temperature

Collects CPU temperature data:
- Supports Linux and FreeBSD
- Automatic OS detection
- Correct unit (°C) and device class (temperature)

Default update interval: 30 seconds

### ZFS Pools

Collects information about ZFS pools:
- Pool health
- Total size
- Used space
- Free space
- Usage percentage
- Additional attributes (readonly, dedup, altroot)

Default update interval: 300 seconds (5 minutes)

## Update Intervals

Each collector has a predefined default update interval that can be overridden in the configuration file:

1. Default intervals are defined in the collector files
2. These intervals can be customized per collector in `config.yaml`
3. If no specific interval is defined in the configuration, the collector's default interval is used
4. If no default interval is defined in the collector, the global `default_interval` from the configuration is used

## Data Format

The data exchange format is versioned and follows Home Assistant specifications. Each collector returns a JSON object with the following structure:

```json
{
  "entities": [
    {
      "sensor_id": "unique_sensor_id",
      "name": "Sensor Name",
      "value": "sensor_value",
      "state_class": "measurement",
      "unit_of_measurement": "unit",
      "device_class": "device_class",
      "icon": "mdi:icon",
      "attributes": {
        "friendly_name": "Friendly Name",
        "additional_attributes": "values"
      }
    }
  ]
}
```

### Fields

- `sensor_id`: Unique ID for the sensor (used for MQTT topics)
- `name`: Display name of the sensor
- `value`: Current value of the sensor
- `state_class`: Type of measurement (`measurement`, `total`, `total_increasing`)
- `unit_of_measurement`: Unit of measurement
- `device_class`: Type of sensor (temperature, humidity, pressure, etc.)
- `icon`: Material Design Icon name (`mdi:...`)
- `entity_category`: Category of the sensor (diagnostic, config, system)
- `attributes`: Additional information as key-value pairs

### Versioning

The format is versioned to allow for future extensions. The current version is 1.0.

## Home Assistant Integration

The system uses Home Assistant's MQTT Discovery feature. Sensors are automatically detected and appear in Home Assistant with:
- Correct name and icon
- Current values
- Historical data
- Detailed device information

## License

MIT License
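The four-step interval resolution described under "Update Intervals" can be sketched as a small lookup. `resolve_interval` is a hypothetical helper written for illustration (per the repo's own notes, the current `main.py` does not yet schedule collectors individually):

```python
# Sketch of the interval-resolution order from the README; `resolve_interval`
# is a hypothetical helper, not a function that exists in the repo today.
GLOBAL_DEFAULT = 60  # collectors.default_interval from config.yaml

def resolve_interval(name, config_intervals, collector_default=None,
                     global_default=GLOBAL_DEFAULT):
    """Pick the update interval for one collector.

    Precedence: config.yaml intervals > collector DEFAULT_INTERVAL
    > global default_interval.
    """
    if name in config_intervals:       # step 2: per-collector config wins
        return config_intervals[name]
    if collector_default is not None:  # step 3: collector's DEFAULT_INTERVAL
        return collector_default
    return global_default              # step 4: global fallback

intervals = {"zfs_pools": 300}
print(resolve_interval("zfs_pools", intervals, collector_default=300))       # 300
print(resolve_interval("cpu_temperature", intervals, collector_default=30))  # 30
print(resolve_interval("new_collector", intervals))                          # 60
```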
SYSTEMD_SETUP.md (new file, 78 lines)
@@ -0,0 +1,78 @@
# systemd Service Installation

Quick setup to run system2mqtt as a systemd service on Linux.

## Installation Steps

1. **Create a dedicated user:**
```bash
sudo useradd -r -s /usr/sbin/nologin -d /opt/system2mqtt system2mqtt
```

2. **Install the system to /opt/system2mqtt:**
```bash
sudo mkdir -p /opt/system2mqtt
sudo cp -r system2mqtt.py collectors config.yaml.example run.sh /opt/system2mqtt/
sudo cp requirements.txt /opt/system2mqtt/
sudo chmod +x /opt/system2mqtt/run.sh
sudo chown -R system2mqtt:system2mqtt /opt/system2mqtt
```

3. **Configure:**
```bash
# Copy and edit the config (in the user's home directory)
sudo -u system2mqtt mkdir -p ~system2mqtt/.config/system2mqtt
sudo -u system2mqtt cp /opt/system2mqtt/config.yaml.example ~system2mqtt/.config/system2mqtt/config.yaml
sudo nano ~system2mqtt/.config/system2mqtt/config.yaml

# OR use environment variables in .env
sudo nano /opt/system2mqtt/.env
sudo chown system2mqtt:system2mqtt /opt/system2mqtt/.env
```

4. **Install the systemd service:**
```bash
sudo cp system2mqtt.service /etc/systemd/system/
sudo systemctl daemon-reload
sudo systemctl enable system2mqtt
sudo systemctl start system2mqtt
```

5. **Check status:**
```bash
sudo systemctl status system2mqtt
sudo journalctl -u system2mqtt -f
```

## Service Management

- **Start:** `sudo systemctl start system2mqtt`
- **Stop:** `sudo systemctl stop system2mqtt`
- **Restart:** `sudo systemctl restart system2mqtt`
- **Logs:** `sudo journalctl -u system2mqtt -f`
- **Enable on boot:** `sudo systemctl enable system2mqtt`
- **Disable on boot:** `sudo systemctl disable system2mqtt`

## Notes

- The service runs as the unprivileged `system2mqtt` user
- The user is a member of the `adm` and `systemd-journal` groups for system metrics access
- ZFS (read-only): on many systems, read-only queries like `zpool status` and `zpool list` work without special privileges. If they fail on your host, consider one of these options:
  - Add the user to a `zfs` group if present (Debian/Ubuntu with `zfsutils-linux` often provide it):
    ```bash
    sudo usermod -aG zfs system2mqtt
    sudo systemctl restart system2mqtt
    ```
  - Allow read-only ZFS commands via sudoers without a password:
    ```bash
    echo "system2mqtt ALL=(ALL) NOPASSWD: /usr/sbin/zpool, /usr/sbin/zfs" | sudo tee /etc/sudoers.d/system2mqtt
    sudo visudo -c
    ```
  - Use ZFS delegation (if supported in your setup) to grant specific permissions:
    ```bash
    sudo zfs allow system2mqtt snapshot,send,receive YOUR_POOL
    ```
- `run.sh` automatically creates the venv and installs/updates dependencies on each start
- Auto-restarts on failure (`RestartSec=10`)
- Reads environment from `/opt/system2mqtt/.env` if present
- Logs to the systemd journal (view with `journalctl`)
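Step 4 copies a `system2mqtt.service` file that is not shown in this commit. A minimal sketch of what such a unit might look like, assuming the `/opt/system2mqtt` layout above and the behavior listed under Notes (the repo's actual unit file may differ):

```ini
# Hypothetical system2mqtt.service sketch; adjust paths to your install.
[Unit]
Description=system2mqtt host metrics to Home Assistant via MQTT
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
User=system2mqtt
Group=system2mqtt
WorkingDirectory=/opt/system2mqtt
# "-" prefix: do not fail if the optional .env file is absent
EnvironmentFile=-/opt/system2mqtt/.env
ExecStart=/opt/system2mqtt/run.sh
Restart=on-failure
RestartSec=10

[Install]
WantedBy=multi-user.target
```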
collectors/cpu_temperature.py (new file, 124 lines)
@@ -0,0 +1,124 @@
#!/usr/bin/env python3

import os
import subprocess
import glob
import sys
from typing import Dict, Any, List, Tuple

# Default update interval in seconds
DEFAULT_INTERVAL = 30  # 30 seconds


def get_temperature_linux_coretemp() -> List[Tuple[float, str]]:
    """Get CPU temperatures using the coretemp module."""
    temps = []
    try:
        for hwmon_dir in glob.glob('/sys/class/hwmon/hwmon*'):
            try:
                with open(os.path.join(hwmon_dir, 'name'), 'r') as f:
                    if f.read().strip() == 'coretemp':
                        # Found coretemp, get all temperatures
                        for temp_file in glob.glob(os.path.join(hwmon_dir, 'temp*_input')):
                            try:
                                with open(temp_file, 'r') as tf:
                                    temp = float(tf.read().strip()) / 1000.0
                                # Get label if available
                                label = "Package"
                                label_file = temp_file.replace('_input', '_label')
                                if os.path.exists(label_file):
                                    with open(label_file, 'r') as lf:
                                        label = lf.read().strip()
                                temps.append((temp, label))
                            except (FileNotFoundError, ValueError):
                                continue
            except (FileNotFoundError, ValueError):
                continue
    except Exception:
        pass
    return temps


def get_temperature_linux_thermal() -> List[Tuple[float, str]]:
    """Get CPU temperatures using thermal zones."""
    temps = []
    try:
        for thermal_dir in glob.glob('/sys/class/thermal/thermal_zone*'):
            try:
                with open(os.path.join(thermal_dir, 'type'), 'r') as f:
                    zone_type = f.read().strip()
                if 'cpu' in zone_type.lower():
                    with open(os.path.join(thermal_dir, 'temp'), 'r') as tf:
                        temp = float(tf.read().strip()) / 1000.0
                    temps.append((temp, zone_type))
            except (FileNotFoundError, ValueError):
                continue
    except Exception:
        pass
    return temps


def get_temperature_freebsd() -> List[Tuple[float, str]]:
    """Get CPU temperatures on FreeBSD systems."""
    temps = []
    try:
        # Get number of CPUs
        cpu_count = int(subprocess.check_output(['sysctl', '-n', 'hw.ncpu']).decode().strip())

        # Get temperature for each CPU
        for cpu in range(cpu_count):
            try:
                temp = subprocess.check_output(['sysctl', '-n', f'dev.cpu.{cpu}.temperature']).decode().strip()
                # sysctl reports values like "45.0C"; strip the unit suffix before parsing
                temp_value = float(temp.rstrip('C'))
                temps.append((temp_value, f'CPU {cpu}'))
            except (subprocess.SubprocessError, ValueError):
                continue
    except (subprocess.SubprocessError, ValueError):
        pass
    return temps


def collect_metrics() -> Dict[str, Any]:
    """Collect CPU temperature metrics."""
    metrics = {
        "entities": []
    }

    temps = []

    # Get CPU temperatures based on OS
    if sys.platform.startswith('linux'):
        # Try coretemp first (most reliable)
        temps.extend(get_temperature_linux_coretemp())

        # If no coretemp found, try thermal zones
        if not temps:
            temps.extend(get_temperature_linux_thermal())

    elif sys.platform.startswith('freebsd'):
        temps.extend(get_temperature_freebsd())

    # Add temperature sensors
    if temps:
        # Prefer package temperatures; fall back to the first reading when no
        # "Package" label exists (thermal zones and FreeBSD use other labels)
        package_temps = [(t, l) for t, l in temps if 'Package' in l]
        if not package_temps:
            package_temps = temps[:1]

        # Add package temperature
        for temp, label in package_temps:
            metrics['entities'].append({
                'sensor_id': 'cpu_temperature',
                'name': 'CPU Temperature',
                'value': str(round(temp, 1)),
                'state_class': 'measurement',
                'unit_of_measurement': '°C',
                'device_class': 'temperature',
                'icon': 'mdi:thermometer',
                'attributes': {
                    'friendly_name': 'CPU Temperature',
                    'source': 'coretemp'
                }
            })

    return metrics


if __name__ == "__main__":
    # Example usage
    metrics = collect_metrics()
    print(metrics)
collectors/system_metrics.py (new file, 169 lines)
@@ -0,0 +1,169 @@
#!/usr/bin/env python3

import psutil
from datetime import datetime
from typing import Dict, Any

# Default update interval in seconds
DEFAULT_INTERVAL = 60  # 1 minute


def collect_metrics() -> Dict[str, Any]:
    """Collect system metrics and return them in the required format."""
    # Get system metrics
    boot_time = datetime.fromtimestamp(psutil.boot_time())
    load_avg = psutil.getloadavg()
    memory = psutil.virtual_memory()
    swap = psutil.swap_memory()
    cpu_percent = psutil.cpu_percent(interval=1)

    # Convert bytes to GB or TB conditionally
    def to_size(bytes_value: int):
        tb = 1024**4
        gb = 1024**3
        if bytes_value >= tb:
            return round(bytes_value / tb, 2), 'TB'
        return round(bytes_value / gb, 2), 'GB'

    return {
        "version": "1.0",
        "entities": [
            {
                "name": "Last Boot",
                "sensor_id": "last_boot",
                "state_class": "total",
                "device_class": "timestamp",
                "unit_of_measurement": "",
                "value": boot_time.astimezone().isoformat(),
                "icon": "mdi:clock-time-four",
                "attributes": {
                    "friendly_name": "Last Boot Time"
                }
            },
            {
                "name": "Load Average (15m)",
                "sensor_id": "load_15m",
                "state_class": "measurement",
                "unit_of_measurement": "",
                "device_class": "power_factor",
                "value": str(round(load_avg[2], 1)),
                "icon": "mdi:speedometer",
                "attributes": {
                    "friendly_name": "System Load (15m)"
                }
            },
            {
                "name": "Load Average (5m)",
                "sensor_id": "load_5m",
                "state_class": "measurement",
                "unit_of_measurement": "",
                "device_class": "power_factor",
                "value": str(round(load_avg[1], 1)),
                "icon": "mdi:speedometer",
                "attributes": {
                    "friendly_name": "System Load (5m)"
                }
            },
            {
                "name": "Load Average (1m)",
                "sensor_id": "load_1m",
                "state_class": "measurement",
                "unit_of_measurement": "",
                "device_class": "power_factor",
                "value": str(round(load_avg[0], 1)),
                "icon": "mdi:speedometer",
                "attributes": {
                    "friendly_name": "System Load (1m)"
                }
            },
            {
                "name": "Memory Free",
                "sensor_id": "memory_free",
                "state_class": "measurement",
                "unit_of_measurement": to_size(memory.available)[1],
                "device_class": "data_size",
                "value": str(to_size(memory.available)[0]),
                "icon": "mdi:memory",
                "attributes": {
                    "friendly_name": "Available Memory"
                }
            },
            {
                "name": "Memory Used",
                "sensor_id": "memory_used",
                "state_class": "measurement",
                "unit_of_measurement": to_size(memory.used)[1],
                "device_class": "data_size",
                "value": str(to_size(memory.used)[0]),
                "icon": "mdi:memory",
                "attributes": {
                    "friendly_name": "Used Memory"
                }
            },
            {
                "name": "Memory Usage",
                "sensor_id": "memory_usage",
                "state_class": "measurement",
                "unit_of_measurement": "%",
                "device_class": "power_factor",
                "value": str(memory.percent),
                "icon": "mdi:chart-line",
                "attributes": {
                    "friendly_name": "Memory Usage"
                }
            },
            {
                "name": "CPU Usage",
                "sensor_id": "cpu_usage",
                "state_class": "measurement",
                "unit_of_measurement": "%",
                "device_class": "power_factor",
                "value": str(cpu_percent),
                "icon": "mdi:cpu-64-bit",
                "attributes": {
                    "friendly_name": "CPU Usage"
                }
            },
            {
                "name": "Swap Free",
                "sensor_id": "swap_free",
                "state_class": "measurement",
                "unit_of_measurement": to_size(swap.free)[1],
                "device_class": "data_size",
                "value": str(to_size(swap.free)[0]),
                "icon": "mdi:harddisk",
                "attributes": {
                    "friendly_name": "Free Swap"
                }
            },
            {
                "name": "Swap Used",
                "sensor_id": "swap_used",
                "state_class": "measurement",
                "unit_of_measurement": to_size(swap.used)[1],
                "device_class": "data_size",
                "value": str(to_size(swap.used)[0]),
                "icon": "mdi:harddisk",
                "attributes": {
                    "friendly_name": "Used Swap"
                }
            },
            {
                "name": "Swap Usage",
                "sensor_id": "swap_usage",
                "state_class": "measurement",
                "unit_of_measurement": "%",
                "device_class": "power_factor",
                "value": str(swap.percent),
                "icon": "mdi:chart-line",
                "attributes": {
                    "friendly_name": "Swap Usage"
                }
            }
        ]
    }


if __name__ == "__main__":
    # Example usage
    metrics = collect_metrics()
    print(metrics)
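The conditional GB/TB conversion used by `collect_metrics` lives in a nested `to_size` helper and so cannot be imported for testing on its own. A standalone copy, reproduced here only for illustration (the collector keeps its own nested version):

```python
# Standalone copy of the nested `to_size` helper from system_metrics.py,
# shown here for illustration and testing only.
def to_size(bytes_value: int):
    tb = 1024**4
    gb = 1024**3
    if bytes_value >= tb:
        return round(bytes_value / tb, 2), 'TB'
    return round(bytes_value / gb, 2), 'GB'

print(to_size(8 * 1024**3))    # (8.0, 'GB')
print(to_size(2 * 1024**4))    # (2.0, 'TB')
print(to_size(512 * 1024**2))  # (0.5, 'GB')
```

Hoisting `to_size` to module level would make it directly testable, in line with the schema-testing task in the copilot instructions.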
collectors/zfs_pools.py (new file, 178 lines)
@@ -0,0 +1,178 @@
#!/usr/bin/env python3

import subprocess
from typing import Dict, Any, List
import json
import shutil
import os

# Default update interval in seconds
DEFAULT_INTERVAL = 300  # 5 minutes


def get_zfs_pools() -> List[Dict[str, Any]]:
    """Get information about ZFS pools."""
    try:
        # Get list of pools
        pools = subprocess.check_output(['zpool', 'list', '-H', '-o', 'name,size,alloc,free,health,readonly,dedup,altroot']).decode().strip().split('\n')

        pool_info = []
        for pool in pools:
            if not pool:  # Skip empty lines
                continue

            name, size, alloc, free, health, readonly, dedup, altroot = pool.split('\t')

            # Get detailed pool status
            status = subprocess.check_output(['zpool', 'status', name]).decode()

            # Get pool properties
            properties = subprocess.check_output(['zpool', 'get', 'all', name]).decode()

            pool_info.append({
                'name': name,
                'size': size,
                'allocated': alloc,
                'free': free,
                'health': health,
                'readonly': readonly == 'on',
                'dedup': dedup,
                'altroot': altroot,
                'status': status,
                'properties': properties
            })

        return pool_info
    except subprocess.SubprocessError as e:
        print(f"Error getting ZFS pool information: {e}")
        return []


def convert_size_to_bytes(size_str: str) -> int:
    """Convert a ZFS size string to bytes."""
    units = {
        'B': 1,
        'K': 1024,
        'M': 1024**2,
        'G': 1024**3,
        'T': 1024**4,
        'P': 1024**5
    }

    try:
        number = float(size_str[:-1])
        unit = size_str[-1].upper()
        return int(number * units[unit])
    except (ValueError, KeyError):
        return 0


def collect_metrics() -> Dict[str, Any]:
    """Collect ZFS pool metrics. Skips cleanly if ZFS is unavailable."""
    metrics = {"entities": []}

    # Check binary availability
    zpool_path = shutil.which('zpool')
    zfs_path = shutil.which('zfs')
    if not zpool_path or not zfs_path:
        # Skip gracefully when binaries are missing
        return {"entities": []}

    # Check device node (required for libzfs operations)
    if not os.path.exists('/dev/zfs'):
        # Skip if the device is not present in the container/host
        return {"entities": []}

    pools = get_zfs_pools()

    def fmt_size(bytes_value: int):
        tb = 1024**4
        gb = 1024**3
        if bytes_value >= tb:
            return round(bytes_value / tb, 2), 'TB'
        return round(bytes_value / gb, 2), 'GB'

    for pool in pools:
        # Pool health status
        metrics['entities'].append({
            'sensor_id': f'zfs_pool_{pool["name"]}_health',
            'name': f'ZFS Pool {pool["name"]} Health',
            'value': pool['health'],
            'state_class': 'measurement',
            'unit_of_measurement': '',
            'device_class': 'enum',
            'icon': 'mdi:database-check',
            'attributes': {
                'friendly_name': f'ZFS Pool {pool["name"]} Health Status',
                'readonly': pool['readonly'],
                'dedup': pool['dedup'],
                'altroot': pool['altroot']
            }
        })

        # Pool size
        size_bytes = convert_size_to_bytes(pool['size'])
        size_value, size_unit = fmt_size(size_bytes)
        metrics['entities'].append({
            'sensor_id': f'zfs_pool_{pool["name"]}_size',
            'name': f'ZFS Pool {pool["name"]} Size',
            'value': str(size_value),
            'state_class': 'measurement',
            'unit_of_measurement': size_unit,
            'device_class': 'data_size',
            'icon': 'mdi:database',
            'attributes': {
                'friendly_name': f'ZFS Pool {pool["name"]} Total Size'
            }
        })

        # Pool allocated space
        alloc_bytes = convert_size_to_bytes(pool['allocated'])
        alloc_value, alloc_unit = fmt_size(alloc_bytes)
        metrics['entities'].append({
            'sensor_id': f'zfs_pool_{pool["name"]}_allocated',
            'name': f'ZFS Pool {pool["name"]} Allocated',
            'value': str(alloc_value),
            'state_class': 'measurement',
            'unit_of_measurement': alloc_unit,
            'device_class': 'data_size',
            'icon': 'mdi:database-minus',
            'attributes': {
                'friendly_name': f'ZFS Pool {pool["name"]} Allocated Space'
            }
        })

        # Pool free space
        free_bytes = convert_size_to_bytes(pool['free'])
        free_value, free_unit = fmt_size(free_bytes)
        metrics['entities'].append({
            'sensor_id': f'zfs_pool_{pool["name"]}_free',
            'name': f'ZFS Pool {pool["name"]} Free',
            'value': str(free_value),
            'state_class': 'measurement',
            'unit_of_measurement': free_unit,
            'device_class': 'data_size',
            'icon': 'mdi:database-plus',
            'attributes': {
                'friendly_name': f'ZFS Pool {pool["name"]} Free Space'
            }
        })

        # Pool usage percentage
        usage_percent = (alloc_bytes / size_bytes * 100) if size_bytes > 0 else 0
        metrics['entities'].append({
            'sensor_id': f'zfs_pool_{pool["name"]}_usage',
            'name': f'ZFS Pool {pool["name"]} Usage',
            'value': str(round(usage_percent, 1)),
            'state_class': 'measurement',
            'unit_of_measurement': '%',
            'device_class': 'power_factor',
            'icon': 'mdi:chart-donut',
            'attributes': {
                'friendly_name': f'ZFS Pool {pool["name"]} Usage Percentage'
            }
        })

    return metrics


if __name__ == "__main__":
    # Example usage
    metrics = collect_metrics()
    print(json.dumps(metrics, indent=2))
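`convert_size_to_bytes` assumes a one-letter unit suffix and falls back to 0 for anything it cannot parse. A standalone copy showing that behavior (reproduced for illustration; the collector keeps its own version):

```python
# Standalone copy of convert_size_to_bytes from zfs_pools.py, for illustration.
def convert_size_to_bytes(size_str: str) -> int:
    units = {'B': 1, 'K': 1024, 'M': 1024**2,
             'G': 1024**3, 'T': 1024**4, 'P': 1024**5}
    try:
        number = float(size_str[:-1])     # everything before the unit letter
        unit = size_str[-1].upper()
        return int(number * units[unit])
    except (ValueError, KeyError):
        return 0                          # unparseable input

print(convert_size_to_bytes('1.5T'))  # 1649267441664
print(convert_size_to_bytes('512M'))  # 536870912
print(convert_size_to_bytes('-'))     # 0
```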
config.yaml.example (new file, 54 lines)
@@ -0,0 +1,54 @@
|
||||
# MQTT Configuration
|
||||
mqtt:
|
||||
# MQTT Broker Address (default: localhost)
|
||||
host: "localhost"
|
||||
|
||||
# MQTT Port (Default: 1883 for unencrypted, 8883 for TLS)
|
||||
port: 1883
|
||||
|
||||
# MQTT Username
|
||||
username: "your_username"
|
||||
|
||||
# MQTT Password
|
||||
password: "your_password"
|
||||
|
||||
# MQTT Client ID (will be extended with hostname)
|
||||
client_id: "system2mqtt_{hostname}"
|
||||
|
||||
# Home Assistant Discovery Prefix
|
||||
discovery_prefix: "homeassistant"
|
||||
|
||||
# MQTT State Prefix for sensors
|
||||
state_prefix: "system2mqtt"
|
||||
|
||||
# Collector Configuration
|
||||
collectors:
|
||||
# Default interval for all collectors (in seconds)
|
||||
# Used when no specific interval is defined
|
||||
default_interval: 60
|
||||
|
||||
# Specific intervals for individual collectors
|
||||
# These override the collector's default intervals
|
||||
intervals:
|
||||
# ZFS Pools are updated every 5 minutes
|
||||
zfs_pools: 300
|
||||
|
||||
# CPU Temperature is updated every 30 seconds
|
||||
cpu_temperature: 30
|
||||
|
||||
# System Metrics are updated every minute
|
||||
system_metrics: 60
|
||||
|
||||
# Notes:
|
||||
# 1. The default intervals for collectors are:
|
||||
# - zfs_pools: 300 seconds (5 minutes)
|
||||
# - cpu_temperature: 30 seconds
|
||||
# - system_metrics: 60 seconds (1 minute)
|
||||
#
|
||||
# 2. These intervals can be overridden here
|
||||
#
|
||||
# 3. If no specific interval is defined, the collector's
|
||||
# default interval will be used
|
||||
#
|
||||
# 4. If no default interval is defined in the collector,
|
||||
# the global default_interval will be used
|
||||
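The interval precedence described in the notes above can be sketched as a small standalone helper (`resolve_interval` is a hypothetical name for illustration; the real logic lives in `System2MQTT._load_collectors`):

```python
# Sketch of the interval-resolution precedence described above:
# config override > collector's DEFAULT_INTERVAL > global default_interval.
# `resolve_interval` is a hypothetical helper, not part of the codebase.

def resolve_interval(collector_name, config, module_default=None):
    collectors_cfg = config.get('collectors', {})
    override = collectors_cfg.get('intervals', {}).get(collector_name)
    if override is not None:
        return override
    if module_default is not None:
        return module_default
    return collectors_cfg.get('default_interval', 60)

config = {'collectors': {'default_interval': 60, 'intervals': {'zfs_pools': 300}}}
print(resolve_interval('zfs_pools', config, module_default=120))       # -> 300
print(resolve_interval('cpu_temperature', config, module_default=30))  # -> 30
print(resolve_interval('new_collector', config))                       # -> 60
```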
8  requirements.txt  Normal file
@@ -0,0 +1,8 @@
paho-mqtt>=2.1.0
psutil>=5.9.0
pyyaml>=6.0
black>=23.0.0
isort>=5.12.0
mypy>=1.0.0
pytest>=7.0.0
pytest-cov>=4.0.0
23  run.sh  Executable file
@@ -0,0 +1,23 @@
#!/bin/bash

# Virtual environment directory
VENV_DIR=".venv"

# Create the virtual environment if it doesn't exist
if [ ! -d "$VENV_DIR" ]; then
    echo "Creating virtual environment..."
    python3 -m venv "$VENV_DIR"
fi

# Activate the virtual environment
echo "Activating virtual environment..."
source "$VENV_DIR/bin/activate"

# Install/update dependencies
echo "Installing/updating dependencies..."
pip install --quiet --upgrade pip
pip install --quiet -r requirements.txt

# Start the service
echo "Starting system2mqtt..."
python3 system2mqtt.py
1  src/system2mqtt/main.py  Normal file
@@ -0,0 +1 @@
from system2mqtt import main as main
326  system2mqtt.py  Normal file
@@ -0,0 +1,326 @@
#!/usr/bin/env python3

import json
import os
import socket
import sys
import time
import yaml
import platform
import asyncio
import paho.mqtt.client as mqtt
from typing import Dict, Any, List
import importlib.util
import glob
from datetime import datetime, timedelta


# Default configuration values used when the config file or env vars are missing
CONFIG_DEFAULTS = {
    'mqtt': {
        'host': 'localhost',
        'port': 1883,
        'username': None,
        'password': None,
        'client_id': 'system2mqtt_{hostname}',
        'discovery_prefix': 'homeassistant'
    },
    'collectors': {
        'default_interval': 60,
        'intervals': {}
    }
}

class System2MQTT:
    def __init__(self, config_path: str = "config.yaml"):
        self.config = self._load_config(config_path)
        self.hostname = socket.gethostname()
        self.client = None  # paho MQTT client, initialized in connect()
        self.connected = False
        self.device_info = self._get_device_info()
        self.collectors = self._load_collectors()
        self.last_run = {}  # Stores the timestamp of each collector's last run

    def _load_config(self, config_path: str) -> Dict[str, Any]:
        """Load configuration from a YAML file, applying defaults and environment overrides.

        Precedence: CONFIG_DEFAULTS < config file < environment variables
        """
        # Determine config path: the env var overrides the parameter
        env_path = os.environ.get('SYSTEM2MQTT_CONFIG')
        if env_path:
            config_path = env_path

        # Start from a copy of the defaults so that later updates
        # never mutate CONFIG_DEFAULTS itself
        config = {
            'mqtt': dict(CONFIG_DEFAULTS['mqtt']),
            'collectors': {
                'default_interval': CONFIG_DEFAULTS['collectors']['default_interval'],
                'intervals': dict(CONFIG_DEFAULTS['collectors']['intervals']),
            },
        }

        # Try loading YAML if present
        if os.path.exists(config_path):
            try:
                with open(config_path, 'r') as f:
                    loaded = yaml.safe_load(f) or {}
                # Merge the loaded config into the defaults
                # (a per-section shallow merge is enough for our config shape)
                for k, v in loaded.items():
                    if isinstance(v, dict) and k in config:
                        config[k].update(v)
                    else:
                        config[k] = v
            except Exception as e:
                print(f"Warning: failed to load config file {config_path}: {e}")
                print("Proceeding with defaults and environment overrides.")
        else:
            print(f"Config file '{config_path}' not found; using defaults and environment variables if set.")

        # Ensure the necessary sub-keys exist
        config.setdefault('mqtt', dict(CONFIG_DEFAULTS['mqtt']))
        config.setdefault('collectors', dict(CONFIG_DEFAULTS['collectors']))
        config['collectors'].setdefault('intervals', {})

        # Apply environment variable overrides
        self._merge_env_overrides(config)

        return config

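A minimal standalone sketch of the per-section shallow merge performed in `_load_config` (toy values, not the project's real defaults):

```python
# Top-level dict sections from the loaded file update the defaults key-by-key,
# so keys omitted from the file keep their default values.
defaults = {'mqtt': {'host': 'localhost', 'port': 1883}, 'collectors': {'default_interval': 60}}
loaded = {'mqtt': {'host': 'broker.lan'}}

config = {k: dict(v) for k, v in defaults.items()}
for k, v in loaded.items():
    if isinstance(v, dict) and k in config:
        config[k].update(v)
    else:
        config[k] = v

print(config['mqtt'])  # {'host': 'broker.lan', 'port': 1883}
```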
    def _merge_env_overrides(self, config: Dict[str, Any]):
        """Merge environment variable overrides into the config dict.

        Recognized env vars (examples): MQTT_HOST, MQTT_PORT, MQTT_USERNAME, MQTT_PASSWORD,
        MQTT_CLIENT_ID, MQTT_DISCOVERY_PREFIX, COLLECTORS_DEFAULT_INTERVAL, COLLECTOR_<NAME>_INTERVAL
        """
        # MQTT overrides
        if 'MQTT_HOST' in os.environ:
            config['mqtt']['host'] = os.environ['MQTT_HOST']
        if 'MQTT_PORT' in os.environ:
            try:
                config['mqtt']['port'] = int(os.environ['MQTT_PORT'])
            except ValueError:
                print("Warning: MQTT_PORT is not an integer; ignoring env override")
        if 'MQTT_USERNAME' in os.environ:
            config['mqtt']['username'] = os.environ['MQTT_USERNAME']
        if 'MQTT_PASSWORD' in os.environ:
            config['mqtt']['password'] = os.environ['MQTT_PASSWORD']
        if 'MQTT_CLIENT_ID' in os.environ:
            config['mqtt']['client_id'] = os.environ['MQTT_CLIENT_ID']
        if 'MQTT_DISCOVERY_PREFIX' in os.environ:
            config['mqtt']['discovery_prefix'] = os.environ['MQTT_DISCOVERY_PREFIX']

        # Collectors default interval
        if 'COLLECTORS_DEFAULT_INTERVAL' in os.environ:
            try:
                config['collectors']['default_interval'] = int(os.environ['COLLECTORS_DEFAULT_INTERVAL'])
            except ValueError:
                print("Warning: COLLECTORS_DEFAULT_INTERVAL is not an integer; ignoring env override")

        # Per-collector overrides
        for key, val in os.environ.items():
            if key.startswith('COLLECTOR_') and key.endswith('_INTERVAL'):
                # Example: COLLECTOR_system_metrics_INTERVAL
                parts = key.split('_')
                if len(parts) >= 3:
                    name = '_'.join(parts[1:-1])
                    try:
                        config['collectors']['intervals'][name] = int(val)
                    except ValueError:
                        print(f"Warning: {key} must be an integer; ignoring")

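A quick standalone check of the `COLLECTOR_<NAME>_INTERVAL` name parsing used above: everything between the first and last underscore-separated token is the collector name, so underscores inside the name survive.

```python
# The env var name parsing keeps underscores inside the collector name.
key = 'COLLECTOR_system_metrics_INTERVAL'
parts = key.split('_')
name = '_'.join(parts[1:-1])
print(name)  # system_metrics
```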
    def _setup_mqtt_client(self) -> mqtt.Client:
        """Set up the paho-mqtt client from the configuration (callback API v2 when available)."""
        client_id = self.config['mqtt'].get('client_id', 'system2mqtt_{hostname}').format(hostname=self.hostname)
        # Prefer callback API v2 to avoid deprecation warnings; fall back on older
        # paho releases where CallbackAPIVersion does not exist
        try:
            client = mqtt.Client(callback_api_version=mqtt.CallbackAPIVersion.VERSION2, client_id=client_id)
        except (TypeError, AttributeError):
            client = mqtt.Client(client_id=client_id)
        username = self.config['mqtt'].get('username')
        password = self.config['mqtt'].get('password')
        if username or password:
            client.username_pw_set(username, password)
        client.on_connect = self._on_connect
        client.on_disconnect = self._on_disconnect
        return client

    def _on_connect(self, client, userdata, flags, rc, properties=None):
        """Callback invoked when the broker connection is established (paho)."""
        try:
            rc_val = int(rc)
        except Exception:
            rc_val = 0
        if rc_val == 0:
            print("Connected to MQTT broker")
            self.connected = True
        else:
            print(f"Failed to connect to MQTT broker with code: {rc_val}")
            self.connected = False

    def _on_disconnect(self, client, userdata, rc, reason_code=None, properties=None):
        """Callback invoked on disconnect (compatible with paho callback API v2)."""
        print("Disconnected from MQTT broker")
        self.connected = False

    def _get_device_info(self) -> Dict[str, Any]:
        """Get device information for Home Assistant."""
        return {
            "identifiers": [f"system2mqtt_{self.hostname}"],
            "name": f"System {self.hostname}",
            "model": platform.machine(),
            "manufacturer": platform.system()
        }

    def _get_unique_id(self, sensor_id: str) -> str:
        """Generate a unique_id from a sensor_id."""
        return f"system2mqtt_{self.hostname}_{sensor_id}"

    def _get_state_topic(self, sensor_id: str) -> str:
        """Generate the state topic for a sensor_id."""
        return f"system2mqtt/{self.hostname}/{sensor_id}/state"

    def _load_collectors(self) -> List[Dict[str, Any]]:
        """Load all collector modules from the collectors directory."""
        collectors = []
        collector_dir = os.path.join(os.path.dirname(__file__), 'collectors')

        # Find all Python files in the collectors directory
        collector_files = glob.glob(os.path.join(collector_dir, '*.py'))

        for collector_file in collector_files:
            if collector_file.endswith('__init__.py'):
                continue

            module_name = os.path.splitext(os.path.basename(collector_file))[0]
            spec = importlib.util.spec_from_file_location(module_name, collector_file)
            module = importlib.util.module_from_spec(spec)
            spec.loader.exec_module(module)

            if hasattr(module, 'collect_metrics'):
                # Interval precedence: config override > module DEFAULT_INTERVAL > global default
                default_interval = getattr(module, 'DEFAULT_INTERVAL', self.config['collectors']['default_interval'])
                interval = self.config['collectors']['intervals'].get(module_name, default_interval)

                collectors.append({
                    'module': module,
                    'name': module_name,
                    'interval': interval
                })
                print(f"Loaded collector: {module_name} (interval: {interval}s)")

        return collectors

    async def process_collector_data(self, data: Dict[str, Any]):
        """Process data from collectors and publish it to MQTT."""
        if not self.connected:
            print("Not connected to MQTT broker")
            return

        # Publish discovery, availability, state, and attribute messages for each entity
        for entity in data['entities']:
            sensor_id = entity['sensor_id']
            unique_id = self._get_unique_id(sensor_id)
            state_topic = self._get_state_topic(sensor_id)
            discovery_topic = f"{self.config['mqtt']['discovery_prefix']}/sensor/{unique_id}/config"
            attributes_topic = f"{state_topic}/attributes"
            availability_topic = f"system2mqtt/{self.hostname}/status"

            # Prepare the Home Assistant discovery message
            discovery_msg = {
                "name": entity['name'],
                "unique_id": unique_id,
                "state_topic": state_topic,
                "state_class": entity['state_class'],
                "unit_of_measurement": entity['unit_of_measurement'],
                "device_class": entity['device_class'],
                "device": self.device_info,
                "json_attributes_topic": attributes_topic,
                "availability_topic": availability_topic,
                "payload_available": "online",
                "payload_not_available": "offline"
            }

            # Include an icon if the collector provided one
            if entity.get('icon'):
                discovery_msg["icon"] = entity['icon']

            # Publish discovery message (retained)
            self.client.publish(discovery_topic, json.dumps(discovery_msg), qos=0, retain=True)
            # Publish availability (retained)
            self.client.publish(availability_topic, "online", qos=0, retain=True)

            # Publish state
            self.client.publish(state_topic, str(entity['value']), qos=0, retain=True)

            # Publish attributes if present
            if 'attributes' in entity:
                self.client.publish(attributes_topic, json.dumps(entity['attributes']), qos=0, retain=True)

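For illustration, the topic layout and discovery payload built in `process_collector_data` look like this for a hypothetical host `myhost` and a hypothetical `sensor_id` of `cpu_usage` (both names are examples, not from the codebase):

```python
# Illustration of the topic layout and discovery payload built above,
# for a hypothetical host "myhost" and sensor_id "cpu_usage".
import json

hostname = 'myhost'
sensor_id = 'cpu_usage'
unique_id = f'system2mqtt_{hostname}_{sensor_id}'
state_topic = f'system2mqtt/{hostname}/{sensor_id}/state'
discovery_topic = f'homeassistant/sensor/{unique_id}/config'
availability_topic = f'system2mqtt/{hostname}/status'

discovery_msg = {
    'name': 'CPU Usage',
    'unique_id': unique_id,
    'state_topic': state_topic,
    'state_class': 'measurement',
    'unit_of_measurement': '%',
    'json_attributes_topic': f'{state_topic}/attributes',
    'availability_topic': availability_topic,
    'payload_available': 'online',
    'payload_not_available': 'offline',
}

print(discovery_topic)  # homeassistant/sensor/system2mqtt_myhost_cpu_usage/config
print(json.dumps(discovery_msg, indent=2))
```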
    def should_run_collector(self, collector: Dict[str, Any]) -> bool:
        """Check whether a collector is due to run based on its interval."""
        now = datetime.now()
        last_run = self.last_run.get(collector['name'])

        if last_run is None:
            return True

        interval = timedelta(seconds=collector['interval'])
        return (now - last_run) >= interval

    async def collect_and_publish(self):
        """Collect metrics from all due collectors and publish them."""
        for collector in self.collectors:
            if not self.should_run_collector(collector):
                continue

            try:
                data = collector['module'].collect_metrics()
                await self.process_collector_data(data)
                self.last_run[collector['name']] = datetime.now()
                print(f"Updated {collector['name']} metrics")
            except Exception as e:
                print(f"Error collecting metrics from {collector['name']}: {e}")

    async def connect(self):
        """Connect to the MQTT broker using paho-mqtt and wait briefly for on_connect."""
        try:
            self.client = self._setup_mqtt_client()
            self.client.connect(self.config['mqtt']['host'], self.config['mqtt']['port'])
            self.client.loop_start()
            # Wait up to 5 seconds for on_connect to fire
            for _ in range(50):
                if self.connected:
                    break
                await asyncio.sleep(0.1)
        except Exception as e:
            print(f"Error connecting to MQTT broker: {e}")
            sys.exit(1)

    async def disconnect(self):
        """Disconnect from the MQTT broker (paho)."""
        if self.client:
            self.client.loop_stop()
            self.client.disconnect()

async def async_main():
    """Async main function."""
    system2mqtt = System2MQTT()
    await system2mqtt.connect()

    try:
        # Main loop: check every second whether any collector is due to run;
        # the first iteration performs the initial collection
        while True:
            await system2mqtt.collect_and_publish()
            await asyncio.sleep(1)

    except KeyboardInterrupt:
        print("\nShutting down...")
    finally:
        await system2mqtt.disconnect()


if __name__ == "__main__":
    asyncio.run(async_main())
26  system2mqtt.service  Normal file
@@ -0,0 +1,26 @@
[Unit]
Description=System2MQTT - System Metrics to Home Assistant via MQTT
After=network-online.target mosquitto.service
Wants=network-online.target

[Service]
Type=simple
User=system2mqtt
Group=system2mqtt
SupplementaryGroups=adm systemd-journal
WorkingDirectory=/opt/system2mqtt
ExecStart=/opt/system2mqtt/run.sh
Restart=always
RestartSec=10
StandardOutput=journal
StandardError=journal

# Environment can be overridden via a drop-in or .env file
EnvironmentFile=-/opt/system2mqtt/.env

# Security hardening
PrivateTmp=yes
NoNewPrivileges=yes

[Install]
WantedBy=multi-user.target