Matthias Klein edited this page 2025-08-03 20:39:10 +00:00

📊 Monitoring & Stats

Comprehensive guide to understanding GTS-HolMirDas output, monitoring deployment health, and interpreting performance metrics.

📈 Understanding Statistics Output

Runtime Statistics Breakdown

After each processing cycle, GTS-HolMirDas displays comprehensive statistics:

📊 GTS-HolMirDas Run Statistics:
   ⏱️  Runtime: 0:02:08
   📄 Total posts processed: 32
   🌐 Current known instances: 2,385
   🆕 New instances discovered: +12
   📡 RSS feeds processed: 30
   ⚡ Posts per minute: 14.9

Metric Explanations:

| Metric | Description | Good Values | Troubleshooting |
|---|---|---|---|
| Runtime | Total processing time | 1-5 minutes | >10 min: check network/RSS feeds |
| Posts processed | New URLs added to federation | 20-100 per run | 0: all feeds duplicate/broken |
| Known instances | Total federated instances | Growing trend | Stagnant: check federation |
| New instances | Instances discovered this run | 5-30 per run | 0: no new content discovered |
| RSS feeds | Successfully processed feeds | = total configured | < total: check feed validity |
| Posts per minute | Processing throughput | 10-50 ppm | <5: performance issues |
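The throughput figure is simply posts divided by runtime in minutes. If you want to sanity-check a run yourself, a small helper (a sketch; the `H:MM:SS` format matches the sample output above) could look like:

```shell
# Recompute posts-per-minute from a runtime string and a post count.
posts_per_minute() {
  runtime="$1"; posts="$2"
  # Convert H:MM:SS to total seconds
  secs=$(echo "$runtime" | awk -F: '{ print $1*3600 + $2*60 + $3 }')
  # One decimal place, as in the stats output
  awk -v p="$posts" -v s="$secs" 'BEGIN { printf "%.1f\n", p / (s / 60) }'
}

posts_per_minute 0:02:08 32   # → 15.0
```

The sample above prints 14.9 rather than 15.0, presumably because the real runtime includes fractional seconds that the displayed `0:02:08` rounds away.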

Performance Benchmarks

Typical Performance by Deployment Size:

| Setup | Runtime | Posts/Run | New Instances/Run | Memory Usage |
|---|---|---|---|---|
| Small (10 feeds) | 30-60 s | 15-40 | 2-8 | 50-100 MB |
| Medium (30 feeds) | 1-3 min | 30-80 | 5-15 | 100-200 MB |
| Large (50+ feeds) | 3-8 min | 50-150 | 10-30 | 200-400 MB |

🔍 Health Monitoring

Healthcheck Integration

GTS-HolMirDas supports external monitoring services like Healthchecks.io:

HEALTHCHECK_URL=https://hc-ping.com/your-uuid-here

Healthcheck Behavior:

  • Success ping: sent after a successful completion
  • Failure: no ping is sent; the monitoring service detects failure via the check's timeout
  • Start ping: optionally sent when processing begins
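If you launch the tool from a wrapper script, you can also send an explicit failure ping, which GTS-HolMirDas itself does not (failures are otherwise caught only by the check's timeout). The `/start` and `/fail` URL suffixes are standard Healthchecks.io ping endpoints; `run_with_healthcheck` and the `DRY_RUN` switch below are illustrative names, not part of the tool:

```shell
# Sketch of a Healthchecks.io wrapper: a plain GET of the base URL signals
# success; the /start and /fail suffixes signal start and explicit failure.
HEALTHCHECK_URL="${HEALTHCHECK_URL:-https://hc-ping.com/your-uuid-here}"

hc_ping() {
  url="${HEALTHCHECK_URL}$1"
  if [ "${DRY_RUN:-0}" = "1" ]; then
    echo "GET $url"
  else
    # 10 s timeout, retry transient failures, never abort the run on ping errors
    curl -fsS -m 10 --retry 3 -o /dev/null "$url" || true
  fi
}

run_with_healthcheck() {
  hc_ping /start          # optional start ping
  if "$@"; then
    hc_ping ""            # success ping
  else
    hc_ping /fail         # explicit failure ping
  fi
}

# Dry run: print the pings instead of sending them
DRY_RUN=1
run_with_healthcheck true
```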

Docker Container Monitoring

Monitor resource usage with Docker stats:

# Live monitoring
docker stats gts-holmirdas

# Snapshot view
docker stats --no-stream gts-holmirdas

# Formatted output
docker stats --format "table {{.Container}}\t{{.CPUPerc}}\t{{.MemUsage}}" gts-holmirdas

Healthy Resource Ranges:

  • CPU: 5-20% during processing, <1% idle
  • Memory: 50-400MB depending on feed count
  • Network: Minimal (<10MB/run)
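The memory range above can be checked automatically by parsing the `MemUsage` column of `docker stats`. The helper below is a sketch: `mem_exceeds` takes the usage string as an argument instead of calling Docker, and the parsing assumes Docker's usual `MiB / GiB` formatting:

```shell
# Flag memory use above a threshold (in MiB). The usage string is what
#   docker stats --no-stream --format '{{.MemUsage}}' gts-holmirdas
# would print, e.g. "123.4MiB / 1.944GiB".
mem_exceeds() {
  usage="$1"; limit_mib="$2"
  echo "$usage" | awk -v lim="$limit_mib" '{
    v = $1                                     # value before the "/"
    if (v ~ /GiB/) { sub(/GiB/, "", v); v = v * 1024 }
    else { sub(/MiB/, "", v) }
    if (v + 0 > lim) print "OVER"; else print "ok"
  }'
}

mem_exceeds "123.4MiB / 1.944GiB" 500   # → ok
mem_exceeds "612MiB / 1.944GiB" 500     # → OVER
```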

Federation Growth Tracking

Track your GoToSocial federation expansion:

# Check current instance count via API
curl -H "Authorization: Bearer YOUR_TOKEN" \
  https://your-instance.com/api/v1/instance | \
  jq '.stats.domain_count'

Expected Growth Patterns:

  • Week 1: 500-1,500 instances (initial discovery)
  • Month 1: 2,000-4,000 instances (steady growth)
  • Month 3: 5,000+ instances (mature federation)
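To watch these patterns over time, you can log the count once a day and diff consecutive samples. This sketch stubs out the API call (`fetch_count` stands in for the `curl | jq` pipeline above) and appends to a simple CSV:

```shell
# Sketch: log the daily domain_count to a CSV and report the delta.
STATS_FILE="${STATS_FILE:-federation_growth.csv}"

fetch_count() {
  # Real version:
  #   curl -s -H "Authorization: Bearer $TOKEN" \
  #     https://your-instance.com/api/v1/instance | jq '.stats.domain_count'
  echo 2385
}

record_growth() {
  echo "$(date +%F),$(fetch_count)" >> "$STATS_FILE"
  # Delta between the two most recent samples (0 on the first run)
  tail -n 2 "$STATS_FILE" |
    awk -F, 'NR==1 { first=$2 } { last=$2 } END { print last - first }'
}

record_growth   # first sample prints 0
```

Run it from cron once a day and the CSV doubles as input for plotting federation growth.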

Content Discovery Analytics

RSS Feed Performance Analysis:

| Feed Type | Typical Posts/Day | Instance Discovery Quality |
|---|---|---|
| Mastodon Tags | 20-100 | High |
| Tech Blogs | 5-20 | Medium |
| Reddit RSS | 50-200 | Low |
| News Sites | 30-80 | Medium |

Performance Degradation Indicators

🚨 Warning Signs:

| Issue | Symptom | Cause | Solution |
|---|---|---|---|
| Slow processing | Runtime >10 min | Network issues, broken feeds | Check feeds, reduce count |
| Low discovery | <5 new instances/run | Poor feed quality | Review RSS sources |
| Memory growth | >500 MB sustained | Memory leak | Restart container |
| Zero posts | 0 posts over multiple runs | All duplicates/broken feeds | Add new RSS sources |

🎯 Optimization Strategies

Performance Tuning

Environment Variable Optimization:

# High-frequency, low-volume
MAX_POSTS_PER_RUN=15
SLEEP_INTERVAL=1800  # 30 minutes

# Low-frequency, high-volume  
MAX_POSTS_PER_RUN=50
SLEEP_INTERVAL=10800  # 3 hours

# Balanced approach
MAX_POSTS_PER_RUN=25
SLEEP_INTERVAL=3600  # 1 hour
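Ignoring the run time itself, these two knobs imply a rough daily ceiling: `86400 / SLEEP_INTERVAL` runs per day, each capped at `MAX_POSTS_PER_RUN`. A quick back-of-the-envelope check for the balanced profile:

```shell
# Daily throughput implied by the tuning knobs (ignores run time itself)
MAX_POSTS_PER_RUN=25
SLEEP_INTERVAL=3600   # 1 hour

runs_per_day=$((86400 / SLEEP_INTERVAL))
max_posts_per_day=$((runs_per_day * MAX_POSTS_PER_RUN))

echo "$runs_per_day runs/day, up to $max_posts_per_day posts/day"
# → 24 runs/day, up to 600 posts/day
```

The high-frequency profile works out to 48 runs/day capped at 720 posts, the low-frequency one to 8 runs/day capped at 400.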

RSS Feed Quality Management

Feed Performance Scoring:

Monitor which feeds provide the best instance discovery:

  1. High-value feeds: Mastodon/GoToSocial tag feeds
  2. Medium-value feeds: Tech community RSS
  3. Low-value feeds: General news (less federation benefit)

Feed Rotation Strategy:

  • Review monthly: Remove feeds with <5 posts/week
  • A/B test: Try new feeds for 2 weeks
  • Quality over quantity: 20 good feeds > 50 mediocre feeds
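The monthly "<5 posts/week" rule can be applied mechanically if you keep a per-feed tally. The `feed_url posts_last_week` file format below is hypothetical; adapt it to however you track feed activity:

```shell
# List feeds below the 5-posts/week pruning threshold.
# Input lines: "feed_url posts_last_week" (hypothetical tally format)
prune_candidates() {
  awk '$2 < 5 { print $1 }' "$1"
}

cat > feed_stats.txt <<'EOF'
https://mastodon.social/tags/homelab.rss 42
https://example.com/dead-feed.rss 1
https://example.com/quiet-blog.rss 3
EOF

prune_candidates feed_stats.txt
# → prints the two feeds with fewer than 5 posts last week
```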

📈 Advanced Monitoring Setup

Grafana Dashboard (Future Enhancement)

Potential metrics to track:

  • Federation growth rate
  • Post processing throughput
  • RSS feed success rates
  • Memory/CPU trends
  • Error rate monitoring

Log Analysis

Important log patterns to monitor:

# Success patterns
docker logs gts-holmirdas 2>&1 | grep "Total posts processed"

# Error patterns
docker logs gts-holmirdas 2>&1 | grep -iE "error|failed|timeout"

# Performance patterns
docker logs gts-holmirdas 2>&1 | grep "Runtime:" | tail -10
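To turn the raw `Runtime:` lines into something you can plot or alert on, a small filter (a sketch; the sample lines below are illustrative) can convert `H:MM:SS` into seconds:

```shell
# Convert "Runtime: H:MM:SS" log lines into seconds for trending
runtime_seconds() {
  grep -o 'Runtime: [0-9:]*' |
    awk -F'[ :]+' '{ print $2*3600 + $3*60 + $4 }'
}

printf '   Runtime: 0:02:08\n   Runtime: 0:03:41\n' | runtime_seconds
# → 128 and 221, one value per line
```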

🔧 Troubleshooting Metrics

When Statistics Look Wrong

Common Issues:

| Problem | Symptoms | Diagnosis | Fix |
|---|---|---|---|
| "New instances" shows the total | +2,431 instances instead of +12 | Counter reset / missing previous data | Data persistence issue; a restart fixes it |
| Zero posts consistently | 📄 Total posts processed: 0 | All URLs already processed | Normal after initial runs |
| Extreme runtime | ⏱️ Runtime: 0:45:23 | Network timeouts, broken feeds | Check feed validity |
| Memory growing | docker stats shows increasing RAM | Memory leak or large data set | Restart container |

Diagnostic Commands

# Check feed validity
curl -I https://mastodon.social/tags/homelab.rss

# Verify GoToSocial API access
curl -H "Authorization: Bearer $TOKEN" \
  https://your-instance.com/api/v1/accounts/verify_credentials

# Container resource monitoring
docker logs --tail=50 gts-holmirdas
docker exec gts-holmirdas ps aux

📋 Monitoring Checklist

Daily Monitoring

  • Check latest run statistics
  • Verify reasonable runtime (<5 minutes)
  • Confirm new posts discovered (>0)
  • Monitor Docker container health

Weekly Monitoring

  • Review federation growth trend
  • Analyze feed performance
  • Check memory usage patterns
  • Review error logs

Monthly Monitoring

  • Evaluate RSS feed quality
  • Consider feed rotation
  • Review instance discovery rate
  • Plan capacity scaling

📞 Need Help?

If your metrics don't match expected patterns, check the Troubleshooting Guide or open an issue on the repository.