One Dashboard To Rule Your Servers: Grafana + Prometheus for Proxmox, KVM, VPS and Dedicated Boxes

If you have a mix of Proxmox nodes and external servers, getting a clean, truthful view of CPU, memory and disk is strangely hard. Proxmox’s built-in charts blur the line between free, cached and used memory, which makes resource planning awkward. I also needed email alerts for low disk space and high CPU or memory, after learning the hard way that a full disk can freeze VMs and turn recovery into a risky dance. Finally, I wanted every node in one place: not just the Proxmox hosts, but also remote KVM machines, VPSs and dedicated servers from various providers.
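
To make the disk check concrete, here’s a minimal sketch of the query everything else builds on: a PromQL expression sent to the Prometheus HTTP API. The server URL is a placeholder, and it assumes node_exporter metrics are already being scraped from every box.

```python
# A minimal sketch, not the full dashboard: ask Prometheus which filesystems
# are below 15% free across every scraped node. Assumes node_exporter metrics
# are already being collected; the URL below is a placeholder for your server.
import requests

PROMETHEUS_URL = "http://prometheus.example.lan:9090"

# PromQL: free space as a fraction of total, per mountpoint, on every node.
query = (
    'node_filesystem_avail_bytes{fstype!~"tmpfs|overlay"}'
    ' / node_filesystem_size_bytes{fstype!~"tmpfs|overlay"} < 0.15'
)

resp = requests.get(
    f"{PROMETHEUS_URL}/api/v1/query", params={"query": query}, timeout=10
)
resp.raise_for_status()

for series in resp.json()["data"]["result"]:
    labels = series["metric"]
    free = float(series["value"][1])
    print(f'{labels.get("instance")} {labels.get("mountpoint")}: {free:.1%} free')
```

The same expression, dropped into a Prometheus alerting rule and routed through Alertmanager’s email receiver, is the general shape of the low-disk alert described above.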

LiteLLM at Home: One Endpoint, Real Budgets, Zero Surprises

I’ve been tinkering with LiteLLM at home, and pairing it with my Open WebUI setup has unlocked something surprisingly practical: real budgets I can enforce automatically so there are no surprise bills. LiteLLM gives me one endpoint for many providers, per-key rate limits that fit each person in my household, and a clean way to bring local models into the same flow. In this post I break down how it works, why it matters for home-brew projects or small teams, and how to deploy it with Docker, end to end. I’ll also show simple recipes for monthly caps, per-key tokens-per-minute or requests-per-minute, and routing that prefers cheap or local models first.
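
As a taste of the budget piece, here’s a rough sketch of minting a capped key through the proxy’s key-generation endpoint. The URL, master key, alias and model names are all placeholders, and field names can vary between LiteLLM versions, so treat it as the shape of the idea rather than copy-paste config.

```python
# A minimal sketch: create one API key per household member with a monthly
# spend cap and per-key rate limits. Assumes a LiteLLM proxy is already
# running locally; every value below is a placeholder.
import requests

LITELLM_URL = "http://localhost:4000"   # proxy address (placeholder)
MASTER_KEY = "sk-master-change-me"      # LITELLM_MASTER_KEY (placeholder)

payload = {
    "key_alias": "kid-laptop",                 # who this key belongs to
    "max_budget": 5.0,                         # USD cap before the key stops working
    "budget_duration": "30d",                  # reset the cap roughly monthly
    "tpm_limit": 20_000,                       # tokens per minute
    "rpm_limit": 60,                           # requests per minute
    "models": ["gpt-4o-mini", "local-llama"],  # cheap/local models only (example names)
}

resp = requests.post(
    f"{LITELLM_URL}/key/generate",
    headers={"Authorization": f"Bearer {MASTER_KEY}"},
    json=payload,
    timeout=10,
)
resp.raise_for_status()
print("hand this key to the user:", resp.json()["key"])
```

Point Open WebUI at the proxy with that key and the caps apply transparently; once the budget is spent, the proxy starts rejecting that key’s requests instead of quietly billing you.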

Open WebUI: A Pay As You Go, multi-model, family-friendly alternative to ChatGPT Plus

Open WebUI is an extensible, self-hosted interface for large language models that feels familiar if you have used ChatGPT, yet it gives you control over models, costs, and access. I have been thinking about Open WebUI as a self-hosted alternative on a pay-as-you-go model versus paying …

Mastering AI Prompt Writing

Writing effective prompts for AI is less about magical keywords and more about clear communication. Whether you’re creating documentation, training a colleague, or automating tasks, the way you frame your request to an AI determines the quality of the result. In this extended guide, we’ll go step-by-step through prompt structures, metaphors that make the concept click, workplace training scenarios, and a library of examples you can use immediately.
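
To give one structure up front, in the spirit of what the guide covers, here’s a small, self-contained template: role, context, task, constraints and output format. The wording is illustrative, not a pattern lifted from the post.

```python
# An illustrative prompt skeleton: role, context, task, constraints, format.
# The exact wording is an example, not a recommendation from the guide.
PROMPT_TEMPLATE = """\
Role: You are a senior technical writer.
Context: {context}
Task: {task}
Constraints: Keep it under 200 words and avoid internal jargon.
Output format: A numbered list of steps.
"""

prompt = PROMPT_TEMPLATE.format(
    context="Our team is onboarding new hires to an internal deployment tool.",
    task="Write a quick-start checklist for their first deployment.",
)

# Paste the result into any chat UI, or send it through whatever
# chat-completion API you already use.
print(prompt)
```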

Proxmox for the HomeLab

Meet Proxmox VE (Virtual Environment), a free and open-source virtualization platform that combines full KVM-based virtualization and LXC containers under one seamless web interface. It’s popular among home lab enthusiasts and IT pros who want enterprise-grade tools without enterprise-level licensing fees.

This post takes a deep dive into what Proxmox is and how to set up a powerful home lab using it, from the basics of installation to more advanced configurations like VLANs, backups, and clustering.

From VIM to Cursor: Why I’m Rethinking My Code Editor After 15+ Years

Cursor AI IDE is an AI-enhanced fork of Visual Studio Code that integrates large language models directly into your coding workflow. As a long-time VIM user who actively avoids bloated tools like VS, I never expected to say this, but after using Cursor for the past month, I’m seriously considering switching full-time. The productivity gains are just that significant, especially when working with complex monorepos.

Breaking the Monolith (Rails + ReactJS)

As Ruby on Rails monoliths with embedded React views evolve, complexity tends to grow in hidden layers: models leak across those layers, JavaScript bundles bloat, and teams struggle with test sprawl and unclear boundaries. But there’s a middle ground between monolith chaos and full microservices.

By modularizing Rails backends into Engines and separating React logic into distinct, domain-scoped bundles, teams can gain clarity, testability, and long-term scalability, all while keeping the development speed of a monolith.

Model Context Protocol Servers and AI… What???

Model Context Protocol (MCP) servers are reshaping how AI systems interact with the world. Think of them as standardized “ports” that let AI tools securely and intelligently plug into other systems, whether it’s file storage, databases, productivity tools, or developer environments. As the AI ecosystem matures, these servers are becoming critical infrastructure, quietly powering some of the most advanced capabilities in tools like Claude, Copilot, and Replit’s Ghostwriter.

This post digs into the Model Context Protocol (MCP), how servers implement it, why it’s becoming the USB‑C of AI, and what it means for developers building the next generation of intelligent apps.
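
For a sense of how small a server can be, here’s a rough sketch using the official MCP Python SDK (the `mcp` package). The `search_notes` tool and its data are invented for illustration, and the SDK’s surface may shift between versions.

```python
# A minimal sketch of an MCP server: one tool, served over stdio so a client
# such as Claude Desktop can attach to it. The notes data is made up.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("notes")  # the name clients see when they connect

@mcp.tool()
def search_notes(query: str) -> list[str]:
    """Return note titles matching the query (stubbed with static data)."""
    notes = ["Grafana alert rules", "Proxmox VLAN layout", "Backup rotation"]
    return [title for title in notes if query.lower() in title.lower()]

if __name__ == "__main__":
    mcp.run()  # defaults to the stdio transport
```

The client never imports your code; it just speaks the protocol, which is exactly what makes these servers feel like ports rather than plugins.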

The Testing Pyramid: What to Test, Where, and Why It Matters

The Testing Pyramid is a widely adopted strategy that helps teams design scalable, reliable, and maintainable test suites. As systems grow in complexity, so does the need for clear testing boundaries. The pyramid has evolved to include not just unit and end-to-end tests, but also component, functional, integration, and contract tests.

This post breaks down each layer, explains how it fits into a modern development workflow, and uses a car manufacturing analogy to make it all easier to understand.
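
As a quick preview of the layering, here’s a minimal pytest sketch that puts a fast unit test at the base and a slower integration test above it; the `apply_discount` function and the SQLite table are invented for illustration.

```python
# A minimal sketch of two pyramid layers. The cart logic here is hypothetical.
import sqlite3

import pytest

def apply_discount(total: float, percent: float) -> float:
    """Pure function: the ideal target for a fast, plentiful unit test."""
    return round(total * (1 - percent / 100), 2)

# Unit test: no I/O, runs in milliseconds, lives at the base of the pyramid.
def test_apply_discount():
    assert apply_discount(100.0, 10) == 90.0

# Integration test: touches a real dependency (a throwaway SQLite database),
# so it is slower and sits higher up; keep fewer of these.
# (Register the "integration" marker in pytest.ini to silence the warning.)
@pytest.mark.integration
def test_discounted_total_persists(tmp_path):
    db = sqlite3.connect(str(tmp_path / "orders.db"))
    db.execute("CREATE TABLE orders (total REAL)")
    db.execute("INSERT INTO orders VALUES (?)", (apply_discount(100.0, 10),))
    db.commit()
    assert db.execute("SELECT total FROM orders").fetchone()[0] == 90.0
```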

Kubernetes for Production: What You Need to Know to Get Started

Kubernetes has become the industry-standard platform for deploying and managing containerized applications. Its popularity is well earned: it offers capabilities like self-healing, automated scaling, and zero-downtime deployments. But many developers hit a wall when moving from sandbox environments to real-world production setups. This post aims to simplify that jump by breaking down what Kubernetes is, how to use it, and the essential hardware and configuration needed for a reliable production deployment.
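
As a small preview of what “production-ready” means in practice, here’s a sketch built with the official Kubernetes Python client: resource requests and limits, multiple replicas, and a liveness probe. The image name, port and numbers are placeholders, not recommendations.

```python
# A minimal sketch of production guardrails: requests/limits so the scheduler
# can plan capacity, replicas for rolling updates, and a liveness probe so a
# hung pod gets restarted. All names and numbers below are placeholders.
from kubernetes import client, config

container = client.V1Container(
    name="web",
    image="registry.example.com/web:1.0",
    resources=client.V1ResourceRequirements(
        requests={"cpu": "250m", "memory": "256Mi"},  # what the pod is guaranteed
        limits={"cpu": "500m", "memory": "512Mi"},    # hard ceilings
    ),
    liveness_probe=client.V1Probe(
        http_get=client.V1HTTPGetAction(path="/healthz", port=8080),
        initial_delay_seconds=10,
        period_seconds=15,
    ),
)

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # enough copies for zero-downtime rollouts
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web"}),
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)

if __name__ == "__main__":
    config.load_kube_config()  # uses your local kubeconfig
    client.AppsV1Api().create_namespaced_deployment("default", deployment)
```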