The Evolution of LogWisp: From a Simple Script to a Flexible Log Transport System

• 4 min read
golang logging devops sse networking

LogWisp was born from a familiar DevOps frustration: the daily ritual of SSHing into multiple servers to tail and grep log files. Powerful, enterprise-grade logging platforms exist, but they often felt like overkill for my small projects and low-traffic use cases. I wanted a simple, self-contained, single-binary solution that could stream, aggregate, and filter logs with minimal fuss. This is the story of how that idea evolved into a flexible, pipeline-based log transport system.

The Initial Prototype: A Simple Streaming Solution

The first iteration of LogWisp was a minimal proof-of-concept. The goal was to monitor log files in a directory and stream new lines over HTTP to a browser. For this, Server-Sent Events (SSE) was a natural choice over WebSockets due to its simplicity, its operation over standard HTTP, and its built-in client reconnection logic.

For the networking layer, I selected high-performance Go libraries: fasthttp for its efficient HTTP/S implementation and gnet for raw TCP connections. This came with a deliberate trade-off: gnet offers excellent performance but lacks native TLS support. The decision was made to position the TCP transport for use within secure internal networks, while relying on fasthttp for encrypted, internet-facing endpoints.

An Architectural Shift: The Pipeline Model

The early prototype worked, but its monolithic design was limiting. Adding features like log filtering or sending logs to multiple destinations would have required significant refactoring. This led to the project’s most critical evolution: a complete restructuring around a pipeline-based architecture.

This new model treats log processing as a series of independent, configurable stages. Each log entry flows through a defined path, allowing for modular and flexible processing. This architectural change was a major upgrade that enabled the construction of complex logging topologies from simple, reusable components.

Model

Source(s) -> [Limiter(s)] -> [Filter(s)] -> [Formatter] -> Sink(s)

  • Sources are the inputs, such as monitoring a directory, listening on a TCP port, or accepting HTTP POST requests.
  • Limiters enforce pipeline-level rate limiting and network access control.
  • Filters apply include/exclude logic using regular expressions.
  • Formatters transform the log entry into the desired output format (e.g., raw text, JSON).
  • Sinks are the outputs, writing to the console, a file, or streaming to network clients.
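The stages above map naturally onto small Go interfaces. The sketch below is illustrative (type and method names are my own, not LogWisp's API), but it shows how entries flow through filters, a formatter, and fan out to sinks:

```go
package main

import "strings"

// A hypothetical sketch of the pipeline stages as Go interfaces.
type Entry struct {
	Line string
}

type Filter interface {
	Allow(Entry) bool
}

type Formatter interface {
	Format(Entry) string
}

type Sink interface {
	Write(string)
}

// Pipeline pushes each entry through its filters, formatter, and sinks.
type Pipeline struct {
	Filters   []Filter
	Formatter Formatter
	Sinks     []Sink
}

func (p *Pipeline) Process(e Entry) {
	for _, f := range p.Filters {
		if !f.Allow(e) {
			return // entry excluded; stop here
		}
	}
	out := p.Formatter.Format(e)
	for _, s := range p.Sinks {
		s.Write(out)
	}
}

// Minimal concrete stages for demonstration.
type substringFilter struct{ drop string }

func (f substringFilter) Allow(e Entry) bool { return !strings.Contains(e.Line, f.drop) }

type rawFormatter struct{}

func (rawFormatter) Format(e Entry) string { return e.Line }

type memSink struct{ out *[]string }

func (s memSink) Write(line string) { *s.out = append(*s.out, line) }
```

Because every stage is an interface value, composing a new topology is just wiring up a different slice of stages in configuration.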

Building a Transport System

With a flexible pipeline foundation, the focus shifted to expanding its capabilities. The introduction of http_client and tcp_client sinks was a key milestone, turning LogWisp into a true log transport system. This feature allowed one LogWisp instance to forward its processed logs to a source on another instance, enabling patterns like distributed log aggregation from edge servers to a central collector.

To make the system more robust for production use, several key operational features were added. These included a pipeline-level rate limiter to prevent log floods, network access controls for security, and configuration hot-reloading to allow for dynamic updates without downtime.
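A pipeline-level rate limiter can be as simple as a token bucket sitting in front of the filter stage. The following is a minimal sketch of that idea (names and parameters are illustrative, not LogWisp's actual API):

```go
package main

import (
	"sync"
	"time"
)

// tokenBucket is a minimal sketch of a pipeline-level rate limiter.
type tokenBucket struct {
	mu     sync.Mutex
	tokens float64
	max    float64   // burst capacity
	rate   float64   // tokens refilled per second
	last   time.Time // last refill timestamp
}

func newTokenBucket(ratePerSec, burst float64) *tokenBucket {
	return &tokenBucket{tokens: burst, max: burst, rate: ratePerSec, last: time.Now()}
}

// Allow reports whether one log entry may pass, consuming a token if so.
func (b *tokenBucket) Allow() bool {
	b.mu.Lock()
	defer b.mu.Unlock()
	now := time.Now()
	// Refill proportionally to elapsed time, capped at the burst size.
	b.tokens += now.Sub(b.last).Seconds() * b.rate
	if b.tokens > b.max {
		b.tokens = b.max
	}
	b.last = now
	if b.tokens >= 1 {
		b.tokens--
		return true
	}
	return false
}
```

Entries that fail `Allow` can be dropped or counted, which keeps a misbehaving source from flooding every sink downstream.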

Hardening for Security

As more networking features and capabilities were added, network security became a priority. The initial implementation added basic authentication for HTTP endpoints, along with helper commands (logwisp auth and logwisp tls) to simplify the generation of credentials and self-signed certificates.

A major security enhancement was replacing the initial password hashing mechanism with Argon2id. Interestingly, this transition highlighted a challenge with modern development tools: most AI code generators I used were trained predominantly on bcrypt examples, which made generating correct Argon2id code more difficult than expected.

To address the unencrypted nature of the TCP transport, Argon2-SCRAM-SHA256 was implemented. This challenge-response mechanism provides strong authentication over plaintext channels without exposing credentials, making the TCP source and sink suitable for trusted environments. While it does not prevent Man-in-the-Middle (MITM) attacks, it effectively blocks unauthorized clients from initiating a connection—the primary goal in a secure internal network where encryption is handled at a different layer.
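The core idea of the challenge-response exchange can be shown with the standard library alone. The real mechanism derives the key with Argon2 and follows the SCRAM proof exchange (RFC 5802); this simplified sketch only demonstrates the essential property that the password-derived key never crosses the wire.

```go
package main

import (
	"crypto/hmac"
	"crypto/rand"
	"crypto/sha256"
)

// serverChallenge returns a random nonce the server sends to the client.
func serverChallenge() ([]byte, error) {
	nonce := make([]byte, 16)
	_, err := rand.Read(nonce)
	return nonce, err
}

// clientProof computes HMAC(key, nonce). Only this proof travels over
// the plaintext channel, never the key itself.
func clientProof(key, nonce []byte) []byte {
	mac := hmac.New(sha256.New, key)
	mac.Write(nonce)
	return mac.Sum(nil)
}

// verify recomputes the proof server-side and compares in constant time.
func verify(key, nonce, proof []byte) bool {
	return hmac.Equal(clientProof(key, nonce), proof)
}
```

Because each nonce is random and single-use, a captured proof cannot be replayed against a later connection attempt.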

Current State and Refinements

Recent development has focused on improving the user experience by restructuring the configuration schema and command-line interface to be more intuitive and consistent. The project remains an ongoing effort: a practical tool that has evolved significantly from its simple beginnings. It continues to improve through iterative design, staying faithful to its original goal of being a lightweight tool with minimal dependencies, favoring the standard library and selecting mature, high-performance libraries only where necessary.

Check out the full project on GitHub to see the code and explore the documentation.