lowbit

Zero-bloat, idiomatic Go libraries built for speed and scale.

cooper

A Go package for handling the HTTP/1.1 `101 Switching Protocols` handshake on both ends of a connection, returning a raw `net.Conn`.

http · tcp · web
$ go get lowbit.dev/cooper
v0.0.0

A common problem: you need a long-lived, bidirectional connection between a client and server, but your infrastructure speaks HTTP. Load balancers, reverse proxies, monitoring tooling, all of it is built around HTTP. Starting from scratch on a raw TCP port means fighting your own stack.

HTTP has a solution for this. It's called 101 Switching Protocols, and it's been in the spec since HTTP/1.1. A client sends an upgrade request; the server agrees; the connection is handed off as a raw socket. WebSocket uses it. So does HTTP/2. So can your protocol.

The question is what you reach for to implement it.


The trap

The obvious move is a WebSocket library, even if you don't want WebSocket semantics. The handshake is the same, the framing can be stripped away, and at least something handles the HTTP negotiation. Except now you're carrying a dependency that was designed for a different protocol, with abstractions that don't fit your model, updated on someone else's schedule.

The other move is http.Hijacker, Go's built-in escape hatch from the HTTP handler layer. It's the right instinct. But using it correctly demands more precision than it looks: the 101 response has to be written by hand to the hijacked buffer, that buffer may already contain the first bytes of your protocol stream and must not be discarded, and the client side needs its own bufio-aware response parsing. None of this is hard, but it's the kind of thing that goes subtly wrong under pressure and stays invisible in testing.
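
For a sense of what that precision work looks like, here is a rough sketch of the server side done by hand with http.Hijacker (illustrative only, not Cooper's code):

func handleUpgrade(w http.ResponseWriter, r *http.Request) {
    hj, ok := w.(http.Hijacker)
    if !ok {
        http.Error(w, "upgrade unsupported", http.StatusInternalServerError)
        return
    }
    conn, buf, err := hj.Hijack()
    if err != nil {
        return
    }
    defer conn.Close()

    // The 101 must be written by hand; the ResponseWriter is gone.
    buf.WriteString("HTTP/1.1 101 Switching Protocols\r\nConnection: Upgrade\r\nUpgrade: myproto/1\r\n\r\n")
    buf.Flush()

    // buf.Reader may already hold the first bytes of the client's
    // protocol stream; reading only from conn would silently drop them.
}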


What Cooper does instead

Cooper is the implementation of that precision work, extracted once and made composable.

Server side. cooper.Hijack is an http.Handler. It validates the upgrade request, writes the 101 response, drains any buffered bytes back into the connection, and calls your handler with a clean net.Conn.

http.Handle("/stream", cooper.Hijack(func(conn net.Conn, proto string) {
    defer conn.Close()
    // conn is a raw net.Conn. Speak whatever protocol you want.
    serveMyProtocol(conn)
}, cooper.Protocols("myproto/1")))

Client side. cooper.Upgrade does the same in reverse over a connection you provide. You construct the request, set the headers, and get back a net.Conn.

conn, _ := net.Dial("tcp", "host:8080")

req, _ := http.NewRequest("GET", "http://host:8080/stream", nil)
req.Header.Set("Upgrade", "myproto/1")

raw, err := cooper.Upgrade(conn, req)
// raw is a net.Conn. The HTTP handshake is done.

Building WebSocket on top

WebSocket's handshake requires one additional step: a challenge-response exchange using Sec-WebSocket-Key and Sec-WebSocket-Accept. Cooper doesn't know what WebSocket is, but it provides the hooks.

On the server, extra headers are injected into the 101 response:

cooper.ResponseHeaders(func(r *http.Request, proto string) http.Header {
    h := http.Header{}
    h.Set("Sec-WebSocket-Accept", computeAccept(r.Header.Get("Sec-WebSocket-Key")))
    return h
})
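
computeAccept is your code, not Cooper's. RFC 6455 defines the value as the base64-encoded SHA-1 of the client key concatenated with a fixed GUID, so a sketch using crypto/sha1 and encoding/base64 is short:

// computeAccept derives the Sec-WebSocket-Accept value per RFC 6455:
// base64(SHA-1(key + fixed GUID)).
func computeAccept(key string) string {
    const guid = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"
    sum := sha1.Sum([]byte(key + guid))
    return base64.StdEncoding.EncodeToString(sum[:])
}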

On the client, the response is validated before the connection is returned:

cooper.ResponseValidator(func(req *http.Request, resp *http.Response) error {
    if resp.Header.Get("Sec-WebSocket-Accept") != computeExpected(req) {
        return errors.New("accept header mismatch")
    }
    return nil
})

A complete WebSocket implementation needs exactly these two hooks from the handshake layer. Cooper provides them and nothing else. The framing, message parsing, and ping/pong logic live in your code, where they belong.


What was not built

Cooper has no protocol registry. No connection pool. No middleware chain. No generated code. The net.Conn it returns has no wrapper that changes how reads and writes behave. The only exception is an internal prefixConn that prepends any bytes the HTTP layer had already buffered; it is transparent to the caller and exists purely to ensure nothing is lost at the handshake boundary.
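
The pattern behind such a wrapper is small enough to sketch (illustrative, not Cooper's actual type): embed the connection, and let reads drain the buffered prefix before falling through.

type prefixConn struct {
    net.Conn           // Write, Close, deadlines pass through unchanged
    r        io.Reader // io.MultiReader(buffered bytes, underlying conn)
}

func (c *prefixConn) Read(p []byte) (int, error) { return c.r.Read(p) }

func newPrefixConn(buffered []byte, conn net.Conn) net.Conn {
    return &prefixConn{
        Conn: conn,
        r:    io.MultiReader(bytes.NewReader(buffered), conn),
    }
}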

The full implementation is four files. Zero external dependencies. Every type in the public API is from the standard library.

The scope is deliberate. A package that does one thing and costs almost nothing to understand will still be working correctly long after a framework would have been replaced twice.

webassets

A fast, flexible Go package for serving static web assets with in-memory caching, on-the-fly minification, and virtual bundle support.

http · static-assets
$ go get lowbit.dev/webassets
v0.0.0

Every Go web application that serves static assets eventually makes the same discovery: http.FileServer is correct, but it is not enough. It reads from disk on every request. It does not minify. It has no concept of a bundle — a composed artifact that your application treats as a single file but your source tree stores as several. And so you reach for something bigger.

The options are mostly the same: a full asset pipeline with a configuration format, a middleware from a framework you weren't planning to use, or a Node.js build step that somehow ends up as a permanent fixture in your deployment. Each one solves the immediate problem and brings a new layer you now have to understand, update, and justify to the next person who reads the code.

The underlying needs are not complicated. Read a file once, serve it from memory after that. If it's JavaScript or CSS, make it smaller first. If several source files belong together logically, let them be requested as one.


What webassets does

webassets.New returns an http.Handler. That is the entire public surface that matters.

h := webassets.New("./web/static",
    webassets.WithCache(true),
    webassets.WithMinification(true),
)
http.Handle("/assets/", http.StripPrefix("/assets", h))

On the first request for a file, it is read from disk, optionally minified, and stored in memory. Every subsequent request gets the in-memory copy. The work of reading and minifying each file happens exactly once, even under concurrent load — a singleflight guard ensures that a hundred simultaneous cold-cache requests for the same file result in one disk read, not a hundred.

When caching is disabled, the handler delegates entirely to http.FileServer. You get all the standard HTTP semantics — ETags, If-Modified-Since, Range requests — without reimplementing them.


Bundles

A bundle is a virtual file. It does not exist on disk; it is assembled at request time from the source files you name.

webassets.WithBundle("app.bundle.js", "js/vendor.js", "js/app.js")

A request for /assets/app.bundle.js reads js/vendor.js and js/app.js in order, concatenates them, runs the result through the minifier if enabled, and caches it. The caller sees a single response. The source files stay separate on disk and in version control, where they belong.


Exclusions and protected files

Some paths should never be cached — admin panels, dynamically generated manifests, anything where stale data is worse than a disk read. WithCacheExclude removes those paths from the cache without disabling it everywhere else.

webassets.WithCacheExclude("/assets/manifest.json")

Files whose names begin with _ are not served at all. A 403 is returned regardless of whether the cache is on or off. This is a convention, not a configuration option: if a file starts with _, it is not a public asset.


Error visibility

The handler always recovers from errors and returns a safe HTTP response. A missing file is a 404. A bundle that fails to build is a 500. Nothing panics. If you want to know when these things happen — and in production you probably do — you provide a function:

webassets.WithErrorLogger(func(err error) {
    slog.Error("webassets", "err", err)
})

What was not built

There is no asset fingerprinting. No content hash appended to filenames for cache-busting. No HTTP/2 server push hints. No live reload for development. No configuration file. No CLI. No concept of environments.

Some of these may be worth building. None of them belong here. The scope of this package is the boundary between your HTTP handler and the files you want to serve — not the build pipeline that produces those files, not the CDN that fronts them, not the browser cache that holds them. Staying inside that boundary is what keeps the package auditable and its behavior unsurprising.

One external dependency ships: tdewolff/minify, vendored. Minification is not something the standard library provides, the implementation is not trivial to get right across the range of valid JavaScript and CSS, and the cost of the dependency is lower than the cost of owning the parser. It is vendored so that the version you ship is the version you chose, not whatever go get resolves to on the next clean build.

The singleflight implementation is local — forty lines of sync.Mutex and sync.WaitGroup. The standard library already provides everything it needs.
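
The pattern is worth knowing. A minimal sketch of that kind of guard (names here are illustrative, not the package's internals):

type group struct {
    mu    sync.Mutex
    calls map[string]*call
}

type call struct {
    wg  sync.WaitGroup
    val []byte
    err error
}

func (g *group) do(key string, fn func() ([]byte, error)) ([]byte, error) {
    g.mu.Lock()
    if c, ok := g.calls[key]; ok {
        g.mu.Unlock()
        c.wg.Wait() // a flight is in progress; wait and share its result
        return c.val, c.err
    }
    c := new(call)
    c.wg.Add(1)
    if g.calls == nil {
        g.calls = make(map[string]*call)
    }
    g.calls[key] = c
    g.mu.Unlock()

    c.val, c.err = fn() // first caller does the work exactly once
    c.wg.Done()

    g.mu.Lock()
    delete(g.calls, key) // later cache misses trigger a fresh read
    g.mu.Unlock()
    return c.val, c.err
}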

backstage

A standalone, zero-external-dependency Go package for managing background jobs and scheduled tasks.

concurrency · infrastructure · scheduler
$ go get lowbit.dev/backstage
v0.0.0

Every application eventually grows a background layer. Cache warming. Nightly cleanup. Email delivery. Report generation. It starts as a single goroutine launched in main, then a ticker, then two tickers, then a sync.WaitGroup that someone added to make shutdown less violent. None of it is wrong, exactly. But none of it is the thing you actually wanted to build, and now it lives in your application code, tangled up with the rest of it.

At some point someone reaches for a library. That is where the trouble usually starts.


The trap

The Go ecosystem has several background-job libraries. Most of them share a shape: a central scheduler, a registry of named jobs, middleware hooks, retry policies, priority queues, admin dashboards behind a flag nobody reads. They are frameworks in the sense that the framework is the primary concern and your jobs are a detail it manages.

The tradeoff is not just API surface. It is control. When the library owns the scheduler loop, owns the worker pool, owns the logging — you are adapting your application to its model, not the other way around. When something breaks at midnight, you are reading someone else's source code to understand what is happening to your production traffic.

There is also the external cron library, reached for because parsing "*/5 * * * *" looks tedious. It isn't. A 5-field cron parser is a few dozen lines of straightforward code. Importing a library for it means subscribing permanently to someone else's release cycle for a problem that is already solved.
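
To make that concrete, here is a sketch of expanding a single cron field into its set of allowed values (illustrative, not backstage's parser):

// parseField expands one cron field ("*", "*/5", "1-5", "3", "1,15")
// into the set of allowed values between min and max.
func parseField(field string, min, max int) (map[int]bool, error) {
    set := make(map[int]bool)
    for _, part := range strings.Split(field, ",") {
        step := 1
        if i := strings.IndexByte(part, '/'); i >= 0 {
            s, err := strconv.Atoi(part[i+1:])
            if err != nil || s <= 0 {
                return nil, fmt.Errorf("bad step %q", part)
            }
            step, part = s, part[:i]
        }
        lo, hi := min, max
        switch {
        case part == "*":
            // full range
        case strings.Contains(part, "-"):
            if _, err := fmt.Sscanf(part, "%d-%d", &lo, &hi); err != nil {
                return nil, fmt.Errorf("bad range %q", part)
            }
        default:
            n, err := strconv.Atoi(part)
            if err != nil {
                return nil, fmt.Errorf("bad value %q", part)
            }
            lo, hi = n, n
        }
        for v := lo; v <= hi; v += step {
            set[v] = true
        }
    }
    return set, nil
}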


What backstage does instead

backstage is a supervisor. It manages named worker pools and a scheduler. You register queues. You dispatch jobs. You schedule recurring tasks. You call Serve and hand it a context. When the context is cancelled, it drains in-flight work and returns.

That is the entire model.

Worker pools. A queue is a named pool of goroutines backed by a Store. The default store is an in-memory buffered channel. Dispatch is non-blocking — if the store cannot accept the job, you get an error back immediately and decide what to do with it.

sv := backstage.New("myapp")

sv.RegisterQueue("emails", backstage.QueueConfig{
    Workers: 3,
    Buffer:  100,
})

if err := sv.Dispatch("emails", backstage.Job{
    Name: "send-welcome",
    Run:  sendWelcomeEmail,
}); err != nil {
    // errors.Is(err, backstage.ErrQueueFull) — handle in your code
}

Schedules. Three built-in implementations cover the realistic surface: a fixed interval, a daily wall-clock time, and a standard 5-field cron expression. The cron parser is written in-house. No external dependency, no import of a library that exists solely to parse a string format that has not changed in forty years.

// Every 15 minutes, measured from when the task fires.
sv.Schedule(backstage.Every(15*time.Minute), backstage.Job{
    Name: "refresh-cache",
    Run:  refreshCache,
})

// Daily at 03:00 UTC.
sv.ScheduleOnQueue("reports", backstage.MustCron("0 3 * * *"), backstage.Job{
    Name: "nightly-report",
    Run:  generateReport,
})

Scheduled tasks can run directly in their own goroutine, or they can be dispatched onto a named queue — so concurrency is controlled by the queue's worker count, not by how many timers fire at once.

Retries. Each job carries its own retry policy: a count and an optional delay between attempts. A job that panics is never retried — a panic is a programmer error, not a transient failure. If the context is cancelled during a retry delay, the job stops without further attempts.

sv.Dispatch("emails", backstage.Job{
    ID:         "welcome-" + userID, // optional; used by stores for deduplication
    Name:       "send-welcome",
    Retries:    3,
    RetryDelay: 5 * time.Second,
    Run:        sendWelcomeEmail,
})

The ID field is optional. When set it appears in every log record for that execution, and custom store implementations receive the full Job value on Push — so deduplication and idempotency checks live where they belong: in the store, close to the persistence layer.

Graceful shutdown. Serve blocks until the context is cancelled, then waits up to a configurable drain timeout for in-flight workers to finish. Workers receive the same context, so long-running jobs can respect cancellation.

sv.Serve(ctx) // blocks; returns nil after draining
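
A typical wiring cancels that context on process signals; the plumbing below is standard library, with sv as constructed above:

ctx, stop := signal.NotifyContext(context.Background(), os.Interrupt, syscall.SIGTERM)
defer stop()

if err := sv.Serve(ctx); err != nil {
    log.Fatal(err)
}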

The Store interface

By default, jobs live in memory. A process restart loses them. For most background work — cache warming, cleanup sweeps, notification fanout — that is acceptable. For work that must not be lost, the queue's backing store is an interface:

type Store interface {
    Push(job Job) error
    Pop(ctx context.Context) (Job, error)
    Len() int
}

Implement those three methods — backed by a database table, a Redis list, whatever fits your system — and pass it in:

sv.RegisterQueue("billing", backstage.QueueConfig{
    Workers: 2,
    Store:   mydb.NewJobStore(db, "billing"),
})

The supervisor, scheduler, and workers are unchanged. The durability guarantee moves into your store implementation, where you control it, where you can test it, where it does not depend on how backstage evolves.
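
As a reference point, a minimal in-memory Store is little more than a channel and two selects. This sketch is illustrative; the built-in default presumably looks similar:

type chanStore struct {
    jobs chan backstage.Job
}

// Push is non-blocking: if the buffer is full, the error is returned
// to the Dispatch caller immediately.
func (s *chanStore) Push(job backstage.Job) error {
    select {
    case s.jobs <- job:
        return nil
    default:
        return backstage.ErrQueueFull
    }
}

// Pop blocks until a job arrives or the context is cancelled.
func (s *chanStore) Pop(ctx context.Context) (backstage.Job, error) {
    select {
    case job := <-s.jobs:
        return job, nil
    case <-ctx.Done():
        return backstage.Job{}, ctx.Err()
    }
}

func (s *chanStore) Len() int { return len(s.jobs) }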


What was not built

There is no priority queue. No dead-letter queue. No admin interface. No job status tracking, no history. No middleware chain. No plugin system.

All of those are real problems. They are also problems with correct answers that vary by application — a retry policy for an email queue is different from one for a billing event, and both should live in the code that understands the domain, not in a generic library that can only approximate it.

backstage handles the mechanics: goroutines, lifecycles, scheduling, context propagation, structured logging. Everything above that layer is yours.

The full implementation is nine files. Zero external dependencies. The Schedule interface is a single method. The Store interface is three. Every public type is composable with the standard library and nothing else.

stencil

A minimal Go template engine built on `html/template`. Layout wrapping, reusable partials, and per-render overrides.

html · templating · web
$ go get lowbit.dev/stencil
v0.0.0

Every Go web project eventually needs the same things: a layout that wraps each page, reusable partials for nav bars and alerts, and templates that reload in development but ship compiled in production. Go's html/template handles the rendering. It has no opinion on any of the rest.

That gap is where every Go web project eventually writes the same boilerplate.


The trap

The first instinct is to reach for a third-party template engine. Most of them introduce their own syntax, their own escaping rules, and their own lifecycle, none of which you asked for. You wanted layouts and partials; you got a framework.

The second instinct is to stay with html/template and wire it up yourself. It works. But the setup is the kind of thing that gets quietly reinvented on every project: layout composition, a partial cache, the filesystem toggle between development and production. None of it is hard. It just never feels worth extracting, until you've written it a third time.


What Stencil does instead

Stencil is that wiring, extracted once and made reusable.

Layouts. A layout is an ordinary html/template file with a single {{ yield }} call where the page content should appear. Pages render inside it by default.

<!-- layouts/base.html -->
<html>
  <body>{{ yield }}</body>
</html>

The engine is constructed once, given the template sources and a default layout:

engine := stencil.New(stencil.Config{
    Views:         views,
    Layouts:       layouts,
    Partials:      partials,
    DefaultLayout: "base",
})

engine.Render(w, "home", data)

Partials. Any partial can be rendered from inside a template with {{ partial "name" . }}. The partial is parsed on first use and cached for the lifetime of the engine — the parse cost is paid once.

{{ partial "nav" . }}
{{ partial "alert" "Something went wrong" }}

Per-render overrides. A single call can use a different layout or skip the layout entirely — useful for HTMX partial responses.

// Use a different layout for this one page
engine.Render(w, "about", data, stencil.WithLayout("minimal"))

// Return just the fragment, no wrapping layout
engine.Render(w, "table-rows", data, stencil.WithoutLayout())

The dev/prod split

Stencil accepts fs.FS for each template directory. The engine itself does not distinguish between a live filesystem and an embedded one — that decision belongs to the caller.

For development, pass os.DirFS. View and layout templates are read from disk on every render; changing a file takes effect immediately. Partials are cached by default, so set DisableCache: true to re-parse them on every render as well; with that, edits to partials also apply without any manual cache invalidation.

engine := stencil.New(stencil.Config{
    Views:         os.DirFS("views"),
    Layouts:       os.DirFS("layouts"),
    Partials:      os.DirFS("partials"),
    DefaultLayout: "base",
    DisableCache:  true,
})

For production, embed the directories and pass the sub-filesystems. Templates are compiled into the binary; no disk I/O at runtime.

//go:embed views layouts partials
var templateFS embed.FS

views, _    := fs.Sub(templateFS, "views")
layouts, _  := fs.Sub(templateFS, "layouts")
partials, _ := fs.Sub(templateFS, "partials")

engine := stencil.New(stencil.Config{
    Views:         views,
    Layouts:       layouts,
    Partials:      partials,
    DefaultLayout: "base",
})

The same engine code runs in both environments. The only difference is what gets passed in at startup.
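
Putting both together, the startup toggle can be a single branch. This is one possible wiring, not prescribed by the package; dev is an assumed flag, everything else comes from the examples above:

cfg := stencil.Config{DefaultLayout: "base"}
if dev {
    cfg.Views = os.DirFS("views")
    cfg.Layouts = os.DirFS("layouts")
    cfg.Partials = os.DirFS("partials")
    cfg.DisableCache = true
} else {
    cfg.Views, _ = fs.Sub(templateFS, "views")
    cfg.Layouts, _ = fs.Sub(templateFS, "layouts")
    cfg.Partials, _ = fs.Sub(templateFS, "partials")
}
engine := stencil.New(cfg)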


What was not built

Stencil has no template inheritance hierarchy. No slot system. No hot-reload watcher. No global template registry. Reload() exists on the engine for development use — it clears the partial cache so updated files are picked up without a process restart — but no filesystem events are monitored automatically.

The full implementation is a single file. Zero external dependencies. Every type in the public API is from the standard library or from html/template, which is part of the standard library.

The scope is deliberate. A layout engine that does exactly what html/template already does, minus the ceremony, will outlast any framework that tried to replace it.

Design Philosophy

Zero Bloat

We rely on the Go standard library. Dependencies are treated as liabilities.

High Performance

Every allocation is intentional. Built for concurrency and high throughput.

Idiomatic Go

No magic, no massive abstraction layers. Clean, predictable interfaces.

Contributing

Lowbit is open-source and forged by the community. Whether you're fixing bugs, improving docs, or proposing core features, we want your input. Please read our contribution guidelines before opening a Pull Request.

Not a framework.
Just the bits that power them.
