# Live Logs
Production logs in one command. No VPN, no jump host, no twenty-dollar-a-seat log aggregator. Day one on the job, and you can already see what is happening.
The onboarding doc says "please check production logs." The new hire starts the clock. Install the corporate VPN client. Find the SSH key. Get added to the Google group for the jump host. Request access to the log aggregator. Wait two days for IT. By the time they can actually read a log line, the bug has moved on.
sp00ky Cloud skips the whole ritual. Install the CLI, log in with GitHub, and your prod logs are right there, in the terminal you were already in.
## Day one, already in production

A brand new engineer can read sp00ky logs about three minutes after we hand them a laptop. Install `spky`, run `spky cloud login`, type `spky cloud logs`, you are in. No VPN, no jump host, no `kubectl` context juggling. Permissions are tied to the team invite in `spky cloud team`, not to whatever half-configured IAM role someone set up last quarter.
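The whole first session fits in two commands (the install step itself varies by platform; grab the exact command from the install guide):

```shell
# Step 1: install the spky CLI — see the install guide for your platform.

# Step 2: authenticate via GitHub.
spky cloud login

# Step 3: tail production logs.
spky cloud logs
```

That is the entire setup. Access follows your team membership, so revoking someone is just removing them from the team.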
## Filter like you mean it

Tail everything or nothing. Narrow by service to just your backend, the sync layer, the database, the scheduler. Comma separate them if you want two at once. There is even a shorthand: `spooky` means scheduler plus SSP, because those are the two you almost always want together.
Need to watch two streams side by side? Pass `--split v` and you get a vertical split, or `--split h` for stacked. Looking for a specific error? `--grep` runs a real regex on the server, so your terminal is not doing the filtering at 2am.
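Putting those together, a few sketches (the exact syntax for naming services is an assumption here; `--split`, `--grep`, and the `spooky` shorthand are as described above, and the service names are illustrative):

```shell
# Narrow to one service.
spky cloud logs backend

# Two services at once, comma separated.
spky cloud logs backend,sync

# Shorthand for scheduler plus SSP.
spky cloud logs spooky

# Two streams side by side in a vertical split.
spky cloud logs backend,database --split v

# Server-side regex: only lines matching ERROR or timeout come down the wire.
spky cloud logs --grep 'ERROR|timeout'
```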
## A real TUI, not a browser tab

Pass `-i` and you get an interactive TUI browser with scroll, search, service filters, time filters, and a follow toggle. It is the log viewer you would have built on a quiet Friday, except someone already built it and it ships in your CLI.
Need history? `--since 2h` or `--since 3d` rewinds and keeps tailing live. Pin both ends with `--until` and the stream closes when the window runs out, which is the shape you want for a post-incident replay.
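In practice (the value format accepted by `--until` is an assumption; `-i` and `--since` are as documented above):

```shell
# Open the interactive TUI browser.
spky cloud logs -i

# Rewind two hours, then keep tailing live.
spky cloud logs --since 2h

# Pin both ends for a post-incident replay; the stream
# closes on its own when the window runs out.
spky cloud logs --since 3d --until 2d
```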
## Everyone sees green (or yellow, or red)
Drop a one line live status badge into your GitHub README and everyone looking at the repo knows if prod is happy. No dashboard tax.
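The badge really is one line of markdown. The URL below is a placeholder, not the real badge endpoint; copy the actual snippet for your deployment from sp00ky Cloud:

```markdown
![prod status](https://example.invalid/your-deployment/status-badge.svg)
```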
## How it stacks up
sp00ky's logs are scoped to your sp00ky deployment, on purpose. Here is how that trades off against the serious log platforms.
| What you want to do | sp00ky | `kubectl logs` | Datadog | Grafana Loki | Better Stack |
|---|---|---|---|---|---|
| New hires reading prod on day one with no VPN setup | ✓ | ✕ | ~ | ✕ | ~ |
| Tail and filter prod logs from the CLI you already use | ✓ | ✓ | ~ | ~ | ~ |
| Server side regex search across services | ✓ | ✕ | ✓ | ✓ | ✓ |
| Interactive TUI browser for logs | ✓ | ✕ | ✕ | ✕ | ✕ |
| Long term retention, compliance, SIEM integrations | ✕ | ✕ | ✓ | ✓ | ✓ |
| Dashboards, SLOs, paging integrations | ✕ | ✕ | ✓ | ✓ | ✓ |

✓ first class · ~ possible, with effort or caveats · ✕ not a goal
## When to pick something else
If you need years of retention, audit trails for compliance, SLO tracking, or a real paging rotation, pick a proper observability platform. Datadog and Better Stack are full products that do that well. Grafana Loki is the self-hosted move if you already run Grafana. sp00ky's logs are tuned for exactly one job: reading what production is doing, right now, without any setup.
Most teams end up using both. sp00ky for "what just happened," your heavyweight platform for "what happened last quarter." We will not be offended.
## Ready to build something amazing?
Full docs, install guide, and API reference — all in one place.
Read the docs