Tbh I've been working on a big feedback list since early this year, after finding Scour through /r/rss. The year got busy, so I haven't sent it in yet, but I'd like to soon once I polish it up.
Being able to apply custom interests to all feeds globally has been a wonderful way to run into new stuff online. I was genuinely surprised how great an addition Scour was to my RSS setup, since I'm already a long-term, experienced user with a well-curated follow list of several hundred feeds.
Feedback is extremely welcome! Feel free to email ideas to me (in any state of polish) or post them on https://feedback.scour.ing. Looking forward to hearing your suggestions!
Hey, was checking out the site and stumbled across a small bug: head to https://scour.ing/browse/feeds/popular while logged out, press the + button on a feed, and it breaks the browser back button. Happens on FF and Chrome.
TBH, _I_ was also genuinely surprised when I made the initial MVP of Scour, pointed it at HN Newest, and right away was finding great posts with only 1-3 points.
I thought a lot of good stuff was probably getting buried in the fire hose, but I had no idea how well it would actually work at finding those hidden gems for me.
Just created an account and added three interests related to what I'm working on right now; instantly found 3-4 articles that gave me actual, actionable insight on what I wanted to do.
Sounds interesting, though that durability tradeoff is not one that I’d think most people/applications want to make. When you save something to the DB, you generally want that to mean it’s been durably stored.
Are there specific applications you’re targeting where latency matters more than durability?
1. Session stores (can be reconstructed from the auth service)
2. Leaderboards/counters (recent scores/counters can be recalculated)
3. Real-time analytics/metrics (losing ~100ms of metrics is acceptable)
4. Caching layers with upstream persistence
5. High-frequency systems where latency > everything
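To make, e.g., case 2 concrete: the leaderboard is derived state, so anything dropped in a crash can be recomputed from the authoritative score events. Here's a minimal sketch of that rebuild step (hypothetical example, not FeOxDB code):

```rust
use std::collections::HashMap;

// Rebuild a "best score per player" leaderboard from the authoritative
// event log. If the KV store loses its last ~100ms of writes in a crash,
// replaying recent events restores the exact same state.
fn rebuild_leaderboard(events: &[(&str, u64)]) -> HashMap<String, u64> {
    let mut board: HashMap<String, u64> = HashMap::new();
    for &(player, score) in events {
        let best = board.entry(player.to_string()).or_insert(0);
        if score > *best {
            *best = score;
        }
    }
    board
}

fn main() {
    let events = [("alice", 120), ("bob", 90), ("alice", 150)];
    let board = rebuild_leaderboard(&events);
    assert_eq!(board["alice"], 150); // recovered from the log, not the cache
    assert_eq!(board["bob"], 90);
    println!("{:?}", board);
}
```

The point being: the KV store holds a convenience copy, and the log (or upstream service) stays the source of truth.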
I generally think that for KV stores, more use cases can accept this _slightly_ relaxed durability model than not. Of course this isn't the case for a primary DB. KV stores often handle derived data, caches, or state that can be rebuilt.
That said, for cases needing stronger durability, you can call flush_all() after critical operations - it gives you fsync-level guarantees. I'm also considering adding a "sync" or "Full ACID" mode that auto-flushes on every write for users who want strong durability.
The philosophy is: make the fast path really fast for those who need it, but provide escape hatches for stronger guarantees when needed.
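As a rough illustration of that philosophy, here's a self-contained toy sketch of the pattern (invented types, not FeOxDB's internals or API beyond the flush_all() name above): writes return as soon as they're queued, and flush_all() blocks until the background worker has drained everything queued before it.

```rust
use std::collections::HashMap;
use std::sync::mpsc::{self, Sender};
use std::thread;

// Messages to the background writer thread.
enum Msg {
    Write(String, String),
    Flush(mpsc::Sender<()>), // ack channel: reply once all prior writes applied
}

struct Store {
    mem: HashMap<String, String>, // fast in-memory view
    tx: Sender<Msg>,              // channel to the background writer
}

impl Store {
    fn new() -> (Store, thread::JoinHandle<Vec<(String, String)>>) {
        let (tx, rx) = mpsc::channel();
        let worker = thread::spawn(move || {
            let mut durable = Vec::new(); // stands in for the on-disk log
            for msg in rx {
                match msg {
                    Msg::Write(k, v) => durable.push((k, v)),
                    // Channels are FIFO, so by the time we see Flush,
                    // every write queued before it has been applied.
                    Msg::Flush(ack) => { let _ = ack.send(()); }
                }
            }
            durable
        });
        (Store { mem: HashMap::new(), tx }, worker)
    }

    // Fast path: returns as soon as the write is queued.
    fn insert(&mut self, k: &str, v: &str) {
        self.mem.insert(k.to_string(), v.to_string());
        self.tx.send(Msg::Write(k.to_string(), v.to_string())).unwrap();
    }

    fn get(&self, k: &str) -> Option<&String> {
        self.mem.get(k)
    }

    // Escape hatch: block until every queued write has been persisted.
    fn flush_all(&self) {
        let (ack_tx, ack_rx) = mpsc::channel();
        self.tx.send(Msg::Flush(ack_tx)).unwrap();
        ack_rx.recv().unwrap();
    }
}

fn main() {
    let (mut store, worker) = Store::new();
    store.insert("user:1", "alice");
    store.insert("user:2", "bob");
    store.flush_all(); // both writes are now "durable"
    drop(store);       // close the channel so the worker exits
    let durable = worker.join().unwrap();
    println!("persisted {} writes", durable.len());
}
```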
This seems to be around the durability that most databases can reach. Aside from more specialized hardware arrangements, with a single-computer, embedded database there is always a window of data loss. The durability expectation is that some in-flight window of data will be lost, but on restart it should recover to a consistent state as of the last settled operation, if at all possible.
A related question is whether the code base, when configured for higher durability, is mature enough to work as intended. Even with Rust, there needs to be some hard systems testing, and it's often not just a matter of sprinkling flushes around. Further optimization can try to close the window tighter - maybe with a transaction log - but then you obviously trade some speed for it.
When operations complete in 200ns instead of blocking for microseconds/milliseconds on fsync, you avoid thread pool exhaustion and connection queueing. Each sync operation blocks its thread until the disk confirms, tying up memory and connection slots and causing tail-latency spikes.
With FeOxDB's write-behind approach:
- Operations return immediately, threads stay available
- Background workers batch writes, amortizing sync costs across many operations
- Same hardware can handle 100x more concurrent requests
- Lower cloud bills from needing fewer instances
For desktop apps, this means your KV store doesn't tie up threads that the UI needs. For servers, it means handling more users without scaling up.
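The amortization above can be sketched in a few lines (a toy model, not FeOxDB code): the background worker drains whatever has queued up since its last pass and pays one simulated fsync for the whole batch instead of one per write.

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

// Returns (writes processed, syncs paid) after sending n writes through
// a write-behind worker that batches everything queued between syncs.
fn run(n: u64) -> (u64, u64) {
    let (tx, rx) = mpsc::channel::<u64>();

    let worker = thread::spawn(move || {
        let (mut writes, mut syncs) = (0u64, 0u64);
        // Block for the first write of each batch...
        while let Ok(first) = rx.recv() {
            let mut batch = vec![first];
            // ...then drain everything else already queued.
            while let Ok(v) = rx.try_recv() {
                batch.push(v);
            }
            writes += batch.len() as u64;
            thread::sleep(Duration::from_millis(1)); // one "fsync" per batch
            syncs += 1;
        }
        (writes, syncs)
    });

    for i in 0..n {
        tx.send(i).unwrap(); // fast path: returns immediately
    }
    drop(tx); // close the channel so the worker exits after the final batch

    worker.join().unwrap()
}

fn main() {
    let (writes, syncs) = run(10_000);
    println!("{} writes amortized over {} syncs", writes, syncs);
}
```

Because the producer never waits on the sleep, each batch typically covers many writes, which is where the amortization comes from.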
The durability tradeoff makes sense when you realize most KV workloads are derived data that can be rebuilt. Why block threads and exhaust IOPS for fsync-level durability on data that doesn't need it?
Comments like these are very motivating, so thank you!