
This made me do a double-take. Surely you would never do this, right? It seems to be directly counter to the idea of being able to audit changes:

“Event replay: if we want to adjust a past event, for example because it was incorrect, we can just do that and rebuild the app state.”



No, that definitely happens.

There are two kinds of adjustments: an adjustment transaction (one-off), or re-interpreting what happened (systemic). The event sourcing pattern is useful in both situations.

Sometimes you need to replay events to produce a correct report, because your interpretation at the time was incorrect, or because it needs to change for some external reason.

Auditing isn't about never changing anything, but about being able to trace back and explain how you arrived at the result. You can have as many "versions" of the final state as you want, though.



The argument I've always heard for this was about issues with the code, not the events. If for a period of time you have a bug in your code, with event sourcing you can fix the bug and replay all the events to correct the current projections of state.
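
A toy sketch of that idea (all names made up): the projection is just a fold over the stored events, so fixing a buggy apply function and replaying the log rebuilds a corrected state.

    # Hypothetical sketch: app state is a fold over immutable events.
    # Fix the bug in `apply`, re-fold, and the projection is corrected.
    from functools import reduce

    events = [
        {"type": "deposited", "amount": 100},
        {"type": "withdrew", "amount": 30},
    ]

    def apply(state, event):
        if event["type"] == "deposited":
            return state + event["amount"]
        if event["type"] == "withdrew":
            return state - event["amount"]  # the buggy build added here
        return state

    def replay(events):
        return reduce(apply, events, 0)

    print(replay(events))  # 70 after the fix; the buggy code gave 130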


What if your correction renders subsequent events nonsensical?


There is a very real chance of this happening, and you have two choices.

One - bake whatever happens into your system permanently, like 99% of all apps, and disallow corrections.

Two - keep the events around so that you can check and re-check your corrections before you check in new code or data.


Instead of modifying the original (and incorrect) event, you can add a manual correction event with the info of who did it and why, and replay the events. This is how we dealt with such corrections with event sourcing.
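
Something like this, say (field names made up for illustration):

    # Hypothetical correction event: the original event stays immutable;
    # the amendment records who corrected it, why, and what it supersedes.
    correction = {
        "type": "quantity_corrected",
        "order_id": "ord-42",
        "corrects_event_id": "evt-17",  # the incorrect original event
        "new_quantity": 2,
        "corrected_by": "alice",
        "reason": "waiter keyed in 20 instead of 2",
    }
    # The projection applies quantity_corrected like any other event,
    # so replaying the full log yields the corrected state.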


But you don't need to replay in that case. You just fire the correction event and the rest is taken care of.


It would be outside of the normal exceptional cases, yes.

Like buggy data that crashes the system.

If you have the old events there, you can "measure twice, cut once": keep re-running your old events, compare the old and new results under unit-test conditions, and be absolutely sure that your history rewriting won't break anything else (see the sketch below).

It's not for just doing a refund or something.
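
Concretely, that "measure twice" check might look something like this sketch (names and the changed key are assumptions):

    # Hypothetical regression check: replay the archived events through
    # both the old and the fixed reducer, then diff the projections.
    def replay(events, apply_fn, initial):
        state = initial
        for event in events:
            state = apply_fn(state, event)
        return state

    def check_fix(archived_events, apply_old, apply_new):
        old_state = replay(archived_events, apply_old, initial={})
        new_state = replay(archived_events, apply_new, initial={})
        changed = {k for k in {**old_state, **new_state}
                   if old_state.get(k) != new_state.get(k)}
        # Only the field the fix was meant to touch should differ.
        assert changed <= {"balance"}, f"unexpected drift: {changed}"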


Yeah, that's a big NO. Events are immutable. If an event is wrong, you post an event with an amendment. Then yes, rebuild the app state.


Not speaking about their case, but I think in some cases a "versioned mutable data store" with an event log that lists updates/inserts makes more sense than an "immutable event log" like Kafka.

Consider the update_order_item_quantity event in a classic event-sourced system. It's not possible to guarantee that two waiters dispatching two such events at the same time, when the current quantity is 1, won't cause the quantity to become negative/invalid.

If the data store allowed mutability and produced an event log, it would be easy:

Instead of dispatching update_order_item_quantity, you would update the order document, specifying the current version. In the previous example, the second request would fail because it specified a stale version_id. And you still get the auditability benefits of a classic event sourcing system, because you have versions and an event log (sketched below).

This kind of architecture is trivial to implement with CouchDB and easier to maintain than Kafka. Pity it's impossible to find managed hosting for CouchDB outside of IBM.
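
A toy sketch of that version check (CouchDB implements it with each document's _rev; this in-memory store is purely illustrative):

    # Versioned mutable store: every update must cite the version it
    # last saw; a stale writer gets a conflict instead of a bad state.
    class VersionConflict(Exception):
        pass

    class VersionedStore:
        def __init__(self):
            self.docs = {}       # doc_id -> (version, doc)
            self.event_log = []  # auditable log of accepted updates

        def update(self, doc_id, expected_version, new_doc):
            version, _ = self.docs.get(doc_id, (0, None))
            if version != expected_version:
                raise VersionConflict(f"stale version {expected_version}")
            self.docs[doc_id] = (version + 1, new_doc)
            self.event_log.append((doc_id, version + 1, new_doc))

    store = VersionedStore()
    store.update("order-1", 0, {"item": "soup", "quantity": 1})
    # Both waiters read version 1; the second write is rejected.
    store.update("order-1", 1, {"item": "soup", "quantity": 0})
    store.update("order-1", 1, {"item": "soup", "quantity": 0})  # VersionConflict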


Any modern DB with a WAL (write ahead log) is an immutable event system, where the events are the DB primitives (insert, update, delete...).

When you construct your own event system you are constructing a DB with your own primitives (deposit, withdraw, transfer, apply monthly interest...).

You have to figure out your transaction semantics. For example, how to reject invalid events.
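
For example (a sketch with made-up primitives), a command can be validated against the current state before its event is appended, so an invalid withdrawal never enters the log:

    # Hypothetical transaction semantics for homegrown primitives:
    # reject the command up front; only valid events get appended.
    def decide(state, command):
        if command["type"] == "withdraw":
            if command["amount"] > state["balance"]:
                raise ValueError("insufficient funds: command rejected")
            return {"type": "withdrew", "amount": command["amount"]}
        if command["type"] == "deposit":
            return {"type": "deposited", "amount": command["amount"]}
        raise ValueError(f"unknown command: {command['type']}")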


> Any modern DB with a WAL (write ahead log) is an immutable event system, where the events are the DB primitives (insert, update, delete...).

Agreed. I just wish that, apart from the WAL, they also had versioning as a first-class feature, and that their update API required clients to pass the version they last saw, to prevent inconsistencies.


On most SQL databases, you can put CHECK constraints on columns so that the database rejects invalid events. But this is controversial, as people don't like putting logic in the DB.
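
A minimal illustration, driving SQLite from Python (schema made up):

    # The database itself rejects the invalid "event": the CHECK
    # constraint keeps quantity from ever going negative.
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("""
        CREATE TABLE order_items (
            order_id TEXT,
            item     TEXT,
            quantity INTEGER CHECK (quantity >= 0)
        )
    """)
    db.execute("INSERT INTO order_items VALUES ('ord-1', 'soup', 1)")
    try:
        db.execute("UPDATE order_items SET quantity = quantity - 2")
    except sqlite3.IntegrityError as e:
        print("rejected by the DB:", e)  # CHECK constraint failed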


CosmosDB has ETags on every document.


DBs only work because the events are artificial and nobody cares about what's written in them.

And DBs are not really CQRS because the events are artificial and don't have business data that people are interested in keeping.


The big caveat here is GDPR and other privacy laws. In some cases you need the ability to scrub the event store completely of PII for legal reasons, even if only to null the relevant fields in the events.

Without preemptive defensive coding in your aggregates (whatever you call them) this can quickly blow up in your face.


What I have read about this is: encrypt PII with a per-client key, and never store the key in the event system. When an erasure request comes in, delete the corresponding key. Now the data can no longer be decrypted for that client.
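
This is often called crypto-shredding. A sketch using the third-party cryptography package (key management is the hard part and is omitted here):

    # Crypto-shredding: PII is encrypted with a per-client key that
    # lives outside the event store; deleting the key "erases" the data.
    from cryptography.fernet import Fernet

    key_store = {"client-7": Fernet.generate_key()}  # NOT in the event log

    def encrypt_pii(client_id, pii: bytes) -> bytes:
        return Fernet(key_store[client_id]).encrypt(pii)

    event = {
        "type": "signed_up",
        "client": "client-7",
        "email": encrypt_pii("client-7", b"alice@example.com"),
    }

    # Erasure request: drop the key; the ciphertext remains unreadable.
    del key_store["client-7"]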


That's what I said too, and the answer was "No, just because it cannot be decrypted today does not mean it cannot be decrypted in the future. The data must be deleted"


That is some serious architecture to put in place before you can even start using event sourcing.


For finance, recordkeeping requirements take precedence over privacy requirements. Audit trail data must be on WORM storage and must not be scrubbable.


It's poorly phrased, but I'm not sure they meant "mutate the past". The key word is "adjust", which could mean "append a correction".


But then you wouldn't need a replay. So the author really means mutate the past.




