While talking about transaction observers and how they relate to syncing, a larger topic of “when to sync” came up. Let’s jot down some thoughts and assumptions.
User’s point of view:
I want a magic experience - whatever I just did on device X should instantly appear on device Y
- this one’s fairly obvious, with a caveat of “fomo anxiety” (fear of missing out) - do I have the latest data? do I have it now? is this all? any more tweets?
- so on one hand, we’d like to just provide a seamless data replication experience
- on the other hand, we’d like to provide an end-user with an ability to trigger data synchronization themselves (pull to refresh, menu->settings->sync now, etc)
- in some cases, this actually works around imperfections of the underlying system that’s unable to keep up with user data changes across their device ecosystem. Providing the user with a manual way to force a sync becomes a necessary part of a functional UX - but it’s really a crutch! It’s the distinction between Firefox Sync (works most of the time, sort of, except when I have to force a sync to move stuff around) and something like Google Docs multi-user document editing
- in some cases, this “manual sync” could be a “placebo sync” - it won’t really do any work (if we’re confident we’re up to date!), but it’ll give the user assurance that all is good. This might just look like a button with some fake “syncing…” state on a timer, or it might be a debounced request to sync, or…
Application complexity point of view:
Sub-systems should be decoupled, and should interact in simple and predictable fashion over strongly typed interfaces
- For example, transaction observer shouldn’t be tightly coupled to the sync engine; sync engine should not be any more than a regular observer, which just happens to move bits over the network sometimes
- Sync is a “dirty” observer, in the sense that by reacting to transactions happening in storage, it might also dirty the storage layer. Most common example: the user added some data, the sync tx observer fired, downloaded the latest data from the server, merged it & wrote locally (causing the observer chain to re-fire!), and then uploaded the final state to the server
- But sync isn’t unique in this position - one can easily imagine application-level observers behaving similarly. E.g. not our caching layer, but the “tell me when new categories are added and I’ll populate them with default items” cadre
- And so we have a generic, adaptable “if this happened, then this happens” flow orchestration! (a rough sketch follows below)
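To make that concrete, here’s a minimal Rust sketch of the decoupling described above. The names (`TxObserver`, `TxSummary`, `SyncEngine`, `Storage`) are illustrative assumptions, not an existing API:

```rust
use std::sync::Arc;

/// Illustrative shape of a committed-transaction summary (hypothetical).
pub struct TxSummary {
    pub changed_collections: Vec<String>,
}

/// Anything interested in committed transactions implements this trait.
/// The storage layer only knows about the trait - not about syncing.
pub trait TxObserver: Send + Sync {
    fn transaction_did_commit(&self, summary: &TxSummary);
}

/// The sync engine is "just another observer" that happens to move bits
/// over the network in response to local changes.
pub struct SyncEngine;

impl TxObserver for SyncEngine {
    fn transaction_did_commit(&self, summary: &TxSummary) {
        // In reality this would schedule/debounce a sync, not block the commit path.
        println!("sync engine noticed changes in {:?}", summary.changed_collections);
    }
}

/// The storage layer just fans notifications out to registered observers.
pub struct Storage {
    observers: Vec<Arc<dyn TxObserver>>,
}

impl Storage {
    pub fn new() -> Self {
        Storage { observers: Vec::new() }
    }

    pub fn register_observer(&mut self, observer: Arc<dyn TxObserver>) {
        self.observers.push(observer);
    }

    /// Called at the end of a successful write transaction.
    pub fn notify_observers(&self, summary: &TxSummary) {
        for observer in &self.observers {
            observer.transaction_did_commit(summary);
        }
    }
}
```

The point being: `Storage` only knows about the `TxObserver` trait; the fact that one particular observer eventually talks to a server is entirely its own business.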
Client’s performance point of view:
I want to do as little work as possible
- But, what if we have an application that generates a ton of storage writes?
- Or even, what if we have a UI/problem domain which encourages a lot of storage writes?
- What if our scale of “a lot” goes up 10x, or even 10000x in some applications compared to a “typical” application? E.g. instead of a fast-clicking, quick-tapping user we now have a JS client that’s programmatically generating writes whenever accelerometer events occur?
- That is to say, while we can optimize for a certain flow and rate of events, when thinking about generic applicability we can not mandate it.
- In other words, while it’s feasible to say “please don’t use our storage/syncing layer in crazy ways”, we must still do a sensible thing once the “crazy way” inevitably occurs
- This implies batching our transaction observer notifications, debouncing our sync requests, avoiding triggering unnecessary event loops, etc (see the sketch after this list)
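One way to get there is sketched below with a plain thread and channel; the `SyncDebouncer` name and the specific policy (a quiet period plus a hard cap) are assumptions, not an existing implementation:

```rust
use std::sync::mpsc::{channel, Receiver, RecvTimeoutError, Sender};
use std::thread;
use std::time::{Duration, Instant};

/// Hypothetical debouncer: storage-change notifications come in as pings,
/// and we only ask for a sync once `quiet` has passed with no new pings -
/// but never later than `max_delay` after the first unsynced change, so a
/// constant stream of writes can't postpone syncing forever.
pub struct SyncDebouncer {
    tx: Sender<()>,
}

impl SyncDebouncer {
    pub fn new<F>(quiet: Duration, max_delay: Duration, request_sync: F) -> Self
    where
        F: Fn() + Send + 'static,
    {
        let (tx, rx): (Sender<()>, Receiver<()>) = channel();
        thread::spawn(move || loop {
            // Block until the first change since the last sync arrives.
            if rx.recv().is_err() {
                return; // all senders dropped - shut down
            }
            let first_change = Instant::now();
            // Absorb further changes until things quiet down or we hit the cap.
            loop {
                match rx.recv_timeout(quiet) {
                    Ok(()) if first_change.elapsed() < max_delay => continue,
                    Ok(()) | Err(RecvTimeoutError::Timeout) => break,
                    Err(RecvTimeoutError::Disconnected) => return,
                }
            }
            request_sync();
        });
        SyncDebouncer { tx }
    }

    /// Called from the transaction observer for every local write.
    pub fn note_change(&self) {
        let _ = self.tx.send(());
    }
}
```

A real implementation would likely hand the resulting “please sync” request off to the host/OS (see the next point of view) rather than kick off network work on its own schedule.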
Device’s point of view:
I want to consume as little energy as possible
- If the end-user application is a browser like Firefox, it’s fairly safe to assume that the user is browsing the internets, and thus the radio is quite often in a “hot” state. Although I expect that while reading a long article, the radio will fairly promptly switch itself to cheaper states
- However, if the end-user application is a note taking app, it’s safe to assume that it has no reason to cause the radio to be on for any other reason than to synchronize itself with other clients
- In the former case, it’s not too bad to assume that a sync after a write (a history visit) is relatively inexpensive in terms of power consumption, because it will operate over an already-hot device radio
- In the latter case, we can’t assume what else is happening on our device (other apps syncing in the background, or user’s spotify app streaming music, or…), and so we can’t assume that triggering a sync is going to be relatively cheap power-wise
- Thankfully, operating systems like Android recognize this problem, and batch operations together on behalf of their tenant applications
- An example of that is the alarm manager on Android, which encourages you to allow alarms to fire “approximately every X minutes”, as opposed to “exactly every X minutes” - to allow for batching!
- Similarly, Android apps that implement a SyncAdapter will request that the OS sync them, and will be told by the OS when they should go sync right now, but they depend entirely on the OS to actually schedule these events and pick the optimal time for them to occur in relation to other activity on the device
- This implies that our sync/storage layer must have generic, external tie-ins (sketched after this list):
- outgoing, from us to the application layer (which integrates with the OS): “i have stuff to sync, plz”
- incoming, from the application layer to us: “go sync right now” and maybe “you have a time window of X seconds to sync!”
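A rough Rust sketch of those two directions - the trait and method names are hypothetical, not a real interface:

```rust
use std::time::Duration;

/// Outgoing tie-in: implemented by the embedding application, which knows
/// how to talk to the OS scheduler (e.g. via a SyncAdapter on Android).
pub trait SyncRequestDelegate: Send + Sync {
    /// "i have stuff to sync, plz" - the host decides when it actually happens.
    fn sync_requested(&self);
}

/// Incoming tie-in: implemented by the storage/sync layer and called by the
/// host once the OS has decided that now is a good time.
pub trait Syncable {
    /// "Go sync right now", optionally with a time budget from the host:
    /// "you have a time window of X seconds to sync!"
    fn sync_now(&self, time_budget: Option<Duration>);
}
```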
Storage/sync, schema-agnostic, shared Rust library point of view:
I want to be able to intelligently integrate with my host’s idea of when certain events should happen, or are allowed to happen
- my previous POV ended up spilling over into this POV! So see previous point
- but the core of the idea is: there’s tension between “i know i have work to do, so i’ll just go and do it” and “i should let my environment tell me when the optimal time is to do work”
- we should strive for balance here, but err on the side of generalization when it comes to a shared, cross-platform, cross-application-domain storage layer: support immediate execution as well as intelligent outside tie-ins (a rough sketch follows)
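Putting it together, the public surface might accept both kinds of triggers. A rough sketch, again with hypothetical names:

```rust
use std::time::Duration;

/// Why a sync is happening - lets the layer support immediate execution
/// (the user asked for it) as well as host-scheduled execution.
pub enum SyncReason {
    /// The user explicitly asked: pull to refresh, menu -> sync now, etc.
    UserInitiated,
    /// The host/OS picked this moment, possibly with a time budget attached.
    Scheduled { time_budget: Option<Duration> },
}

pub struct SyncManager;

impl SyncManager {
    pub fn sync(&self, reason: SyncReason) {
        match reason {
            // The user is watching a spinner: do the work right away.
            SyncReason::UserInitiated => { /* run a full sync immediately */ }
            // The host gave us a window: plan the work to fit the budget.
            SyncReason::Scheduled { time_budget } => {
                let _ = time_budget;
            }
        }
    }
}
```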