javascript - IndexedDB transaction auto-commit behavior in edge cases


A transaction is committed when:

  • a request's success callback returns - which means that multiple requests can only be executed within the transaction's boundaries when each next request is submitted synchronously from the success callback of the previous one (see the sketch below)
  • the task that created it returns to the event loop
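
A minimal sketch of those two rules, assuming an already-opened IDBDatabase named db with a hypothetical 'items' object store: the second get stays inside the same transaction because it is placed synchronously from the first request's success callback, and the transaction auto-commits once no further requests are pending and the task returns to the event loop.

    var tx = db.transaction('items', 'readonly')
    var store = tx.objectStore('items')

    var first = store.get(1)
    first.onsuccess = function () {
      // Placed synchronously inside the success callback,
      // so it executes within the same transaction.
      var second = store.get(2)
      second.onsuccess = function () {
        // No further requests are placed here, so once this callback
        // returns the transaction is free to auto-commit.
      }
    }

    tx.oncomplete = function () { /* fires once the transaction has committed */ }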

This means that if no requests are submitted to it, the transaction is not committed until control returns to the event loop. These facts pose 2 problematic states:

  • placing a new IDB request by enqueuing a new task on the event loop queue from within the success callback of the previous request, instead of submitting the new request synchronously
    • in that case the IDB request has already been scheduled by the time the first success callback returns
      • are such asynchronous requests executed within the single initial transaction? This is quite essential if you want to implement result pulling with back-pressure, where the consumer gives feedback in the form of a future that is ready to consume the response
  • creating a readwrite transaction, placing no requests against it, and creating another one before returning to the event loop (see the sketch after the quoted paragraph below)
    • does creating the new one implicitly commit the previous transaction? If not, serious write lock starvation might occur, because:

If multiple "readwrite" transactions are attempting to access the same object store (i.e. if they have overlapping scope), the transaction that was created first must be the transaction that gets access to the object store first. Due to the requirements in the previous paragraph, this means that it is the only transaction that has access to the object store until that transaction is finished.
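
To make the second problematic state concrete, here is a minimal sketch (same hypothetical db and 'items' store as above): tx1 acquires the write lock on the store first, so tx2's request cannot start until tx1 finishes - which, absent any implicit commit triggered by creating tx2, only happens via tx1's auto-commit when control returns to the event loop.

    // Two overlapping readwrite transactions created in the same task.
    var tx1 = db.transaction('items', 'readwrite') // no requests placed against it
    var tx2 = db.transaction('items', 'readwrite')

    // Queued behind tx1's lock on 'items' (store assumed to use keyPath 'id');
    // it can only run once tx1 finishes.
    tx2.objectStore('items').put({ id: 1, value: 'x' })

    tx1.oncomplete = function () { /* tx1 auto-commits once the task returns to the event loop */ }
    tx2.oncomplete = function () { /* runs only after tx1 has released the store */ }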

Here is the example of enqueuing a new task on the event loop queue from within the success callback of the previous request - recursive request submission with back-pressure:

    function recursiveFn(key) {
      var req = store.get(key)
      req.onsuccess = function() {
        observer.onNext(req.result).onSuccess(function(ack) {
          recursiveFn(nextKey)
        })
      }
    }

    // observer#onNext returns a Future[Ack]; Ack is either Continue or Cancel

Now, can onsuccess or onNext do a setTimeout(0) or not, and still make the whole thing part of one transaction?
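
For reference, a sketch of the setTimeout(0) variant being asked about, reusing the hypothetical store and nextKey from the snippet above: by the commit rules quoted earlier, the timeout callback runs as a fresh task after control has returned to the event loop, so the original transaction has already auto-committed and the late get() is expected to fail (typically with a TransactionInactiveError) rather than join that transaction.

    function recursiveFnViaTimeout(key) {
      var req = store.get(key)
      req.onsuccess = function () {
        setTimeout(function () {
          // A new event-loop task: the transaction that served `req` has
          // auto-committed by now, so this request cannot join it and is
          // expected to throw (typically a TransactionInactiveError).
          store.get(nextKey)
        }, 0)
      }
    }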

Bonus question:

I think readonly transactions are exposed to the consumer/user because it would be hard to detect the end of a batch read if you recursively submit new requests from the success callback of the previous one, right? Otherwise I don't see any other reason for them to be exposed to the user, right?

I'm not sure I understand the question, but I'll offer an answer on whether you can safely use IDB transaction events to move your state machine.

Yes and no. Yes in theory, no in practice.

I think you understand transaction lifetime, but to rehash:

The lifetime of a transaction lasts as long as it's referenced: it's "active" as long as it's being referenced, after which it is said to be "finished" and the transaction is committed.

In theory, oncomplete should fire whenever a transaction successfully commits. There's a useful tip in the spec on this that suggests what to listen for:

To determine if a transaction has completed successfully, listen to the transaction's complete event rather than the IDBObjectStore.add request's success event, because the transaction may still fail after the success event fires.

To safely use this mechanism, be sure to watch for non-success events, including onblocked and onabort, as well.
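
A minimal sketch of watching the transaction-level events rather than per-request success, assuming a hypothetical db, 'items' store and value (note that onblocked belongs to the open or deleteDatabase request, not to the transaction itself):

    var tx = db.transaction('items', 'readwrite')
    tx.objectStore('items').add(value) // value assumed to carry its own in-line key

    tx.oncomplete = function () { /* the write has really committed */ }
    tx.onerror = function (e) { /* a request failed; by default the transaction then aborts */ }
    tx.onabort = function () { /* rolled back, e.g. after an error, quota failure or explicit abort() */ }

    // onblocked fires on the open request during a version change, not on a transaction
    // ('mydb' and the version number are hypothetical):
    var openReq = indexedDB.open('mydb', 2)
    openReq.onblocked = function () { /* another connection still holds an older version open */ }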

Practically speaking, I've found transactions to be flaky when long-lived or done consecutively in batches (as you've noted in your IDB comment). I'm not engineering apps to require this tricky behavior because, no matter how the spec says it should behave, I'm seeing wonky transactions in both Firefox and Chromium (but Blink, interestingly) when multiple tabs are open.

I spent many weeks rewriting dash to reuse transactions for the supposed performance gains. In the end it did not pass basic write tests and I was forced to abandon simultaneous/queued/consecutive transactions and rewrite once again. This time I picked a one-transaction-at-a-time model, which is slower but, for me, more reliable (and I'd suggest avoiding my lib and using YDN for bulk inserts).
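
For illustration, a rough sketch of what I mean by a one-transaction-at-a-time write queue (not dash's or YDN's actual implementation): each write waits for the previous transaction to complete or abort before the next transaction is opened.

    var queue = Promise.resolve()

    function queuedPut(db, storeName, value) {
      var result = queue.then(function () {
        return new Promise(function (resolve, reject) {
          var tx = db.transaction(storeName, 'readwrite')
          tx.objectStore(storeName).put(value)
          tx.oncomplete = function () { resolve() }
          tx.onabort = function () { reject(tx.error) }
        })
      })
      // Keep the chain alive even if one write fails; callers still see the
      // error on the promise returned to them.
      queue = result.catch(function () {})
      return result
    }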

I'm not sure of your application requirements, but in my humble opinion tying I/O into your event loop seems like a disastrous idea. If I needed an event loop as I understand the term, I'd use requestAnimationFrame() and throttle that callback if I needed fewer ticks than one per ~33 milliseconds.
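
A minimal sketch of that approach, assuming a hypothetical per-tick callback named tick(): requestAnimationFrame drives the loop, and the timestamp check throttles it to roughly one tick per ~33 ms (about every other frame at 60 Hz).

    var last = 0

    function loop(now) {
      // `now` is the high-resolution timestamp passed by requestAnimationFrame.
      if (now - last >= 33) {
        last = now
        tick(now) // hypothetical per-tick work
      }
      requestAnimationFrame(loop)
    }

    requestAnimationFrame(loop)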

