Refactor everything to async/await; most operations are now inherently asynchronous by design. This is not a programmer preference: the operations need to be async to work on a distributed cluster. That is not an opinion; this needs to get done first, even in the most current version, in order to upgrade existing users. Every API call needs to be expected to be async, even the `.plugin` chain; we can keep that chain for convenience, but I would not suggest doing so.
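As an illustration, a minimal sketch of what "every call is async" means for callers; the `db` object and its `put`/`get` methods here are hypothetical stand-ins, not the real PouchDB API:

```javascript
// Hypothetical sketch: every call returns a Promise, so callers always await.
// `db` and its methods are stand-ins for illustration, not the real PouchDB API.
const db = {
  store: new Map(),
  async put(doc) {
    this.store.set(doc._id, doc);
    return { ok: true, id: doc._id };
  },
  async get(id) {
    if (!this.store.has(id)) throw new Error('not_found');
    return this.store.get(id);
  },
};

(async () => {
  await db.put({ _id: 'a', value: 1 });
  const doc = await db.get('a');
  console.log(doc.value); // 1
})();
```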
Remove adapters; they get replaced by internal message channels and, optionally, an additional ECMAScript Proxy.
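A minimal sketch of how an ECMAScript Proxy could front a message channel, so that arbitrary method calls become `{ id, method, params }` messages instead of adapter code; the `channelClient` helper and its shape are assumptions for illustration:

```javascript
// Sketch (assumption): a Proxy that turns arbitrary method calls into
// { id, method, params } messages on a MessagePort and resolves the
// returned Promise when the matching { id, response } message arrives.
function channelClient(port) {
  let nextId = 0;
  const pending = new Map();
  port.onmessage = ({ data: { id, response } }) => {
    const resolve = pending.get(id);
    pending.delete(id);
    if (resolve) resolve(response);
  };
  return new Proxy({}, {
    get: (_target, method) => (...params) =>
      new Promise(resolve => {
        const id = nextId++;
        pending.set(id, resolve);
        port.postMessage({ id, method, params });
      }),
  });
}
```

With this in place, `client.put(doc)` or `client.destroy()` need no per-backend adapter at all; any method name is forwarded over the channel and answered asynchronously.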
New, correct build process: ESNext only. If a user runs in a legacy environment, they need to polyfill.
Split platforms via `pouchdb-platform`, which implements runtime bindings in a central place for easier maintainability.
New versioning based on the dependencies and their hashes; managing our own version references always leads to errors.
Publish the `/lib` folder on git. Later, when all changes are stable, `/lib` becomes both the development and the published version; the development version will be runnable without additional build steps.
Replace `spark-md5` and all other crypto hashing with `hash-wasm`.
Replace LevelDB with `leveldb-wasm`.
Implement OPFS / fs as the native default storage backend; all other backends will implement the same API.
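A sketch of what "all backends implement the same API" could look like, shown with a hypothetical in-memory reference backend; the `read`/`write`/`remove` names are assumptions, not a settled interface:

```javascript
// Sketch (assumption): a common backend interface with an in-memory
// reference implementation; OPFS and fs backends would expose the same shape.
const memoryBackend = () => {
  const blocks = new Map();
  return {
    async read(key) { return blocks.has(key) ? blocks.get(key) : null; },
    async write(key, bytes) { blocks.set(key, bytes); },
    async remove(key) { blocks.delete(key); },
  };
};
```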
Fix NPM packaging: remove the `node_modules` folder, move everything into `packages/`, correct the build, and add the packages to `workspaces` in `package.json`.
Deprecation: next major
TODO:
Adoption strategy
lib + platform === pouchdb
Release pouchdb-platform
Release pouchdb-lib
Update docs once stabilised.
For contributors, some pseudo code:
```js
const Platform = import('pouchdb-platform');
const pouchDB = import('pouchdb/lib/adapter-indexdb.js')
  .then(adapter => adapter.init(Platform));

const PouchDB = new BroadcastChannel('db:nameOr-ref');
PouchDB.onmessage = pouchDB;
PouchDB.postMessage("put", .....)
PouchDB.postMessage("upsert", .....)
PouchDB.postMessage("destroy", .....)
PouchDB.postMessage("remove", .....)
PouchDB.postMessage("del", .....)
// Translates on the backend (the pouchDB) to { id: i++, method, params }
// Translates on all other connected units to { id, response }
// { response } is the response for the given task id
// A response without an id means system stderr, or a warning, or info
```
This implements the so-called append-only logging pipeline, the same thing that Kafka Streams and Couchbase XDCR do. It is in fact the low-level implementation detail of XDCR and the Chromium DevTools protocol, joined into readable ECMAScript.
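A minimal sketch of the append-only log at the heart of such a pipeline: entries get monotonically increasing sequence numbers, and subscribers can replay from any sequence and then follow live. Names here are illustrative, not an API proposal:

```javascript
// Sketch: a minimal append-only log. Entries get monotonically increasing
// sequence numbers; subscribers replay from a sequence, then follow live.
function createLog() {
  const entries = [];
  const subscribers = [];
  return {
    append(entry) {
      const stored = { seq: entries.length, ...entry };
      entries.push(stored);
      subscribers.forEach(fn => fn(stored));
      return stored.seq;
    },
    subscribe(fn, since = 0) {
      entries.slice(since).forEach(fn);
      subscribers.push(fn);
    },
  };
}
```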
The next trick: the main PouchDB instance runs in a so-called SharedWorker. In Node.js, a SharedWorker gets implemented via an origin socket, whereas in other contexts like Deno and WinterCG-style environments it gets implemented based on `new SharedWorker()`. This gives us a single context (the DB), so we can serialize the I/O in a central place. For both, the user code looks the same: `import('./sharedWorker.js')`. After that, they use `postMessage` and `onmessage` respectively; when applying more logic than a single function, they use streams.
Scalability: infinity scale

For infinite scale you need to scale horizontally, and on bigger instances you can run more than a single horizontal instance; it is the same concept as Kubernetes/Docker containers. A container has a given workload size, e.g. memory- and CPU-bound, let's say 1 vCPU and 512 MB memory; then we can scale that horizontally on the same host by running multiple instances, or we can shard onto remote hosts.
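One common way to shard like this is hash-modulo routing on the document id; a minimal sketch, where both the hash function and the routing function are illustrative assumptions:

```javascript
// Sketch (assumption): deterministic hash-modulo sharding of document ids
// across N identical instances; the hash function is illustrative.
function shardFor(id, shardCount) {
  let h = 0;
  for (const ch of id) h = (h * 31 + ch.charCodeAt(0)) >>> 0;
  return h % shardCount;
}
```

Every writer and reader routes an `id` the same way, so each shard owns a stable subset of the keyspace; growing the cluster means re-hashing, or moving to consistent hashing, which is out of scope here.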
sharedWorker
nodejs
```js
// If the socket exists, return the connection; if not, create the socket
// and return the connection. The 'connection' handler of the net server
// is the equivalent of onconnect in a SharedWorker.

const nodeOnConnectStream = new ReadableStream({
  start(c) {
    // Note: the server still needs .listen(port) to accept connections.
    require('net').createServer(socket => c.enqueue(socket));
  },
});

const sharedWorkerOnConnectStream = new ReadableStream({
  start(c) {
    globalThis.onconnect = ({ ports: [port] }) => c.enqueue(port);
  },
});

// Accepts an httpRequest as a string plus a port of type MessagePort.
const onRequest = new WritableStream({
  write([data, port]) {
    console.log(data.toString());
    const [firstLine, ...otherLines] = data.toString().split('\n');
    const [method, path, httpVersion] = firstLine.trim().split(' ');
    const headers = Object.fromEntries(
      otherLines
        .filter(_ => _)
        .map(line => line.split(':').map(part => part.trim()))
        .map(([name, ...rest]) => [name, rest.join(' ')])
    );
    const request = { method, path, httpVersion, headers };
    console.log(request);
    const name = request.path.split('/')[1];
    port.postMessage(`HTTP/1.1 200 OK\n\nhallo ${name}`);
  },
});

const nodeRequestStream = new TransformStream({
  transform(socket, controller) {
    const channel = new MessageChannel();
    // When the backend answers on the channel, write the response and end the socket.
    channel.port1.onmessage = ({ data }) =>
      socket.end(data, (err) => { if (err) console.log(err); });
    socket.on('data', (httpRequest) => controller.enqueue([httpRequest, channel.port2]));
  },
});

nodeOnConnectStream.pipeThrough(nodeRequestStream).pipeTo(onRequest);

// Here the chunk itself is the MessagePort handed over by onconnect.
const sharedWorkerRequestStream = new TransformStream({
  transform(port, controller) {
    port.onmessage = ({ data: httpRequest }) => controller.enqueue([httpRequest, port]);
  },
});

sharedWorkerOnConnectStream.pipeThrough(sharedWorkerRequestStream).pipeTo(onRequest);
```
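The request-parsing steps inside `onRequest` can also be pulled out and exercised on their own; a standalone sketch of the same parsing logic:

```javascript
// Standalone sketch of the request parsing used in onRequest above.
function parseHttpRequest(raw) {
  const [firstLine, ...otherLines] = raw.split('\n');
  const [method, path, httpVersion] = firstLine.trim().split(' ');
  const headers = Object.fromEntries(
    otherLines
      .filter(line => line.trim())
      .map(line => line.split(':').map(part => part.trim()))
      .map(([name, ...rest]) => [name, rest.join(' ')])
  );
  return { method, path, httpVersion, headers };
}

const req = parseHttpRequest('GET /alice HTTP/1.1\nHost: localhost\n');
console.log(req.method, req.path.split('/')[1]); // GET alice
```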
Can be done incrementally.
Cross-platform http-server
Node accepts a JSON POST as the request body.
The first URL path segment (`/name`) is the name.