### Missing features before rolling-out to team2

- Kafka message key: `{token}:{distinct_id}`
- `safe_clickhouse_string` invocations in capture.py: the plugin-server equivalent is easier to read. [slack thread] -> serde rejects such payloads, see this documentation
- Fill in `data`, use the django_compat test for confirmation -> maybe skip some of the data massaging if the tests pass without, and list it in the "other sdks compat" section
### Missing features before rolling-out to customers - posthog-js only
- Overflow detection (local for now, reuse the same algorithm) and `LIKELY_ANONYMOUS_IDS`
- How do we handle custom proxies that might not forward `/i/`?
- Billing limits (needs a Redis client, update the values out-of-band, fail open)
- Kafka write timeouts and error handling; maybe implement limited retries? -> rdkafka handles retries for us, for up to 5 minutes by default. We'll time out at the nginx level for now, and keep the messages in the rdkafka produce queue
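The local overflow detection mentioned above could be sketched as a token bucket per `{token}:{distinct_id}` key. This is a hypothetical sketch, not the production algorithm: struct and parameter names are illustrative, and the real detector would also consult `LIKELY_ANONYMOUS_IDS` and share state across instances.

```rust
use std::collections::HashMap;
use std::time::Instant;

/// Illustrative local overflow detector: one token bucket per
/// `{token}:{distinct_id}` key. Keys that drain their bucket are flagged
/// so their events can be rerouted to an overflow topic.
pub struct OverflowDetector {
    capacity: f64,       // burst allowance per key
    refill_per_sec: f64, // sustained events/second allowed per key
    buckets: HashMap<String, (f64, Instant)>, // key -> (tokens left, last refill)
}

impl OverflowDetector {
    pub fn new(capacity: f64, refill_per_sec: f64) -> Self {
        Self { capacity, refill_per_sec, buckets: HashMap::new() }
    }

    /// Returns true when this event should be routed to overflow.
    pub fn is_overflowing(&mut self, token: &str, distinct_id: &str) -> bool {
        let (cap, rate) = (self.capacity, self.refill_per_sec);
        let now = Instant::now();
        let (tokens, last) = self
            .buckets
            .entry(format!("{token}:{distinct_id}"))
            .or_insert((cap, now));
        // Refill proportionally to elapsed time, capped at the burst size.
        *tokens = (*tokens + now.duration_since(*last).as_secs_f64() * rate).min(cap);
        *last = now;
        if *tokens >= 1.0 {
            *tokens -= 1.0;
            false
        } else {
            true
        }
    }
}
```

Keeping the state in a plain `HashMap` is what makes it "local for now": each capture instance only sees its own share of the traffic, so thresholds would need to account for the number of replicas.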
### Missing features for compat with other SDKs
For all of these (and the ones we'll add), let's instrument the django code path to check whether each is actually exercised, and how many teams would be impacted:
- Source `sent_at` from the event body if present (used by some SDKs + custom clients)
- Source `sent_at` from the body on `x-www-form-urlencoded` requests: old posthog-js versions?
- Source events from the top-level `batch` field if present
- Check whether we indeed silently drop events with missing fields, as documented, instead of returning an error -> if we keep dropping, let's implement an ingestion warning for this!
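The reason `sent_at` is worth sourcing is clock-skew correction: the server shifts the client-reported event `timestamp` by the difference between its own clock and the client's `sent_at`. A minimal sketch, assuming that standard rule and Unix-millisecond values throughout (the function name and signature are illustrative):

```rust
/// Sketch of clock-skew correction using `sent_at` (assumed semantics:
/// shift the client-reported event `timestamp` by the skew between the
/// client send time and the server clock). All values are Unix millis.
fn corrected_timestamp(timestamp_ms: i64, sent_at_ms: Option<i64>, now_ms: i64) -> i64 {
    match sent_at_ms {
        // Client clock skew = now - sent_at; apply it to the event time.
        Some(sent_at) => timestamp_ms + (now_ms - sent_at),
        // No usable sent_at: keep the client timestamp as-is.
        None => timestamp_ms,
    }
}
```

For example, a client whose clock runs 500 ms behind the server reports both `timestamp` and `sent_at` 500 ms too early; subtracting `sent_at` from the server's `now` recovers that offset.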
### Known differences with django capture
These won't be fixed unless we aim to stay compatible with the long tail of posthog-js versions:
- The raw Kafka message no longer carries a `site_url`; it looks unused now -> confirm that's the case
- No support for lz64 compression; it was removed from posthog-js
- No support for the `/engage` endpoint; we can leave it routed to django
- Events bigger than the max Kafka message size trigger an `INVALID_REQUEST` status instead of an `INTERNAL_ERROR`
- Dates written to Kafka are in RFC3339 format, a subset of ISO8601 that plugin-server should accept fine. Let's make sure ClickHouse does too (partition_stats consumer)
- `sent_at` timestamps in seconds are not supported: they will be ignored, and the event `timestamp` used without correction
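On that last point, second-resolution `sent_at` values could at least be detected (e.g. to emit an ingestion warning instead of silently ignoring them) with a magnitude check: 10^12 ms after the epoch falls in September 2001, so any plausible recent timestamp below that threshold must be expressed in seconds. A hypothetical sketch, not something the service currently does:

```rust
/// Hypothetical heuristic: a positive Unix timestamp below 1e12 cannot be
/// a recent date expressed in milliseconds (1e12 ms ~= Sep 2001), so treat
/// such values as second-resolution.
fn looks_like_seconds(ts: i64) -> bool {
    ts > 0 && ts < 1_000_000_000_000
}
```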
xvello changed the title from "Missing features and known changes" to "Missing features and known differences" on Sep 15, 2023.