Customers can call the /wait endpoint on Sync Inc after writing changes to Stripe. This endpoint will return a 200 when we've confirmed the database is completely up-to-date. This means they can read from their database after writing to Stripe and know it's consistent.

Stripe's /events endpoint serves the same purpose as a replication slot on a database. It contains a list of all create/update/delete events that have happened for a given account on Stripe.

The backfill works off a queue of Stripe list endpoints, each paired with a pagination cursor:

queue = [
{"/v1/customers", "cur9sjkxi1x"},
{"/v1/invoices", "cur0pskoxiq1"},
# ...
]
options = [
  producer: [
    module: {BackfillProducer, []},
    rate_limiting: [
      allowed_messages: 50,
      interval: :timer.seconds(1)
    ]
  ],
  processors: [
    default: [
      concurrency: 50,
      max_demand: 1
    ]
  ]
]
The rate_limiting setting is all we need to ensure we process no more than 50 pages per second. This leaves a comfy 50 requests per second left over in a customer's Stripe quota.

Under processors, we specify that we want up to 50 concurrent workers and that each may request one unit of work at a time (in our case, a page).
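For context, here's a minimal sketch of how options like these plug into a Broadway pipeline. BackfillProducer, StripeClient.fetch_page/2, and Db.upsert_page/1 are hypothetical stand-ins for illustration, not our actual modules:

```elixir
defmodule StripeBackfill do
  use Broadway

  def start_link(_opts) do
    Broadway.start_link(__MODULE__,
      name: __MODULE__,
      producer: [
        # BackfillProducer (hypothetical here) emits one {endpoint, cursor} message per page.
        module: {BackfillProducer, []},
        rate_limiting: [
          allowed_messages: 50,
          interval: :timer.seconds(1)
        ]
      ],
      processors: [
        default: [concurrency: 50, max_demand: 1]
      ]
    )
  end

  @impl true
  def handle_message(_processor, %Broadway.Message{data: {endpoint, cursor}} = message, _context) do
    # Fetch one page (up to 100 objects), then write it to Postgres.
    # StripeClient.fetch_page/2 and Db.upsert_page/1 are hypothetical helpers.
    {:ok, page} = StripeClient.fetch_page(endpoint, cursor)
    :ok = Db.upsert_page(page)
    message
  end
end
```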
(At the start of a backfill, each endpoint's cursor is nil.) Our workers check out a page to work on and fetch it. Each page contains up to 100 objects. Each of those objects can contain a list of paginateable children. As such, the worker's first job is to populate all objects in the page completely.

Handily, every object Stripe returns contains an object field which identifies what the entity is.
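To make that concrete, a populate step might look something like the sketch below. It walks an object's fields and pages through any embedded child list Stripe marked with has_more: true. StripeClient.fetch_page/2 is a hypothetical helper, and the exact shapes are illustrative:

```elixir
defmodule PagePopulator do
  # A sketch, not our production code. Stripe embeds child collections as
  # list objects: %{"object" => "list", "data" => [...], "has_more" => true, "url" => "/v1/..."}.
  def populate(object) when is_map(object) do
    Map.new(object, fn
      {key, %{"object" => "list", "has_more" => true, "url" => url, "data" => data} = list} ->
        # Page through the child endpoint until the embedded list is complete.
        full = data ++ fetch_remaining(url, List.last(data)["id"])
        {key, %{list | "data" => full, "has_more" => false}}

      {key, value} ->
        {key, value}
    end)
  end

  # Every Stripe object carries an "object" field ("customer", "invoice", ...),
  # which tells us what kind of entity we're writing to Postgres.
  def kind(%{"object" => object_type}), do: object_type

  defp fetch_remaining(url, cursor) do
    # StripeClient.fetch_page/2 is a hypothetical helper that calls
    # GET url with starting_after=cursor and returns {:ok, %{"data" => ..., "has_more" => ...}}.
    {:ok, %{"data" => data, "has_more" => has_more}} = StripeClient.fetch_page(url, cursor)

    if has_more do
      data ++ fetch_remaining(url, List.last(data)["id"])
    else
      data
    end
  end
end
```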
Before kicking off a backfill, we first hit /events to grab the most recent cursor. After the backfill completes, we first catch up on all /events that occurred while we were backfilling. After those are processed, the database is up-to-date. And it's time to poll /events indefinitely.

We hit the /events endpoint every 500ms to check whether there's anything new to process, continuously. This is how we can promise "sub-second" lag.
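As a rough sketch of that polling loop, assuming a hypothetical StripeClient.list_events/1 wrapper around Stripe's GET /v1/events (which returns events newest-first and accepts an ending_before cursor) and a hypothetical ProcessEvents module that applies each change to Postgres:

```elixir
defmodule EventPoller do
  use GenServer

  @poll_interval 500

  def start_link(opts), do: GenServer.start_link(__MODULE__, opts)

  @impl true
  def init(%{cursor: cursor}) do
    schedule_poll()
    {:ok, %{cursor: cursor}}
  end

  @impl true
  def handle_info(:poll, %{cursor: cursor} = state) do
    # Ask Stripe for anything newer than the last event we processed.
    # StripeClient.list_events/1 is a hypothetical wrapper around GET /v1/events.
    {:ok, events} = StripeClient.list_events(ending_before: cursor)

    # ProcessEvents.apply_all/2 (also hypothetical) writes each change to Postgres
    # and returns the id of the newest event it saw.
    new_cursor = ProcessEvents.apply_all(events, cursor)

    schedule_poll()
    {:noreply, %{state | cursor: new_cursor}}
  end

  defp schedule_poll, do: Process.send_after(self(), :poll, @poll_interval)
end
```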
Each log has a payload that can take one of several shapes. For example, our "Stripe backfill complete" log looks like this:

{
  "kind": "stripe_backfill_complete",
  "row_count": 1830,
  "last_event_before_backfill": "evt_1J286oDXGuvRIWUJKfUqKpsJ"
}
With an /events endpoint to poll, webhooks are not necessary. The trick is to just poll it frequently enough to get as close to real-time as possible! What's great is that you can use the same sync system to get a change made milliseconds ago or to catch up on all changes that happened during unexpected downtime.

After writing to Stripe, call the wait endpoint:

GET https://api.syncinc.so/api/stripe/wait/:id
When we've confirmed your database is completely up-to-date, this request returns a 200. You can now make a subsequent read to your database with confidence.
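Putting it together, a read-after-write flow from your application might look like this sketch. StripeClient, SyncInc.wait/1, Repo, and Customer are placeholders for your Stripe client, an HTTP call to the wait endpoint above, and your Ecto repo and schema:

```elixir
defmodule Billing do
  # A sketch of read-your-writes using the wait endpoint. StripeClient, SyncInc,
  # Repo, and Customer are placeholders, not Sync Inc or Stripe library code.
  def create_and_read_customer(params) do
    # 1. Write to Stripe.
    {:ok, customer} = StripeClient.create_customer(params)

    # 2. Block until Sync Inc confirms your database is caught up:
    #    GET https://api.syncinc.so/api/stripe/wait/:id -> 200
    :ok = SyncInc.wait(sync_id())

    # 3. Read from Postgres, confident the new customer is already there.
    Repo.get_by(Customer, stripe_id: customer.id)
  end

  defp sync_id, do: Application.fetch_env!(:my_app, :sync_inc_id)
end
```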