I ran the same RateLimitDemo.java program as on PostgreSQL in Amazon RDS, with and without updating the same rows, in Read Committed and Serializable isolation. The results were:

- Different ids in Read Committed isolation level
- Different ids in Serializable isolation level
- Same ids in Read Committed isolation level
- Same ids in Serializable isolation level

In my RateLimitDemo.java I change the id to concatenate the session pid, rate_limiting_token_bucket_request(?||pg_backend_pid(),?), and I set the Read Committed isolation level with connection.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED):
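To make this concrete, here is a minimal sketch of what such a call site can look like over JDBC. This is not the actual RateLimitDemo.java code: the SELECT wrapper, the argument handling, and the printed column are assumptions; only the function call and the isolation setting come from the text above.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

// Hypothetical illustration of the call pattern described above:
// each session appends its backend pid to the id, so the threads never
// touch the same row, and it runs in Read Committed isolation.
public class TokenBucketCallSketch {
    public static void main(String[] args) throws Exception {
        String url = args[0];   // e.g. a jdbc:yugabytedb:// or jdbc:postgresql:// URL
        String id = args[1];    // e.g. "user2"
        int tokens = 1;         // tokens requested per call (assumption)

        try (Connection connection = DriverManager.getConnection(url)) {
            connection.setAutoCommit(true);
            // Read Committed isolation, as in the first run above
            connection.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);
            // The id is concatenated with pg_backend_pid() so each session has its own row
            try (PreparedStatement statement = connection.prepareStatement(
                    "select * from rate_limiting_token_bucket_request(?||pg_backend_pid(),?)")) {
                statement.setString(1, id);
                statement.setInt(2, tokens);
                try (ResultSet resultSet = statement.executeQuery()) {
                    while (resultSet.next()) {
                        // column name/position is an assumption: print whatever the function returns
                        System.out.println(resultSet.getObject(1));
                    }
                }
            }
        }
    }
}
```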
java RateLimitDemo 50 "jdbc:yugabytedb://47cc8863-9344-4a9c-bc02-0dd9f843dceb.cloudportal.yugabyte.com/yugabyte?user=admin&password=Covid-19" "user2" 1000 20 | awk 'BEGIN{t=systime()}/remaining$/{c=c+1;p=100*$5/$3}NR%100==0{printf "rate: %8.2f/s (last pct: %5.2f) max retry:%3d\n",c/(systime()-t),p,retry}/retry/{sub(/#/,"",$6);if($6>retry)retry=$6}'
rate: 1063.08/s (last pct: 100.00) max retry: 1
rate: 1063.32/s (last pct: 100.00) max retry: 1
rate: 1063.56/s (last pct: 100.00) max retry: 1
rate: 1063.80/s (last pct: 100.00) max retry: 1
rate: 1064.05/s (last pct: 100.00) max retry: 1
rate: 1064.29/s (last pct: 100.00) max retry: 1
rate: 1064.53/s (last pct: 100.00) max retry: 1
rate: 1062.20/s (last pct: 100.00) max retry: 1
rate: 1062.44/s (last pct: 100.00) max retry: 1
rate: 1062.68/s (last pct: 100.00) max retry: 1
rate: 1062.92/s (last pct: 100.00) max retry: 1
rate: 1063.16/s (last pct: 100.00) max retry: 1
In my RateLimitDemo.java I keep the id concatenated with the session pid, rate_limiting_token_bucket_request(?||pg_backend_pid(),?), and I set the Serializable isolation level with connection.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE):
rate: 1079.99/s (last pct: 100.00) max retry: 2
rate: 1079.38/s (last pct: 100.00) max retry: 2
rate: 1079.44/s (last pct: 100.00) max retry: 2
rate: 1079.51/s (last pct: 100.00) max retry: 2
rate: 1079.57/s (last pct: 100.00) max retry: 2
rate: 1079.63/s (last pct: 100.00) max retry: 2
rate: 1079.69/s (last pct: 100.00) max retry: 2
rate: 1079.75/s (last pct: 100.00) max retry: 2
rate: 1079.81/s (last pct: 100.00) max retry: 2
rate: 1079.87/s (last pct: 100.00) max retry: 2
rate: 1079.94/s (last pct: 100.00) max retry: 2
rate: 1080.00/s (last pct: 100.00) max retry: 2
The throughput is similar in both isolation levels: each session updates its own row, so there are no conflicts (each new id has an additional insert). Those operations are sent to the right node, the tablet leader, and wait to get the write quorum from another node (I have a multi-AZ configuration here).

Now all threads request tokens for the same id. In my RateLimitDemo.java I put back the id alone, rate_limiting_token_bucket_request(?,?), and I set the Read Committed isolation level with connection.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED):
rate: 126.99/s (last pct: 96.69) max retry: 4
rate: 127.59/s (last pct: 84.78) max retry: 4
rate: 127.39/s (last pct: 95.62) max retry: 4
rate: 127.17/s (last pct: 98.37) max retry: 4
rate: 126.97/s (last pct: 96.71) max retry: 4
rate: 127.58/s (last pct: 95.68) max retry: 4
rate: 127.38/s (last pct: 97.62) max retry: 4
rate: 127.15/s (last pct: 96.68) max retry: 4
rate: 127.76/s (last pct: 88.00) max retry: 4
rate: 127.54/s (last pct: 92.53) max retry: 4
rate: 127.34/s (last pct: 94.48) max retry: 4
rate: 127.91/s (last pct: 92.85) max retry: 4
rate: 127.69/s (last pct: 92.68) max retry: 4
rate: 127.50/s (last pct: 92.87) max retry: 4
rate: 128.02/s (last pct: 83.57) max retry: 4
rate: 127.78/s (last pct: 95.61) max retry: 4
rate: 128.33/s (last pct: 95.61) max retry: 4
rate: 128.14/s (last pct: 95.66) max retry: 4
rate: 128.74/s (last pct: 98.04) max retry: 4
rate: 128.51/s (last pct: 92.40) max retry: 4
rate: 128.29/s (last pct: 94.63) max retry: 4
(pid@host [email protected]) 3771 calls 3487 tokens 3.8 /sec 60000 remaining
(pid@host [email protected]) 2706 calls 2460 tokens 2.6 /sec 60000 remaining
(pid@host [email protected]) 1513 calls 1422 tokens 1.5 /sec 60000 remaining
2022-01-05T23:00:26.369987Z SQLSTATE 40001 on retry #0 com.yugabyte.util.PSQLException: ERROR: All transparent retries exhausted. Operation failed. Try again.: Value write after transaction start: { physical: 1641423626346947 } >= { physical: 1641423625848215 }: kConflict
2022-01-05T23:00:26.371342Z SQLSTATE 40001 on retry #0 com.yugabyte.util.PSQLException: ERROR: Operation expired: Transaction aborted: kAborted
(pid@host [email protected]) 2568 calls 2482 tokens 2.7 /sec 60000 remaining
(pid@host [email protected]) 2272 calls 2186 tokens 2.4 /sec 60000 remaining
Some calls fail with SQLSTATE 40001 after the database has exhausted its own internal retries (All transparent retries exhausted). This explains why it is slower even with a small number of application retries.
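The retry counter in the output ("max retry") comes from an application-level retry on SQLSTATE 40001, on top of the transparent retries done by the database. Here is a rough sketch of such a loop, not the actual RateLimitDemo.java code; the attempt limit and the backoff are arbitrary assumptions.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Hypothetical sketch of an application-level retry on SQLSTATE 40001
// (serialization failure), the error class visible in the log above.
final class RetryOn40001 {
    static Object callWithRetry(Connection connection, String id, int tokens) throws SQLException {
        int maxAttempts = 10;                           // arbitrary limit for this sketch
        for (int attempt = 0; ; attempt++) {
            try (PreparedStatement statement = connection.prepareStatement(
                    "select * from rate_limiting_token_bucket_request(?,?)")) {
                statement.setString(1, id);
                statement.setInt(2, tokens);
                try (ResultSet resultSet = statement.executeQuery()) {
                    return resultSet.next() ? resultSet.getObject(1) : null;
                }
            } catch (SQLException e) {
                // 40001 means the transaction lost a conflict and can simply be retried
                if ("40001".equals(e.getSQLState()) && attempt < maxAttempts) {
                    if (!connection.getAutoCommit()) {
                        connection.rollback();          // clear the failed transaction before retrying
                    }
                    System.err.println("SQLSTATE 40001 on retry #" + attempt + " " + e);
                    try {
                        Thread.sleep(10L * (attempt + 1)); // simple linear backoff (arbitrary)
                    } catch (InterruptedException ie) {
                        Thread.currentThread().interrupt();
                        throw e;
                    }
                    continue;
                }
                throw e;                                // other errors, or too many attempts
            }
        }
    }
}
```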
In my RateLimitDemo.java I keep the id alone, rate_limiting_token_bucket_request(?,?), and I set the Serializable isolation level with connection.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE):
rate: 116.60/s (last pct: 93.95) max retry: 8
rate: 116.60/s (last pct: 93.25) max retry: 8
rate: 116.60/s (last pct: 92.66) max retry: 8
rate: 116.60/s (last pct: 88.60) max retry: 8
rate: 116.60/s (last pct: 88.48) max retry: 8
rate: 116.60/s (last pct: 93.95) max retry: 8
rate: 116.59/s (last pct: 88.60) max retry: 8
rate: 116.59/s (last pct: 93.92) max retry: 8
rate: 116.59/s (last pct: 93.92) max retry: 8
rate: 116.59/s (last pct: 92.66) max retry: 8
rate: 116.59/s (last pct: 87.81) max retry: 8
rate: 116.59/s (last pct: 88.73) max retry: 8
rate: 116.59/s (last pct: 87.81) max retry: 8
rate: 116.58/s (last pct: 91.75) max retry: 8
rate: 116.58/s (last pct: 92.66) max retry: 8
rate: 116.58/s (last pct: 93.25) max retry: 8
rate: 116.58/s (last pct: 93.95) max retry: 8
rate: 116.58/s (last pct: 87.81) max retry: 8
rate: 116.58/s (last pct: 91.75) max retry: 8
rate: 116.59/s (last pct: 93.95) max retry: 8
The rate is much lower when all threads call rate_limiting_token_bucket_request(?,?) for the same id (instead of a different one per session). Yes, in this race condition, the throughput is lower, as it cannot be distributed, and the response time is higher, even given data locality in RAM and CPU. It is still more than a hundred requests per second on a single id (I also ran a few threads on the same id and 50 threads on different ones, which is more realistic), with low CPU usage. If you are in this race condition, with all token requests on a few user or tenant IDs, and need higher throughput, this Token Bucket is not scalable. I'll show another algorithm in the next posts.