I have been investigating the use of concurrency with the batch targeting endpoint, and I have tried two different approaches so far. In both cases, I am updating targeting on Twitter campaigns using the same OAuth token (for the same advertiser account).
My first design was to batch-create targeting criteria 100 at a time, disregarding which line items were involved in each batch request. So, for example, several concurrent requests could be adding targeting criteria to the same line item. When I tried this with a thread pool of 10 concurrent threads processing the callables I'd created, I started getting HTTP 503 Service Unavailable errors roughly 50% of the time.
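For reference, the first design simply chunks the flat list of targeting criteria into batches of 100 with no regard for line items. A minimal sketch of that chunking step (class and method names here are hypothetical, not from our actual code):

```java
import java.util.ArrayList;
import java.util.List;

public class BatchChunker {
    // Split a flat list of targeting criteria into batches of at most
    // `batchSize` (the endpoint's per-request maximum of 100), ignoring
    // which line item each criterion belongs to.
    public static <T> List<List<T>> chunk(List<T> items, int batchSize) {
        List<List<T>> batches = new ArrayList<>();
        for (int i = 0; i < items.size(); i += batchSize) {
            batches.add(new ArrayList<>(
                items.subList(i, Math.min(i + batchSize, items.size()))));
        }
        return batches;
    }
}
```

Each resulting batch became one callable, and the callables were submitted to the pool independently.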
Inferring that your system might hold a lock on the parent entity (the Twitter line item), I refactored to split the batch requests into sequential series. Each callable dispatched to the thread pool now consists of a series of batch requests for a set of line items, and a line item can appear in only one series (all the series are disjoint with respect to the line items involved). This way, any implicit locking issue should be avoided, because all requests that modify targeting on a given line item now run sequentially within a single callable. Using a 10-thread pool to process the callables, I now get far fewer HTTP 503 Service Unavailable errors with this approach. That said, I am still getting 503 Service Unavailable errors that prevent the entire operation from succeeding.
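The second design's partitioning step can be sketched as follows (again a simplified illustration with hypothetical names; the real code builds the actual batch targeting requests per group):

```java
import java.util.ArrayList;
import java.util.List;

public class DisjointSeriesPlanner {
    // Partition line item IDs into `poolSize` disjoint groups. Each group
    // becomes one callable whose batch requests run sequentially, so no
    // two concurrent requests ever touch the same line item.
    public static List<List<String>> partition(List<String> lineItemIds,
                                               int poolSize) {
        List<List<String>> groups = new ArrayList<>();
        for (int g = 0; g < poolSize; g++) groups.add(new ArrayList<>());
        for (int i = 0; i < lineItemIds.size(); i++) {
            groups.get(i % poolSize).add(lineItemIds.get(i));
        }
        return groups;
    }
}
```

Within one group, the batch requests for its line items are issued one after another; only requests from different groups (and therefore for different line items) run concurrently.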
We have advertisers that have 740 exclude keywords per Twitter campaign, due to brand safety requirements. I would like to make this process fast for our traffickers, but I'm not certain how best to go about it, given the HTTP response codes I'm seeing.
Our server uses Spring Retry, configured to retry up to 3 times with exponential backoff: for each request, we retry after 1 second, 3 seconds, and 9 seconds on an HTTP 5xx error. In the cases described above, some requests still fail on the 3rd retry and are then unrecoverable. I could configure retry differently, but if there's some implicit information I'm missing that would help, I'd prefer to use it.
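For clarity, our backoff schedule is equivalent to an initial 1-second interval with a multiplier of 3; a stdlib-only sketch of the delay computation (the actual configuration lives in Spring Retry, this just documents the schedule):

```java
public class BackoffSchedule {
    // Delay before retry attempt n (1-based): initialMs * multiplier^(n-1),
    // which for initialMs=1000 and multiplier=3 yields 1s, 3s, 9s.
    public static long delayMillis(int attempt, long initialMs,
                                   double multiplier) {
        return (long) (initialMs * Math.pow(multiplier, attempt - 1));
    }
}
```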
Do you have guidance on what the best approach would be, or implicit details I'm missing that might help with this problem?