Concurrency limitations and the batch targeting create endpoint


#1

I have been investigating the use of concurrency with the batch targeting endpoint. I’ve tried two different approaches so far. In both cases, I am updating targeting on Twitter campaigns using the same OAuth token (for the same advertiser account).

My first design was to batch create targeting criteria 100 at a time, disregarding which line items were involved in each batch request. So, for example, several concurrent requests could be adding targeting criteria objects to the same line item. When I tried this with a thread pool of 10 concurrent threads processing the callables I’d created, I started to get HTTP 503 Service Unavailable errors roughly 50% of the time.
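In outline, the first design looked something like this (`TargetingCriterion` and `BatchApiClient` are stand-ins for our own domain types, not anything from your SDK):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// TargetingCriterion and BatchApiClient are our own wrapper types.
void createAllCriteria(List<TargetingCriterion> criteria,
                       BatchApiClient client) throws InterruptedException {
    List<Callable<Void>> callables = new ArrayList<>();
    for (int i = 0; i < criteria.size(); i += 100) {
        // One POST of up to 100 create operations, regardless of
        // which line items the criteria belong to.
        List<TargetingCriterion> chunk =
            criteria.subList(i, Math.min(i + 100, criteria.size()));
        callables.add(() -> {
            client.createTargetingCriteria(chunk);
            return null;
        });
    }
    ExecutorService pool = Executors.newFixedThreadPool(10);
    pool.invokeAll(callables); // roughly half of these returned HTTP 503
    pool.shutdown();
}
```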

Inferring that your system might hold a lock on the parent entity (the Twitter line item), I refactored to split the batch requests into sequential series of requests. Each callable dispatched to the thread pool now consists of a series of batch requests for a set of line items, and a line item can only appear in one series (all the series are disjoint with respect to the line items involved). This way, any implicit locking issue should be avoided, because requests to modify targeting on any given line item now live in a single callable, in a series of batch requests that run sequentially. Using a 10-thread pool to process the callables, I get far fewer HTTP 503 Service Unavailable errors with this approach. That said, I am still getting 503 Service Unavailable errors, which prevent the entire operation from succeeding.
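The refactored version looks roughly like the sketch below, simplified so that each callable owns exactly one line item (in the real code a callable can own several, as long as the sets are disjoint):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.stream.Collectors;

// Each callable owns one line item, so two in-flight requests can
// never touch the same line item; its batches run sequentially.
void createCriteriaByLineItem(List<TargetingCriterion> criteria,
                              BatchApiClient client) throws InterruptedException {
    Map<String, List<TargetingCriterion>> byLineItem = criteria.stream()
        .collect(Collectors.groupingBy(TargetingCriterion::getLineItemId));

    List<Callable<Void>> series = new ArrayList<>();
    for (List<TargetingCriterion> group : byLineItem.values()) {
        series.add(() -> {
            // Sequential series of batch requests for this line item.
            for (int i = 0; i < group.size(); i += 100) {
                client.createTargetingCriteria(
                    group.subList(i, Math.min(i + 100, group.size())));
            }
            return null;
        });
    }
    ExecutorService pool = Executors.newFixedThreadPool(10);
    pool.invokeAll(series);
    pool.shutdown();
}
```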

We have advertisers with 740 exclude keywords per Twitter campaign, due to brand safety requirements. I would like to make this process fast for our traffickers, but I’m not certain how best to go about it, given the HTTP response codes I’m seeing.

Our server uses Spring Retry, configured to retry 3 times with exponential backoff. For each request, we retry at 1 second, 3 seconds, and 9 seconds in case of an HTTP 5XX error. For the cases described above, there are requests that fail on the 3rd retry and then aren’t recoverable. I could configure retry differently, but if there’s some implicit info I’m missing that would help, I’d prefer to use it.
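For reference, the retry configuration is roughly this, assuming the HTTP client surfaces 5XX responses as Spring’s `HttpServerErrorException`:

```java
import java.util.Collections;

import org.springframework.retry.backoff.ExponentialBackOffPolicy;
import org.springframework.retry.policy.SimpleRetryPolicy;
import org.springframework.retry.support.RetryTemplate;
import org.springframework.web.client.HttpServerErrorException;

RetryTemplate buildRetryTemplate() {
    RetryTemplate retryTemplate = new RetryTemplate();

    // maxAttempts counts the initial call, so 4 attempts = 3 retries,
    // triggered only when the client throws HttpServerErrorException (5XX).
    retryTemplate.setRetryPolicy(new SimpleRetryPolicy(
        4, Collections.singletonMap(HttpServerErrorException.class, true)));

    // A 1s initial wait with a 3x multiplier gives waits of 1s, 3s, 9s.
    ExponentialBackOffPolicy backOff = new ExponentialBackOffPolicy();
    backOff.setInitialInterval(1000L);
    backOff.setMultiplier(3.0);
    retryTemplate.setBackOffPolicy(backOff);
    return retryTemplate;
}
```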

Do you have information as to what the best approach would be, or implicit details I’m missing that may help with this problem?


#2

Thank you so much for this great level of detail. We definitely want to be able to advocate the batch API to our partners, so please allow us to work through this issue with you.

It would help a lot to get a recent line item ID we can use to find this error in our logs, or, preferably, if you could share the request and response.

It’s generally true that our system locks when performing write operations, but I would expect the batch API to be able to process a request it does not reject. Depending on the content of the request, it could simply be taking too long and timing out, so I need to see the actual request to be able to investigate why the 503 is happening.

Thanks,

John


#3

Hi John,

The calls were made yesterday around [2016-10-12 01:48:18,221] GMT.

The line items involved in account okrly1 are {6g7yf, 6g7yg, 6g7yh, 6g7yi, 6g7yj}.

The error is always HTTP 503 {"code":"SERVICE_UNAVAILABLE","message":"Service unavailable due to request timeout; please try the request again later"}. I hadn’t previously noticed that the error was always the same, so thanks for mentioning the timeout in your reply.

I think what I will try for the time being is to special-case this batch call, adding a last retry at 81 seconds (one more exponential backoff), or maybe just changing it to linear backoff, retrying every 5 seconds; I’ll play around and see what seems to solve the problem. I can work with our UI people to make the UX better for this case, so that seems acceptable.
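For instance, the linear variant would look something like this with Spring Retry (the attempt count here is just a guess at what I’ll end up with):

```java
import org.springframework.retry.backoff.FixedBackOffPolicy;
import org.springframework.retry.policy.SimpleRetryPolicy;
import org.springframework.retry.support.RetryTemplate;

RetryTemplate buildBatchRetryTemplate() {
    RetryTemplate template = new RetryTemplate();
    template.setRetryPolicy(new SimpleRetryPolicy(5)); // 1 call + 4 retries
    FixedBackOffPolicy backOff = new FixedBackOffPolicy();
    backOff.setBackOffPeriod(5000L); // wait 5 seconds between attempts
    template.setBackOffPolicy(backOff);
    return template;
}
```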


#4

Hey,

For the case of adding a bunch of keywords, I confirmed internally that the best thing to do right now is to make the call to update all 740 keywords at the same time for the line item, instead of 100 at a time. Previously the system had difficulty dealing with such a large number of keywords, but it was recently optimized, so we believe it should be possible now. Can you please try that and see if it makes the system behave more smoothly, operating on one line item at a time? The lock is not just on the line item but on basically the entire account structure, so if you make requests in parallel you will definitely start seeing those lock errors.
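As a rough sketch of what that single call looks like from the client side (`BatchOperation` and `BatchApiClient` are illustrative wrapper types, and the targeting type shown is just one example of a negative keyword type):

```java
import java.util.ArrayList;
import java.util.List;

// BatchOperation and BatchApiClient are illustrative wrapper types.
void submitAllKeywordsAtOnce(String lineItemId,
                             List<String> excludeKeywords,
                             BatchApiClient client) {
    List<BatchOperation> operations = new ArrayList<>();
    for (String keyword : excludeKeywords) {
        operations.add(BatchOperation.create(
            lineItemId, "NEGATIVE_EXACT_KEYWORD", keyword));
    }
    // One POST carrying all ~740 operations for this line item,
    // instead of eight POSTs of 100.
    client.postBatch(operations);
}
```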

Thanks,

John


#5

Hi John,

Thanks for the additional info. I will modify my algorithm to submit the huge keyword batches sequentially before submitting the other requests in parallel, and see how that goes. I was able to optimize my code with the current endpoint functionality so that the error rate is around 1-3% for 5 threads running 320 batch requests in parallel. With Spring Retry in place to catch any failures, that approach now always works, and it is 4 times faster than a fully sequential approach. With huge keyword-only batches, it sounds like it will be even faster, so that’s great.
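Concretely, the plan is a two-phase submit, roughly like this (again with my own wrapper types):

```java
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Phase 1: the huge keyword-only batches, one line item at a time.
// Phase 2: everything else fanned out to the pool, disjoint by line item.
void submitAll(List<Callable<Void>> hugeKeywordBatches,
               List<Callable<Void>> otherBatches) throws Exception {
    for (Callable<Void> batch : hugeKeywordBatches) {
        batch.call(); // sequential
    }
    ExecutorService pool = Executors.newFixedThreadPool(5);
    pool.invokeAll(otherBatches); // parallel
    pool.shutdown();
}
```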

Chris


#6

Hi John,

I tried what you suggested above. Now I get the following API error:

400 BAD REQUEST {"errors":[{"code":"EXCEEDED_MAX_COUNT","message":"The number of operations in this batch were 783. The maximum allowed in a single batch is 100"}]}

When will you be rolling out support for large batches for keywords?

Thanks,

Chris


#7

@chris_august7: Could you try again? We just deployed a fix for this where the maximum number of operations is now 1,000.


#8

will do! thanks.


#9

Hi @juanshishido,

I tried this today.

3 line items. 3 requests.

749 keywords and 49 exclude keywords in a batch for one line item in each request.

The Twitter API responded with an HTTP 503 (labeled INTERNAL SERVER ERROR) and no response body.

Please advise as to the next step on this.

Thanks,

Chris


#10

@chris_august7: Could you provide the account and line item IDs for this as well as an approximate time (preferably in UTC) so we can check the logs?


#11

@chris_august7: Could you please also provide an example request? (We know it’ll be large with so many keywords.)


#12

Hi @juanshishido,

I have emailed you and Evan Romero the sample keyword list and requests. A lot of it is a keyword exclude list for a large advertiser, so it’s not appropriate to post those keywords on this site. Let me know when you get the email.

Thanks,

Chris Merrill


#13

Thanks for sending that over, @chris_august7.


#14

We are still working to determine the issue here, @chris_august7. Thanks for your patience!


#15

@chris_august7: We’re actively working toward a fix for this. We’ll provide an update as soon as we can.