COST_LIMIT_EXCEEDED. x-cost-rate-limit

api

#1

Hello,

I’m using twurl to retrieve the same campaign metrics that appear in the CAMPAIGNS DASHBOARD
(https://ads.twitter.com/accounts/xxxxxxxx/campaigns_dashboard)

I ran into a problem with the rate limit.
This is an acute problem for me because my process runs in batch mode.

I was able to download metrics for only 5 campaigns; the others come back empty
because of the rate limit.

We have 28 campaigns, and there will be more in the future, so I need you to disable this limit or
set it high enough that I can download metrics for all our campaigns.

I’ve already read this:
https://dev.twitter.com/ads/analytics#RateLimiting
and this
https://dev.twitter.com/ads/analytics/best-practices

Thanks,
Alex


#2

Are you using the data in real time? If not, you can pace the process by grabbing only 5 every minute, since the limits reset after a minute. The limits are deliberately set to avoid abuse, and 5 per minute is quite reasonable.
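
For example, a minimal pacing sketch in shell (the campaign_ids.txt file, account ID, and date range are illustrative placeholders, patterned after the calls elsewhere in this thread):

n=0
while read -r id; do
  twurl -H ads-api.twitter.com "/0/stats/accounts/18ce53y9hd8/campaigns/$id?granularity=DAY&start_time=2015-08-20&end_time=2015-08-25" > "campaign_${id}.json"
  n=$((n + 1))
  # wait out the rate-limit window after every 5th request
  [ $((n % 5)) -eq 0 ] && sleep 60
done < campaign_ids.txt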


#3

We need to do it once a day, but it should be as fast as if it were online.
Please disable the cost rate limit for us.

Please also take care of the “request timeout” errors that I get during the data retrieval process.


#4

@Marketsdotcom implementing our best practices, as you noted, is the best way to go about this.

If you add retry logic with an exponential back-off for error responses or for when you exceed the rate limits, you should be able to get everything you need.
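
A minimal sketch of such retry logic in shell (not an official snippet; the attempt cap, delays, and the check for an "errors" payload in the body are assumptions you should adapt):

fetch_with_retry() {
  url="$1"; out="$2"; delay=1
  for attempt in 1 2 3 4 5; do
    twurl -H ads-api.twitter.com "$url" > "$out"
    # twurl writes the response body regardless of HTTP status, so treat an
    # "errors" payload (rate limit exceeded, timeout, etc.) as a failed attempt
    grep -q '"errors"' "$out" || return 0
    sleep "$delay"
    delay=$((delay * 2))   # exponential back-off: 1s, 2s, 4s, 8s, 16s
  done
  return 1
}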


#5

Dear Andrs,

I mentioned in my initial message that I’m familiar with the best practices and have read that information.

I’ve already implemented retry logic in my code, and, by retrying past the timeout errors, I was able to download all the data after 13 minutes.

But, as I told our account manager Yaniv, this is not acceptable; we need it to be fast because it delays our nightly processes.

Thanks,
Alex


#6

How many Promoted Ads do you have in total? And how high is your x_request_cost?


#7

@Marketsdotcom when you make requests to the analytics endpoints, you should look at the headers that you get back, in particular X-Rate-Limit-Reset, which is the remaining window before the rate limit resets, in UTC epoch seconds.
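
For example (a sketch; it assumes your twurl version supports the -t/--trace flag, which dumps the raw request/response traffic so you can grep the headers):

twurl -t -H ads-api.twitter.com "/0/stats/accounts/18ce53y9hd8/campaigns/24xq1?granularity=DAY&start_time=2015-08-20&end_time=2015-08-25" 2>&1 | grep -i 'rate-limit'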

You can find even more details on rate limiting here and tips on how to avoid hitting our rate limits.

To get a little more context on your use case, what are you trying to do with the stats and why is it an issue if it takes 13 minutes? What kind of requests are you making?


#8

@Marketsdotcom our best practices recommend pulling stats data 7 days at a time (minimum) with hourly granularity at the promoted tweet or line item level, persisting that on your side, and rolling it up to the campaign, account, or funding instrument level. The cost-based rate limiting simply won’t scale for you if you’re trying to pull everything at a higher level. If you’re not following that intended practice and are pulling at the campaign level or above, you’re definitely much more likely to run into rate limit issues much more quickly.

That being said, with the current cost-based rate limit of 5,000 per minute per token, there’s no way it should be taking you 13 minutes to pull stats for 28 campaigns unless you’re pulling in a very suboptimal way.
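
As a sketch of that intended pattern (the line item IDs below are hypothetical, and the endpoint paths follow the v0 style used elsewhere in this thread, so verify them against the docs):

# 1. List the account's line items once
twurl -H ads-api.twitter.com "/0/accounts/18ce53y9hd8/line_items" > line_items.json
# 2. Pull 7 days of hourly stats for a batch of line items
twurl -H ads-api.twitter.com "/0/stats/accounts/18ce53y9hd8/line_items?line_item_ids=abc1,abc2,abc3&granularity=HOUR&start_time=2015-08-18&end_time=2015-08-25" > line_item_stats.json
# 3. Persist the hourly rows on your side and roll them up to campaign level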

Can you provide the following:

  • A code snippet or pseudo code for how you’re handling retries when you do get rate limited
  • Replayable twurl examples of all the most common stats calls you’re making

#9

Dear brandonmblack,

Thank you very much for your reply!
Sorry for the delay; I have a huge workload…

I run commands like the ones below in a loop over all my campaigns until all my files are filled with data.
I do this only for (today - 5 days).
After every 5 campaigns I wait 65 seconds and then retrieve another 5.
If there are errors in any of the JSON files, I retrieve those campaigns again in the same loop, with a 65-second delay for every 5 campaigns.

twurl -X GET -H ads-api.twitter.com "/0/stats/accounts/18ce53y9hd8/campaigns/24xq1?granularity=DAY&start_time=2015-08-20&end_time=2015-08-25" > /projects/stam/alexf/camaing_24xq1_twitter.json

twurl -X GET -H ads-api.twitter.com "/0/stats/accounts/18ce53y9hd8/campaigns/2ai84?granularity=DAY&start_time=2015-08-20&end_time=2015-08-25" > /projects/stam/alexf/camaing_2ai84_twitter.json

twurl -X GET -H ads-api.twitter.com "/0/stats/accounts/18ce53y9hd8/campaigns/2l9uu?granularity=DAY&start_time=2015-08-20&end_time=2015-08-25" > /projects/stam/alexf/camaing_2l9uu_twitter.json

twurl -X GET -H ads-api.twitter.com "/0/stats/accounts/18ce53y9hd8/campaigns/2lpwh?granularity=DAY&start_time=2015-08-20&end_time=2015-08-25" > /projects/stam/alexf/camaing_2lpwh_twitter.json

twurl -X GET -H ads-api.twitter.com "/0/stats/accounts/18ce53y9hd8/campaigns/2m9wq?granularity=DAY&start_time=2015-08-20&end_time=2015-08-25" > /projects/stam/alexf/camaing_2m9wq_twitter.json


#10

The problem is that you haven’t followed the best practices. Twitter recommends that you get stats at the line item or promoted tweet level.

And after that you should combine the results yourself.

Regards


#11

Sorry, it may be that I don’t understand what I should do according to the best practices;
I’m just asking for a simple twurl example that would resolve my issue.
Could you please help me with this?
(P.S.: I’m a SAS programmer with zero experience using twurl.)


#12

EDIT: @Marketsdotcom If you really want to fetch things at the campaign level, you can fetch all of them in a single request by passing the campaign_ids parameter instead of putting the ID at the end of the URL. For your requests above, that would become:

twurl -X GET -H ads-api.twitter.com "/0/stats/accounts/18ce53y9hd8/campaigns?campaign_ids=2m9wq,2lpwh,2l9uu,2ai84,24xq1&granularity=DAY&start_time=2015-08-20&end_time=2015-08-25" > /projects/stam/alexf/camaigns_twitter.json

You’ll get a row per campaign back, which you can then separate into the different .json files to your liking.
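
If it helps, one way to do that split (a sketch assuming jq is installed; the "data" array and "id" field follow the usual response shape, so check them against your actual output):

jq -c '.data[]' /projects/stam/alexf/camaigns_twitter.json | while read -r row; do
  id=$(printf '%s' "$row" | jq -r '.id')
  printf '%s\n' "$row" > "/projects/stam/alexf/camaing_${id}_twitter.json"
done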


As @brandonmblack explained, instead of going to /campaigns, you should use /promoted_tweets or /line_items.
He also mentioned that you should use hourly granularity, so granularity=HOUR instead of granularity=DAY.

I haven’t worked with twurl yet, but based on your example I think it would become, e.g.:

twurl -X GET -H ads-api.twitter.com "/0/stats/accounts/18ce53y9hd8/promoted_tweets?promoted_tweet_ids=[COMMA_SEPARATED_LIST_OF_TWEET_IDS]&granularity=HOUR&start_time=2015-08-20&end_time=2015-08-25" > /projects/stam/alexf/promoted_tweets_twitter.json

You’ll then need to group the results to your liking, so if you want data per campaign, you’ll have to roll it up yourself.


#13

Thank you very much!
I’ll try it and update you with the results.