Help: Data for the last day / Count of requests



Could you please clarify a few issues with the premium API?

The task we want to solve with your API:

  1. Get the count of Tweets for about 1,500 keywords for a single day.
    Problem: currently we get data for the last 30 days, but we do not need all of it. We only need data for the last day.

  2. We need to repeat this every day (1,500 keywords per day = 1,500 * 30 = 45,000 per month).
    Problem: the API description is unclear. When does the 500-query limit reset?

We tried setting up our application by making just 3 queries for 3 keywords, and Request Usage now shows 3 out of 500. We misunderstood how the API works. Can we get the count for each of 1,500-1,700 keywords in a single request? We need one such request every day.

Please suggest a way to solve these tasks.



  1. Can you please provide an example of your request for us to review?

  2. Account requests reset on the day of the month that you were approved for access to the developer account. You can view this date (the charge date) on the “Billing” page.

If you have a paid premium account, the most Tweets you can get with one request is 500. The sandbox version only allows 100 Tweets per request. You can read more about this in our documentation (under the Pagination section):

For example, say your query matches 6,000 Tweets over the past 30 days (if you do not include date parameters in your request, the API defaults to the full 30-day period). The API will respond with the first ‘page’ of results, containing either the first ‘maxResults’ Tweets or all of the Tweets if there are fewer than that for the time period. That response will contain a ‘next’ token, and you make another call with that ‘next’ token added to the request. To retrieve all 6,000 Tweets, approximately 12 requests will be necessary.
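The pagination loop described above can be sketched in Python. This is an illustrative stub, not a real HTTP client: `fetch_page` stands in for one request to the search endpoint, and only the `next` token name comes from the API response format.

```python
# Sketch of the 'next'-token pagination loop. Each call to fetch_page
# represents one request (and so counts against the request quota).

def collect_pages(fetch_page):
    """Follow 'next' tokens until the API stops returning one."""
    tweets = []
    next_token = None
    while True:
        page = fetch_page(next_token)       # one request per page
        tweets.extend(page.get("results", []))
        next_token = page.get("next")
        if not next_token:                  # the last page has no 'next' token
            break
    return tweets

# Stubbed demonstration (no network): three pages of two "Tweets" each.
pages = {
    None: {"results": [1, 2], "next": "a"},
    "a":  {"results": [3, 4], "next": "b"},
    "b":  {"results": [5, 6]},
}
print(collect_pages(pages.__getitem__))  # [1, 2, 3, 4, 5, 6]
```

So a query matching 6,000 Tweets with `maxResults` of 500 works out to roughly 12 iterations of this loop, i.e. 12 requests.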


Hello! Here is an example of our code and the result we need.
I think there must be a better way to do this.
Please tell us how to get the counts for each keyword without a loop.

# account_type: premium
# endpoint:
# python3 implementation:

from searchtweets import gen_rule_payload, load_credentials, collect_results

premium_search_args = load_credentials("twitter_keys.yaml", yaml_key="search_tweets_premium", env_overwrite=False)

keywords = ("beyonce", "shakira", "spears")  # test keywords
result = dict()

for kw in keywords:
    # One counts request per keyword, bucketed by day (each counts against the quota)
    count_rule = gen_rule_payload(kw, results_per_call=100, count_bucket="day")
    counts = collect_results(count_rule, result_stream_args=premium_search_args)
    result[kw] = counts[0].get('count')  # we use only the first bucket

# expected result: {'beyonce': 18701, 'shakira': 4406, 'spears': 3258}


LeBraat, our developer’s reply is above. Could you help us solve this problem?


Could you please reply?


Hello! We have a complaint.
We paid for the premium API, but we are not getting support, and we cannot get an answer to our question:

We cannot use what we paid for.


I’m not too familiar with Python, but if you are trying to get a total count of 1500 keywords for a single day, you could use the following:

curl -X POST "" -d '{"query":"keyword1 OR keyword2 OR keyword3 OR ...","fromDate":"201808080000","toDate":"201808090000","bucket":"day"}' -H "Authorization: Bearer TOKEN"

Make sure to replace TOKEN with your bearer token and keyword1, keyword2, keyword3 with your keywords.

If you need to get the counts for each of those keywords separately, then you can just run the query with a single keyword and rerun the request for each of your keywords.
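That per-keyword approach can be sketched in Python as follows. The request body mirrors the curl example above; the HTTP call itself is injected as a `post` callable so any client can be used, and the function names and the stub below are purely illustrative.

```python
# Sketch: one counts request per keyword, summing the daily buckets
# in each response. Each call to `post` is one request against the quota.

def counts_payload(keyword, from_date, to_date):
    # Same body as the curl example, but for a single keyword.
    return {"query": keyword, "fromDate": from_date,
            "toDate": to_date, "bucket": "day"}

def daily_counts(keywords, post, from_date="201808080000", to_date="201808090000"):
    """Return {keyword: total count} -- one request per keyword."""
    result = {}
    for kw in keywords:
        response = post(counts_payload(kw, from_date, to_date))
        result[kw] = sum(bucket["count"] for bucket in response["results"])
    return result

# Stubbed demonstration (no network): each keyword gets one daily bucket.
fake = {"beyonce": 18701, "shakira": 4406, "spears": 3258}
post_stub = lambda payload: {"results": [{"count": fake[payload["query"]]}]}
print(daily_counts(fake.keys(), post_stub))
# {'beyonce': 18701, 'shakira': 4406, 'spears': 3258}
```

Note that this means 1,500 keywords still cost 1,500 requests per day, which is why the quota question below matters.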

I’ll see if I can point someone your way to help with the python query.


Thanks for the response.
That’s what we need: the counts for each keyword separately.
Does this mean that our monthly premium API limit objectively covers only a third of a single day’s requirement?
Are there any special conditions for our task? Perhaps a special request type at a special price?
We are interested in cooperating to solve our problem.


Your math in the first post was correct:
(1500 keywords per day = 1500*30 = 45000 requests per month)

I suggest that you reach out to our sales team about our enterprise products at that volume of requests.