Premium Search | Full Archive



We’re using the Full Archive Search but are having problems retrieving all Tweets from a given user account.

Here is our endpoint and params:

And here is our Postman cfg / result - we get an authentication error:


If we remove the from: param (and do a GET), searching just for the keyword ‘jamie_maguire1’, we do get data back.

Can you advise please?


Hi @SocialOpinions. Have you tried running this as a cURL?

Copy the following request into your command line after making the appropriate adjustments to the following:

  • Environment name e.g. Prod

  • Bearer Token e.g. AAAAAAAAAAAA0%2EUifi76ZC9Ub0wn...

  • fromDate and toDate e.g. "fromDate": "201403040000"

curl --request POST \
  --url https://api.twitter.com/1.1/tweets/search/fullarchive/{ENVIRONMENT_NAME}.json \
  --header 'authorization: Bearer {BEARER_TOKEN}' \
  --header 'content-type: application/json' \
  --data '{
    "query": "from:jamie_maguire1",
    "fromDate": "{FROM_DATE}",
    "toDate": "{TO_DATE}",
    "maxResults": "500"
  }'



We’re using Postman — can we use that for requests? We use it daily in our development stack.

Kind Regards,



Hi Jamie,

When using Postman with the POST HTTP method, try putting your query in the Body (choose raw) instead of as a Parameter.
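The same idea — all search parameters in the raw JSON body, not in the URL — can be sketched in Python. This is only an illustration: the helper name `build_search_body` is made up, and the endpoint label, dates, and token are placeholders you would substitute.

```python
import json

# Hypothetical sketch -- the helper name and placeholder values are
# illustrations, not part of the official client library.
SEARCH_URL = "https://api.twitter.com/1.1/tweets/search/fullarchive/{ENVIRONMENT_NAME}.json"

def build_search_body(query, from_date, to_date, max_results=500):
    """Build the raw JSON body for a full-archive search POST.

    Everything goes in the request body (Postman's Body > raw),
    not in the URL query string.
    """
    return json.dumps({
        "query": query,
        "fromDate": from_date,   # yyyymmddhhmm, e.g. 201403040000
        "toDate": to_date,
        "maxResults": str(max_results),
    })

body = build_search_body("from:jamie_maguire1", "201403040000", "201903040000")
```

You would then POST `body` to `SEARCH_URL` with the `authorization: Bearer` and `content-type: application/json` headers, exactly as in the cURL example above.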

It worked for me when I tried this (see screenshots):



Awesome! That works! Thank you!

One of the accounts that we’re processing is @joelsartore. The account has 1,268 Tweets, so with the maxResults parameter set to 500 we’d only expect the “next” paging parameter to be returned 2-3 times. However, the pages of results (and we confirmed this in our .NET code as well) only ever seem to contain approximately 40-50 results at a time, meaning we need to make far more requests than we theoretically should.

For example, you can see this request has only returned 31 records:

Can you advise please?



@Hamza - any updates please?


You can take a look at my forum message.

If the user has fewer than 500 tweets per month, a request will only return the tweets posted in that month. That means you will use up a request for each month you want to download.


We’ve just maxed out our allowance for the month (100 requests), when it should only have taken 2-3 requests, as the account we were processing only has approximately 1,300 Tweets.

@Aurelia - can you advise please?

Kind Regards.


Sorry, I just re-read this.

So, effectively, Premium Search has to step through each month, going as far back as 2006, and run X number of requests until it gets to the present day?

Is this the expected behaviour?


No. The expected behavior, as far as I understand it, is that each request should come back full (500 or 100 tweets) regardless of when the tweets were posted. The only requests that might not come back full are the first and the last, if I’m not mistaken.


Hello folks,

We note the following in our documentation:

The API will respond with the first ‘page’ of results with either the first ‘maxResults’ of Tweets or all Tweets from the first 30 days if there are less than that for that time period.

This means that if you have your maxResults set to 500, but there were only 300 Tweets posted in the first 30-day period, you will only receive those 300 Tweets in the first page.

Mismatch between requests used and tweets downloaded - Search API


Does this apply to the full archive too? I mean, if I want to download tweets spanning three months, for example 15/03 - 15/06: if only 90 tweets were posted between 15/03 and 15/04, will my first request jump to the next period regardless of my maxResults being 500? Will my request not be filled up to 500 tweets if they were not posted within 30 days of my start date? Thank you for your explanation.


This pagination applies to all Search API endpoints.

If you have 800 Tweets in a given 30-day period, you will have to make two requests to pull the complete results. However, if you have just 400 Tweets in month one and then 100 Tweets in month two, you will still have to use two requests to pull the full results.
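That cost model can be sketched as a tiny estimator. This is only an illustration of the behavior described above, not an official billing formula, and the function name is made up: each 30-day window costs at least one request, plus extra requests whenever a window holds more Tweets than maxResults.

```python
import math

def estimate_requests(tweets_per_window, max_results=500):
    """Estimate how many requests a search will consume.

    `tweets_per_window` is a list of Tweet counts, one per 30-day
    window in the date range. Each window costs at least one request,
    even if it is empty, plus one more for every additional full
    page of `max_results` Tweets.
    """
    return sum(max(1, math.ceil(n / max_results)) for n in tweets_per_window)

estimate_requests([800])       # one window, two pages -> 2 requests
estimate_requests([400, 100])  # two windows, one page each -> 2 requests
```

This matches both cases above: 800 Tweets in one window and 400 + 100 across two windows each cost two requests, which is why a sparse account spread over many months burns through the allowance.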


Thanks for replying.

In theory, we need to make a request for each 30-day period, and, as you say, if a given period has more than 500 Tweets, the “next” parameter is used to make another request to grab the next batch of up to 500 (provided they exist)?

Then we continue working through each 30-day period using the “next” parameter until we get to the present day?


Most of your restatement is correct. Only a couple of changes.

  1. It is actually 31 days. I should have been clearer about this earlier.
  2. The data endpoint delivers data in reverse chronological order.
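Putting those corrections together, the paging loop can be sketched like this. It is only a sketch: `fetch` is a stand-in for whatever HTTP client POSTs the body and parses the JSON response (there is no official helper by that name).

```python
def paginate(fetch, body):
    """Walk every page of a full-archive search.

    Each page covers at most 31 days and arrives in reverse
    chronological order, so a page may contain far fewer Tweets
    than maxResults. `fetch` is a stand-in for the HTTP call:
    it POSTs `body` to the search endpoint and returns the
    parsed JSON response.
    """
    while True:
        response = fetch(body)
        yield from response.get("results", [])
        next_token = response.get("next")
        if next_token is None:
            # No "next" token means we have reached the oldest page.
            break
        # Resend the same request with the "next" token added to
        # retrieve the following page.
        body = {**body, "next": next_token}
```

Counting how many times `fetch` is called in a loop like this is a quick way to confirm how many requests a given account actually consumes against the monthly allowance.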


Thanks for the clarification.


This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.