Premium API endpoint


@ibksolar, the example request and response shown would be in HTTP. They are the request and response that you would see in an HTTP inspector, like Fiddler or Wireshark, or a browser’s dev toolbar “network” tab.

You can construct “raw” HTTP posts in tools like Fiddler and Postman. Or, you can construct them in higher level programming languages, like C# or Python.
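To make the "raw HTTP" idea concrete, here is a hedged Python sketch that just assembles the request text an inspector like Fiddler would show you. The host, path, and token are placeholders, not real endpoint values:

```python
import json

def build_raw_post(host, path, token, body_dict):
    """Assemble the raw HTTP/1.1 request text an inspector would show.
    host, path, and token here are placeholders for illustration."""
    body = json.dumps(body_dict)
    lines = [
        "POST {} HTTP/1.1".format(path),
        "Host: {}".format(host),
        "Authorization: Bearer {}".format(token),
        "Content-Type: application/json",
        "Content-Length: {}".format(len(body.encode("utf-8"))),
        "",  # blank line separates headers from the body
        body,
    ]
    return "\r\n".join(lines)

raw = build_raw_post("", "/example/path.json",
                     "MYTOKEN", {"query": "snow has:images"})
print(raw.splitlines()[0])  # → POST /example/path.json HTTP/1.1
```

A library like requests (or TweetInvi in C#) builds exactly this kind of message for you under the hood.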

I’ve coded this in C# on my own, but I now use a library called TweetInvi, which makes it easy to accomplish.

, Lee


Thanks for your prompt reply @lxcichano. Can you please tell me how I can run the following code in Python? This is not Python syntax; I tried it and got errors.

require 'net/http'
require 'uri'
require 'json'

uri = URI.parse("")

headers = {"Authorization" => "Bearer MYREALLYLONGTOKENBASEDONKEYS", "Content-Type" => "application/json"}

data = {"query" => "(snow OR sleet OR hail OR (freezing rain)) has:images", "fromDate" => "201709270000", "toDate" => "201709280000"}

http =, uri.port)
request =, headers)
request.body = data.to_json

response = http.request(request)

This is the code I took from the official Twitter help.
P.S. I've done all the prerequisite steps, but I still can't extract tweets this way.

Can you please help me, or point me to an online resource?


Hello, @umairhanif00. Your posted code appears to be Ruby, not Python. The first detail I notice is that your uri parses an SSL endpoint (HTTPS), but your HTTP connection is not using SSL.

 uri = URI.parse("")
response = http.request(request)

This post might help you:

You could also try this converter:


I took help from this link: Application-auth. I copied everything and modified the consumer key and consumer secret.

When I use

SEARCH_TWEETS_URL = '' # this is the standard endpoint

it gives me all the tweets I'm asking for.


When I change this to

SEARCH_TWEETS_URL = '' # This is the Premium Search endpoint mentioned in the Twitter documentation.

it gives me an invalid request/token error.

Can you please tell me exactly where I am making a mistake?
Thanks :slight_smile: Anxiously waiting for your reply.


Hello, @umairhanif00. In your SEARCH_TWEETS_URL, what looks like the name of the JSON file (/development.json) should match your ‘Dev environment label’. Does it?


Yes my environment label is ‘development’


Are you using the headers and payload variables from that sample code?

It would help to see those, too, but don’t post your actual token. It is hard to troubleshoot code without seeing your code.


headers = {'Authorization': 'Bearer {}'.format(access_token)}
payload = {'q': 'PSL', 'count': 10, 'result_type': 'popular'}

access_token comes from the code snippet above.

Sorry for the inconvenience; I'm on my cellphone, so this is all I can give you right now. I will post the full code early in the morning. Can you tell anything from this?


q and result_type are not premium search API operators. The operators are different for the premium API. Please see the integration guide and documentation.
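For contrast, here is a hedged sketch of the two parameter shapes (names taken from the standard and premium docs; the actual values are just examples):

```python
# Standard search: operators/parameters go in the URL query string.
standard_params = {"q": "PSL", "count": 10, "result_type": "popular"}

# Premium search: one rule string plus its own parameter names,
# usually sent as a JSON request body.
premium_body = {"query": "PSL lang:en", "maxResults": 10}

print(set(standard_params) & set(premium_body))  # → set()  (no shared names)
```

None of the standard parameter names carry over, so a standard-style payload against a premium endpoint fails.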


Yes, I suspected you might be using the standard search operators from that sample code. As @andypiper says, you must update those for the 30-day API endpoint. Here is a fully constructed URL that should work:

You can find the full information here:

If you go to your dashboard and hover over the three blue dots, you will get direct links to the docs.


Sorry I’m coming in late @lxcichano. Thanks for the help but I eventually got help some other way.

@umairhanif00: I'm not sure I totally understand what you want to do, but if you want to set up your premium account and search historical tweets (30-day or full archive) using Python, then I might just be able to help, and I think it's pretty easy.

What you need:

  1. Your environment label (I assume you have this already; if not, it's the Dev environment label at your
  2. I hope you have your Bearer Token, access token, and so on. If you do, you're good to go.
  3. All you need to do to get historical tweets is edit this Python code:
import requests
import json

endpoint = "" # Replace "30day" with "fullarchive" if that's your case, and replace "dev" with your dev environment label.

headers = {"Authorization": "Bearer MYREALLYLONGTOKENBASEDONKEYS", "Content-Type": "application/json"}

data = '{"query": "(snow OR sleet OR hail OR (freezing rain)) has:images", "fromDate": "201802020000", "toDate": "201802240000"}'

response =, data=data, headers=headers).json()

print(json.dumps(response, indent=2))

Let me know if it helps.


Thanks Andy and Lee. It worked for me :slight_smile:


Hey, I hope you guys are doing well with your health and development work. How can I add the negation, i.e. (-is:retweet), to my query?

payload = {"query": (hash_one) -is:retweet}
response = requests.get(SEARCH_TWEETS_URL, params=payload, headers=headers).json()

where hash_one is the user input of #tags, @TwitterHandles, or keywords.
I want to negate the retweets, but the PyCharm IDE gives me a syntax error.

I also tried with hard-coded handles and #tags:

payload = {"query": "(@thePSLt20) -is:retweet"}

But it searches for "hash_one" and returns an empty list of tweets.

It's not working either way.

Please help. @andypiper @lxcichano @ibksolar
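For what it's worth, one reading of the syntax error is that -is:retweet sits outside the string, so Python parses it as code. A minimal sketch that interpolates hash_one into the single query string (payload shape taken from the posts above; note that is:retweet is not available in the sandbox tier):

```python
def build_payload(hash_one, exclude_retweets=True):
    # Python can't mix a variable and bare operators in a dict value;
    # the whole rule, including -is:retweet, must be one string.
    query = "({})".format(hash_one)
    if exclude_retweets:
        query += " -is:retweet"
    return {"query": query}

print(build_payload("@thePSLt20"))  # → {'query': '(@thePSLt20) -is:retweet'}
```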


Hello all. Is there any way to do this using R instead of Python?


Yes, there’s an example in this thread, kindly provided by @hupseb - it would be great to see more examples shared in R, and possibly contributions to a library like rtweet :slight_smile:


Thank you Andy. Yes I have seen that great example, but I was confused about some parts of it. I have posted a question there. Would you kindly be able to look at it ?


Hey Gowlnar,

you would have to add the from: operator to your query. You can find all the operators of the Premium API here:

Back to my example in this post:

Just change as follows:

resTweets <- POST(url = "",
  add_headers("authorization" = bearerTokenb, "content-Type" = "application/json"),
  body = '{"query": "from:realDonaldTrump", "maxResults": 20}')

You don’t need to put the app name in the body. You are authenticated by the bearerToken already at that point.

The error responses of the API are very good and helpful.
Hope I could help :slight_smile:

I will publish some examples for R on GitHub soon.
Collecting tons of tweets at the moment.


Can I ask how to add the time limitation in the body in R?


Hi, is there a way of using the 'next' token parameter for pagination with the R code?
Does the next parameter go in the body the same as the other parameters (query, maxResults, etc.)?



Yes, the next token is an additional parameter in the request body.
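In Python terms, the pagination loop looks roughly like this (a sketch: post_fn stands in for whatever function actually performs the POST, and the "results"/"next" field names follow the premium search response format):

```python
def fetch_all_pages(post_fn, body, max_pages=10):
    """Collect results across pages: when a response contains a 'next'
    token, copy it into the body of the following request."""
    tweets, page = [], dict(body)
    for _ in range(max_pages):
        resp = post_fn(page)            # one POST with the JSON body
        tweets.extend(resp.get("results", []))
        nxt = resp.get("next")
        if not nxt:                     # no token -> last page
            break
        page = dict(body, next=nxt)     # same query, plus the token
    return tweets

# Usage with a stubbed post_fn that serves two pages:
pages = [{"results": [1, 2], "next": "abc"}, {"results": [3]}]
fake_post = lambda body: pages.pop(0)
print(fetch_all_pages(fake_post, {"query": "snow", "maxResults": 2}))  # → [1, 2, 3]
```

The same structure applies in R: add a "next" field to the JSON body string and POST again until the response no longer contains one.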