Premium API endpoint


#13

@ibksolar, the example request and response shown would be in HTTP. They are the request and response that you would see in an HTTP inspector, like Fiddler or Wireshark, or a browser's dev tools "Network" tab.

You can construct “raw” HTTP posts in tools like Fiddler and Postman. Or, you can construct them in higher level programming languages, like C# or Python.
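For illustration, here is a rough Python sketch of building such a POST with the requests library (the token, environment label, and query are placeholders only):

import requests

# Placeholder values only - substitute your own bearer token and dev environment label
url = "https://api.twitter.com/1.1/tweets/search/30day/dev.json"
headers = {"Authorization": "Bearer YOUR_BEARER_TOKEN"}
payload = {"query": "snow has:images", "maxResults": 10}

# requests builds and sends the raw HTTP POST that you would otherwise see in the inspector
r = requests.post(url, json=payload, headers=headers)
print(r.status_code)
print(r.text)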

I’ve coded this in C# on my own, but I now use a library called TweetInvi, which makes it easy to accomplish.

- Lee


#14

Thanks for your prompt reply @lxcichano. Can you please tell me how I can run the following code in Python? This is not Python syntax, and I got errors when I tried.

require 'net/http'
require 'uri'

uri = URI.parse("https://api.twitter.com/1.1/tweets/search/30day/dev.json")

headers = {"Authorization": "Bearer MYREALLYLONGTOKENBASEDONKEYS", "Content-Type": "application/json"}

data = {"query": "(snow OR sleet OR hail OR (freezing rain)) has:images", "fromDate": "201709270000", "toDate": "201709280000"}

http = Net::HTTP.new(uri.host, uri.port)
request = Net::HTTP::Post.new(uri.request_uri, headers)
request.body = data

response = http.request(request)

This is the code I took from the official Twitter help.
P.S. I have done all the prerequisite steps, but I still cannot get any tweets extracted with this.

Can you please help me, or point me to an online resource?


#15

Hello, @umairhanif00. Your posted code appears to be Ruby. The first detail I notice is that your uri is parsing an SSL endpoint (HTTPS), but your connection is not using SSL.

 uri = URI.parse("https://api.twitter.com/1.1/tweets/search/30day/dev.json")
response = http.request(request)
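Since you asked about Python: as a very rough sketch (not tested, token and environment label are placeholders), the same request could be written with Python's built-in http.client, where the SSL part is explicit in the HTTPSConnection class:

import http.client
import json

# HTTPSConnection makes the SSL/TLS part explicit - a plain HTTPConnection would not use SSL
conn = http.client.HTTPSConnection("api.twitter.com")

headers = {"Authorization": "Bearer YOUR_BEARER_TOKEN",   # placeholder token
           "Content-Type": "application/json"}
body = json.dumps({"query": "(snow OR sleet OR hail OR (freezing rain)) has:images",
                   "fromDate": "201709270000",
                   "toDate": "201709280000"})

# Replace "dev" with your own dev environment label
conn.request("POST", "/1.1/tweets/search/30day/dev.json", body, headers)
response = conn.getresponse()
print(response.status, response.read().decode())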

This post might help you:

You could also try this converter:


#17

I took help from this link: Application-auth. I copied all of the code and modified the consumer key and consumer secret.

When I use

SEARCH_TWEETS_URL = 'https://api.twitter.com/1.1/search/tweets.json'  # this is the standard endpoint
it gives me all the tweets I ask for.

BUT

When I change it to
SEARCH_TWEETS_URL = 'https://api.twitter.com/1.1/tweets/search/30day/development.json'  # This is the Premium Search endpoint mentioned in the Twitter documentation

It gives me an invalid request / token error.

Can you please tell me exactly where I am making a mistake?
Thanks :slight_smile: Anxiously waiting for your reply.


#18

Hello, @umairhanif00. In your SEARCH_TWEETS_URL, what looks like the name of the JSON file (/development.json) should match your ‘Dev environment label’. Does it?
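For instance, the endpoint is built roughly like this (a sketch, assuming your label really is development):

# The last path segment must be your Dev environment label, not a fixed file name
env_label = "development"  # whatever you named it on the developer dashboard
endpoint = "https://api.twitter.com/1.1/tweets/search/30day/{}.json".format(env_label)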


#19

Yes, my environment label is 'development'.


#20

Are you using the headers and payload variables from that sample code?

It would help to see those, too, but don’t post your actual token. It is hard to troubleshoot code without seeing your code.


#21

headers = {'Authorization': 'Bearer {}'.format(access_token)}
payload = {'q': 'PSL', 'count': 10, 'result_type': 'popular'}

access_token comes from the code snippet above.

Sorry for the inconvenience; I am on my cellphone, so this is all the help I can give right now. I will post the code early in the morning. Can you tell anything from this?


#22

q and result_type are not premium search API operators. The operators are different for the premium API. Please see the integration guide and documentation.
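For example (just a sketch; see the docs for the full list of operators and parameters), the standard parameters would map to something like:

# Standard search (v1.1) parameters - these do not work on the premium endpoints
payload = {'q': 'PSL', 'count': 10, 'result_type': 'popular'}

# Premium 30-day search equivalent - use 'query' and 'maxResults' instead
payload = {'query': 'PSL', 'maxResults': 10}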


#23

Yes, I suspected you might be using the standard search operators from that sample code. As @andypiper says, you must update those for the 30-day API endpoint. Here is a fully constructed URL that should work:
https://api.twitter.com/1.1/tweets/search/30day/development.json?maxResults=10&query=from:SomeTwitterHandler
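As a rough Python sketch (assuming the requests library and a valid bearer token), that URL corresponds to something like:

import requests

access_token = 'YOUR_BEARER_TOKEN'  # placeholder - use the bearer token from your earlier snippet
SEARCH_TWEETS_URL = 'https://api.twitter.com/1.1/tweets/search/30day/development.json'

headers = {'Authorization': 'Bearer {}'.format(access_token)}
# 'query' and 'maxResults' replace the standard q / count / result_type parameters
payload = {'query': 'from:SomeTwitterHandler', 'maxResults': 10}

response = requests.get(SEARCH_TWEETS_URL, params=payload, headers=headers).json()
print(response)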

You can find the full information here:
https://developer.twitter.com/en/docs/tweets/search/api-reference/premium-search#DataParameters

If you go to your dashboard and hover over the three blue dots, you will get direct links to the docs.


#24

Sorry I'm coming in late, @lxcichano. Thanks for the help, but I eventually got help some other way.

@umairhanif00: I'm not sure I totally understand exactly what you want to do, but if what you want is to set up your premium account and search historical tweets (30-day or full archive) using Python, then I might just be able to help; I think it's pretty easy.

What you need:

  1. Your environment label (I assume you have this already; if not, it's the Dev environment label at https://developer.twitter.com/en/account/environments).
  2. I hope you have your Bearer Token, access token and all… If you do, then you're good to go.
  3. All you need to do to get historical tweets is edit this Python code:
import requests
import json

endpoint = "https://api.twitter.com/1.1/tweets/search/30day/dev.json" # You'd need to replace "30day" with "fullarchive" if that's your case, and replace "dev" with your dev environment label.

headers = {"Authorization":"Bearer MYREALLYLONGTOKENBASEDONKEYS", "Content-Type": "application/json"}  

data = '{"query":"(snow OR sleet OR hail OR (freezing rain)) has:images", "fromDate": "201802020000", "toDate": "201802240000"}'

response = requests.post(endpoint,data=data,headers=headers).json()

print(json.dumps(response, indent = 2))

Let me know if it helps.


#25

Thanks Andy and Lee. It worked for me :slight_smile:


#26

Hey, I hope you are all doing well with your health and development work. How can I add the negation operator, i.e. (-is:retweet), to my query?

payload = {"query": (hash_one) -is:retweet}
response = requests.get(SEARCH_TWEETS_URL, params=payload, headers=headers).json()

where hash_one is the user's input of #tags or @TwitterHandles or keywords.
I want to negate the retweets, but the PyCharm IDE gives me a syntax error for this.

I also tried with hard-coded handles and #tags:

payload = {"query": "(@thePSLt20) -is:retweet"}

But it searches for "hash_one" and returns me an empty list of tweets.

It's not working either way.

Please help. @andypiper @lxcichano @ibksolar


#27

Hello all. Is there any way to do this using R instead of Python?


#28

Yes, there’s an example in this thread, kindly provided by @hupseb - it would be great to see more examples shared in R, and possibly contributions to a library like rtweet :slight_smile:


#29

Thank you Andy. Yes, I have seen that great example, but I was confused about some parts of it. I have posted a question there. Would you kindly be able to look at it?


#30

Hey Gowlnar,

you would have to add the from: operator to your query. You can find all operators of the Premium API here: https://developer.twitter.com/en/docs/tweets/rules-and-filtering/overview/premium-operators

Back to my example in this post:

Just change as follows:

resTweets <- POST(url = "https://api.twitter.com/1.1/tweets/search/fullarchive/LIVE.json",
add_headers("authorization" = bearerTokenb, "content-Type" = "application/json"),
body = '{"query": "from:realDonaldTrump", "maxResults": 20}')

You don’t need to put the app name in the body. You are authenticated by the bearerToken already at that point.

The error responses of the API are very good and helpful.
Hope I could help :slight_smile:

I will publish some examples for R on GitHub soon.
Collecting tons of tweets at the moment.


#31

Hi,
can I ask how to add the time limitation to the request body in R?


#32

Hi, is there a way of using the ‘next’ token parameter for pagination using the R code?
Does the next parameter go in the body the same as the other parameters (query, maxResults, etc.)?

Thanks!


#33

Yes, the next token is an additional parameter in the request body.
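In Python, for example, a rough pagination loop might look like this (placeholder token and environment label; in the R example above, next would similarly be added to the body string):

import requests
import json

# Placeholder token and environment label
endpoint = "https://api.twitter.com/1.1/tweets/search/30day/dev.json"
headers = {"Authorization": "Bearer YOUR_BEARER_TOKEN", "Content-Type": "application/json"}
body = {"query": "(snow OR sleet) has:images", "maxResults": 100}

all_tweets = []
while True:
    page = requests.post(endpoint, data=json.dumps(body), headers=headers).json()
    all_tweets.extend(page.get("results", []))
    if "next" not in page:
        break
    # The 'next' token is sent back in the body alongside the other parameters
    body["next"] = page["next"]

print(len(all_tweets))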