Hello,

I am new to development and also to the Twitter API. I managed to use Tweet counts (all) and Tweet search (all) for a few words, but I cannot add operators to the query, like max_results=300 or place_country=FR for example.
Each time I do, I get 0 results. I think I might not be adding them to the query the right way; I tried joining them with %20, like this for example:

query=(happy%20OR%20sad)%20place_country=us%20lang=en%20sample=10%20max_results=200" -H "Authorization: Bearer 'BEARER TOKEN'"

Am I doing something wrong?

Thank you for your help, and have a nice day!
Paul

See Search Tweets - How to build a query | Docs | Twitter Developer Platform for how to build a query for search

And Filtered stream - How to build a rule | Docs | Twitter Developer Platform for building stream rules

In your example, the operators should use : rather than = (for example, place_country:us rather than place_country=us).
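To see how the corrected query looks once it is URL-encoded, here is a minimal sketch using Python's standard library. The endpoint URL and parameter names assume the v2 full-archive search API; note that max_results is a separate request parameter, not part of the query string itself.

```python
from urllib.parse import urlencode

# Corrected query: operators use ':' inside the query string, while
# max_results travels as its own request parameter outside the query.
params = {
    "query": "(happy OR sad) place_country:us lang:en",
    "max_results": 200,
}
url = "https://api.twitter.com/2/tweets/search/all?" + urlencode(params)
print(url)
```

urlencode takes care of escaping the spaces, parentheses, and colons for you, which avoids the hand-encoding mistakes in the original example.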

I recommend using a library or another tool instead of specifying the query as a URL parameter and encoding it yourself. Give twarc a try.

After setup (pip install twarc, then twarc2 configure), your search example would be:

twarc2 search --archive "(happy OR sad) place_country:us lang:en" results.json

Twarc can also do streaming for v2: twarc2 (en) - twarc

You can start the stream in one terminal and, while it’s running, add or remove rules as needed.
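If you ever do this against the raw API instead of twarc, the rules are managed by POSTing a JSON body to the v2 filtered stream rules endpoint. A minimal sketch of the payload shape (the "tag" label and the rule value here are just illustrative):

```python
import json

# Payload shape for adding a rule to the v2 filtered stream
# (POST /2/tweets/search/stream/rules); "tag" is an optional label
# that comes back with matching tweets so you can tell rules apart.
payload = {
    "add": [
        {"value": "(happy OR sad) place_country:us lang:en", "tag": "mood-us"}
    ]
}
body = json.dumps(payload)
print(body)
```

Deleting rules uses the same endpoint with a "delete" key listing rule IDs instead of "add".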

Thank you so much! It works perfectly now.

Have a nice day!

Also, I cannot find whether it is possible to get the total number of tweets per country and per year; I would need those numbers to compare with my results. I tried with the method you gave me, but it didn’t work:

twarc2 counts --archive --start-time 2017-01-01 --end-time 2018-01-01 place_country:fr lang:fr results.json

Thank you

You won’t be able to get this exactly - only a very small fraction of tweets contain geo information, so you will have to estimate this yourself somehow.

But for the command, you need to surround the query with quotes appropriately for your terminal:

twarc2 counts --archive --granularity "day" --start-time "2017-01-01" --end-time "2018-01-01" "place_country:fr lang:fr" results.json

(I added --granularity "day" to get daily counts; the default is "hour".)
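Once you have the counts file, you can sum the daily buckets into a yearly total yourself. A short sketch, assuming each line of the output is one v2 counts API response page with a "data" list of {"start", "end", "tweet_count"} buckets (here fed a small hard-coded sample instead of the real results.json):

```python
import json

# One sample page in the v2 counts response shape; in practice you would
# iterate over the lines of the results.json file written by twarc2.
sample_page = json.dumps({
    "data": [
        {"start": "2017-01-01T00:00:00Z", "end": "2017-01-02T00:00:00Z", "tweet_count": 120},
        {"start": "2017-01-02T00:00:00Z", "end": "2017-01-03T00:00:00Z", "tweet_count": 80},
    ],
    "meta": {"total_tweet_count": 200},
})

total = 0
for line in [sample_page]:  # in practice: for line in open("results.json")
    page = json.loads(line)
    total += sum(bucket["tweet_count"] for bucket in page.get("data", []))
print(total)  # 200
```

Since your start and end times already bound a single year, the grand total across all pages is your per-year count for that query.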