Hi! I am using the premium API full-archive counts endpoint for academic research, with the search-tweets Python wrapper to collect daily count data. Strangely, some of my requests seem to count twice against my monthly quota, even though there is no pagination in the results returned. I have specified results_per_call=31 and max_results=31. My queries are under 1,024 characters but include several "OR" operators.
I can't predict when a request will be double counted, and I'm concerned I will exhaust my request quota before I can complete my planned data collection.
Any insight or explanation for the double counting is greatly appreciated. Thank you.
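For context, a request of the kind described above can be sketched as follows. This is an illustrative payload only, assuming the documented fields of the premium full-archive counts endpoint; the query terms and dates are placeholders, not the poster's actual values.

```python
import json

# Illustrative full-archive counts request body (assumed field names per
# the premium counts API docs); query and dates are placeholders.
payload = {
    "query": "snow OR rain OR sleet",  # several OR operators, under 1,024 chars
    "fromDate": "201801010000",        # start of a 31-day window
    "toDate": "201802010000",
    "bucket": "day",                   # daily count buckets
}

print(json.dumps(payload, indent=2))
```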
Count queries can require more than one response, usually when you query a popular term.
For higher-volume queries, generating the counts can take long enough to trigger a response timeout. When that happens, you will receive fewer than 31 days of counts, along with a 'next' token that lets you keep making requests until you have the full payload of counts.
The library you are using might be following that pagination automatically, so a single logical query can consume multiple billed requests; I would check for that.
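The pagination behavior described above can be sketched as follows. The `fetch` function is a mock standing in for one billed HTTP call to the counts endpoint; the function name and response shapes are assumptions for illustration, but they show how one logical query can cost two (or more) requests against the quota.

```python
# Sketch: tally how many billed requests one counts query consumes when
# the API splits the response and returns a 'next' token. 'fetch' is a
# mock here; with the real endpoint each call would be one billed request.

def count_requests(fetch, payload):
    """Follow 'next' tokens; return (all_counts, requests_used)."""
    requests_used = 0
    counts = []
    next_token = None
    while True:
        body = dict(payload)
        if next_token:
            body["next"] = next_token
        response = fetch(body)          # one billed request per call
        requests_used += 1
        counts.extend(response.get("results", []))
        next_token = response.get("next")
        if not next_token:              # no token -> final page reached
            return counts, requests_used

# Mocked responses: a higher-volume query split into two pages, so the
# 31 days of daily counts cost *two* requests against the monthly quota.
pages = [
    {"results": [{"count": 10}] * 20, "next": "abc123"},
    {"results": [{"count": 10}] * 11},
]
fetch = lambda body: pages[1] if body.get("next") else pages[0]

counts, used = count_requests(fetch, {"query": "example", "bucket": "day"})
print(len(counts), used)  # 31 days of counts, 2 requests billed
```

This matches the symptom in the question: the returned data looks like a single un-paginated result, but the wrapper quietly made two requests under the hood.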
Thanks - this seems to be what’s happening.
This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.