Hi,
I am a researcher on the academic research track, trying to pull historical tweets within certain geographical bounding boxes. I have not exceeded my monthly limit for this month. However, the endpoint that I am calling is now only returning 503 errors for me. This is pretty bizarre and I’m not sure if this is a problem with Twitter or with me, because my code was working earlier.
Thanks!

How are you making the calls? With what script or library? I would try reducing the max_results parameter from 500 to 250 or 100 to test. I’ve seen this error before here: 503 Error with search/all · Issue #449 · DocNow/twarc · GitHub

Hey! Thanks for the help. I tried changing the max_results to be 100, but I’m still getting the same error.

I’m just using the requests library in Python with my specific parameters. The code looks almost exactly the same as the sample code.

The URL I’m calling is: https://api.twitter.com/2/tweets/search/all. My params are as follows, where box_param is a string of a bounding box:

query_params = {
    'query': box_param,
    'start_time': '2011-01-01T00:00:00Z',
    'end_time': '2021-04-01T00:00:00Z',
    'tweet.fields': 'created_at,geo',
    'expansions': 'author_id,geo.place_id',
    'user.fields': 'location',
    'max_results': 100,
}

Ah, in that case: usually 503 errors go away after a while, because it’s something on Twitter’s end.
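In the meantime, since these 503s are usually transient, a simple retry wrapper around the request sometimes gets past them. This is just a sketch, not from the official sample code; the function name, retry count, and backoff schedule are arbitrary choices of mine:

```python
import time

import requests

SEARCH_URL = "https://api.twitter.com/2/tweets/search/all"

def get_with_retries(params, bearer_token, max_retries=5):
    """GET the full-archive search endpoint, retrying transient 503s
    with exponential backoff (1s, 2s, 4s, ...) before giving up."""
    headers = {"Authorization": f"Bearer {bearer_token}"}
    for attempt in range(max_retries):
        resp = requests.get(SEARCH_URL, headers=headers, params=params)
        if resp.status_code != 503:
            resp.raise_for_status()  # surface any non-503 HTTP error
            return resp.json()
        time.sleep(2 ** attempt)  # back off before retrying
    raise RuntimeError(f"still receiving 503 after {max_retries} attempts")
```

You’d call it with the same query_params you posted above plus your bearer token; if it still exhausts the retries, the problem is almost certainly on Twitter’s side.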

Hi,

I also can’t download what I need because of the 503 error. The frustrating thing is that in early May it was all working with twarc, but now it doesn’t. It starts downloading, gets through about 50 MB of tweets, and then just chokes on 503 errors, waiting and waiting more seconds between retries.

I tried changing the max results limit, but that didn’t help at all.

Any suggestions?
Each time I’ve tried, I’ve updated twarc; it’s still not working with version 2.2.0.

What’s the exact query / URL that fails (it should be in twarc.log)? Just to check, because sometimes specifying context annotations throws 503 errors, and turning off that expansion helps. Unfortunately, twarc doesn’t have an easy command-line switch for that yet.
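If you want to test that hypothesis directly, you can build the request by hand with plain requests and simply leave out all the field/expansion parameters. A sketch below; the query and token are placeholders, and it only prepares the request so you can inspect the exact URL before sending it:

```python
import requests

# Placeholder query -- substitute your own. The point is that no
# tweet.fields / expansions (and hence no context_annotations) are
# requested, to see whether the 503s go away.
params = {
    "query": "flood lang:en",
    "start_time": "2021-01-01T00:00:00Z",
    "end_time": "2021-04-01T00:00:00Z",
    "max_results": 100,
}

req = requests.Request(
    "GET",
    "https://api.twitter.com/2/tweets/search/all",
    params=params,
    headers={"Authorization": "Bearer <YOUR_TOKEN>"},
).prepare()

print(req.url)
# Send it with: requests.Session().send(req)
```

If the stripped-down request succeeds where the twarc one fails, that points at one of the expansions rather than the endpoint itself.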

Hi, thanks for asking. Generally it’s a very simple one-word query with start and end time specified, as well as max results. The only additional argument was the language, en. In early May, the same and similar queries let me download over 5 million tweets, but now it gets stuck.
