Users/lookup and 502 HTTP Errors count against rate limit


We’re running a rather popular Twitter analytics service that works really well for most people. Lately we’ve seen more and more accounts with 1M+ followers sign up (including Twitter executives, investors, etc.), and we fail to scan their accounts.

We run a two-stage process:
Stage 1: Use followers/ids to get a complete list of IDs for the user’s followers. This stage mostly goes through fine, even for large accounts (a few hours).
Stage 2: Run a full scan (once!) of all IDs using users/lookup to understand the follower base demographics (most popular followers, VIP followers, etc.)

Stage 2 requires a few minutes to a day at most for smaller accounts (<100K followers).
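Assuming the documented API v1.1 behavior, where followers/ids is paged with cursors and users/lookup accepts at most 100 IDs per request, the two stages might be sketched like this (`fetch_ids_page` is a hypothetical wrapper around the followers/ids HTTP call, not a real library function):

```python
def collect_follower_ids(fetch_ids_page):
    """Stage 1: walk followers/ids cursor pages until exhausted.

    `fetch_ids_page(cursor)` is a hypothetical wrapper returning
    (ids, next_cursor); a next_cursor of 0 marks the last page,
    and -1 requests the first page.
    """
    ids, cursor = [], -1
    while cursor != 0:
        page, cursor = fetch_ids_page(cursor)
        ids.extend(page)
    return ids

def chunk(ids, size=100):
    """Stage 2 helper: split the ID list into users/lookup-sized
    batches (the endpoint accepts at most 100 IDs per request)."""
    for i in range(0, len(ids), size):
        yield ids[i : i + size]
```

Each chunk then becomes one users/lookup request, so a 500K-follower account needs about 5,000 requests in stage 2.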

For large accounts (500K followers and up), we are unable to scan through, since most requests fail with a 502. Our tool then throttles down from 100 IDs per users/lookup request to 50, then 10, then 1; at that size the request often goes through, or we detect a damaged record and skip it. Afterwards it throttles back up to 100 IDs per request.
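That fallback logic can be sketched roughly as follows. `fetch_batch` stands in for the actual users/lookup call (a hypothetical wrapper, not a real client method), and `Http502Error` is an assumed exception raised when the API answers with a 502:

```python
# Batch sizes to fall back through when the API returns a 502.
BATCH_SIZES = [100, 50, 10, 1]

class Http502Error(Exception):
    """Raised by the fetcher when the API answers with HTTP 502."""

def scan_followers(ids, fetch_batch):
    """Look up user objects for every ID, shrinking the batch on 502s.

    `fetch_batch` takes a list of up to 100 IDs and returns the
    corresponding user objects, raising Http502Error on a gateway error.
    """
    users = []
    i = 0
    size_idx = 0  # start at the largest batch size (100)
    while i < len(ids):
        batch = ids[i : i + BATCH_SIZES[size_idx]]
        try:
            users.extend(fetch_batch(batch))
            i += len(batch)
            size_idx = 0  # success: throttle back up to 100 per request
        except Http502Error:
            if size_idx + 1 < len(BATCH_SIZES):
                size_idx += 1  # retry the same IDs with a smaller batch
            else:
                i += 1  # a single ID still fails: treat as damaged, skip
    return users
```

Note the cost the thread is complaining about: each shrink step burns one rate-limited request, so isolating a single damaged record can cost up to four calls before any data comes back.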

My question, to which I need a definite answer in order to decide whether to proceed with our business or shut it down, is this: is it feasible to expect that these 502 errors will be addressed in the near future? We are trying to build an analytics business on top of Twitter’s platform and are very happy with the API and the data we can get through it. We adhere to every limit and restriction and work very hard to be a ‘nice API user’ in every possible way.

Is the 502 simply a sign of overload on Twitter’s side? What is the best way to work around this undocumented limitation? I lose a rate-limited request every time I get a 502, which should not happen, since I got nothing back for my request!


And @episod - you’ve always been super helpful on the forums here. I know you probably can’t do much about this right now. I saw and understood your previous answers on this topic and wanted to link them here as well:

I can see how Twitter needs to deal with request timeouts, but it’s really hard to work around this when every failed request costs me a rate-limited call. I’d be happy to request the same data again if no call had been deducted the first time around.

It’s just that most requests send back a 502 for large accounts. Of 180 requests, easily 100 or more come back blank and still consume a rate-limited call.


The 502s are a regular part of life when working with our APIs that export bulk amounts of data. Asking for less data per request is really the best way to manage this. This problem, especially with this method, will be better addressed over time. Different parts of the API run on different back-end architectures, and users/lookup is still part of the older architecture.

I’ll follow up with the engineering team on whether there’s anything we can do to improve performance with users/lookup in the near-term.

What’s happening on our end is just that it takes longer than the maximum amount of time to convert the user IDs into full user objects, and then to prepare and serve a JSON response.


Makes total sense, and I get that. I’m fine if this means we can’t fetch more results per request; the issue is much more about the rate-limit token I lose when a request times out. I get nothing back and still lose a token.

Always appreciate you taking time to answer these things Taylor, thanks.


@episod Just wanted to report that I have now noticed that failed requests (HTTP 502 responses) are no longer counted against the rate limit. They used to be.

If this sticks, you guys have solved a big problem. Can you confirm that this change, not counting failed API calls against the rate limits, did in fact happen?

My routines automatically scale the request size down from 100 to 50 to 10 to 1 as soon as they hit a 502, but I usually lost up to 4 API calls before getting a result I could use. That seems to have been addressed now. Thank you!