Millions of requests to follower_ids without breaking rate limit



I’m working on an analytics application for businesses based on their payment data, and I would like to be able to find the customer with the highest number of followers. Because a business may have potentially millions of customers, rate limits quickly become an issue, especially since I’m trying to do this concurrently. What’s the best way to retrieve this data given the built-in limitations of the API?


You may need to look into using Gnip. If you can queue the processes and don't need the results quickly, you could cache the results and let them build up slowly over time under the current rate limits, but that might not be ideal.


Unfortunately I can’t point you to a Gnip product that provides easy access to this data at the moment (but I’ll pass that along as a potential enhancement idea for the future). The friends/followers endpoints have relatively low rate limits, so you would have to do this in a cached / long-running manner.
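The cached / long-running approach could look something like this: persist results as you go, so the job can be stopped and resumed across many rate-limit windows. A minimal sketch, assuming a hypothetical `fetch_count` client function and an illustrative cache file name (neither is part of the API):

```python
import json
import os
import time

CACHE_PATH = "follower_counts.json"  # hypothetical local cache file

def load_cache():
    """Load previously fetched counts so a restarted job skips them."""
    if os.path.exists(CACHE_PATH):
        with open(CACHE_PATH) as f:
            return json.load(f)
    return {}

def slow_build(user_ids, fetch_count, per_window=15, window_secs=900):
    """Work through user_ids a few requests per rate-limit window.

    `fetch_count` is a hypothetical function that returns the follower
    count for one user ID. `per_window`/`window_secs` mirror a
    15-requests-per-15-minute style limit; adjust to the real numbers.
    """
    cache = load_cache()
    pending = [u for u in user_ids if str(u) not in cache]
    while pending:
        for uid in pending[:per_window]:
            cache[str(uid)] = fetch_count(uid)
        pending = pending[per_window:]
        # Persist after every window so progress survives a crash.
        with open(CACHE_PATH, "w") as f:
            json.dump(cache, f)
        if pending:
            time.sleep(window_secs)  # wait out the rate-limit window
    return cache
```

The point of writing the cache to disk each window is that a million-user job running for weeks will almost certainly be interrupted at some point, and you don't want to re-spend rate limit on users you already have.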


@andypiper @DanielCHood Thank you both for your quick responses! So, if my math is correct, with a data set of 1 million users, assuming each of them had a Twitter account, it would take me about 46 days to find the number of Twitter followers for each one: 1,000,000 / (15 requests/minute × 60 minutes/hour × 24 hours/day) ≈ 46 days.
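For anyone checking the arithmetic, the estimate above works out as follows (using the 15-requests-per-minute figure assumed in the post, one user per request):

```python
# Back-of-the-envelope estimate: one follower-count request per user.
users = 1_000_000
requests_per_day = 15 * 60 * 24  # 15/min * 60 min/hour * 24 hours = 21,600/day
days = users / requests_per_day
print(round(days, 1))  # roughly 46 days
```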


Well, about 1% of that, because you can look up 100 users at a time with users/lookup, but yes, it's a slow build.
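The batching idea is simple: split the ID list into chunks of up to 100 (the per-request maximum for users/lookup) and space the requests out. A minimal sketch, where `lookup_users` stands in for whatever client call you use to hit the endpoint (it's a placeholder, not a real library function):

```python
import time

def chunks(ids, size=100):
    """Yield batches of at most `size` IDs (users/lookup takes up to 100)."""
    for i in range(0, len(ids), size):
        yield ids[i:i + size]

def follower_counts(user_ids, lookup_users, delay=4.0):
    """Collect follower counts in 100-ID batches.

    `lookup_users` is a hypothetical function taking a batch of IDs and
    returning user objects with 'id' and 'followers_count' fields, as the
    API's user objects do. `delay` spaces requests out to stay under the
    rate limit; tune it to the window your auth method actually allows.
    """
    counts = {}
    for batch in chunks(user_ids, 100):
        for user in lookup_users(batch):
            counts[user["id"]] = user["followers_count"]
        time.sleep(delay)
    return counts
```

With the counts in hand, the original question (which customer has the most followers) is just `max(counts, key=counts.get)`.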


I didn't even realize that endpoint gave me what I needed (this is my first time using the API). users/lookup has a much higher rate limit and should definitely be workable for my purposes. Thank you!