For some reason I got removed from the blacklist. Maybe the blacklisting isn't for life but rather lasts a couple of days.
Now, specifically answering your questions:
Q: Which API methods are you using?
A: GET followers/ids and GET users/lookup
Q: Are you handling other error conditions well, like when you get other 400 or 500 series codes?
A: I can't recall receiving any other 4xx condition. I occasionally get a 503 ("The Twitter servers are up, but overloaded with requests. Try again later."). In that case I immediately retry the request (should I add a sleep there too?)
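Instead of retrying immediately on a 503, a short exponential backoff is usually gentler on the server. Here's a minimal sketch of the delay schedule; the class name, base delay, and retry cap are just illustrative choices, not anything from Twitter4j or the Twitter API:

```java
// Sketch: exponential backoff delays for retrying after a 503.
// BASE_DELAY_MS and MAX_RETRIES are arbitrary illustrative values.
public class BackoffSketch {
    static final long BASE_DELAY_MS = 1000; // start at 1 second
    static final int MAX_RETRIES = 5;

    // Delay before retry attempt n (0-based): 1s, 2s, 4s, 8s, 16s
    static long backoffMillis(int attempt) {
        return BASE_DELAY_MS << attempt;
    }

    public static void main(String[] args) {
        for (int attempt = 0; attempt < MAX_RETRIES; attempt++) {
            System.out.println("retry " + attempt + " after "
                    + backoffMillis(attempt) + " ms");
            // in real code: Thread.sleep(backoffMillis(attempt));
        }
    }
}
```

In a real retry loop you'd call `Thread.sleep(backoffMillis(attempt))` before re-issuing the request, and give up after `MAX_RETRIES` attempts.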
Q: How aggressively are you making requests within a rate limit window?
A: I believe this is the key factor, if I'm not misunderstanding the question. I basically try to use up the requests as soon as possible (no sleep between requests) and then wait for the rest of the window. Twitter would probably expect me to sleep between requests so that, instead of hitting the server very often at the beginning of the window and not at all during the rest of it, I would hit it at a constant rate throughout the whole window. Is this correct?
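The even-pacing idea above can be sketched as a simple calculation: divide the window length by the request budget to get the pause between consecutive calls. The window length and budget below are made-up illustrative numbers, not actual Twitter limits:

```java
// Sketch: spread `budget` requests evenly over a window of
// `windowMs` milliseconds instead of bursting them at the start.
public class PacingSketch {
    // Milliseconds to sleep between consecutive requests.
    static long pauseMillis(long windowMs, int budget) {
        return windowMs / budget;
    }

    public static void main(String[] args) {
        // Illustrative: 150 requests per 15-minute window
        // -> one request every 6 seconds.
        long pause = pauseMillis(15 * 60 * 1000L, 150);
        System.out.println("sleep " + pause + " ms between requests");
        // in real code: Thread.sleep(pause) after each API call
    }
}
```

Calling `Thread.sleep(pauseMillis(windowMs, budget))` after each request keeps the request rate flat across the whole window.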
Q: Are you crawling data?
A: I'm actually testing a Java library (Twitter4j) and some modifications I made to its code and sent to the author of the library. So I was hitting the API heavily to check that the data I was fetching matched what I could actually see on the page, and also running some other internal code tests. I guess that from Twitter's perspective this could have looked like a crawl. I also started and stopped the processes many times while testing, which resulted in making the same call against the same user more than once. I think this could also be related to the blacklisting.
So far, I've added a sleep between calls and everything seems to be OK. I'm also not exhausting all the calls in a window, so that should help too.
Btw, thanks for your reply @episod