Streaming API hits 420 very often


#1

The Streaming API seems to receive 420 very often. As mentioned in the docs, the Streaming API is like downloading an infinitely large file that never ends, so according to this it should never hit 420. But even if it does get rate limited, I don't think it should happen as often as every single time I start the stream.

I am using Twython 3.* and I am just tracking a single hashtag.


#2

If you get a HTTP 420 when connecting to the Streaming API, it usually means that you either have too many open connections or that you’ve attempted to connect too quickly recently. The more you try to connect while you receive HTTP 420, the longer you’ll keep getting HTTP 420.

Review all of your connections and your connection strategy, and ensure that you're maintaining a single open connection.
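
For illustration, a minimal backoff sketch along those lines might look like the following. The starting and maximum wait values follow Twitter's published guidance of starting around a minute and doubling on each consecutive 420, while connect_stream and StreamError are placeholders for however your code actually opens its one connection and reports a failed attempt:

    import time


    class StreamError(Exception):
        """Hypothetical error type carrying the HTTP status of a failed connect."""
        def __init__(self, status_code):
            super(StreamError, self).__init__(status_code)
            self.status_code = status_code


    def run_stream(connect_stream, max_wait=960):
        """Keep a single stream open, backing off exponentially on HTTP 420."""
        wait = 60  # start around a minute, double on each consecutive 420
        while True:
            try:
                connect_stream()   # blocks for as long as the connection stays healthy
                wait = 60          # a clean connection resets the backoff
            except StreamError as exc:
                if exc.status_code == 420:
                    time.sleep(wait)
                    wait = min(wait * 2, max_wait)
                else:
                    raise          # other status codes need their own handling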


#3

I am not trying to reconnect very often.

This is the code that does the streaming:

import logging
import time

from django.conf import settings
from social_auth.models import UserSocialAuth  # import path depends on the social auth package in use
from twython import TwythonStreamer

logger = logging.getLogger(__name__)


class MyStreamer(TwythonStreamer):
    def on_success(self, data):
        if 'text' in data:
            feed = data
            print(feed)

    def on_error(self, status_code, data):
        print(status_code)
        logger.error("Rate limited, sleeping for 3000")
        time.sleep(3000)
        TwitterCheckin()
        # Want to stop trying to get data because of the error?
        # Uncomment the next line!
        # self.disconnect()


def TwitterCheckin():
    stream_user = UserSocialAuth.objects.get(provider='twitter', uid='<TWITTER_ID>')
    stream_user.refresh_token()
    stream = MyStreamer(settings.TWITTER_CONSUMER_KEY, settings.TWITTER_CONSUMER_SECRET,
                        stream_user.tokens['oauth_token'], stream_user.tokens['oauth_token_secret'])
    logger.error("Stream started")
    stream.statuses.filter(track='#testinghere')

I make a call to TwitterCheckin() only once in the entire lifetime of the code this is part of.
on_error receives a 420 status_code as soon as I make the call to TwitterCheckin().


#4

Have you verified that no other copies of the same script might be running, and that your program has completely closed its connection before opening a new one? Are you on any kind of shared IP address?

This might not be an issue with the code so much as with process or network management. That said, I would definitely recommend taking a more explicit approach than sleeping for 3000 seconds on any error you receive, as sketched below.
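
As a rough sketch of what "more explicit" could mean with Twython, something along these lines would at least treat rate limiting differently from other failures. The backoff numbers are illustrative, and whether the stream retries after on_error returns depends on how you run it:

    import time

    from twython import TwythonStreamer


    class BackoffStreamer(TwythonStreamer):
        """Sketch: treat HTTP 420 as a backoff signal instead of sleeping on every error."""

        backoff = 60  # seconds; doubled on each consecutive 420

        def on_success(self, data):
            if 'text' in data:
                print(data['text'])
            self.backoff = 60  # receiving data again resets the backoff

        def on_error(self, status_code, data):
            if status_code == 420:
                # Rate limited: wait longer on each consecutive 420 rather than a flat
                # 3000 seconds, and do not open a second stream from inside the handler.
                time.sleep(self.backoff)
                self.backoff = min(self.backoff * 2, 960)
            else:
                # 401, 503, etc. usually should not be retried blindly; stop and investigate.
                self.disconnect()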