Statuses/filter looks rate-limited/capped, but there's no sign of it


#1

We’re using the https://stream.twitter.com/1.1/statuses/filter.json API.
Since 1:30 AM last night it has been sending us much less data than before.
See: http://cl.ly/image/2z2b1B0H211Z

We’re not getting any of the rate limit messages (https://dev.twitter.com/docs/streaming-apis/messages#Limit_notices_limit) nor any disconnect messages. The number of values in our follow/track parameters hasn’t changed drastically either.
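
For reference, this is roughly how we read the stream and watch for those messages (a minimal Python sketch; credentials and keywords are placeholders):

import json
import requests

# Placeholder credentials (Basic auth here; OAuth works the same way).
auth = ("USERNAME", "PASSWORD")

resp = requests.post(
    "https://stream.twitter.com/1.1/statuses/filter.json",
    data={"track": "keyword1,keyword2"},
    auth=auth,
    stream=True,
)

for line in resp.iter_lines():
    if not line:
        continue  # keep-alive newline
    msg = json.loads(line)
    if "limit" in msg:
        # Limit notice: count of matching tweets withheld since connect.
        print("limit notice:", msg["limit"]["track"])
    elif "disconnect" in msg:
        print("disconnect message:", msg["disconnect"])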


#2

This is for Twitter user “engagorbot”.


#3

Can you let me know what messages-per-second rate you’re seeing after the drop occurred? Do you expect to have elevated access?


#4
  • We have elevated access (#track = 10k);
  • Before the drop we had a daily average of 150/s (with peaks up to 280/s). It dropped suddenly to 100/s (on 2013-07-23 at 01:30 GMT, see the screenshot above), right after a short spike caused by #royalbaby.

We certainly don’t receive any rate limit or disconnect warnings from the API.
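
For reference, a messages-per-second figure like the ones above can be computed with a counter along these lines (a minimal sketch, not our exact tooling; "lines" stands for the raw stream line iterator):

import time

def rate_counter(lines, interval=60.0):
    # Yield the average messages/second over each interval.
    count, start = 0, time.time()
    for line in lines:
        if line:
            count += 1
        now = time.time()
        if now - start >= interval:
            yield count / (now - start)
            count, start = 0, now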

Thanks.


#5
[Repost of #4 as a reply.]


#6

We’ve investigated this further and also found frequent remote disconnects (roughly every 7 minutes).

After enabling stall_warnings on the streaming API, we do indeed receive warnings about this. We process tweets asynchronously with RabbitMQ, and after double-checking, we spend almost no time pushing the tweets to RMQ.
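
For completeness, our consumer loop looks roughly like this (a simplified sketch; queue name, keywords, and credentials are placeholders). The reader only prints stall warnings and forwards raw lines to RabbitMQ, so it shouldn’t block on processing:

import pika
import requests

auth = ("USERNAME", "PASSWORD")  # placeholder credentials

channel = pika.BlockingConnection(
    pika.ConnectionParameters(host="localhost")).channel()
channel.queue_declare(queue="tweets", durable=True)

resp = requests.post(
    "https://stream.twitter.com/1.1/statuses/filter.json",
    data={"track": "keyword1,keyword2", "stall_warnings": "true"},
    auth=auth,
    stream=True,
)

for line in resp.iter_lines():
    if not line:
        continue  # keep-alive newline
    if b'"warning"' in line:
        # Stall warning: we're reading too slowly and risk a disconnect.
        print("stall warning:", line)
        continue
    # Hand the raw JSON to RabbitMQ immediately; workers parse it later.
    channel.basic_publish(exchange="", routing_key="tweets", body=line)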

Network issues are also unlikely.

Is there anything else you can see on your end?


#7

We also just noticed that we are not yet using version 1.1 of the streaming API with OAuth. We will migrate to the new API as soon as possible.
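
Concretely, the migration amounts to signing the request with OAuth 1.0a against the 1.1 endpoint, roughly like this (a sketch; keys and tokens are placeholders for our app credentials):

import requests
from requests_oauthlib import OAuth1

auth = OAuth1("CONSUMER_KEY", "CONSUMER_SECRET",
              "ACCESS_TOKEN", "ACCESS_SECRET")

resp = requests.post(
    "https://stream.twitter.com/1.1/statuses/filter.json",
    data={"track": "keyword1,keyword2", "stall_warnings": "true"},
    auth=auth,  # OAuth-signed request against the 1.1 endpoint
    stream=True,
)
resp.raise_for_status()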

Could this be related to the sudden drop in messages?


#8

Are you able to exceed 100/s if you change to broader terms? Or does it seem like 100/s is the absolute maximum you get?


#9

Hi kurrik,

We have been receiving “falling_behind” warnings again now and then (and have sometimes been disconnected because of them), especially during the last 2 days. We believe we have narrowed it down to a bandwidth issue between our servers (Brussels, Belgium) and yours, since:

  • if we disable gzip (i.e. use identity) the streaming rate drops to about 1/5 of our current one
  • if we don’t do anything with the messages (just discard them, with no processing) we see no improvement, so processing on our end does not look like the bottleneck
  • our current rate is about 220/s, which, depending on the time of day, is just enough (or just not enough) to keep up

We’re also seeing this:

  • if we add a very broad keyword to the list (e.g. “a”), the rate actually goes down to about 150-160/s. We don’t understand why.
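
The gzip/identity difference can be reproduced with a small comparison along these lines (a sketch; requests decompresses gzip transparently in iter_lines(), and credentials/keywords are placeholders):

import time
import requests
from requests_oauthlib import OAuth1

auth = OAuth1("CONSUMER_KEY", "CONSUMER_SECRET",
              "ACCESS_TOKEN", "ACCESS_SECRET")

def measure(encoding, seconds=60):
    # Read the stream for `seconds` and return messages/second.
    resp = requests.post(
        "https://stream.twitter.com/1.1/statuses/filter.json",
        data={"track": "keyword1,keyword2"},
        headers={"Accept-Encoding": encoding},
        auth=auth,
        stream=True,
    )
    count, start = 0, time.time()
    for line in resp.iter_lines():
        if line:
            count += 1
        if time.time() - start >= seconds:
            break
    resp.close()
    return count / (time.time() - start)

print("gzip:     %.1f msgs/s" % measure("gzip"))
print("identity: %.1f msgs/s" % measure("identity"))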

The traffic is routed over AMS-IX at the moment. (Our host is also contacting your network team; we have received a reply mentioning about 1% packet loss.)

Here’s a ping from the server where we’re consuming the firehose:
oemebamo@server114:~$ ping stream.twitter.com
PING stream.twitter.com (199.16.156.110) 56(84) bytes of data.
64 bytes from 199.16.156.110: icmp_req=1 ttl=58 time=96.0 ms
64 bytes from 199.16.156.110: icmp_req=2 ttl=58 time=95.8 ms

Here’s a traceroute:
oemebamo@server114:~$ traceroute stream.twitter.com
traceroute to stream.twitter.com (199.16.156.20), 30 hops max, 60 byte packets
1 109.68.166.124 (109.68.166.124) 0.320 ms 0.364 ms 0.436 ms
2 ge-2-0-0.br1.ix.as39923.net (109.68.160.57) 0.428 ms 0.509 ms 0.618 ms
3 195.69.146.46 (195.69.146.46) 4.407 ms 4.402 ms 4.392 ms
4 199.16.159.119 (199.16.159.119) 9.815 ms 9.796 ms 9.755 ms
5 xe-0-2-1.iad1-cr1.twttr.com (199.16.159.123) 80.872 ms 80.975 ms xe-1-2-1.iad-cr2.twttr.com (199.16.159.125) 122.681 ms
6 ae50.atl1-er2.twttr.com (199.16.159.71) 98.482 ms 98.568 ms 98.372 ms
7 199.16.156.20 (199.16.156.20) 101.891 ms 101.443 ms 96.298 ms