Single connection started getting 420

streaming

#1

Although similar cases have been reported on the forum, I’m not sure whether those issues are the same as mine, so let me describe ours.

We have a daemonized command-line tool that continuously accesses the streaming sample data API using twurl, i.e., it runs the following command in an infinite loop with a 1-second sleep between iterations:

timeout -s 9 60 nice -n 19 twurl -t -H stream.twitter.com /1.1/statuses/sample.json > samples_${ts}.json
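
For context, the surrounding loop is essentially the following; the per-iteration ts timestamp assignment is my reconstruction from the output file naming:

#!/bin/sh
# Reconnect loop: stream for up to 60 seconds per iteration, then retry after 1 second.
while true; do
    ts=$(date +%s)   # assumed: per-iteration timestamp used in the output file name
    timeout -s 9 60 nice -n 19 twurl -t -H stream.twitter.com /1.1/statuses/sample.json > "samples_${ts}.json"
    sleep 1
done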

Its data is consumed as the sample data set in our product demo. The script has been running for over a year without any critical problem, but as of yesterday (Oct 20, 2016) the command above started getting 420 Rate Limit errors. I checked https://dev.twitter.com/rest/public/rate-limits, but no rate-limit window is described for the sample endpoint, so I simply waited 10 minutes, then 1 hour, then 1 day, and still got 420s. There should be no other connection, since the script runs on a single host under a single user.
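
One thing I will try: the streaming docs recommend exponential backoff for HTTP 420 rather than a fixed retry, starting at about a minute and doubling on each consecutive 420, so my 1-second retry may itself be part of the problem. Below is a rough sketch of a backoff-aware loop; note that twurl prints only the response body, so detecting the 420 by grepping for its error text is an assumption on my part:

#!/bin/sh
# Same loop, but backing off exponentially on 420 as the streaming docs suggest:
# start at 60 seconds, double on each consecutive 420, reset after a good run.
backoff=60
while true; do
    ts=$(date +%s)
    timeout -s 9 60 nice -n 19 twurl -t -H stream.twitter.com /1.1/statuses/sample.json > "samples_${ts}.json"
    if grep -q "Exceeded connection limit" "samples_${ts}.json"; then
        # Assumed 420 body text; twurl does not expose the HTTP status code.
        sleep "${backoff}"
        backoff=$((backoff * 2))
    else
        backoff=60
        sleep 1
    fi
done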

Can I get some help resolving this? Is this pattern of API access acceptable? If not, what would be a better approach? Thank you.


#2

Hmmm, I’m having exactly the same problem.

Coincidentally, it started happening around the same time yours started failing.

We are working on a research project for the Amsterdam municipality and the Dutch police force, sampling geo-tagged public tweets from the basic sample stream.

It had been working fine for a few weeks, with a pretty consistent rate of around 50 tweets per second. All of a sudden, any attempt to connect results in a 420.
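
For reference, the client-side geo filter amounts to something like this jq one-liner (coordinates and place are the standard v1.1 tweet attributes; the file names are just illustrative):

# Keep only tweets carrying a point coordinate or an attached place.
jq -c 'select(.coordinates != null or .place != null)' samples.json > geotagged.json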

I tried creating a new application key. That connected, but with the new key we receive very few items, roughly one per second, with occasional spikes back to the usual rate we were seeing.

I tried contacting Twitter via the policy request form, but received an auto-response asking me to contact data sales. We don’t need the firehose right now, as we are still in the research phase; we just need the sample stream to work the way it was working.

I’ll try creating a new account later to see if that fixes the issue, but it is worrying that things would just change like this. According to our logs, we haven’t seen any rate limit notifications.
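
For the record, this is roughly how we checked the logs; we’re assuming limit notices would arrive as standalone objects with a top-level "limit" key, as they do on the filter stream, and the file pattern is illustrative:

# Per-file count of limit notices in the captured stream output (all zero for us).
grep -c '"limit":' samples_*.json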

I’m wondering if something has changed on the server side, since I noticed some downtime logged over the last couple of days on the Twitter API uptime dashboard.


#3

Sounds like the issue is worldwide. My host is an EC2 instance in AWS Oregon. I also created a new account and ran the same command; it worked fine for over a day but then started getting 420 again. Then I switched back to the original account, which had been idle for over a day, but that didn’t help either: still 420. :frowning:


#4

I’m having exactly the same problem in Spain. It started yesterday at about 12 am (UTC+2).


#5

+1.

I’ve had the same collection code running for years. Yesterday around 12:50 UTC it started getting tons of connection errors, including 420s. I’m on the US East Coast.


#6

Okay, we’ve tried another account and it’s still bad: we’re not getting 420s with it, but the feed rate is close to zero.

I think something must be very broken right now.

According to the dashboard there were many problems a couple of days ago.

https://dev.twitter.com/overview/status

I assume that was due to the DDoS, but the dashboard says everything is working now. Apparently not.

Really hope this can get fixed soon.


#7

Maybe Twitter has put everything on lockdown after the DDoS?

That would be understandable, but I’d love any news on when normal service will return.

cheers


#8

Exactly the same problem in HK since Oct 21!


#9

The other bizarre thing is that on the occasional attempt where we do manage to receive messages now, the coordinates field in the JSON is set to null, which is a bit of a surprise for a geo-tagged tweet.

It certainly wasn’t like that before the 21st.

Strangely, the messages do still contain place bounding box polygons.

Something is very wrong.
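
If anyone needs a stopgap in the meantime, a point can be derived from the place polygon, something like the jq sketch below; averaging the ring’s corners is just a crude centroid, bounding_box is the standard GeoJSON-style polygon on the v1.1 place object, and the file name is illustrative:

# Use the tweet's own point if present; otherwise average the corners of the
# place bounding box (one ring of [longitude, latitude] pairs).
# Assumes place.bounding_box is present whenever place is.
jq -c 'select(.place != null)
       | .coordinates.coordinates
         // (.place.bounding_box.coordinates[0]
             | [ (map(.[0]) | add / length), (map(.[1]) | add / length) ])' samples.json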


#10

Opening a single thread to bring these discussions together.

