Starting around 5:30pm Eastern Time yesterday (Nov 7, 2016), we began seeing an uncharacteristically high number of responses from the search/tweets endpoint that contained no data. The vast majority of calls to that endpoint continue to return data as expected, but the errors persisted, so I investigated.
It turns out that in every case in which we’ve seen a failure, the HTTP headers returned by the call contain two “content-length” headers. The first contains a very plausible numeric value for the size of the content, but the second is either:
This causes libcurl, which is what we use to fetch the data, to conclude that the body size is 0, so we get no data.
Is anybody else seeing this?
Here’s the full set of headers from a response we received earlier today (note the two consecutive content-length headers):
HTTP/1.1 200 OK
cache-control: no-cache, no-store, must-revalidate, pre-check=0, post-check=0
content-disposition: attachment; filename=json.json
date: Tue, 08 Nov 2016 16:16:34 GMT
expires: Tue, 31 Mar 1981 05:00:00 GMT
last-modified: Tue, 08 Nov 2016 16:16:34 GMT
set-cookie: lang=en; Path=/
set-cookie: guest_id=v1%3A147862179430166399; Domain=.twitter.com; Path=/; Expires=Thu, 08-Nov-2018 16:16:34 UTC
status: 200 OK
x-xss-protection: 1; mode=block
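For anyone who wants to check whether their own responses are affected, a minimal sketch of one way to detect the duplicate header by scanning the raw header block. The sample header values below (including the second content-length of 0) are assumptions for illustration, not taken from an actual response:

```python
def count_header(raw_headers: str, name: str) -> int:
    """Count occurrences of a header field (case-insensitive) in a raw HTTP header block."""
    target = name.lower()
    count = 0
    for line in raw_headers.splitlines():
        field, sep, _value = line.partition(":")
        if sep and field.strip().lower() == target:
            count += 1
    return count

# Hypothetical response headers illustrating the failure mode described above.
sample = """HTTP/1.1 200 OK
content-length: 4821
content-length: 0
content-disposition: attachment; filename=json.json"""

if count_header(sample, "content-length") > 1:
    print("duplicate content-length headers detected")
```

If you are fetching with libcurl, a similar check could be wired into the header callback (CURLOPT_HEADERFUNCTION), which receives each header line as it arrives.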