I’m working with a large company on displaying Twitter content on their web site. We had been retrieving data for a number of searches using the 1.0 API for years, and we switched to 1.1 just before 1.0 was retired. Everything worked fine in dev, but when we scaled up to production we started hitting our rate limit. We’re caching content locally on a schedule (as we always have) and using application-only authentication (it’s display-only, so users won’t log in to see it). It’s working, except that we’re burning through our rate limit very quickly despite taking all of the proper precautions.
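For context, this is roughly the application-only auth flow we use (a sketch, not our actual code — `$consumerKey`/`$consumerSecret` are placeholders): the key and secret are URL-encoded, joined with a colon, base64-encoded, and sent as Basic credentials to `POST oauth2/token` to obtain the bearer token.

```php
<?php
// Sketch of Twitter's 1.1 application-only auth token request.
// $consumerKey / $consumerSecret are placeholders for real credentials.

// RFC 1738-encode key and secret, join with ':', base64-encode the pair.
function buildBearerCredentials(string $consumerKey, string $consumerSecret): string
{
    return base64_encode(rawurlencode($consumerKey) . ':' . rawurlencode($consumerSecret));
}

// cURL options for POST https://api.twitter.com/oauth2/token.
function buildTokenRequestOptions(string $consumerKey, string $consumerSecret): array
{
    return [
        CURLOPT_URL            => 'https://api.twitter.com/oauth2/token',
        CURLOPT_POST           => true,
        CURLOPT_POSTFIELDS     => 'grant_type=client_credentials',
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_HTTPHEADER     => [
            'Authorization: Basic ' . buildBearerCredentials($consumerKey, $consumerSecret),
            'Content-Type: application/x-www-form-urlencoded;charset=UTF-8',
        ],
    ];
}

// Usage (live network call, so shown commented out):
// $ch = curl_init();
// curl_setopt_array($ch, buildTokenRequestOptions('key', 'secret'));
// $token = json_decode(curl_exec($ch), true)['access_token'];
```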
The back-end setup is fairly complicated, and I think we may have finally found where it fails, but so far we can’t figure out why. We have a PHP-based proxy that authenticates using our app-only bearer token and retrieves the data. That file needs to route its requests through an HTTP proxy to get outside of our internal network. If I run the same PHP on an outside server (no proxy involved), everything looks good.
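For anyone wanting to reproduce the shape of the request, here is a sketch of what the proxied call looks like (the endpoint, `$bearerToken`, and `$proxyUrl` values are illustrative placeholders, not our real config):

```php
<?php
// Sketch of the 1.1 search request as it has to leave the internal network:
// app-only bearer token plus a forward HTTP proxy.
function buildSearchRequestOptions(string $query, string $bearerToken, string $proxyUrl): array
{
    return [
        CURLOPT_URL            => 'https://api.twitter.com/1.1/search/tweets.json?q=' . rawurlencode($query),
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_HEADER         => true,      // keep response headers so set-cookie can be inspected
        CURLOPT_PROXY          => $proxyUrl, // e.g. 'http://internal-proxy:8080' (placeholder)
        CURLOPT_HTTPHEADER     => ['Authorization: Bearer ' . $bearerToken],
    ];
}

// Usage (live network call, so shown commented out):
// $ch = curl_init();
// curl_setopt_array($ch, buildSearchRequestOptions('example', $bearerToken, $proxyUrl));
// $response = curl_exec($ch);
```

Running the identical options with and without `CURLOPT_PROXY` is how we compared the inside/outside behavior.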
If we look at the headers coming back when the request runs through the proxy, there are exactly 100 copies of the “set-cookie” header, each with a different “guest_id” value. So my current thinking is that something is going wrong in the proxy, causing each request to count as 100 requests — meaning instead of about 250 requests per rate-limit window, we’re using 20,000…
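This is how we counted the duplicates — a small helper that tallies a header name in the raw response-header block, so the same request can be compared inside and outside the proxy (a sketch; assumes `CURLOPT_HEADER` was set on the request):

```php
<?php
// Count how many times a header appears in a raw response-header block.
// Header-name matching is case-insensitive per HTTP.
function countHeader(string $rawHeaders, string $name): int
{
    $count = 0;
    foreach (preg_split('/\r?\n/', $rawHeaders) as $line) {
        if (stripos($line, $name . ':') === 0) {
            $count++;
        }
    }
    return $count;
}

// Usage with curl_exec() output when CURLOPT_HEADER is set:
// $headerSize = curl_getinfo($ch, CURLINFO_HEADER_SIZE);
// $rawHeaders = substr($response, 0, $headerSize);
// echo countHeader($rawHeaders, 'set-cookie'); // 100 through the proxy, 1 outside
```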
Does anyone have any thoughts on how that could happen? If we’re getting 100 copies of set-cookie back in one request, does that mean the outgoing request had 100 copies in it? The PHP is not setting 100 copies of the header… I’m primarily a front-end designer/developer trying to help our IT team diagnose this.
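One way we’ve been confirming the consumption, in case it helps anyone else debug something similar: the 1.1 API reports usage in `x-rate-limit-limit` / `x-rate-limit-remaining` / `x-rate-limit-reset` response headers, so diffing `remaining` across two consecutive requests shows how many requests each call actually costs. A sketch of the parsing:

```php
<?php
// Pull the 1.1 rate-limit headers out of a raw response-header block.
// Diffing 'remaining' between two consecutive calls shows the true
// per-request cost (1 if all is well, more if something is amplifying).
function parseRateLimitHeaders(string $rawHeaders): array
{
    $limits = [];
    foreach (preg_split('/\r?\n/', $rawHeaders) as $line) {
        if (preg_match('/^x-rate-limit-(limit|remaining|reset):\s*(\d+)/i', $line, $m)) {
            $limits[strtolower($m[1])] = (int) $m[2];
        }
    }
    return $limits;
}
```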