How does that robots.txt disallow all user-agents? The Disallow line is blank. Also, as you can see above, it works fine elsewhere; no other site seems to have an issue except Twitter. Would a blank robots.txt be better, or should I add a Twitter-specific user-agent?
https://dev.twitter.com/cards/getting-started#crawling - if you look at that link, it shows
User-agent: Twitterbot
Disallow:
as being allowed, which is exactly what I have.
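In case it helps, this is what I'm considering trying as an explicit Twitter-specific rule - a sketch only, since the Twitterbot group is an addition and the wildcard group is what I already have:

# Explicitly allow Twitter's card crawler (empty Disallow = allow everything)
User-agent: Twitterbot
Disallow:

# Existing rule: all other crawlers, nothing disallowed
User-agent: *
Disallow:

My understanding is that a blank Disallow under either group means "nothing is blocked", so both versions should permit crawling; the Twitterbot group just makes the intent explicit if Twitter's crawler is being picky.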