Is there any documentation on what Twitterbot considers a valid robots.txt? Since robots.txt is a non-standard file, it's up to each bot to decide which features to support (e.g. Googlebot chose to implement a wildcard character when matching URL patterns in robots.txt).
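For context, here is the kind of rule I'm trying to verify — a minimal sketch, assuming Twitterbot matches on the `User-agent` name and honors at least plain-prefix `Allow`/`Disallow` rules (whether it also supports `*` wildcards in paths, Googlebot-style, is exactly what I can't find documented):

```
# Block all crawlers by default
User-agent: *
Disallow: /

# Let Twitterbot fetch pages and card images
# (the wildcard in the second Allow is the part I'm unsure Twitterbot supports)
User-agent: Twitterbot
Allow: /articles/
Allow: /images/*.png
```

If Twitterbot ignores wildcards, the second `Allow` line might be treated as a literal prefix (or dropped entirely), which would explain the card validator failing to fetch images.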
Trial and error is far too slow, since the Twitter Card validator caches the robots.txt file for 24 hours. Any help, especially documentation, would be appreciated. Thanks!