Failed to Crawl URL


I added Twitter Card meta tags to my forum site. The site is developed in PHP using vBulletin. However, whenever I run the URL through the Card Validator, it shows "Failed to Crawl URL", debugId: 242571936811…

Please help out.



Can somebody help please?

I checked and there is no robots.txt in the server.
One of the URLs is



Icel, could you please add a robots.txt file with the contents below and try again?

User-agent: Twitterbot
Allow: *


I added a robots.txt file as instructed, but I'm still getting the same error.

Thanks for helping out.


Try editing your source in Notepad and then saving it back as PHP or HTML, as it will automatically be converted to PHP.


Icel, I think I found the origin of your issue. I tried to retrieve the page you mentioned via cURL using the command below:

$ curl -v -A Twitterbot

However, this returns an HTTP 500 error:

... < HTTP/1.0 500 Internal Server Error ...

Interestingly, even in a browser the page returns the HTML as expected, but with a 500 response code.
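To illustrate what's happening here, below is a small self-contained Python sketch (a toy local server, not your vBulletin setup) of a page that sends a complete HTML body together with a 500 status line. A browser will happily render the body, but any client that checks the status code, like a crawler, will treat the page as failed:

```python
import http.server
import threading
import urllib.request
import urllib.error

HTML = b"<html><head><title>ok-looking page</title></head><body>content</body></html>"

class BrokenHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        # Sends a full HTML body, but with a 500 status line:
        # browsers still render it, crawlers reject it.
        self.send_response(500)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(HTML)))
        self.end_headers()
        self.wfile.write(HTML)

    def log_message(self, *args):
        pass  # keep the demo quiet

server = http.server.HTTPServer(("127.0.0.1", 0), BrokenHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = "http://127.0.0.1:%d/" % server.server_address[1]
try:
    urllib.request.urlopen(url)
    status, body = 200, b""
except urllib.error.HTTPError as e:
    # urlopen raises for 5xx responses, but the body is still readable
    status, body = e.code, e.read()
server.shutdown()

print(status)             # -> 500
print(b"<html>" in body)  # -> True: normal HTML despite the error status
```

This is why checking with `curl -v` (or any tool that shows the status line) matters: the rendered page alone can look perfectly fine.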


Hi Romain,

Our server has been fixed. However, I am still seeing the 'Failed to crawl URL' error.

What should I do? Is it because I'm using vBulletin?


Hi Icel,

I’m still seeing a 500 error on the URL above. Could you please check with cURL before retrying on the validator?



Same issue here.


Hi Icel, I wanted to follow up on this thread. I see two possible issues here.

(1) The robots.txt file is not in the right place or it cannot be accessed at

(2) You have no value for the twitter:creator tag. It's unfortunate, but our crawler throws an error when tags have empty values instead of ignoring them and rendering without them. I would suggest leaving out twitter:creator if you are not using it, or not including it on pages where it would be empty. I'm bringing this to the engineering team, as the error handling should be better.
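A quick way to spot empty twitter:* tags before retrying the validator is to scan the page source for meta tags with a blank content attribute. Here is a minimal Python sketch; the sample markup below is hypothetical, not taken from Icel's site:

```python
from html.parser import HTMLParser

class EmptyCardTagFinder(HTMLParser):
    """Collects twitter:* meta tags whose content attribute is missing or empty."""

    def __init__(self):
        super().__init__()
        self.empty = []

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        d = dict(attrs)
        name = d.get("name") or ""
        # An absent or empty content attribute is what trips the crawler
        if name.startswith("twitter:") and not d.get("content"):
            self.empty.append(name)

# Hypothetical page head with one empty Card tag
page = """
<head>
  <meta name="twitter:card" content="summary">
  <meta name="twitter:title" content="Forum thread">
  <meta name="twitter:creator" content="">
</head>
"""

finder = EmptyCardTagFinder()
finder.feed(page)
print(finder.empty)  # -> ['twitter:creator']
```

Any tag names this prints should either be given a real value or removed from the page entirely.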



It just suddenly worked! I just hope that this is stable.

Thank you for helping out.