Can't understand why I'm getting Twitter Card validation errors



WARN: No metatags found

Fetching the page with `curl -A Twitterbot` returns the HTML with the meta tags intact.
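For reference, the check I'm running looks roughly like this (the URL is a placeholder for the page being validated):

```shell
# Fetch the page with the Twitterbot user agent and confirm the card
# meta tags survive in the response. Replace the URL with the page
# being validated.
curl -s -A Twitterbot https://example.com/page | grep -i 'twitter:'
```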

<meta name="twitter:card" content="summary_large_image"/>
<meta name="twitter:image" content=""/>
<meta name="twitter:image:alt" content="A brownish-yellow salamander standing on a mossy rock with large round eyes."/>
<meta name="twitter:site" content="@usfwssoutheast">
<meta name="twitter:title" content="At-Risk Species Conservation" />
<meta name="twitter:description" content="The Endangered Species Act provides a variety of ways for the U.S. Fish and Wildlife Service and our partners to conserve and recover species while reducing regulatory burden." />

<meta property="fb:admins" content="1417314007,1222320132"/>
<meta property="og:site_name" content="" />
<meta property="og:type" content="website" />
<meta property="og:url" content="" />
<meta property="og:title" content="At-Risk Species Conservation" />
<meta property="og:description" content="The Endangered Species Act provides a variety of ways for the U.S. Fish and Wildlife Service and our partners to conserve and recover species while reducing regulatory burden." />
<meta property="og:image" content="" />


This is just speculation, really… I don’t think twitter:url is a valid tag, but it doesn’t match the page URL anyway. Also, your twitter:title and twitter:description use property instead of name, which may not be handled, and those are required fields.


I changed property to name for those two meta tags. Not sure how that happened. I had the correct URL on the actual page, just pasted the wrong version into the editor.

I also removed twitter:url; you’re right, that’s not a Twitter card tag; the url tag belongs to Open Graph.

Just tried to revalidate and had the same result.


Could be something screwy with your page. It doesn’t render nicely for me and has some HTML validation errors in the head.



I didn’t have any CSS linked, which explains the rendering. I can’t imagine why ARIA roles would have anything to do with this, and as far as I can tell “search” is a valid HTML5 input type, too.


I don’t know exactly what is happening here. The page looks OK to me. Is there any chance that your system is blocking Twitter’s outbound IP addresses, or doing any different routing based on those IPs or user agent?
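One server-side way to check whether the crawler's requests are even arriving is to look for the Twitterbot user agent in the access log. The log path below is an assumption (nginx on a typical Linux install); adjust it for Apache or whatever your stack uses.

```shell
# Check whether the cards crawler's requests ever reach the webserver.
# /var/log/nginx/access.log is an assumed path -- substitute your own
# (e.g. /var/log/apache2/access.log for Apache on Debian/Ubuntu).
grep -i 'Twitterbot' /var/log/nginx/access.log | tail -n 5
```

If nothing shows up there while the validator is running, the requests are being dropped before they reach the webserver, which would point at the firewall.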


I’ve had some issues with our firewall’s root SSL certificate in the past when using Atom Package Manager and npm.

Is there an easy way to test outbound IP addresses? I’m no networking wiz.


I’m not aware of one, sorry. I don’t believe this is SSL-related, though: the crawler is seeing a 404 page, so it looks more likely to be related to routing on the webserver.
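Since the crawler is getting a 404, it may be worth comparing the HTTP status code your server returns for a browser-like user agent versus the Twitterbot one. This is just a sketch; the URL is a placeholder for the page being validated.

```shell
# Compare the HTTP status returned for a browser-like UA versus the
# Twitterbot UA. A mismatch would suggest user-agent-based routing;
# matching 200s would suggest the difference is IP-based instead.
# Replace the URL with the page being validated.
for ua in 'Mozilla/5.0' 'Twitterbot/1.0'; do
  printf '%s: ' "$ua"
  curl -s -o /dev/null -w '%{http_code}\n' -A "$ua" https://example.com/
done
```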


The crawler is seeing a 404?

Why am I seeing my HTML come through when I do `curl -A Twitterbot`? I thought that was a command-line method for testing the crawler.


Robots.txt is allowing all user agents.

User-agent: *
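For reference, a fully permissive robots.txt (which is what “allowing all user agents” usually looks like) is just:

```
User-agent: *
Disallow:
```

An empty Disallow line permits crawling of every path, so robots.txt shouldn’t be what’s blocking the crawler here.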


Yes, that curl command should indeed be a good way to test this, which is why I wondered whether it might be IP-related (the cards crawler runs from a different IP than the laptop I’m testing from). I can’t see anything unusual in the curl trace from here either.

Puzzling :confused:


Ahh, now I understand what you mean about IP issues – where the crawler is actually being run from.

That’s a bummer! We’re finishing up a responsive redesign from scratch and really want to get the added social engagement from cards. I haven’t had any issues with Open Graph. :frowning:


I understand the frustration! I’m pretty much at the end of debugging options I can offer from this side, unfortunately. The site looks really nice, BTW! There’s more on our IP ranges in the troubleshooting documentation.


Thanks, Andy. Feel free to close.


Thank you for your patience. I’ll leave this open in case you figure it out and are able to share the resolution, but otherwise this will autoclose in a couple of weeks.

closed #16

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.