Thanks for the reply. Where can we run the curl command to check for ourselves? That would be a great addition to the sticky FAQ. I have a robots.txt in my root, but while looking through my whole server I just found another one in an etc folder that must be overriding the root file, so I'll try to change that.
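On the curl question: from any local shell you can fetch the robots.txt exactly as a crawler would. A minimal sketch (the domain is the one from this thread; swap in your own, and the Googlebot user-agent string is just illustrative):

```shell
# Fetch the live robots.txt, sending a crawler-like User-Agent.
# -s silences the progress bar; -A sets the User-Agent header.
curl -s -A "Googlebot" http://www.solacecrafting.com/robots.txt
```

Add `-I` instead if you only want the response headers (status code, content type), which is handy for spotting redirects or 403s.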
Update: Googlebot is not having any trouble crawling my site.
My robots.txt is:
User-agent: *
Disallow:
My .htaccess only had
RewriteCond %{HTTP_USER_AGENT} libwww-perl.*
RewriteRule .* ? [F,L]
in the way of blocking rules, but removing it didn't help.
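For what it's worth, that rule only blocks clients whose User-Agent contains "libwww-perl", so it shouldn't affect a normal crawler. The conventional way to write it in mod_rewrite uses `-` as the substitution (meaning "don't rewrite, just apply the flags"):

```
# Assumed .htaccess sketch: return 403 Forbidden to libwww-perl clients
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} libwww-perl [NC]
RewriteRule .* - [F,L]
```

The `?` substitution in the original also works (the substitution is ignored once `F` fires), but `-` makes the intent clearer.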
INFO: Page fetched successfully
INFO: 5 metatags were found
WARN: Not whitelisted
http://www.solacecrafting.com/index.html
INFO: Page fetched successfully
WARN: No metatags found
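If you want to see for yourself which meta tags the checker is finding (or not finding) on a given URL, a quick hedged sketch with curl and grep (URL from this thread; a real HTML parser would be more robust than grep):

```shell
# List the <meta ...> tags in the fetched page, case-insensitively.
# -o prints only the matching parts, one per line.
curl -s http://www.solacecrafting.com/index.html | grep -io '<meta[^>]*>'
```

If this prints nothing for the index.html URL but does print tags for the bare domain, the two paths are serving different documents, which would explain the mismatched WARN lines above.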