Fixed: ERROR Fetching the page failed because the fetcher cannot resolve the address


#1

OK, first of all I want you to know that I have read plenty of articles without finding a solution (I should burn my developer's and digital marketer certificates). As the title says, I moved from Joomla to a WordPress site, and since then my Twitter Cards have not been working.

URL: www.grow-digital.gr
Theme: Newspaper 7
WordPress v4.6
Yoast SEO plugin
Meta tags: checked (see the example tags below)
No SSL certificate
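
For reference, the Twitter Card meta tags that Yoast emits in the page head look roughly like this (the values are illustrative, not taken from the site):

<meta name="twitter:card" content="summary_large_image">
<meta name="twitter:title" content="Page title">
<meta name="twitter:description" content="Page description">
<meta name="twitter:image" content="http://www.grow-digital.gr/wp-content/uploads/example.jpg">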

The robots.txt seems fine (http://www.grow-digital.gr/robots.txt).

I'm worried about some errors in my server's log that sometimes appear when I try to validate:

2016-08-22 22:46:02	Warning		RSA server certificate wildcard CommonName (CN) `*.grserver.gr' does NOT match server name!?	Apache error
2016-08-22 23:19:44	Warning		RSA server certificate wildcard CommonName (CN) `*.grserver.gr' does NOT match server name!?	Apache error
2016-08-22 23:34:46	Warning		RSA server certificate wildcard CommonName (CN) `*.grserver.gr' does NOT match server name!?	Apache error

Please, I need to find a solution; we are losing too much traffic from Twitter and letting our competition grow.

Please tell me if you need more info.

.htaccess

Header unset Pragma
FileETag None
Header unset ETag

# BEGIN Expire headers
<IfModule mod_expires.c>
ExpiresActive On
ExpiresDefault "access plus 5 seconds"
ExpiresByType image/x-icon "access plus 604800 seconds"
ExpiresByType image/jpeg "access plus 604800 seconds"
ExpiresByType image/png "access plus 604800 seconds"
ExpiresByType image/gif "access plus 604800 seconds"
ExpiresByType application/x-shockwave-flash "access plus 604800 seconds"
ExpiresByType text/css "access plus 604800 seconds"
ExpiresByType text/javascript "access plus 604800 seconds"
ExpiresByType application/javascript "access plus 604800 seconds"
ExpiresByType application/x-javascript "access plus 604800 seconds"
#ExpiresByType text/html "access plus 600 seconds"
#ExpiresByType application/xhtml+xml "access plus 600 seconds"
</IfModule>
# END Expire headers

<FilesMatch "\.(js|css|html|htm|php|xml)$">
SetOutputFilter DEFLATE
</FilesMatch>

<IfModule mod_gzip.c>
mod_gzip_on Yes
mod_gzip_dechunk Yes
mod_gzip_item_include file \.(html?|txt|css|js|php|pl)$
mod_gzip_item_include handler ^cgi-script$
mod_gzip_item_include mime ^text/.*
mod_gzip_item_include mime ^application/x-javascript.*
mod_gzip_item_exclude mime ^image/.*
mod_gzip_item_exclude rspheader ^Content-Encoding:.*gzip.*
</IfModule>

# BEGIN WordPress
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
</IfModule>

# END WordPress


#2

So you’re saying that your ServerName and SSL certificate name do not match? That will be an issue as our crawler is strict on validation.
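
For reference, "match" here means the certificate's CN (or a SAN entry) must cover the ServerName of the vhost serving the site. A minimal sketch of a correctly matched SSL vhost, with hypothetical file paths:

<VirtualHost *:443>
    # The certificate's CN/SAN must cover this ServerName. A wildcard
    # for *.grserver.gr covers e.g. foo.grserver.gr, but NOT
    # www.grow-digital.gr, which is what the Apache warning above reports.
    ServerName www.grow-digital.gr
    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/grow-digital.crt    # hypothetical path
    SSLCertificateKeyFile /etc/ssl/private/grow-digital.key  # hypothetical path
</VirtualHost>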


#3

Hello @andypiper and thank you for your time.

I do not use an SSL certificate for my website; I just see this warning in my Apache error log (a warning, not an error).

Two months ago, Twitterbot was able to crawl my website. When I did a clean install of WordPress and launched the site on 16 August, I realised I had a problem with the validator.

Do you need any other information?


#4

Your robots.txt file is explicitly preventing (Disallowing) Twitterbot from accessing your whole site. You'll need to fix that first.
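
For context, the kind of rule that blocks Twitterbot site-wide looks like this (illustrative; the offending file is not quoted in the thread):

User-agent: Twitterbot
Disallow: /

Twitterbot honours robots.txt, so a Disallow: / under its user agent (or under *) stops the Cards crawler from fetching any page.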


#5

Sorry, we were making fixes. The robots.txt is fixed (though the problem still remains):

User-Agent: *
Allow: /wp-content/uploads/
Disallow: /wp-content/plugins/

Sitemap: http://www.grow-digital.gr/sitemap_index.xml
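
As a belt-and-braces option, a crawler-specific group takes precedence over the * group, so an explicit Twitterbot stanza (optional, illustrative) rules robots.txt out entirely; an empty Disallow means "allow everything":

User-agent: Twitterbot
Disallow: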


#6

Just checked this in the Iframely debugger and it appears to have a timeout issue. Our validator seems to have trouble resolving the address. I think this is network-related and not a Cards crawler problem. I'm not sure what to suggest at present.
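
One quick hand test (not the validator itself) is to fetch the page yourself with Twitterbot's user-agent string, commonly reported as Twitterbot/1.0:

curl -I -A "Twitterbot/1.0" http://www.grow-digital.gr/

If that returns a normal response from your own machine while the validator still times out, the block is most likely at the network/IP level on the host rather than in the page itself.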


#7

I will contact my host provider. I will leave this issue open so I can give you feedback on the problem.


#8

Greetings,

I finally found the solution to my problem. I contacted my host provider, and after changing many things (CNAMEs, IPs, etc.) we realised that BitNinja had blacklisted Twitterbot's IPs. After whitelisting Twitterbot, I was able to fetch all of my pages.

Maybe you should do something about this, since a lot of people here were complaining.

You may close this subject.

Thanks for your time.

