Firebase Download Image URL Doesn't Show in Preview



Hi. I’m using Firebase to store my images, and I use the download URL generated by its API as the `src` value for my image tags. However, when I use the same URL as the `content` value of my `twitter:image` meta tag and validate with the Card Validator, the image preview doesn’t show.

I’ve checked my metadata, and the `twitter:image` tag has the correct image URL as its `content` value.

I’ve also tested other images not coming from my Firebase Storage, and their previews show up fine.

Any ideas? Here’s a sample image from my Firebase Storage:

Player card appears but without thumbnail

Are you able to share an example link where you’ve defined a card and this is not working?

Is Firebase blocking access for the crawler in robots.txt? You can see more things to check in the FAQ and troubleshooting post.
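You can check this yourself with Python’s standard-library robots.txt parser. This is a sketch with made-up rules and a made-up object path; substitute the robots.txt actually served by the host of your image.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt rules -- replace with the real file's contents
rules = """\
User-agent: *
Disallow: /
""".splitlines()

parser = RobotFileParser()
parser.parse(rules)

# Twitterbot falls under the wildcard group here, so the image path is blocked
print(parser.can_fetch("Twitterbot", "/v0/b/example-bucket/o/image.png"))  # False
```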


Thanks for the quick response.

I’m asking Firebase support whether it’s blocking the Twitter bot, as I don’t know exactly where to look for the right robots.txt file. I’ll put together a sample link in the next 24 hours.
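For what it’s worth, the crawler looks for robots.txt at the root of whichever host serves the image, so for a Firebase download URL that is the storage host, not your app’s domain. A minimal sketch (the bucket and token below are made up):

```python
from urllib.parse import urlsplit

# Hypothetical Firebase download URL, for illustration only
image_url = ("https://firebasestorage.googleapis.com/v0/b/my-app.appspot.com"
             "/o/photo.png?alt=media&token=0000")

parts = urlsplit(image_url)
robots_url = f"{parts.scheme}://{parts.netloc}/robots.txt"
print(robots_url)  # https://firebasestorage.googleapis.com/robots.txt
```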


Here’s an example:


Ah! It’s a Meteor app.

The Cards crawler is not capable of executing JavaScript. You’ll need to have the card meta tags as static values in your page header, rather than having them render on page load. I’m not seeing them when I pull the page using `curl -A Twitterbot`.
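A quick way to see what a JavaScript-free crawler sees is to parse the raw HTML response for the card tags. A minimal sketch using Python’s standard library; in practice you would feed it the response fetched with the crawler’s user agent (e.g. via `curl -A Twitterbot`) rather than this static snippet:

```python
from html.parser import HTMLParser

class MetaCollector(HTMLParser):
    """Collect <meta> name/property -> content pairs from raw HTML,
    which is all a crawler that cannot run JavaScript ever sees."""
    def __init__(self):
        super().__init__()
        self.meta = {}

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            attrs = dict(attrs)
            key = attrs.get("name") or attrs.get("property")
            if key:
                self.meta[key] = attrs.get("content", "")

# Static snippet for illustration; JS-injected tags would never appear here
collector = MetaCollector()
collector.feed('<head><meta name="twitter:card" content="summary_large_image"></head>')
print(collector.meta.get("twitter:card"))  # summary_large_image
```

If `twitter:card` and `twitter:image` are missing from the output, they are being added client-side and the crawler can’t see them.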


Yes. I’m using spiderable + flow-router-seo. If you run `curl -A Twitterbot` again, you’ll find the Twitter tags just below the first three meta tags and the style tag.

Meta tags from the curl output:


   <link rel="stylesheet" type="text/css" class="__meteor-css__" href="/3dbea58c581208a2b2d06b5eee43f3c41d501beb.css?meteor_css_resource=true">
   <link rel="stylesheet" type="text/css" class="__meteor-css__" href="/1a301db3fca4d8c59d8c57dbb1c3dd37ce417a5d.css?meteor_css_resource=true">
   <link rel="stylesheet" type="text/css" class="__meteor-css__" href="/6bb5fc24f8c9c3adcebb4666effdd53ba433f6ac.css?meteor_css_resource=true">
   <meta charset="utf-8">
   <title>This is a test title</title>
   <meta name="viewport" content="width=device-width, initial-scale=1.0">
   <meta http-equiv="X-UA-Compatible" content="IE=edge">
   <!-- Removed style tag here-->
   <meta name="twitter:title" content="This is a Title" data-flow-router-seo="true">
   <meta name="twitter:url" content="" data-flow-router-seo="true">
   <meta name="twitter:description" content="This is an awesome description" data-flow-router-seo="true">
   <meta property="og:title" content="This is a Title" data-flow-router-seo="true">
   <meta property="og:url" content="" data-flow-router-seo="true">
   <meta property="og:description" content="This is an awesome description" data-flow-router-seo="true">
   <meta property="og:image" content=";token=bcb8490f-f8b0-4fa8-b069-8a946d3d3961" data-flow-router-seo="true">
   <meta name="twitter:card" content="summary_large_image" data-flow-router-seo="true">
   <meta name="twitter:image" content=";token=bcb8490f-f8b0-4fa8-b069-8a946d3d3961" data-flow-router-seo="true">
   <meta name="twitter:site" content="@joniah2884" data-flow-router-seo="true">

I’ve made the image URL the default value for `twitter:image` and `og:image` in my code.


Ah, my mistake, I didn’t spot that.

So the issue here looks to be that the image is blocked by the robots.txt file. I’m not sure how you’d persuade Firebase to update or change that.


Yeah. I thought so, too. I’m currently asking Firebase regarding that.

It’s a bit of a bummer, really. If you try the same URL in Facebook’s Open Graph Debugger, you’ll see the preview.


I understand the frustration, especially when compared with other platforms! We try to follow Google’s robots.txt specification closely, and that file currently does prevent us from crawling the image :thumbsdown:


UPDATE 12/20: This should now just work. Hooray! :champagne: :tada:

Firebase Storage PM here.

Sorry you’ve hit this issue; I’m honestly surprised it hasn’t come up before. It looks like, to fix this, we’d have to add:

   User-agent: Twitterbot
   Allow: /

to our robots.txt. I’m talking with the eng team about this now, but it seems feasible.
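To illustrate why that group fixes it, here is a sketch using Python’s standard-library robots.txt parser with hypothetical before/after rules (the wildcard `Disallow: /` stands in for whatever the real file contains):

```python
from urllib.robotparser import RobotFileParser

def twitterbot_can_fetch(robots_txt: str, path: str) -> bool:
    """Return True if the given robots.txt rules allow Twitterbot to fetch path."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return parser.can_fetch("Twitterbot", path)

# Hypothetical rules, for illustration only
before = "User-agent: *\nDisallow: /"
after = "User-agent: Twitterbot\nAllow: /\n\nUser-agent: *\nDisallow: /"

print(twitterbot_can_fetch(before, "/o/image.png"))  # False
print(twitterbot_can_fetch(after, "/o/image.png"))   # True
```

The dedicated `User-agent: Twitterbot` group takes precedence over the wildcard group, so the crawler is allowed through while other bots remain blocked.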

Another option might be to serve our content through a CDN (like Fastly, see GCS + Fastly), at which point you can add your own robots.txt set to whatever you’d like.


Thanks @asciimike - great suggestion there, and thank you for thinking about potentially putting in a fix on your side, too.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.