Will Web Intents get to have "Cards" features?


I'd love to be able to parameterize Web Intents with things like pictures. Are there any planned upgrades to the Web Intents API?
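For reference, today's Tweet Web Intent is just a URL whose query string carries text-only parameters (`text`, `url`, `via`, `hashtags`), which is why something like a picture can't be attached through it. A minimal sketch of building one, assuming those documented parameter names:

```python
from urllib.parse import urlencode

def tweet_intent_url(text, url=None, via=None, hashtags=None):
    """Build a Tweet Web Intent link from its text-only query parameters."""
    params = {"text": text}
    if url:
        params["url"] = url
    if via:
        params["via"] = via
    if hashtags:
        params["hashtags"] = ",".join(hashtags)  # comma-separated, no "#"
    return "https://twitter.com/intent/tweet?" + urlencode(params)

print(tweet_intent_url("Great read", url="https://example.com", hashtags=["webdev"]))
```

Any richer media (images, and so on) currently has to come from the Card markup on the shared URL itself, not from the intent.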

You should stop basing this on meta tags. Lots of new websites are AJAX-based and might have more than one Tweet, Retweet, or share button for different content items on the same page. When your server comes to check those tags, it's impossible for us to pre-fill them on AJAX-based pages with no server-side code generation, unless you have some sort of JavaScript interpreter.
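For anyone following along, "those tags" are the Twitter Card meta tags the crawler expects to find in the `<head>` of the served HTML. A typical `summary` card looks something like this (values are placeholders):

```html
<!-- Card markup read by the crawler; one set per page, which is the
     problem for pages with several shareable content items -->
<meta name="twitter:card" content="summary">
<meta name="twitter:title" content="Page title">
<meta name="twitter:description" content="A short description of the page.">
<meta name="twitter:image" content="https://example.com/preview.png">
```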


This is a really big deal for me. I'd love to implement Twitter Cards on my AJAX site, but Twitter scans the page's `<head>` before my JavaScript gets a chance to load the necessary data and set the meta tags. PLEASE, I need a solution for Twitter Cards on AJAX-based sites.

SIDE NOTE: Your (Twitter's) lack of support for an AJAX workflow seems very hypocritical, since Twitter itself is an AJAX web app.


@itsjustcon In the end we had to do something really crazy to make this work, along with the G+ and Facebook buttons.

Our site is still entirely client-driven (all pages are rendered on the client using JavaScript templates), but we proxy every static HTML request through a Java servlet in charge of building the tags before serving the page. (We have some caching and a few other tricks to make this as fast as possible, and we were able to keep the site rendering client-side, which saves us a lot of power.)
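For anyone wanting to try the same approach: the core of it is sniffing crawler user agents and serving pre-rendered tags only to them, while browsers still get the client-rendered app. Con's team did this in a Java servlet; below is the same idea sketched in Python, with an illustrative bot list and hypothetical page data:

```python
# Substrings of known crawler user agents (illustrative, not exhaustive).
CRAWLER_AGENTS = ("twitterbot", "facebookexternalhit", "googlebot")

def is_crawler(user_agent):
    """Decide whether a request comes from a metadata crawler."""
    ua = (user_agent or "").lower()
    return any(bot in ua for bot in CRAWLER_AGENTS)

def render_meta_tags(page):
    """Build the <meta> block server-side from data the JS app would fetch."""
    return (
        '<meta name="twitter:card" content="summary">\n'
        f'<meta name="twitter:title" content="{page["title"]}">\n'
        f'<meta name="twitter:description" content="{page["description"]}">'
    )

def handle_request(user_agent, page):
    """Crawlers get pre-rendered tags; browsers get the client-side app."""
    if is_crawler(user_agent):
        return "<html><head>" + render_meta_tags(page) + "</head><body></body></html>"
    return "<html><head><script src='/app.js'></script></head><body></body></html>"
```

In production you'd cache the rendered responses, as Con mentions, since the tag values rarely change between crawler hits.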


Hi guys,

Just going to chip in a little note on this: it's not a realistic prospect right now for crawlers in general, including ours, to start executing JavaScript to read meta tags. It would hurt performance both of the page execution and of the initialization of the target URL: requesting a single HTML resource in one request, versus a plethora of JavaScript dependencies before the page can even be executed, let alone before the content is downloaded and put into place for extraction. Remember, we're trying to get this metadata back from your site and into a Tweet with minimal delay, as that Tweet is fanned out to thousands of recipients in the milliseconds after you send it. The only way this is going to work in a performant manner is by parsing indexable HTML.

To Con's point: the previous iteration of our own site was rendered entirely on the client, and honestly we suffered similar shortcomings. It's only since we reengineered it that crawler-facing features, such as oEmbed autodiscovery for embedded Tweets, could be added to permalink pages. Before, we'd have had the same problem, with CMSes needing to download and execute scripts to get the content. We're now using a system of initial rendering on the server, with partial page loads as you browse around the site. (We blogged about the architecture change back in May: http://engineering.twitter.com/2012/05/improving-performance-on-twittercom.html)



Ben, your points are fairly valid when it comes to speed: you don't want to pay the penalty of processing JavaScript on your servers. However, I have to throw them out, because this already works on Facebook, G+, Pinterest, and Google and Bing search.

There is a standard method of crawling AJAX sites in which the site executes the JavaScript itself and serves the static, pre-rendered HTML to the crawler. Why won't Twitter adopt this standard? See https://dev.twitter.com/issues/1290 for more information.
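For reference, the standard in question is Google's AJAX crawling scheme: the page exposes `#!` (hashbang) URLs, the crawler rewrites each one to an `_escaped_fragment_` URL, and the server answers that request with a pre-rendered HTML snapshot. A sketch of the URL mapping, assuming the rewrite rules from that spec:

```python
from urllib.parse import quote

def escaped_fragment_url(hashbang_url):
    """Map a #! URL to the URL a scheme-aware crawler actually fetches.

    Under the AJAX crawling scheme, http://example.com/page#!id=42 is
    requested as http://example.com/page?_escaped_fragment_=id=42, and the
    server responds with an HTML snapshot of the fully rendered page.
    """
    base, sep, fragment = hashbang_url.partition("#!")
    if not sep:
        return hashbang_url  # not a hashbang URL; fetched as-is
    joiner = "&" if "?" in base else "?"
    return base + joiner + "_escaped_fragment_=" + quote(fragment, safe="=")
```

The snapshot-serving side is exactly the pre-rendering proxy Con describes earlier in the thread; the scheme just standardizes how the crawler asks for it.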