Feedback for "Upcoming Changes to PNG Image Support"

png
images

#63

After reviewing feedback, discussing the changes with concerned users, and evaluating the upcoming change internally, we have a tentative amendment we are looking at making. It cannot be guaranteed, but in the name of transparency we felt it best to share the proposed amendment now, so the changes can be discussed as they would stand were it approved.

We’ve heard strong feedback on the proposed changes and want to continue supporting artists who use Twitter to share their work without risking theft or degradation of that work. For images that are low resolution but need to retain high quality, the test for PNG images will add the following rule:

  • If the image is a PNG that is 900 pixels or smaller in the longest dimension (can fit into 900x900), that PNG will be left as-is.
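
Illustratively, this rule is easy to check locally before uploading; a minimal sketch with Pillow (the exact server-side test may differ in details):

```python
from PIL import Image

def left_as_is(path: str) -> bool:
    """True if the file matches the amendment: a PNG whose longest
    dimension is 900 pixels or smaller (i.e. it fits in 900x900)."""
    with Image.open(path) as im:
        return im.format == "PNG" and max(im.size) <= 900
```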

This compromise will hurt load latencies for such images on the timelines of users with slow internet, but it accommodates artists who do not wish to upload their work at higher resolutions because of the risk of theft. Since the bulk of the problem we face with PNGs is high resolution images rather than low resolution artwork, we believe this compromise can help both users on slow connections viewing Tweets with high resolution images and artists who value image quality when sharing their work on Twitter.

This amendment will be in place with the rest of the changes on February 11th, with the caveat that it may be revisited as we monitor for abuse and for adverse impact on Twitter users with slow connections.

We know this change is still a compromise. Given the limited time-frame, we hope it covers most of the use cases users are concerned about in the changes to PNGs coming on February 11th.


#64

How about having two different versions of an image: either algorithm-based (if low bandwidth is detected, load the lower-bandwidth image) or a toggle of some sort in the options? I’d see this as a solution to both problems if it’s considered.


#65

The 900x900 unaltered PNG works for me (I assume this includes transparency):

  • For pixel art this is more than enough canvas
  • For steganography as used in MidBoss (see Gamasutra for details), this will preserve the pixel data and not break steganographic methods. Other projects use this too – Pico8 and Champions Online just off the top of my head, but probably many more.
  • This also addresses our accessibility concerns, as text resaved as JPG on Twitter suffers badly in legibility and can be difficult to read for those with eyesight issues.

#66

Thanks for the comment @Draconas,

We understand this is a compromise for now. There are many great suggestions from the community that I would personally love to see invested in.

This upcoming change is one step; as we are able to invest more and as technologies improve, we will keep improving things for everyone.


#67

@Enichan, you’re correct. This amendment would include transparency.

I’m glad you see the use cases this is working to support. Those and more align well with the new proposal, and the feedback I have solicited from individual artists and developers has been very supportive.

Thank you for sharing your viewpoint; we truly appreciate it.


#68

The changes announcement has been amended:

Images uploaded as PNG that are 900 pixels or smaller in the longest dimension (can fit into 900x900) will be left as-is.


#69

What about APNGs? Are those supported, or will they be destroyed by this change?


#70

Hi, I’m an artist who uses .png files at relatively high resolution while keeping a low filesize (usually under 100kb).

In some cases my work ends up with fractional filesize savings at a severe penalty to quality when saved as .jpeg at 85% quality. Will there be any clause allowing slightly larger .png files to supersede .jpeg files for better image quality?

I have attached an example below.

(.png, 86kb)
(.jpeg at 85%, 81.8kb)


#71

And what about images like https://twitter.com/projectTiGER_/status/726092488478326784 ? If they’re bigger than 900x900, does that mean those will be destroyed as well, given that JPEG does not support transparency?


#72

I like seeing concrete examples like this; thank you for sharing, @RWStandard.

If you don’t mind, I would like to go through the possibilities.

First, baselining the JPEG is a good start; I got 78.3KB.

The PNG you provided, I downloaded at 54.6KB – this is because our forums already optimize PNGs. So right off the bat, your image is smaller than the JPEG alternative if you run it through a PNG optimizer (I like ImageOptim for macOS, but there are lots of tools out there).

Next, I removed the alpha channel from your image and ran it through an optimizer, which brought it down to 47.3KB. Your image really does come in smaller than the JPEG.

Now, looking at your image, I saw that it has a very limited color palette (which is a really great looking style), so I converted it to PNG8; it looks nearly identical, with some minor dithering that is hard to notice. If the PNG8 image happened to be larger than the JPEG, we would keep it anyway, since we are adding PNG8 support without conversion. But the end result is actually 33.2KB – the smallest of all.

Basically, if you have art that comes out close to the size of the JPEG it would be converted to, it is very possible to get the PNG below the JPEG’s size with an easy-to-use optimizer (and by removing the transparency). Just removing the transparency channel probably would have worked for you, honestly.
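
If you want to reproduce these steps yourself, here is a rough sketch with Pillow; the lossless pass uses Pillow’s optimize flag as a stand-in for a dedicated tool like ImageOptim, so the byte counts won’t match the ones above exactly:

```python
import os
from PIL import Image

im = Image.open("artwork.png").convert("RGBA")  # hypothetical input

# Baseline: JPEG at 85% quality, alpha flattened onto white.
flat = Image.new("RGB", im.size, (255, 255, 255))
flat.paste(im, mask=im.split()[3])
flat.save("baseline.jpg", quality=85)

# Alpha removed, losslessly optimized PNG.
flat.save("no_alpha.png", optimize=True)

# PNG8: quantize to a 256-color palette.
flat.quantize(colors=256).save("png8.png", optimize=True)

for f in ("baseline.jpg", "no_alpha.png", "png8.png"):
    print(f, round(os.path.getsize(f) / 1024, 1), "KB")
```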


#73

Howdy, thank you for being responsive to this issue and addressing our concerns. While I appreciate the concessions the development team has made to attempt to satisfy the artistic community as a whole, I have a few counterpoints I’d like to share as an illustrator who has moved to twitter as one of my main social media and art sharing platforms.

I’ve been following this thread since its inception as I wanted to see if the backlash would warrant a mediated response that would make all artists happy, not just sketch and pixel artists. This new amendment for preserving PNGs at low resolution absolutely does not resolve the issue for digital illustrators with high levels of detail, such as some of the work I do.

I’ve linked a thread containing a visual example of the size difference for images with non-uniform height-width ratios, such as landscapes. While Twitter has always been unfriendly to previewing these images, often focusing the preview on irrelevant portions of the image, the proposed 900px constraint would cripple the level of detail available to artists who only know to upload PNGs for the best image quality: https://twitter.com/Oricalcon/status/1081416456095584256

I’m a lucky subset of the population that is both an illustrator and a software/web developer, so I’m a bit more educated on how to preserve image quality across file formats and resolutions. However, the vast majority of artists simply know that PNG format is the best for maintaining image quality when uploading to the web.
These individuals would still attempt to upload their images at more than 900px on the longest side, causing the “test” to compress them to JPG and litter them with visual artifacts. Using @RWStandard’s post as an example, the artifacts that occur would be prevalent throughout these illustrators’ images. The compression is much, much more visible on digital illustrations than in photography.

TL;DR: I would be much more comfortable with these changes if Twitter provided an image preview, a tool, or a link to a converter such as the one @RME utilized for comparison, so artists could see what their uploaded image will look like, or test images themselves to maximize quality while minimizing data costs on Twitter’s end. This would let artists put their best product forward within Twitter’s new constraints, instead of throwing compressed images littered with visual garbage out to their followers without understanding why their images were so thoroughly trashed.

If none of these proposed concessions are possible, then the best step would be raising the PNG size limit to a much more reasonable 1200px at largest (still a whole 750px smaller than popular art sharing platforms such as Tumblr), at a small additional cost to those “global user” load speeds; for complex illustrations, 1200px is a much more reasonable image size than 900px.


#74

@NolanOBrien Please clarify how the uploaded JPEG image quality is estimated.

How do you avoid overcompression? Recompressing an 85% JPEG image will make its file size smaller while continuously degrading visual quality.
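
This generation loss is easy to demonstrate: re-save the same JPEG at 85% a few times and watch the size and quality drift. A quick Pillow sketch (the filename is hypothetical):

```python
import os
from PIL import Image

path = "photo.jpg"
for gen in range(5):
    out = f"gen_{gen}.jpg"
    Image.open(path).save(out, quality=85)
    print(out, os.path.getsize(out), "bytes")
    path = out  # the next generation re-encodes this output
```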

If JPEG quality is estimated from quantization tables, there are perceptual encoders like Guetzli that achieve lower bitrate using finer quantization tables.
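
For reference, the quantization tables can be read straight from the file header; a sketch of how an estimator might inspect them with Pillow (whether Twitter estimates quality this way is exactly what I am asking):

```python
from PIL import Image

with Image.open("upload.jpg") as im:
    # Smaller coefficients mean finer quantization, i.e. higher quality.
    for table_id, table in im.quantization.items():
        print(f"table {table_id}: first 8 coefficients {list(table[:8])}")
```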

It would be great if images in both JPEG and PNG formats were not recompressed when the savings are too low, e.g. less than 5%, or the bitrate is already reasonable.

I believe the major source of complaints about quality is 4:2:0 chroma subsampling (compare [1] and [2]). There are encoders that can achieve good bitrate without it. The Google compression team came to the same conclusions [3].
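
For comparison, here is how the two options differ at encode time in Pillow, where subsampling=0 keeps full-resolution chroma (4:4:4) and subsampling=2 is 4:2:0 (the input filename is hypothetical):

```python
from PIL import Image

im = Image.open("red-flowers.png").convert("RGB")
im.save("flowers_444.jpg", quality=85, subsampling=0)  # 4:4:4, sharper color edges
im.save("flowers_420.jpg", quality=85, subsampling=2)  # 4:2:0, smaller file
```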

Using large master images certainly improves quality, but relying on browser scaling is a huge performance hit, starting with memory: a 2048x2048 image uses 16MB uncompressed, which is critical for many users, including on mobile. Images with a smaller pixel size could look equally good if they were scaled according to the physical pixel dimensions specified in their metadata. For example, macOS follows this convention when displaying images in Preview.

Small PNG images can safely have the CSS style image-rendering: crisp-edges / image-rendering: pixelated applied. Default browser upscaling ruins pixel art; larger images fare better, but they are not unaffected.

[1] https://getoptimage.com/media/benchmark/red-flowers-optimage.jpg

[2] https://getoptimage.com/media/benchmark/red-flowers-imageoptim.jpg

[3] https://twitter.com/jyzg/status/1031295238189576192


#75

One random note: this attached PNG has failed to upload (with a 500 Internal Server Error) both via the Twitter web UI and the Twitter API. Would be cool to see a fix for this.

I’ve fixed it in our app by resizing under certain conditions.


#76

Thank you for the technical reply, @vmdanilov.

Unfortunately, using encoders like Guetzli is completely untenable. They are fine for sites with static assets, but when dealing with many millions of images uploaded by users every day, encoder performance is essential. Guetzli is two orders of magnitude slower than our codec, which is just not scalable in any way. These advanced JPEG encoders also sacrifice a critical feature of JPEG: progressive loading support.

https://www.keycdn.com/blog/benchmarking-guetzli

This comes up regularly though, so I just want to re-emphasize that we have been extremely rigorous in the tradeoffs we are making to maximize both quality and performance with JPEGs. There is no perfect choice. We are confident that, with hundreds of millions of Twitter users and billions of people encountering Tweets all over the internet, we are doing the best we can with images while continuing to experiment with improvements.

If we always served the image at the size the user uploaded, I could see the concern you are raising; however, we don’t push full resolution images to users until they elect to open the full image. We serve numerous variants (sizes) of an image to better fit the UI it is being displayed in. Most of the time the image will be 680x680 until the user opens it at full resolution. The value of uploading in high resolution is that all variants will have good quality, and that when the full image is displayed, zooming on it will be less prone to visible artifacts.
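
For illustration only, deriving display variants looks roughly like this with Pillow; the variant names and the 1200px size are hypothetical (only 680 and 2048 are mentioned in this thread), not a description of our actual pipeline:

```python
from PIL import Image

SIZES = {"small": 680, "medium": 1200, "large": 2048}  # hypothetical names/sizes
src = Image.open("upload.png")
for name, px in SIZES.items():
    variant = src.convert("RGB")
    variant.thumbnail((px, px))  # preserves aspect ratio, never upscales
    variant.save(f"variant_{name}.jpg", quality=85)
```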

Basically, rest assured, this is a non-issue.

We are aware of this, but there are technical and practical hurdles to adopting this sort of change. I will take the feedback, though, and follow up with the appropriate product owners to consider what would be necessary to accommodate it and what the consequences of such a change would be. I personally do think that enlarged images should preserve pixel quality by avoiding interpolation, while shrinking an image to fit should use interpolation – we’ll see how the owners feel about that, as a separate investigation.


#77

@cstesto, can you somehow provide a link to the original of this image? Our forums recompress attached PNGs, so the attachment is not the same file you uploaded and I can’t test your use case.


#78

My proposal is to give people more incentive to optimize images before uploading, i.e. define a threshold (5-10%) and preserve the original image if the savings from recompressing it at 85% JFIF quality fall below that threshold.

I briefly checked and found that uploaded images are not recompressed on re-upload. Can you explicitly state whether that is due to JPEG quality estimation or a savings threshold? Or is it just deduplication in this case?

Progressive JPEG encoding can be added losslessly using something like libjpeg’s jpegtran.
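
For example (wrapped in Python for consistency with the other sketches here; this requires libjpeg’s jpegtran binary on PATH, and the filenames are hypothetical):

```python
import subprocess

# Rewrites the entropy coding as progressive without re-encoding the pixels.
subprocess.run(
    ["jpegtran", "-progressive", "-copy", "all",
     "-outfile", "progressive.jpg", "baseline.jpg"],
    check=True,
)
```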

Guetzli is a byproduct of the new image format PIK https://github.com/google/pik. As noted, it is optimized for very high JPEG quality (around 94) only, and even there it is prone to blocking artifacts https://github.com/google/guetzli/issues/253. I don’t see you using them but there are more practical compressors available https://getoptimage.com/benchmark.

The main concern here is actually quality. Not everyone can upload 2048x2048 images. For example, a 680x680 px icon with an explicit resolution of 144 dpi is better displayed at 340x340 pt, i.e. 340x340 px served for non-retina and the original 680x680 px for retina screens. Without physical dimensions, the icon will be stretched and blurred for retina users. Uploading a 225-300 dpi image of any size should yield the same good quality as uploading a 2048x2048 image.
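
As a sketch of that convention: read the density metadata Pillow exposes (the pHYs chunk for PNG, JFIF density for JPEG) and derive the size in points at 72 points per inch:

```python
from PIL import Image

with Image.open("icon.png") as im:  # hypothetical 680x680 icon
    dpi_x, dpi_y = im.info.get("dpi", (72, 72))  # fall back to 1x if absent
    print(f"render at {im.width * 72 / dpi_x:g} x {im.height * 72 / dpi_y:g} pt")
    # 680x680 at 144 dpi -> 340 x 340 pt, i.e. 2x on retina screens
```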


#79

Thanks again for replying, @vmdanilov!

Ah, I misunderstood. We’ll look at what more can be done to preserve quality in uploaded JPEGs. If users are willing to do the hard work up front, it seems reasonable to figure out a way to preserve what they have already optimized. This would be a separate effort from the current change.

That’s correct: a reuploaded image will be unchanged. I believe that, at this moment, we consider chroma subsampling, JFIF quality (via the quantization tables), resolution, and progressive encoding to decide whether or not to leave the image as-is. However, this is not something to couple to, as it is subject to change. Not trying to be cagey, but explicitly providing you with the requirements for when we avoid a transcode risks broken expectations when we change the heuristics.

That’s fair. Something for us to look into as an optimization in a separate effort. Good point.

I think any codec outside our performance requirements (and we’ve tested them all) will require a separate effort to preserve the work of users who do the encoding themselves. Good feedback; we are taking this seriously for future work.

I think the recommendations we have still stand.

  1. upload at 2048x2048
  2. scale up with full integer scaling to at most 2048x2048 (sketched below)
  3. upload using PNG8
  4. upload PNG below 900x900
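
A minimal sketch of recommendation 2 with Pillow: scale up by the largest whole-number factor that still fits in 2048x2048, using nearest-neighbor so no new colors are introduced (filenames hypothetical):

```python
from PIL import Image

im = Image.open("pixel_art.png")
factor = min(2048 // im.width, 2048 // im.height)
if factor > 1:
    im = im.resize((im.width * factor, im.height * factor), Image.NEAREST)
im.save("pixel_art_scaled.png")
```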

I understand this is imperfect, but we are bending over backwards to support as many use cases as possible (including the examples you gave, with PNG8 or PNG under 900x900). We’ll keep iterating on the balance between quality and performance.


#80

Thanks for all the well-written replies giving some insight into what’s happening and why.

Something I didn’t see addressed directly is the addition of a link to view the original image.
As mentioned, most image URLs can currently have :orig appended to see the original.

If the reason for this change is to reduce page/image load times for users on low-bandwidth internet, wouldn’t it make sense to serve all users those smaller (thumbnail) images in their feed, while still allowing them to click ‘View original image’?

It seems this could be achieved easily with all the image sizes you already have.
As you said, most places already use thumbnails, since using the full-res images in small containers makes the UI slow.

Having a link to view the original image seems trivial to implement and should not affect loading times for anyone who chooses not to click on it.
People on 2G networks would be aware, or could be warned, that clicking ‘original quality’ means longer load times, while still allowing everyone in the world to access the originals.

You previously agreed it might be good to link to imgur for the original image; why not help artists and keep users on Twitter.com by doing this for them with the data you already have?

Additionally, this would allow for users to opt in to the original quality in their whole feed if they would install a userscript like the following https://github.com/eight04/twitter-original-image (even better if this was an advanced setting built in to Twitter)

I understand the need to standardize and improve page load times for all users, but having an option to at least manually click ‘View original image’ would give people the choice of how they want to use their bandwidth for Twitter when they can.


#81

Thanks for the comment, @MvEerd. It was well thought out and I appreciate wanting to solve for all users.

First, I should point out that the orig variant of images is an internal-only variant. Twitter makes no guarantees related to it, and it is not intended for any outside consumption. Don’t read orig as meaning “original” but as meaning “origin”, i.e. from the source in our data center. I would dissuade users from ever accessing that variant.

Your suggestion to have a way to persist access to the originally uploaded version of the image is fair and not unheard of. But your assertion that it would be “trivial to implement” is unfortunately not the case. Our system, built around scalability, is not really like a file system where we can store arbitrary files and select which version we serve. It would be a very complex effort and would likely require spinning up an entirely new service for this special case of hosting original images.

Do also remember that extra variants (and increased file sizes) are not purely a matter of transferring the data; they also have an impact on our CDN hit rates. More large files cached on CDNs mean more smaller files being purged from them, hurting hit rates. For every user with fast internet getting the heavy, highest-fidelity image they want, there are many more users who will face slower content loading because the content they are trying to see is no longer in proximity to them – it was purged from the CDN to make room for the heavy files. It’s a non-trivial balancing act, but we’ll keep working to improve things.

As we continue to improve the balance between quality and performance, these ideas will be kept in mind.


#82

How much information can you give about how to tell whether a PNG file will be recompressed based on a hypothetical JPEG version’s filesize?

I’m making a tool that lets users optimize their art before uploading to Twitter: it losslessly compresses the image with optiPNG to see if it comes in small enough, and then offers to lossily compress it to 256 colors in case that looks less bad than the eventual JPEG compression will.
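
The core of the tool is roughly this (assuming the optipng binary is on PATH, and using a quality-85 Pillow JPEG as a stand-in for Twitter’s unknown settings, which is exactly the guesswork I’m complaining about):

```python
import os
import subprocess
from PIL import Image

src = "art.png"  # the user's artwork

# Pass 1: lossless optimization.
subprocess.run(["optipng", "-o2", "-out", "lossless.png", src], check=True)

# Pass 2: lossy 256-color fallback.
Image.open(src).convert("RGB").quantize(colors=256).save("png8.png", optimize=True)

# Stand-in for Twitter's hypothetical JPEG conversion.
Image.open(src).convert("RGB").save("guess.jpg", quality=85)

for f in ("lossless.png", "png8.png", "guess.jpg"):
    print(f, os.path.getsize(f), "bytes")
```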

But how are users supposed to know which approach to take? What good does an “if it’s smaller than the JPEG” clause do if users don’t KNOW whether it’s going to be smaller than a JPEG whose settings we don’t know? The only way to learn the right approach is to have already posted the image and seen what filesize it got once it was online in front of everyone.

I know you said you don’t want to say in case the settings wind up changing, but I can tell you now, on February 11th I’m going to upload a test file, then look at the result in exiftool, and base my script on whatever it gives, and so will anyone else trying to deal with this vague rule that we’re depending on, so you may as well tell us what the initial settings are going to be, rather than making us scramble on the day the change goes live to find out the same information the hard way.