Twitter doesn’t show full photos when they appear in the stream; you have to tap to expand the whole image. Unfortunately, the cropped preview is often framed awkwardly because it’s just the middle section of the image. Twitter is solving that problem with a neural network that can understand the composition of your images.
The neural network looks for so-called “salient” image regions. Scientists have studied what people consider salient in images for years using eye-tracking technology. We tend to look at faces, animals, and text, but also at other objects that contrast strongly with the surrounding pixels. That data can be used to train a neural network to identify the salient regions of an image.
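To make the idea concrete, here is a minimal sketch of one of the hand-crafted cues the article mentions: scoring each pixel by its contrast with the surrounding pixels. This is not Twitter’s model (which learns saliency from data rather than hard-coding a rule); it just illustrates what a per-pixel saliency map is.

```python
def contrast_saliency(image, radius=1):
    """Return a saliency map: |pixel - mean of its neighborhood|.

    `image` is a 2D list of grayscale values; brighter-than-surroundings
    (or darker-than-surroundings) pixels score higher.
    """
    h, w = len(image), len(image[0])
    saliency = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Gather the surrounding pixels, excluding the pixel itself.
            neighborhood = [
                image[ny][nx]
                for ny in range(max(0, y - radius), min(h, y + radius + 1))
                for nx in range(max(0, x - radius), min(w, x + radius + 1))
                if (ny, nx) != (y, x)
            ]
            mean = sum(neighborhood) / len(neighborhood)
            saliency[y][x] = abs(image[y][x] - mean)
    return saliency

# A dark image with one bright spot: the spot scores highest.
img = [[0, 0, 0, 0],
       [0, 0, 9, 0],
       [0, 0, 0, 0]]
sal = contrast_saliency(img)
```

A learned model replaces this fixed rule with features trained against eye-tracking data, so it also fires on faces and text, which have no special local contrast signature.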
The photos in your timeline are cropped to improve consistency and to allow you to see more Tweets at a glance. How do we decide what to crop, that is, which part of the image do we show you?
Doing pixel-level saliency analysis of every picture uploaded to Twitter would be too slow, so engineers developed a smaller, faster neural network that captures the gist of a picture. The resulting network is ten times faster than the original approach, which lets Twitter adjust the framing of images as you upload them. Previews thus show the salient parts of an image rather than whatever happens to be in the middle. See above for before-and-after comparisons.
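The cropping step itself can be sketched simply. Assuming we already have a per-pixel saliency map from some model, slide a fixed-height crop window over the image and keep the offset whose window contains the most total saliency, instead of always taking the middle rows. This is an illustrative reconstruction, not Twitter’s published algorithm.

```python
def best_crop_offset(saliency, crop_h):
    """Top row of the crop_h-tall window with the highest saliency sum."""
    row_sums = [sum(row) for row in saliency]  # collapse each row once
    best_y, best_score = 0, float("-inf")
    for y in range(len(saliency) - crop_h + 1):
        score = sum(row_sums[y:y + crop_h])
        if score > best_score:
            best_y, best_score = y, score
    return best_y

# Saliency concentrated near the top of a 6-row image: a naive center
# crop would start at row 2 and miss it; the saliency-aware crop
# starts at row 0.
sal = [[5, 5], [4, 4], [0, 0], [0, 0], [0, 0], [0, 0]]
top = best_crop_offset(sal, 2)  # -> 0
```

The same scan over column sums would pick the horizontal offset; a prefix-sum table makes both scans linear-time if performance matters.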