S3 supports storing and serving gzip-compressed objects in a transparent manner for the end-user.
This works by gzipping the data uploaded to S3 and setting a "Content-Encoding: gzip" header.
The advantage of this compression technique is that S3 then automatically serves the objects either compressed or uncompressed to HTTP clients, depending on the capabilities each client advertises in its "Accept-Encoding" request header.
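As a rough sketch of the mechanism, the snippet below gzips a payload with the JDK's java.util.zip classes (the class and method names are hypothetical, not part of S3BinaryStore); the compressed bytes would then be uploaded to S3 with a "Content-Encoding: gzip" metadata entry, and the gunzip side shows what an HTTP client library does transparently when it sees that header on a response.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class GzipTranscode {

    // Gzip the payload; the result would be uploaded to S3 together with
    // a "Content-Encoding: gzip" entry in the object metadata.
    public static byte[] gzip(byte[] plain) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (GZIPOutputStream out = new GZIPOutputStream(buf)) {
            out.write(plain);
        }
        return buf.toByteArray();
    }

    // What a gzip-capable HTTP client does transparently when the response
    // carries "Content-Encoding: gzip": inflate before handing bytes back.
    public static byte[] gunzip(byte[] compressed) throws IOException {
        try (GZIPInputStream in =
                new GZIPInputStream(new ByteArrayInputStream(compressed))) {
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            byte[] chunk = new byte[8192];
            int n;
            while ((n = in.read(chunk)) != -1) {
                buf.write(chunk, 0, n);
            }
            return buf.toByteArray();
        }
    }
}
```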
My proposal is to make S3BinaryStore take advantage of this feature, by automatically compressing data before uploading to S3.
Given that no MIME type information is available in S3BinaryStore#storeValue(), I see no way for an algorithm to decide intelligently whether or not to compress the data.
A (partial) solution could be to add a storeValue(InputStream stream, String hint, boolean markAsUnused) overload that takes a user-supplied hint.
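A minimal sketch of what such a hint-driven policy might look like, assuming the hint is a MIME type string (the class name and the chosen type list are illustrative assumptions, not an existing API): text-like types tend to compress well, while already-compressed media do not.

```java
public class HintPolicy {

    // Hypothetical policy: decide from a caller-supplied MIME-type hint
    // whether gzip is likely to help. Text-like types usually shrink;
    // already-compressed formats (JPEG, ZIP, ...) usually do not.
    public static boolean shouldCompress(String hint) {
        if (hint == null) {
            return false; // no information: caller must fall back to another strategy
        }
        return hint.startsWith("text/")
                || hint.equals("application/json")
                || hint.equals("application/xml");
    }
}
```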
Otherwise, I suppose the only solution is to always attempt compression and check whether it actually reduced the data size.
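The compress-and-compare fallback could be sketched as follows (again with a hypothetical helper name): gzip the payload into a buffer and keep the compressed form only when it is actually smaller, in which case the upload would carry the "Content-Encoding: gzip" metadata.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPOutputStream;

public class CompressionProbe {

    // Hypothetical helper: gzip the payload and report whether the compressed
    // representation is actually smaller than the original. The store would
    // upload whichever representation wins, setting "Content-Encoding: gzip"
    // only when the compressed form is kept.
    public static boolean worthCompressing(byte[] plain) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (GZIPOutputStream out = new GZIPOutputStream(buf)) {
            out.write(plain);
        }
        // gzip adds fixed header/trailer overhead, so incompressible
        // data typically ends up slightly larger than the original.
        return buf.size() < plain.length;
    }
}
```

Note that this approach costs one extra compression pass (and buffering) per stored value, which is the price of having no MIME type information.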