Blurring reduces noise/entropy so that JPEG compresses better. You can use a bilateral filter (Photoshop: Surface Blur; GIMP: Selective Gaussian Blur) for a larger amount of smoothing while preserving edges.
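A quick way to see the effect is to measure the encoded size before and after smoothing. Pillow has no bilateral filter, so this sketch uses a plain Gaussian blur on a synthetic noisy image as a stand-in for a grainy photo:

```python
import io

from PIL import Image, ImageFilter

def jpeg_size(img, quality=85):
    """Encode img as JPEG in memory and return the byte count."""
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    return buf.getbuffer().nbytes

# Synthetic Gaussian-noise image standing in for a grainy photo.
noisy = Image.effect_noise((256, 256), 64).convert("RGB")

# Gaussian blur, unlike a bilateral filter, does not preserve edges,
# but it shows the same size effect: less high-frequency content.
blurred = noisy.filter(ImageFilter.GaussianBlur(radius=2))

print(jpeg_size(noisy), jpeg_size(blurred))  # the blurred copy encodes smaller
```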
Image Posterization
This is actually called image quantization. pngquant[1] and pngnq[2] are two specialized open-source tools for automatically selecting the best palettes to represent the image with the least perceptual error.
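The underlying idea can be sketched with Pillow's built-in median-cut quantizer. Note this is not what pngquant or pngnq do; they pick palettes with smarter perceptual error metrics, but the size win from dropping to a palette is the same:

```python
import io
import os

from PIL import Image

def png_size(img):
    """Encode img as PNG in memory and return the byte count."""
    buf = io.BytesIO()
    img.save(buf, format="PNG")
    return buf.getbuffer().nbytes

# Random truecolor image: a worst case for both PNG and palette reduction.
rgb = Image.frombytes("RGB", (128, 128), os.urandom(128 * 128 * 3))

# Median-cut quantization down to a 255-color palette ("P" mode).
pal = rgb.quantize(colors=255)

print(png_size(rgb), png_size(pal))  # one index byte per pixel instead of three samples
```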
Selective JPEG Compression
JPEGmini[3] automatically selects areas of interest with custom perceptual metrics and preserves quality within those areas while optimizing out other areas. I tested the sample image in the article: JPEGmini obtained a 6.7% decrease in size without manually selecting the mask.
Given that these techniques seem to be well-known, it would be interesting to see a JPEG compressor which uses what we know of human vision to minimize filesize while keeping the image visually the same.
JPEGmini seems to go partway with selective compression, but it seems like there's room to improve the actual compression algorithm to do things like Gaussian smoothing instead of just truncating the high-frequency components to reduce filesize.
Well, the whole point of a compression scheme like JPEG is to use what we know about human vision to minimize file size while not visually changing anything.
But JPEG is a pretty crude algorithm designed mainly to be computationally cheap [to match the capabilities of computer hardware in 1992 when it was created]. Trying to make better encoders within the limits of what JPEG decoders can handle is quite limiting, and yields only marginal improvements.
It’s possible to do much better if we’re willing to accept more CPU time spent in encoding/decoding using fancier algorithms. Unfortunately it’s really hard to get traction for anything else, because the whole world already has JPEG decoders built into everything. For instance JPEG 2000, designed to be a general purpose replacement with many improvements over JPEG, is now 13 years old but used only in niche applications, such as archiving very high resolution images.
But for use cases where you control the full stack, better compression is quite viable. Video game engines for instance devote considerable attention to image compression formats.
IMHO the JPEG algorithm isn't crude; it's actually quite brilliant: simple and very closely tied to human perception. The core concept of DCT quantization hasn't been beaten yet; even the latest video codecs use it, just with tweaks on top like block prediction and better entropy coding.
Wavelet compressors like JPEG 2000 beat JPEG only in the lowest quality range, where JPEG doesn't even try to compete. Wavelets seem great because their blurring gives them high PSNR, but the lack of texture and softened edges make them lose in human judgement.
JPEG isn't crude, agreed. But your comments seem off to me.
The core trick isn't the DCT per se, it's transform coding, which both the DCT and wavelets are, followed by (usually) quantization and then entropy coding.
The wavelets typically used are orthonormal transforms, so there's no loss there. The "lack of texture and softened edges" is a choice of the model used, not a consequence of the transform; the same is true of the DCT and blocking artifacts. This should be obvious, since both approaches allow for lossless encoders.
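That pipeline (orthonormal transform, quantize, entropy-code the integers, invert on decode) can be sketched with a toy 1-D DCT. This is an illustration of transform coding in general, not any real codec; the sample values and quantizer step are made up:

```python
import math

def dct(x):
    """Orthonormal DCT-II of a 1-D signal."""
    n = len(x)
    s = lambda k: math.sqrt(1 / n) if k == 0 else math.sqrt(2 / n)
    return [s(k) * sum(x[i] * math.cos(math.pi / n * (i + 0.5) * k)
                       for i in range(n)) for k in range(n)]

def idct(c):
    """Inverse (DCT-III) of the orthonormal DCT-II above."""
    n = len(c)
    s = lambda k: math.sqrt(1 / n) if k == 0 else math.sqrt(2 / n)
    return [sum(s(k) * c[k] * math.cos(math.pi / n * (i + 0.5) * k)
                for k in range(n)) for i in range(n)]

x = [52, 55, 61, 66, 70, 61, 64, 73]   # one row of an 8-pixel block
step = 16                              # coarse uniform quantizer step

quantized = [round(c / step) for c in dct(x)]   # small ints: what gets entropy-coded
restored = idct([q * step for q in quantized])  # lossy reconstruction

print(quantized)
print([round(v) for v in restored])
```

The transform itself is lossless (invertible); all the loss, and all the artifact character, comes from the quantization model applied to the coefficients.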
Typically wavelets will beat or match JPEG in any situation, both in terms of PSNR and the like and perceptually (though the latter is much more controversial and poorly defined -- and to be fair I'm not up to date on the literature here, but I'd be surprised if that has changed in the last decade).
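For reference, PSNR is just a log-scaled mean squared error against the peak sample value, which is exactly why it can disagree with human judgement: it charges nothing extra for perceptually ugly errors like smeared texture. A minimal definition:

```python
import math

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio (dB) between two equal-length pixel sequences."""
    mse = sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)
    return float("inf") if mse == 0 else 10 * math.log10(peak ** 2 / mse)

print(psnr([50, 60, 70], [50, 60, 70]))  # inf: identical signals
print(psnr([50, 60, 70], [51, 59, 71]))  # ~48.1 dB: off by one per sample
```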
The real reason for the huge popularity of the DCT, first in still-image compression and later in video codecs, is that it is cheap to implement in hardware. And once the hardware is there, it becomes very cheap to use.
JPEG 2000 is needlessly complex, at its core a wavelet codec is also very simple and elegant.
I have tried your JPEG compressor. While it does make red less dull at 2x2 chroma subsampling, it also blurs red (or orange) into (white) backgrounds. IMO the sum is negative, at least with the particular image I tried.
I'm particularly interested in this because lossy WebP only supports 2x2 chroma subsampling. So any optimisation for JPEG could maybe also be applied to WebP.
I have made a topic about this once on the WebP group, which also has the image I tried with your compressor.
With chroma subsampling there's a tradeoff: either bleed black into colored areas or bleed color outside them. If you have any ideas, or know of research on choosing when each option is best, I'm all ears.
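The color-bleeding half of that tradeoff is easy to reproduce with a few lines of Pillow: simulate 2x2 (4:2:0) subsampling on a red square over white. This is a sketch with a bilinear resize, not the actual downsampling filter of libjpeg or WebP:

```python
from PIL import Image

# Red square on a white background.
img = Image.new("RGB", (16, 16), "white")
for x in range(4, 12):
    for y in range(4, 12):
        img.putpixel((x, y), (255, 0, 0))

y_ch, cb, cr = img.convert("YCbCr").split()

def sub2x2(plane):
    """Halve a chroma plane and upsample it back (luma stays full-res)."""
    return plane.resize((8, 8), Image.BILINEAR).resize((16, 16), Image.BILINEAR)

out = Image.merge("YCbCr", (y_ch, sub2x2(cb), sub2x2(cr))).convert("RGB")

# Round trip without subsampling, to isolate the subsampling effect.
baseline = img.convert("YCbCr").convert("RGB")

# A white pixel just outside the square picks up a pink tint: color bled out.
print(baseline.getpixel((3, 8)), out.getpixel((3, 8)))
```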
I'm not the OP, but I've had trouble with the JPEGOptim feature of ImageOptim. It created JPEG files that crashed the JPEG decoder on some older android phones.
The files worked fine elsewhere, so I think it was a decoder bug rather than an invalid file, but I had to stop optimizing JPEG files to avoid the problem.
This was about two years ago, so I don't recall version numbers or phone models.
OK, interesting article and some quite effective tricks, but this is not "optimization". Optimization means getting the best possible result from something by some criterion. For example, blurring an image and then getting somewhat better compression is not optimal. But if it could be shown that this was the best possible compression for JPEG, then it would be optimal.
The article should be called "Image Compression Tricks". I don't think any of the techniques are considered advanced in today's technology.
The example images they use are the ideal use cases for each of these techniques. The rocks photo has no hard edges. Wood texture by nature is similar to what posterization does. These tricks won't help in many cases. Although if you do happen to have something that fits the criteria, these should be quite useful.
[1]: http://pngquant.org/
[2]: http://pngnq.sourceforge.net/
[3]: http://www.jpegmini.com/main/shrink_photo (discussion: https://news.ycombinator.com/item?id=2940505)