iamcalledrob
today at 12:34 PM
As a designer, I've built variants of this several times throughout my career.
The author's approach is really good, and he hits on pretty much all the problems that arise from more naive approaches. In particular, he uses a perceptual colorspace and recognizes that the most representative colour may not be the one that appears most often.
However, image processing makes my neck tingle because there are a lot of footguns. PNG bombs, anyone? I feel like any library needs to either be defensively programmed or explicit in its documentation.
The README says "Finding main colors of a reasonably sized image takes about 100ms" -- that's way too slow. I bet the operation takes a few hundred MB of RAM too.
For anyone that uses this: scale down your images substantially first, or only sample every N pixels. Avoid loading the whole thing into memory if possible, unless this is handled serially by a job queue of some sort.
You can run this kind of algorithm much faster, and with far less RAM, on a small thumbnail than on a large input image. That makes the performance concerns less of an issue, and prevents a whole class of OOM DoS vulnerabilities!
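To make the "sample every N pixels" suggestion concrete, here's a minimal sketch. It assumes the image is already decoded into a flat row-major list of (r, g, b) tuples; the function name and the 250K cap are just illustrative choices, not anyone's actual API:

```python
import math

def subsample_pixels(pixels, width, height, max_pixels=250_000):
    """Keep at most ~max_pixels by taking every Nth pixel in both axes."""
    total = width * height
    if total <= max_pixels:
        return list(pixels)
    # Stride chosen so (w/step) * (h/step) lands at or under the cap.
    step = math.ceil(math.sqrt(total / max_pixels))
    out = []
    for y in range(0, height, step):
        for x in range(0, width, step):
            out.append(pixels[y * width + x])
    return out
```

Note this only bounds the work done by the colour analysis itself; the full bitmap still had to be decoded into `pixels` first.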
As a defensive step, I'd add something like this https://github.com/iamcalledrob/saferimg/blob/master/asset/p... to your test suite and see what happens.
I really wish people would read the article, the library does exactly this:
> Okmain downsamples the image by a power of two until the total number of pixels is below 250,000.
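The quoted behaviour (halve each dimension until the pixel count drops below 250,000) boils down to picking a power-of-two factor. A sketch of that calculation, not Okmain's actual code:

```python
def pow2_downsample_factor(width, height, max_pixels=250_000):
    """Smallest power-of-two factor that brings width*height under max_pixels."""
    factor = 1
    while (width // factor) * (height // factor) > max_pixels:
        factor *= 2
    return factor
```

For a 4000x3000 photo this yields a factor of 8, i.e. a 500x375 working image.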
iamcalledrob
today at 1:32 PM
Somehow I missed that, oops. I see that the library samples a maximum of 250K pixels from the input buffer (I jumped over to the project readme).
That being said, this is sampling the fixed-size input buffer for the purposes of determining the right colour. You still have to load the bitmap into memory, with all the associated footguns that arise there. The library just isn't making it worse :) I suppose you could memmap it.
Makes me wonder if the sub-sampling is actually a bit of a red herring, as ideally you'd want to be operating on a small input buffer anyway. Or some sort of interface on top of the raw pixel data, so you can load what's needed on-demand.
That's 500x500. I'm sure you can get good results at 32x32 or 64x64, but then part of your color choice is also being done by the downsampling algorithm. I wonder if you could get away with downsampling all the way to 1x1 and just using that as the main color.
PaulHoule
today at 1:35 PM
That last one is talked about in the article -- it sucks!
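Concretely: a 1x1 box downsample is just the per-channel mean over all pixels, so a half-red, half-blue image comes out purple -- a colour that appears nowhere in the image. A tiny sketch (helper name is mine):

```python
def mean_color(pixels):
    """Equivalent of box-filtering down to 1x1: the per-channel mean."""
    n = len(pixels)
    r = sum(p[0] for p in pixels) / n
    g = sum(p[1] for p in pixels) / n
    b = sum(p[2] for p in pixels) / n
    return (round(r), round(g), round(b))
```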
I think if you were going to "downsample" for the purpose of creating a color set, you could just scan through the picture, randomly select 10% (or whatever) of the pixels, and apply k-means to that, with no averaging, which costs resources and makes your colors muddy.
chrisweekly
today at 3:01 PM
your gh link returned 404
EDIT:
then (when url refreshed) triggered a redir loop culminating in a different error ("problem occurred repeatedly")...
ah, ofc, your intent was to demonstrate a problematic asset.
TheJoeMan
today at 5:25 PM
Realizing I intentionally opened a png bomb made me chuckle, like what did I think was going to happen?
> I've built variants of this several times throughout my career.
Got any to share? A self-contained command-line tool to get a good palette from an image is something I’d have a use for.
PaulHoule
today at 1:30 PM
Back in the late 1980s, people thought about color quantization a lot. Many computers of the time offered only 16 or 256 colors chosen from a larger palette, and if you chose well you could do pretty well with photographic images.
Author here: the library just accepts RGB8 bitmaps, probably coming either from Rust's image crate [1] or Python's Pillow [2], which are both mature and widely used. Dealing with codecs is way out of scope.
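For reference, an "RGB8 bitmap" here just means a packed buffer of three bytes per pixel, row by row -- the same layout `Image.tobytes()` gives you for a Pillow image in "RGB" mode. A codec-free sketch of building one (the helper name is mine):

```python
def to_rgb8(rows):
    """Pack rows of (r, g, b) tuples into a flat RGB8 byte buffer."""
    buf = bytearray()
    for row in rows:
        for r, g, b in row:
            buf += bytes((r, g, b))
    return bytes(buf)
```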
As for loading the whole image into memory at once: I suppose I could integrate with something like libvips and stream strips out of the decoded image without holding the entire bitmap, but that would require substantially more glue and complexity. The current approach works fine for extracting dominant colours once to save in a database.
You're right that pre-resizing the images makes everything faster, but keep in mind that k-means still requires a pretty nontrivial amount of computation.
[1]: https://crates.io/crates/image
[2]: https://pypi.org/project/pillow/
If you ever did want to wrap this in code processing untrusted images there's a library called "glycin" designed for that purpose (it's used by Loupe, the default Gnome image viewer).
https://gnome.pages.gitlab.gnome.org/glycin/