Accurately Get a Color from Pixel on Screen and Convert Its Color Space


OK, I can tell you how to fix it and match Digital Color Meter (DCM); you'll have to decide whether this is correct, a bug, etc.

It seems the color returned by colorAt() has the same component values as the bitmap's pixel but a different color space - rather than the original device color space it is a generic RGB one. We can "correct" this by building a color in the bitmap's space:

let color = bitmap.colorAt(x: 0, y: 0)!

// We need a pointer to a C-style array of CGFloat for the components.
let compCount = color.numberOfComponents
let comps = UnsafeMutablePointer<CGFloat>.allocate(capacity: compCount)
defer { comps.deallocate() }
// Get the components.
color.getComponents(comps)
// Construct a new color in the device/bitmap space with the same components.
let correctedColor = NSColor(colorSpace: bitmap.colorSpace,
                             components: comps,
                             count: compCount)
// Convert to sRGB.
let sRGBcolor = correctedColor.usingColorSpace(.sRGB)!

I think you'll find that the values of correctedColor track DCM's native values, and those of sRGBcolor track DCM's sRGB values.

Note that we are constructing a color in the device space, not converting a color to the device space.

HTH

Find pixel coordinates by color (color picker control)

StackOverflow is designed for each post to ask and answer one coding question.

(To ask a second question, please make a new post. Include in that post the essential details needed to understand that question. Link back to this question, so you don't have to repeat the information that gives additional background/context.)

Your primary question can be stated as:

Given: a color palette [see picture] generated by [see OnPaintSurface code, starting at // Draw gradient rainbow Color spectrum]. How do I calculate the (x, y) coordinates that correspond to a given color?


First, an observation. That 2D palette gives 2 of 3 color axes. You'll need a separate "saturation" slider, to allow picking of any color.

The palette you show is an approximation to an "HSV" color model.

In the Wikipedia article on the HSL and HSV models, see the diagram at the right. Your palette looks like the rectangle labeled S(HSV) = 1.

Hue + Saturation + Value.

Your ColorList should have fully Saturated colors at max Value.

Going down the screen, the palette reduces Value to near zero.


This is the beginning of an answer to that.

What is needed is a mathematical formula that corresponds to what is drawn.

Let's look at how that rectangular image was generated.

Rename the color lists so they are easier to work with.
Store them as fields, so they can be used later. Use the original Colors from which the SKColors were generated, for easier manipulation.

private List<Color> saturatedColors;
private List<Color> darkenedColors;
private int nColors => saturatedColors.Count;
private int maxX, maxY; // From your UI rectangle.
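
If you don't already keep darkenedColors around, here is a minimal sketch of one way to build it. This is an assumption on my part, not from your OnPaintSurface code: scaling each RGB component down by the same factor lowers the HSV Value while leaving Hue and Saturation essentially unchanged, which matches the near-zero-Value bottom row described below.

// Hypothetical helper (not from the original code): derive the bottom-row
// colors from the top-row colors by scaling Value down to roughly 0.01.
// Scaling R, G, and B by the same factor preserves Hue and Saturation.
using System.Collections.Generic;
using System.Linq;
using Color = System.Drawing.Color;

public static class PaletteHelpers
{
    public static List<Color> DarkenColors(IEnumerable<Color> saturatedColors, double value = 0.01)
    {
        return saturatedColors
            .Select(c => Color.FromArgb(255,
                (int)(c.R * value),
                (int)(c.G * value),
                (int)(c.B * value)))
            .ToList();
    }
}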

The top row (y=0) has saturatedColors, evenly spaced across x.

The bottom row (y=maxY) has darkenedColors, evenly spaced across x.

The pixel colors are linearly interpolated from top row to bottom row.

Goal is to find pixel closest to a given color, "Color goalColor".

Consider each tall, thin rectangle whose corners are two adjacent top (saturated) colors and the corresponding two bottom (darkened) colors. The goal is to find which rectangle contains goalColor, then find the pixel within that rectangle that is closest to goalColor.

The trickiest part is "comparing" colors, to decide when a color is "between" two colors. This is hard to do in RGB; convert colors to HSV to match the palette you are using. See Greg's answer - ColorToHSV.

It's easier if you make an HSV class:

using System;
using System.Collections.Generic;
using System.Linq;
// System.Drawing.Color is used here because ColorToHSV/ColorFromHSV below
// rely on its byte R/G/B values, GetHue(), and FromArgb(a, r, g, b);
// adapt those helpers if you use Xamarin.Forms.Color instead.
using Color = System.Drawing.Color;
...
public class HSV
{
    #region --- static ---
    public static HSV FromColor(Color color)
    {
        ColorToHSV(color, out double hue, out double saturation, out double value);
        return new HSV(hue, saturation, value);
    }

    public static List<HSV> FromColors(IEnumerable<Color> colors)
    {
        return colors.Select(color => FromColor(color)).ToList();
    }

    const double Epsilon = 0.000001;

    // Returns Tuple<int colorIndex, double wgtB>.
    public static Tuple<int, double> FindHueInColors(IList<HSV> colors, double goalHue)
    {
        int colorIndex;
        double wgtB = 0;
        // "- 1": because each iteration needs colors[colorIndex + 1].
        for (colorIndex = 0; colorIndex < colors.Count - 1; colorIndex++)
        {
            wgtB = colors[colorIndex].WgtFromHue(colors[colorIndex + 1], goalHue);
            // Epsilon compensates for possible round-off error in WgtFromHue,
            // to ensure the color is considered within one of the ranges.
            if (wgtB >= 0 - Epsilon && wgtB < 1)
                break;
        }

        return new Tuple<int, double>(colorIndex, wgtB);
    }

    // From https://stackoverflow.com/a/1626175/199364.
    public static void ColorToHSV(Color color, out double hue, out double saturation, out double value)
    {
        int max = Math.Max(color.R, Math.Max(color.G, color.B));
        int min = Math.Min(color.R, Math.Min(color.G, color.B));

        hue = color.GetHue();
        saturation = (max == 0) ? 0 : 1d - (1d * min / max);
        value = max / 255d;
    }

    // From https://stackoverflow.com/a/1626175/199364.
    public static Color ColorFromHSV(double hue, double saturation, double value)
    {
        int hi = Convert.ToInt32(Math.Floor(hue / 60)) % 6;
        double f = hue / 60 - Math.Floor(hue / 60);

        value = value * 255;
        int v = Convert.ToInt32(value);
        int p = Convert.ToInt32(value * (1 - saturation));
        int q = Convert.ToInt32(value * (1 - f * saturation));
        int t = Convert.ToInt32(value * (1 - (1 - f) * saturation));

        if (hi == 0)
            return Color.FromArgb(255, v, t, p);
        else if (hi == 1)
            return Color.FromArgb(255, q, v, p);
        else if (hi == 2)
            return Color.FromArgb(255, p, v, t);
        else if (hi == 3)
            return Color.FromArgb(255, p, q, v);
        else if (hi == 4)
            return Color.FromArgb(255, t, p, v);
        else
            return Color.FromArgb(255, v, p, q);
    }
    #endregion

    public double H { get; set; }
    public double S { get; set; }
    public double V { get; set; }

    // c'tors
    public HSV()
    {
    }
    public HSV(double h, double s, double v)
    {
        H = h;
        S = s;
        V = v;
    }

    public Color ToColor()
    {
        return ColorFromHSV(H, S, V);
    }

    public HSV Lerp(HSV b, double wgtB)
    {
        return new HSV(
            MathExt.Lerp(H, b.H, wgtB),
            MathExt.Lerp(S, b.S, wgtB),
            MathExt.Lerp(V, b.V, wgtB));
    }

    // Returns "wgtB", such that goalHue = Lerp(H, b.H, wgtB).
    // If a and b have same S and V, then this is a measure of
    // how far to move along segment (a, b), to reach goalHue.
    public double WgtFromHue(HSV b, double goalHue)
    {
        return MathExt.WgtFromResult(H, b.H, goalHue);
    }
    // Returns "wgtB", such that goalValue = Lerp(V, b.V, wgtB).
    public double WgtFromValue(HSV b, double goalValue)
    {
        return MathExt.WgtFromResult(V, b.V, goalValue);
    }
}

public static class MathExt
{
    public static double Lerp(double a, double b, double wgtB)
    {
        return a + (wgtB * (b - a));
    }

    // Converse of Lerp:
    // returns "wgtB", such that
    //   result == Lerp(a, b, wgtB).
    public static double WgtFromResult(double a, double b, double result)
    {
        double denominator = b - a;

        if (Math.Abs(denominator) < 0.00000001)
        {
            if (Math.Abs(result - a) < 0.00000001)
                // Any value is "valid"; return the average.
                return 0.5;

            // Unsolvable - no weight can return this result.
            return double.NaN;
        }

        double wgtB = (result - a) / denominator;
        return wgtB;
    }
}

Usage:

public static class Tests
{
    public static void TestFindHueInColors(List<Color> saturatedColors, Color goalColor)
    {
        List<HSV> hsvColors = HSV.FromColors(saturatedColors);
        HSV goalHSV = HSV.FromColor(goalColor);
        var hueAt = HSV.FindHueInColors(hsvColors, goalHSV.H);
        int colorIndex = hueAt.Item1;
        double wgtB = hueAt.Item2;
        // ...
    }
}

This is the essence of the approach. From colorIndex, nColors, wgtB, and maxX, it is possible to calculate x. I recommend writing several test cases, to figure out how to do so.

Calculating y is much simpler; it should be possible using goalHSV.V and maxY. A sketch of both calculations follows.
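
The sketch below is based on my assumptions about the layout (the top row spans x = 0..maxX with the nColors hues evenly spaced, and Value runs linearly from 1.0 at y = 0 down to roughly 0 at y = maxY), not on your actual OnPaintSurface code, so adjust it to match your drawing.

// Sketch only. Assumes the layout described above, plus the usings and the
// HSV class shown earlier.
public static class CoordinateFinder
{
    public static Tuple<int, int> CoordinatesForColor(
        HSV goalHSV, int colorIndex, double wgtB, int nColors, int maxX, int maxY)
    {
        // Each "tall, thin rectangle" covers 1/(nColors - 1) of the width.
        double columnWidth = (double)maxX / (nColors - 1);
        int x = (int)Math.Round((colorIndex + wgtB) * columnWidth);

        // Value is 1.0 at the top row and roughly 0 at the bottom row, so invert it for y.
        int y = (int)Math.Round((1.0 - goalHSV.V) * maxY);

        return new Tuple<int, int>(x, y);
    }
}

Writing the test cases recommended above will tell you whether the rounding and the (nColors - 1) spacing actually match your drawing code.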

As you can see, this is not trivial to code.

The most important points:

  • Convert to HSV color space.
  • The palette is composed of tall, thin rectangles. Each rectangle has two saturated, max-Value colors at the top two corners: (H1, 1.0, 1.0) and (H2, 1.0, 1.0). The bottom two corners have the same hues and saturation, but a small Value, perhaps (H1, 1.0, 0.01) and (H2, 1.0, 0.01). Convert your actual darkened values to HSV to see the exact values.
  • Find which pair of hues goalHSV.H falls between.
  • Learn about "Linear Interpolation" ("Lerp"). In that rectangle, the top edge is a Lerp between the two saturated colors, and the side edges are a Lerp from a bright color to the corresponding darkened color.

If the above math is too intense, then draw a box with just one of those rectangles. That is, make the gradient with only TWO colors in the top list. Experiment with trying to locate a color pixel within that rectangle.

IMPORTANT: There might not be a pixel that is EXACTLY the color you are starting with. Find which pixel is closest to that color.

If you aren't sure you have the "best" pixel, then read a few nearby pixels and decide which is "closest" - that is, which has the smallest var error = (r2-r1)*(r2-r1) + (g2-g1)*(g2-g1) + (b2-b1)*(b2-b1);. A sketch of that neighborhood search follows.
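
Here is a minimal sketch of that search. The getPixelColor delegate is a placeholder for however you read a pixel from your canvas (it is not from the original code), and the candidate pixel and its neighbours are assumed to be within the palette bounds.

// Sketch only: check the candidate pixel and its 8 neighbours, and keep the
// one with the smallest squared RGB distance to goalColor.
// Assumes the usings shown earlier (System, Color alias).
public static class ClosestPixelFinder
{
    public static Tuple<int, int> ClosestPixel(
        int x, int y, Color goalColor, Func<int, int, Color> getPixelColor)
    {
        int bestX = x, bestY = y;
        int bestError = int.MaxValue;

        for (int dy = -1; dy <= 1; dy++)
        {
            for (int dx = -1; dx <= 1; dx++)
            {
                Color c = getPixelColor(x + dx, y + dy);
                int error = (c.R - goalColor.R) * (c.R - goalColor.R)
                          + (c.G - goalColor.G) * (c.G - goalColor.G)
                          + (c.B - goalColor.B) * (c.B - goalColor.B);
                if (error < bestError)
                {
                    bestError = error;
                    bestX = x + dx;
                    bestY = y + dy;
                }
            }
        }

        return new Tuple<int, int>(bestX, bestY);
    }
}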

Are RGB images converted to sRGB automatically before being viewed in web browser?

The pixel values in an image file are just numbers. If the image is "in" a space larger than sRGB (such as ProPhoto), the pixel values will only be "converted" to sRGB if you have color management enabled, OR you perform the conversion yourself.

A browser or device will only convert tagged images of a non-sRGB colorspace TO sRGB IF there is a color management engine.

With no color management, and a standard sRGB monitor, all images will display "as if" they were sRGB, regardless of their colorspace. That is, they may display incorrectly.

Even with color management, if the image is untagged, it will be displayed as whatever default (usually sRGB) the system is set to use.

As for formulas: the conversion is known generally as "gamut mapping" — you are literally mapping the chromaticity components from one space to another. There are multiple techniques and methods which I will discuss below with links.

If you want to do your own colorspace conversions, take a look at Bruce Lindbloom's site. If you want a color management engine you can play around with, check out Argyll; here is a list of open-source tools.



EDIT TO ADD: Terms & Techniques

Because there are other answers here with some "alternate" (spurious) information, I'm editing my answer here to add and help clarify.

Corrections

  1. sRGB uses the same primary and whitepoint chromaticities as Rec.709 (aka HDTV). The tone response curve is slightly different (see the sketch after this list).
  2. sRGB was developed by HP and Microsoft in 1996, and was set as an international standard by the IEC circa 1998.
  3. The W3C (World Wide Web Consortium) set sRGB as the standard for all web content, defined in CSS Color.
    • Side note: "HTML" is not a standards organization, it is a markup language. sRGB was added to the CSS 3 standard.
  4. Profiles do nothing if there is no color management system in place.
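
As a small illustration (my addition, not part of the original correction), here are the two published forward transfer curves, encoding linear light to the non-linear signal; plotting them shows how close they are despite the different constants:

// Illustration only: the published forward transfer curves.
// sRGB per IEC 61966-2-1, Rec.709 OETF per ITU-R BT.709.
using System;

public static class TransferCurves
{
    public static double SrgbEncode(double linear)
    {
        return linear <= 0.0031308
            ? 12.92 * linear
            : 1.055 * Math.Pow(linear, 1.0 / 2.4) - 0.055;
    }

    public static double Rec709Encode(double linear)
    {
        return linear < 0.018
            ? 4.5 * linear
            : 1.099 * Math.Pow(linear, 0.45) - 0.099;
    }
}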

And to be clear (in reference to another answer): RGB is a color MODEL, sRGB is a color SPACE, and a parsable description like an ICC profile of sRGB is a color PROFILE.

Second, sRGB is the STANDARD for web content, and RGB values with no profile displayed in a web browser are nominally assumed to be sRGB, and thus interpreted as sRGB in most browsers. HOWEVER, if the user has a wide-gamut (non-sRGB) monitor and no color management, then a non-color-managed browser typically displays in the display's colorspace, which can have unexpected results.

Terms

  • RGB is an additive COLOR MODEL. It is so named because it is a tristimulus model that uses three narrow-band light "colors" (red, green, and blue), chosen to stimulate each of the eye's cone types as independently as possible.

  • sRGB is a color SPACE. A color space is a subset of a color model that adds specifics such as the chromaticities of the primary colors, the white point, and the tone response curve (aka TRC or gamma).

  • sRGB-IEC61966-2.1.icc is an ICC color PROFILE of the sRGB colorspace, used to inform color management software to the specifics such that appropriate conversion can take place.

    • Some profiles relate to a specific device like a printer.
    • Some profiles are used as a "working space".
    • An ICC profile includes some of the math-related information needed to apply the profile to a given target.
  • Color Management is a system that uses information about device profiles and the working color space to handle conversion for output, viewing, soft proofing on a monitor, etc. See This Crash Course on CM.

  • LUT or LookUp Table is another file type that can be used to convert images or apply color "looks".

  • Gamut mapping is the technique to convert, or map, the color coordinates of one color space to the coordinates of a different color space. The method for doing this depends on a number of factors, and the desired result or rendering intent.

  • Rendering intent means the approach used, as in, should it be perceptual? Absolute colorimetric? Relative with black point compensation? Saturation? See this discussion of mapping and Argyll.

Techniques

Colorspace transforms and conversions are non-trivial. It's not as if you can just stick image data through a nominal formula and have a good result.

Reading through the several links above will help, and I'd also like to suggest Elle Stone's site, particularly regarding profiles.

Pixel picking in Swift

NSColorSampler was introduced in macOS 10.15 (Catalina). You can use it to sample a color from the screen using the system's built-in color picking interface. It does not require screen-recording permissions.

It's mentioned around the five-minute mark in WWDC 2019, Session 210, "What's New in AppKit for macOS".

I would probably wrap it and take a hybrid approach: use the old methods (with the screen-recording permission prompt) on older macOS versions, and NSColorSampler on newer ones.

How to convert RGB image pixels to L*a*b*?

You can do it with PIL/Pillow using the built-in Colour Management System and building a transform like this:

#!/usr/local/bin/python3

import numpy as np
from PIL import Image, ImageCms

# Open image and discard alpha channel which makes wheel round rather than square
im = Image.open('colorwheel.png').convert('RGB')

# Convert to Lab colourspace
srgb_p = ImageCms.createProfile("sRGB")
lab_p = ImageCms.createProfile("LAB")

rgb2lab = ImageCms.buildTransformFromOpenProfiles(srgb_p, lab_p, "RGB", "LAB")
Lab = ImageCms.applyTransform(im, rgb2lab)

And Lab is now your image in Lab colourspace. If you carry on and add the following lines to the end of the above code, you can split the Lab image into its constituent channels and save them each as greyscale images for checking.

# Split into constituent channels so we can save 3 separate greyscales
L, a, b = Lab.split()

L.save('L.png')
a.save('a.png')
b.save('b.png')

So, if you start with this image:

[input colour wheel image]

you will get this as the L channel:

[L channel greyscale]

this as the a channel:

[a channel greyscale]

and this as the b channel:

[b channel greyscale]

Being non-scientific for a moment, the a channel should be negative/low where the image is green and should be high/positive where the image is magenta so it looks correct. And the b channel should be negative/low where the image is blue and high/positive where it is yellow, so that looks pretty good to me! As regards the L channel, the RGB to greyscale formula is (off the top of my head) something like:

L = 0.2*R + 0.7*G + 0.1*B

So you would expect the L channel to be much brighter where the image is green, and darkest where it is blue.


Alternatively, you can do it with the scikit-image module, maybe even more simply like this:

import numpy as np
from skimage import color, io

# Open image and make Numpy arrays 'rgb' and 'Lab'
rgb = io.imread('image.png')
Lab = color.rgb2lab(rgb)

I am not 100% sure of the scaling, but I suspect the L channel is a float in range 0..100, and that a and b are also floats in range -128..+128, though I may be wrong!

With my colour wheel image above I got the following minima/maxima for each channel:

Lab[:,:,0].min()    # L min
32.29567256501352

Lab[:,:,0].max()    # L max
97.13950703971322

Lab[:,:,1].min()    # a min
-86.18302974439501

Lab[:,:,1].max()    # a max
98.23305386311316

Lab[:,:,2].min()    # b min
-107.85730020669489

Lab[:,:,2].max()    # b max
94.47812227647823

How do I create NSColor instances that exactly match those in a Photoshop UI mockup?

There's no such thing as a "pure" RGB color that's consistent from one display to the next. Unless you're working in a standardized color space, the numbers that show up for a color on one display will probably render slightly differently on another.

Have your designer convert your mock-ups to a standard color space such as sRGB. Then you can use the methods on NSColor that take not just component values but the color space the components are in; this will get you colors that should look similar on any display.

In other words, a color swatch made from (R, G, B) = (1.0, 0.0, 0.0) in Photoshop using sRGB should look virtually identical to a color swatch drawn using a color you get this way:

NSColorSpace *sRGB = [NSColorSpace sRGBColorSpace];

CGFloat components[3] = { 1.0, 0.0, 0.0 };
NSColor *sRGBRed = [NSColor colorWithColorSpace:sRGB components:components count:3];

Note though that +[NSColorSpace sRGBColorSpace] is only available on Mac OS X 10.5 Leopard and above. If you still need to run on Tiger, you could write a little utility app that runs on 10.5 and converts the canonical sRGB colors to the calibrated RGB space:

NSColor *calibratedColor = [sRGBRed colorUsingColorSpaceName:NSCalibratedRGBColorSpace];

Regardless of whether you use sRGB or the calibrated space, once you have a set of colors, you could put them in an NSColorList that you write to a file and then load from your application's resources at runtime.


