How to Get a Color Palette from an Image Using OpenCV

How to detect the dominant color of images using the HSV color model and OpenCV

This code walks through all files in the folder ./images and prints the detected colour of each:

import os
import numpy as np
import cv2

# map colour names to HSV ranges
color_list = [
    ['red',    [0, 160, 70],   [10, 250, 250]],
    ['pink',   [0, 50, 70],    [10, 160, 250]],
    ['yellow', [15, 50, 70],   [30, 250, 250]],
    ['green',  [40, 50, 70],   [70, 250, 250]],
    ['cyan',   [80, 50, 70],   [90, 250, 250]],
    ['blue',   [100, 50, 70],  [130, 250, 250]],
    ['purple', [140, 50, 70],  [160, 250, 250]],
    ['red',    [170, 160, 70], [180, 250, 250]],
    ['pink',   [170, 50, 70],  [180, 160, 250]]
]

def detect_main_color(hsv_image, colors):
    color_found = 'undefined'
    max_count = 0

    for color_name, lower_val, upper_val in colors:
        # threshold the HSV image - any matching colour shows up as white
        mask = cv2.inRange(hsv_image, np.array(lower_val), np.array(upper_val))

        # white pixels are 255, so the sum is proportional to the matching pixel count
        count = np.sum(mask)
        if count > max_count:
            color_found = color_name
            max_count = count

    return color_found

for root, dirs, files in os.walk('./images'):
    for file in files:
        img = cv2.imread(os.path.join(root, file))
        if img is None:  # skip files that are not readable images
            continue
        hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
        print(f"{file}: {detect_main_color(hsv, color_list)}")

Output with a few sample images in the subfolder images:

ruby_3.jpg: red
sapphire blue_18.jpg: blue
sapphire pink_18.jpg: pink
sapphire purple_28.jpg: purple
sapphire yellow_9.jpg: yellow
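
Note that red (and pink) appear twice in color_list because OpenCV's 8-bit hue channel runs from 0 to 179, and those hues straddle the wrap-around point. If you ever need a single mask for such a colour, you can OR the two ranges together; a minimal sketch for red, reusing the ranges from the list above:

import cv2
import numpy as np

def red_mask(hsv_image):
    # red sits at both ends of the hue range, so build two masks and OR them
    low_end  = cv2.inRange(hsv_image, np.array([0, 160, 70]),   np.array([10, 250, 250]))
    high_end = cv2.inRange(hsv_image, np.array([170, 160, 70]), np.array([180, 250, 250]))
    return cv2.bitwise_or(low_end, high_end)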

Credits:

  • HSV color ranges
  • how to detect color ranges

Change the color of an image using OpenCV

A colormap does not change the image to just one color; it changes the color of each pixel according to its intensity (depth) value. The process you want is known as pseudo-coloring, where another image (a palette) is used to recolor your original image.

In OpenCV you can achieve this with a lookup table and the LUT() function.

Here is sample code for that, using a palette image and a grayscale image.

Pseudo-coloring code:

cvtColor(im.clone(), im, COLOR_GRAY2BGR);

uchar b[256], g[256], r[256];
int i = 0;
// sample the middle column of the palette, spreading 256 samples over its height
for (double x = 0; x < palette.rows && i < 256; x += 3.109) {
    b[i] = palette.at<Vec3b>((int)x, palette.cols / 2)[0];
    g[i] = palette.at<Vec3b>((int)x, palette.cols / 2)[1];
    r[i] = palette.at<Vec3b>((int)x, palette.cols / 2)[2];
    i++;
}

Mat channels[] = { Mat(256, 1, CV_8U, b), Mat(256, 1, CV_8U, g), Mat(256, 1, CV_8U, r) };
Mat lut;
cv::merge(channels, 3, lut);

Mat color;
cv::LUT(im, lut, color);

That works correctly for me.
The basic logic: first convert the grayscale image to BGR. Then read the pixel values of the color palette and store them in arrays. From these arrays, build a lookup table with merge(). Finally, apply the lookup table with LUT(), which writes the mapped values into the new Mat variable, color.

Note: I increment x by 3.109 because my palette is 800 pixels high, so this samples at most 256 rows.
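
For reference, here is a rough Python sketch of the same idea (the file names and the 256-row sampling are placeholders for your own palette and input):

import cv2
import numpy as np

# placeholder inputs: a grayscale image and a vertical colour palette image
gray = cv2.imread('input.png', cv2.IMREAD_GRAYSCALE)
palette = cv2.imread('palette.png')

# sample 256 evenly spaced rows from the middle column of the palette
rows = np.linspace(0, palette.shape[0] - 1, 256).astype(int)
lut = palette[rows, palette.shape[1] // 2].reshape(256, 1, 3)

# LUT maps each channel through the table, so convert the grayscale image
# to BGR first; every grey level then picks up its palette colour
bgr = cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR)
color = cv2.LUT(bgr, lut)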

Hopefully that helps.

How to make a Lab and YCrCb color palette in OpenCV (Python)

You simply need to change the colour space of the image; just make sure you use the correct range for each component. An example with Lab is below. Conceptually L runs from 0 to 100 and a, b from -127 to 127, but in OpenCV's 8-bit representation L is scaled to 0-255 and a, b are stored with a +128 offset, so the trackbar values are adjusted accordingly and the result is converted back to BGR for display.

import cv2
import numpy as np

def nothing(x):
    pass

# Lab buffer: OpenCV stores 8-bit Lab with L scaled to 0-255
# and a, b shifted by +128
lab = np.zeros((300, 512, 3), np.uint8)
cv2.namedWindow('image')

cv2.createTrackbar('L', 'image', 0, 100, nothing)
cv2.createTrackbar('A', 'image', 0, 255, nothing)
cv2.createTrackbar('B', 'image', 0, 255, nothing)

while True:
    l = int(cv2.getTrackbarPos('L', 'image') * 255 / 100)  # scale L from 0-100 to 0-255
    a = cv2.getTrackbarPos('A', 'image')   # 0-255 maps to a in -128..127
    b = cv2.getTrackbarPos('B', 'image')   # 0-255 maps to b in -128..127
    lab[:] = [l, a, b]

    # convert back to BGR so the window shows the actual colour
    cv2.imshow('image', cv2.cvtColor(lab, cv2.COLOR_LAB2BGR))

    k = cv2.waitKey(1) & 0xFF
    if k == 27:  # Esc to quit
        break

cv2.destroyAllWindows()
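
The question also asks about YCrCb; the pattern is the same, only the conversion code changes, and all three components already fit in 0-255 for 8-bit images. A minimal sketch:

import cv2
import numpy as np

def nothing(x):
    pass

ycrcb = np.zeros((300, 512, 3), np.uint8)
cv2.namedWindow('image')

cv2.createTrackbar('Y', 'image', 0, 255, nothing)
cv2.createTrackbar('Cr', 'image', 0, 255, nothing)
cv2.createTrackbar('Cb', 'image', 0, 255, nothing)

while True:
    ycrcb[:] = [cv2.getTrackbarPos('Y', 'image'),
                cv2.getTrackbarPos('Cr', 'image'),
                cv2.getTrackbarPos('Cb', 'image')]

    # convert back to BGR so the window shows the actual colour
    cv2.imshow('image', cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR))
    if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
        break

cv2.destroyAllWindows()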

How to reduce the number of colors in an image with OpenCV?

There are many ways to do it. The methods suggested by jeff7 are OK, but they have some drawbacks:

  • method 1 has parameters N and M that you must choose, and you must also convert the image to another colorspace.
  • method 2 can be very slow, since you have to compute a histogram with 16.7 million bins and sort it by frequency (to obtain the 64 most frequent values).

I like to use an algorithm that keeps only the most significant bits of each RGB channel, which converts the image to a 64-color image. If you're using C/OpenCV, you can use something like the function below.

If you're working with gray-level images I recommend using the LUT() function (OpenCV 2.3+), since it is faster. There is a tutorial on how to use LUT to reduce the number of colors; see the tutorial "How to scan images, lookup tables and time measurement". However, I find that approach more complicated when working with RGB images.

// img is a 3-channel BGR image, img_quant a single-channel output image
void reduceTo64Colors(IplImage *img, IplImage *img_quant) {
    int i, j;
    int height = img->height;
    int width  = img->width;
    int step   = img->widthStep;

    uchar *data  = (uchar *)img->imageData;
    int step2    = img_quant->widthStep;
    uchar *data2 = (uchar *)img_quant->imageData;

    for (i = 0; i < height; i++) {
        for (j = 0; j < width; j++) {
            // keep only the 2 most significant bits of each channel
            // (mask 11000000 = 192), then shift them into place
            uchar C1 = (data[i*step + j*3 + 0] & 192) >> 2;  // bits 5-4
            uchar C2 = (data[i*step + j*3 + 1] & 192) >> 4;  // bits 3-2
            uchar C3 = (data[i*step + j*3 + 2] & 192) >> 6;  // bits 1-0

            // merge the 2 MSBs of each channel into a 6-bit index (0-63)
            data2[i*step2 + j] = C1 | C2 | C3;
        }
    }
}
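
If you prefer Python, the same most-significant-bits idea takes only a couple of lines of NumPy. A minimal sketch (the file name is a placeholder) that produces the same single-channel 0-63 index image as the function above, plus a 3-channel variant you can display directly:

import cv2
import numpy as np

img = cv2.imread('input.png')  # BGR, uint8

# keep the 2 most significant bits of each channel and pack them
# into a 6-bit index (0-63), as reduceTo64Colors() does
b, g, r = img[..., 0], img[..., 1], img[..., 2]
index = ((b & 192) >> 2) | ((g & 192) >> 4) | ((r & 192) >> 6)

# or keep a displayable 64-colour image by zeroing the 6 lower bits
# of every channel (each channel quantised to 4 levels)
quantised = img & 192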

Color percentage in image for Python using OpenCV

I've modified your script so you can find the (approximate) percent of green color in your test images. I've added some comments to explain the code:

# Imports
import cv2
import numpy as np

# Read image
imagePath = "D://opencvImages//"
img = cv2.imread(imagePath+"leaves.jpg")

# Here, you define your target color as
# a tuple of three values: RGB
green = [130, 158, 0]

# You define an interval that covers the values
# in the tuple and are below and above them by 20
diff = 20

# Be aware that opencv loads image in BGR format,
# that's why the color values have been adjusted here:
boundaries = [([green[2], green[1] - diff, green[0] - diff],
               [green[2] + diff, green[1] + diff, green[0] + diff])]

# Scale your BIG image into a small one:
scalePercent = 0.3

# Calculate the new dimensions
width = int(img.shape[1] * scalePercent)
height = int(img.shape[0] * scalePercent)
newSize = (width, height)

# Resize the image:
img = cv2.resize(img, newSize, None, None, None, cv2.INTER_AREA)

# check out the image resized:
cv2.imshow("img resized", img)
cv2.waitKey(0)

# for each range in your boundary list:
for (lower, upper) in boundaries:

    # You get the lower and upper part of the interval:
    lower = np.array(lower, dtype=np.uint8)
    upper = np.array(upper, dtype=np.uint8)

    # cv2.inRange is used to binarize (i.e., render in white/black) an image
    # All the pixels that fall inside your interval [lower, upper] will be white
    # All the pixels that do not fall inside this interval will
    # be rendered in black, for all three channels:
    mask = cv2.inRange(img, lower, upper)

    # Check out the binary mask:
    cv2.imshow("binary mask", mask)
    cv2.waitKey(0)

    # Now, you AND the mask and the input image
    # All the pixels that are white in the mask will
    # survive the AND operation, all the black pixels
    # will remain black
    output = cv2.bitwise_and(img, img, mask=mask)

    # Check out the ANDed mask:
    cv2.imshow("ANDed mask", output)
    cv2.waitKey(0)

    # You can use the mask to count the number of white pixels.
    # Remember that the white pixels in the mask are those that
    # fall in your defined range, that is, every white pixel corresponds
    # to a green pixel. Divide by the image size and you get the
    # percentage of green pixels in the original image:
    ratio_green = cv2.countNonZero(mask) / (img.size / 3)

    # This is the color percent calculation, considering the resize I did earlier:
    colorPercent = (ratio_green * 100) / scalePercent

    # Print the color percent, using 2 figures past the decimal point
    print('green pixel percentage:', np.round(colorPercent, 2))

    # numpy's hstack is used to stack two images horizontally,
    # so you see the various images generated in one figure:
    cv2.imshow("images", np.hstack([img, output]))
    cv2.waitKey(0)

Output:

green pixel percentage: 89.89

I've produced some images. This is the binary mask of the green color:

And this is the ANDed output of the mask and the input image:

Some additional remarks about this snippet:

  1. Gotta be careful loading images with OpenCV, as they are loaded in
    BGR format rather than the usual RGB. Here, the snippet has this
    covered by reversing the elements in the boundary list, but keep an
    eye open for this common pitfall.

  2. Your input image was too big to even display it properly using
    cv2.imshow. I resized it and processed that instead. At the end,
    you see I took into account this resized scale in the final percent
    calculation.

  3. Depending on the target color you define and the difference you
    use, you could produce negative values. Here, for instance, the
    R = 0 value would become -20 after subtracting diff. That doesn't
    make sense when you are encoding color intensity in unsigned 8 bits;
    the values must stay in the [0, 255] range. Watch out for negative
    values when using this method; a small clipping sketch follows this list.
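
One way to guard against that is to clip the computed bounds to the valid uint8 range before building the arrays. A minimal sketch, reusing the green and diff values from the snippet above:

import numpy as np

green = [130, 158, 0]
diff = 20

# clamp every bound to [0, 255] before converting to uint8
lower = np.clip([green[2] - diff, green[1] - diff, green[0] - diff], 0, 255).astype(np.uint8)
upper = np.clip([green[2] + diff, green[1] + diff, green[0] + diff], 0, 255).astype(np.uint8)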

Now, you may see that the method is not very robust. Depending on what you are doing, you could switch to the HSV color space to get a nicer and more accurate binary mask.

You can try the HSV-based mask with this:

# The HSV mask values, defined for the green color:
lowerValues = np.array([29, 89, 70])
upperValues = np.array([179, 255, 255])

# Convert the image to HSV:
hsvImage = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Create the HSV mask
hsvMask = cv2.inRange(hsvImage, lowerValues, upperValues)

# AND mask & input image:
hsvOutput = cv2.bitwise_and(img, img, mask=hsvMask)

Which gives you this nice masked image instead:
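
If you also want a percentage figure from the HSV mask, the counting step is the same as in the script above (this reuses img, scalePercent and hsvMask from the earlier snippets):

# same percentage calculation as before, but on the HSV-based mask
hsvRatio = cv2.countNonZero(hsvMask) / (img.size / 3)
hsvPercent = (hsvRatio * 100) / scalePercent
print('green pixel percentage (HSV):', np.round(hsvPercent, 2))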



