Changing image hue with Python PIL

There is Python code to convert RGB to HSV (and vice versa) in the colorsys module in the standard library. My first attempt used

rgb_to_hsv = np.vectorize(colorsys.rgb_to_hsv)
hsv_to_rgb = np.vectorize(colorsys.hsv_to_rgb)

to vectorize those functions. Unfortunately, using np.vectorize results in rather slow code.
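For illustration, this is roughly how the vectorized function would then be applied (a sketch, assuming pixel values scaled to 0..1 and the same tweeter.png example file used below); every pixel triggers a separate Python-level call into colorsys, which is why it is slow:

import colorsys
import numpy as np
from PIL import Image

rgb_to_hsv = np.vectorize(colorsys.rgb_to_hsv)

# colorsys works on scalars in 0..1, so this makes one Python call per pixel
arr = np.asarray(Image.open('tweeter.png').convert('RGB')) / 255.0
h, s, v = rgb_to_hsv(arr[..., 0], arr[..., 1], arr[..., 2])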

I was able to obtain roughly a fivefold speed-up by translating colorsys.rgb_to_hsv and colorsys.hsv_to_rgb into native numpy operations.

from PIL import Image
import numpy as np

def rgb_to_hsv(rgb):
    # Translated from the source of colorsys.rgb_to_hsv
    # rgb should be a numpy array of RGB(A) values between 0 and 255
    # rgb_to_hsv returns h and s as floats between 0.0 and 1.0;
    # v stays on the 0..255 scale
    rgb = rgb.astype('float')
    hsv = np.zeros_like(rgb)
    # in case an RGBA array was passed, just copy the A channel
    hsv[..., 3:] = rgb[..., 3:]
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    maxc = np.max(rgb[..., :3], axis=-1)
    minc = np.min(rgb[..., :3], axis=-1)
    hsv[..., 2] = maxc
    mask = maxc != minc
    hsv[mask, 1] = (maxc - minc)[mask] / maxc[mask]
    rc = np.zeros_like(r)
    gc = np.zeros_like(g)
    bc = np.zeros_like(b)
    rc[mask] = (maxc - r)[mask] / (maxc - minc)[mask]
    gc[mask] = (maxc - g)[mask] / (maxc - minc)[mask]
    bc[mask] = (maxc - b)[mask] / (maxc - minc)[mask]
    hsv[..., 0] = np.select(
        [r == maxc, g == maxc], [bc - gc, 2.0 + rc - bc], default=4.0 + gc - rc)
    hsv[..., 0] = (hsv[..., 0] / 6.0) % 1.0
    return hsv

def hsv_to_rgb(hsv):
    # Translated from the source of colorsys.hsv_to_rgb
    # h and s should be numpy arrays with values between 0.0 and 1.0
    # v should be a numpy array with values between 0.0 and 255.0
    # hsv_to_rgb returns an array of uints between 0 and 255
    rgb = np.empty_like(hsv)
    # in case an alpha channel was passed through, just copy it
    rgb[..., 3:] = hsv[..., 3:]
    h, s, v = hsv[..., 0], hsv[..., 1], hsv[..., 2]
    i = (h * 6.0).astype('uint8')
    f = (h * 6.0) - i
    p = v * (1.0 - s)
    q = v * (1.0 - s * f)
    t = v * (1.0 - s * (1.0 - f))
    i = i % 6
    conditions = [s == 0.0, i == 1, i == 2, i == 3, i == 4, i == 5]
    rgb[..., 0] = np.select(conditions, [v, q, p, p, t, v], default=v)
    rgb[..., 1] = np.select(conditions, [v, v, v, q, p, p], default=t)
    rgb[..., 2] = np.select(conditions, [v, p, t, v, v, q], default=p)
    return rgb.astype('uint8')

def shift_hue(arr, hout):
    hsv = rgb_to_hsv(arr)
    hsv[..., 0] = hout
    rgb = hsv_to_rgb(hsv)
    return rgb

img = Image.open('tweeter.png').convert('RGBA')
arr = np.array(img)

if __name__ == '__main__':
    green_hue = (180 - 78) / 360.0
    red_hue = (180 - 180) / 360.0

    new_img = Image.fromarray(shift_hue(arr, red_hue), 'RGBA')
    new_img.save('tweeter_red.png')

    new_img = Image.fromarray(shift_hue(arr, green_hue), 'RGBA')
    new_img.save('tweeter_green.png')

yields tweeter_red.png and tweeter_green.png, the red- and green-hued versions of the original image.
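Note that despite its name, shift_hue replaces the hue outright rather than rotating it. If you want to rotate the existing hues by an offset instead, a small variant (a sketch; rotate_hue is not part of the original code) could look like:

def rotate_hue(arr, offset):
    # offset is a fraction of a full turn, i.e. in [0.0, 1.0)
    hsv = rgb_to_hsv(arr)
    hsv[..., 0] = (hsv[..., 0] + offset) % 1.0
    return hsv_to_rgb(hsv)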

PIL Converting an image's hue, then saving out in Python

I think you want a mono-hue image. Is this true?

It's not clear what you want done with the existing bands (alpha and greyscale/level). Do you want alpha to remain alpha and the greyscale to become red saturation? Do you want the alpha to become your red saturation? Do you want greyscale to be the image lightness and the alpha to become the saturation?

Edit:
I've changed the output based on your comment. You wanted the darkest shade of the greyscale band to represent fully saturated red and the lightest grey to represent white (in other words, all colour channels at full value). You also indicated that you wanted alpha to be preserved as alpha in the output. I've made that change too.

This is possible with some band swapping:

from PIL import Image

# get an image that is greyscale with alpha
i = Image.open('hsvwheel.png').convert('LA')
# get the two bands
L, A = i.split()
# a fully saturated band
S, = Image.new('L', i.size, 255).split()
# re-combine the bands: red is held at full value while green and blue follow
# the greyscale level, so black becomes pure red and white stays white;
# this also keeps the alpha channel in the new image
i2 = Image.merge('RGBA', (S, L, L, A))
# save
i2.save('test.png')

Pillow module - hue change when cropping and saving (no conversion)

The problem was that when I was saving the file, PIL automatically converted the image's color space (ROMM-RGB) to another color space (RGB or sRGB), so essentially every color changed.

All you have to do is preserve the color space of the image and you're fine. If you want to convert to another color space, look into the OpenCV library.

I can't explain this in much detail because I'm just breaking the ice on the topic. Here is the code that solved the issue:

resized_image.save('resized.jpg',     # file name
                   format='JPEG',     # file format
                   quality=100,       # compression quality
                   # preserve the ICC profile of the photo (this is what caused the problem)
                   icc_profile=resized_image.info.get('icc_profile', ''))

Here is a link to a more in-depth answer: LINK
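If you actually need to convert the image to another colour space rather than just preserving the embedded profile, Pillow's ImageCms module (an alternative to the OpenCV route mentioned above) can do a profile-to-profile conversion. A rough sketch, assuming the file carries an embedded ICC profile and using a hypothetical file name:

import io
from PIL import Image, ImageCms

img = Image.open('photo.jpg')
icc_bytes = img.info.get('icc_profile')
if icc_bytes:
    src_profile = ImageCms.ImageCmsProfile(io.BytesIO(icc_bytes))
    dst_profile = ImageCms.createProfile('sRGB')
    img = ImageCms.profileToProfile(img, src_profile, dst_profile)
img.save('photo_srgb.jpg', quality=95)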

pillow change picture color when save the picture

You read your image from disk with OpenCV here:

picture = cv2.imread('atadogumgunu.png')

which will use BGR order. Then you save it with PIL here:

saved_im=saved_im.save(i+".jpg")

which expects RGB order - so it is bound not to work.


The easiest thing is to load your image with PIL at the start and avoid OpenCV altogether so just use:

pil_im = Image.open('atadogumgunu.png').convert('RGB')
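Alternatively, if you need OpenCV for other processing, you can reorder the channels before handing the array to PIL. A rough sketch using the same file name:

import cv2
from PIL import Image

picture = cv2.imread('atadogumgunu.png')                # OpenCV loads in BGR order
picture_rgb = cv2.cvtColor(picture, cv2.COLOR_BGR2RGB)  # reorder to RGB
Image.fromarray(picture_rgb).save('atadogumgunu.jpg')   # now PIL sees the colours it expects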

Changing the color for the overlay of a PIL image makes no difference

Edit

You wish to "change the colour of the logo" using an alpha matte. You cannot do this unless you manipulate the image pixels themselves; alpha matting is not the right tool for it. I would suggest you mask out the regions of the logo you want to change and then replace their colours with the desired values (see the sketch just below). Alpha matting is primarily used to blend objects together, not to change the colour distribution of an object. I have left the original answer for posterity, as the alpha-matting method in the original question was incorrect at its core.
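A rough sketch of that masking approach (assuming a hypothetical logo.png with an alpha channel and an arbitrary target colour):

import cv2

logo = cv2.imread('logo.png', cv2.IMREAD_UNCHANGED)  # BGRA
opaque = logo[:, :, 3] > 0                           # pixels that belong to the logo
logo[opaque, 0:3] = (0, 255, 0)                      # replace their colour (BGR order)
cv2.imwrite('logo_recoloured.png', logo)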


The core misunderstanding of this approach comes from how cv2.addWeighted is performed. Citing the documentation (emphasis mine):

In case of multi-channel arrays, each channel is processed independently. The function can be replaced with a matrix expression:

dst = src1*alpha + src2*beta + gamma;

cv2.addWeighted does not handle the alpha channel the way you are expecting. It treats the alpha channel as just another channel of information and takes a weighted sum of that channel alone for the final output; it does not actually perform alpha matting. Therefore, if you want alpha matting, you will need to compute the operation yourself.

Something like:

import numpy as np
import cv2
from PIL import Image

def overlay(path):
    ### Some code from your function - comments removed for brevity
    logo_img = cv2.imread(path, cv2.IMREAD_UNCHANGED)
    alpha = logo_img[:, :, 3]

    mask = np.zeros(logo_img.shape, dtype=np.uint8)

    # r, g, b = (random.randint(0, 255),
    #            random.randint(0, 255),
    #            random.randint(0, 255))

    r, g, b = (0, 255, 0)
    a = 255
    mask[:, :] = r, g, b, a
    mask[:, :, 3] = alpha

    ### Alpha matte code here
    alp = alpha.astype(np.float32) / 255.0  # To make between [0, 1]
    alp = alp[..., None]                    # For broadcasting
    dst_tmp = logo_img[..., :3].astype(np.float32) * alp + mask[..., :3].astype(np.float32) * (1.0 - alp)
    dst = np.zeros_like(logo_img)
    dst[..., :3] = dst_tmp.astype(np.uint8)
    dst[..., 3] = 255

    pil_image = Image.fromarray(dst).convert('RGBA')

    return pil_image

The first part of this new function is from what you originally had. The alpha-matting section starts after the marked comment. Its first line converts the alpha map into the [0, 1] range, which is required so that the weighted sum of the two images never produces pixel values outside the range of the native data type. I also introduce a singleton third dimension so that the alpha channel broadcasts over each RGB channel separately and the weighting is applied correctly.

After that, the alpha matte is computed as a weighted sum of the logo image and the mask. Note that I subset just the RGB channels of each; the alpha channel isn't needed there because it is already being used as the weights. Finally, a new output image is built whose first three channels are the resulting matte and whose alpha channel is set to 255 everywhere: the blending has already been done at this point, so every pixel should show with no transparency.
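A minimal usage sketch (assuming a hypothetical logo.png with an alpha channel):

result = overlay('logo.png')      # blend the logo over the solid green mask
result.save('logo_on_green.png')  # result is a PIL RGBA image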

PYTHON PIL: remove everything from image except text (based on pixel color)

You could try looking for bright values in HSV colourspace like this:

from PIL import Image

# Load image and convert to HSV
im = Image.open('t6FkL.png').convert('HSV')

# Split channels, just retaining the Value channel
_, _, V = im.split()

# Select pixels where V>220 ("p > 220 and 255" evaluates to 255 when true, and to False, i.e. 0, otherwise)
res = V.point(lambda p: p > 220 and 255)
res.save('result.png')

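The same threshold can also be applied on a NumPy array instead of using point(); a rough equivalent sketch:

import numpy as np
from PIL import Image

im = Image.open('t6FkL.png').convert('HSV')
v = np.asarray(im)[..., 2]                         # Value channel
mask = np.where(v > 220, 255, 0).astype(np.uint8)  # white where V>220, black elsewhere
Image.fromarray(mask, mode='L').save('result_numpy.png')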


Here is a perhaps more intuitive way of writing the point() function to deal with compound logic:

#!/usr/bin/env python3

from PIL import Image

# Build a linear gradient 0..255
im = Image.linear_gradient('L')

# Save how it looks initially just for debug
im.save('DEBUG-start.png')

# Make all pixels between 180..220 black, leaving others as they were
res = im.point(lambda p: 0 if p > 180 and p < 220 else p)

# Save result
res.save('result.png')

The start image is the plain linear gradient; in the processed result, the band of values between 180 and 220 is black and everything else is unchanged.

PIL change color channel intensity

I dreamt up an approach for this:

  • Extract and save the Alpha/transparency channel
  • Convert the image, minus Alpha, to HSV colourspace and save the V (lightness)
  • Get a new Hue (and possibly Saturation) from your Colour Picker
  • Synthesize a new Hue channel, and a new Saturation channel of 255 (fully saturated)
  • Merge the new Hue, Saturation and original V (lightness) to a 3 channel HSV image
  • Convert the HSV image back to RGB space
  • Merge the original Alpha channel back in

That looks like this:

#!/usr/local/bin/python3
import numpy as np
from PIL import Image

# Open and ensure it is RGB, not palettised
img = Image.open("keyshape.png").convert('RGBA')

# Save the Alpha channel to re-apply at the end
A = img.getchannel('A')

# Convert to HSV and save the V (Lightness) channel
V = img.convert('RGB').convert('HSV').getchannel('V')

# Synthesize new Hue and Saturation channels using values from colour picker
colpickerH, colpickerS = 10, 255
newH = Image.new('L', img.size, colpickerH)
newS = Image.new('L', img.size, colpickerS)

# Recombine original V channel plus 2 synthetic ones to a 3 channel HSV image
HSV = Image.merge('HSV', (newH, newS, V))

# Add original Alpha layer back in
R, G, B = HSV.convert('RGB').split()
RGBA = Image.merge('RGBA', (R, G, B, A))

RGBA.save('result.png')

With colpickerH=10 you get a result in one hue (try putting Hue=10 into a colour picker to see the shade), and with colpickerH=120 you get another (try Hue=120).


Just for fun, you can do exactly the same thing without writing any Python, at the command line with ImageMagick, which is installed on most Linux distros and available for macOS and Windows:

# Split into Hue, Saturation, Lightness and Alpha channels
convert keyshape.png -colorspace hsl -separate ch-%d.png

# Make a new solid Hue channel filled with 40, a new solid Saturation channel filled with 255, take the original V channel (and darken it a little), convert from HSL to RGB, copy the Alpha channel from the original image
convert -size 73x320 xc:gray40 xc:white \( ch-2.png -evaluate multiply 0.5 \) -set colorspace HSL -combine -colorspace RGB ch-3.png -compose copyalpha -composite result.png

Yes, I could do it as a one-liner, but it would be harder to read.


