Converting Raw Bytes to Image

How to convert a byte array to an image in Python

You can generate a bytearray of dummy data representing a gradient like this:

import numpy as np

# Generate a left-right gradient 242 px wide by 266 pixels tall
ba = bytearray((np.arange(242) + np.zeros((266,1))).astype(np.uint8))

For reference, the underlying 266×242 array (before the conversion to uint8) contains data like this:

array([[  0.,   1.,   2., ..., 239., 240., 241.],
       [  0.,   1.,   2., ..., 239., 240., 241.],
       [  0.,   1.,   2., ..., 239., 240., 241.],
       ...,
       [  0.,   1.,   2., ..., 239., 240., 241.],
       [  0.,   1.,   2., ..., 239., 240., 241.],
       [  0.,   1.,   2., ..., 239., 240., 241.]])

And then make it into a PIL/Pillow image like this:

from PIL import Image

# Convert bytearray "ba" to PIL Image, 'L' just means greyscale/lightness
im = Image.frombuffer('L', (242,266), ba, 'raw', 'L', 0, 1)

Then you can save the image like this:

im.save('result.png')

[Image: the resulting gradient, result.png]

The documentation for Image.frombuffer is in the Pillow docs.

Converting RAW byte data to Bitmap

So, in the end my solution was simple.

// Requires System.Drawing, System.Drawing.Imaging and System.Runtime.InteropServices.
int WriteBitmapFile(string filename, int width, int height, byte[] imageData)
{
    // Re-order each RGBA pixel into the BGRA layout that GDI+ expects.
    byte[] newData = new byte[imageData.Length];

    for (int x = 0; x < imageData.Length; x += 4)
    {
        byte[] pixel = new byte[4];
        Array.Copy(imageData, x, pixel, 0, 4);

        byte r = pixel[0];
        byte g = pixel[1];
        byte b = pixel[2];
        byte a = pixel[3];

        byte[] newPixel = new byte[] { b, g, r, a };

        Array.Copy(newPixel, 0, newData, x, 4);
    }

    imageData = newData;

    using (var bmp = new Bitmap(width, height, PixelFormat.Format32bppArgb))
    {
        BitmapData bmpData = bmp.LockBits(new Rectangle(0, 0, bmp.Width, bmp.Height),
                                          ImageLockMode.WriteOnly,
                                          bmp.PixelFormat);

        // For 32bpp data the stride is width * 4 (already a multiple of four),
        // so the whole buffer can be copied in a single call.
        IntPtr pNative = bmpData.Scan0;
        Marshal.Copy(imageData, 0, pNative, imageData.Length);

        bmp.UnlockBits(bmpData);

        bmp.Save(filename);
    }

    return 1;
}

All I had to do was loop through imageData and re-order the bytes that make up each pixel so that they match the format a Windows bitmap expects, which is BGRA.

Obviously, I can still make some small optimisations in the for loop that shifts the bytes around, but it's working.
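For example, one such optimisation (just a sketch of what it could look like, not part of the original answer) is to swap the red and blue bytes in place, which avoids allocating the temporary pixel arrays and the second newData buffer:

for (int x = 0; x < imageData.Length; x += 4)
{
    // RGBA -> BGRA: swap the red (offset 0) and blue (offset 2) bytes in place;
    // green and alpha already sit where GDI+ expects them.
    byte r = imageData[x];
    imageData[x] = imageData[x + 2];
    imageData[x + 2] = r;
}

After this loop imageData is already in BGRA order and can be copied straight into the locked bitmap (note that it modifies the caller's array).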

Convert raw bytes to a GIF file image in C#

The way to manipulate 8-bit images in .NET is not very straightforward; you need to make a new Bitmap using the (width, height, pixelFormat) constructor, then use LockBits to get at its backing byte array, and copy your image data into it using Marshal.Copy.

Note that the length in bytes of one line in this backing data array, called the "stride", is always rounded up to a multiple of four bytes, so you have to copy your data line by line and skip to the next line position using the stride, not just the (image width * bits per pixel) / 8 value.
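As a rough illustration of that rule (this GetStride helper is not from the answer; it's just a sketch), the stride for a given width and pixel format can be computed like this:

// Bytes per scanline: bits per line, converted to bytes, rounded up to a multiple of four.
static int GetStride(int width, PixelFormat pixelFormat)
{
    int bitsPerPixel = Image.GetPixelFormatSize(pixelFormat);
    return ((width * bitsPerPixel + 31) / 32) * 4;
}

For example, an 8-bit image that is 10 pixels wide has a stride of 12 bytes, not 10.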

Though, on the subject of the image width... your bytes array is only a part of the image. If it is just the pixel data, you are missing all your usual image header information, such as the image dimensions, and, assuming the pixel format is 8-bit indexed (as gif is), the colour palette. This data isn't optional; without it, there is no way to reconstruct an image.

If the image is grayscale, and each byte simply represents a brightness, you can generate the palette easily with a simple for loop, though:

Color[] palette = new Color[256];
for (Int32 i = 0; i < palette.Length; i++)
    palette[i] = Color.FromArgb(i, i, i);

Once you have (somehow) obtained that missing information, this is the way to make an image out of the byte array:

/// <summary>
/// Creates a bitmap based on data, width, height, stride and pixel format.
/// </summary>
/// <param name="sourceData">Byte array of raw source data</param>
/// <param name="width">Width of the image</param>
/// <param name="height">Height of the image</param>
/// <param name="stride">Scanline length inside the data</param>
/// <param name="pixelFormat">Pixel format</param>
/// <param name="palette">Color palette</param>
/// <param name="defaultColor">Default color to fill in on the palette if the given colors don't fully fill it.</param>
/// <returns>The new image</returns>
public static Bitmap BuildImage(Byte[] sourceData, Int32 width, Int32 height, Int32 stride, PixelFormat pixelFormat, Color[] palette, Color? defaultColor)
{
    Bitmap newImage = new Bitmap(width, height, pixelFormat);
    BitmapData targetData = newImage.LockBits(new Rectangle(0, 0, width, height), ImageLockMode.WriteOnly, pixelFormat);
    // If the input stride is larger than the width, this calculates the actual amount of bytes to copy for each line.
    Int32 newDataWidth = ((Image.GetPixelFormatSize(pixelFormat) * width) + 7) / 8;
    // Compensate for possible negative stride on BMP format data.
    Boolean isFlipped = stride < 0;
    stride = Math.Abs(stride);
    // Cache these to avoid unnecessary getter calls.
    Int32 targetStride = targetData.Stride;
    Int64 scan0 = targetData.Scan0.ToInt64();
    for (Int32 y = 0; y < height; y++)
        Marshal.Copy(sourceData, y * stride, new IntPtr(scan0 + y * targetStride), newDataWidth);
    newImage.UnlockBits(targetData);
    // Fix negative stride on BMP format.
    if (isFlipped)
        newImage.RotateFlip(RotateFlipType.Rotate180FlipX);
    // For indexed images, set the palette.
    if ((pixelFormat & PixelFormat.Indexed) != 0 && palette != null)
    {
        ColorPalette pal = newImage.Palette;
        for (Int32 i = 0; i < pal.Entries.Length; i++)
        {
            if (i < palette.Length)
                pal.Entries[i] = palette[i];
            else if (defaultColor.HasValue)
                pal.Entries[i] = defaultColor.Value;
            else
                break;
        }
        newImage.Palette = pal;
    }
    return newImage;
}

For a compact byte array of an 8-bit image, the width and the stride should be identical. Without the palette or the dimensions, though, there's no way to do it.

For the record, your little repeating { 0xC2, 0x80 } byte array above, loaded as a 10×20 grayscale image (it's 200 bytes, not 100 as you said), gives this result (zoomed to 20×):

[Image: the byte array loaded as an 8-bit grayscale image]
It's just two repeated bytes on an even width, so all you get is vertical lines...
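Putting the pieces together for that example, a minimal sketch could look like the following. The 0xC2/0x80 bytes, the 10×20 dimensions and the grayscale palette come from this answer; the output file name and the choice to save as GIF are assumptions for illustration.

// Needs System.Drawing and System.Drawing.Imaging.
byte[] sourceData = new byte[200];
for (int i = 0; i < sourceData.Length; i += 2)
{
    sourceData[i] = 0xC2;
    sourceData[i + 1] = 0x80;
}

// Grayscale palette: index N maps to the colour (N, N, N).
Color[] palette = new Color[256];
for (int i = 0; i < palette.Length; i++)
    palette[i] = Color.FromArgb(i, i, i);

// Compact 8-bit data, so the source stride equals the width (10); the height is 20.
using (Bitmap image = BuildImage(sourceData, 10, 20, 10, PixelFormat.Format8bppIndexed, palette, null))
    image.Save("output.gif", ImageFormat.Gif);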


How to convert a raw byte[] into a Bitmap in a camera app

It is not possible to retrieve uncompressed bitmap data from the Android camera, so the picture callback needs to move from RawCallback to JpegCallback and use decodeByteArray() to obtain a Bitmap. It is also unreliable to pass a Bitmap through an Intent, so the simplest way is to hand it to the receiving Activity directly. The code became like this:

@Override
public void onPicture(byte[] bytes) {
    // Decode the JPEG data delivered by the callback into a Bitmap.
    Bitmap bitmap = BitmapFactory.decodeByteArray(bytes, 0, bytes.length, null);
    // Hand the Bitmap to the receiving Activity through a WeakReference
    // instead of putting it into the Intent.
    PicturePreviewActivity.mBitmap = new WeakReference<>(bitmap);
    Intent intent = new Intent(CameraActivity.this, PicturePreviewActivity.class);
    startActivityForResult(intent, SAVE_PICTURE_OR_NOT);
}

Reading RGB raw input, converting to an OpenCV object, then converting to JPEG bytes - without PIL

I figured this out. Further improvements are welcome. I'm still not entirely sure I understand the shape and contents of the tuple returned by imencode.

Below is my working numpy/OpenCV alternative to the PIL approach above:

Read in the RGB 24-bit image from the buffer and convert it to a numpy array (opencvFr) for OpenCV:

# Interpret the raw buffer as a flat array of unsigned bytes.
FlatNp = np.frombuffer(buf, dtype=np.uint8, count=-1)
# Arrange the flat data as a 576 x 768 x 3 (height x width x channels) array.
opencvFr = np.resize(FlatNp, (576, 768, 3))

Encoding the edited numpy image array (opencvFr) as JPEG bytes:

tmp = cv2.imencode('.jpg', opencvFr)  # returns a (success_flag, encoded_array) tuple
self.frame = tmp[1].tobytes()         # the encoded array as bytes: a compressed JPEG image

(I am experimenting with this for a streaming camera application)


