Raw Image Data from Camera Like "645 Pro"

Raw image data from camera like 645 PRO

I was able to solve it with OpenCV. Thanks to everyone who helped me.

// imageBuffer is the CVImageBufferRef obtained via CMSampleBufferGetImageBuffer() in the capture delegate
CVPixelBufferLockBaseAddress(imageBuffer, 0);
void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);

// Wrap the BGRA pixel data in a cv::Mat without copying it
cv::Mat frame((int)height, (int)width, CV_8UC4, baseAddress);

// Build a path in the Documents directory and write the frame as a BMP
NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *documentsDirectory = [paths objectAtIndex:0];
NSString *filePath = [documentsDirectory stringByAppendingPathComponent:[NSString stringWithFormat:@"ocv%d.BMP", picNum]];
const char *cPath = [filePath cStringUsingEncoding:NSMacOSRomanStringEncoding];

cv::imwrite(cPath, frame);

CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

I just use the imwrite function from OpenCV. This way I get BMP files of around 24 MB, directly after the Bayer filter!

Raw image data from camera

This is how I do it:

First, I open the camera using:

- (void)openCamera
{
    imagePicker = [[UIImagePickerController alloc] init];
    imagePicker.delegate = self;

    if ([UIImagePickerController isSourceTypeAvailable:UIImagePickerControllerSourceTypeCamera])
    {
        imagePicker.sourceType = UIImagePickerControllerSourceTypeCamera;
        imagePicker.mediaTypes = [NSArray arrayWithObjects:
                                  (NSString *) kUTTypeImage,
                                  (NSString *) kUTTypeMovie, nil];
        imagePicker.allowsEditing = NO;

        [self presentModalViewController:imagePicker animated:YES];
    }
    else {
        lblError.text = NSLocalizedStringFromTable(@"noCameraFound", @"Errors", @"");
    }
}

When the picture is taken this method gets called:

- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info
{
    // Save and get the path of the image
    NSString *mediaType = [info objectForKey:UIImagePickerControllerMediaType];
    ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
    if ([mediaType isEqualToString:(NSString *)kUTTypeImage])
    {
        // Save the image
        image = [info objectForKey:UIImagePickerControllerOriginalImage];
        [library writeImageToSavedPhotosAlbum:[image CGImage]
                                  orientation:(ALAssetOrientation)[image imageOrientation]
                              completionBlock:^(NSURL *assetURL, NSError *error) {
            if (!error) {
                // Save path and location to database
                NSString *pathLocation = [[NSString alloc] initWithFormat:@"%@", assetURL];
            } else {
                NSLog(@"CameraViewController: Error on saving image : %@ {imagePickerController}", error);
            }
        }];
    }
    [imagePicker dismissModalViewControllerAnimated:YES];
}

Then, with that path, I get the picture from the library at full resolution (the "1" is the JPEG compression quality passed to UIImageJPEGRepresentation):

- (void)preparePicture:(NSString *)filePathPicture
{
    ALAssetsLibraryAssetForURLResultBlock resultBlock = ^(ALAsset *myasset)
    {
        if (myasset != nil) {
            ALAssetRepresentation *assetRep = [myasset defaultRepresentation];
            CGImageRef imageRef = [assetRep fullResolutionImage];
            if (imageRef) {
                NSData *imageData = UIImageJPEGRepresentation([UIImage imageWithCGImage:imageRef], 1);
            }
        } else {
            // error
        }
    };

    ALAssetsLibraryAccessFailureBlock failureBlock = ^(NSError *error)
    {
        NSString *errorString = [NSString stringWithFormat:@"can't get image, %@", [error localizedDescription]];
        NSLog(@"%@", errorString);
    };

    if (filePathPicture && [filePathPicture length])
    {
        NSURL *assetUrl = [NSURL URLWithString:filePathPicture];
        ALAssetsLibrary *assetslibrary = [[ALAssetsLibrary alloc] init];
        [assetslibrary assetForURL:assetUrl
                       resultBlock:resultBlock
                      failureBlock:failureBlock];
    }
}

Hope this helps you a bit further :-).

iOS: Get pixel-by-pixel data from camera

AV Foundation can give you back the raw bytes for an image captured by either the video or the still camera. You need to set up an AVCaptureSession with an appropriate AVCaptureDevice, a corresponding AVCaptureDeviceInput, and an output (AVCaptureVideoDataOutput or AVCaptureStillImageOutput). Apple has some examples of this process in their documentation, and it requires a fair amount of boilerplate code to configure.
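A minimal sketch of that boilerplate, assuming you want live video frames delivered as BGRA; the preset, the queue name, and the omission of error and capability checks are simplifications for illustration:

// Requires AVFoundation; the enclosing class adopts AVCaptureVideoDataOutputSampleBufferDelegate
AVCaptureSession *session = [[AVCaptureSession alloc] init];
session.sessionPreset = AVCaptureSessionPresetMedium;

AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
NSError *error = nil;
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
if (input) {
    [session addInput:input];
}

// Ask for uncompressed 32-bit BGRA frames
AVCaptureVideoDataOutput *videoOutput = [[AVCaptureVideoDataOutput alloc] init];
videoOutput.videoSettings = [NSDictionary dictionaryWithObject:[NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA]
                                                         forKey:(id)kCVPixelBufferPixelFormatTypeKey];
[videoOutput setSampleBufferDelegate:self queue:dispatch_queue_create("cameraFrameQueue", NULL)];
[session addOutput:videoOutput];

[session startRunning];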

Once you have your capture session configured and you are capturing data from the camera, you implement a -captureOutput:didOutputSampleBuffer:fromConnection: delegate method, where one of the parameters is a CMSampleBufferRef. That contains a CVImageBufferRef, which you access via CMSampleBufferGetImageBuffer(). Calling CVPixelBufferGetBaseAddress() on that pixel buffer returns the base address of the byte array of raw pixel data for your camera frame. This can be in a few different formats, but the most common are BGRA and planar YUV.
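As a rough sketch, assuming the 32BGRA video settings from the setup above, that delegate method might look like this:

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);

    unsigned char *baseAddress = (unsigned char *)CVPixelBufferGetBaseAddress(pixelBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);
    size_t width = CVPixelBufferGetWidth(pixelBuffer);
    size_t height = CVPixelBufferGetHeight(pixelBuffer);

    // With kCVPixelFormatType_32BGRA, the pixel at (x, y) starts at
    // baseAddress + y * bytesPerRow + x * 4 and is ordered B, G, R, A.
    NSLog(@"Frame %zu x %zu, %zu bytes per row, first pixel BGRA = %d %d %d %d",
          width, height, bytesPerRow,
          baseAddress[0], baseAddress[1], baseAddress[2], baseAddress[3]);

    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
}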

I have an example application that uses this here, but I'd recommend that you also take a look at my open source framework which wraps the standard AV Foundation boilerplate and makes it easy to perform image processing on the GPU. Depending on what you want to do with these raw camera bytes, I may already have something you can use there or a means of doing it much faster than with on-CPU processing.

Easy and quick way of getting raw data of jpeg image in Java?

The easiest way is to do

ImageIO.read(new File("Image.jpeg"))

to get a BufferedImage. On the BufferedImage you can call getRGB(int x, int y), or getRGB(int startX, int startY, int w, int h, int[] rgbArray, int offset, int scansize) for better performance. getRaster() is also an option, which I've found to be the fastest (though it takes a little more effort).

For setting pixels, similar setRGB methods exist.

Edit: ImageIO is javax.imageio.ImageIO.

Import Raw image in R

You can use the readRaw() function from the hexView package:

library(hexView)
raw_image <- readRaw("image_path")

The data is then in raw_image$fileRaw. It is stored as a vector of class "raw".

You can then turn it into a matrix, converting the raw bytes to numeric values in [0, 1] so that as.raster accepts them:

image_matrix <- matrix(as.integer(raw_image$fileRaw) / 255, nrow = image_height, ncol = image_width, byrow = TRUE)

It should recreate what you had in your raw file.

NB: image_height and image_width are in pixels.

Now you can easily plot your image via as.raster:

plot(as.raster(image_matrix))

Capture still UIImage without compression (from CMSampleBufferRef)?

The method imageFromSampleBuffer does work; in fact, I'm using a modified version of it. But if I remember correctly, you need to set the outputSettings correctly. I think you need to set the key to kCVPixelBufferPixelFormatTypeKey and the value to kCVPixelFormatType_32BGRA.

So for example:

NSString* key = (NSString*)kCVPixelBufferPixelFormatTypeKey;                                 
NSNumber* value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA];
NSDictionary* outputSettings = [NSDictionary dictionaryWithObject:value forKey:key];

[newStillImageOutput setOutputSettings:outputSettings];

EDIT

I am using those settings to take still images, not video.
Is your sessionPreset set to AVCaptureSessionPresetPhoto? There may be problems with that.

AVCaptureSession *newCaptureSession = [[AVCaptureSession alloc] init];
[newCaptureSession setSessionPreset:AVCaptureSessionPresetPhoto];

EDIT 2

The part about saving to a UIImage is identical to the one from the documentation. That's the reason I was asking about other possible origins of the problem, but I guess that was just grasping at straws.
There is another way I know of, but that requires OpenCV.

- (UIImage *)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

    // Lock the pixel buffer before reading its base address
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);

    // Get the number of bytes per row for the pixel buffer
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    // Get the pixel buffer width and height
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    // Create a device-dependent RGB color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    // Create a bitmap graphics context with the sample buffer data
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
                                                 bytesPerRow, colorSpace,
                                                 kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    // Create a Quartz image from the pixel data in the bitmap graphics context
    CGImageRef quartzImage = CGBitmapContextCreateImage(context);

    // Unlock the pixel buffer
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

    // Free up the context and color space
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);

    // Create an image object from the Quartz image
    UIImage *image = [UIImage imageWithCGImage:quartzImage];

    // Release the Quartz image
    CGImageRelease(quartzImage);

    return image;
}

I guess that is of no help to you, sorry. I don't know enough to think of other origins for your problem.

captureStillImageAsynchronouslyFromConnection without JPG intermediary

You'll need to set the outputSettings with a different pixel format. If you want 32-bit BGRA, for example, you can set:

NSDictionary *outputSettings = [NSDictionary dictionaryWithObjectsAndKeys:[NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA], (id)kCVPixelBufferPixelFormatTypeKey, nil];

From https://developer.apple.com/library/mac/#documentation/AVFoundation/Reference/AVCaptureStillImageOutput_Class/Reference/Reference.html, the "recommended" pixel formats are:

  • kCMVideoCodecType_JPEG
  • kCVPixelFormatType_420YpCbCr8BiPlanarFullRange
  • kCVPixelFormatType_32BGRA

Of course, if you're not using JPEG output, you can't use jpegStillImageNSDataRepresentation:, but there's an example here:
how to convert a CVImageBufferRef to UIImage
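For completeness, an illustrative sketch only: here stillImageOutput is assumed to be an AVCaptureStillImageOutput configured with the BGRA outputSettings above, and imageFromSampleBuffer: is assumed to be a conversion helper along the lines of the one shown in the previous answer.

AVCaptureConnection *connection = [stillImageOutput connectionWithMediaType:AVMediaTypeVideo];
[stillImageOutput captureStillImageAsynchronouslyFromConnection:connection
                                               completionHandler:^(CMSampleBufferRef imageSampleBuffer, NSError *error) {
    if (imageSampleBuffer) {
        // The buffer holds raw BGRA pixels, not JPEG data, so
        // jpegStillImageNSDataRepresentation: does not apply here.
        UIImage *image = [self imageFromSampleBuffer:imageSampleBuffer];
        // ... use image ...
    }
}];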


