Mar 19, 2012

Manual Camera on iPhone

Today I succeeded at streaming video from the camera and showing it on the screen, and at manipulating the pixels before they are displayed.

The normal way (sketched right after this list) is to:

1. Write your own AVCaptureVideoDataOutputSampleBufferDelegate.
2. Get the CVImageBuffer (in practice a CVPixelBuffer) from the sample buffer.
3. Manipulate the pixels at their addresses.
4. Create a CGImageRef from the CVImageBuffer.
5. Set the contents of a CALayer to your CGImageRef.
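
This post predates Swift, but the same AVFoundation and Core Video calls still apply, so here is a minimal modern sketch of the five steps. It assumes the AVCaptureVideoDataOutput is configured to deliver kCVPixelFormatType_32BGRA frames; `FrameRenderer` and `targetLayer` are hypothetical names, not anything from my project.

```swift
import AVFoundation
import QuartzCore

final class FrameRenderer: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    // Hypothetical output layer; the caller adds it to the view hierarchy.
    let targetLayer = CALayer()

    // Step 1: the delegate callback, invoked once per captured frame.
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // Step 2: the sample buffer wraps a CVImageBuffer (a CVPixelBuffer).
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }

        // Step 3 (in-place pixel manipulation) would go here; see the
        // separate sketch further down.

        // Step 4: point a bitmap context at the pixel data and snapshot it
        // as a CGImage (BGRA layout, hence byteOrder32Little + alpha first).
        CVPixelBufferLockBaseAddress(pixelBuffer, .readOnly)
        let context = CGContext(
            data: CVPixelBufferGetBaseAddress(pixelBuffer),
            width: CVPixelBufferGetWidth(pixelBuffer),
            height: CVPixelBufferGetHeight(pixelBuffer),
            bitsPerComponent: 8,
            bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer),
            space: CGColorSpaceCreateDeviceRGB(),
            bitmapInfo: CGImageAlphaInfo.premultipliedFirst.rawValue
                | CGBitmapInfo.byteOrder32Little.rawValue)
        let image = context?.makeImage()
        CVPixelBufferUnlockBaseAddress(pixelBuffer, .readOnly)

        // Step 5: hand the image to the layer on the main thread.
        DispatchQueue.main.async { [weak self] in
            self?.targetLayer.contents = image
        }
    }
}
```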

That is the normal way. Alternatively, you can channel the CVImageBufferRef to OpenGL and manipulate the pixels with an OpenGL shader.
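
A sketch of that hand-off, using CVOpenGLESTextureCache (the zero-copy route Apple added in iOS 5) to wrap the pixel buffer as a GL texture instead of uploading bytes with glTexImage2D. `makeCameraTexture` is a hypothetical helper, and in real code you would create the cache once up front, not on every frame:

```swift
import CoreVideo
import OpenGLES

// Wrap a camera frame as a GL texture without copying the pixel data.
// `glContext` is an EAGLContext assumed to exist already.
func makeCameraTexture(_ pixelBuffer: CVPixelBuffer,
                       glContext: EAGLContext) -> CVOpenGLESTexture? {
    // For illustration only: a real app keeps one cache for its lifetime.
    var cache: CVOpenGLESTextureCache?
    CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, nil, glContext, nil, &cache)
    guard let textureCache = cache else { return nil }

    var texture: CVOpenGLESTexture?
    CVOpenGLESTextureCacheCreateTextureFromImage(
        kCFAllocatorDefault, textureCache, pixelBuffer, nil,
        GLenum(GL_TEXTURE_2D), GL_RGBA,
        GLsizei(CVPixelBufferGetWidth(pixelBuffer)),
        GLsizei(CVPixelBufferGetHeight(pixelBuffer)),
        GLenum(GL_BGRA), GLenum(GL_UNSIGNED_BYTE),
        0, &texture)
    return texture
}
```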

The vital technique is to avoid copying anything: manipulate the pixels directly at their addresses.
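
For the CPU path, that means locking the buffer's base address and rewriting the bytes where they already live; no intermediate image or array is allocated. A sketch, assuming BGRA frames again, with a hypothetical color-inverting pass:

```swift
import CoreVideo

// Invert a BGRA pixel buffer in place.
func invertPixels(in pixelBuffer: CVPixelBuffer) {
    CVPixelBufferLockBaseAddress(pixelBuffer, [])
    defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, []) }
    guard let base = CVPixelBufferGetBaseAddress(pixelBuffer) else { return }

    let width = CVPixelBufferGetWidth(pixelBuffer)
    let height = CVPixelBufferGetHeight(pixelBuffer)
    // Rows can be padded, so always step by bytesPerRow, not width * 4.
    let bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer)
    let pixels = base.assumingMemoryBound(to: UInt8.self)

    for row in 0..<height {
        var offset = row * bytesPerRow
        for _ in 0..<width {
            pixels[offset]     = 255 - pixels[offset]     // B
            pixels[offset + 1] = 255 - pixels[offset + 1] // G
            pixels[offset + 2] = 255 - pixels[offset + 2] // R
            offset += 4                                   // skip A
        }
    }
}
```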

It seems to me that using an OpenGL shader is the better idea because:

1. It runs on the GPU, so it does not consume CPU time.
2. It is optimized for parallel work, as the shader sketch below illustrates.
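
For example, a per-pixel grayscale filter is only a few lines of GLSL ES, and the GPU runs `main()` once per fragment, all in parallel. `textureCoordinate` and `cameraTexture` are hypothetical names that the vertex shader and host code would have to supply:

```swift
// Fragment shader source, compiled at runtime with
// glCreateShader / glShaderSource / glCompileShader.
let grayscaleFragmentShader = """
varying highp vec2 textureCoordinate;
uniform sampler2D cameraTexture;

void main() {
    lowp vec4 color = texture2D(cameraTexture, textureCoordinate);
    lowp float gray = dot(color.rgb, vec3(0.299, 0.587, 0.114));
    gl_FragColor = vec4(vec3(gray), color.a);
}
"""
```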

I'm too lazy to post my full project code here.