I was checking out the CIDetector API in iOS 5, specifically using it with an AVCaptureSession to detect faces in video. I’d never played with the AV Foundation framework before, but it was pretty straightforward to hook up the front camera on the phone as an AVCaptureDevice and get the frames through an AVCaptureVideoDataOutput.
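Roughly, that setup looks like this (a sketch rather than my exact code - the preset and variable names are illustrative, and error handling is omitted):

```objc
// Sketch of the capture setup: front camera -> AVCaptureSession -> frame delegate.
AVCaptureSession *session = [[AVCaptureSession alloc] init];
session.sessionPreset = AVCaptureSessionPreset640x480;

// Find the front-facing camera.
AVCaptureDevice *frontCamera = nil;
for (AVCaptureDevice *device in [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo]) {
    if (device.position == AVCaptureDevicePositionFront) {
        frontCamera = device;
    }
}

NSError *error = nil;
AVCaptureDeviceInput *input =
    [AVCaptureDeviceInput deviceInputWithDevice:frontCamera error:&error];
[session addInput:input];

// Deliver frames as BGRA pixel buffers to a sample buffer delegate.
AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
output.videoSettings = [NSDictionary dictionaryWithObject:
        [NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
    forKey:(id)kCVPixelBufferPixelFormatTypeKey];
[output setSampleBufferDelegate:self queue:dispatch_get_main_queue()];
[session addOutput:output];

[session startRunning];
```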
However I ran into issues when trying to get CIDetector to find faces in the CIImage from the video. Eventually I came across the Apple SquareCam sample, which contained the solution!
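For context, the frames arrive in the sample buffer delegate callback, where you can wrap the pixel buffer in a CIImage and run the detector over it - something like this (a sketch; faceDetector is a hypothetical property holding a CIDetector created elsewhere):

```objc
// AVCaptureVideoDataOutputSampleBufferDelegate callback.
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CIImage *image = [CIImage imageWithCVPixelBuffer:pixelBuffer];

    // This is the call that silently found no faces in portrait - the
    // detector has no idea the frame is "sideways".
    NSArray *features = [self.faceDetector featuresInImage:image];
    NSLog(@"found %u faces", (unsigned)[features count]);
}
```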
The reference for CIDetector only mentions the - featuresInImage:(CIImage *)image selector; however there is also a - featuresInImage:(CIImage *)image options:(NSDictionary *)options selector, which can be used to provide a
CIDetectorImageOrientation key. This is documented in CIDetector.h (and appears to be available only on iOS, not Mac). The iPhone camera actually captures video “natively” in landscape mode, so if you’re holding the phone in portrait you need to tell the detector this (I’m sure this would have been obvious had I been more familiar with the AV Foundation framework).
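For reference, the detector itself comes from detectorOfType:context:options: - a typical face-detector setup looks something like this (the accuracy option is just the usual choice, not something specific to this fix):

```objc
// Create a face detector once and reuse it for every frame.
CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace
    context:nil
    options:[NSDictionary dictionaryWithObject:CIDetectorAccuracyLow
                                        forKey:CIDetectorAccuracy]];
```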
The SquareCam sample has a good chunk of code for converting a UIDeviceOrientation to the CIDetectorImageOrientation required by CIDetector (essentially an EXIF orientation). This includes an enum for the various EXIF orientations and handles the front and rear facing cameras correctly. But as a basic hack for working in portrait mode with the front facing camera, all you need to do is:
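Something along these lines (my reconstruction - the variable names are mine; the key part is passing orientation value 6):

```objc
// Tell the detector the frame is EXIF orientation 6 - i.e. a portrait
// capture from a camera that records landscape natively.
NSDictionary *imageOptions = [NSDictionary dictionaryWithObject:
        [NSNumber numberWithInt:6]
    forKey:CIDetectorImageOrientation];
NSArray *features = [detector featuresInImage:image options:imageOptions];
```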
(This is EXIF orientation 6 - the 0th row on the right and the 0th column on top - i.e. the image needs to be rotated 90º CCW to get it to the right orientation.)
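The fuller SquareCam-style mapping looks roughly like this (a sketch from memory of the sample rather than a verbatim copy - the constant values follow the EXIF spec, and the front camera differs in landscape because its image is mirrored):

```objc
// EXIF orientation values (per the EXIF spec).
enum {
    EXIF_0ROW_TOP_0COL_LEFT     = 1,  // landscape, default
    EXIF_0ROW_BOTTOM_0COL_RIGHT = 3,  // landscape, rotated 180º
    EXIF_0ROW_RIGHT_0COL_TOP    = 6,  // portrait, home button at the bottom
    EXIF_0ROW_LEFT_0COL_BOTTOM  = 8,  // portrait, upside down
};

// Sketch of mapping device orientation + camera position to an EXIF value
// (function name is mine).
static int exifOrientationFor(UIDeviceOrientation orientation, BOOL frontCamera)
{
    switch (orientation) {
        case UIDeviceOrientationPortraitUpsideDown:
            return EXIF_0ROW_LEFT_0COL_BOTTOM;
        case UIDeviceOrientationLandscapeLeft:   // home button on the right
            return frontCamera ? EXIF_0ROW_BOTTOM_0COL_RIGHT
                               : EXIF_0ROW_TOP_0COL_LEFT;
        case UIDeviceOrientationLandscapeRight:  // home button on the left
            return frontCamera ? EXIF_0ROW_TOP_0COL_LEFT
                               : EXIF_0ROW_BOTTOM_0COL_RIGHT;
        case UIDeviceOrientationPortrait:
        default:
            return EXIF_0ROW_RIGHT_0COL_TOP;
    }
}
```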
Anyway, this is probably obvious to more experienced folk, but having never played with the AV stuff I didn’t realise what the issue could be. I was happy to find the solution, so I’ll leave this here in case anyone else runs into the same problem.