iOS - Applying a CIFilter to a Video File and Saving it
Is there a fast, lightweight-as-possible way to apply a CIFilter to a video? Before it's mentioned: yes, I have looked at GPUImage. It looks like very powerful magic code, but it's really overkill for what I'm trying to do.
Essentially, I would like to:

- Take a video file, stored at /tmp/myVideoFile.mp4
- Apply a CIFilter to this video file
- Save the video file to a different (or the same) location, for example /tmp/anotherVideoFile.mp4
I've been able to apply a CIFilter to a video that's playing extremely easily and quickly using AVPlayerItemVideoOutput:
let player = AVPlayer(playerItem: AVPlayerItem(asset: video))
let output = AVPlayerItemVideoOutput(pixelBufferAttributes: nil)
player.currentItem?.addOutput(self.output)
player.play()

let displayLink = CADisplayLink(target: self, selector: #selector(self.displayLinkDidRefresh(_:)))
displayLink.addToRunLoop(NSRunLoop.mainRunLoop(), forMode: NSRunLoopCommonModes)

func displayLinkDidRefresh(link: CADisplayLink) {
    let itemTime = output.itemTimeForHostTime(CACurrentMediaTime())
    if output.hasNewPixelBufferForItemTime(itemTime) {
        if let pixelBuffer = output.copyPixelBufferForItemTime(itemTime, itemTimeForDisplay: nil) {
            let image = CIImage(CVPixelBuffer: pixelBuffer)
            // apply filters to image
            // display image
        }
    }
}
This works great, but I've been having just the tiniest bit of trouble finding out how to apply a filter to an already saved video file. There is the option of basically doing what I did above, using an AVPlayer, playing the video, and getting the pixel buffer from every frame as it is played, but this won't work for video processing in the background. I don't think users would appreciate having to wait as long as their video runs for the filter to be applied.
In way over-simplified code, I'm looking for something like this:
var newVideo = AVMutableAsset() // We'll just pretend this is a thing
var originalVideo = AVAsset(URL: NSURL(URLString: "/example/location.mp4"))
originalVideo.getAllFrames() { (pixelBuffer: CVPixelBuffer) -> Void in
    let image = CIImage(CVPixelBuffer: pixelBuffer)
        .imageByApplyingFilter("Filter", withInputParameters: [:])
    newVideo.addFrame(image)
}
newVideo.exportTo(url: NSURL(URLString: "/this/isAnother/example.mp4"))
Is there any fast way (again, not involving GPUImage, and ideally working in iOS 7) to apply a filter to a video file and save it? For example, something that would take a saved video, load it into an AVAsset, apply a CIFilter, and then save the new video to a different location.
In iOS 9 / OS X 10.11 / tvOS, there's a convenience method for applying CIFilters to video. It works on an AVVideoComposition, so you can use it both for playback and for file-to-file import/export. See AVVideoComposition.init(asset:applyingCIFiltersWithHandler:) for the method docs.
There's an example in Apple's Core Image Programming Guide, too:
let filter = CIFilter(name: "CIGaussianBlur")!
let composition = AVVideoComposition(asset: asset, applyingCIFiltersWithHandler: { request in

    // Clamp to avoid blurring transparent pixels at the image edges
    let source = request.sourceImage.clampingToExtent()
    filter.setValue(source, forKey: kCIInputImageKey)

    // Vary filter parameters based on video timing
    let seconds = CMTimeGetSeconds(request.compositionTime)
    filter.setValue(seconds * 10.0, forKey: kCIInputRadiusKey)

    // Crop the blurred output to the bounds of the original image
    let output = filter.outputImage!.cropping(to: request.sourceImage.extent)

    // Provide the filter output to the composition
    request.finish(with: output, context: nil)
})
That part sets up the composition. After you've done that, you can either play it by assigning it to an AVPlayer or write it to a file with AVAssetExportSession. Since you're after the latter, here's an example of that:
let export = AVAssetExportSession(asset: asset, presetName: AVAssetExportPreset1920x1080)
export.outputFileType = AVFileTypeQuickTimeMovie
export.outputURL = outURL
export.videoComposition = composition

export.exportAsynchronouslyWithCompletionHandler(/*...*/)
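The completion handler (elided above) is where you find out whether the export actually succeeded. A minimal sketch of what it might check, using the Swift 3 spelling exportAsynchronously and the outURL variable from the snippet:

export.exportAsynchronously {
    switch export.status {
    case .completed:
        print("wrote filtered video to \(outURL)")
    case .failed, .cancelled:
        print("export failed: \(String(describing: export.error))")
    default:
        break
    }
}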
There's a bit more about this in the WWDC15 session on Core Image, starting around 20 minutes in.
If you want a solution that works on earlier OS versions, it's a bit more complicated.
Aside: think about how far back you really need to support. As of August 15, 2016, 87% of devices are on iOS 9.0 or later, and 97% are on iOS 8.0 or later. Going to a lot of effort to support a small slice of your potential customer base (and it'll be even smaller by the time your project is done and ready to deploy) might not be worth the cost.
There are a couple of ways to go at this. Either way, you'll be getting CVPixelBuffers representing the source frames, creating CIImages from them, applying filters, and rendering out new CVPixelBuffers:
- Use AVAssetReader and AVAssetWriter to read and write pixel buffers. There are examples for how to do this (the reading and writing part; you still need to do the filtering in between) in the Export chapter of Apple's AVFoundation Programming Guide. A rough sketch of the full loop follows this list.
- Use AVVideoComposition with a custom compositor class. Your custom compositor is given AVAsynchronousVideoCompositionRequest objects that provide access to pixel buffers and a way for you to provide processed pixel buffers. Apple has a sample code project called AVCustomEdit that shows how to do this (again, the getting and returning of sample buffers; you'd want to process with Core Image instead of using their GL renderers). A skeletal Core Image compositor is sketched after the list as well.
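To give a sense of how the reader/writer pieces fit together, here is a minimal sketch (Swift 3 naming). It assumes a single video track, ignores audio, and polls isReadyForMoreMediaData instead of using requestMediaDataWhenReady(on:using:) as production code should; the function name filterVideo and the choice of CISepiaTone are just for illustration:

import AVFoundation
import CoreImage

func filterVideo(from sourceURL: URL, to destinationURL: URL) throws {
    let asset = AVAsset(url: sourceURL)
    let track = asset.tracks(withMediaType: AVMediaTypeVideo)[0]

    let reader = try AVAssetReader(asset: asset)
    let readerOutput = AVAssetReaderTrackOutput(track: track, outputSettings:
        [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA])
    reader.add(readerOutput)

    let writer = try AVAssetWriter(outputURL: destinationURL, fileType: AVFileTypeQuickTimeMovie)
    let writerInput = AVAssetWriterInput(mediaType: AVMediaTypeVideo, outputSettings: [
        AVVideoCodecKey: AVVideoCodecH264,
        AVVideoWidthKey: track.naturalSize.width,
        AVVideoHeightKey: track.naturalSize.height
    ])
    let adaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: writerInput,
        sourcePixelBufferAttributes: nil)
    writer.add(writerInput)

    reader.startReading()
    writer.startWriting()
    writer.startSession(atSourceTime: kCMTimeZero)

    let context = CIContext()
    while let sampleBuffer = readerOutput.copyNextSampleBuffer(),
          let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) {
        let time = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
        let image = CIImage(cvPixelBuffer: pixelBuffer)
            .applyingFilter("CISepiaTone", withInputParameters: [kCIInputIntensityKey: 0.8])

        // Render the filtered image into a fresh buffer from the adaptor's pool
        // (the pool is available once writing has started)
        var newBuffer: CVPixelBuffer?
        CVPixelBufferPoolCreatePixelBuffer(nil, adaptor.pixelBufferPool!, &newBuffer)
        context.render(image, to: newBuffer!)

        while !writerInput.isReadyForMoreMediaData {
            Thread.sleep(forTimeInterval: 0.01)
        }
        adaptor.append(newBuffer!, withPresentationTime: time)
    }

    writerInput.markAsFinished()
    writer.finishWriting { /* check writer.status */ }
}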
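And here is a skeletal sketch of the custom compositor approach, again with Swift 3 naming. The class name CIFilterCompositor and the hard-coded sepia filter are illustrative only; a real compositor would also handle multiple source tracks and cancellation:

import AVFoundation
import CoreImage

class CIFilterCompositor: NSObject, AVVideoCompositing {
    private let ciContext = CIContext()

    // Ask for BGRA buffers on both the input and output side
    var sourcePixelBufferAttributes: [String : Any]? {
        return [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA]
    }
    var requiredPixelBufferAttributesForRenderContext: [String : Any] {
        return [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA]
    }

    func renderContextChanged(_ newRenderContext: AVVideoCompositionRenderContext) {}

    func startRequest(_ request: AVAsynchronousVideoCompositionRequest) {
        guard let trackID = request.sourceTrackIDs.first?.int32Value,
              let sourceBuffer = request.sourceFrame(byTrackID: trackID),
              let outputBuffer = request.renderContext.newPixelBuffer() else {
            request.finish(with: NSError(domain: "CIFilterCompositor", code: -1, userInfo: nil))
            return
        }
        // Filter the source frame and render it into the fresh output buffer
        let filtered = CIImage(cvPixelBuffer: sourceBuffer)
            .applyingFilter("CISepiaTone", withInputParameters: [kCIInputIntensityKey: 0.8])
        ciContext.render(filtered, to: outputBuffer)
        request.finish(withComposedVideoFrame: outputBuffer)
    }
}

You attach it through the composition's customVideoCompositorClass, e.g. by building an AVMutableVideoComposition(propertiesOf: asset), setting composition.customVideoCompositorClass = CIFilterCompositor.self, and then assigning that composition to a player item or export session as above.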
Of those two, the AVVideoComposition route is more flexible, because you can use a composition for both playback and export.