ios - Applying a CIFilter to a Video File and Saving It


Is there a fast, lightweight-as-possible way to apply a CIFilter to a video? Before it's mentioned: yes, I have looked at GPUImage. It looks like very powerful magic code, but it's overkill for what I'm trying to do.

Essentially, I would like to:

  1. Take a video file, stored at /tmp/myVideoFile.mp4
  2. Apply a CIFilter to this video file
  3. Save the video file to a different (or the same) location, e.g. /tmp/anotherVideoFile.mp4

I've been able to apply a CIFilter to a video that's playing extremely easily and quickly using AVPlayerItemVideoOutput:

let player = AVPlayer(playerItem: AVPlayerItem(asset: video))
let output = AVPlayerItemVideoOutput(pixelBufferAttributes: nil)
player.currentItem?.addOutput(self.output)
player.play()

let displayLink = CADisplayLink(target: self, selector: #selector(self.displayLinkDidRefresh(_:)))
displayLink.addToRunLoop(NSRunLoop.mainRunLoop(), forMode: NSRunLoopCommonModes)

func displayLinkDidRefresh(link: CADisplayLink) {
    let itemTime = output.itemTimeForHostTime(CACurrentMediaTime())
    if output.hasNewPixelBufferForItemTime(itemTime) {
        if let pixelBuffer = output.copyPixelBufferForItemTime(itemTime, itemTimeForDisplay: nil) {
            let image = CIImage(CVPixelBuffer: pixelBuffer)
            // apply filters to image
            // display image
        }
    }
}
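(For illustration, the two placeholder comments there might be filled in along these lines; a hypothetical sepia example, with imageView standing in for whatever view displays the frame:)

// Hypothetical fill-in for the placeholder comments above: apply a sepia
// filter, then display the result (imageView is assumed, not in the original)
let filtered = image.imageByApplyingFilter("CISepiaTone",
    withInputParameters: [kCIInputIntensityKey: 0.8])
imageView.image = UIImage(CIImage: filtered)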

This works great, but I've been having just the tiniest bit of trouble finding out how to apply a filter to an already saved video file. There is the option of essentially doing what I did above, using an AVPlayer, playing the video, and getting the pixel buffer from every frame as it is played, but that won't work for video processing in the background. I don't think users would appreciate having to wait as long as their video runs for the filter to be applied.

In way over-simplified code, I'm looking for something like this:

var newVideo = AVMutableAsset() // We'll just pretend this is a thing

var originalVideo = AVAsset(url: NSURL(urlString: "/example/location.mp4"))
originalVideo.getAllFrames() { (pixelBuffer: CVPixelBuffer) -> Void in
    let image = CIImage(CVPixelBuffer: pixelBuffer)
        .imageByApplyingFilter("Filter", withInputParameters: [:])

    newVideo.addFrame(image)
}

newVideo.exportTo(url: NSURL(urlString: "/this/isAnother/example.mp4"))

Is there a fast (again, not involving GPUImage, and ideally working in iOS 7) way to apply a filter to a video file and save it? For example, something that would take a saved video, load it into an AVAsset, apply a CIFilter, and then save the new video to a different location.

In iOS 9 / OS X 10.11 / tvOS, there's a convenience method for applying CIFilters to video. Because it works on an AVVideoComposition, you can use it both for playback and for file-to-file import/export. See AVVideoComposition.init(asset:applyingCIFiltersWithHandler:) for the method docs.

There's an example in Apple's Core Image Programming Guide, too:

let filter = CIFilter(name: "CIGaussianBlur")!
let composition = AVVideoComposition(asset: asset, applyingCIFiltersWithHandler: { request in

    // Clamp to avoid blurring transparent pixels at the image edges
    let source = request.sourceImage.clampingToExtent()
    filter.setValue(source, forKey: kCIInputImageKey)

    // Vary filter parameters based on video timing
    let seconds = CMTimeGetSeconds(request.compositionTime)
    filter.setValue(seconds * 10.0, forKey: kCIInputRadiusKey)

    // Crop the blurred output to the bounds of the original image
    let output = filter.outputImage!.cropping(to: request.sourceImage.extent)

    // Provide the filter output to the composition
    request.finish(with: output, context: nil)
})

That part sets up the composition. After you've done that, you can either play it using AVPlayer or write it to a file with AVAssetExportSession. Since you're after the latter, here's an example of that:

let export = AVAssetExportSession(asset: asset, presetName: AVAssetExportPreset1920x1080)
export.outputFileType = AVFileTypeQuickTimeMovie
export.outputURL = outURL
export.videoComposition = composition

export.exportAsynchronouslyWithCompletionHandler(/*...*/)
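The completion handler is elided above; a minimal sketch of what it typically checks (assuming the export and outURL from that snippet) might look like:

export.exportAsynchronouslyWithCompletionHandler {
    // Runs on an arbitrary background queue when the export finishes
    switch export.status {
    case .Completed:
        print("wrote filtered video to \(outURL)")
    case .Failed:
        print("export failed: \(export.error)")
    default:
        break
    }
}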

There's a bit more about this in the WWDC15 session on Core Image, starting around 20 minutes in.


If you want a solution that works on earlier OSes, it's a bit more complicated.

An aside: think about how far back you really need to support. As of August 15, 2016, 87% of devices are on iOS 9.0 or later, and 97% are on iOS 8.0 or later. Going to a lot of effort to support a small slice of your potential customer base (and it'll be an even smaller slice by the time you get your project done and ready to deploy) might not be worth the cost.

There are a couple of ways to go at this. Either way, you'll be getting CVPixelBuffers representing the source frames, creating CIImages from them, applying filters, and rendering out new CVPixelBuffers.

  1. Use AVAssetReader and AVAssetWriter to read and write pixel buffers. There are examples of how to do this (the reading and writing part; you still need to do the filtering in between) in the Export chapter of Apple's AVFoundation Programming Guide. A rough sketch of the whole loop follows after this list.

  2. Use AVVideoComposition with a custom compositor class. Your custom compositor is given AVAsynchronousVideoCompositionRequest objects that provide access to pixel buffers and a way for you to provide processed pixel buffers. Apple has a sample code project called AVCustomEdit that shows how to do this (again, the getting and returning of sample buffers; you'd want to process with Core Image instead of using their GL renderers).
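To make option 1 concrete, here's a rough, untested sketch of the read-filter-write loop in the Swift 2 style of the code above. It ignores audio tracks, orientation, and most error handling, and the sepia filter is just a placeholder:

import AVFoundation
import CoreImage

// Sketch: read frames with AVAssetReader, filter each with Core Image,
// and write them back out with AVAssetWriter
func filterVideo(asset: AVAsset, toURL outURL: NSURL) throws {
    let track = asset.tracksWithMediaType(AVMediaTypeVideo)[0]
    let bufferAttributes =
        [kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_32BGRA)]

    let reader = try AVAssetReader(asset: asset)
    let readerOutput = AVAssetReaderTrackOutput(track: track,
        outputSettings: bufferAttributes)
    reader.addOutput(readerOutput)

    let writer = try AVAssetWriter(URL: outURL, fileType: AVFileTypeQuickTimeMovie)
    let writerInput = AVAssetWriterInput(mediaType: AVMediaTypeVideo, outputSettings: [
        AVVideoCodecKey: AVVideoCodecH264,
        AVVideoWidthKey: Int(track.naturalSize.width),
        AVVideoHeightKey: Int(track.naturalSize.height)])
    let adaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: writerInput,
        sourcePixelBufferAttributes: bufferAttributes)
    writer.addInput(writerInput)

    reader.startReading()
    writer.startWriting()
    writer.startSessionAtSourceTime(kCMTimeZero)

    let context = CIContext()
    let filter = CIFilter(name: "CISepiaTone")! // placeholder filter

    while let sample = readerOutput.copyNextSampleBuffer() {
        guard let sourceBuffer = CMSampleBufferGetImageBuffer(sample) else { continue }
        let time = CMSampleBufferGetPresentationTimeStamp(sample)

        // Filter the frame with Core Image
        filter.setValue(CIImage(CVPixelBuffer: sourceBuffer), forKey: kCIInputImageKey)

        // Render the result into a fresh pixel buffer and append it
        // (the adaptor's pool is available once startWriting has been called)
        var newBuffer: CVPixelBuffer?
        CVPixelBufferPoolCreatePixelBuffer(nil, adaptor.pixelBufferPool!, &newBuffer)
        context.render(filter.outputImage!, toCVPixelBuffer: newBuffer!)

        // A real implementation would use requestMediaDataWhenReadyOnQueue
        // instead of spinning here
        while !writerInput.readyForMoreMediaData {}
        adaptor.appendPixelBuffer(newBuffer!, withPresentationTime: time)
    }

    writerInput.markAsFinished()
    writer.finishWritingWithCompletionHandler {}
}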

Of those two options, the AVVideoComposition route is more flexible, because you can use a composition both for playback and for export.
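And for a sense of the shape of option 2, here's a minimal, untested sketch of a custom compositor (assuming a single video track, BGRA buffers, and a hard-coded placeholder filter); AVCustomEdit covers the full protocol, including cancellation:

import AVFoundation
import CoreImage

// Sketch: an AVVideoCompositing implementation that runs each frame
// through a Core Image filter
class CIFilterCompositor: NSObject, AVVideoCompositing {
    let context = CIContext()
    let filter = CIFilter(name: "CISepiaTone")! // placeholder filter

    var sourcePixelBufferAttributes: [String : AnyObject]? {
        return [kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_32BGRA)]
    }
    var requiredPixelBufferAttributesForRenderContext: [String : AnyObject] {
        return [kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_32BGRA)]
    }

    func renderContextChanged(newRenderContext: AVVideoCompositionRenderContext) {}

    func startVideoCompositionRequest(request: AVAsynchronousVideoCompositionRequest) {
        // Assumes a single source track; a real compositor consults the
        // instruction's requiredSourceTrackIDs
        guard let trackID = request.sourceTrackIDs.first,
            source = request.sourceFrameByTrackID(trackID.intValue),
            output = request.renderContext.newPixelBuffer() else {
                request.finishWithError(NSError(domain: "CIFilterCompositor",
                    code: -1, userInfo: nil))
                return
        }
        filter.setValue(CIImage(CVPixelBuffer: source), forKey: kCIInputImageKey)
        context.render(filter.outputImage!, toCVPixelBuffer: output)
        request.finishWithComposedVideoFrame(output)
    }
}

You'd then point an AVMutableVideoComposition's customVideoCompositorClass at this class before playing or exporting.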

