videoflint / cabbage
A video composition framework built on top of AVFoundation. It's simple to use and easy to extend.
License: MIT License
Discussion:
The current structure supports only one main audio channel and one main video channel; there is no way to add more channels.
Overlays can hold additional video data, but because AVComposition assigns them to different track IDs, there is currently a hard limit of 16.
The same limitation applies to audios.
Goals:
Support multiple channels.
Improve the overlay implementation so the number of overlays is not limited.
No better improvement for audios yet; they could be optimized to handle more audio-timing scenarios.
I am playing a few video clips together using this marvelous library. From time to time I also need to append a new video clip and play the whole video right after adding it. As the number of videos increases, regenerating the player item takes longer.
Is there any way to remove this delay?
Can we do changes to the timeline dynamically and see them on preview(player item) without rebuilding the composition?
When a video is played in reverse, can the audio be reversed as well?
Hi Vitoziv!
I am currently using your library in some video editing projects and it is really awesome. Thank you for your efforts to create this library and to make editing videos easier.
While using your library I have encountered some issues that I would like to share and look forward to your feedback.
I often have to change track items and rebuild the player item, e.g. when changing the URL of a track item.
But I'm running into performance issues: building the player item takes a long time and uses a lot of device resources.
So is there a way to apply only the changes instead of rebuilding the whole player item?
For example, in the code below, when I want to change the URL of bambooTrackItem, I currently rebuild the compositionGenerator and the player item.
Looking forward to hearing from you. Thank you!
let bambooTrackItem: TrackItem = {
let url = Bundle.main.url(forResource: "bamboo", withExtension: "mp4")!
let resource = AVAssetTrackResource(asset: AVAsset(url: url))
let trackItem = TrackItem(resource: resource)
trackItem.videoConfiguration.contentMode = .aspectFit
return trackItem
}()
let overlayTrackItem: TrackItem = {
let url = Bundle.main.url(forResource: "overlay", withExtension: "jpg")!
let image = CIImage(contentsOf: url)!
let resource = ImageResource(image: image, duration: CMTime.init(seconds: 5, preferredTimescale: 600))
let trackItem = TrackItem(resource: resource)
trackItem.videoConfiguration.contentMode = .aspectFit
return trackItem
}()
let seaTrackItem: TrackItem = {
let url = Bundle.main.url(forResource: "sea", withExtension: "mp4")!
let resource = AVAssetTrackResource(asset: AVAsset(url: url))
let trackItem = TrackItem(resource: resource)
trackItem.videoConfiguration.contentMode = .aspectFit
return trackItem
}()
let transitionDuration = CMTime(seconds: 2, preferredTimescale: 600)
bambooTrackItem.videoTransition = PushTransition(duration: transitionDuration)
bambooTrackItem.audioTransition = FadeInOutAudioTransition(duration: transitionDuration)
overlayTrackItem.videoTransition = BoundingUpTransition(duration: transitionDuration)
let timeline = Timeline()
timeline.videoChannel = [bambooTrackItem, overlayTrackItem, seaTrackItem]
timeline.audioChannel = [bambooTrackItem, seaTrackItem]
do {
try Timeline.reloadVideoStartTime(providers: timeline.videoChannel)
} catch {
assert(false, error.localizedDescription)
}
timeline.renderSize = CGSize(width: 1920, height: 1080)
let compositionGenerator = CompositionGenerator(timeline: timeline)
let playerItem = compositionGenerator.buildPlayerItem()
return playerItem
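As far as this thread shows, Cabbage has no incremental-update API, so a full rebuild is unavoidable; one common mitigation is to build the player item off the main thread and swap it in when ready. A minimal sketch (the queue label and function name are hypothetical):

```swift
import AVFoundation
import Cabbage

// Sketch: rebuild off the main thread so the UI stays responsive.
// `rebuildQueue` and `rebuildPlayerItem` are hypothetical names.
let rebuildQueue = DispatchQueue(label: "timeline.rebuild")

func rebuildPlayerItem(timeline: Timeline, player: AVPlayer) {
    rebuildQueue.async {
        let generator = CompositionGenerator(timeline: timeline)
        let playerItem = generator.buildPlayerItem()
        DispatchQueue.main.async {
            // Swap in the freshly built item on the main thread.
            player.replaceCurrentItem(with: playerItem)
        }
    }
}
```

This doesn't remove the rebuild cost, but it moves the stall off the UI thread.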
Hello, I'd really like to understand how your custom transition animations are implemented. How do they work, and if I want to define my own transition style, how would I do that?
Hi vitoziv, I cannot play audio in a specific time range after adding it. This is my source code. audio1.mp3 does not play after I set startTime and duration; if I don't set them, it plays fine. Thank you.
let video1TrackItem: TrackItem = {
let url = Bundle.main.url(forResource: "video1", withExtension: "mp4")!
let resource = AVAssetTrackResource(asset: AVAsset(url: url))
let trackItem = TrackItem(resource: resource)
trackItem.videoConfiguration.contentMode = .aspectFit
return trackItem
}()
let mp3TrackItem: TrackItem = {
let url = Bundle.main.url(forResource: "audio1", withExtension: "mp3")!
let resource = AVAssetTrackResource(asset: AVAsset(url: url))
let trackItem = TrackItem(resource: resource)
trackItem.startTime = CMTime(value: 1, timescale: 1)
trackItem.duration = CMTime(value: 10, timescale: 1)
return trackItem
}()
let timeline = Timeline()
timeline.videoChannel = [video1TrackItem]
timeline.audioChannel = [video1TrackItem]
timeline.audios = [mp3TrackItem]
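Based on how other snippets in this thread use the API, a possible fix is to trim the source audio via the resource's selectedTimeRange and use startTime only to position the clip on the timeline. This is a sketch of that reading, not a confirmed fix:

```swift
import AVFoundation
import Cabbage

// Sketch (assumption, mirroring other snippets in this thread):
// `selectedTimeRange` trims the source; `startTime` places it on the timeline.
let mp3TrackItem: TrackItem = {
    let url = Bundle.main.url(forResource: "audio1", withExtension: "mp3")!
    let resource = AVAssetTrackResource(asset: AVAsset(url: url))
    // Take the first 10 seconds of the source file.
    resource.selectedTimeRange = CMTimeRange(start: .zero,
                                             duration: CMTime(seconds: 10, preferredTimescale: 600))
    let trackItem = TrackItem(resource: resource)
    // Start playing it 1 second into the timeline.
    trackItem.startTime = CMTime(seconds: 1, preferredTimescale: 600)
    return trackItem
}()
```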
I have a list of video AVAssets.
I want to merge them into 1 video with their corresponding audios.
But the videos are sometimes portrait, sometimes square, sometimes landscape (they may have arbitrarily different widths and heights). I want the merged videos to stay aspect-fit within the size frame of the first video.
Is this possible with Cabbage ?
I'm having a hard time understanding your "timeline" concept.
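One way to read the question: derive the timeline's render size from the first asset's video track, then aspect-fit every clip into that frame. A sketch under that assumption, using only APIs that appear elsewhere in this thread plus standard AVFoundation:

```swift
import AVFoundation
import Cabbage

// Sketch (assumption): render size = first video's natural size, with its
// preferred transform applied so portrait clips keep a portrait frame.
func buildTimeline(assets: [AVAsset]) -> Timeline {
    let timeline = Timeline()
    let items: [TrackItem] = assets.map { asset in
        let item = TrackItem(resource: AVAssetTrackResource(asset: asset))
        item.videoConfiguration.contentMode = .aspectFit // letterbox into the render frame
        return item
    }
    timeline.videoChannel = items
    timeline.audioChannel = items
    try? Timeline.reloadVideoStartTime(providers: timeline.videoChannel)

    if let track = assets.first?.tracks(withMediaType: .video).first {
        let size = track.naturalSize.applying(track.preferredTransform)
        timeline.renderSize = CGSize(width: abs(size.width), height: abs(size.height))
    }
    return timeline
}
```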
the player display area is black!
Following the VideoCat demo, I selected an image from the photo library:
let resource = PHAssetImageResource.init(asset: asset, duration: CMTime.init(value: 3000, timescale: 600))
guard let task = resource.prepare(progressHandler: progressHandler, completion: { (status, error) in
if status == .avaliable {
resource.selectedTimeRange = CMTimeRange.init(start: CMTime.zero, end: resource.duration)
let trackItem: TrackItem = TrackItem(resource: resource)
let transition = CrossDissolveTransition()
transition.duration = CMTime(value: 900, timescale: 600)
trackItem.videoTransition = transition
let audioTransition = FadeInOutAudioTransition(duration: CMTime(value: 66150, timescale: 44100))
trackItem.audioTransition = audioTransition
if resource.isKind(of: ImageResource.self) {
trackItem.videoConfiguration.contentMode = .custom
} else {
trackItem.videoConfiguration.contentMode = .aspectFill
}
complete(trackItem)
} else {
Log.error("image track status is \(status), check prepare function, error: \(error?.localizedDescription ?? "")")
complete(nil)
}
})
I then verified the image selection: inside resource.prepare the image exists and the status is available.
public func reloadPlayerItem(_ items: [TrackItem]) -> AVPlayerItem {
let timeLine = TimeLineManager.current.timeline
let width = UIScreen.main.bounds.width * UIScreen.main.scale
let height = width
timeLine.videoChannel = items
timeLine.audioChannel = items
do {
try Timeline.reloadVideoStartTime(providers: timeLine.videoChannel)
} catch {
assert(false, error.localizedDescription)
}
timeLine.renderSize = CGSize.init(width: width, height: height)
let compositonGenerator = CompositionGenerator.init(timeline: timeLine)
return compositonGenerator.buildPlayerItem()
}
This is the buildItem method. Videos and Live Photos play fine; only still images fail to play, and the duration shown in the player is also wrong.
master branch.
Xcode 10.2.2
Swift 5.0
I wrote a simple demo as below:
let backgroundTrackItem: TrackItem = {
let url = Bundle.main.url(forResource: "bamboo", withExtension: "mp4")!
let resource = AVAssetTrackResource(asset: AVAsset(url: url))
let trackItem = TrackItem(resource: resource)
trackItem.videoConfiguration.contentMode = .aspectFit
return trackItem
}()
let overlay1: TrackItem = {
let url = Bundle.main.url(forResource: "filmstrip_background1", withExtension: "png")!
let image = CIImage(contentsOf: url)!
let resource = ImageResource(image: image, duration: CMTime.init(seconds: 7, preferredTimescale: 600))
let trackItem = TrackItem(resource: resource)
trackItem.startTime = CMTime.init(seconds: 0, preferredTimescale: 600)
trackItem.videoConfiguration.contentMode = .aspectFit
let overlayHeight = renderSize.width/4
let frame = CGRect.init(x: 0, y: 0, width: renderSize.width, height: overlayHeight)
trackItem.videoConfiguration.contentMode = .custom
trackItem.videoConfiguration.frame = frame;
return trackItem
}()
let overlay2: TrackItem = {
let url = Bundle.main.url(forResource: "black_overlay", withExtension: "png")!
let image = CIImage(contentsOf: url)!
let resource = ImageResource(image: image, duration: CMTime.init(seconds: 7, preferredTimescale: 600))
let trackItem = TrackItem(resource: resource)
trackItem.startTime = CMTime.init(seconds: 2, preferredTimescale: 600)
trackItem.videoConfiguration.contentMode = .custom
trackItem.videoConfiguration.frame = CGRect(x: 0, y: 0, width: renderSize.width/4, height: renderSize.height/4)
return trackItem
}()
let timeline = Timeline()
timeline.videoChannel = [backgroundTrackItem]
timeline.overlays = [overlay1, overlay2]
do {
try Timeline.reloadVideoStartTime(providers: timeline.videoChannel)
} catch {
assert(false, error.localizedDescription)
}
timeline.renderSize = renderSize;
let compositionGenerator = CompositionGenerator(timeline: timeline)
let playerItem = compositionGenerator.buildPlayerItem()
return playerItem
This code always returns a black video. If I change the start time of overlay2 to 0, it works well.
So how can I customize the start time of each overlay on the video timeline?
P.S.: Another question: how can I add a text overlay on the video?
Is it possible to control the volume of audios, or of a video's audio track, without rebuilding the timeline?
Swift Package Manager expects the version to be described as
"three period-separated integers"
https://developer.apple.com/documentation/xcode/publishing_a_swift_package_with_xcode
Could we use three numbers going forward, and release a 0.5.1?
dyld: Library not loaded: @rpath/libswiftAVFoundation.dylib
Referenced from: /var/containers/Bundle/Application/769EC65D-11F5-4A32-A297-D95EF256AC93/Cabbage.app/Cabbage
Reason: no suitable image found. Did find:
/private/var/containers/Bundle/Application/769EC65D-11F5-4A32-A297-D95EF256AC93/Cabbage.app/Frameworks/libswiftAVFoundation.dylib: code signing blocked mmap() of '/private/var/containers/Bundle/Application/769EC65D-11F5-4A32-A297-D95EF256AC93/Cabbage.app/Frameworks/libswiftAVFoundation.dylib'
The file EnlargeImageView.swift is not in the downloaded bundle
Basic example -
import CoreImage

public class MyFilter: VideoConfigurationProtocol {
    var filter: TestFilter!
    public var setIntensity: CGFloat = 0.3
    public init() { }
    public func applyEffect(to sourceImage: CIImage, info: VideoConfigurationEffectInfo) -> CIImage {
        // Re-create the filter each frame and apply the current intensity.
        filter = TestFilter()
        filter.inputImage = sourceImage
        filter.intensity = setIntensity
        return filter.outputImage!
    }
}
Then when appending this class into trackItem.videoConfiguration.configurations, would there be a preferred method to adjust the setIntensity variable with a UISlider?
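One approach, assuming applyEffect(to:info:) is called per rendered frame so a mutated property is picked up on the next frame request (an assumption, not confirmed by this thread): keep a reference to the MyFilter instance you appended and update it from the slider. The controller class and names below are hypothetical:

```swift
import UIKit

// Sketch (assumption): the player re-renders frames continuously, so a new
// intensity value is read on the next frame; no timeline rebuild is needed.
final class FilterIntensityController {
    let myFilter = MyFilter()   // the instance appended to the configuration
    let slider = UISlider()

    init(trackItem: TrackItem) {
        trackItem.videoConfiguration.configurations.append(myFilter)
        slider.minimumValue = 0
        slider.maximumValue = 1
        slider.addTarget(self, action: #selector(sliderChanged(_:)), for: .valueChanged)
    }

    @objc private func sliderChanged(_ sender: UISlider) {
        // applyEffect(to:info:) reads setIntensity each frame.
        myFilter.setIntensity = CGFloat(sender.value)
    }
}
```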
UPD: I was trying it without a player; after checking with an AVPlayer instance I see that it actually loops the video, but only in the player, not in AVAssetExportSession's output.
So the question now is: why does the result differ when I export the composition?
I'm creating a video editing app and I have functionality to add transitions and audios.
When adding transitions, without added audios, it renders fine, with transitions and all.
Also when adding audios, without added transitions, it renders fine, audio plays fine.
However, if I add a transition and audio on the timeline, only the audio plays and the video is only black. Here's my code.
private func buildTracks() {
var videoChannel: [TrackItem] = []
var audioChannel: [TrackItem] = []
for asset in assets {
let resource = trackResource(for: asset)
let trackItem = TrackItem(resource: resource)
trackItem.videoConfiguration.contentMode = .aspectFit
switch asset.transition {
case 1:
let transitionDuration = CMTime(seconds: 0.5, preferredTimescale: preferredTimeScale)
let transition = CrossDissolveTransition(duration: transitionDuration)
trackItem.videoTransition = transition
print("CROSS DISSOLVE")
case 2:
let transitionDuration = CMTime(seconds: 0.5, preferredTimescale: preferredTimeScale)
let transition = FadeTransition(duration: transitionDuration)
trackItem.videoTransition = transition
print("FADE BLACK")
case 3:
let transitionDuration = CMTime(seconds: 0.5, preferredTimescale: preferredTimeScale)
let transition = FadeTransition(duration: transitionDuration)
trackItem.videoTransition = transition
print("FADE WHITE")
default:
trackItem.videoTransition = nil
print("NONE")
}
if let asset = asset as? VVideoAsset {
trackItem.audioConfiguration.volume = asset.volume
}
videoChannel.append(trackItem)
audioChannel.append(trackItem)
let filterConfigurations = videoEdit.filters.map { FilterConfiguration(filter: $0, totalDuration: totalDuration) }
trackItem.videoConfiguration.configurations = filterConfigurations
}
timeline.videoChannel = videoChannel
timeline.audioChannel = audioChannel
}
private func buildAudios() -> [AudioProvider] {
var audios: [AudioProvider] = []
videoEdit.audios.forEach { (audio) in
guard let audioURL = audio.audioAsset.mp3Path else {
return
}
let audioAsset = AVAsset(url: audioURL)
let resource = AVAssetTrackResource(asset: audioAsset)
let duration = audio.duration * totalDuration
resource.selectedTimeRange = CMTimeRange.init(start: CMTime.zero, end: CMTimeMakeWithSeconds(duration, preferredTimescale: preferredTimeScale))
let audioTrackItem = TrackItem(resource: resource)
audioTrackItem.audioConfiguration.volume = audio.volume
audioTrackItem.startTime = CMTimeMakeWithSeconds(audio.componentStart * totalDuration, preferredTimescale: preferredTimeScale)
audios.append(audioTrackItem)
}
return audios
}
private func buildAddedComponents() {
timeline.audios = buildAudios()
timeline.overlays = buildOverlays()
}
Exploring real-time reverse playback.
Hi,
We'd like to add a transition to start the video but we're not sure how to accomplish this. For example, we'd like to fade in from black to start the video. Thank you for this great library!
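Since transitions in this library sit between clips, one workaround (an assumption, not a confirmed technique) is to prepend a short black clip and cross-dissolve into the first real clip, which reads as a fade-in from black. A sketch using only APIs shown elsewhere in this thread:

```swift
import AVFoundation
import CoreImage
import Cabbage

// Sketch (assumption): dissolving from a black clip to the first real clip
// looks like a fade-in from black.
func fadeInFromBlack(firstClip: TrackItem, renderSize: CGSize) -> [TrackItem] {
    let black = CIImage(color: CIColor(red: 0, green: 0, blue: 0))
        .cropped(to: CGRect(origin: .zero, size: renderSize))
    let blackResource = ImageResource(image: black,
                                      duration: CMTime(seconds: 1, preferredTimescale: 600))
    let blackItem = TrackItem(resource: blackResource)
    blackItem.videoTransition = CrossDissolveTransition(duration: CMTime(seconds: 1, preferredTimescale: 600))
    return [blackItem, firstClip]  // assign to timeline.videoChannel
}
```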
After debugging, I found that in VideoCompositionInstruction's method
open func apply(request: AVAsynchronousVideoCompositionRequest) -> CIImage?
the request's sourceTrackIDs becomes nil after 16 seconds, and calling sourceFrame(byTrackID trackID: CMPersistentTrackID) also returns nil.
Any help finding the cause would be appreciated.
Below is my code for creating an AVPlayerItem from images:
func makePlayerItemFromImages(_ images: [UIImage]) {
let ciImages = images.compactMap { $0.cgImage }.map { CIImage.init(cgImage: $0) }
let resources = ciImages.map { ImageResource(image: $0) }
for item in resources {
let timeRange = CMTimeRange(start: kCMTimeZero, duration: CMTimeMake(100, 100))
item.duration = timeRange.duration
item.selectedTimeRange = timeRange
}
let items = resources.map { TrackItem(resource: $0) }
var timeLineDuration = kCMTimeZero
items.forEach {
$0.configuration.videoConfiguration.baseContentMode = .aspectFit
let timeRange = CMTimeRange(start: timeLineDuration, duration: $0.resource.duration)
$0.configuration.timelineTimeRange = timeRange
timeLineDuration = CMTimeAdd(timeLineDuration, timeRange.duration)
}
let timeline = Timeline()
timeline.videoChannel = items
let compositionGenerator = CompositionGenerator(timeline: timeline)
compositionGenerator.renderSize = CGSize(width: 480, height: 480)
let playerItem = compositionGenerator.buildPlayerItem()
let controller = AVPlayerViewController.init()
controller.player = AVPlayer.init(playerItem: playerItem)
controller.view.backgroundColor = UIColor.white
present(controller, animated: true, completion: nil)
}
I want to build an application that adds multiple music tracks to a video.
I have one video track item and several audio track items.
I need to mix several audio tracks (with different start times) with the video.
Ref: I referenced this page --> https://github.com/vitoziv/VideoCat
Example:
video: ___________[=======================] video track
audios : [--------===========] audio track 1
____________[-----========] audio track 2
__________________[=======================--------] audio track 3
____[-------------=======================---------] audio track 4
Note:
====== : available
-------- : unavailable
video: ______________________[=======================] video track
audio track 1:___[~~~~~~~~===========] offsetTime < 0
audio track 2:_______________~~~~~~~~[===========] offsetTime > 0
==> ~~~~~ : offsetTime
video__:[========================60s===================] total
result: 60s --> 20s with lowtime 20s, uptime 40s
trimed_:[[-----20s-----][======20s=====][______20s_____]
trimed_:[[+++++++++++++++++++++++++++++]
==> [----] lowtime
==> [+++] up time
convenience init --> AudioData :
self.offsetTime = offsetTime
self.lowTime = lowTime > 0.0 ? lowTime : 0.0
let upValue = max(upTime, lowTime)
self.upTime = upValue
let durationValue = upValue - lowTime
self.duration = durationValue
let startTime = lowTime + offsetTime
self.startTime = startTime
let distance = min(durationValue, videoDuration)
let available = distance - startTime
self.available = available
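The init sketch above can be written out as a self-contained Swift type. This is a direct reading of the pseudocode (field names taken from it; all times assumed to be in seconds), not confirmed code from the project:

```swift
// A reading of the AudioData pseudocode above; all times in seconds.
struct AudioData {
    let offsetTime: Double   // shift of the audio relative to the video start
    let lowTime: Double      // trim-in point within the audio
    let upTime: Double       // trim-out point within the audio
    let duration: Double     // selected audio length
    let startTime: Double    // position on the video timeline
    let available: Double    // portion that actually fits the video

    init(offsetTime: Double, lowTime: Double, upTime: Double, videoDuration: Double) {
        self.offsetTime = offsetTime
        self.lowTime = lowTime > 0.0 ? lowTime : 0.0
        let upValue = max(upTime, lowTime)
        self.upTime = upValue
        let durationValue = upValue - lowTime
        self.duration = durationValue
        let startTime = lowTime + offsetTime
        self.startTime = startTime
        let distance = min(durationValue, videoDuration)
        self.available = distance - startTime
    }
}
```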
My code:
Video track item:
let asset = AVAsset(url: url)
let resource = AVAssetTrackResource(asset: asset)
let lowTime = CMTime(seconds: video.lowTime, preferredTimescale: 600)
//default lowtime = 0.0
let durationTime = CMTime(seconds: video.duration, preferredTimescale: 600)
//default video.duration = total time of video
resource.selectedTimeRange = CMTimeRange(start: CMTime.zero, duration: durationTime)
let videoTrackItem = TrackItem(resource: resource)
videoTrackItem.startTime = lowTime
videoTrackItem.videoConfiguration.contentMode = .aspectFill
videoTrackItem.audioConfiguration.volume = video.volume
context.viewModel.addVideoTrackItem(videoTrackItem)
context.videoView.player.replaceCurrentItem(context.viewModel.playerItem)
context.timelineView.reload(with: context.viewModel.videoTrackItems)
An audio track item:
let asset = AVAsset(url: url)
let resource = AVAssetTrackResource(asset: asset)
print("lowtime \(audio.lowTime)") // default low time = 0.0
print("upTime \(audio.upTime)")
print("startTime \(audio.startTime)")
print("available \(audio.available)")
print("videoDuration \(self.durationTimeOfVideo)")
let startTime = CMTime(seconds: audio.startTime, preferredTimescale: 600)
let availableTime = CMTime(seconds: audio.available, preferredTimescale: 600)
resource.selectedTimeRange = CMTimeRange(start: CMTime.zero, duration: availableTime)
let trackItem = TrackItem(resource: resource)
trackItem.startTime = startTime
trackItem.audioConfiguration.volume = audio.volume
TimelineViewModel
class TimelineManager {
static let current = TimelineManager()
var timeline = Timeline()
}
class TimelineViewModel {
// MARK: - Vars
private(set) var audioTrackItems = [TrackItem]()
private(set) var videoTrackItems = [TrackItem]()
private(set) var renderSize: CGSize = CGSize.zero
private(set) var lut: String = "original_lut"
private(set) var playerItem = AVPlayerItem(asset: AVComposition())
func buildTimeline() -> Timeline {
let timeline = TimelineManager.current.timeline
reloadTimeline(timeline)
return timeline
}
// MARK: - Add/Replace
func addVideoTrackItem(_ trackItem: TrackItem) {
videoTrackItems.append(trackItem)
reloadPlayerItems()
}
func insertTrackItem(_ trackItem: TrackItem, at index: Int) {
guard audioTrackItems.count >= index else { return }
audioTrackItems.insert(trackItem, at: index)
reloadPlayerItems()
}
func updateTrackItem(_ trackItem: TrackItem, at index: Int) {
guard audioTrackItems.count > index else { return }
audioTrackItems[index] = trackItem
reloadPlayerItems()
}
func removeTrackItem(_ trackItem: TrackItem) {
guard let index = audioTrackItems.index(of: trackItem) else { return }
audioTrackItems.remove(at: index)
reloadPlayerItems()
}
func removeTrackItem(at index: Int) {
guard audioTrackItems.count > index else { return }
audioTrackItems.remove(at: index)
reloadPlayerItems()
}
func removeAllAudioTrackItems() {
audioTrackItems.removeAll()
reloadPlayerItems()
}
func removeAll() {
audioTrackItems.removeAll()
videoTrackItems.removeAll()
reloadPlayerItems()
}
func reloadPlayerItems() {
let timeline = TimelineManager.current.timeline
timeline.renderSize = renderSize
reloadTimeline(timeline)
do {
try Timeline.reloadVideoStartTime(providers: videoTrackItems)
} catch {
assert(false, error.localizedDescription)
}
build(with: timeline)
}
fileprivate func build(with timeline: Timeline) {
let compositionGenerator = CompositionGenerator(timeline: timeline)
let playerItem = compositionGenerator.buildPlayerItem()
self.playerItem = playerItem
}
fileprivate func reloadTimeline(_ timeline: Timeline) {
timeline.videoChannel = videoTrackItems
timeline.audios = videoTrackItems + audioTrackItems
}
extension TrackItem {
func reloadTimelineDuration() {
self.duration = self.resource.selectedTimeRange.duration
}
}
==> Errors:
I have been working on creating a video collage with a layer border, but even though the library works fine, I cannot apply any transformation or border layer.
As the title says: after installing via pod, an ImageCompositionGroupProvider instance cannot be created, because there is no public initializer.
Also, could the videoComposition generated by CompositionGenerator's buildVideoComposition() method support a custom frameDuration? Currently videos are uniformly rendered at 30 FPS.
public func buildVideoComposition() -> AVVideoComposition? {
if let videoComposition = self.videoComposition, !needRebuildVideoComposition {
return videoComposition
}
buildComposition()
var layerInstructions: [VideoCompositionLayerInstruction] = []
mainVideoTrackInfo.forEach { info in
info.info.forEach({ (provider) in
let layerInstruction = VideoCompositionLayerInstruction.init(trackID: info.track.trackID, videoCompositionProvider: provider)
layerInstruction.prefferdTransform = info.track.preferredTransforms[provider.timeRange.vf_identifier]
layerInstruction.timeRange = provider.timeRange
layerInstruction.transition = provider.videoTransition
layerInstructions.append(layerInstruction)
})
}
overlayTrackInfo.forEach { (info) in
let track = info.track
let provider = info.info
let layerInstruction = VideoCompositionLayerInstruction.init(trackID: track.trackID, videoCompositionProvider: provider)
layerInstruction.prefferdTransform = track.preferredTransforms[provider.timeRange.vf_identifier]
layerInstruction.timeRange = provider.timeRange
layerInstructions.append(layerInstruction)
}
layerInstructions.sort { (left, right) -> Bool in
return left.timeRange.start < right.timeRange.start
}
// Create multiple instructions; each instruction holds the layerInstructions whose
// time ranges intersect the instruction's time range, so that when rendering a
// frame the instruction can quickly find its layerInstructions.
let layerInstructionsSlices = calculateSlices(for: layerInstructions)
let mainTrackIDs = mainVideoTrackInfo.map({ $0.track.trackID })
let instructions: [VideoCompositionInstruction] = layerInstructionsSlices.map({ (slice) in
let trackIDs = slice.1.map({ $0.trackID })
let instruction = VideoCompositionInstruction(theSourceTrackIDs: trackIDs as [NSValue], forTimeRange: slice.0)
instruction.backgroundColor = timeline.backgroundColor
instruction.layerInstructions = slice.1
instruction.passingThroughVideoCompositionProvider = timeline.passingThroughVideoCompositionProvider
instruction.mainTrackIDs = mainTrackIDs.filter({ trackIDs.contains($0) })
return instruction
})
let videoComposition = AVMutableVideoComposition()
videoComposition.frameDuration = CMTime(value: 1, timescale: 30)
videoComposition.renderSize = self.timeline.renderSize
videoComposition.instructions = instructions
videoComposition.customVideoCompositorClass = VideoCompositor.self
// The following is my added code for subtitles and stickers
let layerTool = SubtitlePlayerLayerTool()
layerTool.renderSize = self.timeline.renderSize
layerTool.subtitlesAndStickersModel = compositionSubTitleAndStickerData.0
videoComposition.animationTool = layerTool.makeAnimationTool()
self.videoComposition = videoComposition
return videoComposition
}
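Until frameDuration is configurable, one possible workaround (an assumption, not a library feature) is to mutate a copy of the composition that buildVideoComposition() returns, since it is an AVVideoComposition:

```swift
import AVFoundation

// Sketch (workaround): override the hard-coded 30 FPS by mutating a copy
// of the generated composition before handing it to the player/exporter.
func videoComposition(from generator: CompositionGenerator,
                      frameDuration: CMTime) -> AVVideoComposition? {
    guard let composition = generator.buildVideoComposition(),
          let mutable = composition.mutableCopy() as? AVMutableVideoComposition else {
        return nil
    }
    mutable.frameDuration = frameDuration // e.g. CMTime(value: 1, timescale: 60) for 60 FPS
    return mutable
}
```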
Hello!
We saw the following in your Chinese documentation:
"With CALayer support, any CoreAnimation animation that CALayer supports can be brought into the video frame. For example with Lottie: a designer exports an animation configuration from After Effects, the client generates a CALayer from that configuration, and adding it to AVVideoCompositionCoreAnimationTool makes it easy to implement animated stickers in a video."
Could you provide one or two examples of composing a video with Lottie-exported animation configurations?
Jack
WeChat:15915895880
Hi, I see it supports installation via CocoaPods, but it doesn't seem to support Carthage.
Carthage is a good tool for managing frameworks.
If Cabbage supported Carthage, I think that would be cool.
A great library! When will Swift 5.0 support be available?
missing: "import UIKit" in PHAssetLivePhotoResource.swift
Hello, and first of all many thanks to vitoziv for the generous sharing! I have gone through Cabbage's Chinese documentation and source code, but some aspects of its design and usage are still unclear to me, and I'd like to discuss them. My project needs to add subtitles and animated stickers to videos, and I'm not sure whether that should be implemented with overlays on the timeline or with CALayer. With CALayer, though, preview and rendering would require maintaining two separate sets of logic, which feels cumbersome. vitoziv, how does Cabbage plan to support this kind of requirement, and what do you suggest?
When exporting a reversed video, the output is black.
I made a simple demo to add a text overlay on a video, following your suggestion:
(About text overlay, I suggest you add text's image to Timeline.passingThroughVideoCompositionProvider)
It works, but I hit an issue: the text overlay always has a black background.
I understand you used a black video to render the image as a video frame, but in that case how can I remove the black background?
P.S.: How can I make a custom video resource to create a track item from a different resource type, such as a GIF file?
Same question here, thanks!
Hi Vito, I'm wondering what the process looks like for processing a filter now that filterProcessor was removed in version 0.2. The VideoCat demo used this when applying Lookup table filters but I'm wondering how I should go about doing this now it's gone. Do you have an example I could try? Thank you for your work and time!
When I fill the audio track in the following way, I run into these problems:
1. Without deep-copying the resource instance, every audio resource's selectedTimeRange always matches whatever was set in the last loop iteration.
2. With a deep copy of the resource instance, the audio resource's scaledDuration is always the full audio length.
The code is as follows:
private func caculateMusicTrack(resource: AVAssetTrackResource, duration: CMTime) -> [TrackItem] {
Log.out(">> Total VIDEO DURATION: \(duration.seconds)")
Log.out(">> Music File Duration: \(resource.duration.seconds)")
let numOfLoops = (duration.seconds - currentMusicStartOffset) / resource.duration.seconds
let numOfLoopsRoundedUp = numOfLoops.rounded(.up)
var sumPartsTotals = CMTime.zero
var endS = CMTime.zero
var result: [TrackItem] = []
//Audio Trim
for i in 0..<Int(numOfLoopsRoundedUp) {
let mResource = resource.copy() as! AVAssetTrackResource
Log.out(mResource)
guard let musicAsset = mResource.asset else {
continue
}
//Audio Trim
let start = CMTimeMake(value: Int64(0.0 * 600), timescale: 600)
if i == Int(numOfLoopsRoundedUp) - 1 { //is the last chunk of audio
let lastChunkTimeFrac = numOfLoops.truncatingRemainder(dividingBy: 1) // ex 1.5 will give 0.5
let lastChunkTimeSecs = musicAsset.duration.seconds * lastChunkTimeFrac //music from 0 to this value
endS = CMTimeMake(value: Int64((lastChunkTimeSecs-0.05) * 600), timescale: 600)
} else {
endS = CMTimeMake(value: Int64(musicAsset.duration.seconds * 600), timescale: 600)
}
let timeOffset = CMTime.init(seconds: currentMusicStartOffset, preferredTimescale: 600)
if i == 0 {
let startTime = currentMusicStartOffset < 0 ? start - timeOffset : start
mResource.selectedTimeRange = CMTimeRange.init(start: startTime, end: endS)
} else {
mResource.selectedTimeRange = CMTimeRange(start:start , end: endS)
}
mResource.selectedTimeRange = CMTimeRange(start:start , end: endS)
Log.out("selectedStart:\(mResource.selectedTimeRange.start.seconds) - totalPart:\(mResource.selectedTimeRange.end.seconds)")
let partMyTrackItem = TrackItem(resource: mResource)
let zeroOffsetTime = CMTimeMultiply(musicAsset.duration, multiplier: Int32(i))
if i == 0 {
partMyTrackItem.startTime = zeroOffsetTime + (currentMusicStartOffset < 0 ? CMTime.zero : timeOffset)
} else {
partMyTrackItem.startTime = zeroOffsetTime + timeOffset
}
partMyTrackItem.startTime = zeroOffsetTime
Log.out("start:\(partMyTrackItem.startTime.seconds) - totalPart:\(mResource.scaledDuration.seconds)")
sumPartsTotals = CMTimeAdd(sumPartsTotals, mResource.scaledDuration)
result.append(partMyTrackItem)
}
return result
}
After setting selectedTimeRange on an AVAssetTrackResource, the generated trackItem (and the resulting playerItem) still covers the asset's full content; it is not trimmed to the selected time. Code sample below -
let resource = AVAssetTrackResource(asset: model.asset) // model.asset duration is 7 seconds
resource.setSpeed(model.speed)
if let startTime = model.startTime, let endTime = model.endTime {
let startTime = CMTime(seconds: startTime, preferredTimescale: model.asset.duration.timescale)
let endTime = CMTime(seconds: endTime, preferredTimescale: model.asset.duration.timescale)
let timeRange = CMTimeRange(start: startTime, end: endTime)
print(CMTimeGetSeconds(timeRange.start),
CMTimeGetSeconds(timeRange.end),
CMTimeGetSeconds(timeRange.duration))// 0, 4, 4
resource.selectedTimeRange = CMTimeRange(start: startTime, end: endTime)
}
let trackItem = TrackItem(resource: resource) // trackItem.duration printed here is still 7 seconds
trackItem.videoConfiguration.transform = model.transform
trackItem.videoConfiguration.contentMode = .aspectFit
timeline.videoChannel.append(trackItem)
timeline.audioChannel.append(trackItem)
try! Timeline.reloadVideoStartTime(providers: timeline.videoChannel)
try! Timeline.reloadAudioStartTime(providers: timeline.audioChannel)
let playerItem = CompositionGenerator(timeline: timeline).buildPlayerItem() // playerItem.duration here is still 7 seconds
This can be fixed by changing the final render call in VideoCompositor.
Change:
VideoCompositor.ciContext.render(image, to: outputPixels)
to:
let colorSpace = CGColorSpace.init(name: CGColorSpace.sRGB) ?? CGColorSpaceCreateDeviceRGB()
VideoCompositor.ciContext.render(image, to: outputPixels, bounds: image.extent, colorSpace: colorSpace)
The CrossDissolveTransition video transition renders incorrectly on the latest master branch: a frame from the next clip suddenly appears on top of the previous one.
The pod install says the latest version is 0.2
But.. I see your latest release is 0.4
Has the pod been updated?
Hi, I'm back :)
So previously I successfully implemented your suggestions to merge videos with their corresponding audio tracks. Now I'm wondering whether two things are possible.
1. Is it possible to define a separate audio track for each video, with a specific range of that audio?
example:
let tLine = Timeline()
var vChannel = [TrackItem]()
var aChannel = [TrackItem]()
//VIDEO Tracks
... trackVideoItem1, trackVideoItem2, trackVideoItem3...
//AUDIO Tracks
let musicUrl = Bundle.main.url(forResource: "HumansWater", withExtension: "MP3")!
let musicAsset = AVAsset(url: musicUrl)
let resourceA = AVAssetTrackResource(asset: musicAsset)
let trackAudioItem1 = TrackItem(resource: resourceA)
... same for trackAudioItem2, trackAudioItem3...
But how do I specify the start, end, and duration of those tracks?
tLine.videoChannel = [trackVideoItem1,trackVideoItem2,trackVideoItem3]
tLine.audioChannel = [trackAudioItem1, trackAudioItem2, trackAudioItem3]
try! Timeline.reloadVideoStartTime(providers: tLine.videoChannel)
Currently the above creates an unreadable video.
2. The other question is: is it possible to define one music track for the entire video composition?
example:
let tLine = Timeline()
var vChannel = [TrackItem]()
var aChannel = [TrackItem]()
//VIDEO Tracks
... trackVideoItem1, trackVideoItem2, trackVideoItem3...
//AUDIO Track for everything
let musicUrl = Bundle.main.url(forResource: "HumansWater", withExtension: "MP3")!
let musicAsset = AVAsset(url: musicUrl)
let resourceA = AVAssetTrackResource(asset: musicAsset)
let trackMusicItem = TrackItem(resource: resourceA)
But how do I specify the start and end of the audio (trimming the audio)?
tLine.videoChannel = [trackVideoItem1,trackVideoItem2,trackVideoItem3]
tLine.audioChannel = [trackMusicItem]
try! Timeline.reloadVideoStartTime(providers: tLine.videoChannel)
Is it possible to set one audio track for everything? And what happens if the audio track is shorter than the entire video composition, or the composition is shorter than the audio track; would it repeat the audio track?
Thanks in advance :) :)
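Nothing in this thread suggests the library auto-repeats audio, so looping likely has to be done by hand: tile copies of the music resource until the video duration is covered, trimming the last copy with selectedTimeRange (the same trimming other snippets here use). A hedged sketch:

```swift
import AVFoundation
import Cabbage

// Sketch (assumption): manually loop one music asset across the whole
// composition; selectedTimeRange trims each copy, startTime positions it.
func musicItems(musicAsset: AVAsset, videoDuration: CMTime) -> [TrackItem] {
    var items: [TrackItem] = []
    var cursor = CMTime.zero
    while cursor < videoDuration {
        let remaining = videoDuration - cursor
        let chunk = min(remaining, musicAsset.duration)   // last copy gets trimmed
        let resource = AVAssetTrackResource(asset: musicAsset)
        resource.selectedTimeRange = CMTimeRange(start: .zero, duration: chunk)
        let item = TrackItem(resource: resource)
        item.startTime = cursor                           // position on the timeline
        items.append(item)
        cursor = cursor + chunk
    }
    return items
}
// Then e.g.: tLine.audioChannel = musicItems(musicAsset: musicAsset, videoDuration: total)
```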
I couldn't find a demo for the overlays track. I studied your source code and would like to check whether my understanding is correct. Say I want to use the overlays track to place a 50×50 overlay at x: 50, y: 50 on the current track. Following ImageOverlayItem's approach, this can be done by passing an appropriate transform via trackItem.configuration.videoConfiguration.transform; it works, but it isn't very convenient.
My understanding is that overlays: [VideoProvider] should not expose just the VideoProvider protocol; a protocol that additionally wraps a frame would be a better fit.
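Other snippets in this thread suggest a transform may not be necessary: they set contentMode to .custom and then assign videoConfiguration.frame directly. A sketch of the 50×50-at-(50, 50) case under that reading:

```swift
import AVFoundation
import CoreImage
import Cabbage

// Sketch (based on other snippets in this thread): `.custom` content mode
// plus `videoConfiguration.frame` places an overlay at an explicit rect,
// without hand-building a transform.
func makeOverlay(image: CIImage) -> TrackItem {
    let resource = ImageResource(image: image,
                                 duration: CMTime(seconds: 5, preferredTimescale: 600))
    let item = TrackItem(resource: resource)
    item.videoConfiguration.contentMode = .custom
    item.videoConfiguration.frame = CGRect(x: 50, y: 50, width: 50, height: 50)
    return item
}
// timeline.overlays = [makeOverlay(image: someCIImage)]
```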