
subtext's Introduction

subtext's People

Contributors

ichunjo, marillat, masaiki, myrsloik

subtext's Issues

Add meson build (meson.build included)

Could you add meson support? I've only tested this under Linux.

project('subtext', 'c', 'cpp',
  version : '2',
  default_options : ['warning_level=3'])

add_project_arguments('-ffast-math', language : 'c')

sources = [
     'src/common.c',
     'src/common.h',
     'src/image.cpp',
     'src/text.c',
     'src/toass.cpp',
     'src/toutf8.c'
]

vapoursynth_dep = dependency('vapoursynth', version: '>=55').partial_dependency(compile_args : true, includes : true)
libass_dep = dependency('libass', version: '>=0.12.0')
libavcodec_dep = dependency('libavcodec')
libavutil_dep = dependency('libavutil')
libavformat_dep = dependency('libavformat')

deps = [vapoursynth_dep, libass_dep, libavcodec_dep, libavutil_dep, libavformat_dep]

shared_module('subtext', sources,
  dependencies : deps,
  install : true,
  install_dir : join_paths(vapoursynth_dep.get_pkgconfig_variable('libdir'), 'vapoursynth'),
  gnu_symbol_visibility : 'hidden'
)

Not working in float

import vapoursynth as vs
core = vs.core

SUBTITLE_DEFAULT_STYLE: str = ("arial,200,&H00FFFFFF,&H000000FF,&H00000000,&H00000000,"
                               "0,0,0,0,100,100,0,0,1,2,0,7,10,10,10,1")

clip = core.std.BlankClip(format=vs.YUV444PS, width=1000, height=200)
clip = core.sub.Subtitle(clip, text="1234567890", style=SUBTITLE_DEFAULT_STYLE)
clip.set_output()

I tried both YUV444PS and RGBS, and in both cases the output appears unchanged by the Subtitle call (no text is rendered).
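
A possible workaround until float input works (an untested sketch; it assumes the renderer behaves correctly on integer formats): render on a 16-bit integer clip and convert back to float afterwards.

import vapoursynth as vs
core = vs.core

clip = core.std.BlankClip(format=vs.YUV444PS, width=1000, height=200)
# Render the subtitle on an integer format, then convert back to float.
work = core.resize.Point(clip, format=vs.YUV444P16)
work = core.sub.Subtitle(work, text="1234567890")
clip = core.resize.Point(work, format=vs.YUV444PS)
clip.set_output()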

Missing ass_set_storage_size call

Hi, it seems subtext currently isn't calling ass_set_storage_size which can lead to incorrect rendering for some subtitle+video combinations unless the rather recently introduced LayoutRes* headers are set. The storage size needs to be the resolution at which the encoded video is stored, i.e. before anamorphic desqueezing.

If a preceding filter rescaled or unsqueezed the image, this may be different from the properties of the frame subtext receives, but at least AviSynth(+) and third-party VapourSynth assrender plugins (AS(+), VS) seem to treat the current frame size as a good enough default (though they also added input parameters to override it).

Affected demo files (both anamorphic and non-anamorphic) with correct reference output can be found e.g. here:
https://code.videolan.org/videolan/vlc/uploads/b54e0761d0d3f4f79b2947ffb83a3b59/vlc-issue_libass-storage-size.tar.xz

See also: libass/libass#591 and current storage size docs
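
For illustration, a minimal sketch of the situation described above (hypothetical anamorphic dimensions, subs.ass is a placeholder): the video is stored at 1440x1080 but a preceding filter desqueezes it to 1920x1080, so by the time subtext sees the frames the original storage size can no longer be derived from the clip itself.

import vapoursynth as vs
core = vs.core

# Hypothetical anamorphic source: stored at 1440x1080, displayed at 16:9.
clip = core.std.BlankClip(width=1440, height=1080, format=vs.YUV420P8, length=100)
# A preceding filter desqueezes to square pixels...
clip = core.resize.Bicubic(clip, width=1920, height=1080)
# ...so subtext only ever sees 1920x1080 frames; without ass_set_storage_size
# (or an override parameter) libass cannot know the 1440x1080 storage size.
clip = core.sub.TextFile(clip, "subs.ass")
clip.set_output()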

blend=False doesn't return two clips as the documentation states

In R1 and R2, blend=False doesn't return two clips, and the old APIv3 plugin outputs this:

setVideoInfo: Video filter TextFile has more than one output node but only the first one will be returned

Code:

clips = core.sub.TextFile(src, sub, fontdir='fonts', blend=False)
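
For context, this is roughly what I expected to be able to do with the two outputs described in the documentation (a sketch only, continuing from the snippet above and assuming the second clip is an alpha mask usable with MaskedMerge):

rendered, alpha = core.sub.TextFile(src, sub, fontdir='fonts', blend=False)
# Match the source format before blending manually (the matrix is a guess here).
rendered = core.resize.Bicubic(rendered, format=src.format.id, matrix_s="709")
merged = core.std.MaskedMerge(src, rendered, alpha)
merged.set_output()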

Unable to render SRT subtitles

While testing recently, I found that SRT subtitles are not rendered at all using this plugin. Taking the same file and converting it to ASS using ffmpeg and then rendering again using this plugin works like a charm.

Example script:

import vapoursynth as vs

core = vs.core

clip = core.std.BlankClip(width=720, height=304, length=2000)
clip = clip.sub.TextFile("./subs-short.srt")

clip.set_output()

Use the attached file (rename it to .srt as appropriate; GitHub only allows .txt attachments). You'll see a black clip rendered with no text.

subs-short.txt
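
For completeness, the workaround wrapped in Python (a sketch; running `ffmpeg -i subs-short.srt subs-short.ass` by hand and loading the result works just as well):

import subprocess
import vapoursynth as vs

core = vs.core

# Workaround: convert the SRT to ASS with ffmpeg, then load the ASS file instead.
subprocess.run(["ffmpeg", "-y", "-i", "subs-short.srt", "subs-short.ass"], check=True)

clip = core.std.BlankClip(width=720, height=304, length=2000)
clip = clip.sub.TextFile("./subs-short.ass")
clip.set_output()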

Subtitle text not applied for some files

I have some encoded single-frame HEVC files I use for testing, and for some reason the text is not being applied to clips from these files.
An example probe: hevc (Rext), yuv420p10le(tv, bt2020nc/bt2020/smpte2084), 3840x2160 [SAR 1:1 DAR 16:9], 60 tbr, 1200k tbn, 60 tbc

Sample file: https://0x0.st/oosf.hevc
Reproducible script:

from vapoursynth import core

clip = core.ffms2.Source("oosf.hevc")
clip = core.resize.Spline36(clip, width=1280, height=720)
clip = core.sub.Subtitle(clip, "my text")

clip.set_output()

The clip has these properties:

VideoNode
	Format: YUV420P10
	Width: 3840
	Height: 2160
	Num Frames: 1
	FPS: 1200000

I'm on Arch Linux with libass 0.15.2-1 installed.
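
As a diagnostic sketch (my own guess at how to narrow this down, untested): reproducing the clip's format, dimensions and length on a BlankClip, without the HEVC source in the chain, should show whether the problem is in subtext itself or in something the source filter attaches to the frames.

from vapoursynth import core
import vapoursynth as vs

# Same format, dimensions and length as the ffms2 clip, roughly 60 fps per the
# probe, but without the HEVC source in the chain.
clip = core.std.BlankClip(format=vs.YUV420P10, width=3840, height=2160,
                          length=1, fpsnum=60, fpsden=1)
clip = core.resize.Spline36(clip, width=1280, height=720)
clip = core.sub.Subtitle(clip, "my text")
clip.set_output()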

TextFile + RGB: MaskedMerge: Input frames must have the same range

When using:

# Imports
import vapoursynth as vs
# getting Vapoursynth core
core = vs.core
# Loading Plugins
core.std.LoadPlugin(path="F:/Hybrid/64bit/vsfilters/SubtitleFilter/SubText/SubText.dll")
core.std.LoadPlugin(path="F:/Hybrid/64bit/vsfilters/Support/libimwri.dll")
# source: 'G:/clips/scrolling_subs/croquants.png'
# current color space: RGB24, bit depth: 8, resolution: 720x480, fps: 25, color matrix: 470bg, yuv luminance scale: limited, scanorder: progressive
# Loading G:\clips\scrolling_subs\croquants.png using vsImageReader
clip = core.imwri.Read(["G:/clips/scrolling_subs/croquants.png"])
clip = core.std.Loop(clip=clip, times=6050)
# Input color space is assumed to be RGB24
# Setting color transfer info (470bg), when it is not set
clip = clip if not core.text.FrameProps(clip,'_Transfer') else core.std.SetFrameProps(clip, _Transfer=5)
# Setting color primaries info (), when it is not set
clip = clip if not core.text.FrameProps(clip,'_Primaries') else core.std.SetFrameProps(clip, _Primaries=5)
# Setting color range to TV (limited) range.
clip = core.std.SetFrameProp(clip=clip, prop="_ColorRange", intval=1)
# making sure frame rate is set to 25
clip = core.std.AssumeFPS(clip=clip, fpsnum=25, fpsden=1)
clip = core.std.SetFrameProp(clip=clip, prop="_FieldBased", intval=0) # progressive
# Loading G:\clips\scrolling_subs\hector.ass using SubText
clip = core.sub.TextFile(clip=clip, file="G:/clips/scrolling_subs/hector.ass", fontdir="F:/Hybridnew/settings/fonts")
# adjusting output color from: RGB24 to YUV420P8 for x264Model
clip = core.resize.Bicubic(clip=clip, format=vs.YUV420P8, matrix_s="470bg", range_s="limited")
# set output frame rate to 25fps (progressive)
clip = core.std.AssumeFPS(clip=clip, fpsnum=25, fpsden=1)
# Output
clip.set_output()

I get:

Error: Failed to retrieve frame 0 with error: MaskedMerge: Input frames must have the same range

The script works fine if I convert to YUV420P8 before the 'core.sub.TextFile' call, or if I remove the 'core.sub.TextFile' call entirely.
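
For reference, a trimmed-down sketch of the working variant (same core and plugins loaded as above), with the format conversion moved before the TextFile call:

clip = core.imwri.Read(["G:/clips/scrolling_subs/croquants.png"])
clip = core.std.Loop(clip=clip, times=6050)
# Converting to YUV420P8 first avoids the MaskedMerge range error.
clip = core.resize.Bicubic(clip=clip, format=vs.YUV420P8, matrix_s="470bg", range_s="limited")
clip = core.sub.TextFile(clip=clip, file="G:/clips/scrolling_subs/hector.ass", fontdir="F:/Hybridnew/settings/fonts")
clip = core.std.AssumeFPS(clip=clip, fpsnum=25, fpsden=1)
clip.set_output()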

Set alignment of Subtitle?

import vapoursynth as vs 
core = vs.core
core.std.LoadPlugin(r"img_plugins\SubText.dll")
dummy_clip = core.std.BlankClip(width=600, height=600, fpsnum=24000, fpsden=1001)
sub = core.sub.Subtitle(clip=dummy_clip, blend=False, text="Reference")
print(sub.width)

The end goal is to be able to position the text properly to the right and allow dynamic text resizing based on the input. However, it seems like ASS alignment methods do not work for this, as they need the width of the video.
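
In case it helps, the ASS alignment itself can be set either through the alignment field of the style string (numpad layout, so 9 is top-right and 3 is bottom-right) or with an override tag in the text; a minimal sketch of the latter:

import vapoursynth as vs
core = vs.core

dummy_clip = core.std.BlankClip(width=600, height=600, fpsnum=24000, fpsden=1001)
# {\an9} is the ASS override tag for top-right alignment (numpad layout).
sub = core.sub.Subtitle(clip=dummy_clip, text=r"{\an9}Reference")
sub.set_output()

That said, this still positions relative to the video frame, so it doesn't solve the dynamic-width problem described above.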

Crashes with some .ass subtitles

Using:

# Imports
import vapoursynth as vs
# getting Vapoursynth core
core = vs.core
# Loading Plugins
core.std.LoadPlugin(path="C:/Program Files/Hybrid/64bit/vsfilters/Support/fmtconv.dll")
core.std.LoadPlugin(path="C:/Program Files/Hybrid/64bit/vsfilters/SourceFilter/LSmashSource/vslsmashsource.dll")
# source: 'C:\Users\Alex\Videos\Hybrid\Input\TESTVIDEO.mkv'
# current color space: YUV420P8, bit depth: 8, resolution: 1920x1080, fps: 23.976, color matrix: 709, yuv luminance scale: limited, scanorder: progressive
# Loading C:\Users\Alex\Videos\Hybrid\Input\TESTVIDEO.mkv using LWLibavSource
clip = core.lsmas.LWLibavSource(source="C:/Users/Alex/Videos/Hybrid/Input/TESTVIDEO.mkv", format="YUV420P8", cache=0, prefer_hw=0)
# Setting color matrix to 709.
clip = core.std.SetFrameProps(clip, _Matrix=1)
clip = clip if not core.text.FrameProps(clip,'_Transfer') else core.std.SetFrameProps(clip, _Transfer=1)
clip = clip if not core.text.FrameProps(clip,'_Primaries') else core.std.SetFrameProps(clip, _Primaries=1)
# Setting color range to TV (limited) range.
clip = core.std.SetFrameProp(clip=clip, prop="_ColorRange", intval=1)
# making sure frame rate is set to 23.976
clip = core.std.AssumeFPS(clip=clip, fpsnum=24000, fpsden=1001)
clip = core.fmtc.resample(clip=clip, kernel="spline16", w=1280, h=720, interlaced=False, interlacedd=False)
# Loading C:\Users\Alex\Videos\Hybrid\Output\TESTVIDEO_id_2_lang_en_default.ass using SubText
clip = core.sub.TextFile(clip=clip, file="C:/Users/Alex/Videos/Hybrid/Output/TESTVIDEO_id_2_lang_en_default.ass", fontdir="C:/Users/Alex/AppData/Roaming/hybrid/fonts")
# adjusting output color from: YUV420P16 to YUV420P8 for x264Model
clip = core.resize.Bicubic(clip=clip, format=vs.YUV420P8, range_s="limited")
# set output frame rate to 23.976fps
clip = core.std.AssumeFPS(clip=clip, fpsnum=24000, fpsden=1001)
# Output
clip.set_output()

Encoding, preview and vspipe all crash at frame 26763; using VSFilterMod or AssRender works fine.
Here are the used subtitles and fonts:
One Punch Man S01E07_id_2_lang_en_default.zip
fonts.zip

Issues with fullrange clips

ass_file = r"file.ass"
clip = core.std.BlankClip(None, 1920, 1080, vs.YUV420P8, length=35000)
clip_full = clip.std.SetFrameProps(_ColorRange=0)
subs = core.sub.TextFile(clip=clip_full, file=ass_file, charset="UTF-8", scale=1, debuglevel=7, linespacing=0, margins=[0, 0, 0, 0], sar=0)
subs.set_output()

vapoursynth.Error: MaskedMerge: Input frames must have the same range
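
A possible workaround sketch (untested; it assumes the error is only a mismatch of the _ColorRange property between the source and the internally rendered subtitle clip): mark the clip as limited range for the TextFile call and restore the full-range property afterwards.

ass_file = r"file.ass"
clip = core.std.BlankClip(None, 1920, 1080, vs.YUV420P8, length=35000)
clip_full = clip.std.SetFrameProps(_ColorRange=0)
# Pretend the clip is limited range for the duration of the TextFile call...
tmp = clip_full.std.SetFrameProps(_ColorRange=1)
subs = core.sub.TextFile(clip=tmp, file=ass_file, charset="UTF-8")
# ...then restore the full-range property on the result.
subs = subs.std.SetFrameProps(_ColorRange=0)
subs.set_output()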
