
open-chat-video-editor's People

Contributors

junhongh, scutlihaoyu

open-chat-video-editor's Issues

Experience summary: set server_port & server_name

Gradio is the fastest way to demo your machine learning model with a friendly web interface so that anyone can use it, anywhere!

We can check the building-demos guide on their website:

  • server_port: can be set by the environment variable GRADIO_SERVER_PORT. If None, Gradio will search for an available port starting at 7860.
  • server_name: can be set by the environment variable GRADIO_SERVER_NAME. If None, it will use "127.0.0.1".
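
For reference, a minimal sketch of passing these two settings explicitly when launching a Gradio demo (the toy Interface below is illustrative, not this repo's app/app.py):

import gradio as gr

def echo(text):
    return text

demo = gr.Interface(fn=echo, inputs="text", outputs="text")
# Bind to all interfaces on a fixed port instead of the default 127.0.0.1:7860.
demo.launch(server_name="0.0.0.0", server_port=7860)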

WeChat group issue

The WeChat group has exceeded 200 members and can no longer be joined directly. Could the group owner pull (invite) new members into the group?

ERROR: No matching distribution found for torch==2.0.0+cu117

Windows 10, GPU install reports an error:
pip install -r requirements.txt

Collecting toolz==0.12.0
Using cached toolz-0.12.0-py3-none-any.whl (55 kB)
ERROR: Could not find a version that satisfies the requirement torch==2.0.0+cu117 (from versions: 1.7.1, 1.8.0, 1.8.1, 1.9.0, 1.9.1, 1.10.0, 1.10.1, 1.10.2, 1.11.0, 1.12.0, 1.12.1, 1.13.0, 1.13.1, 2.0.0, 2.0.1)
ERROR: No matching distribution found for torch==2.0.0+cu117
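
The +cu117 wheels are not published on PyPI, which is why plain pip only finds 2.0.0 and 2.0.1; they are served from PyTorch's own index at https://download.pytorch.org/whl/cu117 (passed via pip's --index-url / --extra-index-url option). After installing, a small sketch to confirm which build actually landed in the environment:

import torch

# "2.0.0+cu117" is the CUDA 11.7 build; "+cpu" or a bare version means a CPU-only wheel.
print(torch.__version__)
print(torch.version.cuda)          # None on CPU-only builds
print(torch.cuda.is_available())   # False without a usable GPU/driver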

Two small questions about installing on Windows

I'm a Python beginner and ran into two small issues when installing on Windows:

1. "Install the other dependencies" — is this step required? Running pip install -r requirements.txt reports that the file cannot be found.

2. "Download the data index and meta information data.tar and extract it into the data/index directory" — where is this data/index directory?

Hoping an expert can help. Thanks!
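
For the second question, data/index usually has to be created: it is the target directory that data.tar gets extracted into, not something that ships with the repo. A minimal sketch under that assumption, with data.tar downloaded into the repository root (the exact tar layout is an assumption):

import tarfile
from pathlib import Path

# Create data/index in the repo root and unpack the downloaded archive into it.
target = Path("data/index")
target.mkdir(parents=True, exist_ok=True)
with tarfile.open("data.tar") as tar:
    tar.extractall(path=target)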

ValueError: max() arg is an empty sequence

2023-05-23 06:58:36,192 - comm.mylog - INFO - sentences: ['cats are lovely', ' fluffy creatures that people love to spend time with', " they're always happy and playful", ' and they love to cuddle up with their owners', " whether you're a cat person or not", " you can't 否认 cats are some of the best animals to have around", ' so why not add one to your life and see how much joy it can bring']
2023-05-23 06:58:36,192 - comm.mylog - INFO - en_out_text: ['cats are lovely', ' fluffy creatures that people love to spend time with', " they're always happy and playful", ' and they love to cuddle up with their owners', " whether you're a cat person or not", " you can't 否认 cats are some of the best animals to have around", ' so why not add one to your life and see how much joy it can bring']
[2023-05-23 06:58:47,705] [    INFO] - Already cached /root/.paddlenlp/models/bert-base-chinese/bert-base-chinese-vocab.txt
[2023-05-23 06:58:47,727] [    INFO] - tokenizer config file saved in /root/.paddlenlp/models/bert-base-chinese/tokenizer_config.json
[2023-05-23 06:58:47,728] [    INFO] - Special tokens file saved in /root/.paddlenlp/models/bert-base-chinese/special_tokens_map.json
Building prefix dict from the default dictionary ...
2023-05-23 06:58:58,554 - jieba - DEBUG - Building prefix dict from the default dictionary ...
Loading model from cache /tmp/jieba.cache
2023-05-23 06:58:58,554 - jieba - DEBUG - Loading model from cache /tmp/jieba.cache
Loading model cost 0.778 seconds.
2023-05-23 06:58:59,332 - jieba - DEBUG - Loading model cost 0.778 seconds.
Prefix dict has been built successfully.
2023-05-23 06:58:59,332 - jieba - DEBUG - Prefix dict has been built successfully.
2023-05-23 06:59:04,170 - comm.mylog - INFO - final_clips: 0
Traceback (most recent call last):
  File "/data/anaconda3/envs/open_editor/lib/python3.8/site-packages/gradio/routes.py", line 401, in run_predict
    output = await app.get_blocks().process_api(
  File "/data/anaconda3/envs/open_editor/lib/python3.8/site-packages/gradio/blocks.py", line 1302, in process_api
    result = await self.call_function(
  File "/data/anaconda3/envs/open_editor/lib/python3.8/site-packages/gradio/blocks.py", line 1025, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "/data/anaconda3/envs/open_editor/lib/python3.8/site-packages/anyio/to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "/data/anaconda3/envs/open_editor/lib/python3.8/site-packages/anyio/_backends/_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "/data/anaconda3/envs/open_editor/lib/python3.8/site-packages/anyio/_backends/_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "app/app.py", line 30, in run_Text2VideoEditor_logit
    out_text,video_out = editor.run(input_text,style_text,out_video)
  File "/opt/gf/open-chat-video-editor/editor/chat_editor.py", line 96, in run
    video = concatenate_videoclips(final_clips)
  File "/data/anaconda3/envs/open_editor/lib/python3.8/site-packages/moviepy/video/compositing/concatenate.py", line 75, in concatenate_videoclips
    w = max(r[0] for r in sizes)
ValueError: max() arg is an empty sequence
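
The line final_clips: 0 just above means concatenate_videoclips receives an empty list, so moviepy's max() over the clip sizes has nothing to reduce. A hedged sketch of the kind of guard that turns this into a readable error (the names mirror the traceback, but the guard itself is an assumption, not the project's code):

from moviepy.editor import concatenate_videoclips

def safe_concatenate(final_clips):
    # An empty clip list usually means retrieval/generation produced nothing
    # (e.g. missing data/index files or an empty query result).
    if not final_clips:
        raise RuntimeError("No video clips were generated; check the retrieval index and generator config.")
    return concatenate_videoclips(final_clips)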

Mac is not supported

It would be best to state in the README that Mac is not supported, and also which Python version is required. I switched between several Python versions, and every time some package turned out not to support the current one. After finally moving to Python 3.8, the install got as far as pywin32 and failed because pywin32 does not support the Mac platform. Seriously...

Seg fault when building the tts generator under Docker

Current thread 0x0000004001763fc0 (most recent call first):
  [import-machinery frames whose file names were stripped by the issue formatting are omitted; the meaningful frames are:]
  File "/data/anaconda3/envs/open_editor/lib/python3.8/site-packages/paddle/fluid/core.py", line 274 in <module>
  File "/data/anaconda3/envs/open_editor/lib/python3.8/site-packages/paddle/fluid/framework.py", line 37 in <module>
  File "/data/anaconda3/envs/open_editor/lib/python3.8/site-packages/paddle/fluid/__init__.py", line 36 in <module>
  File "/data/anaconda3/envs/open_editor/lib/python3.8/site-packages/paddle/framework/random.py", line 16 in <module>
  File "/data/anaconda3/envs/open_editor/lib/python3.8/site-packages/paddle/framework/__init__.py", line 17 in <module>
  File "/data/anaconda3/envs/open_editor/lib/python3.8/site-packages/paddle/__init__.py", line 25 in <module>
  File "/data/anaconda3/envs/open_editor/lib/python3.8/site-packages/paddlespeech/cli/utils.py", line 26 in <module>
  File "/data/anaconda3/envs/open_editor/lib/python3.8/site-packages/paddlespeech/resource/resource.py", line 20 in <module>
  File "/data/anaconda3/envs/open_editor/lib/python3.8/site-packages/paddlespeech/resource/__init__.py", line 14 in <module>
  File "/data/anaconda3/envs/open_editor/lib/python3.8/site-packages/paddlespeech/cli/base_commands.py", line 20 in <module>
  File "/data/anaconda3/envs/open_editor/lib/python3.8/site-packages/paddlespeech/cli/__init__.py", line 16 in <module>
  File "/Users/tenghuiliu/Project/decoda_ai/github/open-chat-video-editor/generator/tts/paddlespeech_model.py", line 1 in <module>
  File "/Users/tenghuiliu/Project/decoda_ai/github/open-chat-video-editor/generator/tts/build.py", line 2 in <module>
  File "/Users/tenghuiliu/Project/decoda_ai/github/open-chat-video-editor/editor/build.py", line 3 in <module>

KeyError: 'phone_ids'

2023-05-23 07:01:21,517 - comm.mylog - INFO - chatgpt response: Here's a 50-word short video copy using cat content:

"Watch our cute cat playing with a string, trying to catch it and make it fly. Look how excited it is, just like a child playing with a toy. Don't you feel like hugging it and petting its head? Look at its little claws, they're so sharp and fierce. Sure, it's just a cat, but it's our pet and we love it just the same. Watch and enjoy our cat video."
2023-05-23 07:01:21,517 - comm.mylog - INFO - sentences: ['Here\'s a 50-word short video copy using cat content:"Watch our cute cat playing with a string', ' trying to catch it and make it fly', ' Look how excited it is', ' just like a child playing with a toy', " Don't you feel like hugging it and petting its head", ' Look at its little claws', " they're so sharp and fierce", ' Sure', " it's just a cat", " but it's our pet and we love it just the same", ' Watch and enjoy our cat video', '"']
2023-05-23 07:01:21,517 - comm.mylog - INFO - en_out_text: ['Here\'s a 50-word short video copy using cat content:"Watch our cute cat playing with a string', ' trying to catch it and make it fly', ' Look how excited it is', ' just like a child playing with a toy', " Don't you feel like hugging it and petting its head", ' Look at its little claws', " they're so sharp and fierce", ' Sure', " it's just a cat", " but it's our pet and we love it just the same", ' Watch and enjoy our cat video', '"']
Traceback (most recent call last):
  File "/data/anaconda3/envs/open_editor/lib/python3.8/site-packages/gradio/routes.py", line 401, in run_predict
    output = await app.get_blocks().process_api(
  File "/data/anaconda3/envs/open_editor/lib/python3.8/site-packages/gradio/blocks.py", line 1302, in process_api
    result = await self.call_function(
  File "/data/anaconda3/envs/open_editor/lib/python3.8/site-packages/gradio/blocks.py", line 1025, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "/data/anaconda3/envs/open_editor/lib/python3.8/site-packages/anyio/to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "/data/anaconda3/envs/open_editor/lib/python3.8/site-packages/anyio/_backends/_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "/data/anaconda3/envs/open_editor/lib/python3.8/site-packages/anyio/_backends/_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "app/app.py", line 30, in run_Text2VideoEditor_logit
    out_text,video_out = editor.run(input_text,style_text,out_video)
  File "/opt/gf/open-chat-video-editor/editor/chat_editor.py", line 50, in run
    tts_resp = self.audio_generator.batch_run(tts_in_text)
  File "/opt/gf/open-chat-video-editor/generator/tts/tts_generator.py", line 27, in batch_run
    resp.append(self.run_tts(text))
  File "/opt/gf/open-chat-video-editor/generator/tts/tts_generator.py", line 15, in run_tts
    self.tts_model.run_tts(text,out_path)
  File "/opt/gf/open-chat-video-editor/generator/tts/paddlespeech_model.py", line 16, in run_tts
    self.tts(text=text,lang=self.lang,am=self.am,output=out_path)
  File "/data/anaconda3/envs/open_editor/lib/python3.8/site-packages/paddlespeech/cli/utils.py", line 328, in _warpper
    return executor_func(self, *args, **kwargs)
  File "/data/anaconda3/envs/open_editor/lib/python3.8/site-packages/paddlespeech/cli/tts/infer.py", line 710, in __call__
    self.infer(text=text, lang=lang, am=am, spk_id=spk_id)
  File "<decorator-gen-603>", line 2, in infer
  File "/data/anaconda3/envs/open_editor/lib/python3.8/site-packages/paddle/fluid/dygraph/base.py", line 375, in _decorate_function
    return func(*args, **kwargs)
  File "/data/anaconda3/envs/open_editor/lib/python3.8/site-packages/paddlespeech/cli/tts/infer.py", line 471, in infer
    frontend_dict = run_frontend(
  File "/data/anaconda3/envs/open_editor/lib/python3.8/site-packages/paddlespeech/t2s/exps/syn_utils.py", line 305, in run_frontend
    phone_ids = input_ids["phone_ids"]
KeyError: 'phone_ids'
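
The log above shows English sentences (en_out_text) going through a Chinese frontend (bert-base-chinese vocab, jieba), and the zh frontend then returns a result without phone_ids. A minimal standalone sketch of calling PaddleSpeech with the language and acoustic model matched to the text; the English model names are assumptions based on PaddleSpeech's released models, not this repo's config:

from paddlespeech.cli.tts.infer import TTSExecutor

tts = TTSExecutor()
# For English input, pick an English acoustic model/vocoder and lang="en".
# Pushing English text through the zh frontend is likely what leaves phone_ids missing.
tts(
    text="cats are lovely, fluffy creatures",
    am="fastspeech2_ljspeech",
    voc="pwgan_ljspeech",
    lang="en",
    output="demo_en.wav",
)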

A very promising system that is no longer updated — such a pity

There are still many points here that could be integrated and improved, but the code has not been updated for 3 months, so it looks like the project is winding down.

For example, finer-grained management: template management for script generation, keyword management for generated SD scenes, keyframe effect management, effect-to-video compositing management, subtitles, narration management, and so on...

Built out, this system would be a video editing platform for self-media creators with strong commercial prospects. A real pity.

no space left on device

I followed the instructions in the docker section and tried twice to pull the docker image. Each time I got the same error, even though I have plenty of space on my Mac (more than 700 GB free). Here is the command I ran on my Mac:

docker pull iamjunhonghuang/open-chat-video-editor:retrival

Here is the whole log printed in my terminal:

retrival: Pulling from iamjunhonghuang/open-chat-video-editor
2d473b07cdd5: Pull complete
2144867bd1ad: Pull complete
6661021dc03d: Pull complete
142f314be218: Pull complete
66744ab40d65: Pull complete
d36c5d2af16f: Pull complete
7e87ed5688d3: Pull complete
bcd0e7c63b53: Pull complete
686656cf88ae: Pull complete
5616e23b2d3c: Extracting [==================================================>] 12.31GB/12.31GB
failed to register layer: Error processing tar file(exit status 1): write /data/anaconda3/envs/open_editor/lib/python3.8/site-packages/torch/lib/libtorch_cuda_linalg.so: no space left on device

Here is the error I got:

failed to register layer: Error processing tar file(exit status 1): write /data/anaconda3/envs/open_editor/lib/python3.8/site-packages/torch/lib/libtorch_cuda_linalg.so: no space left on device

AssertionError: Torch not compiled with CUDA enabled

████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3.46G/3.46G [07:25<00:00, 11.1MB/s]
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ E:\voice2face\open-chat-video-editor\app\app.py:27 in │
│ │
│ 24 │ │ # cfg_path = "configs/video_by_retrieval_text_by_chatgpt_zh.yaml" │
│ 25 │ │ cfg.merge_from_file(cfg_path) │
│ 26 │ │ print(cfg) │
│ ❱ 27 │ │ editor = build_editor(cfg) │
│ 28 │ │ def run_Text2VideoEditor_logit(input_text, style_text): │
│ 29 │ │ │ out_video = "test.mp4" │
│ 30 │ │ │ out_text,video_out = editor.run(input_text,style_text,out_video) │
│ │
│ E:\voice2face\open-chat-video-editor\editor\build.py:14 in build_editor │
│ │
│ 11 │ logger.info('visual_gen_type: {}'.format(visual_gen_type)) │
│ 12 │ # image_by_diffusion video_by_retrieval image_by_retrieval_then_diffusion video_by_ │
│ 13 │ if visual_gen_type in ["image_by_retrieval","image_by_diffusion","image_by_retrieval │
│ ❱ 14 │ │ vision_generator = build_image_generator(cfg) │
│ 15 │ else: │
│ 16 │ │ vision_generator = build_video_generator(cfg) │
│ 17 │
│ │
│ E:\voice2face\open-chat-video-editor\generator\image\build.py:33 in build_image_generator │
│ │
│ 30 │ │ image_generator = ImageGenbyRetrieval(cfg,query_model,index_server,meta_server) │
│ 31 │ elif visual_gen_type == "image_by_diffusion": │
│ 32 │ │ logger.info("start build_img_gen_model") │
│ ❱ 33 │ │ img_gen_model = build_img_gen_model(cfg) │
│ 34 │ │ image_generator = ImageGenByDiffusion(cfg,img_gen_model) │
│ 35 │ elif visual_gen_type == "image_by_retrieval_then_diffusion": │
│ 36 │ │ # build img retrieval generator │
│ │
│ E:\voice2face\open-chat-video-editor\generator\image\generation\build.py:6 in │
│ build_img_gen_model │
│ │
│ 3 def build_img_gen_model(cfg): │
│ 4 │ │
│ 5 │ model_id = cfg.video_editor.visual_gen.image_by_diffusion.model_id │
│ ❱ 6 │ model = StableDiffusionImgModel(model_id) │
│ 7 │ return model │
│ 8 │
│ 9 def build_img2img_gen_model(cfg): │
│ │
│ E:\voice2face\open-chat-video-editor\generator\image\generation\stable_diffusion.py:10 in │
│ __init__ │
│ │
│ 7 │ │ self.model_id = model_id │
│ 8 │ │ self.pipe = StableDiffusionPipeline.from_pretrained(self.model_id, torch_dtype=t │
│ 9 │ │ self.pipe.scheduler = DPMSolverMultistepScheduler.from_config(self.pipe.schedule │
│ ❱ 10 │ │ self.pipe = self.pipe.to("cuda") │
│ 11 │ │
│ 12 │ def run(self,prompt): │
│ 13 │ │ image = self.pipe(prompt).images[0] │
│ │
│ E:\voice2face\open-chat-video-editor\enve\lib\site-packages\diffusers\pipelines\pipeline_utils.p │
│ y:643 in to │
│ │
│ 640 │ │ │
│ 641 │ │ is_offloaded = pipeline_is_offloaded or pipeline_is_sequentially_offloaded │
│ 642 │ │ for module in modules: │
│ ❱ 643 │ │ │ module.to(torch_device, torch_dtype) │
│ 644 │ │ │ if ( │
│ 645 │ │ │ │ module.dtype == torch.float16 │
│ 646 │ │ │ │ and str(torch_device) in ["cpu"] │
│ │
│ E:\voice2face\open-chat-video-editor\enve\lib\site-packages\torch\nn\modules\module.py:1145 in │
│ to │
│ │
│ 1142 │ │ │ │ │ │ │ non_blocking, memory_format=convert_to_format) │
│ 1143 │ │ │ return t.to(device, dtype if t.is_floating_point() or t.is_complex() else No │
│ 1144 │ │ │
│ ❱ 1145 │ │ return self._apply(convert) │
│ 1146 │ │
│ 1147 │ def register_full_backward_pre_hook( │
│ 1148 │ │ self, │
│ │
│ E:\voice2face\open-chat-video-editor\enve\lib\site-packages\torch\nn\modules\module.py:797 in │
│ _apply │
│ │
│ 794 │ │
│ 795 │ def _apply(self, fn): │
│ 796 │ │ for module in self.children(): │
│ ❱ 797 │ │ │ module._apply(fn) │
│ 798 │ │ │
│ 799 │ │ def compute_should_use_set_data(tensor, tensor_applied): │
│ 800 │ │ │ if torch._has_compatible_shallow_copy_type(tensor, tensor_applied): │
│ │
│ E:\voice2face\open-chat-video-editor\enve\lib\site-packages\torch\nn\modules\module.py:820 in │
│ _apply │
│ │
│ 817 │ │ │ # track autograd history of param_applied, so we have to use │
│ 818 │ │ │ # with torch.no_grad():
│ 819 │ │ │ with torch.no_grad(): │
│ ❱ 820 │ │ │ │ param_applied = fn(param) │
│ 821 │ │ │ should_use_set_data = compute_should_use_set_data(param, param_applied) │
│ 822 │ │ │ if should_use_set_data: │
│ 823 │ │ │ │ param.data = param_applied │
│ │
│ E:\voice2face\open-chat-video-editor\enve\lib\site-packages\torch\nn\modules\module.py:1143 in │
│ convert │
│ │
│ 1140 │ │ │ if convert_to_format is not None and t.dim() in (4, 5): │
│ 1141 │ │ │ │ return t.to(device, dtype if t.is_floating_point() or t.is_complex() els │
│ 1142 │ │ │ │ │ │ │ non_blocking, memory_format=convert_to_format) │
│ ❱ 1143 │ │ │ return t.to(device, dtype if t.is_floating_point() or t.is_complex() else No │
│ 1144 │ │ │
│ 1145 │ │ return self._apply(convert) │
│ 1146 │
│ │
│ E:\voice2face\open-chat-video-editor\enve\lib\site-packages\torch\cuda\__init__.py:239 in │
│ _lazy_init │
│ │
│ 236 │ │ │ │ "Cannot re-initialize CUDA in forked subprocess. To use CUDA with " │
│ 237 │ │ │ │ "multiprocessing, you must use the 'spawn' start method") │
│ 238 │ │ if not hasattr(torch._C, '_cuda_getDeviceCount'): │
│ ❱ 239 │ │ │ raise AssertionError("Torch not compiled with CUDA enabled") │
│ 240 │ │ if _cudart is None: │
│ 241 │ │ │ raise AssertionError( │
│ 242 │ │ │ │ "libcudart functions unavailable. It looks like you have a broken build? │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
AssertionError: Torch not compiled with CUDA enabled

python version 3.92
no GPU

CPU version:

pip3 install torch torchvision torchaudio

toolz 0.12.0
torch 2.0.1
torchaudio 2.0.2
torchvision 0.15.2

Run command: >python app/app.py --func Text2VideoEditor --cfg configs\text2video\image_by_diffusion_text_by_chatgpt_zh.yaml
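
With no GPU and a CPU-only torch build, the unconditional .to("cuda") in generator/image/generation/stable_diffusion.py will always raise this assertion. A hedged sketch of a device-aware variant (an assumed fix, not the project's shipped code):

import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

def build_pipe(model_id):
    # Fall back to CPU, and to fp32, which CPU inference requires, when CUDA is unavailable.
    device = "cuda" if torch.cuda.is_available() else "cpu"
    dtype = torch.float16 if device == "cuda" else torch.float32
    pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=dtype)
    pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
    return pipe.to(device)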

How to generate customized .faiss and .db files

Hi, I wonder if the owner or anyone else could share the method for creating a new webvid.faiss and webvid.db (the video retrieval index and database files). It would be much appreciated!
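
No official recipe appears in this thread. As a rough sketch of the general approach (embed each clip's caption with the same CLIP model used at query time, store the vectors in a faiss index, and keep the id-to-metadata mapping in a small database), note that the index type, table schema, and embedding source below are assumptions and may not match the shipped webvid.faiss/webvid.db format:

import sqlite3
import numpy as np
import faiss

def build_index(embeddings, metas, dim, index_path="webvid.faiss", db_path="webvid.db"):
    # embeddings: float32 array of shape (n, dim), L2-normalized so inner product = cosine similarity.
    # metas: list of (video_id, url, caption) tuples aligned with the embedding rows.
    ids = np.arange(len(metas), dtype=np.int64)
    index = faiss.IndexIDMap(faiss.IndexFlatIP(dim))
    index.add_with_ids(embeddings.astype(np.float32), ids)
    faiss.write_index(index, index_path)

    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS meta (id INTEGER PRIMARY KEY, video_id TEXT, url TEXT, caption TEXT)")
    con.executemany("INSERT INTO meta VALUES (?, ?, ?, ?)",
                    [(int(i), *m) for i, m in zip(ids, metas)])
    con.commit()
    con.close()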

ModuleNotFoundError: No module named 'faiss.swigfaiss_avx2'

A DLL package is missing on Windows 11:
2023-06-08 00:23:18,377 - faiss.loader - INFO - Loading faiss with AVX2 support.
2023-06-08 00:23:18,377 - faiss.loader - INFO - Could not load library with AVX2 support due to:
ModuleNotFoundError("No module named 'faiss.swigfaiss_avx2'")
2023-06-08 00:23:18,378 - faiss.loader - INFO - Loading faiss.
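
Note that these are informational messages rather than a fatal error: when the AVX2 build cannot be loaded, faiss falls back to the generic build and keeps working. A quick sketch to confirm the fallback import is usable (nothing here is specific to this repo):

import numpy as np
import faiss  # logs the AVX2 warning above, then falls back to the plain build

index = faiss.IndexFlatL2(4)
index.add(np.zeros((1, 4), dtype="float32"))
print(index.ntotal)  # 1 -> faiss works despite the missing swigfaiss_avx2 module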
