Comments (7)
Could you post the call stack? It may be from being unable to read the next frame from the input video file. Can you try with other input videos? Video init is working for me on stable-diffusion-512-v2-1
(though I initially had a problem with all-black frames at previous-frame strength 1.0)
from stability-sdk.
These are the current results. It's failing on the first frame within the tqdm loop; no images are being returned, I'm guessing.
Same with this model:
model: 'stable-diffusion-512-v2-1'
Is there some incompatibility between the new model and certain settings, maybe? I recall some settings crashed certain models before.
venv/bin/python3 animai_process.py runone assets/templates/base-48.json
stamp: 1122-051344
jobid j-1122-051344
-- [pid 48346] [svc/api] job metadata {
"userId": "user01",
"requestId": "request01",
"jobid": "j-1122-051344"
}
using model: stable-diffusion-xl-1024-v0-9
-- [pid 48346] [replacer] ---- detection ----
-- [pid 48346] [replacer] prompts before: {
"0": " portrait of a young man for the cover of a graphic novel in a floating city a merchant space station by Jean Giraud Moebius, the Incal, Syd Mead"
}
-- [pid 48346] [replacer] orig_image "./assets/people/the-rock-9.jpg"
-- [pid 48346] [replacer] prompts after: {
"0": " portrait of a young man for the cover of a graphic novel in a floating city a merchant space station by Jean Giraud Moebius, the Incal, Syd Mead"
}
completed downloading files
image: ./outputs/j-1122-051344/init_image.png
video: ./outputs/j-1122-051344/init_video.png
using video_init_path: ./outputs/j-1122-051344/init_video.png
using init_image: ./outputs/j-1122-051344/init_image.png
safe_image: ./outputs/j-1122-051344/init_image.png
animation_settings:
{
width: 512
height: 512
cfg_scale: 12
sampler: 'DDIM'
model: 'stable-diffusion-xl-1024-v0-9'
seed: -1
clip_guidance: 'None'
init_image: './outputs/j-1122-051344/init_image.png'
init_sizing: 'cover'
mask_path: ''
mask_invert: false
animation_mode: 'Video Input'
max_frames: 50
border: 'replicate'
noise_add_curve: '0:(0.02)'
noise_scale_curve: '0:(1.04)'
strength_curve: '0:(1), 10:(0.3), 30:(0.3), 45:(1)'
steps_curve: '0:(20)'
steps_strength_adj: true
interpolate_prompts: true
inpaint_border: false
locked_seed: false
angle: '0:(0)'
zoom: '0:(1)'
translation_x: '0:(0)'
translation_y: '0:(0)'
translation_z: '0:(-0.2)'
rotation_x: '0:(0)'
rotation_y: '0:(0)'
rotation_z: '0:(0)'
diffusion_cadence_curve: '0:(2)'
cadence_interp: 'rife'
cadence_spans: false
color_coherence: 'LAB'
brightness_curve: '0:(1.0)'
contrast_curve: '0:(1.0)'
hue_curve: '0:(0.0)'
saturation_curve: '0:(1.0)'
lightness_curve: '0:(0.0)'
depth_model_weight: 0.3
near_plane: 200
far_plane: 10000
fov_curve: '0:(25)'
save_depth_maps: false
depth_blur_curve: '0:(0.0)'
depth_warp_curve: '0:(1.0)'
video_init_path: './outputs/j-1122-051344/init_video.png'
extract_nth_frame: 5
video_mix_in_curve: '0:(1), 10:(0.3), 30:(0.3), 45:(1)'
video_flow_warp: false
vr_mode: false
vr_eye_angle: 0.5
vr_eye_dist: 5.0
vr_projection: -0.4
resume_timestring: ''
override_settings_path: ''
non_inpainting_model_for_diffusion_frames: false
do_mask_fixup: false
save_inpaint_masks: false
mask_min_value: '0:(0.0)'
fps: 20
camera_type: 'perspective'
image_render_method: 'mesh'
image_render_points_per_pixel: 8
image_render_point_radius: 0.006
image_max_mesh_edge: 0.1
mask_render_points_per_pixel: 4
mask_render_point_radius: 0.0045
mask_max_mesh_edge: 0.04
verbose: true
accumulate_xforms: false
midas_weight: 0.3
negative_prompt: 'blurry, bad hands, ugly, wrinkles'
negative_prompt_weight: -1
}
-- [pid 48346] [render] prompts: {
"0": " portrait of a young man for the cover of a graphic novel in a floating city a merchant space station by Jean Giraud Moebius, the Incal, Syd Mead"
}
[pid 48346] ----- [render] start animating -----
'[pid 48346] [jobid j-1122-051344]'
[pid 48346] [jobid j-1122-051344]: 0%| | 0/50 [00:00<?, ?it/s]
#### ERROR [render] unknown exception: ####
TypeError("'NoneType' object is not subscriptable")
Traceback (most recent call last):
File "/Users/dc/dev/revel/ai-anims-v2/animai/stabai/animator.py", line 164, in animate
for frame_idx, frame in enumerate(tqdm(
File "/Users/dc/dev/revel/ai-anims-v2/venv/lib/python3.10/site-packages/tqdm/std.py", line 1195, in __iter__
for obj in iterable:
File "/Users/dc/dev/revel/ai-anims-v2/venv/lib/python3.10/site-packages/stability_sdk/animation.py", line 640, in render
self.inpaint_mask = self.transform_video(frame_idx)
File "/Users/dc/dev/revel/ai-anims-v2/venv/lib/python3.10/site-packages/stability_sdk/animation.py", line 949, in transform_video
video_next_frame = cv2_to_pil(video_next_frame)
File "/Users/dc/dev/revel/ai-anims-v2/venv/lib/python3.10/site-packages/stability_sdk/animation.py", line 219, in cv2_to_pil
return Image.fromarray(cv2_img[:, :, ::-1])
TypeError: 'NoneType' object is not subscriptable
render error 'NoneType' object is not subscriptable
Traceback (most recent call last):
File "/Users/dc/dev/revel/ai-anims-v2/animai_process.py", line 25, in <module>
main()
File "/Users/dc/dev/revel/ai-anims-v2/animai_process.py", line 18, in main
runner.runone(opt)
File "/Users/dc/dev/revel/ai-anims-v2/animai/runner.py", line 40, in runone
process_job(job)
File "/Users/dc/dev/revel/ai-anims-v2/animai/services/api.py", line 128, in process_job
result = render.render_job(jobid, settings, prompt)
File "/Users/dc/dev/revel/ai-anims-v2/animai/render.py", line 20, in render_job
raise ex
File "/Users/dc/dev/revel/ai-anims-v2/animai/render.py", line 14, in render_job
result = animate(jobid, settings, prompt)
File "/Users/dc/dev/revel/ai-anims-v2/animai/stabai/animator.py", line 185, in animate
raise ex
File "/Users/dc/dev/revel/ai-anims-v2/animai/stabai/animator.py", line 164, in animate
for frame_idx, frame in enumerate(tqdm(
File "/Users/dc/dev/revel/ai-anims-v2/venv/lib/python3.10/site-packages/tqdm/std.py", line 1195, in __iter__
for obj in iterable:
File "/Users/dc/dev/revel/ai-anims-v2/venv/lib/python3.10/site-packages/stability_sdk/animation.py", line 640, in render
self.inpaint_mask = self.transform_video(frame_idx)
File "/Users/dc/dev/revel/ai-anims-v2/venv/lib/python3.10/site-packages/stability_sdk/animation.py", line 949, in transform_video
video_next_frame = cv2_to_pil(video_next_frame)
File "/Users/dc/dev/revel/ai-anims-v2/venv/lib/python3.10/site-packages/stability_sdk/animation.py", line 219, in cv2_to_pil
return Image.fromarray(cv2_img[:, :, ::-1])
TypeError: 'NoneType' object is not subscriptable
error: Recipe `runone` failed on line 97 with exit code 1
I confirmed animation is working with your default settings from python3 -m stability_sdk animate --gui, so perhaps it's something in the config or animation settings for the new models.
OK, I think the issue is that support for using a JPG/PNG as the video init image has been removed.
This crashes:
"animation_mode": "Video Input",
"video_init_path": "./assets/people/the-rock-9.jpg",
If I change the input to an MP4 file, it renders:
"video_init_path": "./assets/pongs/alex-shin.mp4",
Can you confirm whether this is expected behavior now?
It's not a big difference, just inconvenient; we'll have to ffmpeg-convert all the inputs first.
Maybe I can still use an older commit of the animation extension?
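For what it's worth, a minimal pure-Python sketch of why a still image fed in as the "video" produces this exact error (an assumption based on the traceback above, not the actual SDK source): cv2.VideoCapture.read() yields (False, None) for a file it cannot decode as video, and the BGR-to-RGB slice in cv2_to_pil is then applied to None.

```python
# Sketch of the failure path seen in the traceback (assumption: mirrors the
# SDK's cv2_to_pil, not copied from its source). When video_init_path is a
# still image, VideoCapture.read() returns (False, None), and slicing None
# raises exactly the TypeError in the logs.
frame = None  # what VideoCapture.read() yields for an unreadable "video"
try:
    rgb = frame[:, :, ::-1]  # the BGR -> RGB slice inside cv2_to_pil
except TypeError as err:
    print(err)  # prints: 'NoneType' object is not subscriptable
```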
Ah interesting, nice work narrowing that down. The SDK code for this didn't change, so it may have been an update to the opencv-python-headless library that broke it. If 4.7.0.72 still works, we can pin the version at that.
I tried 4.7.0.72, and failing that also went back a few versions to
opencv-python==4.6.0.66
opencv-python-headless==4.6.0.66
but I'm still getting the error.
Repro:
git clone --recurse-submodules https://github.com/Stability-AI/stability-sdk
# modified the `setup.py`
'anim': [
'keyframed',
'numpy',
'opencv-python-headless==4.6.0.66', ## <== pinned this
],
pip install "./stability-sdk[anim]"
# manually install base opencv @ version
pip install opencv-python==4.6.0.66
$ pip freeze | grep cv
opencv-python==4.6.0.66
opencv-python-headless==4.6.0.66
gives
[pid 58469] ----- [render] start animating -----
'[pid 58469] [jobid j-1122-074438]'
[pid 58469] [jobid j-1122-074438]: 0%| | 0/50 [00:00<?, ?it/s]
#### ERROR [render] unknown exception: ####
TypeError("'NoneType' object is not subscriptable")
Traceback (most recent call last):
File "/Users/dc/dev/revel/ai-anims-v2/animai/stabai/animator.py", line 165, in animate
for frame_idx, frame in enumerate(tqdm(
File "/Users/dc/dev/revel/ai-anims-v2/venv/lib/python3.10/site-packages/tqdm/std.py", line 1195, in __iter__
for obj in iterable:
File "/Users/dc/dev/revel/ai-anims-v2/venv/lib/python3.10/site-packages/stability_sdk/animation.py", line 640, in render
self.inpaint_mask = self.transform_video(frame_idx)
File "/Users/dc/dev/revel/ai-anims-v2/venv/lib/python3.10/site-packages/stability_sdk/animation.py", line 949, in transform_video
video_next_frame = cv2_to_pil(video_next_frame)
File "/Users/dc/dev/revel/ai-anims-v2/venv/lib/python3.10/site-packages/stability_sdk/animation.py", line 219, in cv2_to_pil
return Image.fromarray(cv2_img[:, :, ::-1])
TypeError: 'NoneType' object is not subscriptable
render error 'NoneType' object is not subscriptable
Traceback (most recent call last):
File "/Users/dc/dev/revel/ai-anims-v2/animai_process.py", line 25, in <module>
main()
File "/Users/dc/dev/revel/ai-anims-v2/animai_process.py", line 18, in main
runner.runone(opt)
File "/Users/dc/dev/revel/ai-anims-v2/animai/runner.py", line 40, in runone
process_job(job)
File "/Users/dc/dev/revel/ai-anims-v2/animai/services/api.py", line 128, in process_job
result = render.render_job(jobid, settings, prompt)
File "/Users/dc/dev/revel/ai-anims-v2/animai/render.py", line 20, in render_job
raise ex
File "/Users/dc/dev/revel/ai-anims-v2/animai/render.py", line 14, in render_job
result = animate(jobid, settings, prompt)
File "/Users/dc/dev/revel/ai-anims-v2/animai/stabai/animator.py", line 186, in animate
raise ex
File "/Users/dc/dev/revel/ai-anims-v2/animai/stabai/animator.py", line 165, in animate
for frame_idx, frame in enumerate(tqdm(
File "/Users/dc/dev/revel/ai-anims-v2/venv/lib/python3.10/site-packages/tqdm/std.py", line 1195, in __iter__
for obj in iterable:
File "/Users/dc/dev/revel/ai-anims-v2/venv/lib/python3.10/site-packages/stability_sdk/animation.py", line 640, in render
self.inpaint_mask = self.transform_video(frame_idx)
File "/Users/dc/dev/revel/ai-anims-v2/venv/lib/python3.10/site-packages/stability_sdk/animation.py", line 949, in transform_video
video_next_frame = cv2_to_pil(video_next_frame)
File "/Users/dc/dev/revel/ai-anims-v2/venv/lib/python3.10/site-packages/stability_sdk/animation.py", line 219, in cv2_to_pil
return Image.fromarray(cv2_img[:, :, ::-1])
TypeError: 'NoneType' object is not subscriptable
error: Recipe `runone` failed on line 97 with exit code 1
So maybe it's another package?
I was able to get this working by using a full animation as the background video_init_path.
Handy ffmpeg script to make a backing video from a single still image:
ffmpeg -y -loop 1 -framerate 1 -i {{imagepath}} -t 50 -c:v libx264 -r 10 {{output}}
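Since every input would need converting first, the command above can be built per job from Python (a sketch; make_backing_video_cmd and its defaults are hypothetical helpers, not part of the SDK, and ffmpeg must be on PATH to actually run it):

```python
import shlex

def make_backing_video_cmd(imagepath, output, seconds=50, fps=10):
    """Build the ffmpeg command that loops a single still into a backing video.

    Mirrors the one-liner above: loop one input image, cap the duration,
    and encode with libx264 at the requested output frame rate.
    """
    return [
        "ffmpeg", "-y",
        "-loop", "1",        # repeat the single input image
        "-framerate", "1",   # read one input frame per second
        "-i", imagepath,
        "-t", str(seconds),  # total output duration
        "-c:v", "libx264",
        "-r", str(fps),      # output frame rate
        output,
    ]

# Inspect the command (pass the list to subprocess.run to execute it):
print(shlex.join(make_backing_video_cmd("init_image.png", "init_video.mp4")))
```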