
inochi2d / inochi-session


Application that allows streaming with Inochi2D puppets

Home Page: https://inochi2d.com

License: BSD 2-Clause "Simplified" License

D 95.18% Shell 3.39% C 0.53% PowerShell 0.49% GLSL 0.41%
streaming vtuber vtubing

inochi-session's Introduction


Inochi2D

Support me on Patreon · Discord

Inochi2D is a library for realtime 2D puppet animation and the reference implementation of the Inochi2D Puppet standard. Inochi2D works by deforming 2D meshes, created from layered art, at runtime based on parameters; this deformation tricks the viewer into seeing 3D depth and movement in the 2D art.

 

Video from Beta 0.7.2 (2022-05-03.02-46-34.mp4), LunaFoxgirlVT; model art by kpon

 

For Riggers and VTubers

If you're a model rigger, you may want to check out Inochi Creator, the official Inochi2D rigging app currently in development. If you're a VTuber, you may want to check out Inochi Session. This repository is purely for the standard and is not useful if you're an end user.

 

Documentation

Documentation is currently in the process of being written for the spec and the official tools. You can find the official documentation page here.

 

Supported platforms

Inochi2D is a "bring your own renderer" API. We provide an OpenGL 3.1 backend to get you started easily and to serve as a reference for how a renderer can be implemented.
To use the OpenGL renderer, call inRendererInitGL during initialization of Inochi2D; an OpenGL 3.1 core context needs to be present.

We provide inochi2d-c as a way to use this library from non-D languages, and we will be providing a layer that allows non-D languages to create rendering backends. Additionally, a second workgroup is making a pure Rust implementation of the Inochi2D specification over at Inox2D.

NOTE

Inochi2D does not support compilation with the OpenD language. Only the official D language and compilers are supported.

 

Special Thanks

This project is funded through NGI0 Entrust, a fund established by NLnet with financial support from the European Commission's Next Generation Internet program. Learn more at the NLnet project page.

NLnet foundation logo


The Inochi2D logo was designed by James Daniel

inochi-session's People

Contributors

20kdc, grillo-delmal, higasamitsue, lunathefoxgirl, orowith2os, pheki, scrwnl, seagetch


inochi-session's Issues

Error: No valid OpenGL 4.2 context was found!

Hey there. I gave the latest release build a quick test, and ran into the following error:

./inochi-session-x86_64.AppImage
[INFO] Inochi Session v0.5.4, args=[]
[INFO] Lua support initialized. (Statically linked for now)
[INFO] Scanning plugins at /home/ezequiel/.config/inochi-session/plugins...
object.Exception@inui/source/inui/core/window/appwin.d(235): No valid OpenGL 4.2 context was found!
----------------
??:? [0x558e2abc8096]
??:? [0x558e2abc9ba6]
??:? [0x558e2abab1af]
??:? [0x558e2aab35ea]
??:? [0x558e2aa4cfa5]
??:? [0x558e2aa633f9]
??:? [0x558e2abaae9b]
??:? [0x558e2abaad95]
??:? [0x558e2abaabed]
??:? [0x7f4d21a3554f]
??:? __libc_start_main [0x7f4d21a35608]
??:? [0x558e2aa491e9]
fish: Job 1, './inochi-session-x86_64.AppImage' terminated by signal SIGSEGV (Address boundary error)

Running the command in the bash shell (fish is my usual) adds this line to the end of the error message:

Segmentation fault (core dumped)

My system information:

  • OS: Fedora 36
  • Display server: X11
  • Processor: i7-4790
  • Graphics: Nvidia GTX1070 with proprietary drivers.

3D and OpenGL apps in general seem to work, so I really don't think the problem is with my system specifically. I should probably also mention that I get this error with the zipped version of the release as well.

[BUG] Segfault with empty zone

Validations

  • I have checked for similar bug reports and could not find any.
  • I have tested and confirmed that this is an issue in an official branded build.

Describe the bug

When trying to run this software using integrated graphics on an AMD CPU, it segfaults (SIGSEGV) when trying to launch. The hardware in question is AMD Ryzen 7 4700U with Radeon Graphics, per lscpu

Reproduction

See description.

System Architecture

x86_64

Operating System

Linux

Version

Nightly (As of 19:11, UTC-5, April 23)

Logs

Click to expand!

$ ./inochi-session
[INFO] Inochi Session commit+610ccc8, args=[]
[INFO] Lua support initialized. (Statically linked for now)
[INFO] Scanning plugins at /home/pk/.config/inochi-session/plugins...
[ERR ] Could not start texture sharing, it will be disabled. Is the library missing?
Segmentation fault (core dumped)
$

Additional Context

No response

[Feature Request] Add 'File' menu for opening models

Validations

  • I have checked for similar feature requests and could not find any.
  • I have made sure this is not an already-existing feature.

Description

Drag-and-dropping a model file from the file manager doesn't come naturally to a lot of users, so a clear and obvious File -> Open menu would help a lot.

Suggested solution

Add 'File' menu with 'Open' function to the existing top menu.

Alternative solution

No response

Additional Context

No response

[Feature Request] Background Image Transform and Tracker/Expression Binding

Validations

  • I have checked for similar feature requests and could not find any.
  • I have made sure this is not an already-existing feature.

Description

Adding a background image transform would grant more options to the scene, and being able to bind those transforms to expression bindings would be a nice added option. For example, this could be used for a background parallax effect combined with the character moving left and right, which is already a common rigging feature.
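For context on how such a binding could behave: a background parallax is essentially an inverted, scaled copy of a tracking value. A minimal sketch, with a hypothetical function name and factor (not Session's API):

```python
def background_offset(head_x: float, parallax: float = 0.1) -> float:
    """Shift the background opposite to the head position, scaled down,
    so the background appears farther away than the character."""
    return -head_x * parallax

# Head 50 px to the right -> background drifts slightly to the left.
offset = background_offset(50.0)
```

The same idea extends to scale: binding a small multiple of the tracking value to the background position/scale fields gives the parallax effect described above.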

Suggested solution

  • Add input fields to change background scale and position.
  • Add tracking/expression bindings to background scale and position for a more dynamic change.

Alternative solution

No response

Additional Context

No response

[Feature Request] Trigger animations through ratio/expression bindings

Validations

  • I have checked for similar feature requests and could not find any.
  • I have made sure this is not an already-existing feature.

Description

The idea is to add a way to see the list of animations added to a model and be able to bind them to a tracking value (either ratio or expression).

Suggested solution

I imagine this could be achieved by either adding the animation list to the tracking panel or adding it to a new panel that works in a similar way.

The idea would be to bind the animation to a blendshape or expression: trigger the lead-in animation when the value goes >= 1, then loop until the value goes <= 0, which would trigger the lead-out animation and stop.

Values could be configurable, though.
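The trigger logic described above amounts to a small state machine; a sketch of the idea (hypothetical names, with the thresholds exposed as the configurable values mentioned):

```python
class AnimationTrigger:
    """Bind an animation to a tracking value: the lead-in plays when the
    value reaches the on-threshold, the animation loops while active,
    and the lead-out plays once the value falls to the off-threshold."""

    def __init__(self, on_threshold: float = 1.0, off_threshold: float = 0.0):
        self.on_threshold = on_threshold
        self.off_threshold = off_threshold
        self.state = "stopped"

    def update(self, value: float) -> str:
        if self.state == "stopped" and value >= self.on_threshold:
            self.state = "looping"   # play lead-in, then loop
        elif self.state == "looping" and value <= self.off_threshold:
            self.state = "stopped"   # play lead-out, then stop
        return self.state
```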

Alternative solution

No response

Additional Context

I may make a small video if I have the time...

[Feature Request] Support camera capture so the Live2D model follows the person in the camera

Validations

  • I have checked for similar feature requests and could not find any.
  • I have made sure this is not an already-existing feature.

Description

In other words, I hope the software can support capturing the face, shoulders, arms, and so on from the camera, and apply that movement to the Live2D model.

Suggested solution

See the description.

Alternative solution

No response

Additional Context

No response

[Feature Request] Add a way for Session to send output larger than its current window size.

Validations

  • I have checked for similar feature requests and could not find any.
  • I have made sure this is not an already-existing feature.

Description

Currently, when capturing Session in OBS via Spout2, the output is tied to Session's window size. For livestreaming I think this is more than enough, but for recording a full-body model in high resolution (in order to save the recording as a video and use it later for video editing projects, for example) this is not enough, unless the user has a high-resolution display or uses some kind of super-resolution provided by GPU drivers.

Suggested solution

Add a 'Canvas' or 'Stage' whose dimensions are independent of Session's window size. The Spout2 output should then render the entire canvas, so users can make a high-resolution recording without a high-resolution desktop.

Alternative solution

No response

Additional Context

No response

[BUG] 200% pixel models are very small and far away

Validations

  • I have checked for similar bug reports and could not find any.
  • I have tested and confirmed that this is an issue in an official branded build.

Describe the bug

Importing a pixel model resized by only 200% makes it extremely small in Session, even though it looks fine in Creator.

image

Reproduction

  1. start inochi session
  2. drag inp file of a small 200% resolution model

System Architecture

None

Operating System

Windows

Version

8.0

Logs

No response

Additional Context

No response

[BUG] Deleting Virtual Space Zone items causes crash

Validations

  • I have checked for similar bug reports and could not find any.
  • I have tested and confirmed that this is an issue in an official branded build.

Describe the bug

Due to a threading issue the threads for virtual space zone items are not appropriately cleared. This causes the garbage collector to attempt to delete stale memory.

Reproduction

See description.

System Architecture

None

Operating System

None

Version

No response

Logs

No response

Additional Context

No response

[Feature Request] Animation Recording

Validations

  • I have checked for similar feature requests and could not find any.
  • I have made sure this is not an already-existing feature.

Description

Now that the animation system is implemented in Creator, adding an animation recording feature to Session would be very nice (similar to what VTS already has).

Suggested solution

Add an animation recording feature to Session. Even a 'dumb' one that just records parameter values at a set interval, without any smoothing or predictive keyframe placement, would already be very useful.

I'm not sure what the best way to do it is if Session is now tied to .inp files only, but if we have the recording feature plus an animation import feature in Creator (as per my previous suggestion), then users can record via Session -> save to file (.inp) -> open their .inx file in Creator -> import the animation from the .inp file.
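The 'dumb' recorder described above could be as simple as timestamping each polled parameter snapshot at a fixed interval; a sketch (the function and data shapes are hypothetical, not Session's API):

```python
def record_parameters(samples, interval_s: float = 0.1):
    """Turn a sequence of polled parameter snapshots (dicts mapping
    parameter name -> value) into (timestamp, values) keyframes at a
    fixed interval, with no smoothing or keyframe reduction."""
    return [(round(i * interval_s, 3), dict(s)) for i, s in enumerate(samples)]

# Two polls, 0.1 s apart:
frames = record_parameters([{"MouthOpen": 0.0}, {"MouthOpen": 1.0}])
```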

Alternative solution

No response

Additional Context

No response

[Feature Request] OSC tracking

Validations

  • I have checked for similar feature requests and could not find any.
  • I have made sure this is not an already-existing feature.

Description

Support for the Open Sound Control (OSC) protocol as a tracker source. Possibly as a control plane for the rest of the program as well.

Benefits

  • Integration with scene and cue sequencers (Ossia Score, Chataigne, IanniX)
  • Integration with external triggers (TouchOSC, Stream Deck) to trigger character expressions
  • Simple format (get UDP packet, resolve path)

An external panel could then be used to change parameters (via OSC) such that, for example, the character shows a cartoon sweat effect while a 'terrified' button is held.
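To illustrate the 'simple format' point: an OSC 1.0 message is just a NUL-padded address string, a padded type-tag string, and big-endian arguments, sent as one UDP datagram. A minimal sketch (the address path is hypothetical):

```python
import struct

def osc_pad(b: bytes) -> bytes:
    # OSC strings are NUL-terminated and padded to a multiple of 4 bytes.
    return b + b"\x00" * (4 - len(b) % 4)

def osc_message(address: str, *floats: float) -> bytes:
    """Build a minimal OSC 1.0 message carrying float32 arguments."""
    tags = "," + "f" * len(floats)
    return (osc_pad(address.encode()) +
            osc_pad(tags.encode()) +
            b"".join(struct.pack(">f", f) for f in floats))

msg = osc_message("/tracking/head_x", 0.5)
```

Sending it is a single `sendto(msg, (host, port))` on a UDP socket, which is why OSC integrates so easily with sequencers and external triggers.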

Suggested solution

Support OSC control messages to control the software, and/or as a tracker input.

Alternative solution

No response

Additional Context

No response

[Feature Request] Add a parameter panel, or combine tracking and parameter into one panel.

Validations

  • I have checked for similar feature requests and could not find any.
  • I have made sure this is not an already-existing feature.

Description

Currently in Session there is no way to manually set a parameter without tracking. This prevents some functionality commonly used in VTuber livestreaming, like switching accessories on/off or switching body-part poses.

Suggested solution

  • Add a parameter panel, or combine the tracking panel and parameter panel into a single entity.
  • The parameter panel could be like the one in Creator, so users can manually click and drag to set parameter values. Maybe make it so that when a parameter is tied to a specific tracking source, its slider locks and cannot be edited with the mouse.

Alternative solution

No response

Additional Context

No response

[BUG] Very High CPU Usage

Validations

  • I have checked for similar bug reports and could not find any.
  • I have tested and confirmed that this is an issue in an official branded build.

Describe the bug

At the moment, running Session with no Virtual Spaces, hidden UI, and no model loaded takes 33% CPU, versus 13% CPU taken by VTube Studio with no model loaded and webcam tracking disabled (i3 9100F).

Screenshot (651)

Reproduction

  1. Open Session, delete all virtual spaces, and hide UI.
  2. Open VTube Studio, and disable webcam capture.

System Architecture

x86_64

Operating System

Windows

Version

0.8.3

Logs

No response

Additional Context

No response

[BUG] Passing a model file to the inochi-session binary opens the model, but the same doesn't work in the AppImage

Validations

  • I have checked for similar bug reports and could not find any.
  • I have tested and confirmed that this is an issue in an official branded build.

Describe the bug

I am trying to open an .inx model in inochi-session using the following:

$ inochi-session-x86_64.AppImage ./models/Midori.inx

but it still opens an empty inochi-session.

I think adding "$@" to the AppRun script may help solve the issue.

ref: https://discourse.appimage.org/t/command-line-parameter-transfer/1537

Reproduction

  1. Download AppImage build and example models from https://github.com/Inochi2D/inochi-session/releases/tag/v0.5.4 and make the AppImage executable
  2. Run $ inochi-session-x86_64.AppImage ./path/to/Midori.inx; it is expected to open inochi-session with the "Midori" model, but it opens an empty session.

System Architecture

x86_64

Operating System

Linux

Version

v0.5.4

Logs

No response

Additional Context

No response

[BUG] Trailing Images when Post-Processing is Enabled

Validations

  • I have checked for similar bug reports and could not find any.
  • I have tested and confirmed that this is an issue in an official branded build.

Describe the bug

This is an old issue, but since it's not listed here yet I thought I'd add an entry. As the title says, enabling post-processing causes trailing images to appear.

Reproduction

  1. Open Scene Settings.
  2. Enable Post-Processing.
  3. Move the loaded model around.
Screenshot (542)

System Architecture

x86_64

Operating System

Windows

Version

62a328

Logs

No response

Additional Context

No response

[BUG] Incorrect packets over the VMC protocol crash Inochi

Validations

  • I have checked for similar bug reports and could not find any.
  • I have tested and confirmed that this is an issue in an official branded build.

Describe the bug

Sending an incorrect OSC bundle to the VMC endpoint will crash Inochi Session with an array-out-of-bounds error. The software will not close correctly and will hang at the stack trace.

Reproduction

  1. Add a VMC Tracker
  2. Create an OSC device in Ossia Score
  3. Send a float value to /VMC/Ext/Blend/Catte
  4. Observe the crash

This should not strictly require Ossia, since it looks like the issue is VMC expecting a Name+Float bundle, receiving only a Float, and failing to fail gracefully.
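The underlying fix is to validate the OSC argument list before indexing into it. A sketch of the defensive check (simplified for illustration; the VMC blendshape message normally carries a name and a float, and vmc-d's actual message type differs):

```python
def parse_blend_value(args):
    """A VMC blendshape message should carry (name, value). Reject
    anything else, such as the lone float sent in this reproduction,
    instead of blindly reading args[1]."""
    if len(args) < 2 or not isinstance(args[0], str):
        return None  # malformed: drop (and log) rather than crash
    return (args[0], float(args[1]))
```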

System Architecture

x86_64

Operating System

Linux

Version

v0.8.0

Logs

Click to expand!

icedquinn@astaraline ~/D/inochi-session-linux-x86_64 [SIGKILL]> ./inochi-session
[INFO] Inochi Session v0.8.0, args=[]
[INFO] Lua support initialized. (Statically linked for now)
[INFO] Scanning plugins at /home/icedquinn/.config/inochi-session/plugins...
[INFO] Found zone Yass
[ERR ] Could not start texture sharing, it will be disabled. Is the library missing?
core.exception.ArrayIndexError@../../../.dub/packages/vmc-d-1.1.3/vmc-d/source/osc/message.d(140): index [1] is out of bounds for array of length 1

Additional Context

This issue is a fork of #41, a bug I discovered while experimenting with that issue.

[Feature Request] Add a toggle for viewport background color change/switch

Validations

  • I have checked for similar feature requests and could not find any.
  • I have made sure this is not an already-existing feature.

Description

Currently the viewport follows the app's current theme (light or dark). It would be very useful to have a switch between light/dark viewport backgrounds in the toolbar.

Suggested solution

  • Add a toggle between dark/light viewport color in the toolbar.

Alternative solution

  • Alternatively, add a toggle between dark/light UI themes to the toolbar (might be easier/faster, since the functionality is already available, just placed in a rather inconvenient spot).
  • In the future, it might be even more useful if we can switch between solid and checkerboard background pattern.

Additional Context

No response

[BUG] Scene becomes invisible when model is loaded on certain GPUs

Validations

  • I have checked for similar bug reports and could not find any.
  • I have tested and confirmed that this is an issue in an official branded build.

Describe the bug

After loading a model, the entire scene becomes invisible, leaving only a black background and inochi-session's UI. This issue has been observed on Intel GPUs and software rendering; Nvidia GPUs are not affected. At least versions 0.8 up to the latest nightly (a462922) are affected.

Reproduction

Using an Intel GPU or software rendering:

  1. Start Inochi-Session 0.8.
  2. Set the scene background colour to something other than black.
  3. Load Midori.inx from example-models by drag-and-drop.
  4. The scene is gone, leaving just the UI on a black background.
  5. Start dragging from the central part of the screen
  6. Observe that the trashcan still appears on the bottom left.
  7. Drag the invisible puppet to the trashcan.
  8. The scene becomes visible again with the colour selected in step 2.

System Architecture

x86_64

Operating System

Linux

Version

a462922

Logs

Logs under llvmpipe and MESA_DEBUG=1
$ export LIBGL_ALWAYS_SOFTWARE=1 MESA_DEBUG=1 GALLIUM_DRIVER=llvmpipe
$ ./inochi-session 
[INFO] Inochi Session v0.8.0, args=[]
[INFO] Lua support initialized. (Statically linked for now)
[INFO] Scanning plugins at /home/user/.config/inochi-session/plugins...
Mesa: User error: GL_INVALID_OPERATION in glFramebufferTexture2D(window-system framebuffer)
Mesa: 7 similar GL_INVALID_OPERATION errors
Mesa: User error: GL_INVALID_OPERATION in glFramebufferTexture2D(non-existent texture 4)
[ERR ] Could not start texture sharing, it will be disabled. Is the library missing?
Mesa: 7 similar GL_INVALID_OPERATION errors
Mesa: User error: GL_INVALID_OPERATION in glDrawBuffers(unsupported buffer GL_COLOR_ATTACHMENT0)
[INFO] Created expression p3304717045_0...
[INFO] Created expression p3304717478_0...
Mesa: 374 similar GL_INVALID_OPERATION errors
Mesa: User error: GL_INVALID_OPERATION in glDrawElements
Mesa: 4 similar GL_INVALID_OPERATION errors
Mesa: User error: GL_INVALID_OPERATION in glDrawArrays
Mesa: User error: GL_INVALID_OPERATION in glDrawElements
Mesa: 2 similar GL_INVALID_OPERATION errors
Mesa: User error: GL_INVALID_OPERATION in glDrawBuffers(unsupported buffer GL_COLOR_ATTACHMENT0)
Mesa: User error: GL_INVALID_OPERATION in glDrawArrays
Mesa: User error: GL_INVALID_ENUM in glBlendEquationSeparateEXT(modeRGB)
Mesa: User error: GL_INVALID_OPERATION in glDrawElements
Mesa: 4 similar GL_INVALID_OPERATION errors
Mesa: User error: GL_INVALID_OPERATION in glDrawArrays
Mesa: User error: GL_INVALID_OPERATION in glDrawElements
Mesa: 2 similar GL_INVALID_OPERATION errors
Mesa: User error: GL_INVALID_OPERATION in glDrawBuffers(unsupported buffer GL_COLOR_ATTACHMENT0)
Mesa: User error: GL_INVALID_OPERATION in glDrawArrays
Mesa: User error: GL_INVALID_ENUM in glBlendEquationSeparateEXT(modeRGB)
Mesa: User error: GL_INVALID_OPERATION in glDrawElements
Mesa: 4 similar GL_INVALID_OPERATION errors
Mesa: User error: GL_INVALID_OPERATION in glDrawArrays
Mesa: User error: GL_INVALID_OPERATION in glDrawElements
Mesa: 2 similar GL_INVALID_OPERATION errors
Mesa: User error: GL_INVALID_OPERATION in glDrawBuffers(unsupported buffer GL_COLOR_ATTACHMENT0)
Mesa: User error: GL_INVALID_OPERATION in glDrawArrays
Mesa: User error: GL_INVALID_ENUM in glBlendEquationSeparateEXT(modeRGB)
Mesa: User error: GL_INVALID_OPERATION in glDrawElements
Mesa: 4 similar GL_INVALID_OPERATION errors
Mesa: User error: GL_INVALID_OPERATION in glDrawArrays
Mesa: User error: GL_INVALID_OPERATION in glDrawElements
Mesa: 2 similar GL_INVALID_OPERATION errors
Mesa: User error: GL_INVALID_OPERATION in glDrawBuffers(unsupported buffer GL_COLOR_ATTACHMENT0)
Mesa: User error: GL_INVALID_OPERATION in glDrawArrays
Mesa: User error: GL_INVALID_ENUM in glBlendEquationSeparateEXT(modeRGB)
Mesa: User error: GL_INVALID_OPERATION in glDrawElements
Mesa: 4 similar GL_INVALID_OPERATION errors
Mesa: User error: GL_INVALID_OPERATION in glDrawArrays
Mesa: User error: GL_INVALID_OPERATION in glDrawElements
Mesa: 2 similar GL_INVALID_OPERATION errors
Mesa: User error: GL_INVALID_OPERATION in glDrawBuffers(unsupported buffer GL_COLOR_ATTACHMENT0)
Mesa: User error: GL_INVALID_OPERATION in glDrawArrays
Mesa: User error: GL_INVALID_ENUM in glBlendEquationSeparateEXT(modeRGB)
Mesa: User error: GL_INVALID_OPERATION in glDrawElements
Mesa: 4 similar GL_INVALID_OPERATION errors
Mesa: User error: GL_INVALID_OPERATION in glDrawArrays
Mesa: User error: GL_INVALID_OPERATION in glDrawElements
Mesa: 2 similar GL_INVALID_OPERATION errors
Mesa: User error: GL_INVALID_OPERATION in glDrawBuffers(unsupported buffer GL_COLOR_ATTACHMENT0)
Mesa: User error: GL_INVALID_OPERATION in glDrawArrays
Mesa: User error: GL_INVALID_ENUM in glBlendEquationSeparateEXT(modeRGB)
Mesa: User error: GL_INVALID_OPERATION in glDrawElements
Mesa: 4 similar GL_INVALID_OPERATION errors
Mesa: User error: GL_INVALID_OPERATION in glDrawArrays
Mesa: User error: GL_INVALID_OPERATION in glDrawElements
Mesa: 2 similar GL_INVALID_OPERATION errors
Mesa: User error: GL_INVALID_OPERATION in glDrawBuffers(unsupported buffer GL_COLOR_ATTACHMENT0)
Mesa: User error: GL_INVALID_OPERATION in glDrawArrays
Mesa: User error: GL_INVALID_ENUM in glBlendEquationSeparateEXT(modeRGB)
Mesa: User error: GL_INVALID_OPERATION in glDrawElements
Mesa: 4 similar GL_INVALID_OPERATION errors
Mesa: User error: GL_INVALID_OPERATION in glDrawArrays
Mesa: User error: GL_INVALID_OPERATION in glDrawElements
Mesa: 2 similar GL_INVALID_OPERATION errors
Mesa: User error: GL_INVALID_OPERATION in glDrawBuffers(unsupported buffer GL_COLOR_ATTACHMENT0)
Mesa: User error: GL_INVALID_OPERATION in glDrawArrays
Mesa: User error: GL_INVALID_ENUM in glBlendEquationSeparateEXT(modeRGB)
Mesa: User error: GL_INVALID_OPERATION in glDrawElements
Mesa: 4 similar GL_INVALID_OPERATION errors
Mesa: User error: GL_INVALID_OPERATION in glDrawArrays
Mesa: User error: GL_INVALID_OPERATION in glDrawElements
Mesa: 2 similar GL_INVALID_OPERATION errors
Mesa: User error: GL_INVALID_OPERATION in glDrawBuffers(unsupported buffer GL_COLOR_ATTACHMENT0)
Mesa: User error: GL_INVALID_OPERATION in glDrawArrays
Mesa: User error: GL_INVALID_ENUM in glBlendEquationSeparateEXT(modeRGB)
[INFO] Saving Virtual Space...

Additional Context

I created this to formally document a string of issues reported on the #support Discord channel.

So far, this issue has been observed on:

  • Intel HD Graphics (Intel Kaby Lake) on Windows 10
  • Mesa 23.0.3 llvmpipe (LLVM 15.0.7) on Fedora 37
  • Mesa 22.2.5 llvmpipe (LLVM 15.0.6) on Ubuntu 22.04.2 LTS
  • Mesa 23.2.0-devel (git-4621a6db50) llvmpipe (LLVM 15.0.7) on Ubuntu 22.04.2 LTS
  • Possibly other Mesa llvmpipe drivers

This issue does not affect Nvidia GPUs or Mesa softpipe.

[Feature Request] Sort out flatpak builds

Validations

  • I have checked for similar feature requests and could not find any.
  • I have made sure this is not an already-existing feature.

Description

We need to get stable, nightly, release-candidate, and arm64 flatpaks sorted before we release 0.8; we will need @orowith2os's help with this.

Suggested solution

See above.

Alternative solution

No response

Additional Context

No response

[Feature Request] Allow searching for plugins in system paths

Validations

  • I have checked for similar feature requests and could not find any.
  • I have made sure this is not an already-existing feature.

Description

From what I can tell, Session currently only searches for plugins in the user paths. If there's a system path for them, I can't see any in the logs, and a quick search over the source code doesn't show anything I recognize.

Suggested solution

Allow setting the system plugin search path at build time, and maybe have an environment variable to override it for debugging purposes.

Session should search the user plugin directories first, then the system paths; if there are any conflicting plugins, prioritize the user plugin.
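The lookup order described above can be sketched as follows (the .lua extension and directory arguments are assumptions for illustration, not Session's actual plugin layout):

```python
from pathlib import Path

def resolve_plugins(user_dir, system_dirs):
    """Scan system plugin paths first, then the user path, letting a
    user plugin with the same name overwrite the system copy."""
    plugins = {}
    for d in [*map(Path, system_dirs), Path(user_dir)]:  # user dir last, so it wins
        if d.is_dir():
            for p in sorted(d.glob("*.lua")):
                plugins[p.stem] = p
    return plugins
```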

Alternative solution

No response

Additional Context

This is a blocker for the Flatpak version of Session.

[BUG] App crashes when adding extra trackers while there is an active puppet expecting bone deformation data

Validations

  • I have checked for similar bug reports and could not find any.
  • I have tested and confirmed that this is an issue in an official branded build.

Describe the bug

I have a puppet which I had already associated with tracking parameters. When I try to add a new tracker in the Virtual Space menu, the app crashes.

Reproduction

  1. Load a puppet
  2. Go to View > Virtual Spaces and add a space and a tracker
  3. Associate a bone to one of the puppet's parameters
  4. Go to the vspace and try to add a second tracker
  5. crash

System Architecture

x86_64

Operating System

Linux

Version

0.8.4

Logs

No response

Additional Context

It crashes here:

Bone b = source.getBones()[name];

Imgui Empty ID Assertion triggered after reloading tracking tree

How to reproduce:

  • Open inochi-session with a model
  • Ensure that the Tracking popup is visible ( View -> Tracking)
  • Click on the model
  • Click the refresh button
  • Open one of the combo boxes

Doing this produces the following error message:
inochi-session: /opt/src/bindbc-imgui/deps/cimgui/imgui/imgui.cpp:8925: bool ImGui::ItemAdd(const ImRect&, ImGuiID, const ImRect*, ImGuiItemFlags): Assertion `id != window->ID && "Cannot have an empty ID at the root of a window. If you need an empty label, use ## and read the FAQ about how the ID Stack works!"' failed.

Rebuilding with debug flags and using gdb I found that this is triggered when the tracking system returns a source with an empty name.

I made a patch that stops the problem, but I don't know if it is an appropriate solution.

diff --git a/source/session/panels/tracking.d b/source/session/panels/tracking.d
index e0b713f..c3e2e89 100644
--- a/source/session/panels/tracking.d
+++ b/source/session/panels/tracking.d
@@ -137,7 +137,9 @@ private:
                             uiImEndMenu();
                         }
                     } else {
-                        if (uiImSelectable(source.cName, selected)) {
+                        if (uiImSelectable(
+                                source.name.length > 0 ? source.cName : " ".ptr, 
+                                selected)) {
                             binding.sourceType = SourceType.Blendshape;
                             binding.sourceName = source.name;
                             binding.createSourceDisplayName();

This was tested on the binary provided by the 0.0.1 AppImage, using a compiled cimgui.so, in a Fedora 36 environment.

[BUG] UI does not respect system scaling (makes it unusable on high DPI monitor)

Validations

  • I have checked for similar bug reports and could not find any.
  • I have tested and confirmed that this is an issue in an official branded build.

Describe the bug

When running the program on a monitor with a high DPI, it does not respect the system UI scaling options, so the UI gets displayed really small.

image

Reproduction

  1. (Optionally) Get a high-DPI monitor.
  2. Set system window scaling to some non-100% value (e.g. 150%).
  3. Notice that the scale of the UI elements in Inochi Session doesn't change.

System Architecture

x86_64

Operating System

Linux

Version

0.8.3

Logs

Nothing special really

Full log
[INFO] Inochi Session v0.8.3, args=[]
[INFO] Lua support initialized. (Statically linked for now)
[INFO] Scanning plugins at /home/themoep/.config/inochi-session/plugins...
[ERR ] webcam: Unexpected end of input when converting from type string to type uint
[INFO] Found zone webcam
[ERR ] Could not start texture sharing, it will be disabled. Is the library missing?
[INFO] Saving Virtual Space...
[INFO] Saving Zone webcam...

Additional Context

Running on Pop!_OS 22.04 LTS with the default Window Manager (so GNOME 42 based).

Ideally it would use the system's scaling to set the UIScale option.

[Feature Request] Add some initial values for first time users in Virtual Space settings

Validations

  • I have checked for similar feature requests and could not find any.
  • I have made sure this is not an already-existing feature.

Description

When running Session for the first time, the 'Virtual Space' settings are empty, and the UI doesn't help the user much without outside documentation or a tutorial.

Suggested solution

  • In the Virtual Space window, there should be a 'Default' entry already set up at first boot (which can be deleted/renamed later by users who know what they're doing).
  • The '+' buttons should have text labels, for example 'Add Virtual Space' and 'Add Tracker'.
  • When opening the Virtual Space window, the first entry in the virtual spaces list should already be selected (which means that for first-time users, the 'Default' virtual space is the one selected).
  • The field names should be more readable, as they already are in Creator (image attached).
  • Field values should be pre-filled, as they already are in Creator (image attached).
  • If a first boot is detected (only the 'Default' virtual space exists, with no tracker set up), show a dialog window offering to set these up. Something like 'No tracking apps/devices set up. Do you want to configure these now? Yes/No'. Clicking Yes opens the Virtual Space window.
  • Closing and/or saving settings in the Virtual Space window should also refresh the tracker list in the Tracking window.

Screenshot (570)

Alternative solution

No response

Additional Context

No response

Weird background rendering problems when running on Wayland

When running inochi-session on Wayland, if you start moving the window, the model, or other opaque things around, they appear to leave a visible trace that doesn't show up on screenshots, but also doesn't go away until you refresh the window in some way (like minimize + maximize).

Here is a picture that I took on my phone of the issue:
image

This doesn't happen on X11, or when running on Wayland with the X11 SDL video driver (setting the environment variables SDL_VIDEODRIVER=x11 SDL_VIDEO_X11_FORCE_EGL=1).

It also stopped happening when I commented out the line that sets SDL_HINT_VIDEO_EGL_ALLOW_TRANSPARENCY to 1 in my debug build.
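The X11 fallback mentioned above can be applied at launch time as a temporary workaround (a sketch, not a fix: the binary name `inochi-session` in the current directory is an assumption; adjust the path for your install):

```shell
# Workaround (not a fix): force SDL to run via XWayland instead of native Wayland.
# SDL_VIDEO_X11_FORCE_EGL=1 keeps the EGL path so window transparency still works.
SDL_VIDEODRIVER=x11 SDL_VIDEO_X11_FORCE_EGL=1 ./inochi-session
```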

This was tested with the binary provided by the 0.0.1 AppImage using a compiled cimage.so, in a Fedora 36 environment (SDL2-2.0.22).

[BUG] Red Artifacting On Textures When Post-Processing is On

Validations

  • I have checked for similar bug reports and could not find any.
  • I have tested and confirmed that this is an issue in an official branded build.

Describe the bug

When post-processing is toggled on (in Session or Creator), the model is covered in bright red artifacts. The artifacting is random and seems to change whenever the application is restarted; sometimes it's worse, sometimes better.
image
image
The artifacts persist in streamed views of the window on Discord and OBS (using Window Capture).

Reproduction

  1. Load a puppet in Inochi Session or Creator.
  2. Toggle post-processing to "on".
  3. Observe varying levels of weird glitch effect.

System Architecture

x86_64

Operating System

Linux

Version

0.8.3

Logs

System Information System:
Kernel: 6.5.0-15-generic x86_64 bits: 64 compiler: N/A Desktop: Cinnamon 5.8.4 tk: GTK 3.24.33 wm: muffin dm: LightDM Distro: Linux Mint 21.2 Victoria base: Ubuntu 22.04 jammy
Machine:
Type: Desktop Mobo: ASRock model: B550M Pro4 serial: UEFI: American Megatrends LLC. v: P3.20 date: 09/27/2023
Battery:
Device-1: hidpp_battery_0 model: Logitech Wireless Keyboard serial: charge: 55% (should be ignored) status: Discharging
CPU:
Info: 6-core model: AMD Ryzen 5 4500 bits: 64 type: MT MCP arch: Zen 2 rev: 1 cache: L1: 384 KiB L2: 3 MiB L3: 8 MiB
Speed (MHz): avg: 1091 high: 3242 min/max: 400/4208 cores: 1: 3242 2: 400 3: 400 4: 3059 5: 3192 6: 400 7: 400 8: 400 9: 400 10: 400 11: 400 12: 400 bogomips: 86244
Flags: avx avx2 ht lm nx pae sse sse2 sse3 sse4_1 sse4_2 sse4a ssse3 svm
Graphics:
Device-1: AMD Ellesmere [Radeon RX 470/480/570/570X/580/580X/590] vendor: XFX Pine driver: amdgpu v: kernel pcie: speed: 2.5 GT/s lanes: 16 ports: active: DP-1,DP-2,HDMI-A-1 empty: DP-3,DVI-D-1 bus-ID: 01:00.0 chip-ID: 1002:67df
Device-2: Jieli USB PHY 2.0 type: USB driver: snd-usb-audio,uvcvideo bus-ID: 1-4.4:10 chip-ID: 1224:2a25
Display: x11 server: X.Org v: 1.21.1.4 driver: X: loaded: amdgpu,ati unloaded: fbdev,modesetting,vesa gpu: amdgpu display-ID: :0 screens: 1
Screen-1: 0 s-res: 5760x1080 s-dpi: 96
Monitor-1: DisplayPort-0 mapped: DP-1 pos: primary,center model: Acer K242HYL res: 1920x1080 dpi: 93 diag: 604mm (23.8")
Monitor-2: DisplayPort-1 mapped: DP-2 pos: primary,left model: ASUS VP228 res: 1920x1080 dpi: 102 diag: 546mm (21.5")
Monitor-3: HDMI-A-0 mapped: HDMI-A-1 pos: right model: Sharp HDMI res: 1920x1080 dpi: 55 diag: 1016mm (40")
OpenGL:
renderer: AMD Radeon RX 580 Series (radeonsi polaris10 LLVM 15.0.7 DRM 3.54 6.5.0-15-generic) v: 4.6 Mesa 23.3.1 - kisak-mesa PPA direct render: Yes
Audio:
Device-1: AMD Ellesmere HDMI Audio [Radeon RX 470/480 / 570/580/590] vendor: XFX Pine driver: snd_hda_intel v: kernel pcie: speed: 8 GT/s lanes: 16 bus-ID: 01:00.1 chip-ID: 1002:aaf0
Device-2: AMD Renoir Radeon High Definition Audio driver: snd_hda_intel v: kernel pcie: speed: 8 GT/s lanes: 16 bus-ID: 06:00.1 chip-ID: 1002:1637
Device-3: AMD Family 17h HD Audio vendor: ASRock driver: snd_hda_intel v: kernel pcie: speed: 8 GT/s lanes: 16 bus-ID: 06:00.6 chip-ID: 1022:15e3
Device-4: C-Media JLAB TALK GO MICROPHONE type: USB driver: hid-generic,snd-usb-audio,usbhid bus-ID: 1-2:2 chip-ID: 0d8c:1008
Device-5: Jieli USB PHY 2.0 type: USB driver: snd-usb-audio,uvcvideo bus-ID: 1-4.4:10 chip-ID: 1224:2a25
Sound Server-1: ALSA v: k6.5.0-15-generic running: yes
Sound Server-2: PulseAudio v: 15.99.1 running: yes
Sound Server-3: PipeWire v: 0.3.48 running: yes
Network:
Device-1: Realtek RTL8111/8168/8411 PCI Express Gigabit Ethernet vendor: ASRock driver: r8169 v: kernel pcie: speed: 2.5 GT/s lanes: 1 port: e000 bus-ID: 04:00.0 chip-ID: 10ec:8168 IF: enp4s0 state: up speed: 1000 Mbps duplex: full mac:
Bluetooth:
Device-1: Broadcom BCM20702A0 Bluetooth 4.0 type: USB driver: btusb v: 0.8 bus-ID: 1-7.2:7 chip-ID: 0a5c:21e8
Report: hciconfig ID: hci0 rfk-id: 0 state: up address: bt-v: 2.1 lmp-v: 4.0 sub-v: 220e
Drives:
Local Storage: total: 3.76 TiB used: 1.25 TiB (33.2%)
ID-1: /dev/nvme0n1 vendor: Transcend model: TS128GMTE110S size: 119.24 GiB speed: 31.6 Gb/s lanes: 4 serial: temp: 43.9 C
ID-2: /dev/sda vendor: Western Digital model: WD40EZAZ-00SF3B0 size: 3.64 TiB speed: 6.0 Gb/s serial:
Partition:
ID-1: / size: 116.34 GiB used: 38.1 GiB (32.7%) fs: ext4 dev: /dev/nvme0n1p2
ID-2: /boot/efi size: 486 MiB used: 6.1 MiB (1.2%) fs: vfat dev: /dev/nvme0n1p1
ID-3: /home size: 3.58 TiB used: 1.21 TiB (33.7%) fs: ext4 dev: /dev/sda1
Swap:
ID-1: swap-1 type: file size: 2 GiB used: 0 KiB (0.0%) priority: -2 file: /swapfile
Sensors:
System Temperatures: cpu: N/A mobo: N/A gpu: amdgpu temp: 50.0 C
Fan Speeds (RPM): N/A gpu: amdgpu fan: 748
Repos:
Packages: 3104 apt: 3072 flatpak: 20 snap: 12
No active apt repos in: /etc/apt/sources.list
Active apt repos in: /etc/apt/sources.list.d/1password.list
1: deb [arch=amd64 signed-by=/usr/share/keyrings/1password-archive-keyring.gpg] https: //downloads.1password.com/linux/debian/amd64 stable main
No active apt repos in: /etc/apt/sources.list.d/amdgpu-proprietary.list
Active apt repos in: /etc/apt/sources.list.d/amdgpu.list
1: deb https: //repo.radeon.com/amdgpu/23.20/amdgpu/ubuntu jammy main
Active apt repos in: /etc/apt/sources.list.d/lunarg-vulkan-1.3.268-jammy.list
1: deb https: //packages.lunarg.com/vulkan/1.3.268 jammy main
2: deb-src https: //packages.lunarg.com/vulkan/1.3.268 jammy main
Active apt repos in: /etc/apt/sources.list.d/official-package-repositories.list
1: deb http: //packages.linuxmint.com victoria main upstream import backport
2: deb http: //mirrors.accretive-networks.net/ubuntu jammy main restricted universe multiverse
3: deb http: //mirrors.accretive-networks.net/ubuntu jammy-updates main restricted universe multiverse
4: deb http: //mirrors.accretive-networks.net/ubuntu jammy-backports main restricted universe multiverse
5: deb http: //security.ubuntu.com/ubuntu/ jammy-security main restricted universe multiverse
Active apt repos in: /etc/apt/sources.list.d/qbittorrent-team-qbittorrent-stable-jammy.list
1: deb [arch=amd64 signed-by=/etc/apt/keyrings/qbittorrent-team-qbittorrent-stable-jammy.gpg] https: //ppa.launchpadcontent.net/qbittorrent-team/qbittorrent-stable/ubuntu jammy main
Active apt repos in: /etc/apt/sources.list.d/rocm.list
1: deb [arch=amd64] https: //repo.radeon.com/amdgpu/23.20/rocm/apt/5.7 jammy main
Active apt repos in: /etc/apt/sources.list.d/spotify.list
1: deb http: //repository.spotify.com stable non-free
Active apt repos in: /etc/apt/sources.list.d/vivaldi.list
1: deb [arch=amd64] https: //repo.vivaldi.com/stable/deb/ stable main
Active apt repos in: /etc/apt/sources.list.d/vscode.list
1: deb [arch=amd64,arm64,armhf] http: //packages.microsoft.com/repos/code stable main
Active apt repos in: /etc/apt/sources.list.d/winehq-jammy.sources
1: deb [arch=amd64 i386] https: //dl.winehq.org/wine-builds/ubuntu jammy main
Info:
Processes: 373 Uptime: 24m Memory: 30.25 GiB used: 4.27 GiB (14.1%) Init: systemd v: 249 runlevel: 5 Compilers: gcc: 11.4.0 alt: 11/12 Client: Cinnamon v: 5.8.4 inxi: 3.3.13

Additional Context

I sent a model to one of my friends, whose laptop is apparently weaker graphically than my desktop, and who also runs on Ubuntu (but not Mint), and she didn't experience the same issue.

[BUG] BlendShapes do not track in ratio bindings under specific circumstances.

Validations

  • I have checked for similar bug reports and could not find any.
  • I have tested and confirmed that this is an issue in an official branded build.

Describe the bug

If a virtual space has two sources, the BlendShapes from the lower source are not read by ratio bindings if the upper source is not currently active.
Expression bindings using the BLEND(...) function work as normal, and the BlendShape tab also shows the values as expected.

In my specific scenario my virtual space has:

  • OpenSeeFace
  • VMC-MC (a Minecraft mod that sends VMC data)

I have tested this with the sources in both orders and the bug is consistent for BlendShapes (bones simply did not transmit at all when OSF was the lower source). If the source that is higher in the virtual space is not transmitting data, the lower source's BlendShapes exhibit the bug.

Reproduction

  1. Create a new virtual space.
  2. Click the + to add a new source.
  3. Set up the source.
  4. Add and set up another source.
  5. Open the program of the second source.
  6. Set a parameter to use ratio binding for any BlendShape of the second source.
    a. The ratio binding will not track the value.
  7. Open the program of the first source.
    a. The ratio binding will now work.
  8. Close the program of the first source.
    a. The ratio binding will stop working.

System Architecture

x86_64

Operating System

Windows

Version

0.8.3

Logs

No response

Additional Context

No response

[Feature Request] Add a way to quickly replace models.

Validations

  • I have checked for similar feature requests and could not find any.
  • I have made sure this is not an already-existing feature.

Description

If we have multiple models (different costumes for one character, for example), it would be very helpful if Session could quickly replace the model on stage with another while retaining the same position and scale.

Suggested solution

Add a way to quickly replace a model with another file while maintaining position and scale. Maybe via a context menu: right-click the model to show the menu, click 'Replace', and then choose the replacement model in a file browser.

Alternative solution

No response

Additional Context

No response

[BUG] Virtual Space deletion Crash.

Validations

  • I have checked for similar bug reports and could not find any.
  • I have tested and confirmed that this is an issue in an official branded build.

Describe the bug

Session crashes when deleting a Virtual Space whose IP address cannot be resolved.

Reproduction

  1. Create a Virtual Space.
  2. Name it ('main' in this case).
  3. Select VTubeStudio.
  4. Set the phone IP to 0.0.0.0:8001.
  5. Save changes.
  6. Save (bottom button).
  7. Reopen the menu.
  8. Try to delete it (the application will crash).

System Architecture

x86_64

Operating System

Windows

Version

v0.5.4

Logs

[INFO] Inochi Session v0.5.4, args=[]
[INFO] Lua support initialized. (Statically linked for now)
[INFO] Scanning plugins at C:\Users\mrmgp\AppData\Roaming\.inochi-session\plugins...
[INFO] Found zone main
[INFO] Found zone testing
[INFO] Frame-sending started successfully!

std.socket.AddressException@std\socket.d(1534): Unable to resolve host '0.0.0.0:8001': Host desconocido. ('Host desconocido' is Spanish for 'unknown host'.)

0x00007FF70D50910D in fprintf
0x00007FF70D508E76 in fprintf
0x00007FF70D4FCCFF in fprintf
0x00007FF70D4E68CA in fprintf
0x00007FF70D4C7668 in fprintf
0x00007FF70D43794E
0x00007FF70D4EE008 in fprintf
0x00007FF8018D9363 in recalloc
0x00007FF802B326BD in BaseThreadInitThunk
0x00007FF8040CDFB8 in RtlUserThreadStart
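Judging from the exception message, the whole 'host:port' string appears to be handed to address resolution, and "0.0.0.0:8001" is not a valid hostname. A minimal sketch of the likely fix, splitting the string before resolving (hypothetical illustration in shell, not the actual D code in Session):

```shell
# Split "host:port" before resolving; passing the combined string to a
# resolver fails because "0.0.0.0:8001" is not a valid hostname.
addr="0.0.0.0:8001"
host="${addr%:*}"    # everything before the last ':' -> "0.0.0.0"
port="${addr##*:}"   # everything after the last ':'  -> "8001"
echo "$host $port"   # -> 0.0.0.0 8001
```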

Additional Context

No response

[Feature Request] Add some way to precisely set model scale and coordinates

Validations

  • I have checked for similar feature requests and could not find any.
  • I have made sure this is not an already-existing feature.

Description

Adding a way to set model coordinates and scale exactly by typing in numbers would greatly help with consistency across multiple sessions in Session.

Suggested solution

Add a way to input model scale and coordinates. These input fields can be added to a new panel, for example.

Alternative solution

No response

Additional Context

No response
