pauzii / phasebetweener

Creating animation sequences between sparse key frames using motion phase features.

Python 0.95% C# 22.89% ShaderLab 1.55% HLSL 3.10% CMake 0.01% C++ 70.54% C 0.97%
animation artificial-intelligence computer-games data-driven-model motion-capture

phasebetweener's Introduction

Motion In-Betweening with Phase Manifolds
In 22nd ACM SIGGRAPH/EUROGRAPHICS Symposium on Computer Animation (SCA 2023)
Paul Starke, Sebastian Starke, Taku Komura, Frank Steinicke
Proc. ACM Comput. Graph. Interact. Tech. 6, 3, Article 37 (August 2023), 17 pages. https://doi.org/10.1145/3606921

Abstract

This work introduces a novel data-driven motion in-betweening system to reach target poses of characters by making use of phase variables learned by a Periodic Autoencoder. The approach utilizes a mixture-of-experts neural network model, in which the phases cluster movements in both space and time with different expert weights. Each generated set of weights then produces a sequence of poses in an autoregressive manner between the current and target state of the character. In addition, a learned bi-directional control scheme is implemented to satisfy poses that are manually modified by the animators, or where certain end effectors serve as constraints to be reached by the animation. Using phases for motion in-betweening tasks sharpens the interpolated movements and stabilizes the learning process. Moreover, more challenging movements beyond locomotion behaviors can be synthesized. Additionally, style control is enabled between given target keyframes. The framework can compete with state-of-the-art methods for motion in-betweening in terms of motion quality and generalization, especially in the presence of long transition durations. This framework contributes to faster prototyping workflows for creating animated character sequences, which is of enormous interest to the game and film industry.
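
As a rough illustration of the expert-blending idea described above (a toy sketch only, with made-up sizes; a variant of the repository's actual Model class appears in the issues below), gating weights computed from the phase features blend several expert weight matrices into a per-sample set of layer weights:

# Toy mixture-of-experts blending sketch (illustrative only, not the repository's code).
import torch
import torch.nn.functional as F

experts, in_dim, out_dim, batch = 8, 16, 16, 4
expert_W = torch.randn(experts, out_dim, in_dim)   # one weight matrix per expert
gating_logits = torch.randn(batch, experts)        # in the paper, derived from phase features
g = F.softmax(gating_logits, dim=1)                # blending weights per sample

W = torch.einsum('be,eoi->boi', g, expert_W)       # blended weights, shape (batch, out_dim, in_dim)
x = torch.randn(batch, in_dim, 1)
y = W.matmul(x)                                    # per-sample blended linear layer
print(y.shape)                                     # torch.Size([4, 16, 1])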

- Video - Paper - Code, Demo & Tool - ReadMe

Copyright Information

This project is only for research or education purposes and is not freely available for commercial use or redistribution. The motion capture data and 3D character model from ubisoft-laforge are available only under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Public License.

phasebetweener's People

Contributors

paulstarke

phasebetweener's Issues

Error: X in ONNXNetwork is always null

Hello,

I am trying the project in Unity 2020.3.18f1 with the Barracuda 2.0.0 package, the same versions you recommended.
I also placed the MotionCapture folder inside the Assets/Demo/Authoring folder.

When running AuthoringDemo.unity, I get this error hundreds of times:

NullReferenceException: Object reference not set to an instance of an object
ONNX.ONNXNetwork+ONNXInference.GetFeedSize () (at Assets/Scripts/DeepLearning/ONNX/ONNXNetwork.cs:42)
ONNX.NeuralNetwork.Feed (System.Single value) (at Assets/Scripts/DeepLearning/ONNX/ONNXNeuralNetwork.cs:78)
ONNX.NeuralNetwork.FeedXZ (UnityEngine.Vector3 vector) (at Assets/Scripts/DeepLearning/ONNX/ONNXNeuralNetwork.cs:120)
AnimationAuthoring.AuthoringInBetweeningController.Feed () (at Assets/Demo/Authoring/Runtime/AuthoringInBetweeningController.cs:322)
NeuralONNXAnimation.Update () (at Assets/Scripts/Animation/NeuralONNXAnimation.cs:43)

Any idea what is wrong?

"training=False" causes error in "Utility.py > SaveONNX"

Hi @pauzii

Right now it's:

def SaveONNX(path, model, input_size, input_names, output_names):
    FromDevice(model)
    torch.onnx.export(
        model,                            # model being run
        torch.randn(1, input_size),          # model input (or a tuple for multiple inputs)
        path,            # where to save the model (can be a file or file-like object)
        training=False,
        export_params=True,                 # store the trained parameter weights inside the model file
        opset_version=9,                    # the ONNX version to export the model to
        do_constant_folding=False,          # whether to execute constant folding for optimization
        input_names = input_names,                # the model's input names
        output_names = output_names                # the model's output names
    )
    ToDevice(model)

which causes an error:

Traceback (most recent call last):
 File "C:\Users\Wojtek\Desktop\prgr\PhaseBetweener\DeepLearningONNX\Models\GNN\InBetweeningNetwork.py", line 104, in <module>
   utility.SaveONNX(
 File "C:\Users\Wojtek\Desktop\prgr\PhaseBetweener\DeepLearningONNX\Models\GNN\../../../DeepLearningONNX\Library\Utility.py", line 157, in SaveONNX
   torch.onnx.export(
 File "C:\Users\Wojtek\miniconda3\envs\PhaseBetweener\lib\site-packages\torch\onnx\utils.py", line 506, in export
   _export(
 File "C:\Users\Wojtek\miniconda3\envs\PhaseBetweener\lib\site-packages\torch\onnx\utils.py", line 1525, in _export
   with exporter_context(model, training, verbose):
 File "C:\Users\Wojtek\miniconda3\envs\PhaseBetweener\lib\contextlib.py", line 135, in __enter__
   return next(self.gen)
 File "C:\Users\Wojtek\miniconda3\envs\PhaseBetweener\lib\site-packages\torch\onnx\utils.py", line 178, in exporter_context
   with select_model_mode_for_export(
 File "C:\Users\Wojtek\miniconda3\envs\PhaseBetweener\lib\contextlib.py", line 135, in __enter__
   return next(self.gen)
 File "C:\Users\Wojtek\miniconda3\envs\PhaseBetweener\lib\site-packages\torch\onnx\utils.py", line 90, in select_model_mode_for_export
   raise TypeError(
TypeError: 'mode' should be a torch.onnx.TrainingMode enum, but got '<class 'bool'>'.

The fix is to delete the training=False line:

def SaveONNX(path, model, input_size, input_names, output_names):
    FromDevice(model)
    torch.onnx.export(
        model,                            # model being run
        torch.randn(1, input_size),          # model input (or a tuple for multiple inputs)
        path,            # where to save the model (can be a file or file-like object)
        export_params=True,                 # store the trained parameter weights inside the model file
        opset_version=9,                    # the ONNX version to export the model to
        do_constant_folding=False,          # whether to execute constant folding for optimization
        input_names = input_names,                # the model's input names
        output_names = output_names                # the model's output names
    )
    ToDevice(model)
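
An alternative to deleting the argument (a minimal sketch, assuming a recent PyTorch where torch.onnx.TrainingMode exists) is to pass the enum the exporter expects instead of a boolean, as a drop-in replacement for the export call inside SaveONNX:

    torch.onnx.export(
        model,                               # model being run
        torch.randn(1, input_size),          # model input
        path,                                # where to save the model
        training=torch.onnx.TrainingMode.EVAL,  # enum instead of the old boolean
        export_params=True,
        opset_version=9,
        do_constant_folding=False,
        input_names=input_names,
        output_names=output_names
    )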

Training IO bottleneck

Hi @pauzii,
just letting you know that your code is a bit IO-bottlenecked because of the loading of Input.txt and Output.txt.
Here's a version that loads them fully into VRAM (though, yeah, you need about 10 GB of VRAM :D) - it's much faster.
(There are some other minor changes, like changed paths, which can be ignored.)

import sys
sys.path.append("../../../DeepLearningONNX")

import Library.Utility as utility
import Library.AdamWR.adamw as adamw
import Library.AdamWR.cyclic_scheduler as cyclic_scheduler
import Models.GNN.InBetweeningNetwork as this

import numpy as np
import torch
from torch.nn.parameter import Parameter
import torch.nn.functional as F

from pathlib import Path

if __name__ == '__main__':
    load = Path("../..")
    save = "./Training"
    
    
    InputFile = load / "Input.txt"
    OutputFile = load / "Output.txt"

    Xnorm = utility.ReadNorm(load / "InputNorm.txt")
    Ynorm = utility.ReadNorm(load / "OutputNorm.txt")

    utility.SetSeed(23456)

    epochs = 150
    batch_size = 32
    dropout = 0.3
    gating_hidden = 130
    main_hidden = 654
    experts = 8

    learning_rate = 1e-4
    weight_decay = 1e-4
    restart_period = 10
    restart_mult = 2

    print(torch.__version__)
    print(torch.cuda.is_available())
    print(torch.cuda.device_count())
    print(torch.cuda.current_device())
    print(torch.cuda.get_device_name(0))

    print("Started creating data pointers...")
    # X = utility.ToDevice(torch.load("input.pt"))
    # Y = utility.ToDevice(torch.load("output.pt"))
    
    X = utility.ToDevice(torch.from_numpy(np.loadtxt("../../Input.txt"))).float()
    Y = utility.ToDevice(torch.from_numpy(np.loadtxt("../../Output.txt"))).float()
    # pointersX = utility.CollectPointers(str(InputFile))
    # pointersY = utility.CollectPointers(str(OutputFile))
    print("Finished creating data pointers.")

    # sample_count = pointersX.shape[0]
    sample_count = X.shape[0] # ???

    input_dim = Xnorm.shape[1]
    output_dim = Ynorm.shape[1]
    
    #SpectralModel
    gating_indices = torch.tensor([(main_hidden + i) for i in range(gating_hidden)]) #index where phase starts
    main_indices = torch.tensor([(i) for i in range(main_hidden)])

    network = utility.ToDevice(this.Model(
        gating_indices=gating_indices, 
        gating_input=len(gating_indices), 
        gating_hidden=gating_hidden, 
        gating_output=experts, 
        main_indices=main_indices, 
        main_input=len(main_indices), 
        main_hidden=main_hidden, 
        main_output=output_dim,
        dropout=dropout,
        input_norm=Xnorm,
        output_norm=Ynorm
    ))

    optimizer = adamw.AdamW(network.parameters(), lr=learning_rate, weight_decay=weight_decay)
    scheduler = cyclic_scheduler.CyclicLRWithRestarts(optimizer=optimizer, batch_size=batch_size, epoch_size=sample_count, restart_period=restart_period, t_mult=restart_mult, policy="cosine", verbose=True)
    loss_function = torch.nn.MSELoss()

    error_train = np.zeros(epochs)

    I = np.arange(sample_count)
    for epoch in range(epochs):
        scheduler.step()
        np.random.shuffle(I)
        error = 0.0
        for i in range(0, sample_count, batch_size):
            # print('Progress', round(100 * i / sample_count, 2), "%", end="\r")
            train_indices = I[i:i+batch_size]

            # xBatch = utility.ToDevice(torch.from_numpy(utility.ReadChunk(str(InputFile), pointersX[train_indices])))
            # yBatch = utility.ToDevice(torch.from_numpy(utility.ReadChunk(str(OutputFile), pointersY[train_indices])))

            xBatch = X[train_indices]
            yBatch = Y[train_indices]


            # xBatch = utility.ToDevice(torch.from_numpy(InputFile[train_indices]))
            # yBatch = utility.ToDevice(torch.from_numpy(OutputFile[train_indices]))

            yPred, gPred, w0, w1, w2 = network(xBatch)

            loss = loss_function(utility.Normalize(yPred, network.Ynorm), utility.Normalize(yBatch, network.Ynorm))
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            scheduler.batch_step()

            error += loss.item()
    
        utility.SaveONNX(
            path=save+'/'+str(epoch+1)+'.onnx',
            model=network,
            input_size=input_dim,
            input_names=['X'],
            output_names=['Y', 'G', 'W0', 'W1','W2']
        )
        print('Epoch', epoch+1, error/(sample_count/batch_size))
        error_train[epoch] = error/(sample_count/batch_size)
        error_train.tofile(save+"/error_train.bin")

class Model(torch.nn.Module):
    def __init__(self, gating_indices, gating_input, gating_hidden, gating_output, main_indices, main_input, main_hidden, main_output, dropout, input_norm, output_norm):
        super(Model, self).__init__()

        if len(gating_indices) + len(main_indices) != len(input_norm[0]):
            print("Warning: Number of gating features (" + str(len(gating_indices)) + ") and main features (" + str(len(main_indices)) + ") are not the same as input features (" + str(len(input_norm[0])) + ").")

        self.gating_indices = gating_indices
        self.main_indices = main_indices

        self.GW1 = self.weights([gating_hidden, gating_input])
        self.Gb1 = self.bias([gating_hidden, 1])

        self.GW2 = self.weights([gating_hidden, gating_hidden])
        self.Gb2 = self.bias([gating_hidden, 1])

        self.GW3 = self.weights([gating_output, gating_hidden])
        self.Gb3 = self.bias([gating_output, 1])

        self.EW1 = self.weights([gating_output, main_hidden, main_input])
        self.Eb1 = self.bias([gating_output, main_hidden, 1])

        self.EW2 = self.weights([gating_output, main_hidden, main_hidden])
        self.Eb2 = self.bias([gating_output, main_hidden, 1])

        self.EW3 = self.weights([gating_output, main_output, main_hidden])
        self.Eb3 = self.bias([gating_output, main_output, 1])

        self.dropout = dropout
        self.Xnorm = Parameter(torch.from_numpy(input_norm), requires_grad=False)
        self.Ynorm = Parameter(torch.from_numpy(output_norm), requires_grad=False)

    def weights(self, shape):
        alpha_bound = np.sqrt(6.0 / np.prod(shape[-2:]))
        alpha = np.asarray(np.random.uniform(low=-alpha_bound, high=alpha_bound, size=shape), dtype=np.float32)
        return Parameter(torch.from_numpy(alpha), requires_grad=True)

    def bias(self, shape):
        return Parameter(torch.zeros(shape, dtype=torch.float), requires_grad=True)

    def blend(self, g, m):
        a = m.unsqueeze(1)
        a = a.repeat(1, g.shape[1], 1, 1)
        w = g.reshape(g.shape[0], g.shape[1], 1, 1)
        r = w * a
        r = torch.sum(r, dim=0)
        return r

    def forward(self, x):
        x = utility.Normalize(x, self.Xnorm)

        #Gating
        g = x[:, self.gating_indices]
        g = g.transpose(0,1)

        g = F.dropout(g, self.dropout, training=self.training)
        g = F.elu(self.GW1.matmul(g) + self.Gb1)

        g = F.dropout(g, self.dropout, training=self.training)
        g = F.elu(self.GW2.matmul(g) + self.Gb2)

        g = F.dropout(g, self.dropout, training=self.training)
        g = F.softmax(self.GW3.matmul(g) + self.Gb3, dim=0)

        #Main
        m = x[:, self.main_indices]
        m = m.reshape(m.shape[0], m.shape[1], 1)

        m = F.dropout(m, self.dropout, training=self.training)
        w0 = self.blend(g, self.EW1)
        m = F.elu(w0.matmul(m) + self.blend(g, self.Eb1))

        m = F.dropout(m, self.dropout, training=self.training)
        w1 = self.blend(g, self.EW2)
        m = F.elu(w1.matmul(m) + self.blend(g, self.Eb2))
        
        
        m = F.dropout(m, self.dropout, training=self.training)
        w2 = self.blend(g, self.EW3)
        m = w2.matmul(m) + self.blend(g, self.Eb3)
        
        m = m.reshape(m.shape[0], m.shape[1])

        return utility.Renormalize(m, self.Ynorm), g, w0, w1, w2
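
If 10 GB of VRAM is not available, a middle ground (a rough sketch; the file names input.pt / output.pt are placeholders, not part of the repository) is to parse the text files once and cache them as binary tensors on disk, so later runs skip the slow np.loadtxt step and batches can be moved to the GPU on demand:

import os

if not (os.path.exists("input.pt") and os.path.exists("output.pt")):
    # One-time conversion: parsing the text files is the expensive part.
    torch.save(torch.from_numpy(np.loadtxt("../../Input.txt")).float(), "input.pt")
    torch.save(torch.from_numpy(np.loadtxt("../../Output.txt")).float(), "output.pt")

X = torch.load("input.pt")   # keep on CPU; move batches to the GPU inside the loop,
Y = torch.load("output.pt")  # e.g. xBatch = utility.ToDevice(X[train_indices])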

What are different options/settings doing?

Hi @pauzii !

I was wondering what the following options do / how they work:

From Authoring In Betweening Controller:

  1. Lerp Duration Factor
  2. Trajectory Control
  3. Trajectory Correction
  4. Lerp Input Pose
  5. Blend To Target Space
  6. Postprocessing - just some IKs on feet? Without it there's foot sliding. Though I've also noticed that when subsequent ControlPoints are very close to each other (or even at the same position), then this option causes the character to not move its feet at all, but rather "levitate" its torso towards the target pose :D

From Authoring:
7. Style Info Options - how do these work? How are they determined? Can I use my own style? These are just strings, not pointing to any animation file. In the default StyleControl GameObject I see that "Crawl" works, but I don't see any result for "Move" or "Aiming".
8. Speed - it seems like it does nothing.
image

How to train PAE and extract Phases from it?

I've watched the video on how to train it (https://www.youtube.com/watch?v=3ASGrxNDd0k) and downloaded the Unity project (https://github.com/sebastianstarke/AI4Animation/tree/master/AI4Animation/SIGGRAPH_2022/Unity), but I have the following doubts/issues:

  1. Is the BVH import functionality the same as in the PhaseBetweener?
  2. How to create a scene with motion editor for a Biped model (ex. LaFAN model)?
  3. There is no Biped pipeline in this project:
    image
  4. Provided I get the following output data:
    image
    what should I do with them? Do I need to follow the video step by step until the end in order to train the GNN? But is this GNN the same as in the PhaseBetweener?
    Or can I somehow copy them to PhaseBetweener Unity project as Phase parameters? But then how do I convert them to the following format required by PhaseBetweener:
    image
    ?

Error on importing trained .onnx into Unity

Hey @pauzii,

I went through the steps described here: I trained the NN using InBetweeningNetwork.py for 16 epochs and got the following .onnx results:

image

But upon drag-and-dropping the .onnx file into Unity, I get the following errors:

image

ArgumentOutOfRangeException: Index was out of range. Must be non-negative and less than the size of the collection.
Parameter name: index
System.ThrowHelper.ThrowArgumentOutOfRangeException (System.ExceptionArgument argument, System.ExceptionResource resource) (at <695d1cc93cca45069c528c15c9fdd749>:0)
System.ThrowHelper.ThrowArgumentOutOfRangeException () (at <695d1cc93cca45069c528c15c9fdd749>:0)
Unity.Barracuda.Compiler.IRShapeInferenceHelper.ShapeInference.InferOutputShapeNCHW (Unity.Barracuda.Layer layer, System.Int32[] inputRanks, Unity.Barracuda.TensorShape[] inputShapes) (at Library/PackageCache/[email protected]/Barracuda/Runtime/Core/Compiler/ShapeInference/IRShapeInferenceHelper.cs:752)
Unity.Barracuda.Compiler.IRShapeInferenceHelper.ShapeInference.UpdateKnownTensorShapesNCHW (Unity.Barracuda.Model model, System.Collections.Generic.IDictionary`2[TKey,TValue] ranksByName, System.Collections.Generic.IDictionary`2[System.String,System.Nullable`1[Unity.Barracuda.TensorShape]]& shapesByName) (at Library/PackageCache/[email protected]/Barracuda/Runtime/Core/Compiler/ShapeInference/IRShapeInferenceHelper.cs:781)
Unity.Barracuda.Compiler.Passes.IRShapeInferenceAndConstantFusing.FuseShapesIntoConstants (Unity.Barracuda.Model& model, System.Collections.Generic.IDictionary`2[TKey,TValue] shapesByName, System.Collections.Generic.IDictionary`2[TKey,TValue] ranksByName) (at Library/PackageCache/[email protected]/Barracuda/Runtime/Core/Compiler/Passes/IRShapeInferenceAndConstantFusing.cs:66)
Unity.Barracuda.Compiler.Passes.IRShapeInferenceAndConstantFusing.Run (Unity.Barracuda.Model& model) (at Library/PackageCache/[email protected]/Barracuda/Runtime/Core/Compiler/Passes/IRShapeInferenceAndConstantFusing.cs:23)
Unity.Barracuda.Compiler.Passes.IntermediateToRunnableNHWCPass.Run (Unity.Barracuda.Model& model) (at Library/PackageCache/[email protected]/Barracuda/Runtime/Core/Compiler/Passes/IntermediateToRunnableNHWCPass.cs:12)
Unity.Barracuda.ONNX.ONNXModelConverter.Convert (Google.Protobuf.CodedInputStream inputStream) (at Library/PackageCache/[email protected]/Barracuda/Runtime/ONNX/ONNXModelConverter.cs:170)
Unity.Barracuda.ONNX.ONNXModelConverter.Convert (System.String filePath) (at Library/PackageCache/[email protected]/Barracuda/Runtime/ONNX/ONNXModelConverter.cs:83)
Unity.Barracuda.ONNXModelImporter.OnImportAsset (UnityEditor.AssetImporters.AssetImportContext ctx) (at Library/PackageCache/[email protected]/Barracuda/Editor/ONNXModelImporter.cs:58)
UnityEditor.AssetImporters.ScriptedImporter.GenerateAssetData (UnityEditor.AssetImporters.AssetImportContext ctx) (at <5ad584e208e14caaa9e6b2e6027e9204>:0)
UnityEditorInternal.InternalEditorUtility:ProjectWindowDrag(HierarchyProperty, Boolean)
UnityEngine.GUIUtility:ProcessEvent(Int32, IntPtr, Boolean&)
Asset import failed, "Assets/15.onnx" > ArgumentOutOfRangeException: Index was out of range. Must be non-negative and less than the size of the collection.
Parameter name: index
(same stack trace as above)

You can also see that the .onnx didn't get converted to a Unity asset:
image
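
One way to narrow this down before blaming the Barracuda importer (just a suggestion, not from the thread) is to validate the exported file outside Unity with the onnx package; the path "Training/15.onnx" below is an assumption based on the epoch numbering above:

import onnx

model = onnx.load("Training/15.onnx")
onnx.checker.check_model(model)              # raises if the graph is malformed
print([i.name for i in model.graph.input])   # expected: ['X']
print([o.name for o in model.graph.output])  # expected: ['Y', 'G', 'W0', 'W1', 'W2']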

Scalability + Obstacle avoidance ?

I have been playing with it; performance-wise it is very interesting (unlike what someone commented on YouTube).

I have two questions, if you can answer them:

1. To what extent do you think this is scalable when adding more motion to fill the gaps? We might need to add more parameters to the model, but what do you think about performance?

2. Is there any way I can hack it to add obstacle avoidance, e.g. there is a wall between the control point and the target and the character has to pass through the door? Or do you think this has to be managed by a high-level policy? That could handle navigation, but sometimes you have 3D obstacles that need particular motion. Can these obstacle meshes be baked into the latent space somehow?

Where is the "Export" button for generating training data?

Thanks for your great work. I'm trying to reproduce the training data.
I followed the readme:
4. Click the "Export" button, which will generate the training data and save it.
I can't find the "Export" button.
Where is it?
Thank you very much.
image
image

Question about how the input motion phases of future frames are constructed

Hi,

Thanks for your work!

I am confused by the description in the paper here.
image
I get that during training the phase feature is pre-extracted for each frame i from i - 60 to i + 60, but during inference, how can we extract the equivalent feature? Say, if I want to generate in-betweening frames from i to i + 120 (target frame), [i - 60, i] is available, but (i, i + 60] is not, since the future cannot be seen.

How do you solve this?

Thanks!

The size of tensor a (823) must match the size of tensor b (784) at non-singleton dimension 1

https://github.com/pauzii/PhaseBetweener/blob/9d4956e33cc7d8e4b65c020a704dd9382d67af8a/DeepLearningONNX/Models/GNN/InBetweeningNetwork.py#L53-L54

Error related to #10 (comment)

Started creating data pointers...
Finished creating data pointers.
Warning: Number of gating features (130) and main features (693) are not the same as input features (784).
Traceback (most recent call last):
  File "/home/jupyter/DeepLearningONNX/Models/GNN/InBetweeningNetwork.py", line 106, in <module>
    yPred, gPred, w0, w1, w2 = network(xBatch)
  File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/jupyter/DeepLearningONNX/Models/GNN/../../../DeepLearningONNX/Models/GNN/InBetweeningNetwork.py", line 176, in forward
    x = utility.Normalize(x, self.Xnorm)
  File "/home/jupyter/DeepLearningONNX/Models/GNN/../../../DeepLearningONNX/Library/Utility.py", line 251, in Normalize
    return (X - mean) / std
RuntimeError: The size of tensor a (823) must match the size of tensor b (784) at non-singleton dimension 1
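
The warning already hints at the cause: 130 gating plus 693 main features is 823 columns, while InputNorm.txt only provides 784, so the normalization file, the exported data, and/or the hard-coded index ranges in InBetweeningNetwork.py are out of sync. A small diagnostic sketch (an assumption, not from the thread) to see which files disagree:

import numpy as np

norm_cols = np.loadtxt("../../InputNorm.txt").shape[1]
data_cols = len(np.loadtxt("../../Input.txt", max_rows=1))
print("InputNorm.txt columns:", norm_cols)   # 784 in the traceback above
print("Input.txt columns:", data_cols)
# If the two differ, Input.txt and InputNorm.txt come from different exports;
# if they match each other but not len(gating_indices) + len(main_indices),
# the gating_hidden / main_hidden setup needs to match the exported feature layout.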

I trained the wrong model

Hi @pauzii, another issue here.

I trained the model as per the instructions in the README.md (plus the custom Barracuda build that I mentioned in another issue), but I can't use it.

The reason seems to be that it's not the same model as LaFAN1_150_DeepPhases or LaFAN1_150_LocalPhases (LaFAN1_150_NoPhases doesn't work at all, so I ignore it), but rather it's more similar (in input and output sizes) to the LaFan_150_GNN_DeepPhases_Styles model (which produces poor results).

My trained model is called 150 (the number of epochs, right?)
image.

Training folder is missing

I got an error:

(PhaseBetweener) C:\Users\Wojtek\Desktop\prgr\PhaseBetweener\DeepLearningONNX\Models\GNN>python InBetweeningNetwork.py
2.0.1+cu118
True
1
0
NVIDIA GeForce RTX 3060 Laptop GPU
Started creating data pointers...
Collecting data pointers for ..\..\Input.txt - 28000
Collecting data pointers for ..\..\Output.txt - 28000
Finished creating data pointers.
============= Diagnostic Run torch.onnx.export version 2.0.1+cu118 =============
verbose: False, log level: Level.ERROR
======================= 0 NONE 0 NOTE 0 WARNING 0 ERROR ========================

Traceback (most recent call last):
  File "C:\Users\Wojtek\Desktop\prgr\PhaseBetweener\DeepLearningONNX\Models\GNN\InBetweeningNetwork.py", line 104, in <module>
    utility.SaveONNX(
  File "C:\Users\Wojtek\Desktop\prgr\PhaseBetweener\DeepLearningONNX\Models\GNN\../../../DeepLearningONNX\Library\Utility.py", line 157, in SaveONNX
    torch.onnx.export(
  File "C:\Users\Wojtek\miniconda3\envs\PhaseBetweener\lib\site-packages\torch\onnx\utils.py", line 506, in export
    _export(
  File "C:\Users\Wojtek\miniconda3\envs\PhaseBetweener\lib\site-packages\torch\onnx\utils.py", line 1626, in _export
    onnx_proto_utils._export_file(proto, f, export_type, export_map)
  File "C:\Users\Wojtek\miniconda3\envs\PhaseBetweener\lib\site-packages\torch\onnx\_internal\onnx_proto_utils.py", line 174, in _export_file
    with torch.serialization._open_file_like(f, "wb") as opened_file:
  File "C:\Users\Wojtek\miniconda3\envs\PhaseBetweener\lib\site-packages\torch\serialization.py", line 271, in _open_file_like
    return _open_file(name_or_buffer, mode)
  File "C:\Users\Wojtek\miniconda3\envs\PhaseBetweener\lib\site-packages\torch\serialization.py", line 252, in __init__
    super().__init__(open(name, mode))
FileNotFoundError: [Errno 2] No such file or directory: './Training/1.onnx'

To fix this, I needed to create a Training folder in /PhaseBetweener/DeepLearningONNX/Models/GNN.

Also look at:
https://github.com/pauzii/PhaseBetweener/blob/b8f4e4df3456f6218ce2434d8d875f78a3799ad9/DeepLearningONNX/Models/GNN/InBetweeningNetwork.py#L16
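
A code-level alternative to creating the folder by hand (a small sketch; `save` is the "./Training" path already defined in the script) is to create the output directory before the training loop:

import os

os.makedirs(save, exist_ok=True)  # create ./Training if it does not exist yet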

In MotionProcessor, when clicking "Process" an error is thrown

As the title says, I get the following error:

NullReferenceException: Object reference not set to an instance of an object
MotionProcessor+<Process>d__25.MoveNext () (at C:/Users/Wojtek/Desktop/prgr/PhaseBetweener/Unity/Assets/Scripts/DataProcessing/Processor/MotionProcessor.cs:298)
EditorCoroutines.MoveNext (EditorCoroutines+EditorCoroutine coroutine) (at C:/Users/Wojtek/Desktop/prgr/PhaseBetweener/Unity/Assets/Scripts/Extensions/EditorCoroutines/EditorCoroutines.cs:336)
EditorCoroutines.GoStartCoroutine (EditorCoroutines+EditorCoroutine coroutine) (at C:/Users/Wojtek/Desktop/prgr/PhaseBetweener/Unity/Assets/Scripts/Extensions/EditorCoroutines/EditorCoroutines.cs:279)
EditorCoroutines.GoStartCoroutine (System.Collections.IEnumerator routine, System.Object thisReference) (at C:/Users/Wojtek/Desktop/prgr/PhaseBetweener/Unity/Assets/Scripts/Extensions/EditorCoroutines/EditorCoroutines.cs:253)
EditorCoroutines.StartCoroutine (System.Collections.IEnumerator routine, System.Object thisReference) (at C:/Users/Wojtek/Desktop/prgr/PhaseBetweener/Unity/Assets/Scripts/Extensions/EditorCoroutines/EditorCoroutines.cs:124)
EditorCoroutineExtensions.StartCoroutine (UnityEditor.EditorWindow thisRef, System.Collections.IEnumerator coroutine) (at C:/Users/Wojtek/Desktop/prgr/PhaseBetweener/Unity/Assets/Scripts/Extensions/EditorCoroutines/EditorCoroutineExtensions.cs:9)
MotionProcessor.OnGUI () (at C:/Users/Wojtek/Desktop/prgr/PhaseBetweener/Unity/Assets/Scripts/DataProcessing/Processor/MotionProcessor.cs:178)
UnityEditor.HostView.InvokeOnGUI (UnityEngine.Rect onGUIPosition, UnityEngine.Rect viewRect) (at <5ad584e208e14caaa9e6b2e6027e9204>:0)
UnityEditor.DockArea.DrawView (UnityEngine.Rect viewRect, UnityEngine.Rect dockAreaRect) (at <5ad584e208e14caaa9e6b2e6027e9204>:0)
UnityEditor.DockArea.OldOnGUI () (at <5ad584e208e14caaa9e6b2e6027e9204>:0)
UnityEngine.UIElements.IMGUIContainer.DoOnGUI (UnityEngine.Event evt, UnityEngine.Matrix4x4 parentTransform, UnityEngine.Rect clippingRect, System.Boolean isComputingLayout, UnityEngine.Rect layoutSize, System.Action onGUIHandler, System.Boolean canAffectFocus) (at <08270fb28ecf479b927dcf4fe817bc07>:0)
UnityEngine.UIElements.IMGUIContainer.HandleIMGUIEvent (UnityEngine.Event e, UnityEngine.Matrix4x4 worldTransform, UnityEngine.Rect clippingRect, System.Action onGUIHandler, System.Boolean canAffectFocus) (at <08270fb28ecf479b927dcf4fe817bc07>:0)
UnityEngine.UIElements.IMGUIContainer.HandleIMGUIEvent (UnityEngine.Event e, System.Action onGUIHandler, System.Boolean canAffectFocus) (at <08270fb28ecf479b927dcf4fe817bc07>:0)
UnityEngine.UIElements.IMGUIContainer.HandleIMGUIEvent (UnityEngine.Event e, System.Boolean canAffectFocus) (at <08270fb28ecf479b927dcf4fe817bc07>:0)
UnityEngine.UIElements.IMGUIContainer.SendEventToIMGUIRaw (UnityEngine.UIElements.EventBase evt, System.Boolean canAffectFocus, System.Boolean verifyBounds) (at <08270fb28ecf479b927dcf4fe817bc07>:0)
UnityEngine.UIElements.IMGUIContainer.SendEventToIMGUI (UnityEngine.UIElements.EventBase evt, System.Boolean canAffectFocus, System.Boolean verifyBounds) (at <08270fb28ecf479b927dcf4fe817bc07>:0)
UnityEngine.UIElements.IMGUIContainer.HandleEvent (UnityEngine.UIElements.EventBase evt) (at <08270fb28ecf479b927dcf4fe817bc07>:0)
UnityEngine.UIElements.CallbackEventHandler.HandleEventAtTargetPhase (UnityEngine.UIElements.EventBase evt) (at <08270fb28ecf479b927dcf4fe817bc07>:0)
UnityEngine.UIElements.MouseCaptureDispatchingStrategy.DispatchEvent (UnityEngine.UIElements.EventBase evt, UnityEngine.UIElements.IPanel panel) (at <08270fb28ecf479b927dcf4fe817bc07>:0)
UnityEngine.UIElements.EventDispatcher.ApplyDispatchingStrategies (UnityEngine.UIElements.EventBase evt, UnityEngine.UIElements.IPanel panel, System.Boolean imguiEventIsInitiallyUsed) (at <08270fb28ecf479b927dcf4fe817bc07>:0)
UnityEngine.UIElements.EventDispatcher.ProcessEvent (UnityEngine.UIElements.EventBase evt, UnityEngine.UIElements.IPanel panel) (at <08270fb28ecf479b927dcf4fe817bc07>:0)
UnityEngine.UIElements.EventDispatcher.ProcessEventQueue () (at <08270fb28ecf479b927dcf4fe817bc07>:0)
UnityEngine.UIElements.EventDispatcher.OpenGate () (at <08270fb28ecf479b927dcf4fe817bc07>:0)
UnityEngine.UIElements.EventDispatcherGate.Dispose () (at <08270fb28ecf479b927dcf4fe817bc07>:0)
UnityEngine.UIElements.EventDispatcher.ProcessEvent (UnityEngine.UIElements.EventBase evt, UnityEngine.UIElements.IPanel panel) (at <08270fb28ecf479b927dcf4fe817bc07>:0)
UnityEngine.UIElements.EventDispatcher.Dispatch (UnityEngine.UIElements.EventBase evt, UnityEngine.UIElements.IPanel panel, UnityEngine.UIElements.DispatchMode dispatchMode) (at <08270fb28ecf479b927dcf4fe817bc07>:0)
UnityEngine.UIElements.BaseVisualElementPanel.SendEvent (UnityEngine.UIElements.EventBase e, UnityEngine.UIElements.DispatchMode dispatchMode) (at <08270fb28ecf479b927dcf4fe817bc07>:0)
UnityEngine.UIElements.UIElementsUtility.DoDispatch (UnityEngine.UIElements.BaseVisualElementPanel panel) (at <08270fb28ecf479b927dcf4fe817bc07>:0)
UnityEngine.UIElements.UIElementsUtility.UnityEngine.UIElements.IUIElementsUtility.ProcessEvent (System.Int32 instanceID, System.IntPtr nativeEventPtr, System.Boolean& eventHandled) (at <08270fb28ecf479b927dcf4fe817bc07>:0)
UnityEngine.UIElements.UIEventRegistration.ProcessEvent (System.Int32 instanceID, System.IntPtr nativeEventPtr) (at <08270fb28ecf479b927dcf4fe817bc07>:0)
UnityEngine.UIElements.UIEventRegistration+<>c.<.cctor>b__1_2 (System.Int32 i, System.IntPtr ptr) (at <08270fb28ecf479b927dcf4fe817bc07>:0)
UnityEngine.GUIUtility.ProcessEvent (System.Int32 instanceID, System.IntPtr nativeEventPtr, System.Boolean& result) (at <00a477ed1abf4030be646c3244bd3667>:0)

Here's my screenshot:
image

FBX importer can't get phase features

Hello, I'm trying to use my own data for training. I notice that there is an FBX importer under AI4Animation/importer.
When I import my FBX files in Unity, the actor moves normally, but as I follow the instructions to extract features, I can't get the deep phase features while the other features are correct. I want to know how to get the deep phase features.
Alternatively, I can't convert my FBX files to correct BVH files like LaFAN1; when I import my BVH files, the mesh of the actor looks strange. Is there any instruction on how to get the correct BVH format?

test.fbx.zip
