computationalbiomechanicslab / opensim-creator
A UI for building OpenSim models
Home Page: https://opensimcreator.com
License: Apache License 2.0
Estimate: 1-4h (might end up requiring a little jiggling around to do robustly)
From Nerissa:
I found a 'problem' in the new measuring tool. As you can see in the image below, it is possible for the measured distance and the coordinates you point at to overlap, making it hard or impossible to read the measured length. Of course, with some manoeuvring it is possible to work around it, but it might be nice to e.g. redirect the coordinates or length whenever this happens.
Estimate: 2-6d (adding texture support is easy, most of the work is around adding relevant dialog boxes, buttons, etc.)
For model building.
Let the user add an image--e.g. an arm anatomy diagram--along an axis of the scene with adjustable opacity, so that the user can edit the OpenSim model with that as a ("tracing") background. This is really handy for sculpting, where people usually work this way in other software (e.g. Blender).
Estimate: 2-6d (wide range because doing this properly would also require stuff like drawing 3D connection lines, modals for displaying connectivity, etc.)
If the user selects a body in the UI, a common question is "which joints are connected to this body", or "which offset frames are connected to this body", or similar. There isn't currently an easy way to figure this out, bar going through the joints and manually checking.
Estimate: 2-5h
The osc::Transform class is specifically designed for 3D rendering (as opposed to SimTK::Transform, which is better-suited for accurate simulation transform encoding). A common feature in other transform classes (e.g. Unity's, here) is the ability to chain transforms together as part of a scene-graph computation.
This is similar to chaining OpenSim::Frames, in that it's there for creating topologies, but the utility of an osc::Transform is that it's designed to have a lower memory footprint (10 floats vs. 12 doubles) and to be trivially tween-able for when the UI decides to do things like tween between transforms.
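As a minimal, hypothetical sketch (the names and the exact 10-float layout, here 3 scale + 4 rotation quaternion + 3 position, are assumptions rather than OSC's actual code), the footprint/tween-ability idea looks something like:

```cpp
// Hypothetical render-oriented transform: 10 floats
// (3 scale + 4 rotation quaternion + 3 position), rather than the
// 12 doubles of a 3x3 rotation matrix + translation vector.
struct Quat { float w = 1.0f, x = 0.0f, y = 0.0f, z = 0.0f; };
struct Vec3 { float x = 0.0f, y = 0.0f, z = 0.0f; };

struct Transform {
    Vec3 scale{1.0f, 1.0f, 1.0f};
    Quat rotation{};  // identity
    Vec3 position{};
};

// Component-wise linear interpolation of a Vec3.
Vec3 lerp(Vec3 a, Vec3 b, float t) {
    return {a.x + t*(b.x - a.x), a.y + t*(b.y - a.y), a.z + t*(b.z - a.z)};
}

// "Tweening" two transforms: trivial for scale/position. A real
// implementation would also nlerp/slerp the quaternion and renormalize;
// that's omitted here for brevity.
Transform tween(const Transform& a, const Transform& b, float t) {
    Transform out;
    out.scale = lerp(a.scale, b.scale, t);
    out.position = lerp(a.position, b.position, t);
    out.rotation = a.rotation;  // rotation tween omitted
    return out;
}
```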
Medium prio: it's an easy(ish) change that helps a user know when their data is "safe"
Estimate: 1d, 2d if also implementing dirty flagging at the same time
When a model is opened, the window's titlebar should change from:
OpenSim Creator
To:
OpenSim Creator (file.osim)
And ideally there should be some dirty-flagging, so that an unsaved file shows with an asterisk:
OpenSim Creator (file.osim*)
But this might be a fair bit more work
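A sketch of the titlebar logic described above (hypothetical helper; the real implementation would pass the result to the windowing API, e.g. SDL_SetWindowTitle):

```cpp
#include <string>

// Compose the window title from the current document state. An empty
// 'filename' means "no document open"; 'dirty' drives the
// unsaved-changes asterisk.
std::string makeWindowTitle(const std::string& filename, bool dirty) {
    if (filename.empty()) {
        return "OpenSim Creator";
    }
    return "OpenSim Creator (" + filename + (dirty ? "*" : "") + ")";
}
```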
Estimate: 1d (assuming it's fairly easy to implement an in-app clipboard abstraction)
It's been noticed in several UX sessions that it's kind of annoying to copy each vector component one-by-one between things in the mesh importer
At the moment, copying a Vec3 (for example) involves copying over each value one-at-a-time into another Vec3 field (e.g. to copy a property value around)
This might be implementable as a generic property-copying approach.
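A sketch of what a whole-Vec3 clipboard entry could look like (hypothetical functions; the real feature would hook into the OS clipboard or an in-app abstraction):

```cpp
#include <cstdio>
#include <string>

struct Vec3 { float x, y, z; };

// Serialize a whole Vec3 as one clipboard string, so the user copies
// all three components at once instead of one-at-a-time.
std::string toClipboardText(Vec3 v) {
    char buf[64];
    std::snprintf(buf, sizeof(buf), "%g %g %g", v.x, v.y, v.z);
    return buf;
}

// Parse a Vec3 back out of clipboard text. Returns true on success;
// a real implementation would surface parse errors to the user.
bool fromClipboardText(const std::string& s, Vec3& out) {
    return std::sscanf(s.c_str(), "%f %f %f", &out.x, &out.y, &out.z) == 3;
}
```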
Estimate: 1-2d for partial support, 5-8 days for full support (TextGeometry), with the side-benefit that full support would also mean other parts of OSC would then be able to render 3D text.
The SimTK::DecorativeGeometryImplementation implementation that OSC uses, which is directly coupled to OSC's lower-level systems (notably, GPU allocation/caching), does not implement all geometry types yet. This means that if an OpenSim model / SimTK system contains one of those unknown geometries, it will not be visible in OSC.
Steps necessary:
DONE: install a one-time log warning message whenever the frontend (e.g. an OpenSim model) tries to emit a not-supported geometry to the backend
Find some OpenSim models that contain these not-yet-supported geometries, if possible, and install them into the examples/ dir so that we have in-prod testing of the geometry available (making it easier for end-users to investigate)
Write custom OpenSim models that contain geometry that isn't exercised by existing OpenSim model files (e.g. a model containing toruses)
Implement each missing geometry:
Ensure there is a model-building route for adding these geometries (e.g. adding dropdowns/buttons for attaching them to OpenSim::Body components in the model)
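The DONE step above (a one-time log warning per unsupported geometry type) could be sketched like this (hypothetical function name; it returns whether it actually warned, to make the dedup behaviour testable):

```cpp
#include <cstdio>
#include <set>
#include <string>

// Emit a warning for an unsupported geometry type at most once per
// session, so a model containing (say) hundreds of toruses doesn't
// flood the log. Returns true if this call emitted the warning.
bool warnUnsupportedGeometryOnce(const std::string& typeName) {
    static std::set<std::string> alreadyWarned;
    if (alreadyWarned.insert(typeName).second) {
        std::fprintf(
            stderr,
            "warning: geometry type '%s' is not supported by the renderer and will not be drawn\n",
            typeName.c_str());
        return true;
    }
    return false;
}
```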
Estimate: 0.5-1.5d
There's currently a hacky spatial measurement tool, but one thing that people are asking for is a measurement tool that measures things within a particular frame (coordinates, lengths, whatever).
The idea would be that a user selects some frame in the scene and then starts clicking somewhere else so that the UI says "oh yes, it's here in space and it's this far across, etc."
Estimate: 1-2d (basic impl.), 3-5d (impl that also takes things like crazy length scales into account correctly and has soft edges without blowing the GPU up)
The current implementation does not render any scene shadows, which gives a very "flat" look to the scene. I've previously implemented shadows using a standard shadowmapping technique, e.g. as in here:
https://learnopengl.com/Advanced-Lighting/Shadows/Shadow-Mapping
Integrating it into OSC shouldn't be too much work. The main issue is ensuring the shadowmap is aligned correctly relative to the current camera position, and ensuring the shadows don't become too messed up when the camera is (e.g.) zoomed out and the shadowmap's resolution is insufficient to map distant objects. It would be too much work to implement hierarchical shadow maps.
Show the "line" of the coordinate in the UI, if possible (I'm unsure how easy this is, given that a "Coordinate" is generalized and not necessarily easy to stick onto a 3D frame without a bunch of heuristics).
Estimate: 1-2d
When the user is attaching geometry to a body or something, they're presented with text dropdowns like "cube", "sphere", etc. If they select one, or a file, it would be nice to have a little 3D viewer that shows what they're about to attach.
Estimate: 2-6h
If the user clicks a combo box, it lists the options. However, some users expect to be able to type the first few letters of an option case-insensitively to find something.
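A sketch of the expected matching behaviour (hypothetical helper; a real implementation would also reset the typed buffer after a short timeout):

```cpp
#include <cctype>
#include <string>
#include <vector>

// Case-insensitive prefix match over combo-box options: returns the
// index of the first option starting with 'typed', or -1 if none match.
int findComboOption(const std::vector<std::string>& options,
                    const std::string& typed) {
    auto lower = [](unsigned char c) { return static_cast<char>(std::tolower(c)); };
    for (size_t i = 0; i < options.size(); ++i) {
        const std::string& opt = options[i];
        if (typed.size() > opt.size()) {
            continue;  // typed text is longer than this option
        }
        bool match = true;
        for (size_t j = 0; j < typed.size(); ++j) {
            if (lower(typed[j]) != lower(opt[j])) {
                match = false;
                break;
            }
        }
        if (match) {
            return static_cast<int>(i);
        }
    }
    return -1;
}
```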
Estimate: 4-7d (assuming recording the states etc. is already sorted and the user doesn't want fancy recording options)
This is a stub issue for implementing simulation recording.
High-level overview:
- Encoding SimTK::States from the simulator into video frames
- Resampling SimTK::States (to align with the video framerate)
The encoder does not need to be high-perf, only "good enough" to provide the feature. It's preferred to use smaller libraries that OSC can build from source on all target platforms, rather than larger encoding libraries that are a complete PITA to build + integrate (looking at you, FFMPEG...)
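The framerate-alignment step could be sketched as follows (hypothetical names; it assumes the simulator reports state timestamps in ascending order, and picks the latest state at or before each frame time):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// For each video frame time (one every 1/fps seconds, up to 'duration'),
// pick the index of the latest simulation state at or before that time.
// Assumes 'stateTimes' is sorted ascending and nonempty.
std::vector<std::size_t> resampleToFramerate(const std::vector<double>& stateTimes,
                                             double duration, double fps) {
    std::vector<std::size_t> frames;
    for (double t = 0.0; t <= duration; t += 1.0 / fps) {
        auto it = std::upper_bound(stateTimes.begin(), stateTimes.end(), t);
        // clamp to the first state if the frame time precedes all states
        std::size_t idx = (it == stateTimes.begin())
            ? 0
            : static_cast<std::size_t>(it - stateTimes.begin() - 1);
        frames.push_back(idx);
    }
    return frames;
}
```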
Estimate: 2-4d for most of the implementation (incl. UI windows for picking sizes etc.)
Add basic support for screen-shotting the 3D viewer alone.
Of course, users could just do this themselves with print screen/a snipping tool. However, the main benefit of having something in-UI is that the UI has direct access to the renderer, so it can:
The reason this bug exists is because muscle coloring requires scanning over the entire geometry list "looking" for muscle-related geometry.
If "coerce selection to muscle" is selected then the comparison is something like:
for (Component* c : drawlist) {
    if (dynamic_cast<Muscle*>(c)) {
        // perform muscle coloring
    }
}
Which is O(n) w.r.t. the number of components in the drawlist - assuming dynamic_cast is asymptotically O(1) to figure out whether an instance is a child of some other class (this is implementation-dependent and likely depends on how type information is loaded into the runtime vtable etc.). Whereas, to make the feature work when "coerce selection to muscle" is not selected, the comparison is something like:
for (Component* c : drawlist) {
    if (dynamic_cast<Muscle*>(c) || is_child_of<Muscle>(c)) {
        // perform muscle coloring
    }
}
Where is_child_of is an algorithm that needs to crawl up the model hierarchy until it either hits a muscle (it is a child of a Muscle) or the root (it is not a child of a muscle). The crawl is O(m) in the average depth of the component in the model hierarchy. Most subcomponents in a model sit quite deep (the tree "fans out" into many lower levels, like the individual points in a muscle).
OpenSim models also aren't exactly memory-local, so you're trading a linear scan over the drawlist (the for loop) and an (assumed O(1)) dynamic_cast for a not-memory-local tree traversal that also performs many dynamic_casts. The perf hit is mostly down to memory-locality issues.
I'll re-insert the is_child_of code and see if it's actually a problem, though. It's probably better to have a consistent user experience (changing muscle coloring "just works" rather than being dependent on some unrelated setting) than to be a few percent faster in rendering.
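For illustration, a hypothetical is_child_of matching the description above (the Component/owner types here are stand-ins, not OpenSim's actual API):

```cpp
// Stand-in component tree: each component knows its owner.
struct Component {
    Component* owner = nullptr;
    virtual ~Component() = default;  // polymorphic, so dynamic_cast works
};
struct Muscle : Component {};
struct PathPoint : Component {};

// Crawl up the ownership hierarchy until we hit a T (true) or the
// root (false). O(depth) in the component's depth, and pointer-chasing,
// hence the memory-locality concern described above.
template <typename T>
bool is_child_of(const Component* c) {
    for (const Component* p = c->owner; p != nullptr; p = p->owner) {
        if (dynamic_cast<const T*>(p)) {
            return true;
        }
    }
    return false;
}
```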
Estimate: 4-8d
This is a fairly big feature that tries to mimic how Blender works.
The idea is that users can press G, R, or S while hovering a 3D scene to temporarily enter a "mode" in that screen that performs the corresponding action (grab/move, rotate, or scale). The user can also press X, Y, or Z while in that mode to constrain the transform to that axis.
E.g. this would enable a user:
The benefit of this UX feature is that it saves the user from having to use the right-click menu or on-screen gizmos, which are harder to click.
Estimate: 2-6h
Right now it's a cube, which isn't what people seem to like. Maybe a sphere with axes poking out a little bit?
Estimate: 3-5d (bulk of the work is in establishing temporary locations, expiring files, when to persist, etc.)
E.g. if the user is editing/making a model but hasn't saved in a while, it might be advantageous to save the model to some temporary location so that, when the user reboots OSC, a popup comes up saying "oi, you have unsaved changes, what do you want to do about that?" or something along those lines.
The benefit of a feature like this is that it makes unhandle-able crashes (e.g. segfaults) a little less painful. At the moment, the system can only try to automatically recover from runtime exceptions.
High-priority: it annoys users and is fairly trivial to change
Estimate: 1-4h
At the moment, the OSC UI splits things into entirely sandboxed "screens" that may share a little state but have complete control over the entire window frame. The main screens users see are:
These need to be refactored into a "Tab" interface, which will largely be the same as the screen interface, but with the following differences:
- Tabs receive event callbacks (e.g. onTabCloseRequested), so that tabs can close themselves, open other tabs, blink the tab (e.g. when a sim is complete), set the tab's title, etc.
Estimate: 1-2d
At the moment, the initial position of ImGui panels is based on a hard-coded configuration file that I copy from my dev machine. Ideally, the initial position of panels would be calculated based on the host computer (screen dimensions, etc.) because my screen might be a lot larger than some users' (who might be using cheaper low-res laptops)
Estimate: 1-2d for a basic 3D visualizer with basic tooling.
The OSC backend could easily support directly rendering Simbody simulations, because it already (effectively) does this via OpenSim.
Although this isn't a core required feature of OSC, the advantage of implementing a basic Simbody visualizer is that it enables exercising Simbody features directly in OSC separately from OpenSim, which is useful for debugging a simulation/rendering issue (it removes OpenSim from the equation).
Estimate: 1-3d
Currently, all meshes emitted by the backend are duped. As in, they're either:
It would be advantageous to implement basic vert deduplication because we already have the necessary backend for it (the codebase is already using EBOs) and it would reduce the GPU's memory pressure slightly, which is a nice-to-have on lower-end devices.
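A minimal sketch of position-only deduplication (real mesh data would also need to key on normals/UVs; Vec3 here is a stand-in for the real vertex type):

```cpp
#include <cstdint>
#include <map>
#include <tuple>
#include <vector>

struct Vec3 { float x, y, z; };

// Collapse bitwise-identical positions into one vertex entry and emit
// an index buffer suitable for an EBO. Shared vertices (e.g. along a
// shared triangle edge) are stored once and referenced by index.
void deduplicate(const std::vector<Vec3>& in,
                 std::vector<Vec3>& outVerts,
                 std::vector<std::uint32_t>& outIndices) {
    std::map<std::tuple<float, float, float>, std::uint32_t> seen;
    for (const Vec3& v : in) {
        auto key = std::make_tuple(v.x, v.y, v.z);
        auto [it, inserted] =
            seen.emplace(key, static_cast<std::uint32_t>(outVerts.size()));
        if (inserted) {
            outVerts.push_back(v);  // first sighting: store the vertex
        }
        outIndices.push_back(it->second);  // always emit an index
    }
}
```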
Estimate: 1-3d
If the user has an item selected in the UI and then hovers over an item that surrounds that item, the inner item's rims will be dimmed. There are deeper technical reasons for why this happens, but it should ideally show each rim clearly.
Estimate: 2-4d
In particular, this is a requirement for PathSprings to work correctly (see #77). This issue specifically addresses making GeometryPath properties work.
Estimate: 2-3d
Giving the user the ability to configure the scene with a different floor, lighting, background image, etc. would be nice. Based on what I'm seeing in presentations etc. people seem to enjoy wooden floors (those monsters).
Estimate: 1-3d
The mesh importer 3D editor is much more comprehensive in its feature set. It supports things like:
Estimate: 2-4d (it also requires robustly implementing a long-term per-user configuration system)
At the moment, if the user opens/closes windows, it isn't remembered between boots of the application. This makes the UI feel a little less "homey" because the user may have set things up just how they like it, only to find that restarting the application wipes all memory of their changes.
Estimate: 1-4h
The "move something between two meshpoints" and similar actions currently rely on left- then right-clicking to get two points. The logic is that left-click assigns one point and right-click assigns the other.
The issue is that the screen immediately transitions once both points are set with no backsies. It would be better to instead make it such that users can do the entire thing with one button, but that there's some basic undo/redo support and a big "FINISHED" button once they're happy with placement.
Estimate: 4-8h (it just deletes files - assuming the configuration system is "done")
I.e. a "Factory Reset" button in the UI
Estimate: 2-4h
A nice-to-have feature - probably more useful on smaller models where everything in the hierarchy viewer is within one scroll page.
Items in the mesh importer scene graph are displayed in a hierarchy viewer. They may also be "attached" to each other (e.g. a mesh attaches to a body). One comment in a UX session was that it's unclear what's attached to what. The 3D viewer shows connection lines but one suggestion was to also show thin connection lines in the 2D hierarchy viewer.
Estimate: 3-6d (the datastructure is used in a lot of places)
At the moment, the osim editor allows for Undo/Redo behavior. This works by enabling edits to a "scratch" model that's later system'd, state'd, and render'd before being pushed into an undo/redo circular buffer.
The current API effectively dirty-flags the model as it's edited and, finally, performs this commit magic at the end of the frame (if necessary). This is sub-optimal because there is no metadata. The client code just edits a model and forgets about undo/redo (it's automatic, based on the flag). This means we can't have a labelled history ("added this", "deleted that") and we can't easily later support features like labelled snapshots and model versioning.
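A hypothetical labelled-commit buffer along those lines (Model is a stand-in for the real model type, and the real buffer would be circular/capped rather than an unbounded vector):

```cpp
#include <cstddef>
#include <string>
#include <vector>

struct Model { int version = 0; };  // stand-in for the real model type

// Each commit carries human-readable metadata, which later enables a
// labelled history panel, snapshots, versioning, etc.
struct Commit {
    std::string label;  // e.g. "added body", "deleted joint"
    Model snapshot;
};

class UndoRedoBuffer {
public:
    explicit UndoRedoBuffer(Model initial) {
        history_.push_back({"initial", std::move(initial)});
    }
    void commit(std::string label, Model m) {
        history_.resize(cursor_ + 1);  // a new commit drops any redo branch
        history_.push_back({std::move(label), std::move(m)});
        ++cursor_;
    }
    bool undo() { if (cursor_ == 0) return false; --cursor_; return true; }
    bool redo() { if (cursor_ + 1 >= history_.size()) return false; ++cursor_; return true; }
    const Commit& current() const { return history_[cursor_]; }
private:
    std::vector<Commit> history_;
    std::size_t cursor_ = 0;
};
```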
Estimate: 1d
The default ImGui slider (e.g. the one used to slide along coordinate values) is difficult for some users to identify. This is because modern software like Blender, YouTube, etc. uses a slider design that is a thinner line with a larger circular slider button.
Sliders should also make it clearer how to edit their value directly (Ctrl+LeftClick) - it's not obvious right now.
Estimate: 0.5-2d (depends on how we deal with the event propagation)
Minor bug
There is currently a bug in the mesh importer screen where importing a mesh using the mouse (i.e. clicking "open" in the dialog) causes the mesh to be imported, but de-selected.
The code for importing the mesh is fine; the problem is that the click event from clicking "open" is being propagated to the UI, which is interpreting it as a click.
Estimate: 2-10d (wide range because it's ill-defined how robustly it needs to be implemented)
There isn't currently any support for this because wrapping surfaces are a little annoying to do with the existing automated modals. Iirc, it's because the wrapping geometry has to be added into the model's wrapset and connected to a relevant GeometryPath correctly etc. to all work.
Estimate: 1-2d (+ any other overhead from PRing)
At the moment, the spring doesn't really output anything. This can make it very annoying to develop a model that contains lots of springs, because the spring's rest length etc. usually need to be tweaked a bit to rebalance the model.
Estimate: 1-2d
Ideally, this would be implemented as a composite output measurement. Effectively: add two offset frames to the body and plot length(f1.pos - f2.pos).
It might be that, in the interim, it needs to be added as a specific feature for specifically measuring the distance between two points.
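The measured quantity itself is trivial; a sketch, with Vec3 standing in for the real vector type:

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

// The composite output described above: given the world positions of
// two offset frames, the plotted value is length(f1.pos - f2.pos).
double distanceBetween(Vec3 f1, Vec3 f2) {
    double dx = f1.x - f2.x;
    double dy = f1.y - f2.y;
    double dz = f1.z - f2.z;
    return std::sqrt(dx*dx + dy*dy + dz*dz);
}
```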
Estimate: 2-4d (it would also require re-architecting the 3D viewport API to robustly handle specialized context menus, which would be nice to have for other reasons though)
Right-clicking a body in the visualizer should, ideally, show a contextual action like "attach new body to this" or something. This is so that the user doesn't have to separately find the body in the "add body" modal.
High-prio: modal screens frequently confuse certain types of users and this fix isn't hard to implement
Estimate: 1-3h
A modal screen is one of the "do something" screens in the mesh importer. It's effectively a "mode" that the entire screen temporarily enters to prompt something from the user that can't allow any other actions to take place (e.g. "select something").
At the moment, modal screens only show a little bit of text in the top-left corner like "click something, or press ESC to exit". It would be better if there was also a cancel button that the user clearly sees and that takes them back.
Estimate: 2-6h
Even the lowest-end laptops render just enough of the splash screen, but it's unknown what our typical users actually use, so pessimistically assume "the worst laptops in the world".
The current splash screen uses a hard-coded main menu size. It's mostly ok, but can push the top logo etc. off the edges of the screen on low-res screens (e.g. the ones found on low-end laptops).
Estimate: 0.5-2d (depends on how easy it is to automate)
At the moment, the mesh importer tries to show which axis has a degree of freedom by lengthening it and shortening the others. For example, a PinJoint has a longer Z axis than the other axes.
This is not currently implemented for all supported joint types. E.g. universal joints have equal-length axes. A robust implementation would handle all joint types. Ideally, by creating a fake joint, perturbing it slightly, and seeing what effect that has on the parent-to-child relationship (e.g. if a coordinate seems to rotate it about some axis then it's probably an axis-aligned rotation).
Estimate: 2-9d (wide range because grabbing and dragging might require a rehaul of the model commit system and UI renderer also, it would also require fairly tricksy heuristics for handling OpenSim)
This would require a property-to-viewer interaction, or at least some kind of heuristic (e.g. "is a freejoint selected? manipulate coords; is an offsetframe selected? manipulate the transform; etc.").
Estimate: 2-4d
Some users prefer positioning things in the local coordinate system. This is particularly handy for users that have their frames set up in some external software that happens to use a different coordinate system.
"Shortest path" refers to the fact that a body may be joined to ground by multiple joints. The "shortest" path should be chosen when deciding which parent frame to use for this local-space transform.
Estimate: 1-5d (wide range because I don't know the alg at all)
This requires more investigation.
Based on user feedback, one thing that might be handy is the ability to "mate" two meshes together. Effectively, to slam two meshes into each other until they collide, as a way of positioning them relative to each other (imagine having bone meshes that are separate in space but you want to position them such that they are touching - you want to "mate" them).
The mesh importer now can import a mesh and assign bodies, but does not support defining joints for the imported mesh (they all just default to weld joints).
Estimate: 2-5h
This could either be done in an OpenSim-agnostic software-engineering way, by having the simulation thread sleep on a condition_variable + shouldSleep-type flag (so rebooting would be achieved by resetting the flag and re-waking the simulation thread). It could also be done in an OpenSim-specific way, by having the simulation save its latest (post-integration) state before collapsing the worker thread entirely (so rebooting would be achieved by restarting the simulation with the saved state as the initial state)
This was simplified to pausing/resuming the playback, which is easier than implementing an interruptible simulation (the sim will continue, but the playback will pause).
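A sketch of the condition_variable approach described above (names are illustrative, not OSC's actual API; the simulation thread would call waitIfPaused() between integration steps):

```cpp
#include <condition_variable>
#include <mutex>

// UI thread calls pause()/resume(); the simulation thread calls
// waitIfPaused() between integration steps and sleeps until resumed.
class SimulationPauser {
public:
    void pause() {
        std::lock_guard<std::mutex> g(m_);
        paused_ = true;
    }
    void resume() {
        {
            std::lock_guard<std::mutex> g(m_);
            paused_ = false;
        }
        cv_.notify_all();  // re-wake the simulation thread
    }
    // Blocks while paused; returns immediately otherwise.
    void waitIfPaused() {
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this] { return !paused_; });
    }
    bool isPaused() const {
        std::lock_guard<std::mutex> g(m_);
        return paused_;
    }
private:
    mutable std::mutex m_;
    std::condition_variable cv_;
    bool paused_ = false;
};
```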
Estimate: 1-2 days for basic support (of the most used types, with some error swallowing), 2-5 days for more comprehensive support.
I anticipate that adding okay support for deleting the most important component types (e.g. bodies, joints) may take a day or so, with a little basic testing and some robustness code that rolls back if the operation breaks the model. A robust implementation that handles a wide variety of edge-cases and presents the user with appropriate options ("would you like to reassign that now-dangling socket?") may take several days.
Deleting things in an OpenSim model is currently quite difficult because only some items support deletion. The reason why is because some components can be dependent on other components (recursively). For example, a muscle might be dependent on a geometry path which is dependent on path points which are dependent on a body frame. Deleting a body might therefore require deleting the points, path, and muscle that are (in some cases, indirectly) dependent on it.
Estimate: 1-2d
When a user adds a new body, they are presented with a list of all the other frames the body can attach to. This list should (ideally) be sorted and searchable.
When the user adds some other component (e.g. a force), they are presented with a list of frames that the component's sockets will connect to. These, too, should be sorted + searchable.
Estimate: 0.5-3d (large range because it's unclear what it would entail to be done robustly and clearly)
When a user hovers a joint in the osim hierarchy, it would be nice to show the joint center in the 3D visualizer, to make it easy to figure out its current orientation etc.
It might be that "show frames" effectively does this, but isn't clear enough yet.
Estimate: 2-6d (wide range because the code is ready for it, but there might be some annoying edge-case stuff to kick out during implementation)