speedlimits / museum
This project forked from sirikata/sirikata
Augmented Museum, a Sirikata application
Home Page: http://www.sirikata.com/
License: Other
security breach -- change password & logic
add column to trigger python script for objects
some feature should indicate that this is not just a canned movie -- maybe camera pan?
clicking on a painting or sculpture should bring up metadata in the JS pane
javascript event causes camera to move to a position looking directly at artwork, at correct distance & angle (if painting, parallel to wall)
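A sketch of the framing math for that camera move (function and parameter names are made up here; the real event handler would feed in the artwork's transform and the renderer's actual field of view):

```python
import math

def frame_painting(center, normal, height, fov_deg=45.0, margin=1.2):
    """Return a camera position that views a wall-mounted painting head-on.

    center: (x, y, z) of the painting's midpoint
    normal: unit vector pointing out of the wall (camera looks along -normal)
    height: painting height in world units
    The camera sits along the normal at a distance where the painting,
    plus a margin, fills the vertical field of view.
    """
    half_fov = math.radians(fov_deg) / 2.0
    distance = (height * margin / 2.0) / math.tan(half_fov)
    return tuple(c + n * distance for c, n in zip(center, normal))
```

The camera's look direction is then just the negated wall normal, which automatically gives the "parallel to wall" behavior for paintings.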
make gravity a property. At minimum, need a way to specify global gravity. Better to make it local to object (might be sideways for a painting?)
Fly starts automagically when you click Start button on the Start screen.
The final keyframe should trigger a return to the Start screen
If you drag and drop an image from the cover flow into the 3D space, but release it over the navigation tool, the coverflow background, or any other browser area, it gets created parallel to the camera rather than on the wall behind it.
staticbox subtype would allow us to set up arbitrary invisible walls, by specifying non-existent mesh files (we used to do this before we had dynamicbox and staticmesh)
staticsphere may also be nice, especially as a child of an avatar
We need some ability to light paintings and sculptures, and adjust the lighting.
right now, the app doesn't even work if it's not connected to the internet. Responsiveness on loading objects is related to network speed.
Given that the app will be deployed in Europe, and we don't want to commit to keeping sirikata.com up 24/7, we need to ensure that all assets AND name->asset mappings can be stored and retrieved locally, without need of an outside network connection.
Simple fix: we have the Staging mechanism, where a names.txt file has name mappings and assets can be in Staging or Cache. A simple script could download names and assets from the CDN to the Staging folder. However, without actually parsing the assets, we can't determine what we need, so we would have to download the entire CDN, and possibly multiple CDN's.
A better approach would be to have a command that tells the app to dump its current name->assethash map into the Staging/names.txt file. This would capture all the name->asset mappings we are using in a given run of the application. Presumably the assets themselves are in the Cache (otherwise we wouldn't see them). That way, the app should run fine without a network connection.
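A sketch of what that dump command could do once the app hands it the live map (the Staging/Cache paths and the names.txt "name hash" line format are assumptions, not the actual Sirikata layout):

```python
import os
import shutil

def dump_staging(name_map, cache_dir, staging_dir):
    """Write the live name->assethash map to Staging/names.txt and copy
    each referenced asset out of the cache, so the app can run offline.

    name_map: dict of asset name -> content hash, taken from the running app.
    Assets missing from the cache are skipped (they'd also be missing online).
    """
    os.makedirs(staging_dir, exist_ok=True)
    with open(os.path.join(staging_dir, "names.txt"), "w") as out:
        for name, assethash in sorted(name_map.items()):
            out.write(f"{name} {assethash}\n")
            src = os.path.join(cache_dir, assethash)
            if os.path.exists(src):
                shutil.copy(src, os.path.join(staging_dir, assethash))
```

This only captures what one run of the app actually touched, which is exactly the point: no need to mirror the whole CDN.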
Related issue: some assets have URLs hard-coded in their sub-assets, so even if we move them to sirikata.com they continue to try to download parts from graphics.stanford.edu. I'm not sure if/how these assets work if they're in Staging and Cache.
pressing left or right arrow before physics is initialized occasionally sends your avatar to an insane position, where the x coordinate is many digits long and all you see is blue
(seen in ccrma most recently, but I believe it's also happened in master)
the name field is null
with the JScript panel taking up about 1/4 of the screen on the right-hand side, the remaining area in which 3D activity will go on may look strange because the camera is off-center. Ken mentioned there should be a way to adjust that in Ogre &/or the rendering pipeline. Patrick mentioned that this could bring up issues wrt selection logic and ray tracing.
This is to enable the art team to compose lighting for scenes
if two pictures are on the wall and I'm moving one, clicking the other and moving the mouse (without a second click) moves the first picture. You need to click a painting twice to actually grab it and move it on the wall.
putting this here to remind us to work around this bug (sirikata.com/trac # 55)
we need to write a script to fix broken material files on the CDN &/or fix the upload process and re-upload all bad files
basic idea: some material scripts have malformed text. They should look like:
delegate "meru:///H6VPhlsl.program:H6VPhlsl"
but instead some are like:
delegate H4VPhlsl
these are all hash-named files in assets/
so when fixed, the hash must be recomputed, the file renamed, AND all files that point to this file must be updated as well (names/ files? or could the hash appear inside another hash-named file?)
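A sketch of the per-file fixup (the quoted meru:/// form comes from the example above; the hash algorithm is assumed to be SHA-256 here, and the real CDN may use something else; updating the files that reference the old hash is left as the separate step noted above):

```python
import hashlib
import os
import re

# Matches a bare 'delegate NAME' line with no quotes or meru:/// URI.
DELEGATE_RE = re.compile(r'^(\s*)delegate\s+([A-Za-z0-9]+)\s*$')

def fix_material(path):
    """Rewrite bare 'delegate NAME' lines into the quoted meru:/// form,
    then rewrite the file under its recomputed content hash.

    Returns the new hash-named path. Callers must still update every
    file that pointed at the old hash.
    """
    with open(path) as f:
        lines = f.readlines()
    fixed = []
    for line in lines:
        m = DELEGATE_RE.match(line)
        if m:
            indent, name = m.groups()
            line = f'{indent}delegate "meru:///{name}.program:{name}"\n'
        fixed.append(line)
    data = "".join(fixed)
    new_hash = hashlib.sha256(data.encode()).hexdigest()
    new_path = os.path.join(os.path.dirname(path), new_hash)
    with open(new_path, "w") as f:
        f.write(data)
    return new_path
```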
This is totally a 'nice to have', but it would be cool if, when a picture is selected, its spotlight became the main light and all other lights toned down, making the picture jump out at you.
Try to use a wordpress post as the storage for gallery data.
The comments for the post will be the comments for the gallery
for artist branch, right-click is used for camera movement; only left button should select things.
Patrick says some old code allows right-click to select in a different order (but we're doing top object anyways so that's obsolete)
we need a light type that isn't actually put in the scene so Ken can use it as a template for moods
need more than just a hardcoded camera_path.csv -- maybe have it mirror scene.csv name?
also put it in scenes instead of cmake/build
tabs should highlight when they are selected
0a11f6
After the first picture has been placed, if you drag a picture from the JScript panel, it is often the wrong picture.
Basically, before a picture jumps to the next wall, it should not be allowed to go partially out of sight into the crack between the two walls
somewhere between commit 08a5d and a768c, the camera lost the ability to attach to the avatar.
Each of the 4 modes needs a start screen. These should:
*have a unified look
*contain a short description of what the mode is (maybe a picture/illustration?)
*have a start button to click
in curator/critic mode (maybe others), one mode of navigation should be to click on a place in the scene, and have the camera move there (not instantaneously -- move gracefully and gradually, as if controlled by a user)
if the click occurs on a picture or sculpture, the camera should additionally center itself so the object (if a painting) is parallel to the camera plane, and at a comfortable viewing distance.
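A sketch of the "graceful, not instantaneous" part: ease-in/ease-out interpolation of the camera position over a fixed duration (function names and the smoothstep curve are my choice, not anything in the codebase; the orientation would be interpolated the same way, ideally with quaternion slerp):

```python
def smoothstep(t):
    """Ease-in/ease-out curve: 0 at t=0, 1 at t=1, zero slope at both ends,
    so the camera accelerates and decelerates like a user-driven move."""
    t = max(0.0, min(1.0, t))
    return t * t * (3.0 - 2.0 * t)

def camera_at(start, goal, elapsed, duration):
    """Interpolate the camera position from start to goal over `duration`
    seconds; call once per frame with the time elapsed since the click."""
    s = smoothstep(elapsed / duration)
    return tuple(a + (b - a) * s for a, b in zip(start, goal))
```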
Sometimes it is too easy to select background objects, then move them inadvertently. We should have a tag on all objects, indicating whether they are movable or not.
Perhaps rather than a Boolean, it should be an integer that requires a certain level of permission to move:
if (obj->moveability < mParent->motionPermissionLevel)
SelectForMotion(obj);
or an OR of permission flags
if (obj->moveability & mParent->permissionFlags)
SelectForMotion(obj);
8/24 meeting --
in Curator/Critic modes, hovering over a painting or sculpture should visually indicate that it can be selected (clicked)
Suggestions: coverflow-like enlargement (temporary?)
color shift
border
flash (like artist selection mode)
changes directional to point
also still saving 'ambient' rather than 3 components
major piece of functionality is missing
we will use the issue tracker to track features and bugs for the Augmented Museum project
if you're not sure whether something is a bug in the core Sirikata code or specific to the Augmented Museum, put it here first.
When the window size is changed, the 3D viewport is not appropriately resized, resulting in images that are stretched or squished.
reported by Chris P 8/24
left/right (using arrow keys) is slower by factor of 2 than forward/back (arrow up/down)
walls can be selected. This is bad!!
...
scale is wrong -- need to adjust either artwork or avatar/physics (Gravity)
mode 'w' axis-aligned (with shift/ctrl) should have slow/med/fast action similar to avatar control
discussed 8/24 --
There is a way to export scene data from Max (ogremax?) that creates an .xml file with every scene element's position & orientation
Script (Python?) would parse this xml file, and create an equivalent scene.csv file for Sirikata import
assumption is that every scene element corresponds to a .mesh that will be exported/imported
assign: dbm?
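A sketch of that converter (the element and attribute names here -- 'node', 'position', 'rotation', 'entity meshFile' -- are guesses at the OgreMax export schema, and the scene.csv column order is a placeholder for whatever the Sirikata importer expects):

```python
import csv
import xml.etree.ElementTree as ET

def xml_to_scene_csv(xml_path, csv_path):
    """Convert an OgreMax-style scene export into a scene.csv for import.

    Emits one row per scene node: mesh file, position x/y/z, and
    orientation quaternion w/x/y/z, matching the assumption that every
    scene element corresponds to an exported .mesh.
    """
    root = ET.parse(xml_path).getroot()
    with open(csv_path, "w", newline="") as out:
        writer = csv.writer(out)
        for node in root.iter("node"):
            pos = node.find("position").attrib
            rot = node.find("rotation").attrib
            mesh = node.find("entity").attrib["meshFile"]
            writer.writerow([mesh,
                             pos["x"], pos["y"], pos["z"],
                             rot["qw"], rot["qx"], rot["qy"], rot["qz"]])
```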
wait on user dialog to continue?
discussed 8/24 -- Patrick Horn volunteered -- probably Python
script will take 60+ paintings in (tiff?) format, create properly proportioned mesh, power-of-2 .tga, set UV's properly, convert to dds, create a folder ready for upload
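A sketch of two pieces of that pipeline, the power-of-2 sizing and the matching UVs (function names and the 2048 cap are illustrative; the mesh generation and dds conversion would wrap external tools):

```python
def pot_texture_size(width_px, height_px, max_side=2048):
    """Round each painting dimension up to the next power of two,
    clamped to max_side, for the padded .tga the pipeline generates."""
    def next_pot(n):
        p = 1
        while p < n:
            p *= 2
        return min(p, max_side)
    return next_pot(width_px), next_pot(height_px)

def quad_uvs(width_px, height_px, tex_w, tex_h):
    """UVs so the painting quad samples only the painted region of the
    padded power-of-2 texture (image assumed packed at the origin)."""
    u = width_px / tex_w
    v = height_px / tex_h
    return [(0.0, 0.0), (u, 0.0), (u, v), (0.0, v)]
```

Setting UVs this way keeps the mesh properly proportioned even though the texture itself is padded out to power-of-2 dimensions.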
The navigation tooldisk cannot be moved. Part of the problem is that its entire surface seems to be active to trigger motion. I suggest reducing the active area in the tooldisk, and using the remainder to move the tooldisk by clicking and dragging.
during picture-hanging mode, picture should 'snap' to a good viewing height. Perhaps draw a line (like Virtual Gallerie)?
assign: Ken?
For the flythru, video should play on an off-screen Awesomium webview (texture-mapped).
Playback should be triggered by a scripting event (for instance, 2:34 into the flythru the video starts playing). Likewise, the video should stop playing upon a different event.
Sound is not necessary for the video.
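A sketch of the timeline-event side of this (class and event names are placeholders for whatever hook the webview scripting layer actually exposes):

```python
class FlythruEvents:
    """Fire scripted events at set times along the flythru.

    schedule: list of (time_seconds, event_name) pairs, in any order.
    Each event fires exactly once, the first time tick() is called with
    a flythru time at or past its trigger time.
    """
    def __init__(self, schedule):
        self.schedule = sorted(schedule)
        self.fired = 0  # index of the next event still waiting to fire

    def tick(self, t):
        """Return all events whose trigger time has passed, once each."""
        due = []
        while self.fired < len(self.schedule) and self.schedule[self.fired][0] <= t:
            due.append(self.schedule[self.fired][1])
            self.fired += 1
        return due
```

The flythru's per-frame update would call tick() with the current flythru time and dispatch each returned event name, e.g. video_start at 2:34 (154 s).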
In fly-through mode we need to let the user only look around by moving the mouse. Every other mouse click or key press should do nothing.
Create light presets that can be triggered via the javascript interface in curator mode.
These presets will consist of ambient lights as well as lights that are attached to paintings.
(master) -- ctl-s sometimes crashes, saves just the camera
need to implement a PID loop, or at least keep setting velocity regularly. It used to sort of work on Ubuntu because of key repeat.
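A minimal sketch of the PID piece (gains are illustrative, not tuned values; the avatar controller would call step() every frame with the velocity error and frame dt):

```python
class PID:
    """Minimal PID controller for the avatar velocity loop."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def step(self, error, dt):
        """Return the control output for this frame's error."""
        self.integral += error * dt
        # No derivative term on the first call, since there's no history yet.
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv
```

Even with zero integral and derivative gains this already fixes the key-repeat dependence, because the velocity gets re-asserted every frame instead of only on key events.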