urban-toolkit / utk
A Grammar-based Framework for Urban Visual Analytics
Home Page: http://urbantk.org
License: MIT License
If a join is specified in the grammar using the integration_scheme, the system checks whether that join already exists. The existence check also covers the operation (aggregation or custom function). That works well when in and out are physical layers or knots, because the operation is applied in a later step; but when in is a thematic layer, the join is stored with the operation already applied. This difference should be taken into account somehow.
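A minimal sketch of what the existence check could look like. The interface and field names below are assumptions for illustration, not UTK's actual API; the key point is that the operation only matters for equality when in is thematic, because in that case the stored join already reflects the operation.

```typescript
// Hypothetical join signature; names are illustrative, not UTK's real types.
interface JoinSpec {
  spatialRelation: string;   // e.g. "INTERSECTS", "NEAREST"
  inName: string;            // "in" layer or knot
  outName: string;           // "out" layer or knot
  operation: string;         // aggregation or custom-function id
  inIsThematic: boolean;     // thematic "in" stores the join post-operation
}

function sameJoin(a: JoinSpec, b: JoinSpec): boolean {
  const sameShape =
    a.spatialRelation === b.spatialRelation &&
    a.inName === b.inName &&
    a.outName === b.outName;
  if (!sameShape) return false;
  // For physical layers/knots the operation runs in a later step, so two
  // requests with different operations can reuse the same stored join.
  // When "in" is thematic, the stored data already reflects the operation,
  // so the operation must match as well.
  if (a.inIsThematic || b.inIsThematic) {
    return a.operation === b.operation;
  }
  return true;
}
```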
We should be able to specify interactions through the grammar, for example to select specific census tracts, buildings, etc.
As a legacy of the old UTK, some layers and parts of the code manipulate data using only x and y. The whole code base should use x, y, z and assign z = 0 to 2D data.
grammar.md should contain descriptions of all that is possible to do in the grammar.
I will probably have to break this issue into smaller ones, but the idea is that you can gain performance by rendering only the part of the scene the user can see. The challenge is detecting what the user can see and filtering out the rest efficiently, considering the different types of physical layers.
The files should follow the same naming pattern. Some use camel case, others hyphens, which is not ideal.
Be careful with renaming files because all the references to that file should also be updated.
Point cloud support :)
When loading data from OSM each type of road should have a different thickness. Streets, for instance, are thinner than avenues.
If I'm joining two physical layers, it might be the case that some objects cannot be joined; in that case, there should be a default attribution of thematic values. For an intersect spatial_integration, for instance, some objects may not intersect with any other object. Currently, for most spatial_integration operations the default value is 0, but it should be definable by the user.
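One possible grammar shape for this, as a sketch only: a default_value field on the integration scheme entry (the field name and the layer names here are hypothetical, not part of the current grammar).

```json
{
  "integration_scheme": [{
    "spatial_relation": "INTERSECTS",
    "out": { "name": "buildings", "level": "OBJECTS" },
    "in": { "name": "parks", "level": "COORDINATES" },
    "operation": "COUNT",
    "default_value": -1
  }]
}
```

Objects of "buildings" that intersect nothing would then receive -1 instead of a hard-coded 0.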
We need a generic solution to add outlines around any physical layers.
We could render some trees to make things look more beautiful.
We have three geometry levels: COORDINATES, COORDINATES3D, and OBJECTS. I think a fourth level is worth adding. We can decide on the name, but it would be used to reference subdivisions of a physical layer. One example is how buildings are subdivided into cells. Layers without any special type of subdivision could use the triangulation as the default subdivision.
When the grammar is applied, the whole interface reloads. That should not be the case: only what has changed should reload.
This is how the camera is currently specified in the grammar:
"camera": {
"position": [
-9753739,
5117375.5,
2216.08675
],
"direction": {
"right": [
14146.2158203125,
100883.421875,
2216086.75
],
"lookAt": [
14146.2158203125,
100883.421875,
2213086.75
],
"up": [
0,
1,
0
]
}
}
Even though those are the parameters used internally, they should be exposed to the user in a friendlier way. One option, for example, is to use lat/long (EPSG:4326) instead of EPSG:3395 for the camera position. Internally, the system would still manipulate EPSG:3395.
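A sketch of the conversion the friendlier camera spec would need: lat/long (EPSG:4326) to EPSG:3395 (World Mercator on the WGS84 ellipsoid). The function name and where it would live in the code base are assumptions; the projection formulas are the standard ellipsoidal Mercator ones.

```typescript
const A = 6378137;             // WGS84 semi-major axis (meters)
const E = 0.0818191908426215;  // WGS84 first eccentricity

// Convert a lat/long pair (degrees, EPSG:4326) to EPSG:3395 meters.
function latLngToEpsg3395(lat: number, lng: number): [number, number] {
  const phi = (lat * Math.PI) / 180;
  const x = (A * lng * Math.PI) / 180;
  const con = E * Math.sin(phi);
  const y =
    A *
    Math.log(
      Math.tan(Math.PI / 4 + phi / 2) * Math.pow((1 - con) / (1 + con), E / 2)
    );
  return [x, y];
}
```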
Currently, all data is served under public/data/repository name. The user cannot choose, via the grammar, Jupyter, or the graphical interface, which folder to use. To change the folder, the user has to edit environmentDataFolder in src/params.js (front end) and in src/utk-map/ts/pythonServerConfig.json (back end). There should be a better way to do this.
Currently, you can only interact with physical layers that have thematic data linked. Does that make sense?
Currently, the .json of the physical layer has a "discardFuncInterval" indicating which pixels should be discarded based on their thematic-data value. That should be specified in the grammar instead. This feature is used, for instance, to eliminate pixels with value 0 in a grid over the surface.
We could also have a default thematic value that could indicate discard.
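A sketch of how the grammar could carry this instead of the layer .json; the "discard" field and its shape are hypothetical, standing in for today's "discardFuncInterval".

```json
{
  "knots": [{
    "id": "surfaceGrid",
    "discard": { "interval": [0, 0] }
  }]
}
```

Here any pixel whose thematic value falls in [0, 0] would be discarded; a separate sentinel value could mark "discard" as a default thematic value.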
In the grammar view component, there is a disabled log-message functionality. It is important to keep the user updated about what is happening behind the scenes.
This issue might involve restructuring the way the dashboard is loaded, because everything is currently loaded at the same time. If there is a widget, like the map, that takes more time to load, it delays the loading of all the other components and widgets, which should not happen.
Currently, the shaders are assigned in the .json that defines the layer. Physical layers should not carry any information about shaders; shaders should be assigned dynamically according to the functionalities specified in the grammar. For example, if the grammar specifies a picking interaction, the PICKING shader should be assigned to the layer.
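A small sketch of the dynamic assignment, under assumed names: the grammar fields ("interactions", "colorMap") and the shader identifiers other than PICKING are illustrative, not UTK's actual ones.

```typescript
// Hypothetical slice of a knot's grammar spec.
interface KnotGrammar {
  interactions?: string[];  // e.g. ["picking"]
  colorMap?: string;        // e.g. "viridis"
}

// Derive the shader list from the grammar instead of the layer .json.
function shadersFor(spec: KnotGrammar): string[] {
  const shaders = ["SMOOTH_COLOR"];           // assumed base shader
  if (spec.colorMap) shaders.push("SMOOTH_COLOR_MAP");
  if (spec.interactions?.includes("picking")) shaders.push("PICKING");
  return shaders;
}
```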
Support for loading Google Earth building tiles
An example of that is the getAbstractDataFromLink function in layer-manager.ts. It should be inside knotManager.ts (probably).
The widget we are using for the Grammar editor has support for JSON schema. We can try to leverage that.
Loading tiles based on camera position
The abstract flag is used to indicate whether in is a thematic layer. That should already be encoded in the headers of the .utk file.
This issue depends on issue #27
The user can use knots inside other knots (in the in and out fields) as if they were layers. In that case, the final layer and the thematic values of the knot are used.
{
    "spatial_relation": "INTERSECTS",
    "out": {
        "name": "layer1",
        "level": "COORDINATES"
    },
    "in": "knot2",
    "operation": "AVG",
    "abstract": true
}
Currently, all views and widgets are initialized and positioned according to the initial size of the screen. Every time the screen size changes, their sizes and positions should be adjusted.
Currently, physical layers have metadata stored as .json and the data itself in .data binary files. The thematic layers are completely stored in .json. The idea is to replace those files with a unified binary .utk format with headers describing the data and the data itself.
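To make the header idea concrete, here is a sketch of parsing a unified binary .utk file, under assumed (not actual) layout choices: a 4-byte magic string, a flags byte whose bit 0 marks thematic/abstract data, and a 32-bit little-endian record count, followed by the data itself.

```typescript
interface UtkHeader { magic: string; thematic: boolean; count: number; }

// Parse the hypothetical 9-byte header of a .utk buffer.
function readUtkHeader(buf: ArrayBuffer): UtkHeader {
  const view = new DataView(buf);
  const magic = String.fromCharCode(
    view.getUint8(0), view.getUint8(1), view.getUint8(2), view.getUint8(3)
  );
  const flags = view.getUint8(4);
  const count = view.getUint32(5, true); // little-endian record count
  return { magic, thematic: (flags & 1) === 1, count };
}
```

With such a header, the abstract flag in the grammar would become redundant, since the file itself declares whether its payload is thematic.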
While I was changing things around and reshaping the grammar to work as a dashboard the search widget broke.
Component file: src/utk-map/ts/src/reactComponents/SearchWidget.tsx
It is just a matter of seeing where the widget is used and checking what is out of place.
If you are trying to run UTK following the documentation I created, please post any problems or questions on this issue so we can help everyone at the same time :)
For some reason, when two grouped animated layers are used they do not behave as expected: you first have to interact with the first layer before you can interact with the second. It is probably related to parameter initialization. The solution involves changing ToggleKnotsWidget.tsx.
If two grouped layers have the same number of sublayers, it should be possible to link their animations.
Currently, we can define which layers will be shown at which zoom level. However, we could have more sophisticated dynamics with multiple resolutions. There are a number of ways to navigate through resolutions (see, for instance, "An assisting, constrained 3D navigation technique for multiscale virtual 3D city models").
We can discuss this further to have a more concrete idea.
In some parts of the code, I'm assuming that there is only one mapview instantiated (components[0]).
Also, the link between grammar and the map camera only works for one mapview.
When picking is activated for the triangles layer a right click with the mouse should deselect all elements, but that is not happening.
Currently, we have two types of shaders to handle picking: shader-picking.ts and shader-picking-triangles.ts. The first one handles picking for buildings and the second one for triangle layers. We can unify both and make the result generic enough to work with any layer that follows some constraints.
integration_scheme := (spatial_relation?, (layer_in | knot_in), (layer_out | knot_out), operation?)
operation := (aggregation | *Custom function*)
aggregation := MAX | MIN | AVG | SUM | COUNT | NONE | DISCARD
The aggregation is implemented but not the custom function.
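The aggregation branch of the operation rule can be sketched as below; the custom-function branch shows one way user functions could plug in. Function and type names are illustrative, not the code base's actual ones.

```typescript
// A custom function reduces the joined values itself.
type Custom = (values: number[]) => number;

// Apply the grammar's operation to the values joined to one object.
function aggregate(op: string | Custom, values: number[]): number {
  if (typeof op === "function") return op(values); // custom function
  switch (op) {
    case "MAX": return Math.max(...values);
    case "MIN": return Math.min(...values);
    case "SUM": return values.reduce((a, b) => a + b, 0);
    case "AVG": return values.reduce((a, b) => a + b, 0) / values.length;
    case "COUNT": return values.length;
    default: throw new Error(`unsupported aggregation: ${op}`);
  }
}
```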
The user should be able to specify the domain of the data in the grammar so the color scale is generated accordingly. Also, the user should be able to specify which scale to use (from d3).
There should also be a way to link the domain/normalization/color scale of two knots.
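A minimal sketch of both ideas, with a plain linear normalization standing in for a d3 scale (a real implementation would hand the user's domain to d3): two knots are "linked" simply by sharing one scale, so equal thematic values map to equal colors across maps.

```typescript
// Build a clamped normalizer for a user-specified domain.
function makeScale(domain: [number, number]): (v: number) => number {
  const [d0, d1] = domain;
  return (v) => Math.min(1, Math.max(0, (v - d0) / (d1 - d0)));
}

// Linking two knots: both normalize with the same shared scale.
const shared = makeScale([0, 100]);
const knotA = (v: number) => shared(v);
const knotB = (v: number) => shared(v);
```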
Some packages should not be bundled together in the final version. Those dependencies should be included in the "external" field in the rollup.config.mjs file.
Still have to decide which dependencies should be external
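For reference, marking dependencies as external in Rollup looks like the fragment below; the dependency names listed are placeholders, since the actual list is still to be decided.

```javascript
// rollup.config.mjs — illustrative only.
export default {
  // ...existing input/output options...
  external: ["react", "react-dom", "d3"],
};
```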
Currently, it is only possible to join COORDINATES3D with COORDINATES3D, and only with the NEAREST spatial_relation. We want to be able to do more 3D operations, for instance COORDINATES3D with OBJECTS using a CONTAINS spatial_relation.
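The currently supported case (NEAREST between two COORDINATES3D layers) amounts to a nearest-neighbor match; a brute-force sketch is below. Names are assumptions, and a production version would use a spatial index rather than a linear scan.

```typescript
type P3 = [number, number, number];

// Index of the candidate nearest to p (squared Euclidean distance).
function nearestIndex(p: P3, candidates: P3[]): number {
  let best = -1;
  let bestD = Infinity;
  candidates.forEach((q, i) => {
    const d = (p[0]-q[0])**2 + (p[1]-q[1])**2 + (p[2]-q[2])**2;
    if (d < bestD) { bestD = d; best = i; }
  });
  return best;
}
```

Supporting CONTAINS against OBJECTS would additionally need point-in-mesh (or point-in-extruded-footprint) tests, which is where the real complexity of this issue lies.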
Document and generate data for all examples that are described in the paper and the web page.
Old web page: http://utk.evl.uic.edu/
grammar-manager.ts (GrammarManager) was initially meant to be the class to manage the grammar. However, that is now done by grammarInterpreter.ts and grammar-manager.ts only takes care of plots.
Be careful: changing the name of files and classes implies changing all the places where they are used.
Sometimes it is desirable to use thematic values just to color the physical layers, even though they don't carry any semantics. The user could then signal somehow that those thematic values are actually colors, using RGB, hex, or other relevant color systems.
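The hex case would reduce to a parse like the sketch below; how the user signals the mode in the grammar (say, a valuesAreColors flag) is an assumption.

```typescript
// Parse "#rrggbb" (or "rrggbb") into normalized [r, g, b] in [0, 1],
// ready to hand to a shader instead of a color-mapped thematic value.
function hexToRgb(hex: string): [number, number, number] {
  const n = parseInt(hex.replace(/^#/, ""), 16);
  return [((n >> 16) & 255) / 255, ((n >> 8) & 255) / 255, (n & 255) / 255];
}
```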
Adding interactions to select buildings and load new ones from a (obj?) file.
There are some "TODO" comments sprinkled throughout the code. Some of them express concerns and others small changes that need to be made. If you guys wanna work on them just remove the comments as you solve the problems and paste a note here. We can also discuss stuff because I'm sure some TODO's does not require any fix at all.
We have three geometry levels: COORDINATES, COORDINATES3D, and OBJECTS. In theory, we could merge COORDINATES and COORDINATES3D, because the first is just a special case of the second where z = 0. The problem is that we can easily do spatial joins in 2D, but 3D is more complicated. So we have to support all the spatial joins in 3D before doing the merge, or be able to tell that a layer is 2D without relying on the geometry-level label.
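Detecting that a layer is 2D without the label could be as simple as checking that every z coordinate is (approximately) zero; a sketch, assuming coordinates are stored as a flat [x, y, z, x, y, z, ...] array:

```typescript
// True if no coordinate has a z component beyond the tolerance.
function is2D(coords: Float64Array, eps = 1e-9): boolean {
  for (let i = 2; i < coords.length; i += 3) {
    if (Math.abs(coords[i]) > eps) return false;
  }
  return true;
}
```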
An interesting feature would be having the possibility to inspect the grammar that defines each one of the components of the grid. A button on the map could show the part of the grammar where that map is defined so the user could make specific and localized changes.
This feature should not replace the unified grammar view.
If interactivity is enabled for multiple plots at the same time some unexpected behaviors start to happen.