
Comments (17)

electrodude commented on July 18, 2024

Unlike Mesecons, Digilines doesn't visually update every wire node every time a message is sent, so only the effectors have to be notified of messages, and there's no need to iterate over and modify every block for every message. So, instead, what if we do it how Technic's power distribution works, not bothering with the wires themselves every time but instead caching a list of effector nodes that actually need to be notified of messages? This list can be generated using VMs, but after that, the intervening wire nodes don't need to be loaded, looked at, or modified at all to actually send a message.

This adds the problem of the switching station, which can be overcome by making the switching station not a block but just a Lua table referenced by numerical ID by every wire in the network.
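
A minimal sketch of what that cache could look like (the table layout and the send_on_network helper are made-up names, not existing digilines code; the digiline.effector.action callback is the one effector nodes already register):

    local networks = {}     -- network ID -> { effectors = {pos, pos, ...} }
    local wire_to_net = {}  -- minetest.hash_node_position(pos) -> network ID

    local function send_on_network(net_id, channel, msg)
        local net = networks[net_id]
        if not net then
            return false  -- cache miss: the caller rebuilds the network first
        end
        for _, pos in ipairs(net.effectors) do
            local node = minetest.get_node(pos)
            local def = minetest.registered_nodes[node.name]
            local digi = def and def.digiline
            if digi and digi.effector and digi.effector.action then
                digi.effector.action(pos, node, channel, msg)
            end
        end
        return true
    end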

EDIT: got receptors and effectors backwards, fixed it


sofar commented on July 18, 2024

I think it would be very interesting to see this work. Perhaps @electrodude wants to comment? I know also that the @Jeija mesecons devs are implementing similar mechanisms, they may have ideas to share.


Jeija commented on July 18, 2024

Actually, @Hawk777 was the one implementing these mechanisms in mesecons 😉 , so he is the one most proficient in using VoxelManips for transmissions. The digilines algorithms were basically forked away from mesecons in January 2013 and have never been updated since (mostly because there was no immediate need for it and my laziness). So I would be very much in favor of updating digilines just the way we did with mesecons. Since digilines only handles transmissions and doesn't care about different input/output rules and on/off states, it should be much easier to rewrite these digilines functions compared to mesecons (and just copy the VoxelManip caching stuff from mesecons).

and rewriting digiline:transmit to do an iterative graph search?

Yes, that is long overdue.

However, this would mean that the number of forceloaded blocks needed to operate a circuit with two nodes a long way apart could be reduced from hundreds (covering the whole wire) to just two (covering the ends).

I can understand that we want digilines to behave the same no matter whether the mapblock currently happens to be active or inactive, but I'm not really in favor of forceloading done by the digilines core. I think there are many legitimate reasons for digiline "effectors" to not do anything in case the mapblock is inactive. E.g. there is no need for the LCD to update unless a player is actually close by. Since things like the digilines LCD are often used all over the map (e.g. I've seen them used as traffic displays on VanessaE's creative server), forceloading all of these blocks would increase memory usage. So I'd say that letting the effector take care of forceloading / using VoxelManips / ... in case it absolutely needs it might be the better solution compared to implementing that in the core.


Hawk777 commented on July 18, 2024

I think there are many legitimate reasons for digiline "effectors" to not do anything in case the mapblock is inactive.

I totally didn’t mean for Digilines to start forceloading anything. I meant that if you want a large-scale circuit to work, you might use something like a Technic anchor. By making this change, you would only have to place two instead of hundreds for a long string of cable.

Unlike Mesecons, Digilines doesn't visually update every wire node every time a message is sent, so only the receptors have to be notified of messages, and there's no need to iterate over and modify every block for every message. So, instead, what if we do it how Technic's power distribution works, not bothering with the wires themselves every time but instead caching a list of receptor nodes that actually need to be notified of messages? This list can be generated using VMs, but after that, the intervening wire nodes don't need to be loaded, looked at, or modified at all to actually send a message.

That sounds like exactly the idea I suggested in this comment. That comment and the next explain some of the difficulties. We decided not to do that for Mesecons. Do you think the problems are reasonable to overcome for Digilines?


electrodude commented on July 18, 2024

Technic's cables don't use VMs - the mod uses minetest.get_{node,meta} to build its cache and then just uses the cache after that. I'm not sure how it detects network changes, but I think it just rebuilds the whole affected network whenever anything changes.

If we do this for Digilines, efficiently joining networks shouldn't be hard - just move all of the nodes in one of the networks to the other. However, the easiest way to handle cutting might just be to invalidate the whole network that had a wire node removed and recalculate the whole list when the next message is sent. An exception to this is that removing an effector or receptor that isn't a wire is easy. Unless you have technic frames constantly moving digiline wires around, I'd expect this to be better than what we have now (recalculating everything for every message), and also faster than VMs, since there's nothing to check if the network hasn't changed since the last message. I don't see why it should be too much of a problem for digilines, but it will certainly involve major rewriting, and I unfortunately won't have the time or motivation to try implementing it any time soon.

The data structure could probably just be a global table mapping network IDs to lists of effectors in that network. Network IDs don't really need to be stored to disk as metadata, so they can just be kept in a Lua table mapping positions to network IDs using something like minetest.hash_node_position(pos), but it might still be a good idea to store them as metadata so they can be unloaded properly. Receptors don't need to know what network they're in - they just look at the 6 neighboring blocks and see which are wires or effectors - and effectors don't know what network they're in either, but networks still need to know what effectors they have (this is so receptors and effectors can be part of more than one network). The update rules would be:

  • When an effector is added or removed, update the effector lists of the networks of any wires in the 6 neighboring blocks.
  • When a receptor is added or removed, nothing needs to be done.
  • When an isolated wire is added, assign it to a new network.
  • When a wire is added that connects to something, look at the 6 neighboring blocks, find which of the networks it connects to has the lowest ID, and take that ID; then move every wire in neighboring networks with higher IDs into the new wire's network and delete the now-empty networks.
  • In any case where a wire is added, after assigning it to a network, make sure any neighboring effectors are in that network's effector list.
  • When a wire is removed, remove every wire node in its network from the network and delete the network. The next time a message passes through a wire that isn't part of a network, that wire's network will be regenerated.
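
As a rough illustration of the regeneration step, a scan could walk the six neighbouring positions with minetest.get_node and record wires and effectors as it goes. The helper names below are made up, the digiline.wire / digiline.effector fields are the ones digilines node definitions already use (if I'm reading the mod right), and a real version would respect each node's registered connection rules instead of a fixed six-neighbour list:

    local neighbours = {
        {x =  1, y = 0, z = 0}, {x = -1, y =  0, z = 0},
        {x =  0, y = 1, z = 0}, {x =  0, y = -1, z = 0},
        {x =  0, y = 0, z = 1}, {x =  0, y =  0, z = -1},
    }

    local function build_network(start_pos, net_id, wire_to_net, networks)
        local net = {effectors = {}}
        networks[net_id] = net
        local queue, seen = {start_pos}, {}
        while #queue > 0 do
            local pos = table.remove(queue)
            local h = minetest.hash_node_position(pos)
            if not seen[h] then
                seen[h] = true
                local def = minetest.registered_nodes[minetest.get_node(pos).name]
                local digi = def and def.digiline
                if digi and digi.wire then          -- conductor: join and expand
                    wire_to_net[h] = net_id
                    for _, d in ipairs(neighbours) do
                        queue[#queue + 1] = vector.add(pos, d)
                    end
                elseif digi and digi.effector then  -- effector: remember it
                    net.effectors[#net.effectors + 1] = pos
                end
            end
        end
        return net
    end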

EDIT: I got receptors and effectors backwards. I edited this comment to fix that.


Hawk777 commented on July 18, 2024

The reason why we decided not to use a schematic representation of networks in Mesecons is because of the difficulty ensuring the schematic representation was always consistent with the real world, even in the face of such things as server crashes and editing the world without Mesecons loaded (in a Minetest instance without Mesecons or in a non-Minetest tool).

I propose a solution that eliminates all of these problems: make the schematic cache volatile. It’s probably still worthwhile using VoxelManipulators to populate the cache rather than get_node (if for no other reason than that they allow the mapblocks not to be forceloaded), but this can be done when the first message is sent into an uncached network after server startup and can persist only until invalidated or until server shutdown. IMO keeping a volatile schematic cache consistent should be doable (though such things as pistons and movestones will need consideration).
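
As a sketch of the VoxelManip side (names are made up, and cache invalidation on map edits or shutdown is omitted), a get_node replacement could read whole mapblocks into the volatile cache so that walking a long wire neither force-loads blocks nor queries the map node by node:

    local vm_cache = {}  -- hashed blockpos -> {data = ..., area = VoxelArea}

    local function vm_get_node(pos)
        local bp = {
            x = math.floor(pos.x / 16),
            y = math.floor(pos.y / 16),
            z = math.floor(pos.z / 16),
        }
        local h = minetest.hash_node_position(bp)
        local entry = vm_cache[h]
        if not entry then
            local vm = minetest.get_voxel_manip()
            local emin, emax = vm:read_from_map(
                {x = bp.x * 16, y = bp.y * 16, z = bp.z * 16},
                {x = bp.x * 16 + 15, y = bp.y * 16 + 15, z = bp.z * 16 + 15})
            entry = {
                data = vm:get_data(),
                area = VoxelArea:new{MinEdge = emin, MaxEdge = emax},
            }
            vm_cache[h] = entry
        end
        local cid = entry.data[entry.area:indexp(pos)]
        -- only the node name is needed here; param1/param2 are omitted
        return {name = minetest.get_name_from_content_id(cid)}
    end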

The theoretically ideal choice of data structure would apparently be spanning forests with levels or cutsets, which would take quite a lot of work to implement. For million-node graphs, they are probably appropriate; I suspect our graphs are unlikely to rise above a few thousand nodes, though. So I agree with just invalidating the entire network cache on deletion of a node and letting the next message sent rebuild it.

Given this much simpler problem, I think a good choice of data structure would be a variant of the disjoint-set data structure.

My first idea was to construct a table (the cache table) mapping from hashed node position to a data table. The data table would either contain all the effectors for this network (all nodes in the network would point at the same table), or would be marked invalid, or would be marked as an alias for another table (the alias form would be used when joining two networks; network A’s effectors would be copied into network B’s table, then network A’s table would be cleared and marked as an alias of B’s table). Deleting a node would mark the network’s table as invalid, and the next send-message attempt would create a new table (not just refill the existing one) to be pointed at by the found nodes, thus leaving all disconnected segments pointing at the old invalid table and only the one still-connected segment pointing at the new valid table.

The problem with this idea is that you could rapidly create a lot of tables. In the worst case, almost every node could end up with its own table if you do something particularly pathological like lay down every second piece of cable, send a message into it, and then join them all together. There would be a lot of alias tables. Path compression optimization would eventually discard some of them, but those wires not ever used to send messages would keep a lot of crud around in memory for a long time.

My second idea was to attach the data table to only one node, and then have other nodes in the set just map to the hashed position of a parent node, eventually leading to the canonical representative who holds the data table. This means there is only ever one table per network (rather than possibly as many as one per node). However, it makes invalidation impractical: you can mark the table as invalid, but you can’t throw it away and start building a new one using only a subset of the nodes that were in the old network because some old nodes that are no longer in the network might still be pointing at the hashed positions of nodes that are still in the network.

My third idea was to use two tables. The first table maps from hashed block positions to subnet IDs, where a subnet ID is just allocated from an incrementing counter. The second table maps from subnet IDs to values, where the value is either another subnet ID or else the network data (list of effectors). Multiple nodes can map to the same subnet ID, but nodes in the same network could also have different subnet IDs as long as the transitive closure of following the subnet IDs in the second table ends up in the same place. When scanning an unknown network, simply allocate one subnet ID, map it to a list of effectors, and map every node in the network to that subnet ID. When joining two networks, copy the effectors from network A’s effector list to network B’s effector list, then remap network A’s root subnet ID from pointing to an effector list to pointing at network B’s root subnet ID. When deleting a node, invalidate the effector list.

The key advantage of idea 3 over idea 1 is that idea 1 uses a table per subnet (of which there could be many), while idea 3 uses only a table per network and a single integer per subnet (the subnet ID of the parent subnet).

The key advantage of idea 3 over idea 2 is that, in idea 3, the parent-child relationships are maintained in terms of subnet IDs (which are never reused) rather than hashed positions (which are reused all the time). The reason why idea 2 fails is because node X might still be pointing at node Y even after node Y has been invalidated (this is OK because Y is marked invalid), and might still point there after a new network is created rooted at Y (this is not OK because X might not be part of that network, yet still points at Y). The reason why idea 3 doesn’t fail in the same manner is because a new subnet uses a newly allocated subnet ID and therefore cannot possibly be pointed at by any existing subnet.
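
To make the shape of idea 3 concrete, here is a rough sketch; all of the names are hypothetical, and the rescan on an invalid network is only hinted at:

    local pos_to_subnet = {}  -- minetest.hash_node_position(pos) -> subnet ID
    local subnets = {}        -- subnet ID -> parent subnet ID (a number),
                              --              {effectors = {...}} (a root), or
                              --              "invalid"
    local next_subnet_id = 0

    local function new_subnet()
        next_subnet_id = next_subnet_id + 1
        subnets[next_subnet_id] = {effectors = {}}
        return next_subnet_id
    end

    -- Follow parent links to the root, compressing the path as we go.
    local function find_root(id)
        local entry = subnets[id]
        if type(entry) ~= "number" then
            return id, entry  -- entry is the data table or "invalid"
        end
        local root, data = find_root(entry)
        subnets[id] = root
        return root, data
    end

    -- Deleting a wire just invalidates the whole network; the next message
    -- sent through a still-connected wire rescans into a fresh subnet ID.
    local function invalidate(pos)
        local id = pos_to_subnet[minetest.hash_node_position(pos)]
        if id then
            local root = find_root(id)
            subnets[root] = "invalid"
        end
    end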

Thoughts? Does it make sense to do just the VoxelManipulator stuff and then do the caching later as a separate ticket, or do everything at once?


Jeija commented on July 18, 2024

Your third idea sounds like the cleanest solution to me, and it also reminds me of how CAD software for drawing electronic schematics handles this problem.

If I understand you correctly, the cache table will have all "conductors" (their hashed positions) anywhere in the map as indices and subnet IDs as values. Another subnet ID table will have subnet IDs as indices and data tables that contain all the effectors of the subnet as values. Please correct me if that is wrong.

The second table maps from subnet IDs to values, where the value is either another subnet ID or else the network data (list of effectors)

Why have it so that the second table can also point to another subnet ID? This seems like overcomplicating the issue to me. When joining several networks with different subnet IDs, why not just assign a new, common subnet ID to all conductors in the newly connected network?

I am assuming that receptors are not assigned to any of these tables, since they can be part of multiple subnets. When sending a signal, they will just look for conductors they link to inside the cache table and transmit their message to the corresponding effectors of the subnet.

The problem I see with just caching node positions is flexibility. Currently, all digiline effectors don't care where their signal is coming from - they only look at the channel and the data, not the input though (digilines also doesn't support that currently afaik). If, however, we want to change that later on, e.g. to allow the luacontroller to see where a signal comes from or to "route" between two networks, all the abstraction that is introduced by the caching will make this harder.
Things like a digiline crossing (just like in mesecons) will also be significantly harder to implement after introducing a node-position-based cache. A crossing is part of two networks at the same time, but won't transfer signals between those two.
So I'd say we'd better make sure digilines stays flexible enough to implement these features at a later point, or implement them now already. Maybe we don't want those exact features after all - they are just two examples off the top of my head. There might be other issues arising that I haven't even thought of.

And at this point I think we should also take a step back:

  • Caching will make the code significantly more complicated, so we need to make absolutely sure it stays maintainable.
  • Is this really worth it? Most digiline installations are so small-scale that it won't make a difference.
  • For many people, wires are an annoyance anyways. Not only are they usually impractical to place if all you're trying to do is automate some structure, they also add a significant amount of nodes that slow down connecting to the server. If digilines is completely abstracted away from those wires anyways, we could just as well make everything wireless (or add wireless transmitter nodes) and save frustration of server owners, make the code more maintainable and even increase performance.


Hawk777 commented on July 18, 2024

If I understand you correctly, the cache table will have all "conductors" (their hashed positions) anywhere in the map as indices and subnet IDs as values. Another subnet ID table will have subnet IDs as indices and data tables that contain all the effectors of the subnet as values. Please correct me if that is wrong.

Correct, except that the subnet ID table maps from subnet IDs to either data tables or other subnet IDs.

Why have it so that the second table can also point to another subnet ID? This seems like overcomplicating the issue to me. When joining several networks with different subnet IDs, why not just assign a new, common subnet ID to all conductors in the newly connected network?

Because then you would have to scan the whole network to find all the nodes in it and set their positions in the cache table to map to the new subnet ID. By allowing subnet ID indirection, the only thing you have to do when placing a wire which connects two wires is:

  1. Find the root subnets on both sides.
  2. Copy the effectors from the left side to the right side.
  3. Change the root subnet on the left side to point at the subnet ID of the right side’s root subnet instead of a data table.

It also doesn’t work to just change the left side’s subnet ID to point at the same data table, because then if you merge that network with a third network, you would have to scan and find all the subnet IDs pointing at the table to update them to point at another table. By ensuring that only one subnet ID points at each table, and that subnet ID is the root for a network (and findable quickly from any other subnet ID in the network), the merge is very fast.
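
Continuing the hypothetical sketch from above: once step 1 has found the two root subnets, the whole merge is a copy of one effector list plus a single parent-pointer update, regardless of how many wires the two networks contain.

    local function merge_roots(subnets, left_root, right_root)
        if left_root == right_root then
            return right_root                          -- already one network
        end
        local left_data = subnets[left_root]
        local right_data = subnets[right_root]
        for _, eff in ipairs(left_data.effectors) do   -- step 2: copy effectors
            right_data.effectors[#right_data.effectors + 1] = eff
        end
        subnets[left_root] = right_root                -- step 3: alias the root
        return right_root
    end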

I am assuming that receptors are not assigned to any of these tables, since they can be part of multiple subnets. When sending a signal, they will just look for conductors they link to inside the cache table and transmit their message to the corresponding effectors of the subnet.

Correct.

The problem I see with just caching node positions is flexibility. Currently, all digiline effectors don't care where their signal is coming from - they only look at the channel and the data, not the input though (digilines also doesn't support that currently afaik). If, however, we want to change that later on, e.g. to allow the luacontroller to see where a signal comes from or to "route" between two networks, all the abstraction that is introduced by the caching will make this harder.

This is not a problem. The effectors table you find when reaching the root of a network can have any information you want in it, such as (effector, side) pairs.

Things like a digiline crossing (just like in mesecons) will also be significantly harder to implement after introducing a node-position-based cache. A crossing is part of two networks at the same time, but won't transfer signals between those two.

This requires making up some kind of unique identifier for a single piece of wire, which I suppose is a node position plus some sort of index identifying which of the wires within the node is being addressed. minetest.hash_node_position returns a 48-bit integer. Could we allocate a few more bits for the subnode index?
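
For what it's worth, Lua numbers are doubles with 53 bits of integer precision, so packing a small subnode index above the 48-bit position hash should stay exact. A sketch of one possible encoding (the names are made up):

    -- 3 spare bits allow up to 8 sub-wires per node (48 + 3 = 51 bits < 53).
    local function hash_subnode(pos, sub_index)  -- sub_index in 0..7
        return minetest.hash_node_position(pos) * 8 + sub_index
    end

    local function unhash_subnode(h)
        local sub_index = h % 8
        local pos = minetest.get_position_from_hash((h - sub_index) / 8)
        return pos, sub_index
    end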

  • Caching will make the code significantly more complicated, so we need to make absolutely sure it stays maintainable.
  • Is this really worth it? Most digiline installations are so small-scale that it won't make a difference.

I was quite happy to not implement this in Mesecons. I’m not a server operator, and even if I were I wouldn’t be a popular server operator. I don’t have a lot of people playing with me, I don’t have a lot of complicated circuits, and the server I do play on doesn’t seem to have any performance problems. That means I’m really, seriously not qualified to judge whether any particular performance optimization is worthwhile.

  • For many people, wires are an annoyance anyways. Not only are they usually impractical to place if all you're trying to do is automate some structure, they also add a significant amount of nodes that slow down connecting to the server. If digilines is completely abstracted away from those wires anyways, we could just as well make everything wireless (or add wireless transmitter nodes) and save frustration of server owners, make the code more maintainable and even increase performance.

Well, if you want to do that then sure. Personally, I find the lack of a vertical Digilines wire slightly odd, but otherwise consider allocating space for wires to be just part of the cost of doing business in the Luacontroller world. The thing that’s preventing me from building the system I want to build right now is the fact that I can’t run wire a kilometre away and have it work (at least not without absurd numbers of anchors), not the fact that I have to build a lot of wires. So that’s what I wanted to fix. If that gets fixed by making Digilines wireless, then that’s cool and will save me a pile of crystals and stuff. If it gets fixed by keeping wires and using VMs or schematics to make them work over long distances, that’s cool too.

If you’d rather go wireless, I’d be happy to try my hand at implementing that solution instead.


Hawk777 commented on July 18, 2024

Hello! Any thoughts on this? Should I just start coding up the simple, easy, minimally intrusive change (VoxelManipulators) and submit it as a PR, leaving further changes for later if and when someone wants them?


Jeija commented on July 18, 2024

I'd be fine with a minimal VoxelManip solution + changing the transmit algorithm to an iterative breadth-first search, without caching for now. Don't spend too much time on implementing an extremely clever solution that is super-performant, but comes at the expense of maintainability.
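
For reference, an iterative breadth-first transmit could look roughly like the sketch below. It is deliberately simplified: it uses plain get_node rather than VoxelManips, walks the six face neighbours instead of each node's registered connection rules, and the function name is made up.

    local face_neighbours = {
        {x =  1, y = 0, z = 0}, {x = -1, y =  0, z = 0},
        {x =  0, y = 1, z = 0}, {x =  0, y = -1, z = 0},
        {x =  0, y = 0, z = 1}, {x =  0, y =  0, z = -1},
    }

    local function transmit_bfs(start_pos, channel, msg)
        local queue, head = {start_pos}, 1
        local seen = {[minetest.hash_node_position(start_pos)] = true}
        while head <= #queue do
            local pos = queue[head]
            head = head + 1
            local node = minetest.get_node(pos)
            local def = minetest.registered_nodes[node.name]
            local digi = def and def.digiline
            if digi then
                -- deliver to any effector we touch
                if digi.effector and digi.effector.action then
                    digi.effector.action(pos, node, channel, msg)
                end
                -- only conductors (and the sending node) extend the search
                if digi.wire or vector.equals(pos, start_pos) then
                    for _, d in ipairs(face_neighbours) do
                        local npos = vector.add(pos, d)
                        local h = minetest.hash_node_position(npos)
                        if not seen[h] then
                            seen[h] = true
                            queue[#queue + 1] = npos
                        end
                    end
                end
            end
        end
    end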

However, I don't really want to be the one making this decision, I would like the minetest-mods community to decide on the future of this mod. That's mostly because I haven't been playing minetest a lot lately, so I can't really know what the average minetest player might want, how performance on servers really is etc.


Hawk777 commented on July 18, 2024

I’m not on any Minetest forums. I don’t join any big servers. I do hang out in #minetest on IRC, but that’s pretty quiet most of the time. So, really, I don’t know where the community is. I’m just this one guy trying to play Minetest and make things a little bit better for my own use cases. I’m probably about the worst person to try and gauge the community’s thoughts. I’ll see about implementing the minimal improvement and submitting it as a PR, then. I may have something to show sometime this weekend.


Jeija commented on July 18, 2024

I was specifically talking about the @minetest-mods team that owns this repo now. I handed digilines over to them because I know I'm not able to put enough effort into maintaining digilines in an acceptable state. What I'm saying is that I could make these decisions, but I want whatever the @minetest-mods people say to take precedence over my plans.


Hawk777 commented on July 18, 2024

Oh, I see. Well, it looks like @sofar was in favour of trying it out.


sofar commented on July 18, 2024

The way I see it, if people like @Hawk777 think it's worth putting in the time, and they're willing to consider the thoughts of folks like @Jeija, then I'm sure that whatever code they come up with will be reviewed with an open mind.

So don't worry too much about @minetest-mods - its primary goal is to ensure that the code keeps working in the future - you guys are the ones who drive new features and changes, together (and that's what is happening in this thread, so I strongly encourage it).


Hawk777 commented on July 18, 2024

Pull request #36 submitted.


Hawk777 commented on July 18, 2024

I guess this can probably either be closed now or else left open in case someone wants to take on building the more complicated but higher-performance stuff.


Emojigit commented on July 18, 2024

I think digilines could have its own node database, like advtrains, but if the mapblock is currently loaded, the mod would use the nodes in the world instead of the node DB.

It would temporarily load the end of the wire to check whether there are any receivers and, if so, run their code.

