
hecs's Introduction

hecs

Documentation Crates.io License: Apache 2.0

hecs provides a high-performance, minimalist entity-component-system (ECS) world. It is a library, not a framework. In place of an explicit "System" abstraction, a World's entities are easily queried from regular code. Organize your application however you like!

Example

let mut world = World::new();
// Nearly any type can be used as a component with zero boilerplate
let a = world.spawn((123, true, "abc"));
let b = world.spawn((42, false));
// Systems can be simple for loops
for (id, (number, &flag)) in world.query_mut::<(&mut i32, &bool)>() {
  if flag { *number *= 2; }
}
// Random access is simple and safe
assert_eq!(*world.get::<&i32>(a).unwrap(), 246);
assert_eq!(*world.get::<&i32>(b).unwrap(), 42);

Why ECS?

Entity-component-system architecture makes it easy to compose loosely-coupled state and behavior. An ECS world consists of:

  • any number of entities, which represent distinct objects
  • a collection of component data associated with each entity, where each entity has at most one component of any type, and two entities may have different components

That world is then manipulated by systems, each of which accesses all entities having a particular set of component types. Systems implement self-contained behavior like physics (e.g. by accessing "position", "velocity", and "collision" components) or rendering (e.g. by accessing "position" and "sprite" components).

New components and systems can be added to a complex application without interfering with existing logic, making the ECS paradigm well suited to applications where many layers of overlapping behavior will be defined on the same set of objects, particularly if new behaviors will be added in the future. This flexibility sets it apart from traditional approaches based on heterogeneous collections of explicitly defined object types, where implementing new combinations of behaviors (e.g. a vehicle which is also a questgiver) can require far-reaching changes.

Performance

In addition to having excellent composability, the ECS paradigm can also provide exceptional speed and cache locality. hecs internally tracks groups of entities which all have the same components. Each group has a dense, contiguous array for each type of component. When a system accesses all entities with a certain set of components, a fast linear traversal can be made through each group having a superset of those components. This is effectively a columnar database, and has the same benefits: the CPU can accurately predict memory accesses, bypassing unneeded data, maximizing cache use and minimizing latency.

Why Not ECS?

hecs strives to be lightweight and unobtrusive so it can be useful in a wide range of applications. Even so, it's not appropriate for every game. If your game will have few types of entities, consider a simpler architecture such as storing each type of entity in a separate plain Vec. Similarly, ECS may be overkill for games that don't call for batch processing of entities.

Even for games that benefit, an ECS world is not a be-all end-all data structure. Most games will store significant amounts of state in other structures. For example, many games maintain a spatial index structure (e.g. a tile map or bounding volume hierarchy) used to find entities and obstacles near a certain location for efficient collision detection without searching the entire world.

If you need to search for specific entities using criteria other than the types of their components, consider maintaining a specialized index beside your world, storing Entity handles and whatever other data is necessary. Insert into the index when spawning relevant entities, and include a component that allows them to be removed from the index efficiently when despawning.

Other Libraries

hecs owes a great deal to the free exchange of ideas in Rust's ECS library ecosystem. Particularly influential examples include:

  • bevy, which continually pushes the envelope for performance and ergonomics in the context of a batteries-included framework
  • specs, which was key in popularizing ECS in Rust
  • legion, which demonstrated archetypal memory layout and trait-less components

If hecs doesn't suit you, one of those might do the trick!

License

Licensed under either of

  • Apache License, Version 2.0 (LICENSE-APACHE or http://www.apache.org/licenses/LICENSE-2.0)
  • MIT license (LICENSE-MIT or http://opensource.org/licenses/MIT)

at your option.

Contribution

Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.

Disclaimer

This is not an official Google product (experimental or otherwise), it is just code that happens to be owned by Google.

hecs's People

Contributors

a1phyr, adamreichold, angelofsol, aweinstock314, burtonageo, caelunshun, cedric-h, dependabot-preview[bot], dependabot[bot], dylan-dpc, i509vcb, jrmiller82, mjhostet, ouz-a, patchmixolydic, peamaeq, ralith, rj00a, sajattack, sanbox-irl, sdleffler, t-mw, ten3roberts, tesselode, uriopass, veykril, wenyuzhao, winsalot


hecs's Issues

Query a slice of entities

As far as I can see, a query can only be executed on the entire world. I have a set (slice) of entities and I'd like to run the query on just that slice. Is that possible?
I could run the query on the world and check whether each result is in the slice, but that requires a search for every entity. Alternatively, I could iterate over the slice and fetch the components from the world directly, but that requires a handcrafted solution for each case (e.g. using query_one).
Is there any other option, or would it be possible to add such an interface to the library?

impl World {
    fn query_slice<Q: Query>(&self, slice: &[Entity]) -> QuerySliceBorrow<'_, Q> { ... }
}

Some use case, where order matters: Some entities are part of a graph and I keep a sorted vector for the update order, thus iteration should happen in the order of the slice, instead of the default ordering.

Dynamically typed queries

hecs could be extended with support for executing queries constructed at runtime rather than specified statically in types. This could be useful for, e.g., interactively specified user queries or for running queries in embedded scripting environments.

An implementation could look something like:

impl World {
    fn query_dynamic(&self, q: DynamicQuery) -> DynamicQueryBorrow<'_>;
}

pub enum DynamicQuery {
    Get(TypeId),
    GetMut(TypeId),
    And(Vec<DynamicQuery>),
}

where DynamicQueryBorrow<'_> wraps an iterator that yields DynamicQueryItem values, which provide

impl DynamicQueryItem<'_> {
    fn get<T: Component>(&self) -> Option<&T>;
    fn get_mut<T: Component>(&self) -> Option<&mut T>;
}

or perhaps

impl DynamicQueryItem<'_> {
    fn get(&self, ty: TypeId) -> Option<&dyn Any>;
    fn get_mut(&self, ty: TypeId) -> Option<&mut dyn Any>;
}

This is complex enough that I'm not likely to work on it unless there's significant demand, so please leave a comment describing your use case if you're interested!

Unsound component access using mutable queries

The following test case shows how mutable queries can be used to alias the same component value via a shared reference and a mutable reference:

#[test]
fn unsound_component_access_using_mutable_queries() {
    let mut world = World::new();
    world.spawn((23,));
    world.spawn((42,));

    for (_, (i, m)) in world.query_mut::<(&i32, &mut i32)>() {
        let o = *i;
        *m = 0;
        let n = *i;
        assert_eq!(o, n);
    }
}

I think checking the internal consistency of a Fetch instance is required to avoid this. Since the relevant information is known at compile time, I also think that this should be possible without runtime overhead, probably via the tuple fetch combinator and the derive macro.

Main Thread Panicked: Already borrowed uniquely

Hello, I am trying to execute a query inside another query, like this:

for (_entity, (actor, transform, collision)) in &mut world.query::<(&mut Actor, &mut Transform2D, Option<&Collision>)>() {
        let mut other_actors = Vec::new();

        if actor.move_x != 0.0 || actor.move_y != 0.0 {
            for rect in world
                .query::<(&Collision, &Transform2D)>()
                .iter()
                .filter(|(other_entity, _)| *other_entity != _entity)
                .map(|(_other_entity, (collision, other_transform))| Rectangle::new(
                    collision.rectangle.x + other_transform.position.x,
                    collision.rectangle.y + other_transform.position.y,
                    collision.rectangle.width,
                    collision.rectangle.height,
                ))
            {
                other_actors.push(rect);
            }
        }

      // ... there's more things ...
}

However, even though I tried to filter with id1 != id0 as in your example (https://github.com/Ralith/hecs/blob/master/examples/ffa_simulation.rs#L82), it panicked.

Is there any way to query all entities other than the current one?

get multiple components

Is there a way to world.get multiple components of an entity at once, so as not to pay the entity lookup cost multiple times?

Nicer way to remove a component from all entities?

A pattern I have in my game's codebase is that one system will create a bunch of components attached to entities, and then another system will process them all and clear them. Currently, I'm iterating over world.query_mut::<&MyComponent>() and calling world.remove_one on each entity, but I'm not sure that's the fastest possible way. Even if there's no speed-up to be gained, a world.remove_all::<MyComponent>() might be nice purely for ergonomics.

Optimize archetype filtering

Currently, each query checks compatibility with each known archetype. This works pretty well in practice because the number of archetypes doesn't tend to be astronomical and checking is fast, but there should be an asymptotically better solution.

Best way to connect assets (e.g. sprites) to components?

What's the best way to connect assets (e.g. sprites) to components?

I'm trying to add an Image (in this case, an Image from good-web-game) as one of an entity's components:

impl MainState {
    fn new(ctx: &mut Context) -> GameResult<MainState> {
        let assets = Assets::new(ctx)?;
        let mut world = World::new();
        world.spawn((Transform::new(), assets.player_image));
        ...
    }
}

But I get this as a result (and my Rust-fu is not great enough to understand what's going on):

error[E0277]: `std::cell::Cell<bool>` cannot be shared between threads safely
  --> src/main.rs:57:15
   |
57 |         world.spawn((Transform::new(), assets.player_image));
   |               ^^^^^ `std::cell::Cell<bool>` cannot be shared between threads safely
   |
   = help: within `gwg::graphics::Image`, the trait `std::marker::Sync` is not implemented for `std::cell::Cell<bool>`
   = note: required because it appears within the type `gwg::graphics::Image`
   = note: required because of the requirements on the impl of `hecs::world::Component` for `gwg::graphics::Image`
   = note: required because of the requirements on the impl of `hecs::bundle::DynamicBundle` for `(game::transform::Transform, gwg::graphics::Image)`

Any suggestions?

Operator for modifying the world while iterating

Sometimes it's useful to allow components to be inserted/removed and entities to be spawned/despawned in the course of iteration. This is impossible using a conventional interface, but could be achieved with a sufficiently clever interface in the style of Vec::retain. For example:

fn modify<Q: Query, T>(
    &mut self,
    process: impl for<'a> FnMut(<Q::Fetch as Fetch<'a>>::Item) -> T,
    apply: impl FnMut(&mut ModifiableWorld, Entity, T),
)

where the two functions are called, one after the other, for each entity matching Q, and ModifiableWorld is a proxy object that adjusts the iteration state to account for entities being added/removed from the archetype currently being iterated.

The implementation is likely to be a source of significant complexity, and it does not enable anything you can't already do by making two passes and storing intermediate data in a Vec. That said, it would be neat.

Optimize random access

Fetching individual components currently requires hitting the hash table that maps component types to columns every time. For applications which do large amounts of random access for the same component types (e.g. rapier) this is redundant. We could optimize this case by precomputing an archetype -> column offset mapping for a given type up front, in a helper that borrows the World and hence will be disposed before any spawns might invalidate it. This would also allow us to perform dynamic borrow checking only once.

Implementation outline:

impl World {
    fn column<T: Component>(&self) -> Column<'_, T>;
    fn column_mut<T: Component>(&self) -> ColumnMut<'_, T>;
}

struct Column<'a, T> {
    entities: &'a [EntityMeta],
    archetypes: &'a [Archetype],
    archetype_column_offsets: Vec<Option<u32>>,
    _marker: PhantomData<T>,
}

impl<T> Column<'_, T> {
    fn get(&self, entity: Entity) -> Result<&T, ComponentError>;
}

With/Without examples

As a follow-up to #32, I think I am not the only one who would benefit from more complete examples of how to combine With and Without with actual component data in a query, now that the with() and without() chaining helpers are gone.

Referenced components

I'd like to load a Model once, then store it and have my entities reference it. When I try to use a &Model as a component, the compiler says that it needs a static lifetime. Is there a reason for this limitation and a way to get around it?

Better std::fmt::Debug output on Entity and BuiltEntity

It would be handy (no pun intended!) to be able to call println! and pass in an Entity (or BuiltEntity) and get a list of components returned.

Currently, Entity shows the following debug output (println!("entity: {:?}", entity);):

entity: 0v0

... and BuiltEntity does not implement std::fmt::Debug afaik.

What I'd like to see is something like this:

let e = world.spawn(("abc", 123));
println!("entity: {:?}", e);
...
[&str, i32]

Or perhaps less test-y and more practically:

let c1 = Transform::new();
let c2 = Character::new();
let c3 = Name::new("Player1");
let c4 = Controllable::new();
let e = world.spawn((c1, c2, c3, c4));
println!("entity: {:?}", e);
...
[Transform, Character, Name("Player1"), Controllable]

AtomicBorrow race condition

In src/borrow.rs functions AtomicBorrow::borrow and AtomicBorrow::release_mut conflict with each other.

The following execution order breaks the internal state:

1. The atomic value is 0 initially.
2. Thread #1: AtomicBorrow::borrow_mut() stores UNIQUE_BIT in the atomic value.
3. Thread #2: AtomicBorrow::borrow() increments the atomic value and sees UNIQUE_BIT set.
4. Thread #1: AtomicBorrow::release_mut() stores 0 in the atomic value.
5. Thread #2: AtomicBorrow::borrow() decrements the atomic value.
6. The atomic value is now usize::MAX.
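One possible fix is for release_mut to subtract UNIQUE_BIT rather than store 0, so that increments made by concurrently failing borrow calls survive the release. A self-contained sketch of that idea (an illustration, not hecs's actual code):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// High bit marks a live unique borrow; low bits count shared borrows.
const UNIQUE_BIT: usize = !(usize::MAX >> 1);

struct AtomicBorrow(AtomicUsize);

impl AtomicBorrow {
    const fn new() -> Self { Self(AtomicUsize::new(0)) }

    fn borrow(&self) -> bool {
        // Optimistically increment, then back out if a unique borrow is live.
        let value = self.0.fetch_add(1, Ordering::Acquire).wrapping_add(1);
        if value & UNIQUE_BIT != 0 {
            self.0.fetch_sub(1, Ordering::Release);
            false
        } else {
            true
        }
    }

    fn release(&self) {
        self.0.fetch_sub(1, Ordering::Release);
    }

    fn borrow_mut(&self) -> bool {
        self.0
            .compare_exchange(0, UNIQUE_BIT, Ordering::Acquire, Ordering::Relaxed)
            .is_ok()
    }

    fn release_mut(&self) {
        // Subtract the bit instead of storing 0, so increments made by
        // concurrent (failing) `borrow` calls are not clobbered.
        self.0.fetch_sub(UNIQUE_BIT, Ordering::Release);
    }
}

fn main() {
    let b = AtomicBorrow::new();
    assert!(b.borrow_mut());
    assert!(!b.borrow()); // shared borrow refused while unique is live
    b.release_mut();
    assert!(b.borrow()); // counter intact: no underflow
    b.release();
}
```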

`spawn_batch` reserves exactly the memory it needs no matter how small leading to constant re-allocation when used repeatedly

Using spawn_batch with 5 items 20,000 times takes ~5 seconds and most time is spent copying data to the reallocated archetype storage:

spawn_batch calls reserve_inner with the exact number of items needed:

u32::try_from(upper.unwrap_or(lower)).expect("iterator too large"),

which then calls reserve on the archetype
self.archetypes.archetypes[archetype_id as usize].reserve(additional);

hecs/src/archetype.rs

Lines 238 to 242 in 4088845

pub(crate) fn reserve(&mut self, additional: u32) {
    if additional > (self.capacity() - self.len()) {
        self.grow(additional - (self.capacity() - self.len()));
    }
}

Meanwhile the Archetype::allocate function used by World::spawn bumps the size by 64 if it reaches the max capacity.

hecs/src/archetype.rs

Lines 223 to 227 in 4088845

pub(crate) unsafe fn allocate(&mut self, id: u32) -> u32 {
    if self.len as usize == self.entities.len() {
        self.grow(self.len.max(64));
    }

One solution to improve performance here would be to adopt this 64 value as the minimum that spawn_batch reserves.

However, I also wonder if it would be better to consider a more aggressive growth strategy like that used by Vec to minimize the re-allocation and copying needed.

Also, I'm not sure what has already been considered and/or done here or what the limitations would be in this context, but another idea that could help is breaking the archetype into large chunks to prevent the need for copying everything when growing past the current capacity.
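The amortized-doubling idea from Vec can be sketched as a pure capacity policy (a hypothetical helper for illustration, not hecs code):

```rust
// Amortized doubling with a 64-slot floor, similar in spirit to Vec's policy:
// repeated small batches then trigger O(log n) reallocations, not O(n).
fn grow_capacity(len: u32, capacity: u32, additional: u32) -> u32 {
    let needed = len + additional;
    if needed <= capacity {
        capacity
    } else {
        needed.max(capacity * 2).max(64)
    }
}

fn main() {
    // Simulate 20,000 batches of 5 spawns, counting reallocations.
    let mut cap = 0;
    let mut len = 0;
    let mut reallocs = 0;
    for _ in 0..20_000 {
        let new_cap = grow_capacity(len, cap, 5);
        if new_cap != cap {
            reallocs += 1;
            cap = new_cap;
        }
        len += 5;
    }
    assert!(reallocs < 20); // logarithmic, not linear, in batch count
}
```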

World and threads, soundness

Are unsafe impl Send for World and unsafe impl Sync for World really sound?
It seems that World doesn't synchronize access to its field archetypes: Vec<Archetype>, and Archetype is neither Send nor Sync.

Upstreaming Bevy ECS Changes

My recently announced Bevy ECS project uses a forked version of hecs, which adds the following features on top:

(copied directly from the blog post)

  • Function Systems: Hecs actually has no concept of a "system" at all. You just run queries directly on the World. Bevy adds the ability to define portable, schedulable systems using normal Rust functions.
  • Resources: Hecs has no concept of unique/global data. When building games, this is often needed. Bevy adds a Resource collection and resource queries
  • Parallel Scheduler: Hecs is single threaded, but it was designed to allow parallel schedulers to be built on top. Bevy ECS adds a custom dependency-aware scheduler that builds on top of the "Function Systems" mentioned above.
  • Optimization: Hecs is already plenty fast, but by modifying some of its internal data access patterns, we were able to improve performance significantly. This moved it from "fast enough" to "the fastest" (see the benchmark above to compare Bevy ECS to vanilla Hecs).
  • Query Wrappers: The Query Bevy ECS exports is actually a wrapper around Hecs Queries. It provides safe, scoped access to the World in a multi-threaded context and improves the ergonomics of iteration.
  • Change Detection: Automatically (and efficiently) tracks component add/remove/update operations and exposes them in the Query interface.
  • Stable Entity IDs: Almost every ECS (including Hecs) uses unstable entity ids that cannot be used for serialization (scenes / save files) or networking. In Bevy ECS, entity ids are globally unique and stable. You can use them in any context!

Most of the performance improvements came from removing Entity from the iterator and instead returning it via queries. I suspect it allowed Rust to inline something (or otherwise optimize it) in a way that it couldn't before. I also tweaked manual inlining in a few places.

I'd also like to note that Stable Entity IDs do come at a performance cost, and the current version uses random u32s, which Bevy users have shown to carry a high collision risk. That being said, the collision risk is a solvable problem and the performance cost is "worth it" to me (although entity IDs are still a hot discussion, as you can probably see in that thread). You can see the difference illustrated in the performance graphs in the blog post above.

I'd love to upstream whatever features you want from Bevy ECS, so it's mainly a question of "what do you want" / "what fits the hecs scope".

I would also like to at least discuss the possibility of Bevy eventually consuming upstream hecs directly. That way improvements to hecs could make it into Bevy, and Bevy developers would be encouraged to contribute to hecs. I see a few things that could block that:

  • I had to make a number of "type visibility" changes to maintain a clean separation between bevy_ecs logic and hecs logic. These may or may not make sense for hecs upstream
  • I'm currently embedding a number of additional Query implementations in query.rs. If you don't want these types, then I'd need to move them to bevy_ecs. I've noticed that optimizations sometimes break down when moving code across crates. If there's no way to force optimal performance across crate boundaries, I'd be hesitant to take that performance hit.
  • There are a number of features embedded directly into the Bevy hecs fork that would either need to be adopted in full, or abstracted out somehow (which could be very difficult):
    • Stable Entity Ids
    • Change tracking

Change tracking

The upstreaming-Bevy-changes issue (#71) mentioned, among other things, change detection.
That issue was closed, with Bevy ECS and hecs diverging due to serving different needs.

However, change tracking seems like a useful feature to have as part of the core ECS, especially if querying for changed components can be implemented with better time complexity than scanning all components and checking a "changed" flag on each.

What is the current status of this? Are there any plans to include change detection in Hecs?

Prepared queries

Query start-up time could be reduced by caching the list of archetypes to traverse, either in a HashMap analogous to how spawns/inserts/removes are cached, or in a structure that the user keeps track of themselves.

Command buffer helper

We should supply a mechanism for recording entities to spawn/despawn on a world at some point in the future, to simplify the performant implementation of code that wants to intermingle those operations with iteration over a query. In particular, there's an opportunity to do cheeky EntityBuilder-style region allocation and reuse shared between every pending spawn operation, saving some overhead compared to expecting the user to e.g. keep a Vec<EntityBuilder>.

Deleting all entities

I had some code like this to delete all entities, resetting everything on a button press:

for (entity, _) in world.iter() {
    world.despawn(entity);                                    
}    

But I run into a borrowing issue, as world cannot be borrowed as mutable while it is also being iterated. Maybe an iter_mut() on World would be desirable?

To solve the issue I collect the IDs in a Vec before despawning, but this seems inefficient for large numbers of entities. Maybe a simple clear method on World would be useful?

let ids = world.iter().map(|(id, _)| id).collect::<Vec<_>>();  

for id in ids {
    world.despawn(id);                                        
}

All in all, just a small issue I ran into with some ideas for possible improvement. I'm loving the crate so far. The simplicity and freedom is wonderful!

Bundle not implemented for <(A) as hecs::bundle::DynamicBundle>

Hello, First of all, loving the project so far!

I saw a possible issue, which I believe has to do with your tuple_impl! macro (or the macro that invokes tuple_impl! for the different tuple arities), though I'm not good with macros, so I can't be sure.

As the title says, bundle is not implemented for (A). It's implemented for (A,); (A,B); etc..

To reproduce:
let mut storage = World::new();
let e = storage.spawn((1));

This is mainly problematic when attempting to use storage.insert(...) as I've found it common to want to add a single component to an entity at run time. It's currently trivial to work around; just use (A,) instead of (A). If it is intended functionality for it to be (A,), then I believe (A) would be more user friendly. However, in v1.6, tuple_impl! was used for (A), not (A,), so I assume this is just a small issue with the macros!

Just wanted to make you aware... again, loving hecs so far!

EntityRef::entity panics when using an empty Entity received from World::entity

This code panics with the message 'index out of bounds: the len is 64 but the index is 4294967295':

use hecs::*;

fn main() {
    let mut world = World::new();
    world.spawn(());

    let entities: Vec<Entity> = world.iter().map(|e| e.entity()).collect();

    for e in entities {
        let entity_ref = world.entity(e).unwrap();
        println!("{:?}", e);
        entity_ref.entity();
    }
}

Digging into it, the issue is caused by the check in Entities::get line 405 because apparently the id of the empty archetype is 0.

As a result, empty entities are kind of Schrödinger's entities. They exist and can be iterated, but attempting to access them by their id will fail because they are seen as 'pending'.

There is a comment in ArchetypeSet::new that says archetype 0 always represents empty entities, so I am not sure what the intended behaviour should be. Another comment, on World::flush, states that reserved entities are converted into empty entities, so there seems to be some reason for this behaviour; but if it is intentional, the lack of support for empty entities is undocumented and leads to a rather confusing crash.

World::flush and the check in Entities::get were created in commit 90c0197.

Benchmarks don't compile

Trying to compile benchmarks of current master with either stable or nightly fails with:

   Compiling hecs v0.2.7 (/home/svenstaro/src/hecs)
error[E0277]: `hecs::query::QueryBorrow<'_, (&mut Position, &Velocity)>` is not an iterator
  --> benches/bench.rs:50:32
   |
50 |         for (_, (pos, vel)) in world.query::<(&mut Position, &Velocity)>() {
   |                                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ `hecs::query::QueryBorrow<'_, (&mut Position, &Velocity)>` is not an iterator
   |
   = help: the trait `std::iter::Iterator` is not implemented for `hecs::query::QueryBorrow<'_, (&mut Position, &Velocity)>`
   = note: required by `std::iter::IntoIterator::into_iter`

error: aborting due to previous error

For more information about this error, try `rustc --explain E0277`.
error: could not compile `hecs`.

What am I doing wrong?

Columnar bulk insertions

Initialization of components from bulk column-formatted data could allow for even faster entity construction than spawn_batch. It's unclear if there's a real need for that as spawn_batch is already blazingly fast, but it could be fun.

Packed/sparse representations

It might be useful/fun to allow certain components to be stored per archetype in ways other than a flat array. For example, a bool component might be stored in a packed bitfield, and a bool component which is nearly always false might benefit from sparse encoding. This could be pursued with something like:

trait Container<T> { /* ... */ }

struct Array<T>(Box<[T]>);

impl<T> Container<T> for Array<T> { /* ... */ }

trait Component<C: Container<Self> = Array<Self>>: Send + Sync + Sized + 'static {}
impl<T: Send + Sync + 'static> Component<Array<T>> for T {}

This preserves the "It Just Works" behavior for typical cases, while allowing specialized implementations as needed. The proposed definition of Array would break up the currently monolithic Archetype allocations; in the event that this compromises performance, an allocator abstraction could be introduced to allow multiple containers to share a single memory region.

The most obviously useful nonstandard container, a bitfield, would require container-specific smart pointer types in place of references. This will be difficult to accomplish without GATs, so it's probably best to wait their arrival before developing this concept further.

"Or"/disjunction query combinator

Currently Option allows you to "try" to fetch a fetchable thing; we just came across a case where we'd like to match on queries that have one fetchable subset or another. I think something like an Or<Q, P> combinator that implements Fetch would be feasible.

Deprecate World::get(_mut) in favor of World::query_one

These are, in theory, redundant; both expose access to a single entity's components, but query_one is significantly more versatile. We should verify that they have similar performance, optimize if necessary, then deprecate for at least one major release cycle before removing entirely.

parallel iteration

Is it possible to iterate over a query in parallel? Is it possible to iterate over two queries in parallel if their component accesses do not interfere? If so, how?

Dependabot can't resolve your Rust dependency files

Dependabot can't resolve your Rust dependency files.

As a result, Dependabot couldn't update your dependencies.

The error Dependabot encountered was:

Updating crates.io index
error: failed to select a version for the requirement `hashbrown = "^0.10.0"`
candidate versions found which didn't match: 0.9.1, 0.9.0, 0.8.2, ...
location searched: crates.io index
required by package `hecs v0.3.1 (/home/dependabot/dependabot-updater/dependabot_tmp_dir)`

If you think the above is an error on Dependabot's side please don't hesitate to get in touch - we'll do whatever we can to fix it.

View the update logs.

Archetype transform graph

Part of why adding/removing components is expensive is the need to construct, sort, and hash a Vec<TypeId> to find the destination archetype. For statically typed component bundles, we can cache the index of the destination archetype after adding/removing the bundle in a HashMap using the TypeId of the bundle itself.

Entity ID assignment

I noticed that entity IDs are assigned randomly:

impl Entity {
    #[allow(missing_docs)]
    pub fn new() -> Self {
        Self(rand::random::<u32>())
    }
}

I'm concerned about this. I implemented the equations from the birthday paradox using this Python script:

import decimal

Dec = decimal.Decimal
decimal.getcontext().prec = 100

total_num_ids = Dec(2 ** 32)
num_spawned_entities = 1000

all_unique_ids_probability = Dec(1.0)
for item_index in range(0, num_spawned_entities):
    all_unique_ids_probability *= (total_num_ids - Dec(item_index)) / total_num_ids
duplicate_id_probability = Dec(1.0) - all_unique_ids_probability

print("{:.2f}".format(Dec(1.0) / duplicate_id_probability))

According to the results, a program that spawns 1000 entities will contain at least 1 duplicate ID every 8599.03 times it is run, on average. A more impractical application that spawns 10,000 entities will contain at least 1 duplicate ID every 86.41 times it is run. I'm currently experimenting with the Bevy engine and would be interested in creating a pull request to fix the problem. I am making this issue to collect any input from current project maintainers for things I should keep in mind or look out for as I am not particularly familiar with the codebase.

no-std support for macros

#[derive(Bundle)] currently generates code that references std. Core should be referenced directly where possible, and alloc should be used in place of std based on a feature gate.

Getting an Entity from an EntityRef

It'd be great to have a method on EntityRef that yields the entity that it came from.

Though it's possible to pass an Entity along with its corresponding EntityRef, it feels redundant!

I'm writing collision handling code that looks roughly like this:

let mut handle_collision = |projectile, hit| {
    let bullet = projectile.get::<Bullet>()?;

    // Proposed use: Bullet.owner is of type Entity
    if bullet.owner == hit.entity() {
        return None;
    }

    // snip

    None
};

for (a, b) in game.physics.collisions() {
    let a = game.world.entity(*a);
    let b = game.world.entity(*b);

    let _ = handle_collision(a, b);
    let _ = handle_collision(b, a);
}

Optimal event handling

This event handling pattern has come up a few times lately:

for (entity, event) in events.drain(..) {
    if let Ok(components) = world.query_one::<A>(entity) {
        handle_a(event, components);
    }
    if let Ok(components) = world.query_one::<B>(entity) {
        handle_b(event, components);
    }
    // ...
}

This works well, but is slightly suboptimal. It should be possible to precompute the subset of these queries (A, B, ...) which are satisfied by entities in each archetype, then iterate through that list instead of checking every possible query each time.

One approach would be to allow queries to be registered to obtain an ID, then expose a list of satisfied query IDs for a given entity, stored in the archetype. The above pattern could then be optimized by walking the satisfied ID list to select the appropriate handlers.

Some challenges:

  • Queries must be registered exactly once. OnceCell and statics? HashSet of query TypeIds?
  • How do we actually branch to the proper handler based on a dynamic ID? A HashMap<QueryID, &dyn FnMut> would have to be reallocated every time to avoid a 'static bound on the closures, which kinda sucks. When TypeId::of becomes a const fn (rust-lang/rust#77125), we could use the TypeId of each query as its ID and just match.
  • The interface all this implies is a bit ugly, but could be encapsulated gracefully in a declarative macro.
  • This is a lot of faffing about for a probably very marginal efficiency gain.
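The dispatch idea above can be sketched with plain maps; `QueryId`, `ArchetypeId`, and `QueryTable` are stand-ins for illustration, not hecs types:

```rust
use std::collections::HashMap;

// Hypothetical sketch: each archetype stores the IDs of registered queries it
// satisfies, so per-event dispatch walks that short list instead of retrying
// every possible query.
type QueryId = usize;
type ArchetypeId = usize;

struct QueryTable {
    satisfied: HashMap<ArchetypeId, Vec<QueryId>>,
}

impl QueryTable {
    fn handlers_for(&self, archetype: ArchetypeId) -> &[QueryId] {
        self.satisfied
            .get(&archetype)
            .map(|v| v.as_slice())
            .unwrap_or(&[])
    }
}

fn main() {
    let mut satisfied = HashMap::new();
    satisfied.insert(0, vec![1]);    // archetype 0 satisfies query 1 only
    satisfied.insert(1, vec![1, 2]); // archetype 1 satisfies queries 1 and 2
    let table = QueryTable { satisfied };

    // Dispatch for an entity in archetype 1 touches exactly two handlers.
    assert_eq!(table.handlers_for(1).to_vec(), vec![1, 2]);
    // Unknown archetypes dispatch to nothing.
    assert!(table.handlers_for(9).is_empty());
}
```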

reserve_entity does not increment entities.len() when flushed

Repro:

#[test]
fn test() {
    let mut e = Entities::default();
    let e1 = e.reserve_entity();
    e.flush(|_, _| {});
    // this fails
    assert_eq!(e.len(), 1);
}

This causes "subtract with overflow" errors when despawning a reserved entity (after flushing).

#[test]
fn reserve_test() {
    let mut world = World::new();
    let entity = world.reserve_entity();
    world.flush();
    world.despawn(entity).unwrap();
}

This effectively makes entity reservation useless (unless the entities are never despawned). I'm experimenting with using the new hecs entity allocator in bevy, which is what surfaced this.

Introduced by #86

@Ralith @mjhostet
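The bookkeeping invariant the repro violates can be shown in miniature; the fields here are hypothetical, not the real Entities layout:

```rust
// Hypothetical sketch: flush must fold every reserved ID into `len`,
// otherwise despawn's decrement underflows ("subtract with overflow").
struct Entities {
    len: u32,
    reserved: u32,
}

impl Entities {
    fn reserve_entity(&mut self) -> u32 {
        self.reserved += 1;
        self.reserved - 1
    }

    fn flush(&mut self) {
        // The step the bug omits: account for flushed reservations in len.
        self.len += self.reserved;
        self.reserved = 0;
    }

    fn despawn(&mut self) {
        self.len -= 1; // underflows if flush forgot to bump len
    }
}

fn main() {
    let mut e = Entities { len: 0, reserved: 0 };
    let _id = e.reserve_entity();
    e.flush();
    assert_eq!(e.len, 1);
    e.despawn(); // safe only because flush maintained the invariant
    assert_eq!(e.len, 0);
}
```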

Empty archetypes

Despawning all entities of an archetype doesn't remove that archetype from the world, and doesn't advance archetype generation.

This is problematic for something like my library yaks, because it causes tests like this one to fail - summarized, it:

  • spawns some entities of two different types,
  • creates an executor with two systems that are disjoint (by resources and archetypes, but not by components; here this means they modify the same type of component, but can run concurrently as long as their queries don't overlap on an archetype),
  • verifies that the systems run concurrently,
  • spawns some entities of a type that satisfies queries of both systems,
  • verifies that this forces the systems to run sequentially,
  • despawns all entities of that third type,
  • (unsuccessfully) verifies that the systems run concurrently again.

This means that as soon as a system-straddling entity is spawned, those systems are forbidden from running concurrently until the world is recreated.

I see these approaches so far:

  • automatically removing archetypes once they are empty and advancing the archetype generation,
  • some form of World::defrag() that purges empty archetypes and advances the generation,
  • tacking the function of said ::defrag() onto World::flush(),
  • exposing Archetype::len() and/or ::is_empty(), and... advancing the generation once an archetype is empty?

The problem with the last approach is obvious - the archetypes don't actually change - but there needs to be some mechanism for informing user code.
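The World::defrag() variant could look roughly like this (field names are hypothetical; the real World stores much more per archetype):

```rust
// Hypothetical sketch of the proposed World::defrag(): drop empty archetypes
// and bump the generation so callers caching per-archetype state re-check.
struct Archetype {
    len: usize,
}

struct World {
    archetypes: Vec<Archetype>,
    generation: u64,
}

impl World {
    fn defrag(&mut self) {
        let before = self.archetypes.len();
        self.archetypes.retain(|a| a.len > 0);
        if self.archetypes.len() != before {
            // Advancing the generation tells executors like yaks that their
            // cached archetype-overlap analysis is stale.
            self.generation += 1;
        }
    }
}

fn main() {
    let mut w = World {
        archetypes: vec![Archetype { len: 3 }, Archetype { len: 0 }],
        generation: 0,
    };
    w.defrag();
    assert_eq!(w.archetypes.len(), 1);
    assert_eq!(w.generation, 1);
}
```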

Instanced renderer?

Since hecs doesn't have any shared components/tags feature, would there be any way to implement an "efficient" renderer, let alone an instanced renderer?

With a regular query, I could achieve something like this:

for (id, (ltw, mesh, pip)) in world.query::<(&LocalToWorld, &Mesh, &Pipeline)>().iter()
{
    // no instanced rendering here, just update the uniform buffer for every object
    ctx.update_uniform(ltw);

    // this is a relatively expensive operation
    // the `Mesh` component is a handle to a shared mesh
    ctx.bind_mesh(mesh);

    // also a relatively expensive operation
    // the `Pipeline` component is a handle to a shared graphics pipeline
    ctx.bind_pipeline(pip);

    // draw a single "instance"
    ctx.draw(0..1);
}

The code above would be ridiculously inefficient because both the mesh and pipeline are re-bound for every entity. Ideally, I'd like to be able to achieve something like this:

for chunk in world.query::<(&LocalToWorld,)>()
                  .shared::<(&Mesh, &Pipeline)>()
                  .iter()
{
    let mesh: &Mesh = chunk.shared_component::<Mesh>().unwrap();
    let pip: &Pipeline = chunk.shared_component().unwrap();

    // only bind once per chunk
    ctx.bind_mesh(mesh);

    // only bind once per chunk
    ctx.bind_pipeline(pip);

    for (id, (ltw,)) in chunk.iter()
    {
        // still not instanced, but much better than before
        ctx.update_uniform(ltw);
        ctx.draw(0..1);
    }
}

And even better, instanced rendering by getting a slice of components:

for chunk in world.query::<(&LocalToWorld,)>()
                  .shared::<(&Mesh, &Pipeline)>()
                  .iter()
{
    let mesh: &Mesh = chunk.shared_component::<Mesh>().unwrap();
    let pip: &Pipeline = chunk.shared_component().unwrap();

    // get a slice of components (I'm not sure how hecs stores component
    // data, or whether this is possible at all)
    let ltw: &[LocalToWorld] = chunk.components();

    // only bind once per chunk
    ctx.bind_mesh(mesh);

    // only bind once per chunk
    ctx.bind_pipeline(pip);

    // update buffer once
    ctx.update_uniform(ltw);

    ctx.draw(0..chunk.count());
}

So far, this has all been very similar to legion's approach towards this problem. I'd be interested in any other methods that require less/no change to hecs' current API.
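One workaround that needs no hecs API changes is to batch by handle pairs in user code before issuing draws: run a normal query, group entities by their (mesh, pipeline) handles, then bind each pair once per batch. A toy sketch, with handles reduced to plain integers:

```rust
use std::collections::HashMap;

// Hypothetical sketch, no hecs API changes: group entities from an ordinary
// query by their shared-resource handles, then bind once per batch.
type MeshHandle = u32;
type PipelineHandle = u32;

fn main() {
    // Pretend these rows came from world.query::<(&Mesh, &Pipeline)>().
    let rows = vec![(1u32, 10u32), (2, 10), (1, 10), (2, 20)];

    // Batch key -> instance count; a real renderer would also collect the
    // per-instance LocalToWorld data here.
    let mut batches: HashMap<(MeshHandle, PipelineHandle), u32> = HashMap::new();
    for (mesh, pipeline) in rows {
        *batches.entry((mesh, pipeline)).or_insert(0) += 1;
    }

    // Two entities share mesh 1 / pipeline 10, so one batch draws 2 instances.
    assert_eq!(batches[&(1, 10)], 2);
    // Three bind operations instead of four.
    assert_eq!(batches.len(), 3);
}
```

This costs a pass over the query results each frame, but for expensive bind operations that is usually a net win.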

Archetype::grow leaks previously allocated memory

With this test case on master:

#[test]
fn spawn_batch_unexact() {
    struct Unhint<I>(I);
    impl<I: Iterator> Iterator for Unhint<I> {
        type Item = I::Item;

        fn next(&mut self) -> Option<Self::Item> {
            self.0.next()
        }

        fn size_hint(&self) -> (usize, Option<usize>) {
            (0, None)
        }
    }

    let mut world = World::new();
    world.spawn_batch(Unhint((0..100).map(|x| (x, "abc"))));
    let entities = world
        .query::<&i32>()
        .iter()
        .map(|(_, &x)| x)
        .collect::<Vec<_>>();
    assert_eq!(entities.len(), 100);
}

miri reports:

alloc4621966 (Rust heap, size: 1280, align: 8) {
    0x000 │ ╾a4468557[<11121328>]─╼ 03 00 00 00 00 00 00 00 │ ╾──────╼........
    // ... repeats ...
    0x400 │ 00 00 00 00 01 00 00 00 02 00 00 00 03 00 00 00 │ ................
    0x410 │ 04 00 00 00 05 00 00 00 06 00 00 00 07 00 00 00 │ ................
    // ... repeats ...
    0x4f0 │ 3c 00 00 00 3d 00 00 00 3e 00 00 00 3f 00 00 00 │ <...=...>...?...
}
alloc4468557 (global, size: 3, align: 1) {
    61 62 63                                        │ abc
}
alloc4622641 (global, size: 3, align: 1) {
    61 62 63                                        │ abc
}
// ... repeats ...

cargo-valgrind reports:

       Error Leaked 1.2 kiB
        Info at malloc (vg_replace_malloc.c:307)
             at alloc::alloc::alloc (alloc.rs:80)
             at hecs::archetype::Archetype::grow (archetype.rs:195)
             at hecs::archetype::Archetype::allocate (archetype.rs:159)
             at <hecs::world::SpawnBatchIter<I> as core::iter::traits::iterator::Iterator>::next (world.rs:763)
             at <&mut I as core::iter::traits::iterator::Iterator>::next (iterator.rs:3283)
             at <hecs::world::SpawnBatchIter<I> as core::ops::drop::Drop>::drop (world.rs:748)
             at core::ptr::drop_in_place (mod.rs:184)
             at tests::spawn_batch_unexact (tests.rs:355)
             at tests::spawn_batch_unexact::{{closure}} (tests.rs:340)
             at core::ops::function::FnOnce::call_once (function.rs:232)
             at call_once<(),FnOnce<()>> (boxed.rs:1076)
             at call_once<(),alloc::boxed::Box<FnOnce<()>>> (panic.rs:318)
             at do_call<std::panic::AssertUnwindSafe<alloc::boxed::Box<FnOnce<()>>>,()> (panicking.rs:297)
             at try<(),std::panic::AssertUnwindSafe<alloc::boxed::Box<FnOnce<()>>>> (panicking.rs:274)
             at catch_unwind<std::panic::AssertUnwindSafe<alloc::boxed::Box<FnOnce<()>>>,()> (panic.rs:394)
             at run_test_in_process (lib.rs:541)
             at test::run_test::run_test_inner::{{closure}} (lib.rs:450)
       Error Leaked 2.0 MiB
        Info at malloc (vg_replace_malloc.c:307)
             at alloc::alloc::alloc (alloc.rs:80)
             at hecs::archetype::Archetype::grow (archetype.rs:195)
             at hecs::archetype::Archetype::allocate (archetype.rs:159)
             at hecs::world::World::spawn (world.rs:98)
             at tests::spawn_many (tests.rs:248)
             at tests::spawn_many::{{closure}} (tests.rs:244)
             at core::ops::function::FnOnce::call_once (function.rs:232)
             at call_once<(),FnOnce<()>> (boxed.rs:1076)
             at call_once<(),alloc::boxed::Box<FnOnce<()>>> (panic.rs:318)
             at do_call<std::panic::AssertUnwindSafe<alloc::boxed::Box<FnOnce<()>>>,()> (panicking.rs:297)
             at try<(),std::panic::AssertUnwindSafe<alloc::boxed::Box<FnOnce<()>>>> (panicking.rs:274)
             at catch_unwind<std::panic::AssertUnwindSafe<alloc::boxed::Box<FnOnce<()>>>,()> (panic.rs:394)
             at run_test_in_process (lib.rs:541)
             at test::run_test::run_test_inner::{{closure}} (lib.rs:450)
             at std::sys_common::backtrace::__rust_begin_short_backtrace (backtrace.rs:130)
             at {{closure}}<closure-0,()> (mod.rs:475)
             at call_once<(),closure-0> (panic.rs:318)
             at do_call<std::panic::AssertUnwindSafe<closure-0>,()> (panicking.rs:297)
             at try<(),std::panic::AssertUnwindSafe<closure-0>> (panicking.rs:274)
             at catch_unwind<std::panic::AssertUnwindSafe<closure-0>,()> (panic.rs:394)
             at {{closure}}<closure-0,()> (mod.rs:474)
             at core::ops::function::FnOnce::call_once{{vtable-shim}} (function.rs:232)
             at call_once<(),FnOnce<()>> (boxed.rs:1076)
             at call_once<(),alloc::boxed::Box<FnOnce<()>>> (boxed.rs:1076)
             at std::sys::unix::thread::Thread::new::thread_start (thread.rs:87)
     Summary Leaked 2.0 MiB total

Note that valgrind also reports a leak for spawn_many (a test that is skipped under miri); both leaks likely share the same underlying cause.
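For reference, the shape of a non-leaking grow, consistent with what the valgrind trace suggests is missing. This is a generic sketch of the pattern, not hecs' actual archetype code:

```rust
use std::alloc::{alloc, dealloc, Layout};

// Generic sketch: when growing a raw allocation, the old block must be
// deallocated after the copy, or every grow leaks the prior buffer.
unsafe fn grow(old: *mut u8, old_layout: Layout, new_layout: Layout) -> *mut u8 {
    unsafe {
        let new = alloc(new_layout);
        assert!(!new.is_null());
        std::ptr::copy_nonoverlapping(old, new, old_layout.size());
        // Omitting this dealloc reproduces the leak pattern reported above.
        dealloc(old, old_layout);
        new
    }
}

fn main() {
    unsafe {
        let small = Layout::from_size_align(8, 8).unwrap();
        let large = Layout::from_size_align(16, 8).unwrap();
        let p = alloc(small);
        assert!(!p.is_null());
        let q = grow(p, small, large);
        dealloc(q, large);
    }
}
```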
