
wgpu-rs's Issues

msaa-line demo crashes on Mac OS X with assertion failure.

When I attempted to change the sample count, the main thread panicked with an assertion failure.

Mac OS X 10.13.6
GPU: Intel HD Graphics 4000 1536 MB

Trace below:

mlindner$ RUST_BACKTRACE=1 cargo run --example msaa-line
    Finished dev [unoptimized + debuginfo] target(s) in 0.13s
     Running `target/debug/examples/msaa-line`
Press left/right arrow keys to change sample_count.
sample_count: 4
sample_count: 8
thread 'main' panicked at 'Attachment sample_count must be supported by physical device limits', /Users/mlindner/.cargo/git/checkouts/wgpu-53e70f8674b08dd4/c3609d7/wgpu-native/src/command/mod.rs:261:9
stack backtrace:
   0: backtrace::backtrace::libunwind::trace
             at /Users/vsts/.cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.34/src/backtrace/libunwind.rs:88
   1: backtrace::backtrace::trace_unsynchronized
             at /Users/vsts/.cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.34/src/backtrace/mod.rs:66
   2: std::sys_common::backtrace::_print
             at src/libstd/sys_common/backtrace.rs:47
   3: std::sys_common::backtrace::print
             at src/libstd/sys_common/backtrace.rs:36
   4: std::panicking::default_hook::{{closure}}
             at src/libstd/panicking.rs:200
   5: std::panicking::default_hook
             at src/libstd/panicking.rs:214
   6: std::panicking::rust_panic_with_hook
             at src/libstd/panicking.rs:477
   7: std::panicking::begin_panic
             at /rustc/625451e376bb2e5283fc4741caa0a3e8a2ca4d54/src/libstd/panicking.rs:411
   8: wgpu_native::command::command_encoder_begin_render_pass
             at /Users/mlindner/.cargo/git/checkouts/wgpu-53e70f8674b08dd4/c3609d7/wgpu-native/src/command/mod.rs:261
   9: wgpu_command_encoder_begin_render_pass
             at /Users/mlindner/.cargo/git/checkouts/wgpu-53e70f8674b08dd4/c3609d7/wgpu-native/src/command/mod.rs:719
  10: wgpu::CommandEncoder::begin_render_pass
             at src/lib.rs:1058
  11: <msaa_line::Example as msaa_line::framework::Example>::render
             at examples/msaa-line/main.rs:214
  12: msaa_line::framework::run::{{closure}}
             at examples/msaa-line/../framework.rs:155
  13: <winit::platform_impl::platform::app_state::EventLoopHandler<F,T> as winit::platform_impl::platform::app_state::EventHandler>::handle_nonuser_event
             at /Users/mlindner/.cargo/registry/src/github.com-1ecc6299db9ec823/winit-0.20.0-alpha3/src/platform_impl/macos/app_state.rs:61
  14: winit::platform_impl::platform::app_state::Handler::handle_nonuser_event
             at /Users/mlindner/.cargo/registry/src/github.com-1ecc6299db9ec823/winit-0.20.0-alpha3/src/platform_impl/macos/app_state.rs:169
  15: winit::platform_impl::platform::app_state::AppState::cleared
             at /Users/mlindner/.cargo/registry/src/github.com-1ecc6299db9ec823/winit-0.20.0-alpha3/src/platform_impl/macos/app_state.rs:293
  16: winit::platform_impl::platform::observer::control_flow_end_handler
             at /Users/mlindner/.cargo/registry/src/github.com-1ecc6299db9ec823/winit-0.20.0-alpha3/src/platform_impl/macos/observer.rs:136
  17: <unknown>
  18: <unknown>
  19: <unknown>
  20: <unknown>
  21: <unknown>
  22: <unknown>
  23: <unknown>
  24: <unknown>
  25: <unknown>
  26: <unknown>
  27: <() as objc::message::MessageArguments>::invoke
             at /Users/mlindner/.cargo/registry/src/github.com-1ecc6299db9ec823/objc-0.2.6/src/message/mod.rs:128
  28: objc::message::platform::send_unverified
             at /Users/mlindner/.cargo/registry/src/github.com-1ecc6299db9ec823/objc-0.2.6/src/message/apple/mod.rs:27
  29: objc::message::send_message
             at /Users/mlindner/.cargo/registry/src/github.com-1ecc6299db9ec823/objc-0.2.6/src/message/mod.rs:178
  30: winit::platform_impl::platform::event_loop::EventLoop<T>::run
             at ./<::objc::macros::msg_send macros>:15
  31: winit::event_loop::EventLoop<T>::run
             at /Users/mlindner/.cargo/registry/src/github.com-1ecc6299db9ec823/winit-0.20.0-alpha3/src/event_loop.rs:140
  32: msaa_line::framework::run
             at examples/msaa-line/../framework.rs:115
  33: msaa_line::main
             at examples/msaa-line/main.rs:228
  34: std::rt::lang_start::{{closure}}
             at /rustc/625451e376bb2e5283fc4741caa0a3e8a2ca4d54/src/libstd/rt.rs:64
  35: std::rt::lang_start_internal::{{closure}}
             at src/libstd/rt.rs:49
  36: std::panicking::try::do_call
             at src/libstd/panicking.rs:296
  37: __rust_maybe_catch_panic
             at src/libpanic_unwind/lib.rs:80
  38: std::panicking::try
             at src/libstd/panicking.rs:275
  39: std::panic::catch_unwind
             at src/libstd/panic.rs:394
  40: std::rt::lang_start_internal
             at src/libstd/rt.rs:48
  41: std::rt::lang_start
             at /rustc/625451e376bb2e5283fc4741caa0a3e8a2ca4d54/src/libstd/rt.rs:64
  42: msaa_line::main
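
For context, one possible guard (a hypothetical helper, not part of the example) is to cycle only through sample counts assumed to be supported, instead of doubling unconditionally; the real limit has to come from the adapter/physical device, so the list below is an assumption.

// Hypothetical sketch: cycle sample_count through an assumed-supported list
// rather than stepping past what the device allows.
fn next_sample_count(current: u32, supported: &[u32]) -> u32 {
    let pos = supported.iter().position(|&c| c == current).unwrap_or(0);
    supported[(pos + 1) % supported.len()]
}

fn main() {
    // Assumed limit for this GPU; in real code this should be queried, not hard-coded.
    let supported = [1, 2, 4];
    let mut sample_count = 1;
    for _ in 0..4 {
        sample_count = next_sample_count(sample_count, &supported);
        println!("sample_count: {}", sample_count);
    }
}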

Publish to crates.io

With the release of 0.3.0, it would be nice if this were published to crates.io, along with wgpu-native. I'm trying to pull this in as a dependency for another crate, but cargo doesn't accept git dependencies for crates published to crates.io.

[macOS] full screen is super slow

Previously, gfx-rs/wgpu#78 mentioned that full screen leaks memory and is super slow; now it's just slow and no longer leaks memory.
I don't see excessive swapchain recreation, it's just slow on its own. I haven't checked a trace yet but will do that eventually.

Some questions about OpenGL support and overhead by native C-layer

I am currently not able to run examples on OSX 10.11.6 because:

  • Metal is not installed
  • In order to install Metal I need to upgrade Xcode
  • In order to upgrade Xcode, I need to update OSX
  • The OSX update is broken

I have a Windows computer too (workstation), but I prefer to write code using my MacBook Air.

That's why I prefer to use OpenGL when developing new stuff: even if the API is being phased out by newer APIs, it works on many platforms, and it works on my preferred device for writing code.

While I'm not concerned about my own private use (I can just buy a new computer at some point in the future) it puts me in a somewhat difficult spot regarding deciding whether to port existing code to WebGPU or write backend specific code.

Currently, OpenGL does not work with WebGPU, but it looks like it might be supported in the future, since Gfx-HAL has a backend for it.

If OpenGL is supported, the software stack will look like this:

WebGPU (Rust wrapper around C layer)
WebGPU-native (C layer)
Gfx-HAL
OpenGL

My questions are:

  • Will OpenGL support be added?
  • Will the overhead be noticeable for common use cases? (e.g. less than 2%)

I don't need an exact estimate; I'm just asking what your thoughts are on this.

Guidance on Pipelines and Buffers

Greetings!

I've been developing with wgpu-rs to do both 2D and 3D graphics and have been having a blast :) I'll admit that my learning curve has been fairly steep; a lot of blankly staring at code. But I think I'm starting to get the hang of it. So far your examples have been my only real guide, and I'm very grateful for them. They've been super helpful.

I have quite a few questions about how best to go about things. If this is not the right place to ask these questions, please direct me to a better spot.

As I'm working through the examples and ramping up the number and complexity of objects I'm drawing on the screen, I'm wondering:

  • If I have 1,000 objects on the screen, and they only differ by some minor information, is it better to have 1,000 pipelines whose buffers don't change between renders, or one pipeline whose buffer is updated per object render? (See the sketch after this list.)
  • Is there a significant performance penalty in updating buffers every render? Am I worrying about premature optimization?
  • Is there a way to treat a texture as a buffer? It seems you have to bake the texture into the pipeline never to be changed. Is that correct?
    • If 1,000 objects on the screen were identical except for the texture (aka a thumbnail sheet), does that imply 1,000 pipelines?
  • If I want to use a different shader between renders, I need to create a new pipeline, correct?
    • I believe pipelines are intended to be immutable but allow updating buffer info. Let me know if I'm wrong.
  • If data is the same across pipelines (such as a transformation matrix), is it possible to put that information into a single buffer and have the pipelines reference that central buffer?
    • With Rust, I'm not sure how I would go about that due to the memory sharing restrictions, or whether it's a good idea. But I was wondering and thought I'd ask.
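
To make the first question concrete, here is a minimal sketch of the one-pipeline approach, assuming the wgpu API used by the examples in this repo and a hypothetical list of per-object bind groups prepared elsewhere: the pipeline is set once, and only the bind group (carrying that object's uniform buffer) changes per object.

// Sketch: one pipeline shared by all objects, per-object data supplied
// through a bind group rather than through a separate pipeline.
fn render_objects(
    encoder: &mut wgpu::CommandEncoder,
    frame_view: &wgpu::TextureView,
    pipeline: &wgpu::RenderPipeline,
    object_bind_groups: &[wgpu::BindGroup], // hypothetical: one bind group per object
) {
    let mut rpass = encoder.begin_render_pass(&wgpu::RenderPassDescriptor {
        color_attachments: &[wgpu::RenderPassColorAttachmentDescriptor {
            attachment: frame_view,
            resolve_target: None,
            load_op: wgpu::LoadOp::Clear,
            store_op: wgpu::StoreOp::Store,
            clear_color: wgpu::Color::GREEN,
        }],
        depth_stencil_attachment: None,
    });
    rpass.set_pipeline(pipeline); // set once for all 1,000 objects
    for bind_group in object_bind_groups {
        rpass.set_bind_group(0, bind_group, &[]); // swap only the per-object data
        rpass.draw(0 .. 3, 0 .. 1);
    }
}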

I also have some higher level questions:

  • Is the roadmap for WebGPU intended to merge with WebXR? Specifically, if I intend to do AR with Rust, is WebGPU a dead end for wasm AR development, or is it on the right track?
  • I believe the buffer in Rust code is simply a pointer to memory on the GPU (or memory physically near the GPU). Is my understanding correct?
    • If so, does the buffer know to release the memory if it goes out of scope? Do I need to do anything specifically? I may be totally misunderstanding the role of buffers here, so forgive me if this is a stupid question.
  • Has WebGPU become official? Am I working with a real representation of the API, or just Apple's Metal-ish recommendation?
  • Are there any resources I should be aware of to learn WebGPU better?

If, based on my questions, I'm missing the point of basic concepts (quite possible), please feel free to correct me. I'd rather look dumb today than be dumb tomorrow.

As I've mentioned before, I think you guys are doing a fantastic job! I appreciate everything you've done so far. Please keep up the good work :)

Mipmap example does not downsample correctly

This bug is for the current git version.

The mipmap example currently downsamples using Nearest rather than Linear.
Changing the enum on mag_filter to Linear instead fixes it and correctly averages pixels:
https://github.com/gfx-rs/wgpu-rs/blob/master/examples/mipmap/main.rs#L137

I was under the impression that mag_filter refers to the filtering method used when upsampling, not when downsampling. Is this a bug in my understanding, or a bug in the code?
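
For reference, here is a sketch of the sampler fields involved, mirroring the descriptor shape used by the examples of this era; the exact clamp and compare values are illustrative assumptions. Conventionally, min_filter applies when the texture is minified (downsampled), mag_filter when it is magnified (upsampled), and mipmap_filter when blending between mip levels.

// Sketch only: field names follow the examples' SamplerDescriptor; values are illustrative.
fn linear_sampler(device: &wgpu::Device) -> wgpu::Sampler {
    device.create_sampler(&wgpu::SamplerDescriptor {
        address_mode_u: wgpu::AddressMode::ClampToEdge,
        address_mode_v: wgpu::AddressMode::ClampToEdge,
        address_mode_w: wgpu::AddressMode::ClampToEdge,
        mag_filter: wgpu::FilterMode::Linear,    // used when upsampling (magnification)
        min_filter: wgpu::FilterMode::Linear,    // used when downsampling (minification)
        mipmap_filter: wgpu::FilterMode::Linear, // used when blending between mip levels
        lod_min_clamp: 0.0,
        lod_max_clamp: 100.0,
        compare_function: wgpu::CompareFunction::Always,
    })
}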

Target the Web directly

We need to make wgpu-rs issue the actual Web calls and run in a browser that supports the WebGPU API. The API itself is still evolving, but we should start exploring what our side would look like, and maybe we'll already get something running.

`hello_triangle_rust` example gives errors on resize

Using Linux Ubuntu 18.04 with X11 rendering and Vulkan backend, NVidia 1060 with NVidia drivers.

When running the hello_triangle example and resizing the window, the window doesn't redraw. Any new area in the window is not drawn at all, and it spits out a ton of the following warnings:

ERROR 2019-04-23T14:24:14Z: gfx_backend_vulkan: [DS] Object: 0x220 | vkAcquireNextImageKHR: Application has already acquired the maximum number of images (0x1)
DS(ERROR / SPEC): object: 0x220 type: 1000001000 msgNum: 108 - vkAcquireNextImageKHR: Application has already acquired the maximum number of images (0x1)
DS(ERROR): object: 0x220 type: 27 location: 10883 msg_code: 108: Object: 0x220 | vkAcquireNextImageKHR: Application has already acquired the maximum number of images (0x1)
ERROR 2019-04-23T14:24:14Z: gfx_backend_vulkan: [DS] Object: 0x55a8d84b00f0 | vkCreateSwapChainKHR() called with imageExtent = (1024,768), which is outside the bounds returned by vkGetPhysicalDeviceSurfaceCapabilitiesKHR(): currentExtent = (1011,756), minImageExtent = (1011,756), maxImageExtent = (1011,756). The spec valid usage text states 'imageExtent must be between minImageExtent and maxImageExtent, inclusive, where minImageExtent and maxImageExtent are members of the VkSurfaceCapabilitiesKHR structure returned by vkGetPhysicalDeviceSurfaceCapabilitiesKHR for the surface' (https://www.khronos.org/registry/vulkan/specs/1.0-extensions/html/vkspec.html#VUID-VkSwapchainCreateInfoKHR-imageExtent-01274)
DS(ERROR / SPEC): object: 0x55a8d84b00f0 type: 3 msgNum: 341838324 - vkCreateSwapChainKHR() called with imageExtent = (1024,768), which is outside the bounds returned by vkGetPhysicalDeviceSurfaceCapabilitiesKHR(): currentExtent = (1011,756), minImageExtent = (1011,756), maxImageExtent = (1011,756). The spec valid usage text states 'imageExtent must be between minImageExtent and maxImageExtent, inclusive, where minImageExtent and maxImageExtent are members of the VkSurfaceCapabilitiesKHR structure returned by vkGetPhysicalDeviceSurfaceCapabilitiesKHR for the surface' (https://www.khronos.org/registry/vulkan/specs/1.0-extensions/html/vkspec.html#VUID-VkSwapchainCreateInfoKHR-imageExtent-01274)
DS(ERROR): object: 0x55a8d84b00f0 type: 3 location: 10260 msg_code: 341838324: Object: 0x55a8d84b00f0 | vkCreateSwapChainKHR() called with imageExtent = (1024,768), which is outside the bounds returned by vkGetPhysicalDeviceSurfaceCapabilitiesKHR(): currentExtent = (1011,756), minImageExtent = (1011,756), maxImageExtent = (1011,756). The spec valid usage text states 'imageExtent must be between minImageExtent and maxImageExtent, inclusive, where minImageExtent and maxImageExtent are members of the VkSurfaceCapabilitiesKHR structure returned by vkGetPhysicalDeviceSurfaceCapabilitiesKHR for the surface' (https://www.khronos.org/registry/vulkan/specs/1.0-extensions/html/vkspec.html#VUID-VkSwapchainCreateInfoKHR-imageExtent-01274)

Pretty sure this is just because the example doesn't handle resizing at all; based on the errors, it needs to free and re-create some resources at the appropriate size. I'd like to help implement this functionality to start learning wgpu a little, but I don't really know where to start.
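
A starting point would be to rebuild the swap chain whenever the window reports a new size, so the requested extent matches what the surface currently allows. This is a minimal sketch reusing the swap-chain setup shown elsewhere in this document; the descriptor values are assumptions matching the examples.

// Sketch: call from the WindowEvent::Resized handler with the new physical size,
// so the underlying vkCreateSwapchainKHR gets an extent inside the surface's bounds.
fn recreate_swap_chain(
    device: &wgpu::Device,
    surface: &wgpu::Surface,
    width: u32,
    height: u32,
) -> wgpu::SwapChain {
    device.create_swap_chain(
        surface,
        &wgpu::SwapChainDescriptor {
            usage: wgpu::TextureUsage::OUTPUT_ATTACHMENT,
            format: wgpu::TextureFormat::Bgra8UnormSrgb, // assumed, as in the examples
            width,
            height,
            present_mode: wgpu::PresentMode::Vsync,
        },
    )
}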

[meta] Latest example failures

Hi! I'm the co-creator of G3N, a 3D game engine for Go. I'm very interested in building a cross-platform game engine for Rust on top of wgpu-rs. I think the WebGPU API is the right level of abstraction to build upon, especially with hopes of targeting browsers later on.

I started diving into the examples and noticed the following failures (on Windows 10):

Example         GL                   DX11                DX12                     Vulkan
capture         ❌ Instance::new()   ✔️                  ✔️                       ✔️
cube            ❌ Bgra8Unorm        ✔️                  ✔️                       ✔️
hello-triangle  ❌ int overflow      ❌ OutOfPoolMemory  ✔️                       ✔️
hello-compute   ❌ Instance::new()   ✔️                  ✔️                       ✔️
mipmap          ❌ Bgra8Unorm        ✔️                  ✔️                       ✔️
msaa-line       ❌ Bgra8Unorm        ❌ sample_count     ✔️ ❗ no visual change?  ✔️ crashes when samples=32
shadow          ❌ Bgra8Unorm        ✔️ has artifacts    ✔️                       ✔️

Each failure has a link to the associated log.
Apparently there has been some improvement since #27.

Hope this helps!

Edit: updated the table!

Crashes when running examples

I'm running Windows 10 1903 build 18917.1000 (Insider Fast) and testing on commit 76c4927. Some of the examples have not been working properly for me. Here is what I've noticed so far:

Example         DX11       DX12                    Vulkan
cube            Blank      Works                   Works
hello-triangle  Crashes    Works                   Works
hello-compute   Incorrect  Works                   Works
mipmap          Blank      Crashes                 Works
msaa-line       Crashes    Crashes                 Crashes
shadow          Blank      Works (minor glitches)  Works (minor glitches)

GL backend build always fails:

   Compiling wgpu v0.2.2 (C:\Users\Jacob Greenfield\wgpu-rs)
error[E0425]: cannot find function `wgpu_instance_create_surface_from_xlib` in module `wgn`
   --> src\lib.rs:511:22
    |
511 |             id: wgn::wgpu_instance_create_surface_from_xlib(self.id, display, window),
    |                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ not found in `wgn`

error[E0425]: cannot find function `wgpu_instance_create_surface_from_macos_layer` in module `wgn`
   --> src\lib.rs:517:22
    |
517 |             id: wgn::wgpu_instance_create_surface_from_macos_layer(self.id, layer),
    |                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ not found in `wgn`

error[E0425]: cannot find function `wgpu_instance_create_surface_from_windows_hwnd` in module `wgn`
   --> src\lib.rs:527:22
    |
527 |             id: wgn::wgpu_instance_create_surface_from_windows_hwnd(self.id, hinstance, hwnd),
    |                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ not found in `wgn`

error: aborting due to 3 previous errors

For more information about this error, try `rustc --explain E0425`.
error: Could not compile `wgpu`.

To learn more, run the command again with --verbose.

hello-triangle on DX11 crash:

>cargo run --features dx11 --example hello-triangle
    Finished dev [unoptimized + debuginfo] target(s) in 0.30s
     Running `target\debug\examples\hello-triangle.exe`
thread 'main' panicked at 'Unexpected error: OutOfPoolMemory', C:\Users\Jacob Greenfield\.cargo\registry\src\github.com-1ecc6299db9ec823\rendy-descriptor-0.3.0\src\allocator.rs:96:17
stack backtrace:
   0: backtrace::backtrace::trace_unsynchronized
             at C:\Users\appveyor\.cargo\registry\src\github.com-1ecc6299db9ec823\backtrace-0.3.29\src\backtrace\mod.rs:66
   1: std::sys_common::backtrace::_print
             at /rustc/2fe7b3383c1e0a8b68f8a809be3ac21006998929\/src\libstd\sys_common\backtrace.rs:47
   2: std::sys_common::backtrace::print
             at /rustc/2fe7b3383c1e0a8b68f8a809be3ac21006998929\/src\libstd\sys_common\backtrace.rs:36
   3: std::panicking::default_hook::{{closure}}
             at /rustc/2fe7b3383c1e0a8b68f8a809be3ac21006998929\/src\libstd\panicking.rs:198
   4: std::panicking::default_hook
             at /rustc/2fe7b3383c1e0a8b68f8a809be3ac21006998929\/src\libstd\panicking.rs:212
   5: std::panicking::rust_panic_with_hook
             at /rustc/2fe7b3383c1e0a8b68f8a809be3ac21006998929\/src\libstd\panicking.rs:475
   6: std::panicking::continue_panic_fmt
             at /rustc/2fe7b3383c1e0a8b68f8a809be3ac21006998929\/src\libstd\panicking.rs:382
   7: std::panicking::begin_panic_fmt
             at /rustc/2fe7b3383c1e0a8b68f8a809be3ac21006998929\/src\libstd\panicking.rs:337
   8: rendy_descriptor::allocator::allocate_from_pool::{{closure}}<gfx_backend_dx11::Backend>
             at <::std::macros::panic macros>:9
   9: core::result::Result<(), gfx_hal::pso::descriptor::AllocationError>::map_err<(),gfx_hal::pso::descriptor::AllocationError,gfx_hal::device::OutOfMemory,closure>
             at /rustc/2fe7b3383c1e0a8b68f8a809be3ac21006998929\src\libcore\result.rs:522
  10: rendy_descriptor::allocator::allocate_from_pool<gfx_backend_dx11::Backend>
             at C:\Users\Jacob Greenfield\.cargo\registry\src\github.com-1ecc6299db9ec823\rendy-descriptor-0.3.0\src\allocator.rs:80
  11: rendy_descriptor::allocator::DescriptorBucket<gfx_backend_dx11::Backend>::allocate<gfx_backend_dx11::Backend>
             at C:\Users\Jacob Greenfield\.cargo\registry\src\github.com-1ecc6299db9ec823\rendy-descriptor-0.3.0\src\allocator.rs:209
  12: rendy_descriptor::allocator::DescriptorAllocator<gfx_backend_dx11::Backend>::allocate<gfx_backend_dx11::Backend,arrayvec::ArrayVec<[rendy_descriptor::allocator::DescriptorSet<gfx_backend_dx11::Backend>; 1]>>
             at C:\Users\Jacob Greenfield\.cargo\registry\src\github.com-1ecc6299db9ec823\rendy-descriptor-0.3.0\src\allocator.rs:308
  13: wgpu_native::device::device_create_bind_group
             at C:\Users\Jacob Greenfield\.cargo\git\checkouts\wgpu-53e70f8674b08dd4\a667d50\wgpu-native\src\device.rs:1032
  14: wgpu_native::device::wgpu_device_create_bind_group
             at C:\Users\Jacob Greenfield\.cargo\git\checkouts\wgpu-53e70f8674b08dd4\a667d50\wgpu-native\src\device.rs:1158
  15: wgpu::Device::create_bind_group
             at .\src\lib.rs:606
  16: hello_triangle::main
             at .\examples\hello-triangle\main.rs:68
  17: std::rt::lang_start::{{closure}}<()>
             at /rustc/2fe7b3383c1e0a8b68f8a809be3ac21006998929\src\libstd\rt.rs:64
  18: std::rt::lang_start_internal::{{closure}}
             at /rustc/2fe7b3383c1e0a8b68f8a809be3ac21006998929\/src\libstd\rt.rs:49
  19: std::panicking::try::do_call<closure,i32>
             at /rustc/2fe7b3383c1e0a8b68f8a809be3ac21006998929\/src\libstd\panicking.rs:294
  20: panic_unwind::__rust_maybe_catch_panic
             at /rustc/2fe7b3383c1e0a8b68f8a809be3ac21006998929\/src\libpanic_unwind\lib.rs:82
  21: std::panicking::try
             at /rustc/2fe7b3383c1e0a8b68f8a809be3ac21006998929\/src\libstd\panicking.rs:273
  22: std::panic::catch_unwind
             at /rustc/2fe7b3383c1e0a8b68f8a809be3ac21006998929\/src\libstd\panic.rs:388
  23: std::rt::lang_start_internal
             at /rustc/2fe7b3383c1e0a8b68f8a809be3ac21006998929\/src\libstd\rt.rs:48
  24: std::rt::lang_start<()>
             at /rustc/2fe7b3383c1e0a8b68f8a809be3ac21006998929\src\libstd\rt.rs:64
  25: main
  26: invoke_main
             at d:\agent\_work\2\s\src\vctools\crt\vcstartup\src\startup\exe_common.inl:78
  27: __scrt_common_main_seh
             at d:\agent\_work\2\s\src\vctools\crt\vcstartup\src\startup\exe_common.inl:288
  28: BaseThreadInitThunk
  29: RtlUserThreadStart
error: process didn't exit successfully: `target\debug\examples\hello-triangle.exe` (exit code: 0xc000001d, STATUS_ILLEGAL_INSTRUCTION)

hello-compute on DX11, incorrect output:

>cargo run --features dx11 --example hello-compute 1 2 3 4
    Finished dev [unoptimized + debuginfo] target(s) in 0.29s
     Running `target\debug\examples\hello-compute.exe 1 2 3 4`
Times: [1, 2, 3, 4]

mipmap on DX12 crash (I think caused by this):

>cargo run --features dx12 --example mipmap
    Finished dev [unoptimized + debuginfo] target(s) in 0.28s
     Running `target\debug\examples\mipmap.exe`
thread 'main' panicked at 'assertion failed: `(left == right)`
  left: `1`,
 right: `9`', C:\Users\Jacob Greenfield\.cargo\registry\src\github.com-1ecc6299db9ec823\gfx-backend-dx12-0.2.0\src\device.rs:553:9
stack backtrace:
   0: backtrace::backtrace::trace_unsynchronized
             at C:\Users\appveyor\.cargo\registry\src\github.com-1ecc6299db9ec823\backtrace-0.3.29\src\backtrace\mod.rs:66
   1: std::sys_common::backtrace::_print
             at /rustc/2fe7b3383c1e0a8b68f8a809be3ac21006998929\/src\libstd\sys_common\backtrace.rs:47
   2: std::sys_common::backtrace::print
             at /rustc/2fe7b3383c1e0a8b68f8a809be3ac21006998929\/src\libstd\sys_common\backtrace.rs:36
   3: std::panicking::default_hook::{{closure}}
             at /rustc/2fe7b3383c1e0a8b68f8a809be3ac21006998929\/src\libstd\panicking.rs:198
   4: std::panicking::default_hook
             at /rustc/2fe7b3383c1e0a8b68f8a809be3ac21006998929\/src\libstd\panicking.rs:212
   5: std::panicking::rust_panic_with_hook
             at /rustc/2fe7b3383c1e0a8b68f8a809be3ac21006998929\/src\libstd\panicking.rs:475
   6: std::panicking::continue_panic_fmt
             at /rustc/2fe7b3383c1e0a8b68f8a809be3ac21006998929\/src\libstd\panicking.rs:382
   7: std::panicking::begin_panic_fmt
             at /rustc/2fe7b3383c1e0a8b68f8a809be3ac21006998929\/src\libstd\panicking.rs:337
   8: gfx_backend_dx12::Device::view_image_as_render_target_impl
             at C:\Users\Jacob Greenfield\.cargo\registry\src\github.com-1ecc6299db9ec823\gfx-backend-dx12-0.2.0\src\device.rs:553
   9: gfx_backend_dx12::Device::view_image_as_render_target
             at C:\Users\Jacob Greenfield\.cargo\registry\src\github.com-1ecc6299db9ec823\gfx-backend-dx12-0.2.0\src\device.rs:635
  10: gfx_backend_dx12::device::{{impl}}::create_image_view
             at C:\Users\Jacob Greenfield\.cargo\registry\src\github.com-1ecc6299db9ec823\gfx-backend-dx12-0.2.0\src\device.rs:2327
  11: wgpu_native::device::texture_create_view
             at C:\Users\Jacob Greenfield\.cargo\git\checkouts\wgpu-53e70f8674b08dd4\a667d50\wgpu-native\src\device.rs:784
  12: wgpu_native::device::wgpu_texture_create_default_view
             at C:\Users\Jacob Greenfield\.cargo\git\checkouts\wgpu-53e70f8674b08dd4\a667d50\wgpu-native\src\device.rs:859
  13: wgpu::Texture::create_default_view
             at .\src\lib.rs:956
  14: mipmap::{{impl}}::init
             at .\examples\mipmap\main.rs:250
  15: mipmap::framework::run<mipmap::Example>
             at .\examples\framework.rs:112
  16: mipmap::main
             at .\examples\mipmap\main.rs:429
  17: std::rt::lang_start::{{closure}}<()>
             at /rustc/2fe7b3383c1e0a8b68f8a809be3ac21006998929\src\libstd\rt.rs:64
  18: std::rt::lang_start_internal::{{closure}}
             at /rustc/2fe7b3383c1e0a8b68f8a809be3ac21006998929\/src\libstd\rt.rs:49
  19: std::panicking::try::do_call<closure,i32>
             at /rustc/2fe7b3383c1e0a8b68f8a809be3ac21006998929\/src\libstd\panicking.rs:294
  20: panic_unwind::__rust_maybe_catch_panic
             at /rustc/2fe7b3383c1e0a8b68f8a809be3ac21006998929\/src\libpanic_unwind\lib.rs:82
  21: std::panicking::try
             at /rustc/2fe7b3383c1e0a8b68f8a809be3ac21006998929\/src\libstd\panicking.rs:273
  22: std::panic::catch_unwind
             at /rustc/2fe7b3383c1e0a8b68f8a809be3ac21006998929\/src\libstd\panic.rs:388
  23: std::rt::lang_start_internal
             at /rustc/2fe7b3383c1e0a8b68f8a809be3ac21006998929\/src\libstd\rt.rs:48
  24: std::rt::lang_start<()>
             at /rustc/2fe7b3383c1e0a8b68f8a809be3ac21006998929\src\libstd\rt.rs:64
  25: main
  26: invoke_main
             at d:\agent\_work\2\s\src\vctools\crt\vcstartup\src\startup\exe_common.inl:78
  27: __scrt_common_main_seh
             at d:\agent\_work\2\s\src\vctools\crt\vcstartup\src\startup\exe_common.inl:288
  28: BaseThreadInitThunk
  29: RtlUserThreadStart
error: process didn't exit successfully: `target\debug\examples\mipmap.exe` (exit code: 0xc000001d, STATUS_ILLEGAL_INSTRUCTION)

msaa-line on DX12 crash (the same error happens for DX11 and Vulkan too):

>cargo run --features dx12 --example msaa-line
    Finished dev [unoptimized + debuginfo] target(s) in 0.28s
     Running `target\debug\examples\msaa-line.exe`
Press left/right arrow keys to change sample_count.
sample_count: 2
thread 'main' panicked at 'No rendering work has been submitted for the presented frame (image 0)', C:\Users\Jacob Greenfield\.cargo\git\checkouts\wgpu-53e70f8674b08dd4\a667d50\wgpu-native\src\swap_chain.rs:217:5
stack backtrace:
   0: backtrace::backtrace::trace_unsynchronized
             at C:\Users\appveyor\.cargo\registry\src\github.com-1ecc6299db9ec823\backtrace-0.3.29\src\backtrace\mod.rs:66
   1: std::sys_common::backtrace::_print
             at /rustc/2fe7b3383c1e0a8b68f8a809be3ac21006998929\/src\libstd\sys_common\backtrace.rs:47
   2: std::sys_common::backtrace::print
             at /rustc/2fe7b3383c1e0a8b68f8a809be3ac21006998929\/src\libstd\sys_common\backtrace.rs:36
   3: std::panicking::default_hook::{{closure}}
             at /rustc/2fe7b3383c1e0a8b68f8a809be3ac21006998929\/src\libstd\panicking.rs:198
   4: std::panicking::default_hook
             at /rustc/2fe7b3383c1e0a8b68f8a809be3ac21006998929\/src\libstd\panicking.rs:212
   5: std::panicking::rust_panic_with_hook
             at /rustc/2fe7b3383c1e0a8b68f8a809be3ac21006998929\/src\libstd\panicking.rs:475
   6: std::panicking::continue_panic_fmt
             at /rustc/2fe7b3383c1e0a8b68f8a809be3ac21006998929\/src\libstd\panicking.rs:382
   7: std::panicking::begin_panic_fmt
             at /rustc/2fe7b3383c1e0a8b68f8a809be3ac21006998929\/src\libstd\panicking.rs:337
   8: wgpu_native::swap_chain::wgpu_swap_chain_present
             at C:\Users\Jacob Greenfield\.cargo\git\checkouts\wgpu-53e70f8674b08dd4\a667d50\wgpu-native\src\swap_chain.rs:217
   9: wgpu::{{impl}}::drop
             at .\src\lib.rs:1256
  10: core::ptr::real_drop_in_place<wgpu::SwapChainOutput>
             at /rustc/2fe7b3383c1e0a8b68f8a809be3ac21006998929\src\libcore\ptr\mod.rs:197
  11: msaa_line::framework::run<msaa_line::Example>
             at .\examples\framework.rs:152
  12: msaa_line::main
             at .\examples\msaa-line\main.rs:211
  13: std::rt::lang_start::{{closure}}<()>
             at /rustc/2fe7b3383c1e0a8b68f8a809be3ac21006998929\src\libstd\rt.rs:64
  14: std::rt::lang_start_internal::{{closure}}
             at /rustc/2fe7b3383c1e0a8b68f8a809be3ac21006998929\/src\libstd\rt.rs:49
  15: std::panicking::try::do_call<closure,i32>
             at /rustc/2fe7b3383c1e0a8b68f8a809be3ac21006998929\/src\libstd\panicking.rs:294
  16: panic_unwind::__rust_maybe_catch_panic
             at /rustc/2fe7b3383c1e0a8b68f8a809be3ac21006998929\/src\libpanic_unwind\lib.rs:82
  17: std::panicking::try
             at /rustc/2fe7b3383c1e0a8b68f8a809be3ac21006998929\/src\libstd\panicking.rs:273
  18: std::panic::catch_unwind
             at /rustc/2fe7b3383c1e0a8b68f8a809be3ac21006998929\/src\libstd\panic.rs:388
  19: std::rt::lang_start_internal
             at /rustc/2fe7b3383c1e0a8b68f8a809be3ac21006998929\/src\libstd\rt.rs:48
  20: std::rt::lang_start<()>
             at /rustc/2fe7b3383c1e0a8b68f8a809be3ac21006998929\src\libstd\rt.rs:64
  21: main
  22: invoke_main
             at d:\agent\_work\2\s\src\vctools\crt\vcstartup\src\startup\exe_common.inl:78
  23: __scrt_common_main_seh
             at d:\agent\_work\2\s\src\vctools\crt\vcstartup\src\startup\exe_common.inl:288
  24: BaseThreadInitThunk
  25: RtlUserThreadStart
error: process didn't exit successfully: `target\debug\examples\msaa-line.exe` (exit code: 0xc000001d, STATUS_ILLEGAL_INSTRUCTION)

macOS: Window losing focus will break rendering.

Using master branch wgpu and

[patch.crates-io]
gfx-hal = {git = "https://github.com/gfx-rs/gfx.git", branch = "hal-0.3" }

When a window loses focus, rendering no longer works.
I can't even submit my own clear color, just get a black screen.
Occasionally this happens on launch too: sometimes it just works, but more often it breaks.

Recreating the swapchain causes flickering between black and my rendering.
I recreate the swapchain for these events:
Resized(_, _) | SizeChanged(_, _) | Maximized | Hidden | Minimized

It shouldn't flicker, because it's not constantly recreating the swapchain; those events only happen a few times at launch.
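
One way to rule out redundant recreation is to remember the extent the current swapchain was built with and only rebuild when the reported size actually changes. This is a hypothetical helper, independent of wgpu, just to illustrate the filtering.

// Sketch: returns true only when the extent really changed, so Maximized/Hidden/
// Minimized events with an unchanged size don't trigger a rebuild (and a flicker).
fn needs_recreate(last_size: &mut (u32, u32), new_size: (u32, u32)) -> bool {
    if *last_size == new_size {
        return false; // same extent: keep the existing swap chain
    }
    *last_size = new_size;
    true
}

fn main() {
    let mut last = (1024, 768);
    assert!(!needs_recreate(&mut last, (1024, 768))); // no change, no rebuild
    assert!(needs_recreate(&mut last, (800, 600)));   // real resize, rebuild once
}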

Curiosity killed the cat - why does owning the window and surface change everything?

First off: if you read this to the end, I've found a solution to the initial problem. Curiosity is my motivation for this ticket. It may very well be that this is no issue at all, or just one on my side. In any case, I'd like to understand it.

Backstory: I am porting a Vulkano-based renderer to wgpu. The examples run on my machine, so I jump right in and start by porting the triangle example while wrapping everything in a Renderer struct to replace the Vulkano one.

The code below is what it looks like at the moment on my machine. Nothing is stripped, nothing omitted.

// main.rs

mod wgpu_renderer;

use wgpu_renderer::WgpuRenderEngine;

const WINDOW_TITLE: &str = "This is a triumph";

pub fn main()  {
    use winit::{
        event_loop::ControlFlow,
        event,
    };

    let (mut gfx, el) = WgpuRenderEngine::new(WINDOW_TITLE, 1920.0, 1080.0);

    el.run(move |event, _, control_flow| {
        *control_flow = ControlFlow::Poll;
        match event {
            event::Event::WindowEvent { event, .. } => match event {
                event::WindowEvent::KeyboardInput {
                    input:
                        event::KeyboardInput {
                            virtual_keycode: Some(event::VirtualKeyCode::Escape),
                            state: event::ElementState::Pressed,
                            ..
                        },
                        ..
                }
                | event::WindowEvent::CloseRequested => {
                    *control_flow = ControlFlow::Exit;
                }
                _ => {}
            },
            event::Event::EventsCleared => gfx.render(),
            _ => (),
        }
    });
}

// wgpu_renderer.rs

use winit::{
    event_loop::EventLoop,
    window,
};

pub struct WgpuRenderEngine {
    device: wgpu::Device,
    queue: wgpu::Queue,
    swap_chain: wgpu::SwapChain,
    dummy_pipeline: wgpu::RenderPipeline,
    dummy_bind_group: wgpu::BindGroup,
}

impl WgpuRenderEngine {
    pub fn new<T>(title: T, width: f64, height: f64) -> (Self, EventLoop<()>)
        where T: Into<String>
    {
        // FIXME get rid of the panics and return an appropriate error instead
        let event_loop = EventLoop::new();

        let size = winit::dpi::LogicalSize::new(width, height);
        let window = window::WindowBuilder::new()
            .with_inner_size(size)
            .with_title(title)
            .build(&event_loop)
            .expect("Failed to create a window");

        let surface = wgpu::Surface::create(&window);

        let adapter = wgpu::Adapter::request(&wgpu::RequestAdapterOptions {
            power_preference: wgpu::PowerPreference::HighPerformance,
            backends: wgpu::BackendBit::PRIMARY
        }).expect("Failed to retrieve an adapter");

        let (device, queue) = adapter.request_device(&wgpu::DeviceDescriptor {
            extensions: wgpu::Extensions {
                anisotropic_filtering: false,
            },
            limits: wgpu::Limits::default(),
        });

        let swap_chain = device.create_swap_chain(
            &surface,
            &wgpu::SwapChainDescriptor {
                usage: wgpu::TextureUsage::OUTPUT_ATTACHMENT,
                format: wgpu::TextureFormat::Bgra8UnormSrgb,
                width: size.width.round() as u32,
                height: size.height.round() as u32,
                present_mode: wgpu::PresentMode::Vsync,
            },
        );

        let (dummy_pipeline, dummy_bind_group) = self::shaders::dummy_render_pipeline(&device);

        let moi = Self {
            device,
            queue,
            swap_chain,
            dummy_pipeline,
            dummy_bind_group,
        };

        (moi, event_loop)
    }

    pub fn render(&mut self) {
        let frame = self.swap_chain.get_next_texture();
        let mut encoder = self.device.create_command_encoder(&wgpu::CommandEncoderDescriptor { todo: 0 });
        {
            let mut rpass = encoder.begin_render_pass(&wgpu::RenderPassDescriptor {
                color_attachments: &[wgpu::RenderPassColorAttachmentDescriptor {
                    attachment: &frame.view,
                    resolve_target: None,
                    load_op: wgpu::LoadOp::Clear,
                    store_op: wgpu::StoreOp::Store,
                    clear_color: wgpu::Color::GREEN,
                }],
                depth_stencil_attachment: None,
            });
            rpass.set_pipeline(&self.dummy_pipeline);
            rpass.set_bind_group(0, &self.dummy_bind_group, &[]);
            rpass.draw(0 .. 3, 0 .. 1);
        }
        self.queue.submit(&[encoder.finish()]);
    }
}

mod shaders {
    pub fn dummy_render_pipeline(device: &wgpu::Device) -> (wgpu::RenderPipeline, wgpu::BindGroup) {
        let vs = include_bytes!("shader.vert.spv");
        let vs_module = device.create_shader_module(&wgpu::read_spirv(std::io::Cursor::new(&vs[..])).unwrap());

        let fs = include_bytes!("shader.frag.spv");
        let fs_module = device.create_shader_module(&wgpu::read_spirv(std::io::Cursor::new(&fs[..])).unwrap());

        let bind_group_layout = device.create_bind_group_layout(&wgpu::BindGroupLayoutDescriptor {
            bindings: &[],
        });
        let bind_group = device.create_bind_group(&wgpu::BindGroupDescriptor {
            layout: &bind_group_layout,
            bindings: &[],
        });
        let pipeline_layout = device.create_pipeline_layout(&wgpu::PipelineLayoutDescriptor {
            bind_group_layouts: &[&bind_group_layout],
        });

        let pipeline = device.create_render_pipeline(&wgpu::RenderPipelineDescriptor {
            layout: &pipeline_layout,
            vertex_stage: wgpu::ProgrammableStageDescriptor {
                module: &vs_module,
                entry_point: "main",
            },
            fragment_stage: Some(wgpu::ProgrammableStageDescriptor {
                module: &fs_module,
                entry_point: "main",
            }),
            rasterization_state: Some(wgpu::RasterizationStateDescriptor {
                front_face: wgpu::FrontFace::Ccw,
                cull_mode: wgpu::CullMode::None,
                depth_bias: 0,
                depth_bias_slope_scale: 0.0,
                depth_bias_clamp: 0.0,
            }),
            primitive_topology: wgpu::PrimitiveTopology::TriangleList,
            color_states: &[wgpu::ColorStateDescriptor {
                format: wgpu::TextureFormat::Bgra8UnormSrgb,
                color_blend: wgpu::BlendDescriptor::REPLACE,
                alpha_blend: wgpu::BlendDescriptor::REPLACE,
                write_mask: wgpu::ColorWrite::ALL,
            }],
            depth_stencil_state: None,
            index_format: wgpu::IndexFormat::Uint16,
            vertex_buffers: &[],
            sample_count: 1,
            sample_mask: !0,
            alpha_to_coverage_enabled: false,
        });

        (pipeline, bind_group)
    }
}

The compiled shaders I copied from https://github.com/gfx-rs/wgpu-rs/tree/master/examples/hello-triangle

Running this leads to the panic "GPU took too much time processing last frames :(":

cargo r --release
warning: unused manifest key: bin.0.include
    Finished release [optimized] target(s) in 0.08s
     Running `target/release/client`
thread 'main' panicked at 'GPU took too much time processing last frames :(', /home/abendstolz/.cargo/git/checkouts/wgpu-53e70f8674b08dd4/78fbbba/wgpu-native/src/swap_chain.rs:150:17
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace.

Okay, what now? The example works just fine. I verify this again by replacing my code with it. All works, so I start playing around. This is the version that works:

pub struct WgpuRenderEngine {
    device: wgpu::Device,
    queue: wgpu::Queue,
    swap_chain: wgpu::SwapChain,
    window: winit::window::Window,
    surface: wgpu::Surface,
    dummy_pipeline: wgpu::RenderPipeline,
    dummy_bind_group: wgpu::BindGroup,
}

impl WgpuRenderEngine {
    pub fn new<T>(title: T, width: f64, height: f64) -> (Self, EventLoop<()>)
        where T: Into<String>
    {
        // FIXME get rid of the panics and return an appropriate error instead
        let event_loop = EventLoop::new();

        let size = winit::dpi::LogicalSize::new(width, height);
        let window = window::WindowBuilder::new()
            .with_inner_size(size)
            .with_title(title)
            .build(&event_loop)
            .expect("Failed to create a window");

        let surface = wgpu::Surface::create(&window);

        let adapter = wgpu::Adapter::request(&wgpu::RequestAdapterOptions {
            power_preference: wgpu::PowerPreference::HighPerformance,
            backends: wgpu::BackendBit::PRIMARY
        }).expect("Failed to retrieve an adapter");

        let (device, queue) = adapter.request_device(&wgpu::DeviceDescriptor {
            extensions: wgpu::Extensions {
                anisotropic_filtering: false,
            },
            limits: wgpu::Limits::default(),
        });

        let swap_chain = device.create_swap_chain(
            &surface,
            &wgpu::SwapChainDescriptor {
                usage: wgpu::TextureUsage::OUTPUT_ATTACHMENT,
                format: wgpu::TextureFormat::Bgra8UnormSrgb,
                width: size.width.round() as u32,
                height: size.height.round() as u32,
                present_mode: wgpu::PresentMode::Vsync,
            },
        );

        let (dummy_pipeline, dummy_bind_group) = self::shaders::dummy_render_pipeline(&device);

        let moi = Self {
            device,
            queue,
            swap_chain,
            window,
            surface,
            dummy_pipeline,
            dummy_bind_group,
        };

        (moi, event_loop)
    }

    pub fn render(&mut self) {
        let frame = self.swap_chain.get_next_texture();
        let mut encoder = self.device.create_command_encoder(&wgpu::CommandEncoderDescriptor { todo: 0 });
        {
            let mut rpass = encoder.begin_render_pass(&wgpu::RenderPassDescriptor {
                color_attachments: &[wgpu::RenderPassColorAttachmentDescriptor {
                    attachment: &frame.view,
                    resolve_target: None,
                    load_op: wgpu::LoadOp::Clear,
                    store_op: wgpu::StoreOp::Store,
                    clear_color: wgpu::Color::GREEN,
                }],
                depth_stencil_attachment: None,
            });
            rpass.set_pipeline(&self.dummy_pipeline);
            rpass.set_bind_group(0, &self.dummy_bind_group, &[]);
            rpass.draw(0 .. 3, 0 .. 1);
        }
        self.queue.submit(&[encoder.finish()]);
    }
}

So, the only difference is that I now save window and surface in the renderer struct. They are not used anywhere so far (although I will need them inside the renderer, so owning them makes perfect sense).

I jump between the two states and it's working <-> not working with the above error. No additional changes whatsoever.

I can only begin to imagine stuff that could happen under the hood, like some unsafe-pointer magic which leads to stuff being dropped if it's not owned and goes out of scope.

But instead of guessing I would love to understand what's really happening here. Obviously rustc didn't save my ass this time.
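
One plausible reading, consistent with the difference observed above, is a plain drop-order hazard: if the window is only a local inside new(), it is dropped when new() returns, while the surface may still hold a raw handle derived from it. This is a deliberately non-wgpu sketch with hypothetical stand-in types, not a statement about wgpu's internals.

struct NativeWindow;                      // stands in for winit::window::Window
struct Surface(*const NativeWindow);      // stands in for a surface keeping a raw handle

fn make_surface() -> Surface {
    let window = NativeWindow;                    // local, neither returned nor stored
    let surface = Surface(&window as *const _);   // surface keeps only the raw address
    surface                                       // `window` is dropped here; the pointer dangles
}

fn main() {
    // Compiles without complaint: raw pointers carry no lifetime, so rustc has
    // nothing to save you from once the handle has been erased to a pointer.
    let _surface = make_surface();
}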

Just in case it matters, here is my GPU info (and OS/kernel)

~ glxinfo | grep OpenGL

OpenGL vendor string: X.Org
OpenGL renderer string: AMD Radeon (TM) RX 470 Graphics (POLARIS10, DRM 3.33.0, 5.3.7-arch1-1-ARCH, LLVM 9.0.0)
OpenGL core profile version string: 4.5 (Core Profile) Mesa 19.2.2
OpenGL core profile shading language version string: 4.50
OpenGL core profile context flags: (none)
OpenGL core profile profile mask: core profile
OpenGL core profile extensions:
OpenGL version string: 4.5 (Compatibility Profile) Mesa 19.2.2
OpenGL shading language version string: 4.50
OpenGL context flags: (none)
OpenGL profile mask: compatibility profile
OpenGL extensions:
OpenGL ES profile version string: OpenGL ES 3.2 Mesa 19.2.2
OpenGL ES profile shading language version string: OpenGL ES GLSL ES 3.20
OpenGL ES profile extensions:
[dependencies]
...
# Graphics
wgpu = { git = "https://github.com/gfx-rs/wgpu-rs", rev = "ed2c67f762970d0099c1e6c6e078fb645afbf964" }
winit = "0.20.0-alpha4"
...

If you need additional details, I am happy to provide them and whatever else helps you solve this "mystery" (maybe it's totally obvious to you!)

texture sampling in vertex shader crashes on vulkan

It works with the DX12 backend but crashes with the Vulkan backend (both on Windows and Linux, same trace).

RTX 2070

trace :

thread 'main' panicked at 'assertion failed: `(left == right)`
  left: `Ok(())`,
 right: `Err(ERROR_DEVICE_LOST)`', C:\Users\ctrl\.cargo\git\checkouts\gfx-e86e7f3ebdbc4218\3d5db15\src\backend\vulkan\src\lib.rs:1156:9
stack backtrace:
   0: backtrace::backtrace::trace_unsynchronized
             at C:\Users\VssAdministrator\.cargo\registry\src\github.com-1ecc6299db9ec823\backtrace-0.3.34\src\backtrace\mod.rs:66
   1: std::sys_common::backtrace::_print
             at /rustc/625451e376bb2e5283fc4741caa0a3e8a2ca4d54\/src\libstd\sys_common\backtrace.rs:47
   2: std::sys_common::backtrace::print
             at /rustc/625451e376bb2e5283fc4741caa0a3e8a2ca4d54\/src\libstd\sys_common\backtrace.rs:36
   3: std::panicking::default_hook::{{closure}}
             at /rustc/625451e376bb2e5283fc4741caa0a3e8a2ca4d54\/src\libstd\panicking.rs:200
   4: std::panicking::default_hook
             at /rustc/625451e376bb2e5283fc4741caa0a3e8a2ca4d54\/src\libstd\panicking.rs:214
   5: std::panicking::rust_panic_with_hook
             at /rustc/625451e376bb2e5283fc4741caa0a3e8a2ca4d54\/src\libstd\panicking.rs:477
   6: std::panicking::continue_panic_fmt
             at /rustc/625451e376bb2e5283fc4741caa0a3e8a2ca4d54\/src\libstd\panicking.rs:384
   7: std::panicking::begin_panic_fmt
             at /rustc/625451e376bb2e5283fc4741caa0a3e8a2ca4d54\/src\libstd\panicking.rs:339
   8: gfx_backend_vulkan::{{impl}}::submit<gfx_backend_vulkan::command::CommandBuffer,core::iter::adapters::flatten::FlatMap<core::slice::Iter<wgpu_native::id::Id<wgpu_native::command::CommandBuffer<gfx_backend_empty::Backend>>>, alloc::vec::Vec<gfx_backend_vul
             at <::std::macros::panic macros>:8
   9: wgpu_native::device::queue_submit<gfx_backend_vulkan::Backend>
             at C:\Users\ctrl\.cargo\git\checkouts\wgpu-53e70f8674b08dd4\c3609d7\wgpu-native\src\device.rs:1527
  10: wgpu_native::device::wgpu_queue_submit
             at C:\Users\ctrl\.cargo\git\checkouts\wgpu-53e70f8674b08dd4\c3609d7\wgpu-native\src\device.rs:1568
  11: wgpu::Queue::submit
             at .\src\lib.rs:1332
  12: cube::framework::run::{{closure}}<cube::Example>
             at .\examples\framework.rs:156
  13: winit::platform_impl::platform::event_loop::{{impl}}::run_return::{{closure}}<(),closure-0>
             at C:\Users\ctrl\.cargo\registry\src\github.com-1ecc6299db9ec823\winit-0.20.0-alpha3\src\platform_impl\windows\event_loop.rs:178
  14: alloc::boxed::{{impl}}::call_mut<(winit::event::Event<()>, mut winit::event_loop::ControlFlow*),FnMut<(winit::event::Event<()>, mut winit::event_loop::ControlFlow*)>>
             at /rustc/625451e376bb2e5283fc4741caa0a3e8a2ca4d54\src\liballoc\boxed.rs:794
  15: winit::platform_impl::platform::event_loop::{{impl}}::call_event_handler::{{closure}}<()>
             at C:\Users\ctrl\.cargo\registry\src\github.com-1ecc6299db9ec823\winit-0.20.0-alpha3\src\platform_impl\windows\event_loop.rs:547
  16: std::panic::{{impl}}::call_once<(),closure-0>
             at /rustc/625451e376bb2e5283fc4741caa0a3e8a2ca4d54\src\libstd\panic.rs:315
  17: std::panicking::try::do_call<std::panic::AssertUnwindSafe<closure-0>,()>
             at /rustc/625451e376bb2e5283fc4741caa0a3e8a2ca4d54\src\libstd\panicking.rs:296
  18: panic_unwind::__rust_maybe_catch_panic
             at /rustc/625451e376bb2e5283fc4741caa0a3e8a2ca4d54\/src\libpanic_unwind\lib.rs:80
  19: std::panicking::try<(),std::panic::AssertUnwindSafe<closure-0>>
             at /rustc/625451e376bb2e5283fc4741caa0a3e8a2ca4d54\src\libstd\panicking.rs:275
  20: std::panic::catch_unwind<std::panic::AssertUnwindSafe<closure-0>,()>
             at /rustc/625451e376bb2e5283fc4741caa0a3e8a2ca4d54\src\libstd\panic.rs:394
  21: winit::platform_impl::platform::event_loop::EventLoopRunner<()>::call_event_handler<()>
             at C:\Users\ctrl\.cargo\registry\src\github.com-1ecc6299db9ec823\winit-0.20.0-alpha3\src\platform_impl\windows\event_loop.rs:545
  22: winit::platform_impl::platform::event_loop::EventLoopRunner<()>::events_cleared<()>
             at C:\Users\ctrl\.cargo\registry\src\github.com-1ecc6299db9ec823\winit-0.20.0-alpha3\src\platform_impl\windows\event_loop.rs:484
  23: winit::platform_impl::platform::event_loop::EventLoop<()>::run_return<(),closure-0>
             at C:\Users\ctrl\.cargo\registry\src\github.com-1ecc6299db9ec823\winit-0.20.0-alpha3\src\platform_impl\windows\event_loop.rs:225
  24: winit::platform_impl::platform::event_loop::EventLoop<()>::run<(),closure-0>
             at C:\Users\ctrl\.cargo\registry\src\github.com-1ecc6299db9ec823\winit-0.20.0-alpha3\src\platform_impl\windows\event_loop.rs:162
  25: winit::event_loop::EventLoop<()>::run<(),closure-0>
             at C:\Users\ctrl\.cargo\registry\src\github.com-1ecc6299db9ec823\winit-0.20.0-alpha3\src\event_loop.rs:140
  26: cube::framework::run<cube::Example>
             at .\examples\framework.rs:115
  27: cube::main
             at .\examples\cube\main.rs:357
  28: std::rt::lang_start::{{closure}}<()>
             at /rustc/625451e376bb2e5283fc4741caa0a3e8a2ca4d54\src\libstd\rt.rs:64
  29: std::rt::lang_start_internal::{{closure}}
             at /rustc/625451e376bb2e5283fc4741caa0a3e8a2ca4d54\/src\libstd\rt.rs:49
  30: std::panicking::try::do_call<closure-0,i32>
             at /rustc/625451e376bb2e5283fc4741caa0a3e8a2ca4d54\/src\libstd\panicking.rs:296
  31: panic_unwind::__rust_maybe_catch_panic
             at /rustc/625451e376bb2e5283fc4741caa0a3e8a2ca4d54\/src\libpanic_unwind\lib.rs:80
  32: std::panicking::try
             at /rustc/625451e376bb2e5283fc4741caa0a3e8a2ca4d54\/src\libstd\panicking.rs:275
  33: std::panic::catch_unwind
             at /rustc/625451e376bb2e5283fc4741caa0a3e8a2ca4d54\/src\libstd\panic.rs:394
  34: std::rt::lang_start_internal
             at /rustc/625451e376bb2e5283fc4741caa0a3e8a2ca4d54\/src\libstd\rt.rs:48
  35: std::rt::lang_start<()>
             at /rustc/625451e376bb2e5283fc4741caa0a3e8a2ca4d54\src\libstd\rt.rs:64
  36: main
  37: invoke_main
             at d:\agent\_work\3\s\src\vctools\crt\vcstartup\src\startup\exe_common.inl:78
  38: __scrt_common_main_seh
             at d:\agent\_work\3\s\src\vctools\crt\vcstartup\src\startup\exe_common.inl:288
  39: BaseThreadInitThunk
  40: RtlUserThreadStart
error: process didn't exit successfully: `target\debug\examples\cube.exe` (exit code: 101)

Steps to reproduce: take the cube example and change the shaders to the following:

shader.vert

#version 450
layout(set = 0, binding = 1) uniform texture2D t_Color;
layout(set = 0, binding = 2) uniform sampler s_Color;
void main() {
    gl_Position = texture(sampler2D(t_Color, s_Color), vec2(0.0,0.0));
}

shader.frag

#version 450
layout(location = 0) out vec4 o_Target;
void main() {
    o_Target =  vec4(0.0);
}

Compiling the shaders with either glsl_to_spirv or shaderc produces the same behavior.

skybox cubemap example

A new example for how to write an idiomatic skybox with a cubemap in wgpu-rs would be great.

Opening this issue in case there's some superhuman out there who wants to help us all out.

macOS: "acquire_image failed, re-creating"

Running any example on macOS with RUST_LOG=info logs the following warnings, followed by a panic.

[2019-08-31T07:23:56Z INFO  shadow::framework] Initializing the window...
[2019-08-31T07:23:56Z INFO  wgpu_native::instance] Adapter Metal AdapterInfo { name: "Intel(R) HD Graphics 630", vendor: 0, device: 0, device_type: IntegratedGpu }
[2019-08-31T07:23:56Z INFO  wgpu_native::device] creating swap chain SwapChainDescriptor { usage: OUTPUT_ATTACHMENT, format: Bgra8UnormSrgb, width: 1600, height: 1200, present_mode: Vsync }
[2019-08-31T07:23:56Z INFO  gfx_backend_metal::window] build_swapchain SwapchainConfig { present_mode: Fifo, composite_alpha: OPAQUE, format: Bgra8Srgb, extent: Extent2D { width: 1600, height: 1200 }, image_count: 2, image_layers: 1, image_usage: COLOR_ATTACHMENT }
[2019-08-31T07:23:56Z INFO  shadow::framework] Initializing the example...
[2019-08-31T07:23:57Z INFO  gfx_backend_metal::device] Entry point EntryPoint { name: "main", execution_model: Vertex, work_group_size: WorkGroupSize { x: 0, y: 0, z: 0 } }
[2019-08-31T07:23:57Z INFO  gfx_backend_metal::device] Entry point EntryPoint { name: "main", execution_model: Fragment, work_group_size: WorkGroupSize { x: 0, y: 0, z: 0 } }
[2019-08-31T07:23:57Z INFO  gfx_backend_metal::device] Entry point EntryPoint { name: "main", execution_model: Vertex, work_group_size: WorkGroupSize { x: 0, y: 0, z: 0 } }
[2019-08-31T07:23:57Z INFO  gfx_backend_metal::device] Entry point EntryPoint { name: "main", execution_model: Fragment, work_group_size: WorkGroupSize { x: 0, y: 0, z: 0 } }
[2019-08-31T07:23:57Z INFO  shadow::framework] Entering render loop...
[2019-08-31T07:23:58Z WARN  gfx_backend_metal::window] Swapchain drawables are changed, unable to wait for 0
[2019-08-31T07:23:58Z WARN  gfx_backend_metal::window] Failed to get the drawable of frame 0
[2019-08-31T07:23:58Z WARN  wgpu_native::swap_chain] present failed: OutOfDate
[2019-08-31T07:23:58Z WARN  gfx_backend_metal::window] Swapchain drawables are changed, unable to wait for 1
[2019-08-31T07:23:58Z WARN  gfx_backend_metal::window] Failed to get the drawable of frame 1
[2019-08-31T07:23:58Z WARN  wgpu_native::swap_chain] present failed: OutOfDate
[2019-08-31T07:23:58Z WARN  gfx_backend_metal::window] No frame is available
[2019-08-31T07:23:58Z WARN  wgpu_native::swap_chain] acquire_image failed, re-creating
thread 'main' panicked at 'not yet implemented', /Users/parasyte/.cargo/git/checkouts/wgpu-53e70f8674b08dd4/b58c96e/wgpu-native/src/swap_chain.rs:132:13
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace.

I can get around the panic by updating the wgpu-native commit hash to gfx-rs/wgpu#314, but I still get the acquire_image failed warning quite often. Sometimes the renderer gives up and I get a seizure-inducing red/black flash at full frame rate. When it does work (rarely, meaning no acquire_image failed warning), the animation on the shadow example is much slower than it used to be (similar to #72). It's slow, but it doesn't use excessive amounts of CPU or anything.

FWIW I bisected this back to #70. Prior to this patch, everything is stable and solid. Haven't seen any warnings with this build at all.

Prevent any drop() logic if inside a panic

It's annoying to see a panic inside a panic when drop() fails because we are in the process of stack unwinding. We should check for this in drop() implementations and avoid doing anything that can fail.
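
A minimal sketch of that check, using std::thread::panicking(); the type here is a hypothetical stand-in for any wgpu type whose drop() does fallible work.

// Skip fallible cleanup when the thread is already unwinding from a panic.
struct SwapChainOutput; // hypothetical stand-in

impl Drop for SwapChainOutput {
    fn drop(&mut self) {
        if std::thread::panicking() {
            // Already unwinding: avoid anything that could panic again,
            // since a second panic would abort the process.
            return;
        }
        // ... normal present/cleanup logic that may assert or panic ...
    }
}

fn main() {
    let _out = SwapChainOutput; // dropped normally here; the guard only matters during unwinding
}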

multithreaded_compute test fails with metal

On OSX, I observe:

[18:20]% cargo test --features metal                                                                                      paul@compy386:~/repos/wgpu-rs on capture -> origin/capture
   Compiling wgpu v0.2.2 (/Users/paul/repos/wgpu-rs)
warning: constant item is never used: `OPENGL_TO_WGPU_MATRIX`
 --> examples/msaa-line/../framework.rs:4:1
  |
4 | / pub const OPENGL_TO_WGPU_MATRIX: cgmath::Matrix4<f32> = cgmath::Matrix4::new(
5 | |     1.0, 0.0, 0.0, 0.0,
6 | |     0.0, -1.0, 0.0, 0.0,
7 | |     0.0, 0.0, 0.5, 0.0,
8 | |     0.0, 0.0, 0.5, 1.0,
9 | | );
  | |__^
  |
  = note: #[warn(dead_code)] on by default

    Finished dev [unoptimized + debuginfo] target(s) in 10.62s
     Running target/debug/deps/wgpu-b82b9f440a8489f5

running 0 tests

test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out

     Running target/debug/deps/multithreaded_compute-312014e85c62f80f

running 1 test
test multithreaded_compute ... FAILED

failures:

---- multithreaded_compute stdout ----
thread 'multithreaded_compute' panicked at 'A thread never completed.: Timeout', src/libcore/result.rs:999:5
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
stack backtrace:
   0: std::sys::unix::backtrace::tracing::imp::unwind_backtrace
             at src/libstd/sys/unix/backtrace/tracing/gcc_s.rs:39
   1: std::sys_common::backtrace::_print
             at src/libstd/sys_common/backtrace.rs:71
   2: std::panicking::default_hook::{{closure}}
             at src/libstd/sys_common/backtrace.rs:59
             at src/libstd/panicking.rs:197
   3: std::panicking::default_hook
             at src/libstd/panicking.rs:208
   4: <std::panicking::begin_panic::PanicPayload<A> as core::panic::BoxMeUp>::get
             at src/libstd/panicking.rs:474
   5: std::panicking::continue_panic_fmt
             at src/libstd/panicking.rs:381
   6: std::panicking::try::do_call
             at src/libstd/panicking.rs:308
   7: <T as core::any::Any>::type_id
             at src/libcore/panicking.rs:85
   8: core::result::unwrap_failed
             at /rustc/a53f9df32fbb0b5f4382caaad8f1a46f36ea887c/src/libcore/macros.rs:18
   9: core::result::Result<T,E>::expect
             at /rustc/a53f9df32fbb0b5f4382caaad8f1a46f36ea887c/src/libcore/result.rs:827
  10: multithreaded_compute::multithreaded_compute
             at tests/multithreaded_compute.rs:102
  11: multithreaded_compute::multithreaded_compute::{{closure}}
             at tests/multithreaded_compute.rs:3
  12: core::ops::function::FnOnce::call_once
             at /rustc/a53f9df32fbb0b5f4382caaad8f1a46f36ea887c/src/libcore/ops/function.rs:231
  13: <alloc::boxed::Box<F> as core::ops::function::FnOnce<A>>::call_once
             at /rustc/a53f9df32fbb0b5f4382caaad8f1a46f36ea887c/src/liballoc/boxed.rs:704
  14: panic_unwind::dwarf::eh::read_encoded_pointer
             at src/libpanic_unwind/lib.rs:85
  15: test::run_test::run_test_inner::{{closure}}
             at /rustc/a53f9df32fbb0b5f4382caaad8f1a46f36ea887c/src/libstd/panicking.rs:272
             at /rustc/a53f9df32fbb0b5f4382caaad8f1a46f36ea887c/src/libstd/panic.rs:394
             at src/libtest/lib.rs:1468


failures:
    multithreaded_compute

test result: FAILED. 0 passed; 1 failed; 0 ignored; 0 measured; 0 filtered out

error: test failed, to rerun pass '--test multithreaded_compute'

I noticed that calling device.poll before tx.send caused the test to pass.
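
For reference, a rough sketch of that workaround; the structure and names below are assumptions about the test, not its actual source:

use std::sync::mpsc::Sender;

// Tail end of one worker thread in the test (sketch).
fn finish_worker(
    device: &wgpu::Device,
    staging_buffer: &wgpu::Buffer,
    size: wgpu::BufferAddress,
    tx: Sender<()>,
) {
    staging_buffer.map_read_async(0, size, |result: wgpu::BufferMapAsyncResult<&[u32]>| {
        let mapping = result.unwrap();
        assert!(!mapping.data.is_empty());
    });
    // Workaround from this issue: drive the device so the map callback
    // actually runs before the thread reports completion.
    device.poll(true);
    tx.send(()).unwrap();
}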

Conflicting dependency: gfx-hal 0.2.1 vs 0.2.0

Attempting to compile on OSX, I see the following error message:

cargo run --example capture --features metal                                                                                                                                                                                                                                                            paul@compy386:~/repos/wgpu-rs on capture -> origin/capture
    Updating git repository `https://github.com/gfx-rs/wgpu`
    Updating crates.io index
error: failed to select a version for `gfx-hal`.
    ... required by package `gfx-backend-dx11 v0.2.1`
    ... which is depended on by `wgpu-native v0.2.6 (https://github.com/gfx-rs/wgpu?rev=32399cff8a0b55389f2d2493ea5c900c986c2547#32399cff)`
    ... which is depended on by `wgpu v0.2.2 (/Users/paul/repos/wgpu-rs)`
versions that meet the requirements `= 0.2.0` are: 0.2.0

all possible versions conflict with previously selected packages.

  previously selected package `gfx-hal v0.2.1`
    ... which is depended on by `wgpu-native v0.2.6 (https://github.com/gfx-rs/wgpu?rev=32399cff8a0b55389f2d2493ea5c900c986c2547#32399cff)`
    ... which is depended on by `wgpu v0.2.2 (/Users/paul/repos/wgpu-rs)`

failed to select a version for `gfx-hal` which could resolve this conflict

Mapping buffers synchronously

Let's consider a scenario where one would need to map a transfer buffer and fill it with data needed for this frame. In addition, let's take it for granted that we don't want to allocate a new buffer each frame (let me know if this constraint isn't reasonable, but since it's recommended practice in vulkan and friends, I'd like to figure out a way to fulfill it).

The two ways I see to map buffers is to use create_buffer_mapped and map_read/write_async.

The former obviously only works when you are creating the buffer so it is not enough when your goal is to have a finite amount of buffers that you reuse.

The latter (map_read/write_async) has interesting constraints:

  • The callback doesn't fire until the buffer is ready to be mapped without stalling.
  • The callback gives you the mapped slice of data which can't escape the callback.

@kvark told me about a neat trick to ensure a buffer is ready to be mapped: once a buffer has been used in a frame, call map_write_async with a callback that notifies your rendering system that the buffer can be mapped.

Doing that is actually a little convoluted because of rust's unforgiving borrow checker:

  • You have to use channels to communicate from the callback to your pool of transfer buffers.
  • You can't move the buffer into the callback because the buffer is borrowed for the duration of the method that you give the callback to.
  • Once the callback fires you get that mapped slice you asked for, but you don't want the mapped data now; you'll want it next time you dequeue this buffer from your pool. So you'll have to map that buffer again using map_write_async.

Even if the callback trick gives you the guarantee that the buffer is mappable, I am very uneasy about relying on it firing inside of map_write_async, because that's relying on an implementation detail which could change without affecting the public API.
In addition, for soundness reasons the callback cannot borrow its environment, which makes it very inconvenient when you know it will fire synchronously.

Proposed API change

Instead (or in addition to) being able to map the buffer asynchronously, we could asynchronously request a mappable buffer. The API could look like:

    pub fn request_write_mappable_buffer_async<F>(self, start: u32, size: u32, callback: F)
    where
        F: FnOnce(WriteMappableBuffer);

WriteMappableBuffer would be able to give you synchronous access to its data, for example through a map<T>(&mut self) -> &mut[T] method or a method that takes a closure which can borrow its environment.
In order to get back a wgpu::Buffer that can be used in a command queue, one would call WriteMappableBuffer::finish(self) -> wgpu::Buffer.

Note how request_write_mappable_buffer_async consumes the buffer. This ensures that:

  • the buffer isn't used while the request is in flight
  • we get a mappable buffer in the callback, which we are free to send to our pool, and it has ownership of the underlying buffer.

In addition, create_buffer_mapped could be replaced with create_mappable_buffer and return a mappable buffer, for consistency.

Please don't attach any importance to the names in this proposal; I agree they aren't very juicy and I trust you can find more suitable ones if need be.
Also I'm not sure what makes most sense between asking for a mappable buffer (where mapping would happen when you ask for a slice in the buffer) and asking for a mapped one (which holds on to a pointer to the already mapped data). This distinction is, I am sure, very important for wgpu's implementation, but either way is fine from the point of view of the user of this API.

The important part of this proposal is that (in my opinion) Rust makes the current async API very hard to work with, whereas something closer to the proposed API would let wgpu have a way to express buffers that can be mapped synchronously, even if obtaining them happens through asynchronous means.
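
To make the intent concrete, here is a rough sketch of how the proposed API could be used from a transfer-buffer pool. Everything below (the pool type, the channel plumbing) is illustrative only; the names come from the proposal above and do not exist in wgpu today:

use std::sync::mpsc::{Receiver, Sender};

struct TransferPool {
    ready: Receiver<WriteMappableBuffer>,
    recycled: Sender<WriteMappableBuffer>,
}

impl TransferPool {
    // Once a buffer has been used in a frame, ask for it back in mappable
    // form; when the callback fires we own the mappable buffer and can park it.
    fn recycle(&self, buffer: wgpu::Buffer, size: u32) {
        let recycled = self.recycled.clone();
        buffer.request_write_mappable_buffer_async(0, size, move |mappable| {
            recycled.send(mappable).unwrap();
        });
    }

    // Next frame: synchronously map and fill a pooled buffer, then turn it
    // back into a regular wgpu::Buffer for use in a command queue.
    fn fill_next(&self, bytes: &[u8]) -> Option<wgpu::Buffer> {
        let mut mappable = self.ready.try_recv().ok()?;
        mappable.map::<u8>().copy_from_slice(bytes);
        Some(mappable.finish())
    }
}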

instancing?

Is it possible to do instancing with wgpu-rs?
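
Yes. There is no dedicated example yet, but the building blocks are there: mark one vertex buffer as per-instance data and pass an instance range to draw. A rough sketch against the current API follows; exact struct fields may differ slightly between wgpu-rs versions, and the attribute layout is only illustrative:

// Per-instance data, e.g. a 2D offset per instance, fed to the shader's
// per-instance attribute at shader_location 1.
let instance_buffer_desc = wgpu::VertexBufferDescriptor {
    stride: (2 * std::mem::size_of::<f32>()) as wgpu::BufferAddress,
    step_mode: wgpu::InputStepMode::Instance, // advance once per instance, not per vertex
    attributes: &[wgpu::VertexAttributeDescriptor {
        offset: 0,
        format: wgpu::VertexFormat::Float2,
        shader_location: 1,
    }],
};

// Build the render pipeline with both the per-vertex and the per-instance
// VertexBufferDescriptor, bind both buffers, then at draw time:
render_pass.draw(0..vertex_count, 0..instance_count); // second range selects instances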

Flickering with multiple draw calls within the same render pass after compute pass on Vulkan with Intel Linux drivers

Flickering occurs when performing two draw calls within the same render pass after an empty compute pass with the Vulkan backend. This is using Intel's Linux drivers - it does not happen with Nvidia drivers.

A modified version of the cube example with a larger number of instances flickers between the following two images:

[two screenshots: one frame without the large boxes, one with them]

The larger boxes are drawn after the many small instances. Please ignore the wrong depth ordering. A depth buffer was not enabled. However, I am also seeing the same flickering in a larger application with a depth buffer enabled.

The modified example can be found here:

https://github.com/dragly/wgpu-rs/tree/dragly/comp-issue/examples/cube

The relevant changes to the cube example are in this commit:

dragly@cc36db2

The issue also appears in RenderDoc. I am able to capture each type of frame above (with and without the large boxes). However, they appear equal when saving and re-opening the captures in RenderDoc and do not appear to have any differences in data or calls.

My guess is that there is either a bug in the Intel drivers or something wrong with the barriers. This is based on a similar issue I am seeing in a different example where two compute shaders appear to be run out of order.

Please add support for the new `raw-window-handle` crate

Recently the gamedev-wg has been trying to make it easier for windowing libs and graphical libs to agree on a protocol for communication. This has resulted in the raw-window-handle crate, where the window offers up its OS window handle thingy to the graphical lib upon request and then the graphical lib is able to do its startup.

It seems like wgpu-rs mostly goes through wgn, which in turn goes through the gfx-backend crates at the moment, so the issue here would be a natural first step towards using raw-window-handle with this crate.

Tracking issue where this crate was discussed and created, if you have any questions: rust-gamedev/wg#26

[feature request] Allow the application to provide API-native shader code

Any thoughts on expanding the API to allow the application to pass in shaders in the native shader format of the underlying API, if it wants to?

Effectively I don't want to eat the cost of spirv_cross at runtime (mostly due to the binary size, but performance could also be an issue in the future). Since I always know the target graphics API at compile time, I could be front-loading the cross compilation to MSL/HLSL/etc where needed.

Uniform buffer's field value is chaotic in compute shader

I have defined a uniform struct named FluidUniform. In the compute shader, I write the uniform values into a storage buffer in order, then use copy_buffer_to_buffer to copy that buffer's contents to staging_buffer and print them. The values of FluidUniform's e field are chaotic, and some of the values belong to other fields (such as lattice_num).

#[repr(C)]
#[derive(Copy, Clone)]
pub struct FluidUniform {
    // e: [[f32; 2]; 9],
    e: [f32; 18],
    lattice_size: [f32; 2],
    lattice_num: [f32; 2],
    weight: [f32; 9],
    swap: i32,
}

// compute shader
layout(set = 0, binding = 0) uniform FluidUniform {
    // vec2 e[9];
    float e[18];
    vec2 lattice_size;
    vec2 lattice_num;
    float weight[9];
    int swap;
};
...
layout (set = 0, binding = 3) buffer FluidBuffer { 
    FluidCell fluidCells[];    
};

// write e's value to buffer
for (int j = 0; j < 6; j ++) {
    fluidCells[j].color[0] = e[j * 3];
    fluidCells[j].color[1] = e[j * 3 + 1];
    fluidCells[j].color[2] = e[j * 3 + 2];
}

Print staging_buffer's value

encoder.copy_buffer_to_buffer(&self.fluid_buffer, 0, &self.staging_buffer, 0, fluid_buf_range);
self.app_view.device.get_queue().submit(&[encoder.finish()]);
self.staging_buffer.map_read_async(0, fluid_buf_range, |result: wgpu::BufferMapAsyncResult<&[FluidCell]>| {
    println!("{:?}", result.unwrap().data);
});
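
This looks like a std140 layout mismatch rather than a wgpu bug. In a uniform block, every element of a float array is padded out to a 16-byte slot, so float e[18] occupies 18 * 16 bytes on the GPU while [f32; 18] occupies 72 bytes on the CPU; every field after e is then read from the wrong offset, which is exactly how lattice_num values can end up showing through in e. One way to make the two sides agree, assuming the shader keeps float e[18], is to mirror the padding explicitly on the Rust side (a sketch):

#[repr(C)]
#[derive(Copy, Clone)]
pub struct FluidUniform {
    e: [[f32; 4]; 18],      // the shader's e[i] reads element [i][0]; the rest is padding
    lattice_size: [f32; 2],
    lattice_num: [f32; 2],
    weight: [[f32; 4]; 9],  // the same 16-byte rule applies to float weight[9]
    swap: i32,
    _padding: [i32; 3],     // keep the total size a multiple of 16
}

Alternatively, switching the uniform to vec4 arrays on both sides packs four floats per element and avoids most of the wasted space.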

Is it possible to remove the need for cargo features?

I talked about this a bit with @grovesNL on Discord, but basically:

Can the crate be more automatic about backend selection? So that by default the user doesn't have to select any cargo features to just "make it work however".

This could even be a feature that's just the default feature where you get a correct backend picked for you, or you can turn off default features and then turn on just the one backend you want.

I know cargo isn't the best and it's very hard to make it play nice with platform specific stuff like this, but this is an important ergonomics goal.

Better init-time validation errors, add more variants to hal::pso::CreationError

First, let me thank you for this project. The API is super-nice to work with!

As I am writing this issue, I realize this might be a long shot, but I would still like to know whether this is possible with the current architecture, so please bear with me. Also, maybe I should have reported this in gfx-hal instead 🤔

I tried (by mistake) to create a wgpu::RenderPipeline with empty vertex_buffers while the vertex shader was expecting input attributes. The system panicked as expected, but it did so with a cryptic error caused by an unwrap rather deep down the stack, in gfx_backend_metal - 'called Result::unwrap() on an Err value: Other'.

Once enabling logging, I found this:

2019-07-31T06:53:08Z ERROR gfx_backend_metal::device] PSO creation failed: Vertex function has input attributes but no vertex descriptor was set.

The log originates from here: https://github.com/gfx-rs/gfx/blob/master/src/backend/metal/src/device.rs#L1580

Would it be possible to add more descriptive variants to pso::CreationError for better development experience, or is there something preventing this from happening? (I can imagine we wouldn't want to pollute the error with e.g. backend specific variants).

On a similar note: knowing almost nothing about the project, I would expect wgpu to validate and catch this before it reaches gfx-hal or one of the backends. Is there something in the architecture preventing the validation, or is it just not implemented yet? If the latter is the case, I would be up for some implementation work 🙂

Versions:

wgpu-rs: 5dd361fc639e71af328504f9e27a08daf83d7633 (a quite recent master)
wgpu-native: 0.2.6
gfx-hal: 0.2.1
gfx-backend-metal: 0.2.3

The stacktrace:

thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: Other', src/libcore/result.rs:999:5
stack backtrace:
   0: std::sys::unix::backtrace::tracing::imp::unwind_backtrace
             at src/libstd/sys/unix/backtrace/tracing/gcc_s.rs:39
   1: std::sys_common::backtrace::_print
             at src/libstd/sys_common/backtrace.rs:71
   2: std::panicking::default_hook::{{closure}}
             at src/libstd/sys_common/backtrace.rs:59
             at src/libstd/panicking.rs:197
   3: std::panicking::default_hook
             at src/libstd/panicking.rs:211
   4: <std::panicking::begin_panic::PanicPayload<A> as core::panic::BoxMeUp>::get
             at src/libstd/panicking.rs:474
   5: std::panicking::continue_panic_fmt
             at src/libstd/panicking.rs:381
   6: std::panicking::try::do_call
             at src/libstd/panicking.rs:308
   7: <T as core::any::Any>::type_id
             at src/libcore/panicking.rs:85
   8: gfx_backend_metal::device::MemoryTypes::bits
             at /rustc/a53f9df32fbb0b5f4382caaad8f1a46f36ea887c/src/libcore/macros.rs:18
   9: gfx_backend_metal::device::MemoryTypes::bits
             at /rustc/a53f9df32fbb0b5f4382caaad8f1a46f36ea887c/src/libcore/result.rs:800
  10: wgpu_native::device::wgpu_queue_submit::{{closure}}
             at /Users/monad/.cargo/git/checkouts/wgpu-53e70f8674b08dd4/32399cf/wgpu-native/src/device.rs:1686
  11: wgpu_native::device::device_create_render_pipeline::{{closure}}
             at /Users/monad/.cargo/git/checkouts/wgpu-53e70f8674b08dd4/32399cf/wgpu-native/src/device.rs:1728
  12: wgpu::Device::create_pipeline_layout::{{closure}}
             at /Users/monad/.cargo/git/checkouts/wgpu-rs-40ea39809c03c5d8/5dd361f/src/lib.rs:686
  13: hurban_selector::viewport_renderer::ViewportRenderer::new
             at src/viewport_renderer.rs:106
  14: hurban_selector::main
             at src/main.rs:45
  15: std::rt::lang_start::{{closure}}
             at /rustc/a53f9df32fbb0b5f4382caaad8f1a46f36ea887c/src/libstd/rt.rs:64
  16: std::panicking::try::do_call
             at src/libstd/rt.rs:49
             at src/libstd/panicking.rs:293
  17: panic_unwind::dwarf::eh::read_encoded_pointer
             at src/libpanic_unwind/lib.rs:85
  18: <std::panicking::begin_panic::PanicPayload<A> as core::panic::BoxMeUp>::get
             at src/libstd/panicking.rs:272
             at src/libstd/panic.rs:394
             at src/libstd/rt.rs:48
  19: std::rt::lang_start
             at /rustc/a53f9df32fbb0b5f4382caaad8f1a46f36ea887c/src/libstd/rt.rs:64
  20: hurban_selector::create_swap_chain

Building for `gfx-backend-vulkan` without x11

Hi!

Currently, wgpu-rs by default enables x11 through wgpu-native when building for gfx-backend-vulkan. It'd be great if the x11 feature could be made optional when building wgpu-rs projects.

Removing features = [x11] from https://github.com/gfx-rs/wgpu/blob/master/wgpu-native/Cargo.toml#L45 yields a single compile error https://github.com/gfx-rs/wgpu/blob/master/wgpu-native/src/instance.rs#L184 which can be addressed by changing the conditional compilation to #[cfg(all(unix, not(target_os = "ios"), not(target_os = "macos"), feature = "x11"))].

No github releases.

How do I find the matching source for a specific version of a published crate on crates.io?

Update raw-window-handle to 0.3

The current version of the raw-window-handle dependency is 0.1, which is missing some crucial things (for example RawWindowHandle can't be copied or cloned). This is already fixed for wgpu by gfx-rs/wgpu#344.

Is the shadow example supposed to look like this?

After the newest update to fix running wgpu on macos, I was able to run the shadow example.

However, it looks extremely glitchy; I'm worried there are still issues with rendering on macOS.

Does this look right?
[screenshot: Screen Shot 2019-09-09 at 10 12 01 PM]

UBO array's will be misaligned if their size isn't a power of 2

I was using the example from lyon, here, and I came across a weird issue. If the Primitive type's size wasn't a power of 2 (e.g. 16, 32, 64), excluding 4 or 8, the array would be misaligned in the shader: the first primitive would render fine, but subsequent primitives wouldn't.
I created a repo, here, to isolate the problem, with two branches: one showing the problem with a size of 8 bytes, and one working with a size of 32 bytes. There's a comment, here, explaining how to make the 8-byte version work. Similarly, there's a comment, here, explaining how to make the 32-byte version stop working.
I've tried Vulkan and DirectX 12; both seem to have this issue.
I'm not sure if this is a problem directly with wgpu-rs, or if it's a problem with wgpu, rendy, or gfx-hal, but I'll just post it here, because that's what I'm directly working with.

Panic while trying to write buffer contents to file

I'm creating a tutorial site for wgpu at sotrh.github.io/learn-wgpu. As part of my research, I've been trying to write a program that does some rendering and compute work without a window. It works up until the point where I try to pull the data out of the resulting buffer.

fn main() {
    let instance = wgpu::Instance::new();
    let adapter = instance.request_adapter(&Default::default());
    let mut device = adapter.request_device(&Default::default());

    let texture_size = 32u32;
    let texture_desc = wgpu::TextureDescriptor {
        size: wgpu::Extent3d {
            width: texture_size,
            height: texture_size,
            depth: 1,
        },
        array_layer_count: 1,
        mip_level_count: 0,
        sample_count: 1,
        dimension: wgpu::TextureDimension::D2,
        format: wgpu::TextureFormat::Rgba8UnormSrgb,
        usage: wgpu::TextureUsage::COPY_SRC 
            | wgpu::TextureUsage::OUTPUT_ATTACHMENT,
    };

    let texture = device.create_texture(&texture_desc);
    let texture_view = texture.create_default_view();

    let row_pitch = std::mem::size_of::<u32>() as u32;
    let output_buffer_size = (row_pitch * texture_size * texture_size) as wgpu::BufferAddress;
    let output_buffer_desc = wgpu::BufferDescriptor {
        size: output_buffer_size,
        usage: wgpu::BufferUsage::COPY_DST,
    };
    let output_buffer = device.create_buffer(&output_buffer_desc);

    let mut encoder = device.create_command_encoder(&wgpu::CommandEncoderDescriptor {
        todo: 0,
    });

    {
        let render_pass_desc = wgpu::RenderPassDescriptor {
            color_attachments: &[
                wgpu::RenderPassColorAttachmentDescriptor {
                    attachment: &texture_view,
                    resolve_target: None,
                    load_op: wgpu::LoadOp::Clear,
                    store_op: wgpu::StoreOp::Store,
                    clear_color: wgpu::Color::BLACK,
                }
            ],
            depth_stencil_attachment: None,
        };
        let mut render_pass = encoder.begin_render_pass(&render_pass_desc);
    }

    encoder.copy_texture_to_buffer(
        wgpu::TextureCopyView {
            texture: &texture,
            mip_level: 0,
            array_layer: 1,
            origin: wgpu::Origin3d::ZERO,
        }, 
        wgpu::BufferCopyView {
            buffer: &output_buffer,
            offset: 0,
            row_pitch,
            image_height: texture_size,
        }, 
        texture_desc.size,
    );

    device.get_queue().submit(&[encoder.finish()]);

    output_buffer.map_read_async(0, output_buffer_size, move |result: wgpu::BufferMapAsyncResult<&[u8]>| {
        println!("Testing 1, 2, 3");
        let mapping = result.unwrap();
        let data = mapping.data;

        use image::{ImageBuffer, Rgba};
        let buffer = ImageBuffer::<Rgba<u8>, _>::from_raw(
            texture_size,
            texture_size,
            data,
        ).unwrap();

        buffer.save("image.png").unwrap();
    });

    device.poll(true);
}

As you can see, all I'm doing is creating a texture to render to, clearing it with the color black, copying the texture to a buffer, and trying to save that buffer to a file as a PNG. Everything seems to work until device.poll(true). I get a panic with the following backtrace.

thread 'main' panicked at 'assertion failed: `(left == right)`
  left: `Ok(false)`,
 right: `Ok(true)`: GPU got stuck :(', /home/benjamin/.cargo/registry/src/github.com-1ecc6299db9ec823/wgpu-native-0.3.3/src/device.rs:204:13
stack backtrace:
   0: backtrace::backtrace::libunwind::trace
             at /cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.29/src/backtrace/libunwind.rs:88
   1: backtrace::backtrace::trace_unsynchronized
             at /cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.29/src/backtrace/mod.rs:66
   2: std::sys_common::backtrace::_print
             at src/libstd/sys_common/backtrace.rs:47
   3: std::sys_common::backtrace::print
             at src/libstd/sys_common/backtrace.rs:36
   4: std::panicking::default_hook::{{closure}}
             at src/libstd/panicking.rs:200
   5: std::panicking::default_hook
             at src/libstd/panicking.rs:214
   6: std::panicking::rust_panic_with_hook
             at src/libstd/panicking.rs:477
   7: std::panicking::continue_panic_fmt
             at src/libstd/panicking.rs:384
   8: std::panicking::begin_panic_fmt
             at src/libstd/panicking.rs:339
   9: wgpu_native::device::PendingResources<B>::cleanup
             at /home/benjamin/.cargo/registry/src/github.com-1ecc6299db9ec823/wgpu-native-0.3.3/src/device.rs:204
  10: wgpu_native::device::Device<gfx_backend_vulkan::Backend>::maintain
             at /home/benjamin/.cargo/registry/src/github.com-1ecc6299db9ec823/wgpu-native-0.3.3/src/device.rs:550
  11: wgpu_device_poll
             at /home/benjamin/.cargo/registry/src/github.com-1ecc6299db9ec823/wgpu-native-0.3.3/src/device.rs:2037
  12: wgpu::Device::poll
             at /home/benjamin/.cargo/registry/src/github.com-1ecc6299db9ec823/wgpu-0.3.0/src/lib.rs:602
  13: windowless::main
             at code/src/intermediate/windowless/main.rs:86
  14: std::rt::lang_start::{{closure}}
             at /rustc/eae3437dfe991621e8afdc82734f4a172d7ddf9b/src/libstd/rt.rs:64
  15: std::rt::lang_start_internal::{{closure}}
             at src/libstd/rt.rs:49
  16: std::panicking::try::do_call
             at src/libstd/panicking.rs:296
  17: __rust_maybe_catch_panic
             at src/libpanic_unwind/lib.rs:82
  18: std::panicking::try
             at src/libstd/panicking.rs:275
  19: std::panic::catch_unwind
             at src/libstd/panic.rs:394
  20: std::rt::lang_start_internal
             at src/libstd/rt.rs:48
  21: std::rt::lang_start
             at /rustc/eae3437dfe991621e8afdc82734f4a172d7ddf9b/src/libstd/rt.rs:64
  22: main
  23: __libc_start_main
  24: _start
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace

This seems odd, as the example for wgpu-native uses a similar strategy (though it doesn't do anything with the buffer). I'm pretty sure I'm missing something.

I'm on version 0.3.0 from crates.io.
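
For what it's worth, a few things in the snippet look like they could explain the stuck fence; these are guesses, not a confirmed diagnosis. The output buffer is never given MAP_READ usage, mip_level_count is 0 where 1 is the minimum, array_layer should be 0 for the only layer of a non-array texture, and row_pitch in BufferCopyView is the size of a whole row in bytes, not of a single texel. A sketch of the adjusted pieces:

// mip_level_count: 1 in the TextureDescriptor (0 is not a valid level count).

// Bytes per *row*, and a buffer that can actually be mapped for reading:
let row_pitch = std::mem::size_of::<u32>() as u32 * texture_size;
let output_buffer_desc = wgpu::BufferDescriptor {
    size: (row_pitch * texture_size) as wgpu::BufferAddress,
    usage: wgpu::BufferUsage::COPY_DST | wgpu::BufferUsage::MAP_READ,
};

// And in the copy, address the only layer of the texture:
let copy_src = wgpu::TextureCopyView {
    texture: &texture,
    mip_level: 0,
    array_layer: 0,
    origin: wgpu::Origin3d::ZERO,
};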

Buffer should carry size information

It would be nice if the wgpu-rs Buffer struct also carried the size it was constructed with.

I just hit another bug where I drew too many elements in a line list pipeline, which resulted in visual artifacts instead of a segfault (e.g. gfx-rs/wgpu#184).

I think this would have been easier to avoid if the Buffer carried around its size itself, similar to pretty much any other buffer-like object in Rust, e.g. Vec, String, slices, and arrays.
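
Until something like this lands upstream, a thin wrapper on the application side works; a sketch (the type name here is made up):

/// A buffer that remembers the size it was created with.
pub struct SizedBuffer {
    pub buffer: wgpu::Buffer,
    pub size: wgpu::BufferAddress,
}

impl SizedBuffer {
    pub fn new(device: &wgpu::Device, desc: &wgpu::BufferDescriptor) -> Self {
        SizedBuffer {
            buffer: device.create_buffer(desc),
            size: desc.size,
        }
    }
}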

device should implement debug

e.g.,

let mut device = adapter.request_device(&wgpu::DeviceDescriptor {
    extensions: wgpu::Extensions {
        anisotropic_filtering: false,
    },
    limits: wgpu::Limits::default(),
});
info!("Got device: {:?}", device);

should do something (as opposed to not compiling); ideally, if possible, it would print the vendor info of the device that was obtained.

E.g., something like:

Device: Intel(R) HD Graphics 620 (Kaby Lake GT2) (type: IntegratedGpu)

"No adapters available" on OpenSUSE

I tried to run examples on OpenSUSE Tumbleweed but it does not work.
It works without problems on Fedora 30 and Windows 10 on the same PC.

Hardware
CPU: Intel Core i5-9300H @ 2.40GHz
GPU: NVIDIA GeForce GTX 1650 (Optimus)

System
OpenSUSE Tumbleweed 20191024

Kernel 5.3.7
KDE Plasma 5.17 (x11)

Cargo 1.38.0
Rustc 1.38.0

Steps to reproduce
1.- Install a fresh copy of OpenSUSE Tumbleweed using NET Installer
2.- Install Rust using Rustup

curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

3.- Install cmake and C/C++ Tools

sudo zypper in -t pattern devel_C_C++
sudo zypper in cmake

4.- Clone this repo

git clone https://github.com/gfx-rs/wgpu-rs.git

5.- Run Cube example

cd wgpu-rs
cargo run --example cube

Error

thread 'main' panicked at 'No adapters are available!', /home/javier/.cargo/git/checkouts/wgpu-53e70f8674b08dd4/78fbbba/wgpu-native/src/instance.rs:329:9

I don't know if it's a bug, an openSUSE issue, or if I'm missing something :S

HAL Interop

We know that gfx-hal (with Rendy) provides a different set of trade-offs versus wgpu-rs. This forces library vendors to make a choice that limits the future use of their libraries. If a library is simple enough that it could use gfx-hal directly, and the author goes down that path, it means wgpu-rs applications would not be able to use it, and vice versa. The issue is inspired by servo/pathfinder#213

It would be interesting to try to define a foreign library interface (FLI?), such that the user can unsafely provide the necessary glue bits in order for wgpu-rs to be able to use the logic bits of the foreign library. This could be enclosed in a "glue" library, e.g. "pathfinder_wgpu", which anyone could then use as a regular library when their graphics stack is based on wgpu-rs.

The difficulty here is defining where the boundary should be drawn, and how to provide all the tracking info to wgpu. It might be completely infeasible; it just needs to be investigated.

Bindings don't need to match for texture2D

In a fragment shader, using binding = 8.

layout(set = 1, binding = 8) uniform texture2D first_texture; 

Descriptor setting binding to 7.

static TEXDESC: wgpu::BindGroupLayoutDescriptor = wgpu::BindGroupLayoutDescriptor {
    bindings: &[
        wgpu::BindGroupLayoutBinding {
            binding: 7,
            visibility: wgpu::ShaderStage::FRAGMENT,
            ty: wgpu::BindingType::SampledTexture,
        }
    ],
};

Using descriptors bindings which is 7

        let texture_bind_group = device.create_bind_group(&wgpu::BindGroupDescriptor {
            layout: &texture_group_layout,
            bindings: &[
                wgpu::Binding {
                    binding: TEXDESC.bindings.first().unwrap().binding,
                    resource: wgpu::BindingResource::TextureView(&texture_view),
                }
            ],
        });

This seems to work.
I also experimented with adding a second binding using 3, and regardless of what the shader used for bindings, it still worked.

I wonder if it just tries to match to a texture2D declaration?

map_read/write_async can capture pointers to the stack

This code should not compile but it does:

    pub fn do_the_thing(buffer: &wgpu::Buffer) {
        let mut stack_thing: u32 = 1;
        // the FnOnce closure is about to capture `reference_to_the_stack`.
        let reference_to_the_stack = &mut stack_thing;
        buffer.map_write_async(0, 0, move |mapping: wgpu::BufferMapAsyncResult<&mut [u8]>| {
            if let wgpu::BufferMapAsyncResult::Success(..) = mapping {
                *reference_to_the_stack = 42;
            }
        });
        println!("stack_thing hasn't been moved into the closure, it's value is {:?}", stack_thing);
    } // stack_thing is gone but the callback still exists with a pointer to it.

Normally, Rust would prevent you from moving the FnOnce callback out of the stack in map_write_async; for example, if you tried to push the callback into a vector that outlives that stack you would get:

error[E0310]: the parameter type `F` may not live long enough
7 |         self.callbacks.push(Box::new(callback));
  |                             ^^^^^^^^^^^^^^^^^^
  |
note: ...so that the type `C` will meet its required lifetime bounds

However the implementation of map_read/write_async dodges this borrow check by casting the boxed BufferMapWriteAsyncUserData into raw parts.

As a result the callbacks can capture any pointer on the stack and read or write into these memory locations later after the caller's stack is gone.

I believe the correct fix for this is to add the 'static bound to the F parameter.
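
For illustration, the shape the signature would take with that bound; this is a sketch with simplified parameter types, not the actual wgpu-rs source:

impl Buffer {
    pub fn map_write_async<T: Copy, F>(&self, start: u32, size: u32, callback: F)
    where
        F: FnOnce(BufferMapAsyncResult<&mut [T]>) + 'static,
    {
        // With F: 'static, the closure in do_the_thing above stops compiling,
        // because it captures &mut stack_thing, which is not a 'static borrow.
    }
}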

Loading textures from PNG

Thank you so much for the WGPU samples! I'm hoping wgpu really takes off.

For your textured cube, I see that you're creating your fractal image via code. Do you have any examples of loading a PNG instead? I tried to cobble together some example code from gfx-hal and your code to load some type of image as a test (I failed miserably, by the way). The best I could do was take some bytes from a BMP and modify the image byte vector you created in code.

I'm obviously just poking at things with a stick. I do have experience with OpenGL ES, but it seems things have moved a bit and I need to go through a learning curve.

Any guidance would truly be appreciated. I'd prefer an example with png, but I'll take any example if you have it.

Again, thank you so much for your efforts!!!
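
There is no PNG example in the repo yet, but the usual route is: decode the PNG on the CPU (the image crate works well), upload the RGBA bytes into a COPY_SRC buffer, and record a copy_buffer_to_texture into a texture created with SAMPLED | COPY_DST. A rough sketch along the lines of the existing cube example; device and encoder come from the usual setup, exact field names track whichever wgpu-rs version you are on, and error handling is omitted:

// Decode with the `image` crate.
let img = image::open("texture.png").unwrap().to_rgba();
let (width, height) = img.dimensions();
let texels = img.into_raw(); // Vec<u8>, 4 bytes per pixel

let texture_extent = wgpu::Extent3d { width, height, depth: 1 };
let texture = device.create_texture(&wgpu::TextureDescriptor {
    size: texture_extent,
    array_layer_count: 1,
    mip_level_count: 1,
    sample_count: 1,
    dimension: wgpu::TextureDimension::D2,
    format: wgpu::TextureFormat::Rgba8UnormSrgb,
    usage: wgpu::TextureUsage::SAMPLED | wgpu::TextureUsage::COPY_DST,
});

// Stage the pixels in a buffer and copy them into the texture.
let temp_buf = device
    .create_buffer_mapped(texels.len(), wgpu::BufferUsage::COPY_SRC)
    .fill_from_slice(&texels);
encoder.copy_buffer_to_texture(
    wgpu::BufferCopyView {
        buffer: &temp_buf,
        offset: 0,
        row_pitch: 4 * width,
        image_height: height,
    },
    wgpu::TextureCopyView {
        texture: &texture,
        mip_level: 0,
        array_layer: 0,
        origin: wgpu::Origin3d::ZERO,
    },
    texture_extent,
);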

vulkan: vkCreateSwapChainKHR called with invalid imageExtent on resize

wgpu version: 2.3
gpu:

description: VGA compatible controller
product: HD Graphics 620
vendor: Intel Corporation
physical id: 2
bus info: pci@0000:00:02.0
version: 02
width: 64 bits
clock: 33MHz
capabilities: pciexpress msi pm vga_controller bus_master cap_list rom
configuration: driver=i915 latency=0
resources: irq:132 memory:b0000000-b0ffffff memory:a0000000-afffffff ioport:4000(size=64) memory:c0000-dffff

os:

NAME="Ubuntu"
VERSION="18.04.2 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.2 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic


When running a modified version of the triangle example with cargo run --features wgpu/vulkan, I get the following errors when trying to resize the window:

[2019-06-16T20:28:11Z ERROR gfx_backend_vulkan] [DS] Object: 0x55ecdc79b140 | vkCreateSwapChainKHR() called with imageExtent = (1024,768), which is outside the bounds returned by vkGetPhysicalDeviceSurfaceCapabilitiesKHR(): currentExtent = (1853,1025), minImageExtent = (1853,1025), maxImageExtent = (1853,1025). The spec valid usage text states 'imageExtent must be between minImageExtent and maxImageExtent, inclusive, where minImageExtent and maxImageExtent are members of the VkSurfaceCapabilitiesKHR structure returned by vkGetPhysicalDeviceSurfaceCapabilitiesKHR for the surface' (https://www.khronos.org/registry/vulkan/specs/1.0-extensions/html/vkspec.html#VUID-VkSwapchainCreateInfoKHR-imageExtent-01274)

Sometimes this causes the program to exit when trying to go to full screen. The actual problem can be found here: https://github.com/lcnr/wgpu_err
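
As a data point, the examples in this repo handle resizes by recreating the swap chain with the window's new physical size, roughly as below (a sketch following the framework code; field and method names assume the winit 0.19-era API used here):

// In the winit event loop:
winit::WindowEvent::Resized(size) => {
    let physical = size.to_physical(window.get_hidpi_factor());
    sc_desc.width = physical.width.round() as u32;
    sc_desc.height = physical.height.round() as u32;
    // Recreate the swap chain so its extent matches the surface again.
    swap_chain = device.create_swap_chain(&surface, &sc_desc);
}

If the extent passed there still disagrees with what vkGetPhysicalDeviceSurfaceCapabilitiesKHR reports, as in the log above, that points at the surface capabilities query in the backend rather than at the application.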

Allow passing non-array struct to Buffer

Currently, if you want to pass any variables to a host-side buffer, they must be in an array.
The following extract from lib.rs shows the struct in wgpu-rs that stores the host-side buffer's data.

pub struct CreateBufferMapped<'a, T> {
    id: wgn::BufferId,
    pub data: &'a mut [T],
}

The following shows the implementation of the struct:

impl<'a, T> CreateBufferMapped<'a, T>
where
    T: Copy,
{
    pub fn fill_from_slice(self, slice: &[T]) -> Buffer {
        self.data.copy_from_slice(slice);
        self.finish()
    }

    pub fn finish(self) -> Buffer {
        wgn::wgpu_buffer_unmap(self.id);
        Buffer { id: self.id }
    }
}

Since the input data type has to be an array, this imposes a number of limitations, such as not being able to include an array count together with the array in the buffer.

Proposal:

  • Add a new variant of the create_buffer_mapped method that allows a non-array struct to be passed as input data.
    For example, the following shows an input struct that could be allowed for the host-side buffer.
struct InputData {
    count: usize,
    array: Vec<i32>,
}

Exact details of the new method based off create_buffer_mapped will be added later.

I will create a PR for this if someone can confirm that this feature could be merged in.
