shader-slang / slang
Making it easier to work with shaders
License: MIT License
struct LightCB
{
float3 vec3Val; // We're using 2 values. [0]: worldDir [1]: intensity
};
RWStructuredBuffer<LightCB> gRWBuffer[5]; // Only UAV counter used
I get the correct array size, but the reported size is 0. If I'm not using an array, the size is reported correctly.
Might be user error; in both cases I'm calling (uint32_t)pSlangElementType->getSize()
The last statement in the Slang file is
return gTexture.Sample(gSampler, normalize(dir), 0);
which is converted to the following illegal statement in GLSL:
gTexture.Sample(gSampler, normalize(dir), float(0));
To reproduce, run the Effects/EnvMap sample.
HLSL allows a shader entry point to specify the root signature to use as an attribute:
[RootSignature("...")] void main(...) { ... }
The string literal there is expected to be provided via a #define, and in fact fxc seems to have special support for compiling a #defined string into a root signature (so it is a #define that gets linkage? What happens if I #undef it or have multiple definitions over the course of one file?).
The string literal that specifies a root signature encloses some syntax that looks more or less like a bunch of constructor calls with a bit of key = value argument-passing sugar.
See [this page][hlsl-root-sig] for the MSDN documentation of the feature.
[hlsl-root-sig]: https://msdn.microsoft.com/en-us/library/windows/desktop/dn913202(v=vs.85).aspx
Slang's current approach is to pass through attributes it doesn't understand, so shaders using this feature shouldn't be rejected today (modulo support for implicit concatenation of string literals, which we need to add).
A more complete implementation would require:
Treating the RootSignature attribute as a known attribute we actually parse/check.

Vulkan GLSL allows for the size of an array to depend non-trivially on a specialization constant:
layout(constant_id = 0) const int N = 5;
float foo[N]; // trivial dependence
float bar[ (N+ 15) / 4 ]; // non-trivial dependence
The GLSL spec language around this stuff is messy, and gives the impression that somebody worked backwards from the implementation in glslang. The basic behavior in glslang (which is now the "correct" behavior via spec) is:
Two array types, with equivalent element types, that use specialization constants in their sizes are equivalent if and only if their sizes are specified using the exact same specialization constant (no math on the constant is allowed, not even N + 1). All other cases are deemed not equivalent (even when syntactically identical).
Computed layout information as exposed in, e.g., SPIR-V or a reflection API will always be based on the "default" value for the specialization constant, and it is up to the user to carefully avoid cases where this will lead to incorrect behavior (e.g., don't put an array with specialization-constant-based size in the middle of something).
Item (1) is unfortunate, but understandable. Handling such things "right" means building a more complete dependent type system, and most people are going to back away from that. I'd like Slang to eventually do a better job, and at least incorporate a very basic "solver" to check for obvious algebraic identity. This would mean going against the GLSL spec, but compatibility with GLSL is a non-goal for Slang.
Item (2) is just plain dangerous. At some point it would be good to at least emit a warning in cases where the user has done something that could cause problems. A good policy would be to treat an array that uses a specialization constant a bit like an array with no statically-specified size (float foo[]), and only allow it at the end of a structure, etc.
I'd like to ensure that the Slang reflection API never lies to people.
For now, I'm considering all of this mess out of scope, so this issue only exists to provide a backlog item.
Slang strips all declarations from the GLSL before sending it to glslang.
I send
#extension GL_ARB_compute_shader : require
layout(set = 0, binding = 1, rgba32f) uniform image2D gOutput;
layout(local_size_x = 16, local_size_y = 16, local_size_z = 1) in;
void main()
{
imageStore(gOutput, ivec2(0,0), float4(1,1,1,1));
}
slang sends
#version 420
#extension GL_ARB_compute_shader : require
#line 33 0
void main_(){
{
imageStore(gOutput, ivec2(0,0), float4(1,1,1,1));}
}
void main(){
main_();
}
Run the Shadows sample. It breaks inside Slang when trying to compile the ShadowPass shaders.
This is here just so I won't forget about it.
In order for a define-based solution to work for GLSL, I need to know when I'm compiling a Slang shader (so I can define the struct) vs when I'm compiling GLSL (so that I can remove the struct).
We may also need a SHADER_STAGE macro to know how to use in and out.
This is needed for code that relied on HLSL [unroll] for semantic validity, since glslang doesn't unroll as part of doing semantic checks.
For now this could be something really hacky like a special-case statement construct __unroll_foreach(i,N) { ... }. Longer term I'd like to move towards arbitrary compile-time computation (we just aren't in a position to implement the latter right now).
struct LightCB
{
    float3 vec3Val; // We're using 2 values. [0]: worldDir [1]: intensity
};

StructuredBuffer<LightCB> gLightIn;
AppendStructuredBuffer<LightCB> gLightOut;

[numthreads(1, 1, 1)]
void main()
{
    uint numLights = 0;
    uint stride;
    gLightIn.GetDimensions(numLights, stride);
    for (uint i = 0; i < numLights; i++)
    {
        gLightOut.Append(gLightIn[i]);
    }
}
I added a hand-written translation in an __intrinsic attribute for saturate(), but the same needs to be done for many other functions.
The ideal case is to do a systematic survey of the HLSL "standard library" and attach GLSL equivalents to all the functions that we can handle easily.
Notes:
__intrinsic doesn't account for member function calls. That should get fixed.
This is strongly related to #16.
Given a shader with some set of "top-level" parameters, such as a Material, we need a way to inform the compiler that for a given variant to be generated, the implementation of that parameter's type should be specialized in a particular way (based on an actual run-time C++ object of type Falcor::Material in the Falcor case).
The ideal API interface is then relatively simple:
A shader entry point may include parameters (possibly declared at global scope) that belong to a high-level "module" type like Material or Light.
When low-level code generation is requested for an entry point, we can specify a concrete type to substitute in for the parameter (any parameter where we don't specify this will ideally be left as a "generalized" parameter).
This seems to require a few things not present in the code today:
We need to flesh out the system for interface declarations (currently using the __trait keyword), and allow interface implementations and interface-typed parameters.
We need to split the code-generation API up (or at least support splitting it) into two phases:
Actually, there might be a step (1.5) in that flow, which is where one creates target-specific "layout" objects from the raw AST objects to represent how parameters would be bound for a target.
The current representation of source locations is large and also expensive to copy (it contains a reference-counted string).
We should migrate to a lightweight location representation where a location is just a pointer-sized integer that stores an absolute index of a byte processed during the compilation session.
(This complicates debugging, so some time needs to be spent on making sure there is still a reasonable experience for tracing back to where an error came from)
Not Slang's fault, but it would be great if it would report conflicts in external bind locations.
For example:
layout(set = 0, binding = 0) uniform texture2D gTexture;
layout(set = 0, binding = 0) uniform sampler gSampler;
The shader compiles successfully. I don't know if it's a valid GLSL shader or not; perhaps the correct place for this issue is glslang.
The front end can currently call out to fxc and glslang, both for use as a "pass-through" compiler (bypassing the Slang compiler almost completely) and as a means to generate DX bytecode ("DXBC") and SPIR-V.
We should add support for calling out to the new HLSL compiler dxc as an HLSL-to-DXIL (and eventually HLSL-to-SPIR-V) compiler.
I would not advocate for putting dxc into our build anywhere, since CMake, LLVM, and clang make for a pretty heavy footprint on what is currently a lightweight project. Instead, we should follow a similar approach to what we do when interacting with d3dcompiler_47.dll for fxc, and assume that clients who want to use dxc will either ensure it is installed on end-user systems, or incorporate it into their build.
We do a basic job of enumerating varying inputs/outputs for GLSL, because the global-scope block declarations are just sitting there and it is hard to ignore them. I'm pretty sure we don't deal with declarations using double and related types correctly, so there is still work to do there.
We also walk through the varying inputs/outputs for HLSL entry points, so that we can find fragment-shader outputs and properly account for them, but I don't think we currently add any actual TypeLayout information for reflection to use.
A further issue is that we probably need to distinguish between general varying input/output and the specific cases of "vertex input" and "fragment output." The API here is a bit messy right now, and conventions need to be defined.
Required for Falcor to explicitly enable super-sampling in Vulkan
GaussianBlur.ps.slang declares Buffer<float> weights;
To reproduce, run the MultiPassPostProcess sample, load an image, and check the GaussianBlur box.
There is code in place for allowing the API user to directly ask for binary formats like DX bytecode as output from a compilation, rather than strings, but the actual function for querying output assumes it can return a null-terminated string (no length is passed) and the internal implementation uses the String type, which isn't really suitable for a byte buffer.
We currently use String way too much in the code, and in particular we do string comparisons for identifiers, and string-based lookups when looking up names in semantic checking.
A cleaner approach would be to have the lexer go ahead and unique identifiers as they come along, and just store a pointer to a uniqued Name in each identifier token. Then later stages can just use the names directly, and most dictionaries can be keyed on these pointers instead.
Not a Slang issue.
imageStore(gOutput, (ivec2)crd, pixelated.bgra);
The error message I'm getting is
syntax error, unexpected RIGHT_PAREN, expecting LEFT_PAREN
Slang reports the wrong offset for scale in the following CB:
layout(set = 0, binding = 0) uniform PerFrameCB
{
vec2 offset; // This is reported as 0, which is correct
vec2 scale; // This one is reported as 16, but it should be 8 since it can be packed together with offset into a single vec4
};
float f;
mat4 a;
mat4 b = a*f; // This is legal in GLSL but fails compilation inside Slang
Looks like it's a Slang bug, since the GLSL compiler is never invoked.
We are using this for vertex blending: ShaderCommon.slang::59, getBlendedWorldMat.
coherent, volatile, restrict, readonly, writeonly: Slang reports an error if I use any of them.
Simple compute-shader to reproduce the problem:
layout(set = 0, binding = 1, rgba32f) uniform writeonly image2D gOutput;
layout(local_size_x = 16, local_size_y = 16, local_size_z = 1) in;
void main()
{
imageStore(gOutput, gl_LocalInvocationID.xy, vec4(1,1,1,1));
}
If we don't generate an out gl_PerVertex block when generating GLSL, then glslang will generate one behind our backs, and it seems to always give it layout(location = 0), which is not helpful.
To avoid this, we need to automatically generate an appropriate gl_PerVertex declaration, and ensure that it gets a location after any user vertex outputs.
There is code in TypeLayout.cpp that tries to implement the std140 and std430 rules, but I have little confidence that it is being invoked correctly.
Tasks:
Set up some reflection-generation tests for GLSL, so we can see how layouts are being computed.
Compare against glslang when generating baselines, to double-check offsets.
Ensure that we are picking up the rules specified as a layout attribute and applying them correctly.
Ensure that given GLSL source we are picking appropriate rules by default when nothing is specified (e.g., std140 for all uniform blocks, and std430 for all buffer blocks).
Make sure to emit downstream code that reflects the layout choices we make, either by applying a layout attribute to the block, or by applying layout(offset=...) to each member. We should be conservative and try not to require too many extended features that could make it harder to output portable OpenGL GLSL later.
Currently it embeds #version 420 into the shaders. I'd like to use 4.5, but in general we should allow the user (the user of Slang) to choose.
We already have support for exposing the HLSL [numthreads(...)] attribute through reflection, but we don't support the GLSL equivalent: layout(local_size_x = ...).
The existing "Spire" code made heavy use of namespaces, but that just complicates what we are trying to accomplish in a lean-and-mean codebase.
I'm not 100% decided on what an ideal convention should be, but my first stab would be:
All user-facing C++ interfaces (currently just inline wrapper stuff around the C API) will reside in the slang namespace.
All implementation stuff will reside in the Slang namespace (notice capitalization) with no nesting.
If we really need multiple namespaces (e.g., multiple back-ends need types with similar names), then it probably makes sense to give each a distinct top-level namespace with a Slang prefix, just to keep things flat.
I'm not enamored of having a distinct namespace for public API vs. private implementation. I might advocate for finding a different way to expose the API that avoids the need for the opaque wrapper types, so that we can actually expose the same namespace and type names (potentially making Slang more debuggable for client code).
The initial cross-compilation goals for Slang only apply to "library" code: files full of types, functions, and maybe some global shader parameters. This is an intentional prioritization choice.
Longer term we'd like to support more complete cross-compilation, in which the user can just throw an HLSL or Slang file at the compiler and get back GLSL for each entry point. That is a crowded field, though, so there would be little reason for somebody to pick our tool over another. Thus it makes sense to put this on the backlog until we've got a more interesting feature set to win over users.
GLSL has restrictions that mean you can't have a function with a discard in the translation unit if you aren't targeting the fragment-shader stage. This is true even if you don't call the function.
The current lowering strategy emits all symbols in an imported module if it detects we are in "rewriter" mode, so this causes problems.
The simplest fix is just to not do that, and only emit declarations that were referenced. That strategy should work for any case where we were able to semantically resolve all the user's code.
If we need something more fail-safe than that, we could try to ensure that when we see an un-checked name expression, or one that resolved to an overload group, we go ahead and emit every matching declaration, just in case. That could cause problems if one of those overloads is invalid for the target, of course.
We might also want to add some logic to detect stage-limited functions, and skip lowering them for targets where they aren't allowed. That seems like something we might need in the long term anyway.
Declare this in the shader
layout(set = 0, binding = 0) buffer gLightIn
{
vec3 val[];
};
Slang crashes inside GetIntVal() (syntax.cpp, line 937).
GLSL doesn't support default values on function parameters, so we need to eliminate their use during lowering.
A simple strategy would be to lower the original function, and then generate a bunch of helpers that take fewer parameters and call the original with the default values plugged in.
The main down-side of that approach is that it could cause problems if the signature of the generated functions matches any existing function of the same name (although that should have created ambiguity in the first place).
Another alternative is to lower the call sites differently, by plugging in the default-value expression for missing arguments. That would handle many simple cases, but would create problems if the default-value expression ever relied on the lexical context of the original function.
I think we should go with the first option, and rely on a more general renaming pass to solve the collisions if they ever arise.
Unrelated to the release, just documenting some issues I encountered.
Our initial thought was to use variable-size arrays. That doesn't map well to HLSL - and by extension to Falcor. The problem is how to reflect things correctly.
For example
struct Foo {float3 bar;};
StructuredBuffer<Foo> gFoo;
(1) Falcor looks for the buffer using gFoo.
(2) Falcor is not aware of Foo. It gets the struct size from gFoo.
(3) Reading/writing variables from the host side is done using the fields of Foo directly - mpFoo[0]["bar"] (the first index is the struct index).
I don't know how to translate that to GLSL. We can do something like
struct Foo {float3 bar;};
layout(set = 1, binding = 4) buffer gFoo
{
Foo foo[];
};
With the GLSL version:
(1) will work just fine.
(2) doesn't; we will need to get the size of gFoo.foo instead.
(3) doesn't work either; foo gets in the way.
This isn't necessarily just a cross-compilation issue. Even for hand-written code, I have no idea how to make Falcor's DX abstractions work with GLSL
The Vulkan extension is called VK_NVX_multiview_per_view_attributes; the extension spec is here.
DefaultVS.slang outputs to NV_X_RIGHT and NV_VIEWPORT_MASK, which should be converted to the new GLSL builtins.
When we detect that SPS is used, we need to output to gl_PositionPerViewNV.
I haven't read the spec yet; the extension is more permissive than what SPS requires. For now we only need to support what's required for SPS.
Running the FalcorTest.sln results in an assert "Unimplmented!" (Slang) in GraphicsStateObjectTest.
Emitting line directives is probably the right default in "rewriter" mode (since we need to see error messages from the downstream compiler), and it is arguably the right default when you want debug output. It isn't the right thing when people want to look at the output and know what is going on.
Realistically, we probably want:
A direct flag (e.g., -no-line-directives and -line-directives) to control whether we output line directives at all.
Make line directives the default for "rewriter" files and when debugging is desired (e.g., when a -g flag is specified), but otherwise skip them.
Right now I just exit(1) if either glslang or D3DCompile fails on the lowered input. This needs to be changed so that these failures come across as ordinary diagnostics from the perspective of the user.
A lot of the existing code from "Spire" uses a naming convention where type members are in UpperCamelCase. The new convention is that type declarations are UpperCamelCase while value declarations are in lowerCamelCase (with the exception of enum tags for now, which use upper when scoped, and a k prefix when unscoped).
We need to make a pass over the codebase and regularize the convention at some point.
(Consider trying to set up clang-format to help catch issues).
Both Vulkan and D3D12 share the idea that "push constants" ("root constants") are exposed to the shading language as ordinary uniform blocks (cbuffers). In D3D12 the actual assignment of a buffer to root constant resources is handled at the API level, while in Vulkan all of the binding to API-level resources is handled in layout modifiers.
A GLSL uniform block decorated with layout(push_constant):
layout(push_constant) uniform MyStuff { ... };
consumes different resources from an ordinary uniform block. This needs to be accounted for in a few places:
We need to turn the push_constant layout modifier into a form visible to semantics.
The ParameterBinding.cpp and/or TypeLayout.cpp logic needs to allocate resources differently for these blocks.
We need a distinct resource kind (e.g., PushConstant) for members of the block.
The existing reflection API should be able to handle this case.
Actually, one parting shot: it might be worthwhile to go ahead and detect this modifier during parsing, so that we can break out a distinct type for the push_constant block like what we do for in and out blocks already. That would probably simplify a lot of the logic because the block would no longer surface as an "ordinary" uniform block.
Right now the std430 rules inherit the constant-buffer restrictions on struct and array alignment (which are the specific restrictions they are exempted from), but do not inherit the vec3 layout rules that pad them out to be vec4-aligned (which they should use).
Adding this as an issue so that I can reference it in a test case.
HLSL and OpenGL GLSL are consistent in that a declaration like:
Texture2D t[4]; // HLSL
sampler2D t[4]; // OpenGL GLSL
consumes 4 registers/bindings: one for each array element.
Vulkan seems to follow a different rule, where the entire array gets a single binding. Right now we do not implement this behavior correctly.
We need a POR for how a conceptual module like Material will be exposed to user code in both HLSL and GLSL in a way that works with:
The limitations of both languages and the compilers that will be used for each (e.g., the non-overlapping bugs in both fxc and dxc).
The constraints of what we are willing to let the "rewriter" mode in Slang do (e.g., we currently don't let it rewrite anything in function bodies).
Of course we also want a model that is as usable and natural for users as possible.
Going back to the "rosetta stone" sketch that outlined the "rewriter" architecture, there are two key challenges:
What to do when a module conceptually contains both "ordinary" uniform parameters (e.g., a float3) and "resource" parameters (a Texture2D). The various languages/compilers we need to work with have varying levels of support for types that mix up resource and ordinary parameters.
What to do when there is a single conceptual type Material, but there might be specialized versions of it used in specific cases (but not all)?
and the POR for how to solve them is something like:
This is officially Falcor's problem, not Slang's, since a complete solution to the mixed-type-struct problem is out of scope for what the "rewriter" is supposed to do. That said, I expect that if we can find a way to solve the problem using Slang, nobody is actually going to complain.
The intention is to expose a syntax that the user sees as a macro invocation, e.g., PARAMETER(Material, m);, but that is actually custom syntax that can be expanded to a different type based on how a parameter will be specialized.
Item (2) obviously requires a complete design and has API implications, so it is the important bit to focus on first. Item (1) is actually conceptually straightforward, even if there is a lot of detailed work in implementing it.
Effect/NormalMapFiltering sample (default settings).
The error I'm getting is from glslang: perturbNormal is missing its definition. The definition is actually missing from the GLSL string we send, even though _MS_USER_NORMAL_MAPPING is defined.
Is SLANG_hack_samplerForTexelFetch hardcoded to use set=0, binding=0?
It collides with other user declarations. TextRenderer.fs.slang is one example.
Running slangc on HLSL input like:
void main(InputPatch<FOO, 3> foo, ...) { ... }
leads to an error: too many arguments to call (got 2, expected 1).
This is because the definitions of InputPatch and OutputPatch only list the type parameter, and not the count parameter.
We have the following declaration in SceneEditorCommon.slang.h: uint drawID : DRAW_ID;
HLSL implicitly assigns the nointerpolation qualifier to it, but GLSL requires that the user declare it with the flat qualifier.
I tried adding nointerpolation, but Slang didn't replace it with flat.
The simple solution for now would be to just replace nointerpolation with flat, but the long-term solution would be for Slang to detect uint outputs and assign the required qualifiers.
Right now slangc always just writes its output to the console, which is convenient for testing (and mirrors what fxc does by default), but it doesn't have any provision right now for specifying an output file.
In simple cases, we should be able to do something like:
slangc -o foo.dxbc -profile vs_5_0 -entry vsMain foo.hlsl
and get the result we expect (DXBC for the vsMain() entry point function, output to the file foo.dxbc).
Ideally, the front-end driver should be able to infer the desired output format based on the file extension provided, so that you could change that command line to use -o foo.spv or -o foo.dxil and it would Do What I Mean.
(Cross-compilation from HLSL/Slang to GLSL should probably be triggered in a similar fashion, e.g.:
slangc foo.hlsl -o foo.glsl
That seems like a slightly more complex feature than what this issue is trying to get at.)
As a simple starting point, this should only be allowed for compilations that involve a single entry point, and output to one of the existing formats that is designed for single entry points: DXBC, DXIL, or SPIR-V.
Longer term, we should define a container format that can hold output for multiple entry points (in any format), but that is a larger change.