raysan5 / raylib

A simple and easy-to-use library to enjoy videogames programming

Home Page: http://www.raylib.com

License: zlib License

C 84.11% Makefile 2.83% HTML 0.74% CMake 1.38% Meson 0.06% Shell 0.73% Batchfile 0.41% Roff 0.05% Zig 0.58% Lua 9.13%
raylib c videogames programming opengl android embedded iot graphics wasm

raylib's Introduction

raylib is a simple and easy-to-use library to enjoy videogames programming.

raylib is highly inspired by Borland BGI graphics lib and by XNA framework and it's especially well suited for prototyping, tooling, graphical applications, embedded systems and education.

NOTE for ADVENTURERS: raylib is a programming library to enjoy videogames programming; no fancy interface, no visual helpers, no debug button... just coding in the most pure spartan-programmers way.

Ready to learn? Jump to code examples!




features

  • NO external dependencies, all required libraries are bundled into raylib
  • Multiple platforms supported: Windows, Linux, MacOS, RPI, Android, HTML5... and more!
  • Written in plain C code (C99) using PascalCase/camelCase notation
  • Hardware accelerated with OpenGL (1.1, 2.1, 3.3, 4.3, ES 2.0, ES 3.0)
  • Unique OpenGL abstraction layer (usable as standalone module): rlgl
  • Multiple Fonts formats supported (TTF, OTF, Image fonts, AngelCode fonts)
  • Multiple texture formats supported, including compressed formats (DXT, ETC, ASTC)
  • Full 3D support, including 3D Shapes, Models, Billboards, Heightmaps and more!
  • Flexible Materials system, supporting classic maps and PBR maps
  • Animated 3D models supported (skeletal bones animation) (IQM, M3D, glTF)
  • Shaders support, including model shaders and postprocessing shaders
  • Powerful math module for Vector, Matrix and Quaternion operations: raymath
  • Audio loading and playing with streaming support (WAV, QOA, OGG, MP3, FLAC, XM, MOD)
  • VR stereo rendering support with configurable HMD device parameters
  • Huge examples collection with +140 code examples!
  • Bindings to +70 programming languages!
  • Free and open source

basic example

This is a basic raylib example: it creates a window and draws the text "Congrats! You created your first window!" in the middle of the screen. Check this example running live on the web here.

#include "raylib.h"

int main(void)
{
    InitWindow(800, 450, "raylib [core] example - basic window");

    while (!WindowShouldClose())
    {
        BeginDrawing();
            ClearBackground(RAYWHITE);
            DrawText("Congrats! You created your first window!", 190, 200, 20, LIGHTGRAY);
        EndDrawing();
    }

    CloseWindow();

    return 0;
}

build and installation

raylib binary releases for Windows, Linux, macOS, Android and HTML5 are available at the Github Releases page.

raylib is also available via multiple package managers on multiple OS distributions.

Installing and building raylib on multiple platforms

raylib Wiki contains detailed instructions on building and usage on multiple platforms.

Note that the Wiki is open for editing; if you find issues while building raylib for your target platform, feel free to edit the Wiki or open a related issue.

Setup raylib with multiple IDEs

raylib has been developed on Windows platform using Notepad++ and MinGW GCC compiler but it can be used with other IDEs on multiple platforms.

Projects directory contains several ready-to-use project templates to build raylib and code examples with multiple IDEs.

Note that lots of IDEs are supported and some of the provided templates may require review, so if you find an issue with a template or think it could be improved, feel free to send a PR or open a related issue.

learning and docs

raylib is designed to be learned using the examples as the main reference. There is no standard API documentation, but there is a cheatsheet containing all the functions available in the library with a short description of each one; input parameter and result value names should be intuitive enough to understand how each function works.

Some additional documentation about raylib design can be found in the raylib GitHub Wiki.

contact and networks

raylib is present in several networks and the raylib community is growing every day. If you are using raylib and enjoying it, feel free to join us in any of these networks. The most active network is our Discord server! :)

contributors

license

raylib is licensed under an unmodified zlib/libpng license, which is an OSI-certified, BSD-like license that allows static linking with closed source software. Check LICENSE for further details.

raylib uses internally some libraries for window/graphics/inputs management and also to support different file formats loading, all those libraries are embedded with and are available in src/external directory. Check raylib dependencies LICENSES on raylib Wiki for details.


raylib's Issues

[text] Add line-break support to DrawText()

When drawing text, it will be very useful to be able to add line-breaks \n.

Related functions [text.c]:

void DrawTextEx(SpriteFont spriteFont, const char *text, Vector2 position, int fontSize, int spacing, Color tint);

Just add a font-size-proportional Y-position increment and reset X-position value when character \n is parsed.
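The same idea can also be sketched as a user-side helper until it is handled internally (hypothetical function, not part of the library): split the string on '\n' and advance the Y position per line.

#include "raylib.h"
#include <string.h>

// Hypothetical helper: draw text with '\n' support by splitting the string into
// lines and offsetting each line vertically, proportionally to the font size
static void DrawTextMultiline(const char *text, int posX, int posY, int fontSize, Color color)
{
    char buffer[1024] = { 0 };
    strncpy(buffer, text, sizeof(buffer) - 1);

    int offsetY = 0;
    char *line = strtok(buffer, "\n");
    while (line != NULL)
    {
        DrawText(line, posX, posY + offsetY, fontSize, color);
        offsetY += fontSize + fontSize/2;   // Line height: font size plus half for spacing
        line = strtok(NULL, "\n");
    }
}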

Create web skeleton for emscripten to show raylib web games

When compiling a raylib game for web using emscripten, the generated output files are:

  • mygame.html - It's a web skeleton with a canvas element where mygame.js runs.
  • mygame.js - It's the actual game code compiled to asm.js.
  • mygame.data - It's the game resources embedded into a binary file.

The base HTML skeleton or template could be customized. I want to add a custom raylib html template to contain raylib games. Something like this: http://www.raylib.com/just_do.html (but far more elegant...).

Default emscripten web templates are:
https://github.com/kripken/emscripten/blob/master/src/shell.html
https://github.com/kripken/emscripten/blob/master/src/shell_minimal.html

A custom template can be passed to emcc at compile time using: --shell-file <path>

<path> is the path name to a skeleton HTML file used when generating HTML output. The shell file used needs to have this token inside it: {{{ SCRIPT }}}.
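For illustration, passing a custom shell to emcc could look like this (exact sources, libraries and flags depend on the project setup):

emcc mygame.c -o mygame.html libraylib.a -s USE_GLFW=3 --shell-file raylib_shell.html --preload-file resources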

[camera] Add support for changing camera FOVY

Some students ask me about changing the 3d camera field-of-view. Right now raylib FOVY for 3d is fixed to 45 degrees. Allowing the user to change FOVY requires some redesign of struct Camera and a review of all examples... but after some thinking I realized it could be useful.

Camera system will be updated to:

typedef struct Camera {
    Vector3 position;
    Vector3 target;
    Vector3 up;
    float fovy;           // Field-of-view angle (Y axis)
} Camera;

Involved functions to change:

void Begin3dMode(Camera camera)
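Once fovy is part of struct Camera, usage could look like this (sketch based on the proposal above):

Camera camera = { 0 };
camera.position = (Vector3){ 0.0f, 10.0f, 10.0f };
camera.target = (Vector3){ 0.0f, 0.0f, 0.0f };
camera.up = (Vector3){ 0.0f, 1.0f, 0.0f };
camera.fovy = 60.0f;                // Custom field-of-view instead of the fixed 45 degrees

Begin3dMode(camera);
    // ... 3d drawing ...
End3dMode();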

[camera] Add an easy 2D camera system: Camera2D

Some raylib users ask for a simple 2D camera system.

It could include basic movement, rotation and zooming.

Here is a struct proposal:

typedef struct Camera2D {
    Vector2 position;      // To move camera around
    Vector2 origin;        // Required for rotation and zoom
    float rotation;        // Rotation of the world in angles 
    float zoom;            // Zoom factor (world scaling)
} Camera2D;

Usage could be through BeginDrawingEx():

Camera2D camera = {{ 0, 0 }, { 0, 0 }, 0.0f, 1.0f };
...
BeginDrawingEx(camera);
    // Draw 2D elements
EndDrawing();

An improved version of BeginDrawingEx() was already designed but it could be better moved to:

void BeginDrawingPro(int blendMode, Shader shader, Matrix transform);

This new version tries to resemble the similar XNA function spriteBatch.Begin(), but maybe some of the required parameters are not that useful...

Feature Request

Hi, out of curiosity, can there be support for the following features?

  • OpenGL ES 1.1 support instead of OpenGL 1.1, which can allow the use of VBOs
  • Render to Texture for OpenGL 3 / ES 2
  • Lighting (and for OpenGL 3 / ES 2, a default shader that supports some lighting features)
  • Light mapping (a second set of UVs in the Vertex Buffer)
  • Generation of normals and UVs for spheres, boxes, planes, and other primitives

[gestures] GESTURE_HOLD not detected properly

The gestures detection system doesn't detect HOLD properly; it changes to GESTURE_DRAG instead.

Related functions [gestures.c]:

static void ProcessMotionEvent(GestureEvent event);

ProcessMotionEvent() is launched only when an event has been detected; the gestures update logic should be reviewed to detect the HOLD state correctly.

Here is a quick draft of the gestures detection states:

(image: draft diagram of the gestures detection states)

release 1.3 android template small bug

IsScreenTouched() from 1.2 is used where I guess IsGestureDetected() from 1.3 should be used.

affected file:

https://github.com/raysan5/raylib/blob/master/templates/android_project/jni/basic_game.c

After that, compilation for Android works just fine and the app runs. I had to add
InitGesturesSystem(struct android_app *app);
and
void UpdateGestures(void);

to make it work, but still the game screen jumps from TITLE to GAME to END to TITLE again very fast while I hold my finger on the tablet screen.

I just modified a few lines:
https://gist.github.com/dmalves/de34b9a56be2c9d5ebb0

How can I detect a single screen touch?

[audio] Is there a lightweight alternative to OpenAL Soft?

Since raylib's creation I've been looking for an alternative to OpenAL Soft. It works great, but OpenAL Soft is a very big library and raylib programs must be distributed with openal32.dll; I don't like that.

Ideally, I would like to find an audio library with the following features:

  • Lightweight, compilable as a small static library or a header-only library (similar to the stb libraries) to be added to the executable at compile time.
  • Multiplatform, it should work (or have versions) on Windows, Linux, OSX, Android, RaspberryPi and HTML5.
  • Easy to use interface (similar to OpenAL Soft) is desirable.
  • No fancy effects or real-time audio postprocessing required, just play audio.

I've been analyzing the possibility of using PortAudio, but I'm not sure if it would be a good replacement... Any ideas? Does anyone know about an alternative?

Update GLFW to 3.1?

New GLFW version was released this week. I see raylib uses 3.0.4, which is a year old now.

[gestures] Review time measuring system on Windows

The gestures module is designed to be used standalone (as much as possible). Time measuring is required for TAP_TIMEOUT and, when possible, a high-resolution timer should be used.

Related functions [gestures.c]:

static double GetCurrentTime();
static void ProcessMotionEvent(GestureEvent event);

On Linux-based systems (Linux, RaspberryPi, Android), the clock_gettime() function included in time.h is used for time measurement. On Windows we can use GetSystemTimePreciseAsFileTime(), included in windows.h. The problem appears when including windows.h: some of its symbols conflict with raylib ones (Rectangle, CloseWindow(), ShowCursor()).

Does anyone know an alternative for time measurement on Windows that doesn't involve including windows.h? If not, is it possible to avoid the symbol conflicts?
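One possible workaround (just a sketch, not a decided solution) is to avoid windows.h entirely and hand-declare only the two timer functions needed; they live in kernel32, which is linked by default:

// Sketch: declare the required Win32 timer functions by hand instead of including windows.h
int __stdcall QueryPerformanceCounter(unsigned long long int *lpPerformanceCount);
int __stdcall QueryPerformanceFrequency(unsigned long long int *lpFrequency);

static double GetCurrentTime(void)
{
    unsigned long long int clockFrequency, currentTime;

    QueryPerformanceFrequency(&clockFrequency);     // Ticks per second
    QueryPerformanceCounter(&currentTime);          // Current tick count

    return (double)currentTime/clockFrequency*1000.0;   // Time in milliseconds
}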

[camera] Small glitch on zoom-in with mouse wheel

On CAMERA_FREE mode, after setting internal camera position and target, if zooming in with mouse wheel, the first zoom jump is not correct, it seems there is a small glitch.

Related raylib example: core_3d_camera_free.c

Related functions [camera.c]:

static void ProcessCamera(Camera *camera, Vector3 *playerPosition);

It seems the issue could be related to some distance calculation.

Consider renaming "LoadCubesmap"

Hi! The issue I'm reporting here is kind of minor. If you find it too picky, feel free to ignore it without a second thought.

Cube mapping has a long history in computer graphics and the term is already associated to a different concept than that implied by the raylib codebase.

Although a Cubesmap is not a cube map, the similarity of the names can lead many to confusion (e.g. experienced graphics programmers thinking they can build a skybox from a Cubesmap or novices trained on raylib mistaking its Cubesmaps with anybody else's CubeMaps).

I would suggest LoadCubicMap or LoadMapOfCubes as alternative names to the LoadCubesmap function on src/models.c and src/raylib.h.

Cheers!

[audio] Add support for WAV music streaming

At the moment, only OGG music streaming is supported. Streaming from WAV file could be useful too.

Related functions [audio.c]:

void PlayMusicStream(char *fileName);
void UpdateMusicStream(void);
static bool BufferMusicStream(ALuint buffer);

The WAV file should be opened and its data read piece by piece (a few samples every time).
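A hedged sketch of what the buffer-refill step could look like (hypothetical globals; assumes the WAV header has already been parsed and the data is 16-bit stereo):

#include <stdio.h>
#include <stdbool.h>
#include <AL/al.h>

static FILE *wavFile = NULL;            // WAV file handle, positioned at the sample data
static int wavSampleRate = 44100;       // Read from the WAV header
#define MUSIC_BUFFER_SAMPLES   4096

// Refill one OpenAL buffer with the next chunk of WAV samples
static bool BufferMusicStream(ALuint buffer)
{
    short pcm[MUSIC_BUFFER_SAMPLES];

    size_t samplesRead = fread(pcm, sizeof(short), MUSIC_BUFFER_SAMPLES, wavFile);
    if (samplesRead == 0) return false;     // End of stream reached

    alBufferData(buffer, AL_FORMAT_STEREO16, pcm, (ALsizei)(samplesRead*sizeof(short)), wavSampleRate);
    return true;
}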

[core][rlgl] TakeScreenshot() not working as expected

When taking a screenshot, if some transparent element has been drawn, returned PNG image shows the transparent pixels.

Related functions [core.c][rlgl.c]

static void TakeScreenshot(void);
unsigned char *rlglReadScreenPixels(int width, int height);

Framebuffer color data is retrieved with glReadPixels() as GL_RGBA. Although we don't see the transparency in the running window, the transparent info is there... But the user probably expects a non-transparent image...

[text] TrueType (TTF) fonts support

Generate a SpriteFont from a TTF file; use the stb_truetype library to read the .ttf character data.

Related functions [text.c]:

static SpriteFont LoadTTF(const char *fileName, int fontSize);

The generated SpriteFont should contain a generated texture similar to:

(image: example of a generated SpriteFont character-atlas texture)

To generate that texture, every character's data should be read from the TTF file (using the stb_truetype library); every character is an image plus some data. Once all characters are obtained, they should be combined into a texture and some additional data.
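A minimal sketch of that baking step using stb_truetype (hypothetical helper and sizes; stbtt_BakeFontBitmap() packs the characters into a single grayscale atlas):

#define STB_TRUETYPE_IMPLEMENTATION     // Required once, in a single .c file
#include "stb_truetype.h"
#include <stdio.h>
#include <stdlib.h>

#define FONT_PIXEL_HEIGHT   32      // Baked character height in pixels
#define ATLAS_WIDTH        512
#define ATLAS_HEIGHT       512
#define FIRST_CHAR          32      // Space
#define NUM_CHARS           95      // Printable ASCII range

// Returns an 8-bit grayscale atlas image ready to be uploaded as a texture;
// per-character rectangles and offsets are written into charData (NUM_CHARS entries)
static unsigned char *BakeFontAtlas(const char *fileName, stbtt_bakedchar *charData)
{
    FILE *ttfFile = fopen(fileName, "rb");
    if (ttfFile == NULL) return NULL;

    fseek(ttfFile, 0, SEEK_END);
    long size = ftell(ttfFile);
    rewind(ttfFile);

    unsigned char *ttfBuffer = (unsigned char *)malloc(size);
    fread(ttfBuffer, 1, size, ttfFile);
    fclose(ttfFile);

    unsigned char *atlas = (unsigned char *)calloc(ATLAS_WIDTH*ATLAS_HEIGHT, 1);
    stbtt_BakeFontBitmap(ttfBuffer, 0, FONT_PIXEL_HEIGHT, atlas, ATLAS_WIDTH, ATLAS_HEIGHT,
                         FIRST_CHAR, NUM_CHARS, charData);

    free(ttfBuffer);
    return atlas;
}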

Questions regarding development style

Not sure if anyone is open to other development styles. Wanted to bounce an idea off you all and see what you thought.

Is it possible to have all function definitions pre-created for 1.4 goals? This might give guidance as to how you want things developed. Others could come along and complete things on a function by function basis rather than a feature by feature basis. Also, marking the definitions with unique comments to flag them for being incomplete and giving detailed instructions on what they should do.

[textures] Implement Floyd-Steinberg dithering for 16 bpp formats conversion

When converting our texture to a 16 bpp format, support Floyd-Steinberg dithering algorithm.

Related functions [textures.c]:

void ImageFormat(Image *image, int newFormat);

Dithering should be available (or automatically applied) when converting from UNCOMPRESSED_R8G8B8 or UNCOMPRESSED_R8G8B8A8 to the following formats:

UNCOMPRESSED_R5G6B5,            // 16 bpp (no-alpha)
UNCOMPRESSED_R5G5B5A1,          // 16 bpp (1 bit alpha)
UNCOMPRESSED_R4G4B4A4,          // 16 bpp (4 bit alpha)

An additional function could be defined; here is a declaration proposal:

unsigned short *ImageDataDither(Color *pixels, int rBpp, int gBpp, int bBpp, int aBpp);
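For reference, the core error-diffusion step of Floyd-Steinberg could look like this per-channel sketch (width/height added as parameters for illustration; they are not part of the declaration above):

static unsigned char ClampToByte(int value)
{
    if (value < 0) return 0;
    if (value > 255) return 255;
    return (unsigned char)value;
}

// Dither one 8-bit channel down to 'targetBits' bits using Floyd-Steinberg error diffusion
static void DitherChannel(unsigned char *channel, int width, int height, int targetBits)
{
    int levels = (1 << targetBits) - 1;

    for (int y = 0; y < height; y++)
    {
        for (int x = 0; x < width; x++)
        {
            int oldValue = channel[y*width + x];
            int newValue = ((oldValue*levels + 127)/255)*255/levels;    // Quantize to target bit depth
            channel[y*width + x] = (unsigned char)newValue;

            int error = oldValue - newValue;

            // Distribute the quantization error to neighbour pixels (weights 7/16, 3/16, 5/16, 1/16)
            if (x + 1 < width) channel[y*width + x + 1] = ClampToByte(channel[y*width + x + 1] + error*7/16);
            if (y + 1 < height)
            {
                if (x > 0) channel[(y + 1)*width + x - 1] = ClampToByte(channel[(y + 1)*width + x - 1] + error*3/16);
                channel[(y + 1)*width + x] = ClampToByte(channel[(y + 1)*width + x] + error*5/16);
                if (x + 1 < width) channel[(y + 1)*width + x + 1] = ClampToByte(channel[(y + 1)*width + x + 1] + error*1/16);
            }
        }
    }
}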

Store multisampling in FBO

When using postprocessing shaders, multisampling is not stored in the FBO, so the multisampling hint set with SetConfigFlags() doesn't make any difference...

I found some threads on the Internet about this issue.

I tried writing some lines of code but nothing worked (either no difference at all or "frame buffer object couldn't be created"...).

The same happens when compiling for web.
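A hedged sketch of one possible approach on an OpenGL 3.3 desktop context (names are hypothetical; 'postproFbo' stands for the FBO already backing the postprocessing render texture; this path is not available on WebGL 1.0, which matches the web behavior described above): render into a multisampled FBO and resolve it with a blit.

// Create a 4x multisampled FBO (color + depth renderbuffers)
static unsigned int SetupMsaaFramebuffer(int width, int height)
{
    GLuint msaaFbo = 0, colorRbo = 0, depthRbo = 0;

    glGenFramebuffers(1, &msaaFbo);
    glBindFramebuffer(GL_FRAMEBUFFER, msaaFbo);

    glGenRenderbuffers(1, &colorRbo);
    glBindRenderbuffer(GL_RENDERBUFFER, colorRbo);
    glRenderbufferStorageMultisample(GL_RENDERBUFFER, 4, GL_RGBA8, width, height);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, colorRbo);

    glGenRenderbuffers(1, &depthRbo);
    glBindRenderbuffer(GL_RENDERBUFFER, depthRbo);
    glRenderbufferStorageMultisample(GL_RENDERBUFFER, 4, GL_DEPTH_COMPONENT24, width, height);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthRbo);

    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    return msaaFbo;
}

// After drawing the scene into msaaFbo, resolve (downsample) it into the postprocessing FBO:
// glBindFramebuffer(GL_READ_FRAMEBUFFER, msaaFbo);
// glBindFramebuffer(GL_DRAW_FRAMEBUFFER, postproFbo);
// glBlitFramebuffer(0, 0, width, height, 0, 0, width, height, GL_COLOR_BUFFER_BIT, GL_NEAREST);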

Android app cycle not paused when app loses focus

raylib takes care of the Android app cycle internally; this decision was taken to make life easier for students that want to create raylib Android apps. After all, experienced programmers can just modify raylib internally. App cycle management should be reviewed.

Related functions [core.c]:

void InitWindow(int width, int height, struct android_app *state);
static void AndroidCommandCallback(struct android_app *app, int32_t cmd);

On app focus lost, app should be paused.

Direction for gradients?

Will there be any future changes that allow for vector2 directions of gradients?

Something like
1,0 for 0,360
-1,0 for 180...

Android back button event breaks the app

When pressing back button on a raylib Android app, it breaks.

Related functions [gestures.c]

static int32_t AndroidInputCallback(struct android_app *app, AInputEvent *event);

Android back button event should be properly managed.

[audio] Chiptunes support

Wondering if there was any support for this. I have been looking into it and am going this route: kd7tck@8a16d76

Wanted a little feedback, wondering if better libraries are out there.

[core] Fullscreen mode not working properly in some monitors

When compiling a program to be fullscreen, depending on the monitor it runs on, it is not scaled (just centered on the screen) or not scaled properly. It seems this issue could be related to the monitor's supported video modes; the GLFW library initializes the correct video mode.

To enable fullscreen mode, call SetConfigFlags(FLAG_FULLSCREEN_MODE) before InitWindow()

Related functions [core.c]:

static void InitDisplay(int width, int height);

Useful code snippet:

// Get the current video mode of the primary monitor (the desktop resolution)
const GLFWvidmode *mode = glfwGetVideoMode(glfwGetPrimaryMonitor());

int windowWidth = mode->width;
int windowHeight = mode->height;

// Get the full list of video modes supported by the primary monitor
int count;
const GLFWvidmode *modes = glfwGetVideoModes(glfwGetPrimaryMonitor(), &count);

Tested on Windows platform

[textures] Implement some image manipulation functions

While creating examples and demos I missed some image manipulation functions. The following proposed functions could be useful.

Desired functions [textures.c]:

void ImageCrop(Image *image, Rectangle crop);
void ImageResize(Image *image, int newWidth, int newHeight); // Use stb_image_resize.h
void ImageDraw(Image *dst, Image src, Rectangle srcRec, Rectangle dstRec);
void ImageDrawText(Image *dst, const char *text, Vector2 position, int size, Color color);
void ImageDrawTextEx(Image *dst, SpriteFont font, const char *text, Vector2 position, int size, Color color);

Compressed image formats won't be supported, and the returned image should keep the same format as the original one. To simplify image operations, in some cases the image could be converted to R8G8B8A8 (32 bit) format, operated on, and reconverted to the original format at the end.

Some notes on every function implementation:

  • ImageCrop() is quite easy, just take care of out-of-image limits (see the sketch after this list).
  • ImageResize() is not that easy, but stb_image_resize.h can be used to simplify the process; the function would be just a wrapper.
  • ImageDraw() is just an image blitting process; it could use ImageResize() if the destination rectangle size is different from the source rectangle size.
  • ImageDrawText() is quite complex. It should use the default raylib font, but the font image must be retrieved from the SpriteFont texture id. Then, for every letter to draw into the resulting image, the letter's source rectangle should be taken from the font image and ImageDraw() could be used. Depending on the desired font size, ImageResize() should also be used.
  • ImageDrawTextEx() is similar to ImageDrawText() but using a custom font.
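As mentioned above for ImageCrop(), a hypothetical sketch could look like this (assuming uncompressed 32-bit RGBA pixel data stored row by row in image->data):

#include <stdlib.h>
#include <string.h>

void ImageCrop(Image *image, Rectangle crop)
{
    // Clamp crop rectangle to image limits
    if (crop.x < 0) { crop.width += crop.x; crop.x = 0; }
    if (crop.y < 0) { crop.height += crop.y; crop.y = 0; }
    if ((crop.x + crop.width) > image->width) crop.width = image->width - crop.x;
    if ((crop.y + crop.height) > image->height) crop.height = image->height - crop.y;

    int cropX = (int)crop.x, cropY = (int)crop.y;
    int cropWidth = (int)crop.width, cropHeight = (int)crop.height;

    unsigned char *srcData = (unsigned char *)image->data;
    unsigned char *cropData = (unsigned char *)malloc(cropWidth*cropHeight*4);

    // Copy the selected region row by row (4 bytes per pixel)
    for (int y = 0; y < cropHeight; y++)
    {
        memcpy(cropData + y*cropWidth*4, srcData + ((cropY + y)*image->width + cropX)*4, cropWidth*4);
    }

    free(image->data);
    image->data = cropData;
    image->width = cropWidth;
    image->height = cropHeight;
}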

[physac] Redesign physics module (simplify)

Physics module requires some redesign. It could be simplified for easier usage.

The proposed solution consists of keeping an internal physic objects pool and just updating them all every frame (it also allows moving physics to a separate thread in the future).

Possible struct for PhysicObject:

typedef struct PhysicObject {
    unsigned int id;
    Transform transform;
    Collider collider;
    Rigidbody rigidbody;
    bool enabled;
} PhysicObject;

Some possible functions:

void InitPhysics(void);          // Create pointers array to store physic objects
void ClosePhysics(void);         // Unload physic objects
void UpdatePhysics(void);        // Go through physic objects list and update them
PhysicObject *LoadPhysicObject(void);     // Create a new physic object and put into the internal list

Usage example:

PhysicObject *player = LoadPhysicObject();
player->transform.position = (Vector2){ 100, 100 };

[core] Review Gamepad support

Review Gamepad support for Raspberry Pi. It doesn't work properly. It seems axis movement is not scaled properly, just raw values are read.

Related functions [core.c]:

static void *GamepadThread(void *arg);
Vector2 GetGamepadMovement(int gamepad);

Just scale read values properly.
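For illustration, assuming the raw axis values come in the usual signed 16-bit range of the Linux joystick interface, the scaling would be just:

// Normalize a raw axis value (assumed range -32768..32767) to the expected -1.0..1.0 range
float axisValue = (float)rawAxisValue/32768.0f;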

[core] Add support for new platform: Oculus Rift

Add support for PLATFORM_OCULUS using Oculus SDK 0.7 (DK2).

Related functions [core.c]:

static void InitDisplay(int width, int height);
static void InitGraphics(void);
void BeginDrawing(void);
void EndDrawing(void);

The process goes as follows (draft):

  • Initialize OVR device
  • Initialize OpenGL FBOs with textures for render (every eye)
  • On render, draw every eye independently (multiple render passes)

Note that OpenGL 3.3 is required. Probably new helper functions will be required on rlgl.c.

Android volume buttons break the app

When pressing the up/down volume buttons, a raylib Android app crashes. Not sure if this issue is device-dependent, SDK version-dependent or raylib's. Probably the third.

Related functions [gestures.c]

static int32_t AndroidInputCallback(struct android_app *app, AInputEvent *event);

Android volume buttons should be properly managed and system master volume should be increased or decreased accordingly. For audio management, raylib uses internally OpenAL Soft library.

[rlua] Add LUA support for scripting

Add a simple and intuitive wrapper for the Lua scripting language.

This issue is just a desired feature and it has not been developed yet (I mean, deciding how to implement it). Any help on developing it is highly appreciated!

A draft concept for possible function signature could be:

void *ExecuteLuaScript(const char *name, int numParams, ...)

The idea is to just call a Lua script with the required params and get a return value, without worrying about setting up the Lua state and libraries or data push/pop.

It's been more than 10 years since the last time I used Lua and I need some refresh on how it works...
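A minimal sketch of the idea using the standard Lua C API (parameter passing through the '...' varargs is left out for brevity; names are illustrative):

#include "lua.h"
#include "lualib.h"
#include "lauxlib.h"

// Run a Lua script and read a single numeric return value (if any)
double ExecuteLuaScriptSimple(const char *fileName)
{
    lua_State *L = luaL_newstate();
    luaL_openlibs(L);                       // Load the standard Lua libraries

    double result = 0.0;
    if (luaL_dofile(L, fileName) == 0)      // Run the script; return values stay on the stack
    {
        result = lua_tonumber(L, -1);       // Read the value on top of the stack as a number
    }

    lua_close(L);
    return result;
}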

[textures][rlgl] Add support for texture pixel data retrieval on OpenGL ES

On OpenGL ES 2.0, glGetTexImage() is not supported to retrieve pixel data from a texture id (GPU VRAM to CPU RAM data copy). It could be done using an FBO path: just render the texture to an FBO and retrieve the data using glReadPixels().

Related functions [textures.c][rlgl.c]:

Image GetTextureData(Texture2D texture);
void *rlglReadTexturePixels(unsigned int textureId, unsigned int format);

The process is a bit tedious and probably very slow but it seems to be the only path for limited OpenGL ES 2.0 devices.

Good reference to start: https://www.opengl.org/wiki/Framebuffer_Object_Examples
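A hedged sketch of that FBO path (hypothetical helper; the texture must be in a color-renderable format, otherwise the framebuffer will be incomplete):

#include <GLES2/gl2.h>
#include <stdlib.h>

// Attach the texture to a temporary FBO and read its pixels back with glReadPixels()
unsigned char *ReadTexturePixelsGLES(unsigned int textureId, int width, int height)
{
    GLuint fbo = 0;
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);

    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, textureId, 0);

    unsigned char *pixels = NULL;
    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE)
    {
        pixels = (unsigned char *)malloc(width*height*4);
        glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);   // GPU -> CPU copy
    }

    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glDeleteFramebuffers(1, &fbo);

    return pixels;      // NULL if the texture could not be attached as a color attachment
}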

[core][rlgl] Redesign Shaders/Textures system, use Materials

Right now, struct Shader references the Texture2D objects; struct Model keeps references to Shader and Texture2D objects.

This data organization could be updated for a more professional (and coherent) layout. A new struct Material is proposed to include Shader and multiple Texture2D objects.

This change requires some internal raylib redesign, mainly of [rlgl] module, but in long term it will be very beneficial for raylib. ;)

[models] Collision between box and sphere not working properly

Function to review:

bool CheckCollisionBoxSphere(Vector3 minBBox, Vector3 maxBBox, Vector3 centerSphere, float radiusSphere);

Code example involved: models_box_collisions.c

This function could use the new struct BoundingBox instead of minBBox and maxBBox.

[gestures] Include mouse-based gestures detection

On desktop platforms (Windows, Linux, OSX), gestures can also be performed with mouse instead of the touch interface.

Related functions [gestures.c]:

void InitGesturesSystem(void);

When initializing the gestures system, mouse event callbacks should be set up. Right now GLFW3 takes care of mouse event detection on desktop platforms through glfwPollEvents() in module core.c.

Adding support for mouse-based gestures could be useful on these interfaces to unify the input detection system. That way, the same code used for desktop platforms can be used on touch-based platforms (Android, HTML5).

GL_SELECT

Is it possible to select a 3d object (a cube for example) using GL_SELECT mode?

[core][rlgl] Raycast system not working properly

Raycast system intended for 3d picking is not working properly.

Related raylib example: core_3d_picking.c

Related functions [core.c][rlgl.c]:

Ray GetMouseRay(Vector2 mousePosition, Camera camera);
Vector3 rlglUnproject(Vector3 source, Matrix proj, Matrix view);

After long testing, it seems there is some error inside rlglUnproject() but not completely sure...
rlglUnproject() just resembles gluUnproject() but it doesn't work. Maybe an issue with MatrixInvert() or MatrixMultiply() order?

Make raylib even more directory independent

The installer automatically installs raylib into C:\raylib - this is not desired by everyone. You can move the folder somewhere else of course, but the compiler scripts depend on this directory. It would be nice if those were relative.

README

Hello!

I found an error in your README file! It says "This file is empy" instead of "Empty".

Thanks!

Btw, hi Ray! 🎯

[rlgl] Remove GLEW library dependency (if possible)

In order to reduce raylib external dependencies, OpenGL extensions loading library GLEW could be replaced by a lightweight alternative to load only OpenGL 3.3 Core profile required extensions.

Related functions [rlgl.c]:

void rlglInit(void);

Some alternatives have been already tested: glad and glLoadGen. Those libraries generate custom headers to be included in the project with only the required functionality.

glad seems the most interesting alternative but it's not working (neither glLoadGen), #include <windows.h> must be removed due to conflicts with raylib, but glad compiles anyway... When running any example, it crashes.

"Shaders not supported on OpenGL 1.1" warning

$ ./shaders_custom_uniform
INFO: Initializing raylib (v1.3.0)
INFO: Display device initialized successfully
INFO: Display size: 1920 x 1080
INFO: Render size: 800 x 450
INFO: Screen size: 800 x 450
INFO: Viewport offsets: 0, 0
INFO: GPU: Vendor:   X.Org
INFO: GPU: Renderer: Gallium 0.4 on AMD ARUBA
INFO: GPU: Version:  3.0 Mesa 10.6.5
INFO: GPU: GLSL:     1.30
INFO: OpenGL graphic device initialized successfully
INFO: [TEX ID 1] Texture created successfully (128x128)
INFO: [TEX ID 1] Default font loaded successfully
INFO: [resources/model/dwarf.obj] Model loaded successfully in RAM (CPU)
INFO: [resources/model/dwarf_diffuse.png] Image loaded successfully (2048x2048)
INFO: [TEX ID 2] Texture created successfully (2048x2048)
WARNING: Shaders not supported on OpenGL 1.1
INFO: Target time per frame: 16.667 milliseconds
INFO: [TEX ID 2] Unloaded texture data from VRAM (GPU)
INFO: [VBO ID 0][VBO ID 0][VBO ID 0] Unloaded model data from VRAM (GPU)
INFO: [TEX ID 1] Unloaded texture data from VRAM (GPU)

Then app runs without a shader applied. Same for other shader related examples.

Tested on latest Arch Linux with Mesa driver, here's glxinfo output:

$ glxinfo | grep OpenGL
OpenGL vendor string: X.Org
OpenGL renderer string: Gallium 0.4 on AMD ARUBA
OpenGL core profile version string: 3.3 (Core Profile) Mesa 10.6.5
OpenGL core profile shading language version string: 3.30
OpenGL core profile context flags: (none)
OpenGL core profile profile mask: core profile
OpenGL core profile extensions:
OpenGL version string: 3.0 Mesa 10.6.5
OpenGL shading language version string: 1.30
OpenGL context flags: (none)
OpenGL extensions:
OpenGL ES profile version string: OpenGL ES 3.0 Mesa 10.6.5
OpenGL ES profile shading language version string: OpenGL ES GLSL ES 3.00
OpenGL ES profile extensions:

[models] Billboards not working properly when drawing in front of lines

It seems there is some depth-test issue when drawing with DrawBillboard() in front of grid lines.

Related raylib example: models_billboard.c

Related functions [models.c]:

void DrawBillboard(Camera camera, Texture2D texture, Vector3 center, float size, Color tint)

Lines, Triangles and Quads are placed in different buffers inside rlgl.c. When calling rlglDraw(), the buffers drawing order is always: Triangles, Quads and Lines. In a 2D orthographic projection this could be a problem, but not in a 3D perspective view because depth-test is enabled... So, why are the lines behind the billboard quad not visible?

[rlgl] 2D vs 3D, review buffers usage, DEPTH issues

raylib uses different sets of vertex buffers for LINES (position, color), TRIANGLES (position, color) and QUADS (position, texcoord, color, index). Depending on what we draw, vertices are loaded in a different set.

The problem comes because raylib uses those same sets equally for 2d and 3d vertex data. That decision was taken for simplicity and because at that moment raylib 3d support was quite limited (only some geometric shapes). But raylib has grown a lot and right now that solution is messy (depth problems between 2d and 3d) and not valid any more; it requires some redesign.

One solution is using different sets of buffers for 2d and 3d, it could be this way:

2D - LINES buffers: [position, color] vertex attributes
2D - TRIANGLES buffers: [position, color] vertex attributes - Used only for [shapes]
2D - QUADS buffers: [position, texcoord, color, index] vertex attributes - Used only for [textures]
3D - LINES buffers: [position, color] vertex attributes
3D - TRIANGLES buffers: [position, texcoords, color] vertex attributes - Used for geometric shapes, including billboards.

Some required actions:

  • [shapes] Use only 2D TRIANGLES buffers to store vertex data -NOT POSSIBLE-
    Better keep using QUADS for rectangles (DrawRectangle()). Although depth works ok with TRIANGLES and QUADS, blending fails because TRIANGLES buffers are processed before QUADS, and so QUADS fragments are not available to be blended with TRIANGLES fragments. This issue is noticeable when trying to use a full-screen rectangle for fading effects. :(
  • [textures] Use only 2D QUADS buffers to store vertex data.
  • [models] Use only 3D LINES and 3D TRIANGLES buffers to store vertex data.
    Not that easy: DrawCubeTexture() and DrawBillboardRec() use QUADS because the current TRIANGLES buffers do not include a texture-coordinates buffer. New TRIANGLES buffers would be required to store that additional attribute... but that implies additional memory consumption even when using only 2D (because buffers are set up at InitWindow()). One solution could be a lazy buffers initialization at Begin3dMode()...
  • [rlgl] Setup new 3D LINES and 3D TRIANGLES buffers.
    Are they really required? The main utility is to avoid 2D drawing conflicting with 3D drawing (due to same buffers usage and depth test), but maybe a better solution could be found. Buffers memory consumption and initialization is also another concern.
  • [rlgl] Switch between 2D and 3D buffers depending on drawing mode.
    Just use a global variable (bool 3dMode = false;), set it to true on Begin3dMode() and set it back to false on End3dMode(). Depending on 3dMode, distribute vertex data accordingly in rlVertex3f() (see the sketch after this list). Code complexity increases a bit...
  • [rlgl] Review shaders used for 2D and 3D. Same shaders?
  • [core] DEPTH_TEST could be disabled for 2D, just use it for 3D.
    If depth test is disabled on 2D, LINES are always drawn behind QUADS because LINES buffers are processed before QUADS buffers. :(
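A small sketch of the buffer-switch idea from the [rlgl] point above (hypothetical internal names; the real change would live inside rlgl.c):

static bool vertexData3dMode = false;   // Selects which buffer set rlVertex3f() fills

// Called from Begin3dMode(): route vertex data to the 3D buffer set
static void rlglEnable3dBuffers(void) { vertexData3dMode = true; }

// Called from End3dMode(): route vertex data back to the 2D buffer set
static void rlglDisable3dBuffers(void) { vertexData3dMode = false; }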

[gestures] Add possibility to enable only desired gestures

Sometimes the user can be interested in detecting only some specific gestures and ignore all the others.

Related functions [gestures.c]

void SetGesturesEnabled(unsigned int gestureFlags);

The function is already declared but remains to be completed; the idea was to use a mask-based flags system to enable/disable the desired gestures. Enabled/disabled gestures must then be taken into account (or not) in UpdateGestures().
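A sketch of that mask-based approach (internal variable name is hypothetical):

static unsigned int enabledGestures = 0xFFFFFFFF;   // All gestures enabled by default

void SetGesturesEnabled(unsigned int gestureFlags)
{
    enabledGestures = gestureFlags;
}

// Inside the gestures update logic, a gesture is only registered if its flag is enabled:
// if ((enabledGestures & GESTURE_HOLD) && holdDetected) currentGesture = GESTURE_HOLD;

Usage would then be something like SetGesturesEnabled(GESTURE_TAP | GESTURE_DOUBLETAP); to ignore everything else.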

Raspberry Pi input system should be redesigned

The way inputs (keyboard/mouse/gamepad) are managed on Raspberry Pi is far from ideal. Some keyboards don't work properly, the mouse can be wrongly detected, and there is currently no gamepad support.

Related functions [core.c]:

static void InitKeyboard(void);
static void RestoreKeyboard(void);
static void PollInputEvents(void);

static void InitMouse(void);
static void *MouseThread(void *arg);

static void InitGamepad(void);

A raylib design goal is to keep dependencies to the minimum, so I decided to implement my own raw input reading mechanism on Raspberry Pi; on other platforms I use GLFW3, which works flawlessly. This decision was taken to allow raylib to work on the RPi without requiring the X Window System.

Different systems were implemented for keyboard and mouse reading:

Keyboard data is read directly from standard input (stdin), keyboard settings and mode are reconfigured (InitKeyboard()) and then data is read as raw (PollInputEvents()); at exit keyboard settings are restored (RestoreKeyboard()). It was decided to be done this way (instead of reading events) to allow usage from SSH connection.

Mouse data is read from DEFAULT_MOUSE_DEV, currently /dev/input/event1, a new thread is created to keep reading mouse data independently of game loop (MouseThread()).

Gamepad data read is not implemented (InitGamepad()).

I think the system should be redesigned to use always /dev/input/eventX but detecting which devices are connected to the Raspberry Pi is another issue.

Pixel perfect DrawRectangleLines under OpenGL 1.1

Currently, several pixel-based routines in RayLib are not pixel-perfect. I have picked DrawRectangleLines because I am using it in a program of mine.

DrawRectangleLines takes four arguments that define the shape of the rectangle to draw (int posX, int posY, int width, int height) and generates eight rlVertex2i calls to draw lines that define a square with corners at (posX+1, posY+1), (posX+width, posY+1), (posX+width, posY+height) and (posX+1, posY+height). Those corners are offset by one pixel, in both the horizontal and the vertical axis, from the intended behavior of the call.

Leaving aside that superficial mistake, in order to get pixel-perfect results, care must be taken to recompile the library after uncommenting the following line from core.c:BeginDrawing:

//  rlTranslatef(0.375, 0.375, 0);  // HACK to have 2D pixel-perfect drawing on OpenGL 1.1

Since that solution is less than ideal, and given that the initial orthographic projection is set as

rlOrtho(0, width - offsetX, height - offsetY, 0, 0, 1);   // top-left corner --> (0,0)

inside rlgl.c:rlglInitGraphics, I would suggest offsetting the corners of the rectangle by half a pixel so that they fall on their intended position. DrawRectangleLines would then read:

void DrawRectangleLines(int posX, int posY, int width, int height, Color color)
{
    rlBegin(RL_LINES);
        rlColor4ub(color.r, color.g, color.b, color.a);
        rlVertex2f(posX + .5, posY + .5);
        rlVertex2f(posX + width - .5, posY + .5);

        rlVertex2f(posX + width - .5, posY + .5);
        rlVertex2f(posX + width - .5, posY + height - .5);

        rlVertex2f(posX + width - .5, posY + height - .5);
        rlVertex2f(posX + .5, posY + height - .5);

        rlVertex2f(posX + .5, posY + height - .5);
        rlVertex2f(posX + .5, posY + .5);
    rlEnd();
}

I understand that working with non-integer coordinates can get cumbersome quickly, but only the rest of the 2D primitives of shapes.c would need to be adapted and users of RayLib would be blissfully shielded from having to think in terms of floating-point coordinates or from recompiling the library.

What do you think?

Cheers!

[rlgl] Find a generic way to deal with shader maps

Right now shader maps are not generic; in our default shader we define uniform location points for three kinds of maps:

  • diffuse -> bound to texture unit 0 (GL_TEXTURE0)
  • normal -> bound to texture unit 1 (GL_TEXTURE1)
  • specular -> bound to texture unit 2 (GL_TEXTURE2)

Those maps can be set with the following (not generic) functions:

void SetShaderMapDiffuse(Shader *shader, Texture2D texture);
void SetShaderMapSpecular(Shader *shader, const char *uniformName, Texture2D texture);
void SetShaderMapNormal(Shader *shader, const char *uniformName, Texture2D texture);

It's not possible to add additional maps to our default shader. To use additional maps, new uniform location points must be defined externally by user. Those map location points (new texture units) are also not considered by rlglDrawModel().

There is not an easy solution to this issue. One idea could be enabling a number of possible maps and keep track of the active ones:

typedef struct Shader {
    unsigned int id;                // Shader program id

    // Variable attributes
    int vertexLoc;        // Vertex attribute location point (vertex shader)
    int texcoordLoc;      // Texcoord attribute location point (vertex shader)
    int normalLoc;        // Normal attribute location point (vertex shader)
    int colorLoc;         // Color attribute location point (vertex shader)

    // Uniforms
    int mvpLoc;           // ModelView-Projection matrix uniform location point (vertex shader)
    int tintColorLoc;     // Color uniform location point (fragment shader)

    // Maps uniform locations
    int mapLoc[MAX_MAP_LOCATIONS];  // Map texture uniform location point (fragment shader)
    int mapCounter;                 // Number of maps used (texture units)
} Shader;

One function to set custom maps could be added:

void SetShaderMap(Shader *shader, Texture2D texture);

Obviously, the user should use them properly on the shader code.
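Following that proposal, a hypothetical SetShaderMap() could look like this (the texture0/texture1... uniform naming convention is an assumption):

#include <stdio.h>

void SetShaderMap(Shader *shader, Texture2D texture)
{
    if (shader->mapCounter >= MAX_MAP_LOCATIONS) return;    // No free map slots left

    char uniformName[32];
    sprintf(uniformName, "texture%i", shader->mapCounter);  // Assumed naming: texture0, texture1...

    shader->mapLoc[shader->mapCounter] = glGetUniformLocation(shader->id, uniformName);

    glUseProgram(shader->id);
    glUniform1i(shader->mapLoc[shader->mapCounter], shader->mapCounter);    // Map uniform -> texture unit N
    glActiveTexture(GL_TEXTURE0 + shader->mapCounter);
    glBindTexture(GL_TEXTURE_2D, texture.id);

    shader->mapCounter++;
}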

A similar solution could also be applied to vertex attributes for custom attributes.

Poor CPU Performance

When running simple text based games, the CPU usage can be as high as 30%. It can be traced back to a poor implementation of system polling with GLFW.
