
waitfree-mpsc-queue's Introduction

MPSCQ - Multiple Producer, Single Consumer Wait-Free Queue

A C11 library that allows multiple threads to enqueue items to a queue, and allows one thread (and only one thread) to dequeue from it.

This code is tested, but not proven. Use it at your own peril.

Interface

Creation and destruction of a queue can be done with:

struct mpscq *mpscq_create(struct mpscq *n, size_t capacity);
void mpscq_destroy(struct mpscq *q);

Passing a NULL pointer as n will allocate a new queue with malloc, initialize it, and return it. Passing a pointer to a struct mpscq as n will initialize that object. Calling the destroy function will free the internal data of the object, and if the object was allocated via malloc, it will be freed as well.

Enqueuing can be done with:

bool mpscq_enqueue(struct mpscq *q, void *obj);

which will enqueue obj in q, returning true if it was enqueued and false if it wasn't (queue was full).

Dequeuing can be done with:

void *mpscq_dequeue(struct mpscq *q);

which will return NULL if the queue was empty or an object from the queue if it wasn't. Note that a queue may appear to be empty if a thread is in the process of writing the object in the next slot in the buffer, but that's okay because the function can be called again (see the comments in the source for more on this).

The queue may also be queried for current number of items and for total capacity:

size_t mpscq_capacity(struct mpscq *q);
size_t mpscq_count(struct mpscq *q);

Comments

PLEASE report bugs to me if you find any (email me at [email protected]).

Technical Details

During the first half of the enqueuing function, we prevent writing to the queue if the queue is full. This is done by doing an add anyway, and then seeing if the old value was greater than or equal to max. If so, then we cannot write to the queue because it's full. This is safe for multiple threads, since the worst thing that can happen is a thread sees the count to be way above the max. This is okay, since it'll just report the queue as being full.

The second half of the enqueuing function gains near-exclusive access to the head element. It isn't completely exclusive, since the consumer thread may be observing that element. However, we prevent any producer threads from trying to write to the same area of the queue. Once head is fetched and incremented, we store the object to the head location, thus releasing that memory location.

In the dequeue function, we exchange the tail with NULL, and observe the return value. If the return value is NULL, then there's nothing in the queue and so we return NULL. If we got back an object, we just increment the tail and decrement the count, before returning.

Performance (preliminary)

Here's a quick comparison to a locked circular queue I wrote quickly, fueled by beer. With 64 threads, each writing 200 objects to the queue with the speed of 64 fairly slow threads (and, of course, a single thread reading from it with the speed of one fairly slow thread...), the lock-free queue wins pretty convincingly:

[Figure: benchmark plot titled "I WILL WRITE 500 OBJECTS, AND I WILL WRITE 500 MORE". Hard to see: the left-most data points are at x=50, not 0.]

Well, that's pretty nice. If your queue is small, then MPSCQ does wonders compared to locking, which is what I would expect.

waitfree-mpsc-queue's People

Contributors

dbittman · hq6 · klowner


waitfree-mpsc-queue's Issues

Head overflow

I believe mpscq_enqueue will have a problem when head overflows, as SIZE_MAX % q->max and 0 % q->max don't have to point to adjacent elements in the circular buffer.

This wouldn't be a problem if queue capacity was a power of two but mpscq_create doesn't require that.

Note implementation drawbacks in readme

As you mention in a comment:

/* a thread is adding to the queue, but hasn't done the atomic_exchange yet
		 * to actually put the item in. Act as if nothing is in the queue.
		 * Worst case, other producers write content to tail + 1..n and finish, but
		 * the producer that writes to tail doesn't do it in time, and we get here.
		 * But that's okay, because once it DOES finish, we can get at all the data
		 * that has been filled in. */

and another issue:

I think this depends on your definition of progress. The reason I wrote this is for use in kernel interrupt context in which nothing may block and must return in a bounded number of steps. Both operations fulfill this need. It's wait-free if you allow for failure, which I do. In your example, just because there's work that could be dequeued doesn't mean it must be, because we may establish a different valid ordering between the threads' calls.

there is a glaring problem where elements in the queue cannot be consumed unless pending writes (after space has been reserved) are performed in order. In practice, this could mean a queue being used to process tasks submitted from other threads becomes particularly unresponsive if a submitting thread happens to be preempted after reservation and before the element write.

This is generally solved with a compare-and-swap loop and "soft" write indexes, which can guarantee ordered atomic writes. It's also worth mentioning that the scenarios which tolerate failure are slim: even with such tolerance, this effectively creates a problem where the queue can be rendered temporarily unresponsive despite having several completed writes.

This isn't a problem if you don't have to worry about preemption, but it should be noted to prospective users that this is probably bad for userspace, especially with contention compared to other queue implementations.

It's not wait-free

The reason is in your comment in mpscq_dequeue: a producer thread can be suspended mid-write, so the consumer may not be able to make any progress even if the queue has data pushed by other producers.

Also, I've made an MPMC queue template which is also almost wait-free (for the same reason) and zero-copy, and can also reside in shared memory for IPC purposes: WFMPMC

Header is incompatible with C++

The header is not C++-friendly.

Reproducing the problem

Put the following program into a file called main.cc

#include "mpscq.h"
int main(){}

Compile with g++ -c main.cc and get multiple errors that look like this.

/usr/lib/gcc/x86_64-linux-gnu/4.9/include/stdatomic.h:68:9: error: ‘_Atomic’ does not name a type
 typedef _Atomic __UINT_FAST32_TYPE__ atomic_uint_fast32_t;
         ^
/usr/lib/gcc/x86_64-linux-gnu/4.9/include/stdatomic.h:69:9: error: ‘_Atomic’ does not name a type
 typedef _Atomic __INT_FAST64_TYPE__ atomic_int_fast64_t;

Compilation error using Make implicit rules

I cloned the repo and typed make. The output follows.

make
gcc -lpthread -pg  mpsc_test.o mpsc.o   -o mpsc_test
mpsc_test.o: In function `main':
mpsc_test.c:(.text.startup+0xb9): undefined reference to `pthread_create'
mpsc_test.c:(.text.startup+0xe2): undefined reference to `pthread_create'
mpsc_test.c:(.text.startup+0xfd): undefined reference to `pthread_join'
mpsc_test.c:(.text.startup+0x135): undefined reference to `pthread_join'
collect2: error: ld returned 1 exit status
<builtin>: recipe for target 'mpsc_test' failed
make: *** [mpsc_test] Error 1
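The failure is a link-order problem: with GNU ld, a library listed before the object files that need it is skipped, and the implicit rule above puts `-lpthread` first. A sketch of a fix, assuming `-lpthread` currently sits in `LDFLAGS` in the project's Makefile (which isn't shown here):

```make
# Libraries belong in LDLIBS, which Make's implicit link rule places after
# the object files; LDFLAGS is expanded before them.
LDFLAGS := -pg
LDLIBS  := -lpthread
```

The resulting link line becomes `gcc -pg mpsc_test.o mpsc.o -o mpsc_test -lpthread`. Alternatively, compiling and linking with `-pthread` avoids the issue entirely.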

Based on known algorithm

Hi
Is this implementation based on a published algorithm? It seems too straightforward to be true, notwithstanding that MPSC has simpler semantics...

Are these stronger guarantees actually needed?

This library is awesome, thanks!

Quick performance question. This code:

#define memory_order_release memory_order_seq_cst
#define memory_order_acquire memory_order_seq_cst
#define memory_order_relaxed memory_order_seq_cst

makes sure that all atomic operations happen with the maximum amount of synchronization. Is this required, or is this code just erring on the side of caution?

Check for argc in mpsc_test to avoid Segmentation Fault

mpsc_test doesn't check the argc argument of main, while the program requires at least argv[1] (the capacity of the queue).
This causes the program to segfault when launched without arguments.

Just check argc to avoid that, and print a small syntax help for the user.

Small patch is attached. Hope that can be useful. Thank you for the code provided.

argv.patch.gz
