thruster-rs / thruster

A fast, middleware based, web framework written in Rust

License: MIT License

Rust 97.62% Shell 0.06% HTML 0.02% Starlark 2.30%
rust web-framework web web-development thruster thruster-rs hacktoberfest

thruster's Introduction

Thruster

Get started with examples and walkthroughs on our website!

A fast and intuitive Rust web framework

Don't have time to read the docs? Check out:

✅ Runs in stable ✅ Runs fast ✅ Doesn't use unsafe

Documentation

Features

Motivation

Thruster is a web framework that aims to help developers be productive and consistent across projects and teams. Its goals are to be:

  • Performant
  • Simple
  • Intuitive

Thruster also

  • Does not use unsafe
  • Works in stable rust

Fast

Thruster can be run with different server backends and represents a nicely packaged layer over them. This means that it can keep up with the latest and greatest changes from the likes of Hyper, Actix, or even ThrusterServer, a home-grown http engine.

Intuitive

Based on frameworks like Koa and Express, Thruster aims to be a pleasure to develop with.

Example

To run an example, use cargo run --example <example-name>. For instance, run cargo run --example hello_world and open http://localhost:4321/.

Middleware Based

The core parts that make the async/await code work are the #[middleware_fn] attribute, which marks a middleware function as compatible with the stable futures that Thruster is built on, and the m! macro used in the actual routes.

A simple example using async/await is:

use std::boxed::Box;
use std::future::Future;
use std::pin::Pin;
use std::time::Instant;

use thruster::{App, BasicContext as Ctx, Request};
use thruster::{m, middleware_fn, MiddlewareNext, MiddlewareResult, Server, ThrusterServer};

#[middleware_fn]
async fn profile(mut context: Ctx, next: MiddlewareNext<Ctx>) -> MiddlewareResult<Ctx> {
    let start_time = Instant::now();

    context = next(context).await?;

    let elapsed_time = start_time.elapsed();
    println!(
        "[{}μs] {} -- {}",
        elapsed_time.as_micros(),
        context.request.method(),
        context.request.path()
    );

    Ok(context)
}

#[middleware_fn]
async fn plaintext(mut context: Ctx, _next: MiddlewareNext<Ctx>) -> MiddlewareResult<Ctx> {
    let val = "Hello, World!";
    context.body(val);
    Ok(context)
}

#[middleware_fn]
async fn four_oh_four(mut context: Ctx, _next: MiddlewareNext<Ctx>) -> MiddlewareResult<Ctx> {
    context.status(404);
    context.body("Whoops! That route doesn't exist!");
    Ok(context)
}

#[tokio::main]
async fn main() {
    println!("Starting server...");

    let mut app = App::<Request, Ctx, ()>::new_basic();

    app.get("/plaintext", m![profile, plaintext]);
    app.set404(m![four_oh_four]);

    let server = Server::new(app);
    server.build("0.0.0.0", 4321).await;
}

Error handling

Here's a nice example:

use thruster::errors::ThrusterError;
use thruster::{App, BasicContext as Ctx, Request};
use thruster::{m, middleware_fn, MiddlewareNext, MiddlewareResult, Server, ThrusterServer};

#[middleware_fn]
async fn plaintext(mut context: Ctx, _next: MiddlewareNext<Ctx>) -> MiddlewareResult<Ctx> {
    let val = "Hello, World!";
    context.body(val);
    Ok(context)
}

#[middleware_fn]
async fn error(mut context: Ctx, _next: MiddlewareNext<Ctx>) -> MiddlewareResult<Ctx> {
    let non_existent_param = "Hello, world".parse::<u32>()
        .map_err(|_| {
            let mut context = Ctx::default();
            
            context.status(400);

            ThrusterError {
                context,
                message: "Custom error message".to_string(),
                cause: None,
            }
        })?;

    context.body(&format!("{}", non_existent_param));

    Ok(context)
}

#[tokio::main]
async fn main() {
    println!("Starting server...");

    let mut app = App::<Request, Ctx, ()>::new_basic();

    app.get("/plaintext", m![plaintext]);
    app.get("/error", m![error]);

    let server = Server::new(app);
    server.build("0.0.0.0", 4321).await;
}

Testing

Thruster provides an easy test suite for your endpoints; simply include the testing module as below:

let mut app = App::<Request, Ctx, ()>::new_basic();

...

app.get("/plaintext", m![plaintext]);

...

let result = testing::get(app, "/plaintext");

assert!(result.body == "Hello, World!");

Make your own middleware modules

Middleware is super easy to make! Simply create a function and export it at a module level. Below, you'll see a piece of middleware that allows profiling of requests:

#[middleware_fn]
async fn profiling<C: 'static + Context + Send>(
    mut context: C,
    next: MiddlewareNext<C>,
) -> MiddlewareResult<C> {
    let start_time = Instant::now();

    context = next(context).await?;

    let elapsed_time = start_time.elapsed();
    info!("[{}μs] {}", elapsed_time.as_micros(), context.route());

    Ok(context)
}

You might find that you want to allow for more specific data stored on the context, for example, perhaps you want to be able to hydrate query parameters into a hashmap for later use by other middlewares. In order to do this, you can create an additional trait for the context that middlewares downstream must adhere to. Check out the provided query_params middleware for an example.
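As a rough illustration of that pattern, here is a minimal sketch of such a downstream trait. It is not the actual query_params middleware: the HasQueryParams trait, its set_query_params method, and the query-string parsing are assumptions made for the example.

use std::collections::HashMap;

use thruster::{middleware_fn, Context, MiddlewareNext, MiddlewareResult};

// Hypothetical extension trait: any context that wants query-param hydration
// must provide somewhere to store the parsed pairs.
pub trait HasQueryParams {
    fn set_query_params(&mut self, params: HashMap<String, String>);
}

#[middleware_fn]
async fn query_params<C: 'static + Context + HasQueryParams + Send>(
    mut context: C,
    next: MiddlewareNext<C>,
) -> MiddlewareResult<C> {
    // Everything after the '?' in the route is treated as the query string here.
    let route = context.route().to_owned();
    let query = route.splitn(2, '?').nth(1).unwrap_or("");

    let params = query
        .split('&')
        .filter(|pair| !pair.is_empty())
        .filter_map(|pair| {
            let mut parts = pair.splitn(2, '=');
            Some((parts.next()?.to_owned(), parts.next().unwrap_or("").to_owned()))
        })
        .collect::<HashMap<String, String>>();

    context.set_query_params(params);

    context = next(context).await?;

    Ok(context)
}

Middleware further down the chain can then require C: HasQueryParams and read the hydrated map, and contexts that don't implement the trait fail at compile time rather than at runtime.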

Other, or Custom Backends

Thruster is capable of providing just the routing layer on top of a server of some sort (a Hyper-based server, for example). This can be applied broadly to any backend, as long as the server implements ThrusterServer.

use async_trait::async_trait;

#[async_trait]
pub trait ThrusterServer {
    type Context: Context + Send;
    type Response: Send;
    type Request: RequestWithParams + Send;

    fn new(app: App<Self::Request, Self::Context>) -> Self;
    async fn build(self, host: &str, port: u16);
}

There needs to be:

  • An easy way to create a server.
  • A function to build the server into a future that could be loaded into an async runtime.

Within the build function, the server implementation should:

  • Start up some sort of listener for connections
  • Call let matched = app.resolve_from_method_and_path(<some method>, <some path>); (This is providing the actual routing.)
  • Call app.resolve(<incoming request>, matched) (This runs the chained middleware.)
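To make those steps concrete, here is a hedged sketch of a plain-TCP, hyper-backed ThrusterServer. It closely mirrors the UdsHyperServer code quoted in the issues further down this page (including that example's older two-type-parameter App signature); the SketchHyperServer name and the TCP binding details are assumptions for illustration, not part of Thruster itself.

use std::sync::Arc;

use async_trait::async_trait;
use hyper::{
    service::{make_service_fn, service_fn},
    Body, Request, Response, Server,
};
use thruster::{context::basic_hyper_context::HyperRequest, App, Context, ThrusterServer};

// Hypothetical backend type for this sketch; not a type shipped by Thruster.
pub struct SketchHyperServer<T: 'static + Context + Send> {
    app: App<HyperRequest, T>,
}

#[async_trait]
impl<T: 'static + Context<Response = Response<Body>> + Send> ThrusterServer for SketchHyperServer<T> {
    type Context = T;
    type Response = Response<Body>;
    type Request = HyperRequest;

    fn new(app: App<Self::Request, T>) -> Self {
        SketchHyperServer { app }
    }

    async fn build(mut self, host: &str, port: u16) {
        // Pre-compute the route tree before accepting connections.
        self.app._route_parser.optimize();
        let arc_app = Arc::new(self.app);

        let service = make_service_fn(move |_| {
            let app = arc_app.clone();
            async {
                Ok::<_, hyper::Error>(service_fn(move |req: Request<Body>| {
                    // 1. Routing: find the middleware chain for this method and path.
                    let matched = app.resolve_from_method_and_path(
                        &req.method().to_string(),
                        &req.uri().to_string(),
                    );

                    // 2. Run the chained middleware over the incoming request.
                    app.resolve(HyperRequest::new(req), matched)
                }))
            }
        });

        let addr: std::net::SocketAddr = format!("{}:{}", host, port)
            .parse()
            .expect("invalid host/port");

        Server::bind(&addr).serve(service).await.expect("hyper server failed");
    }
}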

Why you should use Thruster

  • Change your backends at will. Out of the box, Thruster can now be used over: actix-web, hyper, or a custom backend
  • Thruster supports testing from the framework level
  • @trezm gets lonely when no one makes PRs or opens issues.
  • Thruster is more succinct for more middleware-centric concepts -- like a route guard. Take this example in actix to restrict IPs:
fn ip_guard(head: &RequestHead) -> bool {
    // Check for the cloudflare IP header
    let ip = if let Some(val) = head.headers().get(CF_IP_HEADER) {
        val.to_str().unwrap_or("").to_owned()
    } else if let Some(val) = head.peer_addr {
        val.to_string()
    } else {
        return false;
    };

    "1.2.3.4".contains(&ip)
}

#[actix_web::post("/ping")]
async fn ping() -> Result<HttpResponse, UserPersonalError> {
    Ok(HttpResponse::Ok().body("pong"))
}

...
        web::scope("/*")
            // This is confusing, but we catch all routes that _aren't_
            // ip guarded and return an error.
            .guard(guard::Not(ip_guard))
            .route("/*", web::to(HttpResponse::Forbidden)),
    )
    .service(ping);
...

Here is Thruster:

#[middleware_fn]
async fn ip_guard(mut context: Ctx, next: MiddlewareNext<Ctx>) -> MiddlewareResult<Ctx> {
    if "1.2.3.4".contains(&context.headers().get("Auth-Token").unwrap_or("")) {
        context = next(context).await?;

        Ok(context)
    } else {
        Err(Error::unauthorized_error(context))
    }

}

#[middleware_fn]
async fn ping(mut context: Ctx, _next: MiddlewareNext<Ctx>) -> MiddlewareResult<Ctx> {
    context.body("pong");
    Ok(context)
}

...
    app.get("/ping", m![ip_guard, plaintext]);
...

A bit more direct is nice!

Why you shouldn't use Thruster

  • It's got few maintainers (pretty much just one.)
  • There are other projects that have been far more battle tested. Thruster is in use in production, but nowhere that you'd know or that matters.
  • It hasn't been optimized by wicked smarties. @trezm tries his best, but keeps getting distracted by his dog(s).
  • Seriously, this framework could be great, but it definitely hasn't been poked and prodded like others. Your help could go a long way to making it more secure and robust, but we might not be there just yet.

If you got this far, thanks for reading! Always feel free to reach out.

thruster's People

Contributors

agersant, allevo, cquintana-verbio, dovrine, justinas, kjvalencik, nihaals, ohsayan, pohl, rakshith-ravi, saiumesh535, theredfish, trezm, upsuper, vorot93, whitfin, will-weiss, xacrimon, ynuwenhof

thruster's Issues

[Question] Server shutdown

I'm new to Rust and Thruster, coming from a Node.js background. I would like to know if there is a way to gracefully shut down a Thruster server and do some cleanup before the process ends.

In Node, I can listen for a SIGTERM and prepare for the shutdown like this:

process.on('SIGTERM', () => {
    log.warn('SIGTERM received. Stopping server.');
    myServices.stopAll();
    server.close();
});

Is there a way to do something similar with Thruster?

Thanks!
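A rough approximation is possible with plain Tokio primitives rather than a Thruster-specific API. Below is a minimal sketch, assuming the Tokio-based setup from the README examples above; the cleanup call is a hypothetical placeholder, and listening for SIGTERM specifically would use tokio::signal::unix::signal instead of ctrl_c.

use thruster::{App, BasicContext as Ctx, Request};
use thruster::{Server, ThrusterServer};

#[tokio::main]
async fn main() {
    let app = App::<Request, Ctx, ()>::new_basic();
    // Route registrations omitted for brevity.

    let server = Server::new(app);

    tokio::select! {
        // Drive the server until it exits on its own...
        _ = server.build("0.0.0.0", 4321) => {},
        // ...or until Ctrl-C / SIGINT arrives.
        _ = tokio::signal::ctrl_c() => {
            eprintln!("Signal received. Stopping server.");
        }
    }

    // Hypothetical cleanup hook, analogous to `myServices.stopAll()`:
    // my_services.stop_all();
}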

Tokio has fallen

RE: tokio-rs/tokio#1087 (comment)

For the time being, for await support, you must depend on tokio via GitHub.

    Checking tokio-async-await v0.1.7
error[E0432]: unresolved import `std::await`
  --> /Users/ckarper/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-async-await-0.1.7/src/lib.rs:35:9
   |
35 | pub use std::await as std_await;
   |         ^^^^^^^^^^^^^^^^^^^^^^^ no `await` in the root

middleware error compilation

I have an error when I test basic middleware:

16 |   let ctx_future = chain.next(context)
   |       ^^^^^^^^^^ `futures::Future<Error=std::io::Error, Item=thruster::BasicContext> + std::marker::Send` does not have a constant size known at compile-time
   |
   = help: the trait `std::marker::Sized` is not implemented for `futures::Future<Error=std::io::Error, Item=thruster::BasicContext> + std::marker::Send`
   = note: all local variables must have a statically known size

most_basic.rs

extern crate thruster;
extern crate futures;

use std::boxed::Box;
use futures::future;

use thruster::{App, BasicContext as Ctx, MiddlewareChain, MiddlewareReturnValue};

fn index(mut context: Ctx, _chain: &MiddlewareChain<Ctx>) -> MiddlewareReturnValue<Ctx> {
  context.body = "Hello, Index!".to_owned();;
  Box::new(future::ok(context))
}

fn profiling(mut context: Ctx, _chain: &MiddlewareChain<Ctx>) -> MiddlewareReturnValue<Ctx> {
  println!("{}", "before");
  let ctx_future = _chain.next(context)
      .and_then(move |ctx| {
        println!("{}", "after");
        future::ok(ctx)
      });
  Box::new(ctx_future)
}

fn main() {
  println!("Starting server...");

  let mut app = App::<Ctx>::new();

  app.use_middleware("/", profiling);
  app.get("/", vec![index]);

  App::start(app, "0.0.0.0", 4321);
}

Ami44

file content example

Thank you for the BasicContext update (and cookies), very nice.
I'm looking for a minimal middleware example that returns file content (an image, a favicon, ...), an async example with tokio rather than a sync one.
thanks
ami44

test question

Hello

  • How do I test a status code?
  • How do I test all headers (or a single header)?

thanks
ami44

Run and fix Clippy issues

We should be using best practices in this repository. So:

  • Add clippy to our Travis build
  • Fix clippy issues locally

F5

Hello

  • rustc --version -> rustc 1.28.0 (9634041f0 2018-07-30)
  • cargo run --example most_basic
  • open http://127.0.0.1:4321/plaintext
  • "Hello, World !" - Firefox or Chrome - :-)
  • F5 .... "lost connection/ERR_EMPTY_RESPONSE" - Firefox or Chrome - :-(

Ami44

HTTP response status codes

What do you think about using the status codes from the http crate or implementing something similar? I think a defined status code type is way more ergonomic to use and less error prone for a developer than needing to enter a number or string of a status code.
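For reference, an application can already lean on the http crate and convert at the boundary. A minimal sketch, assuming the http crate as a dependency and the numeric status method shown in the README examples above:

use http::StatusCode;
use thruster::{middleware_fn, BasicContext as Ctx, MiddlewareNext, MiddlewareResult};

#[middleware_fn]
async fn four_oh_four(mut context: Ctx, _next: MiddlewareNext<Ctx>) -> MiddlewareResult<Ctx> {
    // Use the http crate's typed constant, then hand Thruster the numeric code.
    context.status(StatusCode::NOT_FOUND.as_u16().into());
    context.body("Whoops! That route doesn't exist!");
    Ok(context)
}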

Add gRPC support

Thruster should be able to have gRPC support, like tonic. This issue will be updated with more details as they evolve.

Example doesn't work

I am getting the following error:

error[E0277]: the trait bound `futures::future::FutureResult<thruster::BasicContext, _>: futures::future::Future` is not satisfied
  --> src/main.rs:21:3
   |
21 |   Box::new(future::ok(context))
   |   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ the trait `futures::future::Future` is not implemented for `futures::future::FutureResult<thruster::BasicContext, _>`
   |
   = note: required for the cast to the object type `futures::future::Future<Item=thruster::BasicContext, Error=std::io::Error> + std::marker::Send`

Also, it seems that there is no need to list serde, serde_json, and tokio explicitly in the example project.

Rust version: 1.27 and Nightly (2018-06-19).

P.S. I had to comment out #![feature(test)] in lib.rs to get it compiling to this state on stable Rust.

Proposal: Using traits instead of static fn for middleware chains

RFC:

I'm proposing moving middleware chains to using traits rather than static fns. It'll make it significantly easier to add objects to chains, rather than creating a new function for each chain item. Moreover, traits can respond dynamically rather than using a static function whose definition can't be changed.

This is similar to how Nickel.rs does it. You can take a look at that here:
https://github.com/nickel-org/nickel.rs/blob/master/src/middleware.rs

Ability to return futures

Suppose that, when I receive a request on my Fanta endpoint, I want to make an HTTP request and return a context which depends on its result.

I'd think I need to create a Hyper Client with a tokio Handle and then return the future request (for Fanta to make sure the future is run and the response from the future used).

Is this already possible?

Windows

On Windows:

  • cargo run --example most_basic -> error[E0432]: unresolved import `net2::unix`

Is Thruster Linux-only?

Ami44

Share state using generate context

Hi!
I would like to share state between requests by storing it in the request context.

The proposal is to create a new trait that implements a method generate

pub trait ContextGenerator<R, T> {
    fn generate(req: R) -> T;
}

The change is about

HTML templating?

Will you be adding support for server side HTML generation?

For a full-stack web framework, the following would be required:

  • page generation
  • layouts
  • asset management, i.e. webpack (for CSS, JS and images)

Add a wildcard route that matches all routes

Hello,

Currently, the set404 function sets the middleware to use when no route is successfully matched. This function could be renamed to be less specific, since it's the developer who defines the logic.

My proposal:

  • Define all routes with a specific match
  • Define a default behavior for routes that do not match: forward, 404, 403, ...

We could rename set404 to something like set_default_behavior / set_default_middleware... or whatever name you want; the discussion is open :)

Have you considered dropping the "e"?

This repo would get way more traction with a name like "Thrustr". You could even capitalize the "r" as a nod to Rust. Something like "ThrustR"?

I'll submit a PR.

`*` routes do not propagate downward

Setting a wildcard route doesn't actually propagate down; in other words, curling /test/a/b/2 causes an exception.

fn main() {
    let host = "0.0.0.0";
    let port = 8080;

    println!("Starting server, accessible from : http://{}:{}", host, port);

    let mut app = App::create(generate_context);
    app.use_middleware("/", profiling);
    app.get("/*", vec![not_found]);

    app.get("/test/a/b", vec![test1]);
    app.get("/test/a/c", vec![test1]);

    App::start(app, host, port);
}

Allow streams for large request processing

This issue is for an investigation and a subsequent implementation. Important questions to answer will be:

  • How does streaming look in terms of the existing API? Is a data stream something we can add on to a request while keeping the rest of the request intact?
  • How does handling a stream in a middleware function look? My first inclination is to say that it would look something like
request.body_stream() // do something with said stream

but I am unclear how that would play with the existing body field.

Consider offloading types to the HTTP crate

I'm curious if you've thought about moving some of your types to the http crate. This would reduce the amount of code you're having to maintain, and also be more familiar to people coming from other frameworks (Hyper, etc).

No bother if you can't/don't want to, just thought it might be worth a suggestion! The reason I bring it up now is that it'd be easier now than later to migrate :p

Route parameters with async await?

Currently, when I try to use route parameters, I get a panic with the message Chain out of cycle from the thruster-core-async-await crate (line 39).

too basic context

As a new user, I find it constraining to have to write my own "context.rs" each time I start a new project; it's not intuitive.

Some basic methods should be provided by default in BasicContext to set headers (add, delete) and to set the status code (e.g. 404):

  • add_header("headerKey", "value")
  • set_status_code(418)

Thanks
Ami44

how exec code after thruster::App::start

Hello

  • How do I execute println!("{}", "do some others actions after start") (or call a function) after thruster::App::start(app, host, port)?
  • How do I catch Thruster stopping (and print a message)?
  • How do I catch a Thruster error (and print the error message)?

main.rs

....
fn main() {
 ...
  println!("Starting server {}://{}:{}", &protocol, &host, &port); // ok display
  thruster::App::start(app, host, port);
  println!("{}", "do some others actions after start"); // <== never display 
 

Thanks
Ami44

Using unix domain socket

Hello, is there any way to use thruster with Unix domain sockets?
If there is no way to do that now, I wonder if it is possible to add a method .build_from_incoming which uses hyper::Server::builder instead of hyper::Server::bind to create the underlying hyper::Server,

or maybe just a new type of server, thruster::UdsHyperServer, which implements the ThrusterServer trait but ignores the port argument, like this:

use thruster::{
    context::basic_hyper_context::{
        generate_context, BasicHyperContext as Ctx, HyperRequest,
    },
    async_middleware, middleware_fn,
    App, Context, ThrusterServer,
    MiddlewareNext, MiddlewareResult,
};

use hyper::{
    service::{make_service_fn, service_fn},
    Body, Request, Response, Server,
    server::accept,
};

use std::sync::Arc;
use async_trait::async_trait;
use tokio::net::UnixListener;

pub struct UdsHyperServer<T: 'static + Context + Send> {
    app: App<HyperRequest, T>,
}

impl<T: 'static + Context + Send> UdsHyperServer<T> { }

#[async_trait]
impl<T: Context<Response = Response<Body>> + Send> ThrusterServer for UdsHyperServer<T> {
    type Context = T;
    type Response = Response<Body>;
    type Request = HyperRequest;

    fn new(app: App<Self::Request, T>) -> Self {
        UdsHyperServer { app }
    }

    async fn build(mut self, path: &str, _port: u16) {
        self.app._route_parser.optimize();

        let arc_app = Arc::new(self.app);

        async move {
            let service = make_service_fn(|_| {
                let app = arc_app.clone();
                async {
                    Ok::<_, hyper::Error>(service_fn(move |req: Request<Body>| {
                        let matched = app.resolve_from_method_and_path(
                            &req.method().to_string(),
                            &req.uri().to_string(),
                        );

                        let req = HyperRequest::new(req);
                        app.resolve(req, matched)
                    }))
                }
            });

            let mut listener = UnixListener::bind(path).unwrap();
            let incoming = listener.incoming();
            let incoming = accept::from_stream(incoming);

            let server = Server::builder(incoming).serve(service);

            server.await?;

            Ok::<_, hyper::Error>(())
        }
        .await
        .expect("hyper server failed");
    }
}

#[middleware_fn]
async fn plaintext(mut context: Ctx, _next: MiddlewareNext<Ctx>) -> MiddlewareResult<Ctx> {
    let val = "Hello, World!";
    context.body(val);
    Ok(context)
}

fn main() {
    println!("Starting server...");

    let mut app = App::<HyperRequest, Ctx>::create(generate_context);

    app.get("/plaintext", async_middleware!(Ctx, [plaintext]));

    let server = UdsHyperServer::new(app);
    server.start("/tmp/thruster.sock", 4321);

    // test the server with the following command:
    // curl --unix-socket /tmp/thruster.sock http://host/plaintext
}

README mistake

In the basic example of the README the endpoint declaration is currently:

app.get("/plaintext", middleware![plaintext]);

But the correct way seems to be:

app.get("/plaintext", middleware![Ctx => plaintext]);

Might want to adjust it; it threw me off at first and might do the same to others.

Create a testing harness

We should have a testing harness akin to supertest in nodejs. That is, calling the harness would look something like:

use thruster::test;
use super::my_app::{Context, init};
...
  let app: App<Context> = init();
  let test_app = test::wrap(app);
  
  let result = test_app.get("test/route");

  assert!(result == "Hello, world!");

It might make sense to automatically wrap the response in an object as well?

Formalize route matching algorithm

Given our tree structure, which I believe is fairly comprehensive, as it hasn't changed for a long time, I'd like to better formalize the algorithm for matching routes. Right now the matching code has been taped up many times and is smelling pretty bad. With a more formal algorithm we can drastically clean it up.

Improve Thruster Server struct performance

Would love to get the home-grown implementation of an http encoder/decoder more in line with hyper's perf.

For perf-focused users, you can now easily use Hyper as the backend, but it would be nice to be on a level playing field in the future.

A few usage questions

Hello and thanks for working on Thruster!
I have been experimenting with various Rust web frameworks over the past few days to potentially replace my usage of Rocket. So far Thruster has been one of the more promising candidates. I love the simplicity of the API and how easy it is to create, manipulate and pass around the App type.

I have a few questions on how to use the framework correctly:

  • Is the best/only way to serve static files to write a middleware which loads the entire file in memory and calls context.body(entire_file_content)? This seems very inefficient if that is the case :(
  • Also on the subject of serving static files, how would you recommend matching the trailing portion of a route? For example, let's say I want all URLs starting with /swagger to serve the corresponding files in a ./docs directory on disk. For instance, hitting /swagger/img/logo.png should serve /docs/img/logo.png
  • When using App::create, it would be very useful if it was possible to pass in a closure as the generate_context argument. As it stands, I am not sure how to shove any data that is determined during application startup into the context objects. EDIT: I guess this is the answer: #130

Many thanks in advance!

[Question] Group routes

Hi there,

I'm trying Thruster (good job !) and I would like to group my routes with the same base path. Example :

  • /admin
  • /admin/:adminId/posts

Middlewares (with a naive approach) :

  • _app.get("/admin", vec![show_admins]);
  • _app.get("/admin/:adminId/posts", vec![show_admin_posts]);

How do you associate a different middleware with /admin/:adminId/posts? Currently, this route matches the middleware of /admin/ first (which returns a JSON response, for example).

thank you!

index route

app.get("/", vec![index]); not recognize !

  • 127.0.0.1:4321/plaintext: ok
  • 127.0.0.1:4321/ (or 127.0.0.1:4321): ko, always displays the 404 page

How do I catch the index page?
Ami44

most_basic.rs:

extern crate thruster;
extern crate futures;

use std::boxed::Box;
use futures::future;

use thruster::{App, BasicContext as Ctx, MiddlewareChain, MiddlewareReturnValue};

fn index(mut context: Ctx, _chain: &MiddlewareChain<Ctx>) -> MiddlewareReturnValue<Ctx> {
  context.body = "Hello, Index!".to_owned();;
  Box::new(future::ok(context))
}
fn plaintext(mut context: Ctx, _chain: &MiddlewareChain<Ctx>) -> MiddlewareReturnValue<Ctx> {
  context.body = "Hello, Plaintext !".to_owned();;
  Box::new(future::ok(context))
}

fn page404(mut context: Ctx, _chain: &MiddlewareChain<Ctx>) -> MiddlewareReturnValue<Ctx> {
  context.body = "Hello, 404 !".to_owned();;
  Box::new(future::ok(context))
}

fn main() {
  println!("Starting server...");

  let mut app = App::<Ctx>::new();

  app.get("/", vec![index]);
  app.get("/plaintext", vec![plaintext]);
  app.get("/*", vec![page404]);

  App::start(app, "0.0.0.0", 4321);
}

[Feature] Create an example for handling sockets and socket messages (Websockets)

While using the hyper server, this should already be possible. This is the tracking issue to make that support first class, along with an example and short guide.

Ideally the upgrade for the socket will be handled via a single middleware function, but we should consider the following:

  • Should each socket be stored in a static map somewhere for reference later?
  • Should we also have first class support for socket.io?

Allow use of different contexts in sub apps

I was wondering why it isn't possible to use a different type of context in the sub apps. I haven't done much work with this library yet, but this seems to hinder usability, doesn't it? I can imagine a scenario where I'd have an /api endpoint that automatically parses all requests as JSON (and stores the data on the context) while the other endpoints would treat the requests differently.

Application State

Are there plans to allow storing application state in some form? Like an R2D2 connection manager that can be used within routes to retrieve a database session?

thruster template

I have created a little thruster template: https://github.com/ami44/thruster-basic-template with tests, coverage and livereload.

Can you test whether it is OK, and fork it if you need to.

I'm going to be out of town from now on. I hope to integrate changes once you have solved other problems (but no guarantee).

Ami44

Can't override Server header

I tried to set a Server header, but it was getting doubled with the static header text in the Response. Having Thruster as the default is a great idea, but if it could go through the regular set method on the response, that would allow a program to remove it when desired.

Standard approach in my microservices is (or at least what I expected to work):

#[middleware_fn]
async fn server(mut context: Ctx, next: MiddlewareNext<Ctx>) -> Ctx {
	context = await!(next(context));
	context.set("Server", &format!("{} v{}", PKG_NAME, PKG_VERSION));
	context
}

Reference gitter channel in README

You really should reference the gitter channel somewhere in the README and maybe even in the docs. I accidentally saw the channel mentioned in a closed issue; otherwise I wouldn't have known that it exists. So make it more visible, so that people join ;)
