
FastHTTP – fast and reliable HTTP implementation in Go

Fast HTTP implementation for Go.

fasthttp might not be for you!

fasthttp was designed for some high performance edge cases. Unless your server/client needs to handle thousands of small to medium requests per second and needs consistently low millisecond response times, fasthttp might not be for you. For most cases net/http is much better: it's easier to use and can handle more cases. In most cases you won't even notice the performance difference.

General info and links

Currently fasthttp is successfully used by VertaMedia in a production serving up to 200K rps from more than 1.5M concurrent keep-alive connections per physical server.

TechEmpower Benchmark round 19 results

Server Benchmarks

Client Benchmarks

Install

Documentation

Examples from docs

Code examples

Awesome fasthttp tools

Switching from net/http to fasthttp

Fasthttp best practices

Tricks with byte buffers

Related projects

FAQ

HTTP server performance comparison with net/http

In short, fasthttp server is up to 10 times faster than net/http. Below are benchmark results.

GOMAXPROCS=1

net/http server:

$ GOMAXPROCS=1 go test -bench=NetHTTPServerGet -benchmem -benchtime=10s
BenchmarkNetHTTPServerGet1ReqPerConn                	 1000000	     12052 ns/op	    2297 B/op	      29 allocs/op
BenchmarkNetHTTPServerGet2ReqPerConn                	 1000000	     12278 ns/op	    2327 B/op	      24 allocs/op
BenchmarkNetHTTPServerGet10ReqPerConn               	 2000000	      8903 ns/op	    2112 B/op	      19 allocs/op
BenchmarkNetHTTPServerGet10KReqPerConn              	 2000000	      8451 ns/op	    2058 B/op	      18 allocs/op
BenchmarkNetHTTPServerGet1ReqPerConn10KClients      	  500000	     26733 ns/op	    3229 B/op	      29 allocs/op
BenchmarkNetHTTPServerGet2ReqPerConn10KClients      	 1000000	     23351 ns/op	    3211 B/op	      24 allocs/op
BenchmarkNetHTTPServerGet10ReqPerConn10KClients     	 1000000	     13390 ns/op	    2483 B/op	      19 allocs/op
BenchmarkNetHTTPServerGet100ReqPerConn10KClients    	 1000000	     13484 ns/op	    2171 B/op	      18 allocs/op

fasthttp server:

$ GOMAXPROCS=1 go test -bench=kServerGet -benchmem -benchtime=10s
BenchmarkServerGet1ReqPerConn                       	10000000	      1559 ns/op	       0 B/op	       0 allocs/op
BenchmarkServerGet2ReqPerConn                       	10000000	      1248 ns/op	       0 B/op	       0 allocs/op
BenchmarkServerGet10ReqPerConn                      	20000000	       797 ns/op	       0 B/op	       0 allocs/op
BenchmarkServerGet10KReqPerConn                     	20000000	       716 ns/op	       0 B/op	       0 allocs/op
BenchmarkServerGet1ReqPerConn10KClients             	10000000	      1974 ns/op	       0 B/op	       0 allocs/op
BenchmarkServerGet2ReqPerConn10KClients             	10000000	      1352 ns/op	       0 B/op	       0 allocs/op
BenchmarkServerGet10ReqPerConn10KClients            	20000000	       789 ns/op	       2 B/op	       0 allocs/op
BenchmarkServerGet100ReqPerConn10KClients           	20000000	       604 ns/op	       0 B/op	       0 allocs/op

GOMAXPROCS=4

net/http server:

$ GOMAXPROCS=4 go test -bench=NetHTTPServerGet -benchmem -benchtime=10s
BenchmarkNetHTTPServerGet1ReqPerConn-4                  	 3000000	      4529 ns/op	    2389 B/op	      29 allocs/op
BenchmarkNetHTTPServerGet2ReqPerConn-4                  	 5000000	      3896 ns/op	    2418 B/op	      24 allocs/op
BenchmarkNetHTTPServerGet10ReqPerConn-4                 	 5000000	      3145 ns/op	    2160 B/op	      19 allocs/op
BenchmarkNetHTTPServerGet10KReqPerConn-4                	 5000000	      3054 ns/op	    2065 B/op	      18 allocs/op
BenchmarkNetHTTPServerGet1ReqPerConn10KClients-4        	 1000000	     10321 ns/op	    3710 B/op	      30 allocs/op
BenchmarkNetHTTPServerGet2ReqPerConn10KClients-4        	 2000000	      7556 ns/op	    3296 B/op	      24 allocs/op
BenchmarkNetHTTPServerGet10ReqPerConn10KClients-4       	 5000000	      3905 ns/op	    2349 B/op	      19 allocs/op
BenchmarkNetHTTPServerGet100ReqPerConn10KClients-4      	 5000000	      3435 ns/op	    2130 B/op	      18 allocs/op

fasthttp server:

$ GOMAXPROCS=4 go test -bench=kServerGet -benchmem -benchtime=10s
BenchmarkServerGet1ReqPerConn-4                         	10000000	      1141 ns/op	       0 B/op	       0 allocs/op
BenchmarkServerGet2ReqPerConn-4                         	20000000	       707 ns/op	       0 B/op	       0 allocs/op
BenchmarkServerGet10ReqPerConn-4                        	30000000	       341 ns/op	       0 B/op	       0 allocs/op
BenchmarkServerGet10KReqPerConn-4                       	50000000	       310 ns/op	       0 B/op	       0 allocs/op
BenchmarkServerGet1ReqPerConn10KClients-4               	10000000	      1119 ns/op	       0 B/op	       0 allocs/op
BenchmarkServerGet2ReqPerConn10KClients-4               	20000000	       644 ns/op	       0 B/op	       0 allocs/op
BenchmarkServerGet10ReqPerConn10KClients-4              	30000000	       346 ns/op	       0 B/op	       0 allocs/op
BenchmarkServerGet100ReqPerConn10KClients-4             	50000000	       282 ns/op	       0 B/op	       0 allocs/op

HTTP client comparison with net/http

In short, fasthttp client is up to 10 times faster than net/http. Below are benchmark results.

GOMAXPROCS=1

net/http client:

$ GOMAXPROCS=1 go test -bench='HTTPClient(Do|GetEndToEnd)' -benchmem -benchtime=10s
BenchmarkNetHTTPClientDoFastServer                  	 1000000	     12567 ns/op	    2616 B/op	      35 allocs/op
BenchmarkNetHTTPClientGetEndToEnd1TCP               	  200000	     67030 ns/op	    5028 B/op	      56 allocs/op
BenchmarkNetHTTPClientGetEndToEnd10TCP              	  300000	     51098 ns/op	    5031 B/op	      56 allocs/op
BenchmarkNetHTTPClientGetEndToEnd100TCP             	  300000	     45096 ns/op	    5026 B/op	      55 allocs/op
BenchmarkNetHTTPClientGetEndToEnd1Inmemory          	  500000	     24779 ns/op	    5035 B/op	      57 allocs/op
BenchmarkNetHTTPClientGetEndToEnd10Inmemory         	 1000000	     26425 ns/op	    5035 B/op	      57 allocs/op
BenchmarkNetHTTPClientGetEndToEnd100Inmemory        	  500000	     28515 ns/op	    5045 B/op	      57 allocs/op
BenchmarkNetHTTPClientGetEndToEnd1000Inmemory       	  500000	     39511 ns/op	    5096 B/op	      56 allocs/op

fasthttp client:

$ GOMAXPROCS=1 go test -bench='kClient(Do|GetEndToEnd)' -benchmem -benchtime=10s
BenchmarkClientDoFastServer                         	20000000	       865 ns/op	       0 B/op	       0 allocs/op
BenchmarkClientGetEndToEnd1TCP                      	 1000000	     18711 ns/op	       0 B/op	       0 allocs/op
BenchmarkClientGetEndToEnd10TCP                     	 1000000	     14664 ns/op	       0 B/op	       0 allocs/op
BenchmarkClientGetEndToEnd100TCP                    	 1000000	     14043 ns/op	       1 B/op	       0 allocs/op
BenchmarkClientGetEndToEnd1Inmemory                 	 5000000	      3965 ns/op	       0 B/op	       0 allocs/op
BenchmarkClientGetEndToEnd10Inmemory                	 3000000	      4060 ns/op	       0 B/op	       0 allocs/op
BenchmarkClientGetEndToEnd100Inmemory               	 5000000	      3396 ns/op	       0 B/op	       0 allocs/op
BenchmarkClientGetEndToEnd1000Inmemory              	 5000000	      3306 ns/op	       2 B/op	       0 allocs/op

GOMAXPROCS=4

net/http client:

$ GOMAXPROCS=4 go test -bench='HTTPClient(Do|GetEndToEnd)' -benchmem -benchtime=10s
BenchmarkNetHTTPClientDoFastServer-4                    	 2000000	      8774 ns/op	    2619 B/op	      35 allocs/op
BenchmarkNetHTTPClientGetEndToEnd1TCP-4                 	  500000	     22951 ns/op	    5047 B/op	      56 allocs/op
BenchmarkNetHTTPClientGetEndToEnd10TCP-4                	 1000000	     19182 ns/op	    5037 B/op	      55 allocs/op
BenchmarkNetHTTPClientGetEndToEnd100TCP-4               	 1000000	     16535 ns/op	    5031 B/op	      55 allocs/op
BenchmarkNetHTTPClientGetEndToEnd1Inmemory-4            	 1000000	     14495 ns/op	    5038 B/op	      56 allocs/op
BenchmarkNetHTTPClientGetEndToEnd10Inmemory-4           	 1000000	     10237 ns/op	    5034 B/op	      56 allocs/op
BenchmarkNetHTTPClientGetEndToEnd100Inmemory-4          	 1000000	     10125 ns/op	    5045 B/op	      56 allocs/op
BenchmarkNetHTTPClientGetEndToEnd1000Inmemory-4         	 1000000	     11132 ns/op	    5136 B/op	      56 allocs/op

fasthttp client:

$ GOMAXPROCS=4 go test -bench='kClient(Do|GetEndToEnd)' -benchmem -benchtime=10s
BenchmarkClientDoFastServer-4                           	50000000	       397 ns/op	       0 B/op	       0 allocs/op
BenchmarkClientGetEndToEnd1TCP-4                        	 2000000	      7388 ns/op	       0 B/op	       0 allocs/op
BenchmarkClientGetEndToEnd10TCP-4                       	 2000000	      6689 ns/op	       0 B/op	       0 allocs/op
BenchmarkClientGetEndToEnd100TCP-4                      	 3000000	      4927 ns/op	       1 B/op	       0 allocs/op
BenchmarkClientGetEndToEnd1Inmemory-4                   	10000000	      1604 ns/op	       0 B/op	       0 allocs/op
BenchmarkClientGetEndToEnd10Inmemory-4                  	10000000	      1458 ns/op	       0 B/op	       0 allocs/op
BenchmarkClientGetEndToEnd100Inmemory-4                 	10000000	      1329 ns/op	       0 B/op	       0 allocs/op
BenchmarkClientGetEndToEnd1000Inmemory-4                	10000000	      1316 ns/op	       5 B/op	       0 allocs/op

Install

go get -u github.com/valyala/fasthttp

Switching from net/http to fasthttp

Unfortunately, fasthttp doesn't provide an API identical to net/http. See the FAQ for details. There is a net/http -> fasthttp handler converter, but it is better to write fasthttp request handlers by hand in order to use all of the fasthttp advantages (especially high performance :) ).

Important points:

  • Fasthttp works with RequestHandler functions instead of objects implementing the Handler interface. Fortunately, it is easy to pass bound struct methods to fasthttp:

    type MyHandler struct {
    	foobar string
    }
    
    // request handler in net/http style, i.e. method bound to MyHandler struct.
    func (h *MyHandler) HandleFastHTTP(ctx *fasthttp.RequestCtx) {
    	// notice that we may access MyHandler properties here - see h.foobar.
    	fmt.Fprintf(ctx, "Hello, world! Requested path is %q. Foobar is %q",
    		ctx.Path(), h.foobar)
    }
    
    // request handler in fasthttp style, i.e. just plain function.
    func fastHTTPHandler(ctx *fasthttp.RequestCtx) {
    	fmt.Fprintf(ctx, "Hi there! RequestURI is %q", ctx.RequestURI())
    }
    
    // pass bound struct method to fasthttp
    myHandler := &MyHandler{
    	foobar: "foobar",
    }
    fasthttp.ListenAndServe(":8080", myHandler.HandleFastHTTP)
    
    // pass plain function to fasthttp
    fasthttp.ListenAndServe(":8081", fastHTTPHandler)
  • The RequestHandler accepts only one argument - RequestCtx. It contains all the functionality required for HTTP request processing and response writing. Below is an example of a simple request handler conversion from net/http to fasthttp.

    // net/http request handler
    requestHandler := func(w http.ResponseWriter, r *http.Request) {
    	switch r.URL.Path {
    	case "/foo":
    		fooHandler(w, r)
    	case "/bar":
    		barHandler(w, r)
    	default:
    		http.Error(w, "Unsupported path", http.StatusNotFound)
    	}
    }
    // the corresponding fasthttp request handler
    requestHandler := func(ctx *fasthttp.RequestCtx) {
    	switch string(ctx.Path()) {
    	case "/foo":
    		fooHandler(ctx)
    	case "/bar":
    		barHandler(ctx)
    	default:
    		ctx.Error("Unsupported path", fasthttp.StatusNotFound)
    	}
    }
  • Fasthttp allows setting response headers and writing response body in an arbitrary order. There is no 'headers first, then body' restriction like in net/http. The following code is valid for fasthttp:

    requestHandler := func(ctx *fasthttp.RequestCtx) {
    	// set some headers and status code first
    	ctx.SetContentType("foo/bar")
    	ctx.SetStatusCode(fasthttp.StatusOK)
    
    	// then write the first part of body
    	fmt.Fprintf(ctx, "this is the first part of body\n")
    
    	// then set more headers
    	ctx.Response.Header.Set("Foo-Bar", "baz")
    
    	// then write more body
    	fmt.Fprintf(ctx, "this is the second part of body\n")
    
    	// then override already written body
    	ctx.SetBody([]byte("this is completely new body contents"))
    
    	// then update status code
    	ctx.SetStatusCode(fasthttp.StatusNotFound)
    
    	// basically, anything may be updated many times before
    	// returning from RequestHandler.
    	//
    	// Unlike net/http fasthttp doesn't put response to the wire until
    	// returning from RequestHandler.
    }
  • Fasthttp doesn't provide ServeMux, but there are more powerful third-party routers and web frameworks with fasthttp support (see Related projects below).

    Net/http code with simple ServeMux is trivially converted to fasthttp code:

    // net/http code
    
    m := &http.ServeMux{}
    m.HandleFunc("/foo", fooHandlerFunc)
    m.HandleFunc("/bar", barHandlerFunc)
    m.Handle("/baz", bazHandler)
    
    http.ListenAndServe(":80", m)
    // the corresponding fasthttp code
    m := func(ctx *fasthttp.RequestCtx) {
    	switch string(ctx.Path()) {
    	case "/foo":
    		fooHandlerFunc(ctx)
    	case "/bar":
    		barHandlerFunc(ctx)
    	case "/baz":
    		bazHandler.HandlerFunc(ctx)
    	default:
    		ctx.Error("not found", fasthttp.StatusNotFound)
    	}
    }
    
    fasthttp.ListenAndServe(":80", m)
  • Because creating a new channel for every request is just too expensive, the channel returned by RequestCtx.Done() is only closed when the server is shutting down.

    func main() {
      fasthttp.ListenAndServe(":8080", fasthttp.TimeoutHandler(func(ctx *fasthttp.RequestCtx) {
      	select {
      	case <-ctx.Done():
      		// ctx.Done() is only closed when the server is shutting down.
      		log.Println("context cancelled")
      		return
      	case <-time.After(10 * time.Second):
      		log.Println("process finished ok")
      	}
      }, time.Second*2, "timeout"))
    }
  • net/http -> fasthttp conversion table:

    • All the pseudocode below assumes w, r and ctx have these types:
      var (
      	w http.ResponseWriter
      	r *http.Request
      	ctx *fasthttp.RequestCtx
      )
  • VERY IMPORTANT! Fasthttp disallows holding references to RequestCtx or to its members after returning from RequestHandler. Otherwise data races are inevitable. Carefully inspect all the net/http request handlers converted to fasthttp to check whether they retain references to RequestCtx or to its members after returning. RequestCtx provides the following band-aids for this case:

    • Wrap RequestHandler into TimeoutHandler.
    • Call TimeoutError before returning from RequestHandler if there are references to RequestCtx or to its members. See the example for more details.

Use this brilliant tool - race detector - for detecting and eliminating data races in your program. If you detect a data race related to fasthttp in your program, then there is a high probability you forgot to call TimeoutError before returning from RequestHandler.

Performance optimization tips for multi-core systems

  • Use reuseport listener.
  • Run a separate server instance per CPU core with GOMAXPROCS=1.
  • Pin each server instance to a separate CPU core using taskset.
  • Ensure the interrupts of multiqueue network card are evenly distributed between CPU cores. See this article for details.
  • Use the latest version of Go as each version contains performance improvements.

Fasthttp best practices

  • Do not allocate objects and []byte buffers - just reuse them as much as possible. Fasthttp API design encourages this.
  • sync.Pool is your best friend.
  • Profile your program in production. go tool pprof --alloc_objects your-program mem.pprof usually gives better insights for optimization opportunities than go tool pprof your-program cpu.pprof.
  • Write tests and benchmarks for hot paths.
  • Avoid conversion between []byte and string, since this may result in memory allocation+copy. Fasthttp API provides functions for both []byte and string - use these functions instead of converting manually between []byte and string. There are some exceptions - see this wiki page for more details.
  • Verify your tests and production code under race detector on a regular basis.
  • Prefer quicktemplate instead of html/template in your webserver.
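The sync.Pool advice above can be sketched as follows. This is an illustrative stand-alone example, not fasthttp code; the names (`bufPool`, `handle`) are hypothetical:

```go
package main

import (
	"fmt"
	"sync"
)

// bufPool recycles byte slices across requests so hot paths avoid
// a fresh allocation on every call.
var bufPool = sync.Pool{
	New: func() any { return make([]byte, 0, 1024) },
}

// handle builds a response string using a pooled buffer.
func handle(payload string) string {
	buf := bufPool.Get().([]byte)[:0] // reuse capacity, reset length
	buf = append(buf, "processed: "...)
	buf = append(buf, payload...)
	out := string(buf) // copy out before releasing the buffer
	bufPool.Put(buf)
	return out
}

func main() {
	fmt.Println(handle("hello")) // prints "processed: hello"
}
```

Note that the result is copied out with `string(buf)` before `Put`; returning a slice that is simultaneously handed back to the pool would be a data race waiting to happen.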

Tricks with []byte buffers

The following tricks are used by fasthttp. Use them in your code too.

  • Standard Go functions accept nil buffers
var (
	// both buffers are uninitialized
	dst []byte
	src []byte
)
dst = append(dst, src...)  // is legal if dst is nil and/or src is nil
copy(dst, src)  // is legal if dst is nil and/or src is nil
(string(src) == "")  // is true if src is nil
(len(src) == 0)  // is true if src is nil
src = src[:0]  // works like a charm with nil src

// this for loop doesn't panic if src is nil
for i, ch := range src {
	doSomething(i, ch)
}

So throw away nil checks for []byte buffers from your code. For example,

srcLen := 0
if src != nil {
	srcLen = len(src)
}

becomes

srcLen := len(src)
  • String may be appended to []byte buffer with append
dst = append(dst, "foobar"...)
  • []byte buffer may be extended to its capacity.
buf := make([]byte, 100)
a := buf[:10]  // len(a) == 10, cap(a) == 100.
b := a[:100]  // is valid, since cap(a) == 100.
  • All fasthttp functions accept nil []byte buffer
statusCode, body, err := fasthttp.Get(nil, "http://google.com/")
uintBuf := fasthttp.AppendUint(nil, 1234)
  • String and []byte buffers may be converted without memory allocations
func b2s(b []byte) string {
    return *(*string)(unsafe.Pointer(&b))
}

func s2b(s string) (b []byte) {
    bh := (*reflect.SliceHeader)(unsafe.Pointer(&b))
    sh := (*reflect.StringHeader)(unsafe.Pointer(&s))
    bh.Data = sh.Data
    bh.Cap = sh.Len
    bh.Len = sh.Len
    return b
}

Warning:

This is unsafe: the resulting string and the []byte buffer share the same bytes.

Please make sure not to modify the bytes in the []byte buffer if the string still survives!
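On Go 1.20+ the same zero-copy conversions can be written without the deprecated reflect.SliceHeader/StringHeader types, using the unsafe.String/unsafe.Slice helpers. A sketch (the helper names mirror the snippet above; the same aliasing caveats apply):

```go
package main

import (
	"fmt"
	"unsafe"
)

// b2s returns a string sharing b's backing array.
// The caller must not modify b while the string is alive.
func b2s(b []byte) string {
	return unsafe.String(unsafe.SliceData(b), len(b))
}

// s2b returns a []byte sharing s's backing array.
// The result must never be written to.
func s2b(s string) []byte {
	return unsafe.Slice(unsafe.StringData(s), len(s))
}

func main() {
	fmt.Println(b2s([]byte("hello")))
	fmt.Println(string(s2b("world")))
}
```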

Related projects

  • fasthttp - various useful helpers for projects based on fasthttp.
  • fasthttp-routing - fast and powerful routing package for fasthttp servers.
  • http2 - HTTP/2 implementation for fasthttp.
  • router - a high performance fasthttp request router that scales well.
  • fastws - Bloatless WebSocket package made for fasthttp to handle Read/Write operations concurrently.
  • gramework - a web framework made by one of the fasthttp maintainers.
  • lu - a high performance Go middleware web framework based on fasthttp.
  • websocket - Gorilla-based websocket implementation for fasthttp.
  • websocket - Event-based high-performance WebSocket library for zero-allocation websocket servers and clients.
  • fasthttpsession - a fast and powerful session package for fasthttp servers.
  • atreugo - High performance and extensible micro web framework with zero memory allocations in hot paths.
  • kratgo - Simple, lightweight and ultra-fast HTTP Cache to speed up your websites.
  • kit-plugins - go-kit transport implementation for fasthttp.
  • Fiber - An Expressjs inspired web framework running on Fasthttp
  • Gearbox - ⚙️ gearbox is a web framework written in Go with a focus on high performance and memory optimization
  • http2curl - A tool to convert fasthttp requests to curl command line

FAQ

  • Why create yet another HTTP package instead of optimizing net/http?

    Because net/http API limits many optimization opportunities. For example:

    • The net/http Request object's lifetime isn't limited by the request handler's execution time, so the server must create a new request object for each request instead of reusing existing objects like fasthttp does.
    • net/http headers are stored in a map[string][]string. So the server must parse all the headers, convert them from []byte to string and put them into the map before calling the user-provided request handler. This all requires unnecessary memory allocations which fasthttp avoids.
    • The net/http client API requires creating a new response object for each request.
  • Why is the fasthttp API incompatible with net/http?

    Because the net/http API limits many optimization opportunities. See the answer above for more details. Also, certain net/http API parts are suboptimal for use.

  • Why doesn't fasthttp support HTTP/2.0 and WebSockets?

    HTTP/2.0 support is in progress. WebSockets support has been done already. Third parties may also use RequestCtx.Hijack for implementing these goodies.

  • Are there known net/http advantages compared to fasthttp?

    Yes:

    • net/http supports HTTP/2.0 starting from go1.6.
    • net/http API is stable, while fasthttp API constantly evolves.
    • net/http handles more HTTP corner cases.
    • net/http can stream both request and response bodies.
    • net/http can handle bigger bodies as it doesn't read the whole body into memory.
    • net/http should contain fewer bugs, since it is used and tested by a much wider audience.
  • Why does the fasthttp API prefer returning []byte instead of string?

    Because []byte to string conversion isn't free - it requires memory allocation and copy. Feel free to wrap a returned []byte result into string() if you prefer working with strings instead of byte slices. But be aware that this has non-zero overhead.

  • Which Go versions are supported by fasthttp?

    Go 1.18.x. Older versions won't be supported.

  • Please provide real benchmark data and server information

    See this issue.

  • Are there plans to add request routing to fasthttp?

    There are no plans to add request routing into fasthttp. Use third-party routers and web frameworks with fasthttp support:

    See also this issue for more info.

  • I detected a data race in fasthttp!

    Cool! File a bug. But before doing this, check the following in your code:

  • I didn't find an answer to my question here

    Try exploring these questions.

fasthttp's Issues

Prefork w/reuse port is NOT really faster than multi-threaded

Hello, I am the author of a dumb & simple WebFrameworkBenchmark. It isn't as complete as TechEmpower's, but the purpose is just to test the framework (router) overhead.

According to your Performance optimization tips for multi-core systems using pre-fork with SO_REUSEPORT is preferred way to scale on multicore system.

However, I get completely opposite numbers: when using prefork I get worse results (~440k req/s) than with the simpler multi-threaded version (~480k req/s).

You can find the source for my benchmark at:

https://github.com/nanoant/WebFrameworkBenchmark/blob/master/benchmarks/go-fasthttp/helloworldserver.go

Another important observation is that the performance increase compared to net/http is around 1.8x, nowhere close to the claimed 4x-10x. Thoughts?

Therefore I humbly ask you to provide some solid benchmark examples where we can see the performance differences.

Handling connections after closing net.Listener

Hi @valyala ,

I am curious whether it is possible (or will be in the future) to monitor the worker pool in the context of connection pool size. After closing the net.Listener, the HTTP server does not accept any new connections and allows earlier accepted ones to finish (to be served until disconnection).
I need a way to monitor the connection pool for the case of graceful shutdown, just to be notified when the number of active connections drops to zero.

The reference implementation of HTTP server supporting such feature is available at:
https://github.com/tylerb/graceful
(it stores net.Conn in a hash map)

Kind regards,
Marcin

URI.UpdateBytes does not appear to be working with fragments

I've been writing a simple web server with fasthttp to deal with some simple logic in how we serve our web apps. One of the things we need to do is a redirect with a fragment into our web app. As in:
https://godoc.org/some#fragment?sss=sds

Now the problem is that the URI parser code does not appear to handle that scenario, and will simply urlencode the fragment marker (#).

Now if I were using URI directly I could work around it by setting the fragment directly. But since I am using the RequestCtx.RedirectBytes() function it is not really an option. And I presume something like this should work?

Quick demonstration of the issue:

package main

import (
    "fmt"

    "github.com/valyala/fasthttp"
)

func main() {
  uri := fasthttp.URI{}
  uri.Parse(nil, []byte("https://godoc.org"))
  fmt.Printf("Initial: %s\n", uri.FullURI())
  uri.Update("/some#fragment?sss=sds")
  fmt.Printf("Updated: %s\n", uri.FullURI())
}

any advice on improving performance on AWS

I just started using fasthttp. I checked and the CPU usage is only around 20%; I want to improve performance.

Is there any advice on how to improve performance on AWS?

Please provide real benchmark data and server information.

The claim for 1m concurrent connections is a pretty big one. Please provide the following:

  • What machine was used to handle 1m connections? E.g. m3.2xLarge (8 cpus, 30gb memory)
    • To put it into perspective, node.js can handle 800k connections on a m3.2xLarge.
  • Are these just ping/pong connections? If so then the actual throughput/rps is MUCH lower than 1 million.
    • G-WAN + Go can handle Average RPS:784,113 (at least according to their homepage)
  • What was the average latency for handling 1m concurrent connections?
  • Were there any bottlenecks? E.g. is it just hardware that is holding this library back from achieving any more throughput?

Thank you.

byte range issue

Hi,

Seems like an issue with byte range requests.

E.g.
File size: 210720491 bytes

When I send a request with the header "Range: bytes=95644624-" fasthttp handles it correctly.
However if we modify the header to "Range: bytes=95644624-210720491" (from 95644624 to the last byte of the file) it returns a 416 HTTP error because it can't handle the byte range.

cannot parse byte range "bytes=97851544-210720491" for path="/some_file.dat": invalid byte range

For instance I've just checked Nginx and it handles the same query correctly.

Keep-Alive in HTTP 1.0

I've created a simple app to test it with Apache Benchmark (which uses HTTP/1.0):

package main

import (
    "github.com/valyala/fasthttp"
)

const helloWorldString = "Hello, World!"

var (
    helloWorldBytes    = []byte(helloWorldString)
    helloWorldBytesLen = len(helloWorldBytes)
)

func plaintextHandler(ctx *fasthttp.RequestCtx) {
    ctx.SetStatusCode(fasthttp.StatusOK)
    ctx.Response.Header.SetContentType("text/plain")
    ctx.Response.Header.SetContentLength(helloWorldBytesLen)
    ctx.Response.Header.Set("Connection", "keep-alive")
    ctx.Write(helloWorldBytes)
}

func main() {
    fasthttp.ListenAndServe(":8080", plaintextHandler)
}

Unfortunately the server closes the connection when it sees HTTP/1.0 in the GET request:

GET / HTTP/1.0
Connection: Keep-Alive
Host: 127.0.0.1:8080
User-Agent: ApacheBench/2.3
Accept: */*

If I change the first line of the request to GET / HTTP/1.1 then everything works; the server returns a response:

HTTP/1.1 200 OK
Server: fasthttp
Date: Mon, 07 Dec 2015 11:43:30 GMT
Content-Type: text/plain
Content-Length: 13
Connection: keep-alive

Hello, World!

and waits for another request on the same connection.
Could you add support for Keep-Alive in HTTP 1.0?
HTTP/1.0, in its documented form, made no provision for persistent connections. Some HTTP/1.0 implementations, however, use a Keep-Alive header to request that a connection persist.

URI().Path bytes to string

Hi,

The docs say to avoid conversion between []byte and string because the fasthttp API provides functions for both.
However I've looked through the code and haven't found a function that would return a string for Path.
There is URI().String() but I don't see the same for URI().Path()

It seems the only way to sort this out is to convert from bytes to string but it's not recommended by you :)
Is there any other way to get Path as a string?

Thanks.

How do you handle unit tests?

I've been exploring full-stack unit testing for apps built on fasthttp, and my initial instinct was to use the default http package (to reduce the chance of a fasthttp flaw being shared between client and server, causing something to go undetected). However, Go doesn't like sharing a TCP port:

--- FAIL: TestServer (0.00s)
panic: listen tcp :8080: bind: Only one usage of each socket address (protocol/network address/port) is normally permitted. [recovered]
        panic: listen tcp :8080: bind: Only one usage of each socket address (protocol/network address/port) is normally permitted.

You seem to use unexported functions to do internal testing. What's the recommended way to do testing without running into errors with these? I'm doing a server.ListenAndServe(":8080") for the server and then doing this to connect to the localhost server:

req, err := http.NewRequest("GET", "http://localhost:8080/hostTest", nil)
if err != nil {
    panic(err)
}
req.Host = "example.com"
_, err = c.Do(req)
panicErr(err)
// Validate the response here...

I've been considering mocking up the net.Listener interface and passing that to Server.Serve(). Is there a better solution? What do you suggest?

Client.Do() returns EOF errors when reusing the Request and Response structs

Hi,
I am doing something like this :

var request = &fasthttp.Request{}
var response = &fasthttp.Response{}
var client = fasthttp.Client{....}
for {
    request.Reset()
    response.Reset()
    request.ResetBody()
    response.ResetBody()

    // [ setup the request URI, header etc... ]
    err = client.Do(request, response)
    if err != nil {
        fmt.Println(err.Error())
    }
}

In this use case I am getting EOF errors for some reason. Could you please explain what I am doing wrong?
Before, I was initializing the http.Request inside the loop and calling the default Go http client's Do() method.
I am running everything on my local machine, and there is haproxy between the client and server. I don't see these problems when I don't connect through haproxy.

I am using fasthttp from commit e823a9a.

My fasthttp client struct looks like that :

fasthttp.Client{
            Dial: func(addr string) (net.Conn, error) {
                var dialer = net.Dialer{
                    Timeout:   10 * time.Second,
                    KeepAlive: 5 * time.Second,
                }
                return dialer.Dial("tcp", addr)
            },
        },

Question: net/http.Request.Form equivalent?

Hi,

I have a question, if there's any analog to http.Request.Form container planned?
http.Request doc

Or are any convenience methods like PeekMulti(name) (for fasthttp.RequestCtx and/or fasthttp.Request) planned, that peek multiple params from both query and body?

I guess, this may lead to increased memory usage (for http.Request.Form analog) or memory allocations (for PeekMulti(name) methods) or be tricky to implement (pooling, if possible).

Or some kind of walker method like .Visit(name, callback(value)) (also for fasthttp.RequestCtx and/or fasthttp.Request) would also be nice (and shouldn't result in memory allocations). Btw, I've read the code for peekArgStr, and it seems that such a .Visit method (focused on a specific param name) would not provide any performance gain, as peekArgStr already loops through the key-value pair list (and I don't think that can or should be optimized).

Anyway, that's just a question/suggestion; that's, obviously, not hard to work around.

missing Handle and HandleFunc

Are you going to add equivalents of these?

http.Handle("/foo", fooHandler)

http.HandleFunc("/bar", func(w http.ResponseWriter, r *http.Request) {
    fmt.Fprintf(w, "Hello, %q", html.EscapeString(r.URL.Path))
})

InMemoryListener freezes on Write() method

I've been trying to use InMemoryListener for a few hours now, and I've pinned the issue down to the fact that it freezes on the Write() method of the "server side" connection (the one returned from Accept()).

The write method freezes the goroutine and doesn't write anything to the connection.

Same result on both go version go1.5.3 windows/amd64 and go version go1.5.1 windows/amd64. A friend on an unknown version of Go on a Mac had a similar issue with the same exact code.

Thoughts on what's causing this issue?

package main

import (
    "fmt"
    "net"

    "github.com/valyala/fasthttp/fasthttputil"
)

func main() {
    l := fasthttputil.NewInmemoryListener()
    go accept(l)

    c, err := l.Dial()
    if err != nil {
        panic(err)
    }
    defer c.Close()

    b := readOneByte(c)

    if string(b) == "A" {
        fmt.Println("Okay")
    } else {
        fmt.Println("Not Okay")
    }
}

func readOneByte(c net.Conn) byte {
    var b []byte

    n, err := c.Read(b)
    for n == 0 && err == nil {
        n, err = c.Read(b)
    }
    if err != nil {
        panic(err)
    }

    return b[0]
}

func accept(l *fasthttputil.InmemoryListener) {
    c, err := l.Accept() // Should be in a for loop, but doesn't matter here since there's only one connection.
    if err != nil {
        panic(err)
    }
    defer c.Close()

    _, err = c.Write([]byte("A")) // Freezes here.
    if err != nil {
        panic(err)
    }
}

Make normalizeHeaderKey optional

Could you please make this function optional?

Even if headers are supposed to be case-insensitive, many applications don't work correctly when the header case isn't the one they were expecting. I want to build a reverse proxy based on fasthttp and need it to be compatible with any application.

Connection Close on 204 No Content

I have an issue where a webserver returns a 204 response with Connection: keep-alive, but fasthttp sets Connection: close and closes the connection. This does not happen with curl or net/http.

This is the response according to fasthttp:

HTTP/1.1 204 No Content
Server: fasthttp
Date: Tue, 12 Jan 2016 16:34:26 GMT
Content-Type: text/plain; charset=utf-8
X-Powered-By: Express
Etag: W/"2-2745614147"
Date: Tue, 12 Jan 2016 16:34:30 GMT
Connection: keep-alive
Transfer-Encoding: identity
Connection: close

This is the response according to net/http:

&{Status:204 No Content StatusCode:204 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[X-Powered-By:[Express] Etag:[W/"2-2745614147"] Date:[Tue, 12 Jan 2016 16:34:30 GMT] Connection:[keep-alive]] Body:0xc8200de580 ContentLength:0 TransferEncoding:[] Close:false Trailer:map[] Request:0xc820112000 TLS:<nil>}

And this is according to curl:

< HTTP/1.1 204 No Content
< X-Powered-By: Express
< ETag: W/"2-2745614147"
< Date: Tue, 12 Jan 2016 16:35:39 GMT
< Connection: keep-alive
< 
* Connection #0 to host xxx left intact

This is the code I'm using:

    req := fasthttp.AcquireRequest()
    res := fasthttp.AcquireResponse()
    defer fasthttp.ReleaseRequest(req)
    defer fasthttp.ReleaseResponse(res)
    req.SetRequestURI("URL")
    req.Header.SetMethod("GET")
    req.Header.Set("Accept", "application/json")
    err := fasthttp.DoTimeout(req, res, 1*time.Second)

Am I doing something wrong or is it a bad behavior of fasthttp?

websockets support

My coworkers and I are a big fan of fasthttp, it has proven very useful for a work project and we find it very ergonomic to use.

I now have need for websockets with a fasthttp.Server. What is the plan for websockets support in fasthttp?

There are two major Go packages for websockets:

The former is a bit easier to use and is probably considered the "standard" Go websockets package since it comes from the Go team. Nonetheless, gorilla/websocket seems to support the standard better, and it appears more flexible and not as tied to net/http as the golang/net version.

I plan to fork gorilla/websocket and add a fasthttp version of the Upgrade method:

https://godoc.org/github.com/gorilla/websocket#Upgrader.Upgrade

If anyone is working on this, or if there are other ideas on how to get websockets working with fasthttp, please reply.

bytesconv.go gmtLocation Issues

var gmtLocation = func() *time.Location {
    x, err := time.LoadLocation("GMT")
    if err != nil {
        panic(fmt.Sprintf("cannot load GMT location: %s", err))
    }
    return x
}()

I think it should use the UTC timezone instead:

x, err := time.LoadLocation("GMT")
// LoadLocation returns the Location with the given name.
//
// If the name is "" or "UTC", LoadLocation returns UTC.
// If the name is "Local", LoadLocation returns Local.
//
// Otherwise, the name is taken to be a location name corresponding to a file
// in the IANA Time Zone database, such as "America/New_York".
//
// The time zone database needed by LoadLocation may not be
// present on all systems, especially non-Unix systems.
// LoadLocation looks in the directory or uncompressed zip file
// named by the ZONEINFO environment variable, if any, then looks in
// known installation locations on Unix systems,
// and finally looks in $GOROOT/lib/time/zoneinfo.zip.
func LoadLocation(name string) (*Location, error) {
    if name == "" || name == "UTC" {
        return UTC, nil
    }
    if name == "Local" {
        return Local, nil
    }
    if zoneinfo != "" {
        if z, err := loadZoneFile(zoneinfo, name); err == nil {
            z.name = name
            return z, nil
        }
    }
    return loadLocation(name)
}

If the user hasn't installed the Go SDK, $GOROOT/lib/time/zoneinfo.zip will not be found.

RequestCtx.SendFile doesn't consider Range request header

Hi,

Seems that RequestCtx.SendFile does not consider the Range header - it always sends the whole file (via Response.SendFile, which, obviously, sees no request headers).

But, for example, RequestCtx.SendFile analyzes If-Modified-Since.

It would be great if RequestCtx.SendFile considered the Range request header too, as FS does (I haven't tried it myself yet, but I looked through its code + saw issue #47).

I understand, that implementing it would require some copy-pasting so Response.SendFile cannot be reused there, so it's also a design-related issue.

So, is this behaviour intentional (meaning I should stick to FS if I want Range analysis), or is it simply not implemented?

I've looked at master's code as of 83e1796 commit, if it matters :)

it takes a long time on function readMultipartForm

When I upload a file with "multipart/form-data" from a web browser or Postman for Google Chrome, it takes a long time in the readMultipartForm function and does not return the error "cannot read multipart/form-data body".

Odd behavior: continues to serve content after listener.Close()

Hello,

I'm attempting to do a graceful shutdown by closing the listener. However, it doesn't work; I can continue to request new pages and they continue to be served, even after s.Serve returns, until the entire application ends.

    package main

    import (
        "github.com/valyala/fasthttp"
        "fmt"
        "log"
        "net"
        "time"
    )

    var listener net.Listener

    func root(ctx *fasthttp.RequestCtx) {
        if string(ctx.Path()) == `/close` {
            fmt.Fprint(ctx, "CLOSE\n")
            err := listener.Close()
            if err != nil {
                log.Println(err)
            }
        }
        fmt.Fprintf(ctx, "Hi there! RequestURI is %q", ctx.RequestURI())
    }

    func main() {
        var err error
        listener, err = net.Listen(`tcp`, `:8081`)
        if err != nil {
            log.Fatal(err)
        }
        s := &fasthttp.Server{
            Handler: root,
        }
        err = s.Serve(listener)
        if err != nil {
            log.Println(err)
        }
        log.Println(`Exiting in 30 seconds`)
        time.Sleep(time.Second * 30)
        log.Println(`Exiting`)
    }

Unable to cross-compile using go build

Trying to compile for linux $ env GOOS=linux GOARCH=arm go build -v github.com/abacaj/fasthttp

produces the following error:
# github.com/valyala/fasthttp
C:\Projects\Go\src\github.com\valyala\fasthttp\bytesconv.go:20: constant 18446744073709551615 overflows uint
C:\Projects\Go\src\github.com\valyala\fasthttp\bytesconv.go:33: constant 18446744073709551615 overflows uint

seems like a bug, but not sure - I'm on Go 1.5.1

A way to specify returned HTTP code when calling TimeoutError

Currently, we need to call ctx.TimeoutError() if we wish to return from the request, which in turn sets the status code to StatusRequestTimeout.

Should there be another function:
func (*RequestCtx) TimeoutErrorWithCode(statusCode int, msg string)
which lets the user override the status code and message?

Can't get high throughput

I have a very simple program:

package main

import (
    "flag"
    "log"
    "time"

    "github.com/davecheney/profile"
    "github.com/valyala/fasthttp"
    "github.com/valyala/fasthttp/reuseport"
)

var (
    addr = flag.String("addr", ":10000", "TCP address to listen to")
    c    = &fasthttp.HostClient{
        Addr:            "192.168.1.1:80",
        ReadTimeout:     30 * time.Second,
        WriteTimeout:    30 * time.Second,
        ReadBufferSize:  64 * 1024,
        WriteBufferSize: 64 * 1024,
    }
)

func main() {
    defer profile.Start(profile.CPUProfile).Stop()
    flag.Parse()

    listener, err := reuseport.Listen("tcp4", *addr)
    if err != nil {
        panic(err)
    }
    defer listener.Close()

    if err := fasthttp.Serve(listener, requestHandler); err != nil {
        log.Fatalf("Error in ListenAndServe: %s", err)
    }
}

func requestHandler(ctx *fasthttp.RequestCtx) {
    err := c.Do(&ctx.Request, &ctx.Response)
    if err != nil {
        log.Printf("Error: %s", err)
    }
    ctx.Response.Header.DisableNormalizing()
    etag := string(ctx.Response.Header.Peek("Etag"))
    ctx.Response.Header.Del("Etag")
    ctx.Response.Header.Set("ETag", etag)
}

I can't get more than 100 MB/s, but if I run the same benchmark using 192.168.1.1:80 directly, I get more than twice this throughput.

Here is the profile output:

Entering interactive mode (type "help" for commands)
(pprof) top10
68.64s of 71.57s total (95.91%)
Dropped 207 nodes (cum <= 0.36s)
Showing top 10 nodes out of 54 (cum >= 54.31s)
      flat  flat%   sum%        cum   cum%
    38.15s 53.30% 53.30%     38.36s 53.60%  syscall.Syscall
    28.83s 40.28% 93.59%     28.83s 40.28%  runtime.memclr
     0.85s  1.19% 94.77%      0.85s  1.19%  runtime.memmove
     0.37s  0.52% 95.29%      0.37s  0.52%  runtime.futex
     0.16s  0.22% 95.51%     25.67s 35.87%  net.(*netFD).Read
     0.07s 0.098% 95.61%     25.78s 36.02%  bufio.(*Reader).Read
     0.06s 0.084% 95.70%     25.73s 35.95%  net.(*conn).Read
     0.06s 0.084% 95.78%      0.78s  1.09%  runtime.(*mspan).sweep
     0.05s  0.07% 95.85%      0.44s  0.61%  runtime.findrunnable
     0.04s 0.056% 95.91%     54.31s 75.88%  github.com/valyala/fasthttp.appendBodyFixedSize

go get github.com/valyala/fasthttp compile errors

I seem to be having problems when I am trying to fetch the package. This is the error I am getting:

github.com/valyala/fasthttp

src/github.com/valyala/fasthttp/bytesconv.go:53: date.In(gmtLocation).AppendFormat undefined (type time.Time has no field or method AppendFormat)
src/github.com/valyala/fasthttp/header.go:1125: undefined: bytes.LastIndexByte
src/github.com/valyala/fasthttp/header.go:1450: r.Discard undefined (type *bufio.Reader has no field or method Discard)
src/github.com/valyala/fasthttp/http.go:430: undefined: io.CopyBuffer
src/github.com/valyala/fasthttp/uri.go:221: undefined: bytes.LastIndexByte
src/github.com/valyala/fasthttp/uri.go:233: undefined: bytes.LastIndexByte

[Fileserver] Fileserver Memory Usage with 200k concurrent connections.

Hi @valyala,

I am using the fileserver given in the example to serve files from a 4-core system in production. I load tested it on the server and could easily achieve 20k requests/second (this includes Kafka produce of access logs). I have two queries; it would be really helpful if you could look into them.

The issue I faced was when the number of concurrent connections went up to around 250k while serving a 300kb file to real users. At that moment the RAM (30GB) was full, free disk space dropped from 80 GB to 0, and I had no option but to kill the processes.

After going through the code, I suspect the request handler opens a new file whenever all the bigFileReaders are already in use. Does this mean 200k concurrent connections created 200k bigFileReader instances? Am I right on this?

Does fileserver use sendfile for bigfiles?

FileServer speed

Hi,

I'm not sure it's an issue however possibly you can recommend some kind of trick to get this resolved.
I stream a lot of files using fasthttp, but the aggregate streaming speed is about 20% slower than nginx streaming.

It's clear fasthttp was developed with requests concurrency and high-load environments in mind. But is there a way to get the same performance for file streaming?

It's absolutely possible I do something wrong in my code.
Please have a look at it. I'll be grateful for any kind of advice.

Thanks.

package main

import (
    "runtime"
    "strings"
    "time"

    "github.com/valyala/fasthttp"
)

func main() {

    runtime.GOMAXPROCS(runtime.NumCPU())

    rw := fasthttp.PathRewriteFunc(func(ctx *fasthttp.RequestCtx) []byte {
        urlPart := strings.Split(string(ctx.Path()), "/")
        return []byte("/" + urlPart[2] + "/" + urlPart[3])
    })

    fs := fasthttp.FS{
        Root:            "/var/spool/cache",
        AcceptByteRange: true,
        PathRewrite:     rw,
        Compress:        false,
        CacheDuration:   time.Duration(1) * time.Hour,
    }

    h := fs.NewRequestHandler()

    requestHandler := func(ctx *fasthttp.RequestCtx) {
        urlPart := strings.Split(string(ctx.Path()), "/")
        fileName := urlPart[4]

        if len(urlPart) == 5 && (ctx.IsGet() == true || ctx.IsHead() == true) {

            ctx.Response.Header.Set("Content-disposition", "attachment; filename="+fileName)
            h(ctx)
        } else {
            ctx.NotFound()
        }
    }

    var s fasthttp.Server

    s.Concurrency = 262144
    s.MaxKeepaliveDuration = time.Duration(2) * time.Second
    s.ReadBufferSize = 16384
    s.WriteTimeout = time.Duration(15) * time.Second

    s.Handler = requestHandler

    errListen := s.ListenAndServe("0.0.0.0:80")
    if errListen != nil {
        panic(errListen)
    }

}

Add transparent compression handling in the Response struct.

Hi,
I was wondering if adding transparent compression handling in the code is feasible.
Currently we check whether the "Content-Encoding" header has a "gzip" value and decompress the data ourselves.
I might do it myself if I have some free time on my hands, but I was wondering whether this change would even be accepted.

Reverse Proxy?

The golang httputil package has a ReverseProxy that serves from an http.Request.

Is there any comparable reverse proxy for fasthttp that serves from a fasthttp.Request?

Content-Encoding: deflate is actually zlib

Had this issue with another library.
It's a little counter-intuitive, but Content-Encoding: deflate is actually not a raw flate stream, but a zlib one.
According to [Wikipedia](https://en.wikipedia.org/wiki/HTTP_compression):

deflate – compression based on the deflate algorithm (described in RFC 1951), wrapped inside the zlib data format (RFC 1950);

fasthttp currently uses the flate package for handling deflated content, but it should use zlib.

Also, it would be great if fasthttp used: https://github.com/klauspost/compress as it provides optimized compression packages

What is the best way to read headers ?

The only way I found is to use RequestCtx.Header.String() and parse it myself, but I think it would be easier to provide a function for this, no? Or perhaps I missed it.

MaxIdleConnsPerHost in Client

Hi!
Thanks for the library! 🚀

Am I understanding it right that there is no way to reuse connections in the manner of net/http, where the client uses an idle (free) connection when possible, or creates a new one even if the MaxIdleConnsPerHost limit is exceeded?

Can't set Content-Type on GET requests

package main
import "github.com/valyala/fasthttp"
import "fmt"

func main() {
    header := fasthttp.RequestHeader{}

    header.SetRequestURI("http://localhost/test")
    header.Set("Accept", "application/json")
    header.Set("Arbitrary", "should-be-present")
    header.SetContentType("application/json")
    header.Set("Content-Type", "application/json")
    fmt.Printf("Header is: %s\n", string(header.String()))
}

Actual result:

Header is: GET http://localhost/test HTTP/1.1
User-Agent: fasthttp client
Accept: application/json
Arbitrary: should-be-present

According to RFC 2616 section 7.2.1, Content-Type should be set on messages containing an entity, but the RFC does not prohibit setting it on other methods. Some APIs use Content-Type to select the API version on GET requests.

Is it OK to reuse Request?

Assuming I'm going to concurrently execute the exact same request multiple times, is it OK to reuse the same Request instance or should I copy it?

Broken build on 386/arm architecture

$ go get -v github.com/valyala/fasthttp
github.com/valyala/fasthttp
# github.com/valyala/fasthttp
../../valyala/fasthttp/bytesconv.go:19: constant 18446744073709551615 overflows uint
../../valyala/fasthttp/bytesconv.go:32: constant 18446744073709551615 overflows uint

I fixed bytesconv.go: https://github.com/msoap/fasthttp/commit/1d5c402cdc457aa9c08671d84bbe2b0c9df1873b

but another test now fails (client_test.go:505):

$ go version
go version go1.5 linux/arm

$ go test
panic: runtime error: invalid memory address or nil pointer dereference
[signal 0xb code=0x1 addr=0x4 pc=0x140998]

goroutine 26 [running]:
sync/atomic.storeUint64(0x1076e0fc, 0x56596ced, 0x0)
    /home/msa/var/src/go/src/sync/atomic/64bit_arm.go:20 +0x40
github.com/msoap/fasthttp.(*HostClient).do(0x1076e0b0, 0x1078c180, 0x1077e100, 0x10727600, 0x673c8, 0x0, 0x0)
    /home/msa/var/lib/go/src/github.com/msoap/fasthttp/client.go:728 +0x100
github.com/msoap/fasthttp.(*HostClient).Do(0x1076e0b0, 0x1078c180, 0x1077e100, 0x0, 0x0)
    /home/msa/var/lib/go/src/github.com/msoap/fasthttp/client.go:716 +0x40
github.com/msoap/fasthttp.(*Client).Do(0x4f62e0, 0x1078c180, 0x1077e100, 0x0, 0x0)
    /home/msa/var/lib/go/src/github.com/msoap/fasthttp/client.go:270 +0x4d4
github.com/msoap/fasthttp.doRequest(0x1078c180, 0x0, 0x0, 0x0, 0x107147e0, 0x24, 0xb63f35d0, 0x4f62e0, 0x0, 0x0, ...)
    /home/msa/var/lib/go/src/github.com/msoap/fasthttp/client.go:568 +0x200
github.com/msoap/fasthttp.clientGetURLTimeoutFreeConn.func1(0x1078c180, 0x0, 0x0, 0x0, 0x107147e0, 0x24, 0xb63f35d0, 0x4f62e0, 0x10718ac0)
    /home/msa/var/lib/go/src/github.com/msoap/fasthttp/client.go:512 +0x54
created by github.com/msoap/fasthttp.clientGetURLTimeoutFreeConn
    /home/msa/var/lib/go/src/github.com/msoap/fasthttp/client.go:518 +0x110

goroutine 1 [chan receive]:
testing.RunTests(0x402600, 0x4f4aa0, 0x6c, 0x6c, 0x4f5b01)
    /home/msa/var/src/go/src/testing/testing.go:562 +0x618
testing.(*M).Run(0x10747f74, 0x12380)
    /home/msa/var/src/go/src/testing/testing.go:494 +0x6c
main.main()
    github.com/msoap/fasthttp/_test/_testmain.go:376 +0x118

goroutine 17 [syscall, locked to thread]:
runtime.goexit()
    /home/msa/var/src/go/src/runtime/asm_arm.s:1036 +0x4

goroutine 5 [sleep]:
time.Sleep(0x3b9aca00, 0x0)
    /home/msa/var/src/go/src/runtime/time.go:59 +0x104
github.com/msoap/fasthttp.init.1.func1()
    /home/msa/var/lib/go/src/github.com/msoap/fasthttp/header.go:897 +0x24
created by github.com/msoap/fasthttp.init.1
    /home/msa/var/lib/go/src/github.com/msoap/fasthttp/header.go:900 +0x28

goroutine 10 [runnable]:
testing.tRunner.func1(0x10710600)
    /home/msa/var/src/go/src/testing/testing.go:452 +0x174
testing.tRunner(0x10710600, 0x4f4ad0)
    /home/msa/var/src/go/src/testing/testing.go:458 +0xb8
created by testing.RunTests
    /home/msa/var/src/go/src/testing/testing.go:561 +0x5ec

goroutine 24 [select]:
github.com/msoap/fasthttp.clientGetURLTimeoutFreeConn(0x0, 0x0, 0x0, 0x107147e0, 0x24, 0x3b9aca00, 0x0, 0xb63f35d0, 0x4f62e0, 0x0, ...)
    /home/msa/var/lib/go/src/github.com/msoap/fasthttp/client.go:530 +0x300
github.com/msoap/fasthttp.clientGetURLTimeout(0x0, 0x0, 0x0, 0x107147e0, 0x24, 0x3b9aca00, 0x0, 0xb63f35d0, 0x4f62e0, 0x10752090, ...)
    /home/msa/var/lib/go/src/github.com/msoap/fasthttp/client.go:468 +0x24c
github.com/msoap/fasthttp.(*Client).GetTimeout(0x4f62e0, 0x0, 0x0, 0x0, 0x107147e0, 0x24, 0x3b9aca00, 0x0, 0x20, 0x0, ...)
    /home/msa/var/lib/go/src/github.com/msoap/fasthttp/client.go:168 +0xc8
github.com/msoap/fasthttp.testClientGetTimeoutSuccess(0x10710fc0, 0x4f62e0, 0x10713360, 0x16, 0x64)
    /home/msa/var/lib/go/src/github.com/msoap/fasthttp/client_test.go:381 +0x1ac
github.com/msoap/fasthttp.TestClientGetTimeoutSuccess(0x10710fc0)
    /home/msa/var/lib/go/src/github.com/msoap/fasthttp/client_test.go:21 +0xd4
testing.tRunner(0x10710fc0, 0x4f4b60)
    /home/msa/var/src/go/src/testing/testing.go:456 +0xa8
created by testing.RunTests
    /home/msa/var/src/go/src/testing/testing.go:561 +0x5ec

goroutine 25 [runnable]:
github.com/msoap/fasthttp.startEchoServerExt.func2(0x107523f0, 0xb63f2580, 0x1070a548, 0x10710fc0, 0x10718a80)
    /home/msa/var/lib/go/src/github.com/msoap/fasthttp/client_test.go:505
created by github.com/msoap/fasthttp.startEchoServerExt
    /home/msa/var/lib/go/src/github.com/msoap/fasthttp/client_test.go:511 +0x460

goroutine 27 [runnable]:
github.com/msoap/fasthttp.(*Client).mCleaner(0x4f62e0, 0x107133e0)
    /home/msa/var/lib/go/src/github.com/msoap/fasthttp/client.go:273
created by github.com/msoap/fasthttp.(*Client).Do
    /home/msa/var/lib/go/src/github.com/msoap/fasthttp/client.go:267 +0x4b8
exit status 2
FAIL    github.com/msoap/fasthttp   0.194s

How do I disable sending a content-type header when there is no body?

I run this code:

package main

import (
    "github.com/valyala/fasthttp"
)

func main() {
    h := func(ctx *fasthttp.RequestCtx) {

    }
    s := fasthttp.Server{
        Handler: h,
    }
    s.ListenAndServe(":6060")
}

but Chrome's inspector still shows Content-Type: text/plain. Wouldn't it make sense not to send a Content-Type header when there is no content?

Store value in RequestContext!

It would be nice if we could store values in the RequestCtx. It would help a lot for passing values across middlewares!

With Go 1.5, a map[string]interface{} is fast and simple for this kind of storage!
