fasthttp's People

Contributors

alexandear, amezghal, aoang, bobochka, byene0923, cipriancraciun, cristaloleg, dependabot[bot], dgrr, enchantner, erikdubbelboer, ernado, gotoxu, ichxxx, kirilldanshin, kiyonlin, li-jin-gou, mkorolyov, moredure, nickajacks1, panjf2000, peczenyj, shulhan, stokito, tolyar, tylitianrui, valyala, xuxiao415, yankawayu, zhangyunhao116

fasthttp's Issues

What is the best way to read headers?

The only way I found is to use RequestCtx.Header.String() and parse it myself, but I think it would be easier to provide a function for this, no? Or perhaps I missed it.
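
For what it's worth, request headers expose a VisitAll callback over the parsed key/value pairs; a minimal sketch of using it inside a handler:

package example

import (
    "fmt"

    "github.com/valyala/fasthttp"
)

func handler(ctx *fasthttp.RequestCtx) {
    // Iterate over every parsed request header key/value pair without
    // re-parsing Header.String() by hand.
    ctx.Request.Header.VisitAll(func(key, value []byte) {
        fmt.Printf("%s: %s\n", key, value)
    })
}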

websockets support

My coworkers and I are big fans of fasthttp; it has proven very useful for a work project and we find it very ergonomic to use.

I now have need for websockets with a fasthttp.Server. What is the plan for websockets support in fasthttp?

There are two major Go packages for websockets: golang.org/x/net/websocket and github.com/gorilla/websocket.

The former is a bit easier to use and is probably considered the "standard" Go websockets package since it comes from the Go team. Nonetheless, the gorilla websocket package seems to follow the standard better, and it appears more flexible and not as tied to net/http as the golang/net version.

I plan to fork gorilla/websocket and add a fasthttp version of the Upgrade method:

https://godoc.org/github.com/gorilla/websocket#Upgrader.Upgrade

If anyone is working on this, or if there are other ideas on how to get websockets working with fasthttp, please reply.

Keep-Alive in HTTP 1.0

I've created a simple app to test it with Apache Benchmark (which uses HTTP/1.0):

package main

import (
    "github.com/valyala/fasthttp"
)

const helloWorldString = "Hello, World!"

var (
    helloWorldBytes    = []byte(helloWorldString)
    helloWorldBytesLen = len(helloWorldBytes)
)

func plaintextHandler(ctx *fasthttp.RequestCtx) {
    ctx.SetStatusCode(fasthttp.StatusOK)
    ctx.Response.Header.SetContentType("text/plain")
    ctx.Response.Header.SetContentLength(helloWorldBytesLen)
    ctx.Response.Header.Set("Connection", "keep-alive")
    ctx.Write(helloWorldBytes)
}

func main() {
    fasthttp.ListenAndServe(":8080", plaintextHandler)
}

Unfortunately, the server closes the connection when it sees HTTP/1.0 in the GET request:

GET / HTTP/1.0
Connection: Keep-Alive
Host: 127.0.0.1:8080
User-Agent: ApacheBench/2.3
Accept: */*

If I change the first line of the request to GET / HTTP/1.1 then everything works; the server returns the response:

HTTP/1.1 200 OK
Server: fasthttp
Date: Mon, 07 Dec 2015 11:43:30 GMT
Content-Type: text/plain
Content-Length: 13
Connection: keep-alive

Hello, World!

and waits for another request on the same connection.
Could you add support for Keep-Alive in HTTP 1.0?
HTTP/1.0, in its documented form, made no provision for persistent connections. Some HTTP/1.0 implementations, however, use a Keep-Alive header to request that a connection persist.

URI().Path bytes to string

Hi,

The docs say to avoid conversions between []byte and string because the fasthttp API provides functions for both.
However, I've looked through the code and haven't found a function that returns the Path as a string.
There is URI().String(), but I don't see an equivalent for URI().Path().

It seems the only way to sort this out is to convert from bytes to string, which you recommend against :)
Is there any other way to get the Path as a string?

Thanks.
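
For reference, a minimal sketch of the bytes-to-string workaround discussed above; the conversion allocates a copy, which is exactly the cost the docs warn about:

package example

import "github.com/valyala/fasthttp"

func pathString(ctx *fasthttp.RequestCtx) string {
    // URI().Path() returns []byte; converting makes a copy that is safe to
    // keep after the handler returns.
    return string(ctx.URI().Path())
}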

missing Handle and HandleFunc

Are you going to add these?

http.Handle("/foo", fooHandler)

http.HandleFunc("/bar", func(w http.ResponseWriter, r *http.Request) {
    fmt.Fprintf(w, "Hello, %q", html.EscapeString(r.URL.Path))
})
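
fasthttp doesn't ship Handle/HandleFunc equivalents; a minimal sketch of doing the routing manually inside a single RequestHandler (a third-party router would work too):

package main

import (
    "fmt"
    "html"

    "github.com/valyala/fasthttp"
)

func main() {
    handler := func(ctx *fasthttp.RequestCtx) {
        // Dispatch on the request path by hand.
        switch string(ctx.Path()) {
        case "/foo":
            fooHandler(ctx)
        case "/bar":
            fmt.Fprintf(ctx, "Hello, %q", html.EscapeString(string(ctx.Path())))
        default:
            ctx.NotFound()
        }
    }
    fasthttp.ListenAndServe(":8080", handler)
}

func fooHandler(ctx *fasthttp.RequestCtx) {
    // ...
}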

URI.UpdateBytes does not appear to be working with fragments

I've been writing a simple web server with fasthttp to deal with some simple logic in how we serve our web apps. One of the things we need to do is redirect with a fragment into our web app. As in:
https://godoc.org/some#fragment?sss=sds

The problem is that the URI parser does not appear to handle that scenario and will simply URL-encode the fragment marker (#).

If I were using URI directly, I could work around it by setting the fragment explicitly. But since I am using the RequestCtx.RedirectBytes() function, that is not really an option. I presume something like this should work?

Quick demonstration of the issue:

package main

import (
    "fmt"

    "github.com/valyala/fasthttp"
)

func main() {
    uri := fasthttp.URI{}
    uri.Parse(nil, []byte("https://godoc.org"))
    fmt.Printf("Initial: %s\n", uri.FullURI())
    uri.Update("/some#fragment?sss=sds")
    fmt.Printf("Updated: %s\n", uri.FullURI())
}

How do you handle unit tests?

I've been exploring full-stack unit testing for apps built on fasthttp, and my initial instinct was to use the default http package (to reduce the chance of a fasthttp flaw being shared between client and server, causing something to go undetected). However, Go doesn't like sharing a TCP port:

--- FAIL: TestServer (0.00s)
panic: listen tcp :8080: bind: Only one usage of each socket address (protocol/network address/port) is normally permitted. [recovered]
        panic: listen tcp :8080: bind: Only one usage of each socket address (protocol/network address/port) is normally permitted.

You seem to use unexported functions to do internal testing. What's the recommended way to do testing without running into errors with these? I'm doing a server.ListenAndServe(":8080") for the server and then doing this to connect to the localhost server:

req, err := http.NewRequest("GET", "http://localhost:8080/hostTest", nil)
if err != nil {
    panic(err)
}
req.Host = "example.com"
_, err = c.Do(req)
panicErr(err)
// Validate the response here...

I've been considering mocking up the net.Listener interface and passing that to Server.Serve(). Is there a better solution? What do you suggest?
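
One option that avoids binding a real TCP port at all is fasthttputil.InmemoryListener (it also comes up in a later issue below); a rough sketch, assuming a net/http client on the test side:

package example

import (
    "net"
    "net/http"

    "github.com/valyala/fasthttp"
    "github.com/valyala/fasthttp/fasthttputil"
)

// newTestClient serves handler over an in-memory listener and returns an
// http.Client whose connections are dialed straight into it, so no TCP port
// is ever bound.
func newTestClient(handler fasthttp.RequestHandler) (*http.Client, *fasthttputil.InmemoryListener) {
    ln := fasthttputil.NewInmemoryListener()
    go fasthttp.Serve(ln, handler)

    client := &http.Client{
        Transport: &http.Transport{
            Dial: func(network, addr string) (net.Conn, error) {
                return ln.Dial()
            },
        },
    }
    return client, ln
}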

Can't get high throughput

I have a very simple program:

package main

import (
    "flag"
    "log"
    "time"

    "github.com/davecheney/profile"
    "github.com/valyala/fasthttp"
    "github.com/valyala/fasthttp/reuseport"
)

var (
    addr = flag.String("addr", ":10000", "TCP address to listen to")
    c    = &fasthttp.HostClient{
        Addr:            "192.168.1.1:80",
        ReadTimeout:     30 * time.Second,
        WriteTimeout:    30 * time.Second,
        ReadBufferSize:  64 * 1024,
        WriteBufferSize: 64 * 1024,
    }
)

func main() {
    defer profile.Start(profile.CPUProfile).Stop()
    flag.Parse()

    listener, err := reuseport.Listen("tcp4", *addr)
    if err != nil {
        panic(err)
    }
    defer listener.Close()

    if err := fasthttp.Serve(listener, requestHandler); err != nil {
        log.Fatalf("Error in ListenAndServe: %s", err)
    }
}

func requestHandler(ctx *fasthttp.RequestCtx) {
    err := c.Do(&ctx.Request, &ctx.Response)
    if err != nil {
        log.Printf("Error: %s", err)
    }
    ctx.Response.Header.DisableNormalizing()
    etag := string(ctx.Response.Header.Peek("Etag"))
    ctx.Response.Header.Del("Etag")
    ctx.Response.Header.Set("ETag", etag)
}

I can't get more than 100 MB/s, but if I run the same benchmark using 192.168.1.1:80 directly, I get more than twice this throughput.

Here is the profile output:

Entering interactive mode (type "help" for commands)
(pprof) top10
68.64s of 71.57s total (95.91%)
Dropped 207 nodes (cum <= 0.36s)
Showing top 10 nodes out of 54 (cum >= 54.31s)
      flat  flat%   sum%        cum   cum%
    38.15s 53.30% 53.30%     38.36s 53.60%  syscall.Syscall
    28.83s 40.28% 93.59%     28.83s 40.28%  runtime.memclr
     0.85s  1.19% 94.77%      0.85s  1.19%  runtime.memmove
     0.37s  0.52% 95.29%      0.37s  0.52%  runtime.futex
     0.16s  0.22% 95.51%     25.67s 35.87%  net.(*netFD).Read
     0.07s 0.098% 95.61%     25.78s 36.02%  bufio.(*Reader).Read
     0.06s 0.084% 95.70%     25.73s 35.95%  net.(*conn).Read
     0.06s 0.084% 95.78%      0.78s  1.09%  runtime.(*mspan).sweep
     0.05s  0.07% 95.85%      0.44s  0.61%  runtime.findrunnable
     0.04s 0.056% 95.91%     54.31s 75.88%  github.com/valyala/fasthttp.appendBodyFixedSize

MaxIdleConnsPerHost in Client

Hi!
Thanks for the library! 🚀

Do I understand correctly that there is no way to reuse connections in the manner of net/http, where the client uses an idle (free) connection when possible, or creates a new one even if the MaxIdleConnsPerHost limit is exceeded?

RequestCtx.SendFile doesn't consider Range request header

Hi,

It seems that RequestCtx.SendFile does not consider the Range header; it always sends the whole file (via Response.SendFile, which obviously has no access to the request headers).

But, for example, RequestCtx.SendFile does analyze If-Modified-Since.

It would be great if RequestCtx.SendFile considered the Range request header too, as FS does (I haven't tried it myself yet, but I've looked through its code and seen issue #47).

I understand that implementing it would require some copy-pasting since Response.SendFile cannot be reused there, so it's also a design-related issue.

So, is this behaviour intentional (and I should stick to FS if I want Range handling), or simply not implemented yet?

I've looked at master's code as of commit 83e1796, if it matters :)

InMemoryListener freezes on Write() method

I've been trying to use InMemoryListener for a few hours now, and I've pinned the issue down to the fact that it freezes on the Write() method of the "server side" connection (the one returned from Accept()).

The write method freezes the goroutine and doesn't write anything to the connection.

Same result on both go version go1.5.3 windows/amd64 and go version go1.5.1 windows/amd64. A friend on an unknown version of Go on a Mac had a similar issue with the same exact code.

Thoughts on what's causing this issue?

package main

import (
    "fmt"
    "net"

    "github.com/valyala/fasthttp/fasthttputil"
)

func main() {
    l := fasthttputil.NewInmemoryListener()
    go accept(l)

    c, err := l.Dial()
    if err != nil {
        panic(err)
    }
    defer c.Close()

    b := readOneByte(c)

    if string(b) == "A" {
        fmt.Println("Okay")
    } else {
        fmt.Println("Not Okay")
    }
}

func readOneByte(c net.Conn) byte {
    var b []byte

    n, err := c.Read(b)
    for n == 0 && err == nil {
        n, err = c.Read(b)
    }
    if err != nil {
        panic(err)
    }

    return b[0]
}

func accept(l *fasthttputil.InmemoryListener) {
    c, err := l.Accept() // Should be in a for loop, but doesn't matter here since there's only one connection.
    if err != nil {
        panic(err)
    }
    defer c.Close()

    _, err = c.Write([]byte("A")) // Freezes here.
    if err != nil {
        panic(err)
    }
}

Please provide real benchmark data and server information.

The claim for 1m concurrent connections is a pretty big one. Please provide the following:

  • What machine was used to handle 1m connections? E.g. m3.2xLarge (8 cpus, 30gb memory)
    • To put it into perspective, node.js can handle 800k connections on a m3.2xLarge.
  • Are these just ping/pong connections? If so, then the actual throughput/RPS is MUCH lower than 1 million.
    • G-WAN + Go can handle Average RPS:784,113 (at least according to their homepage)
  • What was the average latency for handling 1m concurrent connections?
  • Were there any bottlenecks? E.g. is it just hardware that is holding this library back from achieving any more throughput?

Thank you.

How do I disable sending a content-type header when there is no body?

I run this code:

package main

import (
    "github.com/valyala/fasthttp"
)

func main() {
    h := func(ctx *fasthttp.RequestCtx) {

    }
    s := fasthttp.Server{
        Handler: h,
    }
    s.ListenAndServe(":6060")
}

but the inspector in Chrome still shows Content-Type: text/plain. Wouldn't it make sense not to send a Content-Type header when there is no content?

Make normalizeHeaderKey optional

Could you please make this function optional?

Even though headers are supposed to be case-insensitive, many applications don't work correctly when the header case isn't the one they expect. I want to build a reverse proxy based on fasthttp and need it to be compatible with any application.
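
For reference, both RequestHeader and ResponseHeader expose a DisableNormalizing method (it appears in the throughput example earlier in this page); a sketch of calling it in a proxy handler, though whether that is early enough to affect parsing of the incoming request depends on the server internals:

package example

import "github.com/valyala/fasthttp"

func proxyHandler(ctx *fasthttp.RequestCtx) {
    // Keep header keys as received instead of canonicalizing them before
    // the request and response are forwarded.
    ctx.Request.Header.DisableNormalizing()
    ctx.Response.Header.DisableNormalizing()
    // ... forward to the upstream here ...
}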

Handling connections after closing net.Listener

Hi @valyala ,

I am curious whether it is possible (or will be in the future) to observe the worker pool in terms of connection pool size. After closing the net.Listener, the HTTP server does not accept any new connections, but it allows earlier accepted ones to finish (to be served until disconnection).
I need to find a way to monitor the connection pool for the case of graceful shutdown, just to be notified when the number of active connections decreases to zero.

The reference implementation of HTTP server supporting such feature is available at:
https://github.com/tylerb/graceful
(it stores net.Conn in a hash map)

Kind regards,
Marcin
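
Absent built-in support, one workable approach (a rough sketch, not fasthttp API) is to wrap the net.Listener so every accepted connection is counted and released on Close:

package example

import (
    "net"
    "sync"
)

// countingListener wraps a net.Listener and tracks active connections.
type countingListener struct {
    net.Listener
    wg sync.WaitGroup
}

func (l *countingListener) Accept() (net.Conn, error) {
    c, err := l.Listener.Accept()
    if err != nil {
        return nil, err
    }
    l.wg.Add(1)
    return &countedConn{Conn: c, wg: &l.wg}, nil
}

// Wait blocks until every accepted connection has been closed.
func (l *countingListener) Wait() { l.wg.Wait() }

type countedConn struct {
    net.Conn
    wg   *sync.WaitGroup
    once sync.Once
}

func (c *countedConn) Close() error {
    err := c.Conn.Close()
    c.once.Do(c.wg.Done)
    return err
}

After calling listener.Close(), Wait() returns once the in-flight connections have drained.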

Prefork w/reuse port is NOT really faster than multi-threaded

Hello, I am the author of the dumb & simple WebFrameworkBenchmark. It isn't as complete as TechEmpower's, but its purpose is just to test the framework (router) overhead.

According to your Performance optimization tips for multi-core systems, using pre-fork with SO_REUSEPORT is the preferred way to scale on a multi-core system.

However, I get the completely opposite result: when using prefork I get worse numbers (~440k req/s) than with the simpler multi-threaded version (~480k req/s).

You can find the source for my benchmark at:

https://github.com/nanoant/WebFrameworkBenchmark/blob/master/benchmarks/go-fasthttp/helloworldserver.go

Another important observation is that the performance increase compared to net/http is around 1.8x, which is nowhere close to the claimed 4x-10x. Thoughts?

Therefore I humbly ask you to provide some solid benchmark examples where we can see the performance differences.

Broken build on 386/arm architecture

$ go get -v github.com/valyala/fasthttp
github.com/valyala/fasthttp
# github.com/valyala/fasthttp
../../valyala/fasthttp/bytesconv.go:19: constant 18446744073709551615 overflows uint
../../valyala/fasthttp/bytesconv.go:32: constant 18446744073709551615 overflows uint

I've fixed bytesconv.go: https://github.com/msoap/fasthttp/commit/1d5c402cdc457aa9c08671d84bbe2b0c9df1873b
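
For reference, the usual portable way to express the maximum uint value so it compiles on both 32-bit and 64-bit targets (a sketch of the idiom, not necessarily what the linked commit does):

// ^uint(0) has all bits set: 2^32-1 on 386/arm, 2^64-1 on amd64.
const maxUint = ^uint(0)
const maxInt = int(maxUint >> 1) // largest value representable by int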

but another test fails (client_test.go:505):

$ go version
go version go1.5 linux/arm

$ go test
panic: runtime error: invalid memory address or nil pointer dereference
[signal 0xb code=0x1 addr=0x4 pc=0x140998]

goroutine 26 [running]:
sync/atomic.storeUint64(0x1076e0fc, 0x56596ced, 0x0)
    /home/msa/var/src/go/src/sync/atomic/64bit_arm.go:20 +0x40
github.com/msoap/fasthttp.(*HostClient).do(0x1076e0b0, 0x1078c180, 0x1077e100, 0x10727600, 0x673c8, 0x0, 0x0)
    /home/msa/var/lib/go/src/github.com/msoap/fasthttp/client.go:728 +0x100
github.com/msoap/fasthttp.(*HostClient).Do(0x1076e0b0, 0x1078c180, 0x1077e100, 0x0, 0x0)
    /home/msa/var/lib/go/src/github.com/msoap/fasthttp/client.go:716 +0x40
github.com/msoap/fasthttp.(*Client).Do(0x4f62e0, 0x1078c180, 0x1077e100, 0x0, 0x0)
    /home/msa/var/lib/go/src/github.com/msoap/fasthttp/client.go:270 +0x4d4
github.com/msoap/fasthttp.doRequest(0x1078c180, 0x0, 0x0, 0x0, 0x107147e0, 0x24, 0xb63f35d0, 0x4f62e0, 0x0, 0x0, ...)
    /home/msa/var/lib/go/src/github.com/msoap/fasthttp/client.go:568 +0x200
github.com/msoap/fasthttp.clientGetURLTimeoutFreeConn.func1(0x1078c180, 0x0, 0x0, 0x0, 0x107147e0, 0x24, 0xb63f35d0, 0x4f62e0, 0x10718ac0)
    /home/msa/var/lib/go/src/github.com/msoap/fasthttp/client.go:512 +0x54
created by github.com/msoap/fasthttp.clientGetURLTimeoutFreeConn
    /home/msa/var/lib/go/src/github.com/msoap/fasthttp/client.go:518 +0x110

goroutine 1 [chan receive]:
testing.RunTests(0x402600, 0x4f4aa0, 0x6c, 0x6c, 0x4f5b01)
    /home/msa/var/src/go/src/testing/testing.go:562 +0x618
testing.(*M).Run(0x10747f74, 0x12380)
    /home/msa/var/src/go/src/testing/testing.go:494 +0x6c
main.main()
    github.com/msoap/fasthttp/_test/_testmain.go:376 +0x118

goroutine 17 [syscall, locked to thread]:
runtime.goexit()
    /home/msa/var/src/go/src/runtime/asm_arm.s:1036 +0x4

goroutine 5 [sleep]:
time.Sleep(0x3b9aca00, 0x0)
    /home/msa/var/src/go/src/runtime/time.go:59 +0x104
github.com/msoap/fasthttp.init.1.func1()
    /home/msa/var/lib/go/src/github.com/msoap/fasthttp/header.go:897 +0x24
created by github.com/msoap/fasthttp.init.1
    /home/msa/var/lib/go/src/github.com/msoap/fasthttp/header.go:900 +0x28

goroutine 10 [runnable]:
testing.tRunner.func1(0x10710600)
    /home/msa/var/src/go/src/testing/testing.go:452 +0x174
testing.tRunner(0x10710600, 0x4f4ad0)
    /home/msa/var/src/go/src/testing/testing.go:458 +0xb8
created by testing.RunTests
    /home/msa/var/src/go/src/testing/testing.go:561 +0x5ec

goroutine 24 [select]:
github.com/msoap/fasthttp.clientGetURLTimeoutFreeConn(0x0, 0x0, 0x0, 0x107147e0, 0x24, 0x3b9aca00, 0x0, 0xb63f35d0, 0x4f62e0, 0x0, ...)
    /home/msa/var/lib/go/src/github.com/msoap/fasthttp/client.go:530 +0x300
github.com/msoap/fasthttp.clientGetURLTimeout(0x0, 0x0, 0x0, 0x107147e0, 0x24, 0x3b9aca00, 0x0, 0xb63f35d0, 0x4f62e0, 0x10752090, ...)
    /home/msa/var/lib/go/src/github.com/msoap/fasthttp/client.go:468 +0x24c
github.com/msoap/fasthttp.(*Client).GetTimeout(0x4f62e0, 0x0, 0x0, 0x0, 0x107147e0, 0x24, 0x3b9aca00, 0x0, 0x20, 0x0, ...)
    /home/msa/var/lib/go/src/github.com/msoap/fasthttp/client.go:168 +0xc8
github.com/msoap/fasthttp.testClientGetTimeoutSuccess(0x10710fc0, 0x4f62e0, 0x10713360, 0x16, 0x64)
    /home/msa/var/lib/go/src/github.com/msoap/fasthttp/client_test.go:381 +0x1ac
github.com/msoap/fasthttp.TestClientGetTimeoutSuccess(0x10710fc0)
    /home/msa/var/lib/go/src/github.com/msoap/fasthttp/client_test.go:21 +0xd4
testing.tRunner(0x10710fc0, 0x4f4b60)
    /home/msa/var/src/go/src/testing/testing.go:456 +0xa8
created by testing.RunTests
    /home/msa/var/src/go/src/testing/testing.go:561 +0x5ec

goroutine 25 [runnable]:
github.com/msoap/fasthttp.startEchoServerExt.func2(0x107523f0, 0xb63f2580, 0x1070a548, 0x10710fc0, 0x10718a80)
    /home/msa/var/lib/go/src/github.com/msoap/fasthttp/client_test.go:505
created by github.com/msoap/fasthttp.startEchoServerExt
    /home/msa/var/lib/go/src/github.com/msoap/fasthttp/client_test.go:511 +0x460

goroutine 27 [runnable]:
github.com/msoap/fasthttp.(*Client).mCleaner(0x4f62e0, 0x107133e0)
    /home/msa/var/lib/go/src/github.com/msoap/fasthttp/client.go:273
created by github.com/msoap/fasthttp.(*Client).Do
    /home/msa/var/lib/go/src/github.com/msoap/fasthttp/client.go:267 +0x4b8
exit status 2
FAIL    github.com/msoap/fasthttp   0.194s

Store value in RequestContext!

It would be nice if we could store values in the RequestCtx! It would help a lot for passing values across middlewares!

With Go 1.5, a map[string]interface{} is fast and simple for this kind of storage!
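
fasthttp has since gained ctx.SetUserValue / ctx.UserValue for exactly this; if that isn't available in your version, a rough sketch of the map-based approach suggested here, with illustrative (non-fasthttp) names:

package example

import "github.com/valyala/fasthttp"

// handlerWithValues is an illustrative handler type that receives a
// per-request value map in addition to the RequestCtx.
type handlerWithValues func(ctx *fasthttp.RequestCtx, values map[string]interface{})

// withValues adapts it to a plain fasthttp.RequestHandler, allocating a fresh
// map per request so middlewares further down can read and write shared values.
func withValues(h handlerWithValues) fasthttp.RequestHandler {
    return func(ctx *fasthttp.RequestCtx) {
        h(ctx, make(map[string]interface{}))
    }
}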

[Fileserver] Fileserver Memory Usage with 200k concurrent connections.

Hi @valyala,

I am using the fileserver given in the example to serve files from a 4-core system in production. I load tested it on the server and could easily achieve 20k requests/second (this includes Kafka producing of access logs). I have two queries; it would be really helpful if you could help me with them.

The issue I faced was when the number of concurrent connections went up to around 250k while serving a 300 KB file to real users. At that moment the RAM (30 GB) was full, free disk space went from 80 GB to 0, and I had no option other than killing the processes.

After going through the code, I suspect that the request handler opens a new file every time all the bigFileReaders are already in use. Does this mean 200k concurrent connections created 200k bigFileReader instances? Am I right on this?

Does the fileserver use sendfile for big files?

go get github.com/valyala/fasthttp compile errors

I seem to be having problems when I am trying to fetch the package. This is the error I am getting:

github.com/valyala/fasthttp

src/github.com/valyala/fasthttp/bytesconv.go:53: date.In(gmtLocation).AppendFormat undefined (type time.Time has no field or method AppendFormat)
src/github.com/valyala/fasthttp/header.go:1125: undefined: bytes.LastIndexByte
src/github.com/valyala/fasthttp/header.go:1450: r.Discard undefined (type *bufio.Reader has no field or method Discard)
src/github.com/valyala/fasthttp/http.go:430: undefined: io.CopyBuffer
src/github.com/valyala/fasthttp/uri.go:221: undefined: bytes.LastIndexByte
src/github.com/valyala/fasthttp/uri.go:233: undefined: bytes.LastIndexByte

A way to specify returned HTTP code when calling TimeoutError

Currently, we need to call ctx.TimeoutError() if we wish to return from the request, which in turn sets the status code to StatusRequestTimeout.

Should there be another function:
func (*RequestCtx) TimeoutErrorWithCode(statusCode int, msg string)
which lets the user override the status code and message?

byte range issue

Hi,

Seems like an issue with byte range requests.

E.g.
File size: 210720491 bytes

When I send a request with the header "Range: bytes=95644624-" fasthttp handles it correctly.
However, if we modify the header to "Range: bytes=95644624-210720491" (from 95644624 to the last byte of the file), it returns a 416 HTTP error because it can't handle the byte range.

cannot parse byte range "bytes=97851544-210720491" for path="/some_file.dat": invalid byte range

For instance, I've just checked nginx and it handles the same request correctly.

Unable to cross-compile using go build

Trying to compile for Linux: $ env GOOS=linux GOARCH=arm go build -v github.com/abacaj/fasthttp

produces the following error:
# github.com/valyala/fasthttp
C:\Projects\Go\src\github.com\valyala\fasthttp\bytesconv.go:20: constant 18446744073709551615 overflows uint
C:\Projects\Go\src\github.com\valyala\fasthttp\bytesconv.go:33: constant 18446744073709551615 overflows uint

Seems like a bug, but I'm not sure - I'm on Go 1.5.1.

it takes a long time on function readMultipartForm

When I upload a file with "multipart/form-data" from a web browser or Postman for Google Chrome, it takes a long time in the readMultipartForm function and does not return the error "cannot read multipart/form-data body".

Is it OK to reuse Request?

Assuming I'm going to concurrently execute the exact same request multiple times, is it OK to reuse the same Request instance or should I copy it?
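
A Request instance must not be used by multiple goroutines at once, so for concurrent execution a copy per goroutine is the safe route; a minimal sketch using Request.CopyTo (URL and counts are illustrative):

package main

import (
    "log"
    "sync"

    "github.com/valyala/fasthttp"
)

func main() {
    var client fasthttp.Client

    // Template request; each worker gets its own copy instead of sharing it.
    var tmpl fasthttp.Request
    tmpl.SetRequestURI("http://localhost:8080/")
    tmpl.Header.SetMethod("GET")

    var wg sync.WaitGroup
    for i := 0; i < 10; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            req := fasthttp.AcquireRequest()
            resp := fasthttp.AcquireResponse()
            defer fasthttp.ReleaseRequest(req)
            defer fasthttp.ReleaseResponse(resp)

            tmpl.CopyTo(req) // work on a private copy of the template
            if err := client.Do(req, resp); err != nil {
                log.Printf("request failed: %s", err)
            }
        }()
    }
    wg.Wait()
}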

Content-Encoding: deflate is actually zlib

Had this issue with another library.
It's a little counter-intuitive, but Content-Encoding: deflate is actually not a simple flate stream, but a zlib one.
According to [Wikipedia](https://en.wikipedia.org/wiki/HTTP_compression):

deflate – compression based on the deflate algorithm (described in RFC 1951), wrapped inside the zlib data format (RFC 1950);

fasthttp currently uses the flate package for handling deflated content, but should use zlib.

Also, it would be great if fasthttp used https://github.com/klauspost/compress, as it provides optimized compression packages.
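
A minimal sketch of decoding a deflate body as zlib, with a raw-flate fallback for servers that send an unwrapped DEFLATE stream (the helper name is illustrative):

package example

import (
    "bytes"
    "compress/flate"
    "compress/zlib"
    "io/ioutil"
)

// inflate decodes a Content-Encoding: deflate body. Per the spec the payload
// should be a zlib stream (RFC 1950) wrapping DEFLATE data (RFC 1951), but
// some servers send a raw DEFLATE stream, so fall back to flate when the
// zlib framing is absent.
func inflate(body []byte) ([]byte, error) {
    zr, err := zlib.NewReader(bytes.NewReader(body))
    if err == nil {
        defer zr.Close()
        return ioutil.ReadAll(zr)
    }
    fr := flate.NewReader(bytes.NewReader(body))
    defer fr.Close()
    return ioutil.ReadAll(fr)
}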

Can't set Content-Type on GET requests

package main
import "github.com/valyala/fasthttp"
import "fmt"

func main() {
    header := fasthttp.RequestHeader{}

    header.SetRequestURI("http://localhost/test")
    header.Set("Accept", "application/json")
    header.Set("Arbitrary", "should-be-present")
    header.SetContentType("application/json")
    header.Set("Content-Type", "application/json")
    fmt.Printf("Header is: %s\n", string(header.String()))
}

Actual result:

Header is: GET http://localhost/test HTTP/1.1
User-Agent: fasthttp client
Accept: application/json
Arbitrary: should-be-present

According to RFC 2616 section 7.2.1, Content-Type should be set on requests containing an entity, but the RFC does not prohibit setting it on other methods. Some APIs use Content-Type to select the API version on GET requests.

any advice on improve performance on AWS

I just started using fasthttp. I checked and CPU usage is only around 20%; I want to improve performance.

Is there any advice on how to improve performance on AWS?

Connection Close on 204 No Content

I have an issue where a webserver returns a 204 response with Connection: keep-alive, but fasthttp sets Connection: close and closes the connection; this does not happen with curl or net/http.

This is the response according to fasthttp:

HTTP/1.1 204 No Content
Server: fasthttp
Date: Tue, 12 Jan 2016 16:34:26 GMT
Content-Type: text/plain; charset=utf-8
X-Powered-By: Express
Etag: W/"2-2745614147"
Date: Tue, 12 Jan 2016 16:34:30 GMT
Connection: keep-alive
Transfer-Encoding: identity
Connection: close

This is the response according to net/http:

&{Status:204 No Content StatusCode:204 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[X-Powered-By:[Express] Etag:[W/"2-2745614147"] Date:[Tue, 12 Jan 2016 16:34:30 GMT] Connection:[keep-alive]] Body:0xc8200de580 ContentLength:0 TransferEncoding:[] Close:false Trailer:map[] Request:0xc820112000 TLS:<nil>}

And this is according to curl:

< HTTP/1.1 204 No Content
< X-Powered-By: Express
< ETag: W/"2-2745614147"
< Date: Tue, 12 Jan 2016 16:35:39 GMT
< Connection: keep-alive
< 
* Connection #0 to host xxx left intact

This is the code i'm using:

    req := fasthttp.AcquireRequest()
    res := fasthttp.AcquireResponse()
    defer fasthttp.ReleaseRequest(req)
    defer fasthttp.ReleaseResponse(res)
    req.SetRequestURI("URL")
    req.Header.SetMethod("GET")
    req.Header.Set("Accept", "application/json")
    err := fasthttp.DoTimeout(req, res, 1*time.Second)

Am I doing something wrong, or is this incorrect behavior in fasthttp?

Question: net/http.Request.Form equivalent?

Hi,

I have a question: is an analog of the http.Request.Form container planned?
http.Request doc

Or any convenience methods like PeekMulti(name) (for fasthttp.RequestCtx and/or fasthttp.Request) that peek multiple params from both the query string and the body?

I guess this may lead to increased memory usage (for an http.Request.Form analog) or memory allocations (for PeekMulti(name) methods), or be tricky to implement (pooling, if possible).

Some kind of walker method like .Visit(name, callback(value)) (also for fasthttp.RequestCtx and/or fasthttp.Request) would also be nice (and shouldn't result in memory allocations). By the way, I've read the code for peekArgStr and it seems that such a .Visit method (focused on a specific param name) would not provide any performance gain, as peekArgStr already loops through the key-value pair list (and I don't think that can or should be optimized).

Anyway, that's just a question/suggestion; it's obviously not hard to work around.
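
Absent a Form equivalent, a rough sketch of collecting multi-valued params from both the query string and a urlencoded POST body with the existing Args.VisitAll (the returned map is illustrative, not fasthttp API, and it does allocate):

package example

import "github.com/valyala/fasthttp"

// formValues gathers every query-string and POST-body parameter into a
// net/http-style map, keeping repeated keys as multiple values.
func formValues(ctx *fasthttp.RequestCtx) map[string][]string {
    form := make(map[string][]string)
    collect := func(key, value []byte) {
        k := string(key)
        form[k] = append(form[k], string(value))
    }
    ctx.QueryArgs().VisitAll(collect)
    ctx.PostArgs().VisitAll(collect)
    return form
}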

FileServer speed

Hi,

I'm not sure this is an issue, but perhaps you can recommend some kind of trick to resolve it.
I stream a lot of files using fasthttp, but the aggregate streaming speed is about 20% slower than with nginx.

It's clear fasthttp was developed with request concurrency and high-load environments in mind. But is there a way to get the same performance for file streaming?

It's entirely possible I'm doing something wrong in my code.
Please have a look at it; I'll be grateful for any advice.

Thanks.

package main

import (
    "runtime"
    "strings"
    "time"

    "github.com/valyala/fasthttp"
)

func main() {

    runtime.GOMAXPROCS(runtime.NumCPU())

    rw := fasthttp.PathRewriteFunc(func(ctx *fasthttp.RequestCtx) []byte {
        urlPart := strings.Split(string(ctx.Path()), "/")
        return []byte("/" + urlPart[2] + "/" + urlPart[3])
    })

    fs := fasthttp.FS{
        Root:            "/var/spool/cache",
        AcceptByteRange: true,
        PathRewrite:     rw,
        Compress:        false,
        CacheDuration:   time.Duration(1) * time.Hour,
    }

    h := fs.NewRequestHandler()

    requestHandler := func(ctx *fasthttp.RequestCtx) {
        urlPart := strings.Split(string(ctx.Path()), "/")

        if len(urlPart) == 5 && (ctx.IsGet() || ctx.IsHead()) {
            fileName := urlPart[4]
            ctx.Response.Header.Set("Content-disposition", "attachment; filename="+fileName)
            h(ctx)
        } else {
            ctx.NotFound()
        }
    }

    var s fasthttp.Server

    s.Concurrency = 262144
    s.MaxKeepaliveDuration = time.Duration(2) * time.Second
    s.ReadBufferSize = 16384
    s.WriteTimeout = time.Duration(15) * time.Second

    s.Handler = requestHandler

    errListen := s.ListenAndServe("0.0.0.0:80")
    if errListen != nil {
        panic(errListen)
    }

}

Reverse Proxy?

The Go net/http/httputil package has a ReverseProxy that will serve from an http.Request.

Is there any comparable reverse proxy for fasthttp that will serve from a fasthttp.Request?
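
Nothing comparable ships with fasthttp; a bare-bones sketch of the idea, forwarding every request to a fixed upstream with HostClient (as in the throughput example earlier) and ignoring hop-by-hop header handling:

package main

import (
    "log"

    "github.com/valyala/fasthttp"
)

var upstream = &fasthttp.HostClient{
    Addr: "127.0.0.1:8081", // backend address; adjust as needed
}

func proxyHandler(ctx *fasthttp.RequestCtx) {
    // Forward the incoming request to the upstream and write its response
    // back to the client. Hop-by-hop headers (Connection, Transfer-Encoding,
    // ...) are not stripped in this sketch.
    if err := upstream.Do(&ctx.Request, &ctx.Response); err != nil {
        ctx.Error("upstream error", fasthttp.StatusBadGateway)
        log.Printf("proxy error: %s", err)
    }
}

func main() {
    log.Fatal(fasthttp.ListenAndServe(":8080", proxyHandler))
}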

Odd behavior: continues to serve content after listener.Close()

Hello,

I'm attempting to do a graceful shutdown by closing the listener. However, it doesn't work; I can continue to request new pages and they continue to be served, even after s.Serve returns, until the entire application ends.

    package main

    import (
        "github.com/valyala/fasthttp"
        "fmt"
        "log"
        "net"
        "time"
    )

    var listener net.Listener

    func root(ctx *fasthttp.RequestCtx) {
        if string(ctx.Path()) == `/close` {
            fmt.Fprint(ctx, "CLOSE\n")
            err := listener.Close()
            if err != nil {
                log.Println(err)
            }
        }
        fmt.Fprintf(ctx, "Hi there! RequestURI is %q", ctx.RequestURI())
    }

    func main() {
        var err error
        listener, err = net.Listen(`tcp`, `:8081`)
        if err != nil {
            log.Fatal(err)
        }
        s := &fasthttp.Server{
            Handler: root,
        }
        err = s.Serve(listener)
        if err != nil {
            log.Println(err)
        }
        log.Println(`Exiting in 30 seconds`)
        time.Sleep(time.Second * 30)
        log.Println(`Exiting`)
    }

Client.Do() returns EOF errors when reusing the Request and Response structs

Hi,
I am doing something like this:

var request = &fasthttp.Request{}
var response = &fasthttp.Response{}
var client = fasthttp.Client{....}

for {
    request.Reset()
    response.Reset()
    response.ResetBody()
    request.ResetBody()

    // [ setup the request URI, header etc... ]

    err = client.Do(request, response)
    if err != nil {
        fmt.Println(err.Error())
    }
}

In this use case, I am getting EOF errors for some reason. Could you please explain what I am doing wrong?
Before, I was initializing the http.Request inside the loop and calling the default Go http client's Do() method.
I am running everything on my local machine, and there is haproxy between the client and the server. I don't see these problems when I don't connect through haproxy.

I am using fasthttp from commit e823a9a.

My fasthttp client struct looks like this:

fasthttp.Client{
    Dial: func(addr string) (net.Conn, error) {
        dialer := net.Dialer{
            Timeout:   10 * time.Second,
            KeepAlive: 5 * time.Second,
        }
        return dialer.Dial("tcp", addr)
    },
}

bytesconv.go gmtLocation Issues

var gmtLocation = func() *time.Location {
    x, err := time.LoadLocation("GMT")
    if err != nil {
        panic(fmt.Sprintf("cannot load GMT location: %s", err))
    }
    return x
}()

I think it should use the UTC timezone:

x, err := time.LoadLocation("GMT")
// LoadLocation returns the Location with the given name.
//
// If the name is "" or "UTC", LoadLocation returns UTC.
// If the name is "Local", LoadLocation returns Local.
//
// Otherwise, the name is taken to be a location name corresponding to a file
// in the IANA Time Zone database, such as "America/New_York".
//
// The time zone database needed by LoadLocation may not be
// present on all systems, especially non-Unix systems.
// LoadLocation looks in the directory or uncompressed zip file
// named by the ZONEINFO environment variable, if any, then looks in
// known installation locations on Unix systems,
// and finally looks in $GOROOT/lib/time/zoneinfo.zip.
func LoadLocation(name string) (*Location, error) {
    if name == "" || name == "UTC" {
        return UTC, nil
    }
    if name == "Local" {
        return Local, nil
    }
    if zoneinfo != "" {
        if z, err := loadZoneFile(zoneinfo, name); err == nil {
            z.name = name
            return z, nil
        }
    }
    return loadLocation(name)
}

If the user hasn't installed the Go SDK, $GOROOT/lib/time/zoneinfo.zip will not be found.
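
A possible alternative that avoids the tz database lookup entirely is time.FixedZone, which never fails (a sketch, not necessarily what fasthttp ended up doing):

package example

import "time"

// A fixed zone avoids depending on the IANA tz database, which may be missing
// on systems without the Go SDK or tzdata installed; GMT is UTC+0 and the
// zone name "GMT" is preserved when formatting dates.
var gmtLocation = time.FixedZone("GMT", 0)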

Add transparent compression handling in the Response struct.

Hi,
I was wondering if adding transparent compression handling to the code is feasible.
Currently we check whether the "Content-Encoding" header has a "gzip" value and decompress the data ourselves.
I might do it myself if I have some free time, but I was wondering whether such a change would even be accepted.
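
A minimal sketch of the manual approach described above, gunzipping a client response body when the header says so (the helper name is illustrative):

package example

import (
    "bytes"
    "compress/gzip"
    "io/ioutil"

    "github.com/valyala/fasthttp"
)

// responseBody returns the decoded body, decompressing it manually when the
// server answered with Content-Encoding: gzip.
func responseBody(resp *fasthttp.Response) ([]byte, error) {
    if !bytes.Equal(resp.Header.Peek("Content-Encoding"), []byte("gzip")) {
        return resp.Body(), nil
    }
    zr, err := gzip.NewReader(bytes.NewReader(resp.Body()))
    if err != nil {
        return nil, err
    }
    defer zr.Close()
    return ioutil.ReadAll(zr)
}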
