lua-resty-balancer's Introduction

Name

OpenResty - Turning Nginx into a Full-Fledged Scriptable Web Platform

Table of Contents

Description

OpenResty is a full-fledged web application server built by bundling the standard nginx core, many third-party nginx modules, and most of their external dependencies.

This bundle is maintained by Yichun Zhang (agentzh).

Because most of the nginx modules are developed by the bundle maintainers themselves, we can ensure that all these modules play well together.

The bundled software components are copyrighted by the respective copyright holders.

The homepage for this project is on openresty.org.

For Users

Visit the download page on the openresty.org web site to download the latest bundle tarball, then follow the instructions on the installation page.

For Bundle Maintainers

The bundle's source is at the following git repository:

https://github.com/openresty/openresty

To reproduce the bundle tarball, just do

make

at the top of the bundle source tree.

Please note that you may need to install some extra dependencies, like perl, dos2unix, and mercurial. On Fedora 22, for example, installing the dependencies is as simple as running the following commands:

sudo dnf install perl dos2unix mercurial

Back to TOC

Additional Features

In addition to the standard nginx core features, this bundle also supports the following:

Back to TOC

resolv.conf parsing

syntax: resolver address ... [valid=time] [ipv6=on|off] [local=on|off|path]

default: -

context: http, stream, server, location

Similar to the resolver directive in the standard nginx core, but with additional support for parsing resolvers from the resolv.conf file format.

When local=on, the standard path /etc/resolv.conf will be used. You may also specify an arbitrary path to be parsed, for example: local=/tmp/test.conf.

When local=off, parsing will be disabled (this is the default).

This feature is not available on Windows platforms.
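
For example, a minimal sketch of using this in an http block (the nameserver address and valid time below are made-up example values):

```nginx
http {
    # Use the listed nameserver plus any nameservers parsed from
    # /etc/resolv.conf; cache successful lookups for 30 seconds.
    resolver 8.8.8.8 valid=30s local=on;
}
```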

Back to TOC

Mailing List

You're very welcome to join the English OpenResty mailing list hosted on Google Groups:

https://groups.google.com/group/openresty-en

The Chinese mailing list is here:

https://groups.google.com/group/openresty

Back to TOC

Report Bugs

You're very welcome to report issues on GitHub:

https://github.com/openresty/openresty/issues

Back to TOC

Copyright & License

The bundle itself is licensed under the 2-clause BSD license.

Copyright (c) 2011-2019, Yichun "agentzh" Zhang (章亦春) [email protected], OpenResty Inc.

This module is licensed under the terms of the BSD license.

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

  • Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
  • Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

Back to TOC

lua-resty-balancer's People

Contributors

agentzh, archangelsdy, doujiang24, elvinefendi, jizhuozhi, sysulq, xiaoxuanzi, zjcnaruto


lua-resty-balancer's Issues

Question: How to configure backup node

Hello,
nginx supports marking an upstream server as backup, which allows failover. How can I achieve similar results with resty balancer? Either using weights (1 and 0) or some other flag?

Can I configure weights 1 and 0, where the upstream with weight 0 will only be used if the upstream with weight 1 fails?

Nodes with "weight" do not work correctly?

I tried to follow it. Below is the lua code in my nginx.conf:

init_by_lua_block {
    local resty_chash = require "resty.chash"

    local server_list = {
        ["192.168.46.135:80"] = 3,
        ["192.168.46.136:80"] = 1,
    }

    local str_null = string.char(0)

    local servers, nodes = {}, {}
    for serv, weight in pairs(server_list) do
        local id = string.gsub(serv, ":", str_null)

        servers[id] = serv
        nodes[id] = weight
    end

    local chash_up = resty_chash:new(nodes)

    package.loaded.my_chash_up = chash_up
    package.loaded.my_servers = servers
}

upstream backend_lua {
    server 0.0.0.1;
    balancer_by_lua_block {
        local strutil = require "strutil"
        local to_str = strutil.to_str
        local b = require "ngx.balancer"

        local chash_up = package.loaded.my_chash_up
        local servers = package.loaded.my_servers

        local id = chash_up:find(ngx.var.arg_cid)
        local server = servers[id]
        ngx.log(ngx.ERR, to_str('id : ', id, ' servers: ', servers, ' server: ', server))
        assert(b.set_current_peer(server))
    }
}

location /t {
    access_log  logs/access.log reqjson;
    proxy_pass http://backend_lua;
}
There are 2 nodes: 192.168.46.135:80 and 192.168.46.136:80. In the init block, the weight of node 192.168.46.135:80 is 3, and the other's is 1. I used 100000 cids to test, expecting the number of requests delivered to node 192.168.46.135:80 to be 3 times that of the other node. However, the number of requests delivered to node 192.168.46.135:80 was 71925.

Do you encounter the same problem? Or what is wrong with my setup?

Round Robin always picks the largest weight first, even with a random initial node id

When the node with the largest weight is unique, then no matter how the initial last_id is selected, the first node picked will always be the one with the largest weight, which causes that node to overheat.

For example, when we have nodes with weights 5, 1, 1, 1, 1, the distribution is always

1
1
1
1
1
2
3
4
5

because when cw == max_weight, only the node with the largest weight satisfies the condition weight >= cw.
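
To make this concrete, here is a minimal pure-Lua re-implementation of the find loop (the structure mirrors resty.roundrobin's, but this is a standalone sketch for illustration, not the library code):

```lua
-- Standalone sketch of the interleaved WRR pick loop.
local function new_rr(nodes, gcd, max_weight, last_id)
  return { nodes = nodes, gcd = gcd, max_weight = max_weight,
           cw = max_weight, last_id = last_id }
end

local function find(self)
  local nodes = self.nodes
  local last_id, cw, weight = self.last_id, self.cw
  while true do
    while true do
      last_id, weight = next(nodes, last_id)
      if not last_id then
        break
      end
      if weight >= cw then
        self.cw = cw
        self.last_id = last_id
        return last_id
      end
    end
    cw = cw - self.gcd
    if cw <= 0 then
      cw = self.max_weight
    end
  end
end

-- Weights 5, 1, 1, 1, 1 (gcd 1, max_weight 5): whichever node the
-- randomized last_id starts from, only the weight-5 node can satisfy
-- weight >= cw while cw is near max_weight, so it is always picked first.
local nodes = { a = 5, b = 1, c = 1, d = 1, e = 1 }
for start in pairs(nodes) do
  local rr = new_rr(nodes, 1, 5, start)
  assert(find(rr) == "a")
end
print("first pick is always the max-weight node")
```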

[Question] Is there any way to get the ips from DNS?

Hi,
Thank you for this awesome plugin. I have a specific scenario: I'm trying to DNS-load-balance (with sticky sessions) a websocket service. I've tried with static IPs, and this plugin works beautifully.
Unfortunately, I'm a noob in lua. Is there any way to integrate this solution with DNS queries rather than giving static IPs?
Any suggestions are highly appreciated.

Regards.

Memory corruption when `string.char(0)` used as a key

We have been seeing occasional memory corruption when string.char(0) is used as part of the key. We can reproduce the core dump on only one machine. When we replaced it with '_', we were not able to reproduce it.

local str_null = string.char(0)

local servers, nodes = {}, {}
for serv, weight in pairs(server_list) do
    local id = string.gsub(serv, ":", str_null)

    servers[id] = serv
    nodes[id] = weight
end

chash init

In the function chash_point_init_crc, why is i initialized to 0 instead of from?

Is the result of this algorithm consistent with nginx?

bug: compile error on mac.

 ~/src/OR/edgelang-fan/lua-resty-balancer master|…1 make
cc -Wall -O3 -flto -g -DFP_RELAX=0 -DDEBUG -fPIC -MMD -fvisibility=hidden -DBUILDING_SO -c chash.c
chash.c:81:29: error: unknown type name 'u_char'; did you mean 'char'?
crc32_update(uint32_t *crc, u_char *p, size_t len)
                            ^~~~~~
                            char
chash.c:103:9: error: unknown type name 'u_char'; did you mean 'char'?
        u_char                          byte[4];
        ^~~~~~
        char
chash.c:125:30: error: use of undeclared identifier 'u_char'
        prev_hash.byte[0] = (u_char) (hash & 0xff);
                             ^
chash.c:126:30: error: use of undeclared identifier 'u_char'
        prev_hash.byte[1] = (u_char) ((hash >> 8) & 0xff);
                             ^
chash.c:127:30: error: use of undeclared identifier 'u_char'
        prev_hash.byte[2] = (u_char) ((hash >> 16) & 0xff);
                             ^
chash.c:128:30: error: use of undeclared identifier 'u_char'
        prev_hash.byte[3] = (u_char) ((hash >> 24) & 0xff);
                             ^
6 errors generated.
make: *** [chash.o] Error 1

can chash object be reinitialized?

Say, can I call new() many times or not?
If I call new() with nodes (a, b, c), an object may be hashed to server b;
if I later call new() with nodes (a, b), will the same object still be hashed to server b?

Timeout setting

Hi,

is it possible to set a custom timeout for (at least) the two below, per balancer_by_lua_block:

  • connect timeout
  • response timeout

so that I can configure them? It seems the current default is 2 seconds when unable to connect to a backend?

Thanks
Alex

Strange weighted round robin behaviour with special configuration of weights

Description

tl;dr: With the use of a test script, we can observe a momentary dip in traffic down to zero due to the weighted round robin algorithm.

Setup

I am attempting to reproduce this behaviour using a test lua script to investigate the weighted round robin (WRR) algorithm. The ingress-nginx balancer uses round robin (RR) as its default load balancing algorithm, which is the default implementation provided by openresty.

Weights Configuration

-- config 1
local NODES = {}
NODES["10.0.0.1"] = 100
NODES["10.0.0.2"] = 100
NODES["10.0.0.3"] = 25

With this, the next step is to use the WRR algorithm to pick a node for a number of cycles to investigate the distribution of nodes. We can repeat this with the second set of weight configurations to observe the distribution of nodes with the WRR algorithm.

-- config 2
local NODES = {}
NODES["10.0.0.1"] = 100
NODES["10.0.0.2"] = 100
NODES["10.0.0.3"] = 66

Observations

For this test I am using a lua script to explicitly call the WRR algorithm for 300 cycles. The distribution of nodes for the first weight configuration is constant with little variations, this is expected. The overall distribution of traffic follows the weight configuration.

image

For the next weight configuration, we can observe that although the overall distribution of traffic adheres to the relative weights, the distribution over time is not constant. Node 10.0.0.3 is not picked by WRR for some time, and then the algorithm picks each node as if they were all equal in weight. We can also see that this pattern repeats: after some time, node 10.0.0.3 again goes unpicked for a while.

image

We can observe similar results regardless of the number of cycles. The fault in this implementation is more apparent when the number of cycles is less than the number required for the algorithm to pick node 10.0.0.3.

image

image

Openresty WRR implementation

The openresty implementation is an accurate implementation of WRR. However, it is only accurate over large numbers of requests distributed to weighted nodes; it is not viable for real-time traffic with varying weights. The algorithm is an interleaved WRR implementation which uses the greatest common divisor (GCD) of the weights to control how quickly a node's chance of being picked grows relative to its weight.

The algorithm starts with the threshold (cw) at the maximum weight, and only nodes whose weight is >= the current threshold qualify for the next pick.

-- cycle and only pick a node, where node.weights >= cw
local last_id, cw, weight = self.last_id, self.cw

The last-picked node is randomized on initialization, so for the first pick a random node is used as the starting point.

On each pick, the algorithm iterates through the set of nodes until one is picked. On every full iteration over all nodes, the threshold is lowered, increasing each node's chance of being picked by (GCD / MAX_WEIGHT) * 100%.

cw = cw - self.gcd

For example, for the first configuration of nodes we have GCD = 25 and MAX_WEIGHT = 100, so we pick the nodes in the following order.

Number of cycles: 20
Pick: 10.0.0.1, Weight: 100, Pick threshold: 75%
Pick: 10.0.0.2, Weight: 100, Pick threshold: 75%
Pick: 10.0.0.1, Weight: 100, Pick threshold: 50%
Pick: 10.0.0.2, Weight: 100, Pick threshold: 50%
Pick: 10.0.0.1, Weight: 100, Pick threshold: 25%
Pick: 10.0.0.2, Weight: 100, Pick threshold: 25%
Pick: 10.0.0.3, Weight: 25, Pick threshold: 25%
Pick: 10.0.0.1, Weight: 100, Pick threshold: 100%
Pick: 10.0.0.2, Weight: 100, Pick threshold: 100%
Pick: 10.0.0.1, Weight: 100, Pick threshold: 75%
Pick: 10.0.0.2, Weight: 100, Pick threshold: 75%
Pick: 10.0.0.1, Weight: 100, Pick threshold: 50%
Pick: 10.0.0.2, Weight: 100, Pick threshold: 50%
Pick: 10.0.0.1, Weight: 100, Pick threshold: 25%
Pick: 10.0.0.2, Weight: 100, Pick threshold: 25%
Pick: 10.0.0.3, Weight: 25, Pick threshold: 25%
Pick: 10.0.0.1, Weight: 100, Pick threshold: 100%
Pick: 10.0.0.2, Weight: 100, Pick threshold: 100%
Pick: 10.0.0.1, Weight: 100, Pick threshold: 75%
Pick: 10.0.0.2, Weight: 100, Pick threshold: 75%

Distribution...
10.0.0.1: 45%
10.0.0.2: 45%
10.0.0.3: 10%

Although node 10.0.0.3 only has a weight of 25 compared to 100 for the other two nodes, the algorithm quickly increases its chance of being picked (by 25% on every complete cycle). However, for the second configuration of nodes we have GCD = 2 and MAX_WEIGHT = 100, which only allows an increase of 2% per complete cycle over the nodes. This results in the following pattern, where node 10.0.0.3 is not picked.

Number of cycles: 34
Pick: 10.0.0.2, Weight: 100, Pick threshold: 98%
Pick: 10.0.0.1, Weight: 100, Pick threshold: 98%
Pick: 10.0.0.2, Weight: 100, Pick threshold: 96%
Pick: 10.0.0.1, Weight: 100, Pick threshold: 96%
Pick: 10.0.0.2, Weight: 100, Pick threshold: 94%
Pick: 10.0.0.1, Weight: 100, Pick threshold: 94%
Pick: 10.0.0.2, Weight: 100, Pick threshold: 92%
Pick: 10.0.0.1, Weight: 100, Pick threshold: 92%
Pick: 10.0.0.2, Weight: 100, Pick threshold: 90%
Pick: 10.0.0.1, Weight: 100, Pick threshold: 90%
Pick: 10.0.0.2, Weight: 100, Pick threshold: 88%
Pick: 10.0.0.1, Weight: 100, Pick threshold: 88%
Pick: 10.0.0.2, Weight: 100, Pick threshold: 86%
Pick: 10.0.0.1, Weight: 100, Pick threshold: 86%
Pick: 10.0.0.2, Weight: 100, Pick threshold: 84%
Pick: 10.0.0.1, Weight: 100, Pick threshold: 84%
Pick: 10.0.0.2, Weight: 100, Pick threshold: 82%
Pick: 10.0.0.1, Weight: 100, Pick threshold: 82%
Pick: 10.0.0.2, Weight: 100, Pick threshold: 80%
Pick: 10.0.0.1, Weight: 100, Pick threshold: 80%
Pick: 10.0.0.2, Weight: 100, Pick threshold: 78%
Pick: 10.0.0.1, Weight: 100, Pick threshold: 78%
Pick: 10.0.0.2, Weight: 100, Pick threshold: 76%
Pick: 10.0.0.1, Weight: 100, Pick threshold: 76%
Pick: 10.0.0.2, Weight: 100, Pick threshold: 74%
Pick: 10.0.0.1, Weight: 100, Pick threshold: 74%
Pick: 10.0.0.2, Weight: 100, Pick threshold: 72%
Pick: 10.0.0.1, Weight: 100, Pick threshold: 72%
Pick: 10.0.0.2, Weight: 100, Pick threshold: 70%
Pick: 10.0.0.1, Weight: 100, Pick threshold: 70%
Pick: 10.0.0.2, Weight: 100, Pick threshold: 68%
Pick: 10.0.0.1, Weight: 100, Pick threshold: 68%
Pick: 10.0.0.2, Weight: 100, Pick threshold: 66%
Pick: 10.0.0.3, Weight: 66, Pick threshold: 66%

Distribution...
10.0.0.1: 47.058823529412%
10.0.0.2: 50%
10.0.0.3: 2.9411764705882%
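
The size of that gap can be checked with a quick back-of-the-envelope computation (plain Lua, using the weights from the second configuration):

```lua
-- Euclid's algorithm, as used to derive the step size of the threshold.
local function gcd(a, b)
  return b == 0 and a or gcd(b, a % b)
end

local g = gcd(gcd(100, 100), 66)
assert(g == 2)  -- the threshold only drops 2 points per full pass

-- The threshold must fall from max_weight (100) down to 66 before
-- 10.0.0.3 qualifies: that is (100 - 66) / 2 = 17 full passes, and each
-- pass hands a pick to both weight-100 nodes, so roughly 34 picks go to
-- the heavy nodes before 10.0.0.3 is chosen, matching the trace above.
local passes = (100 - 66) / g
print(passes, passes * 2)
```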

After the initial pick of node 10.0.0.3, we see a uniform distribution across all nodes. After a while, when the threshold drops to <= 0%, it resets back to MAX_WEIGHT and the pattern repeats.

Number of cycles: 200
Pick: 10.0.0.1, Weight: 100, Pick threshold: 98%

...

Pick: 10.0.0.2, Weight: 100, Pick threshold: 70%
Pick: 10.0.0.1, Weight: 100, Pick threshold: 68%
Pick: 10.0.0.2, Weight: 100, Pick threshold: 68%
Pick: 10.0.0.3, Weight: 66, Pick threshold: 66%
Pick: 10.0.0.1, Weight: 100, Pick threshold: 66%
Pick: 10.0.0.2, Weight: 100, Pick threshold: 66%
Pick: 10.0.0.3, Weight: 66, Pick threshold: 64%
Pick: 10.0.0.1, Weight: 100, Pick threshold: 64%
Pick: 10.0.0.2, Weight: 100, Pick threshold: 64%
Pick: 10.0.0.3, Weight: 66, Pick threshold: 62%
Pick: 10.0.0.1, Weight: 100, Pick threshold: 62%
Pick: 10.0.0.2, Weight: 100, Pick threshold: 62%
Pick: 10.0.0.3, Weight: 66, Pick threshold: 60%
Pick: 10.0.0.1, Weight: 100, Pick threshold: 60%
Pick: 10.0.0.2, Weight: 100, Pick threshold: 60%
Pick: 10.0.0.3, Weight: 66, Pick threshold: 58%
Pick: 10.0.0.1, Weight: 100, Pick threshold: 58%
Pick: 10.0.0.2, Weight: 100, Pick threshold: 58%
Pick: 10.0.0.3, Weight: 66, Pick threshold: 56%
Pick: 10.0.0.1, Weight: 100, Pick threshold: 56%
Pick: 10.0.0.2, Weight: 100, Pick threshold: 56%

...

Pick: 10.0.0.2, Weight: 100, Pick threshold: 6%
Pick: 10.0.0.3, Weight: 66, Pick threshold: 4%
Pick: 10.0.0.1, Weight: 100, Pick threshold: 4%
Pick: 10.0.0.2, Weight: 100, Pick threshold: 4%
Pick: 10.0.0.3, Weight: 66, Pick threshold: 2%
Pick: 10.0.0.1, Weight: 100, Pick threshold: 2%
Pick: 10.0.0.2, Weight: 100, Pick threshold: 2%
Pick: 10.0.0.1, Weight: 100, Pick threshold: 100%
Pick: 10.0.0.2, Weight: 100, Pick threshold: 100%
Pick: 10.0.0.1, Weight: 100, Pick threshold: 98%
Pick: 10.0.0.2, Weight: 100, Pick threshold: 98%

...

Pick: 10.0.0.1, Weight: 100, Pick threshold: 68%
Pick: 10.0.0.2, Weight: 100, Pick threshold: 68%
Pick: 10.0.0.3, Weight: 66, Pick threshold: 66%

Distribution...
10.0.0.1: 39%
10.0.0.2: 38.5%
10.0.0.3: 22.5%

When running this algorithm for a large number of cycles, the overall weighted distribution is correct; however, it still leads to an incorrect distribution of nodes over smaller intervals.

Possible Solution

I am suggesting a slight modification to this algorithm to avoid this scenario. Instead of the GCD, we can utilize the smallest weight to better distribute the picked nodes.

-- Replace get_gcd(nodes) https://github.com/openresty/lua-resty-balancer/blob/a7a8b625c6d79d203702709983b736137be2a9bd/lib/resty/roundrobin.lua#L28
local function get_lowest_weight(nodes)
  local first_id, max_weight = next(nodes)
  if not first_id then
    return error("empty nodes")
  end

  local only_key = first_id
  local lowest = max_weight
  for id, weight in next, nodes, first_id do
    only_key = nil
    lowest = weight < lowest and weight or lowest
    max_weight = weight > max_weight and weight or max_weight
  end

  return only_key, max(lowest, 1), max_weight
end

-- ================================

-- Replace self.gcd with self.lowest
function _M.new(_, nodes)
  ...
  local only_key, lowest, max_weight = get_lowest_weight(newnodes)

  local self = {
      ...
      lowest = lowest,
      ...
  }
  return setmetatable(self, mt)
end

Now, instead of resetting the threshold back to max_weight when it becomes <= 0%, we allow a guaranteed pick for the next node. This avoids cases where a node can be skipped entirely when using the lowest weight to increase the pick chance.

local function find(self)
  ...

  while true do
    while true do
      ...
    end

    -- New logic
    if cw == 0 then
      cw = self.max_weight
    else
      cw = cw - self.lowest
      if cw < 0 then
          cw = 0
      end
    end
  end
end

With this solution I have conducted the same test for 20, 300, 1000 cycles using the second weight configuration.

20 Cycles

image

Distribution...
10.0.0.1: 40%
10.0.0.2: 35%
10.0.0.3: 25%

300 Cycles

image

Distribution...
10.0.0.1: 37.666666666667%
10.0.0.2: 37.333333333333%
10.0.0.3: 25%

1000 Cycles

image

Distribution...
10.0.0.1: 37.5%
10.0.0.2: 37.5%
10.0.0.3: 25%

We can see that the overall distribution of nodes still follows their respective weights, and we no longer see the pattern where node 10.0.0.3 goes unpicked for long periods. Even at small intervals (20 cycles) the distribution is correct.

Some Limitations

Even with the above solution, there are still configurations where this issue persists. For example, the following configuration will still produce a similar distribution of nodes, even with the modified algorithm.

local NODES = {}
NODES["10.0.0.1"] = 100
NODES["10.0.0.2"] = 100
NODES["10.0.0.3"] = 66
NODES["10.0.0.4"] = 1

We can avoid this by introducing an offset to self.lowest or by setting a minimum value. However, this would distort the relative weight distribution at small numbers of cycles.

Conclusion

I do not have a perfect solution to this problem, but it is affecting real-world traffic that relies on this algorithm to accurately distribute requests across a set of nodes. Something to keep in mind: the nginx implementation of the WRR algorithm is vastly different from the one implemented here, and on first inspection it doesn't look like that algorithm suffers from the same limitation. It is possible that that algorithm could be adopted here. I will include a real-world test with production data as a comment below in this issue.

Test script and setup

Setup

$ git clone [email protected]:openresty/lua-resty-balancer.git
$ touch lib/test.lua
Script
local rr = require("resty.roundrobin")

local BALANCE_CALLS = 1000

local function dump(o)
  if type(o) == 'table' then
    local s = '{ '
    for k,v in pairs(o) do
        if type(k) ~= 'number' then k = '"'..k..'"' end
        s = s .. '['..k..'] = ' .. dump(v) .. ','
    end
    return s .. '} '
  else
    return tostring(o)
  end
end

print("Setup Nodes ...")

local NODES = {}
NODES["10.0.0.1"] = 100
NODES["10.0.0.2"] = 100
NODES["10.0.0.3"] = 66

print(dump(NODES))

local NODE_COUNTS = {}
NODE_COUNTS["10.0.0.1"] = 0
NODE_COUNTS["10.0.0.2"] = 0
NODE_COUNTS["10.0.0.3"] = 0

print ("Setup roundrobin ...")
local rr_instance = rr:new(NODES)

print("Number of cycles: ", BALANCE_CALLS)

local out = io.open('data.csv', 'w')
for i = 1, BALANCE_CALLS do
  if (rr.DEBUG) then
    print(" ")
  end
  local node = rr_instance:find()
  print("Pick: ", node, ", Weight: ", NODES[node], ", Pick threshold: ", (rr_instance.cw / rr_instance.max_weight) * 100, "%")
  NODE_COUNTS[node] = NODE_COUNTS[node] + 1
  out:write(i .. "," .. NODE_COUNTS["10.0.0.1"] .. "," .. NODE_COUNTS["10.0.0.2"] .. "," .. NODE_COUNTS["10.0.0.3"] .. "\n")
end
out:close()

print("\nDistribution...")
print("10.0.0.1: ", NODE_COUNTS["10.0.0.1"] / BALANCE_CALLS * 100, "%")
print("10.0.0.2: ", NODE_COUNTS["10.0.0.2"] / BALANCE_CALLS * 100, "%")
print("10.0.0.3: ", NODE_COUNTS["10.0.0.3"] / BALANCE_CALLS * 100, "%")
Modified `roundrobin.lua`
local pairs = pairs
local next = next
local tonumber = tonumber
local setmetatable = setmetatable
local math_random = math.random
local error = error
local max = math.max

local utils = require "resty.balancer.utils"

local copy = utils.copy
local nkeys = utils.nkeys
local new_tab = utils.new_tab

local _M = {}
local mt = { __index = _M }


local function get_lowest_weight(nodes)
    local first_id, max_weight = next(nodes)
    if not first_id then
        return error("empty nodes")
    end

    local only_key = first_id
    local lowest = max_weight
    for id, weight in next, nodes, first_id do
        only_key = nil
        lowest = weight < lowest and weight or lowest
        max_weight = weight > max_weight and weight or max_weight
    end

    return only_key, max(lowest, 1), max_weight
end

local function get_random_node_id(nodes)
    local count = nkeys(nodes)

    local id = nil
    local random_index = math_random(count)

    for _ = 1, random_index do
        id = next(nodes, id)
    end

    return id
end


function _M.new(_, nodes)
    local newnodes = copy(nodes)
    local only_key, lowest, max_weight = get_lowest_weight(newnodes)
    local last_id = get_random_node_id(nodes)

    local self = {
        nodes = newnodes,  -- it's safer to copy one
        only_key = only_key,
        max_weight = max_weight,
        lowest = lowest,
        cw = max_weight,
        last_id = last_id,
    }
    return setmetatable(self, mt)
end


function _M.reinit(self, nodes)
    local newnodes = copy(nodes)
    self.only_key, self.lowest, self.max_weight = get_lowest_weight(newnodes)

    self.nodes = newnodes
    self.last_id = get_random_node_id(nodes)
    self.cw = self.max_weight
end


local function _delete(self, id)
    local nodes = self.nodes

    nodes[id] = nil

    self.only_key, self.lowest, self.max_weight = get_lowest_weight(nodes)

    if id == self.last_id then
        self.last_id = nil
    end

    if self.cw > self.max_weight then
        self.cw = self.max_weight
    end
end
_M.delete = _delete


local function _decr(self, id, weight)
    local weight = tonumber(weight) or 1
    local nodes = self.nodes

    local old_weight = nodes[id]
    if not old_weight then
        return
    end

    if old_weight <= weight then
        return _delete(self, id)
    end

    nodes[id] = old_weight - weight

    self.only_key, self.lowest, self.max_weight = get_lowest_weight(nodes)

    if self.cw > self.max_weight then
        self.cw = self.max_weight
    end
end
_M.decr = _decr


local function _incr(self, id, weight)
    local weight = tonumber(weight) or 1
    local nodes = self.nodes

    nodes[id] = (nodes[id] or 0) + weight

    self.only_key, self.lowest, self.max_weight = get_lowest_weight(nodes)
end
_M.incr = _incr



function _M.set(self, id, new_weight)
    local new_weight = tonumber(new_weight) or 0
    local old_weight = self.nodes[id] or 0

    if old_weight == new_weight then
        return
    end

    if old_weight < new_weight then
        return _incr(self, id, new_weight - old_weight)
    end

    return _decr(self, id, old_weight - new_weight)
end


local function find(self)
    local only_key = self.only_key
    if only_key then
        return only_key
    end

    local nodes = self.nodes
    local last_id, cw, weight = self.last_id, self.cw

    while true do
        while true do
            last_id, weight = next(nodes, last_id)
            if not last_id then
                break
            end

            if weight >= cw then
                self.cw = cw
                self.last_id = last_id
                return last_id
            end
        end

        if cw == 0 then
            cw = self.max_weight
        else
          cw = cw - self.lowest
          if cw < 0 then
              cw = 0
          end
        end
    end
end
_M.find = find
_M.next = find


return _M

Execute with

$ resty lib/test.lua

Data used in graphs: https://docs.google.com/spreadsheets/d/1ba570tbELUbG_N-Q-5vq15EyOP0x6wCd6X2Pu1ZAPrQ/edit?usp=sharing

module 'ngx.balancer' not found ?

dear :
my nginx.conf follows the example; when I access my nginx server, it responds with 500 and logs some errors in the error_log.

ENV
[root@QA-PUB01 bin]# ./luajit -v LuaJIT 2.0.4 -- Copyright (C) 2005-2015 Mike Pall. http://luajit.org/

[root@QA-PUB01 sbin]# ./nginx -V nginx version: nginx/1.11.8

nginx error log

2017/01/12 16:52:17 [error] 11661#0: 4 failed to run balancer_by_lua: balancer_by_lua:2: module 'ngx.balancer' not found:
no field package.preload['ngx.balancer']
no file '/root/lua-resty-balancer-master/lib/ngx/balancer.lua'
no file './ngx/balancer.lua'
no file '/usr/local/share/luajit-2.1.0-beta1/ngx/balancer.lua'
no file '/usr/local/share/lua/5.1/ngx/balancer.lua'
no file '/usr/local/share/lua/5.1/ngx/balancer/init.lua'
no file '/usr/local/nginx_lua/lua/ngx/balancer.so'
no file './ngx/balancer.so'
no file '/usr/local/lib/lua/5.1/ngx/balancer.so'
no file '/usr/local/lib/lua/5.1/loadall.so'
no file '/usr/local/nginx_lua/lua/ngx.so'
no file './ngx.so'
no file '/usr/local/lib/lua/5.1/ngx.so'
no file '/usr/local/lib/lua/5.1/loadall.so'
stack traceback:
[C]: in function 'require'
balancer_by_lua:2: in function <balancer_by_lua:1> while connecting to upstream, client: 192.168.2.145, server: , request: "GET / HTTP/1.1", host: "192.168.2.188:9999"

pass ip:port as the addr and pass 0 as port to lua-resty-core/lib/ngx/balancer.lua

I integrated level077/lua-resty-upstream-etcd into my project. The function b.set_current_peer(server) in lua-resty-upstream-etcd/balancer.lua passes 10.20.22.85:8010 and nil into _M.set_current_peer(addr, port) in lua-resty-core/lib/ngx/balancer.lua, so the default port is 0. I got no error after integration. Does C.ngx_http_lua_ffi_balancer_set_current_peer handle an addr of the form ip:port with port 0 correctly? What is the usage of the port argument? It seems unnecessary.


How to use the resty.chash

@doujiang24 ,

With your example code in the Synopsis, I have no idea how to use resty.chash. What is the usage of the weight? How does the weight relate to the hash for session stickiness? You mentioned that we can balance by any key when using chash_up:find(ngx.var.arg_key). When we initialize chash_up, do we need this key? Is it possible to find by session id?

Would you like to describe it in more detail?

Thanks

Load balancing with "content_by_lua block"

I have a content_by_lua block which executes when the location below is hit.

location / {
      content_by_lua '
          local client = require "readuser"
          client.ReadUser();
      ';
    }

I want to load balance between multiple servers so that all servers run the same lua block. I tried to do this with:

http
{
 upstream read
 {
  server one;
  server two;
  server three;
 }
}

server {
 location / {
  proxy_pass http://read
  content_by_lua '
          local client = require "readuser"
          client.ReadUser();
      ';
 }
}

but it did not work. How can I achieve this with a lua block?

[Advice] An improvement on roundrobin.

@doujiang24, the gcd implementation is not fair enough; the following is better:

function _M:next()
    local servers = self.servers
    local selectedIdx
    for i = 1, #servers do
        servers[i]['cweight'] = servers[i]['weight'] + servers[i]['cweight']
        if not selectedIdx or servers[selectedIdx]['cweight'] < servers[i]['cweight'] then
            selectedIdx = i
        end
    end

    servers[selectedIdx]['cweight'] = servers[selectedIdx]['cweight'] - self.totalWeight

    return servers[selectedIdx]['name']
end
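
For reference, the snippet above can be run standalone. The following self-contained version (the constructor and the surrounding names are my own additions) shows the interleaving this smooth variant produces:

```lua
-- Smooth weighted round-robin, following the snippet above.
local function new(servers)
  local total = 0
  for _, s in ipairs(servers) do
    s.cweight = 0
    total = total + s.weight
  end
  return { servers = servers, totalWeight = total }
end

local function next_server(self)
  local servers = self.servers
  local selectedIdx
  for i = 1, #servers do
    local s = servers[i]
    s.cweight = s.cweight + s.weight
    if not selectedIdx or servers[selectedIdx].cweight < s.cweight then
      selectedIdx = i
    end
  end
  local s = servers[selectedIdx]
  s.cweight = s.cweight - self.totalWeight
  return s.name
end

-- With weights 5, 1, 1 the heavy node's picks are spread out
-- instead of bunched together at the start of each cycle.
local rr = new({
  { name = "a", weight = 5 },
  { name = "b", weight = 1 },
  { name = "c", weight = 1 },
})
local picks = {}
for i = 1, 7 do
  picks[i] = next_server(rr)
end
print(table.concat(picks, " "))  -- prints "a a b a c a a"
```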

Any plan to share the state of load balancing across workers?

I might be mistaken, but looking at the implementation it seems the state of a load balancing algorithm is per nginx worker. This means that if, for example, 4 requests hit 4 different workers, they will all be proxied to the first upstream server.

If that is not the case, could you give some hints on how the state is shared?

Thanks!

Is the resty.chash concurrency safe?

init_master.lua

global_chash = require("resty.chash"):new(my_nodes)
ngx.timer.every( -- update nodes every 15s
        15,
        function()
            local nodes = get_my_nodes() 
            global_chash:reinit(nodes)
        end
)
location / {
      content_by_lua '
            ...
           local node = global_chash:find(my_hash_key)
            ...
      ';
    }

Is this concurrency safe here?

Weight vs hash

How can I choose servers based on the weight and not the hash?

can't load librestychash.so, how to solve?

chash.lua:82: can not load librestychash.so
stack traceback:
[C]: in function 'error'
.../chash.lua:82: in main chunk
[C]: in function 'require'
...controller.lua:11: in main chunk
[C]: in function 'require'
init_by_lua:1: in main chunk

make: unrecognized command line option "-flto"

I found an error:

[root@]# make
cc -Wall -O3 -flto -g -DFP_RELAX=0 -DDEBUG -fPIC -MMD -fvisibility=hidden -DBUILDING_SO -c chash.c
cc1: error: unrecognized command line option "-flto"
make: *** [chash.o] error 1

AND

#gcc -v:
gcc version 4.4.7 20120313 (Red Hat 4.4.7-17) (GCC)

I deleted "-flto" and it works correctly.

"-flto" is supported only after GCC 4.7.
