
Comments (12)

soloestoy commented on July 18, 2024

maxmemory-clients 100% means that when the total memory used by all clients reaches 100% of maxmemory, Valkey will disconnect some clients; this config does not influence data eviction directly.

I have always wished that Redis/Valkey could track different types of memory usage, such as the amount of memory used for storing user data versus the amount used for system operation (like clients). This way, users could set independent limits for data and for system operation, such as maxmemory-data and maxmemory-system. If the memory used for data exceeded maxmemory-data, eviction would be triggered; if the memory used for system operation went beyond maxmemory-system (too many connections or excessive buffers, for example), disconnection would be initiated. This would allow for clearer planning and management, rather than having memory usage by clients and similar operations lead to the eviction of data, as is currently the case.
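To make the proposal concrete, here is a minimal sketch of the split accounting it implies. Everything in it (the memoryBudget struct, its fields, checkMemoryBudget) is hypothetical and only illustrates the intended behavior; none of these names exist in Valkey today.

#include <stddef.h>

/* Hypothetical sketch of the proposed maxmemory-data / maxmemory-system
 * split; all identifiers are invented for illustration. */
typedef struct {
    size_t data_used;        /* memory holding user keys and values */
    size_t system_used;      /* client buffers, replication buffers, etc. */
    size_t maxmemory_data;   /* proposed maxmemory-data (0 = unlimited) */
    size_t maxmemory_system; /* proposed maxmemory-system (0 = unlimited) */
} memoryBudget;

typedef enum { MEM_OK, MEM_EVICT_DATA, MEM_DISCONNECT_CLIENTS } memAction;

/* Data overuse triggers eviction only; system overuse triggers client
 * disconnection only, so client buffers never cause data eviction. */
static memAction checkMemoryBudget(const memoryBudget *b) {
    if (b->maxmemory_data && b->data_used > b->maxmemory_data)
        return MEM_EVICT_DATA;
    if (b->maxmemory_system && b->system_used > b->maxmemory_system)
        return MEM_DISCONNECT_CLIENTS;
    return MEM_OK;
}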

About the latency monitor, enabling it is not a big deal.

/* Add the sample only if the elapsed time is >= to the configured threshold. */
#define latencyAddSampleIfNeeded(event, var)                                                                           \
    if (server.latency_monitor_threshold && (var) >= server.latency_monitor_threshold) latencyAddSample((event), (var));

It has almost no impact, because it is difficult to actually reach the threshold, so in the common case it is merely a conditional check. Even if the threshold is truly exceeded, logging the sample takes only microseconds, which is far less than 100ms.
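To illustrate the point, here is a self-contained toy that mirrors the pattern; the latencyStartMonitor / latencyEndMonitor / latencyAddSampleIfNeeded shapes follow latency.h, while the server struct and latencyAddSample below are stand-ins for the real internals, so treat this as a sketch rather than an exact excerpt.

#include <stdio.h>
#include <sys/time.h>

typedef long long mstime_t;

/* Stand-in for the real server struct; only the threshold matters here. */
static struct { mstime_t latency_monitor_threshold; } server = {100};

static mstime_t mstime(void) {
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return (mstime_t)tv.tv_sec * 1000 + tv.tv_usec / 1000;
}

/* Stand-in for the real sample logger. */
static void latencyAddSample(const char *event, mstime_t latency) {
    printf("latency sample: %s = %lld ms\n", event, latency);
}

/* Same shape as the macros in latency.h: with the threshold at 0 the
 * whole pattern reduces to a couple of cheap conditional checks. */
#define latencyStartMonitor(var) do { (var) = server.latency_monitor_threshold ? mstime() : 0; } while (0)
#define latencyEndMonitor(var) do { if (server.latency_monitor_threshold) (var) = mstime() - (var); } while (0)
#define latencyAddSampleIfNeeded(event, var) \
    if (server.latency_monitor_threshold && (var) >= server.latency_monitor_threshold) latencyAddSample((event), (var));

int main(void) {
    mstime_t latency;
    latencyStartMonitor(latency);
    /* ... a potentially slow operation, e.g. a fork for an RDB save ... */
    latencyEndMonitor(latency);
    latencyAddSampleIfNeeded("fork", latency); /* logs only if >= 100 ms */
    return 0;
}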


zuiderkwast commented on July 18, 2024

repl-diskless-load disabled -> on-empty-db. Any drawbacks?


ranshid commented on July 18, 2024

@soloestoy I also agree that we need to keep some tight memory accounting for the user data. I wonder, though, whether evictions (of both clients and data) should be triggered by those thresholds alone. For example, how do we account for memory waste like fragmentation? It is very hard to evaluate the fragmentation ratio of each of the system/data memory categories and take it into account.
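For reference, the process-wide fragmentation figure (what INFO memory exposes as mem_fragmentation_ratio) is essentially RSS divided by the allocator-reported usage; since RSS is a single process-wide number, there is no obvious way to attribute it to data versus system memory. A minimal sketch of that ratio, with assumed parameter names:

#include <stddef.h>

/* Conceptually: mem_fragmentation_ratio ~= RSS / allocator usage.
 * Both inputs are process-wide, which is why splitting the ratio per
 * usage type (data vs. system) is hard. */
static double fragmentation_ratio(size_t used_memory_rss, size_t used_memory) {
    if (used_memory == 0) return 0.0;
    return (double)used_memory_rss / (double)used_memory;
}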


zvi-code commented on July 18, 2024

I agree about the need for memory accounting, especially for clients, which can consume resources independently of each other; as such, you may want to have independent control over them.

I think another aspect of memory usage is transient memory allocations that are allocated and freed during command execution (processing memory, i.e. memory for processing needs), like Lua allocations, module allocations, temporary objects and so on. This usage can cause memory pressure or swapping without being apparent in any data point except peak memory, which only reflects the all-time maximum and not recent usage.

[On this note, I hoped to raise one day the idea of memory pool isolation, where we use different memory pools for data, for clients, and for processing memory. I have PoC'd this idea using jemalloc's private arenas. @ranshid, I think this would solve associating the fragmentation cost with the usage type. The motivation for memory pools is of course much wider in scope, as the memory life cycles are very different, so isolating the usages will improve the overall efficiency. Maybe I'll raise a separate issue on this.]
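For anyone curious about the mechanism, here is a minimal illustration of routing a specific usage type to a private jemalloc arena via the non-standard mallctl/mallocx API. This is not the actual PoC mentioned above, just a sketch, and it assumes a system jemalloc with unprefixed symbols (the jemalloc bundled with Valkey uses a je_ prefix).

#include <stdio.h>
#include <jemalloc/jemalloc.h>

int main(void) {
    unsigned client_arena;
    size_t sz = sizeof(client_arena);

    /* Create a dedicated arena, e.g. for client buffers. */
    if (mallctl("arenas.create", &client_arena, &sz, NULL, 0) != 0) {
        fprintf(stderr, "arenas.create failed\n");
        return 1;
    }

    /* Route an allocation to that arena, bypassing the thread cache so
     * it cannot end up mixed into the default arena's tcache bins. */
    int flags = MALLOCX_ARENA(client_arena) | MALLOCX_TCACHE_NONE;
    void *buf = mallocx(16 * 1024, flags);
    if (buf == NULL) return 1;

    /* ... use buf as, say, a client output buffer ... */

    dallocx(buf, flags);
    return 0;
}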


ranshid commented on July 18, 2024

@ranshid, I think this would solve associating the fragmentation cost with the usage type.

I agree, depending on the complexity of it :)


enjoy-binbin commented on July 18, 2024

ohh, we already have the history limit:

#define LATENCY_TS_LEN 160 /* History length for every monitored event. */

/* The latency time series for a given event. */
struct latencyTimeSeries {
    int idx; /* Index of the next sample to store. */
    uint32_t max; /* Max latency observed for this event. */
    struct latencySample samples[LATENCY_TS_LEN]; /* Latest history. */
};
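As a rough answer to the memory question raised above: assuming the latencySample layout from latency.h (an int32_t timestamp plus a uint32_t latency, about 8 bytes per sample; exact sizes may vary with padding), the per-event history is small and fixed. A tiny self-contained check:

#include <stdio.h>
#include <stdint.h>

#define LATENCY_TS_LEN 160 /* History length for every monitored event. */

/* Local mirrors of the structs quoted above (layout as in latency.h). */
struct latencySample {
    int32_t time;     /* Sample creation time (seconds). */
    uint32_t latency; /* Latency in milliseconds. */
};

struct latencyTimeSeries {
    int idx;
    uint32_t max;
    struct latencySample samples[LATENCY_TS_LEN];
};

int main(void) {
    /* Prints ~1288 bytes: even a few dozen monitored events keep the
     * whole latency history well under 100 KiB. */
    printf("per-event history: %zu bytes\n", sizeof(struct latencyTimeSeries));
    return 0;
}

And when latency-monitor-threshold is 0, latencyAddSampleIfNeeded never records anything at all, so users not using the feature pay essentially nothing.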


soloestoy commented on July 18, 2024

some suggestions:

  • maxmemory-clients 0 -> 100%: limit client memory usage, to avoid data eviction caused by network read/write buffering, or even a machine-level OOM.
  • latency-monitor-threshold 0 -> 100: enable latency monitoring by default (100ms), so we are not left unable to investigate after a problem occurs.


zuiderkwast commented on July 18, 2024

Interesting. @soloestoy can you explain what maxmemory-clients 100% means? Clients and data share the maxmemory limit and are evicted at the same time? If Valkey is used as an LRU cache, maybe we want to evict data first?

As for the latency monitor: if it has almost no CPU overhead, I'm OK with enabling it by default. Please convince me that I don't have to worry about this comment in the config file:

# By default latency monitoring is disabled since it is mostly not needed
# if you don't have latency issues, and collecting data has a performance
# impact, that while very small, can be measured under big load.


enjoy-binbin commented on July 18, 2024

Let's do it in 8.0; I think we should ship the ideal configuration defaults for users.
BTW, I want to change latency-monitor-threshold to a default of 10ms, just like the slowlog's default (slowlog-log-slower-than defaults to 10000 microseconds, i.e. 10ms).


zuiderkwast commented on July 18, 2024

@enjoy-binbin what's the current default latency-monitor-threshold? Is there a max size for the latency history? I'm thinking about the users who are not using this feature, so that it doesn't eat a lot of memory or CPU for them.


enjoy-binbin commented on July 18, 2024

The current default latency-monitor-threshold is 0 (disabled). I see you guys discussed this in #653 (comment).

As for the memory concern: yeah, maybe, or we could add a latency-history-max-len similar to slowlog-max-len? I think the latency monitor is useful in the same way the slowlog is; the two can be treated as analogous.


enjoy-binbin commented on July 18, 2024

repl-diskless-load disabled -> on-empty-db. Any drawbacks?

Seems to have no drawbacks.

Speaking of repl-diskless-load, should we consider adding an empty-db option? It would perform a FLUSHALL first so that we can then take the on-empty-db path.
