
Comments (14)

cirocosta commented on June 16, 2024

maybe some good news!

As a way of trying to get rid of some of the iptables memory allocation problems (see concourse/concourse#3127), which could be just a symptom of memory pressure, I tried raising /proc/sys/vm/min_free_kbytes so that we always leave some extra room for the kernel's own allocations, and the behavior seems better - not only have those iptables problems stopped, but CPU usage also seems more evenly distributed overall.

I'll perform some more runs tomorrow and see how it goes πŸ‘

https://github.com/torvalds/linux/blob/baf76f0c58aec435a3a864075b8f6d8ee5d1f17e/Documentation/sysctl/vm.txt#L438-L450
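
For reference, a minimal sketch of that kind of tuning - the value below is just illustrative, not necessarily what we end up settling on:

    # check how much memory the kernel currently keeps in reserve
    cat /proc/sys/vm/min_free_kbytes

    # raise it so that more memory stays free for the kernel's own
    # allocations (e.g. the atomic ones iptables needs) - 1048576 (1GiB)
    # is an illustrative value
    sudo sysctl -w vm.min_free_kbytes=1048576

    # persist it across reboots
    echo 'vm.min_free_kbytes = 1048576' | sudo tee /etc/sysctl.d/99-vm-min-free.conf
    sudo sysctl --system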


cirocosta commented on June 16, 2024

cc @topherbullock the thing I mentioned yesterday after standup
cc @ddadlani @kcmannem (after investigating this, it might touch runtime at some point?)


cirocosta commented on June 16, 2024

Another interesting fact: despite having provisioned what should be the best combination of CPU count + disk size to max out the IOPS and sustained throughput we can get, it seems like we're still being throttled:

(screenshot: Screen Shot 2019-03-13 at 3.18.44 PM)

Which is interesting, given that in those moments, we've never gotten close to what should be our quota:

(screenshot: Screen Shot 2019-03-13 at 3.25.52 PM)

(screenshot: Screen Shot 2019-03-13 at 3.26.35 PM)


cirocosta commented on June 16, 2024

I've been wondering whether this is related to overlay (the driver we use there). In theory, we should be able to tell by taking a profile of the machine once Strabo kicks in, deriving from it what kind of workload makes the system CPU usage grow so much, and then, with that, crafting a specialized workload that lets us better understand what in overlay causes it 🤔


kcmannem commented on June 16, 2024

I don't know the specifics of overlay, but one situation where we do know it struggles is with privileged containers. In the case of big resources with Strabo, using privileged causes overlay to perform a real write into the upper dir with the correct permissions, instead of just using the read-only version in the lower dir.


cirocosta commented on June 16, 2024

If I understand correctly, overlay ends up being a problem at the beginning of a step, when all of those chowns need to happen - but for what happens after that, would it be any worse than btrfs? 🤔

In the case of strabo there are no privileged containers πŸ€”


cirocosta commented on June 16, 2024

It turns out that sys cpu is pretty tricky πŸ€·β€β™‚οΈ

Running chicken-gun's net stress, we can constantly drive sys utilization to 32% with pretty much ~1% user CPU usage, which is quite interesting! The next question is - does sys utilization count when cgroups throttle processes? 🤔
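
One way of checking that is looking at the cfs bandwidth stats the cpu controller exposes for a container's cgroup - a sketch, assuming cgroup v1 mounted under /sys/fs/cgroup and a hypothetical cgroup path:

    # nr_throttled / throttled_time show how often and for how long the
    # cfs bandwidth controller throttled this cgroup (values illustrative)
    cat /sys/fs/cgroup/cpu/<some-container-cgroup>/cpu.stat
    #   nr_periods 1024
    #   nr_throttled 37
    #   throttled_time 912345678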


cirocosta commented on June 16, 2024

Hey,

I took a look at what goes on with the machine when it's in such a state, and it seems that the performance thrashing is coming from the memory subsystem.

# Children      Self       Samples  Command          Shared Object             Symbol
# ........  ........  ............  ...............  ........................  ...........................................................................................................................................................................................
#
    24.91%     0.00%             0  exe              [kernel.kallsyms]         [k] try_charge
13.63% try_charge;try_to_free_mem_cgroup_pages;do_try_to_free_pages;shrink_node;shrink_slab.part.49;super_cache_count;list_lru_count_one
3.57% try_charge;try_to_free_mem_cgroup_pages;do_try_to_free_pages;shrink_node;shrink_slab.part.49;super_cache_count;list_lru_count_one;do_raw_spin_lock;native_queued_spin_lock_slowpath
3.52% try_charge;try_to_free_mem_cgroup_pages;do_try_to_free_pages;shrink_node;shrink_slab.part.49;super_cache_count;list_lru_count_one;do_raw_spin_lock
1.17% try_charge;try_to_free_mem_cgroup_pages;do_try_to_free_pages;shrink_node;shrink_slab.part.49;super_cache_count
1.15% try_charge;try_to_free_mem_cgroup_pages;do_try_to_free_pages;shrink_node;shrink_slab.part.49;super_cache_count;shmem_unused_huge_count
0.98% try_charge;try_to_free_mem_cgroup_pages;do_try_to_free_pages;shrink_node;shrink_slab.part.49
    24.91%     0.00%             0  exe              [kernel.kallsyms]         [k] try_to_free_mem_cgroup_pages
13.63% try_to_free_mem_cgroup_pages;do_try_to_free_pages;shrink_node;shrink_slab.part.49;super_cache_count;list_lru_count_one
3.57% try_to_free_mem_cgroup_pages;do_try_to_free_pages;shrink_node;shrink_slab.part.49;super_cache_count;list_lru_count_one;do_raw_spin_lock;native_queued_spin_lock_slowpath
3.52% try_to_free_mem_cgroup_pages;do_try_to_free_pages;shrink_node;shrink_slab.part.49;super_cache_count;list_lru_count_one;do_raw_spin_lock
1.17% try_to_free_mem_cgroup_pages;do_try_to_free_pages;shrink_node;shrink_slab.part.49;super_cache_count
1.15% try_to_free_mem_cgroup_pages;do_try_to_free_pages;shrink_node;shrink_slab.part.49;super_cache_count;shmem_unused_huge_count
0.98% try_to_free_mem_cgroup_pages;do_try_to_free_pages;shrink_node;shrink_slab.part.49
    24.91%     0.00%             0  exe              [kernel.kallsyms]         [k] do_try_to_free_pages
13.63% do_try_to_free_pages;shrink_node;shrink_slab.part.49;super_cache_count;list_lru_count_one
3.57% do_try_to_free_pages;shrink_node;shrink_slab.part.49;super_cache_count;list_lru_count_one;do_raw_spin_lock;native_queued_spin_lock_slowpath
3.52% do_try_to_free_pages;shrink_node;shrink_slab.part.49;super_cache_count;list_lru_count_one;do_raw_spin_lock
1.17% do_try_to_free_pages;shrink_node;shrink_slab.part.49;super_cache_count
1.15% do_try_to_free_pages;shrink_node;shrink_slab.part.49;super_cache_count;shmem_unused_huge_count
0.98% do_try_to_free_pages;shrink_node;shrink_slab.part.49
    24.77%     0.00%             2  exe              [kernel.kallsyms]         [k] shrink_node
13.63% shrink_node;shrink_slab.part.49;super_cache_count;list_lru_count_one
3.57% shrink_node;shrink_slab.part.49;super_cache_count;list_lru_count_one;do_raw_spin_lock;native_queued_spin_lock_slowpath
3.52% shrink_node;shrink_slab.part.49;super_cache_count;list_lru_count_one;do_raw_spin_lock
1.17% shrink_node;shrink_slab.part.49;super_cache_count
1.15% shrink_node;shrink_slab.part.49;super_cache_count;shmem_unused_huge_count
0.98% shrink_node;shrink_slab.part.49
    24.58%     0.98%           464  exe              [kernel.kallsyms]         [k] shrink_slab.part.49
13.63% shrink_slab.part.49;super_cache_count;list_lru_count_one
3.57% shrink_slab.part.49;super_cache_count;list_lru_count_one;do_raw_spin_lock;native_queued_spin_lock_slowpath
3.52% shrink_slab.part.49;super_cache_count;list_lru_count_one;do_raw_spin_lock
1.17% shrink_slab.part.49;super_cache_count
1.15% shrink_slab.part.49;super_cache_count;shmem_unused_huge_count
    24.06%     0.00%             0  exe              [kernel.kallsyms]         [k] handle_mm_fault

(screenshot: Screen Shot 2019-04-24 at 10.13.40 PM)

I remember reading that per-cgroup memory accounting is not free, but that seems quite expensive. Maybe there's something we're missing here.
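
For context, a profile like the one above can be captured with something along these lines (a sketch - the exact flags and sampling window are assumptions):

    # sample all CPUs with kernel call graphs for 30s while the machine is in that state
    sudo perf record -a -g -- sleep 30

    # then dump it with children accumulation, as in the output above
    sudo perf report --children --stdio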

Here are some other interesting stats:

(screenshot: weird-stats)

At the moment we're in such a state, we see a bunch of small IOPS with quite high IO completion times 🤔


cirocosta commented on June 16, 2024

Supporting the theory that it's indeed something related to paging and not the fact that our IOPS are being throttled:

(screenshot: Artboard)

We're pretty much never being throttled for reads πŸ€” (the stats for throttling come from Stackdriver, i.e., from the IaaS itself).

Reading https://engineering.linkedin.com/blog/2016/08/don_t-let-linux-control-groups-uncontrolled, it feels like we're hitting case 2:

A cgroup’s memory limit (e.g., 10GB) includes all memory usage of the processes running in itβ€”both the anonymous memory and page cache of the cgroup are counted towards the memory limit. In particular, when the application running in a cgroup reads or writes files, the corresponding page cache allocated by OS is counted as part of the cgroup’s memory limit.

For some applications, the starvation of page cache (and corresponding low page cache hit rate) has two effects: it degrades the performance of the application and increases workload on the root disk drive, and it could also severely degrade the performance of all applications on the machine that perform disk IO.
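
A quick way of seeing that accounting in action is watching a cgroup's cache vs. rss counters while a process inside it reads a big file - a sketch assuming cgroup v1 and hypothetical names/sizes:

    # create a memory-limited cgroup and move the current shell into it
    sudo mkdir /sys/fs/cgroup/memory/pagecache-test
    echo $((256 * 1024 * 1024)) | sudo tee /sys/fs/cgroup/memory/pagecache-test/memory.limit_in_bytes
    echo $$ | sudo tee /sys/fs/cgroup/memory/pagecache-test/cgroup.procs

    # read a large file - the page cache it populates is charged to this
    # cgroup, counting towards its limit
    cat /some/large/file > /dev/null

    # 'cache' grows towards the limit while 'rss' stays tiny
    grep -E '^(cache|rss) ' /sys/fs/cgroup/memory/pagecache-test/memory.stat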

It'd be nice to create a reproducible for this. I'll see if I can come up with something using chicken-gun.


cirocosta commented on June 16, 2024

I was under the impression that overlay could be part of the issue, but it seems that's not the case - we just hit the same kind of CPU usage curve with Ubuntu-based machines (thus, not overlay-related):

(screenshot: Screen Shot 2019-05-01 at 10.10.16 AM)

(screenshot: Screen Shot 2019-05-01 at 10.11.06 AM)


cirocosta commented on June 16, 2024

sys CPU has remained consistently low compared to how it was before the tuning 🙌 The iptables memory allocation problem still happens, though.


cirocosta commented on June 16, 2024

Actually ... this didn't really help much πŸ€·β€β™‚

It's quite unfortunate that when tracing unzip we get the following:

(screenshot: Screen Shot 2019-05-24 at 6.27.47 PM)

Looking at https://lore.kernel.org/lkml/[email protected]/, which suggests that 4.19+ ships with an improvement for the whole list_lru_count_one high sys CPU utilization problem, and knowing that we'd like to try out the iptables problem on 4.19+ anyway, I'll go ahead with having hush-house workers run on top of Ubuntu Disco to see how that compares.

That'd still not be a 100%-valid comparison, as we'd not be running on top of a GKE-managed instance type (PKS wouldn't help us here either, as its stemcells are 4.14-based just like GKE's Ubuntu family and the COS family - see https://cloud.google.com/container-optimized-os/docs/release-notes#current_active_releases).
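
(As a sanity check, confirming which kernel a given worker node actually runs is just a matter of:)

    # expect 4.14.x on the current GKE images vs 4.19+ on Disco
    uname -r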

Also, this is quite interesting:

Every mount introduces a new shrinker to the system, so it's easy to see a system with 100 or even 1000 shrinkers.

ooooh yeah, strabo alone has "just" hundreds of those 😅 not counting the hundreds of other containers that are usually already on the machine 😅
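
Since every mount brings its own shrinker along, a rough lower bound on how many shrinkers a worker is dealing with is simply its mount count:

    # every overlay/bind mount backing a container or volume shows up here
    wc -l < /proc/mounts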

Knowing that, we can also prepare something with chicken-gun 🤔


side note: if this really helps, the next step will be seeing if it's possible to get strabo doing its thing with configurable concurrency πŸ‘€


cirocosta commented on June 16, 2024

(not completely related, but something else to keep an eye on - cpu.cfs_quota https://lkml.org/lkml/2019/5/17/581)

see kubernetes/kubernetes#67577


cirocosta commented on June 16, 2024

I spent some time trying to see how a large number of cgroups affects the performance of reading files from disk, and it turns out the effect is quite significant:

no extra cgroups:
    0.00user 9.89system 0:13.25elapsed 74%CPU (0avgtext+0avgdata 3336maxresident)k
    10486016inputs+8outputs (1major+130minor)pagefaults 0swaps

100 extra cgroups:
    0.02user 18.96system 0:22.10elapsed 85%CPU (0avgtext+0avgdata 3220maxresident)k
    10486152inputs+8outputs (2major+131minor)pagefaults 0swaps

300 extra cgroups:
    0.06user 80.90system 1:24.64elapsed 95%CPU (0avgtext+0avgdata 3360maxresident)k
    10486024inputs+8outputs (1major+132minor)pagefaults 0swaps

which seems to be quite in line with the comments in the lkml threads regarding the quadratic behavior of the shrinkers 🤔

The reproducible consists of (a condensed sketch follows this list):

  1. having a bunch of large files to read (see https://github.com/cirocosta/chicken-gun/blob/3c5fe02045e784c7da47575a415bb98a6bbf9e71/scenarios/page-cache/setup.sh#L11-L16)

  2. setting up a number of cgroups (see https://github.com/cirocosta/chicken-gun/blob/3c5fe02045e784c7da47575a415bb98a6bbf9e71/scenarios/page-cache/containers.sh#L5-L10), then

  3. reading all of those files (see https://github.com/cirocosta/chicken-gun/blob/3c5fe02045e784c7da47575a415bb98a6bbf9e71/src/page_cache.rs)
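
Condensing those steps into a single sketch (cgroup v1 paths, counts and file sizes here are illustrative - see the linked scripts for the exact setup):

    # 1. a bunch of large files to read
    mkdir -p /tmp/page-cache-test
    for i in $(seq 1 10); do
        dd if=/dev/urandom of=/tmp/page-cache-test/file-$i bs=1M count=512
    done

    # 2. a number of extra memory cgroups
    for i in $(seq 1 300); do
        sudo mkdir -p /sys/fs/cgroup/memory/extra-$i
    done

    # 3. drop the page cache, then time reading all of those files back
    sync && echo 3 | sudo tee /proc/sys/vm/drop_caches
    /usr/bin/time cat /tmp/page-cache-test/file-* > /dev/null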

