
Comments (10)

werkt commented on June 16, 2024

Buildfarm is currently intolerant of redis configuration changes. Redis clusters that change IPs/names are not reflected in server/worker grouped communication. It's something I've been meaning to experiment with tolerating, but the current guidance is to only run buildfarm on stable redis clusters, and to bring it down when clusters' layouts (including slot allocations) change. A number of modifications will need to be made to have the entire cluster respond correctly to reconfigured clusters.


aisbaa commented on June 16, 2024

Thanks for the explanation @werkt. From what I've observed, if I restart buildfarm after redis comes back up, it starts working as usual. I can give a demo if you're up for a zoom call.

One more anecdotal data point: we have an old internal buildfarm build (before 2.8.0) which closes the tcp port if redis goes down. This allows me to use a tcp liveness probe, so when redis restarts, buildfarm gets restarted by kubernetes.

Edit:

Redis clusters that change IPs/names are not reflected in server/worker grouped communication.

For redis we use a kubernetes ClusterIP service, which acts like a load balancer with a static IP address. We also pass the redis address as a domain name (e.g. redis://redis.buildfarm-java:6379).
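
A quick way to sanity-check that path from inside the cluster (just a sketch: the redis image tag is arbitrary, and it assumes redis.buildfarm-java is the usual service.namespace short DNS form):

# Run a throwaway pod and ping redis through the ClusterIP service's DNS name.
kubectl run redis-check --rm -it --restart=Never --image=redis:7 -- \
  redis-cli -u redis://redis.buildfarm-java:6379 ping
# Expected reply: PONG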


werkt commented on June 16, 2024

I think I see - you're questioning the discrepancy between 'No available workers' and all three health probes passing on the workers.

The problem needs to be looked at on the servers first - conditions in the cluster are leading to them, not the workers, responding with UNAVAILABLE.

The servers need to see the manifest of current storage workers in redis to read/write blobs. I don't know if a failing connection to redis will inspire it into something other than the UNAVAILABLE behavior from above.
Further, if all of the workers listed in storage are inaccessible as named in the storage map, you will get the UNAVAILABLE response above.
I suggest trying this with a single instance, seeding broken values, introducing redis shutdowns or node removals, and seeing the results. With explicit steps that reproduce the unexpected case, you can augment this issue and/or write a unit test.
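
A rough sketch of that exercise, assuming a plain single-node redis, the worker prometheus port used elsewhere in this thread (9090), and the default "Workers" hash name from the backplane config (all of which may differ in your setup):

# 1. Start a throwaway redis and point a single buildfarm server + worker at it
#    (see the project README for the run targets and example configs).
redis-server --port 6379 --daemonize yes

# 2. Confirm the worker has registered itself in the backplane.
redis-cli -p 6379 hgetall Workers

# 3. Take redis away mid-flight (or remove a node) and watch the server/worker logs.
redis-cli -p 6379 shutdown nosave

# 4. The worker's prometheus endpoint still answers 200 while the backplane is gone.
curl -I http://localhost:9090/

# 5. Bring redis back and check whether the worker re-registers.
redis-server --port 6379 --daemonize yes
redis-cli -p 6379 hgetall Workers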


aisbaa commented on June 16, 2024

I think we are drifting away from the main problem I was referring to, which is that the buildfarm shard-worker is not healthy (as observed in the dashboard), but the liveness and readiness probes report that the shard-worker is fine.

Manually sending a curl request to the shard-worker readiness/liveness endpoint returns 200:

% curl -I http://10.32.8.185:9090/
HTTP/1.1 200 OK
Date: Tue, 20 Feb 2024 13:12:18 GMT
Content-type: text/plain; version=0.0.4; charset=utf-8
Content-length: 37180


werkt commented on June 16, 2024

200 on the prometheus endpoint doesn't necessarily indicate 'fine', but I agree that no public interface presents any difference when a worker can't talk to the backplane because it happens to be down. Ostensibly, the worker is fine, and coping with the fact that it can't talk to redis at a particular moment.

Starting a worker does fail currently if it cannot connect to redis, which happens because backplane initialization was never made tolerant to it. With little effort, I could actually make it start cleanly and function normally, without any connection available to redis.
Killing my connected redis server during steady state and then bringing it back up, however, does allow the worker to continue without a restart.
Workers with a disconnected redis are able to serve files as normal CAS members, and without an execution pipeline enabled, would essentially serve as standalone grpc CAS endpoints.

So the statement "Health check does not seem to work..." is not accurate: the worker is healthy. Redis, hosting the backplane, is not.

What do you want the worker to do differently when the backplane cannot be reached?


aisbaa commented on June 16, 2024

Thank you for the explanation, it does make sense. I initially thought that redis is required for all operations and that killing it makes the cluster unavailable.

To add to that, I did find one worker in a bad state whose prometheus endpoint was not responding. So the health checks must be working too.

Do you know a simple way to put a worker in a bad state so that it would report that it is not healthy?


aisbaa commented on June 16, 2024

I've got a new data point. We use automated GKE node pool upgrades; one ran over the weekend and also restarted redis. While the servers appear to have survived the redis restart, the workers failed to rejoin them.

[Grafana screenshot: devpod-buildfarm dashboard, captured 2024-03-18 16:11]

Health check says that everything is fine:

 > curl -I http://10.80.17.157:9090/
HTTP/1.1 200 OK
Date: Mon, 18 Mar 2024 15:07:02 GMT
Content-type: text/plain; version=0.0.4; charset=utf-8
Content-length: 33036

Logs indicate that the worker failed to reconnect to redis; the uncaught exception killed the Worker.failsafeRegistration thread:

Exception in thread "Worker.failsafeRegistration" redis.clients.jedis.exceptions.JedisClusterMaxAttemptsException: No more cluster attempts left.
        at redis.clients.jedis.JedisClusterCommand.runWithRetries(JedisClusterCommand.java:152)
        at redis.clients.jedis.JedisClusterCommand.run(JedisClusterCommand.java:27)
        at redis.clients.jedis.JedisCluster.hset(JedisCluster.java:477)
        at build.buildfarm.common.redis.RedisHashMap.insert(RedisHashMap.java:61)
        at build.buildfarm.instance.shard.RedisShardBackplane.addWorkerByType(RedisShardBackplane.java:632)
        at build.buildfarm.instance.shard.RedisShardBackplane.lambda$addWorker$5(RedisShardBackplane.java:614)
        at build.buildfarm.common.redis.RedisClient.call(RedisClient.java:117)
        at build.buildfarm.instance.shard.RedisShardBackplane.addWorker(RedisShardBackplane.java:610)
        at build.buildfarm.worker.shard.Worker.addWorker(Worker.java:445)
        at build.buildfarm.worker.shard.Worker$1.registerIfExpired(Worker.java:497)
        at build.buildfarm.worker.shard.Worker$1.run(Worker.java:507)
        at java.base/java.lang.Thread.run(Thread.java:833)
        Suppressed: redis.clients.jedis.exceptions.JedisConnectionException: Unexpected end of stream.
...
        Suppressed: redis.clients.jedis.exceptions.JedisConnectionException: Could not get a resource from the pool
                at redis.clients.jedis.util.Pool.getResource(Pool.java:59)
                at redis.clients.jedis.JedisPool.getResource(JedisPool.java:234)
                at redis.clients.jedis.JedisSlotBasedConnectionHandler.getConnectionFromSlot(JedisSlotBasedConnectionHandler.java:89)
                at redis.clients.jedis.JedisClusterCommand.runWithRetries(JedisClusterCommand.java:102)
                ... 11 more
        Caused by: redis.clients.jedis.exceptions.JedisConnectionException: Failed connecting to host bf-redis-standalone.cache-buildfarm:6379
                at redis.clients.jedis.Connection.connect(Connection.java:225)
                at redis.clients.jedis.BinaryClient.connect(BinaryClient.java:100)
                at redis.clients.jedis.BinaryJedis.connect(BinaryJedis.java:1896)
                at redis.clients.jedis.JedisFactory.makeObject(JedisFactory.java:117)
                at org.apache.commons.pool2.impl.GenericObjectPool.create(GenericObjectPool.java:571)
                at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:298)
                at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:223)
                at redis.clients.jedis.util.Pool.getResource(Pool.java:50)
                ... 14 more
        Caused by: java.net.ConnectException: Connection refused
                at java.base/sun.nio.ch.Net.pollConnect(Native Method)
                at java.base/sun.nio.ch.Net.pollConnectNow(Net.java:672)
                at java.base/sun.nio.ch.NioSocketImpl.timedFinishConnect(NioSocketImpl.java:549)
                at java.base/sun.nio.ch.NioSocketImpl.connect(NioSocketImpl.java:597)
                at java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:327)
                at java.base/java.net.Socket.connect(Socket.java:639)
                at redis.clients.jedis.Connection.connect(Connection.java:201)
                ... 21 more

Bazel build completes, but gives us these warnings:

INFO: Elapsed time: 129.918s, Critical Path: 62.01s
INFO: 928 processes: 57 internal, 871 processwrapper-sandbox.
INFO: Build completed successfully, 928 total actions
WARNING: Uploading BEP referenced local file /tmp/bazel_20240318152138_ogvru hash: "48c7622b5379be1139c26015f69eb9c6bf41b8e0fe2fd5b04ea8197ac74dee0c"
size_bytes: 2530893
: UNAVAILABLE: no available workers
INFO: Build Event Protocol files produced successfully.

Health checks are configured as follows:

> kubectl describe pod --context brazil -n cache-buildfarm buildfarm-worker-0 | grep -iE '(Liveness|Readiness|Startup):' 
    Liveness:   http-get http://:metrics/ delay=0s timeout=1s period=10s #success=1 #failure=3
    Readiness:  http-get http://:metrics/ delay=0s timeout=2s period=10s #success=1 #failure=3
    Startup:    http-get http://:metrics/ delay=0s timeout=2s period=10s #success=1 #failure=10


aisbaa commented on June 16, 2024

What do you want the worker to do differently when the backplane cannot be reached?

I would love for the worker to report that it is not healthy if it cannot talk to the backplane; in that case kubernetes would restart it.

Workers with a disconnected redis are able to serve files as normal CAS members, and without an execution pipeline enabled, would essentially serve as standalone grpc CAS endpoints.

Is there a flag to tell the worker that it must connect to redis and exit if it fails to do so?


werkt commented on June 16, 2024

I would love for the worker to report that it is not healthy if it cannot talk to the backplane; in that case kubernetes would restart it.

Short of the below - shutting down whenever connectivity to redis (any node) can't be established - we need a signal light that you'd get to see. I'd expect that to live on the prometheus endpoint, but it will require more than just querying the endpoint and checking for a 200: whatever metric we select to indicate connectivity failures would need to be observed.

Is there a flag to tell the worker that it must connect to redis and exit if it fails to do so?

I'll interpret this as 'can we make one', because there isn't one currently. The answer is yes.


aisbaa commented on June 16, 2024

A prometheus metric would work; I could add a shell script that greps for the metric and checks its value. As an alternative, would you consider a gRPC health check to provide this signal? I think it would work nicely with a potential flag/setting to report not healthy if the connection to redis is lost.
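
A minimal version of such a probe script might look like the following sketch; the metric name backplane_connected is purely hypothetical (per the discussion above, buildfarm does not export such a metric yet), and the port matches the worker prometheus endpoint used in this thread:

#!/bin/sh
# Hypothetical exec-probe health check: succeed only when the (assumed)
# backplane-connectivity gauge is exported and set to 1 on the metrics endpoint.
METRICS_URL="http://localhost:9090/"
curl -sf "$METRICS_URL" | grep -Eq '^backplane_connected(\{[^}]*\})?[[:space:]]+1(\.0)?$'

Wired up as an exec liveness probe (or replaced by the gRPC health check mentioned above), this would let kubernetes restart the worker once it loses the backplane.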

