
ackdistro's People

Contributors

ankushgoel27, arctan90, dawnguodev, denverdino, haiyangding, hhyasdf, kakazhou719, luckyfengyong, lxs137, stevent-fei, tamerga, thebeatles1994, tzzcfrank, vincecui, wizardlyk


ackdistro's Issues

Reporting a vulnerability

Hello!

I hope you are doing well!

We are a security research team. Our tool automatically detected a vulnerability in this repository. We want to disclose it responsibly. GitHub has a feature called Private vulnerability reporting, which enables security researchers to privately disclose a vulnerability. Unfortunately, it is not enabled for this repository.

Can you enable it, so that we can report it?

Thanks in advance!

PS: you can read about how to enable private vulnerability reporting here: https://docs.github.com/en/code-security/security-advisories/repository-security-advisories/configuring-private-vulnerability-reporting-for-a-repository

Install Error [ERROR] [root.go:75] sealer-0.9.4-beta2: failed to run hook: preflight

Detail Log:

2023-08-11 11:25:47 [INFO] Start pre-flight...
2023-08-11 11:25:47 [INFO] Start parsing and validating trident config from /root/.sealer/Clusterfile
2023-08-11 11:25:47 [INFO] failed to get kube client, think it is first time installation
2023-08-11 11:25:47 [INFO] Start parsing site info...
2023-08-11 11:25:51 [INFO] Start validating cluster...
2023-08-11 11:25:51 [INFO] Start running cluster scope validators...
2023-08-11 11:25:51 [INFO] Start running cluster validator HostNameDuplicate...
2023-08-11 11:25:51 [INFO] Start running cluster validator TimeDiff...
2023-08-11 11:25:52 [INFO] Start running agent validators...
2023-08-11 11:26:14 [WARN] [Validator:OS on host:192.168.3.121] fails: kernel version 4.19.284-0419284-generic cannot be marshal, please contact us for help, but since --ignore-errors is configured, this failure has been ignored.
2023-08-11 11:26:14 [WARN] [Validator:OS on host:192.168.3.122] fails: kernel version 4.19.284-0419284-generic cannot be marshal, please contact us for help, but since --ignore-errors is configured, this failure has been ignored.
2023-08-11 11:26:14 [WARN] [Validator:OS on host:192.168.3.123] fails: kernel version 4.19.284-0419284-generic cannot be marshal, please contact us for help, but since --ignore-errors is configured, this failure has been ignored.
2023-08-11 11:26:14 [INFO] Starting dumping validate report anyway...
Trident version 1.14.3, Build: d1412075 go1.17.2, Build time: 2023-03-31-11-37-07, EnableLicense: false, Log file: /var/lib/sealer/data/my-cluster/rootfs/trident_log_2023-08-11T11-25-47+08-00
failed to validate cluster:
[Validator:SystemTools on host:192.168.3.121] fails: missing system tools: [setenforce] on this node, please install them first, please check your cluster and fix this error. Or, if you know what you are doing, you can run with flag [--env IgnoreErrors="OS||SystemTools"] to ignore it.
[Validator:SystemTools on host:192.168.3.122] fails: missing system tools: [setenforce] on this node, please install them first, please check your cluster and fix this error. Or, if you know what you are doing, you can run with flag [--env IgnoreErrors="OS||SystemTools"] to ignore it.
[Validator:SystemTools on host:192.168.3.123] fails: missing system tools: [setenforce] on this node, please install them first, please check your cluster and fix this error. Or, if you know what you are doing, you can run with flag [--env IgnoreErrors="OS||SystemTools"] to ignore it.
[Validator:TimeSyncService on host:192.168.3.121] fails: either chronyd and ntpd not set in this host, please check your cluster and fix this error. Or, if you know what you are doing, you can run with flag [--env IgnoreErrors="OS||TimeSyncService"] to ignore it.
[Validator:TimeSyncService on host:192.168.3.122] fails: either chronyd and ntpd not set in this host, please check your cluster and fix this error. Or, if you know what you are doing, you can run with flag [--env IgnoreErrors="OS||TimeSyncService"] to ignore it.
[Validator:TimeSyncService on host:192.168.3.123] fails: either chronyd and ntpd not set in this host, please check your cluster and fix this error. Or, if you know what you are doing, you can run with flag [--env IgnoreErrors="OS||TimeSyncService"] to ignore it.
[Validator:KernelFlags on host:192.168.3.121] fails:
invalid kernel flag: kernel/panic_on_oops, expected value: [1], actual value: 0
, please check your cluster and fix this error. Or, if you know what you are doing, you can run with flag [--env IgnoreErrors="OS||KernelFlags"] to ignore it.
[Validator:KernelFlags on host:192.168.3.122] fails:
invalid kernel flag: kernel/panic_on_oops, expected value: [1], actual value: 0
, please check your cluster and fix this error. Or, if you know what you are doing, you can run with flag [--env IgnoreErrors="|OS|KernelFlags"] to ignore it.
[Validator:KernelFlags on host:192.168.3.123] fails:
invalid kernel flag: kernel/panic_on_oops, expected value: [1], actual value: 0
, please check your cluster and fix this error. Or, if you know what you are doing, you can run with flag [--env IgnoreErrors="OS||KernelFlags"] to ignore it..
You can see entire survey report in /var/lib/sealer/data/my-cluster/rootfs/site-survey-report.yaml
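
The three validator failures above each point at a concrete fix; on Ubuntu 18.04 a remediation sketch might look like the following (package and service names are assumptions for this distro):

# Hedged sketch for Ubuntu 18.04; package/service names are assumptions.
apt-get install -y selinux-utils            # provides the missing setenforce tool
apt-get install -y chrony
systemctl enable --now chrony               # satisfies the TimeSyncService validator
sysctl -w kernel.panic_on_oops=1            # satisfies the KernelFlags validator
echo 'kernel.panic_on_oops = 1' > /etc/sysctl.d/99-ackdistro.conf   # persist across reboots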

Environment

  • ACK Distro version: sealer-0.9.4-beta2
  • OS (e.g. cat /etc/os-release):Ubuntu 18.04.6 LTS
  • Kernel (e.g. uname -a):4.19.284-0419284-generic

I made sure chronyd or ntpd is installed, and checked /var/lib/sealer/data/my-cluster/rootfs/site-survey-report.yaml.

I don't know the location of the scripts that get executed; with something like Ansible, I would know how it works.

Hoping for help.

1.20 support-ipv6-dualstack test result

Summarizing 6 Failures:

[Fail] [sig-apps] Daemon set [Serial] [It] should rollback without unnecessary restarts [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:418

[Fail] [sig-scheduling] SchedulerPredicates [Serial] [It] validates resource limits of pods that are allowed to run [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:325

[Fail] [sig-scheduling] SchedulerPreemption [Serial] [It] validates lower priority pod preemption by critical pod [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113

[Fail] [sig-scheduling] SchedulerPredicates [Serial] [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:1123

[Fail] [sig-scheduling] SchedulerPreemption [Serial] [It] validates basic preemption works [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113

[Fail] [sig-scheduling] SchedulerPredicates [Serial] [It] validates that NodeSelector is respected if not matching [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:438

Ran 309 of 5667 Specs in 6177.341 seconds
FAIL! -- 303 Passed | 6 Failed | 0 Pending | 5358 Skipped
--- FAIL: TestE2E (6177.45s)
FAIL

Ginkgo ran 1 suite in 1h42m59.12647752s
Test Suite Failed
2022/07/01 07:40:54 Saving results at /tmp/sonobuoy/results
2022/07/01 07:40:54 running command: exit status 1

  • ret=1
  • exit 1

hybridnet cpu request too high

Bug Report

Type: bug report

What happened

[screenshot]

What you expected to happen

How to reproduce it (as minimally and precisely as possible)

Anything else we need to know?

Environment

  • ACK Distro version:
  • OS (e.g. cat /etc/os-release):
  • Kernel (e.g. uname -a):
  • Others:

[github-actions] auto-build ackdistro branch image

If you want to build a particular image, you can comment /imagebuild version branch (the default architecture is amd64),
e.g.: /imagebuild 1.20.14 feature/v1.20.4-ack-3
[screenshot]

multiarch:
e.g.: /imagebuild v1-20-4-ack-5 main multiarch
[screenshot]

Pulling the image from sea.hub:5000 reports an error

Pulling the image from sea.hub:5000 reports an error

http: server gave HTTP response to HTTPS client

You need to modify the /etc/docker/daemon.json file to add the { "insecure-registries": ["sea.hub:5000"] } configuration, then reload the configuration and restart Docker:
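
For reference, a minimal sketch of writing that configuration, assuming /etc/docker/daemon.json does not exist yet (merge the key by hand if it does):

# Assumes daemon.json was previously absent or empty.
cat >/etc/docker/daemon.json <<'EOF'
{
  "insecure-registries": ["sea.hub:5000"]
}
EOF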

systemctl daemon-reload
systemctl restart docker

Run the installation command again and the installation completes:

# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE   ERROR
etcd-2               Healthy   ok
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   ok
etcd-1               Healthy   ok

PR for 1.22

Issue Description

Type: feature request

Describe what feature you want

Additional context

ackd does not support alinux

containerd --version
containerd.sh: line 13: containerd: command not found
++ utils_get_distribution
++ lsb_dist=
++ '[' -r /etc/os-release ']'
+++ . /etc/os-release
++++ NAME='Alibaba Cloud Linux'
++++ VERSION='3 (Soaring Falcon)'
++++ ID=alinux
++++ ID_LIKE='rhel fedora centos anolis'
++++ VERSION_ID=3
++++ PLATFORM_ID=platform:al8
++++ PRETTY_NAME='Alibaba Cloud Linux 3 (Soaring Falcon)'
++++ ANSI_COLOR='0;31'
++++ HOME_URL=https://www.aliyun.com/
+++ echo alinux
++ lsb_dist=alinux
++ echo alinux
lsb_dist=alinux
++ echo alinux
++ tr '[:upper:]' '[:lower:]'
lsb_dist=alinux
echo 'current system is alinux'
current system is alinux
case "${lsb_dist}" in
utils_error 'unknown system to use /etc/systemd/system/containerd.service'
containerd.sh: line 51: utils_error: command not found
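
The trace shows that the case statement in containerd.sh has no branch for alinux, and that utils_error is itself undefined when the fallthrough is hit. A hedged sketch of the kind of branch that could be added, assuming the script installs a distro-specific systemd unit; the paths and fallback behavior are assumptions, not the repository's actual code:

# Hedged sketch; alinux is RHEL-like per the ID_LIKE value above.
case "${lsb_dist}" in
  centos|rhel|anolis|alinux)
    cp ../etc/containerd.service /etc/systemd/system/containerd.service   # illustrative path
    ;;
  *)
    echo "unknown system to use /etc/systemd/system/containerd.service" >&2
    exit 1
    ;;
esac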

Environment

  • ACK Distro version:
  • OS (e.g. cat /etc/os-release): Alibaba Cloud Linux 3 (Soaring Falcon)
  • Kernel (e.g. uname -a): Linux iZbp1efisqbwapk8hs5w8cZ 5.10.134-12.al8.x86_64 #1 SMP Tue Sep 6 14:59:57 CST 2022 x86_64 x86_64 x86_64 GNU/Linux
  • Others:

auto-test

Deploy in two environments at the same time.

Environment 1:

  1. Run sealer run once, deploying with hybridnet;
  2. Run clean;
  3. Run sealer run again, deploying with hybridnet;
  4. Run E2E;
  5. Report the test results.

Environment 2:

  1. Run sealer apply once, deploying with calico;
  2. Run clean;
  3. Run sealer apply again, deploying with calico;
  4. Run E2E;
  5. Report the test results.

The tests use ginkgo to invoke the sealer run test module ---> still under investigation. A sketch of the environment-1 sequence is shown below.
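
A minimal sketch of the environment-1 sequence, assuming standard sealer CLI flags; the image tag and host IP are illustrative:

IMG=ackdistro:v1-20-4-ack-5                       # illustrative image tag
sealer run "$IMG" -m 192.168.0.2 -p '<password>'  # 1st deploy, hybridnet is the default network
sealer delete -a                                  # clean
sealer run "$IMG" -m 192.168.0.2 -p '<password>'  # 2nd deploy
# then run E2E and report the results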

GitHub Actions online build and verification

1. Automated builds

Trigger conditions:

  1. issue: triggered by an issue comment
  2. commit: triggered by pushing code
  3. scheduled task: triggered at a configured time

Issue trigger: extract the tag from the trigger command (a parsing sketch is shown below).

Automated build ---> sh /root/ackdistro/build/build.sh $tag

Once the build completes, the bot automatically replies auto-build success.
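
A hedged sketch of how the trigger command could be parsed from an issue comment before invoking build.sh; the argument handling is an assumption, not the workflow's actual code:

comment='/imagebuild 1.20.14 feature/v1.20.4-ack-3'   # illustrative comment body
read -r _cmd version branch <<<"$comment"             # split into command, version, branch
sh /root/ackdistro/build/build.sh "$version" "$branch"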

[github-actions] auto-test ackdistro images

By default, the cluster is deleted after the test.

  • e2e: e2e test

  • noe2e: no e2e test

  • hold: save cluster

  • Basic test
    /test imageName calico/hybridnet noe2e

  • Basic test and save cluster
    /test imageName calico/hybridnet noe2e hold

  • Basic test + e2e
    /test imageName calico/hybridnet e2e

  • Basic test + e2e and save cluster
    /test imageName calico/hybridnet e2e hold

No longer need to specify images in imageList

Bug Report

Type: bug report

What happened

What you expected to happen

How to reproduce it (as minimally and precisely as possible)

Anything else we need to know?

Environment

  • ACK Distro version:
  • OS (e.g. cat /etc/os-release):
  • Kernel (e.g. uname -a):
  • Others:

InstanceResource error during install: cannot find a disk that exists

sealer run -f ClusterFile failed: it could not find a disk that actually exists.

Bug Report

Type: bug report

What happened

Trident version 1.14.3, Build: d1412075 go1.17.2, Build time: 2023-03-31-11-38-17, EnableLicense: false, Log file: /var/lib/sealer/data/my-cluster/rootfs/trident_log_2023-04-07T11-53-58+08-00
failed to validate cluster:
[Validator:InstanceResource on host:172.25.132.244] fails: [FATAL] failed to find device /dev/nvme1n1, which is assigned as StorageDevice;device /dev/nvme1n1, which is assigned as StorageDevice, need 150 GB, but only has 0.00 GB. You can 'lsblk' on host 172.25.132.244 to choose an right disk, and this error can't be ignored, please check your cluster.
[Validator:InstanceResource on host:172.25.132.246] fails: [FATAL] failed to find device /dev/nvme1n1, which is assigned as StorageDevice;device /dev/nvme1n1, which is assigned as StorageDevice, need 150 GB, but only has 0.00 GB. You can 'lsblk' on host 172.25.132.246 to choose an right disk, and this error can't be ignored, please check your cluster.
[Validator:InstanceResource on host:172.25.132.245] fails: [FATAL] failed to find device /dev/nvme1n1, which is assigned as StorageDevice;device /dev/nvme1n1, which is assigned as StorageDevice, need 150 GB, but only has 0.00 GB. You can 'lsblk' on host 172.25.132.245 to choose an right disk, and this error can't be ignored, please check your cluster..
You can see entire survey report in /var/lib/sealer/data/my-cluster/rootfs/site-survey-report.yaml

What you expected to happen

It should recognize the /dev/nvme1n1 disk.

How to reproduce it (as minimally and precisely as possible)

sealer run -f ClusterFile

Anything else we need to know?

My Clusterfile:

kind: Cluster
metadata:
  name: my-cluster # must be my-cluster
spec:
  image: ack-agility-registry.cn-shanghai.cr.aliyuncs.com/ecp_builder/ackdistro:v1-22-15-ack-10
  containerRuntime:
    type: docker # which container runtime, support containerd/docker, default is docker
  registry:
    #externalRegistry: # external registry configuration
    #  domain: ack-agility-registry.cn-shanghai.cr.aliyuncs.com # if use lite mode image, externalRegistry must be set as this
    localRegistry: # local registry configuration
      domain: sea.hub # domain for local registry, default is sea.hub
      port: 5000 # port for local registry, default is 5000
  env:
    #- EtcdDevice="" # EtcdDevice is device for etcd, default is "", which will use system disk
    - StorageDevice=/dev/nvme1n1 # StorageDevice is device for kubelet and container daemon, default is "", which will use system disk
    - YodaDevice=/dev/nvme1n1 # YodaDevice is device for open-local, if not specified, open local can't provision pv
    - DockerRunDiskSize=120 # unit is GiB, capacity for /var/lib/docker, default is 100
    - KubeletRunDiskSize=30 # unit is GiB, capacity for /var/lib/kubelet, default is 100
    # - Addons=paralb,kube-prometheus-crds,ack-node-problem-detector
    - Network=calico # support hybridnet/calico, default is hybridnet
    - DNSDomain=cluster.local # default is cluster.local
    - ServiceNodePortRange=30000-32767 # default is 30000-32767
    - EnableLocalDNSCache=false # enable local dns cache component, default is false
    - RemoveMasterTaint=false # remove master taint or not, default is false
    - IgnoreErrors=InstanceResource # ignore errors for preflight
    # - TrustedRegistry=your.registry.url # Registry Domain to be trusted by container runtime

My disks (lsblk):
NAME          MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
nvme0n1       259:0    0    40G  0 disk
├─nvme0n1p1   259:2    0   191M  0 part /boot/efi
└─nvme0n1p2   259:3    0  39.8G  0 part /
nvme2n1       259:4    0    20G  0 disk
nvme1n1       259:1    0   200G  0 disk
└─nvme1n1p1   259:5    0   200G  0 part
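
One hedged guess, not from the report: the validator may be reporting 0.00 GB of usable space because /dev/nvme1n1 already carries a partition (nvme1n1p1). If the data on it is disposable, clearing the signatures frees the whole device:

wipefs -a /dev/nvme1n1p1   # destructive: erases filesystem signatures on the partition
wipefs -a /dev/nvme1n1     # destructive: erases the partition table itself
partprobe /dev/nvme1n1     # re-read the partition table
lsblk /dev/nvme1n1         # verify no partitions remain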

Environment

  • ACK Distro version: v1-22-15-ack-10
  • OS (e.g. cat /etc/os-release): CentOS 7
  • Kernel (e.g. uname -a): 4.18.0-348.20.1.el7.aarch64
  • Others:

`kubectl top node` fails.

Bug Report

Type: bug report

What happened

kubectl top node

would fail with

"error: metrics not available yet"

What you expected to happen

kubectl top node

returns the node resource usage normally

How to reproduce it (as minimally and precisely as possible)

kubectl top node

Anything else we need to know?
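
A first diagnostic pass might check whether metrics-server is running and registered; the label, deployment name, and APIService name below are common defaults, not confirmed for this cluster:

kubectl -n kube-system get pods -l k8s-app=metrics-server    # is the pod running?
kubectl -n kube-system logs deploy/metrics-server --tail=50  # any scrape errors?
kubectl get apiservice v1beta1.metrics.k8s.io                # is the metrics API registered?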

Environment

  • ACK Distro version:
  • OS (e.g. cat /etc/os-release):
  • Kernel (e.g. uname -a):
  • Others:
[root@iZm5ebh5r61znlcl60iqumZ ~]# kubectl get node
NAME                      STATUS   ROLES                  AGE   VERSION
izm5ebh5r61znlcl60iqumz   Ready    control-plane,master   32d   v1.20.4-aliyun.1
[root@iZm5ebh5r61znlcl60iqumZ ~]# cat /etc/redhat-release
CentOS Linux release 7.8.2003 (Core)
[root@iZm5ebh5r61znlcl60iqumZ ~]# sealer version
{"gitVersion":"v0.5.2","gitCommit":"858ece9","buildDate":"2021-12-13 03:22:26","goVersion":"go1.14.15","compiler":"gc","platform":"linux/amd64"}
[root@iZm5ebh5r61znlcl60iqumZ ~]# uname -a
Linux iZm5ebh5r61znlcl60iqumZ 3.10.0-1127.19.1.el7.x86_64 #1 SMP Tue Aug 25 17:23:54 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux

no need to re-download build context if it already exists

Issue Description

Type: feature request

Describe what feature you want

Hi @Stevent-fei @VinceCui,

Building the ackd sealer image with this repo is fast.

However, when I run the build command bash build.sh v1.22.15-aliyun.1 v1.1 true, every re-run re-downloads the corresponding OSS file.

Can we optimize this to make builds faster? (A caching sketch is included under Additional context below.)

Additional context
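
A minimal sketch of the caching guard being requested, assuming the script fetches a single OSS archive; the file name and URL are hypothetical:

OSS_FILE="ackdistro-build-context.tar.gz"   # hypothetical file name
if [ ! -f "${OSS_FILE}" ]; then             # skip the download if it is already cached
  wget -O "${OSS_FILE}" "https://example-bucket.oss-cn-shanghai.aliyuncs.com/${OSS_FILE}"
fi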

push image

2. push

  1. Configure the login credentials in GitHub
  2. Log in to the project's private registry
  3. Download the sealer binary
  4. The image prefix is fixed; only the tag changes, e.g. cloud-image-registry.cn-shanghai.cr.aliyuncs.com/foundations/ackdistro:tag
  5. sealer push imageName, e.g.: sealer push cloud-image-registry.cn-shanghai.cr.aliyuncs.com/foundations/ackdistro:$tag (see the sketch below)
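
A hedged sketch of the push flow above; the login flags are assumed to follow sealer's documented registry/user/password form, and the credential variables are illustrative:

tag=v1-22-15-ack-10   # illustrative tag
sealer login cloud-image-registry.cn-shanghai.cr.aliyuncs.com -u "$REG_USER" -p "$REG_PASS"
sealer push cloud-image-registry.cn-shanghai.cr.aliyuncs.com/foundations/ackdistro:"$tag"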

1.22 k8s test report

Summarizing 2 Failures:

[Fail] [sig-apps] Daemon set [Serial] [It] should rollback without unnecessary restarts [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:431

[Fail] [sig-auth] ServiceAccounts [It] ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/service_accounts.go:789

Ran 346 of 6432 Specs in 5737.033 seconds
FAIL! -- 344 Passed | 2 Failed | 0 Pending | 6086 Skipped
--- FAIL: TestE2E (5739.15s)
FAIL

Ginkgo ran 1 suite in 1h35m39.279950085s
Test Suite Failed
2022/06/30 04:25:44 Saving results at /tmp/sonobuoy/results
2022/06/30 04:25:44 running command: exit status 1
namespace="sonobuoy" pod="sonobuoy-e2e-job-dd069f1531904f7c" container="sonobuoy-worker"
time="2022-06-30T04:25:45Z" level=info msg="Detected done file, transmitting result file" resultFile=/tmp/sonobuoy/results/e2e.tar.gz
namespace="sonobuoy" pod="sonobuoy" container="kube-sonobuoy"
time="2022-06-30T04:25:45Z" level=info msg="received request" client_cert="[e2e]" method=PUT plugin_name=e2e url=/api/v1/results/global/e2e
time="2022-06-30T04:25:45Z" level=info msg="Last update to annotations on exit"
time="2022-06-30T04:25:45Z" level=info msg="Shutting down aggregation server"
time="2022-06-30T04:25:45Z" level=info msg="Resources is not set explicitly implying query all resources, but skipping secrets for safety. Specify the value explicitly in Resources to gather this data."
time="2022-06-30T04:25:45Z" level=info msg="Collecting Node Configuration and Health..."
time="2022-06-30T04:25:45Z" level=info msg="Creating host results for izj6c0ccak9v1pxg9muizpz under /tmp/sonobuoy/b335dbc0-151d-42d6-a824-61803851b8c0/hosts/izj6c0ccak9v1pxg9muizpz\n"
time="2022-06-30T04:25:45Z" level=info msg="Running cluster queries"
W0630 04:25:45.810700 1 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
W0630 04:25:46.033333 1 warnings.go:70] v1 ComponentStatus is deprecated in v1.19+
time="2022-06-30T04:25:46Z" level=info msg="Running ns query (kube-system)"
time="2022-06-30T04:25:46Z" level=info msg="Namespace acs-system Matched=false"
time="2022-06-30T04:25:46Z" level=info msg="Namespace default Matched=false"
time="2022-06-30T04:25:46Z" level=info msg="Namespace disruption-9089 Matched=false"
time="2022-06-30T04:25:46Z" level=info msg="Namespace kube-node-lease Matched=false"
time="2022-06-30T04:25:46Z" level=info msg="Namespace kube-public Matched=false"
time="2022-06-30T04:25:46Z" level=info msg="Namespace kube-system Matched=true"
time="2022-06-30T04:25:46Z" level=info msg="Namespace resourcequota-3125 Matched=false"
time="2022-06-30T04:25:46Z" level=info msg="Namespace sonobuoy Matched=true"
time="2022-06-30T04:25:46Z" level=info msg="Collecting Pod Logs by namespace (kube-system)"
time="2022-06-30T04:25:48Z" level=info msg="Collecting Pod Logs by namespace (sonobuoy)"
time="2022-06-30T04:25:48Z" level=info msg="Collecting Pod Logs by FieldSelectors []"
time="2022-06-30T04:25:48Z" level=info msg="Log lines after this point will not appear in the downloaded tarball."
time="2022-06-30T04:25:49Z" level=info msg="Results available at /tmp/sonobuoy/202206300249_sonobuoy_b335dbc0-151d-42d6-a824-61803851b8c0.tar.gz"
time="2022-06-30T04:25:49Z" level=info msg="no-exit was specified, sonobuoy is now blocking"

[paralb] supporting dual-stack

Issue Description
Paralb will support dual-stack.

We are releasing a new version of Paralb with dual-stack support, meaning that LoadBalancer services in Kubernetes can have dual-stack VIPs as external IPs supplied by Paralb.
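
To illustrate what this looks like on the Kubernetes side, a dual-stack LoadBalancer Service might be declared as below; these are standard Kubernetes dual-stack fields, not Paralb-specific ones, and they require a dual-stack-enabled cluster:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: demo-lb          # illustrative name
spec:
  type: LoadBalancer
  ipFamilyPolicy: PreferDualStack
  ipFamilies: [IPv4, IPv6]
  selector:
    app: demo            # illustrative selector
  ports:
  - port: 80
EOF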

open-local-agent conflicts with yoda-agent

Bug Report

Type: bug report

What happened

After helm installing the open-local charts, the open-local-agent pod STATUS is CrashLoopBackOff.
[screenshot]

What you expected to happen

open-local-agent should start successfully

How to reproduce it (as minimally and precisely as possible)

1. sealer run -f ./ClusterFile
2. helm install open-local open-local-chart

Anything else we need to know?

Environment

  • ACK Distro version: v1-22-15-ack-10
  • OS (e.g. cat /etc/os-release): CentOS 7
  • Kernel (e.g. uname -a): 4.18.0-348.20.1.el7.aarch64
  • Others:

After some digging, it appears to conflict with yoda-agent: port 1736 is already in use by yoda-agent.
[screenshot]
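
A quick check to confirm the suspected conflict (not from the original report):

ss -ltnp | grep 1736   # show which process is already listening on port 1736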

What is the version policy of ACK distro?

I found that ACK Distro currently has no version policy, such as ack-distro 0.1.0 or 1.0.0, or a defined feature set per release.

I think if ack-distro had a detailed version policy, it would give users more confidence when deciding whether to adopt it.

Security Bug

Hi,

I submitted a security bug on security.alibaba.com related to this GitHub repository, but I haven't heard anything back from them. Navigating the security.alibaba.com website is also very tough, as it always errors out.

I see that you have disabled the aliyun/Alibaba Cloud credentials. Can you please help me get in touch with the security team?


[chore] do we need to delete `/etc/containerd/` when doing clean ops

Bug Report

Type: bug report

What happened

After running sealer delete -a, the containerd files under /etc/containerd/ are not deleted.

[screenshot]

What you expected to happen

How to reproduce it (as minimally and precisely as possible)

Anything else we need to know?
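
Not from the reporter: until clean handles this, a hedged manual cleanup after sealer delete -a might be:

systemctl stop containerd 2>/dev/null || true   # in case the daemon is still running
rm -rf /etc/containerd/                         # remove the leftover configuration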

Environment

  • ACK Distro version:
  • OS (e.g. cat /etc/os-release):
  • Kernel (e.g. uname -a):
  • Others:
