
Comments (18)

gentios commented on August 27, 2024

Hi @alexvicegrab, thank you for your quick response.

My ca_values.yaml looks like this:

image:
  tag: 1.2.0

ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/cluster-issuer: "letsencrypt-production"
  path: /
  hosts:
    # TODO: Change this back
    - ca.lf.notarised.xyz
  tls:
    - secretName: ca--tls
      hosts:
        # TODO: Change this back
        - ca.lf.notarised.xyz

persistence:
  accessMode: ReadWriteOnce
  size: 1Gi
  storageClass: openebs-standalone

caName: ca

postgresql:
  enabled: true

config:
  hlfToolsVersion: 1.2.0
  csr:
    names:
      c: MK
      st: Skopje
      l:
      o: "Notarised"
      ou: Blockchain
  affiliations:
    notarised: []

affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 95
        podAffinityTerm:
          topologyKey: "kubernetes.io/hostname"
          labelSelector:
            matchLabels:
              app: hlf-ca
  podAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          app: postgresql
          release: ca
      topologyKey: "kubernetes.io/hostname"

Can you please tell me what changes I have to make?


alexvicegrab commented on August 27, 2024

Hi @gentios, you either need to specify the persistence you want to use for PostgreSQL in the postgresql block, using the PostgreSQL chart conventions, or alternatively deploy a PostgreSQL database separately (this is my preferred way of doing things, for upgradability and loose coupling) and fill in these values instead:

externalDatabase:
  # Either postgres or mysql
  type:
  host: ""
  port: ""
  database: ""
  username: ""
  password: ""


gentios commented on August 27, 2024

Thank you @alexvicegrab for your support, you guys have done a great job.


gentios commented on August 27, 2024

@alexvicegrab yes, I managed to fix it. When I checked the logs of the CA, the error turned out to be because my database name contained a hyphen (-) :(

I appreciate your help, and I would like to enrich the documentation with these extra steps you told me about; I would be glad to contribute.


alexvicegrab commented on August 27, 2024

Hi @gentios, it looks like the PostgreSQL volume is not being set up correctly. You may need to specify some extra values in your CA values file to use the correct storage class, not only for the CA itself but also for the accompanying PostgreSQL chart.

Alternatively you can install the PostgreSQL chart separately and point to it from the CA.


gentios commented on August 27, 2024

Hi @alexvicegrab, I did all the steps but the CA is now in a Pending state and won't deploy. I have a running PostgreSQL database and I can connect to it, and I have set up cert-manager successfully for this domain: ca.lf.notarised.xyz. Here is the relevant output:

root@notarised-master-01:~# kubectl get po --all-namespaces
NAMESPACE             NAME                                                              READY     STATUS    RESTARTS   AGE
blockchain            ca-hlf-ca-8966595f9-txb87                                         0/1       Pending   0          16h
cert-manager          cert-manager-66cf5c6dbf-c8jc4                                     1/1       Running   0          17h
cert-manager          cert-manager-webhook-7bcfdbdcfd-8zmgz                             1/1       Running   0          17h
default               postgres-postgresql-0                                             1/1       Running   0          17h
ingress-controlller   nginx-ingress-controller-frqcm                                    1/1       Running   0          17h
ingress-controlller   nginx-ingress-controller-l76ng                                    1/1       Running   0          17h
ingress-controlller   nginx-ingress-controller-v5frh                                    1/1       Running   0          17h
ingress-controlller   nginx-ingress-default-backend-6b9b546dc8-bdw9s                    1/1       Running   0          17h
kube-system           etcd-notarised-master-01                                          1/1       Running   0          18h
kube-system           kube-apiserver-notarised-master-01                                1/1       Running   0          18h
kube-system           kube-controller-manager-notarised-master-01                       1/1       Running   0          18h
kube-system           kube-dns-6f4fd4bdf-d4s8k                                          3/3       Running   0          18h
kube-system           kube-flannel-ds-bzq8k                                             1/1       Running   0          18h
kube-system           kube-flannel-ds-jzznx                                             1/1       Running   0          18h
kube-system           kube-flannel-ds-msvsf                                             1/1       Running   0          18h
kube-system           kube-flannel-ds-r5h7t                                             1/1       Running   0          18h
kube-system           kube-proxy-gkdjl                                                  1/1       Running   0          18h
kube-system           kube-proxy-qfd2k                                                  1/1       Running   0          18h
kube-system           kube-proxy-v2rrv                                                  1/1       Running   0          18h
kube-system           kube-proxy-wfsxd                                                  1/1       Running   0          18h
kube-system           kube-scheduler-notarised-master-01                                1/1       Running   0          18h
kube-system           tiller-deploy-774df8f6c-wgx8w                                     1/1       Running   0          17h
openebs               cstor-sparse-pool-4cii-56d476c7f5-qhkkh                           2/2       Running   0          17h
openebs               cstor-sparse-pool-i3gu-6df5588b6b-7x6hj                           2/2       Running   0          17h
openebs               cstor-sparse-pool-nmj2-67995c8778-rmtg2                           2/2       Running   0          17h
openebs               openebs-apiserver-5fb96d6cdb-622l5                                1/1       Running   0          17h
openebs               openebs-ndm-b4v9h                                                 1/1       Running   0          17h
openebs               openebs-ndm-dn7sw                                                 1/1       Running   0          17h
openebs               openebs-ndm-hmjrj                                                 1/1       Running   0          17h
openebs               openebs-provisioner-6695f5f78c-rcrvc                              1/1       Running   2          17h
openebs               openebs-snapshot-operator-5b6f7d666c-glzn9                        2/2       Running   2          17h
openebs               pvc-2e3d641f-39d5-11e9-b75c-9600001cbe61-target-7666c5fd948dbv4   3/3       Running   0          17h
openebs               pvc-524ffd50-39d9-11e9-b75c-9600001cbe61-target-cb4969799-rrnb8   3/3       Running   0          16h

The CA pod description (kubectl describe output):

Volumes:
  data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  ca-hlf-ca
    ReadOnly:   false
  ca-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      ca-hlf-ca--config
    Optional:  false
  default-token-b7clt:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-b7clt
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                  From               Message
  ----     ------            ----                 ----               -------
  Warning  FailedScheduling  2m (x3432 over 16h)  default-scheduler  0/4 nodes are available: 1 PodToleratesNodeTaints, 4 MatchInterPodAffinity, 4 PodAffinityRulesNotMatch.

The services in my cluster:

blockchain            ca-hlf-ca                                  ClusterIP   10.99.172.10     <none>          7054/TCP                              16h
cert-manager          cert-manager-webhook                       ClusterIP   10.100.144.232   <none>          443/TCP                               17h
default               kubernetes                                 ClusterIP   10.96.0.1        <none>          443/TCP                               18h
default               postgres-postgresql                        ClusterIP   10.105.86.47     <none>          5432/TCP                              17h
default               postgres-postgresql-headless               ClusterIP   None             <none>          5432/TCP                              17h
ingress-controlller   nginx-ingress-controller                   ClusterIP   10.104.186.68    95.216.180.54   80/TCP,443/TCP                        17h
ingress-controlller   nginx-ingress-controller-metrics           ClusterIP   10.96.239.157    <none>          9913/TCP                              17h
ingress-controlller   nginx-ingress-controller-stats             ClusterIP   10.102.5.253     <none>          18080/TCP                             17h
ingress-controlller   nginx-ingress-default-backend              ClusterIP   10.102.28.188    <none>          80/TCP                                17h
kube-system           kube-dns                                   ClusterIP   10.96.0.10       <none>          53/UDP,53/TCP                         18h
kube-system           tiller-deploy                              ClusterIP   10.105.82.7      <none>          44134/TCP                             17h
openebs               openebs-apiservice                         ClusterIP   10.111.75.39     <none>          5656/TCP                              17h
openebs               pvc-2e3d641f-39d5-11e9-b75c-9600001cbe61   ClusterIP   10.110.135.174   <none>          3260/TCP,7777/TCP,6060/TCP,9500/TCP   17h
openebs               pvc-524ffd50-39d9-11e9-b75c-9600001cbe61   ClusterIP   10.108.249.247   <none>          3260/TCP,7777/TCP,6060/TCP,9500/TCP   16h

And the cert manager logs:

I0226 15:08:18.301282       1 sync.go:263] Certificate blockchain/ca--tls scheduled for renewal in 1438h41m18.698753399s
I0226 15:08:18.332621       1 controller.go:151] certificates controller: Finished processing work item "blockchain/ca--tls"
I0226 15:08:56.076739       1 controller.go:173] ingress-shim controller: syncing item 'blockchain/ca-hlf-ca'
I0226 15:08:56.077417       1 sync.go:177] Certificate "ca--tls" for ingress "ca-hlf-ca" already exists
I0226 15:08:56.077543       1 sync.go:180] Certificate "ca--tls" for ingress "ca-hlf-ca" is up to date
I0226 15:08:56.077560       1 controller.go:179] ingress-shim controller: Finished processing work item "blockchain/ca-hlf-ca"
I0226 15:13:12.245470       1 controller.go:173] ingress-shim controller: syncing item 'blockchain/ca-hlf-ca'
E0226 15:13:12.245843       1 controller.go:197] ingress 'blockchain/ca-hlf-ca' in work queue no longer exists
I0226 15:13:12.245886       1 controller.go:179] ingress-shim controller: Finished processing work item "blockchain/ca-hlf-ca"
I0226 15:13:12.273564       1 controller.go:145] certificates controller: syncing item 'blockchain/ca--tls'
E0226 15:13:12.274193       1 controller.go:170] certificate 'blockchain/ca--tls' in work queue no longer exists
I0226 15:13:12.274525       1 controller.go:151] certificates controller: Finished processing work item "blockchain/ca--tls"
I0226 15:15:16.369088       1 controller.go:173] ingress-shim controller: syncing item 'blockchain/ca-hlf-ca'
I0226 15:15:16.410775       1 controller.go:145] certificates controller: syncing item 'blockchain/ca--tls'
I0226 15:15:16.414616       1 controller.go:179] ingress-shim controller: Finished processing work item "blockchain/ca-hlf-ca"
I0226 15:15:16.414887       1 controller.go:173] ingress-shim controller: syncing item 'blockchain/ca-hlf-ca'
I0226 15:15:16.415056       1 sync.go:177] Certificate "ca--tls" for ingress "ca-hlf-ca" already exists
I0226 15:15:16.415203       1 sync.go:180] Certificate "ca--tls" for ingress "ca-hlf-ca" is up to date
I0226 15:15:16.415285       1 controller.go:179] ingress-shim controller: Finished processing work item "blockchain/ca-hlf-ca"
I0226 15:15:16.415049       1 helpers.go:183] Setting lastTransitionTime for Certificate "ca--tls" condition "Ready" to 2019-02-26 15:15:16.41502761 +0000 UTC m=+1923.733847235
I0226 15:15:16.416280       1 sync.go:263] Certificate blockchain/ca--tls scheduled for renewal in 1438h34m20.583738866s
I0226 15:15:16.447610       1 controller.go:151] certificates controller: Finished processing work item "blockchain/ca--tls"
I0226 15:15:16.448702       1 controller.go:173] ingress-shim controller: syncing item 'blockchain/ca-hlf-ca'
I0226 15:15:16.450521       1 sync.go:177] Certificate "ca--tls" for ingress "ca-hlf-ca" already exists
I0226 15:15:16.450606       1 sync.go:180] Certificate "ca--tls" for ingress "ca-hlf-ca" is up to date
I0226 15:15:16.450679       1 controller.go:179] ingress-shim controller: Finished processing work item "blockchain/ca-hlf-ca"
I0226 15:15:16.450775       1 controller.go:145] certificates controller: syncing item 'blockchain/ca--tls'
I0226 15:15:16.452365       1 sync.go:263] Certificate blockchain/ca--tls scheduled for renewal in 1438h34m20.547659256s
I0226 15:15:16.480964       1 controller.go:151] certificates controller: Finished processing work item "blockchain/ca--tls"
I0226 15:15:55.876584       1 controller.go:173] ingress-shim controller: syncing item 'blockchain/ca-hlf-ca'
I0226 15:15:55.879147       1 sync.go:177] Certificate "ca--tls" for ingress "ca-hlf-ca" already exists
I0226 15:15:55.879610       1 sync.go:180] Certificate "ca--tls" for ingress "ca-hlf-ca" is up to date
I0226 15:15:55.879747       1 controller.go:179] ingress-shim controller: Finished processing work item "blockchain/ca-hlf-ca"

I am using OpenEBS to provide storage for PostgreSQL and also for the other components of the blockchain.

My YAML configuration file looks like this:

persistence:
  accessMode: ReadWriteOnce
  size: 1Gi
  storageClass: openebs-cstor-sparse

caName: ca

externalDatabase:
  # Either postgres or mysql
  type: postgres
  host: "10.244.2.5"
  port: "5432"
  database: ""
  username: ""
  password: ""

Here are my PVs:

pvc-2e3d641f-39d5-11e9-b75c-9600001cbe61   8Gi        RWO            Delete           Bound     default/data-postgres-postgresql-0   openebs-cstor-sparse             17h
pvc-524ffd50-39d9-11e9-b75c-9600001cbe61   1Gi        RWO            Delete           Bound     blockchain/ca-hlf-ca                 openebs-cstor-sparse             16h

PostgreSQL logs:

INFO  ==> Starting postgresql... 
2019-02-26 14:49:08.934 GMT [131] LOG:  received fast shutdown request
2019-02-26 14:49:09.252 GMT [131] LOG:  aborting any active transactions
2019-02-26 14:49:09.253 GMT [131] LOG:  worker process: logical replication launcher (PID 138) exited with exit code 1
2019-02-26 14:49:09.254 GMT [133] LOG:  shutting down
2019-02-26 14:49:11.127 GMT [131] LOG:  database system is shut down
2019-02-26 14:49:13.739 GMT [220] LOG:  listening on IPv4 address "0.0.0.0", port 5432
2019-02-26 14:49:13.739 GMT [220] LOG:  listening on IPv6 address "::", port 5432
2019-02-26 14:49:14.007 GMT [220] LOG:  listening on Unix socket "/tmp/.s.PGSQL.5432"
2019-02-26 14:49:14.863 GMT [222] LOG:  database system was shut down at 2019-02-26 14:49:10 GMT
2019-02-26 14:49:15.153 GMT [220] LOG:  database system is ready to accept connections


alexvicegrab commented on August 27, 2024

Aha, because you are using a separate service for PostgreSQL, rather than the bundled Helm chart, you will need to modify the affinity rules in the CA values YAML to remove the affinity to the postgresql deployment.


gentios commented on August 27, 2024

@alexvicegrab thank you for the response. Do I have to delete this part?

  podAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          app: postgresql
          release: ca
      topologyKey: "kubernetes.io/hostname"

What if I want the CA to spin up a PostgreSQL database for me? Where do I have to modify the values in order to give persistence and a storage class to PostgreSQL?


alexvicegrab commented on August 27, 2024

Yes, that's the bit you want to delete.
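
Concretely, with that block removed, the affinity section of your earlier ca_values.yaml is reduced to just the anti-affinity between CA replicas:

affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 95
        podAffinityTerm:
          topologyKey: "kubernetes.io/hostname"
          labelSelector:
            matchLabels:
              app: hlf-ca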

If you want the CA to spin up a PostgreSQL database for you, instead of the externalDatabase, you can enable the postgresql one here: https://github.com/aidtechnology/at-charts/blob/master/hlf-ca/values.yaml#L69

I personally prefer to create the database separately with the default PostgreSQL Helm chart (calling it ca-pg, for instance), and then add a relevant affinity pointing to the release ca-pg instead of ca, as it makes it easier to update this database if required. This is how I implement it in Nephos, our (alpha) library to automate deployment of Fabric networks.
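
A rough sketch of that approach, in Helm 2 syntax to match the Tiller-based cluster shown earlier (the release name ca-pg, namespace, and storage settings are assumptions you would adapt):

# Install a standalone PostgreSQL release for the CA to use
helm install stable/postgresql --name ca-pg --namespace blockchain \
  --set persistence.storageClass=openebs-cstor-sparse \
  --set persistence.size=8Gi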


alexvicegrab commented on August 27, 2024

In that same location as above, you should be able to add extra values for the PostgreSQL Helm chart, as it is specified as a sub-dependency. So, something like this:

postgresql:
  ## Whether to deploy a postgres server to satisfy the Fabric CA database requirements.
  # To use an external database set this to false and configure the externalDatabase parameters, specifying the type to 'postgres'
  enabled: true
  persistence:
    enabled: true
    storageClass: "myCustomSC"
    accessModes:
      - ReadWriteOnce
    size: 8Gi

See here for details: https://github.com/helm/charts/blob/master/stable/postgresql/values.yaml


gentios commented on August 27, 2024

@alexvicegrab thank you for the support. I did that, but it keeps restarting; here are the events from my CA pod:

Type    Reason                 Age                From                          Message
  ----    ------                 ----               ----                          -------
  Normal  Scheduled              1m                 default-scheduler             Successfully assigned ca-hlf-ca-f598f778f-mfqsw to notarised-worker-03
  Normal  SuccessfulMountVolume  1m                 kubelet, notarised-worker-03  MountVolume.SetUp succeeded for volume "ca-config"
  Normal  SuccessfulMountVolume  1m                 kubelet, notarised-worker-03  MountVolume.SetUp succeeded for volume "default-token-b7clt"
  Normal  SuccessfulMountVolume  1m                 kubelet, notarised-worker-03  MountVolume.SetUp succeeded for volume "pvc-9a9161b5-3a93-11e9-b75c-9600001cbe61"
  Normal  Pulled                 19s (x2 over 50s)  kubelet, notarised-worker-03  Container image "jwilder/dockerize" already present on machine
  Normal  Created                19s (x2 over 50s)  kubelet, notarised-worker-03  Created container
  Normal  Started                19s (x2 over 50s)  kubelet, notarised-worker-03  Started container

My pods

blockchain            ca-hlf-ca-f598f778f-mfqsw                                         0/1       Init:0/1   2          2m
blockchain            ca-pg-postgresql-0                                                1/1       Running    0          5m
cert-manager          cert-manager-66cf5c6dbf-c8jc4                                     1/1       Running    0          22h
cert-manager          cert-manager-webhook-7bcfdbdcfd-8zmgz                             1/1       Running    0          22h
ingress-controlller   nginx-ingress-controller-frqcm                                    1/1       Running    0          22h
ingress-controlller   nginx-ingress-controller-l76ng                                    1/1       Running    0          22h
ingress-controlller   nginx-ingress-controller-v5frh                                    1/1       Running    0          22h
ingress-controlller   nginx-ingress-default-backend-6b9b546dc8-bdw9s                    1/1       Running    0          22h
kube-system           etcd-notarised-master-01                                          1/1       Running    0          1d
kube-system           kube-apiserver-notarised-master-01                                1/1       Running    0          1d
kube-system           kube-controller-manager-notarised-master-01                       1/1       Running    0          1d
kube-system           kube-dns-6f4fd4bdf-d4s8k                                          3/3       Running    0          1d
kube-system           kube-flannel-ds-bzq8k                                             1/1       Running    0          1d
kube-system           kube-flannel-ds-jzznx                                             1/1       Running    0          1d

And I changed the affinity values like this:

podAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          app: postgresql
          release: ca-pg
      topologyKey: "kubernetes.io/hostname"


alexvicegrab commented on August 27, 2024

Aha, what are the logs of the restarting CA pod now?
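
For instance, a minimal way to pull them (pod name taken from your listing above; the main container is named ca, and --previous shows the last crashed instance):

kubectl logs ca-hlf-ca-f598f778f-mfqsw -n blockchain -c ca --previous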


alexvicegrab commented on August 27, 2024

And did you pass the password information of the PostgreSQL deployment to the CA?

I recommend not saving this in the values file, since it's sensitive, but retrieving it from the PostgreSQL Helm chart secret and adding it as a --set parameter.
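
As a sketch, assuming the PostgreSQL release is called ca-pg in the blockchain namespace and the CA release is called ca (the stable/hlf-ca chart reference and the postgresql-password secret key follow the usual chart conventions and may differ in your setup):

# Read the generated password out of the chart's secret and pass it via --set
PG_PASSWORD=$(kubectl get secret ca-pg-postgresql --namespace blockchain \
  -o jsonpath="{.data.postgresql-password}" | base64 --decode)
helm upgrade ca stable/hlf-ca --namespace blockchain -f ca_values.yaml \
  --set externalDatabase.password="$PG_PASSWORD"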


gentios commented on August 27, 2024

@alexvicegrab the ca logs are as below:

root@notarised-master-01:~/notario-kubernetes# kubectl logs ca-hlf-ca-f598f778f-gscnk -n blockchain
Error from server (BadRequest): container "ca" in pod "ca-hlf-ca-f598f778f-gscnk" is waiting to start: PodInitializing

and if I describe the CA pod, the events are as below:

Events:
  Type     Reason                 Age                From                          Message
  ----     ------                 ----               ----                          -------
  Normal   Scheduled              2m                 default-scheduler             Successfully assigned ca-hlf-ca-f598f778f-gscnk to notarised-worker-03
  Normal   SuccessfulMountVolume  2m                 kubelet, notarised-worker-03  MountVolume.SetUp succeeded for volume "ca-config"
  Normal   SuccessfulMountVolume  2m                 kubelet, notarised-worker-03  MountVolume.SetUp succeeded for volume "default-token-b7clt"
  Normal   SuccessfulMountVolume  2m                 kubelet, notarised-worker-03  MountVolume.SetUp succeeded for volume "pvc-35486a06-3a98-11e9-b75c-9600001cbe61"
  Normal   Pulled                 44s (x3 over 1m)   kubelet, notarised-worker-03  Container image "jwilder/dockerize" already present on machine
  Normal   Created                44s (x3 over 1m)   kubelet, notarised-worker-03  Created container
  Normal   Started                43s (x3 over 1m)   kubelet, notarised-worker-03  Started container
  Warning  BackOff                13s (x2 over 56s)  kubelet, notarised-worker-03  Back-off restarting failed container

I am running PostgreSQL as a NodePort service; here are my services:

blockchain            ca-hlf-ca                                  ClusterIP   10.105.243.165   <none>          7054/TCP                              3m
blockchain            ca-pg-postgresql                           NodePort    10.104.135.150   <none>          5432:31070/TCP                        39m
blockchain            ca-pg-postgresql-headless                  ClusterIP   None             <none>          5432/TCP                              39m
cert-manager          cert-manager-webhook                       ClusterIP   10.100.144.232   <none>          443/TCP                               23h
default               kubernetes                                 ClusterIP   10.96.0.1        <none>          443/TCP                               1d
ingress-controlller   nginx-ingress-controller                   ClusterIP   10.104.186.68    95.216.180.54   80/TCP,443/TCP                        23h
ingress-controlller   nginx-ingress-controller-metrics           ClusterIP   10.96.239.157    <none>          9913/TCP                              23h
ingress-controlller   nginx-ingress-controller-stats             ClusterIP   10.102.5.253     <none>          18080/TCP                             23h
ingress-controlller   nginx-ingress-default-backend              ClusterIP   10.102.28.188    <none>          80/TCP                                23h
kube-system           kube-dns                                   ClusterIP   10.96.0.10       <none>          53/UDP,53/TCP                         1d
kube-system           tiller-deploy                              ClusterIP   10.105.82.7      <none>          44134/TCP                             23h
openebs               openebs-apiservice                         ClusterIP   10.111.75.39     <none>          5656/TCP                              23h
openebs               pvc-287f42e4-3a93-11e9-b75c-9600001cbe61   ClusterIP   10.97.152.169    <none>          3260/TCP,7777/TCP,6060/TCP,9500/TCP   39m
openebs               pvc-35486a06-3a98-11e9-b75c-9600001cbe61   ClusterIP   10.97.101.52     <none>          3260/TCP,7777/TCP,6060/TCP,9500/TCP   3m
openebs               pvc-734cecb6-3a8b-11e9-b75c-9600001cbe61   ClusterIP   10.103.115.226   <none>          3260/TCP,7777/TCP,6060/TCP,9500/TCP   1h
openebs               pvc-e4fc07c5-3a79-11e9-b75c-9600001cbe61   ClusterIP   10.103.63.84     <none>          3260/TCP,7777/TCP,6060/TCP,9500/TCP   3h
openebs               pvc-ec96d2d1-3a7c-11e9-b75c-9600001cbe61   ClusterIP   10.104.229.223   <none>          3260/TCP,7777/TCP,6060/TCP,9500/TCP   3h

And I am configuring ca_values.yaml as below:

externalDatabase:
  # Either postgres or mysql
  type: postgres
  host: "localhost"
  port: "31070"
  database: ""
  username: ""
  password: ""

Also, the PVs are created successfully:

root@notarised-master-01:~/notario-kubernetes# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM                                STORAGECLASS           REASON    AGE
pvc-287f42e4-3a93-11e9-b75c-9600001cbe61   8Gi        RWO            Delete           Bound     blockchain/data-ca-pg-postgresql-0   openebs-cstor-sparse             43m
pvc-35486a06-3a98-11e9-b75c-9600001cbe61   1Gi        RWO            Delete           Bound     blockchain/ca-hlf-ca                 openebs-cstor-sparse             7m


gentios commented on August 27, 2024

@alexvicegrab I set up a clean environment and now I get new errors in the CA pod events:

Type     Reason                 Age               From                          Message
  ----     ------                 ----              ----                          -------
  Normal   Scheduled              2m                default-scheduler             Successfully assigned ca-hlf-ca-7745cf64d7-8r85z to notarised-worker-03
  Normal   SuccessfulMountVolume  2m                kubelet, notarised-worker-03  MountVolume.SetUp succeeded for volume "ca-config"
  Normal   SuccessfulMountVolume  2m                kubelet, notarised-worker-03  MountVolume.SetUp succeeded for volume "default-token-65cwg"
  Normal   SuccessfulMountVolume  1m                kubelet, notarised-worker-03  MountVolume.SetUp succeeded for volume "pvc-8298082a-3a9f-11e9-9788-9600001cedf5"
  Normal   Pulling                1m                kubelet, notarised-worker-03  pulling image "jwilder/dockerize"
  Normal   Pulled                 1m                kubelet, notarised-worker-03  Successfully pulled image "jwilder/dockerize"
  Normal   Created                1m                kubelet, notarised-worker-03  Created container
  Normal   Started                1m                kubelet, notarised-worker-03  Started container
  Normal   Pulling                1m                kubelet, notarised-worker-03  pulling image "hyperledger/fabric-ca:1.2.0"
  Normal   Pulled                 1m                kubelet, notarised-worker-03  Successfully pulled image "hyperledger/fabric-ca:1.2.0"
  Warning  Unhealthy              42s (x3 over 1m)  kubelet, notarised-worker-03  Liveness probe failed: HTTP probe failed with statuscode: 500
  Normal   Created                11s (x2 over 1m)  kubelet, notarised-worker-03  Created container
  Normal   Started                11s (x2 over 1m)  kubelet, notarised-worker-03  Started container
  Normal   Killing                11s               kubelet, notarised-worker-03  Killing container with id docker://ca:Container failed liveness probe.. Container will be killed and recreated.
  Normal   Pulled                 11s               kubelet, notarised-worker-03  Container image "hyperledger/fabric-ca:1.2.0" already present on machine
  Warning  Unhealthy              6s (x7 over 1m)   kubelet, notarised-worker-03  Readiness probe failed: HTTP probe failed with statuscode: 500

Is this because navigating to the domain ca.lf.notarised.xyz throws a 500?


alexvicegrab commented on August 27, 2024

https://ca.lf.notarised.xyz/cainfo works fine for me


alexvicegrab commented on August 27, 2024

Did you change anything?


alexvicegrab commented on August 27, 2024

Hi @gentios, I'd be delighted for you to submit a PR, ideally in the workshop, webinar (here) or Nephos repositories:

https://github.com/aidtechnology/hgf-k8s-workshop
https://github.com/aidtechnology/nephos

