jenkins-x / terraform-aws-eks-jx
A Terraform module for creating Jenkins X infrastructure on AWS
License: Apache License 2.0
Currently, the jx-requirements.yml is created in the module directory. It would be more useful for users if it were generated in the current working directory.
It would be nice to use Terratest to write tests for this and the GKE module: https://github.com/gruntwork-io/terratest.
Currently, you need to create an additional IAM user just for the Vault setup, and you also need to export the credentials for this user in environment variables prior to running jx boot. This is not clear in the documentation and easy to get wrong, leading to failing Jenkins X installs.
Initially, this was required due to some issues with Vault which got resolved upstream. Now that a new Vault version is available we can remove this workaround.
On the Terraform side we will need to set up proper IAM Roles for Service Accounts for Vault.
However, there are also changes required in jx itself. We need to make sure that the Vault Operator uses the right version of Vault and that the current jx boot code will also make use of the new way to configure the Vault setup (this is to ensure that, at least for now, one can still use jx create cluster with jx boot creating all resources).
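On the Terraform side, the IRSA role for Vault could be sketched roughly like this. This is a sketch only: the resource names, variables, and the service account name are assumptions, not the module's actual code.

```hcl
// Trust policy letting the Vault service account assume the role
// via the cluster's OIDC identity provider (IRSA)
data "aws_iam_policy_document" "vault_assume" {
  statement {
    effect  = "Allow"
    actions = ["sts:AssumeRoleWithWebIdentity"]
    principals {
      type        = "Federated"
      identifiers = [var.oidc_provider_arn]
    }
    condition {
      test     = "StringEquals"
      variable = "${var.oidc_provider_url}:sub"
      values   = ["system:serviceaccount:jx:vault"]
    }
  }
}

resource "aws_iam_role" "vault" {
  name               = "${var.cluster_name}-vault"
  assume_role_policy = data.aws_iam_policy_document.vault_assume.json
}
```

The Vault pod's ServiceAccount would then reference this role's ARN instead of relying on exported IAM user credentials.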
We keep seeing the following error after some time running pull request builds:
Error: Error attaching policy arn:aws:iam::296178596335:policy/vault_us-east-1-2020050616512586470000000c to IAM User **-bdd-test: LimitExceeded: Cannot exceed quota for PoliciesPerUser: 10
status code: 409, request id: 53cf51d8-3234-47b7-90bd-f75efe7daa76
After manually cleaning up policies the builds work again until the limit is exceeded again.
It seems something does not get cleaned up properly. If the policies in question are created by Terraform, then they should be deleted on terraform destroy, right? If for some reason the policies cannot be deleted by Terraform, we might need to add some manual cleanup on top of the terraform destroy.
After doing a full terraform destroy, if we try to run a terraform apply, then it fails with this message:
Error: error getting user: NoSuchEntity: The user with name jenkins-x-vault cannot be found.
status code: 404, request id: a916d4a6-bf92-4dce-a12a-a07ff1f78d6f
on ../terraform-aws-eks-jx/modules/vault/main.tf line 19, in data "aws_iam_user" "vault_user":
19: data "aws_iam_user" "vault_user" {
The fix is to add an explicit dependency on the resource creation when querying the data source: hashicorp/terraform#15285 (comment)
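A sketch of that fix, assuming the module creates the user as aws_iam_user.jenkins-x-vault (a guess based on the error messages elsewhere, not verified against the module's code):

```hcl
data "aws_iam_user" "vault_user" {
  user_name = var.vault_user

  // Explicit dependency so Terraform waits for the user to be
  // created before reading it (see hashicorp/terraform#15285)
  depends_on = [aws_iam_user.jenkins-x-vault]
}
```

Note that depends_on on a data source defers the read until apply time, which is exactly what is needed here.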
The default should not be to create public repos, but private ones.
What about splitting the script parts into descriptive modules, either per Jenkins X functional group, similar to https://github.com/jenkins-x/terraform-google-jx/tree/master/jx/modules, or by cloud resource type, e.g. service-account, storage, secret?
After installing Jenkins X, a terraform destroy fails with:
module.eks-jx.module.cluster.module.vpc.aws_internet_gateway.this[0]: Still destroying... [id=igw-02deb1c3a69fc3b48, 14m50s elapsed]
Error: Error waiting for internet gateway (igw-02deb1c3a69fc3b48) to detach: timeout while waiting for state to become 'detached' (last state: 'detaching', timeout: 15m0s)
Error: timeout while waiting for resource to be gone (last state: 'Terminating', timeout: 5m0s)
Error: error deleting S3 Bucket (logs-hardy-tf-jx-20200424063048248600000005): BucketNotEmpty: The bucket you tried to delete is not empty
status code: 409, request id: 8F7DC8F3201C0660, host id: Ql4eaqlM6/d3pPc11DuA0sMnPKiQVM36sQ3lWVB3VRN/E05MvoL/lSSZ0EharZ7XtP1w2fHptYE=
After running jx boot --requirements requirements.yml, my Vault pod fails to start correctly. Before running it, I set environment variables with the Vault access key and secret.
Verifying the CLI packages using version stream URL: https://github.com/jenkins-x/jenkins-x-versions.git and git ref: v1.0.472
using version 2.1.46 of jx
CLI packages kubectl, git, helm seem to be setup correctly
NAME VERSION
jx 2.1.46
Kubernetes cluster v1.15.11-eks-af3caf
kubectl v1.13.2
git 2.23.0
kubectl get pods
NAME READY STATUS RESTARTS AGE
exdns-external-dns-58ddb9fd48-tzrw8 1/1 Running 0 6m24s
jx-vault-jx-0 1/3 CrashLoopBackOff 10 4m17s
jx-vault-jx-configurer-588fd5c76f-zvmjp 1/1 Running 0 4m16s
vault-operator-5b9c49d9-wp55n 1/1 Running 0 4m38s
This happens despite the environment variables for the Vault IAM access key and secret key being correct.
The latest version of this repo creates the following error on the first terraform apply run:
Error: [ERR]: Error building changeset: InvalidChangeBatch: [Tried to create resource record set [name='jenkins.snip.', type='NS'] but it already exists]
status code: 400, request id: 51ad7953-608c-4c8b-8520-49938db17ae8
on modules/dns/main.tf line 17, in resource "aws_route53_record" "subdomain_ns_delegation":
17: resource "aws_route53_record" "subdomain_ns_delegation" {
The domain did not exist prior to running. A re-run of the command also hard fails. Is there a way for this to soft fail on this step and continue executing?
Tested on TF v0.12.26 and v0.12.25.
The master branch failed. 🚨 I recommend you give this issue a high priority, so other packages depending on you can benefit from your bug fixes and new features.
You can find below the list of errors reported by semantic-release. Each one of them has to be resolved in order to automatically publish your package. I'm sure you can resolve this 💪.
Errors are usually caused by a misconfiguration or an authentication problem. With each error reported below you will find an explanation and guidance to help you resolve it.
Once all the errors are resolved, semantic-release will release your package the next time you push a commit to the master branch. You can also manually restart the failed CI job that runs semantic-release.
If you are not sure how to resolve this, here are some links that can help you:
If those don't help, or if this issue is reporting something you think isn't right, you can always ask the humans behind semantic-release.
package.json file: A package.json file at the root of your project is required to release on npm. Please follow the npm guideline to create a valid package.json file.
Good luck with your project ✨
Your semantic-release bot 📦🚀
As a workaround, until issue #20 is resolved and the vault_user becomes obsolete, we should improve the user experience around the required vault_user.
The intended improvement is two-fold. On the Terraform side we are going to create the required user if none is specified, something along the lines of:
resource "aws_iam_user" "test" {
count = var.vault_user == "" ? 1 : 0
name = "test-user"
}
resource "aws_iam_access_key" "test" {
count = var.vault_user == "" ? 1 : 0
user = aws_iam_user.test[0].name
}
We also add the name, id, and secret to the output variables:
output "vault_user" {
value = aws_iam_user.test[0].name
}
output "id" {
value = aws_iam_access_key.test[0].id
}
output "secret" {
value = aws_iam_access_key.test[0].secret
}
In the case where there is no vault_user specified and the user has enough permissions to create new users, this will allow running boot like this:
$ VAULT_AWS_ACCESS_KEY_ID=$(terraform output id) VAULT_AWS_SECRET_ACCESS_KEY=$(terraform output secret) jx boot -r jx-requirements.yml
The above will be documented with an additional note that in the case where new users cannot be created, a user needs to be provided and jx boot needs to be run as:
VAULT_AWS_ACCESS_KEY_ID=<id> VAULT_AWS_SECRET_ACCESS_KEY=<secret> jx boot -r jx-requirements.yml
The build pipeline errors after using this Terraform module to prepare a Jenkins X cluster:
error checking push permissions -- make sure you entered the correct tag name, and that you are authenticated correctly, and try again: checking push permission for "XXXXXXXXXX.dkr.ecr.us-east-1.amazonaws.com/igdianov/test-springboot-app:0.0.1": unsupported status code 401; body: Not Authorized
Pipeline failed on stage 'from-build-pack' : container 'step-build-container-build'. The execution of the pipeline has stopped.
See the conditional creation pattern in the upstream EKS module: https://github.com/terraform-aws-modules/terraform-aws-eks#conditional-creation
Also, when doing a terraform destroy if the cluster has been deleted, destroy fails because this variable is set to true by default and the module tries to query the datasource.
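A hedged sketch of what conditional creation could look like here; the flag name var.create_eks is an assumption for illustration:

```hcl
// Only query the cluster data source when the cluster is expected
// to exist, so destroy does not fail after the cluster is gone
data "aws_eks_cluster" "cluster" {
  count = var.create_eks ? 1 : 0
  name  = var.cluster_name
}
```

Downstream references would then use the splat/index form, e.g. data.aws_eks_cluster.cluster[0].endpoint, guarded by the same flag.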
Let's create a helper script for authenticating against AWS where the IAM user has to assume a role.
Steps to reproduce:
export AWS_ACCESS_KEY_ID=[...]
export AWS_SECRET_ACCESS_KEY=[...]
export AWS_DEFAULT_REGION=us-east-1
echo "module \"eks-jx\" {
source = \"jenkins-x/eks-jx/aws\"
cluster_name = \"jx-demo\"
region = \"us-east-1\"
desired_node_count = 3
min_node_count = 3
max_node_count = 6
}
output \"vault_user_id\" {
value = module.eks-jx.vault_user_id
description = \"The Vault IAM user id\"
}
output \"vault_user_secret\" {
value = module.eks-jx.vault_user_secret
description = \"The Vault IAM user secret\"
}" | tee main.tf
terraform init
terraform apply
export PATH_TO_TERRAFORM=$PWD
cd ..
git clone \
https://github.com/jenkins-x/jenkins-x-boot-config.git \
environment-jx-demo-dev
cd environment-jx-demo-dev
jx boot
The last lines of the output of jx boot are as follows:
...
You can watch progress in the CloudFormation console: https://console.aws.amazon.com/cloudformation/home?region=us-east-1#/stacks/stackinfo?stackId=arn:aws:cloudformation:us-east-1:036548781187:stack/jenkins-x-vault-stack18b4fe6/e1884c10-9472-11ea-9ba2-0a5afd1032bb
error: unable to create/update Vault: unable to set cloud provider specific Vault configuration: unable to apply cloud provider config: an error occurred while creating the vaultCRD resources: executing the Vault CloudFormation : unable to create vault prerequisite resources: ResourceNotReady: failed waiting for successful resource state
error: failed to interpret pipeline file jenkins-x.yml: failed to run '/bin/sh -c jx step boot vault --provider-values-dir ../../kubeProviders' command in directory 'systems/vault', output: ''
Running terraform apply with the subdomain resource record creates a hard failure when the record gets created earlier in the run and hard stops the rest of the deployment.
Error: [ERR]: Error building changeset: InvalidChangeBatch: [Tried to create resource record set [name='jenkins.snip.', type='NS'] but it already exists]
status code: 400, request id: cf01eaa7-e431-4049-a12a-a4bc351a126d
on .terraform/modules/eks-jx/terraform-aws-eks-jx-1.0.5/modules/dns/main.tf line 17, in resource "aws_route53_record" "subdomain_ns_delegation":
17: resource "aws_route53_record" "subdomain_ns_delegation" {
This is only resolved by setting create_and_configure_subdomain to false and performing terraform apply again.
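As a workaround until this is fixed, the module call can disable the subdomain record up front (sketch only; all other module arguments omitted):

```hcl
module "eks-jx" {
  source = "jenkins-x/eks-jx/aws"

  // Skip creating the NS delegation record that already exists
  create_and_configure_subdomain = false
}
```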
Hi guys,
I think I found a bug with the new version. I haven't supplied an IAM user; I want the module to create one to keep the installation clean, but I get the following error:
Error: error getting user: NoSuchEntity: The user with name jenkins-x-vault cannot be found.
status code: 404, request id: 0b3eebff-f6e3-4443-93bb-a5b605627029
on ../aws-eks-jx/modules/vault/main.tf line 19, in data "aws_iam_user" "vault_user":
19: data "aws_iam_user" "vault_user" {
@hferentschik Thank you for making the new version available. Much appreciated. I am testing out the new module version. It tries to check whether the user exists and, if not, create one, but it throws a 404. Not sure if it has something to do with the Terraform version.
I am using the following versions:
Terraform v0.12.24
- provider.aws v2.52.0
- provider.kubernetes v1.11.1
- provider.local v1.4.0
- provider.null v2.1.2
- provider.random v2.2.1
- provider.template v2.1.2
Terraform destroy does not delete non-empty S3 buckets.
Error: error deleting S3 Bucket (vault-unseal-tf-jx-grand-leopard-20200510193244396100000005): BucketNotEmpty: The bucket you tried to delete is not empty. You must delete all versions in the bucket.
status code: 409, request id: 410A09EDE43DF2A4, host id: PYpRJ6LZyL09cOGTXPQ/rURaQiJpZih58V/xssaHZOlN+b7zjBOGWokx6ukYHYLt5U6HeMlKgzs=
The way to delete them is to empty them in the AWS S3 console and then re-run terraform destroy. This can be fixed by setting force_destroy = true when creating the S3 buckets.
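A minimal sketch of the proposed fix, assuming a bucket resource roughly like the module's (the resource name here is an assumption):

```hcl
resource "aws_s3_bucket" "vault_unseal" {
  bucket_prefix = "vault-unseal-${var.cluster_name}-"
  acl           = "private"

  // Lets terraform destroy delete the bucket even when it still
  // contains objects, including all object versions
  force_destroy = true
}
```

Exposing this as a module variable (as the module's var.force_destroy suggests elsewhere in this document) lets users opt in rather than making data loss the default.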
The README contains the path eks/terraform/jx/irsa.tf, which should be modules/cluster/irsa.tf.
When going through "Getting started" on AWS using Terraform, the docs indicate you can optionally skip creating an IAM user for Vault, and the user should be auto-created; however, it fails.
Follow Step 1 in https://jenkins-x.io/docs/getting-started/
terraform apply works successfully.
Provisioning fails with:
Error: error getting user: NoSuchEntity: The user with name jenkins-x-vault cannot be found.
Fedora 31 -> AWS
If I use the Terraform EKS module https://github.com/terraform-aws-modules/terraform-aws-eks, I can add additional IAM accounts, IAM roles, or IAM users. This would simplify accessing the Kubernetes Dashboard.
Is it possible to add this feature to the JX module, please?
I have enabled create_and_configure_subdomain for a subdomain and it gets created by Terraform correctly. The issue is that resolving this subdomain does not seem to work on its own, either in internal Route53 or on the public internet with enable_external_dns enabled. The only way I have to test this is by spinning up a new deployment. This causes a health check in the jx boot --requirements command to time out and not proceed with any subsequent steps.
We should add terraform-compliance checks to our pipeline: https://github.com/eerkunt/terraform-compliance, wdyt @hferentschik?
One use case would be to check if created resources have tags or not: https://terraform-compliance.com/pages/Examples/tags_related.html (This will help us to fix some issues with terraform destroy)
Seems like tfsec also does similar things: https://github.com/liamg/tfsec.
Running tfsec on the module shows some issues:
9 potential problems detected:
Problem 1
[AWS002][ERROR] Resource 'aws_s3_bucket.logs_jenkins_x' does not have logging enabled.
/home/ankitm123/work/terraform-aws-eks-jx/modules/cluster/storage.tf:5-13
2 | // Create the AWS S3 buckets for Long Term Storage based on flags
3 | // See https://www.terraform.io/docs/providers/aws/r/s3_bucket.html
4 | // ----------------------------------------------------------------------------
5 | resource "aws_s3_bucket" "logs_jenkins_x" {
6 | count = var.enable_logs_storage ? 1 : 0
7 | bucket_prefix = "logs-${var.cluster_name}-"
8 | acl = "private"
9 | tags = {
10 | Owner = "Jenkins-x"
11 | }
12 | force_destroy = var.force_destroy
13 | }
14 |
15 | resource "aws_s3_bucket" "reports_jenkins_x" {
16 | count = var.enable_reports_storage ? 1 : 0
See https://github.com/liamg/tfsec/wiki/AWS002 for more information.
Problem 2
[AWS017][ERROR] Resource 'aws_s3_bucket.logs_jenkins_x' defines an unencrypted S3 bucket (missing server_side_encryption_configuration block).
/home/ankitm123/work/terraform-aws-eks-jx/modules/cluster/storage.tf:5-13
2 | // Create the AWS S3 buckets for Long Term Storage based on flags
3 | // See https://www.terraform.io/docs/providers/aws/r/s3_bucket.html
4 | // ----------------------------------------------------------------------------
5 | resource "aws_s3_bucket" "logs_jenkins_x" {
6 | count = var.enable_logs_storage ? 1 : 0
7 | bucket_prefix = "logs-${var.cluster_name}-"
8 | acl = "private"
9 | tags = {
10 | Owner = "Jenkins-x"
11 | }
12 | force_destroy = var.force_destroy
13 | }
14 |
15 | resource "aws_s3_bucket" "reports_jenkins_x" {
16 | count = var.enable_reports_storage ? 1 : 0
See https://github.com/liamg/tfsec/wiki/AWS017 for more information.
Problem 3
[AWS002][ERROR] Resource 'aws_s3_bucket.reports_jenkins_x' does not have logging enabled.
/home/ankitm123/work/terraform-aws-eks-jx/modules/cluster/storage.tf:15-23
12 | force_destroy = var.force_destroy
13 | }
14 |
15 | resource "aws_s3_bucket" "reports_jenkins_x" {
16 | count = var.enable_reports_storage ? 1 : 0
17 | bucket_prefix = "reports-${var.cluster_name}-"
18 | acl = "private"
19 | tags = {
20 | Owner = "Jenkins-x"
21 | }
22 | force_destroy = var.force_destroy
23 | }
24 |
25 | resource "aws_s3_bucket" "repository_jenkins_x" {
26 | count = var.enable_repository_storage ? 1 : 0
See https://github.com/liamg/tfsec/wiki/AWS002 for more information.
Problem 4
[AWS017][ERROR] Resource 'aws_s3_bucket.reports_jenkins_x' defines an unencrypted S3 bucket (missing server_side_encryption_configuration block).
/home/ankitm123/work/terraform-aws-eks-jx/modules/cluster/storage.tf:15-23
12 | force_destroy = var.force_destroy
13 | }
14 |
15 | resource "aws_s3_bucket" "reports_jenkins_x" {
16 | count = var.enable_reports_storage ? 1 : 0
17 | bucket_prefix = "reports-${var.cluster_name}-"
18 | acl = "private"
19 | tags = {
20 | Owner = "Jenkins-x"
21 | }
22 | force_destroy = var.force_destroy
23 | }
24 |
25 | resource "aws_s3_bucket" "repository_jenkins_x" {
26 | count = var.enable_repository_storage ? 1 : 0
See https://github.com/liamg/tfsec/wiki/AWS017 for more information.
Problem 5
[AWS002][ERROR] Resource 'aws_s3_bucket.repository_jenkins_x' does not have logging enabled.
/home/ankitm123/work/terraform-aws-eks-jx/modules/cluster/storage.tf:25-33
22 | force_destroy = var.force_destroy
23 | }
24 |
25 | resource "aws_s3_bucket" "repository_jenkins_x" {
26 | count = var.enable_repository_storage ? 1 : 0
27 | bucket_prefix = "repository-${var.cluster_name}-"
28 | acl = "private"
29 | tags = {
30 | Owner = "Jenkins-x"
31 | }
32 | force_destroy = var.force_destroy
33 | }
34 |
See https://github.com/liamg/tfsec/wiki/AWS002 for more information.
Problem 6
[AWS017][ERROR] Resource 'aws_s3_bucket.repository_jenkins_x' defines an unencrypted S3 bucket (missing server_side_encryption_configuration block).
/home/ankitm123/work/terraform-aws-eks-jx/modules/cluster/storage.tf:25-33
22 | force_destroy = var.force_destroy
23 | }
24 |
25 | resource "aws_s3_bucket" "repository_jenkins_x" {
26 | count = var.enable_repository_storage ? 1 : 0
27 | bucket_prefix = "repository-${var.cluster_name}-"
28 | acl = "private"
29 | tags = {
30 | Owner = "Jenkins-x"
31 | }
32 | force_destroy = var.force_destroy
33 | }
34 |
See https://github.com/liamg/tfsec/wiki/AWS017 for more information.
Problem 7
[AWS002][ERROR] Resource 'aws_s3_bucket.vault-unseal-bucket' does not have logging enabled.
/home/ankitm123/work/terraform-aws-eks-jx/modules/vault/main.tf:28-38
25 | // Vault S3 bucket
26 | // See https://www.terraform.io/docs/providers/aws/r/s3_bucket.html
27 | // ----------------------------------------------------------------------------
28 | resource "aws_s3_bucket" "vault-unseal-bucket" {
29 | bucket_prefix = "vault-unseal-${var.cluster_name}-"
30 | acl = "private"
31 | tags = {
32 | Name = "Vault unseal bucket"
33 | }
34 | versioning {
35 | enabled = false
36 | }
37 | force_destroy = var.force_destroy
38 | }
39 |
40 | // ----------------------------------------------------------------------------
41 | // Vault DynamoDB Table
See https://github.com/liamg/tfsec/wiki/AWS002 for more information.
Problem 8
[AWS017][ERROR] Resource 'aws_s3_bucket.vault-unseal-bucket' defines an unencrypted S3 bucket (missing server_side_encryption_configuration block).
/home/ankitm123/work/terraform-aws-eks-jx/modules/vault/main.tf:28-38
25 | // Vault S3 bucket
26 | // See https://www.terraform.io/docs/providers/aws/r/s3_bucket.html
27 | // ----------------------------------------------------------------------------
28 | resource "aws_s3_bucket" "vault-unseal-bucket" {
29 | bucket_prefix = "vault-unseal-${var.cluster_name}-"
30 | acl = "private"
31 | tags = {
32 | Name = "Vault unseal bucket"
33 | }
34 | versioning {
35 | enabled = false
36 | }
37 | force_destroy = var.force_destroy
38 | }
39 |
40 | // ----------------------------------------------------------------------------
41 | // Vault DynamoDB Table
See https://github.com/liamg/tfsec/wiki/AWS017 for more information.
Problem 9
[AWS019][WARNING] Resource 'aws_kms_key.kms_vault_unseal' does not have KMS Key auto-rotation enabled.
/home/ankitm123/work/terraform-aws-eks-jx/modules/vault/main.tf:71-92
68 | // Vault KMS Key
69 | // See https://www.terraform.io/docs/providers/aws/r/kms_key.html
70 | // ----------------------------------------------------------------------------
71 | resource "aws_kms_key" "kms_vault_unseal" {
72 | description = "KMS Key for bank vault unseal"
73 | policy = <<POLICY
74 | {
75 | "Version": "2012-10-17",
76 | "Statement": [
77 | {
78 | "Sid": "EnableIAMUserPermissions",
79 | "Effect": "Allow",
80 | "Principal": {
81 | "AWS": [
82 | "${data.aws_iam_user.vault_user.arn}",
83 | "arn:aws:iam::${data.aws_caller_identity.current.account_id}:root"
84 | ]
85 | },
86 | "Action": "kms:*",
87 | "Resource": "*"
88 | }
89 | ]
90 | }
91 | POLICY
92 | }
93 |
94 | // ----------------------------------------------------------------------------
95 | // Permissions that will need to be attached to the provides IAM Username
See https://github.com/liamg/tfsec/wiki/AWS019 for more information.
We should fix them asap.
I realise that this repo creates IAM Roles and also the ServiceAccounts in k8s that link to those IAM Roles.
So let's say we create the following SAs:
jx:tekton-bot
jx:exdns-external-dns
jx:cm-cert-manager
etc
Now the helm charts for tekton, external-dns, cert-manager, etc. also have their own ServiceAccount resources defined. When using Jenkins X with Helm 2, charts are applied with something like:
helm template --name release-name prefix/chart-name | kubectl apply -f -
Any existing resources are updated. So far so good.
When we start using Helm 3 though, I believe it will be just helm install that we use. When using helm install, Helm manages all the resources defined in a chart. Install fails if a resource defined in a chart already exists.
Now this can be solved in one of two ways:
What I'm recommending is approach 2. The idea is to manage as many k8s resources as possible using the Helm-based GitOps workflow and let the cloud tooling create just the resources outside of it.
With approach 2, we'll need to annotate the ServiceAccounts with the role ARN, though. But I think this is consistent with the approach being taken for GCP; jenkins-x-labs/jenkins-x-versions@6c7845a, jenkins-x-labs/jenkins-x-versions@44874e0, etc.
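With approach 2, the chart-managed ServiceAccount would carry the standard IRSA annotation, roughly like this (the account ID and role name are placeholders, and the ServiceAccount name is an example from the list above):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tekton-bot
  namespace: jx
  annotations:
    # IRSA: link this ServiceAccount to the IAM role created by Terraform
    eks.amazonaws.com/role-arn: arn:aws:iam::111111111111:role/tekton-bot
```

The charts would need to expose this annotation as a Helm value so Terraform's role ARN output can be wired in.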
Please provide feedback
The first iteration of the EKS scripts should define a rough idea of what resources we'll need to meet feature parity with a Jenkins X booted cluster using jx create cluster eks and jx boot.
These are the resources that should be created:
MUST HAVE
- m5.large EC2 instances that serve as worker nodes for the cluster
- A jx-requirements-eks.yml file
- IAM Roles for Service Accounts
NICE TO HAVE
- A Route53 Hosted Zone for the provided subdomain in the form of <subdomain>.<apex_domain>, with automatic DNS delegation done for the apex_domain hosted zone. If no subdomain is provided, this doesn't need to be done.
- A Terraform module that can be used with a simple set of variables and produce a jx-requirements-eks.yml file that can be used to boot the cluster.
After a full terraform destroy, there are several invalid index error messages. One example is this:
Error: Invalid index
on ../terraform-aws-eks-jx/modules/cluster/outputs.tf line 18, in output "logs_jenkins_x":
18: value = aws_s3_bucket.logs_jenkins_x[0].id
|----------------
| aws_s3_bucket.logs_jenkins_x is empty tuple
The given key does not identify an element in this collection value.
Error: Invalid index
on ../terraform-aws-eks-jx/modules/cluster/outputs.tf line 22, in output "reports_jenkins_x":
22: value = aws_s3_bucket.reports_jenkins_x[0].id
|----------------
| aws_s3_bucket.reports_jenkins_x is empty tuple
The given key does not identify an element in this collection value.
Error: Invalid index
on ../terraform-aws-eks-jx/modules/cluster/outputs.tf line 26, in output "repository_jenkins_x":
26: value = aws_s3_bucket.repository_jenkins_x[0].id
|----------------
| aws_s3_bucket.repository_jenkins_x is empty tuple
The given key does not identify an element in this collection value.
Error: Invalid index
on ../terraform-aws-eks-jx/modules/vault/outputs.tf line 27, in output "vault_user_id":
27: value = var.vault_user == "" ? aws_iam_access_key.jenkins-x-vault[0].id : ""
|----------------
| aws_iam_access_key.jenkins-x-vault is empty tuple
The given key does not identify an element in this collection value.
Error: Invalid index
on ../terraform-aws-eks-jx/modules/vault/outputs.tf line 34, in output "vault_user_secret":
34: value = var.vault_user == "" ? aws_iam_access_key.jenkins-x-vault[0].secret : ""
|----------------
| aws_iam_access_key.jenkins-x-vault is empty tuple
The given key does not identify an element in this collection value.
The error comes from output blocks:
output "repository_jenkins_x" {
value = aws_s3_bucket.repository_jenkins_x[0].id
}
Instead, we should use aws_s3_bucket.repository_jenkins_x.*.id.
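Using the output name from the error message, the splat form could look like this; unlike the [0] index, the splat expression yields an empty list instead of failing when count = 0:

```hcl
output "repository_jenkins_x" {
  // Splat returns [] when the bucket was not created (count = 0),
  // instead of raising an invalid-index error
  value = aws_s3_bucket.repository_jenkins_x.*.id
}
```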
Terraform destroy gets stuck on deleting public subnet and eventually times out:
Error: Error deleting subnet: timeout while waiting for state to become 'destroyed' (last state: 'pending', timeout: 20m0s)
Error: Error deleting subnet: timeout while waiting for state to become 'destroyed' (last state: 'pending', timeout: 20m0s)
Error: Error waiting for internet gateway (igw-040fd33822a0369a0) to detach: timeout while waiting for state to become 'detached' (last state: 'detaching', timeout: 15m0s)
Error: Error deleting subnet: timeout while waiting for state to become 'destroyed' (last state: 'pending', timeout: 20m0s)
The way to get around this is to delete the load balancer (and check that all the ENIs are deleted), and then re-run terraform destroy. Maybe this can be handled better in the module?
Document the EKS Terraform module using the standard approach as for example seen here: https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/README.md
@hferentschik I really need some help. I am having this issue for a while now. I am trying everything I can but I can't seem to fix it.
Steps to reproduce:
I am using eu-west-2 and I have an existing Route 53 zone which I am using as the apex_domain:
module "eks-jx" {
source = "../aws-eks-jx/"
vault_user = var.vault_iam_user
enable_external_dns = true
cluster_name = var.cluster_name
desired_node_count = "2"
max_node_count = "3"
min_node_count = "2"
enable_tls = true
region = var.region
node_machine_type = var.ec2_type
enable_logs_storage = true
enable_reports_storage = true
enable_repository_storage = true
apex_domain = var.apex_domain
tls_email = var.tls_email
vpc_name = var.cluster_name
vpc_cidr_block = "10.5.0.0/16"
vpc_subnets = [
"10.5.1.0/24",
"10.5.2.0/24",
"10.5.3.0/24"
]
}
I have used admin account to run:
terraform apply
Terraform finished successfully without any error, and jx-requirements.yml was created.
I then updated the kube-system configmap/aws-auth ConfigMap to make sure the IAM user jenkins-x-vault can access the cluster. (Without updating the ConfigMap you can't run the jx boot command; it will fail.)
I want to use webhook: jenkins (still planning to use the Jenkins X server), so I have changed the jx-requirements.yml file:
versionStream:
ref: master
url: https://github.com/jenkins-x/jenkins-x-versions.git
webhook: jenkins
So this is how my jx-requirements.yml file looks:
autoUpdate:
enabled: false
schedule: ""
terraform: true
cluster:
clusterName: "eks-cluster"
environmentGitOwner: ""
provider: eks
region: "eu-west-2"
gitops: true
environments:
- key: dev
- key: staging
- key: production
ingress:
domain: "y.uk."
ignoreLoadBalancer: true
externalDNS: true
tls:
email: "[email protected]"
enabled: true
production: false
kaniko: true
secretStorage: vault
vault:
aws:
iamUserName: "jenkins-x-vault"
dynamoDBTable: "vault-unseal-eks-cluster-92qpWWPP"
dynamoDBRegion: "eu-west-2"
kmsKeyId: "f1a00000-f9e1-4a8f-ya99-01fr001d6574"
kmsRegion: "eu-west-2"
s3Bucket: "vault-unseal-eks-cluster-88888000000111234500000008"
s3Region: "eu-west-2"
storage:
logs:
enabled: true
url: s3://logs-eks-cluster-88888000000111234500000008
reports:
enabled: true
url: s3://reports-eks-cluster-88888000000111234500000008
repository:
enabled: true
url: s3://repository-eks-cluster-88888000000111234500000008
versionStream:
ref: master
url: https://github.com/jenkins-x/jenkins-x-versions.git
webhook: jenkins
export VAULT_AWS_ACCESS_KEY_ID=<access key id>
export VAULT_AWS_SECRET_ACCESS_KEY=<secret key>
I ran "printenv" to make sure the parameters were set properly.
jx boot --requirements ../jx-requirements.yml
It did ask for:
Do you want to clone the Jenkins X Boot Git repository? [? for help] (Y/n)
I pressed Y
Would you like to upgrade to the jx version? [? for help] (Y/n)
I pressed No because I am using the following Jenkins X version:
CLI packages kubectl, git, helm seem to be setup correctly
NAME VERSION
jx 2.0.1286
Kubernetes cluster v1.15.11-eks-af3caf
kubectl v1.16.9
git 2.20.1 (Apple Git-117)
jx 2.0.1286 was the latest Jenkins X server release.
After that, jx boot started to install the resources in the cluster.
It takes a very long time waiting for the vault to be unsealed, and then errors out, see below:
Waiting for vault to be initialized and unsealed...
Waiting for vault to be initialized and unsealed...
error: creating system vault URL client: wait for vault to be initialized and unsealed: reading vault health: Get https://vault-jx.y.uk/v1/sys/health?drsecondarycode=299&performancestandbycode=299&sealedcode=299&standbycode=299&uninitcode=299: dial tcp: lookup vault-jx.y-tree.uk on 55.37.0.2:53: no such host
error: failed to interpret pipeline file /Users/zak/y/infrastructure-as-code/terraform/staging/eks/jxboot/jenkins-x-boot-config/jenkins-x.yml: failed to run '/bin/sh -c jx step create values --name parameters' command in directory '/Users/zak/y/infrastructure-as-code/terraform/staging/eks/jxboot/jenkins-x-boot-config/env', output: ''
I installed Dashboard.
I checked logs for jx-vault-montrose-0 pod and it shows this:
Using eth0 for VAULT_CLUSTER_ADDR: https://10.5.3.57:8201
telemetry.disable_hostname has been set to false. Recommended setting is true for Prometheus to avoid poorly named metrics.
==> Vault server configuration:
AWS KMS KeyID: cf309c7d-d9bd-4169-aeab-b962cb15d44b
AWS KMS Region: eu-west-2
Seal Type: awskms
Api Address: http://jx-vault-montrose.jx:8200
Cgo: disabled
Listener 1: tcp (addr: "0.0.0.0:8200", max_request_duration: "1m30s", max_request_size: "33554432", tls: "disabled")
Log Level: info
Mlock: supported: true, enabled: true
Recovery Mode: false
Storage: dynamodb (HA available)
Version: Vault v1.3.4
==> Vault server started! Log data will stream in below:
2020-04-29T02:34:15.481Z [INFO] proxy environment: http_proxy= https_proxy= no_proxy=
2020-04-29T02:34:15.652Z [INFO] core: stored unseal keys supported, attempting fetch
2020-04-29T02:34:15.656Z [WARN] failed to unseal core: error="stored unseal keys are supported, but none were found"
2020-04-29 02:34:16.582632 I | [ERR] Error flushing to statsd! Err: write udp 127.0.0.1:56532->127.0.0.1:9125: write: connection refused
2020-04-29T02:34:16.657Z [INFO] core: security barrier not initialized
2020-04-29T02:34:17.170Z [WARN] core: stored keys supported on init, forcing shares/threshold to 1
2020-04-29T02:34:17.177Z [INFO] core: security barrier not initialized
2020-04-29T02:34:17.195Z [INFO] core: security barrier initialized: stored=1 shares=1 threshold=1
2020-04-29T02:34:17.279Z [INFO] core: post-unseal setup starting
2020-04-29T02:34:17.306Z [INFO] core: loaded wrapping token key
2020-04-29T02:34:17.306Z [INFO] core: successfully setup plugin catalog: plugin-directory=
2020-04-29T02:34:17.313Z [INFO] core: no mounts; adding default mount table
2020-04-29T02:34:17.324Z [INFO] core: successfully mounted backend: type=cubbyhole path=cubbyhole/
2020-04-29T02:34:17.324Z [INFO] core: successfully mounted backend: type=system path=sys/
2020-04-29T02:34:17.324Z [INFO] core: successfully mounted backend: type=identity path=identity/
2020-04-29T02:34:17.388Z [INFO] core: successfully enabled credential backend: type=token path=token/
2020-04-29T02:34:17.388Z [INFO] core: restoring leases
2020-04-29T02:34:17.388Z [INFO] rollback: starting rollback manager
2020-04-29T02:34:17.395Z [INFO] expiration: lease restore complete
2020-04-29T02:34:17.413Z [INFO] identity: entities restored
2020-04-29T02:34:17.417Z [INFO] identity: groups restored
2020-04-29T02:34:17.455Z [WARN] core: post-unseal upgrade seal keys failed: error="no recovery key found"
I cannot see any subdomain created in the existing Route53 zone.
Any idea what is going wrong, please? I am running out of ideas.
Hi guys,
I need some help here please.
@hferentschik @jstrachan
First question:
It seems like I have to create a separate IAM user for Vault. Is there any way I can provide an IAM role instead? This is very inconvenient, because many users manage cross-account access with IAM roles.
I use aws-vault profiles to create resources in AWS. After creating the EKS cluster I cannot access it; do you only allow the Vault IAM user to access the cluster? Is there any way I can output the aws-auth ConfigMap using your module? Or can I inject an IAM role into this module, so that I can use aws-vault with an IAM role and access the EKS cluster and the Kubernetes dashboard?
If I need to use a Vault user, it will be a nightmare. I would have to create an access key and secret key just for the account where the EKS cluster lives. This breaks best practices: IAM users are centralized in one root account, and creating an additional IAM user in the separate account that holds the EKS cluster makes the whole thing difficult.
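For what it's worth, replacing the dedicated Vault IAM user with IAM Roles for Service Accounts is already planned. A minimal sketch of what that could look like, reusing the iam-assumable-role-with-oidc submodule this module already pulls in (the role name, the aws_iam_policy.vault resource, and the service account subject below are illustrative assumptions, not the module's actual code):

```hcl
# Hypothetical IRSA sketch for Vault, instead of a dedicated IAM user.
module "iam_assumable_role_vault" {
  source  = "terraform-aws-modules/iam/aws//modules/iam-assumable-role-with-oidc"
  version = "2.6.0"

  create_role      = true
  role_name        = "vault-${var.cluster_name}"
  provider_url     = replace(module.eks.cluster_oidc_issuer_url, "https://", "")
  role_policy_arns = [aws_iam_policy.vault.arn] # assumed policy resource

  # Bind the role to Vault's Kubernetes service account in the jx namespace
  oidc_fully_qualified_subjects = ["system:serviceaccount:jx:vault"]
}
```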
Second question:
I still want to use the classic Jenkins X server and don't want to use Prow, because many of our developers are still using the classic version. How do I make sure this module installs https://github.com/jenkins-x/jx/releases/tag/v2.0.1286? This is the last release with full support for the classic Jenkins server; from 2.1.1 the Jenkins server is being removed.
This module uses the template https://github.com/jenkins-x/jenkins-x-versions, the ref is master, and the webhook is Prow. Which tag should I use for the old Jenkins server?
Please advise. Thank you very much.
I get the same behavior as terraform-aws-modules/terraform-aws-eks#757:
module.eks-jx.module.cluster.module.eks.null_resource.wait_for_cluster[0]: Still creating... [23m20s elapsed]
module.eks-jx.module.cluster.module.eks.null_resource.wait_for_cluster[0] (local-exec): /bin/sh: wget: command not found
Perhaps the aws-eks dependency is outdated and the problem is solved in 750, but the workaround is not working because the input wait_for_cluster_cmd is not exposed in eks-jx.
Any suggestions?
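One way to unblock this, sketched under the assumption that eks-jx simply needs to forward the upstream variable (the default shown also swaps wget for curl):

```hcl
# Hypothetical: expose the upstream wait_for_cluster_cmd input in eks-jx
variable "wait_for_cluster_cmd" {
  description = "local-exec command used to wait until the cluster endpoint is reachable"
  type        = string
  default     = "until curl -k -s $ENDPOINT/healthz >/dev/null; do sleep 4; done"
}

# ...and inside the cluster module, pass it through to the wrapped eks module:
#   wait_for_cluster_cmd = var.wait_for_cluster_cmd
```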
error: creating system vault URL client: wait for vault to be initialized and unsealed: reading vault health: Get https://vault-jx.snipv1/sys/health?drsecondarycode=299&performancestandbycode=299&sealedcode=299&standbycode=299&uninitcode=299: dial tcp: lookup vault-jx.snip on snip:53: server misbehaving
error: failed to interpret pipeline file /Users/a/code/jxmanage/jenkins-x-boot-config/jenkins-x.yml: failed to run '/bin/sh -c jx step create values --name parameters' command in directory '/Users/a/code/jxmanage/jenkins-x-boot-config/env', output: ''
Note that the DNS name does not resolve on the internet. The pods seem to set up fine:
kubectl get pods
NAME READY STATUS RESTARTS AGE
exdns-external-dns-7c686cd8d6-w74f7 1/1 Running 0 24m
jx-vault-hl-jx-0 3/3 Running 0 12m
jx-vault-hl-jx-configurer-5df6c8994c-gxf2q 1/1 Running 0 12m
vault-operator-5b9c49d9-lbd9m 1/1 Running 0 22m
Hello Jenkins-X team,
As I said here https://kubernetes.slack.com/archives/C9MBGQJRH/p1591192617327400, I detected a problem with the Terraform-based AWS deployment of Jenkins X.
I'm having the following error:
Error: error creating EKS Cluster (tf-jx-nearby-cod): UnsupportedAvailabilityZoneException: Cannot create cluster 'tf-jx-nearby-cod' because us-east-1c, the targeted availability zone, does not currently have sufficient capacity to support the cluster. Retry and choose from these availability zones: us-east-1a, us-east-1b, us-east-1d, us-east-1e, us-east-1f
Steps to reproduce:
1. Create a main.tf file as described in the docs.
2. Run terraform init.
3. Run terraform apply.
The error was received in the middle of the launch (resource 30 of 74). Regards.
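A possible workaround sketch: filter out the capacity-constrained zone before building the VPC. The exclude_names argument is a real part of the aws_availability_zones data source; whether and where eks-jx would accept such a list is an assumption:

```hcl
# Hypothetical: skip the AZ that EKS reports as lacking capacity
data "aws_availability_zones" "available" {
  state         = "available"
  exclude_names = ["us-east-1c"] # the zone named in the error message
}

# The filtered list data.aws_availability_zones.available.names would then
# feed the VPC module's azs input instead of the unfiltered default.
```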
The helper script release.sh uses:
jx step changelog -v $version -p $prev_tag_base -r $current_tag_base --generate-yaml=false --no-dev-release --update-release=false
This works; however, this step is supposed to run in-cluster and prints a lot of warnings when running locally, partly because it always uses the pipeline user's credentials.
Also, due to the credential issue, the changelog is just written to the console and one needs to cut and paste it manually.
A short jq script should do the trick.
terraform plan produces: There is no function named "trimprefix".
cat creds
export AWS_ACCESS_KEY_ID=...
export AWS_SECRET_ACCESS_KEY=...
export AWS_DEFAULT_REGION=us-east-1
source creds
export CLUSTER_NAME=[...] # Replace with the name of the cluster
echo "module \"eks-jx\" {
source = \"jenkins-x/eks-jx/aws\"
cluster_name = \"$CLUSTER_NAME\"
region = \"us-east-1\"
desired_node_count = 3
min_node_count = 3
max_node_count = 6
}
output \"vault_user_id\" {
value = module.eks-jx.vault_user_id
description = \"The Vault IAM user id\"
}
output \"vault_user_secret\" {
value = module.eks-jx.vault_user_secret
description = \"The Vault IAM user secret\"
}" | tee main.tf
terraform init
Initializing modules...
Downloading jenkins-x/eks-jx/aws 1.0.2 for eks-jx...
- eks-jx in .terraform/modules/eks-jx
- eks-jx.cluster in .terraform/modules/eks-jx/modules/cluster
Downloading terraform-aws-modules/eks/aws 10.0.0 for eks-jx.cluster.eks...
- eks-jx.cluster.eks in .terraform/modules/eks-jx.cluster.eks
- eks-jx.cluster.eks.node_groups in .terraform/modules/eks-jx.cluster.eks/modules/node_groups
Downloading terraform-aws-modules/iam/aws 2.6.0 for eks-jx.cluster.iam_assumable_role_cert_manager...
- eks-jx.cluster.iam_assumable_role_cert_manager in .terraform/modules/eks-jx.cluster.iam_assumable_role_cert_manager/modules/iam-assumable-role-with-oidc
Downloading terraform-aws-modules/iam/aws 2.6.0 for eks-jx.cluster.iam_assumable_role_cm_cainjector...
- eks-jx.cluster.iam_assumable_role_cm_cainjector in .terraform/modules/eks-jx.cluster.iam_assumable_role_cm_cainjector/modules/iam-assumable-role-with-oidc
Downloading terraform-aws-modules/iam/aws 2.6.0 for eks-jx.cluster.iam_assumable_role_controllerbuild...
- eks-jx.cluster.iam_assumable_role_controllerbuild in .terraform/modules/eks-jx.cluster.iam_assumable_role_controllerbuild/modules/iam-assumable-role-with-oidc
Downloading terraform-aws-modules/iam/aws 2.6.0 for eks-jx.cluster.iam_assumable_role_external_dns...
- eks-jx.cluster.iam_assumable_role_external_dns in .terraform/modules/eks-jx.cluster.iam_assumable_role_external_dns/modules/iam-assumable-role-with-oidc
Downloading terraform-aws-modules/iam/aws 2.6.0 for eks-jx.cluster.iam_assumable_role_jxui...
- eks-jx.cluster.iam_assumable_role_jxui in .terraform/modules/eks-jx.cluster.iam_assumable_role_jxui/modules/iam-assumable-role-with-oidc
Downloading terraform-aws-modules/iam/aws 2.6.0 for eks-jx.cluster.iam_assumable_role_tekton_bot...
- eks-jx.cluster.iam_assumable_role_tekton_bot in .terraform/modules/eks-jx.cluster.iam_assumable_role_tekton_bot/modules/iam-assumable-role-with-oidc
Downloading terraform-aws-modules/vpc/aws 2.6.0 for eks-jx.cluster.vpc...
- eks-jx.cluster.vpc in .terraform/modules/eks-jx.cluster.vpc
- eks-jx.dns in .terraform/modules/eks-jx/modules/dns
- eks-jx.vault in .terraform/modules/eks-jx/modules/vault
Initializing the backend...
Initializing provider plugins...
- Checking for available provider plugins...
- Downloading plugin for provider "local" (hashicorp/local) 1.4.0...
- Downloading plugin for provider "null" (hashicorp/null) 2.1.2...
- Downloading plugin for provider "template" (hashicorp/template) 2.1.2...
- Downloading plugin for provider "kubernetes" (hashicorp/kubernetes) 1.11.1...
- Downloading plugin for provider "random" (hashicorp/random) 2.2.1...
- Downloading plugin for provider "aws" (hashicorp/aws) 2.61.0...
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
terraform plan
Error: Call to unknown function
on .terraform/modules/eks-jx/modules/dns/outputs.tf line 2, in output "domain":
2: value = trimprefix(join(".", [var.subdomain, var.apex_domain]), ".")
There is no function named "trimprefix".
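trimprefix appears to have been added only partway through the Terraform 0.12 series, so older 0.12 CLIs fail on this output. A sketch of an equivalent expression that avoids the function entirely (same result: drop the leading dot when subdomain is empty):

```hcl
# Hypothetical rewrite of modules/dns/outputs.tf without trimprefix
output "domain" {
  value = var.subdomain == "" ? var.apex_domain : join(".", [var.subdomain, var.apex_domain])
}
```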
Similar to the GKE module there should be a Velero backup configuration. At the very least it should be possible to optionally enable it. If enabled the Velero storage bucket needs to be created.
At the moment, we don't have a release pipeline configured and rely on manual releases, which consist of creating a tag and release notes.
Let's automate this a bit with a script.
terraform init doesn't work with the example main.tf listed at https://jenkins-x.io/docs/getting-started/
It stops on
Error: Failed to download module
terraform --version
# => Terraform v0.12.26
cat > main.tf
module "eks-jx" {
source = "jenkins-x/eks-jx/aws"
}
output "vault_user_id" {
value = module.eks-jx.vault_user_id
description = "The Vault IAM user id"
}
output "vault_user_secret" {
value = module.eks-jx.vault_user_secret
description = "The Vault IAM user secret"
}
terraform init
Initializing modules...
Downloading jenkins-x/eks-jx/aws 1.0.5 for eks-jx...
Error: Failed to download module
Could not download module "eks-jx" (main.tf:1) source code from
"https://github.com/jenkins-x/terraform-aws-eks-jx/archive/v1.0.5.tar.gz//*?archive=tar.gz":
bad response code: 404.
Error: Failed to download module
Could not download module "eks-jx" (main.tf:1) source code from
"https://github.com/jenkins-x/terraform-aws-eks-jx/archive/v1.0.5.tar.gz//*?archive=tar.gz":
bad response code: 404.
Expected behavior: the required modules download without any errors.
Running on macOS Catalina 10.15.4 (19E287)
Initially, the Terraform module generates the jx-requirements.yml, which is meant to be provided to jx boot -r.
However, jx boot creates a new environment git repository that contains jx-requirements.yml as well.
This is potentially inconsistent, because changes made by subsequent Terraform module runs do not make it into the environment repository.
One approach to solving this could be to let the Terraform module write the information contained in jx-requirements.yml into the Kubernetes cluster, so that jx boot could read it from there more consistently.
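A rough sketch of that idea, assuming the module holds the rendered requirements in a local value (the resource name, the namespace, and local.jx_requirements are all hypothetical):

```hcl
# Hypothetical: publish the rendered requirements into the cluster so that
# jx boot can read them in-cluster instead of from a local file
resource "kubernetes_config_map" "jx_requirements" {
  metadata {
    name      = "terraform-jx-requirements"
    namespace = "default"
  }

  data = {
    "jx-requirements.yml" = local.jx_requirements # assumed rendered content
  }
}
```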
Using the jenkins-x/terraform-aws-eks-jx repo, I created a Kubernetes cluster and all the cloud resources that jx will need later for jx boot (Vault S3 bucket, reports S3 bucket, DynamoDB tables, etc.):
terraform init
terraform plan
terraform apply
Terraform created all my resources and generated a jx-requirements.yml file already filled with the correct values and parameters, so that jx can read it during jx boot.
Then I run :
jx boot -r jx-requirements.yml
as described in the README of the repo.
But after two seconds I got this output:
Stashing any changes made in local boot clone.
Booting Jenkins X
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x30 pc=0x36dc0f7]
goroutine 1 [running]:
github.com/jenkins-x/jx/pkg/cmd/boot.(*BootOptions).Run(0xc000639220, 0x0, 0x0)
/workspace/source/pkg/cmd/boot/boot.go:238 +0x787
main.main()
/workspace/source/cmd/jx/jx.go:11 +0x32
jx, kubectl, go and terraform are installed and up to date:
jx : 2.0.1261
kubectl : v1.16.8
go : go1.13.8
Terraform v0.12.24
When running terraform apply, there is a local-exec command that uses wget. If we do need something like that, I propose we use curl instead. It is installed by default almost everywhere, while wget is not that common. For example, macOS does have curl, but doesn't have wget.
Right now, I'm getting module.eks-jx.module.cluster.module.eks.null_resource.wait_for_cluster[0] (local-exec): /bin/sh: wget: command not found from terraform apply.
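The failing command comes from the upstream module's wait_for_cluster null_resource; a curl-based equivalent could look like this (a sketch only — the exact upstream command string and resource address may differ):

```hcl
# Hypothetical curl-based rewrite of the upstream wait loop
resource "null_resource" "wait_for_cluster" {
  provisioner "local-exec" {
    command = "until curl -k -s $ENDPOINT/healthz >/dev/null; do sleep 4; done"

    environment = {
      ENDPOINT = aws_eks_cluster.this.endpoint # assumed resource address
    }
  }
}
```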
From what I saw, we are not using aws_eks_node_group for worker nodes. That was probably the most important improvement AWS made to EKS recently (roughly six months ago). Is there a specific reason for that?
Using the minimal configuration, just specifying vault_user, I get the following error when trying to use the resulting jx-requirements.yaml (cluster creation was OK):
error: overwriting the default requirements: loading requirements from file "/tmp/terraform/jx-requirements.yml": validation failures in YAML file /tmp/terraform/jx-requirements.yml:
ingress.tls.email: Invalid type. Expected: string, given: null
ingress.domain: Invalid type. Expected: string, given: null
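Until the module stops emitting nulls, a guess at a workaround is to set the corresponding module inputs so the generated file contains strings. The variable names below are assumptions inferred from the dns module's variables; check the module's variables.tf before relying on them:

```hcl
module "eks-jx" {
  source     = "jenkins-x/eks-jx/aws"
  vault_user = "my-vault-user"

  # Hypothetical inputs meant to populate ingress.domain and ingress.tls.email
  apex_domain = "example.com"
  tls_email   = "admin@example.com"
}
```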
When the vault_user input parameter is empty, it now causes the following error:
Error: Error attaching policy arn:aws:iam::<account>:policy/vault_<region>-<timestamp> to IAM User : InvalidParameter: 1 validation error(s) found.
- minimum field size of 1, AttachUserPolicyInput.UserName.
on modules/vault/main.tf line 173, in resource "aws_iam_user_policy_attachment" "attach_vault_policy_to_user":
173: resource "aws_iam_user_policy_attachment" "attach_vault_policy_to_user" {
There are several typos and issues with the section headings in the README which need to be addressed.
terraform -version
Terraform v0.12.24
provider.aws v2.61.0
provider.kubernetes v1.11.1
jx --version 2.1.35
Master branch latest
I am running this locally and the initial terraform apply sometimes takes 10+ minutes to complete. Despite looking for the source of the values <project id>_<zone>_<cluster name>, it is not clear to me where those values should be defined; my belief is that they expect autogenerated values from the Terraform output. Please let me know if you need any additional details; this is a really great project.
Code output after jx boot:
(snips)
Cloning https://github.com/jenkins-x/jenkins-x-boot-config.git @ master to jenkins-x-boot-config
Attempting to resolve version for boot config https://github.com/jenkins-x/jenkins-x-boot-config.git from https://github.com/jenkins-x/jenkins-x-versions.git
Booting Jenkins X
WARNING: failed to load ConfigMap jenkins-x-docker-registry in namespace jx: failed to get configmap jenkins-x-docker-registry in namespace jx, configmaps "jenkins-x-docker-registry" not found
WARNING: failed to load ConfigMap jenkins-x-docker-registry in namespace jx: failed to get configmap jenkins-x-docker-registry in namespace jx, configmaps "jenkins-x-docker-registry" not found
STEP: validate-git command: /bin/sh -c jx step git validate in dir: /Users/a/code/jenkinsxtests/default_cluster/jenkins-x-boot-config/env
Git configured for user: (snips)
STEP: verify-preinstall command: /bin/sh -c jx step verify preinstall --provider-values-dir="kubeProviders" in dir: /Users/a/code/jenkinsxtests/default_cluster/jenkins-x-boot-config
error: : unable to parse arn:aws:eks:us-east-1:193358141811:cluster/tf-jx-peaceful-garfish as
error: failed to interpret pipeline file /Users/a/code/jenkinsxtests/default_cluster/jenkins-x-boot-config/jenkins-x.yml: failed to run '/bin/sh -c jx step verify preinstall --provider-values-dir="kubeProviders"' command in directory '/Users/a/code/jenkinsxtests/default_cluster/jenkins-x-boot-config', output: ''
I'm getting the following error when running terraform destroy:
Error: Invalid index
on .terraform/modules/eks-jx/terraform-aws-eks-jx-1.0.5/main.tf line 85, in resource "local_file" "jx-requirements":
85: logs_storage_bucket = module.cluster.logs_jenkins_x[0]
|----------------
| module.cluster.logs_jenkins_x is empty tuple
The given key does not identify an element in this collection value.
Due to #56, I deleted the S3 buckets manually and re-ran terraform destroy. Could that be the cause of the issue?
This would be helpful when debugging issues that users of this module have. Similar to the main jenkinsx repo.