
import-map-deployer


The import-map-deployer is a backend service that updates import map json files. When using import-map-deployer, a frontend deployment is completed in two steps:

  1. Upload a javascript file to a static server or CDN, such as AWS S3, Azure Storage, Digital Ocean Spaces, or similar.
  2. Make an HTTP request (e.g. via curl or httpie) to modify an existing import map to point to the new file.

These two steps are often performed during a CI process, to automate deployments of frontend code.

import-map-deployer demo

Why does this exist?

The alternative to the import-map-deployer is to pull down the import map file, modify it, and reupload it during your CI process. That alternative has one problem: it doesn't properly handle concurrency. If two deployments occur in separate CI pipelines at the same time, it is possible they pull down the import map at the same time, modify it, and reupload. In that case, there is a race condition where the "last reupload wins," overwriting the deployment that the first reupload did.

When you have a single import map file and multiple services' deployment process modifies that import map, there is a (small) chance for a race condition where two deployments attempt to modify the import map at the same time. This could result in a CI pipeline indicating that it successfully deployed the frontend module, even though the deployment was overwritten with a stale version.

Explanation video

Tutorial video for import map deployer

Security

The import-map-deployer must have read / write access to the CDN / bucket that is storing your production import map. It exposes a web server that allows for modifying the state of your production application. It is password protected with HTTP basic authentication.

Securing the import-map-deployer

The following security constraints are highly recommended to secure the import-map-deployer:

  1. The import-map-deployer's web server is only exposed within your VPC.
  2. Your CI runners should either be within the VPC or tunnel into it when calling the import-map-deployer.
  3. The import-map-deployer has HTTP basic authentication enabled, and only the CI runners know the username and password.
  4. You have configured urlSafeList with a list of URL prefixes that are trusted in your import map. Any attempts to modify the state of production so that your import map downloads from other URLs will be rejected.
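As a concrete illustration of item 3, the credentials travel in a standard HTTP basic auth Authorization header. A minimal sketch of what a CI runner constructs (the username and password here are placeholders, not real defaults):

```javascript
// Build an HTTP basic auth header value from placeholder credentials.
// "admin" / "1234" are illustrative only -- use your own secrets in CI.
function basicAuthHeader(username, password) {
  const token = Buffer.from(`${username}:${password}`).toString("base64");
  return `Basic ${token}`;
}

// A CI runner would send this as the Authorization header; note that
// curl -u "$USER:$PASS" builds the same header for you.
```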

Secure alternative

If you are not comfortable running the import-map-deployer at all, you do not have to. Instead, give your CI runners read/write access to your import map file and perform all import map modifications inside of your CI process.

If you do this, decide whether you care about the deployment race condition described in the Why does this exist? section. If you are willing to live with that unlikely race condition, see these examples (1, 2) for sample CI commands.

Note that several object stores (notably Google Cloud Storage and Azure Storage) allow for optimistic concurrency when uploading files. By correctly sending pre-condition headers on those services, your CI process can correctly fail and/or retry in the event of a race condition. For further reading, see Azure's docs or Google Cloud's docs on concurrency.
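The precondition-based approach can be sketched with an in-memory stand-in for the object store. A real pipeline would send If-Match (Azure) or generation-match (Google Cloud) preconditions instead of the version check shown here; all names below are illustrative:

```javascript
// In-memory stand-in for an object store that supports preconditions.
class VersionedStore {
  constructor(initial) { this.value = initial; this.version = 0; }
  read() { return { value: this.value, version: this.version }; }
  // The write succeeds only if the object is unchanged since it was read,
  // mimicking a precondition header on a real object store.
  writeIfUnchanged(value, expectedVersion) {
    if (expectedVersion !== this.version) return false; // precondition failed
    this.value = value;
    this.version += 1;
    return true;
  }
}

// Read-modify-write with retry: if another deploy slips in between the read
// and the write, the precondition fails and we simply try again.
function updateImportMap(store, moduleName, url, maxRetries = 5) {
  for (let i = 0; i < maxRetries; i++) {
    const { value, version } = store.read();
    const map = JSON.parse(value);
    map.imports[moduleName] = url;
    if (store.writeIfUnchanged(JSON.stringify(map), version)) return true;
  }
  return false;
}
```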

If you do want to address the deployment race condition without using import-map-deployer, we'd love to hear what you come up with. Consider opening a PR to these docs that explains what you did!

Example repository

This github repository shows an example of setting up your own Docker image that can be configured specifically for your organization.


Installation and usage

Docker

import-map-deployer is available on DockerHub as singlespa/import-map-deployer. If you want to run just the single container, you can run docker-compose up from the project root. When running via docker-compose, it will mount a volume in the project root's directory, expecting a config.json file to be present.

Example Dockerfile

FROM singlespa/import-map-deployer:<version-tag>

ENV IMD_USERNAME= IMD_PASSWORD=

COPY conf.js /www/

CMD ["yarn", "start", "conf.js"]

Node

To run the import-map-deployer in Node, run the following command: npx import-map-deployer config.json

It is available as import-map-deployer on npm.

The default web server port is 5000. To run the web server with a custom port, set the PORT environment variable.

$ PORT=8080 npx import-map-deployer config.json

Configuration file

The import-map-deployer expects a configuration file to be present so it (1) can password protect deployments, and (2) knows where and how to download and update the "live" import map.

If no configuration file is present, import-map-deployer defaults to using the filesystem to host the manifest file, which is called sofe-manifest.json and created in the current working directory. If a username and password are included, HTTP basic auth will be required; if they are not provided, no HTTP auth will be needed.

Here are the properties available in the config file:

  • urlSafeList (optional, but highly recommended): An array of strings and/or functions that indicate which URLs are trusted when updating the import map. A string value is treated as a URL prefix - for example https://unpkg.com/. A function value is called with a URL object and must return a truthy value when the URL is trusted. Any attempt to update the import map to include an untrusted URL will be rejected. If you omit urlSafeList, all URLs are considered trusted (not recommended).
  • packagesViaTrailingSlashes (optional, defaults to true): A boolean that indicates whether trailing slash package records are automatically generated on PATCH /services requests. Set it to false to turn this behavior off. For more information and examples, see the standard guideline.
  • manifestFormat (required): A string that is either "importmap" or "sofe", which indicates whether the import-map-deployer is interacting with an import map or a sofe manifest.
  • locations (required): An object specifying one or more "locations" (or "environments") for which you want the import-map-deployer to control the import map. The special default location is what will be used when no query parameter ?env= is provided in calls to the import-map-deployer. If no default is provided, the import-map-deployer will create a local file called import-map.json that will be used as the import map. The keys in the locations object are the names of environments, and the values are strings that indicate how the import-map-deployer should interact with the import map for that environment. For more information on the possible string values for locations, see the Built-in IO Methods section.
  • username (optional): The username for HTTP auth when calling the import-map-deployer. If username and password are omitted, anyone can update the import map without authenticating. This username is not related to authenticating with S3/Digital Ocean/Other, but rather is the username your CI process will use in its HTTP request to the import-map-deployer.
  • password (optional): The password for HTTP auth when calling the import-map-deployer. If username and password are omitted, anyone can update the import map without authenticating. This password is not related to authenticating with S3/Digital Ocean/Other, but rather is the password your CI process will use in its HTTP request to the import-map-deployer.
  • port (optional): The port to run the import-map-deployer on. Defaults to 5000.
  • region (optional): The AWS region to be used when retrieving and updating the import map. This can also be specified via the AWS_DEFAULT_REGION environment variable, which is the preferred method.
  • s3.putObject (optional): An object that is merged with the default putObject parameters. This can contain and override any of the valid request options, such as ACL, encoding, SSE, etc. The SDK options can be found here.
  • s3Endpoint (optional): The url for aws-sdk to call when interacting with S3. Defaults to AWS' default domain, but can be configured for Digital Ocean Spaces or other S3-compatible APIs.
  • readManifest(env) (optional): A javascript function that will be called to read the import map. One argument is provided, a string env indicating which location to read from. This allows you to implement your own way of reading the import map. The function must return a Promise that resolves with the import map as a string. Since javascript functions are not part of JSON, this option is only available if you provide a config.js file (instead of config.json).
  • writeManifest(importMapAsString, env) (optional): A javascript function that will be called to write the import map. Two arguments are provided, the first being the import map as a string to be written, and the second is the string env that should be updated. This allows you to implement your own way of writing the import map. The function must return a Promise that resolves with the import map as an object. Since javascript functions are not part of JSON, this option is only available if you provide a config.js file (instead of config.json).
  • cacheControl (optional): Cache-control header that will be set on the import map file when the import-map-deployer is called. Defaults to public, must-revalidate, max-age=0.
  • alphabetical (optional, defaults to false): A boolean that indicates whether to sort the import-map alphabetically by service/key/name.
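To make the urlSafeList semantics concrete, here is a sketch of how string prefixes and predicate functions could be evaluated. This mirrors the documented behavior (string = URL prefix, function = predicate over a URL object) but is illustrative only, not the deployer's actual source:

```javascript
// Example safe list mixing a string prefix and a predicate function.
const urlSafeList = [
  "https://unpkg.com/", // string entry: treated as a URL prefix
  (url) => url.hostname.endsWith(".my-organization-cdn.com"), // function entry
];

// A URL is trusted if any entry matches it.
function isTrusted(urlString, safeList = urlSafeList) {
  return safeList.some((entry) =>
    typeof entry === "function"
      ? Boolean(entry(new URL(urlString)))
      : urlString.startsWith(entry)
  );
}
```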

Option 1: json file

The below configuration file will set up the import-map-deployer to do the following:

  • Requests to import-map-deployer must use HTTP auth with the provided username and password.
  • The import maps are hosted on AWS S3. This is indicated with the s3:// prefix.
  • There are three different import maps being managed by this import-map-deployer: default, prod, and test.
{
  "urlSafeList": ["https://unpkg.com/", "https://my-organization-cdn.com/"],
  "username": "admin",
  "password": "1234",
  "manifestFormat": "importmap|sofe",
  "locations": {
    "default": "import-map.json",
    "prod": "s3://cdn.canopytax.com/import-map.json",
    "test": "import-map-test.json"
  }
}

Option 2: javascript module

Example config.js

// config.js
module.exports = {
  // The username that must be provided via HTTP auth when calling the import-map-deployer
  username: "admin",
  // The password that must be provided via HTTP auth when calling the import-map-deployer
  password: "1234",
  // The type of json file that should be updated. Import-maps are two ways of defining URLs for javascript module.
  manifestFormat: "importmap|sofe",
  // Optional, if you are using a built-in "IO Method"
  readManifest: function (env) {
    return new Promise((resolve, reject) => {
      const manifest = ""; //read a string from somewhere
      resolve(manifest); //must resolve a string
    });
  },
  // Optional, if you are using a built-in "IO Method"
  writeManifest: function () {
    return new Promise((resolve, reject) => {
      //write the file....
      resolve(); //you don't have to call resolve with any value
    });
  },
};

Setting Authentication Credentials

Basic auth credentials can be set either in the config.json file (see above) or using the following environment variables:

  • IMD_USERNAME
  • IMD_PASSWORD

ℹ️ Both environment variables must be set for them to take effect.

⚠️ The above environment variables will override the username and password from the config file.

Building image using docker

To build image using default settings

$ docker build .
# ...

To build the image with a custom container port, set the PORT build argument

$ docker build --build-arg PORT=8080 .
# ...

Built-in IO Methods

The import-map-deployer knows how to update import maps that are stored in the following ways:

AWS S3

If your import map json file is hosted by AWS S3, you can use the import-map-deployer to modify the import map file by using an s3:// URL in the locations config object.

The format of the string is s3://bucket-name/file-name.json

import-map-deployer relies on the AWS CLI environment variables for authentication with S3.

config.json:

{
  "manifestFormat": "importmap",
  "locations": {
    "prod": "s3://mycdn.com/import-map.json"
  }
}
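For illustration, the bucket and key can be split out of an s3:// location string like so (a sketch of the documented format, not the deployer's actual parser):

```javascript
// Split "s3://bucket-name/file-name.json" into bucket and key.
function parseS3Location(location) {
  if (!location.startsWith("s3://")) throw new Error("not an s3:// location");
  const rest = location.slice("s3://".length);
  const slash = rest.indexOf("/");
  return { bucket: rest.slice(0, slash), key: rest.slice(slash + 1) };
}
```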

Digital Ocean Spaces

If your import map json file is hosted by Digital Ocean Spaces, you can use the import-map-deployer to modify the import map file by using a spaces:// URL in the locations config object.

The format of the string is spaces://bucket-name.digital-ocean-domain-stuff.com/file-name.json. Note that the name of the Bucket is everything after spaces:// and before the first . character.
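The naming rule above can be sketched as follows (illustrative only, not the deployer's actual source): the bucket is everything between spaces:// and the first . character, and the key is everything after the first /.

```javascript
// Split "spaces://bucket-name.region.digitaloceanspaces.com/file-name.json"
// into bucket and key, per the rule described above.
function parseSpacesLocation(location) {
  const rest = location.slice("spaces://".length);
  return {
    bucket: rest.slice(0, rest.indexOf(".")), // everything before the first "."
    key: rest.slice(rest.indexOf("/") + 1),   // everything after the first "/"
  };
}
```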

Since the Digital Ocean Spaces API is compatible with the AWS S3 API, import-map-deployer uses aws-sdk to communicate with Digital Ocean Spaces. As such, all options that apply to AWS S3 also apply to Digital Ocean Spaces. You need to provide the AWS CLI environment variables for authentication with Digital Ocean Spaces, since import-map-deployer uses aws-sdk to communicate with Digital Ocean.

Instead of an AWS region, you should provide an s3Endpoint config value that points to a Digital Ocean region.

config.json:

{
  "manifestFormat": "importmap",
  "s3Endpoint": "https://nyc3.digitaloceanspaces.com",
  "locations": {
    "prod": "spaces://mycdn.com/import-map.json"
  }
}

Minio

Minio also has an S3-compatible API, so you can use a process similar to Digital Ocean Spaces: use the import-map-deployer to modify the import map file by specifying a spaces:// URL in the locations config object.

Instead of an AWS region, you should provide an s3Endpoint config value that points to your root domain.

config.json:

{
  "manifestFormat": "importmaps",
  "s3Endpoint": "https://<selfhosted.domain>",
  "locations": {
    "default": "spaces://minio.<selfhosted.domain>/import-map.json"
  }
}

Azure Storage

Note that you must have the environment variables AZURE_STORAGE_ACCOUNT and AZURE_STORAGE_ACCESS_KEY, or AZURE_STORAGE_CONNECTION_STRING, defined for authentication.

If you wish to provide custom authentication keys for specific environments, you can also provide them via the keys azureConnectionString, azureAccount, or azureAccessKey.

It's not recommended to put authentication keys in code. Always provide them via environment variables.

config.js:

module.exports = {
  manifestFormat: "importmap",
  locations: {
    prod: {
      azureContainer: "static",
      azureBlob: "importmap.json",
      azureConnectionString: process.env.AZURE_STORAGE_CONNECTION_STRING_PROD, // optional
      azureAccount: process.env.AZURE_STORAGE_ACCOUNT_PROD, // optional
      azureAccessKey: process.env.AZURE_STORAGE_ACCESS_KEY_PROD, // optional
    },
  },
};

Google Cloud Storage

Note that you must have the GOOGLE_APPLICATION_CREDENTIALS environment variable set for authentication.

config.json:

{
  "manifestFormat": "importmap",
  "locations": {
    "prod": "gs://name-of-bucket/importmap.json"
  }
}

File system

If you'd like to store the import map locally on the file system, provide the name of a file in your locations instead.

{
  "manifestFormat": "importmap",
  "locations": {
    "prod": "prod-import-map.json"
  }
}

Endpoints

This service exposes the following endpoints

GET /health

An endpoint for health checks. It will return an HTTP 200 with a textual response body saying that everything is okay. You may also call / as a health check endpoint.

GET /environments

You can retrieve the list of environments (locations) by making a GET request to /environments.

Example using HTTPie:

http :5000/environments

Example using cURL:

curl localhost:5000/environments

Response:

{
  "environments": [
    {
      "name": "default",
      "aliases": ["prod"],
      "isDefault": true
    },
    {
      "name": "prod",
      "aliases": ["default"],
      "isDefault": true
    },
    {
      "name": "staging",
      "aliases": [],
      "isDefault": false
    }
  ]
}

GET /import-map.json?env=prod

You can retrieve the import map file by making a GET request.

Example using HTTPie:

http :5000/import-map.json\?env=prod

Example using cURL:

curl localhost:5000/import-map.json\?env=prod

PATCH /import-map.json?env=prod

You can modify the import map by making a PATCH request. The import map should be sent in the HTTP request body and will be merged into the import map controlled by import-map-deployer.

If you have an import map called importmap.json, here is how you can merge it into the import map deployer's import map.
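The merge semantics can be sketched as a shallow merge in which entries from the request body win over existing entries with the same key. This is illustrative only; the deployer's actual merge may differ in details (e.g. scopes handling for sofe manifests):

```javascript
// Shallow-merge a patch import map into an existing one. Keys present in
// both maps take the patch's value; all other keys are preserved.
function mergeImportMaps(existing, patch) {
  return {
    imports: { ...(existing.imports || {}), ...(patch.imports || {}) },
    scopes: { ...(existing.scopes || {}), ...(patch.scopes || {}) },
  };
}
```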

Note that the skip_url_check query param indicates that the import-map-deployer will update the import map even if it is not able to reach it via a network request.

Example using HTTPie:

http PATCH :5000/import-map.json\?env=prod < importmap.json

# Don't check whether the URLs in the import map are publicly reachable
http PATCH :5000/import-map.json\?env=prod\&skip_url_check < importmap.json

Example using cURL:

curl -X PATCH localhost:5000/import-map.json\?env=prod --data "@importmap.json" -H "Accept: application/json" -H "Content-Type: application/json"

# Don't check whether the URLs in the import map are publicly reachable
curl -X PATCH localhost:5000/import-map.json\?env=prod\&skip_url_check --data "@importmap.json" -H "Accept: application/json" -H "Content-Type: application/json"

PATCH /services?env=stage&packageDirLevel=1

You can PATCH /services to add or update a service. The following JSON body is expected:

Note that the skip_url_check query param indicates that the import-map-deployer will update the import map even if it is not able to reach it via a network request.

Note that the packageDirLevel query param indicates the number of directories to remove when determining the root directory for the package. The default is 1. Note that this option only takes effect if packagesViaTrailingSlashes is set to true.

Body:

{
  "service": "my-service",
  "url": "http://example.com/path/to/my-service.js"
}

Response:

{
  "imports": {
    "my-service": "http://example.com/path/to/my-service.js",
    "my-service/": "http://example.com/path/to/"
  }
}
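The trailing-slash entry in the response above can be derived from the service URL. Here is a sketch of the packageDirLevel rule (a hypothetical helper, not the deployer's actual source):

```javascript
// Derive the trailing-slash package root from a service URL by dropping the
// last packageDirLevel path segments (the filename counts as one segment).
function packageDir(url, packageDirLevel = 1) {
  const parts = url.split("/");
  return parts.slice(0, parts.length - packageDirLevel).join("/") + "/";
}
```

With the default packageDirLevel of 1, only the filename is dropped; higher values remove additional directories from the end of the path.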

Example using HTTPie:

http PATCH :5000/services\?env=stage service=my-service url=http://example.com/my-service.js

# Don't check whether the URL in the request is publicly reachable
http PATCH :5000/services\?env=stage\&skip_url_check service=my-service url=http://example.com/my-service.js

Example using cURL:

curl -d '{ "service":"my-service","url":"http://example.com/my-service.js" }' -X PATCH localhost:5000/services\?env=beta -H "Accept: application/json" -H "Content-Type: application/json"

# Don't check whether the URL in the request is publicly reachable
curl -d '{ "service":"my-service","url":"http://example.com/my-service.js" }' -X PATCH localhost:5000/services\?env=beta\&skip_url_check -H "Accept: application/json" -H "Content-Type: application/json"

DELETE /services/{SERVICE_NAME}?env=alpha

You can remove a service by sending a DELETE with the service name. No request body needs to be sent. Example:

DELETE /services/my-service

Example using HTTPie:

http DELETE :5000/services/my-service

Example using cURL:

curl -X DELETE localhost:5000/services/my-service

Special Chars

This project uses URI encoding. If a service name contains special characters like @ or /, you must use the corresponding percent-encoded UTF-8 characters in the URL.

For example, for a service named @company/my-service in your import-map.json, replace those characters with their UTF-8 percent-encodings (see a UTF-8 encoding reference):

curl -X DELETE localhost:5000/services/%40company%2Fmy-service
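In javascript, encodeURIComponent produces exactly this encoding, which can be handy when scripting the DELETE call:

```javascript
// Percent-encode a service name for use in the DELETE URL path.
const serviceName = "@company/my-service";
const encoded = encodeURIComponent(serviceName);
// "@" becomes %40 and "/" becomes %2F, giving "%40company%2Fmy-service".
```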

import-map-deployer's People

Contributors

bartjanvanassen, blittle, brandones, cristianosl, danopia, dckesler, dependabot[bot], dijitali, frehner, iot-resister, joeldenning, kfrederix, kgehmlich, kristianmandrup, lhtdesignde, mellis481, milankovacic, nhumrich, thawkin3, themcmurder, tungurlakachakcak, vongohren, yzalvin, zleight1


import-map-deployer's Issues

[Question] Is PATCH /import-map.json supposed to work?

It doesn't seem to, and I am not clear on why I would use it over /services.

If there is no difference, can I remove this /import-map.json PATCH from the docs? I just spent a stupid amount of time bumbling around trying to update it.

How is the CDN URL the same for every environment?

@joeldenning

https://github.com/single-spa/import-map-deployer/blob/main/examples/ci-for-javascript-repo/gitlab-aws-no-import-map-deployer/.gitlab-ci.yml

I am trying to implement a deploy process for our single-spa apps. The above example has the same CDN URL for all environments, but each environment has its own bucket. So how would one CDN URL point to different buckets?

If that is not the case and the CDN URLs are supposed to be different, that leads me to another question: in the root config we have to specify the CDN URL used to download import-map.json. Is there an example of how different CDN URLs can be configured in the root config for different environments?

The way I am doing it right now is to pass an environment variable for the CDN URL when I build the app, which gets substituted into index.html for importmap.json.

To do that, I have to build my apps again for the other environments (qa, prod) before deploying to S3.

Non-existent environments in conf.js should return an error instead of the default environment settings

I have configured two environments in my conf.js (below), which work fine when I GET and PATCH import-map.json. But it always returns the sit1 file for unknown environments like sit3 and sit4, which do not even exist in conf.js. Ideally it should throw an error if a wrong environment name is passed in the query string.

The URL below still works, but should not:
import-map.json?env=sit3

module.exports = {
  manifestFormat: "importmap",
  packagesViaTrailingSlashes: false,
  locations: {
    sit1: "/www/mfe-sit/sit1/microfrontends/import-map.json",
    sit2: "/www/mfe-sit/sit2/microfrontends/import-map.json",
  },
};

Auth environment variables have no effect

Setting HTTP_USERNAME and HTTP_PASSWORD as in the Dockerfile example has no noticeable effect. They neither set nor override the username and password values for basic auth.

Have I misunderstood the point of these environment variables?

Feature request: allow non-auth access to health check

When basic auth is enabled, accessing /health will return a 401 if the auth header is missing. It would be nice to have the ability to turn off basic authentication for the /health request path.

In the meantime, / will work.

Support per environment access keys for azure

So when the storage bucket is different per application, you have to create an instance of the import-map-deployer per environment, because the process env vars only support one key (process.env.AZURE_STORAGE_...). It would be nice if you could override the keys from the config file.

Add cache control configuration option

This issue is a continuation of the discussion from this item regarding adding a new configuration option to be able to set the cache-control header for the importmap.json file. The default is public, must-revalidate, max-age=0 which results in the file being cached at cloud edge locations for an indeterminate amount of time.

In the linked issue, there was talk of making it a new configuration item. After reviewing the documentation again, however, I'm wondering if it's currently possible using the s3.putObject property. So... including something like this in the import-map-deployer's config.json:

{
  "s3": {
    "putObject": {
      "CacheControl": "no-store"
    }
  }
}

@joeldenning Would this work or is there a need to add something new? Based on how it was coded, it looks like it would work.

Understanding http auth for getting json

Hi,

I am trying to use the deployer to avoid cache problems in my microfrontends.

I want updates to import-map.json to require some kind of security, like a username and password, but when I fetch it from my container (index.html) I don't want to use HTTP auth.

Is there any configuration for the import-map-deployer to act this way?

I'm getting "cannot read importmap.json" during health check

Right now I'm getting this issue when building and deploying to Cloud Run. I have solved it by adding USER root in my wrapping Dockerfile.

But it does not feel good, so I wonder what is supposed to be possible?

There is a line in this Dockerfile that sets www to be owned by root, and I believe that conflicts with the user being set to node. That in turn conflicts with the IO operation of reading the local importmap file.

So just curious to what is the right way of doing it?

Tests fail after adding `config.json`

Once a config.json file is added to the root of the project, the tests fail:

expected 200 "OK", got 401 "Unauthorized"

This looks to be an issue with the setConfig method in src/config.js: looking at the code, it seems that the intention was to allow the config to be overridden with the arg passed into this method. However, as the tests fail, it seems that this is not the case?

Create empty file when not found

I wonder about one thing. I found that this service expects a file to already exist in the storage location for it to work? Is that correct, or is it something I have done wrong?

The problem with this is that when initializing the infrastructure, it is unclear who should control this empty state. Setting up with terraform and creating the file as a provisioning step is not optimal, as the terraform state changes and the file is not consistent. So I believe this service should handle an empty bucket, and potentially just create an empty file.

But maybe it already does, and I had the wrong setup when testing out?

Can't delete a service name with /

I patched a service via an HTTP call to the import-map-deployer API endpoint with a name that contains a slash, like @divilo/authentication.

I think import-map-deployer should validate the service name before adding its reference. Otherwise the delete method won't work.

{
  "imports": {
    "single-spa": "https://cdn.jsdelivr.net/npm/[email protected]/lib/system/single-spa.min.js",
    "vue": "https://cdn.jsdelivr.net/npm/[email protected]/dist/vue.min.js",
    "vue-router": "https://cdn.jsdelivr.net/npm/[email protected]/dist/vue-router.min.js",
    "@divilo/authentication": "https://assets.divilodemo.com/authentication/cef4e230dde1d946135647428ef4a65e/divilo-authentication.js"
  }
}


How to use the Docker Hub image to push to Kubernetes cluster

I'm following along with your import-map-deployer video tutorial.

I see there is a live-import-map-deployer.

This is an example of extending the import-map-deployer from Docker Hub:

FROM canopytax/import-map-deployer

I'm wondering about the conf.js file. I've seen it in the video as config.json, I believe? Please add something to the readme for live-import-map-deployer and update it to match the latest single-spa/import-map-deployer image and conventions.

It took me a while to figure all this out.

What about using the gcloud CLI to push the image and interact with the cluster?

Thanks for everything. Very excited to get this infrastructure all working :)

Docker Image Versioned Tags

Hi,

I've noticed that the Docker image on Docker Hub only has a "latest" tag. It would be great if new tagged releases could generate a new tag on Docker Hub as well, so it is possible for us to run a specific version in Production.

Thanks!

Add cache control header to import maps in Microsoft Azure

Currently S3, Digital Ocean, and Google Cloud Storage all are instructed to send a Cache-Control HTTP response header when serving the import map file.

References:

cacheControl: "public, must-revalidate, max-age=0",

CacheControl: "public, must-revalidate, max-age=0",

However, the same thing does not exist for Microsoft Azure. We should add this to the Azure integration. The file to modify is https://github.com/single-spa/import-map-deployer/blob/master/src/io-methods/azure.js

Typescript support / source code + Deno?

There has been some interest in porting the source code to Typescript (e.g. @dgreene1 has expressed interest)

At the same time, Deno has just been released which supports TS natively. There have been some that have been interested in trying it out/supporting it. (@filoxo maybe?)

Perhaps we can take this as an opportunity to do both at the same time? Create a Deno version of this project that's written in TS?

Or maybe that would be too much to bite off in one pass, so perhaps we create a TS fork of this repo, and then take that as source material and port it to Deno?

Just spitballing here. Thoughts?

Contribute - Support for Angular Module Federation (mf.manifest.json)

Hello,

I would love to contribute to this project. I am not using single-spa - I am using Angular Module Federation.

They have a concept very similar to the import map:

https://github.com/angular-architects/module-federation-plugin/blob/main/libs/mf/tutorial/tutorial.md#part-4c-use-a-registry

I would like to add a simple parameter "type = mef" to the import map deployer to support this type of file.

Please confirm that you will add it :)

Should DELETE delete both entries - with and without trailing slash?

After reading it, I'm not sure if this is a duplicate of #62 or not. My apologies if it is.

  1. With packagesViaTrailingSlashes=true:

Calling PATCH http://server:5000/services
with this body:

{
    "service": "TestService",
    "url": "https://apis.google.com/js/client.js"
}

Adds two entries to the import map:

{
    "imports": {
        "TestService": "https://apis.google.com/js/client.js",
        "TestService/": "https://apis.google.com/js/"
    }
}

Great.

  2. My question: When you call DELETE http://server:5000/services/TestService, it only removes one entry, and the import map now looks like:
{
    "imports": {
        "TestService/": "https://apis.google.com/js/"
    }
}

Should it delete "TestService/" as well? I tried making a separate DELETE call, using various combinations such as DELETE server:5000/services/TestService/, etc.

Minio Support

Just wanted to share. I got this working as it is similar to digital ocean spaces:

config.json

{
  "manifestFormat": "importmaps",
  "s3Endpoint": "https://<selfhosted.domain>",
  "locations":{
    "default": "spaces://minio.<selfhosted.domain>/import-map.json"
  }
}

docker-compose.yml

version: "3.7"
services:
  import-map-deployer:
    image: singlespa/import-map-deployer
    ports:
      - 5000:5000
    environment:
      AWS_ACCESS_KEY_ID: $MINIO_ID
      AWS_SECRET_ACCESS_KEY: $MINIO_SECRET
    volumes:
      - ./config.json:/www/config.json

Throwing request error on updates

I'm getting following error for PATCH requests:

RequestError: The downloaded data did not match the data from the server. To be sure the content is the same, you should download the file again.

I'm not sure what file this refers to. I'm running the deployer as a Cloud Run instance with max_instances set to 1. Does this implementation require the service to be running all the time?

Cannot delete from import map if service name includes a `/` in it

Having a `/` in a service name is pretty common for namespacing. Example: `@openmrs/root-config`.

But deleting such a service from the import map doesn't work, probably because the `/` is interpreted as a separator for a different part of the route entirely.
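
A workaround worth checking (just a sketch — I haven't confirmed the router decodes it this way): percent-encode the service name so the slash stays inside a single route segment:

```javascript
// Percent-encode the service name so "/" doesn't split the route path.
const serviceName = "@openmrs/root-config";
const encoded = encodeURIComponent(serviceName);
console.log(encoded); // %40openmrs%2Froot-config
// e.g. DELETE http://localhost:5000/services/%40openmrs%2Froot-config
```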

Add ability to specify which import map to patch

Hello,
we are happy users of import-map-deployer. We have it configured to update an import map hosted in an S3 bucket. The URL looks like s3://<something>/prod/import-map.json.

We are going to have a brand new product with its own import map. However, duplicating all our AWS infrastructure for the new product seems like overkill. Instead, we'd like import-map-deployer to be able to update s3://<something>/prod/{productName}/import-map.json, where productName is sent in the request body of the PATCH /import-map.json endpoint.
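
Concretely, here's a rough sketch of what we have in mind (the `{productName}` placeholder and the bucket name are illustrative; none of this exists in the current API):

```javascript
// Hypothetical sketch: substitute a productName from the PATCH request body
// into the configured location template before reading/writing it.
function resolveLocation(template, productName) {
  return template.replace("{productName}", productName);
}

const resolved = resolveLocation(
  "s3://my-bucket/prod/{productName}/import-map.json",
  "new-product"
);
console.log(resolved); // s3://my-bucket/prod/new-product/import-map.json
```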

Is there any reason import-map-deployer does not currently allow specifying which import map to update during a PATCH operation?

I am ready to open a pull request implementing that feature, but first wanted to make sure it conforms to the vision of this project.

Thanks!

CC @joeldenning @TheMcMurder

Noisy logging of health check requests

We use the /health endpoint as a Kubernetes readiness probe, which results in a lot of log noise because it gets called every few seconds.

Could we suppress logging of the /health endpoint if it returns a 200 response?

Happy to make a PR but looking for confirmation of:

  • Is the proposal OK?
  • Should I bother making it a configurable setting?
  • If so, should it be off or on by default?
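
As a sketch of what I'm proposing (assuming the request logging goes through something like morgan, which supports a `skip` predicate — names here are illustrative):

```javascript
// Hypothetical predicate: suppress log lines for successful /health probes.
// Usable as morgan's skip option:
//   app.use(morgan("combined", { skip: skipHealthLogs }))
function skipHealthLogs(req, res) {
  return req.path === "/health" && res.statusCode === 200;
}

console.log(skipHealthLogs({ path: "/health" }, { statusCode: 200 })); // true
console.log(skipHealthLogs({ path: "/services" }, { statusCode: 200 })); // false
```

Failed health checks (non-200) would still be logged, which seems like the right default.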

How to delete scopes?

I was able to add scopes via PATCH import-map.json, but I can't seem to find a way to clean them up after adding them. Is there currently a solution for this?

NPX example doesn't work

I'm setting up the recommended single-spa workflow and in testing this tool I got some weird behavior.

If you run the example npx import-map-deployer config.json (inside another project, not a clone of this repo), you get a 404 error from the npm registry:

npm ERR! code E404
npm ERR! 404 Not Found - GET https://registry.npmjs.org/import-map-deployer - Not found
npm ERR! 404
npm ERR! 404  'import-map-deployer@latest' is not in the npm registry.
npm ERR! 404 You should bug the author to publish it (or use the name yourself!)
npm ERR! 404
npm ERR! 404 Note that you can also install from a
npm ERR! 404 tarball, folder, http url, or git url.

My guess is that either you're assuming the user is running it with a private registry mirror, or something else is going on.

If you could clarify the proper way to run this from Node, with a little more context, it would probably resolve this issue.

If you need more information or would like me to clarify more, let me know. Thanks for the awesome work on all the single-spa ecosystem so far!

IAM role support to prevent the need to rotate keys for S3

Enhancement Request
A new configuration option would allow the import map deployer to run as a specific IAM Role (cli ref provided here) which would allow us to avoid using AWS CLI keys.

Context / User Story:

One of the AWS experts at my company, @dsmith2828, pointed out that due to our company's key rotation policy we would have to log into our EC2 instance every 3 months to update the AWS CLI keys for the import-map-deployer.

Work Plan:
I would write the PR and provide unit tests.

Enable configuration of exposed container port

Currently the Dockerfile hardcodes the exposed port to 5000:

EXPOSE 5000

However, web-server.js allows a custom port, supplied via the config file that is passed as an argument when the web server is started:

let config = require('./config.js').config

app.listen(config.port || 5000,
  // ...
)

This can apparently be done nicely via ARG; see get-environment-variable-value-in-dockerfile:

ARG container_port=5000
ENV PORT=$container_port
EXPOSE $PORT

Pretty neat

$ docker build --build-arg container_port=5000 .

Eslint validation

Hey!

I was looking at some PRs I was writing on this project and noticed that ESLint is run by Travis, though my editor + pre-commit hook only checks Prettier. Can we also add ESLint to the project as a dev dependency + pre-commit hook? I noticed the configuration is already in the project.

Feature Request: Set S3 ACL other than public-read

We have a setup where setting the ACL of the import-map.json in S3 needs to be something other than public-read.

Right now in the s3 code, the PUT call defaults to 'public-read' and can't be overridden.

I think the best way would be another configuration variable, that validates against the list of pre-canned ACLs here: https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
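
A sketch of what I'm thinking (the `s3ACL` config key and the helper below are hypothetical, not part of the current code):

```javascript
// Hypothetical sketch: validate a configured canned ACL and thread it into
// the S3 putObject params, falling back to today's hardcoded "public-read".
// List taken from the AWS canned-ACL documentation linked above.
const CANNED_ACLS = [
  "private",
  "public-read",
  "public-read-write",
  "authenticated-read",
  "aws-exec-read",
  "bucket-owner-read",
  "bucket-owner-full-control",
];

function buildPutParams(bucket, key, body, config = {}) {
  const acl = config.s3ACL || "public-read";
  if (!CANNED_ACLS.includes(acl)) {
    throw new Error(`Invalid canned ACL: ${acl}`);
  }
  return { Bucket: bucket, Key: key, Body: body, ACL: acl };
}

console.log(buildPutParams("b", "import-map.json", "{}", { s3ACL: "private" }).ACL); // private
```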

I'll open a PR for this, let me know if there is anything to know or watch out for.

Documentation: aliases missing

The documentation shows that an alias can be created to set a default environment, but the syntax is missing from the documentation.

I would like to change my current setup to make prod the default but I'm not sure how to do it:

{
    "manifestFormat": "importmap",
    "locations": {
        "prod": "google://my-bucket/prod-import-map.json",
        "staging": "google://my-bucket/staging-import-map.json",
        "dev": "google://my-bucket/dev-import-map.json"
    },
    ...
}

AWS, s3, AccessControlListNotSupported

Hello.
When I try to use AWS S3 bucket as storage for import-maps with "ACLs disabled" option enabled for the bucket I get this error:
"Could not patch service -- AccessControlListNotSupported: The bucket does not allow ACLs".

From documentation: "If the bucket that you're uploading objects to uses the bucket owner enforced setting for S3 Object Ownership, ACLs are disabled and no longer affect permissions. Buckets that use this setting only accept PUT requests that don't specify an ACL or PUT requests that specify bucket owner full control ACLs, such as the bucket-owner-full-control canned ACL or an equivalent form of this ACL expressed in the XML format. PUT requests that contain other ACLs (for example, custom grants to certain Amazon Web Services accounts) fail and return a 400 error with the error code AccessControlListNotSupported." https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#putObject-property

Am I missing something?
Thank you in advance.

Authentication broken

Hi! First of all: thank you for this great tool!
While playing with this yesterday, I noticed that the authentication was not working.
A recent change to config.js removed exports.config, but it was still used by io-operations.js.

I have linked a pull request to fix this.

Offering unit testing help

After #20 is complete, I'd be interested in helping to write some unit tests. Would @joeldenning be able to write some expectations here so that I can fill out the setups?

The following are just ideas. I'm open to however you want to help me to help you guys add test coverage. Since you @frehner and @joeldenning have been so kind, I'd really like to find a way to repay that kindness and to give back to the single-spa community. :)

In other words, if I were provided information like this, I would be able to write the test code:

  • Given the following setup (i.e. Arrange)
  • Under this action (i.e. Act)
  • I expect this (i.e. Assert)

An example might be:

  • Given a malformed import map
  • That a user tries to patch
  • I expect a 400 error with a message of "The import map was malformed"

How to make an HTTP request with curl or httpie when `username` and `password` are set?

Hello, I'm experimenting with the Dockerfile
and setting HTTP_USERNAME and HTTP_PASSWORD,

but the request fails when I call it normally (as in the docs):

vctqs1$ http :5000/enviroments
HTTP/1.1 401 Unauthorized
Connection: keep-alive
Content-Length: 0
Date: Wed, 09 Feb 2022 10:52:03 GMT
Keep-Alive: timeout=5
WWW-Authenticate: Basic realm="sofe-deplanifester"
X-Powered-By: Express
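
For what it's worth, the 401 means the auth is working; the credentials just have to be passed explicitly: with httpie that's `http -a username:password :5000/environments`, with curl it's `curl -u username:password localhost:5000/environments` (also note the path typo above: `enviroments` vs `environments`). The header is just base64 of `username:password`; a sketch with assumed example credentials:

```javascript
// Build a Basic auth header; "admin"/"secret" are assumed example credentials.
const auth = Buffer.from("admin:secret").toString("base64");
console.log(`Authorization: Basic ${auth}`); // Authorization: Basic YWRtaW46c2VjcmV0
// Node 18+: fetch("http://localhost:5000/environments",
//   { headers: { Authorization: `Basic ${auth}` } })
```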

Allow adding/updating service with non-public url

When I run the following curl command against a running instance of import-map-deployer:

curl -d '{"service":"service-name", "url":"service-url"}' -X PATCH localhost:5000/services\?env=dev ...

it first verifies that the file at service-url exists by making a network request. However, if service-url is not public, this check fails.

It might be preferable if this verification could optionally be disabled, to allow adding service URLs that are secured in some cloud storage.

npx command not working on Node 12.x

Hey, it's my first time using this. Very nice tooling!

I didn't find which Node version is supported in the documentation, so I'm just reporting this.
It works fine on Node.js 14.x for me.

$ npx import-map-deployer config.js
Cannot find module 'fs/promises'
Require stack:
- ~/.npm/_npx/27925/lib/node_modules/import-map-deployer/src/io-methods/filesystem.js
- ~/.npm/_npx/27925/lib/node_modules/import-map-deployer/src/io-methods/default.js
- ~/.npm/_npx/27925/lib/node_modules/import-map-deployer/src/io-operations.js
- ~/.npm/_npx/27925/lib/node_modules/import-map-deployer/src/web-server.js
- ~/.npm/_npx/27925/lib/node_modules/import-map-deployer/bin/import-map-deployer

Thanks!

AWS Role assigned to ServiceAccount is ignored when running in kubernetes/EKS

import-map-deployer ignores the role assigned to the service account when running in k8s/EKS. As a result it is impossible to limit S3 access permissions to just the particular pod where import-map-deployer is running.

It looks like the [email protected] locked in the yarn.lock file either has a bug or lacks this functionality: it seems to ignore the AWS_ROLE_ARN and AWS_WEB_IDENTITY_TOKEN_FILE variables provided by the EKS integration with AWS IAM roles.

When I run your image with an interactive shell and execute the commands below in a node REPL:

const aws = require('aws-sdk');
const sts = new aws.STS({region: "eu-west-1"});
sts.getCallerIdentity({}, function(error,data){console.log(data)});

I get the EKS worker node role in return.

However, when I run your base image (node:14-alpine) and install the latest aws-sdk@2 (in my case 2.1274.0), it returns the role assigned to the service account. I double-checked this using the same image (node:14-alpine) and explicitly installing [email protected], and the behaviour is exactly the same as in your image.

Possibly the quickest fix is to update the aws-sdk@2 version in yarn.lock to something more recent.

Upcoming AWS ACL Changes

Back in 2020 I did some work here to add ACL configs for AWS (see PR). It's been a few years, but I think the original issue was that the deployer assumed bucket access was public-read only, so we needed to add more ACL options (see issue).

Anyway, it seems AWS is changing the default way bucket access/ACLs work come April 2023, and if I understand correctly, any new buckets will have issues using the import-map-deployer as-is unless they specifically set the ACL to the previous behavior, which many people would miss. It seems existing buckets should be OK, but my guess is that these will eventually need to be migrated.

Any new buckets that need to use import-map-deployer could have issues either with the API calls and config (since we'd still be sending ACLs) or with the ownership changes (which might require specific user rights and would be good to document).

The blog post can be found here

This isn't necessarily an issue but more of a discussion (though my guess is it will become an issue/PR eventually), so my first question is:

  • Should we make changes to the way the ACL config works in the deployer?
    Note also that there could be a situation with mixed old/new bucket types, so we might need a fallback or an additional type of flag, etc.

[Feature] Support before and after hooks when calling import-map-deployer

For my use case, after calling the import-map-deployer I'd like to call another service to update a version, and make sure that call also happens inside the lock to avoid race conditions.

The behavior would look like this:

  1. Use curl to call the import-map-deployer.
  2. Open the lock.
  3. Run the before hook.
  4. Update the import-map.json file.
  5. Run the after hook, which could for example send a notification to Slack or another service.
  6. Close the lock.
  7. Execute the next call.

Could the team help review this? I'd be happy to open a PR and contribute if possible.
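
To make the proposal concrete, here is a rough sketch of the flow (all names are illustrative; nothing here exists in import-map-deployer today):

```javascript
// Hypothetical hook flow: beforeHook and afterHook run inside the same lock
// that guards the import map update, preserving the race-condition safety.
async function updateWithHooks({ beforeHook, update, afterHook }, lock) {
  await lock.acquire(); // 2. open lock
  try {
    if (beforeHook) await beforeHook(); // 3. before hook
    await update(); // 4. update import-map.json
    if (afterHook) await afterHook(); // 5. after hook (e.g. notify Slack)
  } finally {
    lock.release(); // 6. close lock so the next call can proceed
  }
}

// Minimal demonstration with a stub lock that records the call order:
const calls = [];
const lock = {
  acquire: async () => calls.push("lock"),
  release: () => calls.push("unlock"),
};
updateWithHooks(
  {
    beforeHook: async () => calls.push("before"),
    update: async () => calls.push("update"),
    afterHook: async () => calls.push("after"),
  },
  lock
).then(() => console.log(calls.join(","))); // lock,before,update,after,unlock
```

The `finally` block matters: even if a hook or the update throws, the lock is released so later deployments aren't blocked.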
