
balena-compose

Complete toolkit to build docker-compose.yml files and optionally deploy them to balenaCloud.

Important: balena-compose is stable and usable, but it is fundamentally a merge of several pre-existing modules, which are simply re-exported from this one. You should expect a complete rewrite of the exported API in the medium term. What follows is essentially the concatenation of those modules' READMEs.

multibuild

This module makes it easy to build a composition, given a representation of that composition and a tar stream. The output is a set of images present on the given docker daemon.

Reference

function splitBuildStream(composition: Composition, buildStream: ReadableStream): Promise<BuildTask[]>

Given a Composition which conforms to the type from parse, and a stream which will produce a tar archive, split the tar archive into a set of build tasks which can then be processed further.

function performResolution(
	tasks: BuildTask[],
	architecture: string,
	deviceType: string
): Promise<BuildTask[]>

Given a list of build tasks, resolve the projects to a form which the docker daemon can build. Currently this function supports all project types which resolve supports.

Note that this function will also populate the dockerfile and projectType values in the build tasks.

function performBuilds(
	tasks: BuildTask[],
	docker: Dockerode
): Promise<LocalImage[]>

Given a list of build tasks, perform the tasks necessary for the LocalImages to be produced. A LocalImage represents an image present on the given docker daemon.

Note that before calling this function, one should assign either a stream handling function for build output or a progress handling function for image pull output. The fields for these functions are streamHook and progressHook respectively.

Example (pseudocode)

import { multibuild, parse } from '@balena/compose';

const { normalize } = parse;
const { splitBuildStream, performResolution, performBuilds } = multibuild;

// Get a tar stream and composition from somewhere
const stream = getBuildStream();
const composeFile = getComposeFile();
const docker = getDockerodeHandle();

// Parse the compose file
const comp = normalize(composeFile);

splitBuildStream(comp, stream)
.then((tasks) => {
	return performResolution(tasks, 'armv7hf', 'raspberrypi3');
})
.then((tasks) => {
	tasks.forEach((task) => {
		if (task.external) {
			task.progressHook = (progress) => {
				console.log(task.serviceName + ': ' + progress);
			};
		} else {
			task.streamHook = (stream) => {
				stream.on('data', (data) => {
					console.log(task.serviceName + ': ', data.toString());
				});
			};
		}
	});
	return tasks;
})
.then((tasks) => {
	return performBuilds(tasks, docker);
})
.then((images) => {
	// Do something with your images
});

build

A modular, plugin-based approach to building docker containers. build uses streams and hooks to provide a system which can be added to a build pipeline easily. With a simple but flexible interface, this module is meant to take the pain out of automating docker builds.

Reference

All building is done via the Builder object.

The Builder API has two top-level methods, which are used to trigger builds:

  • createBuildStream(buildOpts: Object, hooks: BuildHooks, handler: ErrorHandler): ReadWriteStream

Initialise a build with the docker daemon and set it up to wait for streaming data. The returned stream can be both read from and written to. Success and failure callbacks are provided via the hooks interface (see below). buildOpts is passed directly to the docker daemon; the daemon expects a tar stream as input.

  • buildDir(directory: string, buildOpts: Object, hooks: BuildHooks, handler: ErrorHandler): ReadWriteStream

Instruct the docker daemon to build a directory on the host. A stream is returned for reading, and the same success/failure callbacks apply. buildOpts is passed directly to the docker daemon.

  • The handler parameter:

If an exception is thrown from within a hook, it will not be propagated, because the hook executes in a different context from the initial API call. Providing an error handler means you can deal with the error as necessary (for instance, propagate it to your global catch, or integrate it into a promise chain by passing reject as the handler). The error handler is optional. Note that the error handler is not called with build errors; those go to the buildFailure hook instead. If that hook itself throws, however, the handler will be called.
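For example, a minimal sketch (under the same assumptions as the examples below) that routes both hook exceptions and build failures into a single promise chain:

import { build } from '@balena/compose';

const { Builder } = build;

const builder = Builder.fromDockerOpts({ socketPath: '/var/run/docker.sock' })

new Promise<string>((resolve, reject) => {
	const hooks: build.BuildHooks = {
		buildSuccess: (imageId) => resolve(imageId),
		buildFailure: (error) => reject(error),
	};
	// The fourth argument is the optional error handler: if a hook itself
	// throws, the error still reaches this promise chain via `reject`
	builder.buildDir('./my-dir', {}, hooks, reject);
}).then(
	(imageId) => console.log(`Built ${imageId}`),
	(error) => console.error(`Build failed: ${error}`),
);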

Hooks

The hooks currently supported are:

  • buildStream(stream: ReadWriteStream): void

Called by the builder when a stream is ready to communicate directly with the daemon. This is useful for parsing/showing the output and transforming any input before providing it to the docker daemon.

  • buildSuccess(imageId: string, layers: string[]): void

Called by the builder when the daemon has successfully built the image. imageId is the sha digest provided by the daemon, which can be used for pushing, running, etc. layers is a list of sha digests pointing to the intermediate layers used by docker, which can be useful for cleanup.

  • buildFailure(error: Error)

Called by the builder when a build has failed for whatever reason. The reason is provided as a standard node Error object. This will also close the build stream; no more hooks will be called after this.

Examples

Examples are provided in TypeScript.

Building a directory

import { build } from '@balena/compose';

const { Builder } = build;

const builder = Builder.fromDockerOpts({ socketPath: '/var/run/docker.sock' })

const hooks: build.BuildHooks = {
	buildStream: (stream: NodeJS.ReadWriteStream): void => {
		stream.pipe(process.stdout)
	},
	buildSuccess: (imageId: string, layers: string[]): void => {
		console.log(`Successful build! ImageId: ${imageId}`)
	},
	buildFailure: (error: Error): void => {
		console.error(`Error building container: ${error}`)
	}
}

builder.buildDir('./my-dir', {}, hooks)

Building a tar archive

import * as fs from 'fs'
import { build } from '@balena/compose';

const { Builder } = build;

const builder = Builder.fromDockerOpts({ socketPath: '/var/run/docker.sock' })

const getHooks = (archive: string): build.BuildHooks => {
	return {
		buildSuccess: (imageId: string, layers: string[]): void => {
			console.log(`Successful build! ImageId: ${imageId}`)
		},
		buildFailure: (error: Error): void => {
			console.error(`Error building container: ${error}`)
		},
		buildStream: (stream: NodeJS.ReadWriteStream): void => {
			// Create a stream from the tar archive.
			// Note that this stream could be from a webservice,
			// or any other source. The only requirement is that
			// when consumed, it produces a valid tar archive
			const tarStream = fs.createReadStream(archive)

			// Send the tar stream to the docker daemon
			tarStream.pipe(stream)

			stream.pipe(process.stdout)
		}
	}
}

builder.createBuildStream({}, getHooks('my-archive.tar'))

emulate

Transpose a Dockerfile to use qemu-linux-user and emulate a build. Using this module as a pre-processor for Dockerfiles that will not run natively on your system, together with a version of qemu-linux-user suitable for emulation, produces a Dockerfile that will run seamlessly.

Usage

  • transpose(dockerfile: string, options: TransposeOptions): string

Given a Dockerfile and transpose options, produce a Dockerfile which will run on the same architecture as the qemu binary provided in options (detailed below).

  • transposeTarStream(tarStream: ReadableStream, options: TransposeOptions, dockerfileName = 'Dockerfile'): Promise<ReadableStream>

Given a tar archive stream, this function will extract the Dockerfile (or file named with the given dockerfileName parameter) and transpose it. It then creates a new tar stream, and returns it wrapped in a Promise.

  • getBuildThroughStream(options: TransposeOptions): ReadWriteStream

Get a through stream, which when piped to will remove all extra output that is added as a result of this module transposing a Dockerfile.

This function enables 'silent' emulated builds, with the only difference in output from a native build being an extra COPY step, where the emulator is added to the container.

Options

  • TransposeOptions is an interface with two required fields:
    • hostQemuPath - the location of the qemu binary on the host filesystem
    • containerQemuPath - where qemu should live in the built container
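For instance, a minimal sketch of emulating a project build, assuming the hostQemuPath/containerQemuPath field names from the module source quoted later in this README, with placeholder paths:

import * as fs from 'fs'
import { emulate } from '@balena/compose';

const options: emulate.TransposeOptions = {
	hostQemuPath: '/usr/bin/qemu-arm-static', // placeholder path
	containerQemuPath: '/tmp/qemu-arm-static',
};

// Transpose the Dockerfile inside a project tar archive; the resulting
// tar stream can then be sent on to the docker daemon
emulate
	.transposeTarStream(fs.createReadStream('project.tar'), options)
	.then((transposed) => transposed.pipe(process.stdout));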

Notes

A version of qemu with execve support is required, which can be retrieved from https://github.com/balena-io/qemu.

resolve

Resolve balena project bundles into a format recognised by the docker daemon.

What is a project bundle?

A project bundle is a tar archive which contains some variant of a Dockerfile, plus the metadata needed to create a Dockerfile proper, which docker can understand.

Which bundles are supported

The default resolvers currently included are:

  • Dockerfile.template
    • Resolve template variables with metadata, currently supported:
      • %%RESIN_MACHINE_NAME%%
      • %%RESIN_ARCH%%
      • %%BALENA_MACHINE_NAME%%
      • %%BALENA_ARCH%%
  • Architecture Specific Dockerfiles
    • Choose the correct Dockerfile for a given build architecture or device type
  • Standard Dockerfile projects

How do I add a resolver?

Resolve supports adding generic resolvers, by implementing the resolver.d.ts interface in ./lib/resolve. Examples of this can be found in the lib/resolve/resolvers/ directory.

Your resolvers can then be passed to the resolveBundle function.

What is the input and output?

Resolve takes a tar stream and outputs a tar stream, which can be passed to the docker daemon or further processed.
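As an illustration only, a hedged sketch of wiring a bundle through resolveBundle (named above); the Bundle constructor, getDefaultResolvers and the tarStream field are assumptions carried over from the module's pre-merge API, so check the typings in lib/resolve before relying on them:

import * as fs from 'fs';
import { resolve } from '@balena/compose';

// Hypothetical usage: names other than resolveBundle are assumptions
const bundle = new resolve.Bundle(
	fs.createReadStream('project.tar'), // project bundle tar stream
	'raspberrypi3', // device type
	'armv7hf', // architecture
);

resolve
	.resolveBundle(bundle, resolve.getDefaultResolvers())
	.then((resolved) => {
		// resolved.tarStream (assumed field) is the output tar archive,
		// ready to be passed to the docker daemon
		resolved.tarStream.pipe(process.stdout);
	});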

dockerfile

Process Dockerfile templates, a format that allows replacing variables in Dockerfiles. The format is the same as a Dockerfile, but augmented with variables.

Variables have the format %%VARIABLE_NAME%%, where the variable name starts with an uppercase letter and can contain uppercase letters and underscores.

Variables in comments (lines that start with #) are not processed.

Usage

The module exposes a single process method. It receives a dockerfile template body and key-value pairs and replaces variables in the template.

If the template contains variables for which no value was provided, it throws an error.

Example:

import { dockerfile } from '@balena/compose';

const variables = {
    BASE: 'debian',
    TAG: 'latest',
};
const res = dockerfile.process('FROM %%BASE%%:%%TAG%%', variables);
console.log(res);
// Output:
// FROM debian:latest

parse

Parse docker-compose.yml files into a general, usable and fully typed object.
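A minimal sketch, assuming js-yaml (not part of this package) for YAML loading; normalize is the same function used in the multibuild example above:

import * as fs from 'fs';
import * as yaml from 'js-yaml';
import { parse } from '@balena/compose';

// Load the YAML and normalize it into a typed Composition
const raw = yaml.load(fs.readFileSync('docker-compose.yml', 'utf8'));
const composition = parse.normalize(raw);

// The result is fully typed, e.g. the service map can be enumerated
for (const name of Object.keys(composition.services)) {
	console.log('found service: ' + name);
}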

release

Create releases on balenaCloud without having to deal with the boilerplate.


balena-compose's Issues

Always fails to get manifest when authentication is required

When the docker-compose file includes an image from a registry that requires authentication (e.g. ghcr.io), getting the manifest always fails with the following error (from build.ts):

debug(`${task.serviceName}: Image manifest data unavailable for ${r}`);

The reason is that the getManifest function uses the docker-modem library without passing any authentication info:

	const optsf = {
		path: `/distribution/${repository}/json?`,
		method: 'GET',
		statusCodes: {
			200: true,
			403: 'not found or not authorized',
			500: 'server error',
		},
	};

optsf should include an authconfig object, so that docker-modem will encode it and send it as the X-Registry-Auth header.

FYI: https://docs.docker.com/engine/api/v1.42/#section/Authentication

This authentication info should be derived from the registry-secrets.yml file used by the Balena CLI, but that is currently not the case either.

As a result, it is currently impossible to use multiarch images from major private registries such as ghcr.io.
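For reference, a sketch of the suggested fix: docker-modem encodes an authconfig object on the request options into the X-Registry-Auth header. The credential values below are placeholders; in practice they should come from the user's registry secrets:

	const optsf = {
		path: `/distribution/${repository}/json?`,
		method: 'GET',
		statusCodes: {
			200: true,
			403: 'not found or not authorized',
			500: 'server error',
		},
		// Placeholder credentials; see registry-secrets.yml in the Balena CLI
		authconfig: {
			username: 'some-user',
			password: 'some-token',
			serveraddress: 'ghcr.io',
		},
	};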

Docker compose validation allows both `networks` and `network_mode`

The following composition is rejected by docker-compose with `service hello declares mutually exclusive network_mode and networks: invalid compose project`:

version: '2.4'

services: 
  hello:
    image: alpine
    command: ['sleep', 'infinity']
    networks: ['my-network']
    network_mode: host


networks:
  my-network:

balena-compose, however, allows it. Making this change might prevent invalid or ambiguous compositions from reaching the supervisor.

Missing support for anonymous volumes

The parser does not currently allow volumes without a source (anonymous volumes), even though the supervisor supports them at runtime.

case 'volume':
	if (!serviceVolume.source) {
		throw new ValidationError('Missing volume source');
	}
	if (volumeNames.indexOf(serviceVolume.source) === -1) {
		throw new ValidationError(
			`Missing volume definition for '${serviceVolume.source}'`,
		);
	}
	if (serviceVolume.volume) {
		throw new ValidationError('Volume options are not allowed');
	}

`Builder.buildDir` should respect `.dockerignore`

Although the library has an option to create a build stream from an archive, Builder.buildDir is the most convenient option for building an image from a directory. However, if the directory contains build artifacts (e.g. node_modules, library files, etc.), these end up in the tar, delaying the start of the build. Having the method read .dockerignore and skip the listed files might be a way to solve this.

Compile failures on release model create/update

An npm install followed by npm test results in the type errors below during compilation. The release model create() and update() functions expect a generic body object, but the API post() and patch() methods expect the body to be an AnyObject.

$ npm test

> @balena/[email protected] test /home/kbee/dev/balena-compose/test-repo
> npm run lint && ts-mocha --project ./tsconfig.test.json


> @balena/[email protected] lint /home/kbee/dev/balena-compose/test-repo
> npm run lint:lib && npm run lint:tests && tsc --noEmit


> @balena/[email protected] lint:lib /home/kbee/dev/balena-compose/test-repo
> balena-lint --typescript lib/ typings/

Warning: The 'no-floating-promises' rule requires type information.
0 errors and 0 warnings in 4 files

> @balena/[email protected] lint:tests /home/kbee/dev/balena-compose/test-repo
> balena-lint --typescript --tests test/

Warning: The 'no-floating-promises' rule requires type information.
0 errors and 0 warnings in 18 files
lib/release/models.ts:122:30 - error TS2322: Type 'U' is not assignable to type 'AnyObject | undefined'.
  Type 'U' is not assignable to type 'AnyObject'.

122  return api.post({ resource, body }).catch(wrapResponseError) as Promise<T>;
                                 ~~~~

  lib/release/models.ts:117:27
    117 export function create<T, U>(
                                  ~
    This type parameter might need an `extends AnyObject` constraint.
  lib/release/models.ts:117:27
    117 export function create<T, U>(
                                  ~
    This type parameter might need an `extends AnyObject | undefined` constraint.
  node_modules/pinejs-client-core/index.d.ts:231:5
    231     body?: AnyObject;
            ~~~~
    The expected type comes from property 'body' which is declared here on type 'Params'

lib/release/models.ts:131:35 - error TS2322: Type 'T' is not assignable to type 'AnyObject | undefined'.
  Type 'T' is not assignable to type 'AnyObject'.

131  return api.patch({ resource, id, body }).catch(wrapResponseError);
                                      ~~~~

  lib/release/models.ts:125:24
    125 export function update<T>(
                               ~
    This type parameter might need an `extends AnyObject` constraint.
  lib/release/models.ts:125:24
    125 export function update<T>(
                               ~
    This type parameter might need an `extends AnyObject | undefined` constraint.
  node_modules/pinejs-client-core/index.d.ts:231:5
    231     body?: AnyObject;
            ~~~~
    The expected type comes from property 'body' which is declared here on type 'Params'


Found 2 errors in the same file, starting at: lib/release/models.ts:122

npm ERR! code ELIFECYCLE
npm ERR! errno 2
npm ERR! @balena/[email protected] lint: `npm run lint:lib && npm run lint:tests && tsc --noEmit`
npm ERR! Exit status 2
npm ERR! 
npm ERR! Failed at the @balena/[email protected] lint script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.

npm ERR! A complete log of this run can be found in:
npm ERR!     /home/kbee/.npm/_logs/2022-09-03T11_10_58_831Z-debug.log
npm ERR! code ELIFECYCLE
npm ERR! errno 2
npm ERR! @balena/[email protected] test: `npm run lint && ts-mocha --project ./tsconfig.test.json`
npm ERR! Exit status 2
npm ERR! 
npm ERR! Failed at the @balena/[email protected] test script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.

npm ERR! A complete log of this run can be found in:
npm ERR!     /home/kbee/.npm/_logs/2022-09-03T11_10_58_846Z-debug.log

Rename `resin` labels to `balena` for default composition

The code is here

return `# This file has been auto-generated.
version: '${DEFAULT_SCHEMA_VERSION}'
networks: {}
volumes:
  resin-data: {}
services:
  main:
    ${context}
    privileged: true
    tty: true
    restart: always
    network_mode: host
    volumes:
      - type: volume
        source: resin-data
        target: /data
    labels:
      io.resin.features.kernel-modules: 1
      io.resin.features.firmware: 1
      io.resin.features.dbus: 1
      io.resin.features.supervisor-api: 1
      io.resin.features.resin-api: 1
`;
}

We should update this so we can get rid of the code referencing resin labels in the supervisor.

jsesc is incorrectly marked as a devDependency only

See:

import * as jsesc from 'jsesc';
import * as _ from 'lodash';
import * as tar from 'tar-stream';
import { normalizeTarEntry } from 'tar-utils';

/**
 * TransposeOptions:
 * Options to be passed to the transpose module
 */
export interface TransposeOptions {
	/**
	 * hostQemuPath: the path of the qemu binary on the host
	 */
	hostQemuPath: string;
	/**
	 * containerQemuPath: Where to add the qemu binary on-container
	 */
	containerQemuPath: string;
	/**
	 * Optional file mode (permission) to assign to the Qemu executable,
	 * e.g. 0o555. Useful on Windows, when Unix-like permissions are lost.
	 */
	qemuFileMode?: number;
}

interface Command extends Pick<parser.CommandEntry, 'name' | 'args'> {}

type CommandTransposer = (
	options: TransposeOptions,
	command: Command,
) => Command;

const generateQemuCopy = (options: TransposeOptions): Command => {
	return {
		name: 'COPY',
		args: [options.hostQemuPath, options.containerQemuPath],
	};
};

const processArgString = (argString: string) => {
	return jsesc(argString, { quotes: 'double' });

See: balena-io/balena-cli#2515

Docker compose validation allows `network_mode: <network-name>`

While this is valid with docker-compose, it doesn't make sense with balena-compose as this allows linking the container to an external network that may not exist on the final device.

Validation should probably restrict the valid values of this field to bridge, host or none.

Test failures due to dockerode regression in buildImage()

Tests fail with a timeout. The root cause is a regression in dockerode v3.3.4 that hangs balena-compose's use of its buildImage() function. See the issue on that repo for details. We are awaiting a response to a fix PR.

  136 passing (8m)
  2 pending
  14 failing

  1) Directory build
       should build a directory image:
     Error: Timeout of 60000ms exceeded. For async tests and hooks, ensure "done()" is called; if returning a Promise, ensure it resolves. (/home/kbee/dev/balena-compose/repo/test/build/all.spec.ts)
      at listOnTimeout (internal/timers.js:549:17)
      at processTimers (internal/timers.js:492:7)
...

Consider restricting network `driver_opts` to string values

While the docker-compose specification accepts both strings and numbers, using a number seems to be rejected by the engine with a JSON unmarshaling error:

(HTTP code 400) unexpected - json: cannot unmarshal number into Go struct field NetworkCreateRequest.Options of type string

This happened after pushing the following valid network configuration:

networks:
  default:
    driver_opts:
      com.docker.network.driver.mtu: 1420

Leaving this as an issue for discussion.
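One possible interim workaround (an untested assumption): quote the value so that YAML parses it as a string, which matches the type the engine expects:

networks:
  default:
    driver_opts:
      com.docker.network.driver.mtu: '1420'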

references to previous build stages are treated as invalid manifests

This prevents the platform flag from being applied automatically, when it should actually be used to correctly pull multiarch base images.

e.g.

[Warn]    Service 'balena-supervisor':
[Warn]      Multi-stage Dockerfile found with a mix of base images that require
[Warn]      CPU architecture selection and base images that do not support it.
[Warn]      The following base images do not support CPU architecture selection:
[Warn]      - build-base
[Warn]      - runtime-base
[Warn]      - runtime-base
[Warn]      The following base images require CPU architecture selection:
[Warn]      - alpine:3.11
[Warn]      - alpine:3.16
[Warn]      - debian:bullseye-slim
[Warn]      - alpine:3.16
[Warn]      As a result, the CPU architecture of the machine where the Docker Engine
[Warn]      is running will be used by default to select base images that require
[Warn]      architecture selection. This may result in incorrect architecture selection
[Warn]      and "exec format error" at runtime. It is usually possible to override the
[Warn]      architecture in the FROM line with e.g. "FROM --platform=linux/arm/v7",
[Warn]      or by adding the sha256 digest of the image for a specific architecture
