
secotrec's Introduction

Secotrec: Deploy Coclobas/Ketrew with class.

secotrec is a library; it provides a bunch of hacks for creating more or less generic deployments.

It comes with 3 preconfigured “examples”. You should pick one.

  1. secotrec-gke:
    • sets up a new GCloud host with a bunch of services: Postgresql, Ketrew, Coclobas (in GKE mode), Tlstunnel (with valid domain-name and certificate), and optionally an nginx basic-auth HTTP proxy.
    • creates a new NFS server.
    • it also deals with the GKE cluster, firewall rules, etc.
  2. secotrec-local: sets up Postgresql, Ketrew, and Coclobas (in local docker mode) running locally or on a fresh GCloud box.
  3. secotrec-aws: still experimental, sets up a deployment similar to secotrec-gke, based on AWS Batch.

The examples have the option of preparing the default biokepi-work directory with b37decoy_20160927.tgz and b37_20161007.tgz.

For other administration purposes, we also provide secotrec-make-dockerfiles, a Dockerfile generation tool.

This file provides detailed usage information; for high-level, tutorial-oriented documentation please check out the hammerlab/wobidisco project. A good starting point is the “Running Local” tutorial.

Install

You can install secotrec either from Opam or from a Docker image.

Option 1: With Opam

If you have an opam environment, a few packages need to be pinned for now:

opam pin -n add ketrew https://github.com/hammerlab/ketrew.git
opam pin -n add biokepi https://github.com/hammerlab/biokepi.git
opam pin -n add secotrec https://github.com/hammerlab/secotrec.git
opam upgrade
opam install tls secotrec biokepi

Notes:

  • We need tls to submit jobs to the deployed Ketrew server (prepare and test-biokepi-machine sub-commands).
  • biokepi is only used by generated code (biokepi machine and its test).
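
A quick sanity check after installation is to ask the installed command-line tools for their help pages (these sub-commands are documented below):

secotrec-gke --help
secotrec-local --help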

Option 2: Dockerized

Getting the Docker image

# Get the docker image
docker pull hammerlab/keredofi:secotrec-default

Setup for secotrec-gke

# Enter the container for GKE use case
docker run -e KETREW_CONFIGURATION -it hammerlab/keredofi:secotrec-default

Setup for secotrec-local

If you've chosen to use secotrec-local:

# Enter the container for local use case
docker run \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -e KETREW_CONFIGURATION \
  -it hammerlab/keredofi:secotrec-default \
  bash

If you do use secotrec-local, please note that the Ketrew server cannot be reached from the current container (which is on a different network). You can jump into another container that is on the right network:

secotrec-local docker-compose exec coclo opam config exec bash

Configure your gcloud utility

Once you are inside the container, you first need to configure your gcloud utilities for proper access to GCloud services:

# Login to your account
gcloud auth login

# Set your GKE project
gcloud config set project YOUR_GKE_PROJECT

Usage

Secotrec-gke

Configure

Generate a template configuration file:

secotrec-gke generate-configuration my-config.env

Edit the nicely documented my-config.env file until . my-config.env ; secotrec-gke print-configuration is happy and you are too.

Note: if you decide to use HTTP-Basic-Auth (htpasswd option), you will need to append a user-name and password to some of the commands below (see the optional --username myuser --password mypassword arguments).

Deploy

Then just bring everything up (can take about 15 minutes):

. my-config.env
secotrec-gke up

Check:

secotrec-gke status

Output should say that all services are up:

...
       Name                      Command               State           Ports
-------------------------------------------------------------------------------------
coclotest_coclo_1     opam config exec -- /bin/b ...   Up      0.0.0.0:8082->8082/tcp
coclotest_kserver_1   opam config exec -- dash - ...   Up      0.0.0.0:8080->8080/tcp
coclotest_pg_1        /docker-entrypoint.sh postgres   Up      0.0.0.0:5432->5432/tcp
coclotest_tlstun_1    opam config exec -- sh -c  ...   Up      0.0.0.0:443->8443/tcp
...

The status command should also show some more interesting information (including the Ketrew WebUI URL(s)). If there is no display like:

SECOTREC: Getting Ketrew Hello:
Curl ketrew/hello says: ''
[
  "V0",
  [
    "Server_status",
    {
    ...
    /* A few lines of JSON */
...

it means that Ketrew is not (yet) ready to answer (depending on Opam pins and configuration, the server may not be up right away).
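
If you prefer to wait for the server from a script, a small polling loop works. This is only a sketch; it relies on the status sub-command shown above, and the grep pattern is an assumption based on the sample output:

# Poll until the Ketrew "hello" answer shows up in the status output (sketch).
until secotrec-gke status | grep -q 'Server_status' ; do
  sleep 30
done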

Prepare/Test

The deployment is usable as is, but to use it with Biokepi more efficiently one can start the “preparation” Ketrew workflow:

secotrec-gke prepare [--username myuser --password mypassword]

and go baby-sit the Ketrew workflow on the WebUI.

There is also a test of the Biokepi-Machine (for now, it can be run concurrently with the preparation workflow):

secotrec-gke test-biokepi-machine [--username myuser --password mypassword]

the workflow uses Coclobas/Kubernetes.

Generate Ketrew/Biokepi Stuff

Generate a Ketrew client configuration:

secotrec-gke ketrew-configuration /tmp/kc.d [--username myuser --password mypassword]

(then you can use export KETREW_CONFIG=/tmp/kc.d/configuration.ml).

Generate a Biokepi Machine.t:

secotrec-gke biokepi-machine /tmp/bm.ml [--username myuser --password mypassword]

More Inspection / Tools

We can have a top-like display:

secotrec-gke top

We can talk directly to the database used by Ketrew and Coclobas:

secotrec-gke psql

The subcommand docker-compose (alias dc) forwards its arguments to docker-compose on the node, with the configuration, e.g.:

secotrec-gke dc ps
secotrec-gke dc logs coclo
secotrec-gke dc exec kserver ps aux
secotrec-gke get-coclobas-logs somewhere.tar.gz
...
# See:
secotrec-gke --help

Destroy

Take down everything (including the Extra-NFS server and its storage) with the following command:

secotrec-gke down --confirm

Note that this action requires the additional --confirm argument to prevent destroying the secotrec setup accidentally. -y, --yes, --really, and --please are other alternatives that can be used to confirm the destroy procedure.

Secotrec-local

Configuration is all optional (the gcloud version adds some constraints; cf. the generated config-file), but it works the same way as Secotrec-GKE:

secotrec-local generate-configuration my-config.env
# Edit my-config.env
source my-config.env
secotrec-local print-configuration
secotrec-local up
secotrec-local status

Other commands work as well:

secotrec-local top       # top-like display of the containers
...
secotrec-local prepare    # submits the preparation workflow to Ketrew (get `b37decoy`)
...
secotrec-local ketrew-configuration /path/to/config/dir/     # generate a Ketrew config
secotrec-local biokepi-machine /path/to/biokepi-machine.ml   # generate a Biokepi config/machine
...
secotrec-local test-biokepi-machine
...
secotrec-local down

Secotrec-Make-Dockerfiles

secotrec-make-dockerfiles is designed to update the Docker-Hub images at hammerlab/keredofi.

The README.md and the corresponding branches of the GitHub repository hammerlab/keredofi are also updated for convenience by this tool (but we do not use Docker-Hub automated builds any more).

Display all the Dockerfiles on stdout:

secotrec-make-dockerfiles view

Write the Dockerfiles in their respective branches and commit if something changed:

secotrec-make-dockerfiles write --path=/path/to/keredofi

when done, the tool displays the Git graph of the Keredofi repo; if you're happy, just go there and git push --all.
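
For example, reviewing and pushing is plain Git (nothing secotrec-specific):

cd /path/to/keredofi
git log --graph --oneline --all   # double-check what the tool committed
git push --all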

Submit a Ketrew workflow that builds and runs some tests on all the Dockerfiles (for now this expects a secotrec-local-like setup):

eval `secotrec-local env`
secotrec-make-dockerfiles test

See secotrec-make-dockerfiles test --help for more options; you can, for instance, push to the Docker-Hub:

secotrec-make-dockerfiles test \
    --repo hammerlab/keredofi-test-2 --push agent-cooper,black-lodge

Secotrec-aws-node

When the environment variable WITH_AWS_NODE is true, the application secotrec-aws-node is also built; see:

secotrec-aws-node --help

For now, this uses the AWS API to set up a “ready-to-use” EC2 server.

The build requires the master versions of aws and aws-ec2:

pin_all () {
    # Clone inhabitedtype/ocaml-aws and pin `aws` plus every sub-library
    # found under libraries/ (aws-ec2, etc.) to the master branch.
    local tmpdir=$HOME/tmp/
    rm -fr $tmpdir/ocaml-aws
    cd $tmpdir
    git clone https://github.com/inhabitedtype/ocaml-aws.git
    cd ocaml-aws
    opam pin add -y -n aws .
    for lib in $(find libraries/ -type d -name opam) ; do
        echo "Do $lib"
        opam pin add -y -n aws-$(basename $(dirname $lib)) ./$(dirname $lib)/
    done
}
installs () {
    # Force a clean reinstall of the pinned aws libraries.
    local all="aws aws-ec2"
    opam remove -y $all
    for lib in $all ; do
        ocamlfind remove $lib
    done
    opam install -y $all
}
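
With those two functions defined in your shell, the whole pin-and-install sequence is simply:

pin_all
installs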

(cf. also comment on #21).

Usage:

# Set your AWS credentials:
export AWS_KEY_ID=AKSJDIEJDEIJDEKJJKXJXIIEJDIE
export AWS_SECRET_KEY=dliejsddlj09823049823sdljsd/sdelidjssleidje
# Configure once:
secotrec-aws-node config --path /path/to/store/config-and-more/ \
    --node-name awsuser-dev-0 \
    --ports 22,443 \
    --region us-east-1 \
    --pub-key-file ~/.ssh/id_rsa.pub
# Then play as much as needed:
secotrec-aws-node show-configuration --path /path/to/store/config-and-more/
secotrec-aws-node up --path /path/to/store/config-and-more/ [--dry-run]
secotrec-aws-node down --path /path/to/store/config-and-more/ [--dry-run]
secotrec-aws-node ssh --path /path/to/store/config-and-more/

Everything should be idempotent (but some “Waiting for” functions may time out for now).

More to come…


secotrec's Issues

Kube-clusters lose DNS access

Output: mount.nfs: Failed to resolve server foo-secogke-bar-vm: Temporary failure in name resolution

Using the IP addresses instead of the names in the biokepi-machine works (requires coclobas fixed for IP addresses → hammerlab/coclobas@4437772).

Should we always find and put the IP addresses?
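
For reference, a standard gcloud query (not secotrec-specific) can recover the internal IP of the NFS VM mentioned above:

# Look up the internal IP of the NFS-server VM (plain gcloud; instance name taken from the error above).
gcloud compute instances describe foo-secogke-bar-vm \
    --format='get(networkInterfaces[0].networkIP)'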

Scripts without execute permissions

Running secotrec-gke up results in:

julia@fuzzy-epidisco:~$ secotrec-gke up
SECOTREC: instance--juliasec-secobox: Checking...
SECOTREC: instance--juliasec-secobox: Build In Progress
SECOTREC: firewall-rule--juliasec-secobox: Checking...
SECOTREC: firewall-rule--juliasec-secobox: Build In Progress
bash: /tmp/secotrecab8ecdscript.sh: Permission denied
SECOTREC: DNS juliasec-secobox.gcloud.hammerlab.org needs to be setup
SECOTREC: GCloud-DNS transaction: add/replace
* IP-file: /tmp/secotrece31bf0node-ip
* DNS-zone-file: /tmp/secotrecae0de6dnszone.yaml
* Transaction-file: /tmp/secotrec142b66transaction.yaml
SECOTREC: Waiting for DNS juliasec-secobox.gcloud.hammerlab.org to be *really* up, 60 attempts with 10 seconds in between (600 sec max)
0.1.2.3.4.5.6.7.8.9.10.11.12.13.14.15.16.17.18.19.20.21.22.23.24.25.26.27.28.29.
bash: /tmp/secotrec748042script.sh: Permission denied
Run-genspio: fatal error (script: /tmp/run-genspio4e9e0a-cmd.sh, errors: /tmp/secotrec1fa19berror.txt)
secotrec-gke: internal error, uncaught exception:
              (Failure "Run-genspio: fatal error")

Oracle-java-7 discontinued

--2017-06-05 20:51:48--  http://download.oracle.com/otn-pub/java/jdk/7u80-b15/jdk-7u80-linux-x64.tar.gz?AuthParam=1496696028_ffad8785cac9abfc5eea8b8e21e97351
Connecting to download.oracle.com (download.oracle.com)|118.214.160.216|:80... connected.
HTTP request sent, awaiting response... 404 Not Found
2017-06-05 20:51:49 ERROR 404: Not Found.

download failed
Oracle JDK 7 is NOT installed.
dpkg: error processing package oracle-java7-installer (--configure):
 subprocess installed post-installation script returned error exit status 1

Why Oracle Java 7 And 6 Installers No Longer Work

While Oracle Java 6 and 7 have not been supported for quite a while, they were still available for download on Oracle's website until recently.

However, the binaries were removed about 10 days ago (?), so the Oracle Java (JDK) 6 and 7 installers available in the WebUpd8 Oracle Java PPA no longer work.

Oracle Java 6 and 7 are now only available for those with an Oracle Support account (which is not free), so I can't support this for the PPA packages.

Cf. also https://askubuntu.com/questions/920106/webupd8-oracle-java-7-installer-failing-with-404

Workflows start failing when relatively unhealthy/old nodes don't pull new images

Perks of making heavy use of multiple secotrec setups non-stop for almost a week: edge cases.

tl;dr: nodes in the cluster sometimes stop updating their images and start causing problems since they always try to run an older version of the biokepi-runner. Should we start versioning the images or think about modifying the imagePullPolicy setting on the cluster?

I was sometimes seeing a fraction of my jobs failing on certain nodes but not on the others (e.g. the bam2fq step always failing on gke-problematic-node). Although a clear and isolated problem, this is amazingly hard to put a finger on from the UI side, since there it is just another step failing with no specifics about nodes/pods (unless you go and check them all).

As I was going down the list of failed parts and fixing issues one by one, I saw that one of them failed because GATK was complaining about incompatible Java versions, which was weird because we both updated GATK and froze the one mutect uses, and this one was trying to use the new GATK against the old image. Some poking around revealed that the node was >14 days old and it was still using the biokepi image dating back to its creation.

Removing the node solved the problem and I moved on, since this is not something we always see, but then I came across this:

https://kukulinski.com/10-most-common-reasons-kubernetes-deployments-fail-part-2#10containerimagenotupdating

which links to the official kube doc.

Two things:

  1. So this does keep happening to people, unlike what I thought at first. We should address it before someone else starts pulling their hair out debugging this.
  2. Using the latest tag does make it more low-maintenance, and I don't think we have to address it right away; but I do agree with Kube's suggestion from a debugging point of view. Sometimes it really helps to have the previous image easily accessible to compare it to the new one, but right now the only way to do that is to roll back the keredofi repo and rebuild the image (which is not ideal).

AWS Batch + EFS Weird failure

While running Epidisco (hence quite a few nodes), one random failure:

An error occurred (InvalidGroup.NotFound) when calling the DescribeSecurityGroups operation: The security group 'launch-wizard-1default' does not exist in default VPC 'vpc-75a1c110'

and nope, it was there:

(screenshot of the AWS console showing the security group does exist)

Not sure where the default suffix comes from.

Smarter upgrades of Ketrew

something like

secotrec-gke upgrade-ketrew

could

  1. stop ketrew
  2. use the current Ketrew to download the data (ketrew sync)
  3. destroy the DB
  4. upgrade Ketrew
  5. reload the Json data-dump
  6. start the new Ketrew

(2) and (5) being optional… (a rough manual sketch of these steps is given below).
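
A rough manual sketch of those steps, using only sub-commands documented above (the exact ketrew sync invocation is left out since it depends on the Ketrew version):

secotrec-gke dc stop kserver    # 1. stop Ketrew
# 2. (optional) dump the data from a client with `ketrew sync`
secotrec-gke psql               # 3. drop the Ketrew tables by hand
# 4. upgrade the Ketrew pin / image, then:
secotrec-gke up                 # 6. restart (and 5.: reload the JSON dump)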

The very first members of the cluster are suffering the most

Especially the kube head and the most senior GCE in the pool:

NAME             CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
svc/kubernetes   10.67.240.1   <none>        443/TCP   26d

26 days of up-time! That is impressive, but on the other hand, the longer they stick around, the more likely their fluentd instance is to fail, it seems (anecdotal, no definitive data). And there is also this:

(And do keep in mind that I had to keep restarting my rcc{0-5} DCs almost twice a week.) Maybe we can set a regular schedule to reset (down && up) our clusters to make them less error-prone. As I was reading the kube documentation I saw a relevant option mentioned here:
https://cloud.google.com/container-engine/docs/node-auto-repair

And as we discussed earlier, every single new kube feature adoption comes with its own problems, but if those old nodes keep getting stuck (see #70) 🤷‍♂️

And the crazy idea about even better scaling and becoming fault tolerant

For the project I am currently working on, we have ~90 patients, the processing had to happen as fast as possible, and a majority of the patients had to go through the complete pipeline (all :disco: features on), resulting in massive JSONs being passed around and computationally challenging equivalence calculations on the kserver side, eventually slowing it down. And once it starts slowing down it all goes downhill from there, since the less it can handle, the more jobs queue up (this time risking HDD/MEM-related issues).

My solution was to spin up 5 (5!) separate secos and manually spread the load across all of them. It worked nicely, but then you end up with 5 different clusters to check, tens of NFSes to maintain and transfer files from/to, etc. Which makes me think that, instead of having seco/ketrew/psql on the head node, maybe we should run them as services within the node pool and replicate them to match the scale of jobs they have to deal with.

And since the ketrews would now be inside the cluster, we can just get a simple NGINX-like thing up and running and make it responsible for sharing the load (our kclients will submit to this balancer thinking that they are talking to a ketrew server, and inside, each ketrew/seco setup will receive fewer and therefore more manageable tasks).

If we ever go down such a path, we can even come up with a mock-ketrew server that pools results from the other instances inside the network and restarts the failed tasks a few times until it starts believing that there is an important issue with that particular job. And ideally, it will then pull the information up to its own level for us to investigate.

From what I understand from people's writings online and your status report from a few months ago, I don't think adopting a new and shiny Swarm-like technology would help; but who knows, maybe we can use this opportunity to write custom mirages and deal with these issues on our own :P

Hope that I am not reinventing the wheel here, but if so, let me know. I would love to read more on this topic.

Error when using many NFS servers

I get this when I use more than approximately 20 mounted NFS servers:

tavi@tavi:~/coclo$ secotrec-gke status
WARNING: Command "kserver-compose" is too big for `sh -c <>`: 277920 B
secotrec-gke: internal error, uncaught exception:
              (Failure "Script too long")

I modified secotrec to output some of this script; each NFS server adds approximately the following gunk:

 { 'printf' 'SECOTREC: Mounted-tavi-cloud-nfs: Checking...\n' ; }  ; if {  {  { 'test' '-f' '/nfs-pool/.witness.txt' ; }  ; [ $? -eq 0 ] ; } ; } ; then  { 'printf' 'SECOTREC: Mounted-tavi-cloud-nfs: Already Done.\n' ; }  ; else  { 'printf' 'SECOTREC: Mounted-tavi-cloud-nfs: Build In Progress\n' ; }  ; if {  { { { { true &&  { (  eval "$(printf -- "exec %s>%s" 1 '/tmp/cmd-Mounting-tavi-cloud-nfs-stdout-0')" || { echo 'Exec "exec %s>%s" 1 '/tmp/cmd-Mounting-tavi-cloud-nfs-stdout-0' failed' >&2 ; }  ;  eval "$(printf -- "exec %s>%s" 2 '/tmp/cmd-Mounting-tavi-cloud-nfs-stderr-0')" || { echo 'Exec "exec %s>%s" 2 '/tmp/cmd-Mounting-tavi-cloud-nfs-stderr-0' failed' >&2 ; }  ;  { if {  { { { { true &&  { (  eval "$(printf -- "exec %s>%s" 1 '/tmp/cmd-apt-install-nfs-client-stdout-0')" || { echo 'Exec "exec %s>%s" 1 '/tmp/cmd-apt-install-nfs-client-stdout-0' failed' >&2 ; }  ;  eval "$(printf -- "exec %s>%s" 2 '/tmp/cmd-apt-install-nfs-client-stderr-0')" || { echo 'Exec "exec %s>%s" 2 '/tmp/cmd-apt-install-nfs-client-stderr-0' failed' >&2 ; }  ;  {  { 'sudo' 'apt-get' 'update' ; }   ; } ; ) ; [ $? -eq 0 ] ; } ; } &&  { (  eval "$(printf -- "exec %s>%s" 1 '/tmp/cmd-apt-install-nfs-client-stdout-1')" || { echo 'Exec "exec %s>%s" 1 '/tmp/cmd-apt-install-nfs-client-stdout-1' failed' >&2 ; }  ;  eval "$(printf -- "exec %s>%s" 2 '/tmp/cmd-apt-install-nfs-client-stderr-1')" || { echo 'Exec "exec %s>%s" 2 '/tmp/cmd-apt-install-nfs-client-stderr-1' failed' >&2 ; }  ;  {  { 'sudo' 'apt-get' 'upgrade' '--yes' ; }   ; } ; ) ; [ $? -eq 0 ] ; } ; } &&  { (  eval "$(printf -- "exec %s>%s" 1 '/tmp/cmd-apt-install-nfs-client-stdout-2')" || { echo 'Exec "exec %s>%s" 1 '/tmp/cmd-apt-install-nfs-client-stdout-2' failed' >&2 ; }  ;  eval "$(printf -- "exec %s>%s" 2 '/tmp/cmd-apt-install-nfs-client-stderr-2')" || { echo 'Exec "exec %s>%s" 2 '/tmp/cmd-apt-install-nfs-client-stderr-2' failed' >&2 ; }  ;  {  { 'sudo' 'apt-get' 'install' '--yes' 'nfs-client' ; }   ; } ; ) ; [ $? -eq 0 ] ; } ; } ; [ $? -eq 0 ] ; } ; } ; then : ; else  { 'printf' 'SECOTREC: apt-install-nfs-client; FAILED:\n\n' ; }  ;  { 'printf' '``````````stdout\n' ; }  ;  { 'cat' '/tmp/cmd-apt-install-nfs-client-stdout-0' ; }  ;  { 'printf' '\n``````````\n' ; }  ;  { 'printf' '``````````stderr\n' ; }  ;  { 'cat' '/tmp/cmd-apt-install-nfs-client-stderr-0' ; }  ;  { 'printf' '\n``````````\n' ; }  ;  { 'printf' '``````````stdout\n' ; }  ;  { 'cat' '/tmp/cmd-apt-install-nfs-client-stdout-1' ; }  ;  { 'printf' '\n``````````\n' ; }  ;  { 'printf' '``````````stderr\n' ; }  ;  { 'cat' '/tmp/cmd-apt-install-nfs-client-stderr-1' ; }  ;  { 'printf' '\n``````````\n' ; }  ;  { 'printf' '``````````stdout\n' ; }  ;  { 'cat' '/tmp/cmd-apt-install-nfs-client-stdout-2' ; }  ;  { 'printf' '\n``````````\n' ; }  ;  { 'printf' '``````````stderr\n' ; }  ;  { 'cat' '/tmp/cmd-apt-install-nfs-client-stderr-2' ; }  ;  { 'printf' '\n``````````\n' ; }  ;  { printf -- '%s\n' "EDSL.fail called" >&2 ; kill -s USR1 ${genspio_trap_19_33428} ; }  ; fi  ; } ; ) ; [ $? -eq 0 ] ; } ; } &&  { (  eval "$(printf -- "exec %s>%s" 1 '/tmp/cmd-Mounting-tavi-cloud-nfs-stdout-1')" || { echo 'Exec "exec %s>%s" 1 '/tmp/cmd-Mounting-tavi-cloud-nfs-stdout-1' failed' >&2 ; }  ;

@smondet says:

I'll find a way to pass that as a script instead of sh -c <cmd>
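
The proposed workaround boils down to something like this sketch ($GENERATED_SCRIPT is a hypothetical placeholder for the generated Genspio output):

# Sketch: write the generated script to a file and run it, instead of
# inlining it via `sh -c <cmd>` (which hits the argument-length limit).
printf '%s' "$GENERATED_SCRIPT" > /tmp/kserver-compose.sh
sh /tmp/kserver-compose.sh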

The default cluster settings might not be ideal to carry the workload that we do

Since people seem really concerned about how often you hit kubelet and other core components to make them live long and prosper, some guys in black suits and ties have written lots of articles about how they tamed their clusters. I don't really believe their happily-ever-after endings, but they might have a point. See this default config for cluster setup:

SYNOPSIS
    gcloud container clusters create NAME [--additional-zones=ZONE,[ZONE,...]]
        [--async] [--cluster-ipv4-cidr=CLUSTER_IPV4_CIDR]
        [--cluster-version=CLUSTER_VERSION]
        [--disable-addons=[DISABLE_ADDONS,...]] [--disk-size=DISK_SIZE]
        [--no-enable-cloud-endpoints] [--no-enable-cloud-logging]
        [--no-enable-cloud-monitoring] [--image-type=IMAGE_TYPE]
        [--machine-type=MACHINE_TYPE, -m MACHINE_TYPE]
        [--max-nodes-per-pool=MAX_NODES_PER_POOL] [--network=NETWORK]
        [--node-labels=[NODE_LABEL,...]] [--num-nodes=NUM_NODES; default="3"]
        [--password=PASSWORD] [--scopes=SCOPE,[SCOPE,...]]
        [--subnetwork=SUBNETWORK] [--tags=TAG,[TAG,...]]
        [--username=USERNAME, -u USERNAME; default="admin"]
        [--zone=ZONE, -z ZONE] [GCLOUD_WIDE_FLAG ...]

I wasn't able to spend too much time on it, but my understanding is that almost all the components of the cluster are optional and there are interesting alternatives for each of the services. For example, if we are not planning to make heavy use of StackDriver, why not ditch the endpoints API, the monitoring, and the cloud logging options? (Forgive me if our setup heavily relies on them; I am still trying to figure out the magic behind the coclo+ketrew+seco triplet.)

Also, I don't know why (since we can always proxy into the cluster and access nodes/pods) the default GKE settings add new network rules for each of the nodes (I thought they were supposed to be on a virtual/overlay network and not face the public?). This is normally not an issue (since it is good to be able to just ssh into them), but it significantly caps our capacity to scale fast and get things done fast (due to quotas on the number of IPs/routes). I have to look into it further (we can split this discussion into multiple parts if needed :)).

But if you also agree, it might be worth giving it a shot?


Relevant

Unable to install

I just got Ketrew installed and am now trying to install Secotrec. Currently I'm hitting some sort of version conflict. It looks like perhaps type_conv wants a lower version of OCaml?

$ opam install tls secotrec biokepi

=-=- Synchronising pinned packages =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
[secotrec] https://github.com/hammerlab/secotrec.git already up-to-date
[biokepi] https://github.com/hammerlab/biokepi.git updated
The following dependencies couldn't be met:
  - secotrec -> coclobas >= 0.0.1 -> cohttp < 0.99 -> conduit = 0.5.1 -> sexplib < 113.01.00
  - secotrec -> coclobas >= 0.0.1 -> cohttp < 0.99 -> fieldslib < 113.01.00 -> type_conv (= 108.00.02 | = 108.07.00 | = 108.07.01 | = 108.08.00 | = 109.07.00 | = 109.08.00 | = 109.09.00 | = 109.10.00 | = 109.11.00 | = 109.12.00 | = 109.13.00 | = 109.14.00 | = 109.15.00)
  - secotrec -> coclobas >= 0.0.1 -> cohttp < 0.99 -> pa_fields_conv
  - secotrec -> coclobas >= 0.0.1 -> cohttp < 0.99 -> sexplib < 113.01.00
  - biokepi -> ketrew >= 2.0.0 -> js_of_ocaml >= 3.0
  - biokepi -> ketrew >= 2.0.0 -> cohttp-lwt-unix >= 0.99.0 -> cohttp-lwt -> cohttp >= 0.99.0
Your request can't be satisfied:
  - Conflicting version constraints for cohttp
  - cohttp.0.20.1 is in conflict with js_of_ocaml.3.0
  - cohttp.0.20.2 is in conflict with js_of_ocaml.3.0
  - cohttp.0.21.0 is in conflict with js_of_ocaml.3.0
  - cohttp.0.21.1 is in conflict with js_of_ocaml.3.0
  - cohttp.0.22.0 is in conflict with js_of_ocaml.3.0
  - pa_fields_conv is not available because your system doesn't comply with ocaml-version >= "4.00.0" & ocaml-version < "4.03".
  - sexplib<113.01.00 is not available because your system doesn't comply with ocaml-version >= "4.02.1" & ocaml-version < "4.03".
  - type_conv.108.00.02 is not available because your system doesn't comply with ocaml-version >= "3.12.1" & ocaml-version < "4.03.0".
  - type_conv.108.07.00 is not available because your system doesn't comply with ocaml-version < "4.03.0".
  - type_conv.108.07.01 is not available because your system doesn't comply with ocaml-version >= "3.12.1" & ocaml-version < "4.03.0".
  - type_conv.108.08.00 is not available because your system doesn't comply with ocaml-version >= "3.12.1" & ocaml-version < "4.03.0".
  - type_conv.109.07.00 is not available because your system doesn't comply with ocaml-version >= "4.00.0" & ocaml-version < "4.03.0".
  - type_conv.109.08.00 is not available because your system doesn't comply with ocaml-version >= "4.00.0" & ocaml-version < "4.03.0".
  - type_conv.109.09.00 is not available because your system doesn't comply with ocaml-version >= "4.00.0" & ocaml-version < "4.03.0".
  - type_conv.109.10.00 is not available because your system doesn't comply with ocaml-version >= "4.00.0" & ocaml-version < "4.03.0".
  - type_conv.109.11.00 is not available because your system doesn't comply with ocaml-version >= "4.00.0" & ocaml-version < "4.03.0".
  - type_conv.109.12.00 is not available because your system doesn't comply with ocaml-version >= "4.00.0" & ocaml-version < "4.03.0".
  - type_conv.109.13.00 is not available because your system doesn't comply with ocaml-version >= "4.00.0" & ocaml-version < "4.03.0".
  - type_conv.109.14.00 is not available because your system doesn't comply with ocaml-version >= "4.00.0" & ocaml-version < "4.03.0".
  - type_conv.109.15.00 is not available because your system doesn't comply with ocaml-version >= "4.00.1" & ocaml-version < "4.03.0".

If a resource already exists, secotrec errors

You get an “EDSL.fail called” with this traceback:

Traceback (most recent call last):
  File "/usr/bin/gcloudnfs", line 196, in <module>
    args.data_disk_size)
  File "/usr/bin/gcloudnfs", line 158, in create
    wait_for_operation(compute, project, zone, create_disk(compute, project, zone, data_disk_name, data_disk_type, data_disk_size)['name'])
  File "/usr/bin/gcloudnfs", line 51, in create_disk
    return compute.disks().insert(project=project, zone=zone, body=disk).execute()
  File "/usr/local/lib/python2.7/dist-packages/oauth2client/_helpers.py", line 133, in positional_wrapper
    return wrapped(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/googleapiclient/http.py", line 840, in execute
    raise HttpError(resp, content, uri=self.uri)
googleapiclient.errors.HttpError: <HttpError 409 when requesting https://www.googleapis.com/compute/v1/projects/PROJECTID/zones/us-east1-c/disks?alt=json returned "The resource 'projects/PROJECTID/zones/us-east1-c/disks/NAMEOFDISK' already exists"

README needs better summary of the project.

The README currently starts like:

Secotrec: Deploy Coclobas/Ketrew with class.
secotrec is a library, it provides a bunch of hacks to create more or less generic deployments.

Assuming the reader doesn't know what Coclobas or Ketrew are, can the title describe what Secotrec does without reference to them? Similarly, can we say something more specific about the library's purpose in the first sentence?

As it stands I wouldn't expect anyone who isn't yet initiated to figure this out without a lot of effort.
