thenewnormal / kube-solo-osx

Local development Kubernetes Solo Cluster for macOS made very simple

License: Apache License 2.0

Languages: Objective-C 28.95%, Shell 60.62%, Swift 10.42%
Topics: kubernetes-solo, kubernetes-setup, kubernetes

kube-solo-osx's Introduction

Kubernetes Solo cluster for macOS

This project is no longer maintained; please use minikube instead.

Zero to a Kubernetes development environment in under two minutes

Kube-Solo for macOS is a status bar app that makes it easy to bootstrap and control a Kubernetes cluster on a standalone CoreOS VM. The VM can also be controlled via the ksolo CLI, and the VM's docker API is exposed to macOS, so you can build your docker images with the same app and use them with Kubernetes.

Kube-Solo for macOS is similar to minikube, but it has more functionality and is an older project. You can run both apps on your Mac, even in parallel.

k8s-solo

It leverages the macOS native Hypervisor virtualisation framework via the corectl command line tool, so there is no need for VirtualBox or any other virtualisation software.

Includes Helm v2, the Kubernetes package manager, and a shell option to install the Deis Workflow PaaS on top of Kubernetes with a simple $ install_deis command.

The app's menu looks like the image below:

Kube-Solo

Download

Head over to the Releases Page to grab the latest release.

How to install Kube-Solo

Requirements

  • macOS 10.10.3 Yosemite or later
  • A Mac from 2010 or later
  • The Corectl App must be installed; it serves as the control for the corectld server daemon
  • iTerm2; if it is not found, the app will install it itself
  • libev (brew install libev)

Install:

  • Download the latest Corectl App dmg from its Releases Page and install it to the /Applications folder; it lets you start/stop/update the corectl tools needed to run CoreOS VMs on macOS
  • Open the downloaded dmg file and drag the app e.g. to your Desktop. Start the Kube-Solo App; the initial setup of the Kube-Solo VM will run, then follow the instructions there.

TL;DR

  • The app's files are installed to the ~/kube-solo folder
  • The app bootstraps a master+worker Kubernetes cluster on a single VM
  • The Mac user's home folder is automatically mounted via NFS (NFS has to be working on the Mac end, of course) to /Users/my_user:/Users/my_user on each VM boot; check the PV example for how to use Persistent Volumes
  • The macOS docker client is installed to ~/kube-solo/bin and pre-set in the OS shell to be used from there, so you can build docker images on the VM and use them with Kubernetes
  • After a successful install you can also control the kube-solo VM via the ksolo CLI. It resides in the ~/kube-solo/bin and ~/bin folders and has simple commands: ksolo start|stop|status|ip|ssh|shell; just add ~/bin to your pre-set path. A typical session is sketched below.
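A typical session might look like this (a sketch; the paths assume the default install):

export PATH=$HOME/bin:$PATH   # make ksolo available
ksolo start                   # boot the k8solo-01 VM and open the pre-set shell
ksolo ip                      # print the VM's static IP
ksolo ssh                     # log in to the VM
ksolo stop                    # halt the VM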

The install will do the following:

  • All dependent files/folders are put under the ~/kube-solo folder in the user's home folder, e.g. /Users/someuser/kube-solo
  • Downloads the latest CoreOS ISO image (if one is not already present) and runs corectl to initialise the VM
  • On the first install, or on Up after destroying the Kube-Solo setup, the k8s binaries (the version that was available when the app was built) are copied to the VM, which speeds up the Kubernetes setup
  • Installs the docker, helm, deis and kubectl clients to ~/kube-solo/bin/
  • Kubernetes Dashboard and DNS are installed as add-ons
  • Via the assigned static IP (shown on first boot; it survives VM reboots) you can access any port on the CoreOS VM
  • A persistent sparse disk (QCow2) data.img is created and mounted to /data for these mount binds and other folders:
/data/var/lib/docker -> /var/lib/docker
/data/var/lib/rkt -> /var/lib/rkt
/var/lib/kubelet sym linked to /data/kubelet
/data/opt/bin
/data/var/lib/etcd2
/data/kubernetes
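To confirm this layout from inside the VM, a quick check (not from the original docs) might be:

ksolo ssh
df -h /data                                  # the QCow2-backed data disk
ls /data/var/lib/docker /data/kubernetes     # bind-mounted folders listed above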

How it works

Just start the Kube-Solo application and you will find a small Kubernetes logo with an S in the status bar.

Menu options:

  • There you can Up and Halt the k8solo-01 VM
  • SSH to k8solo-01 opens a VM shell
  • On Up, an OS Shell is opened after the VM boots, with the following environment pre-set:
kubernetes master - export KUBERNETES_MASTER=http://192.168.64.xxx:8080
etcd endpoint - export ETCDCTL_PEERS=http://192.168.64.xxx:2379
docker - export DOCKER_HOST=tcp://192.168.64.xxx:2375
path to `~/kube-solo/bin`, where the macOS clients and shell scripts are stored
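Because DOCKER_HOST points at the VM's daemon, images built from the OS Shell land directly on the VM, where the kubelet can use them without a registry push. A hedged sketch (the image name and Dockerfile are hypothetical):

cd ~/myapp
docker build -t myapp:dev .                        # builds on the VM's docker daemon
kubectl run myapp --image=myapp:dev --port=8080    # creates a Deployment
kubectl expose deployment myapp --type=NodePort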

ksolo cli options:

  • ksolo start will start the k8solo-01 VM with the shell environment pre-set as above
  • ksolo stop will stop the VM
  • ksolo status will show the VM's status
  • ksolo ip will show the VM's IP
  • ksolo ssh will ssh to the VM
  • ksolo shell will open the pre-set shell

Other menu options:

  • Kubernetes Dashboard opens the Kubernetes Dashboard, where you can check Nodes, Pods, Replication Controllers, Deployments, Services and so on, and deploy apps
  • Check for App updates checks for a new app version
  • Updates/Update Kubernetes to the latest version updates to the latest version of Kubernetes
  • Updates/Change Kubernetes version downloads and installs a specified Kubernetes version from GitHub
  • Updates/Update macOS helm and deis clients updates helm and deis to their latest versions
  • Setup/ allows you to:
- Change the CoreOS release channel
- Change the VM's RAM size
- Destroy the Kube-Solo VM (just deletes the data.img file)
- Run the initial setup of the Kube-Solo VM

Example output of a successful Kubernetes Solo install:

kubectl cluster-info:
Kubernetes master is running at http://192.168.64.3:8080
KubeDNS is running at http://192.168.64.3:8080/api/v1/proxy/namespaces/kube-system/services/kube-dns
kubernetes-dashboard is running at http://192.168.64.3:8080/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard

Cluster version:
Client version: v1.5.1
Server version: v1.5.1

kubectl get nodes:
NAME        STATUS    AGE
k8solo-01   Ready     12s

Usage

You're now ready to use the Kubernetes cluster.

To get started, try the Kubernetes examples.
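As a minimal smoke test (a sketch, assuming the OS Shell environment described above; the nginx image is just an example):

kubectl run nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get svc nginx                 # note the assigned NodePort
curl http://$(ksolo ip):<nodeport>    # substitute the NodePort shown above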

Other CoreOS VM based Apps for macOS

Contributing

Kube-Solo for macOS is an open source project released under the Apache License, Version 2.0, so contributions and suggestions are gladly welcomed!

kube-solo-osx's People

Contributors

adamreese, bfarias-godaddy, chrislovecnm, franz-josef-kaiser, interstateone, jgmize, martinhoefling, pplanel, rimusz, techmaniack, whitlockjc, zatricion


kube-solo-osx's Issues

Not able to start VM because of /etc/exports

First of all thank you for your efforts trying to make development on Kubernetes this easy on OSX.

I am still doing a lot of development using Vagrant & VirtualBox, so there are some Vagrant-generated entries in my /etc/exports. With these entries the VM does not boot up; when I remove them, the VM does boot up.

I have not yet looked at the code, so I'm not sure if this is a problem within this project or maybe in a dependency.

Starting VM ...
> booting k8solo-01
[corectl] alpha/942.0.0 already available on your system
Error: unable to validate /etc/exports ('[]')
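One possible workaround (a sketch, assuming the entries carry Vagrant's standard marker comments; it backs up the file before editing and then restarts NFS):

sudo sed -i.bak '/^# VAGRANT-BEGIN/,/^# VAGRANT-END/d' /etc/exports
sudo nfsd restart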

Error: 'k8solo-01' not found, or dead

Hi,

I'm having problems getting the VM to come up. I'm on OS X 10.13.3 with Virtual Box 5.0.14 r105127. I'm hoping you know exactly what the problem is :)

Thanks for making this project. I hope I can get it working!

➜  ~  /Volumes/Kube-Solo/Kube-Solo.app/Contents/Resources/up.command; exit;

Starting VM ...
> booting k8solo-01
[corectl] stable/835.13.0 already available on your system
[corectl] NFS started in order for '/Users/mike' to be made available to the VMs
[corectl] started 'k8solo-01' in background with IP 192.168.64.2 and PID 81819
Error: 'k8solo-01' not found, or dead

Usage:
  corectl query [VMids] [flags]

Aliases:
  query, q

Flags:
  -a, --all    display extended information about a running CoreOS instance
  -i, --ip     displays given instance IP address
  -j, --json   outputs in JSON for easy 3rd party integration

Global Flags:
      --debug   adds extra verbosity, and options, for debugging purposes and/or power users

All flags can also be configured via upper-case environment variables prefixed with "COREOS_"
For example, "--debug" => "COREOS_DEBUG"

Error: 'k8solo-01' not found, or dead

Usage:
  corectl query [VMids] [flags]

Aliases:
  query, q

Flags:
  -a, --all    display extended information about a running CoreOS instance
  -i, --ip     displays given instance IP address
  -j, --json   outputs in JSON for easy 3rd party integration

Global Flags:
      --debug   adds extra verbosity, and options, for debugging purposes and/or power users

All flags can also be configured via upper-case environment variables prefixed with "COREOS_"
For example, "--debug" => "COREOS_DEBUG"


Waiting for VM to be ready...

kube-solo halts unexpectedly

While using kube-solo v0.6.4 (and earlier versions), I often see the k8s api become unresponsive. All I can tell is that the VM has halted. If I click the "Up" menu item, kube-solo starts up as if nothing were wrong.

This happens often and easily; my test case is running the workflow-v2.0.0-e2e chart on top of Deis workflow-v2.0.0. Let me know if there are any logs I can provide. I have also tried configuring with 4GB of RAM allocated to kube-solo instead of the default 2GB, but with the same results.

Connection refused on exposed port

I am trying to publish a Service using the NodePort type, and when I try to access the service I keep getting connection refused. Any hints on how to solve this? I am using Kubernetes' hello world example.

bash-3.2$ kubectl version
Client Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.2", GitCommit:"528f879e7d3790ea4287687ef0ab3f2a01cc2718", GitTreeState:"clean"}

bash-3.2$ kubectl run hello --image=192.168.64.1:5000/hello-node:v1 --port 8888
deployment "hello-node" created

bash-3.2$ kubectl describe deployments
Name:           hello-node
Namespace:      default
CreationTimestamp:  Sat, 23 Apr 2016 16:32:12 -0400
Labels:         run=hello-node
Selector:       run=hello-node
Replicas:       1 updated | 1 total | 1 available | 0 unavailable
StrategyType:       RollingUpdate
MinReadySeconds:    0
RollingUpdateStrategy:  1 max unavailable, 1 max surge
OldReplicaSets:     <none>
NewReplicaSet:      hello-node-1831504032 (1/1 replicas created)
Events:
  FirstSeen LastSeen    Count   From                SubobjectPath   Type        Reason          Message
  --------- --------    -----   ----                -------------   --------    ------          -------
  31s       31s     1   {deployment-controller }            Normal      ScalingReplicaSet   Scaled up replica set hello-node-1831504032 to 1

bash-3.2$ kubectl expose deployment hello-node --type="NodePort"
service "hello-node" exposed

bash-3.2$ kubectl describe pods hello-node
Name:       hello-node-1831504032-6djpf
Namespace:  default
Node:       192.168.64.3/192.168.64.3
Start Time: Sat, 23 Apr 2016 16:32:12 -0400
Labels:     pod-template-hash=1831504032,run=hello-node
Status:     Running
IP:     10.244.91.6
Controllers:    ReplicaSet/hello-node-1831504032
Containers:
  hello-node:
    Container ID:   docker://c70f4e7f316bd92c8bf1b5d2d0b9524073765300d918cb7f5c18eec89fee341b
    Image:      192.168.64.1:5000/hello-node:v1
    Image ID:       docker://8c1c89b16cb87910b2df28ce74b9400a4709010e9948f74d151a1adbf5f12ab6
    Port:       8888/TCP
    QoS Tier:
      memory:       BestEffort
      cpu:      BestEffort
    State:      Running
      Started:      Sat, 23 Apr 2016 16:32:12 -0400
    Ready:      True
    Restart Count:  0
    Environment Variables:
Conditions:
  Type      Status
  Ready     True
Volumes:
  default-token-acsjb:
    Type:   Secret (a volume populated by a Secret)
    SecretName: default-token-acsjb
Events:
  FirstSeen LastSeen    Count   From            SubobjectPath           Type        Reason      Message
  --------- --------    -----   ----            -------------           --------    ------      -------
  2m        2m      1   {default-scheduler }                    Normal      Scheduled   Successfully assigned hello-node-1831504032-6djpf to 192.168.64.3
  2m        2m      1   {kubelet 192.168.64.3}  spec.containers{hello-node} Normal      Pulled      Container image "192.168.64.1:5000/hello-node:v1" already present on machine
  2m        2m      1   {kubelet 192.168.64.3}  spec.containers{hello-node} Normal      Created     Created container with docker id c70f4e7f316b
  2m        2m      1   {kubelet 192.168.64.3}  spec.containers{hello-node} Normal      Started     Started container with docker id c70f4e7f316b

bash-3.2$ kubectl describe services hello-node
Name:           hello-node
Namespace:      default
Labels:         run=hello-node
Selector:       run=hello-node
Type:           NodePort
IP:         10.100.19.32
Port:           <unset> 8888/TCP
NodePort:       <unset> 30174/TCP
Endpoints:      10.244.91.6:8888
Session Affinity:   None
No events.

bash-3.2$ http http://192.168.64.3:30174

http: error: ConnectionError: HTTPConnectionPool(host='192.168.64.3', port=30174): Max retries exceeded with url: / (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x10a75b290>: Failed to establish a new connection: [Errno 61] Connection refused',))

bash-3.2$ http http://192.168.64.3:8080/version
HTTP/1.1 200 OK
Content-Length: 146
Content-Type: application/json
Date: Sat, 23 Apr 2016 20:37:03 GMT

{
    "gitCommit": "528f879e7d3790ea4287687ef0ab3f2a01cc2718",
    "gitTreeState": "clean",
    "gitVersion": "v1.2.2",
    "major": "1",
    "minor": "2"
}
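A few diagnostics that might narrow this down (not part of the original report; the IP and ports are taken from the output above, and the iptables chain name assumes kube-proxy in iptables mode):

ksolo ssh
curl http://10.244.91.6:8888                 # does the pod answer on its own IP?
curl http://127.0.0.1:30174                  # NodePort from inside the node
sudo iptables -t nat -L KUBE-NODEPORTS -n    # is kube-proxy programming the port?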

v0.8.0 kube-ui not started

I did a fresh install of 0.8.0 and found that kube-ui is not started:

bash-3.2$ kubectl --namespace=kube-system get po
NAME                                READY     STATUS    RESTARTS   AGE
kube-dns-v11-75rxn                  4/4       Running   1          52m
kubedash-2697969532-htxgl           2/2       Running   0          52m
kubernetes-dashboard-v1.1.0-hjrut   1/1       Running   0          52m

[docs] Enabling authn/authz

What would your suggestion be for enabling authn/authz when using kube-solo? I can think of a few ways that might work but before doing extra work or doing it in an unsuggested way, I figured I would ask. There are really two topics here:

  1. Taking an existing kube-solo VM and altering it so that the apiserver's startup has the appropriate flags
  2. Possibly making it so that kube-solo itself can be provided with these details using an environment variable or some configuration file and kube-solo will just use these upon startup

Thoughts?

Private Docker registry

Thanks a lot for your work. I was wondering if you've had any thoughts about running a local Docker registry that Kubernetes is configured to use. This would simplify development/testing quite a bit because you could build an image on the host system, push to the Docker registry and create a new pod/service deployment for Kubernetes without having to do all of the SSH dancing mentioned in TheNewNormal/kube-cluster-osx/issues/9.
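One possible approach (a sketch; it relies on the OS Shell's DOCKER_HOST pointing at the VM's daemon, and the image name is hypothetical):

docker run -d -p 5000:5000 --name registry registry:2   # registry runs inside the VM
docker tag my-image $(ksolo ip):5000/my-image
docker push $(ksolo ip):5000/my-image                   # the VM IP may need to be configured as an insecure registry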

Fresh install has DNS issue

I tried to install this, and ran into some problems. Here's my procedure:

  1. Copy the app to /Applications and run it. Menubar item shows up.
  2. Select Setup > Initial Setup to create the VM, and follow the prompts (10GB disk, beta channel). This sometimes spins forever, but sometimes it succeeds.
  3. Select "OS Shell", and try to create something:
bash-3.2$ kubectl run nginx --image=nginx
CONTROLLER   CONTAINER(S)   IMAGE(S)   SELECTOR    REPLICAS
nginx        nginx          nginx      run=nginx   1

bash-3.2$ kubectl get pods,svc,rc
NAME          READY     STATUS                                  RESTARTS   AGE
nginx-do1xo   0/1       Image: nginx is not ready on the node   0          6s
NAME         LABELS                                    SELECTOR   IP(S)        PORT(S)
kubernetes   component=apiserver,provider=kubernetes   <none>     10.100.0.1   443/TCP
CONTROLLER   CONTAINER(S)   IMAGE(S)   SELECTOR    REPLICAS
nginx        nginx          nginx      run=nginx   1

bash-3.2$ kubectl describe pod nginx-do1xo
Name:               nginx-do1xo
Namespace:          default
Image(s):           nginx
Node:               192.168.64.2/192.168.64.2
Labels:             run=nginx
Status:             Pending
Reason:
Message:
IP:
Replication Controllers:    nginx (1/1 replicas created)
Containers:
  nginx:
    Image:      nginx
    State:      Waiting
      Reason:       Image: nginx is not ready on the node
    Ready:      False
    Restart Count:  0
Conditions:
  Type      Status
  Ready     False
Events:
  FirstSeen             LastSeen            Count   From            SubobjectPath               Reason      Message
  Wed, 28 Oct 2015 11:17:20 -0700   Wed, 28 Oct 2015 11:17:20 -0700 1   {scheduler }                            scheduled   Successfully assigned nginx-do1xo to 192.168.64.2
  Wed, 28 Oct 2015 11:17:21 -0700   Wed, 28 Oct 2015 11:17:31 -0700 2   {kubelet 192.168.64.2}  implicitly required container POD   failed      Failed to pull image "gcr.io/google_containers/pause:0.8.0": image pull failed for gcr.io/google_containers/pause:0.8.0, this may be because there are no credentials on this request.  details: (API error (500): unable to ping registry endpoint https://gcr.io/v0/
v2 ping attempt failed with error: Get https://gcr.io/v2/: dial tcp: lookup gcr.io: Temporary failure in name resolution
 v1 ping attempt failed with error: Get https://gcr.io/v1/_ping: dial tcp: lookup gcr.io: Temporary failure in name resolution
)
  Wed, 28 Oct 2015 11:17:21 -0700   Wed, 28 Oct 2015 11:17:31 -0700 2   {kubelet 192.168.64.2}      failedSync  Error syncing pod, skipping: image pull failed for gcr.io/google_containers/pause:0.8.0, this may be because there are no credentials on this request.  details: (API error (500): unable to ping registry endpoint https://gcr.io/v0/
v2 ping attempt failed with error: Get https://gcr.io/v2/: dial tcp: lookup gcr.io: Temporary failure in name resolution
 v1 ping attempt failed with error: Get https://gcr.io/v1/_ping: dial tcp: lookup gcr.io: Temporary failure in name resolution
)

So there's a DNS issue. The resolv.conf only has nameserver 192.168.64.1 in it, but the DNS helper doesn't seem to be running. What comes next?

Latest release cannot start VM

I just updated to the latest kube-solo and when attempting to bring up the VM, I get this:

Creating 20GB sparse disk (QCow2)...
/Applications/Kube-Solo.app/Contents/Resources/functions.sh: line 152: /Users/notyou/bin/qcow-tool: No such file or directory
-
Created  Data disk


Starting VM ...

/Applications/Kube-Solo.app/Contents/Resources/functions.sh: line 195: /Users/notyou/bin/corectl: No such file or directory

VM has not booted, please check '~/kube-solo/logs/vm_up.log' and report the problem !!!

I've updated to the latest corectl as well but nothing works.

deis launched applications unreachable from host

I discovered kube-solo-osx today from a deis blog post. The ultimate intent in experimenting with kube-solo was to have a fast path for working with deis. Toward that, I've had a mostly successful result, but am having an issue reaching my launched services.

I've followed through the kube-solo and install_deis process successfully. I was then able to create and deploy using 'deis create' and 'git push deis master' from one of the example apps.

However, I'm unable to reach the running service from the host machine using the service clusterIP. After sshing into the guest vm, I am able to reach the running service.

Am I supposed to be able to reach services running in the kube-solo VM's Kubernetes cluster from the host? Are there any recommendations for how to diagnose and fix this issue? Please let me know what diagnostics might help.

Thanks for creating a straightforward process and tools for launching a cluster and deis!

Error during initialization.

Running Kube-Solo v0.5.6 on OSX 10.11.2

During the initial setup I get the following error:

Error: rename /tmp/coreos670171957/coreos_production_pxe.vmlinuz /Users/bruno/.coreos/images/stable/899.15.0/coreos_production_pxe.vmlinuz: cross-device link

Here is the complete output:

$ /Applications/Kube-Solo.app/Contents/Resources/first-init.command; exit;

Setting up Kubernetes Solo Cluster on OS X

Reading ssh key from /Users/bruno/.ssh/id_rsa.pub

/Users/bruno/.ssh/id_rsa.pub found, updating configuration files ...

Your Mac user's password will be saved in to 'Keychain'
and later one used for 'sudo' command to start VM !!!

This is not the password to access VM via ssh or console !!!

Please type your Mac user's password followed by [ENTER]:
The sudo password is fine !!!


Set CoreOS Release Channel:
 1)  Alpha (may not always function properly)
 2)  Beta
 3)  Stable (recommended)

Select an option: 3


Please type VM's RAM size in GBs followed by [ENTER]:
[default is 2]: 4
Changing VM's RAM to 4GB...


Please type Data disk size in GBs followed by [ENTER]:
[default is 15]: 30

Creating 30GB disk (it could take a while for big disks)...
  30GiB 0:00:31 [ 972MiB/s] [============================================================================================================================================================>] 100%
Created 30GB Data disk

Starting VM ...
> booting k8solo-01
[corectl] downloading and verifying stable/899.15.0
30.07 MB / 30.07 MB [============================================================================================================================================================================] 100.00 %
[corectl] SHA512 hash for coreos_production_pxe.vmlinuz OK
192.65 MB / 192.65 MB [==========================================================================================================================================================================] 100.00 %
[corectl] SHA512 hash for coreos_production_pxe_image.cpio.gz OK
Error: rename /tmp/coreos670171957/coreos_production_pxe.vmlinuz /Users/bruno/.coreos/images/stable/899.15.0/coreos_production_pxe.vmlinuz: cross-device link

Usage:
  corectl load path/to/yourProfile [flags]

Examples:
  corectl load profiles/demo.toml

Global Flags:
      --debug   adds extra verbosity, and options, for debugging purposes and/or power users

All flags can also be configured via upper-case environment variables prefixed with "COREOS_"
For example, "--debug" => "COREOS_DEBUG"


VM have not booted, please check '~/kube-solo/logs/vm_up.log' and report the problem !!!

Press [Enter] key to continue...

I assume that corectl, rather than moving the file after validation, tries to link it (as with ln), which causes a problem when the temp directory is on a different mount point than kube-solo.

Can anyone help me with this?

Update homebrew formula

Current homebrew formula installs 0.9.1. Is updating this recipe to use the new version the responsibility of someone on your team?

Also, I wonder why it doesn't auto-include new releases despite using an appcast feed (appcast 'https://github.com/TheNewNormal/kube-solo-osx/releases.atom'). Probably I don't know how appcast works.

New install - 0.6.6 (tried previous versions too) - fleet units not starting up properly. Hangs at waiting...

Identity added: /Users/duncan.mcnaught/.ssh/id_rsa (/Users/duncan.mcnaught/.ssh/id_rsa)
--- ~ » /Users/duncan.mcnaught/Desktop/Kube-Solo.app/Contents/Resources/first-init.command; exit;

Setting up Kubernetes Solo Cluster on OS X

Reading ssh key from /Users/duncan.mcnaught/.ssh/id_rsa.pub

/Users/duncan.mcnaught/.ssh/id_rsa.pub found, updating configuration files ...

Your Mac user's password will be saved in to 'Keychain'
and later one used for 'sudo' command to start VM !!!

This is not the password to access VM via ssh or console !!!

Please type your Mac user's password followed by [ENTER]:
The sudo password is fine !!!


Set CoreOS Release Channel:
 1)  Alpha (may not always function properly)
 2)  Beta
 3)  Stable (recommended)

Select an option: 3


Please type VM's RAM size in GBs followed by [ENTER]:
[default is 2]:
Changing VM's RAM to 2GB...


Please type Data disk size in GBs followed by [ENTER]:
[default is 15]:

Creating 15GB disk ...
  15GiB 0:00:13 [1.12GiB/s] [==================================================================================================>] 100%
Created 15GB Data disk

Starting VM ...
> booting k8solo-01
[corectl] stable/1010.5.0 already available on your system
[corectl] NFS started in order for '/Users/duncan.mcnaught' to be made available to the VMs
[corectl] started 'k8solo-01' in background with IP 192.168.64.2 and PID 3739

Installing Kubernetes files on to VM...
[corectl] uploading 'kube.tgz' to 'k8solo-01:/home/core/kube.tgz'
58.55 MB / 58.55 MB [==================================================================================================================] 100.00 %
Done with k8solo-01


fleetctl is up to date ...

Downloading latest version of helmc for OS X

Installed latest helmc helmc/0.8.1%2Be4b3983 to ~/kube-solo/bin ...
---> Checking repository charts
Updated 1 charts
jenkins
---> Checking repository kube-charts
Already up-to-date.
---> Checking repository deis
Updated 1 charts
workflow-dev
---> Done
[ERROR] Remote kube-charts already exists, and is pointed to https://github.com/TheNewNormal/kube-charts
---> Checking repository charts
Already up-to-date.
---> Checking repository kube-charts
Already up-to-date.
---> Checking repository deis
Already up-to-date.
---> Done

fleetctl list-machines:
MACHINE     IP      METADATA
a16e7089... 192.168.64.2    -

Starting all fleet units in ~/kube-solo/fleet:
Unit fleet-ui.service inactive
Unit fleet-ui.service launched on a16e7089.../192.168.64.2
Unit kube-apiserver.service inactive
Unit kube-apiserver.service launched on a16e7089.../192.168.64.2
Unit kube-controller-manager.service inactive
Unit kube-controller-manager.service launched on a16e7089.../192.168.64.2
Unit kube-scheduler.service inactive
Unit kube-scheduler.service launched on a16e7089.../192.168.64.2
Unit kube-kubelet.service inactive
Unit kube-kubelet.service launched on a16e7089.../192.168.64.2
Unit kube-proxy.service inactive
Unit kube-proxy.service launched on a16e7089.../192.168.64.2

fleetctl list-units:
UNIT                MACHINE             ACTIVE      SUB
fleet-ui.service        a16e7089.../192.168.64.2    activating  start-pre
kube-apiserver.service      a16e7089.../192.168.64.2    inactive    dead
kube-controller-manager.service a16e7089.../192.168.64.2    inactive    dead
kube-kubelet.service        a16e7089.../192.168.64.2    activating  start-pre
kube-proxy.service      a16e7089.../192.168.64.2    activating  start-pre
kube-scheduler.service      a16e7089.../192.168.64.2    inactive    dead

Generate kubeconfig file ...
cluster "k8solo-01" set.
context "default-context" set.
switched to context "default-context".
Waiting for Kubernetes cluster to be ready. This can take a few minutes...

When I ssh to the VM:

CoreOS stable (1010.5.0)
Update Strategy: No Reboots
Failed Units: 2
  kube-certs.service
  update-engine-stub.service
core@k8solo-01 ~ $
core@k8solo-01 ~ $ fleetctl list-units
UNIT                MACHINE             ACTIVE      SUB
fleet-ui.service        a16e7089.../192.168.64.2    activating  auto-restart
kube-apiserver.service      a16e7089.../192.168.64.2    inactive    dead
kube-controller-manager.service a16e7089.../192.168.64.2    inactive    dead
kube-kubelet.service        a16e7089.../192.168.64.2    activating  start-pre
kube-proxy.service      a16e7089.../192.168.64.2    activating  start-pre
kube-scheduler.service      a16e7089.../192.168.64.2    inactive    dead

Nice. Roadmap?

This is great.

Will this allow easy deployment to providers? I need to push out to GCE using gcloud and kubectl.

Feature request: tmux integration

My iTerm2 profile is configured with Send text at start: tmux attach -t base || tmux new -s base, which automatically attaches to or launches a tmux session. Which means that when kube-solo-osx tries to launch commands in my terminal, it doesn't work (it just opens a new tab in iTerm2 which attaches to my tmux session and nothing else happens). Would it be possible to configure the app to open a new tmux window instead of trying to launch iTerm2?

kubectl bin script overrides context

So I just ran into an issue where I wanted to override the KUBECONFIG env var and pass in my own file. However, the kubectl wrapper in ~/kube-solo/bin/kubectl overrides that with a command line flag. Much confusion was had. One workaround is sketched below.
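A sketch of that workaround: kubectl gives the --kubeconfig flag the highest precedence, so call a kubectl binary that is not the wrapper and pass the flag explicitly (the binary location and config path here are assumptions):

/usr/local/bin/kubectl --kubeconfig="$HOME/.kube/my-config" get nodes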

Add NFS mount point

Is it possible to add another NFS mount point to the image when it starts?

Thanks
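In principle an extra export on the Mac plus a mount inside the VM should work (a sketch; the path, network and mapall IDs are examples, and this is not a built-in app feature):

# On the Mac:
echo '/Volumes/data -network 192.168.64.0 -mask 255.255.255.0 -alldirs -mapall=501:20' | sudo tee -a /etc/exports
sudo nfsd update
# Inside the VM (ksolo ssh):
sudo mkdir -p /mnt/data
sudo mount -t nfs 192.168.64.1:/Volumes/data /mnt/data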

Changing the Mac password

Things start getting weird. It seems that you can't destroy a VM if that happens.

external-ip stuck in pending state

More of a general Kubernetes question: is something additional needed for the wordpress service in this tutorial, https://github.com/kubernetes/kubernetes/tree/master/examples/mysql-wordpress-pd/, to automatically have an external IP assigned? The external IP stays forever in <pending> status. According to the tutorial it should get an external IP assigned.

I can however access the service through 192.168.64.14:31437 (<node ip>:<node port>).

This is how the service looks after being deployed according to the tutorial:

 kubectl get service wordpress
NAME        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
wordpress   10.100.20.207   <pending>     80/TCP    17h

$ kubectl describe service wordpress
Name:                   wordpress
Namespace:              default
Labels:                 app=wordpress
Selector:               app=wordpress,tier=frontend
Type:                   LoadBalancer
IP:                     10.100.20.207
Port:                   <unset> 80/TCP
NodePort:               <unset> 31437/TCP
Endpoints:              10.244.87.6:80
Session Affinity:       None
No events.
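For context (not part of the original report): the LoadBalancer service type needs a cloud provider to allocate an external IP, so on a local cluster it stays <pending>, and <node ip>:<node port> access, as observed above, is the expected path:

kubectl get nodes -o wide    # node IP
kubectl get svc wordpress    # NodePort (31437 here)
curl http://192.168.64.14:31437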

cannot get 0.4.4 to start

╰─○ /Applications/Kube-Solo.app/Contents/Resources/first-init.command; exit;

Setting up Kubernetes Solo Cluster on OS X

Reading ssh key from /Users/jonathanchauncey/.ssh/id_rsa.pub

/Users/jonathanchauncey/.ssh/id_rsa.pub found, updating configuration files ...
Enter passphrase for /Users/jonathanchauncey/.ssh/id_rsa:

Your Mac user's password will be saved in to 'Keychain'
and later one used for 'sudo' command to start VM !!!

This is not the password to access VM via ssh or console !!!

Please type your Mac user's password followed by [ENTER]:
The sudo password is fine !!!


Set CoreOS Release Channel:
 1)  Alpha
 2)  Beta
 3)  Stable

Select an option: 2

Please type Data disk size in GBs followed by [ENTER]:
[default is 5]: 60

Creating 60GB disk (it could take a while for big disks)...
-
Created 60GB Data disk

Starting VM ...

> booting k8solo-01
[corectl] downloading and verifying beta/877.1.0
29.93 MB / 29.93 MB [===================================================================================================================================================] 100.00 %
[corectl] SHA512 hash for coreos_production_pxe.vmlinuz OK
191.75 MB / 191.75 MB [=================================================================================================================================================] 100.00 %
[corectl] SHA512 hash for coreos_production_pxe_image.cpio.gz OK
[corectl] beta/877.1.0 ready
[corectl] '/Users/jonathanchauncey' was made available to VMs via NFS
[corectl] started 'k8solo-01' in background with IP 192.168.64.2 and PID 33386

Installing Kubernetes files on to VM...
[corectl] uploading 'kube.tgz' to 'k8solo-01:/home/core/kube.tgz'
40.49 MB / 40.49 MB [===================================================================================================================================================] 100.00 %
Done with k8solo-01

Downloading fleetctl v0.11.5 for OS X
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   607    0   607    0     0    531      0 --:--:--  0:00:01 --:--:--   531
100 2605k  100 2605k    0     0   631k      0  0:00:04  0:00:04 --:--:--  956k
Downloading latest version of helm for OS X
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100 4760k  100 4760k    0     0   846k      0  0:00:05  0:00:05 --:--:-- 1044k

Installed latest helm 0.3.0%2Bec5ec24 to ~/kube-solo/bin ...
[WARN] A new version of Helm is available. You have 0.1.0. The latest is 0.3.0
---> Download version 0.3.0 here: https://github.com/helm/helm/releases/tag/0.3.0
---> Checking repository charts
Updated 2 charts
deis-dev                     deis
Added 2 charts
router-dev                   router
---> Checking repository deis
Updated 2 charts
deis-dev                     deis
Added 2 charts
router-dev                   router
---> Checking repository jchauncey
[ERROR] Not all repos could be updated: Repository 'jchauncey' is dirty.  Commit changes before updating
[ERROR] Remote kube-charts already exists, and is pointed to https://github.com/TheNewNormal/kube-charts
[WARN] A new version of Helm is available. You have 0.1.0. The latest is 0.3.0
---> Download version 0.3.0 here: https://github.com/helm/helm/releases/tag/0.3.0
---> Checking repository charts
Already up-to-date.
---> Checking repository deis
Already up-to-date.
---> Checking repository jchauncey
[ERROR] Not all repos could be updated: Repository 'jchauncey' is dirty.  Commit changes before updating

fleetctl list-machines:
MACHINE     IP      METADATA
25aa25b0... 192.168.64.2    -

Starting all fleet units in ~/kube-solo/fleet:
Unit fleet-ui.service inactive
Unit fleet-ui.service launched on 25aa25b0.../192.168.64.2
Unit kube-apiserver.service inactive
Unit kube-apiserver.service launched on 25aa25b0.../192.168.64.2
Unit kube-controller-manager.service inactive
Unit kube-controller-manager.service launched on 25aa25b0.../192.168.64.2
Unit kube-scheduler.service inactive
Unit kube-scheduler.service launched on 25aa25b0.../192.168.64.2
Unit kube-kubelet.service inactive
Unit kube-kubelet.service launched on 25aa25b0.../192.168.64.2
Unit kube-proxy.service inactive
Unit kube-proxy.service launched on 25aa25b0.../192.168.64.2

fleetctl list-units:
UNIT                MACHINE             ACTIVE      SUB
fleet-ui.service        25aa25b0.../192.168.64.2    activating  auto-restart
kube-apiserver.service      25aa25b0.../192.168.64.2    inactive    dead
kube-controller-manager.service 25aa25b0.../192.168.64.2    inactive    dead
kube-kubelet.service        25aa25b0.../192.168.64.2    activating  start-pre
kube-proxy.service      25aa25b0.../192.168.64.2    activating  start-pre
kube-scheduler.service      25aa25b0.../192.168.64.2    inactive    dead

Generate kubeconfig file ...
cluster "k8solo-01" set.
context "default-context" set.
switched to context "default-context".
Waiting for Kubernetes cluster to be ready. This can take a few minutes...
error: couldn't read version from server: Get http://192.168.64.2:8080/api: dial tcp 192.168.64.2:8080: connection refused
/Applications/Kube-Solo.app/Contents/Resources/first-init.command: line 103: spin: i++%0: division by 0 (error token is "0")
error: couldn't read version from server: Get http://192.168.64.2:8080/api: dial tcp 192.168.64.2:8080: connection refused

Hangs at "waiting for VM to be ready"

I clicked the "up" menu option for the first time and it hangs at "Waiting for VM to be ready..." for >5 mins. Any idea what might be wrong?

Starting VM ...

> booting k8solo-01
[corectl] alpha/928.0.0 already available on your system
Error: unable to validate /etc/exports ('[]')

Usage:
  corectl load path/to/yourProfile [flags]

Examples:
  corectl load profiles/demo.toml

Global Flags:
      --debug   adds extra verbosity, and options, for debugging purposes and/or power users

All flags can also be configured via upper-case environment variables prefixed with "COREOS_"
For example, "--debug" => "COREOS_DEBUG"

Error: 'k8solo-01' not found, or dead

Usage:
  corectl query [VMids] [flags]

Aliases:
  query, q

Flags:
  -a, --all    display extended information about a running CoreOS instance
  -i, --ip     displays given instance IP address
  -j, --json   outputs in JSON for easy 3rd party integration

Global Flags:
      --debug   adds extra verbosity, and options, for debugging purposes and/or power users

All flags can also be configured via upper-case environment variables prefixed with "COREOS_"
For example, "--debug" => "COREOS_DEBUG"

Error: 'k8solo-01' not found, or dead

Usage:
  corectl query [VMids] [flags]

Aliases:
  query, q

Flags:
  -a, --all    display extended information about a running CoreOS instance
  -i, --ip     displays given instance IP address
  -j, --json   outputs in JSON for easy 3rd party integration

Global Flags:
      --debug   adds extra verbosity, and options, for debugging purposes and/or power users

All flags can also be configured via upper-case environment variables prefixed with "COREOS_"
For example, "--debug" => "COREOS_DEBUG"


Waiting for VM to be ready...

unable to start any k8s fleet units when creating new vm

fleet-ui.service        4b99c2b8.../192.168.64.2    inactive    dead
kube-apiserver.service      4b99c2b8.../192.168.64.2    inactive    dead
kube-controller-manager.service 4b99c2b8.../192.168.64.2    inactive    dead
kube-kubelet.service        4b99c2b8.../192.168.64.2    activating  auto-restart
kube-proxy.service      4b99c2b8.../192.168.64.2    active      running
kube-scheduler.service      4b99c2b8.../192.168.64.2    inactive    dead
Starting all fleet units in ~/kube-solo/fleet:
Unit fleet-ui.service inactive
Unit fleet-ui.service launched on 4b99c2b8.../192.168.64.2
Unit kube-apiserver.service inactive
Unit kube-apiserver.service launched on 4b99c2b8.../192.168.64.2
Unit kube-controller-manager.service inactive
Unit kube-controller-manager.service launched on 4b99c2b8.../192.168.64.2
Unit kube-scheduler.service inactive
Unit kube-scheduler.service launched on 4b99c2b8.../192.168.64.2
Unit kube-kubelet.service inactive
Unit kube-kubelet.service launched on 4b99c2b8.../192.168.64.2
Unit kube-proxy.service inactive
Unit kube-proxy.service launched on 4b99c2b8.../192.168.64.2

fleetctl list-units:
UNIT                MACHINE             ACTIVE      SUB
fleet-ui.service        4b99c2b8.../192.168.64.2    inactive    dead
kube-apiserver.service      4b99c2b8.../192.168.64.2    inactive    dead
kube-controller-manager.service 4b99c2b8.../192.168.64.2    inactive    dead
kube-kubelet.service        4b99c2b8.../192.168.64.2    activating  auto-restart
kube-proxy.service      4b99c2b8.../192.168.64.2    active      running
kube-scheduler.service      4b99c2b8.../192.168.64.2    inactive    dead


Waiting for Kubernetes cluster to be ready. This can take a few minutes...

It basically just waits forever for the cluster to come up. It was working fine and then it just stopped. I've updated to the latest 0.4.1 release and nothing.

Logs -

╰─± fleetctl journal kube-controller-manager.service
-- Logs begin at Wed 2016-01-06 03:55:15 UTC, end at Wed 2016-01-06 03:56:33 UTC. --
Jan 06 03:55:49 k8solo-01 systemd[1]: Dependency failed for Kubernetes Controller Manager.
Jan 06 03:55:49 k8solo-01 systemd[1]: kube-controller-manager.service: Job kube-controller-manager.service/start failed with result 'dependency'.
Jan 06 03:56:08 k8solo-01 systemd[1]: Stopped Kubernetes Controller Manager.
Jan 06 03:56:15 k8solo-01 systemd[1]: Dependency failed for Kubernetes Controller Manager.
Jan 06 03:56:15 k8solo-01 systemd[1]: kube-controller-manager.service: Job kube-controller-manager.service/start failed with result 'dependency'.
╭─jonathanchauncey at ENG000637 in ~/projects/deis/example-go on master✔ using ‹2.2.2›
╰─± alias fl='fleetctl journal'
╭─jonathanchauncey at ENG000637 in ~/projects/deis/example-go on master✔ using ‹2.2.2›
╰─± flr kube-scheduler.service
Unit kube-scheduler.service loaded on 93c05a6d.../192.168.64.2
Unit kube-scheduler.service launched on 93c05a6d.../192.168.64.2
╭─jonathanchauncey at ENG000637 in ~/projects/deis/example-go on master✔ using ‹2.2.2›
╰─± fl kube-scheduler.service
-- Logs begin at Wed 2016-01-06 03:55:15 UTC, end at Wed 2016-01-06 03:56:53 UTC. --
Jan 06 03:55:49 k8solo-01 systemd[1]: Dependency failed for Kubernetes API Server.
Jan 06 03:55:49 k8solo-01 systemd[1]: kube-scheduler.service: Job kube-scheduler.service/start failed with result 'dependency'.
Jan 06 03:56:47 k8solo-01 systemd[1]: Stopped Kubernetes API Server.
Jan 06 03:56:51 k8solo-01 systemd[1]: Dependency failed for Kubernetes API Server.
Jan 06 03:56:51 k8solo-01 systemd[1]: kube-scheduler.service: Job kube-scheduler.service/start failed with result 'dependency'.

╰─±  fl kube-proxy.service
-- Logs begin at Wed 2016-01-06 04:01:32 UTC, end at Wed 2016-01-06 04:04:28 UTC. --
Jan 06 04:02:38 k8solo-01 kube-proxy[1166]: E0106 04:02:38.507658    1166 event.go:197] Unable to write event: 'Post http://127.0.0.1:8080/api/v1/namespaces/default/events: dial tcp 127.0.0.1:8080: connection refused' (may retry after sleeping)
Jan 06 04:02:48 k8solo-01 kube-proxy[1166]: E0106 04:02:48.510308    1166 event.go:197] Unable to write event: 'Post http://127.0.0.1:8080/api/v1/namespaces/default/events: dial tcp 127.0.0.1:8080: connection refused' (may retry after sleeping)
Jan 06 04:02:58 k8solo-01 kube-proxy[1166]: E0106 04:02:58.512439    1166 event.go:197] Unable to write event: 'Post http://127.0.0.1:8080/api/v1/namespaces/default/events: dial tcp 127.0.0.1:8080: connection refused' (may retry after sleeping)
Jan 06 04:03:08 k8solo-01 kube-proxy[1166]: E0106 04:03:08.517159    1166 event.go:197] Unable to write event: 'Post http://127.0.0.1:8080/api/v1/namespaces/default/events: dial tcp 127.0.0.1:8080: connection refused' (may retry after sleeping)
Jan 06 04:03:18 k8solo-01 kube-proxy[1166]: E0106 04:03:18.518617    1166 event.go:197] Unable to write event: 'Post http://127.0.0.1:8080/api/v1/namespaces/default/events: dial tcp 127.0.0.1:8080: connection refused' (may retry after sleeping)
Jan 06 04:03:28 k8solo-01 kube-proxy[1166]: E0106 04:03:28.520578    1166 event.go:197] Unable to write event: 'Post http://127.0.0.1:8080/api/v1/namespaces/default/events: dial tcp 127.0.0.1:8080: connection refused' (may retry after sleeping)
Jan 06 04:03:38 k8solo-01 kube-proxy[1166]: E0106 04:03:38.521518    1166 event.go:197] Unable to write event: 'Post http://127.0.0.1:8080/api/v1/namespaces/default/events: dial tcp 127.0.0.1:8080: connection refused' (may retry after sleeping)
Jan 06 04:03:48 k8solo-01 kube-proxy[1166]: E0106 04:03:48.523057    1166 event.go:197] Unable to write event: 'Post http://127.0.0.1:8080/api/v1/namespaces/default/events: dial tcp 127.0.0.1:8080: connection refused' (may retry after sleeping)
Jan 06 04:03:58 k8solo-01 kube-proxy[1166]: E0106 04:03:58.524398    1166 event.go:197] Unable to write event: 'Post http://127.0.0.1:8080/api/v1/namespaces/default/events: dial tcp 127.0.0.1:8080: connection refused' (may retry after sleeping)
Jan 06 04:03:58 k8solo-01 kube-proxy[1166]: E0106 04:03:58.524463    1166 event.go:131] Unable to write event '&api.Event{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:api.ObjectMeta{Name:"k8solo-01.1426ba72374e0f71", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:unversioned.Time{Time:time.Time{sec:0, nsec:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*unversioned.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil)}, InvolvedObject:api.ObjectReference{Kind:"Node", Namespace:"", Name:"k8solo-01", UID:"k8solo-01", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kube-proxy.", Source:api.EventSource{Component:"kube-proxy", Host:"k8solo-01"}, FirstTimestamp:unversioned.Time{Time:time.Time{sec:63587649729, nsec:590464369, loc:(*time.Location)(0x1289e20)}}, LastTimestamp:unversioned.Time{Time:time.Time{sec:63587649729, nsec:590464369, loc:(*time.Location)(0x1289e20)}}, Count:1}' (retry limit exceeded!)

Can't get the dashboard working

I get the following response:

{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "endpoints \"kubernetes-dashboard\" not found",
  "reason": "NotFound",
  "details": {
    "name": "kubernetes-dashboard",
    "kind": "endpoints"
  },
  "code": 404
}

Docker local repository unreachable

I've built a docker container on the k8solo machine (via the shell) and now I would like to push it so I can use it in my replication controller (it's a custom one).
Built with: docker build -t localhost:5000/my_container:0.1 -f Dockerfile .
When I try to push it (I believe to localhost) I only get Put http://localhost:5000/v1/repositories/my_container/: dial tcp 127.0.0.1:5000: getsockopt: connection refused (I've tried with multiple ports).

Any idea?

Installation error installing v0.8.3 - /usr/local/sbin... does not exist

function check_corectld_server() in functions.sh tries to execute the following...
CHECK_SERVER_STATUS=$(/usr/local/sbin/corectld status 2>&1 | grep "Uptime:")

which returns an error, as /usr/local/sbin does not exist. Manually creating the 'sbin' directory under /usr/local solves the issue.
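The fix from the report, as a command (a sketch):

sudo mkdir -p /usr/local/sbin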

[question] Mounting local directories to VM

While working on enabling authn, I want to run dex within the VM. To allow me to work on a dev build, I wanted to mount some host directory within the VM so I could run dex via a Service Unit by altering ~/kube-solo/cloud-init/user-data. Can you tell me how to do this?

I'm currently looking at the sources, so I might end up figuring it out myself. I've tried a few things already and spent enough time on it that I figured I'd ask for help.

NoSuchFileException: /var/run/secrets/kubernetes.io/serviceaccount/token

Thank you for kube-solo-osx; it is light on resources and a nice way to experiment with Kubernetes on the Mac.

Running some examples emits the following error:

WARN 17:41:57 Request to kubernetes apiserver failed
java.nio.file.NoSuchFileException: /var/run/secrets/kubernetes.io/serviceaccount/token

It seems like other containers now rely on /var/run/secrets/kubernetes.io/serviceaccount/token, which is initialized by default with "kube-up.sh". Is it possible to do the same in the "kube-solo-osx" initialization?
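A diagnostic sketch (not from the original report) to check whether the default ServiceAccount has a token to mount:

kubectl get serviceaccount default -o yaml   # should list a secret
kubectl get secrets                          # look for default-token-*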

Make viewing new releases optional

I love that when I start up kube-solo I'm alerted that there is a new version, but is there any way to click Cancel or something so that it does not open a new browser window? It's very disruptive.

Deis Workflow deis-builder component fails to start

deis-builder fails to spin up in v0.8.3 with the following message.

2016-07-13T02:32:39.709661147Z 2016/07/13 02:32:39 Error getting storage parameters (read /var/run/secrets/deis/objectstore/creds/..7987_12_07_19_53_36.730762557: is a directory)

deis-builder version info:
quay.io/deis/builder:v2.1.0

Multiple Nodes

I understand the project is "solo", but how hard do you think it would be to allow for multiple nodes? =)

When demoing (without wifi say at a conference), I typically want to have everything local. I've had varied success with the current offerings via Vagrant (works great, then need internet to eventually reboot everything).

I'm happy to help contribute the work, just thought I'd ask to see your thoughts on multiple nodes. (2-3 max).

~ Steve

Possible to run k8s on the command line instead of using the app?

Is it possible to run Kubernetes on corectl via kube-solo entirely via the command line, instead of using the app? I have been using corectl directly instead of using the coreos-osx app. I prefer the transparency of knowing what's going on, to learn the sysadmin aspect that will translate to managing Kubernetes on real servers. Is kube-solo-osx good for learning the commands for running Kubernetes directly?

PS: Is there any other channel for having these kinds of discussions? Are you guys on Slack/IRC?

First time install - Hangs at "Waiting for Kubernetes cluster to be ready. This can take a few minutes..."

Hi gang,

Latest everything.
OS X 10.11.3

Output during initial setup. Looks like fleetctl is not getting installed:

dcvirts-Mac:~ dcvirt$ sudo /Applications/kube-Solo.app/Contents/Resources/first-init.command

Setting up Kubernetes Solo Cluster on OS X

Reading ssh key from /Users/dcvirt/.ssh/id_rsa.pub

/Users/dcvirt/.ssh/id_rsa.pub found, updating configuration files ...

Your Mac user's password will be saved in to 'Keychain'
and later one used for 'sudo' command to start VM !!!

This is not the password to access VM via ssh or console !!!

Please type your Mac user's password followed by [ENTER]:
The sudo password is fine !!!

Set CoreOS Release Channel:

  1. Alpha
  2. Beta
  3. Stable

Select an option: 3

Please type Data disk size in GBs followed by [ENTER]:
[default is 5]:

Creating 5GB disk ...

Created 5GB Data disk

Starting VM ...

Error: While parsing config: Near line 10 (last key parsed 'k8solo-01.sshkey'): Key 'k8solo-01.sshkey' has already been defined.

Usage:
corectl load path/to/yourProfile [flags]

Examples:
corectl load profiles/demo.toml

Global Flags:
--debug adds extra verbosity, and options, for debugging purposes and/or power users

All flags can also be configured via upper-case environment variables prefixed with "COREOS_"
For example, "--debug" => "COREOS_DEBUG"

Error: 'k8solo-01' not found, or dead

Usage:
corectl query [VMids] [flags]

Aliases:
query, q

Flags:
-a, --all display extended information about a running CoreOS instance
-i, --ip displays given instance IP address
-j, --json outputs in JSON for easy 3rd party integration

Global Flags:
--debug adds extra verbosity, and options, for debugging purposes and/or power users

All flags can also be configured via upper-case environment variables prefixed with "COREOS_"
For example, "--debug" => "COREOS_DEBUG"

Error: 'k8solo-01' not found, or dead

Usage:
corectl query [VMids] [flags]

Aliases:
query, q

Flags:
-a, --all display extended information about a running CoreOS instance
-i, --ip displays given instance IP address
-j, --json outputs in JSON for easy 3rd party integration

Global Flags:
--debug adds extra verbosity, and options, for debugging purposes and/or power users

All flags can also be configured via upper-case environment variables prefixed with "COREOS_"
For example, "--debug" => "COREOS_DEBUG"

Error: 'k8solo-01' not found, or dead

Usage:
corectl query [VMids] [flags]

Aliases:
query, q

Flags:
-a, --all display extended information about a running CoreOS instance
-i, --ip displays given instance IP address
-j, --json outputs in JSON for easy 3rd party integration

Global Flags:
--debug adds extra verbosity, and options, for debugging purposes and/or power users

All flags can also be configured via upper-case environment variables prefixed with "COREOS_"
For example, "--debug" => "COREOS_DEBUG"

Installing Kubernetes files on to VM...
Error: 'k8solo-01' not found, or dead

Usage:
corectl put path/to/file VMid:/file/path/on/destination [flags]

Aliases:
put, copy, cp, scp

Examples:
// copies 'filePath' into '/destinationPath' inside VMid
corectl put filePath VMid:/destinationPath

Global Flags:
--debug adds extra verbosity, and options, for debugging purposes and/or power users

All flags can also be configured via upper-case environment variables prefixed with "COREOS_"
For example, "--debug" => "COREOS_DEBUG"

Error: 'k8solo-01' not found, or dead

Usage:
corectl ssh VMid ["command1;..."] [flags]

Aliases:
ssh, attach

Examples:
corectl ssh VMid // logins into VMid
corectl ssh VMid "some commands" // runs 'some commands' inside VMid and exits

Global Flags:
--debug adds extra verbosity, and options, for debugging purposes and/or power users

All flags can also be configured via upper-case environment variables prefixed with "COREOS_"
For example, "--debug" => "COREOS_DEBUG"

Error: 'k8solo-01' not found, or dead

Usage:
corectl ssh VMid ["command1;..."] [flags]

Aliases:
ssh, attach

Examples:
corectl ssh VMid // logins into VMid
corectl ssh VMid "some commands" // runs 'some commands' inside VMid and exits

Global Flags:
--debug adds extra verbosity, and options, for debugging purposes and/or power users

All flags can also be configured via upper-case environment variables prefixed with "COREOS_"
For example, "--debug" => "COREOS_DEBUG"

Done with k8solo-01

Error: 'k8solo-01' not found, or dead

Usage:
corectl ssh VMid ["command1;..."] [flags]

Aliases:
ssh, attach

Examples:
corectl ssh VMid // logins into VMid
corectl ssh VMid "some commands" // runs 'some commands' inside VMid and exits

Global Flags:
--debug adds extra verbosity, and options, for debugging purposes and/or power users

All flags can also be configured via upper-case environment variables prefixed with "COREOS_"
For example, "--debug" => "COREOS_DEBUG"

Downloading fleetctl v for OS X
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 21 0 21 0 0 45 0 --:--:-- --:--:-- --:--:-- 45
Downloading latest version of helm for OS X
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
100 4760k 100 4760k 0 0 2152k 0 0:00:02 0:00:02 --:--:-- 4225k

Installed latest helm 0.3.1%2Bd4c0fa8 to ~/kube-solo/bin ...
---> Checking repository charts
---> Cloning into '/Users/dcvirt/.helm/cache/charts'...
Already up-to-date.
---> Done
---> Cloning into '/Users/dcvirt/.helm/cache/kube-charts'...
---> Hooray! Successfully added the repo.
---> Checking repository charts
Already up-to-date.
---> Checking repository kube-charts
Already up-to-date.
---> Done

fleetctl list-machines:
/Applications/kube-Solo.app/Contents/Resources/first-init.command: line 82: fleetctl: command not found

Starting all fleet units in ~/kube-solo/fleet:
/Applications/kube-Solo.app/Contents/Resources/functions.sh: line 288: fleetctl: command not found
/Applications/kube-Solo.app/Contents/Resources/functions.sh: line 289: fleetctl: command not found
/Applications/kube-Solo.app/Contents/Resources/functions.sh: line 290: fleetctl: command not found
/Applications/kube-Solo.app/Contents/Resources/functions.sh: line 291: fleetctl: command not found
/Applications/kube-Solo.app/Contents/Resources/functions.sh: line 292: fleetctl: command not found
/Applications/kube-Solo.app/Contents/Resources/functions.sh: line 293: fleetctl: command not found

fleetctl list-units:
/Applications/kube-Solo.app/Contents/Resources/functions.sh: line 296: fleetctl: command not found

Generate kubeconfig file ...
cluster "k8solo-01" set.
context "default-context" set.
switched to context "default-context".
Waiting for Kubernetes cluster to be ready. This can take a few minutes...

/etc/exports blocks install

I used docker-machine-nfs which left the following entries in my /etc/exports:
$ cat /etc/exports

/Users 192.168.99.100 -alldirs -mapall=563785388:1221986466
/Users 192.168.99.106 -alldirs -mapall=563785388:1221986466
/Users 192.168.99.101 -alldirs -mapall=563785388:1221986466

Note the blank line. I was consistently getting an error from corectl about not being able to parse /etc/exports ([]). Might be worth fixing.

Allow using CLI version of corectl

Kube-Solo-OSX requires you to install the .app version of corectl, which is not really necessary; as a result, the brew-installed corectl does not work. We could do which corectl and which corectld and use those paths to start the daemon and execute commands.

errors on run...

I'm receiving some errors, and the instructions are a little unclear (although I think I've done things correctly?)

JINKITOSXLT02:~ bjozsa$ /Users/bjozsa/Library/Developer/Xcode/DerivedData/Kube-Solo-frcfrvqaeruemvcvxjgrykpibbnn/Build/Products/Debug/Kube-Solo.app/Contents/Resources/up.command; exit;
cat: /Users/bjozsa/kube-solo/.env/resouces_path: No such file or directory
cp: /bin/xhyve: No such file or directory
chmod: /Users/bjozsa/kube-solo/bin/xhyve: No such file or directory
/Users/bjozsa/Library/Developer/Xcode/DerivedData/Kube-Solo-frcfrvqaeruemvcvxjgrykpibbnn/Build/Products/Debug/Kube-Solo.app/Contents/Resources/up.command: line 18: /bin/webserver: No such file or directory
File with saved password is not found:

Your Mac user's password will be saved to '~/kube-solo/.env/password' file
and later one will be used for 'sudo' command to start VM !!!
This is not the password for the VM access via ssh or console !!!
Please type your Mac user's password followed by [ENTER]:
/Users/bjozsa/Library/Developer/Xcode/DerivedData/Kube-Solo-frcfrvqaeruemvcvxjgrykpibbnn/Build/Products/Debug/Kube-Solo.app/Contents/Resources/functions.sh: line 278: /Users/bjozsa/kube-solo/.env/password: No such file or directory
chmod: /Users/bjozsa/kube-solo/.env/password: No such file or directory

ls: /Users/bjozsa/kube-solo/imgs/.*.vmlinuz: No such file or directory
Couldn't find anything to load locally ( channel).
Fetching lastest  channel ISO ...

/Users/bjozsa/Library/Developer/Xcode/DerivedData/Kube-Solo-frcfrvqaeruemvcvxjgrykpibbnn/Build/Products/Debug/Kube-Solo.app/Contents/Resources/functions.sh: line 219: /bin/coreos-xhyve-fetch: No such file or directory

Starting VM ...
/Users/bjozsa/Library/Developer/Xcode/DerivedData/Kube-Solo-frcfrvqaeruemvcvxjgrykpibbnn/Build/Products/Debug/Kube-Solo.app/Contents/Resources/up.command: line 43: /bin/dtach: No such file or directory
You can connect to VM console from menu 'Attach to VM's console'
When you done with console just close it's window/tab with CMD+W
Waiting for VM to boot up...
\

What am I missing?

just to show that xhyve is installed...

JINKITOSXLT02:src bjozsa$ xhyve -h
Usage: xhyve [-behuwxACHPWY] [-c vcpus] [-g <gdb port>] [-l <lpc>]
             [-m mem] [-p vcpu:hostcpu] [-s <pci>] [-U uuid] -f <fw>
       -A: create ACPI tables
       -c: # cpus (default 1)
       -C: include guest memory in core file
       -e: exit on unhandled I/O access
       -f: firmware
       -g: gdb port
       -h: help
       -H: vmexit from the guest on hlt
       -l: LPC device configuration
       -m: memory size in MB
       -p: pin 'vcpu' to 'hostcpu'
       -P: vmexit from the guest on pause
       -s: <slot,driver,configinfo> PCI slot config
       -u: RTC keeps UTC time
       -U: uuid
       -v: show build version
       -w: ignore unimplemented MSRs
       -W: force virtio to use single-vector MSI
       -x: local apic is in x2APIC mode
       -Y: disable MPtable generation
