sodafoundation / api


SODA Terra Project API module: an open source implementation of the SODA API connecting storage to platforms like Kubernetes, OpenStack, and VMware.

License: Apache License 2.0

Go 93.04% Makefile 0.25% Shell 6.23% Python 0.36% Dockerfile 0.11%
ceph cloudnative cloudnativestorage kubernetes multi-cloud openstack sds storage swordfish

api's People

Contributors

anvithks, baihuoyu, click2cloud-gamma, hannibalhuang, himanshuvar, hirokikmr, jackhaibo, jimccfun, joseph-v, kumarashit, leonwanghui, lijuncloud, madhu-1, najmudheenct, pravinran, pravinranjan10, qwren, rhsakarpos, satya-gorli, shruthi-1mn, skdwriting, stmcginnis, thisisclark, twiwi, vaibhav2ghadge, wisererik, xiangrumei, xing-yang, xxwjj, zengyingzhe


api's Issues

Ups Storage Plugin Runtime Error

What version of env (opensds, os, golang etc) are you using?

  • opensds: master branch
  • os: ubuntu 17.04
  • golang: go1.7.6 linux/amd64

Is this a BUG REPORT or FEATURE REQUEST?

BUG REPORT

What happened?

When I tested the whole system, I found that every time I called the ups-storage-driver to create a volume (attachment or snapshot) resource, the dock process crashed and printed the following error:

panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x8 pc=0x52af2f]

goroutine 26 [running]:
panic(0x89e840, 0xc42000c0f0)
	/usr/local/go/src/runtime/panic.go:500 +0x1a1
github.com/opensds/opensds/pkg/dock/api.CreateVolume(0xc4201921e0, 0x2, 0x2, 0x872e60)
	/home/krej/gopath/src/github.com/opensds/opensds/pkg/dock/api/api.go:48 +0x17f
github.com/opensds/opensds/pkg/grpc/dock/server.(*dockServer).CreateVolume(0xc42018e420, 0x7fb1fbbbe120, 0xc420184de0, 0xc4201921e0, 0x0, 0x16b, 0x16b)
	/home/krej/gopath/src/github.com/opensds/opensds/pkg/grpc/dock/server/server.go:70 +0xdb
github.com/opensds/opensds/pkg/grpc/opensds._Dock_CreateVolume_Handler(0x8eb0a0, 0xc42018e420, 0x7fb1fbbbe120, 0xc420184de0, 0xc420664500, 0x0, 0x0, 0x0, 0x0, 0x0)
	/home/krej/gopath/src/github.com/opensds/opensds/pkg/grpc/opensds/opensds.pb.go:374 +0x27d
google.golang.org/grpc.(*Server).processUnaryRPC(0xc42018a3c0, 0xb7cca0, 0xc420088840, 0xc42006c300, 0xc420184810, 0xb9f560, 0xc420184e70, 0x0, 0x0)
	/home/krej/gopath/src/google.golang.org/grpc/server.go:781 +0xd14
google.golang.org/grpc.(*Server).handleStream(0xc42018a3c0, 0xb7cca0, 0xc420088840, 0xc42006c300, 0xc420184e70)
	/home/krej/gopath/src/google.golang.org/grpc/server.go:981 +0x7a0
google.golang.org/grpc.(*Server).serveStreams.func1.1(0xc4206666b0, 0xc42018a3c0, 0xb7cca0, 0xc420088840, 0xc42006c300)
	/home/krej/gopath/src/google.golang.org/grpc/server.go:551 +0xab
created by google.golang.org/grpc.(*Server).serveStreams.func1
	/home/krej/gopath/src/google.golang.org/grpc/server.go:552 +0xa3

After analyzing the ups-storage-driver, I found that there are issues with how the structures are initialized, for example:

func (p *Plugin) CreateVolume(name string, size int64) (*api.VolumeSpec, error) {
	// Returns an empty spec, so downstream code dereferences unset fields.
	return &api.VolumeSpec{}, nil
}

I will submit a pull request to fix this issue soon; please take the time to review it. Thanks!
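The crash above comes from dereferencing fields of a spec the driver never populated. Below is a minimal, self-contained sketch (with hypothetical names; `safeCreateVolume` is not the actual dock API) of the kind of defensive check that turns the panic into a normal error:

```go
package main

import (
	"errors"
	"fmt"
)

// VolumeSpec is a stand-in for api.VolumeSpec; the fields are illustrative.
type VolumeSpec struct {
	Id   string
	Name string
	Size int64
}

// createVolume mimics the ups driver stub: it returns an empty spec whose
// fields were never populated.
func createVolume(name string, size int64) (*VolumeSpec, error) {
	return &VolumeSpec{}, nil
}

// safeCreateVolume guards the dock layer against nil or empty driver
// results instead of dereferencing them blindly.
func safeCreateVolume(name string, size int64) (*VolumeSpec, error) {
	vol, err := createVolume(name, size)
	if err != nil {
		return nil, err
	}
	if vol == nil || vol.Id == "" {
		return nil, errors.New("driver returned no volume spec")
	}
	return vol, nil
}

func main() {
	if _, err := safeCreateVolume("vol01", 1); err != nil {
		fmt.Println("create failed:", err)
	}
}
```

With a guard like this, a broken driver produces a logged error response rather than a SIGSEGV in the dock process.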

Enhance CI + New CI system integration request

Hi all, I'd like to introduce a CI system built on the Zuul[1] and Nodepool[2] tools. It can run tests (both unit tests and acceptance tests) in a devstack[3]-based environment. The CI system itself is basically complete, and it has been tested with Gophercloud, where it works well. We will maintain the CI system long-term, and we would like to propose integrating it with the Gophercloud official repo (and will also try the Terraform project).

FYI, the Zuul job definitions can be found in [4], the Zuul job status web page is [5], and the test job log server is [6]; a job log example is at [7]. Test results can be found in the "job-output.txt.gz" file on the log page.

If it is possible to integrate the CI system with the OpenSDS official repo, two changes are needed on the OpenSDS side:
a) Add a webhook to trigger the CI system to run test jobs when a new pull request comes in.
b) Add a ".zuul.yaml" file to the opensds repo as the entry point for the CI jobs.
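For illustration, the ".zuul.yaml" entry point could be as small as the sketch below. The job and pipeline names here are hypothetical; the real names must match whatever the openlab-zuul-jobs repo actually defines.

```yaml
# Hypothetical .zuul.yaml sketch for the opensds repo.
- project:
    check:
      jobs:
        - opensds-unittest
        - opensds-acceptance-test
```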

[1] https://docs.openstack.org/infra/zuul/
[2] https://docs.openstack.org/infra/nodepool/
[3] https://docs.openstack.org/devstack/latest/
[4] https://github.com/theopenlab/openlab-zuul-jobs
[5] http://80.158.20.68/
[6] http://80.158.20.68/logs/
[7] http://80.158.20.68/logs/5/5/3a173240e5d1ca246990330d1176361fc8161a6b/check/gophercloud-unittest/96d4f26/

Add global config support

Is this a BUG REPORT or FEATURE REQUEST?:

help wanted

What you expected to happen:

I found that the command-line tools start up the opensds project by reading command-line options and using them to set up the configuration of servers, databases, etc. IMO, command-line options are not the best choice as more and more configuration options are required. Another way to manage configuration is to read and parse a specified configuration file into a global golang structure. The configuration structure may look like:

type Config struct {
    Server ServerConfig
    DB     DBConfig
    Log    LogConfig
}

type ServerConfig struct {
    Host         string
    Port         int
    ReadTimeOut  int
    WriteTimeOut int
    // ......
}

type DBConfig struct{}

type LogConfig struct{}
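To make the idea concrete, here is a minimal runnable sketch of loading such a file into a global structure. It uses JSON via the standard library purely for illustration; the actual file format, the `CONF` global, and `LoadConfig` are assumptions, not the project's API:

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
)

type ServerConfig struct {
	Host         string
	Port         int
	ReadTimeOut  int
	WriteTimeOut int
}

type DBConfig struct{}

type LogConfig struct{}

type Config struct {
	Server ServerConfig
	DB     DBConfig
	Log    LogConfig
}

// CONF is the global configuration instance the services would read.
var CONF Config

// LoadConfig parses raw configuration bytes (read from a file in practice)
// into the global Config structure.
func LoadConfig(data []byte) error {
	return json.Unmarshal(data, &CONF)
}

func main() {
	raw := []byte(`{"Server": {"Host": "127.0.0.1", "Port": 50040}}`)
	if err := LoadConfig(raw); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s:%d\n", CONF.Server.Host, CONF.Server.Port)
	// → 127.0.0.1:50040
}
```

Any component can then read `CONF.Server`, `CONF.DB`, etc., instead of threading dozens of command-line flags through the startup path.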

osdsdock status synchronization

Is this a BUG REPORT or FEATURE REQUEST?:

/kind feature

What happened:

There is no mechanism to probe the status of osdsdock. Once the osdsdock service crashes, the scheduler (osdslet) is not notified and will continue to schedule work onto the crashed osdsdock. This leads to unpredictable errors or states that confuse the user.

What you expected to happen:

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

  • Hotpot(release/branch) version:
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

Cannot test Containerized Installation Deployment: osdsctl CLI cannot be downloaded

Is this a BUG REPORT or FEATURE REQUEST?:

Uncomment only one, leave it on its own line:

/kind bug

/kind feature

What happened:

I followed the instructions in:
https://github.com/opensds/opensds/wiki/How-to-Run-Containerized-OpenSDS-for-Testing-Work

When I got to the following step, it failed because the file was not found:
root@ubuntu:~# curl -sSL https://raw.githubusercontent.com/opensds/opensds/development/osdsctl/bin/osdsctl | mv osdsctl /usr/local/bin/

What you expected to happen:

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

  • Hotpot(release/branch) version:
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

Ubuntu 16.04

DRBD: Shared state in OpenSDS

Is this a BUG REPORT or FEATURE REQUEST?:

/kind bug

What happened:

DRBD uses shared information that has to be consistent and available on both hosts. This mainly includes the chosen TCP/IP ports and the DRBD minor number. As there are no real means to share that information in the OpenSDS cluster, this works as follows:

  • use the same port/minor minimum version (drbd.yaml) on both hosts;
  • store metadata on CreateReplication() that records the used ports/minors;
  • check which ones are locally used, and choose the next one.

This can fail in various scenarios, and if that machinery gets out of "sync" once, the different hosts will choose different ports/minors. This is way too fragile for production use.

This shared information is attached to the local volumes, which has another side effect: if you create a replication for LVM1+LVM2, delete that replication, and create it again for the same two devices, the stored metadata still exists, because it got attached to the devices. In that case the next valid port/minor number combination is chosen. It would be a lot more robust if we could fail at the point where we detect that a volume already has a port/minor combination. Unfortunately this is not possible, as it would break the create/delete/create-again case.
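The fragility is easy to see in code. A minimal sketch of the "check which ones are locally used, choose the next one" step (names are illustrative, not the actual driver code): each host computes the next free minor from its own local view, so the results diverge as soon as those views differ.

```go
package main

import "fmt"

// nextFreeMinor picks the lowest DRBD minor number at or above min that is
// not in the used set. Each host runs this against its *local* view of
// "used", which is exactly why the hosts can disagree.
func nextFreeMinor(min int, used map[int]bool) int {
	for m := min; ; m++ {
		if !used[m] {
			return m
		}
	}
}

func main() {
	hostA := map[int]bool{100: true, 101: true}
	hostB := map[int]bool{100: true} // hostB missed one allocation
	fmt.Println(nextFreeMinor(100, hostA), nextFreeMinor(100, hostB))
	// → 102 101  (the two hosts now pick different minors)
}
```

A shared, transactional allocator (e.g. stored in the cluster's etcd) would avoid this divergence, at the cost of the extra coordination described above.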

What you expected to happen:

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:
This is from #425

Environment:

  • Hotpot(release/branch) version:
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

Note: When replication is deleted, the replication data (replicationDriverData) saved in each volume should be deleted as well. This is a bug that should be fixed.

DRBD: Export of underlying backing devices

Is this a BUG REPORT or FEATURE REQUEST?:

/kind bug

What happened:

Currently, OpenSDS exports the devices it creates for access. For example, the DRBD backing devices (LVM volumes) are exported via iSCSI. Accessing the underlying backing devices directly breaks DRBD. The same applies to every layered block device in Linux (e.g., accessing the underlying device of a DM-crypt volume also breaks the encryption device, and you likewise do not access the underlying devices of a software RAID; that is simply how the Linux block layer works). OpenSDS currently cannot handle layered block devices, which is how Linux works. IMO, this design issue will need to be solved anyway, otherwise layering Linux block devices will not be possible. Hopefully fixing the OpenSDS design then fixes this problem for DRBD.

What you expected to happen:

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:
This is from #425.

Environment:

  • Hotpot(release/branch) version:
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

We can create a mapping between the exported device path and DRBD device path and inform users to use the DRBD device path only.
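A sketch of that mapping idea follows. The paths and the `drbdPath` helper are made up for illustration; the point is that lookups fail loudly, so callers can never silently fall back to the backing device:

```go
package main

import (
	"errors"
	"fmt"
)

// deviceMap records, for each backing device OpenSDS exported, the DRBD
// device that users must actually access.
var deviceMap = map[string]string{
	"/dev/opensds-volumes-default/volume-lvm1": "/dev/drbd100",
}

// drbdPath resolves an exported backing-device path to its DRBD device,
// returning an error rather than the raw backing path when no mapping
// exists.
func drbdPath(exported string) (string, error) {
	if p, ok := deviceMap[exported]; ok {
		return p, nil
	}
	return "", errors.New("no DRBD device mapped for " + exported)
}

func main() {
	p, _ := drbdPath("/dev/opensds-volumes-default/volume-lvm1")
	fmt.Println(p)
	// → /dev/drbd100
}
```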

Multi-tenant instructions for testing instructs to source files that don't exist.

Is this a BUG REPORT or FEATURE REQUEST?:

Uncomment only one, leave it on its own line:

/kind bug

/kind feature

What happened:

I was following steps in: https://github.com/opensds/opensds/wiki/OpenSDS-Local-Cluster-with-Multi-tenants-Installation

The section for Testing instructs:

If you choose keystone for authentication strategy, you need to execute different commands for logging in as different roles:

For admin role
source /opt/stack/devstack/openrc admin admin
For user role
source /opt/stack/devstack/openrc

However, those files and directories do not exist. The rest of the commands seemed to work, but I've added their output here in case something in the installation process failed.

It did not seem to impact my ability to create or see the created volume later in the wiki.

root@lglw1039:~/gopath/src/github.com/opensds/opensds# cd $GOPATH/src/github.com/opensds/opensds && script/devsds/install.sh
Starting install...

  • set -o errexit
    +++ dirname script/devsds/install.sh
    ++ cd script/devsds
    ++ pwd
  • TOP_DIR=/home/bill/gopath/src/github.com/opensds/opensds/script/devsds
    ++ cd /home/bill/gopath/src/github.com/opensds/opensds/script/devsds/../..
    ++ pwd
  • OPENSDS_DIR=/home/bill/gopath/src/github.com/opensds/opensds
  • OPENSDS_CONFIG_DIR=/etc/opensds
  • OPENSDS_DRIVER_CONFIG_DIR=/etc/opensds/driver
  • mkdir -p /etc/opensds/driver
  • OPT_DIR=/opt/opensds
  • OPT_BIN=/opt/opensds/bin
  • mkdir -p /opt/opensds/bin
  • export PATH=/opt/opensds/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin:/usr/local/go/bin:/usr/local/go/bin:/home/bill/gopath/bin
  • PATH=/opt/opensds/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin:/usr/local/go/bin:/usr/local/go/bin:/home/bill/gopath/bin
  • LOGFILE=/var/log/opensds/devsds.log
  • TIMESTAMP_FORMAT=%F-%H%M%S
  • LOGDAYS=7
    ++ date +%F-%H%M%S
  • CURRENT_LOG_TIME=2018-05-23-090701
  • LOGFILE_DIR=/var/log/opensds
  • LOGFILE_NAME=devsds.log
  • mkdir -p /var/log/opensds
  • find /var/log/opensds -maxdepth 1 -name 'devsds.log.*' -mtime +7 -exec rm '{}' ';'
  • LOGFILE=/var/log/opensds/devsds.log.2018-05-23-090701
  • SUMFILE=/var/log/opensds/devsds.log.2018-05-23-090701.summary.2018-05-23-090701
  • exec
  • exec
    ++ /home/bill/gopath/src/github.com/opensds/opensds/script/devsds/tools/outfilter.py -v -o /var/log/opensds/devsds.log.2018-05-23-090701
    2018-05-23 13:07:01.838 | + exec
    2018-05-23 13:07:01.839 | ++ /home/bill/gopath/src/github.com/opensds/opensds/script/devsds/tools/outfilter.py -o /var/log/opensds/devsds.log.2018-05-23-090701.summary.2018-05-23-090701
    2018-05-23 13:07:01.839 | + osds::echo_summary 'install.sh log /var/log/opensds/devsds.log.2018-05-23-090701'
    2018-05-23 13:07:01.839 | + echo -e install.sh log /var/log/opensds/devsds.log.2018-05-23-090701
    2018-05-23 13:07:01.839 | + ln -sf /var/log/opensds/devsds.log.2018-05-23-090701 /var/log/opensds/devsds.log
    2018-05-23 13:07:01.839 | + ln -sf /var/log/opensds/devsds.log.2018-05-23-090701.summary.2018-05-23-090701 /var/log/opensds/devsds.log.summary
    2018-05-23 13:07:01.839 | + source /home/bill/gopath/src/github.com/opensds/opensds/script/devsds/lib/util.sh
    2018-05-23 13:07:01.839 | + source /home/bill/gopath/src/github.com/opensds/opensds/script/devsds/sdsrc
    2018-05-23 13:07:01.839 | ++ source /home/bill/gopath/src/github.com/opensds/opensds/script/devsds/local.conf
    2018-05-23 13:07:01.839 | +++ OPENSDS_AUTH_STRATEGY=noauth
    2018-05-23 13:07:01.839 | +++ OPENSDS_BACKEND_LIST=lvm
    2018-05-23 13:07:01.839 | ++ HOST_IP=
    2018-05-23 13:07:01.839 | +++ osds::util::get_default_host_ip '' inet
    2018-05-23 13:07:01.839 | +++ local host_ip=
    2018-05-23 13:07:01.839 | +++ local af=inet
    2018-05-23 13:07:01.839 | +++ '[' -z '' ']'
    2018-05-23 13:07:01.839 | +++ host_ip=
    2018-05-23 13:07:01.839 | ++++ ip -f inet route
    2018-05-23 13:07:01.839 | ++++ head -1
    2018-05-23 13:07:01.839 | ++++ awk '/default/ {print $5}'
    2018-05-23 13:07:01.839 | +++ host_ip_iface=ens32
    2018-05-23 13:07:01.839 | +++ local host_ips
    2018-05-23 13:07:01.839 | ++++ LC_ALL=C
    2018-05-23 13:07:01.839 | ++++ ip -f inet addr show ens32
    2018-05-23 13:07:01.839 | ++++ sed /temporary/d
    2018-05-23 13:07:01.839 | ++++ awk '/inet/ {split($2,parts,"/"); print parts[1]}'
    2018-05-23 13:07:01.844 | +++ host_ips=10.247.101.39
    2018-05-23 13:07:01.844 | +++ local ip
    2018-05-23 13:07:01.844 | +++ for ip in '$host_ips'
    2018-05-23 13:07:01.844 | +++ host_ip=10.247.101.39
    2018-05-23 13:07:01.844 | +++ break
    2018-05-23 13:07:01.844 | +++ echo 10.247.101.39
    2018-05-23 13:07:01.844 | ++ HOST_IP=10.247.101.39
    2018-05-23 13:07:01.844 | ++ '[' 10.247.101.39 == '' ']'
    2018-05-23 13:07:01.845 | ++ OPENSDS_VERSION=v1beta
    2018-05-23 13:07:01.845 | ++ OPENSDS_AUTH_STRATEGY=noauth
    2018-05-23 13:07:01.845 | ++ OPENSDS_SERVER_NAME=opensds
    2018-05-23 13:07:01.845 | ++ OPENSDS_BACKEND_LIST=lvm
    2018-05-23 13:07:01.845 | ++ STACK_GIT_BASE=https://git.openstack.org
    2018-05-23 13:07:01.845 | ++ STACK_USER_NAME=stack
    2018-05-23 13:07:01.845 | ++ STACK_PASSWORD=opensds@123
    2018-05-23 13:07:01.845 | ++ STACK_HOME=/opt/stack
    2018-05-23 13:07:01.845 | ++ STACK_BRANCH=stable/queens
    2018-05-23 13:07:01.845 | ++ DEV_STACK_DIR=/opt/stack/devstack
    2018-05-23 13:07:01.845 | ++ ETCD_VERSION=3.2.0
    2018-05-23 13:07:01.845 | ++ ETCD_HOST=10.247.101.39
    2018-05-23 13:07:01.845 | ++ ETCD_PORT=62379
    2018-05-23 13:07:01.846 | ++ ETCD_PEER_PORT=62380
    2018-05-23 13:07:01.846 | ++ ETCD_DIR=/opt/opensds/etcd
    2018-05-23 13:07:01.846 | ++ ETCD_LOGFILE=/opt/opensds/etcd/etcd.log
    2018-05-23 13:07:01.846 | ++ ETCD_DATADIR=/opt/opensds/etcd/data
    2018-05-23 13:07:01.846 | ++ OPENSDS_ENABLED_SERVICES=opensds,etcd
    2018-05-23 13:07:01.846 | ++ '[' noauth = keystone ']'
    2018-05-23 13:07:01.846 | ++ OPENSDS_ENABLED_SERVICES+=,lvm
    2018-05-23 13:07:01.846 | ++ SUPPORT_SERVICES=keystone,lvm,ceph,etcd,opensds
    2018-05-23 13:07:01.846 | + osds::backendlist_check lvm
    2018-05-23 13:07:01.846 | + local backendlist=lvm
    2018-05-23 13:07:01.847 | ++ echo lvm
    2018-05-23 13:07:01.847 | ++ tr , ' '
    2018-05-23 13:07:01.849 | + for backend in '$(echo $backendlist | tr "," " ")'
    2018-05-23 13:07:01.849 | + case $backend in
    2018-05-23 13:07:01.849 | + :
    2018-05-23 13:07:01.849 | + osds::util::serice_operation install
    2018-05-23 13:07:01.849 | + local action=install
    2018-05-23 13:07:01.850 | ++ echo keystone,lvm,ceph,etcd,opensds
    2018-05-23 13:07:01.850 | ++ tr , ' '
    2018-05-23 13:07:01.852 | + for service in '$(echo $SUPPORT_SERVICES|tr ''',''' ''' ''')'
    2018-05-23 13:07:01.852 | + osds::util::is_service_enabled keystone
    2018-05-23 13:07:01.856 | + return 1
    2018-05-23 13:07:01.856 | + for service in '$(echo $SUPPORT_SERVICES|tr ''',''' ''' ''')'
    2018-05-23 13:07:01.856 | + osds::util::is_service_enabled lvm
    2018-05-23 13:07:01.861 | + return 0
    2018-05-23 13:07:01.861 | + source /home/bill/gopath/src/github.com/opensds/opensds/script/devsds/lib/lvm.sh
    2018-05-23 13:07:01.869 | + osds::lvm::install
    2018-05-23 13:07:01.869 | + local vg=opensds-volumes-default
    2018-05-23 13:07:01.869 | + local size=20G
    2018-05-23 13:07:01.869 | + osds::lvm::pkg_install
    2018-05-23 13:07:01.869 | + sudo apt-get install -y lvm2 tgt open-iscsi
    2018-05-23 13:07:01.937 | Reading package lists...
    2018-05-23 13:07:02.300 | Building dependency tree...
    2018-05-23 13:07:02.300 | Reading state information...
    2018-05-23 13:07:02.492 | lvm2 is already the newest version (2.02.133-1ubuntu10).
    2018-05-23 13:07:02.492 | open-iscsi is already the newest version (2.0.873+git0.3b4b4500-14ubuntu3.4).
    2018-05-23 13:07:02.492 | The following additional packages will be installed:
    2018-05-23 13:07:02.494 | libconfig-general-perl libibverbs1 librdmacm1 libsgutils2-2 sg3-utils
    2018-05-23 13:07:02.496 | Suggested packages:
    2018-05-23 13:07:02.496 | tgt-rbd
    2018-05-23 13:07:02.497 | The following NEW packages will be installed:
    2018-05-23 13:07:02.498 | libconfig-general-perl libibverbs1 librdmacm1 libsgutils2-2 sg3-utils tgt
    2018-05-23 13:07:02.717 | 0 upgraded, 6 newly installed, 0 to remove and 56 not upgraded.
    2018-05-23 13:07:02.717 | Need to get 1,021 kB of archives.
    2018-05-23 13:07:02.717 | After this operation, 3,104 kB of additional disk space will be used.
    2018-05-23 13:07:02.717 | Get:1 http://us.archive.ubuntu.com/ubuntu xenial/main amd64 libconfig-general-perl all 2.60-1 [54.6 kB]
    2018-05-23 13:07:02.743 | Get:2 http://us.archive.ubuntu.com/ubuntu xenial/main amd64 libsgutils2-2 amd64 1.40-0ubuntu1 [56.0 kB]
    2018-05-23 13:07:03.017 | Get:3 http://us.archive.ubuntu.com/ubuntu xenial/main amd64 sg3-utils amd64 1.40-0ubuntu1 [632 kB]
    2018-05-23 13:07:03.239 | Get:4 http://us.archive.ubuntu.com/ubuntu xenial/main amd64 libibverbs1 amd64 1.1.8-1.1ubuntu2 [25.0 kB]
    2018-05-23 13:07:03.244 | Get:5 http://us.archive.ubuntu.com/ubuntu xenial/main amd64 librdmacm1 amd64 1.0.21-1 [49.1 kB]
    2018-05-23 13:07:03.253 | Get:6 http://us.archive.ubuntu.com/ubuntu xenial-updates/main amd64 tgt amd64 1:1.0.63-1ubuntu1.1 [204 kB]
    2018-05-23 13:07:03.614 | Fetched 1,021 kB in 0s (1,347 kB/s)
    2018-05-23 13:07:03.648 | Selecting previously unselected package libconfig-general-perl.
    (Reading database ... 143900 files and directories currently installed.)
    2018-05-23 13:07:03.736 | Preparing to unpack .../libconfig-general-perl_2.60-1_all.deb ...
    2018-05-23 13:07:03.740 | Unpacking libconfig-general-perl (2.60-1) ...
    2018-05-23 13:07:03.905 | Selecting previously unselected package libsgutils2-2.
    2018-05-23 13:07:03.926 | Preparing to unpack .../libsgutils2-2_1.40-0ubuntu1_amd64.deb ...
    2018-05-23 13:07:03.930 | Unpacking libsgutils2-2 (1.40-0ubuntu1) ...
    2018-05-23 13:07:04.038 | Selecting previously unselected package sg3-utils.
    2018-05-23 13:07:04.058 | Preparing to unpack .../sg3-utils_1.40-0ubuntu1_amd64.deb ...
    2018-05-23 13:07:04.062 | Unpacking sg3-utils (1.40-0ubuntu1) ...
    2018-05-23 13:07:04.225 | Selecting previously unselected package libibverbs1.
    2018-05-23 13:07:04.243 | Preparing to unpack .../libibverbs1_1.1.8-1.1ubuntu2_amd64.deb ...
    2018-05-23 13:07:04.251 | Unpacking libibverbs1 (1.1.8-1.1ubuntu2) ...
    2018-05-23 13:07:04.319 | Selecting previously unselected package librdmacm1.
    2018-05-23 13:07:04.336 | Preparing to unpack .../librdmacm1_1.0.21-1_amd64.deb ...
    2018-05-23 13:07:04.340 | Unpacking librdmacm1 (1.0.21-1) ...
    2018-05-23 13:07:04.402 | Selecting previously unselected package tgt.
    2018-05-23 13:07:04.419 | Preparing to unpack .../tgt_1%3a1.0.63-1ubuntu1.1_amd64.deb ...
    2018-05-23 13:07:04.425 | Unpacking tgt (1:1.0.63-1ubuntu1.1) ...
    2018-05-23 13:07:04.512 | Processing triggers for man-db (2.7.5-1) ...
    2018-05-23 13:07:06.142 | Processing triggers for libc-bin (2.23-0ubuntu10) ...
    2018-05-23 13:07:06.176 | Processing triggers for systemd (229-4ubuntu21.1) ...
    2018-05-23 13:07:06.650 | Processing triggers for ureadahead (0.100.0-19) ...
    2018-05-23 13:07:06.893 | Setting up libconfig-general-perl (2.60-1) ...
    2018-05-23 13:07:06.906 | Setting up libsgutils2-2 (1.40-0ubuntu1) ...
    2018-05-23 13:07:06.932 | Setting up sg3-utils (1.40-0ubuntu1) ...
    2018-05-23 13:07:06.945 | Setting up libibverbs1 (1.1.8-1.1ubuntu2) ...
    2018-05-23 13:07:06.956 | Setting up librdmacm1 (1.0.21-1) ...
    2018-05-23 13:07:06.973 | Setting up tgt (1:1.0.63-1ubuntu1.1) ...
    2018-05-23 13:07:07.733 | Processing triggers for libc-bin (2.23-0ubuntu10) ...
    2018-05-23 13:07:07.757 | Processing triggers for systemd (229-4ubuntu21.1) ...
    2018-05-23 13:07:07.870 | Processing triggers for ureadahead (0.100.0-19) ...
    2018-05-23 13:07:09.785 | + osds::lvm::create_volume_group opensds-volumes-default 20G
    2018-05-23 13:07:09.785 | + local vg=opensds-volumes-default
    2018-05-23 13:07:09.785 | + local size=20G
    2018-05-23 13:07:09.785 | + local backing_file=/opt/opensds/lvm/opensds-volumes-default-backing-file
    2018-05-23 13:07:09.786 | + sudo vgs opensds-volumes-default
    2018-05-23 13:07:09.801 | Volume group "opensds-volumes-default" not found
    2018-05-23 13:07:09.801 | Cannot process volume group opensds-volumes-default
    2018-05-23 13:07:09.802 | + [[ -f /opt/opensds/lvm/opensds-volumes-default-backing-file ]]
    2018-05-23 13:07:09.802 | + truncate -s 20G /opt/opensds/lvm/opensds-volumes-default-backing-file
    2018-05-23 13:07:09.832 | + local vg_dev
    2018-05-23 13:07:09.832 | ++ sudo losetup -f --show /opt/opensds/lvm/opensds-volumes-default-backing-file
    2018-05-23 13:07:09.889 | + vg_dev=/dev/loop0
    2018-05-23 13:07:09.889 | + sudo vgs opensds-volumes-default
    2018-05-23 13:07:09.903 | Volume group "opensds-volumes-default" not found
    2018-05-23 13:07:09.903 | Cannot process volume group opensds-volumes-default
    2018-05-23 13:07:09.904 | + sudo vgcreate opensds-volumes-default /dev/loop0
    2018-05-23 13:07:09.985 | Physical volume "/dev/loop0" successfully created
    2018-05-23 13:07:09.990 | Volume group "opensds-volumes-default" successfully created
    2018-05-23 13:07:09.992 | + sudo tgtadm --op show --mode target
    2018-05-23 13:07:09.992 | + awk '/Target/ {print $3}'
    2018-05-23 13:07:09.998 | + sudo xargs -r -n1 tgt-admin --delete
    2018-05-23 13:07:10.006 | + osds::lvm::remove_volumes opensds-volumes-default
    2018-05-23 13:07:10.006 | + local vg=opensds-volumes-default
    2018-05-23 13:07:10.006 | + sudo lvremove -f opensds-volumes-default
    2018-05-23 13:07:10.021 | + osds::lvm::set_configuration
    2018-05-23 13:07:10.021 | + cat
    2018-05-23 13:07:10.024 | + cat
    2018-05-23 13:07:10.026 | + osds::lvm::set_lvm_filter
    2018-05-23 13:07:10.026 | + local 'filter_suffix="r|.*|" ] # from devsds'
    2018-05-23 13:07:10.026 | + local 'filter_string=global_filter = [ '
    2018-05-23 13:07:10.026 | + local pv
    2018-05-23 13:07:10.026 | + local vg
    2018-05-23 13:07:10.026 | + local line
    2018-05-23 13:07:10.027 | ++ sudo pvs --noheadings -o name
    2018-05-23 13:07:10.277 | + for pv_info in '$(sudo pvs --noheadings -o name)'
    2018-05-23 13:07:10.278 | ++ echo -e /dev/loop0
    2018-05-23 13:07:10.279 | ++ sed 's/\/dev\///g'
    2018-05-23 13:07:10.279 | ++ sed 's/ //g'
    2018-05-23 13:07:10.283 | + pv=loop0
    2018-05-23 13:07:10.283 | + new='"a|loop0|", '
    2018-05-23 13:07:10.283 | + filter_string='global_filter = [ "a|loop0|", '
    2018-05-23 13:07:10.284 | + for pv_info in '$(sudo pvs --noheadings -o name)'
    2018-05-23 13:07:10.285 | ++ echo -e /dev/sda5
    2018-05-23 13:07:10.285 | ++ sed 's/ //g'
    2018-05-23 13:07:10.285 | ++ sed 's/\/dev\///g'
    2018-05-23 13:07:10.290 | + pv=sda5
    2018-05-23 13:07:10.290 | + new='"a|sda5|", '
    2018-05-23 13:07:10.290 | + filter_string='global_filter = [ "a|loop0|", "a|sda5|", '
    2018-05-23 13:07:10.290 | + filter_string='global_filter = [ "a|loop0|", "a|sda5|", "r|.*|" ] # from devsds'
    2018-05-23 13:07:10.290 | + osds::lvm::clean_lvm_filter
    2018-05-23 13:07:10.290 | + sudo sed -i 's/^.*# from devsds$//' /etc/lvm/lvm.conf
    2018-05-23 13:07:10.312 | + sudo sed -i '/# global_filter = [.*]/a\ global_filter = [ "a|loop0|", "a|sda5|", "r|.*|" ] # from devsds' /etc/lvm/lvm.conf
    2018-05-23 13:07:10.339 | + osds::echo_summary 'set lvm.conf device global_filter to: global_filter = [ "a|loop0|", "a|sda5|", "r|.*|" ] # from devsds'
    2018-05-23 13:07:10.339 | + echo -e set lvm.conf device global_filter to: global_filter = '[' '"a|loop0|",' '"a|sda5|",' '"r|.*|"' ']' '#' from devsds
    2018-05-23 13:07:10.339 | + for service in '$(echo $SUPPORT_SERVICES|tr ''',''' ''' ''')'
    2018-05-23 13:07:10.339 | + osds::util::is_service_enabled ceph
    2018-05-23 13:07:10.345 | + return 1
    2018-05-23 13:07:10.345 | + for service in '$(echo $SUPPORT_SERVICES|tr ''',''' ''' ''')'
    2018-05-23 13:07:10.345 | + osds::util::is_service_enabled etcd
    2018-05-23 13:07:10.350 | + return 0
    2018-05-23 13:07:10.350 | + source /home/bill/gopath/src/github.com/opensds/opensds/script/devsds/lib/etcd.sh
    2018-05-23 13:07:10.355 | + osds::etcd::install
    2018-05-23 13:07:10.355 | + which etcd
    2018-05-23 13:07:10.358 | + osds::etcd::download
    2018-05-23 13:07:10.358 | + cd /opt/opensds
    2018-05-23 13:07:10.359 | + url=https://github.com/coreos/etcd/releases/download/v3.2.0/etcd-v3.2.0-linux-amd64.tar.gz
    2018-05-23 13:07:10.359 | + download_file=etcd-v3.2.0-linux-amd64.tar.gz
    2018-05-23 13:07:10.359 | + osds::util::download_file https://github.com/coreos/etcd/releases/download/v3.2.0/etcd-v3.2.0-linux-amd64.tar.gz etcd-v3.2.0-linux-amd64.tar.gz
    2018-05-23 13:07:10.359 | + local -r url=https://github.com/coreos/etcd/releases/download/v3.2.0/etcd-v3.2.0-linux-amd64.tar.gz
    2018-05-23 13:07:10.359 | + local -r destination_file=etcd-v3.2.0-linux-amd64.tar.gz
    2018-05-23 13:07:10.359 | + rm etcd-v3.2.0-linux-amd64.tar.gz 2
    2018-05-23 13:07:10.361 | + true
    2018-05-23 13:07:10.362 | ++ seq 5
    2018-05-23 13:07:10.363 | + for i in '$(seq 5)'
    2018-05-23 13:07:10.363 | + curl -fsSL --retry 3 --keepalive-time 2 https://github.com/coreos/etcd/releases/download/v3.2.0/etcd-v3.2.0-linux-amd64.tar.gz -o etcd-v3.2.0-linux-amd64.tar.gz
    2018-05-23 13:07:15.626 | + echo 'Downloading https://github.com/coreos/etcd/releases/download/v3.2.0/etcd-v3.2.0-linux-amd64.tar.gz succeed'
    2018-05-23 13:07:15.627 | Downloading https://github.com/coreos/etcd/releases/download/v3.2.0/etcd-v3.2.0-linux-amd64.tar.gz succeed
    2018-05-23 13:07:15.627 | + return 0
    2018-05-23 13:07:15.627 | + tar xzf etcd-v3.2.0-linux-amd64.tar.gz
    2018-05-23 13:07:15.986 | + cp etcd-v3.2.0-linux-amd64/etcd bin
    2018-05-23 13:07:16.008 | + cp etcd-v3.2.0-linux-amd64/etcdctl bin
    2018-05-23 13:07:16.026 | + mkdir -p /opt/opensds/etcd
    2018-05-23 13:07:16.028 | + echo 15985
    2018-05-23 13:07:16.028 | + osds::echo_summary 'Waiting for etcd to come up.'
    2018-05-23 13:07:16.028 | + echo -e Waiting for etcd to come up.
    2018-05-23 13:07:16.028 | + osds::util::wait_for_url http://10.247.101.39:62379/v2/machines 'etcd: ' 0.25 80
    2018-05-23 13:07:16.028 | + local url=http://10.247.101.39:62379/v2/machines
    2018-05-23 13:07:16.028 | + local 'prefix=etcd: '
    2018-05-23 13:07:16.028 | + nohup etcd --advertise-client-urls http://10.247.101.39:62379 --listen-client-urls http://10.247.101.39:62379 --listen-peer-urls http://10.247.101.39:62380 --data-dir /opt/opensds/etcd/data --debug
    2018-05-23 13:07:16.028 | + local wait=0.25
    2018-05-23 13:07:16.028 | + local times=80
    2018-05-23 13:07:16.028 | + which curl
    2018-05-23 13:07:16.029 | + local i
    2018-05-23 13:07:16.030 | ++ seq 1 80
    2018-05-23 13:07:16.031 | + for i in '$(seq 1 $times)'
    2018-05-23 13:07:16.031 | + local out
    2018-05-23 13:07:16.032 | ++ curl --max-time 1 -gkfs http://10.247.101.39:62379/v2/machines
    2018-05-23 13:07:16.043 | + out=
    2018-05-23 13:07:16.043 | + sleep 0.25
    2018-05-23 13:07:16.295 | + for i in '$(seq 1 $times)'
    2018-05-23 13:07:16.295 | + local out
    2018-05-23 13:07:16.296 | ++ curl --max-time 1 -gkfs http://10.247.101.39:62379/v2/machines
    2018-05-23 13:07:16.892 | + out=http://10.247.101.39:62379
    2018-05-23 13:07:16.892 | + osds::echo_summary 'On try 2, etcd: : http://10.247.101.39:62379'
    2018-05-23 13:07:16.892 | + echo -e On try 2, etcd: : http://10.247.101.39:62379
    2018-05-23 13:07:16.892 | + return 0
    2018-05-23 13:07:16.892 | + curl -fs -X PUT http://10.247.101.39:62379/v2/keys/_test
    2018-05-23 13:07:16.900 | {"action":"set","node":{"key":"/_test","value":"","modifiedIndex":4,"createdIndex":4}}
    2018-05-23 13:07:16.900 | + for service in '$(echo $SUPPORT_SERVICES|tr ''',''' ''' ''')'
    2018-05-23 13:07:16.900 | + osds::util::is_service_enabled opensds
    2018-05-23 13:07:16.904 | + return 0
    2018-05-23 13:07:16.904 | + source /home/bill/gopath/src/github.com/opensds/opensds/script/devsds/lib/opensds.sh
    2018-05-23 13:07:16.908 | + osds::opensds::install
    2018-05-23 13:07:16.909 | + osds:opensds:configuration
    2018-05-23 13:07:16.909 | + cat
    2018-05-23 13:07:16.911 | + cd /home/bill/gopath/src/github.com/opensds/opensds
    2018-05-23 13:07:16.911 | + sudo build/out/bin/osdslet --daemon --alsologtostderr
    2018-05-23 13:07:16.932 | build/out/bin/osdslet [PID] 16021 running...
    2018-05-23 13:07:16.934 | + sudo build/out/bin/osdsdock --daemon --alsologtostderr
    2018-05-23 13:07:16.960 | build/out/bin/osdsdock [PID] 16039 running...
    2018-05-23 13:07:16.962 | + osds::echo_summary 'Waiting for osdslet to come up.'
    2018-05-23 13:07:16.962 | + echo -e Waiting for osdslet to come up.
    2018-05-23 13:07:16.963 | + osds::util::wait_for_url localhost:50040 osdslet 0.25 80
    2018-05-23 13:07:16.963 | + local url=localhost:50040
    2018-05-23 13:07:16.963 | + local prefix=osdslet
    2018-05-23 13:07:16.963 | + local wait=0.25
    2018-05-23 13:07:16.963 | + local times=80
    2018-05-23 13:07:16.963 | + which curl
    2018-05-23 13:07:16.965 | + local i
    2018-05-23 13:07:16.965 | ++ seq 1 80
    2018-05-23 13:07:16.966 | + for i in '$(seq 1 $times)'
    2018-05-23 13:07:16.967 | + local out
    2018-05-23 13:07:16.967 | ++ curl --max-time 1 -gkfs localhost:50040
    2018-05-23 13:07:16.982 | + out='[{"description":"v1beta version","name":"v1beta","status":"CURRENT","updatedAt":"2017-07-10T14:36:58.014Z"}]'
    2018-05-23 13:07:16.982 | + osds::echo_summary 'On try 1, osdslet: [{"description":"v1beta version","name":"v1beta","status":"CURRENT","updatedAt":"2017-07-10T14:36:58.014Z"}]'
    2018-05-23 13:07:16.982 | + echo -e On try 1, osdslet: '[{"description":"v1beta' 'version","name":"v1beta","status":"CURRENT","updatedAt":"2017-07-10T14:36:58.014Z"}]'
    2018-05-23 13:07:16.982 | + return 0
    2018-05-23 13:07:16.982 | + '[' noauth == keystone ']'
    2018-05-23 13:07:16.983 | + export OPENSDS_AUTH_STRATEGY=noauth
    2018-05-23 13:07:16.983 | + OPENSDS_AUTH_STRATEGY=noauth
    2018-05-23 13:07:16.983 | + export OPENSDS_ENDPOINT=http://localhost:50040
    2018-05-23 13:07:16.983 | + OPENSDS_ENDPOINT=http://localhost:50040
    2018-05-23 13:07:16.983 | + build/out/bin/osdsctl profile create '{"name": "default", "description": "default policy"}'
    2018-05-23 13:07:16.994 | +-------------+--------------------------------------+
    2018-05-23 13:07:16.995 | | Property | Value |
    2018-05-23 13:07:16.995 | +-------------+--------------------------------------+
    2018-05-23 13:07:16.995 | | Id | 32d1c25d-f1c0-4f4c-b036-94970547b91f |
    2018-05-23 13:07:16.995 | | CreatedAt | 2018-05-23T09:07:16 |
    2018-05-23 13:07:16.995 | | UpdatedAt | |
    2018-05-23 13:07:16.995 | | Name | default |
    2018-05-23 13:07:16.995 | | Description | default policy |
    2018-05-23 13:07:16.995 | | Extras | null |
    2018-05-23 13:07:16.995 | +-------------+--------------------------------------+
    2018-05-23 13:07:16.995 | + cp /home/bill/gopath/src/github.com/opensds/opensds/osdsctl/completion/osdsctl.bash_completion /etc/bash_completion.d/
    2018-05-23 13:07:16.997 | + '[' 0 == 0 ']'
    2018-05-23 13:07:16.998 | + osds::echo_summary devsds installed successfully '!!'
    2018-05-23 13:07:16.998 | + echo -e devsds installed successfully '!!'

root@lglw1039:~/gopath/src/github.com/opensds/opensds#

Execute the commands below to set up the ENVs needed by the OpenSDS CLI:

export OPENSDS_AUTH_STRATEGY=noauth
export OPENSDS_ENDPOINT=http://localhost:50040

Enjoy it !!

root@lglw1039:~/gopath/src/github.com/opensds/opensds# cp build/out/bin/osdsctl /usr/local/bin
root@lglw1039:~/gopath/src/github.com/opensds/opensds# export OPENSDS_ENDPOINT=http://127.0.0.1:50040
root@lglw1039:~/gopath/src/github.com/opensds/opensds# export OPENSDS_AUTH_STRATEGY=noauth
root@lglw1039:~/gopath/src/github.com/opensds/opensds# source /opt/stack/devstack/openrc admin admin
bash: /opt/stack/devstack/openrc: No such file or directory

What you expected to happen:

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

  • Hotpot(release/branch) version:
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

Ubuntu 16.04

Full lifecycle management of the project

Motivation

As we know, the LCM (life-cycle management) of a project SHOULD cover the following stages: design, coding, building, testing, deployment and releasing. By that measure, the opensds controller project is far from production-ready: we are still doing poorly in four of these stages, namely building, testing, deployment and releasing.

What should we do?

Building

As for building, we currently use the go build tool to manage the whole project. Although it works well now, IMO more problems will appear as the project grows.
So I suggest we add more build options (make, shell scripts and so forth) for scalability and availability.
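
As a sketch of what such an option could look like, a small wrapper could drive `go build` per command. The binary names below (osdslet, osdsdock, osdsctl) come from the devsds logs earlier on this page; the wrapper itself and its layout are hypothetical:

```shell
#!/usr/bin/env bash
# Hypothetical build wrapper sketch; only the binary names are taken from
# the project, everything else is illustrative.
set -eu

OUT=${OUT:-build/out/bin}

# Print the go build invocations this wrapper would run (dry run).
build_cmds() {
  local cmd
  for cmd in osdslet osdsdock osdsctl; do
    echo "go build -o ${OUT}/${cmd} ./cmd/${cmd}"
  done
}

build_cmds
```

Printing rather than executing keeps the sketch runnable without a Go toolchain; a real Makefile target would execute these lines instead of echoing them.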

Testing

A production-grade testing framework should contain unit, integration and e2e tests. Right now we are working on unit tests, but we must not forget the other two. Considering how many mature testing frameworks exist, it would be a good choice to build our integration and e2e tests on a popular open-source project (ginkgo etc.).

Deployment

With the development of IaaS and PaaS, automatic deployment and installation have become more and more important in distributed system clusters. As a cloud-native storage system, the opensds controller project MUST support at least one of the automatic deployment methods below:

  • Scripts
  • Automatic tools (such as ansible)
  • Containerization (docker, helm, bosh etc)

Releasing

To be ready for productization, publishing a release is necessary but complex work. Here are some references from other popular projects (service catalog, osba) for better understanding.

Lastly, please note that these are just some initial thoughts from my side about what a formal project should look like, so more suggestions or comments are welcome.

lvm driver bug

Is this a BUG REPORT or FEATURE REQUEST?:

Uncomment only one, leave it on its own line:
/kind bug

What happened:

In the opensds framework we distinguish volumes by uuid, so different volumes can have the same name.
But the LVM driver does not support this feature: it returns an error if you create two volumes with the same name.
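
A minimal sketch of one possible fix (the helper name and naming scheme are illustrative, not the driver's actual code): key the backing LV name on the immutable volume UUID rather than the user-supplied display name, so duplicate names can never collide on the volume group:

```shell
# Hypothetical naming helper: the LV is named after the volume UUID; the
# user-facing name would live only in the opensds db.
lv_name() {
  local uuid=$1
  echo "volume-${uuid}"
}

# A create path would then run something like:
#   lvcreate -n "$(lv_name "$uuid")" -L "${size}G" "$vg"
# Two volumes both named "vol1" get distinct LVs because their UUIDs differ.
```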

What you expected to happen:

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

  • Hotpot(release/branch) version:
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

Update OpenSDS Source Code

In the latest OpenSDS version, we standardized the OpenSDS SouthBound Interface and improved the functionality of the system. We have now finished the OpenSDS integration with Cinder, and the show and list methods for volumes work.

Add authentication and request filter in osdsctl and client package

Is this a BUG REPORT or FEATURE REQUEST?:
/kind feature

What happened:
According to the keystone design (#289) and API request filter support (#284), the opensds api-server now supports authentication and request filtering, but the client and cli tool still need to support these features as well.

What you expected to happen:
Add keystone authentication and API request filter support to the cli and client tool.
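
One way the cli side could look, as a sketch only: a small helper that attaches the keystone token to every API request. The endpoint path and variable names are illustrative; keystone conventionally carries the token in an `X-Auth-Token` header:

```shell
# Hypothetical request helper for osdsctl-style scripts. TOKEN would come
# from keystone (e.g. via `openstack token issue`); the path is illustrative.
api_cmd() {
  local path=$1
  # Echo the request this helper would issue (dry run).
  echo curl -fsS -H "X-Auth-Token: ${TOKEN}" "${OPENSDS_ENDPOINT}/${path}"
}
```

Echoing instead of executing keeps the sketch testable offline; a real client would run the curl call (or set the header on its HTTP transport) rather than print it.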

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

  • Hotpot(release/branch) version:
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

Need a real etcd cluster for integration test

Is this a BUG REPORT or FEATURE REQUEST?:
/kind bug

What happened:
In the integration test we used a mocked db to simplify the configuration, but it should be removed because we need to guarantee the stability of the system in a real environment.

What you expected to happen:
Replace the mocked db with a real (standalone) etcd cluster, add an etcd initializing script to prepare.sh, and add some db tests to the test code.
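
A sketch of what the prepare.sh addition could look like. The ports and data dir mirror the devsds log earlier on this page but are otherwise illustrative; the retry helper is a generic pattern, not existing project code:

```shell
# Generic retry helper: run a command up to $1 times, sleeping $2 between tries.
wait_for() {
  local times=$1 wait=$2
  shift 2
  local i
  for i in $(seq 1 "$times"); do
    "$@" && return 0
    sleep "$wait"
  done
  return 1
}

# Hypothetical prepare.sh fragment: start a standalone etcd for the
# integration run and block until its client endpoint answers.
start_test_etcd() {
  nohup etcd --data-dir /tmp/opensds-it-etcd \
    --listen-client-urls http://127.0.0.1:62379 \
    --advertise-client-urls http://127.0.0.1:62379 >/dev/null 2>&1 &
  wait_for 80 0.25 curl -fs http://127.0.0.1:62379/v2/machines >/dev/null
}
```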

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

  • Hotpot(release/branch) version:
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

Can not install via bootstrap.sh script: 404 not found

Is this a BUG REPORT or FEATURE REQUEST?:

Uncomment only one, leave it on its own line:

/kind bug

/kind feature

What happened:

I tried to install via Installation README located at:
https://github.com/opensds/opensds/wiki/OpenSDS-Local-Cluster-with-Multi-tenants-Installation

Attempting to run the command provided yielded an error.

curl -sSL https://raw.githubusercontent.com/opensds/opensds/master/script/cluster/bootstrap.sh | sudo bash

bash: line 1: 404:: command not found
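
The `404:: command not found` message is the tell: without curl's `-f` flag, GitHub's `404: Not Found` error body gets piped straight into bash, which then tries to run `404:` as a command (the script has likely moved or the URL is stale). A safer invocation, sketched below, downloads to a file first and makes curl fail on HTTP errors:

```shell
# fetch: download a URL, exiting non-zero on HTTP errors (-f) instead of
# saving the error page to disk.
fetch() {
  curl -fsSL "$1" -o "$2"
}

# Usage sketch (run the script from a file rather than piping into bash);
# the URL is the one from the report above:
#   fetch https://raw.githubusercontent.com/opensds/opensds/master/script/cluster/bootstrap.sh /tmp/bootstrap.sh \
#     && sudo bash /tmp/bootstrap.sh
```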

What you expected to happen:

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

  • Hotpot(release/branch) version:
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

ubuntu 16.x

Missing standard Command Line Interface in OpenSDS

Currently the OpenSDS CLI is just a simple string parser. To make it more convenient to use, we should develop the OpenSDS CLI into a well-accepted, full-featured tool, just like those of OpenStack and Kubernetes (kubectl).

Containerized osdsdock service doesn't work when calling ceph backend

Is this a BUG REPORT or FEATURE REQUEST?:

/kind bug

What happened:
When I tested the opensds cluster using the docker-compose tool, it failed when calling the ListPools method in the ceph driver, and then the osdsdock service crashed because of some unexpected errors:

root@ubuntu:~# docker logs 25745df796a0
E0209 02:45:35.278842       1 ceph.go:114] exit status 1
E0209 02:45:35.279239       1 ceph.go:434] [Error]:exit status 1
E0209 02:45:35.279317       1 ceph.go:506] exit status 1
E0209 02:45:35.279360       1 discovery.go:90] Call driver to list pools failed:exit status 1
panic: exit status 1

goroutine 1 [running]:
main.main()
	/root/gopath/src/github.com/opensds/opensds/cmd/osdsdock/osdsdock.go:54 +0x1a0

What you expected to happen:
Get this bug fixed.

How to reproduce it (as minimally and precisely as possible):

  • Install ceph cluster
  • Configure opensds variable
  • Start containerized opensds service

Anything else we need to know?:

Environment:

  • Hotpot(release/branch) version: master
  • OS (e.g. from /etc/os-release): Ubuntu 16.04
  • Kernel (e.g. uname -a):
  • Install tools: docker-compose
  • Others:

Add osdsctl bash completion feature

Is this a BUG REPORT or FEATURE REQUEST?:

Uncomment only one, leave it on its own line:
/kind feature

What happened:
The opensds command line tool does not support bash completion, which is not very friendly for users.

What you expected to happen:
Add a bash completion feature.

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

  • Hotpot(release/branch) version:
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

Installation of OpenSDS through docker-compose hangs

Is this a BUG REPORT or FEATURE REQUEST?:

Uncomment only one, leave it on its own line:

/kind bug

/kind feature

What happened:

I was following steps in the opensds installation guide for using docker-compose to install.
It seems to work but keeps timing out. I'm not sure whether it really started up and docker-compose just didn't finish, or whether it's really waiting for something that isn't happening. Let me know if there's anything else I can provide for more detail. Thanks.

sudo docker-compose up
bill_osdsdb_1 is up-to-date
bill_osdslet_1 is up-to-date
bill_osdsdock_1 is up-to-date
Attaching to bill_osdsdb_1, bill_osdslet_1, bill_osdsdock_1
osdsdb_1 | 2018-05-22 19:16:00.598449 I | etcdmain: etcd Version: 3.3.5
osdsdb_1 | 2018-05-22 19:16:00.601260 I | etcdmain: Git SHA: 70c872620
osdsdb_1 | 2018-05-22 19:16:00.601263 I | etcdmain: Go Version: go1.9.6
osdslet_1 | I0522 19:16:00.831572 1 auth.go:49] noauth
osdslet_1 | I0522 19:16:00.831655 1 auth.go:58] &{}
osdslet_1 | 2018/05/22 19:16:00.834 [I] http server Running on http://0.0.0.0:50040
osdsdb_1 | 2018-05-22 19:16:00.601264 I | etcdmain: Go OS/Arch: linux/amd64
osdsdb_1 | 2018-05-22 19:16:00.601267 I | etcdmain: setting maximum number of CPUs to 1, total number of available CPUs is 1
osdsdb_1 | 2018-05-22 19:16:00.601273 W | etcdmain: no data-dir provided, using default data-dir ./default.etcd
osdsdb_1 | 2018-05-22 19:16:00.603388 I | embed: listening for peers on http://localhost:2380
osdsdock_1 | I0522 12:16:00.925956 7 server.go:81] Dock server initialized! Start listening on port:[::]:50050
osdsdock_1 | I0522 12:16:00.929136 7 discovery.go:152] Backend default discovered pool sample-pool-01
osdsdock_1 | I0522 12:16:00.929197 7 discovery.go:152] Backend default discovered pool sample-pool-02
osdsdb_1 | 2018-05-22 19:16:00.603416 I | embed: listening for client requests on localhost:2379
osdsdb_1 | 2018-05-22 19:16:00.605182 I | etcdserver: name = default
osdsdb_1 | 2018-05-22 19:16:00.605191 I | etcdserver: data dir = default.etcd
osdsdb_1 | 2018-05-22 19:16:00.605194 I | etcdserver: member dir = default.etcd/member
osdsdb_1 | 2018-05-22 19:16:00.605196 I | etcdserver: heartbeat = 100ms
osdsdb_1 | 2018-05-22 19:16:00.605198 I | etcdserver: election = 1000ms
osdsdb_1 | 2018-05-22 19:16:00.605200 I | etcdserver: snapshot count = 100000
osdsdock_1 | I0522 12:17:00.935296 7 discovery.go:152] Backend default discovered pool sample-pool-01
osdsdock_1 | I0522 12:17:00.935500 7 discovery.go:152] Backend default discovered pool sample-pool-02
osdsdock_1 | I0522 12:18:00.939719 7 discovery.go:152] Backend default discovered pool sample-pool-01
osdsdock_1 | I0522 12:18:00.939811 7 discovery.go:152] Backend default discovered pool sample-pool-02
osdsdock_1 | I0522 12:19:00.947580 7 discovery.go:152] Backend default discovered pool sample-pool-01
osdsdb_1 | 2018-05-22 19:16:00.605205 I | etcdserver: advertise client URLs = http://localhost:2379
osdsdb_1 | 2018-05-22 19:16:00.605207 I | etcdserver: initial advertise peer URLs = http://localhost:2380
osdsdb_1 | 2018-05-22 19:16:00.605211 I | etcdserver: initial cluster = default=http://localhost:2380
osdsdock_1 | I0522 12:19:00.947895 7 discovery.go:152] Backend default discovered pool sample-pool-02
osdsdock_1 | I0522 12:20:00.957388 7 discovery.go:152] Backend default discovered pool sample-pool-01
osdsdock_1 | I0522 12:20:00.957818 7 discovery.go:152] Backend default discovered pool sample-pool-02
osdsdb_1 | 2018-05-22 19:16:00.608813 I | etcdserver: starting member 8e9e05c52164694d in cluster cdf818194e3a8c32
osdsdb_1 | 2018-05-22 19:16:00.608830 I | raft: 8e9e05c52164694d became follower at term 0
osdsdb_1 | 2018-05-22 19:16:00.608836 I | raft: newRaft 8e9e05c52164694d [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
osdsdock_1 | I0522 12:21:00.979851 7 discovery.go:152] Backend default discovered pool sample-pool-01
osdsdock_1 | I0522 12:21:00.979946 7 discovery.go:152] Backend default discovered pool sample-pool-02
osdsdock_1 | I0522 12:22:00.981566 7 discovery.go:152] Backend default discovered pool sample-pool-01
osdsdb_1 | 2018-05-22 19:16:00.608838 I | raft: 8e9e05c52164694d became follower at term 1
osdsdb_1 | 2018-05-22 19:16:00.612974 W | auth: simple token is not cryptographically signed
osdsdb_1 | 2018-05-22 19:16:00.613485 I | etcdserver: starting server... [version: 3.3.5, cluster version: to_be_decided]
osdsdock_1 | I0522 12:22:00.981645 7 discovery.go:152] Backend default discovered pool sample-pool-02
osdsdock_1 | I0522 12:23:00.983995 7 discovery.go:152] Backend default discovered pool sample-pool-01
osdsdock_1 | I0522 12:23:00.984080 7 discovery.go:152] Backend default discovered pool sample-pool-02
osdsdock_1 | I0522 12:24:00.986451 7 discovery.go:152] Backend default discovered pool sample-pool-01
osdsdb_1 | 2018-05-22 19:16:00.616458 I | etcdserver: 8e9e05c52164694d as single-node; fast-forwarding 9 ticks (election ticks 10)
osdsdb_1 | 2018-05-22 19:16:00.618103 I | etcdserver/membership: added member 8e9e05c52164694d [http://localhost:2380] to cluster cdf818194e3a8c32
osdsdb_1 | 2018-05-22 19:16:00.809034 I | raft: 8e9e05c52164694d is starting a new election at term 1
osdsdock_1 | I0522 12:24:00.986473 7 discovery.go:152] Backend default discovered pool sample-pool-02
osdsdock_1 | I0522 12:25:00.989864 7 discovery.go:152] Backend default discovered pool sample-pool-01
osdsdock_1 | I0522 12:25:00.990066 7 discovery.go:152] Backend default discovered pool sample-pool-02
osdsdock_1 | I0522 12:26:00.993738 7 discovery.go:152] Backend default discovered pool sample-pool-01
osdsdb_1 | 2018-05-22 19:16:00.809117 I | raft: 8e9e05c52164694d became candidate at term 2
osdsdb_1 | 2018-05-22 19:16:00.809144 I | raft: 8e9e05c52164694d received MsgVoteResp from 8e9e05c52164694d at term 2
osdsdb_1 | 2018-05-22 19:16:00.809167 I | raft: 8e9e05c52164694d became leader at term 2
osdsdock_1 | I0522 12:26:00.993867 7 discovery.go:152] Backend default discovered pool sample-pool-02
osdsdock_1 | I0522 12:27:00.997656 7 discovery.go:152] Backend default discovered pool sample-pool-01
osdsdock_1 | I0522 12:27:00.997714 7 discovery.go:152] Backend default discovered pool sample-pool-02
osdsdock_1 | I0522 12:28:01.000436 7 discovery.go:152] Backend default discovered pool sample-pool-01
osdsdb_1 | 2018-05-22 19:16:00.809183 I | raft: raft.node: 8e9e05c52164694d elected leader 8e9e05c52164694d at term 2
osdsdb_1 | 2018-05-22 19:16:00.809499 I | etcdserver: published {Name:default ClientURLs:[http://localhost:2379]} to cluster cdf818194e3a8c32
osdsdb_1 | 2018-05-22 19:16:00.809585 I | etcdserver: setting up the initial cluster version to 3.3
osdsdb_1 | 2018-05-22 19:16:00.809758 I | embed: ready to serve client requests
osdsdb_1 | 2018-05-22 19:16:00.811977 N | embed: serving insecure client requests on 127.0.0.1:2379, this is strongly discouraged!
osdsdb_1 | 2018-05-22 19:16:00.813688 N | etcdserver/membership: set the initial cluster version to 3.3
osdsdock_1 | I0522 12:28:01.000520 7 discovery.go:152] Backend default discovered pool sample-pool-02
osdsdock_1 | I0522 12:29:01.005855 7 discovery.go:152] Backend default discovered pool sample-pool-01
osdsdock_1 | I0522 12:29:01.009141 7 discovery.go:152] Backend default discovered pool sample-pool-02
osdsdock_1 | I0522 12:30:01.064180 7 discovery.go:152] Backend default discovered pool sample-pool-01
osdsdb_1 | 2018-05-22 19:16:00.813781 I | etcdserver/api: enabled capabilities for version 3.3
osdsdock_1 | I0522 12:30:01.064266 7 discovery.go:152] Backend default discovered pool sample-pool-02
osdsdock_1 | I0522 12:31:01.067030 7 discovery.go:152] Backend default discovered pool sample-pool-01
osdsdock_1 | I0522 12:31:01.067057 7 discovery.go:152] Backend default discovered pool sample-pool-02
osdsdock_1 | I0522 12:32:01.070350 7 discovery.go:152] Backend default discovered pool sample-pool-01
osdsdock_1 | I0522 12:32:01.070456 7 discovery.go:152] Backend default discovered pool sample-pool-02
ERROR: An HTTP request took too long to complete. Retry with --verbose to obtain debug information.
If you encounter this issue regularly because of slow network conditions, consider setting COMPOSE_HTTP_TIMEOUT to a higher value (current value: 60).
root@ubuntu:~# COMPOSE_HTTP_TIMEOUT=300
root@ubuntu:~# sudo docker-compose up
bill_osdsdb_1 is up-to-date
bill_osdslet_1 is up-to-date
bill_osdsdock_1 is up-to-date
Attaching to bill_osdsdb_1, bill_osdslet_1, bill_osdsdock_1
osdsdb_1 | 2018-05-22 19:16:00.598449 I | etcdmain: etcd Version: 3.3.5
osdsdb_1 | 2018-05-22 19:16:00.601260 I | etcdmain: Git SHA: 70c872620
osdsdb_1 | 2018-05-22 19:16:00.601263 I | etcdmain: Go Version: go1.9.6
osdsdb_1 | 2018-05-22 19:16:00.601264 I | etcdmain: Go OS/Arch: linux/amd64
osdsdb_1 | 2018-05-22 19:16:00.601267 I | etcdmain: setting maximum number of CPUs to 1, total number of available CPUs is 1
osdsdb_1 | 2018-05-22 19:16:00.601273 W | etcdmain: no data-dir provided, using default data-dir ./default.etcd
osdslet_1 | I0522 19:16:00.831572 1 auth.go:49] noauth
osdslet_1 | I0522 19:16:00.831655 1 auth.go:58] &{}
osdslet_1 | 2018/05/22 19:16:00.834 [I] http server Running on http://0.0.0.0:50040
osdsdb_1 | 2018-05-22 19:16:00.603388 I | embed: listening for peers on http://localhost:2380
osdsdb_1 | 2018-05-22 19:16:00.603416 I | embed: listening for client requests on localhost:2379
osdsdb_1 | 2018-05-22 19:16:00.605182 I | etcdserver: name = default
osdsdb_1 | 2018-05-22 19:16:00.605191 I | etcdserver: data dir = default.etcd
osdsdb_1 | 2018-05-22 19:16:00.605194 I | etcdserver: member dir = default.etcd/member
osdsdb_1 | 2018-05-22 19:16:00.605196 I | etcdserver: heartbeat = 100ms
osdsdb_1 | 2018-05-22 19:16:00.605198 I | etcdserver: election = 1000ms
osdsdb_1 | 2018-05-22 19:16:00.605200 I | etcdserver: snapshot count = 100000
osdsdb_1 | 2018-05-22 19:16:00.605205 I | etcdserver: advertise client URLs = http://localhost:2379
osdsdb_1 | 2018-05-22 19:16:00.605207 I | etcdserver: initial advertise peer URLs = http://localhost:2380
osdsdb_1 | 2018-05-22 19:16:00.605211 I | etcdserver: initial cluster = default=http://localhost:2380
osdsdb_1 | 2018-05-22 19:16:00.608813 I | etcdserver: starting member 8e9e05c52164694d in cluster cdf818194e3a8c32
osdsdock_1 | I0522 12:16:00.925956 7 server.go:81] Dock server initialized! Start listening on port:[::]:50050
osdsdock_1 | I0522 12:16:00.929136 7 discovery.go:152] Backend default discovered pool sample-pool-01
osdsdock_1 | I0522 12:16:00.929197 7 discovery.go:152] Backend default discovered pool sample-pool-02
osdsdock_1 | I0522 12:17:00.935296 7 discovery.go:152] Backend default discovered pool sample-pool-01
osdsdock_1 | I0522 12:17:00.935500 7 discovery.go:152] Backend default discovered pool sample-pool-02
osdsdock_1 | I0522 12:18:00.939719 7 discovery.go:152] Backend default discovered pool sample-pool-01
osdsdock_1 | I0522 12:18:00.939811 7 discovery.go:152] Backend default discovered pool sample-pool-02
osdsdb_1 | 2018-05-22 19:16:00.608830 I | raft: 8e9e05c52164694d became follower at term 0
osdsdb_1 | 2018-05-22 19:16:00.608836 I | raft: newRaft 8e9e05c52164694d [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
osdsdb_1 | 2018-05-22 19:16:00.608838 I | raft: 8e9e05c52164694d became follower at term 1
osdsdb_1 | 2018-05-22 19:16:00.612974 W | auth: simple token is not cryptographically signed
osdsdb_1 | 2018-05-22 19:16:00.613485 I | etcdserver: starting server... [version: 3.3.5, cluster version: to_be_decided]
osdsdb_1 | 2018-05-22 19:16:00.616458 I | etcdserver: 8e9e05c52164694d as single-node; fast-forwarding 9 ticks (election ticks 10)
osdsdb_1 | 2018-05-22 19:16:00.618103 I | etcdserver/membership: added member 8e9e05c52164694d [http://localhost:2380] to cluster cdf818194e3a8c32
osdsdock_1 | I0522 12:19:00.947580 7 discovery.go:152] Backend default discovered pool sample-pool-01
osdsdock_1 | I0522 12:19:00.947895 7 discovery.go:152] Backend default discovered pool sample-pool-02
osdsdock_1 | I0522 12:20:00.957388 7 discovery.go:152] Backend default discovered pool sample-pool-01
osdsdock_1 | I0522 12:20:00.957818 7 discovery.go:152] Backend default discovered pool sample-pool-02
osdsdb_1 | 2018-05-22 19:16:00.809034 I | raft: 8e9e05c52164694d is starting a new election at term 1
osdsdb_1 | 2018-05-22 19:16:00.809117 I | raft: 8e9e05c52164694d became candidate at term 2
osdsdb_1 | 2018-05-22 19:16:00.809144 I | raft: 8e9e05c52164694d received MsgVoteResp from 8e9e05c52164694d at term 2
osdsdock_1 | I0522 12:21:00.979851 7 discovery.go:152] Backend default discovered pool sample-pool-01
osdsdock_1 | I0522 12:21:00.979946 7 discovery.go:152] Backend default discovered pool sample-pool-02
osdsdb_1 | 2018-05-22 19:16:00.809167 I | raft: 8e9e05c52164694d became leader at term 2
osdsdb_1 | 2018-05-22 19:16:00.809183 I | raft: raft.node: 8e9e05c52164694d elected leader 8e9e05c52164694d at term 2
osdsdb_1 | 2018-05-22 19:16:00.809499 I | etcdserver: published {Name:default ClientURLs:[http://localhost:2379]} to cluster cdf818194e3a8c32
osdsdock_1 | I0522 12:22:00.981566 7 discovery.go:152] Backend default discovered pool sample-pool-01
osdsdock_1 | I0522 12:22:00.981645 7 discovery.go:152] Backend default discovered pool sample-pool-02
osdsdock_1 | I0522 12:23:00.983995 7 discovery.go:152] Backend default discovered pool sample-pool-01
osdsdb_1 | 2018-05-22 19:16:00.809585 I | etcdserver: setting up the initial cluster version to 3.3
osdsdb_1 | 2018-05-22 19:16:00.809758 I | embed: ready to serve client requests
osdsdb_1 | 2018-05-22 19:16:00.811977 N | embed: serving insecure client requests on 127.0.0.1:2379, this is strongly discouraged!
osdsdock_1 | I0522 12:23:00.984080 7 discovery.go:152] Backend default discovered pool sample-pool-02
osdsdock_1 | I0522 12:24:00.986451 7 discovery.go:152] Backend default discovered pool sample-pool-01
osdsdock_1 | I0522 12:24:00.986473 7 discovery.go:152] Backend default discovered pool sample-pool-02
osdsdb_1 | 2018-05-22 19:16:00.813688 N | etcdserver/membership: set the initial cluster version to 3.3
osdsdb_1 | 2018-05-22 19:16:00.813781 I | etcdserver/api: enabled capabilities for version 3.3
osdsdock_1 | I0522 12:25:00.989864 7 discovery.go:152] Backend default discovered pool sample-pool-01
osdsdock_1 | I0522 12:25:00.990066 7 discovery.go:152] Backend default discovered pool sample-pool-02
osdsdock_1 | I0522 12:26:00.993738 7 discovery.go:152] Backend default discovered pool sample-pool-01
osdsdock_1 | I0522 12:26:00.993867 7 discovery.go:152] Backend default discovered pool sample-pool-02
osdsdock_1 | I0522 12:27:00.997656 7 discovery.go:152] Backend default discovered pool sample-pool-01
osdsdock_1 | I0522 12:27:00.997714 7 discovery.go:152] Backend default discovered pool sample-pool-02
osdsdock_1 | I0522 12:28:01.000436 7 discovery.go:152] Backend default discovered pool sample-pool-01
osdsdock_1 | I0522 12:28:01.000520 7 discovery.go:152] Backend default discovered pool sample-pool-02
osdsdock_1 | I0522 12:29:01.005855 7 discovery.go:152] Backend default discovered pool sample-pool-01
osdsdock_1 | I0522 12:29:01.009141 7 discovery.go:152] Backend default discovered pool sample-pool-02
osdsdock_1 | I0522 12:30:01.064180 7 discovery.go:152] Backend default discovered pool sample-pool-01
osdsdock_1 | I0522 12:30:01.064266 7 discovery.go:152] Backend default discovered pool sample-pool-02
osdsdock_1 | I0522 12:31:01.067030 7 discovery.go:152] Backend default discovered pool sample-pool-01
osdsdock_1 | I0522 12:31:01.067057 7 discovery.go:152] Backend default discovered pool sample-pool-02
osdsdock_1 | I0522 12:32:01.070350 7 discovery.go:152] Backend default discovered pool sample-pool-01
osdsdock_1 | I0522 12:32:01.070456 7 discovery.go:152] Backend default discovered pool sample-pool-02
osdsdock_1 | I0522 12:33:01.077091 7 discovery.go:152] Backend default discovered pool sample-pool-01
osdsdock_1 | I0522 12:33:01.083595 7 discovery.go:152] Backend default discovered pool sample-pool-02
ERROR: An HTTP request took too long to complete. Retry with --verbose to obtain debug information.
If you encounter this issue regularly because of slow network conditions, consider setting COMPOSE_HTTP_TIMEOUT to a higher value (current value: 60).
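
A side note on the retry above: `COMPOSE_HTTP_TIMEOUT=300` on a line of its own sets a shell-local variable that is never exported, so docker-compose (a child process) never sees it, and `sudo` resets the environment anyway; passing the variable inline to sudo is what actually delivers it. A minimal demonstration of the export semantics:

```shell
# Start from a clean slate so the variable is definitely not exported yet.
unset COMPOSE_HTTP_TIMEOUT

# An assignment without export stays local to the current shell...
COMPOSE_HTTP_TIMEOUT=300
bash -c 'echo "${COMPOSE_HTTP_TIMEOUT:-unset}"'   # prints: unset

# ...while an exported (or inline) assignment reaches child processes:
export COMPOSE_HTTP_TIMEOUT=300
bash -c 'echo "${COMPOSE_HTTP_TIMEOUT:-unset}"'   # prints: 300

# With sudo, pass it on the command line:
#   sudo COMPOSE_HTTP_TIMEOUT=300 docker-compose up
```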
root@ubuntu:~# sudo COMPOSE_HTTP_TIMEOUT=300 docker-compose up
bill_osdsdb_1 is up-to-date
bill_osdsdock_1 is up-to-date
bill_osdslet_1 is up-to-date
Attaching to bill_osdsdb_1, bill_osdsdock_1, bill_osdslet_1
osdsdb_1 | 2018-05-22 19:16:00.598449 I | etcdmain: etcd Version: 3.3.5
osdsdb_1 | 2018-05-22 19:16:00.601260 I | etcdmain: Git SHA: 70c872620
osdsdb_1 | 2018-05-22 19:16:00.601263 I | etcdmain: Go Version: go1.9.6
osdsdb_1 | 2018-05-22 19:16:00.601264 I | etcdmain: Go OS/Arch: linux/amd64
osdsdock_1 | I0522 12:16:00.925956 7 server.go:81] Dock server initialized! Start listening on port:[::]:50050
osdsdock_1 | I0522 12:16:00.929136 7 discovery.go:152] Backend default discovered pool sample-pool-01
osdsdock_1 | I0522 12:16:00.929197 7 discovery.go:152] Backend default discovered pool sample-pool-02
osdsdb_1 | 2018-05-22 19:16:00.601267 I | etcdmain: setting maximum number of CPUs to 1, total number of available CPUs is 1
osdsdb_1 | 2018-05-22 19:16:00.601273 W | etcdmain: no data-dir provided, using default data-dir ./default.etcd
osdsdb_1 | 2018-05-22 19:16:00.603388 I | embed: listening for peers on http://localhost:2380
osdsdb_1 | 2018-05-22 19:16:00.603416 I | embed: listening for client requests on localhost:2379
osdsdb_1 | 2018-05-22 19:16:00.605182 I | etcdserver: name = default
osdsdb_1 | 2018-05-22 19:16:00.605191 I | etcdserver: data dir = default.etcd
osdsdb_1 | 2018-05-22 19:16:00.605194 I | etcdserver: member dir = default.etcd/member
osdsdb_1 | 2018-05-22 19:16:00.605196 I | etcdserver: heartbeat = 100ms
osdsdock_1 | I0522 12:17:00.935296 7 discovery.go:152] Backend default discovered pool sample-pool-01
osdsdock_1 | I0522 12:17:00.935500 7 discovery.go:152] Backend default discovered pool sample-pool-02
osdsdock_1 | I0522 12:18:00.939719 7 discovery.go:152] Backend default discovered pool sample-pool-01
osdsdock_1 | I0522 12:18:00.939811 7 discovery.go:152] Backend default discovered pool sample-pool-02
osdsdock_1 | I0522 12:19:00.947580 7 discovery.go:152] Backend default discovered pool sample-pool-01
osdsdb_1 | 2018-05-22 19:16:00.605198 I | etcdserver: election = 1000ms
osdsdb_1 | 2018-05-22 19:16:00.605200 I | etcdserver: snapshot count = 100000
osdsdb_1 | 2018-05-22 19:16:00.605205 I | etcdserver: advertise client URLs = http://localhost:2379
osdsdb_1 | 2018-05-22 19:16:00.605207 I | etcdserver: initial advertise peer URLs = http://localhost:2380
osdsdb_1 | 2018-05-22 19:16:00.605211 I | etcdserver: initial cluster = default=http://localhost:2380
osdsdb_1 | 2018-05-22 19:16:00.608813 I | etcdserver: starting member 8e9e05c52164694d in cluster cdf818194e3a8c32
osdsdb_1 | 2018-05-22 19:16:00.608830 I | raft: 8e9e05c52164694d became follower at term 0
osdsdb_1 | 2018-05-22 19:16:00.608836 I | raft: newRaft 8e9e05c52164694d [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
osdsdb_1 | 2018-05-22 19:16:00.608838 I | raft: 8e9e05c52164694d became follower at term 1
osdsdb_1 | 2018-05-22 19:16:00.612974 W | auth: simple token is not cryptographically signed
osdslet_1 | I0522 19:16:00.831572 1 auth.go:49] noauth
osdslet_1 | I0522 19:16:00.831655 1 auth.go:58] &{}
osdslet_1 | 2018/05/22 19:16:00.834 [I] http server Running on http://0.0.0.0:50040
osdsdock_1 | I0522 12:19:00.947895 7 discovery.go:152] Backend default discovered pool sample-pool-02
osdsdock_1 | I0522 12:20:00.957388 7 discovery.go:152] Backend default discovered pool sample-pool-01
osdsdock_1 | I0522 12:20:00.957818 7 discovery.go:152] Backend default discovered pool sample-pool-02
osdsdock_1 | I0522 12:21:00.979851 7 discovery.go:152] Backend default discovered pool sample-pool-01
osdsdock_1 | I0522 12:21:00.979946 7 discovery.go:152] Backend default discovered pool sample-pool-02
osdsdock_1 | I0522 12:22:00.981566 7 discovery.go:152] Backend default discovered pool sample-pool-01
osdsdb_1 | 2018-05-22 19:16:00.613485 I | etcdserver: starting server... [version: 3.3.5, cluster version: to_be_decided]
osdsdb_1 | 2018-05-22 19:16:00.616458 I | etcdserver: 8e9e05c52164694d as single-node; fast-forwarding 9 ticks (election ticks 10)
osdsdb_1 | 2018-05-22 19:16:00.618103 I | etcdserver/membership: added member 8e9e05c52164694d [http://localhost:2380] to cluster cdf818194e3a8c32
osdsdb_1 | 2018-05-22 19:16:00.809034 I | raft: 8e9e05c52164694d is starting a new election at term 1
osdsdb_1 | 2018-05-22 19:16:00.809117 I | raft: 8e9e05c52164694d became candidate at term 2
osdsdock_1 | I0522 12:22:00.981645 7 discovery.go:152] Backend default discovered pool sample-pool-02
osdsdock_1 | I0522 12:23:00.983995 7 discovery.go:152] Backend default discovered pool sample-pool-01
osdsdock_1 | I0522 12:23:00.984080 7 discovery.go:152] Backend default discovered pool sample-pool-02
osdsdock_1 | I0522 12:24:00.986451 7 discovery.go:152] Backend default discovered pool sample-pool-01
osdsdock_1 | I0522 12:24:00.986473 7 discovery.go:152] Backend default discovered pool sample-pool-02
osdsdb_1 | 2018-05-22 19:16:00.809144 I | raft: 8e9e05c52164694d received MsgVoteResp from 8e9e05c52164694d at term 2
osdsdb_1 | 2018-05-22 19:16:00.809167 I | raft: 8e9e05c52164694d became leader at term 2
osdsdb_1 | 2018-05-22 19:16:00.809183 I | raft: raft.node: 8e9e05c52164694d elected leader 8e9e05c52164694d at term 2
osdsdb_1 | 2018-05-22 19:16:00.809499 I | etcdserver: published {Name:default ClientURLs:[http://localhost:2379]} to cluster cdf818194e3a8c32
osdsdb_1 | 2018-05-22 19:16:00.809585 I | etcdserver: setting up the initial cluster version to 3.3
osdsdock_1 | I0522 12:25:00.989864 7 discovery.go:152] Backend default discovered pool sample-pool-01
osdsdock_1 | I0522 12:25:00.990066 7 discovery.go:152] Backend default discovered pool sample-pool-02
osdsdock_1 | I0522 12:26:00.993738 7 discovery.go:152] Backend default discovered pool sample-pool-01
osdsdock_1 | I0522 12:26:00.993867 7 discovery.go:152] Backend default discovered pool sample-pool-02
osdsdock_1 | I0522 12:27:00.997656 7 discovery.go:152] Backend default discovered pool sample-pool-01
osdsdb_1 | 2018-05-22 19:16:00.809758 I | embed: ready to serve client requests
osdsdb_1 | 2018-05-22 19:16:00.811977 N | embed: serving insecure client requests on 127.0.0.1:2379, this is strongly discouraged!
osdsdb_1 | 2018-05-22 19:16:00.813688 N | etcdserver/membership: set the initial cluster version to 3.3
osdsdb_1 | 2018-05-22 19:16:00.813781 I | etcdserver/api: enabled capabilities for version 3.3
osdsdock_1 | I0522 12:27:00.997714 7 discovery.go:152] Backend default discovered pool sample-pool-02
osdsdock_1 | I0522 12:28:01.000436 7 discovery.go:152] Backend default discovered pool sample-pool-01
osdsdock_1 | I0522 12:28:01.000520 7 discovery.go:152] Backend default discovered pool sample-pool-02
osdsdock_1 | I0522 12:29:01.005855 7 discovery.go:152] Backend default discovered pool sample-pool-01
osdsdock_1 | I0522 12:29:01.009141 7 discovery.go:152] Backend default discovered pool sample-pool-02
osdsdock_1 | I0522 12:30:01.064180 7 discovery.go:152] Backend default discovered pool sample-pool-01
osdsdock_1 | I0522 12:30:01.064266 7 discovery.go:152] Backend default discovered pool sample-pool-02
osdsdock_1 | I0522 12:31:01.067030 7 discovery.go:152] Backend default discovered pool sample-pool-01
osdsdock_1 | I0522 12:31:01.067057 7 discovery.go:152] Backend default discovered pool sample-pool-02
osdsdock_1 | I0522 12:32:01.070350 7 discovery.go:152] Backend default discovered pool sample-pool-01
osdsdock_1 | I0522 12:32:01.070456 7 discovery.go:152] Backend default discovered pool sample-pool-02
osdsdock_1 | I0522 12:33:01.077091 7 discovery.go:152] Backend default discovered pool sample-pool-01
osdsdock_1 | I0522 12:33:01.083595 7 discovery.go:152] Backend default discovered pool sample-pool-02
osdsdock_1 | I0522 12:34:01.114775 7 discovery.go:152] Backend default discovered pool sample-pool-01
osdsdock_1 | I0522 12:34:01.114958 7 discovery.go:152] Backend default discovered pool sample-pool-02
osdsdock_1 | I0522 12:35:01.117435 7 discovery.go:152] Backend default discovered pool sample-pool-01
osdsdock_1 | I0522 12:35:01.117502 7 discovery.go:152] Backend default discovered pool sample-pool-02
osdsdock_1 | I0522 12:36:01.120567 7 discovery.go:152] Backend default discovered pool sample-pool-01
osdsdock_1 | I0522 12:36:01.120702 7 discovery.go:152] Backend default discovered pool sample-pool-02
osdsdock_1 | I0522 12:37:01.125978 7 discovery.go:152] Backend default discovered pool sample-pool-01
osdsdock_1 | I0522 12:37:01.126012 7 discovery.go:152] Backend default discovered pool sample-pool-02
osdsdock_1 | I0522 12:38:01.128279 7 discovery.go:152] Backend default discovered pool sample-pool-01
osdsdock_1 | I0522 12:38:01.128300 7 discovery.go:152] Backend default discovered pool sample-pool-02
osdsdock_1 | I0522 12:39:01.130688 7 discovery.go:152] Backend default discovered pool sample-pool-01
osdsdock_1 | I0522 12:39:01.130711 7 discovery.go:152] Backend default discovered pool sample-pool-02
ERROR: An HTTP request took too long to complete. Retry with --verbose to obtain debug information.
If you encounter this issue regularly because of slow network conditions, consider setting COMPOSE_HTTP_TIMEOUT to a higher value (current value: 300).

What you expected to happen:

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

  • Hotpot(release/branch) version:
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

Ubuntu 16.04

Ceph backend doesn't work when integrating opensds with k8s csi

Is this a BUG REPORT or FEATURE REQUEST?:

/kind bug

What happened:

When the Ceph backend is enabled in the Kubernetes CSI scenario, there are some bugs in the InitializeConnection method shown below:

func (d *Driver) InitializeConnection(opt *pb.CreateAttachmentOpts) (*model.ConnectionInfo, error) {
	vol, err := d.PullVolume(opt.GetVolumeId())
	if err != nil {
		log.Error("When get image:", err)
		return nil, err
	}

	return &model.ConnectionInfo{
		DriverVolumeType: "rbd",
		ConnectionData: map[string]interface{}{
			"secret_type":  "ceph",
			"name":         "rbd/" + opensdsPrefix + ":" + vol.Name + ":" + vol.Id,
			"cluster_name": "ceph",
			"hosts":        []string{opt.GetHostInfo().Host},
			"volume_id":    vol.Id,
			"access_mode":  "rw",
			"ports":        []string{"6789"},
		},
	}, nil
}
  • The PullVolume method has not been implemented yet, so the call always returns an error
  • The pool name rbd is hard-coded and cannot be customized
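Assuming the fix looks up the volume's real pool instead of hard-coding rbd, a minimal sketch could look like the following. The types and the map-backed volume lookup below are illustrative stand-ins, not the actual opensds driver code:

```go
package main

import (
	"errors"
	"fmt"
)

// Volume is a minimal stand-in for the opensds volume model.
type Volume struct {
	Id       string
	Name     string
	PoolName string // the pool the volume actually lives in, instead of hard-coded "rbd"
}

// ConnectionInfo mirrors the shape returned by InitializeConnection.
type ConnectionInfo struct {
	DriverVolumeType string
	ConnectionData   map[string]interface{}
}

// volumes simulates the driver's volume lookup; the real fix would
// implement PullVolume against the Ceph cluster.
var volumes = map[string]*Volume{
	"vol-001": {Id: "vol-001", Name: "test", PoolName: "mypool"},
}

func pullVolume(id string) (*Volume, error) {
	v, ok := volumes[id]
	if !ok {
		return nil, errors.New("volume not found: " + id)
	}
	return v, nil
}

func initializeConnection(volID, host string) (*ConnectionInfo, error) {
	vol, err := pullVolume(volID)
	if err != nil {
		return nil, err
	}
	return &ConnectionInfo{
		DriverVolumeType: "rbd",
		ConnectionData: map[string]interface{}{
			"secret_type": "ceph",
			// pool name taken from the volume record, not hard-coded:
			"name":         vol.PoolName + "/opensds:" + vol.Name + ":" + vol.Id,
			"cluster_name": "ceph",
			"hosts":        []string{host},
			"volume_id":    vol.Id,
			"access_mode":  "rw",
			"ports":        []string{"6789"},
		},
	}, nil
}

func main() {
	info, err := initializeConnection("vol-001", "node-1")
	if err != nil {
		panic(err)
	}
	fmt.Println(info.ConnectionData["name"])
}
```

With a working PullVolume, the error path in the logs below disappears and the connection name reflects the configured pool.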

What you expected to happen:
Fix the bugs above.

How to reproduce it (as minimally and precisely as possible):
Here are error logs:

root@test:/var/log/opensds# cat osdsdock.test.root.log.ERROR.20180318-123908.17880 
Log file created at: 2018/03/18 12:39:08
Running on machine: test
Binary: Built with gc go1.9.2 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
E0318 12:39:08.190890   17880 context.go:44] {"auth_token":"","user_id":"","tenant_id":"ef305038-cd12-4f3b-90bd-0612f83e14ee","domain_id":"","user_domain_id":"","project_domain_id":"","is_admin":true,"read_only":"","show_deleted":"","request_id":"","resource_uuid":"","overwrite":"","roles":null,"user_name":"","project_name":"","domain_name":"","user_domain_name":"","project_domain_name":"","is_admin_project":false,"service_token":"","service_user_id":"","service_user_name":"","service_user_domain_id":"","service_user_domain_name":"","service_project_id":"","service_project_name":"","service_project_domain_id":"","service_project_domain_name":"","service_roles":"","token":"","uri":"/v1beta/ef305038-cd12-4f3b-90bd-0612f83e14ee/block/volumes"}
E0318 12:40:14.736517   17880 ceph.go:319] When get image:Ceph PullVolume has not implemented yet.
E0318 12:40:14.736663   17880 dock.go:165] Call driver to initialize volume connection failed:Ceph PullVolume has not implemented yet.
E0318 12:40:14.736682   17880 server.go:138] Error occurred in dock module when create volume attachment:Ceph PullVolume has not implemented yet.
E0318 12:40:19.268962   17880 ceph.go:319] When get image:Ceph PullVolume has not implemented yet.
E0318 12:40:19.268996   17880 dock.go:165] Call driver to initialize volume connection failed:Ceph PullVolume has not implemented yet.
E0318 12:40:19.269007   17880 server.go:138] Error occurred in dock module when create volume attachment:Ceph PullVolume has not implemented yet.

Anything else we need to know?:

Environment:

  • Hotpot(release/branch) version:
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

RFE: automatic deployment using Saltstack

Is this a BUG REPORT or FEATURE REQUEST?:
/kind feature

What you expected to happen:
According to #60, "automatic deployment and installation become more and more important in a distributed system cluster. As a cloud-native storage system, opensds controller project MUST support (at least one of) automatic deployment".

I'm interested in storage management, backup/restore, and SDS standards (e.g. DMTF SMI-S, OpenLMI).

I want to work on automatic deployment of OpenSDS via saltstack.
Do we have a list of use cases to satisfy?

Anything else we need to know?:

I'm thinking of publishing an opensds-formula upstream, and integration instructions can go here.

Environment:

  • Hotpot(release/branch) version:
  • OS (e.g. from /etc/os-release): Linux/Darwin Containerized.
  • Kernel (e.g. uname -a): any supported
  • Install tools: saltstack
  • Others:

Too many parameters when calling DeleteVolumeSnapshot in cli tool

Is this a BUG REPORT or FEATURE REQUEST?:

/kind bug

What happened:
From the code in the api package, only one parameter (snapshot_id) is needed when calling the DeleteVolumeSnapshot function, but the CLI currently still requires two parameters (volume_id and snapshot_id) to delete a volume snapshot, which is unnecessary and confuses users.

What you expected to happen:
We should remove the volume_id parameter and leave only snapshot_id when calling DeleteVolumeSnapshot in the osdsctl tool.
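A minimal sketch of the intended single-argument behavior; the helper name and error text below are illustrative, not the actual osdsctl code:

```go
package main

import "fmt"

// parseSnapshotDeleteArgs accepts exactly one positional argument,
// the snapshot id, mirroring the proposed osdsctl behavior.
func parseSnapshotDeleteArgs(args []string) (string, error) {
	if len(args) != 1 {
		return "", fmt.Errorf("expected exactly 1 argument (snapshot id), got %d", len(args))
	}
	return args[0], nil
}

func main() {
	// demo invocation; the real command would pass os.Args[1:]
	id, err := parseSnapshotDeleteArgs([]string{"snap-01"})
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("deleting snapshot", id)
}
```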

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

  • Hotpot(release/branch) version:
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

Helm Charts Support

Is this a BUG REPORT or FEATURE REQUEST?:
/kind feature

What happened:
Helm is a popular package manager for Kubernetes. For easier deployment and operation, we should develop opensds charts to support helm.

What you expected to happen:

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

  • Hotpot(release/branch) version:
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

Modify Extend Volume error message

Is this a BUG REPORT or FEATURE REQUEST?:

/kind bug

What happened:
When the volume is expanded, if the new size is not greater than the old size, the error message is not written to this.Ctx.Output.Body.

What you expected to happen:
When the volume is expanded, if the new size is not greater than the old size, the error message should be written to this.Ctx.Output.Body.

How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
None

Environment:

  • Hotpot(release/branch) version: None
  • OS (e.g. from /etc/os-release): None
  • Kernel (e.g. uname -a): None
  • Install tools: None
  • Others: None

Scheduler Enhancement

Is this a BUG REPORT or FEATURE REQUEST?:

/kind feature

What happened:

  1. In the current version, opensds does not have a filtering framework that manages backend storage capabilities.
  2. In the current opensds version, users cannot customize the required pool when they create a volume.

What you expected to happen:

  1. opensds implements a filtering framework for managing backend storage capabilities.
  2. In the future, users can customize the required pool when creating a volume.

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:
Implement the required functionality by modifying filter.go and selector.go.
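As a rough illustration of what such a filtering framework could look like (the Filter interface and CapacityFilter below are assumptions for this sketch, not the actual filter.go API):

```go
package main

import "fmt"

// Pool is a minimal stand-in for a backend storage pool record.
type Pool struct {
	Name         string
	FreeCapacity int64 // GiB
	DiskType     string
}

// Filter is the kind of interface a filtering framework might expose:
// each filter narrows the candidate pool list against the request.
type Filter interface {
	Handle(request map[string]interface{}, pools []*Pool) []*Pool
}

// CapacityFilter keeps only pools with enough free capacity.
type CapacityFilter struct{}

func (f *CapacityFilter) Handle(req map[string]interface{}, pools []*Pool) []*Pool {
	size, _ := req["size"].(int64)
	var out []*Pool
	for _, p := range pools {
		if p.FreeCapacity >= size {
			out = append(out, p)
		}
	}
	return out
}

func main() {
	pools := []*Pool{
		{Name: "sample-pool-01", FreeCapacity: 100, DiskType: "SSD"},
		{Name: "sample-pool-02", FreeCapacity: 10, DiskType: "SATA"},
	}
	req := map[string]interface{}{"size": int64(50)}
	var f Filter = &CapacityFilter{}
	for _, p := range f.Handle(req, pools) {
		fmt.Println(p.Name)
	}
}
```

The selector would then chain several such filters (capacity, disk type, user-specified pool) before picking a final pool.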

Environment:

  • Hotpot(release/branch) version:
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

OpenSDS Kubernetes External Provisioner Design

As we all know, Kubernetes provides a framework of storage for container orchestration layer.

When users create a PVC, the Kubernetes controller will retrieve a PV from backends through a "Provisioner" interface. But right now each backend has its own provisioner (nfs, efs, cephfs, and so on), and it's unbearable for vendors to maintain all provisioners if they want to support multiple backends.

To solve this problem, we launched a proposal #28 about designing a unified provisioner for different backend storage types. Please feel free to comment if you have any questions : )

OpenSDS Controller API Request Filter

Is this a BUG REPORT or FEATURE REQUEST?:

/kind feature

What happened:
Standard methods for REST are List, Get, Create, Update, and Delete. In particular, the List method takes a collection name and zero or more parameters as input and returns a list of resources that match the input. In the current version, the OpenSDS RESTful API doesn't support pagination, sorting, criteria-based queries, or obtaining the total item count.

What you expected to happen:

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

  • Hotpot(release/branch) version:
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

Compiling error when building osdsdock.go

Is this a BUG REPORT or FEATURE REQUEST?:

BUG

What happened:

A few minutes ago, when I built the opensds project with LiteIDE, it reported the following error:
.\osdsdock.go:88: undefined: server.ListenAndServe
It was caused by a programming error, but there is no suitable protective measure in place to catch this kind of error.

What you expected to happen:

I expect the build mechanism in CI to be strengthened to catch compilation errors before merge.

Profile to volume_type mapping

Is this a BUG REPORT or FEATURE REQUEST?:
/kind bug

What happened:
Currently there is no mapping between OpenSDS profile and Cinder volume_type.

What you expected to happen:

  1. When creating a new profile in OpenSDS, OpenSDS should create a corresponding volume_type in Cinder if Cinder is a backend.
  2. When using an existing Cinder setup, OpenSDS should create a profile that is mapped to an existing volume_type in Cinder.

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

  • Hotpot(release/branch) version:
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

When expanding a volume, OpenSDS needs to check the upper limit of the new size of the volume.

Is this a BUG REPORT or FEATURE REQUEST?:


/kind bug

What happened:
When extending a volume, OpenSDS did not check the upper limit of the new size of the volume.

What you expected to happen:
When expanding a volume, OpenSDS needs to check the upper limit of the new size of the volume.
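A hedged sketch of the combined size checks such a handler might perform; the function and parameter names are illustrative, and the pool's free capacity is assumed to be the upper limit:

```go
package main

import "fmt"

// validateExtendSize checks that the requested size is strictly larger
// than the current size and that the growth fits within the pool's
// free capacity, which serves as the upper limit in this sketch.
func validateExtendSize(oldSizeGiB, newSizeGiB, poolFreeGiB int64) error {
	if newSizeGiB <= oldSizeGiB {
		return fmt.Errorf("new size (%d GiB) must be greater than current size (%d GiB)",
			newSizeGiB, oldSizeGiB)
	}
	if newSizeGiB-oldSizeGiB > poolFreeGiB {
		return fmt.Errorf("pool has only %d GiB free, cannot grow volume by %d GiB",
			poolFreeGiB, newSizeGiB-oldSizeGiB)
	}
	return nil
}

func main() {
	fmt.Println(validateExtendSize(10, 20, 100))  // accepted
	fmt.Println(validateExtendSize(10, 10, 100))  // rejected: not larger
	fmt.Println(validateExtendSize(10, 500, 100)) // rejected: over the limit
}
```

This also covers the related issue where an equal-or-smaller new size must produce an error that reaches the response body.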

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:
None
Environment:

  • Hotpot(release/branch) version:
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

Add LVM Driver Support

Motivation

As a storage management and orchestration layer in a multi-cloud environment, it's vital for the opensds controller project to enlarge its southbound eco-system. Currently we have the Ceph and Cinder drivers, which represent storage systems, but we are still missing something in storage device management.

Goal

To solve this problem, we are trying to add an LVM storage device driver, which can be considered a starting point. The reason for pushing this suggestion is to prove that opensds will eventually become a storage controller that manages different kinds of storage resources.

Plan

IMO, this design should be finished before alpha release, so I think we should start it ASAP if nobody has objection.

Add CLI tool for opensds controller project

Motivation

Currently the opensds controller project doesn't have any CLI tool, which is vital for any large system, so it's urgent for us to design one before the alpha release.

Introduction

My initial thought is to build our CLI tool on the basis of a framework such as cobra or cli, and I suggest we choose cobra for its ease of use and popularity. We can create a package called cli and make it call the client package (see #102) to send requests to the opensds service.
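Whichever framework is chosen, the command layer mostly dispatches subcommands to the client package. A dependency-free sketch of that dispatch shape (command names and the printed output are illustrative; a cobra-based cli package would end up with the same structure under the hood):

```go
package main

import "fmt"

// command maps a subcommand name to a handler.
type command struct {
	name string
	run  func(args []string) error
}

// dispatch finds the named subcommand and runs it with the rest
// of the arguments, the core job of any CLI framework.
func dispatch(cmds []command, args []string) error {
	if len(args) == 0 {
		return fmt.Errorf("usage: osdsctl <command> [args]")
	}
	for _, c := range cmds {
		if c.name == args[0] {
			return c.run(args[1:])
		}
	}
	return fmt.Errorf("unknown command: %s", args[0])
}

func main() {
	cmds := []command{
		{name: "volume", run: func(args []string) error {
			// a real handler would call the client package here
			fmt.Println("volume subcommand:", args)
			return nil
		}},
	}
	// demo invocation; a real CLI would pass os.Args[1:]
	if err := dispatch(cmds, []string{"volume", "list"}); err != nil {
		fmt.Println(err)
	}
}
```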

The opensds project has no roadmap and api panorama.

Is this a BUG REPORT or FEATURE REQUEST?:

help wanted

What happened:

Currently, the opensds project has no roadmap and api panorama.

What you expected to happen:

I expect to discuss and draw up the roadmap and api panorama of the opensds project together.

Anything else we need to know?:

Please refer to the previous discussion on opensds-dev.

Extend Volume

Is this a BUG REPORT or FEATURE REQUEST?:

/kind feature

What happened:
In the current version, OpenSDS doesn't support "Extend Volume".

What you expected to happen:
Extends the size of a volume to a requested size, in gibibytes (GiB).

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:
List of work items:

  1. Add an API (v1/{ProjectID}/volumes/{VolumeID}/action).
  2. Add a CLI command (osdsctl volume extend).
  3. Implement extend volume in the volume controller.
  4. Add logic in the selector to handle extend volume.
  5. Implement extend volume in the drivers.

Environment:
  • Hotpot(release/branch) version:
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

OpenStack Cinder driver support

Motivation

According to the work tracking, it's urgent to support more storage drivers. After discussion in the weekly meeting, we are expected to support more than 2 drivers besides the Ceph driver, and the Cinder driver is one of them at the moment.

Solution

As you can see from some discussions in the OpenStack community, they plan to adopt gophercloud as their official golang client, which means we could develop our Cinder driver based on this library. Besides, this project supports the latest version of the OpenStack API and is under active development, so we don't need to worry much about maintaining it.

Any thoughts?

OpenSDS SouthBound Ceph Driver Design

As an SDS controller, it's essential for OpenSDS to build the eco-system of its southbound interface.

At the first stage, our strategy was to quickly fill our gaps with the help of OpenStack (Cinder, Manila). Now it's time to move to the next stage, where we should make our own eco-system competitive.

So we drafted this proposal #30, aiming to make OpenSDS better!

Enable containerized deployment in ansible script

Is this a BUG REPORT or FEATURE REQUEST?:
/kind feature

What happened:
It was a great move that we could set up an opensds cluster in one command using ansible, but currently we only support installing the opensds daemon services from source code, so a lot of time is wasted on downloading and building. We should improve this and provide a better user experience to end users.

What you expected to happen:
Since we have supported installing the opensds services in containers manually, it's time to integrate this into the ansible script so that all of this work can be done automatically.

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

  • Hotpot(release/branch) version:
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

OpenSDS SouthBound Interface Design

From the current version of the OpenSDS southbound interface, we found that mount and unmount are host operations, and the MountVolume and UnmountVolume methods should be executed by the OpenSDS Hub module rather than by backend drivers; thus they should be removed from the southbound interface.

So we drafted this proposal #33 #35, aiming to make the southbound interface easier for vendors to use by redesigning it.

Please speak freely if you have any questions : )

Half-baked framework and low coverage of testing

IMO, testing plays a very important role in code development: every method, and even every structure, should be validated by test code.

But here is what our CI told us:

$ go test github.com/opensds/opensds/cmd/osdslet -cover && go test github.com/opensds/opensds/cmd/osdsdock -cover
?   	github.com/opensds/opensds/cmd/osdslet	[no test files]
?   	github.com/opensds/opensds/cmd/osdsdock	[no test files]
The command "go test github.com/opensds/opensds/cmd/osdslet -cover && go test github.com/opensds/opensds/cmd/osdsdock -cover" exited with 0.
2.58s$ go test github.com/opensds/opensds/pkg/... -cover
?   	github.com/opensds/opensds/pkg/api	[no test files]
ok  	github.com/opensds/opensds/pkg/controller	0.019s	coverage: 24.7% of statements
?   	github.com/opensds/opensds/pkg/controller/policy	[no test files]
?   	github.com/opensds/opensds/pkg/controller/policy/executor	[no test files]
?   	github.com/opensds/opensds/pkg/controller/volume	[no test files]
?   	github.com/opensds/opensds/pkg/db	[no test files]
ok  	github.com/opensds/opensds/pkg/db/drivers/etcd	0.016s	coverage: 0.3% of statements
?   	github.com/opensds/opensds/pkg/db/drivers/mysql	[no test files]
?   	github.com/opensds/opensds/pkg/dock	[no test files]
?   	github.com/opensds/opensds/pkg/dock/api	[no test files]
?   	github.com/opensds/opensds/pkg/grpc/dock/client	[no test files]
?   	github.com/opensds/opensds/pkg/grpc/dock/server	[no test files]
?   	github.com/opensds/opensds/pkg/grpc/opensds	[no test files]
?   	github.com/opensds/opensds/pkg/model	[no test files]
ok  	github.com/opensds/opensds/pkg/utils	0.010s	coverage: 12.1% of statements
The command "go test github.com/opensds/opensds/pkg/... -cover" exited with 0.

From the output above, it's obvious that we are still at a very early stage of testing. To make our code stronger and more convincing, we have to make up for the lack of tests and raise the coverage to a passing level before our first release.

Any thoughts?

Add Object-Relational Mapping (ORM) supports for database.

Is this a BUG REPORT or FEATURE REQUEST?:

feature
enhancement

What happened:

The opensds controller only supports mysql database.

What you expected to happen:

I expect support for multiple databases such as mysql, sqlite, postgresql, etc., by adding an ORM library.

Anything else we need to know?:

There are some libraries that implement ORM or data-mapping techniques:

Replace project id with tenant id.

Is this a BUG REPORT or FEATURE REQUEST?:

Uncomment only one, leave it on its own line:
/kind bug

What happened:
According to our previous discussion, we should use tenant id instead of project id, so I submit this issue to finish it.

What you expected to happen:

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

  • Hotpot(release/branch) version:
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

Need to clean up some third-party packages with unfriendly licenses

Is this a BUG REPORT or FEATURE REQUEST?:

/kind bug

What happened:
In the first stage of project development, we ignored some potential risks caused by external projects with unfriendly licenses. Now we should be aware that this could cause significant intellectual-property problems.

What you expected to happen:
Search all third-party projects and clean out those with unfriendly licenses; please note that this could result in a lot of code changes.

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

  • Hotpot(release/branch) version:
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

Include documentation for use with EMC VMAX and/or Unity

Is this a BUG REPORT or FEATURE REQUEST?:


/kind bug
/kind feature

What happened: No documentation explaining how to use OpenSDS with Dell EMC VMAX or Unity

What you expected to happen: Simple guide added to docs section, explaining how to configure OpenSDS with VMAX or Unity for users who wish to begin using OpenSDS.

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?: There is a wide user base of VMAX and Unity users who are increasingly deploying container workloads, and OpenSDS is a great choice for these users. However, there are currently no instructions to introduce new users and enable them to begin using OpenSDS with VMAX or Unity arrays.

Environment:

  • Hotpot(release/branch) version: Current Stable
  • OS (e.g. from /etc/os-release): Any
  • Kernel (e.g. uname -a): Current stable (recent 4.x)
  • Install tools:
  • Others:

Global configuration log optimize

Is this a BUG REPORT or FEATURE REQUEST?:

/kind bug

What happened:
When osdslet or osdsdock starts up, it loads the config file first. If some item is not set in the config file, the system uses the default value and records a warning in the log file. The config structure is loaded twice, once for the default values and once for the real config file, so these log records confuse developers. It would be better to remove them.
The log records look like the following:

Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
W0208 16:28:13.325458   24662 config.go:132] Get key(osdslet.api_endpoint) failed, using default key(0.0.0.0:50040).
W0208 16:28:13.325721   24662 config.go:132] Get key(osdslet.graceful) failed, using default key(True).
W0208 16:28:13.325731   24662 config.go:132] Get key(osdslet.socket_order) failed, using default key(inc).
W0208 16:28:13.325739   24662 config.go:132] Get key(osdslet.daemon) failed, using default key(false).
W0208 16:28:13.325747   24662 config.go:132] Get key(osdsdock.api_endpoint) failed, using default key(localhost:50050).
W0208 16:28:13.325754   24662 config.go:132] Get key(osdsdock.enabled_backends) failed, using default key(lvm).
W0208 16:28:13.325763   24662 config.go:132] Get key(osdsdock.daemon) failed, using default key(false).
W0208 16:28:13.325774   24662 config.go:132] Get key(ceph.name) failed, using default key().
W0208 16:28:13.325782   24662 config.go:132] Get key(ceph.description) failed, using default key().
W0208 16:28:13.325789   24662 config.go:132] Get key(ceph.driver_name) failed, using default key().
W0208 16:28:13.325796   24662 config.go:132] Get key(ceph.config_path) failed, using default key().
W0208 16:28:13.325804   24662 config.go:132] Get key(cinder.name) failed, using default key().
W0208 16:28:13.325811   24662 config.go:132] Get key(cinder.description) failed, using default key().
W0208 16:28:13.325818   24662 config.go:132] Get key(cinder.driver_name) failed, using default key().
W0208 16:28:13.325825   24662 config.go:132] Get key(cinder.config_path) failed, using default key().
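One possible shape for the fix is to fall back to the default silently instead of warning per key; the helper below is a sketch, not the actual config.go code:

```go
package main

import "fmt"

// getOrDefault looks up a key in the parsed config and falls back to
// the default value without emitting a warning, since a missing key
// during the defaults pass is expected, not an error worth logging.
func getOrDefault(cfg map[string]string, key, def string) string {
	if v, ok := cfg[key]; ok {
		return v
	}
	return def
}

func main() {
	cfg := map[string]string{"osdslet.api_endpoint": "0.0.0.0:50040"}
	fmt.Println(getOrDefault(cfg, "osdslet.api_endpoint", "127.0.0.1:50040"))
	fmt.Println(getOrDefault(cfg, "osdslet.graceful", "true")) // default, no warning logged
}
```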

What you expected to happen:

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

  • Hotpot(release/branch) version:
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

State machines for volume, snapshot, attachment, dock(agent), pool

Is this a BUG REPORT or FEATURE REQUEST?:

/kind feature
Design proposal is submitted here: sodafoundation/architecture-analysis#4

What happened:
There is no mechanism to show the real-time status and state transitions of volume, snapshot, attachment, dock and pool in the current opensds version.

What you expected to happen:

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

  • Hotpot(release/branch) version:
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

Add self-defined pool for ceph driver

Is this a BUG REPORT or FEATURE REQUEST?:

/kind bug

What happened:
In the current version the ceph driver doesn't support a self-defined image pool; it only uses the default pool rbd, so it is not flexible.

What you expected to happen:
Add a self-defined pool feature.

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

  • Hotpot(release/branch) version:
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

Add client package for opensds controller project

Motivation

As a collaborative project, the opensds controller project will work with other projects such as OpenStack, Kubernetes, and so forth. To integrate better with nbp and other Golang projects, it's necessary to design a client package for the opensds controller project to set up connections with the opensds service.

Introduction

Since we have adopted beego as the framework of the api-server, I suggest we also use it to implement the http client module. Besides, we can utilize all the global modules (model, config, log, etc.) to minimize the workload.

Any thoughts?

When deleting volume in LVM environment, the snapshots remain in OpenSDS DB.

Is this a BUG REPORT or FEATURE REQUEST?:

/kind bug

What happened:
In the lvm environment, it is possible to delete a volume even when snapshots of it exist. When the volume is deleted, the snapshots are also deleted on the backend, but they remain in the OpenSDS DB.
Because cinder and ceph prevent deleting a volume while snapshots exist, this issue only happens with lvm.

What you expected to happen:
OpenSDS should prevent deleting a volume if its snapshots exist.

How to reproduce it (as minimally and precisely as possible):

  1. Create a volume in lvm storage pool. (osdsctl volume create)
  2. Create a snapshot of the volume. (osdsctl volume snapshot create)
  3. Delete the volume. (osdsctl volume delete)
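The expected guard could be as simple as a snapshot lookup before the delete proceeds; the types and names below are illustrative stand-ins for the opensds models:

```go
package main

import "fmt"

// Snapshot is a minimal stand-in for the opensds snapshot model.
type Snapshot struct {
	Id       string
	VolumeId string
}

// checkVolumeDeletable rejects deletion while any snapshot still
// references the volume, matching the cinder/ceph behavior.
func checkVolumeDeletable(volID string, snapshots []Snapshot) error {
	for _, s := range snapshots {
		if s.VolumeId == volID {
			return fmt.Errorf("volume %s still has snapshot %s; delete snapshots first",
				volID, s.Id)
		}
	}
	return nil
}

func main() {
	snaps := []Snapshot{{Id: "snap-01", VolumeId: "vol-01"}}
	fmt.Println(checkVolumeDeletable("vol-01", snaps)) // rejected
	fmt.Println(checkVolumeDeletable("vol-02", snaps)) // allowed
}
```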

Anything else we need to know?:

Environment:

  • Hotpot(release/branch) version:
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:
