SODA Terra Project API module: an open source implementation of the SODA API connecting storage to platforms like Kubernetes, OpenStack, and VMware.
License: Apache License 2.0
BUG REPORT
While testing the whole system, I found that every time I called ups-storage-driver to create a volume (attachment or snapshot) resource, the dock process crashed and printed this error info:
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x8 pc=0x52af2f]
goroutine 26 [running]:
panic(0x89e840, 0xc42000c0f0)
/usr/local/go/src/runtime/panic.go:500 +0x1a1
github.com/opensds/opensds/pkg/dock/api.CreateVolume(0xc4201921e0, 0x2, 0x2, 0x872e60)
/home/krej/gopath/src/github.com/opensds/opensds/pkg/dock/api/api.go:48 +0x17f
github.com/opensds/opensds/pkg/grpc/dock/server.(*dockServer).CreateVolume(0xc42018e420, 0x7fb1fbbbe120, 0xc420184de0, 0xc4201921e0, 0x0, 0x16b, 0x16b)
/home/krej/gopath/src/github.com/opensds/opensds/pkg/grpc/dock/server/server.go:70 +0xdb
github.com/opensds/opensds/pkg/grpc/opensds._Dock_CreateVolume_Handler(0x8eb0a0, 0xc42018e420, 0x7fb1fbbbe120, 0xc420184de0, 0xc420664500, 0x0, 0x0, 0x0, 0x0, 0x0)
/home/krej/gopath/src/github.com/opensds/opensds/pkg/grpc/opensds/opensds.pb.go:374 +0x27d
google.golang.org/grpc.(*Server).processUnaryRPC(0xc42018a3c0, 0xb7cca0, 0xc420088840, 0xc42006c300, 0xc420184810, 0xb9f560, 0xc420184e70, 0x0, 0x0)
/home/krej/gopath/src/google.golang.org/grpc/server.go:781 +0xd14
google.golang.org/grpc.(*Server).handleStream(0xc42018a3c0, 0xb7cca0, 0xc420088840, 0xc42006c300, 0xc420184e70)
/home/krej/gopath/src/google.golang.org/grpc/server.go:981 +0x7a0
google.golang.org/grpc.(*Server).serveStreams.func1.1(0xc4206666b0, 0xc42018a3c0, 0xb7cca0, 0xc420088840, 0xc42006c300)
/home/krej/gopath/src/google.golang.org/grpc/server.go:551 +0xab
created by google.golang.org/grpc.(*Server).serveStreams.func1
/home/krej/gopath/src/google.golang.org/grpc/server.go:552 +0xa3
After some analysis of ups-storage-driver, I found an issue in how the structures are initialized:
func (p *Plugin) CreateVolume(name string, size int64) (*api.VolumeSpec, error) {
	return &api.VolumeSpec{}, nil
}
I will open a pull request to solve this issue soon; please take some time to review it. Thanks!
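Until a driver fix lands, the dock side could also guard against a nil spec before dereferencing it. A minimal sketch of that idea, where VolumeSpec and the function names are simplified stand-ins, not the actual opensds API:

```go
package main

import (
	"errors"
	"fmt"
)

// VolumeSpec is a simplified stand-in for api.VolumeSpec.
type VolumeSpec struct {
	Name string
	Size int64
}

// createVolume mimics a driver call; a buggy driver may return a nil
// spec together with a nil error, which is the failure mode seen in
// the stack trace above.
func createVolume(name string, size int64, buggy bool) (*VolumeSpec, error) {
	if buggy {
		return nil, nil
	}
	return &VolumeSpec{Name: name, Size: size}, nil
}

// handleCreate checks for a nil spec before dereferencing it, turning
// a would-be SIGSEGV into an ordinary error.
func handleCreate(name string, size int64, buggy bool) (string, error) {
	vol, err := createVolume(name, size, buggy)
	if err != nil {
		return "", err
	}
	if vol == nil {
		return "", errors.New("driver returned nil volume spec")
	}
	return vol.Name, nil
}

func main() {
	if _, err := handleCreate("vol-01", 1, true); err != nil {
		fmt.Println("guarded:", err)
	}
	name, _ := handleCreate("vol-01", 1, false)
	fmt.Println("created:", name)
}
```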
Hi all, I'd like to introduce a CI system built on the zuul[1] and nodepool[2] tools. It can be used for tests (both unit tests and acceptance tests) based on a devstack[3] environment. The basic CI system build is now finished, and it has been tested with Gophercloud, where it works fine. We will maintain the CI system long term and would like to propose integrating it with the official Gophercloud repo (we will also try with the Terraform project).
FYI, the zuul job definitions can be found in [4], the zuul job status web page is [5], the test job log server is [6], and there is a job log example at [7]; test results can be found in the "job-output.txt.gz" file of the log page.
If it is possible to integrate the CI system with the official OpenSDS repo, two things need to be done in the OpenSDS repo:
a) Add a webhook to trigger the CI system to run test jobs when a new pull request comes in.
b) Add a ".zuul.yaml" file to the opensds repo as the CI jobs entrypoint.
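For b), the entrypoint is typically a small project stanza. A hypothetical minimal ".zuul.yaml" sketch, where the job name is a placeholder rather than an actual OpenLab job definition:

```yaml
# Hypothetical minimal CI entrypoint; the job name below is a placeholder.
- project:
    check:
      jobs:
        - opensds-unittest
```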
[1] https://docs.openstack.org/infra/zuul/
[2] https://docs.openstack.org/infra/nodepool/
[3] https://docs.openstack.org/devstack/latest/
[4] https://github.com/theopenlab/openlab-zuul-jobs
[5] http://80.158.20.68/
[6] http://80.158.20.68/logs/
[7] http://80.158.20.68/logs/5/5/3a173240e5d1ca246990330d1176361fc8161a6b/check/gophercloud-unittest/96d4f26/
Now that the opensds PoC code communicates successfully with Cinder, we need to make it also talk to k8s via Fuxi.
Is this a BUG REPORT or FEATURE REQUEST?:
help wanted
What you expected to happen:
I found that the command tools start up the opensds project by reading command-line options and using them to set up the configuration of servers, databases, etc. IMO, command-line options are not the best choice as more and more configuration options are required. Another way to manage configuration is to read and parse a specified configuration file into a global golang structure. The configuration structure may look like:
type Config struct {
	Server ServerConfig
	DB     DBConfig
	Log    LogConfig
}

type ServerConfig struct {
	Host         string
	Port         int
	ReadTimeOut  int
	WriteTimeOut int
}

type DBConfig struct{}
type LogConfig struct{}
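To illustrate the idea, here is a minimal sketch of parsing configuration data into such a structure. JSON via the standard library is used purely for illustration; the actual file format (ini, yaml, ...) is an open choice, and the DBConfig/LogConfig fields below are invented:

```go
package main

import (
	"encoding/json"
	"fmt"
)

type ServerConfig struct {
	Host         string
	Port         int
	ReadTimeOut  int
	WriteTimeOut int
}

// Endpoint and Level are invented fields for illustration only.
type DBConfig struct{ Endpoint string }
type LogConfig struct{ Level string }

type Config struct {
	Server ServerConfig
	DB     DBConfig
	Log    LogConfig
}

// loadConfig parses raw configuration bytes into the global structure.
func loadConfig(data []byte) (*Config, error) {
	var c Config
	if err := json.Unmarshal(data, &c); err != nil {
		return nil, err
	}
	return &c, nil
}

func main() {
	raw := []byte(`{"Server":{"Host":"0.0.0.0","Port":50040}}`)
	c, err := loadConfig(raw)
	if err != nil {
		panic(err)
	}
	fmt.Println(c.Server.Host, c.Server.Port)
}
```

In production the bytes would come from reading the configuration file path given at startup, so only one command-line option (the file path) remains.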
Is this a BUG REPORT or FEATURE REQUEST?:
/kind feature
What happened:
There is no mechanism to probe the status of osdsdock: once the osdsdock service crashes, the scheduler (osdslet) is not notified and will keep scheduling the crashed osdsdock. This causes unpredictable errors or states that confuse the user.
What you expected to happen:
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
Environment:
uname -a):
Is this a BUG REPORT or FEATURE REQUEST?:
Uncomment only one, leave it on its own line:
/kind bug
/kind feature
What happened:
I followed the instructions in:
https://github.com/opensds/opensds/wiki/How-to-Run-Containerized-OpenSDS-for-Testing-Work
When I got to the following step, it failed because the file was not found:
root@ubuntu:~# curl -sSL https://raw.githubusercontent.com/opensds/opensds/development/osdsctl/bin/osdsctl | mv osdsctl /usr/local/bin/
What you expected to happen:
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
Environment:
uname -a):
Ubuntu 16.04
Is this a BUG REPORT or FEATURE REQUEST?:
/kind bug
What happened:
DRBD uses shared information that has to be consistent and available
on both hosts. This mainly includes the chosen TCP/IP ports and the
DRBD minor number. As there are no real means to share that
information in the OpenSDS cluster, this works as follows:
use the same port/minor minimum version (drbd.yaml) on both hosts;
store metadata on CreateReplication() that records the used ports/minors;
check which ones are locally used, and choose the next one.
This can fail in various scenarios, and once that machinery gets out of
"sync", the different hosts will choose different ports/minors.
This is way too fragile for production use.
This shared information is attached to the local volumes, which has another side effect: if you create a replication for LVM1+LVM2, delete that replication, and then create it again for the same two devices, the stored metadata still exists, because it was attached to the devices. In that case the next valid port/minor number combination is chosen. It would be a lot more robust if we could fail at the point where we detect that a volume already has a port/minor combination. Unfortunately this is not possible, as it would break the create/delete/create-again case.
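The divergence failure mode can be sketched in a few lines; the function name and the port minimum below are illustrative, not the actual drbd.yaml values:

```go
package main

import "fmt"

// nextFree returns the smallest candidate >= min that is not in used.
// This mirrors the fragile per-host allocation described above: each
// host scans its own view of the used set, so if the two hosts' views
// ever diverge, they pick different port/minor numbers.
func nextFree(min int, used map[int]bool) int {
	for p := min; ; p++ {
		if !used[p] {
			return p
		}
	}
}

func main() {
	hostA := map[int]bool{7000: true} // host A saw the first allocation
	hostB := map[int]bool{}           // host B missed it: out of "sync"
	// The hosts now disagree: 7001 vs 7000.
	fmt.Println(nextFree(7000, hostA), nextFree(7000, hostB))
}
```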
What you expected to happen:
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
This is from #425
Environment:
uname -a):
Note: When replication is deleted, the replication data (replicationDriverData) saved in each volume should be deleted as well. This is a bug that should be fixed.
Is this a BUG REPORT or FEATURE REQUEST?:
/kind bug
What happened:
Currently, OpenSDS exports created devices for access. For example, the DRBD backing devices (LVM volumes) are exported via iSCSI. Accessing the underlying backing devices directly breaks DRBD. The same applies to every layered block device in Linux (e.g., accessing the underlying device of a DM-crypt device also breaks the encryption device, and you likewise do not access the underlying devices of a software RAID; that is simply how the Linux block layer works). OpenSDS is currently not able to handle how Linux works, namely layered block devices. IMO, this design issue will need to be solved anyway, otherwise layering of Linux block devices will not be possible. Hopefully fixing the OpenSDS design then also fixes this problem for DRBD.
What you expected to happen:
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
This is from #425.
Environment:
uname -a):
We can create a mapping between the exported device path and the DRBD device path, and inform users to use only the DRBD device path.
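The suggested mapping could look like the following in-memory sketch; the device names and the registration function are invented for illustration, not the opensds API:

```go
package main

import "fmt"

// devicePaths pairs the raw exported backing device with the layered
// DRBD device that users must actually access.
type devicePaths struct {
	Exported string // e.g. the iSCSI-exported LVM backing device
	DRBD     string // the layered device users should mount
}

var pathMap = map[string]devicePaths{}

// register records both paths for a volume and returns the only path
// that should be handed out to users.
func register(volID, exported, drbd string) string {
	pathMap[volID] = devicePaths{Exported: exported, DRBD: drbd}
	return pathMap[volID].DRBD
}

func main() {
	fmt.Println(register("vol-01", "/dev/vg0/lv0", "/dev/drbd0"))
}
```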
Is this a BUG REPORT or FEATURE REQUEST?:
Uncomment only one, leave it on its own line:
/kind bug
/kind feature
What happened:
I was following steps in: https://github.com/opensds/opensds/wiki/OpenSDS-Local-Cluster-with-Multi-tenants-Installation
The section for Testing instructs:
If you choose keystone for authentication strategy, you need to execute different commands for logging in as different roles:
For admin role
source /opt/stack/devstack/openrc admin admin
For user role
source /opt/stack/devstack/openrc
However, those files and directories do not exist. The rest of the commands seem to have worked, but I've added their output here in case something in the installation process failed.
It did not seem to impact my ability to create or see the created volume later in the wiki.
root@lglw1039:~/gopath/src/github.com/opensds/opensds# cd $GOPATH/src/github.com/opensds/opensds && script/devsds/install.sh
Starting install...
root@lglw1039:~/gopath/src/github.com/opensds/opensds#
Enjoy it !!
root@lglw1039:~/gopath/src/github.com/opensds/opensds# cp build/out/bin/osdsctl /usr/local/bin/
root@lglw1039:~/gopath/src/github.com/opensds/opensds# export OPENSDS_ENDPOINT=http://127.0.0.1:50040
root@lglw1039:~/gopath/src/github.com/opensds/opensds# export OPENSDS_AUTH_STRATEGY=noauth
root@lglw1039:~/gopath/src/github.com/opensds/opensds# source /opt/stack/devstack/openrc admin admin
bash: /opt/stack/devstack/openrc: No such file or directory
What you expected to happen:
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
Environment:
uname -a):
Ubuntu 16.04
As we know, the LCM (life-cycle management) of a project SHOULD contain these stages: design, coding, building, testing, deployment, and releasing. From this point of view, the opensds controller project is far from production-ready. Of these stages, our work on building, testing, deployment, and releasing is still poor.
As for building, we currently use the go build tool to manage the whole project. Although it works well now, IMO there will be more problems as the project grows.
So I suggest we add more build options (make, shell scripts, and so forth) for scalability and availability.
A productive testing framework should contain unit tests, integration tests, and e2e tests. Right now we are working on unit tests, but we should not forget the other two. Considering there are so many mature testing frameworks, it would be a good choice to build our integration and e2e tests on a popular open source project (ginkgo, etc.).
With the development of IaaS and PaaS, automatic deployment and installation become more and more important in a distributed system cluster. As a cloud-native storage system, the opensds controller project MUST support (at least one of) the automatic deployment options as shown below:
To be ready for productization, publishing a release is necessary but complex work. Here are some references from other popular projects (service catalog, osba) for better understanding.
Lastly, please note that these are just some initial thoughts from my side about what a formal project should be, so more suggestions or comments are welcome.
Is this a BUG REPORT or FEATURE REQUEST?:
Uncomment only one, leave it on its own line:
/kind bug
What happened:
In the opensds framework, we distinguish volumes by uuid, so different volumes can have the same name.
But the LVM driver does not support this: it returns an error if you create two volumes with the same name.
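One common fix is to name the backend logical volume after the uuid rather than the user-visible name, so duplicate display names never collide. A minimal sketch; the "volume-" prefix is an assumed convention, not necessarily what the driver would use:

```go
package main

import "fmt"

// lvName derives a unique LVM logical-volume name from the volume uuid,
// so two volumes sharing a display name never collide on the backend.
// The "volume-" prefix is an assumption for illustration.
func lvName(uuid string) string {
	return "volume-" + uuid
}

func main() {
	fmt.Println(lvName("f4a36962-1dc6-4887-9b0f-8d7ef12e1f2d"))
}
```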
What you expected to happen:
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
Environment:
uname -a):
In the latest OpenSDS version, we standardized the OpenSDS SouthBound Interface and improved the functionality of the system. Right now we have finished the OpenSDS integration with Cinder, and the show and list methods for volumes work.
Is there any plan for security guarantees, such as HTTPS support, sensitive data encryption, parameter checking, and so on?
Is this a BUG REPORT or FEATURE REQUEST?:
/kind feature
What happened:
According to the keystone design (#289) and API request filter support (#284), the opensds api-server supports authentication and request filter features, but the client and cli tool also need to support them.
What you expected to happen:
Add keystone authentication and api request filter support in cli and client tool.
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
Environment:
uname -a):
The OpenSDS log system is poor; we just print strings to distinguish log levels.
We could follow k8s's approach and implement it using glog.
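The core of what glog buys us, severity-tagged and level-filtered output instead of ad-hoc strings, can be sketched with the standard library alone. This is an illustration of the idea, not glog's actual API:

```go
package main

import "fmt"

// Severities in increasing order, mirroring glog's INFO/WARNING/ERROR.
const (
	INFO = iota
	WARNING
	ERROR
)

var sevNames = [...]string{"I", "W", "E"}

// Logger drops messages below its configured minimum severity, instead
// of encoding the level inside the message string.
type Logger struct {
	min int
}

// format returns the rendered line and whether it passes the level filter.
func (l *Logger) format(sev int, msg string) (string, bool) {
	if sev < l.min {
		return "", false
	}
	return sevNames[sev] + "] " + msg, true
}

func main() {
	l := &Logger{min: WARNING}
	if line, ok := l.format(ERROR, "pool discovery failed"); ok {
		fmt.Println(line)
	}
	if _, ok := l.format(INFO, "verbose detail"); !ok {
		fmt.Println("info filtered out at WARNING level")
	}
}
```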
Is this a BUG REPORT or FEATURE REQUEST?:
/kind bug
What happened:
In the integration test, we used a mocked db to simplify the configuration, but it should be removed because we should guarantee the stability of the system in a real environment.
What you expected to happen:
Replace the mocked db with a real etcd cluster (standalone), add an etcd initialization script to prepare.sh, and add some db tests to the test code.
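One way to make the swap cheap is to keep the mock and the etcd client behind one small interface, so switching only touches which constructor the test setup wires up. A hedged sketch; the interface shape is invented, not the actual opensds db package:

```go
package main

import (
	"errors"
	"fmt"
)

// Client is the narrow db surface the tests depend on. The real
// implementation would wrap an etcd client; the mock uses an in-memory
// map, so replacing the mock with etcd only changes the constructor.
type Client interface {
	Put(key, value string) error
	Get(key string) (string, error)
}

// memClient is the in-memory stand-in that a real etcd-backed client
// would replace in integration tests.
type memClient struct{ data map[string]string }

func (m *memClient) Put(k, v string) error { m.data[k] = v; return nil }

func (m *memClient) Get(k string) (string, error) {
	v, ok := m.data[k]
	if !ok {
		return "", errors.New("key not found")
	}
	return v, nil
}

func main() {
	var db Client = &memClient{data: map[string]string{}}
	db.Put("/volumes/vol-01", `{"name":"vol-01"}`)
	v, _ := db.Get("/volumes/vol-01")
	fmt.Println(v)
}
```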
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
Environment:
uname -a):
Is this a BUG REPORT or FEATURE REQUEST?:
Uncomment only one, leave it on its own line:
/kind bug
/kind feature
What happened:
I tried to install via Installation README located at:
https://github.com/opensds/opensds/wiki/OpenSDS-Local-Cluster-with-Multi-tenants-Installation
Attempting to run the command provided yielded an error.
bash: line 1: 404:: command not found
What you expected to happen:
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
Environment:
uname -a):
Ubuntu 16.x
Currently the OpenSDS CLI is just a simple string parser. To make it more convenient to use, we should develop the OpenSDS CLI into a full-featured tool, like OpenStack's CLI and Kubernetes' kubectl.
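A kubectl-style CLI is essentially structured subcommand dispatch rather than string matching. A minimal sketch with the standard flag package; the resource and flag names are illustrative, not the real osdsctl surface:

```go
package main

import (
	"flag"
	"fmt"
	"os"
)

// run dispatches subcommands in the kubectl style
// ("osdsctl volume --action=list").
func run(args []string) (string, error) {
	if len(args) < 1 {
		return "", fmt.Errorf("usage: osdsctl <resource> [flags]")
	}
	switch args[0] {
	case "volume":
		fs := flag.NewFlagSet("volume", flag.ContinueOnError)
		action := fs.String("action", "list", "list|create|delete")
		if err := fs.Parse(args[1:]); err != nil {
			return "", err
		}
		return "volume " + *action, nil
	default:
		return "", fmt.Errorf("unknown resource %q", args[0])
	}
}

func main() {
	out, err := run([]string{"volume", "--action=list"})
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(out)
}
```

A library like cobra (which kubectl itself uses) would add nested commands, help text, and completion on top of this structure.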
Is this a BUG REPORT or FEATURE REQUEST?:
/kind bug
What happened:
When I tested the opensds cluster using the docker-compose tool, it failed when calling the ListPools method in the ceph driver, and then the osdsdock service crashed because of unexpected errors:
root@ubuntu:~# docker logs 25745df796a0
E0209 02:45:35.278842 1 ceph.go:114] exit status 1
E0209 02:45:35.279239 1 ceph.go:434] [Error]:exit status 1
E0209 02:45:35.279317 1 ceph.go:506] exit status 1
E0209 02:45:35.279360 1 discovery.go:90] Call driver to list pools failed:exit status 1
panic: exit status 1
goroutine 1 [running]:
main.main()
/root/gopath/src/github.com/opensds/opensds/cmd/osdsdock/osdsdock.go:54 +0x1a0
What you expected to happen:
Get this bug fixed.
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
Environment:
uname -a):
Is this a BUG REPORT or FEATURE REQUEST?:
Uncomment only one, leave it on its own line:
/kind feature
What happened:
The opensds command line tool does not support bash completion, which is not very friendly for users.
What you expected to happen:
Add bash completion feature
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
Environment:
uname -a):
Is this a BUG REPORT or FEATURE REQUEST?:
Uncomment only one, leave it on its own line:
/kind bug
/kind feature
What happened:
I was following steps in the opensds installation guide for using docker-compose to install.
It seems to work but keeps timing out. I'm not sure whether it really started and docker-compose just didn't finish, or whether it's really waiting for something that isn't happening. Let me know if there's something else I can provide for more detail. Thanks.
sudo docker-compose up
bill_osdsdb_1 is up-to-date
bill_osdslet_1 is up-to-date
bill_osdsdock_1 is up-to-date
Attaching to bill_osdsdb_1, bill_osdslet_1, bill_osdsdock_1
osdsdb_1 | 2018-05-22 19:16:00.598449 I | etcdmain: etcd Version: 3.3.5
osdsdb_1 | 2018-05-22 19:16:00.601260 I | etcdmain: Git SHA: 70c872620
osdsdb_1 | 2018-05-22 19:16:00.601263 I | etcdmain: Go Version: go1.9.6
osdslet_1 | I0522 19:16:00.831572 1 auth.go:49] noauth
osdslet_1 | I0522 19:16:00.831655 1 auth.go:58] &{}
osdslet_1 | 2018/05/22 19:16:00.834 [I] http server Running on http://0.0.0.0:50040
osdsdb_1 | 2018-05-22 19:16:00.601264 I | etcdmain: Go OS/Arch: linux/amd64
osdsdb_1 | 2018-05-22 19:16:00.601267 I | etcdmain: setting maximum number of CPUs to 1, total number of available CPUs is 1
osdsdb_1 | 2018-05-22 19:16:00.601273 W | etcdmain: no data-dir provided, using default data-dir ./default.etcd
osdsdb_1 | 2018-05-22 19:16:00.603388 I | embed: listening for peers on http://localhost:2380
osdsdock_1 | I0522 12:16:00.925956 7 server.go:81] Dock server initialized! Start listening on port:[::]:50050
osdsdock_1 | I0522 12:16:00.929136 7 discovery.go:152] Backend default discovered pool sample-pool-01
osdsdock_1 | I0522 12:16:00.929197 7 discovery.go:152] Backend default discovered pool sample-pool-02
osdsdb_1 | 2018-05-22 19:16:00.603416 I | embed: listening for client requests on localhost:2379
osdsdb_1 | 2018-05-22 19:16:00.605182 I | etcdserver: name = default
osdsdb_1 | 2018-05-22 19:16:00.605191 I | etcdserver: data dir = default.etcd
osdsdb_1 | 2018-05-22 19:16:00.605194 I | etcdserver: member dir = default.etcd/member
osdsdb_1 | 2018-05-22 19:16:00.605196 I | etcdserver: heartbeat = 100ms
osdsdb_1 | 2018-05-22 19:16:00.605198 I | etcdserver: election = 1000ms
osdsdb_1 | 2018-05-22 19:16:00.605200 I | etcdserver: snapshot count = 100000
osdsdock_1 | I0522 12:17:00.935296 7 discovery.go:152] Backend default discovered pool sample-pool-01
osdsdock_1 | I0522 12:17:00.935500 7 discovery.go:152] Backend default discovered pool sample-pool-02
osdsdock_1 | I0522 12:18:00.939719 7 discovery.go:152] Backend default discovered pool sample-pool-01
osdsdock_1 | I0522 12:18:00.939811 7 discovery.go:152] Backend default discovered pool sample-pool-02
osdsdock_1 | I0522 12:19:00.947580 7 discovery.go:152] Backend default discovered pool sample-pool-01
osdsdb_1 | 2018-05-22 19:16:00.605205 I | etcdserver: advertise client URLs = http://localhost:2379
osdsdb_1 | 2018-05-22 19:16:00.605207 I | etcdserver: initial advertise peer URLs = http://localhost:2380
osdsdb_1 | 2018-05-22 19:16:00.605211 I | etcdserver: initial cluster = default=http://localhost:2380
osdsdock_1 | I0522 12:19:00.947895 7 discovery.go:152] Backend default discovered pool sample-pool-02
osdsdock_1 | I0522 12:20:00.957388 7 discovery.go:152] Backend default discovered pool sample-pool-01
osdsdock_1 | I0522 12:20:00.957818 7 discovery.go:152] Backend default discovered pool sample-pool-02
osdsdb_1 | 2018-05-22 19:16:00.608813 I | etcdserver: starting member 8e9e05c52164694d in cluster cdf818194e3a8c32
osdsdb_1 | 2018-05-22 19:16:00.608830 I | raft: 8e9e05c52164694d became follower at term 0
osdsdb_1 | 2018-05-22 19:16:00.608836 I | raft: newRaft 8e9e05c52164694d [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
osdsdock_1 | I0522 12:21:00.979851 7 discovery.go:152] Backend default discovered pool sample-pool-01
osdsdock_1 | I0522 12:21:00.979946 7 discovery.go:152] Backend default discovered pool sample-pool-02
osdsdock_1 | I0522 12:22:00.981566 7 discovery.go:152] Backend default discovered pool sample-pool-01
osdsdb_1 | 2018-05-22 19:16:00.608838 I | raft: 8e9e05c52164694d became follower at term 1
osdsdb_1 | 2018-05-22 19:16:00.612974 W | auth: simple token is not cryptographically signed
osdsdb_1 | 2018-05-22 19:16:00.613485 I | etcdserver: starting server... [version: 3.3.5, cluster version: to_be_decided]
osdsdock_1 | I0522 12:22:00.981645 7 discovery.go:152] Backend default discovered pool sample-pool-02
osdsdock_1 | I0522 12:23:00.983995 7 discovery.go:152] Backend default discovered pool sample-pool-01
osdsdock_1 | I0522 12:23:00.984080 7 discovery.go:152] Backend default discovered pool sample-pool-02
osdsdock_1 | I0522 12:24:00.986451 7 discovery.go:152] Backend default discovered pool sample-pool-01
osdsdb_1 | 2018-05-22 19:16:00.616458 I | etcdserver: 8e9e05c52164694d as single-node; fast-forwarding 9 ticks (election ticks 10)
osdsdb_1 | 2018-05-22 19:16:00.618103 I | etcdserver/membership: added member 8e9e05c52164694d [http://localhost:2380] to cluster cdf818194e3a8c32
osdsdb_1 | 2018-05-22 19:16:00.809034 I | raft: 8e9e05c52164694d is starting a new election at term 1
osdsdock_1 | I0522 12:24:00.986473 7 discovery.go:152] Backend default discovered pool sample-pool-02
osdsdock_1 | I0522 12:25:00.989864 7 discovery.go:152] Backend default discovered pool sample-pool-01
osdsdock_1 | I0522 12:25:00.990066 7 discovery.go:152] Backend default discovered pool sample-pool-02
osdsdock_1 | I0522 12:26:00.993738 7 discovery.go:152] Backend default discovered pool sample-pool-01
osdsdb_1 | 2018-05-22 19:16:00.809117 I | raft: 8e9e05c52164694d became candidate at term 2
osdsdb_1 | 2018-05-22 19:16:00.809144 I | raft: 8e9e05c52164694d received MsgVoteResp from 8e9e05c52164694d at term 2
osdsdb_1 | 2018-05-22 19:16:00.809167 I | raft: 8e9e05c52164694d became leader at term 2
osdsdock_1 | I0522 12:26:00.993867 7 discovery.go:152] Backend default discovered pool sample-pool-02
osdsdock_1 | I0522 12:27:00.997656 7 discovery.go:152] Backend default discovered pool sample-pool-01
osdsdock_1 | I0522 12:27:00.997714 7 discovery.go:152] Backend default discovered pool sample-pool-02
osdsdock_1 | I0522 12:28:01.000436 7 discovery.go:152] Backend default discovered pool sample-pool-01
osdsdb_1 | 2018-05-22 19:16:00.809183 I | raft: raft.node: 8e9e05c52164694d elected leader 8e9e05c52164694d at term 2
osdsdb_1 | 2018-05-22 19:16:00.809499 I | etcdserver: published {Name:default ClientURLs:[http://localhost:2379]} to cluster cdf818194e3a8c32
osdsdb_1 | 2018-05-22 19:16:00.809585 I | etcdserver: setting up the initial cluster version to 3.3
osdsdb_1 | 2018-05-22 19:16:00.809758 I | embed: ready to serve client requests
osdsdb_1 | 2018-05-22 19:16:00.811977 N | embed: serving insecure client requests on 127.0.0.1:2379, this is strongly discouraged!
osdsdb_1 | 2018-05-22 19:16:00.813688 N | etcdserver/membership: set the initial cluster version to 3.3
osdsdock_1 | I0522 12:28:01.000520 7 discovery.go:152] Backend default discovered pool sample-pool-02
osdsdock_1 | I0522 12:29:01.005855 7 discovery.go:152] Backend default discovered pool sample-pool-01
osdsdock_1 | I0522 12:29:01.009141 7 discovery.go:152] Backend default discovered pool sample-pool-02
osdsdock_1 | I0522 12:30:01.064180 7 discovery.go:152] Backend default discovered pool sample-pool-01
osdsdb_1 | 2018-05-22 19:16:00.813781 I | etcdserver/api: enabled capabilities for version 3.3
osdsdock_1 | I0522 12:30:01.064266 7 discovery.go:152] Backend default discovered pool sample-pool-02
osdsdock_1 | I0522 12:31:01.067030 7 discovery.go:152] Backend default discovered pool sample-pool-01
osdsdock_1 | I0522 12:31:01.067057 7 discovery.go:152] Backend default discovered pool sample-pool-02
osdsdock_1 | I0522 12:32:01.070350 7 discovery.go:152] Backend default discovered pool sample-pool-01
osdsdock_1 | I0522 12:32:01.070456 7 discovery.go:152] Backend default discovered pool sample-pool-02
ERROR: An HTTP request took too long to complete. Retry with --verbose to obtain debug information.
If you encounter this issue regularly because of slow network conditions, consider setting COMPOSE_HTTP_TIMEOUT to a higher value (current value: 60).
root@ubuntu:~# COMPOSE_HTTP_TIMEOUT=300
root@ubuntu:~# sudo docker-compose up
(The second attempt produced the same container logs as the first run, ending with the same HTTP timeout error.)
root@ubuntu:~# sudo COMPOSE_HTTP_TIMEOUT=300 docker-compose up
bill_osdsdb_1 is up-to-date
bill_osdsdock_1 is up-to-date
bill_osdslet_1 is up-to-date
Attaching to bill_osdsdb_1, bill_osdsdock_1, bill_osdslet_1
osdsdb_1 | 2018-05-22 19:16:00.598449 I | etcdmain: etcd Version: 3.3.5
osdsdb_1 | 2018-05-22 19:16:00.601260 I | etcdmain: Git SHA: 70c872620
osdsdb_1 | 2018-05-22 19:16:00.601263 I | etcdmain: Go Version: go1.9.6
osdsdb_1 | 2018-05-22 19:16:00.601264 I | etcdmain: Go OS/Arch: linux/amd64
osdsdock_1 | I0522 12:16:00.925956 7 server.go:81] Dock server initialized! Start listening on port:[::]:50050
osdsdock_1 | I0522 12:16:00.929136 7 discovery.go:152] Backend default discovered pool sample-pool-01
osdsdock_1 | I0522 12:16:00.929197 7 discovery.go:152] Backend default discovered pool sample-pool-02
osdsdb_1 | 2018-05-22 19:16:00.601267 I | etcdmain: setting maximum number of CPUs to 1, total number of available CPUs is 1
osdsdb_1 | 2018-05-22 19:16:00.601273 W | etcdmain: no data-dir provided, using default data-dir ./default.etcd
osdsdb_1 | 2018-05-22 19:16:00.603388 I | embed: listening for peers on http://localhost:2380
osdsdb_1 | 2018-05-22 19:16:00.603416 I | embed: listening for client requests on localhost:2379
osdsdb_1 | 2018-05-22 19:16:00.605182 I | etcdserver: name = default
osdsdb_1 | 2018-05-22 19:16:00.605191 I | etcdserver: data dir = default.etcd
osdsdb_1 | 2018-05-22 19:16:00.605194 I | etcdserver: member dir = default.etcd/member
osdsdb_1 | 2018-05-22 19:16:00.605196 I | etcdserver: heartbeat = 100ms
osdsdock_1 | I0522 12:17:00.935296 7 discovery.go:152] Backend default discovered pool sample-pool-01
osdsdock_1 | I0522 12:17:00.935500 7 discovery.go:152] Backend default discovered pool sample-pool-02
osdsdock_1 | I0522 12:18:00.939719 7 discovery.go:152] Backend default discovered pool sample-pool-01
osdsdock_1 | I0522 12:18:00.939811 7 discovery.go:152] Backend default discovered pool sample-pool-02
osdsdock_1 | I0522 12:19:00.947580 7 discovery.go:152] Backend default discovered pool sample-pool-01
osdsdb_1 | 2018-05-22 19:16:00.605198 I | etcdserver: election = 1000ms
osdsdb_1 | 2018-05-22 19:16:00.605200 I | etcdserver: snapshot count = 100000
osdsdb_1 | 2018-05-22 19:16:00.605205 I | etcdserver: advertise client URLs = http://localhost:2379
osdsdb_1 | 2018-05-22 19:16:00.605207 I | etcdserver: initial advertise peer URLs = http://localhost:2380
osdsdb_1 | 2018-05-22 19:16:00.605211 I | etcdserver: initial cluster = default=http://localhost:2380
osdsdb_1 | 2018-05-22 19:16:00.608813 I | etcdserver: starting member 8e9e05c52164694d in cluster cdf818194e3a8c32
osdsdb_1 | 2018-05-22 19:16:00.608830 I | raft: 8e9e05c52164694d became follower at term 0
osdsdb_1 | 2018-05-22 19:16:00.608836 I | raft: newRaft 8e9e05c52164694d [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
osdsdb_1 | 2018-05-22 19:16:00.608838 I | raft: 8e9e05c52164694d became follower at term 1
osdsdb_1 | 2018-05-22 19:16:00.612974 W | auth: simple token is not cryptographically signed
osdslet_1 | I0522 19:16:00.831572 1 auth.go:49] noauth
osdslet_1 | I0522 19:16:00.831655 1 auth.go:58] &{}
osdslet_1 | 2018/05/22 19:16:00.834 [I] http server Running on http://0.0.0.0:50040
osdsdock_1 | I0522 12:19:00.947895 7 discovery.go:152] Backend default discovered pool sample-pool-02
osdsdock_1 | I0522 12:20:00.957388 7 discovery.go:152] Backend default discovered pool sample-pool-01
osdsdock_1 | I0522 12:20:00.957818 7 discovery.go:152] Backend default discovered pool sample-pool-02
osdsdock_1 | I0522 12:21:00.979851 7 discovery.go:152] Backend default discovered pool sample-pool-01
osdsdock_1 | I0522 12:21:00.979946 7 discovery.go:152] Backend default discovered pool sample-pool-02
osdsdock_1 | I0522 12:22:00.981566 7 discovery.go:152] Backend default discovered pool sample-pool-01
osdsdb_1 | 2018-05-22 19:16:00.613485 I | etcdserver: starting server... [version: 3.3.5, cluster version: to_be_decided]
osdsdb_1 | 2018-05-22 19:16:00.616458 I | etcdserver: 8e9e05c52164694d as single-node; fast-forwarding 9 ticks (election ticks 10)
osdsdb_1 | 2018-05-22 19:16:00.618103 I | etcdserver/membership: added member 8e9e05c52164694d [http://localhost:2380] to cluster cdf818194e3a8c32
osdsdb_1 | 2018-05-22 19:16:00.809034 I | raft: 8e9e05c52164694d is starting a new election at term 1
osdsdb_1 | 2018-05-22 19:16:00.809117 I | raft: 8e9e05c52164694d became candidate at term 2
osdsdock_1 | I0522 12:22:00.981645 7 discovery.go:152] Backend default discovered pool sample-pool-02
osdsdock_1 | I0522 12:23:00.983995 7 discovery.go:152] Backend default discovered pool sample-pool-01
osdsdock_1 | I0522 12:23:00.984080 7 discovery.go:152] Backend default discovered pool sample-pool-02
osdsdock_1 | I0522 12:24:00.986451 7 discovery.go:152] Backend default discovered pool sample-pool-01
osdsdock_1 | I0522 12:24:00.986473 7 discovery.go:152] Backend default discovered pool sample-pool-02
osdsdb_1 | 2018-05-22 19:16:00.809144 I | raft: 8e9e05c52164694d received MsgVoteResp from 8e9e05c52164694d at term 2
osdsdb_1 | 2018-05-22 19:16:00.809167 I | raft: 8e9e05c52164694d became leader at term 2
osdsdb_1 | 2018-05-22 19:16:00.809183 I | raft: raft.node: 8e9e05c52164694d elected leader 8e9e05c52164694d at term 2
osdsdb_1 | 2018-05-22 19:16:00.809499 I | etcdserver: published {Name:default ClientURLs:[http://localhost:2379]} to cluster cdf818194e3a8c32
osdsdb_1 | 2018-05-22 19:16:00.809585 I | etcdserver: setting up the initial cluster version to 3.3
osdsdock_1 | I0522 12:25:00.989864 7 discovery.go:152] Backend default discovered pool sample-pool-01
osdsdock_1 | I0522 12:25:00.990066 7 discovery.go:152] Backend default discovered pool sample-pool-02
osdsdock_1 | I0522 12:26:00.993738 7 discovery.go:152] Backend default discovered pool sample-pool-01
osdsdock_1 | I0522 12:26:00.993867 7 discovery.go:152] Backend default discovered pool sample-pool-02
osdsdock_1 | I0522 12:27:00.997656 7 discovery.go:152] Backend default discovered pool sample-pool-01
osdsdb_1 | 2018-05-22 19:16:00.809758 I | embed: ready to serve client requests
osdsdb_1 | 2018-05-22 19:16:00.811977 N | embed: serving insecure client requests on 127.0.0.1:2379, this is strongly discouraged!
osdsdb_1 | 2018-05-22 19:16:00.813688 N | etcdserver/membership: set the initial cluster version to 3.3
osdsdb_1 | 2018-05-22 19:16:00.813781 I | etcdserver/api: enabled capabilities for version 3.3
osdsdock_1 | I0522 12:27:00.997714 7 discovery.go:152] Backend default discovered pool sample-pool-02
osdsdock_1 | I0522 12:28:01.000436 7 discovery.go:152] Backend default discovered pool sample-pool-01
osdsdock_1 | I0522 12:28:01.000520 7 discovery.go:152] Backend default discovered pool sample-pool-02
osdsdock_1 | I0522 12:29:01.005855 7 discovery.go:152] Backend default discovered pool sample-pool-01
osdsdock_1 | I0522 12:29:01.009141 7 discovery.go:152] Backend default discovered pool sample-pool-02
osdsdock_1 | I0522 12:30:01.064180 7 discovery.go:152] Backend default discovered pool sample-pool-01
osdsdock_1 | I0522 12:30:01.064266 7 discovery.go:152] Backend default discovered pool sample-pool-02
osdsdock_1 | I0522 12:31:01.067030 7 discovery.go:152] Backend default discovered pool sample-pool-01
osdsdock_1 | I0522 12:31:01.067057 7 discovery.go:152] Backend default discovered pool sample-pool-02
osdsdock_1 | I0522 12:32:01.070350 7 discovery.go:152] Backend default discovered pool sample-pool-01
osdsdock_1 | I0522 12:32:01.070456 7 discovery.go:152] Backend default discovered pool sample-pool-02
osdsdock_1 | I0522 12:33:01.077091 7 discovery.go:152] Backend default discovered pool sample-pool-01
osdsdock_1 | I0522 12:33:01.083595 7 discovery.go:152] Backend default discovered pool sample-pool-02
osdsdock_1 | I0522 12:34:01.114775 7 discovery.go:152] Backend default discovered pool sample-pool-01
osdsdock_1 | I0522 12:34:01.114958 7 discovery.go:152] Backend default discovered pool sample-pool-02
osdsdock_1 | I0522 12:35:01.117435 7 discovery.go:152] Backend default discovered pool sample-pool-01
osdsdock_1 | I0522 12:35:01.117502 7 discovery.go:152] Backend default discovered pool sample-pool-02
osdsdock_1 | I0522 12:36:01.120567 7 discovery.go:152] Backend default discovered pool sample-pool-01
osdsdock_1 | I0522 12:36:01.120702 7 discovery.go:152] Backend default discovered pool sample-pool-02
osdsdock_1 | I0522 12:37:01.125978 7 discovery.go:152] Backend default discovered pool sample-pool-01
osdsdock_1 | I0522 12:37:01.126012 7 discovery.go:152] Backend default discovered pool sample-pool-02
osdsdock_1 | I0522 12:38:01.128279 7 discovery.go:152] Backend default discovered pool sample-pool-01
osdsdock_1 | I0522 12:38:01.128300 7 discovery.go:152] Backend default discovered pool sample-pool-02
osdsdock_1 | I0522 12:39:01.130688 7 discovery.go:152] Backend default discovered pool sample-pool-01
osdsdock_1 | I0522 12:39:01.130711 7 discovery.go:152] Backend default discovered pool sample-pool-02
ERROR: An HTTP request took too long to complete. Retry with --verbose to obtain debug information.
If you encounter this issue regularly because of slow network conditions, consider setting COMPOSE_HTTP_TIMEOUT to a higher value (current value: 300).
What you expected to happen:
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
Environment:
uname -a
): Ubuntu 16.04
Is this a BUG REPORT or FEATURE REQUEST?:
/kind bug
What happened:
When the Ceph backend is enabled in the Kubernetes CSI scenario, there are some bugs in the InitializeConnection
method shown below:
func (d *Driver) InitializeConnection(opt *pb.CreateAttachmentOpts) (*model.ConnectionInfo, error) {
	vol, err := d.PullVolume(opt.GetVolumeId())
	if err != nil {
		log.Error("When get image:", err)
		return nil, err
	}
	return &model.ConnectionInfo{
		DriverVolumeType: "rbd",
		ConnectionData: map[string]interface{}{
			"secret_type":  "ceph",
			"name":         "rbd/" + opensdsPrefix + ":" + vol.Name + ":" + vol.Id,
			"cluster_name": "ceph",
			"hosts":        []string{opt.GetHostInfo().Host},
			"volume_id":    vol.Id,
			"access_mode":  "rw",
			"ports":        []string{"6789"},
		},
	}, nil
}
The PullVolume method has not been implemented, so the system returns an error; in addition, the rbd image pool can't be pre-defined.
What you expected to happen:
Fix the bugs above.
How to reproduce it (as minimally and precisely as possible):
Here are error logs:
root@test:/var/log/opensds# cat osdsdock.test.root.log.ERROR.20180318-123908.17880
Log file created at: 2018/03/18 12:39:08
Running on machine: test
Binary: Built with gc go1.9.2 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
E0318 12:39:08.190890 17880 context.go:44] {"auth_token":"","user_id":"","tenant_id":"ef305038-cd12-4f3b-90bd-0612f83e14ee","domain_id":"","user_domain_id":"","project_domain_id":"","is_admin":true,"read_only":"","show_deleted":"","request_id":"","resource_uuid":"","overwrite":"","roles":null,"user_name":"","project_name":"","domain_name":"","user_domain_name":"","project_domain_name":"","is_admin_project":false,"service_token":"","service_user_id":"","service_user_name":"","service_user_domain_id":"","service_user_domain_name":"","service_project_id":"","service_project_name":"","service_project_domain_id":"","service_project_domain_name":"","service_roles":"","token":"","uri":"/v1beta/ef305038-cd12-4f3b-90bd-0612f83e14ee/block/volumes"}
E0318 12:40:14.736517 17880 ceph.go:319] When get image:Ceph PullVolume has not implemented yet.
E0318 12:40:14.736663 17880 dock.go:165] Call driver to initialize volume connection failed:Ceph PullVolume has not implemented yet.
E0318 12:40:14.736682 17880 server.go:138] Error occurred in dock module when create volume attachment:Ceph PullVolume has not implemented yet.
E0318 12:40:19.268962 17880 ceph.go:319] When get image:Ceph PullVolume has not implemented yet.
E0318 12:40:19.268996 17880 dock.go:165] Call driver to initialize volume connection failed:Ceph PullVolume has not implemented yet.
E0318 12:40:19.269007 17880 server.go:138] Error occurred in dock module when create volume attachment:Ceph PullVolume has not implemented yet.
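Until PullVolume is implemented, one way to keep the attach path working is to assemble the rbd connection data from values already present in the attachment options. A minimal sketch under that assumption (the helper name and parameters are hypothetical, not the driver's real API):

```go
package main

import "fmt"

// buildConnectionData is a hypothetical helper: it assembles the rbd
// connection map from values already carried by the attachment options,
// avoiding the unimplemented PullVolume round-trip.
func buildConnectionData(pool, prefix, volName, volID, host string) map[string]interface{} {
	return map[string]interface{}{
		"secret_type":  "ceph",
		"name":         pool + "/" + prefix + ":" + volName + ":" + volID,
		"cluster_name": "ceph",
		"hosts":        []string{host},
		"volume_id":    volID,
		"access_mode":  "rw",
		"ports":        []string{"6789"},
	}
}

func main() {
	data := buildConnectionData("rbd", "opensds", "vol01", "uuid-123", "node-1")
	fmt.Println(data["name"])
}
```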
Anything else we need to know?:
Environment:
uname -a
):
Is this a BUG REPORT or FEATURE REQUEST?:
/kind feature
What you expected to happen:
According to #60, "automatic deployment and installation become more and more important in a distributed system cluster. As a cloud-native storage system, opensds controller project MUST support (at least one of) automatic deployment
I'm interested in storage management, backup/restore, and SDS standards (e.g. DMTF SMI-S, OpenLMI).
I want to work on automatic deployment of OpenSDS via saltstack.
Do we have list of use case to satisfy?
Anything else we need to know?:
I'm thinking of publishing an opensds-formula upstream, and integration instructions can go here.
Environment:
uname -a
): any supported
Is this a BUG REPORT or FEATURE REQUEST?:
/kind bug
What happened:
From the code in the api package, only one parameter (snapshot_id) is needed when calling the DeleteVolumeSnapshot function, but currently it still requires two parameters (volume_id and snapshot_id) to delete a volume snapshot, which is unnecessary and confuses users.
What you expected to happen:
We should remove the volume_id parameter and leave only snapshot_id when calling DeleteVolumeSnapshot in the osdsctl tool.
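The expected client shape can be sketched as follows: the delete call keys off snapshot_id alone and the owning volume is resolved internally. All names below are illustrative, not the real osdsctl code:

```go
package main

import (
	"errors"
	"fmt"
)

// snapshotOwner stands in for the server-side lookup that maps a
// snapshot to its volume, so the client no longer has to pass volume_id.
var snapshotOwner = map[string]string{"snap-001": "vol-abc"}

// DeleteVolumeSnapshot takes only the snapshot id; the volume id is
// looked up internally (hypothetical sketch, not the real client API).
func DeleteVolumeSnapshot(snapshotID string) error {
	volID, ok := snapshotOwner[snapshotID]
	if !ok {
		return errors.New("snapshot not found: " + snapshotID)
	}
	delete(snapshotOwner, snapshotID)
	fmt.Printf("deleted snapshot %s of volume %s\n", snapshotID, volID)
	return nil
}

func main() {
	if err := DeleteVolumeSnapshot("snap-001"); err != nil {
		fmt.Println("error:", err)
	}
}
```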
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
Environment:
uname -a
):
Is this a BUG REPORT or FEATURE REQUEST?:
/kind feature
What happened:
Helm is a popular package manager in Kubernetes. For easier deployment and operation, we should develop opensds charts to support helm.
What you expected to happen:
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
Environment:
uname -a
):
Is this a BUG REPORT or FEATURE REQUEST?:
Uncomment only one, leave it on its own line:
/kind bug
What happened:
When the volume is expanded, if the new size is not greater than the old size, the error message is not written to this.Ctx.Output.Body.
What you expected to happen:
When the volume is expanded, if the new size is not greater than the old size, the error message should be written to this.Ctx.Output.Body.
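The check itself can return the message to be written into the response body. A minimal sketch (names are hypothetical; in the controller the string would go to this.Ctx.Output.Body):

```go
package main

import "fmt"

// checkExtendSize validates an extend request and returns the message
// that should be written to the HTTP response body on failure.
func checkExtendSize(newSize, oldSize int64) error {
	if newSize <= oldSize {
		return fmt.Errorf("new size(%d) <= old size(%d), extend failed", newSize, oldSize)
	}
	return nil
}

func main() {
	if err := checkExtendSize(10, 20); err != nil {
		// In the controller this string would be written to
		// this.Ctx.Output.Body instead of just printed.
		fmt.Println(err)
	}
}
```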
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
None
Environment:
uname -a
): None
Is this a BUG REPORT or FEATURE REQUEST?:
Uncomment only one, leave it on its own line:
/kind feature
What happened:
What you expected to happen:
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
Implement the required functionality by modifying filter.go and selector.go.
Environment:
uname -a
):
As we all know, Kubernetes provides a storage framework for the container orchestration layer.
When users create a PVC, the Kubernetes controller retrieves a PV from backends through a "Provisioner" interface. But right now each backend has its own provisioner (nfs, efs, cephfs and so on), and it is unbearable for vendors to maintain all provisioners if they want to support multiple backends.
To solve this problem, we launched proposal #28 about designing a unified provisioner for different backend storage types. Please feel free to talk if you have any questions : )
Is this a BUG REPORT or FEATURE REQUEST?:
/kind feature
What happened:
Standard methods for REST are List, Get, Create, Update and Delete. Specifically, the List method takes a collection name and zero or more parameters as input and returns a list of resources that match the input. In the current version, the OpenSDS RESTful API doesn't support pagination, sorting, criteria queries, or obtaining the total item count.
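A sketch of parsing the List parameters this issue asks for; limit, offset and sortKey are assumed parameter names, not the final API:

```go
package main

import (
	"fmt"
	"net/url"
	"strconv"
)

// listParams holds hypothetical pagination/sorting inputs for List.
type listParams struct {
	Limit   int
	Offset  int
	SortKey string
}

// parseListParams reads pagination and sorting values from a query
// string, falling back to defaults when a key is absent or invalid.
func parseListParams(query string) listParams {
	p := listParams{Limit: 50, Offset: 0, SortKey: "id"}
	v, err := url.ParseQuery(query)
	if err != nil {
		return p
	}
	if n, err := strconv.Atoi(v.Get("limit")); err == nil && n > 0 {
		p.Limit = n
	}
	if n, err := strconv.Atoi(v.Get("offset")); err == nil && n >= 0 {
		p.Offset = n
	}
	if s := v.Get("sortKey"); s != "" {
		p.SortKey = s
	}
	return p
}

func main() {
	fmt.Printf("%+v\n", parseListParams("limit=10&offset=20&sortKey=name"))
}
```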
What you expected to happen:
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
Environment:
uname -a
):
Is this a BUG REPORT or FEATURE REQUEST?:
BUG
What happened:
A few minutes ago, when I built the opensds project with LiteIDE, it reported an error:
.\osdsdock.go:88: undefined: server.ListenAndServe
It's caused by a programming error, but there is no suitable protective measure to help catch it.
What you expected to happen:
I expect to add a build step to CI to prevent compilation errors.
Is this a BUG REPORT or FEATURE REQUEST?:
/kind bug
What happened:
Currently there is no mapping between OpenSDS profile and Cinder volume_type.
What you expected to happen:
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
Environment:
uname -a
):
Is this a BUG REPORT or FEATURE REQUEST?:
Uncomment only one, leave it on its own line:
/kind bug
What happened:
When extending a volume, OpenSDS did not check the upper limit of the new size of the volume.
What you expected to happen:
When expanding a volume, OpenSDS needs to check the upper limit of the new size of the volume.
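A hedged sketch of the missing bound check; where the limit comes from (pool free capacity, quota) is an assumption:

```go
package main

import "fmt"

// checkUpperLimit rejects an extend request whose new size exceeds the
// available capacity. maxSizeGB is a stand-in for whatever bound the
// pool or quota actually provides.
func checkUpperLimit(newSizeGB, maxSizeGB int64) error {
	if newSizeGB > maxSizeGB {
		return fmt.Errorf("requested size %d GiB exceeds limit %d GiB", newSizeGB, maxSizeGB)
	}
	return nil
}

func main() {
	fmt.Println(checkUpperLimit(500, 100))
}
```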
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
None
Environment:
uname -a
):
As a storage management and orchestration layer in a multi-cloud environment, it's vital for the opensds controller project to enlarge its southbound ecosystem. Currently we have Ceph and Cinder drivers, which represent storage systems, but we are still missing something in storage device management.
To solve this problem, we are trying to add an LVM storage device driver, which can be considered a starting point. The reason for pushing this suggestion out is to prove that opensds will eventually become a storage controller that manages different kinds of storage resources.
IMO, this design should be finished before the alpha release, so I think we should start it ASAP if nobody objects.
Currently the opensds controller project doesn't have any CLI tool, which is vital for any large system, so it's urgent for us to design a CLI tool before the alpha release.
My initial thought is to build our CLI tool on top of a framework such as cobra, cli and so on, and I suggest we choose cobra for its ease of use and popularity. We can create a package called cli
and make it call the client package (see #102) to send requests to the opensds service.
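The command tree such a tool needs can be sketched with only the standard library; cobra would replace the manual dispatch below and generate help text. All command names are illustrative:

```go
package main

import (
	"fmt"
	"os"
)

// dispatch routes "osdsctl volume create|delete"-style arguments to
// handlers; cobra would generate this plumbing plus usage output.
func dispatch(args []string) string {
	if len(args) < 2 {
		return "usage: osdsctl <resource> <action>"
	}
	switch args[0] + " " + args[1] {
	case "volume create":
		return "creating volume"
	case "volume delete":
		return "deleting volume"
	default:
		return "unknown command: " + args[0] + " " + args[1]
	}
}

func main() {
	fmt.Println(dispatch(os.Args[1:]))
}
```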
Is this a BUG REPORT or FEATURE REQUEST?:
help wanted
What happened:
Currently, the opensds project has no roadmap and api panorama.
What you expected to happen:
I expect to discuss and draw up the roadmap and api panorama of the opensds project together.
Anything else we need to know?:
Please refer to the previous discussion on opensds-dev
Is this a BUG REPORT or FEATURE REQUEST?:
/kind feature
What happened:
In the current version, the OpenSDS doesn't support "Extend Volume".
What you expected to happen:
Extends the size of a volume to a requested size, in gibibytes (GiB).
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
List of work items:
uname -a
):
According to the work tracking, it's urgent to support more storage drivers. After discussion in the weekly meeting, we are expected to support more than two drivers besides the Ceph driver, and the Cinder driver is one of them at the moment.
As you can see from some discussions in the OpenStack community, they plan to adopt gophercloud as their official golang client, which means we could develop our cinder driver based on this library. Besides, this project supports the latest version of the OpenStack API and is under active development, so we don't need to worry a lot about maintaining it.
Any thoughts?
As an SDS controller, it's essential for OpenSDS to build the ecosystem of its southbound interface.
In the first stage, our strategy was to quickly make up for what we lack with the help of OpenStack (Cinder, Manila). Now it's time to move to the next stage, where we should make our own ecosystem competitive.
So we drafted proposal #30, aiming to make OpenSDS better!
Is this a BUG REPORT or FEATURE REQUEST?:
/kind feature
What happened:
It was a great move that we could set up an opensds cluster with one command using ansible, but currently we only support installing opensds daemon services from source code, so a lot of time is wasted on downloading and building. We should try to improve this and provide a better user experience to end users.
What you expected to happen:
Since we have already supported installing the opensds service in a container manually, it's time to integrate that into the ansible script so that all this work can be done automatically.
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
Environment:
uname -a
):
From the current version of the OpenSDS southbound interface, we found that mount and unmount are host operations, and the MountVolume and UnmountVolume methods should be executed by the OpenSDS Hub module rather than backend drivers, so they should be removed from the southbound interface.
So we drafted proposals #33 and #35, aiming to make the southbound interface easier for vendors to use by redesigning it.
Please speak freely if you have any questions : )
IMO, testing plays a very important role in code development, which means that every method and even structure should be validated by testing code.
But here is what our CI told us:
$ go test github.com/opensds/opensds/cmd/osdslet -cover && go test github.com/opensds/opensds/cmd/osdsdock -cover
? github.com/opensds/opensds/cmd/osdslet [no test files]
? github.com/opensds/opensds/cmd/osdsdock [no test files]
The command "go test github.com/opensds/opensds/cmd/osdslet -cover && go test github.com/opensds/opensds/cmd/osdsdock -cover" exited with 0.
2.58s$ go test github.com/opensds/opensds/pkg/... -cover
? github.com/opensds/opensds/pkg/api [no test files]
ok github.com/opensds/opensds/pkg/controller 0.019s coverage: 24.7% of statements
? github.com/opensds/opensds/pkg/controller/policy [no test files]
? github.com/opensds/opensds/pkg/controller/policy/executor [no test files]
? github.com/opensds/opensds/pkg/controller/volume [no test files]
? github.com/opensds/opensds/pkg/db [no test files]
ok github.com/opensds/opensds/pkg/db/drivers/etcd 0.016s coverage: 0.3% of statements
? github.com/opensds/opensds/pkg/db/drivers/mysql [no test files]
? github.com/opensds/opensds/pkg/dock [no test files]
? github.com/opensds/opensds/pkg/dock/api [no test files]
? github.com/opensds/opensds/pkg/grpc/dock/client [no test files]
? github.com/opensds/opensds/pkg/grpc/dock/server [no test files]
? github.com/opensds/opensds/pkg/grpc/opensds [no test files]
? github.com/opensds/opensds/pkg/model [no test files]
ok github.com/opensds/opensds/pkg/utils 0.010s coverage: 12.1% of statements
The command "go test github.com/opensds/opensds/pkg/... -cover" exited with 0.
From the output above, it's obvious that we are still at a very early stage in testing. To make our code stronger and more convincing, we have to make up for the lack of testing and raise the coverage to a passing level before our first release.
Any thoughts?
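To raise coverage, plain table-driven tests go a long way. A generic sketch; the function under test is made up for illustration, and the same table would normally live in a *_test.go file driven by testing.T:

```go
package main

import "fmt"

// sizeToBytes is a made-up helper standing in for any small util worth
// covering; it converts GiB to bytes.
func sizeToBytes(gib int64) int64 { return gib * 1024 * 1024 * 1024 }

func main() {
	// Table-driven cases: each row pairs an input with its expected
	// output, so adding coverage is just adding rows.
	cases := []struct{ in, want int64 }{
		{0, 0},
		{1, 1073741824},
		{2, 2147483648},
	}
	for _, c := range cases {
		if got := sizeToBytes(c.in); got != c.want {
			fmt.Printf("sizeToBytes(%d) = %d, want %d\n", c.in, got, c.want)
		}
	}
	fmt.Println("all cases checked")
}
```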
Is this a BUG REPORT or FEATURE REQUEST?:
feature
enhancement
What happened:
The opensds controller only supports the MySQL database.
What you expected to happen:
I expect to support multiple databases such as MySQL, SQLite, PostgreSQL etc., by adding an ORM library.
Anything else we need to know?:
There are some libraries that implement ORM or data-mapping techniques:
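With database/sql the backend choice largely reduces to a driver name plus a DSN, and an ORM layers model mapping on top of the same pair. A sketch; the driver names follow common Go drivers, and the DSN shapes are illustrative:

```go
package main

import (
	"errors"
	"fmt"
)

// dsnFor returns a database/sql driver name and a sample DSN for a
// configured backend; an ORM layer would consume the same pair.
func dsnFor(backend, user, pass, host, db string) (driver, dsn string, err error) {
	switch backend {
	case "mysql":
		return "mysql", fmt.Sprintf("%s:%s@tcp(%s)/%s", user, pass, host, db), nil
	case "postgres":
		return "postgres", fmt.Sprintf("postgres://%s:%s@%s/%s", user, pass, host, db), nil
	case "sqlite":
		return "sqlite3", db + ".db", nil
	default:
		return "", "", errors.New("unsupported backend: " + backend)
	}
}

func main() {
	d, dsn, _ := dsnFor("mysql", "root", "secret", "127.0.0.1:3306", "opensds")
	fmt.Println(d, dsn)
}
```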
Is this a BUG REPORT or FEATURE REQUEST?:
Uncomment only one, leave it on its own line:
/kind bug
What happened:
According to our previous discussion, we should use tenant id instead of project id, so I submit this issue to finish it.
What you expected to happen:
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
Environment:
uname -a
):
Is this a BUG REPORT or FEATURE REQUEST?:
/kind bug
What happened:
In the first stage of project development, we ignored some potential dangers from external projects with unfriendly licenses. Now we should be aware that this could cause large damage in terms of intellectual property.
What you expected to happen:
Search all third-party projects and remove those with unfriendly licenses; please note that this could result in a lot of code changes.
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
Environment:
uname -a
):
Is this a BUG REPORT or FEATURE REQUEST?:
Uncomment only one, leave it on its own line:
/kind bug
/kind feature
What happened: No documentation explaining how to use OpenSDS with Dell EMC VMAX or Unity
What you expected to happen: Simple guide added to docs section, explaining how to configure OpenSDS with VMAX or Unity for users who wish to begin using OpenSDS.
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?: There is a wide user base of VMAX and Unity users who are increasingly deploying container workloads, and OpenSDS is a great choice for these users. However, there are currently no instructions to introduce new users and enable them to begin using OpenSDS with VMAX or Unity arrays.
Environment:
uname -a
): Current stable (recent 4.x)
Is this a BUG REPORT or FEATURE REQUEST?:
Uncomment only one, leave it on its own line:
/kind bug
What happened:
When osdslet or osdsdock starts up, it loads the config file first; if some item is not set in the config file, the system uses the default value and records a warning in the log file. The config structure is loaded twice, once for default values and once for the real config file, so these log records confuse developers. It would be better to remove them.
The log records look like the ones below:
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
W0208 16:28:13.325458 24662 config.go:132] Get key(osdslet.api_endpoint) failed, using default key(0.0.0.0:50040).
W0208 16:28:13.325721 24662 config.go:132] Get key(osdslet.graceful) failed, using default key(True).
W0208 16:28:13.325731 24662 config.go:132] Get key(osdslet.socket_order) failed, using default key(inc).
W0208 16:28:13.325739 24662 config.go:132] Get key(osdslet.daemon) failed, using default key(false).
W0208 16:28:13.325747 24662 config.go:132] Get key(osdsdock.api_endpoint) failed, using default key(localhost:50050).
W0208 16:28:13.325754 24662 config.go:132] Get key(osdsdock.enabled_backends) failed, using default key(lvm).
W0208 16:28:13.325763 24662 config.go:132] Get key(osdsdock.daemon) failed, using default key(false).
W0208 16:28:13.325774 24662 config.go:132] Get key(ceph.name) failed, using default key().
W0208 16:28:13.325782 24662 config.go:132] Get key(ceph.description) failed, using default key().
W0208 16:28:13.325789 24662 config.go:132] Get key(ceph.driver_name) failed, using default key().
W0208 16:28:13.325796 24662 config.go:132] Get key(ceph.config_path) failed, using default key().
W0208 16:28:13.325804 24662 config.go:132] Get key(cinder.name) failed, using default key().
W0208 16:28:13.325811 24662 config.go:132] Get key(cinder.description) failed, using default key().
W0208 16:28:13.325818 24662 config.go:132] Get key(cinder.driver_name) failed, using default key().
W0208 16:28:13.325825 24662 config.go:132] Get key(cinder.config_path) failed, using default key().
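The fix can be as simple as falling back silently, since missing keys are expected during the defaults-only load pass. A sketch, with the config represented as a flat map purely for illustration:

```go
package main

import "fmt"

// getOrDefault looks up a config key and falls back to the default
// without emitting a warning, because absent keys are a normal case
// during the first (defaults-only) load pass.
func getOrDefault(conf map[string]string, key, def string) string {
	if v, ok := conf[key]; ok && v != "" {
		return v
	}
	return def
}

func main() {
	conf := map[string]string{"osdslet.api_endpoint": "0.0.0.0:50040"}
	fmt.Println(getOrDefault(conf, "osdslet.api_endpoint", "127.0.0.1:50040"))
	fmt.Println(getOrDefault(conf, "osdsdock.api_endpoint", "localhost:50050"))
}
```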
What you expected to happen:
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
Environment:
uname -a
):
Is this a BUG REPORT or FEATURE REQUEST?:
/kind feature
Design proposal is submitted here: sodafoundation/architecture-analysis#4
What happened:
There is no mechanism to show the real-time status and state transitions of volume, snapshot, attachment, dock and pool in the current opensds version.
What you expected to happen:
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
Environment:
uname -a
):
Is this a BUG REPORT or FEATURE REQUEST?:
Uncomment only one, leave it on its own line:
/kind bug
What happened:
In the current version the ceph driver doesn't support a user-defined image pool; it only uses the default pool rbd, so it is not flexible.
What you expected to happen:
Add a user-defined pool feature.
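A minimal sketch of the feature: read the pool from configuration and keep "rbd" as the default for backward compatibility. The config field and helper name are hypothetical:

```go
package main

import "fmt"

// imageName builds the full rbd image path from a configurable pool;
// "rbd" stays the default when nothing is configured, preserving the
// current behaviour.
func imageName(configuredPool, image string) string {
	pool := configuredPool
	if pool == "" {
		pool = "rbd"
	}
	return pool + "/" + image
}

func main() {
	fmt.Println(imageName("", "volume-001"))
	fmt.Println(imageName("fastpool", "volume-001"))
}
```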
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
Environment:
uname -a
):
As a collaborative project, the opensds controller project will work with other projects such as OpenStack, Kubernetes and so forth. To integrate better with nbp and other Golang projects, it's necessary to design a client package for the opensds controller project to set up connections with the opensds service.
Since we have adopted beego as the framework of the api-server, I suggest we also use it to implement the http client module. Besides, we can utilize all the global modules (model, config, log etc.) to minimize the workload.
Any thoughts?
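One small piece such a client package needs is URL composition. A sketch; the v1beta path layout follows the URIs visible in the error logs above, but the helper itself is illustrative rather than the real client code:

```go
package main

import "fmt"

// resourceURL composes the request URL the client package would hit,
// following the /v1beta/<tenant>/block/<resource> pattern seen in the
// dock logs.
func resourceURL(endpoint, tenantID, resource string) string {
	return fmt.Sprintf("%s/v1beta/%s/block/%s", endpoint, tenantID, resource)
}

func main() {
	fmt.Println(resourceURL("http://127.0.0.1:50040", "tenant-1", "volumes"))
}
```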
Is this a BUG REPORT or FEATURE REQUEST?:
/kind bug
What happened:
In the lvm environment, it is possible to delete a volume even when snapshots exist. When a volume is deleted, its snapshots are also deleted, but the snapshots still exist in the OpenSDS DB.
Because cinder and ceph prevent deleting a volume if snapshots exist, this issue only happens with lvm.
What you expected to happen:
OpenSDS should prevent deleting a volume if snapshots exist.
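The guard mirrors what cinder and ceph already do. A sketch; in the real controller the snapshot count would come from a DB query:

```go
package main

import (
	"errors"
	"fmt"
)

// checkVolumeDeletable refuses deletion while snapshots still
// reference the volume, matching the cinder/ceph behaviour.
// snapshotCount would come from a DB lookup in the controller.
func checkVolumeDeletable(snapshotCount int) error {
	if snapshotCount > 0 {
		return errors.New("volume still has snapshots, delete them first")
	}
	return nil
}

func main() {
	fmt.Println(checkVolumeDeletable(2))
	fmt.Println(checkVolumeDeletable(0))
}
```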
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
Environment:
uname -a
):