
Shock's Issues

Docker build instructions

At least for me, the docker build command in README.md didn't work:

sudo docker build -t shock:latest docker/Dockerfile 
unable to prepare context: context must be a directory: /home/[bla]/localgit/Shock/docker/Dockerfile

This does though:

sudo docker build -t shock:latest docker
docker -v
Docker version 1.9.1, build a34a1d5

[Feature request] Add option to copy metadata along with file

Currently the copy operation only copies the file. Add an option to copy the metadata as well, keeping the version string constant between the copies. This allows you to check if you already have a copy of a node by querying on the file version and metadata version.

?verbosity=metadata

I've seen the ?verbosity=metadata query parameter when GETting a node in code in the wild, and I was just wondering what it does. I can't seem to find anything about it in the documentation.

Problems during building

I used option 1 to build Shock on CentOS 7, and the terminal gave the following errors:

[root@localhost gopath]# make install
go get -v github.com/MG-RAST/Shock/...
github.com/MG-RAST/Shock/shock-server/node
github.com/MG-RAST/Shock/vendor/gopkg.in/mgo.v2/internal/sasl
github.com/MG-RAST/Shock/vendor/gopkg.in/mgo.v2/internal/sasl
src/github.com/MG-RAST/Shock/vendor/gopkg.in/mgo.v2/internal/sasl/sasl.go:15:24: fatal error: sasl/sasl.h: No such file or directory
// #include <sasl/sasl.h>
^
compilation terminated.
github.com/MG-RAST/Shock/shock-server/node
src/github.com/MG-RAST/Shock/shock-server/node/db.go:31: cannot convert "priority" to type int
src/github.com/MG-RAST/Shock/shock-server/node/db.go:31: cannot use "priority" (type string) as type int in array or slice literal
make: *** [get] error 2

Can you help me solve this problem?

Best!

Need to modify mongo timeouts.

Shock should try connecting to mongo several times (with a 10 second timeout) at startup time. If it continues to fail, Shock should exit with an error. If it is successful, Shock should start a new mongo connection with a longer timeout (~5 minutes). This is necessary because the mgo library does not distinguish between a dial timeout and a rw timeout.
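
A minimal sketch of that startup logic with mgo (the retry count, host string, and timeouts here are placeholders, not Shock's actual configuration):

package db

import (
	"log"
	"time"

	mgo "gopkg.in/mgo.v2"
)

// connectMongo dials MongoDB with a short timeout, retrying a few times, and
// only lengthens the timeout once a session has been established, since mgo
// uses a single timeout for both dialing and socket reads/writes.
func connectMongo(host string) *mgo.Session {
	var session *mgo.Session
	var err error
	for attempt := 1; attempt <= 6; attempt++ { // retry count is illustrative
		session, err = mgo.DialWithTimeout(host, 10*time.Second)
		if err == nil {
			break
		}
		log.Printf("mongo dial attempt %d failed: %v", attempt, err)
	}
	if err != nil {
		log.Fatalf("giving up on MongoDB, exiting: %v", err)
	}
	session.SetSocketTimeout(5 * time.Minute) // longer rw timeout after the dial succeeds
	return session
}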

File download not working?

I'm unable to download a file. This returns the metadata for the node:

node/75ea6172-885c-4fe5-b99b-eed83fc824de?download

I've also tried with download=1. This is on a file I uploaded without any auth information.

return error on invalid queries

If a node query is done without using the keywords 'query' or 'querynode', the rest of the query string is ignored and all nodes are returned. The system should just return an error stating that the query type was not specified - or possibly default to assuming 'query' type.
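
A sketch of the kind of guard the node listing handler could apply (the function name and error text are illustrative, not Shock's actual code):

package node

import (
	"errors"
	"net/http"
)

// validateQueryType rejects query strings that carry filters without naming a
// query type, instead of silently ignoring them and returning all nodes.
func validateQueryType(r *http.Request) error {
	q := r.URL.Query()
	if len(q) == 0 {
		return nil // plain listing, nothing to validate
	}
	if _, ok := q["query"]; ok {
		return nil
	}
	if _, ok := q["querynode"]; ok {
		return nil
	}
	return errors.New("query type not specified: use 'query' or 'querynode'")
}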

Copy node

Feature request: add the ability to copy a node to which a user has read permission, creating a new node owned by that user. The API could be as simple as:

https:///node/{id}/copy

...which returns a new Shock node with the copied data but empty ACLs (except for the new owner).

The workaround is to pull the data and attributes and send them back to the server on a POST, but this approach avoids the round trip for the data.

Shock asks re upgrading nodes/acls even if DB is empty

As of 0.9.6, on startup, Shock always asks if you want to upgrade the nodes and acls to version 2, even if the database is empty. Naively, if the DB is empty Shock should just start with the most recent versions selected.
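
A sketch of that check with mgo (the collection name and helper are illustrative, not Shock's actual code):

package versions

import mgo "gopkg.in/mgo.v2"

// needsUpgradePrompt reports whether the operator should be asked about a
// schema migration; an empty collection can be stamped with the latest
// schema version silently.
func needsUpgradePrompt(db *mgo.Database, collection string) (bool, error) {
	n, err := db.C(collection).Count()
	if err != nil {
		return false, err
	}
	return n > 0, nil
}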

[Feature request] Expose metadata / acl / file versions

Per Jared, individual version information for the metadata, file, and ACLs is stored but not accessible via the API:

            "version_parts" : {
                "attributes_ver" : "37a6259cc0c1dae299a7866489dff0bd",
                "acl_ver" : "6d944fe19b5ac11f16d9ab9d39ab5c11",
                "file_ver" : "ced507f0e5833a048e07dbb5c18c0813"
            },

Make these visible so that a user can determine what portion of the node changed. The currently exposed version field is, IIUC, a version for the node as a whole.
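
A sketch of how the node response struct could surface them (field names copied from the stored document above; the struct itself is illustrative, not Shock's actual code):

package node

// VersionParts mirrors the per-section versions already stored on a node so
// they can be serialized in API responses.
type VersionParts struct {
	AttributesVer string `bson:"attributes_ver" json:"attributes_ver"`
	ACLVer        string `bson:"acl_ver" json:"acl_ver"`
	FileVer       string `bson:"file_ver" json:"file_ver"`
}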

Possible regression re inserting admins from conf on startup

There was a bug in Shock prior to version 0.8.24 such that administrators listed in the Shock conf files were not added to the database, and therefore had no admin privs, if the Users collection was empty. This was fixed in ab571f8, but 0.9.6 appears to have a regression:

kbtestuser@dev03:~$ /kb/deployment/bin/shock-server --conf /kb/deployment/services/shock_service/conf/shock.cfg
The ACL schema version in your database needs updating.  Would you like the update to run? (y/n): y
Updating ACL's to version: 2
ACL schema version update complete.
The Node schema version in your database needs updating.  Would you like the update to run? (y/n): y
Updating Nodes to version: 2
Node schema version update complete.

[Shock ASCII-art startup banner]
####### Anonymous ######
read:   false
write:  false
delete: false

##### Auth #####
type:   globus
token_url:  https://nexus.api.globusonline.org/goauth/token?grant_type=client_credentials
profile_url:    https://nexus.api.globusonline.org/users

##### Admin #####
users:  lolcatservice

##### Paths #####
site:   /mnt/Shock/site
data:   /mnt/Shock/data
logs:   /mnt/Shock/logs
local_paths:    

##### SSL disabled #####

##### Mongodb #####
host(s):    localhost
database:   ShockDB

##### Address #####
ip: 0.0.0.0
port:   7044

##### Log rotation disabled #####

##### Versions ####
name: ACL   version number: 2
name: Auth  version number: 1
name: Node  version number: 2

##### Procs #####
Number of available CPUs = 4
Running Shock server with GOMAXPROCS = 2

kbtestuser@dev03:~$ curl http://localhost:7044
{"attribute_indexes":[""],"contact":"[email protected]","documentation":"http://localhost:7044/wiki/","id":"Shock","auth":["globus"],"anonymous_permissions":{"read":false,"write":false,"delete":false},"resources":["node"],"server_time":"2015-11-21T13:56:27-08:00","type":"Shock","url":"http://localhost:7044/","version":"0.9.6"}

kbtestuser@dev03:~$ /kb/runtime/bin/mongo
MongoDB shell version: 2.4.3
connecting to: test
> show dbs
ShockDB 0.203125GB
local   0.078125GB
workspace   0.203125GB
workspace_types 0.203125GB
> use ShockDB
switched to db ShockDB
> db.getCollectionNames()
[ "Nodes", "PreAuth", "Users", "Versions", "system.indexes" ]
> db.Users.find()
> exit
bye

Prior to starting Shock, I dropped the ShockDB database from MongoDB.

Shock opens all file parts at once when closing files, can exceed FD limit

When closing a node created with a set of file parts, it appears as though Shock opens all the file parts at once rather than serially opening and closing each file (or perhaps there's just a missing close instruction). For large files with many parts this can cause an error by exceeding the file descriptor limit. Example:


In [8]: !curl --insecure -X POST -H "Authorization: OAuth $token" -F "parts=unknown" https://dev03.berkeley.kbase.us/services/shock-api/node
{"status":200,"data":{"id":"91ace5e9-6ab1-4fdf-b86b-0f03bd9d7d91","version":"5522419bcbbc023ec6af9cf205c138d8","file":{"name":"","size":0,"checksum":{},"format":"","virtual":false,"virtual_parts":null},"attributes":null,"indexes":{},"tags":null,"linkages":null,"created_on":"2016-01-21T14:13:26.691215595-08:00","last_modified":"0001-01-01T00:00:00Z","type":"parts"},"error":null}
In [9]: !curl --insecure -X PUT -H "Authorization: OAuth $token" -F "[email protected]" https://dev03.berkeley.kbase.us/services/shock-api/node/91ace5e9-6ab1-4fdf-b86b-0f03bd9d7d91
{"status":200,"data":{"id":"91ace5e9-6ab1-4fdf-b86b-0f03bd9d7d91","version":"5522419bcbbc023ec6af9cf205c138d8","file":{"name":"","size":0,"checksum":{},"format":"","virtual":false,"virtual_parts":[]},"attributes":null,"indexes":{},"tags":[],"linkages":[],"created_on":"2016-01-21T14:13:26.691-08:00","last_modified":"0001-01-01T00:00:00Z","type":"parts"},"error":null}
In [10]: for i in xrange(2, 2000):
   ....:     !curl --insecure -X PUT -H "Authorization: OAuth $token" -F "[email protected]" https://dev03.berkeley.kbase.us/services/shock-api/node/91ace5e9-6ab1-4fdf-b86b-0f03bd9d7d91
   ....:     
{"status":200,"data":{"id":"91ace5e9-6ab1-4fdf-b86b-0f03bd9d7d91","version":"5522419bcbbc023ec6af9cf205c138d8","file":{"name":"","size":0,"checksum":{},"format":"","virtual":false,"virtual_parts":[]},"attributes":null,"indexes":{},"tags":[],"linkages":[],"created_on":"2016-01-21T14:13:26.691-08:00","last_modified":"0001-01-01T00:00:00Z","type":"parts"},"error":null}
...

In [12]: !curl --insecure -X PUT -H "Authorization: OAuth $token" -F "parts=close" https://dev03.berkeley.kbase.us/services/shock-api/node/91ace5e9-6ab1-4fdf-b86b-0f03bd9d7d91
{"status":400,"data":null,"error":["err@node_Update: 91ace5e9-6ab1-4fdf-b86b-0f03bd9d7d91: open /mnt/Shock/data/91/ac/e5/91ace5e9-6ab1-4fdf-b86b-0f03bd9d7d91/parts/1013: too many open files"]}

Shock ver is 0.9.6.
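
A sketch of the shape of a fix, assuming parts live as numbered files under a parts/ directory as the error message suggests (not Shock's actual code): stream one part at a time so only one part descriptor is ever open.

package node

import (
	"io"
	"os"
	"path/filepath"
	"strconv"
)

// mergeParts concatenates part files 1..count into outPath, opening and
// closing one part at a time so the file descriptor count stays constant
// regardless of how many parts the node has.
func mergeParts(partsDir string, count int, outPath string) error {
	out, err := os.Create(outPath)
	if err != nil {
		return err
	}
	defer out.Close()
	for i := 1; i <= count; i++ {
		part, err := os.Open(filepath.Join(partsDir, strconv.Itoa(i)))
		if err != nil {
			return err
		}
		_, err = io.Copy(out, part)
		part.Close() // close each part before opening the next
		if err != nil {
			return err
		}
	}
	return nil
}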

More explicit error messages for bad node ids (and elsewhere?)

Requesting a badly formatted or non-existent node returns a 500 Internal Server Error:

$ curl -X GET http://kbase.us/services/shock-api/node/9ae2658e-057f-4f89-81a1-a41c09c7313a
{"status":200,"data":{"id":"9ae2658e-057f-4f89-81a1-a41c09c7313a","version":"76a295479a82ddacee098be507bd31cf","file":{"name":"","size":0,"checksum":{},"format":"","virtual":false,"virtual_parts":null},"attributes":null,"indexes":{},"tags":null,"linkages":null},"error":null}
$ curl -X GET http://kbase.us/services/shock-api/node/9ae2658e-057f-4f89-81a1-a41c09c7313
{"status":500,"data":null,"error":["Internal Server Error"]}
$ curl -X GET http://kbase.us/services/shock-api/node/9ae2658e-057f-4f89-81a1-a41c09c7313b
{"status":500,"data":null,"error":["Internal Server Error"]}

A 500 makes it sound like there's a problem with the server itself; it should probably be a 400 with a 'Non existent node id' or 'Incorrect node format' error.

There may be other places in the code that have similarly opaque errors - I'll add to this issue if I find more.
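
For the node id case specifically, a sketch of what I'd expect (illustrative only; it uses 400 for a malformed id and 404 for a missing one, rather than 400 for both):

package node

import (
	"net/http"
	"regexp"
)

var uuidRe = regexp.MustCompile(`^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$`)

// respondNodeError maps the two failure modes to client errors instead of a
// generic 500 Internal Server Error.
func respondNodeError(w http.ResponseWriter, id string, found bool) {
	switch {
	case !uuidRe.MatchString(id):
		http.Error(w, "Incorrect node id format", http.StatusBadRequest)
	case !found:
		http.Error(w, "Non existent node id", http.StatusNotFound)
	}
}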

Possible regression of #292

This is with Shock 0.9.21, pulled from the docker image obtained via docker pull mgrast/shock.

Background: #292

It appears, unless I'm doing something incorrectly, that now even restarting shock doesn't result in the proper permissions being given to admin users:

Conf:

$ head shock.cfg
[Address]
# IP and port for api
# Note: use of port 80 may require root access
# 0.0.0.0 will bind Shock to all IP's
api-ip=0.0.0.0
api-port=7044

[Admin]
[email protected]
users=kbasetest2

First start:

$ ./shock-server --conf shock.cfg
read shock.cfg
conf.LOG_OUTPUT: console
[05/26/17 11:27:08] [INFO] Starting...

[Shock ASCII-art startup banner]
####### Anonymous ######
read:	false
write:	false
delete:	false

##### Auth #####
type:	globus
token_url:	https://nexus.api.globusonline.org/goauth/token?grant_type=client_credentials
profile_url:	https://nexus.api.globusonline.org/users

##### Admin #####
users:	kbasetest2

##### Paths #####
*snip*

kbasetest2 is not recognized as an admin:

$ curl -X POST -H "Authorization: OAuth $KBASETEST_TOKEN" http://localhost:7044/node
{"status":200,"data":{"id":"2b968abd-3ec8-4002-8b4c-4701dc629ebf","version":"be501822649bdaff0373d9c25bf9f8b3","file":{"name":"","size":0,"checksum":{},"format":"","virtual":false,"virtual_parts":null,"created_on":"0001-01-01T00:00:00Z"},"attributes":null,"indexes":{},"version_parts":{"acl_ver":"fc2f8e6613c1c26aa359c63d492d24c7","attributes_ver":"2d7c3414972b950f3d6fa91b32e7920f","file_ver":"3923024235a8a78eacd25de78bff7d2e","indexes_ver":"99914b932bd37a50b983c5e7c90ae93b"},"tags":null,"linkage":null,"priority":0,"created_on":"2017-05-26T11:27:29.16138828-07:00","last_modified":"0001-01-01T00:00:00Z","expiration":"0001-01-01T00:00:00Z","type":"basic","parts":null},"error":null}

$ curl -X PUT -H "Authorization: OAuth $KBASETEST2_TOKEN" http://localhost:7044/node/2b968abd-3ec8-4002-8b4c-4701dc629ebf/acl/read?users=kbasetest8
{"status":400,"data":null,"error":["Users that are not node owners can only delete themselves from ACLs."]}

After restarting shock, kbasetest2 is still not recognized as an admin:

$ curl -X PUT -H "Authorization: OAuth $KBASETEST2_TOKEN" http://localhost:7044/node/2b968abd-3ec8-4002-8b4c-4701dc629ebf/acl/read?users=kbasetest8
{"status":400,"data":null,"error":["Users that are not node owners can only delete themselves from ACLs."]}

> db.Users.find().pretty()
{
	"_id" : ObjectId("59287391b7fa0cf2c2ed538f"),
	"uuid" : "8fcea642-149b-4bd2-b660-366589916461",
	"username" : "kbasetest",
	"fullname" : "KBase Test Account",
	"email" : [censored],
	"password" : "",
	"shock_admin" : false
}
{
	"_id" : ObjectId("592873a6b7fa0cf2c2ed5391"),
	"uuid" : "c370f5ec-0cf7-4b2c-bc96-2a226b277c43",
	"username" : "kbasetest2",
	"fullname" : "kbase test account #2",
	"email" : [censored],
	"password" : "",
	"shock_admin" : false
}
{
	"_id" : ObjectId("592873a6b7fa0cf2c2ed5392"),
	"uuid" : "e2d41d69-dad5-4c16-951a-584361fdd8e0",
	"username" : "kbasetest8",
	"fullname" : "",
	"email" : "",
	"password" : "",
	"shock_admin" : false
}

Is there something I'm missing here? I can't seem to get an admin account working.

Correct invocation of copy_attributes

Hi, I'm trying to use the copy_attributes query param added in 0.9.13:

https://github.com/MG-RAST/Shock/blob/master/RELEASE_NOTES.txt#L51
1e66f15#diff-e02cc717234e16efa9f75eeb8502920fR213

This is with Shock 0.9.21, using the binary extracted from the docker image (docker pull mgrast/shock).

I think this is the right invocation, but the attributes aren't copied:

$ curl http://localhost:7044
{"attribute_indexes":[""],"contact":"[email protected]","documentation":"http://localhost:7044/wiki/","id":"Shock","auth":["globus"],"anonymous_permissions":{"read":false,"write":false,"delete":false},"resources":["node"],"server_time":"2017-05-26T12:29:01-07:00","type":"Shock","url":"http://localhost:7044/","version":"0.9.21"}

$ curl -X POST -H "Authorization: OAuth $KBASETEST_TOKEN" -F 'attributes_str={"foo":"bar"}' http://localhost:7044/node/
{"status":200,"data":{"id":"6573064d-7c2d-4e38-a80f-82b071f456cb","version":"d6155c6f5cf5a6953a6d6e5a91cb7dfc","file":{"name":"","size":0,"checksum":{},"format":"","virtual":false,"virtual_parts":null,"created_on":"0001-01-01T00:00:00Z"},"attributes":{"foo":"bar"},"indexes":{},"version_parts":{"acl_ver":"fc2f8e6613c1c26aa359c63d492d24c7","attributes_ver":"a146dbbdaa396ca414a878b76f8c588c","file_ver":"3923024235a8a78eacd25de78bff7d2e","indexes_ver":"99914b932bd37a50b983c5e7c90ae93b"},"tags":null,"linkage":null,"priority":0,"created_on":"2017-05-26T12:29:27.554346739-07:00","last_modified":"2017-05-26T12:29:27.555334545-07:00","expiration":"0001-01-01T00:00:00Z","type":"basic","parts":null},"error":null}

$ curl -X POST -H "Authorization: OAuth $KBASETEST_TOKEN" -F 'copy_data=6573064d-7c2d-4e38-a80f-82b071f456cb' http://localhost:7044/node?copy_attributes=1
{"status":200,"data":{"id":"516c49d0-b5b2-4097-aec8-065c252723d9","version":"9f67489ed0002ff1dfaa778c46af76cc","file":{"name":"","size":0,"checksum":{},"format":"","virtual":false,"virtual_parts":null,"created_on":"2017-05-26T12:30:17.439834551-07:00"},"attributes":null,"indexes":{},"version_parts":{"acl_ver":"fc2f8e6613c1c26aa359c63d492d24c7","attributes_ver":"2d7c3414972b950f3d6fa91b32e7920f","file_ver":"730c30429ae6b6349affbbdd115e1c44","indexes_ver":"99914b932bd37a50b983c5e7c90ae93b"},"tags":null,"linkage":null,"priority":0,"created_on":"2017-05-26T12:30:17.43999921-07:00","last_modified":"2017-05-26T12:30:17.441578965-07:00","expiration":"0001-01-01T00:00:00Z","type":"copy","parts":null},"error":null}

Could you point out what I'm doing wrong? TIA

backup nodes

Nodes of type backup: Shock keeps all backups from the last couple of days plus two older backups, but automatically deletes all other older backups when newer backups are available. Deletion may happen in Shock as a goroutine or via an external process; see the sketch below.
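
A sketch of that retention rule (the Backup type, the two-day window, and the delete callback are placeholders, not Shock's actual types):

package backup

import "time"

// Backup is a placeholder for a node of type backup.
type Backup struct {
	ID        string
	CreatedOn time.Time
}

// prune keeps every backup newer than keepWindow plus the two most recent
// older ones, and deletes the rest. backups must be sorted newest-first.
func prune(backups []Backup, now time.Time, keepWindow time.Duration, del func(Backup) error) error {
	olderKept := 0
	for _, b := range backups {
		if now.Sub(b.CreatedOn) <= keepWindow {
			continue // recent backup: always kept
		}
		if olderKept < 2 {
			olderKept++ // keep the two newest of the older backups
			continue
		}
		if err := del(b); err != nil {
			return err
		}
	}
	return nil
}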

Node type discrepancy

When an empty node is created by AWE and then becomes a parts node, the node type still says "basic". We need to fix this and check what happens in other cases where a node starts out empty and its type later changes.

Setting the 'Content-Length' Header for API Calls

I don't know if this is possible, but it would be nice if you could set the 'Content-Length' header when returning data from API calls. This tells the caller the length of the data being returned.

For the data_downloader, I'm attempting to show a progress bar while downloading the list of nodes. Right now it's ~5 MB, which takes some time to download, so this would be a nice feature to have. If it's not possible, that's fine; just curious.
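
For the JSON endpoints this mostly means marshalling the body before writing it; a sketch, not Shock's actual handler code:

package api

import (
	"encoding/json"
	"net/http"
	"strconv"
)

// writeJSON marshals the payload first so Content-Length is known and can be
// set before the body is written, letting clients show a progress bar.
func writeJSON(w http.ResponseWriter, status int, payload interface{}) error {
	body, err := json.Marshal(payload)
	if err != nil {
		return err
	}
	w.Header().Set("Content-Type", "application/json")
	w.Header().Set("Content-Length", strconv.Itoa(len(body)))
	w.WriteHeader(status)
	_, err = w.Write(body)
	return err
}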

Unauthorized request returns 200 instead of 401

Trying to access a private file returns an HTTP 200, yet the JSON output shows a 401. Is this intentional?

Host: kbase.us

Accept: */*

< HTTP/1.1 200 OK
< Server: nginx/1.4.1
< Date: Fri, 21 Mar 2014 16:47:46 GMT
< Content-Type: application/json
< Content-Length: 56
< Connection: keep-alive
< Access-Control-Allow-Headers: Authorization
< Access-Control-Allow-Methods: POST, GET, PUT, DELETE, OPTIONS
< Access-Control-Allow-Origin: *
<

* Connection #0 to host kbase.us left intact
* Closing connection #0
* SSLv3, TLS alert, Client hello (1):
    {"status":401,"data":null,"error":["User Unauthorized"]}canon@login1:~$

Indexing on uploaded files of certain types

In Shock, when a user uploads a GFF file and only wants a subset of the data from it, that isn't possible without downloading the complete file (76 MB) and parsing it.
It would be great if the uploaded file were indexed and the index saved as part of the metadata. An uploaded VCF file for a large sequencing project could be in the range of ~100 GB to 600 GB.

Tabix is a tool for indexing files of certain formats, including GFF, BED, SAM, VCF, and PSLTAB, and it lets the user retrieve a subset of data from the file.
For example, a GFF file has the following format and is used to store information about features on a genome.
(See ftp://ftp.jgi-psf.org/pub/compgen/phytozome/v9.0/Ptrichocarpa/annotation/Ptrichocarpa_210_gene.gff3.gz for a sample file)

Chr01 phytozome9_0 gene 1660 2502 . - . ID=Potri.001G000100;Name=Potri.001G000100
Chr01 phytozome9_0 mRNA 1660 2502 . - . ID=PAC:27043735;Name=Potri.001G000100.1;pacid=27043735;longest=1;Parent=Potri.001G000100
Chr01 phytozome9_0 CDS 1660 2502 . - 0 ID=PAC:27043735.CDS.1;Parent=PAC:27043735;pacid=27043735
Chr01 phytozome9_0 gene 2906 6646 . - . ID=Potri.001G000200;Name=Potri.001G000200
Chr01 phytozome9_0 mRNA 2906 6646 . - . ID=PAC:27045395;Name=Potri.001G000200.1;pacid=27045395;longest=1;Parent=Potri.001G000200
Chr01 phytozome9_0 CDS 6501 6644 . - 0 ID=PAC:27045395.CDS.1;Parent=PAC:27045395;pacid=27045395

The following is how I would do it if the GFF file were on my local system, but I'm not sure how to do this in Shock.

(grep ^"#" in.gff; grep -v ^"#" in.gff | sort -k1,1 -k4,4n) | bgzip > sorted.gff.gz;
tabix -p gff sorted.gff.gz;
tabix sorted.gff.gz chr01:6644;

Read-only file system

Return an error message when a user tries to create a node on a Shock server whose file system has gone into read-only mode. Currently the connection just times out.
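
A sketch of a fail-fast check the node create path could run (the path handling and wording are illustrative, not Shock's actual code):

package node

import (
	"fmt"
	"os"
	"path/filepath"
)

// checkWritable probes the data directory so that node creation can return a
// clear error immediately instead of hanging when the filesystem is read-only.
func checkWritable(dataDir string) error {
	probe := filepath.Join(dataDir, ".write_probe")
	f, err := os.Create(probe)
	if err != nil {
		return fmt.Errorf("data path %s is not writable (read-only filesystem?): %v", dataDir, err)
	}
	f.Close()
	os.Remove(probe)
	return nil
}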

auth check done after upload

I tried uploading a large file and got this:

olson@maple:/scratch/DataForKmers/Data.2014-05-03$ time curl -X POST --data-binary @kmer.table.mem_map http://kbase.us/services/shock-api/node
{"status":401,"data":null,"error":["No Authorization"]}
real 9m27.975s
user 0m18.317s
sys 1m5.413s

Seems like that check should be done before committing to the large upload.
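
A sketch of rejecting the request before the body is consumed (illustrative middleware, not Shock's actual auth code):

package api

import "net/http"

// requireAuth rejects requests without an Authorization header before the
// handler ever reads the (possibly multi-gigabyte) request body.
func requireAuth(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if r.Header.Get("Authorization") == "" {
			http.Error(w, "No Authorization", http.StatusUnauthorized)
			return
		}
		next.ServeHTTP(w, r) // only the real handler consumes the body
	})
}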

Uploading file in parts with python requests

I'm trying to upload a file in parts to Shock via python with the requests library and I can't quite figure out how to get it to work. I've tried a number of incantations, and this is the closest I've gotten:

In [5]: import requests
In [6]: from requests_toolbelt.multipart.encoder import MultipartEncoder
In [7]: requests.get('http://localhost:40317').json()
Out[7]: 
{u'anonymous_permissions': {u'delete': False, u'read': False, u'write': False},
 u'attribute_indexes': [u''],
 u'auth': [u'globus'],
 u'contact': u'[email protected]',
 u'documentation': u'http://localhost:40317/wiki/',
 u'id': u'Shock',
 u'resources': [u'node'],
 u'server_time': u'2016-07-06T11:59:34-07:00',
 u'type': u'Shock',
 u'url': u'http://localhost:40317/',
 u'version': u'0.9.14'}

In [8]: mpdata = MultipartEncoder(fields={'attributes_str': '{"foo": "bar"}', 'parts': 'unknown'})
In [9]: headers = {'Authorization': 'OAuth ' + token}
In [10]: mpheaders = dict(headers)
In [11]: mpheaders['Content-Type'] = mpdata.content_type
In [12]: res = requests.post('http://localhost:40317/node/', headers=mpheaders, data=mpdata)

In [13]: j = res.json()
In [14]: j
Out[14]: 
{u'data': {u'attributes': {u'foo': u'bar'},
  u'created_on': u'2016-07-06T12:00:36.86892903-07:00',
  u'expiration': u'0001-01-01T00:00:00Z',
  u'file': {u'checksum': {},
   u'created_on': u'0001-01-01T00:00:00Z',
   u'format': u'',
   u'name': u'',
   u'size': 0,
   u'virtual': False,
   u'virtual_parts': None},
  u'id': u'0b25ef24-368d-4c78-87e3-8bb3643e2b3b',
  u'indexes': {},
  u'last_modified': u'2016-07-06T12:00:36.874577795-07:00',
  u'linkage': None,
  u'parts': {u'compression': u'',
   u'count': 0,
   u'length': 0,
   u'parts': [],
   u'varlen': True},
  u'tags': None,
  u'type': u'parts',
  u'version': u'7d913c698cfceb9ffaca04c83ed9cb23',
  u'version_parts': {u'acl_ver': u'022c1343f7631cecc0e61bb1809dfdd9',
   u'attributes_ver': u'9bb58f26192e4ba00f01e2e7b136bbd8',
   u'file_ver': u'cbe3da041b769ef292a3441ea9c5a205',
   u'indexes_ver': u'99914b932bd37a50b983c5e7c90ae93b'}},
 u'error': None,
 u'status': 200}

In [15]: mpdata = MultipartEncoder(fields={'1': 'whee'})
In [16]: mpheaders['Content-Type'] = mpdata.content_type
In [17]: res = requests.put('http://localhost:40317/node/' + j['data']['id'], headers=mpheaders, data=mpdata)

In [18]: res.text
Out[18]: u'{"status":400,"data":null,"error":["err@node_ParseMultipartForm: invalid param: 1"]}'

Based on the docs, 1 is definitely a valid multipart form parameter, so I'm not quite sure what I'm doing wrong. Do you have any idea?

Node querying returns possibly invalid result:

I accidentally submitted the issue too early.

GET /node?query&type=foo
{"S":200,"D":null,"E":null}

The return value should either have "D" : [] which is valid or "S" : 404.
Returning 200 with null seems like a problem, since "D" is invalid while the 2xx status indicates that the result is good.
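
For what it's worth, a common Go cause of this pattern (whether or not it is what Shock does) is marshalling a nil slice, which encodes as null, instead of an initialized empty slice, which encodes as []:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	var nilNodes []string    // nil slice: no matches, result never assigned
	emptyNodes := []string{} // initialized empty slice

	a, _ := json.Marshal(map[string]interface{}{"D": nilNodes})
	b, _ := json.Marshal(map[string]interface{}{"D": emptyNodes})
	fmt.Println(string(a)) // {"D":null}
	fmt.Println(string(b)) // {"D":[]}
}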

Shock API to Download File Saves as Node ID Instead of Filename

When you download a file using the Node API, it should save the file with the original filename and extension. Right now it saves using the Node ID, which is uninformative.

e.g. http://kbase.us/services/shock-api/node/d446054f-ce0d-4ac2-b63f-79bd40071c20?download
saves to 'd446054f-ce0d-4ac2-b63f-79bd40071c20' instead of '47.fna.gz'

You can use the Content-Disposition header to specify the filename: http://stackoverflow.com/questions/1628260/downloading-a-file-with-a-different-name-with-php
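
A sketch of what that could look like in the download handler (illustrative; the filename would presumably come from the node's file.name field):

package api

import (
	"fmt"
	"net/http"
)

// serveDownload sets Content-Disposition so clients save the payload under
// its original name (e.g. 47.fna.gz) rather than the node id.
func serveDownload(w http.ResponseWriter, r *http.Request, path, filename string) {
	w.Header().Set("Content-Type", "application/octet-stream")
	w.Header().Set("Content-Disposition",
		fmt.Sprintf(`attachment; filename="%s"`, filename))
	http.ServeFile(w, r, path)
}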
