meteor-community-packages / meteor-collectionfs
Reactive file manager for Meteor
License: MIT License
Switch to EJSON for handling the binary data when sending/receiving chunks in methods:
data: { '$binary': data }
Hopefully this would improve performance?
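A minimal sketch of what that wire format would look like (assumption: plain Node, no Meteor; EJSON itself is Meteor's serializer, which represents binary data as `{ "$binary": "<base64 string>" }` over DDP):

```javascript
// Sketch (assumption: plain Node, no Meteor). Over DDP, EJSON serializes
// binary data as { "$binary": "<base64 string>" }.
function encodeChunk(buf) {
  return { $binary: buf.toString('base64') };
}

function decodeChunk(ejson) {
  return Buffer.from(ejson.$binary, 'base64');
}

// Round-trip a sample chunk:
const sample = Buffer.from([0xde, 0xad, 0xbe, 0xef]);
const wire = encodeChunk(sample);   // { $binary: '3q2+7w==' }
const restored = decodeChunk(wire);
```

Whether this is faster than the current encoding would need benchmarking; base64 adds roughly 33% transport overhead per chunk.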
This is probably a case of me not having done something simple but necessary...
In order to get the example in examples/filemanager working, I had to do mrt remove collectionFS and then mrt add collectionFS. That got it running, using mrt in the said directory.
I am able to upload and download files, but unlike in the live demo at http://collectionfs.meteor.com/, the jpg examples do not show filehandlers, they just say "no cache." I just tried deploying it and got the same result. Any idea what might be different?
Thanks!
Seems that it should not matter how it's treated internally, as long as the IO is the same.
If the project doesn't have the following structure (e.g. a fresh new Meteor app), it will throw an error looking for .meteor/local/build/static/cfs:
/server
/client
/public
When doing:
var a = collectionFS.find(id);
a contains the fileRecord, but it would be handy if it also contained functions, e.g.:
a.retrieveDataUrl()
a.remove()
a.update()
a.retrieveBlob()
// + more on the client-side api
This way there would be no need for working with literals to define what collection the file is in - it's just a file object.
The api could be transformed; collectionFS.find would not have the option for the user to make custom transforms.
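A hypothetical sketch (plain Node) of wrapping each fileRecord in an object with methods, so callers don't juggle literals. The method names follow the wish list above; the actual CollectionFS API may differ.

```javascript
// Wrap a raw fileRecord so it carries its collection and some methods.
// FileObject and findFile are illustrative names, not CollectionFS API.
function FileObject(collection, record) {
  this._collection = collection;
  Object.assign(this, record);
}

FileObject.prototype.remove = function () {
  return this._collection.remove(this._id);
};

FileObject.prototype.update = function (modifier) {
  return this._collection.update(this._id, modifier);
};

// A find() that applies the transform before handing the record back:
function findFile(collection, id) {
  const record = collection.findOne(id);
  return record ? new FileObject(collection, record) : null;
}
```

This is essentially what Meteor's later `transform` collection option does: the cursor stays the same, only the returned documents gain behavior.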
I'm using meteorite 0.4.9, meteor 0.6.x and the latest collectionFS and when I upload any files (.jpg and .png tested) the resulting files are corrupt and shorter. Could this be an encoding issue? I did a file comparison and besides being a bit shorter the saved file also had a lot of characters that were different (in hex).
I'm hoping coffeescript doesn't have something to do with it because all my server code uses that and meteor 0.6.x got rid of global variables so it has caused some minor issues declaring collection instances.
Handle this, either via meteor event listeners or?
Create a server cache for completed files in the public/collectionFS._name folder.
Update the fs.files record attribute: fileURL[]
Prepare the ability for special version-caching options: converting images, docs, tts, sound, video etc.
e.g. further details in server/collectionFS.server.js
CollectionFS.fileHandlers({
  //Default image cache
  'default': function(fileId, blob) {
    return blob;
  },
  //Some specific size
  '40x40': function(fileId, blob) {
    //Some server-side image/file handling functions, the user can define this
    return blob;
  },
  //Upload to remote server
  'remote': function(fileId, blob) {
    //Some server-side ImageMagick/file handling functions, the user can define this
    return null;
  }
});
The server runs caching in a FIFO queue by fileId, handling cache requests per file, looping through the versions array and saving the blob in a collection-named folder, using fileId + arrayRef and content type for naming, e.g.: public/contacts/d4er-ee3-eeer3-ww3.40x40.jpg
When saved, the fs.files fileURL['40x40'] is updated, causing a reactive update of the UI.
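A sketch of the FIFO queue described above (plain Node; the real implementation would run the registered filehandlers and write the blobs to disk, so `saveVersion` here is a hypothetical callback standing in for that work):

```javascript
// FIFO queue of cache requests, one entry per fileId.
const cacheQueue = [];

function enqueueCacheRequest(fileId, versions, extension) {
  cacheQueue.push({ fileId: fileId, versions: versions, extension: extension });
}

// Handle one request at a time, looping through the versions array and
// building the cache filename as fileId + '.' + version + '.' + extension.
function processNextRequest(saveVersion) {
  const job = cacheQueue.shift(); // FIFO: oldest request first
  if (!job) return null;
  return job.versions.map(function (version) {
    const filename = job.fileId + '.' + version + '.' + job.extension;
    saveVersion(job.fileId, version, filename); // e.g. write blob under public/<collection>/
    return filename;
  });
}
```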
Files' MIME type is set to application/octet-stream.
Chrome files the following warning:
Resource interpreted as Image but transferred with MIME type application/octet-stream
Safari doesn't show the images.
The gridFS spec speaks about an attribute .md5; this is added in collectionFS but not implemented. Should it be used to verify the file when resuming a file upload?
Important to do when the server receives the client's total-upload confirmation?
Try out:
var lastFilesCurrentChunk = [];

function isUploading(fileId) {
  var self = this;
  var fileItem = self._getItem(fileId);
  if (fileItem.complete)
    return false; // nobody is uploading since it's already completed (what about updating file data?)
  var answer = (self.lastFilesCurrentChunk[fileId]) ?
      (fileItem.currentChunk == self.lastFilesCurrentChunk[fileId]) : false;
  // set last to current
  self.lastFilesCurrentChunk[fileId] = fileItem.currentChunk;
  return answer;
}
Instead of options, the filehandler should have 'this' filled with the same content as options...
For the following code under Meteor 0.6.4.1 & CollectionFS 0.2.3 I am getting
Object [object Object] has no method 'filter'
on attempting to add a filter as per the documentation.
media_items.coffee
@MediaItemsFS = new CollectionFS 'media_items'
MediaItemsFS.allow
insert: (userId, doc) ->
return userId && doc.owner == userId
update: (userId, doc, fields, modifier) ->
_.all files, (file) ->
return userId == file.owner
remove: (userId, doc) ->
return false
MediaItemsFS.filter
allow:
contentTypes: ['image/*']
REST endpoint for files:
cfs/collection/fileid.ext
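A hypothetical sketch of routing for such an endpoint, `/cfs/<collection>/<fileid>.<ext>` (plain Node; the exact URL shape is an assumption based on the one-line spec above):

```javascript
// Parse a cfs URL into its parts, or return null if it doesn't match.
function parseCfsUrl(url) {
  const m = /^\/cfs\/([^\/]+)\/([^\/.]+)\.([^\/.]+)$/.exec(url);
  if (!m) return null;
  return { collection: m[1], fileId: m[2], extension: m[3] };
}
```

An HTTP handler would then look up the fileId in the named collection's fs.files and stream the bytes back with the stored contentType.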
It's an issue when using multiple instances of collectionFS: multiple filehandlers could spawn, making the server slow. By default only one filehandler runs at a time, but maxFilehandlers could be set as an option to override this behaviour.
collectionFS_server.js ln 119: spawn a filehandler only if maxFilehandlers is not reached; runningFilehandlers is updated at the beginning and end of workFileHandlers.
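A sketch of that concurrency cap (plain Node; the names maxFilehandlers and runningFilehandlers follow the issue text, while `tryStartFilehandler` is an illustrative name):

```javascript
let runningFilehandlers = 0;
const maxFilehandlers = 1; // default: one filehandler at a time

function tryStartFilehandler(work) {
  if (runningFilehandlers >= maxFilehandlers) return false; // cap reached
  runningFilehandlers++; // incremented at the beginning of the work
  try {
    work();
  } finally {
    runningFilehandlers--; // decremented at the end
  }
  return true;
}
```

A request rejected by the cap would stay in the queue and be retried when a running filehandler finishes.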
Should be able to removeFile; this includes files created by filehandlers in the fileURL.
Remote method:
remove fileHandlers
remove fileChunks
remove fileFiles
return status of remove?
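A hypothetical sketch of that cascading remove (plain Node; the collection shapes, the fileURL map, and `deleteCachedFile` are assumptions based on the issue text):

```javascript
// Remove a file record, its chunks, and every cached version on disk.
function removeFile(files, chunks, deleteCachedFile, fileId) {
  const record = files.findOne(fileId);
  if (!record) return false;
  // remove the files created by filehandlers (every cached version in fileURL)
  Object.keys(record.fileURL || {}).forEach(function (version) {
    deleteCachedFile(record.fileURL[version]);
  });
  chunks.remove({ files_id: fileId }); // remove the fileChunks
  files.remove(fileId);                // remove the fs.files record
  return true;                         // status of the remove
}
```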
Make alias functions find, allow, insert etc. pointing at the files collection, making usage more like a regular Meteor.Collection.
Declare a filehandler named 'master', then this could trigger that the chunks would be removed from the database - all other filehandlers would get handed the master file - not the database file.
The master could do all normal stuff, changing size of image (limit data usage on filesystem)
This is usable when addressing the whole server-caching thing. Using gridFS server-side makes great sense (saving dev time and optimizing speed).
Two things:
gridFS chunkSize is locked to 256 bytes? Shouldn't be a problem being flexible; collectionFS is easily altered, currently being 1024 bytes.
Test the gridFS functions on the collectionFS tables .files and .chunks.
This would improve safe upload and maybe improve offline usage? IndexedDB should be the best option, since localStorage is slow and WebSQL is deprecated.
In using this awesome file tool I ran across a couple issues that I had to address.
Patch Below:
diff --git a/collectionFS_common.js b/collectionFS_common.js
index 0ec7fb6..60a4338 100644
--- a/collectionFS_common.js
+++ b/collectionFS_common.js
@@ -16,13 +16,17 @@
   //Auto subscribe
   if (Meteor.isClient) {
-    Meteor.subscribe(self._name+'.files'); //TODO: needed if nullable?
+    if(!options || !options.hasOwnProperty('autosubscribe') || options.autosubscribe) {
+      Meteor.subscribe(self._name+'.files'); //TODO: needed if nullable?
+    }
   } //EO isClient
   if (Meteor.isServer) {
-    Meteor.publish(self._name+'.files', function () { //TODO: nullable? autopublish?
-      return self.files.find({});
-    });
+    if(!options || !options.hasOwnProperty('autopublish') || options.autopublish) {
+      Meteor.publish(self._name+'.files', function () { //TODO: nullable? autopublish?
+        return self.files.find({});
+      });
+    }
   } //EO initializesServer
   var methodFunc = {};
@@ -43,6 +47,8 @@
       "data" : data, // the chunk's payload as a BSON binary type
     });
+    numChunks = self.chunks.find({"files_id":fileId}).count();
+
     /* Improve chunk index integrity have a look at TODO in uploadChunk() */
     if (cId) { //If chunk added successful
       /*console.log('update: '+self.files.update({_id: fileId}, { $inc: { currentChunk: 1 }}));
@@ -55,11 +61,11 @@
       if (complete || updateFiles) //update file status
         self.files.update({ _id:fileId }, {
-          $set: { complete: complete, currentChunk: chunkNumber+1 }
+          $set: { complete: complete, currentChunk: chunkNumber+1, numChunks:numChunks }
         })
       else
         self.files.update({ _id:fileId }, {
-          $set: { currentChunk: chunkNumber+1 }
+          $set: { currentChunk: chunkNumber+1, numChunks:numChunks}
         }); //** Only update currentChunk if not complete? , complete: {$ne: true} }
     } //If cId
@@ -86,7 +92,7 @@
   } //EO isServer
 }; //EO saveChunck+name
-  methodFunc['getMissingChunk'] = function(fileId) {
+  methodFunc['getMissingChunk'+self._name] = function(fileId) {
     console.log('getMissingChunk: '+fileRecord._id);
     var self = this;
     var fileRecord = self.files.findOne({_id: fileId});
@@ -182,6 +188,8 @@
 _.extend(CollectionFS.prototype, {
   find: function(arguments, options) { return this.files.find(arguments, options); },
   findOne: function(arguments, options) { return this.files.findOne(arguments, options); },
+  update: function(selector, modifier, options) { return this.files.update(selector, modifier, options); },
+  remove: function(selector) { return this.files.remove(selector); },
   allow: function(arguments) { return this.files.allow(arguments); },
   deny: function(arguments) { return this.files.deny(arguments); },
   fileHandlers: function(options) {
diff --git a/collectionFS_server.js b/collectionFS_server.js
index 1a97b02..c330111 100644
--- a/collectionFS_server.js
+++ b/collectionFS_server.js
@@ -63,7 +63,9 @@
 var _fileHandlersFileWrite = true;
       console.log('Path: '+self.path);
     }); //EO Exists
-    if (err) {
+    // errno 47 is EEXISTS, which is fine
+    if (err && err.errno != 47) {
+      console.log(err);
       _fileHandlersSymlinks = false; //Use 'public' folder instead of uploads
       self.cfsMainFolder = 'public';
Downloading not working; guess it's FileReader, File or Blob that IE breaks.
Log: Exception while delivering result of invoking 'loadChunckfilesystem': undefined
When removing a file, the related chunks and files should be deleted too.
Create a server-side observer on the collectionFS handling this.
Refactor common; make separate client and server versions.
_.extend(CollectionFS.prototype, {
find: function(arguments, options) { return this.files.find(arguments, options); },
findOne: function(arguments, options) { return this.files.findOne(arguments, options); },
allow: function(arguments) { return this.files.allow(arguments); },
deny: function(arguments) { return this.files.deny(arguments); },
fileHandlers: function(options) {
var self = this;
self._fileHandlers = options; // fileHandlers({ handler['name']: function() {}, ... });
}
});
Add observer on server:
find: function(arguments, options) {
  var self = this;
  var query = self.files.find(arguments, options);
  var handle = query.observe({
    removed: function(doc) {
      // remove all chunks belonging to the removed file
      self.chunks.remove({ files_id: doc._id });
      // TODO: remove all cached files created by filehandlers *( delete each fileURL path that begins with '/' )*
    }
  });
  return query;
},
Rewrite deps for those in 0.6.0.
Should polyfill for pre-0.6.0.
Use all means in sight. Use it to contain the last 5Mb? of the file, or the whole file if the filesize is smaller.
If a user wants to upload a photo just taken? Or is the photo already stored so resume is possible? Investigate.
Add some helpers / controllers for handling files / drag&drop etc.
Look at #5 for more details.
Run some tests + examples and make sure that they work with the new linker scope.
Useful helpers, e.g. {{fileProgress}}, {{fileInQue}}, {{fileAsURL}}, {{fileURL _id}}
All using e.g. fileId, a param, or falling back to this._id
AND UI helpers: one for the upload-loading-saveas and one for uploading a file?
Have events:
this = self
type = errorObject
fileRecord = fileRecord
Depends on the new package system: if it allows depending on community packages, e.g. via Atmosphere, then we should split the project up into smaller packages to improve simplicity and usability.
Again, not everyone wants to use filehandlers - then why have the code as extra payload?
I'm thinking something like packages:
For some reason when I update my server code and the meteor server restarts, the only way for me to get file uploads working again is to close my client window and re-open it. Is there some kind of JS thread on the client that is keeping the client from working when the server restarts?
To control server load, keeping it in sync...
Maybe consider streams instead of buffers to lower memory use?
Do some refactoring, giving the filehandler more options.
var result = fileHandlers[func]({ fileRecord: fileRecord, blob: blob });
/* add:
   destination: function( [extension] ) - Returns complete filename
   fileUrl: { path, extension } */
Where destination refers to the local filesystem and fileUrl refers to the URL saved in the database.
The filehandler can return the fileUrl if saving the file using gm or using a remote host.
Add documentation for this.
Is it possible to use CollectionFS on the server side? So that my code on the server downloads a file, puts it into the CollectionFS, which can then be downloaded/viewed by the client?
Use EJSON $binary - base64 encoded when in DDP transport.
What does that change in collectionFS?
Meaning: some bad things, right? But some good stuff too - a cleaner client-side code is on its way.
Okay, this is an odd issue for me. Whenever I attempt to store a buffered image via the server method storeBuffer and then retrieve and pipe it to an http response, the image does not appear in the browser. Strangely, if I use the client method storeFile and then retrieve the images the same way, they are displayed properly in the browser.
I'm not sure what's going on here. As far as I can tell the buffers are identical whether I use the client or server store method. What am I doing wrong? Here is my code:
PagesFS.storeBuffer(filename, buf, {
contentType: "image/png"
});
// Later in an express middleware function
buf = PagesFS.retrieveBuffer(image._id);
res.send(200, buf); // Browser displays a blank image
PagesFS.storeFile(file);
// Later in an express middleware function on server
buf = PagesFS.retrieveBuffer(image._id);
res.send(200, buf); // Browser displays the actual image
This might be a little incomplete. Let me know what else you need to diagnose this problem.
Refactored files to match development in the meteor/packages folder. Added smart.json and package.js.
tinytest
Would be nice to get test-driven development with Meteor/node.js.
EDIT: Waiting for the Meteor linker to settle - currently the package tests are not in package scope, so there is no option for test-driven development or specific testing. Haven't tried Laika - maybe that would be a solution.
Example and client-side code need refactoring.
It would be nice with a simple example like the Dropbox web interface, showing how to use CollectionFS in a 'real' use case. Depends on the new client-side API being finished.
Might do a video tutorial on how to do that.
So I tried to implement CollectionFS for the first time.
I'm still using autopublish and the insecure package, so I basically only defined
MyCollection = new CollectionFS('mycollection');
First problem is that when I try to use
MyCollection.filter({
allow: {
contentTypes: ['image/*']
}
});
Meteor throws an error
TypeError: Object [object Object] has no method 'filter'
Second problem is that when I try to make an upload, I get the following error:
Uncaught TypeError: Object # has no method 'userId'
Again, I don't use any userIds yet, everything is "insecure" right now.
What am I doing wrong?
Best regards
Patrick
options: {
blob, // Type of node.js Buffer()
fileRecord: {
chunkSize : self.chunkSize, // Default 256kb ~ 262.144 bytes
uploadDate : Date.now(), // Client set date
handledAt: null, // datetime set by Server when handled
fileHandler:{}, // fileHandler supplied data if any
md5 : null, // Not yet implemented
complete : false, // countChunks == numChunks
currentChunk: -1, // Used to coordinate clients
owner: Meteor.userId(),
countChunks: countChunks, // Expected number of chunks
numChunks: 0, // number of chunks in database
filename : file.name, // Original filename
length: ''+file.size, // Issue in Meteor
contentType : file.type,
encoding: encoding, // Default 'utf-8'
metadata : (options) ? options : null, // Custom data
},
destination: function, // When in filehandler
getExtension: function,
getBlob: function,
getDataUrl: function,
remove: function,
sumFailes: 0..3 (times filehandler failed in this recovery session)
}
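A hypothetical filehandler consuming an options object shaped like the structure above (plain Node; the field names are taken from this issue, and `thumbnailHandler` is an illustrative name, not a confirmed API):

```javascript
// A filehandler receives { blob, fileRecord, destination, ... } and returns
// the (possibly transformed) blob, or null to skip producing a version.
function thumbnailHandler(options) {
  const record = options.fileRecord;
  if (!/^image\//.test(record.contentType)) return null; // skip non-images
  // a real handler would resize options.blob here (e.g. with gm) and could
  // use options.destination('jpg') to pick the output filename
  return { blob: options.blob, fileRecord: record };
}
```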
Run some tests on that, see if anything can be done. Is it queuing or is it just network lag?
Check up on the FileReader.slice - pretty slow for a basic operation like that...
At the moment it's on autopublish - this should be handled differently; there is a Meteor constant or function returning whether or not to autopublish.
The subscribe / publish implementation is naturally handled in the issue for creating function aliases.
Correct the use of self.chunkSize (this is a default value).
Should use fileItem.chunkSize or fileRecord.chunkSize as available instead. This way an app can customize chunkSize and still load files with a different chunkSize.
Fix in collectionFS_client and collectionFS_server.
Deprecate use of __meteor_bootstrap__.require; replace with npm.require.
Add a temporary solution in server until the Meteor engine branch gets to master:
if (!npm) {
var npm = {
require: __meteor_bootstrap__.require
};
}
Kind of a strange one, but fairly simple to implement and gives some flexibility for different use cases.
Follow the TODO in the code