jimmywarting / streamsaver.js

StreamSaver writes streams to the filesystem directly and asynchronously.

Home Page: https://jimmywarting.github.io/StreamSaver.js/example.html

License: MIT License

Languages: HTML 37.29%, JavaScript 62.71%
Topics: stream, service-worker, filesaver, ram, html5

streamsaver.js's People

Contributors

artyuum, bonjur, emersion, frank-dspeed, gwdp, haraldson, jeremyckahn, jimmywarting, johannesleander, kewlkris, lekakid, machawk1, nomeji, orekish, petethomas, qkdreyer, robpethick, sirbarrence, stockholmux, supermaai, taniadaniela, texkiller, vobruba-martin


streamsaver.js's Issues

navigator.connect

This looks cool and potentially useful (instead of the mitm approach), too bad it's deprecated. Just keeping this as a reference, not something I will implement here.

Timeout (?) on writing file with webtorrent

Hi,
got this error when I try to write the document I'm downloading from webtorrent incrementally as it downloads; the write stream just crashes (the download says "Failed - Network error"). I also tried to write it all once it finishes downloading, but the file is way too big and the page crashes :/ Any idea how I could write this damn 1.5 GB buffer into a file?
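
For what it's worth, a minimal sketch of the incremental approach, assuming WebTorrent's file.createReadStream() (a Node-style readable in the browser build) and ignoring backpressure for brevity:

// sketch only: `file` is the WebTorrent file object from your torrent
const fileStream = streamSaver.createWriteStream(file.name)
const writer = fileStream.getWriter()
const reader = file.createReadStream() // Node-style stream provided by WebTorrent

reader.on('data', chunk => writer.write(new Uint8Array(chunk))) // write pieces as they arrive
reader.on('end', () => writer.close())
reader.on('error', err => writer.abort(err))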

Uncaught Reference Error!!

Hi,

  1. Thanks for sharing the library.
  2. When I try to use the library on Raspberry Pi, I get the following errors
    • Uncaught ReferenceError: WriteableStream is not defined
    • Uncaught ReferenceError: encoder is not defined

Can you please advise on how to get it working?
Thanks.

Edit: I'm using the "Writing some plain text" example from the README.
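
A hedged guess (not an answer from the maintainer): the second error just means the README snippet's `const encoder = new TextEncoder()` line was skipped, and the first suggests the browser on the Pi has no WritableStream, so a streams polyfill has to be loaded first. Something along these lines, with an illustrative polyfill filename:

<!-- load a streams polyfill before StreamSaver.js; the file name here is a placeholder -->
<script src="web-streams-polyfill.js"></script> <!-- assumed to define window.WritableStream -->
<script src="StreamSaver.js"></script>
<script>
  var fileStream = streamSaver.createWriteStream('filename.txt')
  var writer = fileStream.getWriter()
  var encoder = new TextEncoder() // define the encoder the example writes with
  writer.write(encoder.encode('Hello world'))
  writer.close()
</script>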

'dangerous use of the global this object' from the Closure compiler

Hi, I'm just trying out StreamSaver.js, and I'm a relative novice at JavaScript. Our build system compiles code using the Closure compiler, and I gather this is normally a warning but for us it's an error.

this[name] = definition() inside one of the lambdas seems to give this error, as flattening could change which 'this' object is modified by this call. If you don't want to fix it, could you please explain which object this should be there, and I can have a go?
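
For context (my reading of the wrapper, not an authoritative answer): at the top level of that UMD-style wrapper, this is the global object (window in a page, self in a worker), so this[name] = definition() simply creates the global streamSaver. A sketch of an equivalent that avoids the bare this, which Closure tends to accept:

;(function (name, definition) {
  // resolve the global object explicitly instead of relying on `this`
  var globalObj = typeof window !== 'undefined' ? window
    : typeof self !== 'undefined' ? self
    : this // last resort, same behavior as before
  globalObj[name] = definition()
})('streamSaver', function () {
  /* ... module body ... */
  return {}
})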

Does this run completely offline in airgap environments?

So I was reading this:

This will scream high restriction just by mentioning service workers. It's such a powerful tool that it needs to run on https, but there is a workaround for http sites: popups + a 3rd-party https site. Who would have guessed that? But I won't go into details on how that works. (The idea is to use a middle man to send a dataChannel from http to a serviceWorker that runs on https.)

Does it send the data to any server?

Does this run completely offline in airgap environments?

Retry download

You can not just click on the "TRY AGAIN" button b/c you are not on the same page any more and the link has expired, since it's a one-time download... When I abort the download I see this in my download history:
(screenshot: download history)
The "TRY AGAIN" button doesn't work... it tries to download intercept-me-nr

Any ideas on how to be able to re-download it would be welcome.

One idea is to include a referer parameter in the url: /intercept-me-nr0.123?referer=https://example.com/download/:file-id
So when a user tries the link again he will be redirected back to the page that the download happened from.
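
A rough sketch of that idea (purely illustrative; neither the referer parameter nor this handler exists in the current sw.js, and knownDownloads is a hypothetical map of active intercept URLs):

self.addEventListener('fetch', function (event) {
  var url = new URL(event.request.url)
  if (url.pathname.indexOf('intercept-me-nr') !== -1 && !knownDownloads.has(url.pathname)) {
    // the one-time link has expired; bounce the user back to where the download started
    var referer = url.searchParams.get('referer')
    if (referer) event.respondWith(Response.redirect(referer, 302))
  }
})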

Poll: Do you want a popup or tab when using http?

I use a mitm.html to forward a dataChannel to a service worker; unfortunately we can't register the service worker inside an iframe because the top window has to be https.

So I have to open mitm.html in a new window or a new tab if your own domain doesn't run on https.
This only has to be open for a split second, and we can close the new window once the dataChannel has been forwarded to the worker.

So do you want a popup or a new tab when creating a new download? What is the least disturbing/confusing?

👍 for new popup
👎 for new tab

Investigate 206 partial download

One issue service workers have is a hard five-minute time to live before they restart/shut down, to defend against malicious code that drains RAM and leaks memory.

In order to work around it I thought: "What if we could resume the download?"

I think it might work. Firefox doesn't have ReadableStream, so by using partial responses I think we can get great support for all browsers that support service workers, since it's just basic HTTP. A rough sketch of the idea follows the checklist below.

  • Test to see if partial download works
  • Test if unknown length is acceptable
  • Test to see if the download can recover from a service worker reboot (after 5 min)
  • Use partial responses instead of ReadableStream in the service worker, or use both?
  • Add support for accepting byte ranges on all downloads
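
The sketch mentioned above, purely speculative: chunkSource(start) and totalSize are hypothetical stand-ins for whatever bookkeeping the worker would need.

// speculative: answer Range requests with 206 responses so the browser can
// resume after the service worker restarts
self.addEventListener('fetch', function (event) {
  var range = event.request.headers.get('Range') // e.g. "bytes=1048576-"
  if (!range) return
  var start = parseInt(range.replace('bytes=', ''), 10) || 0
  event.respondWith(new Response(chunkSource(start), {
    status: 206,
    headers: {
      'Content-Type': 'application/octet-stream',
      // assumes the total size is known; unknown length is one of the open questions above
      'Content-Range': 'bytes ' + start + '-' + (totalSize - 1) + '/' + totalSize
    }
  }))
})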

Writing multiple files at once

Hi guys,
Is there a known issue regarding downloading multiple files in the same event loop?
It seems only the last writer closed gets to write a file.

var encoder = new TextEncoder();
var s1 = streamSaver.createWriteStream("felix.txt");
var w1 = s1.getWriter();
var uint8array = encoder.encode("first");
w1.write(uint8array);
w1.close();

var s2 = streamSaver.createWriteStream("salut.txt");
var w2 = s2.getWriter();
var uint8array = encoder.encode("second");
w2.write(uint8array);
w2.close();

With this example, only the second stream gets written. I believe this is a matter of async, but I'm not sure where it comes from.
Any help would be appreciated :)
Thanks
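
Not an official answer, just a sketch of one workaround: serialize the downloads by waiting for the first writer to finish before creating the second stream (assuming the overlap is what breaks it):

var encoder = new TextEncoder()

async function saveBoth () {
  var w1 = streamSaver.createWriteStream('felix.txt').getWriter()
  await w1.write(encoder.encode('first'))
  await w1.close() // let the first download settle before starting the next one

  var w2 = streamSaver.createWriteStream('salut.txt').getWriter()
  await w2.write(encoder.encode('second'))
  await w2.close()
}

saveBoth()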

Import failure on es6 with "import { createWriteStream } from 'StreamSaver'"

When adding this to client-side code:

import { createWriteStream } from 'StreamSaver';

I keep getting the following error:

[size, queuingStrategy] = [queuingStrategy, size]
^^^^^^^^^^^^^^^^^^^^^^^
ReferenceError: Invalid left-hand side in assignment

I'm using ES6. Is there something else I need to worry about? This seems to me the most reasonable way to import the module and its utility methods. Thanks!
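
For what it's worth, the package name on npm is lowercase and the module is commonly consumed as a default/namespace import; a sketch of that form, in case the named import is what drags the file through a transform it can't handle:

import streamSaver from 'streamsaver'

const fileStream = streamSaver.createWriteStream('filename.txt')
const writer = fileStream.getWriter()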

Electron/Ember app

Hi,

I'm trying to use StreamSaver in an Electron/Ember app.

install:
npm install streamsaver --save-dev
npm install web-streams-polyfill --save-dev

import:
import { createWriteStream, supported, version } from 'npm:StreamSaver';

but when I try to use it:
const fileStream = streamSaver.createWriteStream('filename.txt');

I get an error:
Uncaught ReferenceError: streamSaver is not defined

Any ideas?
thank you!
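
A guess from the snippets above: the code imports the named functions but then calls streamSaver.createWriteStream, and streamSaver was never defined. A sketch of either fix (the npm: prefix is assumed to come from ember-browserify, and the package name is lowercase on npm):

// Option A: call what was actually imported
import { createWriteStream } from 'npm:streamsaver';
const fileStream = createWriteStream('filename.txt');

// Option B: import the module object under the streamSaver name
import streamSaver from 'npm:streamsaver';
const otherStream = streamSaver.createWriteStream('filename.txt');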

NPM module

Could you please distribute StreamSaver.js and its dependencies as an NPM module? I'd like to use it in Angular 2, and it would be much nicer as an NPM package.

StreamSaver Not Defined Error

Hi,

I am trying to use StreamSaver with AngularJS. When I try to load the page, I get a ReferenceError: streamSaver is not defined error. I used Chrome v53 as the browser. I can successfully execute the example code and download a 10 MB file. I am not sure what I am missing.

<script src="StreamSaver.js"></script>
<script src="https://rawgit.com/jimmywarting/browser-su/master/build/permissions.js"></script>
<script src="https://wzrd.in/standalone/[email protected]"
    integrity="sha384-8EYry4yokV53rGHMFtPqeVlAPgxn8yxr/RvxC4bZt3vlneaDPzWkSJCvBDBTuXAV"
    crossorigin>
</script>
<script>
    !streamSaver.supported && prompt(
        'ReadableStream is not supported, you can enable it in chrome, or wait until v52',
        'chrome://flags/#enable-experimental-web-platform-features'
    )
.........
</script>

Incompatibility with create-react-app build process

Thanks for your work. I'm having this problem when I try to build my React project (created with create-react-app).

Creating an optimized production build...
Failed to compile.

Failed to minify the code from this file:

        ./node_modules/streamsaver/StreamSaver.js:1:1

Read more here: http://bit.ly/2tRViJ9

Network failed!

I'm using StreamSaver and Node.js together to save 10,000,000 records into a file in JSON format. After running the app and downloading quite a few records (200,000 or 300,000 or more), the download is canceled, file.json is aborted, and the browser's downloader shows a "Network failed" error.
Note: I don't get any errors in the console and I downloaded about 500 MB or more... so there's no problem with range/memory.
Why?

Server.js

var http = require('http');
var express = require('express');
var sql = require('mssql'); // assumed: the mssql driver providing sql.Connection / sql.Request below
var app = express();
var server = http.Server(app);
var io = require('socket.io')(server);

io.on('connection', function (socket) {
    console.log('Connection is ready!')
    socket.on('get_records', function (data) {
    var connection = new sql.Connection(config, function (err) {
        if (err) {
            console.log(err.message);
        }

        var request = new sql.Request(connection);
        request.stream = true;

        request.query("Select * from my_table");
        // ... error checks
        request.on('recordset', function (recordset) {
            // Emitted once for each recordset in a query
        });

        request.on('row', function (record) {
            // Emitted for each row in a recordset
            socket.emit('recieve_records', record); //send record by record to client
        });

        request.on('error', function (err) {
            console.log(err.message);
        });

        request.on('done', function (returnValue) {
            // Always emitted as the last one
        });
    });
    });
});

client.js

var fileStream = streamSaver.createWriteStream('filename.json');
var writer = fileStream.getWriter();
var encoder = new TextEncoder();

socket.on('recieve_records', function (data) {
    var uint8array = encoder.encode(data + "\n"); // data='every things'
    writer.write(uint8array);
});
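
Not a confirmed diagnosis, but one thing the client snippet above ignores is backpressure: every write promise is dropped, so chunks can queue up faster than the service worker drains them. A hedged variant that chains the writes (the all_records_sent event is hypothetical; your server would need to emit something similar):

var fileStream = streamSaver.createWriteStream('filename.json');
var writer = fileStream.getWriter();
var encoder = new TextEncoder();
var queue = Promise.resolve();

socket.on('recieve_records', function (data) {
    // wait for the previous write to be accepted before queuing the next one
    queue = queue.then(function () {
        return writer.write(encoder.encode(data + "\n"));
    });
});

socket.on('all_records_sent', function () { // hypothetical end-of-data signal
    queue.then(function () { return writer.close(); });
});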

Image of Error

What should the syntax be like?

I would want the syntax to be dead simple like FileSaver.js where you just type

saveAs(blob, "filename")

// So something along with this?
saveStream(stream, "filename")

// The stream api defines something like .pipeTo()
// so the correct syntax would be something where you create two streams
// This is very much like how NodeJS does it, so there is a familiar language here
stream.pipeTo( createWriteStream("filename") )

One problem I see with .pipeTo() is that it's not widely adopted yet.
Maybe we want to save more than just streams: things that are not streams but could just as well act like one? What could that be? What can we save?

  • Webpack or Browserify streams
  • Fetch's .getReader()
  • Chrome's sandboxed filesystem?
  • Blobs and files
  • FileReader?
  • userMedia streams
  • DataChannel
  • postMessage
  • sockets
  • streams (camera, screen, audio & ReadableStream)

Surely we can't adopt everything into the same function that does all the magic depending on what type it is,
but we can help turn each of them into a stream that can then be used by the others.
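
To make the pipeTo idea concrete, a small sketch that prefers pipeTo when available and falls back to a manual reader/writer pump (function name and structure are illustrative):

// sketch: save a fetch response body, with or without pipeTo support
function saveResponse (res, filename) {
  const fileStream = createWriteStream(filename)

  if (res.body.pipeTo) {
    return res.body.pipeTo(fileStream) // the NodeJS-like one-liner
  }

  // fallback: pump the reader into the writer manually
  const reader = res.body.getReader()
  const writer = fileStream.getWriter()
  const pump = () => reader.read().then(({ value, done }) =>
    done ? writer.close() : writer.write(value).then(pump))
  return pump()
}

// usage: fetch(url).then(res => saveResponse(res, 'filename'))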

Use GitHub Releases

Pertaining to jsdelivr/jsdelivr#13597 (comment), I would like this in a CDN. jsDelivr auto-updates, but they rely on GitHub releases to do it. Would you be able to utilise GitHub releases at some point?

I saw you commented somewhere that this wasn't stable, and it's clearly actively developed, but I would love to see this released, as I require it to implement WebTorrent.

A more beautiful interactive example

The example.html doesn't look that nice.

Would someone want to help me make a nicer example? Make it more beautiful, interactive, maybe with a progress bar and pause, resume, close, and cancel buttons; maybe even display how much has been written... show a warning when it isn't supported, make it responsive?

The example should work on both http & https to show the different behavior in what is going to happen with the new tab/window when saving on http, so the example has to disable the getUserMedia streams (screen, audio, camera) that have been restricted to https only.

Another cool example could be to use webtorrent with streaming compatibility: add chunks to a pool, and once we have the pieces at the beginning we flush the start of the pool so it can be written to the beginning of the file.

Also thought about doing canvas recording... make a film of your drawing and at the same time write it to the HDD. Way too overkill, perhaps not even possible? We would use canvas.captureStream(frameRate) for this 😉
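
A rough sketch of that canvas-recording idea, assuming MediaRecorder with webm support (chunk handling simplified, untested):

const canvas = document.querySelector('canvas')
const writer = streamSaver.createWriteStream('drawing.webm').getWriter()

const recorder = new MediaRecorder(canvas.captureStream(30), { mimeType: 'video/webm' })
let queue = Promise.resolve()

recorder.ondataavailable = evt => {
  // append each recorded chunk to the file as it is produced
  queue = queue.then(() => evt.data.arrayBuffer())
               .then(buf => writer.write(new Uint8Array(buf)))
}
recorder.onstop = () => { queue.then(() => writer.close()) }

recorder.start(1000) // emit a chunk roughly every second
// later: recorder.stop()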

I'm too busy to do it. Or too uncreative to come up with a good design. (I'm a coder, not a designer.)
I could ofc do it if someone were to send me a design proposal. Credit will go to the person who did it, in the footer or anywhere else on the page, if he/she wants.

Maybe too much to ask; these are just ideas.

Failed to minify code for production build

I've written a webapp using ReactJS and StreamSaver and it works great, but I am having difficulty building a production release.

npm run build
Failed to minify the code from this file: 
 	./node_modules/streamsaver/StreamSaver.js

Possibility to detect interrupted downloads

Dear jimmywarting,

StreamSaver is a really awesome tool; it allows doing things that are not achievable with other libraries.
I'm using it to save a lot of data and have had a few non-critical issues.

When I close the page, the download continues and never shuts down.
Is there any theoretical possibility to detect existing "open" StreamSaver sessions or recover them?
I'm seeking your advice.

When I supply a wrong-type object to the write function, the code just fails without any explanation. I tried many solutions, including placing sw.js on my own webserver. Finally I found the issue: I had supplied the wrong object to the function :). I think simple type detection would help freshmen a lot.
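
On that last point, a tiny illustration of the kind of type check meant here (not part of the library):

// fail loudly when a chunk of the wrong type is about to be written
function writeChunk (writer, chunk) {
  if (!(chunk instanceof Uint8Array)) {
    throw new TypeError('expected a Uint8Array chunk, got ' + Object.prototype.toString.call(chunk))
  }
  return writer.write(chunk)
}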

HTTPS?

You were a bit unclear on what the SSL issues are.

I tried running the example from my server over HTTP/2 + SSL and it gave an error: Mixed Content: The page at 'secret' was loaded over HTTPS, but requested an insecure resource 'http://localhost:3000/'. This request has been blocked; the content must be served over HTTPS.

Is there any way around that or is the solution wrong for HTTPS?

p.s. I'm looking into implementing this into Seedr.cc, a lot of users would benefit from this!

WriteStream File not found error

Good morning,
I'm trying to use the library to save files that are basically created in memory.
The pieces of the files are stored in an array of base64 values that are converted to Uint8Array; code below (React with TypeScript):

let fr: FileReader = new FileReader();
let myFile = createWriteStream(fileName);
let writer = myFile.getWriter();
let currentDoc: number = 0;

fr.onload = () => {
    let uint8array = FileUtils.Base64Binary.decodeArrayBuffer(fr.result);
    console.log(uint8array);
    writer.write(uint8array).then(() => {
        currentDoc++;
        if (currentDoc === gFile!.docs.length) {
            writer.close();
        } else {
            readDoc();
        }
    });
}

let readDoc = () => {
    let blob: Blob = new Blob([gFile!.docs[currentDoc].encodedFile], { type: "text/plan;charset=utf-8" });
    fr.readAsBinaryString(blob);
}

readDoc();

The problem is that when it starts the write method, an error occurs and the site redirects to a page like:
https://jimmywarting.github.io/StreamSaver.js/intercept-me-nr0.9707839350708563

which does not exist.

I tried to understand what's wrong with my code but couldn't find anything; can you help me?

Is it possible to use this to create large GIF files?

Hi there, I posted this question in the gifshot repo

The situation here is that I am using gifshot to create an animated GIF from multiple images. gifshot creates the blobs in memory, so obviously, after a while, it crashes after eating up all the memory.

Is it somehow possible to combine gifshot and this library to create an animated GIF from a series of images that can be saved to disk/the image gallery on mobile, with memory not being a limiting factor? Happy to put the same $100 bounty on it I offered in the gifshot repo if any talented dev can take this up :-) (bounty awarded to jimmywarting)

Nativefier generates ClientAbortException when downloading a file via streaming

In Chrome it works fine, but in Nativefier it doesn't work correctly. (The Portuguese message in the trace below means "An established connection was aborted by the software in the host machine".)

Back-end full stack trace

org.apache.catalina.connector.ClientAbortException: java.io.IOException: Uma conexão estabelecida foi anulada pelo software no computador host
	at org.apache.catalina.connector.OutputBuffer.realWriteBytes(OutputBuffer.java:333)
	at org.apache.catalina.connector.OutputBuffer.appendByteArray(OutputBuffer.java:718)
	at org.apache.catalina.connector.OutputBuffer.append(OutputBuffer.java:647)
	at org.apache.catalina.connector.OutputBuffer.writeBytes(OutputBuffer.java:368)
	at org.apache.catalina.connector.OutputBuffer.write(OutputBuffer.java:346)
	at org.apache.catalina.connector.CoyoteOutputStream.write(CoyoteOutputStream.java:96)
	at org.apache.catalina.connector.CoyoteOutputStream.write(CoyoteOutputStream.java:89)
	at org.glassfish.jersey.servlet.internal.ResponseWriter$NonCloseableOutputStreamWrapper.write(ResponseWriter.java:320)
	at org.glassfish.jersey.message.internal.CommittingOutputStream.write(CommittingOutputStream.java:218)
	at org.glassfish.jersey.message.internal.WriterInterceptorExecutor$UnCloseableOutputStream.write(WriterInterceptorExecutor.java:294)
	at com.alcatel.idativo.service.internal.demand.DemandService.lambda$downloadFile$0(DemandService.java:301)
	at org.glassfish.jersey.message.internal.StreamingOutputProvider.writeTo(StreamingOutputProvider.java:78)
	at org.glassfish.jersey.message.internal.StreamingOutputProvider.writeTo(StreamingOutputProvider.java:60)
	at org.glassfish.jersey.message.internal.WriterInterceptorExecutor$TerminalWriterInterceptor.invokeWriteTo(WriterInterceptorExecutor.java:265)
	at org.glassfish.jersey.message.internal.WriterInterceptorExecutor$TerminalWriterInterceptor.aroundWriteTo(WriterInterceptorExecutor.java:250)
	at org.glassfish.jersey.message.internal.WriterInterceptorExecutor.proceed(WriterInterceptorExecutor.java:162)
	at org.glassfish.jersey.server.internal.JsonWithPaddingInterceptor.aroundWriteTo(JsonWithPaddingInterceptor.java:106)
	at org.glassfish.jersey.message.internal.WriterInterceptorExecutor.proceed(WriterInterceptorExecutor.java:162)
	at org.glassfish.jersey.server.internal.MappableExceptionWrapperInterceptor.aroundWriteTo(MappableExceptionWrapperInterceptor.java:86)
	at org.glassfish.jersey.message.internal.WriterInterceptorExecutor.proceed(WriterInterceptorExecutor.java:162)
	at org.glassfish.jersey.message.internal.MessageBodyFactory.writeTo(MessageBodyFactory.java:1130)
	at org.glassfish.jersey.server.ServerRuntime$Responder.writeResponse(ServerRuntime.java:711)
	at org.glassfish.jersey.server.ServerRuntime$Responder.processResponse(ServerRuntime.java:444)
	at org.glassfish.jersey.server.ServerRuntime$Responder.process(ServerRuntime.java:434)
	at org.glassfish.jersey.server.ServerRuntime$2.run(ServerRuntime.java:329)
	at org.glassfish.jersey.internal.Errors$1.call(Errors.java:271)
	at org.glassfish.jersey.internal.Errors$1.call(Errors.java:267)
	at org.glassfish.jersey.internal.Errors.process(Errors.java:315)
	at org.glassfish.jersey.internal.Errors.process(Errors.java:297)
	at org.glassfish.jersey.internal.Errors.process(Errors.java:267)
	at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:317)
	at org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:305)
	at org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:1154)
	at org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:473)
	at org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:427)
	at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:388)
	at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:341)
	at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:228)
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:231)
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
	at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:53)
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
	at org.apache.catalina.filters.CorsFilter.handleNonCORS(CorsFilter.java:410)
	at org.apache.catalina.filters.CorsFilter.doFilter(CorsFilter.java:169)
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
	at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:199)
	at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:96)
	at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:475)
	at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:140)
	at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:80)
	at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:624)
	at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:87)
	at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:342)
	at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:498)
	at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:66)
	at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:796)
	at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1366)
	at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: Uma conexão estabelecida foi anulada pelo software no computador host
	at sun.nio.ch.SocketDispatcher.write0(Native Method)
	at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:51)
	at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
	at sun.nio.ch.IOUtil.write(IOUtil.java:65)
	at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:471)
	at org.apache.tomcat.util.net.NioChannel.write(NioChannel.java:134)
	at org.apache.tomcat.util.net.NioBlockingSelector.write(NioBlockingSelector.java:101)
	at org.apache.tomcat.util.net.NioSelectorPool.write(NioSelectorPool.java:157)
	at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper.doWrite(NioEndpoint.java:1170)
	at org.apache.tomcat.util.net.SocketWrapperBase.doWrite(SocketWrapperBase.java:687)
	at org.apache.tomcat.util.net.SocketWrapperBase.writeBlocking(SocketWrapperBase.java:467)
	at org.apache.tomcat.util.net.SocketWrapperBase.write(SocketWrapperBase.java:405)
	at org.apache.coyote.http11.Http11OutputBuffer$SocketOutputBuffer.doWrite(Http11OutputBuffer.java:543)
	at org.apache.coyote.http11.filters.IdentityOutputFilter.doWrite(IdentityOutputFilter.java:76)
	at org.apache.coyote.http11.Http11OutputBuffer.doWrite(Http11OutputBuffer.java:201)
	at org.apache.coyote.Response.doWrite(Response.java:513)
	at org.apache.catalina.connector.OutputBuffer.realWriteBytes(OutputBuffer.java:328)
	... 63 more

Client full stack trace

sw.js:68 Handleing  https://jimmywarting.github.io/StreamSaver.js/intercept-me-nr0.7942744777995843
StreamSaver.js:111 All data successfully read!
filesUploadController.js:66 Conexão fechada
native ReadableStream.js:393 Uncaught (in promise) TypeError: network error
bluebird.js:1542 Unhandled rejection TypeError: network error
printWarning @ bluebird.js:1542
formatAndLogError @ bluebird.js:1258
fireRejectionEvent @ bluebird.js:1283
Promise._notifyUnhandledRejection @ bluebird.js:729
Async._drainQueue @ bluebird.js:190
Async._drainQueues @ bluebird.js:198
Async.drainQueues @ bluebird.js:69
$eval @ angular.js:17972
$digest @ angular.js:17786
(anonymous) @ angular.js:18011
completeOutstandingRequest @ angular.js:6111
(anonymous) @ angular.js:6390
sw.js:68 Handleing  https://jimmywarting.github.io/StreamSaver.js/intercept-me-nr0.8095568425031883

Save to different folder

Hi,
Cheers for the great js file..

I am looking into saving to a different folder; may I know if it is possible?
I noticed the file currently always downloads to the default downloads folder.
Thanks in advance.

fetch example code to angularjs

I have converted the fetch example code to AngularJS syntax. However, the file at the URL is not downloaded. I guess the done value is never set and the tab allocates as much memory as possible.

$scope.file = streamSaver.createWriteStream("filename.txt");

fetch(url).then(function(res) {
    var reader = res.body.getReader();
    $scope.pump = function(file, reader) {
        return reader.read().then(function(value, done) {
            if (done) {
                file.close();
                return;
            }
            file.write(value);
            return $scope.pump(file, reader);
        });
    };
    $scope.pump($scope.file, reader);
});
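
My reading, not a confirmed answer: reader.read() resolves with a single { value, done } object, not two arguments, so done is always undefined above and close() never runs. A sketch of a corrected pump (using the getWriter() API of current versions):

$scope.file = streamSaver.createWriteStream("filename.txt");

fetch(url).then(function (res) {
    var reader = res.body.getReader();
    var writer = $scope.file.getWriter();
    $scope.pump = function () {
        return reader.read().then(function (result) {
            if (result.done) return writer.close();
            return writer.write(result.value).then($scope.pump);
        });
    };
    return $scope.pump();
});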

Download url showing as github.io

When downloading, Chrome shows that the download is coming from

https://jimmywarting.github.io/StreamSaver.js/intercept-me-nr0.5528359912470291

Is there a way we can change this behaviour? I am running from https with a valid certificate.
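
Not an official answer, but newer releases let you point StreamSaver at helper pages hosted on your own origin, which should make the download appear to come from your domain (the URL below is a placeholder, and the mitm override should be treated as an assumption for older versions):

// host mitm.html (and the sw.js it registers) on your own https origin
streamSaver.mitm = 'https://your-domain.example/streamsaver/mitm.html'

const fileStream = streamSaver.createWriteStream('file.bin')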


Safari support detection

Update:

(2019-06-18) by @jimmywarting

I want to keep this issue open until Safari gets "real" support for downloading something that is generated by a service worker (even if it's not a stream). Think of this as a good reference to some Safari bugs and background on what's wrong with Safari,

because we eventually need to remove this special edge case, a hard-coded scenario that isn't such a graceful degradation when it comes to Safari:

let useBlobFallback = /constructor/i.test(window.HTMLElement) || !!window.safari

I don't know yet how we should detect support in Safari, as it yields false positives atm; maybe we unwillingly have to do UA sniffing instead of feature detection... but that will be a task for later, when Safari can really save something from a service worker.

Safari bug:
https://bugs.webkit.org/show_bug.cgi?id=182848 <- related but does not solve the issue.
https://bugs.webkit.org/show_bug.cgi?id=201580 <- new bug report

Original issue

First of all, just want to thank you for this great work, it works beautifully in Chrome!

I see Safari's missing serviceWorker support has landed in their latest version, 11.1. https://caniuse.com/#search=serviceWorker
So I tested in Safari v11.1.2 but it is not working; am I missing something?
(For testing, I removed the support check for WritableStream.)

Local test doesn't work in chrome

I tried out the local test in Chromium 53.
Attempting the text example leads to:
"Uncaught (in promise) DOMException: Failed to get a ServiceWorkerRegistration: The user denied permission to use Service Worker."

There was no request for any permission.

Javascript - Abort a download

The Response constructor only accepts a ReadableStream atm, but a WritableStream will come eventually (I hope). Until then I'm trying to map a getWriter().abort() function to call ReadableStream.cancel(), but that is not possible:

TypeError: Cannot cancel a readable stream that is locked to a reader(…)

So I try to cancel the last thing it was locked to... the Response constructor:
res.body.cancel()
But that is also locked to a reader, a reader out of my control.

So I'm going with my last resort, which is to put the readableStream into an errored state:
1108a2c, f60b23e

It doesn't seem right to me to call error when there was no error; it has only been aborted/canceled.

But even after putting the readable stream into an errored state, it still looks like Chrome is downloading the file.
I don't know if it's a bug in Chrome/streams/fetch or if I'm doing something incorrect. Or isn't this going to be possible until the Response constructor can accept a WritableStream as the body?

Maybe a friendly ping to @domenic can help me
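
For reference, the workaround described above roughly amounts to holding on to the ReadableStream's controller and erroring it when the user aborts; a stripped-down sketch, not the library's exact code:

let streamController
const readable = new ReadableStream({
  start (controller) { streamController = controller },
  pull (controller) { /* enqueue chunks arriving from the page */ }
})

function abortDownload () {
  // no real error happened; erroring the stream is just the only lever available right now
  streamController.error(new Error('download aborted by the user'))
}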

Chrome doesn't seem to write to crdownload until more than 32KB of data is written

First off, amazing work!

My expectation is that a partially written file will have contents in the corresponding .crdownload file. The caveat in the first aha moment suggests that data should show up after the first 1024 bytes or so. But from a simple test, it seems like Chrome 61 isn't actually writing to the .crdownload file until more than 32 KB has been written.

Here's a simple test case: https://jsfiddle.net/04tdcvb5/1/ (it writes ~32KB, shows an alert, and writes to file ~20 seconds later)

Fiddle code:
const encoder = new TextEncoder;
const { createWriteStream, supported, version } = window.streamSaver;

if(!supported) throw new Error("StreamSaver not supported");

const fileStream = streamSaver.createWriteStream('streamsaver.txt');
const writer = fileStream.getWriter()

writer.write(encoder.encode("Hello " + " ".repeat(1024*32)));
alert("Check streamsaver.txt!  In 20 seconds 'World!' will be appended");
setTimeout( () => {
	writer.write(encoder.encode("World!"));
	writer.close();
}, 20000);

During the timeout, the streamsaver.txt.crdownload file is empty. Note: The final streamsaver.txt file has the correct output.

Changing the size to ~34KB does work: https://jsfiddle.net/04tdcvb5/

Is this expected behavior or a regression in Chrome?

Pause stream when user pauses the download

The ReadableStream in the service worker keeps pulling for more data even though the client paused the download. (The desiredSize is always 1.)

This makes you think it's okay to keep pulling data from the internet and piping it to the service worker, while in fact it isn't.

cc @yutakahirano

polyfill broken!

The polyfill file you recommend using is missing dependencies! :(

Save Failure - Network Error (due to Service Worker Restart)

Great work on StreamSaver.js! I'm trying to use this to download and save a large file but I am consistently running across a problem with Chrome where the download fails about 10 minutes in with "Failed - Network Error". I've done a good amount of debugging and see that Chrome, which manages the Service Worker lifecycle, seems to kill/restart the service worker every 10 minutes or so. Have you run into this issue?

I was able to reproduce it using your example.html file by simply slowing down the lorem ipsum text generator as follows (I added a 1000 ms timeout):

writer.write(text).then(() => { setTimeout(pump, 1000) })

You'll notice that when you generate 30GB, the save fails about 10 minutes in. Make sure you do not have Developer Tools running as Chrome doesn't restart the service worker when developer tools are open (for troubleshooting purposes).

Any thoughts on how to get around this problem?
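
One workaround that gets discussed for this class of problem is a keep-alive ping between the page and the service worker, so the browser keeps seeing activity; a hedged sketch (message names are made up, and whether it buys enough lifetime is exactly what would need testing):

// page side: ping the worker while the download is in progress
const keepAlive = setInterval(() => {
  if (navigator.serviceWorker.controller) navigator.serviceWorker.controller.postMessage('ping')
}, 10000)
// clearInterval(keepAlive) once writer.close() has resolved

// sw.js side: answering the ping counts as handled activity
self.addEventListener('message', evt => {
  if (evt.data === 'ping' && evt.source) evt.source.postMessage('pong')
})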

ReferenceError: WritableStream is not defined

Hello,
I am working on a JS React application using webpack.
I installed the npm package:
npm install streamsaver

In the file where I am using the feature I imported streamSaver:

import streamSaver from 'streamsaver';

In the code I added:

let fileStream = streamSaver.createWriteStream('teststream.csv');
let writer = fileStream.getWriter();

As soon as the code tries to create the WriteStream I get the following error:

uncaught at appSaga 
 at takeEvery 
 at callApiReadExport 
    ReferenceError: WritableStream is not defined
    at Object.createWriteStream (webpack:///./~/StreamSaver/StreamSaver.js?:89:3)
    at callApiReadExport$ (webpack:///./app/containers/App/sagas.js?:347:77)
    at tryCatch (webpack:///./~/regenerator-runtime/runtime.js?:63:40)

Where am I making a mistake?
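
Not a confirmed fix, but the error suggests no global WritableStream exists where the code runs; one approach is to hand StreamSaver a ponyfill explicitly (the streamSaver.WritableStream override is documented for newer releases, so treat it as an assumption for the version installed here):

import streamSaver from 'streamsaver';
import { WritableStream } from 'web-streams-polyfill/ponyfill';

// give StreamSaver an implementation when the browser has none
streamSaver.WritableStream = streamSaver.WritableStream || WritableStream;

const fileStream = streamSaver.createWriteStream('teststream.csv');
const writer = fileStream.getWriter();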
