particle-iot / spark-server

441 stars · 99 watchers · 136 forks · 86 KB

UNMAINTAINED - An API compatible open source server for interacting with devices speaking the spark-protocol

Home Page: https://www.particle.io/

License: GNU Affero General Public License v3.0

JavaScript 100.00%

spark-server's People

Contributors

dmiddlecamp, kennethlimcp

spark-server's Issues

Photon not working with spark-server

I bought a Photon device and set up my own spark-server, but it does not work. It seems spark-protocol has been updated but spark-server has not. Are updates coming soon?

DFU failing on raspberry pi

Hi, I followed your instructions for setting up spark-server on my Raspberry Pi. Now if I want to get the keys onto the core, this happens:

pi@raspberrypi /spark/spark-server/js $ spark keys server default_key.pub.pem 192.168.8.14
Creating DER format file
running openssl rsa -in  default_key.pub.pem -pubin -pubout -outform DER -out default_key.pub.der
checking file  default_key.pub192_168_8_14.der
spawning dfu-util -d 1d50:607f -a 1 -i 0 -s 0x00001000 -D default_key.pub192_168_8_14.der
dfu-util 0.7

Filter on vendor = 0x1d50 product = 0x607f
Cannot open device
Opening DFU capable USB device... Make sure your core is in DFU mode (blinking yellow), and is connected to your computer
Error -

but the core is connected and the yellow LED is blinking + dfu-util -l gives me this:

Found DFU: [1d50:607f] devnum=0, cfg=1, intf=0, alt=0, name="UNDEFINED"
Found DFU: [1d50:607f] devnum=0, cfg=1, intf=0, alt=1, name="UNDEFINED"

Do you have an idea what's wrong here? Thanks.

Core list not refreshed when claimed

Currently, the server has to be restarted in order for spark list to show the new cores added to the local ☁️

It seems like there's a cache at play, since the required files are found in the core_keys directory but are not read by the server.

SparkServer in production

Hi, I want to configure a spark-server in my server cluster. It will run on a private network, with a load balancer exposing port 8080 for this server.
But I cannot get a Photon to connect to this cloud. Is there anything I need to know about the load balancer settings? Besides port 8080, do I need to open any other port for the device connection?

Not compatible with 0.5.x firmware

Updated one core with 0.5.1 spark-firmware, and now it does not connect to the spark-server with this error:

Connection from: 10.10.10.48, connId: 13888
on ready { coreID: '1a0028001847343338333633',
  ip: '10.10.10.48',
  product_id: 6,
  firmware_version: 3,
  cache_key: '_13887' }
Core online!
1: Core disconnected: socket error Error: read ECONNRESET { coreID: '1a0028001847343338333633',
  cache_key: '_13886',
  duration: 15.027 }
Session ended for _13886

Photons subscribed to SSEs do not return data

I've tested my local cloud and spotted the following:

  • Photons are able to publish using the Spark.publish() routine
  • This same data can be read off the Cloud API (e.g. http://cloud.local:8080/v1/events/someVar?access_token=[accesstoken])
  • However, another Photon set up to subscribe to that published event will never receive the data

EDIT: running particle subscribe (pointed at the local cloud) does reveal that the 1st Photon's published events are being accepted by spark-server.

The same basic code works when tested against the Particle.io cloud.
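To help narrow it down, here is a minimal Node.js listener sketch (assuming the eventsource npm package, and with the host and token as placeholders) that watches the same SSE stream the CLI uses; if events show up here but never on the subscribed Photon, the problem is on the device-subscription side rather than in the event stream itself:

    // Minimal SSE check against the local cloud (host and token are placeholders).
    // Requires: npm install eventsource
    var EventSource = require('eventsource');

    var url = 'http://cloud.local:8080/v1/events/someVar?access_token=ACCESS_TOKEN';
    var es = new EventSource(url);

    // The event stream sends the published event name as the SSE event type,
    // with a JSON payload in the data field.
    es.addEventListener('someVar', function (message) {
        console.log('event received:', JSON.parse(message.data));
    });

    es.onerror = function (err) {
        console.error('SSE error:', err);
    };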

Installing dependencies fails

OS: Windows 10

C:\Users\Viktor\Documents\spark-server>npm install
-


> [email protected] install C:\Users\Viktor\Documents\spark-server\node_modules\ursa
> node-gyp rebuild

\
C:\Users\Viktor\Documents\spark-server\node_modules\ursa>if not defined npm_config_node_gyp (node "C:\Program Files\nodejs\node_modules\npm\bin\node-gyp-bin\\..\..\node_modules\node-gyp\bin\node-gyp.js" rebuild )  else (node  rebuild )
Building the projects in this solution one at a time. To enable parallel build, please add the "/m" switch.
MSBUILD : error MSB4132: The tools version "2.0" is unrecognized. Available tools versions are "4.0".
gyp ERR! build error
gyp ERR! stack Error: `C:\Windows\Microsoft.NET\Framework\v4.0.30319\msbuild.exe` failed with exit code: 1
gyp ERR! stack     at ChildProcess.onExit (C:\Program Files\nodejs\node_modules\npm\node_modules\node-gyp\lib\build.js:269:23)
gyp ERR! stack     at ChildProcess.emit (events.js:110:17)
gyp ERR! stack     at Process.ChildProcess._handle.onexit (child_process.js:1074:12)
gyp ERR! System Windows_NT 6.3.9600
gyp ERR! command "node" "C:\\Program Files\\nodejs\\node_modules\\npm\\node_modules\\node-gyp\\bin\\node-gyp.js" "rebuild"
gyp ERR! cwd C:\Users\Viktor\Documents\spark-server\node_modules\ursa
gyp ERR! node -v v0.12.7
gyp ERR! node-gyp -v v2.0.1
gyp ERR! not ok
npm ERR! Windows_NT 6.3.9600
npm ERR! argv "C:\\Program Files\\nodejs\\\\node.exe" "C:\\Program Files\\nodejs\\node_modules\\npm\\bin\\npm-cli.js" "install"
npm ERR! node v0.12.7
npm ERR! npm  v2.11.3
npm ERR! code ELIFECYCLE

npm ERR! [email protected] install: `node-gyp rebuild`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the [email protected] install script 'node-gyp rebuild'.
npm ERR! This is most likely a problem with the ursa package,
npm ERR! not with npm itself.
npm ERR! Tell the author that this fails on your system:
npm ERR!     node-gyp rebuild
npm ERR! You can get their info via:
npm ERR!     npm owner ls ursa
npm ERR! There is likely additional logging output above.

npm ERR! Please include the following file with any support request:
npm ERR!     C:\Users\Viktor\Documents\spark-server\npm-debug.log

C:\Users\Viktor\Documents\spark-server>

ignore `.DS_Store` and other files

For functions that check a directory for files, we should add a filter based on the list in settings.js (a sketch follows after the list), which is:

notSourceExtensions: [
        ".ds_store",
        ".jpg",
        ".gif",
        ".png",
        ".include",
        ".ignore",
        ".ds_store",
        ".git",
        ".bin"
    ],
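A rough sketch of what that filter could look like (a hypothetical helper, not existing spark-server code), dropping any directory entry whose extension or name matches notSourceExtensions:

    var fs = require('fs');
    var path = require('path');
    var settings = require('./settings');   // assumes notSourceExtensions lives here

    // Hypothetical helper: keep only files that are not on the blacklist.
    function listSourceFiles(dir) {
        return fs.readdirSync(dir).filter(function (name) {
            var ext = path.extname(name).toLowerCase();
            var base = name.toLowerCase();
            // extname('.DS_Store') is '', so also compare the whole lowercased name.
            return settings.notSourceExtensions.indexOf(ext) === -1 &&
                   settings.notSourceExtensions.indexOf(base) === -1;
        });
    }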

ursa dependency

Can we switch over to a soft (optional) dependency for this module?

We had this issue with Spark-CLI previously, since ursa requires Python in order to install correctly.
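One possible direction (a sketch, not a tested change): list ursa under optionalDependencies in package.json so npm install does not fail outright when the native build breaks, and guard the require:

    // Sketch of a soft dependency on ursa: fall back when the native build is missing.
    // (Assumes ursa is moved to optionalDependencies in package.json.)
    var ursa = null;
    try {
        ursa = require('ursa');
    } catch (err) {
        console.warn('ursa is not available, falling back:', err.message);
    }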

Encryption Scheme

I have a question about the encryption scheme Spark uses (RSA & AES): why not SSL/TLS? Are there advantages?

Events not displaying properly

Using spark subscribe I can see the events published by the core to the local cloud.

However, in the Chrome browser, the only message displayed is the initial:

ok:

Maybe a better tool can be used to check whether the events are publishing properly and can be received by tools other than spark-cli.

`spark login` failed with LC

I added the "apiUrl": "http://192.168.1.10:8080"

and after entering an email + password, the error is:

login error:  { [Error: getaddrinfo ENOTFOUND] code: 'ENOTFOUND', errno: 'ENOTFOUND', syscall: 'getaddrinfo' }
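For reference, the setting in question lives in the CLI profile file (typically ~/.spark/spark.config.json; the IP here is just my local server):

    {
        "apiUrl": "http://192.168.1.10:8080"
    }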

Access tokens expiring caught me completely off guard

Please let me know if I should move this issue to a better spot.

My access token expired and I had no idea that this could happen. I understand it is probably there for a good reason (security, etc.), but I think the user should be better informed in any of the following ways:

  1. Notice of expiration date when creating access tokens ("This token will expire on xxxx-xx-xx.")
  2. Email reminding 1 week before token expiration
  3. Change the error message from "Authorization is required to perform that action." to "Expired access token."

Thank you for a truly amazing platform -- can't wait to see where this goes!

submit public key error

I used spark keys doctor and got this:

attempting to add a new public key for core kennethlimcp
submitPublicKey got error:  Permission Denied
Okay!  New keys in place, your core should restart.

The keys file was generated in the spark-server/js folder.

looking for core pub.pem in wrong location

Expected to find public key for core 48ff6a065067555008342387 at /Volumes/dd/github/spark-server/js/node_modules/spark-protocol/data/48ff6a065067555008342387.pub.pem
1: Core disconnected: Handshake did not complete in 120 seconds { coreID: '48ff6a065067555008342387', cache_key: undefined }
Session ended for 4
Handshake failed:  Handshake did not complete in 120 seconds { ip: '172.16.10.240',
  cache_key: undefined,
  coreID: '48ff6a065067555008342387' }
1: Core disconnected: Handshake did not complete in 120 seconds { coreID: '48ff6a065067555008342387', cache_key: undefined }
Session ended for 4
Handshake failed:  Handshake did not complete in 120 seconds { ip: '172.16.10.240',
  cache_key: undefined,
  coreID: '48ff6a065067555008342387' }
Connection from: 172.16.10.240, connId: 5
Expected to find public key for core 48ff6a065067555008342387 at /Volumes/dd/github/spark-server/js/node_modules/spark-protocol/data/48ff6a065067555008342387.pub.pem

Error: Invalid CoAP version

I get errors like this:

Connection from: ::ffff:10.10.10.88, connId: 166
CryptoStream transform error TypeError: Cannot read property 'length' of null
CryptoStream transform error TypeError: Cannot read property 'length' of null
on ready { coreID: '3e0023000747343232363230',
  ip: '::ffff:10.10.10.88',
  product_id: 6,
  firmware_version: 65535,
  cache_key: '_165' }
Core online!
CryptoStream transform error TypeError: Cannot read property 'length' of null
Coap Error: Error: Invalid CoAP version. Expected 1, got: 0
routeMessage got a NULL coap message  { coreID: '3e0023000747343232363230' }
CryptoStream transform error TypeError: Cannot read property 'length' of null
CryptoStream transform error Error: error:06065064:digital envelope routines:EVP_DecryptFinal_ex:bad decrypt
Coap Error: Error: Unknown message code: 58
routeMessage got a NULL coap message  { coreID: '3e0023000747343232363230' }

I think this error, CryptoStream transform error TypeError: Cannot read property 'length' of null, is related to this PR, which has probably not been merged yet.

I wonder about this one too though: Coap Error: Error: Invalid CoAP version. Expected 1, got: 0 - I see that a lot, and sometimes got: 2 or got: 3.

Any ideas?

reduce timeout for `spark list`

When no cores are online, the response takes a long time before returning the list of core status.

It would be awesome for this timeout to be configurable by the person commissioning the server.

Unable to call function or variable via API

I don't quite understand what's going on here. My curl/API request is returning 'Not Found' instead of JSON replies.

I used the following command in my cmd:
curl http://[LOCAL IP:PORT]/v1/device/[SPARK ID]/[FUNCTION]?access_token=[TOKEN]

Over on the server end, it's logging:
GET /v1/device/[SPARK ID]/[FUNCTION]?access_token=[TOKEN] HTTP/1.1" 404 9 "-" "curl/7.30.0"

Hope someone can enlighten me on what's going on. Thanks in advance!
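One thing worth checking: the documented endpoints use devices (plural). A hedged example of the same calls in that form, with all bracketed values as placeholders:

    # read a variable
    curl http://[LOCAL IP:PORT]/v1/devices/[SPARK ID]/[VARIABLE]?access_token=[TOKEN]

    # call a function (arguments go in the POST body)
    curl http://[LOCAL IP:PORT]/v1/devices/[SPARK ID]/[FUNCTION] \
         -d access_token=[TOKEN] -d "args=some-argument"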

Subscribe does not work

Photon with 0.5.1 firmware does not subscribe.

The Photon becomes unresponsive (steady, not breathing, cyan light) and no further code is executed.

I have tested the same code against the particle cloud and it works there.

Subscribing to SSE using prefix filter does not work

Currently, subscribing to events on the local cloud requires the name of the subscribed/published event to match exactly.

This is different from the behaviour on the Particle cloud, which supports prefix filtering.

In other words, if 2 Particle devices on a local cloud are publishing weather/temp and weather/light respectively, there is no way to subscribe to a weather event and get both data streams simultaneously. This however is documented as a possible method for the Particle.io cloud – when will we see this supported on spark/particle-server?

Also, any use of the / slash in an event name produces Not Found errors through the cloud API.

p.s. this might be related to how it's not possible to subscribe to device-specific events (#53) on the spark-server, and how the api url requests are being parsed?
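For illustration only (not the actual spark-protocol code), the kind of prefix check the event routing would need is roughly:

    // Illustrative prefix filter: a subscription to "weather" should match
    // "weather/temp", "weather/light", and "weather" itself.
    function eventMatchesFilter(eventName, filter) {
        if (!filter) {
            return true;                         // no filter: receive everything
        }
        return eventName.indexOf(filter) === 0;  // prefix match
    }

    // eventMatchesFilter('weather/temp', 'weather')  -> true
    // eventMatchesFilter('humidity', 'weather')      -> false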

spark-server creates new keys in current directory

If you start the spark server with 'node /path/to/main.js', it will create new server keys in the directory it was called from instead of reading the keys already created in /path/to/.

Example:
Let's say I'm in /home/pi and I run 'node ./spark-server/js/main.js'; it ignores the keys it already made in /home/pi/spark-server/js and creates a new set for the server in /home/pi.
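A possible fix sketch (file names are illustrative): resolve the key paths relative to the script's own directory instead of the working directory, e.g.:

    var path = require('path');

    // Illustrative: anchor the key locations to main.js's directory (__dirname)
    // rather than process.cwd(), so 'node /path/to/main.js' finds existing keys.
    var serverKeyFile    = path.resolve(__dirname, 'default_key.pem');
    var serverPubKeyFile = path.resolve(__dirname, 'default_key.pub.pem');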

CryptoStream error when using Node 0.12

Hello, I've found a misbehaviour when running the server using Node.js 0.12.

It throws a CryptoStream error which prevents my Core from properly completing the handshake with the server.

Here you can find the relevant forum post about the behaviour.

For future reference, I'm copying here the relevant part of the server log:

Your server IP address is: [...]
server started { host: 'localhost', port: 5683 }
Connection from: 192.168.1.6, connId: 1
CryptoStream transform error TypeError: Cannot read property 'length' of null
CryptoStream transform error TypeError: Cannot read property 'length' of null
on ready { coreID: '[...]',
  ip: '192.168.1.6',
  product_id: 65535,
  firmware_version: 65535,
  cache_key: '_0' }
Core online!
CryptoStream transform error Error: error:06065064:digital envelope routines:EVP_DecryptFinal_ex:bad decrypt
onSocketData called, but no data sent.
1: Core disconnected: socket close false { coreID: '[...]',
  cache_key: '_0',
  duration: 25.082 }
Session ended for _0

As a workaround you can use Node 0.10.36 (download it from here) and everything should work properly.

new Auth_token not detected by Spark-cli

When you first use spark setup and create a new account on the local ☁️, everything runs smoothly, but the auth_token will not be accepted when used.

You need to CTRL + C to kill the spark-server and run it again before the new auth_token is accepted.

red flashes with user firmware

It seems that, besides the Tinker app, running other firmware results in SOS panic red flashes.

Looks like 2 flashes - non-maskable interrupt fault.

Flashing via OTA or DFU resulted in the same behavior.

I will load the code in via DFU, test on the LC, then switch to the Spark Cloud and see what the result is like.

Time/Date stamp for all events.

The GET, POST, etc. requests have time/date stamps, but several cloud-related items do not have time/date stamps in the server console (e.g. cores going online, core disconnects such as 1: Core disconnected: Handshake did not complete in 120 seconds, etc.).
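A tiny sketch of the kind of wrapper that could add this (illustrative only, not the existing logger module):

    // Illustrative wrapper: prefix console output with an ISO timestamp so
    // lines like "Core online!" carry a time/date stamp too.
    var origLog = console.log;
    console.log = function () {
        var args = Array.prototype.slice.call(arguments);
        args.unshift(new Date().toISOString());
        origLog.apply(console, args);
    };

    console.log('Core online!');   // e.g. 2015-08-01T12:34:56.789Z Core online!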

core won't stay connected to local server

For some reason, most of the time, my core won't stay connected to the server. It keeps going from cyan, to blinking green, to blinking red, and from the server I see:

Your server IP address is: 192.168.x.x
server started { host: 'localhost', port: 5683 }

Connection from: 192.168.x.y, connId: 1
on ready { coreID: 'xxxxxxxxxxxxxxxxxxxxx',
  ip: '192.168.x.y',
  product_id: 0,
  firmware_version: 0,
  cache_key: '_0' }
Core online!
Connection from: 192.168.x.y, connId: 2
on ready { coreID: 'xxxxxxxxxxxxxxxxxxxxx',
  ip: '192.168.x.y',
  product_id: 0,
  firmware_version: 0,
  cache_key: '_1' }
Core online!
Connection from: 192.168.x.y, connId: 3
on ready { coreID: 'xxxxxxxxxxxxxxxxxxxxx',
  ip: '192.168.x.y',
  product_id: 0,
  firmware_version: 0,
  cache_key: '_2' }
Core online!

etc....

Sometimes, I eventually see:

 1: Core disconnected: socket error Error: read ECONNRESET { coreID: 'xxxxxxxxxxxxxxxxxxxx',
   cache_key: '_3',
   duration: 240.642 }
 Session ended for _3

And then it will connect and stay connected. Is this indicative of something being configured incorrectly?

Pushing Events through HTTP API to subscribed core not working

I want to have my core subscribe to certain events, say "alerts" for example, and then execute whatever I defined in the associated event handler. This should not be bound to a coreID; I want any core to react to all events named "alerts" once it has subscribed to that event name. Now, when I POST to /v1/devices/events using the name "alerts", the event never actually gets pushed to the core. After a lot of logger.log calls in different parts of the code, I think I have isolated the problem down to node_modules/spark-protocol/clients/SparkCore.js:

    try {
        if (!global.publisher) {
            logger.error('No global publisher');
            return;
        }

        if (!global.publisher.publish(isPublic, obj.name, obj.userid, obj.data, obj.ttl, obj.published_at, this.getHexCoreID())) {
            //this core is over its limit, and that message was not sent.
            this.sendReply("EventSlowdown", msg.getId());
            logger.log('EventSlowdown triggered' + this.getHexCoreID());
        }
        else {
            this.sendReply("EventAck", msg.getId());
            logger.log("onCoreSentEvent: sent to " + this.getHexCoreID());
        }
    }

It seems to me that global.publisher.publish is always hitting this limit. I haven't really understood how the publisher works, have some trouble interpreting the code, and might just be doing something completely wrong. If anyone else has something like this working, any advice/example is welcome; otherwise it feels like a bug to me :)

To make it more clear, I don't want the core to subscribe/react to events from other cores; I just want to have them subscribe to a designated channel "alerts" and have them decide what to do depending on the event data. The trigger should be a simple POST through the spark-server API (as defined in the spark-server docs) so that hubot scripts or whatever else can trigger these events.
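For reference, the POST I am making looks roughly like this (host and token are placeholders; the field names follow the documented event-publish API):

    curl http://[LOCAL IP:PORT]/v1/devices/events \
         -d access_token=[TOKEN] \
         -d "name=alerts" \
         -d "data=something happened" \
         -d "private=true" \
         -d "ttl=60"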

Invalid CoAP version

CryptoStream transform error TypeError: Cannot read property 'length' of null
CryptoStream transform error TypeError: Cannot read property 'length' of null
on ready { coreID: '40003b001347343339383037',
  ip: '::ffff:175.25.25.172',
  product_id: 6,
  firmware_version: 65535,
  cache_key: '_10' }
Core online!
CryptoStream transform error TypeError: Cannot read property 'length' of null
CryptoStream transform error TypeError: Cannot read property 'length' of null
Coap Error: Error: Invalid CoAP version. Expected 1, got: 3
routeMessage got a NULL coap message  { coreID: '40003b001347343339383037' }
CryptoStream transform error TypeError: Cannot read property 'length' of null
Coap Error: Error: Invalid CoAP version. Expected 1, got: 0
routeMessage got a NULL coap message  { coreID: '40003b001347343339383037' }
CryptoStream transform error Error: error:06065064:digital envelope routines:EVP_DecryptFinal_ex:bad decrypt
Coap Error: Error: Invalid CoAP version. Expected 1, got: 3
routeMessage got a NULL coap message  { coreID: '40003b001347343339383037' }

I got this error message, "Invalid CoAP version". Do you know what this means?

Suggestion: safe mode healing / fleet management

Automatically handling system-part updates on spark-server would be awesome, especially when firmware needs to be updated en masse across a large number of devices on the local cloud.

Can there be a way to manage a local fleet of devices, giving administrators the ability to batch-update devices with new system-part firmware as well as user firmware? Right now it's a process that's done via particle-cli, one device at a time.

Perhaps there can be a 'firmware' folder on spark-server where custom system parts and user firmware can be uploaded.

spark-server can then track each device's system and user firmware versions, and OTA only the differences to devices that need an update.

The [deviceid].json file stored for each claimed device in the core_keys folder seems an ideal place to track this additional information.

We can then serve a simple JS admin UI on the server to manage the files in the core_keys folder and effectively build our own fleet management console.
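Purely as a sketch of the idea (hypothetical fields, nothing spark-server stores today), each [deviceid].json could grow entries along these lines:

    {
        "deviceID": "[deviceid]",
        "claimed": true,
        "systemFirmwareVersion": "0.5.1",
        "userFirmware": "my-app-v3.bin",
        "pendingUpdate": null
    }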

`spark list` not working

I have got the core breathing cyan, the account set up, and the core connected to the cloud as shown in the terminal, but spark list fails to show it.

`provisioning` API endpoint safety check

It seems that the local cloud lacks the check the Spark cloud has for denying the addition of core public keys before a core has been claimed.

This would be a great security feature to include in the basic local cloud code.

getting auth_token over curl failed

This is the curl command:

curl http://208.xx.xxx.xxxx:8080/v1/access_tokens -u [email protected]:xxxxx

This is the output:

TypeError: Object function (options) {
    this.options = options;
} has no method 'basicAuth'
    at Object.AccessTokenViews.index (/Users/administrator/spark-server/js/lib/AccessTokenViews.js:37:44)
    at callbacks (/Users/administrator/spark-server/js/node_modules/express/lib/router/index.js:164:37)
    at param (/Users/administrator/spark-server/js/node_modules/express/lib/router/index.js:138:11)
    at pass (/Users/administrator/spark-server/js/node_modules/express/lib/router/index.js:145:5)
    at Router._dispatch (/Users/administrator/spark-server/js/node_modules/express/lib/router/index.js:173:5)
    at Object.router (/Users/administrator/spark-server/js/node_modules/express/lib/router/index.js:33:10)
    at next (/Users/administrator/spark-server/js/node_modules/express/node_modules/connect/lib/proto.js:193:15)
    at next (/Users/administrator/spark-server/js/node_modules/express/node_modules/connect/lib/proto.js:195:9)
    at Object.handle (/Users/administrator/spark-server/js/node_modules/node-oauth2-server/lib/oauth2server.js:104:11)
    at next (/Users/administrator/spark-server/js/node_modules/express/node_modules/connect/lib/proto.js:193:15)

It works OK with spark-cli (e.g. spark subscribe) but not via curl.
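The stack trace suggests that whatever object AccessTokenViews.js expects to provide basicAuth no longer has it (newer Express versions dropped express.basicAuth, for example). A hedged sketch of parsing HTTP Basic credentials by hand, which is the usual replacement for such a helper:

    // Illustrative replacement for a missing basicAuth helper:
    // decode "Authorization: Basic base64(user:pass)" manually.
    function parseBasicAuth(req) {
        var header = req.headers.authorization || '';
        var match = header.match(/^Basic\s+(.+)$/i);
        if (!match) {
            return null;
        }
        var decoded = new Buffer(match[1], 'base64').toString('utf8');
        var idx = decoded.indexOf(':');
        if (idx < 0) {
            return null;
        }
        return { username: decoded.slice(0, idx), password: decoded.slice(idx + 1) };
    }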

device list is not updated after addition of a new Photon

When I add a new Photon to the local cloud (using particle keys doctor ID on the CLI) and then call particle list, the new device is not in the list, even though it is successfully connected (keys stored in the local cloud). I need to manually restart the main.js script on the local cloud server.

Recommended Node.js version

Hello all,

I'm just getting my local spark server installed for testing, and I was wondering what the current recommended Node.js version is (0.10.x, 0.12.x, or 4.x). I didn't see it mentioned in the README, etc., so apologies if I failed to RTFM.

TIA,

Bill
