did-io's Issues

Use a good default iteration count

Use this as a guideline for determining a good default PBE iteration count:

http://security.stackexchange.com/questions/3959/recommended-of-iterations-when-using-pkbdf2-sha256

You should use the maximum number of rounds which is tolerable,
performance-wise, in your application. The number of rounds is a slowdown factor,
which you use on the basis that under normal usage conditions, such a slowdown
has negligible impact for you (the user will not see it, the extra CPU cost does not
imply buying a bigger server, and so on). This heavily depends on the operational
context: what machines are involved, how many user authentications per second...
so there is no one-size-fits-all response.

The wide picture goes thus:

  • The time to verify a single password is v on your system. You can adjust this time
    by selecting the number of rounds in PBKDF2. A potential attacker can gather f times
    more CPU power than you (e.g. you have a single server, and the attacker has 100
    big PC, each being twice faster than your server: this leads to f=200).
  • The average user has a password of entropy n bits (this means that trying to guess
    a user password, with a dictionary of "plausible passwords", will take on
    average 2^(n-1) tries).
  • The attacker will find your system worth attacking if the average password can be
    cracked in time less than p (that's the attacker's "patience").

Your goal is to make the average cost to break a single password exceed the
attacker's patience, so that he does not even try, and instead goes on to
concentrate on another, easier target.

...

So the remaining parameter is v. With f = 200 (an attacker with a dozen good
GPUs), a patience of one month, and n = 32, you need v to be at least 8 milliseconds.
So you should set the number of rounds in PBKDF2 such that computing it over a
single password takes at least that much time on your server. You will still be able
to verify 120 passwords per second with a single core, so the CPU impact should
be negligible for you. Actually, it is safer to use more rounds than that, because,
let's face it, getting 32 bits worth of entropy out of the average user password is a
bit optimistic.

So if we think a month is enough to deter an attacker with a dozen good GPUs, our time to generate a key should be at least 8ms. We can easily go higher than this without hurting the UX, and we should. We just need to consider that this will run on a range of browsers and CPUs.

We may also want to randomize the number of iterations, with a certain required minimum.

CachedResolver is a weird name

I think CachedResolver is kind of a weird name. It sounds like the resolver itself is cached, versus what I'm guessing it actually is: a resolver that has a cache for whatever it resolves. It's not clear why we didn't just name it DidResolver, with the cache being simply a feature of it. Can we do that in a newer version?

Consider cache auto-updater

Since fetching some DID docs can be a slow process, we may want to engineer something to keep caches warm independently of resolution requests. What this means is that we could schedule a timer to check for cache entries that are close to expiring, and re-resolve them in a background process. This would help reduce wait times when popular values have expired from the cache and just need to be refreshed.
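A minimal sketch of that idea, assuming a cache whose entries carry `expiresAt`/`ttl` fields and a resolver with a `resolve(did)` method (all hypothetical names, not did-io's actual API):

```javascript
// Sweep the cache and re-resolve entries that are about to expire.
async function refreshExpiring({cache, resolver, refreshWindowMs = 30000}) {
  const now = Date.now();
  for (const [did, entry] of cache.entries()) {
    if (entry.expiresAt - now < refreshWindowMs) {
      try {
        const doc = await resolver.resolve(did);
        cache.set(did, {...entry, doc, expiresAt: now + entry.ttl});
      } catch (e) {
        // Keep the stale entry; it will be retried on the next sweep.
      }
    }
  }
}

// Schedule the sweep as a background timer.
function startCacheWarmer(options, checkIntervalMs = 60000) {
  return setInterval(() => refreshExpiring(options), checkIntervalMs);
}
```

Failed refreshes deliberately leave the stale entry in place, so a flaky network doesn't evict popular values.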

Ensure UUID type DIDs have DID url-to-document-contents integrity protection

Should we perhaps deprecate the uuid type DID generation? (Unlike the cryptonym, it does not contain proof of the document's integrity in the URL itself.)

Ensure that UUID type DIDs have some sort of integrity protection (that ensures that the document that gets resolved at the URL is the same document you expected). Cryptonyms currently have this property (the DID is generated from a key in the DID Doc), but UUID types do not.

Perhaps we can use something like Resource Integrity Protection (or a simplified version of Decentralized Autonomic Data)?

Proper handling of unknown DIDs

const context = didDoc['@context'];

I'm dealing with a negative test in bedrock-ledger-validator-signature where the DID does not exist for a mock key 'did:v1:777ea7ad-ab68-4039-b85b-a45a795b2d93/keys/1'

I'm using v0.7.0 against genesis.testnet.veres.one, is that proper?

What seems to be happening is that the request made at client.js:86 is returning an empty document, which leads to:

TypeError: Cannot read property '@context' of undefined
    at VeresOneClient.get (/home/matt/dev/bedrock-dev/bedrock-ledger-validator-signature/test/node_modules/did-io/lib/methods/veres-one/client.js:86:27)
    at process._tickCallback (internal/process/next_tick.js:68:7)
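One possible fix is to guard the lookup before dereferencing, surfacing a 404-style error for unknown DIDs instead of a TypeError. The function and error shape below are hypothetical, sketched from the stack trace above:

```javascript
// Guard against an empty/undefined DID document before reading '@context'.
function extractContext(didDoc, did) {
  if (!didDoc) {
    const error = new Error(`DID not found: ${did}`);
    error.name = 'NotFoundError';
    error.status = 404;
    throw error;
  }
  return didDoc['@context'];
}
```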

Where should `attachInvocationProof` live?

@dlongley @dmitrizagidulin

attachInvocationProof({operation, capability, capabilityAction, creator,
  algorithm, privateKeyPem, privateKeyBase58}) {
  // FIXME: use `algorithm` and validate private key, do not switch off of it
  if(privateKeyPem) {
    algorithm = 'RsaSignature2018';
  } else {
    algorithm = 'Ed25519Signature2018';
  }
  // FIXME: validate operation, capability, creator, and privateKeyPem
  // TODO: support `signer` API as alternative to `privateKeyPem`
  const jsigs = this.injector.use('jsonld-signatures');
  return jsigs.sign(operation, {
    algorithm,
    creator,
    privateKeyPem,
    privateKeyBase58,
    proof: {
      '@context': constants.VERES_ONE_V1_CONTEXT,
      proofPurpose: 'capabilityInvocation',
      capability,
      capabilityAction
    }
  });
}

Do we need it at all?

Should it be part of crypto-ld?

If we need the wrapper, we know that we want to be able to add these sorts of proofs to operations other than DID Documents.

Rewrite to remove node-specific dependencies

There are a number of dependencies that are node specific (or that can run in the browser but are unnecessarily large for what they are providing).

We should remove (from at least the browser-side implementation):

  • async
  • request
  • lodash

Add a buildDocumentLoader() convenience method to CachedResolver

Add a buildDocumentLoader() convenience method, with the following functionality:


An initialized CachedResolver instance provides a convenience method that
builds a documentLoader instance that works for its registered DID methods.
This loader can be further composed with other compatible JSON-LD document
loaders.

const resolver = new CachedResolver();
resolver.use(didMethodDriver1);
resolver.use(didMethodDriver2);

const documentLoader = resolver.buildDocumentLoader();

// The resulting documentLoader function now supports getting DID documents
// for did method 1 and 2, as well as fetching public keys from those DIDs.
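A sketch of what such a method could return, assuming the resolver exposes a `get({url})` lookup; the return shape follows the common jsonld.js documentLoader convention, and the fallback-loader composition is an assumption based on the "further composed" note above:

```javascript
// Build a JSON-LD documentLoader that delegates DID URLs to the resolver
// and everything else to an optional fallback loader.
function buildDocumentLoader(resolver, fallbackLoader) {
  return async function documentLoader(url) {
    if (url.startsWith('did:')) {
      const document = await resolver.get({url});
      return {contextUrl: null, documentUrl: url, document};
    }
    if (fallbackLoader) {
      // Compose with another compatible JSON-LD document loader.
      return fallbackLoader(url);
    }
    throw new Error(`Could not load document: ${url}`);
  };
}
```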

Add some validation to catch null/undefined hostnames

Default settings in bedrock-did-client set the hostname to 'null' which can get passed into did-io here:

https://github.com/digitalbazaar/bedrock-did-client/blob/did-io-0.7.x/lib/index.js#L35

This results in an error thrown out of this module:

Could not fetch ledger agents: { FetchError: request to https://null/ledger-agents failed, reason: getaddrinfo ENOTFOUND null null:443
    at ClientRequest.<anonymous> (/home/matt/dev/bedrock-dev/bedrock-ledger-validator-signature/test/node_modules/node-fetch/lib/index.js:1345:11)
    at ClientRequest.emit (events.js:182:13)
    at TLSSocket.socketErrorListener (_http_client.js:391:9)
    at TLSSocket.emit (events.js:182:13)
    at emitErrorNT (internal/streams/destroy.js:82:8)
    at emitErrorAndCloseNT (internal/streams/destroy.js:50:3)
    at process._tickCallback (internal/process/next_tick.js:63:19)
  message:
   'request to https://null/ledger-agents failed, reason: getaddrinfo ENOTFOUND null null:443',
  type: 'system',
  errno: 'ENOTFOUND',
  code: 'ENOTFOUND' }

There should be some validation of the hostname before attempting to connect to https://null.

The mode and did parameters should be validated as well, if they aren't already.
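A minimal validation sketch (hypothetical helper, not did-io's API). The literal strings 'null' and 'undefined' are checked explicitly because misconfigured callers can stringify those values into the URL, exactly as in the error above:

```javascript
// Reject empty, stringified-null, or structurally invalid hostnames
// before any connection is attempted.
function assertValidHostname(hostname) {
  if (typeof hostname !== 'string' || hostname.length === 0 ||
    hostname === 'null' || hostname === 'undefined') {
    throw new TypeError(`Invalid hostname: "${hostname}"`);
  }
  // A coarse structural check; real validation could go further.
  if (!/^[a-zA-Z0-9.-]+(:\d+)?$/.test(hostname)) {
    throw new TypeError(`Invalid hostname: "${hostname}"`);
  }
  return hostname;
}
```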

Support unregistered/pairwise DID use case

Since there's no way to tell if a DID is registered or unregistered/pairwise (and this is by design), the resolution logic should be:

  1. do a GET to the ledger, check if it exists
  2. If doesn't exist (specifically a 404 error), then check local didStore to see if it's there, and return.
  3. If the DID also does not exist in local did store, then re-throw the 404 error
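The three steps above can be sketched as follows; `ledgerClient.get` and `didStore.get` are hypothetical names for the remote and local lookups:

```javascript
async function resolveDid({did, ledgerClient, didStore}) {
  try {
    // 1. Do a GET to the ledger, check if it exists.
    return await ledgerClient.get(did);
  } catch (e) {
    // 2. On a 404 specifically, check the local didStore.
    if (e.status === 404) {
      const localDoc = await didStore.get(did);
      if (localDoc) {
        return localDoc;
      }
    }
    // 3. Not on the ledger and not local: re-throw the original error.
    throw e;
  }
}
```

Note that only a 404 triggers the local fallback; network or server errors propagate unchanged, so a flaky ledger doesn't silently mask registered DIDs as pairwise ones.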

Can bad things happen in the `default` case?

return RSAKeyPair.from(data, options);

It appears to me that there could be some unintended consequences from having this default case. This module really only supports two key types, is that correct? I think we should just be explicit about that and throw if the keyType is not one of the acceptable types.

I would add that I think the use of switch with a fall-through to a default case is a pattern to be avoided. I think there is good reasoning behind DB's strong preference for the if...return pattern whenever possible, with if...else only when it's absolutely necessary. I just did a quick search, and it appears that switch is used in only one place in the entire bedrock backend code base.
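Sketched in the suggested if...return style, handling the two supported key types explicitly and throwing on anything else; `RSAKeyPair` mirrors the name in the snippet above, and the type strings and `Ed25519KeyPair` counterpart are assumptions:

```javascript
// Explicit dispatch: no default fall-through, unknown types throw.
function keyPairFrom(data, options) {
  if (data.type === 'RsaVerificationKey2018') {
    return RSAKeyPair.from(data, options);
  }
  if (data.type === 'Ed25519VerificationKey2018') {
    return Ed25519KeyPair.from(data, options);
  }
  throw new Error(`Unsupported key type: "${data.type}"`);
}
```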
