config-array's People

Contributors

dependabot[bot], fasttime, github-actions[bot], krsriq, martinez-hugo, mdjermanovic, nzakas, renovate[bot]


config-array's Issues

Vulnerability in minimatch

Our project uses your package, and a vulnerability was found in minimatch version 3.0.4, which your package depends on. Please update your dependency to at least minimatch 3.0.5. See CVE-2022-3517.

Perf: Pre-process extensions from files and ignores

In most configs, `files` patterns are likely to end with an extension (some form of `**/*.xx` or `path/to/**/*.xx`).

As part of normalize, we could pre-process all the `files` patterns and identify the set of extensions that any file that could match must have. This could then provide a fast path out of getConfig() by quickly eliminating any files whose extension isn't in the known set. It could even be exposed publicly as part of the API so that consumers could use it to directly guide file enumeration.

There are obviously a few cases that would need to turn this fast-path off, such as functions in files, or a * pattern without an extension, but I would hope in a large number of cases (at least once flat config is more widely adopted) that this optimization would be enabled.

The same could be done for ignores, except that functions would no longer disable the optimization (since they can only ignore more files, they can't un-ignore), but care would need to be taken around negated ignore patterns.

I haven't worked out all the semantics or potential corner cases yet, but it'd be good to get your thoughts on whether it's worth fleshing this out further.
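To make the fast path concrete, here's a rough sketch of deriving the extension set at normalize time. All names here are hypothetical, not part of the current API, and the extension check is a deliberately naive regex:

```javascript
// Hypothetical helper: derive the set of extensions that any matching file
// must have, or return null when some pattern defeats the optimization.
function collectExtensions(configs) {
    const extensions = new Set();

    for (const config of configs) {
        if (!config.files) {
            continue; // global element, doesn't constrain which files match
        }

        // files entries may be nested arrays (AND patterns), so flatten first
        for (const pattern of config.files.flat(Infinity)) {
            if (typeof pattern !== "string") {
                return null; // matcher functions can match any file
            }

            const match = /\.(\w+)$/.exec(pattern);
            if (!match) {
                return null; // e.g. a bare `*` pattern with no extension
            }

            extensions.add(match[1]);
        }
    }

    return extensions;
}

// getConfig() could then bail out early with something like:
// if (extensions && !extensions.has(extensionOf(filePath))) return undefined;
```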

Perf: Pre-merge global elements at normalization time

Since global elements (those without `files`) will always be applied to every file (if they are going to be processed at all), they end up being merged again and again for every file whose config is fetched (and not cached).

Rather than repeat this for potentially every file, I'd propose that during the normalization process, all global elements be pre-merged backwards through the array and eliminated.

This has the obvious benefit that each individual file has fewer array elements to process when its config is fetched, and therefore fewer merge operations are required. And depending on the merge functions of individual schemas, it could significantly reduce their overall work if the global elements apply settings that reduce the amount of processing required.

It does have some upfront cost, but we're spending an O(n) operation once at normalization time to reduce the n in the O(n * fileCount) work done at getConfig time.

This process could also be used to gather all the global ignores and eliminate those elements as well, doing the work that this.ignores usually has to do for "free".

So, to be more concrete, here's some very quick and dirty example code of what this might look like:

function preMergeGlobals(configs) {
    // Copy before reversing so we don't mutate the caller's array.
    const results = [...preMergeGlobalsInner.call(this, [...configs].reverse())];
    const globalIgnores = results.pop(); // the generator yields the gathered global ignores last
    this.length = 0;
    this.push(...results.reverse());

    // TODO: Figure out where to put the global ignores so that `this.ignores` doesn't have to traverse the array
}

function* preMergeGlobalsInner(reversedConfigs) {
    const globalIgnores = [];
    let currentGlobalToMerge;
    for (const config of reversedConfigs) {

        if (!config.files && !config.ignores) {
            currentGlobalToMerge = currentGlobalToMerge
                ? this[ConfigArraySymbol.schema].merge(config, currentGlobalToMerge) // We're iterating backwards, so the pending global comes later in the original order and merges on top
                : config;
        } else if (config.ignores && Object.keys(config).length === 1) {
            globalIgnores.push(config.ignores);
        } else {
            yield currentGlobalToMerge
                ? this[ConfigArraySymbol.schema].merge(config, currentGlobalToMerge) // We're iterating backwards, so the pending global comes later in the original order and merges on top
                : config;
        }
    }

    // Don't silently drop a global element that precedes every non-global one.
    if (currentGlobalToMerge) {
        yield currentGlobalToMerge;
    }

    yield { ignores: globalIgnores.reverse().flat() };
}

Happy to put together a PR with properly thought out code for this if you think it's worth implementing.

Perf: Pre-merge matching files patterns

In the same vein as #99, I think there's some up-front consolidation of config elements that could be done during normalization when adjacent elements share one or more files patterns in common (taking into account ignores as well).

(Adjacency might not even be required if it can be determined easily enough that two elements are completely disjoint (e.g. in the easiest case of ['**/*.a'] and ['**/*.b']), but I think that's something to leave for later.)

Given that configs are likely to have multiple elements that share some files patterns in common (due to importing configs from plugins that are applied to the same sets of files, e.g. many typescript related plugins will share { files: ['**/*.ts'] } in common), the potential for merging these configs together ahead of time presents the same benefits as merging through the global configurations - fewer elements in the array means less work per-file we get config for.

Even a partial match could be effectively merged by making the two elements disjoint, e.g.

[{ files: ['**/*.js'], something: { a: 1 } }, { files: ['**/*.js', '**/*.ts'], something: { b: 2 } }]
=>
[{ files: ['**/*.js'], something: { a: 1, b: 2 } }, { files: ['**/*.ts'], something: { b: 2 } }]

Again, happy to put together some code to flesh this idea out.
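As a sketch of that partial-match split, where a plain object spread stands in for the schema's real merge functions, and `something` is just the illustrative key from the example above:

```javascript
// Hypothetical helper: fold a partial files overlap between two adjacent
// elements into disjoint elements. A plain spread stands in for the
// schema's real merge functions.
function splitOverlap(first, second) {
    const shared = second.files.filter(p => first.files.includes(p));
    if (shared.length === 0) {
        return [first, second]; // nothing in common, leave both alone
    }

    // Merge the second element's settings into the first for the shared patterns...
    const merged = {
        ...first,
        something: { ...first.something, ...second.something }
    };

    // ...and keep the second element only for the patterns it doesn't share.
    const rest = second.files.filter(p => !first.files.includes(p));
    return rest.length > 0 ? [merged, { ...second, files: rest }] : [merged];
}
```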

Dependency Dashboard

This issue lists Renovate updates and detected dependencies. Read the Dependency Dashboard docs to learn more.

Open

These updates have all been created already. Click a checkbox below to force a retry/rebase of any.

Detected dependencies

github-actions
.github/workflows/nodejs-test.yml
  • actions/checkout v4
  • actions/setup-node v4
.github/workflows/release-please.yml
  • GoogleCloudPlatform/release-please-action v3
  • actions/checkout v4
  • actions/setup-node v4
npm
package.json
  • @humanwhocodes/object-schema ^2.0.2
  • debug ^4.3.1
  • minimatch ^3.0.5
  • @nitpik/javascript 0.4.0
  • @nitpik/node 0.0.5
  • chai 4.3.10
  • eslint 8.52.0
  • esm 3.2.25
  • lint-staged 15.0.2
  • mocha 6.2.3
  • nyc 15.1.0
  • rollup 3.28.1
  • yorkie 2.0.0
  • node >=10.10.0

  • Check this box to trigger a request for Renovate to run again on this repository

Vulnerability: debug package

Hi! There is a vulnerability identified by GitHub in the debug package.

Versions below 4.3.1 are vulnerable to ReDoS.

Affected versions of debug are vulnerable to regular expression denial of service when untrusted user input is passed into the o formatter.

Since it takes around 50,000 characters to block the event loop for 2 seconds, this is a low-severity issue.

The vulnerability was re-introduced in version 3.2.0 and then patched again in versions 3.2.7 and 4.3.1.

More info here: GHSA-gxpj-cx7g-858c

Could you update your package.json accordingly?

Expand schema with a `canonicalize` method

While delving into all the code behind the new config system in ESLint, I've noted a repeating pattern of helper functions that convert various parts of configuration into a "canonical" form - e.g. rule configuration canonicalizing severities to their number form, raw severity -> array, plugin function-style to object-style, and so on.

More generally, for many parts of the configuration, there are multiple ways to express the same thing, but there's often a single canonical way to express it that the rest of the system would have an easier time processing.

I think config-array could help out here by allowing these operations to be centralized in the schema itself. By applying the canonicalization at the right moment, every other part of the system (including, say, merge) would only need to handle the canonicalized form of the configuration, rather than converting it locally.

As an example, the ESLint rulesSchema could be expanded like so:

const ruleSeverities = { 0: 0, "off": 0, 1: 1, "warn": 1, 2: 2, "error": 2 };
const rulesSchema = {

    canonicalize(rules) {
        const canonicalizedRules = Object.create(null);
        for (const ruleName of Object.keys(rules)) {

            if (ruleName === "__proto__") { continue; }

            let rule = rules[ruleName];

            if (typeof rule === 'string' || typeof rule === 'number') {
                rule = [rule];
            }

            rule[0] = ruleSeverities[rule[0]];

            if (rule[0] === 0) {
                rule = [0];
            }

            canonicalizedRules[ruleName] = rule;
        }

        return canonicalizedRules;
    },

    merge(first = {}, second = {}) {
        // Now merge only has to deal with the canonicalized forms of the rules and doesn't need to do any conversions itself
        const result = {
            ...first,
            ...second
        };

        for (const ruleId of Object.keys(result)) {

            /*
             * If either rule config is missing, then the correct
             * config is already present.
             */
            if (!(ruleId in first) || !(ruleId in second)) {
                continue;
            }

            const firstRuleOptions = first[ruleId];
            const secondRuleOptions = second[ruleId];

            // If the second rule config disabled the rule, then we're done.
            if (secondRuleOptions[0] === 0) {
                continue;
            }

            /*
             * If the second rule config only has a severity (length of 1),
             * then use that severity and keep the rest of the options from
             * the first rule config.
             */
            if (secondRuleOptions.length === 1) {
                result[ruleId] = [secondRuleOptions[0], ...firstRuleOptions.slice(1)];
                continue;
            }

            /*
             * In any other situation, then the second rule config takes
             * precedence. That means the value at `result[ruleId]` is
             * already correct and no further work is necessary.
             */
        }

        return result;
    },

    validate(value) {
        ...
    }
};
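For illustration, here's how a standalone version of that canonicalization behaves (a minimal sketch, not the proposed schema API):

```javascript
const ruleSeverities = { 0: 0, "off": 0, 1: 1, "warn": 1, 2: 2, "error": 2 };

// Standalone sketch of the canonicalize step: every rule entry comes out as
// an array whose first element is a numeric severity, and disabled rules
// collapse to [0].
function canonicalizeRules(rules) {
    const canonicalizedRules = Object.create(null);
    for (const ruleName of Object.keys(rules)) {
        if (ruleName === "__proto__") { continue; }

        let rule = rules[ruleName];
        if (typeof rule === "string" || typeof rule === "number") {
            rule = [rule];
        }

        const severity = ruleSeverities[rule[0]];
        canonicalizedRules[ruleName] = severity === 0 ? [0] : [severity, ...rule.slice(1)];
    }
    return canonicalizedRules;
}

// canonicalizeRules({ semi: "error", quotes: "off", "no-unused-vars": ["warn", { args: "none" }] })
// → { semi: [2], quotes: [0], "no-unused-vars": [1, { args: "none" }] }
```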

I know that preprocessConfig is in place already to possibly enable this sort of processing, but it seems like a common enough use-case that an explicit mechanism in the schema would improve the locality of these conversions, and simplify other parts of the system that no longer have to deal with different potential formats.

I'm happy to put together a PR of what this could look like, and work out the detailed semantics of exactly when/where the canonicalization would happen (preferably as lazily and late as possible).

Cannot install this package

ESLint depends on this package, and when installing it I got this error:

npm install eslint                                                        
npm ERR! code E404
npm ERR! 404 Not Found - GET https://registry.npmjs.org/@humanwhocodes%2fconfig-array - Not found
npm ERR! 404 
npm ERR! 404  '@humanwhocodes/config-array@^0.5.0' is not in the npm registry.
npm ERR! 404 You should bug the author to publish it (or use the name yourself!)
npm ERR! 404 It was specified as a dependency of 'eslint'
npm ERR! 404 
npm ERR! 404 Note that you can also install from a
npm ERR! 404 tarball, folder, http url, or git url.

Following the message "You should bug the author to publish it (or use the name yourself!)", I am creating this bug.
Could you please fix the package on npm?
