
addons-linter

The Add-ons Linter is being used by web-ext and addons.mozilla.org to lint WebExtensions.

It can also be used as a standalone binary and library.

You can find more information about the linter and its implemented rules in our documentation.

Usage

Command Line

You need Node.js to use the add-ons linter.

To validate your add-on locally, install the linter from npm:

# Install globally so you can use the linter from any directory on
# your machine.
npm install -g addons-linter

After installation, run the linter and direct it to your add-on file:

addons-linter my-addon.zip

Alternatively you can point it at a directory:

addons-linter my-addon/src/

The addons-linter will check your add-on and show you errors, warnings, and friendly messages. If you want more info on the options you can enable/disable for the command-line app, use the --help option:

addons-linter --help

Privileged extensions

The addons-linter can lint privileged extensions only when the --privileged option is passed to it. This option changes the behavior of the linter to:

  1. emit errors when the input file (or directory) is a regular extension (i.e. the extension does not use privileged features)
  2. hide messages related to privileged features (e.g., permissions and properties) when the input file (or directory) is a privileged extension

Linter API Usage

You can use the linter directly as a library to integrate it better into your development process.

import addonsLinter from 'addons-linter';

const sourceDir = process.cwd();

const linter = addonsLinter.createInstance({
  config: {
    // This mimics the first command line argument from yargs,
    // which should be the directory to the extension.
    _: [sourceDir],
    logLevel: process.env.VERBOSE ? 'debug' : 'fatal',
    stack: Boolean(process.env.VERBOSE),
    pretty: false,
    warningsAsErrors: false,
    metadata: false,
    output: 'none',
    boring: false,
    selfHosted: false,
    // Lint only the selected files
    //   scanFile: ['path/...', ...]
    //
    // Exclude files:
    shouldScanFile: (fileName) => true,
  },
  // This prevents the linter from exiting the Node.js application.
  runAsBinary: false,
});

linter.run()
  .then((linterResults) => ...)
  .catch((err) => console.error("addons-linter failure: ", err));

linter.output is composed of the following properties (the same as the 'json' report type):

{
  metadata: {...},
  summary: {
    error, notice, warning,
  },
  scanFile,
  count,
  error: [{
    type: "error",
    code, message, description,
    column, file, line
  }, ...],
  warning: [...],
  notice: [...]
}
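For reference, here is a small sketch of consuming an object of that shape. The field names follow the 'json' report layout above, but the helper and the mock result are illustrative; summarizeOutput is not part of the addons-linter API.

```javascript
// Sketch: summarize a linter.output-shaped object. The shape follows
// the 'json' report layout above; summarizeOutput itself is illustrative,
// not an addons-linter API.
function summarizeOutput(output) {
  const lines = [];
  for (const severity of ['error', 'warning', 'notice']) {
    for (const msg of output[severity] || []) {
      lines.push(
        `${severity.toUpperCase()} ${msg.code}: ${msg.message} (${msg.file}:${msg.line})`
      );
    }
  }
  return { count: lines.length, lines };
}

// Mock result object for the demo:
const mockOutput = {
  summary: { error: 1, warning: 0, notice: 0 },
  error: [{
    type: 'error',
    code: 'JS_SYNTAX_ERROR',
    message: 'JavaScript syntax error',
    description: 'There is a syntax error in your code.',
    column: 4,
    file: 'background.js',
    line: 12,
  }],
  warning: [],
  notice: [],
};

console.log(summarizeOutput(mockOutput).lines[0]);
// → ERROR JS_SYNTAX_ERROR: JavaScript syntax error (background.js:12)
```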

Development

If you'd like to help us develop the addons-linter, that's great! It's pretty easy to get started; you just need Node.js installed on your machine.

Quick Start

If you have Node.js installed, here's the quick start to getting your development dependencies installed and running the tests:

git clone https://github.com/mozilla/addons-linter.git
cd addons-linter
npm install
# Build the project.
npm run build
# Run the test-suite and watch for changes. Use `npm run test-once` to
# just run it once.
npm run test

You can also build the addons-linter binary to test your changes.

npm run build
# Now run it against your add-on. Please note that for every change
# in the linter itself you'll have to re-build the linter.
bin/addons-linter my-addon.zip

Required Node version

addons-linter requires Node.js v16 or greater. Have a look at our .circleci/config.yml file to see which Node.js versions we officially test.

Using nvm is probably the easiest way to manage multiple Node versions side by side. See nvm on GitHub for more details.

Install dependencies

Install dependencies with npm:

npm install

npm scripts

| Script | Description |
| --- | --- |
| npm test | Runs the tests (watches for changes) |
| npm [run] build | Builds the lib (used by CI) |
| npm run test-coverage | Runs the tests with coverage (watches for changes) |
| npm run test-once | Runs the tests once |
| npm run lint | Runs ESLint |
| npm run test-coverage-once | Runs the tests once with coverage |
| npm run test-integration-linter | Runs our integration test-suite |
| npm run prettier | Automatically formats the whole code-base with Prettier |
| npm run prettier-ci | Runs Prettier and fails if some code has been changed without being formatted |
| npm run prettier-dev | Automatically compares and formats modified source files against the master branch |

Building

You can run npm run build to build the library.

Once you build the library you can use the CLI in bin/addons-linter.

Testing

Run npm test. This will watch for file changes and re-run the test suite.

Coverage

We're looking to maintain coverage at 100%. Use the coverage data in the test output to work out what lines aren't covered and ensure they're covered.

Assertions and testing APIs

We are using Sinon for assertions, mocks, stubs and more; see the Sinon docs for the available API.

Jest is being used as a test-runner but also provides helpful tools. Please make sure you read their documentation for more details.

Logging

We use pino for logging:

  • By default logging is off (the level is set to 'fatal').
  • Logging in tests can be enabled using an env var e.g: LOG_LEVEL=debug jest test
  • Logging on the CLI can be enabled with --log-level [level].

Prettier

We use Prettier to automatically format our JavaScript code and stop all the on-going debates over styles. As a developer, you have to run it (with npm run prettier-dev) before submitting a Pull Request.

L10n extraction

The localization process is very similar to how we do it for addons-frontend: locales are always updated on the master branch, any PR that changes or introduces new localized strings should be merged on master first.

In order to update the locales (when new localized strings are added to the codebase), run the following script from the master branch. This script automates all the steps described in the addons-frontend docs, without any confirmation step.

./scripts/run-l10n-extraction

Architecture

In a nutshell the way the linter works is to take an add-on package, extract the metadata from the xpi (zip) format and then process the files it finds through various content scanners.

We are heavily relying on ESLint for JavaScript linting, cheerio for HTML parsing as well as fluent.js for parsing language packs.

Scanners

Each file-type has a scanner. For example: JavaScript files use JavaScriptScanner. Each scanner looks at relevant files and passes each file through a parser which then hands off to a set of rules that look for specific things.

Rules

Rules get exported via a single function in a single file. A rule can have private functions it uses internally, but rule code should not depend on another rule file and each rule file should export one rule.

Each rule function is passed data from the scanner in order to carry out the specific checks for that rule. It returns a list of objects which are then made into message objects and passed to the Collector.
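A hypothetical rule file following that contract could look like the sketch below. The function name, the shape of the scanner data, and the NO_EVAL code are made up for illustration; only the overall contract (one exported rule returning a list of plain objects) comes from the description above.

```javascript
// Illustrative only: a rule receives data from the scanner and returns a
// list of plain objects that become message objects for the Collector.
// The data shape and the NO_EVAL code are hypothetical.
function checkNoEval(scannerData) {
  const found = [];
  for (const node of scannerData.callNodes || []) {
    if (node.calleeName === 'eval') {
      found.push({
        code: 'NO_EVAL', // hypothetical message code
        message: 'eval() is not allowed',
        line: node.line,
        column: node.column,
      });
    }
  }
  return found;
}

// Each rule file exports exactly one rule.
module.exports = checkNoEval;
```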

Collector

The Collector is an in-memory store for all validation message objects "collected" as the contents of the package are processed.

Messages

Each message has a code which is also its key. It has a message which is a short outline of what the message represents, and a description which is more detail into why that message was logged. The type of the message is set as messages are added so that if necessary the same message could be an error or a warning for example.
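As an illustration of that idea, a message definition might carry only the code-keyed message and description, with the type attached at collection time. The names below (MESSAGES, makeMessage) are assumptions for the sketch, not the linter's actual internals.

```javascript
// Illustrative sketch: each message has a code (its key), a short
// message, and a longer description. The type is attached when the
// message is collected, so the same code could surface as an error in
// one context and a warning in another. Names here are assumptions.
const MESSAGES = {
  MANIFEST_FIELD_REQUIRED: {
    message: 'A required manifest field is missing',
    description: 'See the manifest.json documentation for required fields.',
  },
};

function makeMessage(code, type, location = {}) {
  return { code, type, ...MESSAGES[code], ...location };
}

const asError = makeMessage('MANIFEST_FIELD_REQUIRED', 'error', {
  file: 'manifest.json',
});
const asWarning = makeMessage('MANIFEST_FIELD_REQUIRED', 'warning');
```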

Output

Lastly when the processing is complete the linter will output the collected data as text or JSON.

Deploys

We deploy to npm automatically using Circle CI. To release a new version, increment the version in package.json and create a PR. Make sure your version number conforms to the semver format eg: 0.2.1.

After merging the PR, create a new release with the same tag name as your new version. Once the build passes it will deploy. Magic! ✨

Dispensary

As of November 2021, dispensary has been merged into this project and a CLI is available by running ./scripts/dispensary.

Libraries updates

This is the (manual) process to update the "dispensary" libraries:

  1. Open src/dispensary/libraries.json
  2. Open the release pages of each library.
  3. On each page, check whether there are newer release versions than what is in src/dispensary/libraries.json. Note that some libraries, like react, support several versions, so we need to check each "branch".
  4. For major upgrades, take a quick look at the code changes.
  5. Add new versions to src/dispensary/libraries.json
  6. Run npm run update-hashes
  7. Commit the changes in src/dispensary/libraries.json and src/dispensary/hashes.txt
  8. Open a Pull Request

Note: hashes.txt will be embedded into the addons-linter bundle.

The scripts/update-dispensary-doc command updates the list of release pages above based on the src/dispensary/libraries.json file.

License

This source code is available under the Mozilla Public License 2.0.

Additionally, parts of the schema files originated from Chromium source code:

Copyright (c) 2012 The Chromium Authors. All rights reserved. Use of this source code is governed by a BSD-style license that can be found in the LICENSE-CHROMIUM file.

You are not granted rights or licenses to the trademarks of the Mozilla Foundation or any party, including without limitation the Firefox name or logo.

For more information, see: https://www.mozilla.org/foundation/licensing.html



addons-linter's Issues

Allow CLI parser to be run from a (Celery) queue

@andymckay, @mstriemer, and I talked about #49. What we actually need is to use the existing https://github.com/mozilla/olympia Django app as our API server (having multiple front ends is needless) to handle the Validator's API (eg: upload a file for validation -> get an ID -> query said ID for its validation status).

We need to get the validator to run as a Celery task that can get data from the Django API (put into celery) and then send data back.

The API would return a status of the validation (processing/complete) with a series of messages/warnings/errors (as JSON, which the validator can already easily produce being a JS app).

Quick review of existing structure before we move onto adding more rules.

Before we go too far and add lots of rules I think it might be worth taking a step back briefly to look at what we have in terms of parsers and how they are integrated and make sure we are being as consistent in our approach as possible for each parser so far.

Kind of questions I have in mind are:

  • Use of rule functions vs class methods.
  • How are the message objects setup and passed back to the collector.

We can then file issues for any things that need to be changed if necessary.

Port tests

This will need breaking down further.

Get ESLint to ignore /*eslint*/ comments

We need a way to configure ESLint not to respect comments inside the code (usually /* eslint nocheckforbadthing */-type comments).

If that's not possible, we'll need to strip out all comments when we parse code or something so the code can't tell the validator to back off 😄

Make tests less noisy

Output from the tests is a bit noisy because we have console.log calls inside the code path that aren't silenced for tests.


Realistically, we probably want a validator.config.output option that is nothing or null.

Not sure when the CLI would ever want it, but it's what we want for the tests and we should already have the infra for it via that config.output option.

Other console.log calls should be silenced by this option. We probably just want to use our own log method everywhere.

Plan architecture for processing files.

Starting with the xpi:

  • How will the various parsers and processors be glued together.
  • We should look to ensure the integrity of the scans (making sure all files are processed).

Add JSCS to tests

I've already made a few silly style nitpicks/raised some style questions in #20.

Shall we add some JS Style checks to the tests so we don't quibble over how we do long strings, line lengths, etc.?

It's worked well for localForage. Keeps styles in-line on the more fluid things eslint won't check.

Deal with known JS libs

  • Check against known library hashes #383
  • Build our own hash list #384
  • Hook up Dispensary inside the linter

We need a way to skip over validation of known safe libs.

Add binary file tests

There are a set of tests that read the first few bytes of a file to identify them.

Move RDF rules into their own files

Related to #78, we should keep our rules in single files (similarly to what we do with ESLint rules) so they are easy to find and we don't have massive validator classes full of methods.

I'll move the RDF ones that exist already.

Test for obfuscation/identify obfuscation attempts

Currently we test for identifiers (#44), as the previous validator does. This leaves us open to obfuscation of restricted identifiers, eg:

var m = "m";
var o = "o";
var z = "z";
var idb = "IndexedDB";
var tricksterVariable = m + o + z + idb;
// Bad!
var myDatabase = window[tricksterVariable];

We need to statically analyse variable paths and find out their eventual values if they're used to dynamically call a function.

Failing this: we need to identify heuristics that we can use to say that obfuscation appears likely and alert the reviewer for closer manual inspection.
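The constant-folding part of such an analysis can be sketched as follows. This is an illustrative heuristic only, assuming we already know which variables hold string literals (in practice that knowledge would come from an AST walk):

```javascript
// Illustrative heuristic, not the validator's implementation: fold a
// chain of string concatenations using a map of variables known to be
// string constants, then compare the result against restricted names.
const RESTRICTED = new Set(['IndexedDB', 'mozIndexedDB']);

function foldConcat(operands, constants) {
  let result = '';
  for (const op of operands) {
    const value = Object.prototype.hasOwnProperty.call(constants, op)
      ? constants[op] // a variable we know statically
      : op;           // otherwise treat the operand as a literal string
    if (typeof value !== 'string') return null; // not statically known
    result += value;
  }
  return result;
}

// The trickster example above folds to a restricted identifier:
const constants = { m: 'm', o: 'o', z: 'z', idb: 'IndexedDB' };
const folded = foldConcat(['m', 'o', 'z', 'idb'], constants);
const suspicious = folded !== null && RESTRICTED.has(folded);
// folded === 'mozIndexedDB', so this access would be flagged for review
```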

Define list of layout tests

The list of rules covers a bunch of tests that relate to files and how they're laid-out in the zip (xpi). We should write a list of those and start implementing them.

Plan error handling

  • How should errors be handled?
  • How does the current validator expose various classes of error.

Defend against zip-bombs

For use via amo it's important we can blow-up gracefully if a zip bomb is submitted for validation.

Test to make sure number of rules match rules that are run in scan()

Write a test that checks the number of rule files in rules/$VALIDATOR_TYPE (eg: all files sans index.js) and checks how many rules are run in $VALIDATOR_TYPE.scan(). We can test this for at least the HTML and RDF checkers now, and could check the length of the ESLint rule list as well.

This makes sure of two things:

  1. All rule files contain exactly one rule.
  2. All rules are being exported/imported and run properly.

Try htmlparser2 in RDF

htmlparser2 has better test coverage and seems more robust, so let's try to use it for RDF parsing instead of XMLDom.

Detect addon-type

We need to know what addon-type we are dealing with as some validation tests are type-specific.

Add text output processing.

We've got a start on outputting JSON - we need to add something to output the rules in a nice way as just text with nice colours assuming boring is false.

Add message collector

This class's job is to collect up warnings, errors and notice objects and to provide an API for doing so.

Put scanners/validators in their own directory

Just a minor thought, but as we add separate scanners (CSS, JS, RDF), the root of src/ is getting crowded. It would be easier to see what files did what if they were grouped (i.e. put all scanners in a scanners/ folder).

So instead of:

- src/
  - messages/
  - rules/
  - cli.js
  - css.js
  - javascript.js
  - rdf.js
  [etc.]

we can do:

- src/
  - messages/
  - scanners/
    - css.js
    - javascript.js
    - rdf.js
  - rules/
  - cli.js
  [etc.]

Refactor HTML/RDF scanner classes

Looking at the HTML and RDF scanner classes, they're nearly the same now that I've started on #79. Might be nice to define a "markup" scanner class and have them simply extend it (essentially the only difference is the parser used and the rules they're using).

Decide on CSS parser for CSS processing rules.

Let's investigate what's state of the art in CSS parsing for Node. Then the next step will be to prototype out something simple and double check the chosen solutions looks fit for purpose.
