nasa / scrub
SCRUB is a platform for orchestration and aggregation of static code analysis tools.
License: Apache License 2.0
Certain tools can report the same warning multiple times on the same line of the same file. The parser should be updated to ensure that duplicate warnings are not reported.
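A minimal sketch of the deduplication step described above, assuming each parsed warning is a dict with `file`, `line`, `tool`, and `text` keys (these field names are hypothetical, not SCRUB's actual internal schema):

```python
def deduplicate(warnings):
    """Return warnings with exact duplicates removed, preserving order."""
    seen = set()
    unique = []
    for warning in warnings:
        # Two warnings are duplicates if they match on all four fields.
        key = (warning["file"], warning["line"], warning["tool"], warning["text"])
        if key not in seen:
            seen.add(key)
            unique.append(warning)
    return unique
```

Keying on the full tuple (rather than just file and line) avoids collapsing distinct findings that happen to land on the same line.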
The API call 'projects/export_findings' only works if the user has sys admin permissions. We need to revert to the 'issues/search' API call and find another solution for exporting security hotspots.
Making calls to api/issues/search has the potential to miss some findings. The correct API call should be used to ensure that all findings are exported.
The Coverity JSON output is versioned and controlled better than the Emacs-style output.
The current mechanism for suppressing an individual warning is to place an inline comment on the source line where the warning occurs. This doesn't work for file-level warnings, which are reported as occurring on line 0.
Users should be able to run SCRUB quietly as a background task. Suppressing stdout logging facilitates this.
SCRUB should have a default test case that can be executed to ensure that it has been installed properly.
The command 'scrub run-tool' was supported in previously deployed versions of SCRUB. This command should be supported for backwards compatibility.
Most tools group JavaScript and TypeScript analysis together; SCRUB should support this pattern.
When Collaborator fails during the upload process (after review creation), the log will direct users to an incorrect path for the log file.
Update the SARIF parser to handle valid SARIF results files that don't contain any static analysis results.
SCRUB results filtering contains many redundancies and needs to be simplified to improve performance of the filtering step.
If no build instructions are provided, SCRUB should attempt common build instructions for the language of choice.
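One way to sketch the fallback described above: a table of common build commands keyed by language. The commands and language keys here are illustrative assumptions, not SCRUB's actual defaults, and real projects will often still need explicit build instructions:

```python
# Hypothetical per-language defaults; an empty string means no build step is needed.
DEFAULT_BUILD_COMMANDS = {
    "c": "make",
    "cpp": "make",
    "java": "mvn compile",
    "python": "",
}

def default_build_command(language):
    """Return a guessed build command for the language, or None if unknown."""
    return DEFAULT_BUILD_COMMANDS.get(language.lower())
```

Returning `None` for unknown languages lets the caller fall back to an error asking the user for explicit build instructions.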
VS Code, CodeSonar, SonarQube, etc.
SonarQube has a max limit of 10,000 results. Results retrieval should be capped at 20 pages at 500 results per page.
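The cap works out to 20 pages x 500 results = 10,000, matching SonarQube's hard limit. A minimal sketch of the capped retrieval loop, assuming a caller-supplied `fetch_page(page, size)` function that wraps the actual HTTP request (that wrapper is a hypothetical stand-in, not SCRUB's real API client):

```python
MAX_PAGES = 20
PAGE_SIZE = 500  # 20 pages * 500 results = 10,000, SonarQube's result limit

def fetch_all(fetch_page):
    """Retrieve results page by page, stopping at an empty page or the cap."""
    results = []
    for page in range(1, MAX_PAGES + 1):
        batch = fetch_page(page, PAGE_SIZE)
        if not batch:
            break  # server has no more results
        results.extend(batch)
    return results
```

Stopping on the first empty page keeps small analyses fast, while the page cap prevents the loop from requesting pages the server will reject.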
Nominal operation is for SCRUB to continue running if a tool fails, but users may want to exit on first failure. There should be a flag to enable this (e.g. --strict or --fail-fast).
The Semmle team has indicated that the license will not be renewed beyond FY21. The Semmle tool and references should be removed from SCRUB to support this decision. Code associated with Semmle will be archived in case it is needed for a future release.
The regex filtering functionality currently allows for filtering patterns that can cause files to appear multiple times. This can cause performance issues when uploading to an external system such as Collaborator.
Currently when the module_helper executes, all of the results are filtered again even if no new results have been generated for certain tools. This can take a long time for larger sets of results. The module_helper should only filter the newly generated results files.
There should be a testing utility to allow for developers to run all tests associated with a particular tool in order to focus testing activities.
Encoding should be set to UTF-8 when importing SARIF files for parsing. Parsing issues may occur when not set.
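A minimal sketch of the fix described above: passing `encoding="utf-8"` explicitly when opening SARIF files, rather than relying on the platform default (which on some systems is not UTF-8 and can corrupt the parse). The function name is illustrative, not SCRUB's actual parser entry point:

```python
import json

def load_sarif(path):
    """Load a SARIF results file, forcing UTF-8 decoding."""
    # SARIF is JSON; an explicit encoding avoids platform-default decodings
    # that can raise UnicodeDecodeError or silently mangle characters.
    with open(path, encoding="utf-8") as handle:
        return json.load(handle)
```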
Larger SonarQube analyses have a delay period after analysis is completed before the results may be retrieved from the server. SCRUB should check to make sure the analysis is finished before attempting to retrieve the results.
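The wait described above can be sketched as a generic polling loop. The `is_finished` callable is a hypothetical stand-in for whatever status check SCRUB would make against the SonarQube server; the timeout and interval defaults are assumptions:

```python
import time

def wait_for_analysis(is_finished, timeout=600, interval=10):
    """Poll until is_finished() returns True or the timeout elapses.

    Returns True if the analysis finished in time, False otherwise.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if is_finished():
            return True
        time.sleep(interval)  # avoid hammering the server between checks
    return False
```

Returning a boolean (instead of raising) lets the caller decide whether a timed-out retrieval should abort the run or just log a warning.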
Introducing Python as a supported language opens the opportunity to support pylint analysis. This is analogous to compiler analysis for compiled languages.
Distribution of results is closely tied to operation of the legacy GUI. It should be disabled by default and only activated if the legacy GUI is being used.
There should be a set of performance test(s) that have the intent of monitoring SCRUB runtime. This will help to ensure that new features/changes don't significantly affect runtime.
Add JavaScript analysis templates for all applicable tools.
The Collaborator target integration generates a nearly empty warnings file when no warnings pass the filter. The ccollab integration errors on this file, causing the upload to terminate and the review to be deleted.
SCRUB operates under the assumption that all tools have been properly configured before SCRUB is executed. The cov-configure command should be executed before SCRUB execution, not during.
Currently, CodeSonar analysis implicitly performs the codesonar build step as part of the codesonar analyze step. This can be broken into two steps to allow for more flexibility in capturing the build process.
The update to template based analysis is causing issues with the filtering algorithm in the micro filtering logic. This affects the detection of misspellings in micro suppressions and can cause certain suppressions to be erroneously ignored.
The command execution error messages that are returned to the user are vague and should provide better information to the user.
Using the projects/export_findings API call does not allow for tailoring the results that are pulled down from the server. We need to determine how results can be filtered before parsing.
Update the gbuild parser to capture gbuild warnings in addition to DoubleCheck warnings.
SCRUB should be able to interface with GitHub to decorate pull requests with output data from SCRUB analysis.
There are certain sets of parameters that are required for analysis to be performed. Currently, the error message does not indicate which parameter is missing. It would be helpful to inform the user which required parameters are missing as part of the error message.
When SCRUB is using a user-defined working directory to store analysis artifacts, it will fail if the user specifies a directory that already exists. This currently generates an exit code of 1, similar to an individual tool failure. This should have a unique exit code (10) for debugging purposes.
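A minimal sketch of the distinct exit code described above. The constant names and the check's placement are illustrative assumptions, not SCRUB's actual implementation:

```python
import sys

EXIT_TOOL_FAILURE = 1   # existing behavior: an individual tool failed
EXIT_DIR_EXISTS = 10    # proposed: the user-specified working directory already exists

def check_working_dir(path_exists):
    """Return a distinct exit code if the working directory already exists."""
    if path_exists:
        print("Error: working directory already exists", file=sys.stderr)
        return EXIT_DIR_EXISTS
    return 0
```

A dedicated exit code lets callers and CI scripts distinguish a setup problem from a genuine tool failure without parsing the log.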
Path combinations should be performed using built-in python functions, not manually.
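For example, `os.path.join` from the standard library handles separators and platform differences that manual string concatenation gets wrong (the path components below are illustrative):

```python
import os.path

# Built-in joining inserts the correct separator for the platform;
# manual concatenation like base + "/" + name breaks on edge cases
# (trailing slashes, absolute components, Windows separators).
log_path = os.path.join("scrub_results", "log_files", "coverity.log")
```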
The documentation should provide details on how users can use custom tool configurations as part of SCRUB execution (e.g. Coverity, CodeSonar).
Storing SCRUB results in a hidden directory can be misleading for some users. Storing analysis in a non-hidden directory makes it easier to locate SCRUB results and log information.
With large codebases the process of recursively cleaning previous SCRUB artifacts from the repository can take a non-trivial amount of time. Adding debugging information would be a good way to indicate progress.
If no .py files are located at the source root, pylint analysis fails. Execution should be modified to handle this case gracefully.