operator-framework / audit
audit operator bundles and catalogs, producing a report.
License: Apache License 2.0
Once installed, the audit tool is a binary and can be run from any directory. If there is a tmp or output directory where it is run, they will be wiped out without further notice:
Lines 150 to 155 in 65771ff
A standard way of working with temporary files and directories is to have them created in the OS directory designed for the purpose, often /tmp on Linux. An example can be found here:
https://gobyexample.com/temporary-files-and-directories
The idea here would be to add tests to cover the project.
Add a GitHub Action to verify the title: https://github.com/kubernetes-sigs/kubebuilder/blob/master/.github/workflows/verify.yml
This would ensure that PR titles follow the emoji convention described in https://github.com/kubernetes-sigs/kubebuilder/blob/master/CONTRIBUTING.md#pr-process, so that we can use the titles to generate release notes.
Add a new flag enable-deprecates.
If set, the audit will also check the deprecated operator bundles. By default they are ignored.
In this way, we need to ensure that this flag will be used to build the bundles, packages and channels queries. For further info see Deprecating Bundles.
Note as well that we can only add the deprecated check for image tags above the N tag that will be provided after that is implemented.
That would be easier if we can always filter by the package name:
E.g.
select * from operatorbundle, channel_entry where operatorbundle.name = channel_entry.operatorbundle_name AND channel_entry.package_name = '<value>'
It may be advantageous to have the output of the audit-tool stored in a different repository:
The index generation currently assumes that the report files are in a specific place in the directory structure of this repository. It could also be made configurable
Create a new command for the audit tool that can generate reports which will return all Operators of some index that have some specific annotation. This new command would work similarly to audit custom validator [OPTIONS] --filter-validation="", which returns a list of Operators that contain that validation.
See the code for the validator command as an example:
https://github.com/operator-framework/audit/tree/main/cmd/custom/validator
See the code for a specific need script report that looks for a specific annotation:
https://github.com/operator-framework/audit/tree/main/hack/specific-needs/openshift-ns
(the command would perform the same but look for annotations whose key/value matches what is informed via flags)
The report should return:
Package Name | List of Bundles | Annotation found with value
We can do a template similar to multi-arches: https://operator-framework.github.io/audit/testdata/reports/redhat_redhat_operator_index/dashboards/multiarch_registry.redhat.io_redhat_redhat_operator_index_v4.11.html
Currently, we use SQL LIKE to filter by the name of bundles/packages/channels. This task is to allow the use of regex as well.
We now have an index image for each OCP release, and we release some .z versions of our project only to a specific OCP index image.
For our project we really care about the upgrade path across OCP upgrades, so I think we should add the capability to audit more than one index image from different OCP releases at the same time.
I am wondering if this can be added to the step registry for openshift-ci so that we can add it to the upstream tests that gate PRs.
A lot of systems use JUnit to parse test results. I am wondering if the output can be produced in that format so that we can use it in Jenkins/openshift-ci/other systems.
We need to add a new contributing guide with the basic info for new contributors, e.g.: https://github.com/kubernetes-sigs/kubebuilder/blob/master/CONTRIBUTING.md
Create a new command for the audit tool that can generate reports which will return all Operators of some index that have some specific RBAC set on the CSV. This new command would work similarly to audit custom validator [OPTIONS] --filter-validation="", which returns a list of Operators that contain that validation.
See the code for the validator command as an example:
https://github.com/operator-framework/audit/tree/main/cmd/custom/validator
See the code for a specific need script report that looks for a specific RBAC:
https://github.com/operator-framework/audit/tree/main/hack/specific-needs/rbacmachine
The goal of this command is to generate the same report done with https://github.com/operator-framework/audit/tree/main/hack/specific-needs/rbacmachine, but instead of having the data fixed in the code we would be able to inform it via flags.
If the scorecard checks are not disabled, then check whether a Kubernetes cluster is reachable and print a clear error if not.
Otherwise, the error faced is [unable to run scorecard: invalid character 'E' looking for beginning of value],
which is not very clear for users.
Currently, the tool looks inside its repository to get the templates and then uses them to generate the custom reports.
This task is to embed the templates into the binary.
The idea would be to see if we can create a cron job to run the full reports (make full-report)
and update the testdata/reports location for the latest tag with it. Also, we need to decide whether we would like to commit all reports to the repo or just the latest ones.
Unmarshal and write the bundle files from the bundle column (operatorbundle table) into the tmp dir on disk when this info exists, instead of downloading and extracting the bundle from the image using the bundle path.
Writing to disk is only required to run the scorecard. So, when the scorecard checks will not be executed, the bundle should only be loaded in memory, to reduce the required I/O effort.
It would be nice to improve the code that outputs the XLS so that the columns are created dynamically, in order to have better maintainability. However, for now we can just add this option and then hide the columns, as is done for the scorecard and validator checks.
Currently, the audit tool downloads the index catalog and checks the data in the SQLite database provided in the index to generate the bundle reports in JSON format, which are used for any report and the custom dashboards. See:
Then, with the adoption of FBC, the audit tool will no longer be able to generate the bundle reports for these images.
The goal here would be to generate the same JSON bundle report, but with the data from the index in FBC format instead of SQL.
By running audit-tool index bundles --index-image=<image>
we download the index, extract all bundles from it, and create a JSON file with this info. However, currently, we just store the CSV from the index. The goal of this task would be to change the model so that we store all data found in the bundles in JSON format, so that we can also generate reports that look at the other manifests.
Note that the model is defined in: https://github.com/operator-framework/audit/blob/main/pkg/reports/bundles/columns.go#L33-L54
Note that after we download the bundle and prepare it, we read the manifests and parse them into the bundle here: https://github.com/operator-framework/audit/blob/main/pkg/actions/get_bundle.go#L86-L92
See that here we are storing the annotations file as well: https://github.com/operator-framework/audit/blob/main/pkg/actions/get_bundle.go#L112
In pkg/helpers.go the RunCommand function uses cmd.CombinedOutput. I am wondering why this was chosen instead of cmd.Output.
An issue with it is that when the container engine emits warnings, they are mixed into the result sent to standard output, which, for instance, can then no longer be unmarshaled as JSON in: if err := json.Unmarshal(output, &dockerInspect); err != nil {
in the same file.
This resulted in the following error message:
unable to inspect the index image: invalid character 'i' in literal true (expecting 'r')
This task is for us to output the reports sorted by name.
The audit tool uses the SDK Scorecard tests, for example, to build the Project QA reports (see an example here). If the bundle does not have Scorecard tests, audit will inject a fixed config (more info).
The goal of this task is to add the following options:
--scorecard-config: path of the scorecard test directory which will be used as the default scorecard config and added to the operator bundles. If none is informed, the default audit configuration will be used instead.
--scorecard-custom-tests: set to false to disable the scorecard checks and use the tests which are configured in the bundle. Otherwise, the audit tool will always set its default tests. (default true)
Note that the OLM index image tags are re-built, so we need to start outputting the hash digest and/or the build date of the image used in order to know that. The info in the header, which only has the tag and the date when the report was generated, does not let us know the accurate version of the image used.
The idea is to implement logic able to identify, from the repository data, aspects of the projects such as their Builder, Language, and versions. Currently, the audit only looks for this information in the annotations, which means that we can only obtain it if the project was built with the SDK and the bundle was built with the make bundle command.
Also, note that to achieve this goal properly we need to add an implementation to handle the failure scenarios, plus a timeout option for the bundles. See: