microsoft / quilla

Declarative UI Testing with JSON

Home Page: https://microsoft.github.io/quilla

License: MIT License

Makefile 1.29% Python 98.49% Shell 0.22%
test ui frontend selenium selenium-python

quilla's Introduction

Quilla


Declarative UI Testing with JSON

Quilla is a framework that allows test-writers to perform UI testing using declarative syntax through JSON files. This enables test writers, owners, and maintainers to focus not on how to use code libraries, but on what steps a user would have to take to perform the actions being tested. In turn, this allows for more agile test writing and easier-to-understand test cases.

Quilla was built to be run in CI/CD, in containers, and locally. It also comes with an optional integration with pytest, so you can write your Quilla test cases as part of your regular testing environment for Python-based projects. Check out the quilla-pytest docs for more information on configuring pytest to auto-discover Quilla files, adding markers, and more.

Check out the features docs for an overview of all Quilla can do!

Quickstart

  1. Run pip install quilla

  2. Ensure that you have the correct browser and drivers. Quilla will auto-detect drivers that are in your PATH or in the directory from which it is invoked

  3. Write the following as Validation.json, replacing "Edge" with whichever browser you have installed and have the driver for:

    {
      "targetBrowsers": ["Edge"],
      "path": "https://www.bing.com",
      "steps": [
        {
          "action": "Validate",
          "type": "URL",
          "state": "Contains",
          "target": "bing"
        }
      ]
    }
  4. Run quilla -f Validation.json

Installation

Note: It is highly recommended that you use a virtual environment whenever you install new Python packages. You can also install Quilla from source by cloning the repository and running make install.

Quilla is available on PyPI, and can be installed by running pip install quilla.

For more information on installation options (such as installing from source) and packaging Quilla for remote install, check out the installation documentation.

Writing Validation Files

Check out the validation file documentation.

Context Expressions

This package can dynamically inject values, exposed through context objects and expressions, wherever the validation JSON would ordinarily require a plain string (as opposed to an enum). This can be used to grab values specified either at the command line or through environment variables.

More discussion of context expressions and how to use them can be found in the context expressions documentation.

Generating Documentation

Documentation can be generated through the make command make docs

Check out the documentation for more details.

Make commands

A Makefile is provided with several convenience commands. You can find usage instructions with make help, or below:

Usage:
  make [target]

Targets:
  help                            Print this help message and exit
  package                         Create release packages
  package-deps                    Create wheel files for all runtime dependencies
  docs                            Build all the docs in the docs/_build directory
  clean-python                    Cleans all the python cache & egg files files
  clean-docs                      Clean the docs build directory
  clean-build                     Cleans all code build and distribution directories
  clean                           Cleans all build, docs, and cache files
  install                         Installs the package
  install-docs                    Install the package and docs dependencies
  install-tests                   Install the package and test dependencies
  install-all                     Install the package, docs, and test dependencies

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact [email protected] with any additional questions or comments.

Trademarks

This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties' policies.

quilla's People

Contributors

cryptaliagy, github-actions[bot], microsoft-github-operations[bot], microsoftopensource, rajeevdodda


quilla's Issues

Add more plugin hooks

Right now only a few plugin hooks are exposed, but more could be beneficial, especially for allowing custom actions.

Some basic plugin hook ideas:

  • A hook after the TestStep selector is created to add/remove supported test steps
  • A hook after the Validation selector is created to add/remove supported validation objects (so things beyond XPathValidation and URLValidation)
  • A hook after the OutputValue selector is created to add/remove supported output value sources
  • A hook for the browser options to allow more fine-grained configuration

More could be added over time, but these would allow plugins to extend Quilla far more substantially.

Add plugin to Quilla for processing YAML inputs

Currently, Quilla supports only JSON documents for validations. It is pretty easy to go from YAML to JSON, though, so it would be good to have a plugin that can handle YAML natively.

A few possibilities for this are below:

  • Adding a --yaml CLI flag that, if present, passes the ctx.json value as a string to a YAML parser and sets ctx.json to the converted output. This would require adding the option in the quilla_addopts plugin hook, then doing the conversion in the quilla_configure hook, before the JSON data is read by Quilla
  • In the quilla_configure hook, checking whether the passed file has a .yaml or .yml extension and, if so, running the ctx.json data through the YAML parser and converting it to a JSON string (see the sketch below). Note that the file name is only available as part of the parsed args, not through the Context object; however, ctx.is_file is available to determine whether a file was passed in. This approach has the limitation that it would only convert YAML files, which could limit users who pass in raw YAML strings. Not exactly sure that's a problem, though

This would require adding a YAML parser package to the dependency list, since to my knowledge Python's standard library does not include a YAML parser. If we don't want to add a dependency that feels optional, we could have it as an extra (so, adding to the extra_dependencies dictionary in setup.py), and then during plugin execution we could check to make sure that a YAML parser is installed. That way, we run the plugin only if a YAML parser is installed.
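A minimal sketch of the extension-checking approach, treating PyYAML as the assumed optional extra; the quilla_configure signature and the way the file name is read off the parsed args are assumptions:

import json

try:
    import yaml  # PyYAML, the assumed optional extra
except ImportError:
    yaml = None


def quilla_configure(ctx, args):  # hook signature is an assumption
    # Run only when a YAML parser is installed and a file was passed in
    if yaml is None or not ctx.is_file:
        return
    # The file name is only available via the parsed args, not the Context
    file_name = getattr(args, 'json', '')  # attribute name is hypothetical
    if file_name.endswith(('.yaml', '.yml')):
        ctx.json = json.dumps(yaml.safe_load(ctx.json))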

Once the plugin is written, make sure to add its class/module to the _load_bundled_plugins list.

Configure release pipeline

The release pipeline needs to be configured so that Quilla is automatically deployed, with releases published on GitHub and PyPI.

Add VisualParityReport class for better reporting on VisualParity validations

The VisualParity validation will require specific reporting elements that are not currently present in the ValidationReport class. Given how much it would add (baseline ID, baseline image URI, treatment image URI, delta image URI), it makes more sense to separate it out into its own class that subclasses ValidationReport instead of amending ValidationReport to include all this extra data that is only used for one validation state.

Add --version CLI flag

There isn't a proper way of getting the current version of the application, so a --version flag would be helpful for debugging purposes: users would be able to easily check which version of Quilla they have installed.

Integrate "Running IDs" into Quilla

It would be beneficial for Quilla to have a "running ID" to uniquely differentiate between runs.

Some care will have to be taken to implement running IDs in a way that allows multiple Quilla tests to be categorized as part of the same "run". If the ID were just added to the context, a new running ID would be generated every time the context gets created, meaning each Quilla test would have a different run ID even when they are executed as part of the same test suite.
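One possible approach, sketched here with a QUILLA_RUN_ID environment variable (a name invented for illustration): an enclosing suite sets the ID once, and every Quilla invocation within it reuses that ID instead of generating its own.

import os
import uuid


def get_run_id() -> str:
    # Reuse the suite-level run ID when one exists; otherwise generate one
    # and publish it so subsequent runs in the same environment share it
    return os.environ.setdefault('QUILLA_RUN_ID', uuid.uuid4().hex)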

Add support for loading Quilla options from config file

Several frameworks and tools related to testing also include the ability to configure themselves through a configuration file of some sort. This allows users to not have to save their command-line options through aliases/scripts/etc, but still allow configs to be persistent.

An example of this is Pytest, which searches for one of a few possible config files and a root directory. This is explained in further detail here: https://docs.pytest.org/en/6.2.x/customize.html#config-file-formats

Other tools that do this include tox, mypy, flake8, etc.

Quilla could benefit from something similar, which would even allow configs to be versioned in VCS.

This functionality could be done as a plugin, or alternatively it could be handled similarly to how Pytest does it. The latter option would require overhauling the whole parsing/config system that Quilla uses by making a new Parser class that abstracts adding config file options and CLI options, which is admittedly more work.

As a practical example, the Pytest argument parser can be found here: https://github.com/pytest-dev/pytest/blob/main/src/_pytest/config/argparsing.py. It might be helpful to see how they did it, since Quilla uses the same plugin library.

Some considerations with this ticket:

  • What language should the config file be written in? YAML, INI, TOML, JSON, etc. are various options, each with their own benefits and drawbacks
  • How will this configuration file integrate with the existing pytest-quilla plugin? Would we use a setup.cfg file system like flake8/mypy/pytest, or would we want to keep our files separate?
  • Do we actually want to support config files? Currently, the only way to configure Quilla is done through CLI options or alternatively through plugins (uiconf.py file/installed plugins). If this is good enough, we can label it as such but we should consider whether or not this is something worth supporting.

Move "quilla_prevalidate" hook

quilla_prevalidate can also be put in its own file to handle more generic cases.

Originally posted by @yucui-avengers in #19 (comment)

Once more hooks are created, it probably won't make sense to keep "quilla_prevalidate" in the configuration hookspec file, since it isn't exactly a configuration hook (but it also is not not a configuration hook, in a way).

Implement delta image creation function

As discussed in #27 and #28, there will need to be some way to compare the two images (the baseline and the newly generated one), and confirm that the image is the same image.

This issue covers the implementation of the delta image creation as well as verification that the screenshots match.

Originally I thought it might be best to have the plugins handle the logic for verification themselves, but now I think it might be wiser to let the VisualParityValidation object handle that, since it will be shared across all storage methods. Instead, the plugins can be supplied with only the baselineID of the image, and then they will return the image itself. The above tickets will be edited to reflect this.

Ideally, the number of dependencies added to the project to enable this should be 0, but it might be wise to use an image processing library to compare the images and to produce the delta. If that is the case, the Pillow library (https://python-pillow.org/) might be useful.
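A hedged sketch of what the comparison could look like if Pillow is adopted; the function name and file-path interface are illustrative, not a settled design:

from PIL import Image, ImageChops


def make_delta(baseline_path: str, treatment_path: str, delta_path: str) -> bool:
    """Write the delta image and return True when the screenshots match."""
    baseline = Image.open(baseline_path).convert('RGB')
    treatment = Image.open(treatment_path).convert('RGB')
    delta = ImageChops.difference(baseline, treatment)
    delta.save(delta_path)
    # getbbox() is None only when the difference image is entirely black,
    # i.e. the two screenshots are pixel-identical
    return delta.getbbox() is None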

Implement AzureStorage plugin for VisualParity

After the LocalStorage plugin is created (#27), the AzureStorage plugin would be invaluable for providing an automated way to view the delta images, especially when running in a system that does not allow the generation or uploading of artifacts.

  • Figure out what data is necessary for accessing Azure storage. This is most likely a username of some sort, a token, and a bucket ID, but finalizing this will be necessary. This ticket should be edited once that is finalized to define the CLI args or environment variables needed to make that happen.
  • Similar to the LocalStorage plugin, this will receive a baseline ID and screenshot data from the plugin hook. If ALL of the necessary data to connect to blob storage is not available, return None
  • From a new hook (quilla_get_visualparity_baseline(baselineId: str, ctx: Context)), search the baseline cloud storage directory for an image that matches the baseline ID. This should support nested folders for ease of use (see the sketch after this list)
  • Return the screenshot
  • From a second hook (quilla_store_image(ctx: Context, baselineId: str, imageData), not sure what imageData will be), store the delta and return the path to the file
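An illustrative sketch of the baseline-lookup step against Azure Blob Storage, using the azure-storage-blob package; the environment variable names and the connection-string approach are assumptions, since the exact connection data is yet to be finalized:

import os

from azure.storage.blob import BlobServiceClient


def quilla_get_visualparity_baseline(baselineId: str, ctx):
    conn_str = os.environ.get('QUILLA_AZURE_CONNECTION_STRING')  # assumed name
    container = os.environ.get('QUILLA_AZURE_CONTAINER')  # assumed name
    if not (conn_str and container):
        return None  # not ALL of the necessary data is available, so defer
    client = BlobServiceClient.from_connection_string(conn_str)
    container_client = client.get_container_client(container)
    # Blob "folders" are just name prefixes, so matching on the blob name
    # suffix supports nested folder layouts
    for blob in container_client.list_blobs():
        if blob.name.endswith(f'{baselineId}.png'):
            return container_client.download_blob(blob.name).readall()
    return None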

Add mechanism to allow cleanup of old VisualParity results

It would be beneficial to have a plugin hook that allows various storage plugins to clean up old VisualParity test results. Since VisualParity produces images, and most likely will store 2-3 images per run per VisualParity validation, there will be storage concerns with keeping all test results.

Some mechanism for cleaning up old test runs would then be helpful in ensuring that unexpected storage problems don't happen down the line. This could be a new plugin hook, or part of the behavior of the plugin when storing new treatment images.

Add a RunProgram action

Quilla test files are just a collection of steps and minor configurations, which means that they can actually be reused by collecting the steps and ignoring most of the configurations. A RunProgram action would allow shared testing logic to be abstracted out either into its own Quilla test file (that can be run by Quilla), or possibly into a more barebones file that is configuration-neutral (i.e. that does not have the "targetBrowsers" and "path" fields, but would have a "steps" array and potentially a "definitions" object).

An example usecase would be having a "Login to portal" file that describes the steps and definitions required to log in to some application, which could then be reused by other Quilla tests that require a user to be logged in.

Add support for loading configuration from environment variables

At this time, Quilla exclusively gets its configuration from the CLI options. However, it might be of use to have the options be read from environment variables.

My recommendation for this would be to add a plugin that uses the quilla_configure hook, and check some environment variables and the CLI parsed args to see what should be pulled from the environment and what shouldn't. Alternatively, using the quilla_addopts hook would potentially allow the setting of defaults for the various configuration options, which would bypass the need for changing the context object later on
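An illustrative sketch of the quilla_configure approach described above; the QUILLA_*-style variable names, the hook signature, and the attributes set on the context are all assumptions:

import os

# Hypothetical mapping of environment variables to context attributes
ENV_OPTIONS = {
    'QUILLA_DRIVERS_PATH': 'drivers_path',
    'QUILLA_PRETTY': 'pretty',
}


def quilla_configure(ctx, args):  # hook signature is an assumption
    for env_var, attr in ENV_OPTIONS.items():
        value = os.environ.get(env_var)
        # Fall back to the environment only when the CLI left the option unset
        if value is not None and not getattr(ctx, attr, None):
            setattr(ctx, attr, value)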

One potential limitation to this approach is plugin execution order. If a plugin (e.g. the BlobStorage plugin) needs the configuration options at a specific time, then the environment loader would necessarily need to run prior to the BlobStorage plugin. However, that can be mitigated by only handling environment configuration of parser options that are created in the make_parser function, and adding logic in the BlobStorage plugin to handle its own needs.

Integrate logging into Quilla

Quilla currently doesn't use the logging library, though the addition of it could be beneficial especially for debugging.

This issue would most likely implement a logging factory method, which configures a default stream handler and formatter, then passes the logger to a new hook (maybe quilla_configure_logging?)

I'm unsure if each logger should be bound at the object level or the module level. If it is bound at the object level, each class would effectively have deeply shared logic that should be abstracted away as a base class, which would be a bit annoying. However, if the loggers are bound at the module level, there would need to be some good way to get the plugin manager injected into it each time.

A tradeoff would be to just not forward the loggers to a plugin hook, though I am less happy with that solution
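A rough sketch of such a factory method; quilla_configure_logging is the tentative hook name floated above, and passing the plugin manager in explicitly is just one of the bindings being weighed:

import logging


def make_logger(name: str, pm=None) -> logging.Logger:
    """Build a logger with a default stream handler and formatter."""
    logger = logging.getLogger(name)
    handler = logging.StreamHandler()
    handler.setFormatter(
        logging.Formatter('%(asctime)s [%(levelname)s] %(name)s: %(message)s')
    )
    logger.addHandler(handler)
    logger.setLevel(logging.INFO)
    if pm is not None:
        # Give plugins a chance to adjust the logger via the proposed hook
        pm.hook.quilla_configure_logging(logger=logger)
    return logger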

Add documentation for writing storage plugins with the BaseStorage abstract class

Currently there is no documentation or examples of how to write storage classes. Although we have two direct examples of doing this (LocalStorage and BlobStorage), adding some explanation of how to create them could be helpful for potential plugin writers. These examples should also be linked as part of the Plugins docs.

Implement LocalStorage plugin for VisualParity validation

The first plugin for the VisualParity validation to be finalized would be the LocalStorage plugin, which should be a relatively simple plugin:

  • Add parser option '-b/--baseline-directory' to specify where images are stored
  • Set the plugin to run if and only if the baseline directory is provided by returning None if it is not set
  • From a new hook (quilla_get_visualparity_baseline(baselineId: str, ctx: Context)), search the baseline directory for an image that matches the baseline ID. This should support nested folders for ease of use.
  • Return the screenshot
  • From a second hook (quilla_store_image(ctx: Context, baselineId: str, imageData), not sure what imageData will be), store the delta and return the path to the file

This will work partly as a proof-of-concept, and partly as a way of storing the baselines with the tests themselves, which is a valid usecase. That way the baselines can be version controlled as well, which would not be the case with some other storage means (Azure Blob Storage, S3, OneDrive, etc.)
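A minimal sketch of the baseline-lookup hook outlined above; the baseline_directory attribute on the context and the .png naming convention are assumptions:

from pathlib import Path


def quilla_get_visualparity_baseline(baselineId: str, ctx):
    baseline_dir = getattr(ctx, 'baseline_directory', None)  # assumed attribute
    if not baseline_dir:
        return None  # run if and only if the baseline directory is provided
    # rglob searches the whole tree, which supports nested folders
    for match in Path(baseline_dir).rglob(f'{baselineId}.png'):
        return match.read_bytes()
    return None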

Configure documentation build + publish pipeline

As part of (or as a follow-up to) the release pipeline, the documentation should be built and published through GitHub Pages.

Documentation should be made available in the following formats:

  • HTML (for the readthedocs and/or GitHub Pages doc distribution)
  • manfile (for ease of CLI use, though it wouldn't be installed with "pip install quilla" when released to PyPI, so extra docs should be made to explain this)

As a nice-to-have, releasing the PDF/Epub versions of the documentation would also be good, though the PDF version at minimum requires a lot of LaTeX packages, which would inflate the build time.

Ideally the manfile, and if available the PDF/Epub releases, would be included in the GitHub Releases page.

Write Pytest/Quilla Integration Docs

Write documentation that explains the integration between Quilla and Pytest. This document should cover the following

  • Why Pytest was chosen as a framework to integrate with Quilla
  • How to enable the pytest-quilla plugin in the pytest file
  • How to choose the quilla test file prefix
  • How to add markers to quilla tests
  • How to specify options to quilla when running from pytest

That probably covers the spectrum of documentation that is relevant for Pytest

Configure test pipeline

Add GitHub Actions to perform test steps for this repository.

Testing tools include:

  • flake8
  • mypy
  • pytest

The test pipeline should download the appropriate browsers (Firefox, Chrome, and Edge), as well as the relevant drivers, so that the integration tests can be run appropriately.

Rename "quilla_postvalidate" hook

Post-validation, the reports are already generated and all the validations are done; I might rename this as "quilla_post_test" instead? Since this specific hook is called after "validate_all()" is called, the only thing left to do is the reporting output. Not sure, I think I'm going to think on the new name some more and address it in a later PR.

Originally posted by @taliamax in #19 (comment)

The quilla_postvalidate hook might be erroneously named, and should be changed to a more apt name

Enable user to specify exclusion XPath for VisualParity

Enable users to specify the XPath of an element A that should be excluded from the visual parity check of element B when A is positioned inside of B. This feature allows users to compare two sections of a page while ignoring parts that are not of interest, for example a website's publish date.

Originally posted by @yucui-avengers in #53 (comment)

Discover all files if specified definition file is actually a directory

Currently, each definition file must be passed to Quilla explicitly. This is in part because it's easier to push the burden onto the user, and in part because this lets the user be certain about the order of precedence for the definition files.

It could, however, be a usecase that a single team maintains a set of mutually exclusive definition files, such as:

{
    "LoginPage": {
        "my_definition": "some_xpath"
    }
}
{
    "HomePage": {
        "my_definition": "some_xpath"
    }
}

In this case, there will never be a conflict, since each file deals with a specific subset. It would then be beneficial to be able to specify to Quilla a directory where all such files exist, allowing the files to be discovered by Quilla rather than manually specified.

Another idea would be to allow these folders to exist in a way that each file is namespaced to guarantee the lack of conflicts.
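A sketch of the discovery idea, assuming JSON definition files and last-file-wins precedence on any conflict (the function name is illustrative):

import json
from pathlib import Path


def discover_definitions(path: str) -> dict:
    """Load one definitions file, or every *.json file under a directory."""
    target = Path(path)
    files = sorted(target.rglob('*.json')) if target.is_dir() else [target]
    definitions: dict = {}
    for file in files:
        # With mutually exclusive files the order is irrelevant; otherwise,
        # later files (in sorted order) take precedence
        definitions.update(json.loads(file.read_text()))
    return definitions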

Add VisualParity related hooks to the hookspecs

So far I have outlined the following hooks that make sense for VisualParity:

  • quilla_get_visualparity_baseline(ctx: Context, baselineId: str)
  • quilla_store_image(ctx: Context, baselineId: str, imageData)

The quilla_store_image hook would be used both for the deltas (in which case the baselineId would be DELTA-<baselineId>, or something similar) and for a possible --update-baseline CLI flag added to quilla to push new images to the storage mechanisms.
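A sketch of what these hookspecs could look like, assuming Quilla's pluggy-based plugin system (the "quilla" project marker name is an assumption):

import pluggy

hookspec = pluggy.HookspecMarker('quilla')


@hookspec(firstresult=True)
def quilla_get_visualparity_baseline(ctx, baselineId):
    """Return the baseline image for the given ID, or None if not found."""


@hookspec(firstresult=True)
def quilla_store_image(ctx, baselineId, imageData):
    """Store the image under the given ID and return its location, or None."""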

Add Visual Parity as a validation type

Currently Quilla relies on the contents of the XPath or URL to make validations, which makes sense for data-driven uses such as fetching data from websites and creating outputs, or verifying the existence of elements/their contents.

However, one critical element of UI testing is ensuring that the look of certain elements does not change between runs unexpectedly. As such, a 'Visual Parity' validation type would enable Quilla to compare screenshots, and fail the test if a change has been introduced.

Some problems in attempting to address this:

  1. Should prospective users be instructed to check the files into VCS? File-based storage would be the simplest, but not exactly elegant
  2. Is it possible to support some other storage mechanism, such as a cloud storage mechanism like Azure provides? Could each validation specify the storage mechanism that it uses, or should that be consistent per run? i.e. in one validation file, could I have one Visual Parity baseline that is stored as a file in VCS, and another that is stored in an Azure bucket?
  3. How will prospective users update these files? Should Quilla include a subcommand, for example quilla parity --update? Should Quilla just generate these files always, and just leave the user to decide what happens next?
  4. How, if at all, would plugins be able to interact with the VisualParity validation type? Would plugins be able to add new storage mechanisms?

My first instinct is to have a schema like so:

{
    "targetBrowsers": ["Edge"],
    "path": "https://bing.com",
    "steps": [
        {
            "action": "Validate",
            "type": "VisualParity",
            "target": "${{ Definitions.HomePage.ContainerElement }}",
            "parameters": {
                "baselineID": "HomePageContainerElement",
            }
        }
    ]
}

Where Definitions.HomePage.ContainerElement is the XPath of some container element that we want to screenshot, and the contents of baselineID are assumed to be unique. Quilla could then do the following:

  1. Take a screenshot of the given element, outputting it to a specific folder ('./outputs'?)
  2. Pass the screenshot (png? base64? I'm inclined to say "store it as base64" but that's less friendly to users who might want to actually see the screenshot) and the baselineID to a plugin hook

It would then be the responsibility of plugins to return a ValidationReport, where the first non-None return value determines the outcome for the test. This would mean that each plugin would then:

  1. Check for the existence of a baselineID that matches the specified one. If one does not exist under the plugin's storage method, return None
  2. Compare the existing baseline image and produce a ValidationReport

This makes it easier to handle it from Quilla's side of things, since it can just offload the task, but it leaves a lot of questions open about how various plugins would configure their own requirements. It might just be better to leave that to the design of plugin writers, who might have preferences on their setup, but it could mean that users get an inconsistent experience on how to configure their validations.
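A hedged sketch of the plugin-side flow under this design; the hook name, the report shape, and the _find_baseline helper are all hypothetical:

from typing import Optional


def _find_baseline(baselineId: str) -> Optional[bytes]:
    """Hypothetical lookup against this plugin's storage method."""
    return None  # would return image bytes when the baseline exists


def quilla_visualparity_validate(ctx, baselineId: str, screenshot: bytes):
    baseline = _find_baseline(baselineId)
    if baseline is None:
        # Step 1: no matching baseline under this storage method, so defer
        # to whichever plugin answers next (first non-None wins)
        return None
    # Step 2: compare and produce a report; a naive byte comparison stands
    # in for a real image diff, and a dict stands in for ValidationReport
    return {'baselineId': baselineId, 'success': baseline == screenshot}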
