
ui's Introduction

Packages

Adding new packages

To add a new package, run

yarn add packageName

Adding devDependency

yarn add packageName --dev

Updating a package

First, run the command:

yarn outdated

... to determine which packages may need upgrading.

Do not upgrade all packages at once; upgrade them one at a time, and make sure to test after each upgrade.

To upgrade a single package named packageName:

yarn upgrade packageName

Unit Testing

Unit tests can be run via command line with yarn test, from within the /ui directory. For more detailed reporting, use yarn test -- --reporters=verbose.

Local dev

cloud platform

  1. From k8s-idpe, start the remocal service with a loopback to your local ui changes.
    1. Have your remocal instance built and deployed. See the docs here.
    2. make remocal-dev APPS=ui DETACHED=1
  2. From the ui directory, build the local dev server:
    1. Set the env var PUBLIC to your remocal URL.
    2. yarn install && yarn start:dev:remocal
    3. yarn link other JavaScript libraries (e.g. giraffe, clockface, flux-lsp) as needed.
      • Note that the flux-lsp wasm build produces an output directory containing a package.json for the release. Make sure to yarn link against this directory.
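The env-var step can be sketched as follows. The URL here is a placeholder; your remocal instance's URL (and any additional variables your setup needs) will differ:

```shell
# Placeholder value -- substitute the URL of your own remocal instance.
export PUBLIC="https://<your-remocal-namespace>.example.com"
```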

oss platform

  1. From the oss influxdb repo, start the backend service.
    1. Set up the service following the contributing guide.
    2. Run the service, then go to localhost:8086 in your browser.
    3. Create a login/password. (Remember these!)
    4. CAVEAT: influxdb/oss does not have live reload.
      • Instead, run make and restart influxdb.
  2. From the ui directory, build the local dev server:
    1. yarn install && yarn start:dev:oss
    2. Go to localhost:8080 in your browser.
    3. The same login/password pair should work.
    4. CAVEAT: live reload does not work, so you need to refresh the browser page.

Cypress Testing

cloud platform

  1. Have your local dev server running (see above).
  2. export NS=<your-remocal-namespace>
  3. Run the Cypress tests:
    • yarn test:e2e:remocal to test on tsm storage
    • yarn test:e2e:remocal:iox to test on iox storage

oss platform

  1. Have your local dev server running (see above).
    • Make sure to start your oss backend service with the --e2e-testing flag.
  2. Run the Cypress tests:
    • yarn test:e2e:oss

Generating Test Reports

  • To run all tests [in series] locally and generate a report: yarn test:e2e:<whatever>:report
    • e.g. yarn test:e2e:remocal:report, yarn test:e2e:remocal:iox:report, yarn test:e2e:oss:report
  • This runs in headless mode, so no browser is shown; there is just stdout output as each test runs.
    • WARNING: This takes a long time, unlike CI, which runs tests in parallel.
  • After all tests complete, a test result viewer opens automatically.
  • Any screenshots or videos are saved in cypress/videos and cypress/screenshots.

What is oats?

Oats is how we automatically generate our TypeScript definitions from the OpenAPI contract. See here for more details. After one of the yarn generate scripts is run, the TypeScript definitions are usually output to ./src/client/.
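As a rough illustration of what such generated code looks like (these names are hypothetical, not the actual contents of ./src/client/):

```typescript
// Hypothetical sketch of the kind of typed definitions code generation
// produces from an OpenAPI contract -- NOT the actual ./src/client output.

interface Bucket {
  id: string;
  name: string;
}

interface GetBucketsResponse {
  buckets: Bucket[];
}

// A generated function pairs an endpoint with its response type, so callers
// get compile-time checking instead of maintaining hand-written types.
async function getBuckets(
  fetcher: (url: string) => Promise<unknown>
): Promise<GetBucketsResponse> {
  return (await fetcher('/api/v2/buckets')) as GetBucketsResponse;
}
```

The win is that when the contract changes, the types and functions regenerate together instead of drifting apart.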

Zuora Form

Troubleshooting: if your Zuora form isn't rendering, or isn't calling the callback function you passed to client.render, and you are running the UI locally using Monitor CI: get the Zuora PageID you are using to render the form. Then, from the Zuora admin console, get the Host and Port that the PageID corresponds to. Make sure those match the INGRESS_HOST and PORT_HTTPS provided in the .env file of monitor-ci.

ui's People

Contributors

121watts, alexpaxton, aliriegray, appletreeisyellow, asalem1, blegesse-git, bthesorceror, chitlangesahas, chnn, cryptoquick, desa, drdelambre, ebb-tide, goller, hoorayimhelping, imogenkinsman, ischolten, jaredscheib, jsternberg, lukevmorris, mavarius, nathanielc, nhaugo, ofthedelmer, palakp41, tcl735, timraymond, wdoconnell, wiedld, zoesteinkamp


ui's Issues

Encapsulate test resource boundaries

Right now, when we run a test, we blow away the contents of the etcd store before every test to prevent side effects of user interactions from causing collisions between tests and making them flaky.

There are some downsides to this, but they are within the ROI of its current usage. The project stressing the development of this ticket is deployments, which is working loosely toward a system in which service tests can be coupled into shared infrastructure before running all of their respective e2e tests.

It should also make our current test runs more reliable, since the source of most of our test failures is timeouts in the /debug/provision step (i.e. waiting for the cluster to recover from wiping test data).

The gist of the solution is to rework the debug server to handle pre-test provisioning a little differently, fitting this rough description:

Given that we have a shared database amongst all the other tests across all the other teams:

  1. We need to generate a random org + user + token combo for the test runner, and be responsive to collisions during creation (at the org level only)
  2. Before every test, we cycle through the /api/v2/:resourceType -> /api/v2/:resourceType/delete/:id dance for that org to fetch all resources that could have been generated, and delete them all
  3. After all of the tests, do that dance one more time, and then delete the org + user + token combo
  4. Flush the test env every month or so, for the times step 3 didn't happen for whatever reason

The last point mostly acknowledges that there is a trade-off between correctness and cost of development, and that there are operational means of solving technical problems. We can find the balance later.

As always, feel free to solve it a different way; this is just a starting point on an intent.
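The cleanup dance in steps 2 and 3 can be sketched like this. This is a minimal sketch against a stand-in client, and the resource types listed are examples, not an exhaustive set:

```typescript
// Sketch of the pre/post-test cleanup dance: list every resource the org
// could have generated, then delete each one. `TestApi` is a stand-in for
// a real HTTP client; the endpoint shapes follow the issue text.

type Resource = { id: string };

interface TestApi {
  // GET /api/v2/:resourceType (scoped to the org)
  list(resourceType: string, orgID: string): Resource[];
  // /api/v2/:resourceType/delete/:id
  remove(resourceType: string, id: string): void;
}

// Example resource types only -- the real list would cover everything.
const RESOURCE_TYPES = ['dashboards', 'tasks', 'buckets', 'variables'];

function wipeOrgResources(api: TestApi, orgID: string): number {
  let deleted = 0;
  for (const type of RESOURCE_TYPES) {
    for (const {id} of api.list(type, orgID)) {
      api.remove(type, id);
      deleted++;
    }
  }
  return deleted;
}
```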

Phase parseResponse out in favor of fromFlux

The purpose of this issue is to address the fact that we currently have too many parsers in use. As a result of this, we end up needing to maintain a bunch of different code in a bunch of different places in order to make small changes to the UI.

Overview

Put simply, in most places that the parseResponse function is being used in the UI, it is doing so in order to obtain an array of values for a given column. Fortunately, fromFlux returns a more straightforward data structure that allows us to easily return this data without the need for complicated logic.
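To illustrate the difference in shape (using a minimal stand-in, not giraffe's actual interface): with a column-oriented table, getting all values for a column is a single lookup rather than a walk over parseResponse's per-table output.

```typescript
// Minimal stand-in for the column-oriented table shape that fromFlux
// returns. The real giraffe API differs in detail; this only shows why
// "give me all values for a column" becomes trivial.

type ColumnData = Array<number | string>;

interface Table {
  columnKeys: string[];
  getColumn(key: string): ColumnData | null;
}

function makeTable(columns: Record<string, ColumnData>): Table {
  return {
    columnKeys: Object.keys(columns),
    getColumn: key => columns[key] ?? null,
  };
}

const table = makeTable({_time: [1, 2, 3], _value: [10.5, 11.2, 9.8]});
const values = table.getColumn('_value'); // one call, no nested iteration
```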

Criteria for Success

Find and replace all the usages of parseResponse in the UI in favor of fromFlux. This should be feature flagged in order to ensure current stability.

NOTE: DO NOT INCLUDE THE RAW DATA TABLE as part of this conversation process, that will be a separate issue

Flows: As a user I can define a bucket output

GIVEN that I am editing a flow
WHEN I click the Insert Cell Button
THEN I can add an "Output to Bucket" cell

WHEN I add an "Output to Bucket" cell
THEN I can choose a bucket

WHEN I choose a bucket
THEN I can see a dropdown with label "Time" set to _time

WHEN I click the time dropdown to change it
THEN I see a dropdown menu with _time, _start, and _stop as options
AND any other columns with type "time"

GIVEN I have a valid bucket and options configured
WHEN I open the flow again or run the flow
AND the Run Mode is set to Run
THEN I can see what would have been written to the bucket

GIVEN I have a valid bucket and options configured
WHEN I open the flow again or run the flow
AND the Run Mode is set to Run with Outputs
THEN I can see what would have been written to the bucket
AND the data should actually get written to the bucket

This issue is related to https://app.zenhub.com/workspaces/applicationmonitoring-5e3b05772922e316b3a210a4/issues/influxdata/influxdb/19694

Designs

Design Mock of Cell
Design Mock of option in Insert Cell menu

Technical Notes:

as a user, i should be logged out after my idle timeout

we should respect the idle timeout limit set on the org settings page and log the user out if they have not interacted with our app via mouse or keyboard.

GIVEN that i have not interacted with the browser window for more than the idle timeout
WHEN the next call to me is issued
THEN I am logged out of the app

WHEN I try to browse to another page in the app
THEN I am logged out of the app
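The idle check described above can be sketched as follows. This is illustrative, with an injectable clock for testability, and is not the app's actual implementation:

```typescript
// Sketch of an idle-timeout check. The timeout value would come from the
// org settings; the clock is injected so the logic can be exercised
// without real waiting.

class IdleTracker {
  private lastActivity: number;

  constructor(
    private timeoutMs: number,
    private now: () => number = Date.now
  ) {
    this.lastActivity = this.now();
  }

  // Call on any mouse or keyboard interaction.
  recordActivity(): void {
    this.lastActivity = this.now();
  }

  // Call before issuing an authenticated request or on navigation;
  // `true` means the app should log the user out.
  shouldLogOut(): boolean {
    return this.now() - this.lastActivity > this.timeoutMs;
  }
}
```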

Chore: Remove Unused Parsers

This is the final item of a multi-issue epic that spans unifying the parsers in the UI to use 1 parser as the source of truth.

Once we can successfully determine that the unification work is complete we should remove the following parsers from the UI:

[ ] parseResponse from src/shared/parsing/flux
[ ] fromFluxLegacy from src/shared/utils/fromFlux.legacy.ts
[ ] fromFluxLegacy from src/shared/utils/fromFlux.ts

we should also remove any & all feature flags associated with these.

We should also consider adding the tests that exist for these as tests for the current fromFlux function in giraffe (and for whatever data transformation layer we are going to use for the RawDataTable).

Org Timeout Settings

The goal of this epic is to allow the admin user to control the auto refresh and session timeouts for their org.

Add auto refresh back to dashboards

Background

We had originally removed the auto-refresh feature from dashboards because users were racking up large bills. However, as we have recently adjusted the pricing of the cloud product we would like to add the ability to auto-refresh. We would like to add some additional functionality to prompt the user to reconfirm that they want to continue auto-refreshing.

Acceptance Criteria

GIVEN that I am on a dashboard
WHEN I select an auto-refresh interval
THEN my dashboard should refresh all visible cells on the dashboard

GIVEN that I have a dashboard open
AND that I've selected an auto-refresh interval
WHEN I am on that page for more than 1h
THEN I should be prompted to reconfirm that I want my dashboard to continue to auto-refresh

Limit Dashboard Variable Hydration

Problem

Currently, navigating to a dashboard loads and hydrates every query variable available. This is slow, expensive, and unnecessary.

Steps to Reproduce

  1. Create a few query variables
  2. Create an empty dashboard
  3. Check the network tab to see all the variables being queried
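One possible fix is to hydrate only the variables the dashboard's queries actually reference. A minimal sketch, where the function name and the naive `v.<name>` substring match are illustrative:

```typescript
// Sketch: filter hydration down to variables referenced in the
// dashboard's query text, instead of hydrating every org variable.

interface Variable {
  name: string;
}

function variablesInUse(queries: string[], variables: Variable[]): Variable[] {
  return variables.filter(v =>
    // Naive substring match on the `v.<name>` reference syntax; a real
    // implementation would need to avoid prefix collisions between names.
    queries.some(q => q.includes(`v.${v.name}`))
  );
}
```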

Flows: Audit user eventing

we need to turn on that streamEvents feature flag and start making a gap list of all the buttons we haven't added events to. If a user clicks and it changes render state, i wanna see it in tools.

after that gap list is made, if it's small, just add the events. if there's a bunch, tell someone before starting, so that we can communicate the timelines better.

as a user, i can set auto-refresh and idle timeout configuration options for my organization

As an admin user, i would like to set settings at the org level for the other users of my organization.

GIVEN that I am an admin and on the Settings area of the cloud app
WHEN i click on a tab for org settings
THEN I see a setting to enable/disable automatic refreshing for my organization
THEN I see a setting to configure idle logout in minutes for my organization

NOTES:

  • default automatic refresh is on
  • the default idle timeout is 30 minutes
  • Idle timeout is the amount of time with no input via mouse or keyboard from the user

WHEN i click on the control for enable/disable automatic refresh
THEN refreshing is toggled on or off for the app

WHEN i click in the area of the idle timeout value
THEN I can type in a new idle timeout value

WHEN i click the save button
THEN the settings are saved to my org

Notebooks: Add variables bar

i mean.. you can only add them with the raw flux editor right now, but that just means you gotta know a guy who knows a guy before it's useful. if you already know a guy, they should just pop up so you can look at the graphs you want. we can just look at all of the autogenerated flux queries and pick them out the same way we do in dashboards.

as a user, i can configure my dashboard to auto refresh

this allows the user to configure their dashboard to auto refresh in the browser as long as the setting is enabled for the org.

GIVEN that I am on a dashboard
AND auto-refresh is enabled for my org
WHEN I click to set an auto-refresh interval
THEN I am able to select from a predefined list of auto refresh intervals

WHEN I set an auto-refresh interval
THEN my dashboard should refresh all visible cells every interval

GIVEN that I have a dashboard open
AND that I've selected an auto-refresh interval
WHEN I am on that page for more than 1h
THEN I should be prompted to reconfirm that I want my dashboard to continue to auto-refresh

WHEN I confirm
THEN my dashboard should refresh all visible cells every interval

WHEN I do not confirm after 1 minute
THEN my dashboard should stop auto-refreshing

GIVEN that I have a dashboard open
AND I've selected an auto-refresh interval
WHEN I click on presentation mode
THEN my dashboard should refresh all visible cells on the dashboard with no prompt after an hour

GIVEN I have previously set a refresh interval on a dashboard
WHEN I return to the dashboard
THEN my dashboard should not have a refresh interval set
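The prompt-and-reconfirm behavior above can be sketched as a small state machine. This is illustrative only; the one-hour and one-minute windows come from the acceptance criteria, and time is injected so the logic is testable:

```typescript
// Sketch of the auto-refresh reconfirm flow: refresh normally, prompt
// after an hour, and stop if the user does not confirm within a minute.

const HOUR_MS = 60 * 60 * 1000;
const CONFIRM_WINDOW_MS = 60 * 1000;

type RefreshState = 'refreshing' | 'prompting' | 'stopped';

class AutoRefresh {
  state: RefreshState = 'refreshing';
  private startedAt: number;
  private promptedAt = 0;

  constructor(private now: () => number) {
    this.startedAt = this.now();
  }

  // Called on each tick of the refresh interval.
  tick(): RefreshState {
    const t = this.now();
    if (this.state === 'refreshing' && t - this.startedAt >= HOUR_MS) {
      this.state = 'prompting';
      this.promptedAt = t;
    } else if (this.state === 'prompting' && t - this.promptedAt >= CONFIRM_WINDOW_MS) {
      this.state = 'stopped';
    }
    return this.state;
  }

  // Called when the user reconfirms; restarts the hour-long window.
  confirm(): void {
    if (this.state === 'prompting') {
      this.state = 'refreshing';
      this.startedAt = this.now();
    }
  }
}
```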

Replace parseFiles with fromFlux-based data parser

For more on the background surrounding this issue, please see:

#39
#72
#73

Problem

Unlike the previous issues, the Raw Data Table component relies upon a different schema than what is currently provided by the fromFlux function.

What needs doing

Since our end goal is still to maintain 1 parser as our source of truth, we will want to create a data transformation layer in the UI that transforms the data returned from the fromFlux function into something that the Raw Data Table can ingest

Criteria for Success

Feature flag and use a data transformation layer that relies upon the fromFlux function in order to output the results in the format that is used in the Raw Data Table.
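A transformation layer like the one described might look like this sketch. The names are illustrative; it turns a column-oriented result (the shape fromFlux produces) into the row-major string grid a raw data table component could ingest:

```typescript
// Sketch of a transformation layer: column-oriented data in, row-major
// string grid (header row first) out.

type Columns = Record<string, Array<string | number>>;

function toRawRows(columns: Columns): string[][] {
  const keys = Object.keys(columns);
  // All columns in one table have the same length.
  const length = keys.length ? columns[keys[0]].length : 0;
  const rows = Array.from({length}, (_, i) =>
    keys.map(k => String(columns[k][i]))
  );
  return [keys, ...rows];
}
```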

Spike: why so many flux parsers?

we have so many floating around for no reason: some are behind feature flags, some only apply to the raw data view, some are used for the LSP. Let's make one parser, with giraffe being the source of truth, and fill any gap lists between them. The end state should be a single Flux parser, and it should live in giraffe. And why is there so much code that isn't papaparse?

Support fully custom time formats

(Screenshot, 2019-11-14: time format options)

Ideally, we'd present the user a list of presets, but also allow them to specify their own custom format if they'd like.

Community Template Import - bad url reloads data from previous import into popup values

Build details

OSS Nightly build

ts=2020-09-25T07:06:10.371504Z lvl=info msg="Welcome to InfluxDB" log_id=0PT1B8Gl000 version=2.0.0-beta.16 commit=838b3ea1b9 build_date=2020-09-25T05:17:55Z

Browser: Chrome 85

Steps to reproduce

In OSS

  1. Open Settings > templates.
  2. Paste the Docker URL into the Community Template GitHub URL input, look it up, and import it (https://github.com/influxdata/community-templates/blob/master/docker/docker.yml)
  3. Now paste an invalid or broken URL into the GitHub URL input, e.g. (https://github.com/influxdata/community-templates/blob/master/jenkins/jenkinz.yml)
  4. Click "Lookup Template"

Expected behavior

That the popup would load with empty or zeroed values for Dashboards, Telegraf Confs, Buckets, etc.
That the header in the popup would not be green and would include information that the URL or file is not valid
That the Install Template button in the popup would be disabled

Current behavior

The Template Installer popup contains data from the previously run Docker template import, and the Install Template button is green. There is an indication that the URL or file is bad in the notifications at the top right, but the popup values and controls do not reflect the same state.

Screencast

(Screencast: CommunityTemplatesBadUrl01)

Implement static legends for Graph (overlaid, stacked) and Band Plot

Target Customer / Background data / Context / Market Size

  • Target customer is the developer who wants to share a static image of a graph with other people. Currently, the user has to first hover over the graph so that the legend appears, then take a screenshot. Otherwise, they are stuck describing the lines in the graph manually or with a second screenshot.
  • This has frustrated our internal teams during incident management and has been requested
  • Target customer is also someone who wants to share charts on a wallboard which is a non-interactive session

Goals

  • This only applies to Graph and Band plots for now.
  • The legend should be displayed to the bottom of the cell
  • Allow a configuration option to turn on and off the legend display.
  • Allow a configuration option, which can display the most recent time value for each data series in the legend.
  • Static legend height has a configuration option (model on data explorer resize ui)

Acceptance Criteria

  • as a user, when i configure a graph or band graph in the UI, then I have the option to always display a legend on the bottom of the cell.
  • as a user, when i enable the legend on a cell, the hover tooltip display does not change when hovering over different parts of the graph
  • as a user, hovering over a line will highlight the corresponding static legend
  • as a user, i can click on a line to keep it highlighted
  • as a user, i can cmd+click to highlight multiple lines
  • as a user, I can choose to configure whether recent time values will be displayed in the legend
  • as a user, I can choose the height of the static legend in a configuration option
  • as a product owner, i would like to see the number of times a user enables static legends
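The highlight interactions in the criteria above (click to keep a line highlighted, cmd+click to highlight multiple) can be sketched as a small state transition. This is illustrative only, not giraffe's implementation:

```typescript
// Sketch of static-legend highlight behavior: plain click selects a
// single series (or clears it if it was the only one highlighted);
// cmd+click toggles a series in or out of a multi-selection.

function nextHighlight(
  current: Set<string>,
  series: string,
  cmdKey: boolean
): Set<string> {
  const next = cmdKey ? new Set(current) : new Set<string>();
  if (current.has(series) && (cmdKey || current.size === 1)) {
    next.delete(series);
  } else {
    next.add(series);
  }
  return next;
}
```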

Release Plan

  • TBD

Team

alternatives considered

  • An alternative would be for someone to be invited to an org and linked to the cell or dashboard for the incident.

Technical Background

Inspired by the success of the graph bitmap hackathon project, we're going to make exporting a bitmap / screenshot of the graph a first class feature.

Currently all this functionality lives on the frontend as an aspect of the Canvas API.

Branch with image export - https://github.com/influxdata/giraffe/tree/export_images

Server-side rendering of the visualization would be preferred over front-end rendering only. This would unblock future hopes around notification endpoints being able to grab visualizations

Design Notes

https://www.figma.com/file/6pwGOv5jyMT2rpU8ugZNM1/Static-Legend?node-id=429%3A0

Query Builder: Attempt to deselect function should let the user know that the action is prohibited

Background

After delivering auto-aggregation, we have received reports from users that not being able to deselect an auto selected aggregate results in a confusing user experience. Specifically, they mention that it feels like a bug. We would like to express to users that this is indeed the expected behavior.

(Screen capture: cant-deselect)

Acceptance Criteria

WHEN I am using the custom section of Query Builder
AND I attempt to deselect the final aggregate function in the list
THEN I should receive a notification letting me know that this behavior is not possible
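The criterion above amounts to a guard in the selection handler. A minimal sketch, where the function name and notification message are illustrative:

```typescript
// Sketch of the guard: the last selected aggregate function cannot be
// deselected; attempting it surfaces a notification instead.

function toggleAggregate(
  selected: string[],
  fn: string,
  notify: (msg: string) => void
): string[] {
  const isSelected = selected.includes(fn);
  if (isSelected && selected.length === 1) {
    notify('At least one aggregate function must remain selected.');
    return selected;
  }
  return isSelected ? selected.filter(f => f !== fn) : [...selected, fn];
}
```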

InfluxDB v2.0 Data Explorer Table Resizing

Steps to reproduce:
List the minimal actions needed to reproduce the behavior.

  1. Insert a measurement with a lot of field key/value pairs into InfluxDB v2.0
  2. Go to the InfluxDB v2.0 interface (localhost:9999) Data Explorer, and choose Table view
  3. Enter a Flux query on the measurement, piping to a pivot function that shows all the field key/value pairs as their own respective columns (screenshot omitted)

Expected behavior:
Columns should resize to fit the width of the headers and have a horizontal scroll bar if there are too many columns to fit the page.

Actual behavior:
Columns were resized to fit the page instead, so nothing is readable (screenshot omitted).

Environment info:

  • System info: Run uname -srm and copy the output here
  • InfluxDB version: Run influxd version and copy the output here
    InfluxDB 2.0.0-beta.6 (git: acd09bf0b) build_date: 2020-03-25T20:14:01Z
  • Other relevant environment details: Container runtime, disk info, etc

Config:
Copy any non-default config values here or attach the full config as a gist or file.

Logs:
Include snippet of errors in log.

Performance:
Generate profiles with the following commands for bugs related to performance, locking, out of memory (OOM), etc.

# Commands should be run when the bug is actively happening.
# Note: This command will run for at least 30 seconds.
curl -o profiles.tar.gz "http://localhost:8086/debug/pprof/all?cpu=true"
curl -o vars.txt "http://localhost:8086/debug/vars"
iostat -xd 1 30 > iostat.txt
# Attach the `profiles.tar.gz`, `vars.txt`, and `iostat.txt` output files.

Replace parseResponse in the Raw Data Table to use fromFlux

For more on the background surrounding this issue, please see:

#39
#72

Problem

Unlike the other usages of the parseResponse function, the Raw Data Table component relies upon a different schema than what is currently provided by the fromFlux function.

What needs doing

Since our end goal is still to maintain 1 parser as our source of truth, we will want to create a data transformation layer in the UI that transforms the data returned from the fromFlux function into something that the Raw Data Table can ingest

Criteria for Success

Feature flag and use a data transformation layer that relies upon the fromFlux function in order to output the results in the format that is used in the Raw Data Table.

Implement pagination for resources

Background

Implement pagination for tasks

When users have more than 100/500 tasks, we fail to properly display all of the tasks that they have. The intention of this feature is to implement pagination for tasks.

Open questions

  • What to do about the search box? Do we search locally on the page, or is there a corresponding API with paginated results that we use?
  • Do we want infinite scroll, or do we want the user to click through pages of results?
