Archived

โš ๏ธ This project is no longer actively maintained!

Logquacious

Logquacious (lq) is a fast and simple log viewer built by Cash App.

It currently only supports exploration of logs stored in Elasticsearch; however, the storage/indexing backend is pluggable. If you are interested in contributing more backends, open a pull request!

Demo use of Logquacious

Rationale

Putting application and system logs in an Elasticsearch index is a common way to store logs from multiple sources in a single place that can be searched. However, while there are many web-based user interfaces for Elasticsearch, most of them either focus on read/write access, treating Elasticsearch as a general purpose database, or are Elasticsearch query builders. We didn't find any modern, well-designed, minimalist web user interfaces designed with the explicit purpose of read-only log exploration.

Features

  • Fields, filters and data sources are customisable, see config.example.json.
  • Interactive histogram.
  • Time picker.
  • URLs can be shared.
  • Expandable log entries:
    • Multiple levels of JSON objects can be expanded.
    • Click on a value to add as a filter.
    • Can copy the whole JSON payload, or a single value by pressing the copy icon.
  • Customise log direction.
  • Customise light and dark theme.

Planned Features

  • Real time tailing.
  • Show log context around an entry.

Local demo

The local demo runs a basic web server which serves Logquacious. It also runs an instance of Elasticsearch with a script to generate demo log entries.

You'll need Docker and Docker Compose installed, then run:

cd demo
docker-compose up

Wait a while, then visit http://localhost:8080/ in your browser.

You should be presented with the Logquacious UI and a few logs that are continuously generated in the background.

Installation

Docker

You will need Docker installed.

You can configure the image in multiple ways:

  • A basic configuration that has a single ES endpoint, configured via command line arguments.
  • Mounting custom config.json and/or nginx files.

Command line

You can configure the instance via command line arguments or environment variables (e.g. ES_URL):

# docker run logquacious --help
Usage: lq-startup

Flags:
  --help                       Show context-sensitive help.
  --es-proxy                   Use a reverse proxy for Elasticsearch to avoid
                               needing CORS. (ES_PROXY)
  --es-url=STRING              Elasticsearch host to send queries to, e.g.:
                               http://my-es-server:9200/ (ES_URL)
  --es-index="*"               Elasticsearch index to search in. (ES_INDEX)
  --timestamp-field="@timestamp"
                               The field containing the main timestamp entry.
                               (TIMESTAMP_FIELD)
  --level-field="level"        The field containing the log level. (LEVEL_FIELD)
  --service-field="service"    The field containing the name of the service.
                               (SERVICE_FIELD)
  --message-field="message"    The field containing the main message of the log
                               entry. (MESSAGE_FIELD)
  --ignored-fields=_id,_index,...
                               Do not display these fields in the collapsed log
                               line. (IGNORED_FIELDS)

For example, run the following command for this configuration:

  • Your Elasticsearch service is at 192.168.0.1
  • The Elasticsearch indexes start with logs-
  • The message field is text
  • You want the host to listen on port 9999
docker run -p 0.0.0.0:9999:8080 squareup/logquacious \
  --es-url="http://192.168.0.1:9200" \
  --es-index="logs-*" \
  --message-field="text"

Typical output:

2020/01/13 21:39:32 Variables for this docker image looks like this:
{ESProxy:true ESURL:http://192.168.0.1:9200 ESIndex:logs-* TimestampField:@timestamp LevelField:level ServiceField:service MessageField:text IgnoredFields:[_id _index] IgnoredFieldsJoined:}
2020/01/13 21:39:32 Successfully generated/etc/nginx/conf.d/lq.conf
2020/01/13 21:39:32 Successfully generated/lq/config.json
2020/01/13 21:39:32 Running nginx...

http://localhost:9999/ should work in this example.

Custom config

If you have your own config.json, you can simply mount it at /lq/config.json.

docker run -p 0.0.0.0:9999:8080 -v `pwd`/custom-config.json:/lq/config.json squareup/logquacious

You can also mount your own nginx configuration at /etc/nginx/conf.d/lq.conf. By default it is generated for you based on command line arguments.

Build from source

  • Install Node.js.
  • Clone and build:
git clone https://github.com/cashapp/logquacious
cd logquacious
npm install
npm run build
  • npm run build generates a dist directory containing all the files needed for a web server, including an index.html file.

Configure Logquacious in config.json.

To set up a web server if you don't already have one:

  • Install Caddy: curl https://getcaddy.com | bash -s personal
  • Create a Caddyfile that listens on port 8080 over HTTP and proxies to your Elasticsearch server:
:8080
proxy /es my-elastic-search-hostname:9200 {
  without /es
}
  • Run caddy in the same directory as the Caddyfile
  • Point your browser at http://localhost:8080/. The Elasticsearch endpoint should be working at http://localhost:8080/es/.

Development

The development workflow is very similar to the "Build from source" setup above. You can run a self-reloading development server instead of npm run build.

You can either set up CORS on Elasticsearch or reverse proxy both the hot server and Elasticsearch. To do the latter, create a Caddyfile in the root of the project:

:8080

# Redirect all /es requests to the Elasticsearch server
proxy /es my-elastic-search-hostname:9200 {
  without /es
}

# Redirect all other requests to parcel's development server.
proxy / localhost:1234

To run the parcel development server:

npm run hot

Run caddy. You should be able to hit http://localhost:8080/ and when you make any code changes the page should refresh.

Tests are executed with npm test.

Configuration

The top-level structure of the JSON configuration is as follows:

{
  "dataSources": [],
  "fields": {
    "name-of-field-configuration": []
  },
  "filters": []
}

dataSources

Contains the URL, index, etc for querying Elasticsearch. An example:

"dataSources": [
  {
    "id": "elasticsearch-server",
    "type": "elasticsearch",
    "index": "{{.ESIndex}}",
    "urlPrefix": "{{if .ESProxy}}/es{{else}}{{.ESURL}}{{end}}",
    "fields": "main",
    "terms": "-service:lq-nginx"
  }
]

id is a reference that can be used to create a data source filter (see below). If you only have one data source, you don't need to create a data source filter.

type must be elasticsearch until more data sources are implemented.

index is the Elasticsearch index to search in. You can use an asterisk as a wildcard. This corresponds to the URL in a query request, e.g. http://es:9200/index/_search

urlPrefix is the URL to connect to your Elasticsearch server, without a trailing slash. This will resolve to urlPrefix/index/_search.
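Putting urlPrefix and index together, the final request URL is assembled roughly like this (an illustrative sketch, not code from the project):

```typescript
// Illustrative only: how urlPrefix and index combine into the query URL.
// `urlPrefix` is assumed to have no trailing slash, per the note above.
function searchUrl(urlPrefix: string, index: string): string {
  return `${urlPrefix}/${index}/_search`;
}
```

For example, searchUrl("/es", "logs-*") yields "/es/logs-*/_search".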

fields is a reference to a key under the top-level fields object in the JSON configuration.

terms is a string of Elasticsearch terms that will always be added to the user's query. This is useful for hiding logs generated by queries to Logquacious itself.

fields

Configures how log entries are shown in the UI. You can transform values, add CSS classes, ignore fields, and so on.

Here is an example:

"fields": {
  "main": {
    "timestamp": "@timestamp",
    "collapsedFormatting": [
      {
        "field": "@timestamp",
        "transforms": [
          "timestamp"
        ]
      },
      {
        "field": "message",
        "transforms": [
          {
            "addClass": "strong"
          }
        ]
      }
    ],
    "collapsedIgnore": ["_id", "_index"]
  }
}

This configuration will do the following:

  • It is named main, which is the fields reference used in dataSources.
  • Place the @timestamp field at the start of each line and format it.
  • Place the message field afterwards and make it stand out.
  • All other fields in the log entry will be shown afterwards in the default grey colour, except _id and _index.

If you want to see examples of more transforms, check out the example config.

filters

Defining filters enables a drop-down menu between the search button and the time drop-down.

You can customise it with values to filter on, e.g.:

"filters": [
  {
    "id": "region",
    "urlKey": "r",
    "title": "Region",
    "default": "ap-southeast-2",
    "type": "singleValue",
    "items": [
      {
        "title": "All Regions",
        "id": null
      },
      {
        "title": "Sydney",
        "id": "ap-southeast-2"
      },
      {
        "title": "London",
        "id": "eu-west-2"
      }
    ]
  }
]

This singleValue filter lets you filter log entries where, for example, region equals ap-southeast-2. It is identical to searching for region:ap-southeast-2 in the search field.

The urlKey is what is used in the URL for this filter. For example the URL might look like: http://localhost:8080/?q=my+search&r=ap-southeast-2

title is shown as the name of the field/value in the search drop-down menu.

A null id signifies that no value is selected, in which case the filter does not constrain that key.

Another type of filter is a dataSource filter for when you have multiple Elasticsearch instances. The id of each item must point to the id of a data source. You can see an example of this in the example config under the env filter.
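As a rough illustration (the ids and titles below are invented; the real example lives in the example config under the env filter), a dataSource filter could look like this, with each item's id matching the id of an entry in dataSources:

```json
"filters": [
  {
    "id": "env",
    "urlKey": "e",
    "title": "Environment",
    "default": "staging-source",
    "type": "dataSource",
    "items": [
      {"title": "Staging", "id": "staging-source"},
      {"title": "Production", "id": "production-source"}
    ]
  }
]
```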

Cross-Origin Resource Sharing (CORS)

If you want to communicate with Elasticsearch on a different host or port than Logquacious, you will need to configure Elasticsearch to respond with the correct CORS headers.

For example, if https://lq.mycompany.com/ serves the static content, you will need to set these configuration options in Elasticsearch:

http.cors.enabled: true
http.cors.allow-origin: "https://lq.mycompany.com/"

See the Elasticsearch documentation on the http configuration options for more information.

License

Copyright 2019 Square, Inc.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

   http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

logquacious's People

Contributors

adrw, cpakman, ewolak-sq, gak, jaischeema, jonwinton, lyallcooper, lyonlai, maniksurtani, mightyguava, rimpi05


logquacious's Issues

Histograms don't always reflect the number of logs

I haven't tested this thoroughly, but sometimes the histogram doesn't go back as far as the logs go. Or on occasion the bar lengths don't match the numbers. I've heard a few people say they don't trust the histogram due to seeing inconsistencies themselves.

Repeating histogram errors

I haven't investigated why yet, but here is the output in the console:

Error: <rect> attribute y: Expected length, "NaN".
Error: <rect> attribute height: Expected length, "NaN".
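
A defensive guard along these lines might paper over the symptom while the root cause is investigated (a hypothetical sketch, not the actual fix):

```typescript
// Hypothetical guard: clamp non-finite or negative values to 0 before
// assigning them to <rect> y/height attributes, so a bad histogram bucket
// can't produce "NaN" lengths.
function safeRectLength(value: number): number {
  return Number.isFinite(value) && value >= 0 ? value : 0;
}
```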

Live/tail mode

The ability to follow the latest log entries would be an awesome feature.

Due to the nature of Elasticsearch latency and ordering, the implementation needs careful consideration. I'm leaning towards simply showing logs in the order they are received from Elasticsearch.

  • Poll Elasticsearch with an overlapping time frame, e.g. "everything in the last 5 minutes, ordered by timestamp".
  • If a log entry hasn't already been added, add it to the end of the entries.

This implementation is simple and should be fast, compared with inserting out-of-order log entries between older ones.

I think a prototype is needed to make sure this works reasonably well, and to look at other options if it is jarring, slow, etc.
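
The polling loop described above could be sketched like this (a hypothetical helper, not part of the codebase), assuming each entry carries a unique id:

```typescript
// Hypothetical sketch of the overlap-and-dedup polling idea: entries whose
// ids have already been seen are skipped; new ones are appended in the order
// Elasticsearch returned them.
type LogEntry = { id: string; message: string };

function appendNewEntries(
  entries: LogEntry[],
  seenIds: Set<string>,
  polled: LogEntry[],
): LogEntry[] {
  for (const entry of polled) {
    if (!seenIds.has(entry.id)) {
      seenIds.add(entry.id);
      entries.push(entry);
    }
  }
  return entries;
}
```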

Long expanded entry lines can't be copied

When expanded values are too long, you cannot copy them because the copy icon is past the edge of the screen.

A solution could be to put the copy icon near the start.

Be more obvious between environments

A simple fix would be allowing a filter to apply a style class to particular values shown in the nav bar.

A more difficult one I've tried before is to change the colour of the whole top bar for an important filter, e.g. viewing production logs. This was complex because of the multiple styles using the same colour explicitly in the nav dropdowns, etc.

I think we should go with the first option, and if the second is too complicated, create a separate issue.

Copied values should not be HTML escaped

It looks like HTML escaping is also being applied to the value that gets copied when you click the copy button. I'm getting output like this:

[2020/01/31 21:31:36.088 +00:00] [WARN] [region_request.go:313] [&quot;tikv reports &#x60;StoreNotMatch&#x60; retry later&quot;] [storeNotMatch=&quot;request_store_id:4 actual_store_id:23398 &quot;] [ctx=&quot;region ID: 15, meta: id:15 start_key:\&quot;t\\200\\000\\000\\000\\000\\000\\000\\r\&quot; end_key:\&quot;t\\200\\000\\000\\000\\000\\000\\000\\017\&quot; region_epoch:&lt;conf_ver:8 version:7 &gt; peers:&lt;id:47 store_id:4 &gt; peers:&lt;id:66 store_id:41 &gt; peers:&lt;id:13381 store_id:13133 &gt; , peer: id:47 store_id:4 , addr: tidb-test-1-tikv-2.tidb-test-1-tikv-peer.tidb-test-1.svc:20160, idx: 0&quot;]

When it should look like this:

[2020/01/31 21:31:36.088 +00:00] [WARN] [region_request.go:313] ["tikv reports `StoreNotMatch` retry later"] [storeNotMatch="request_store_id:4 actual_store_id:23398 "] [ctx="region ID: 15, meta: id:15 start_key:\"t\\200\\000\\000\\000\\000\\000\\000\\r\" end_key:\"t\\200\\000\\000\\000\\000\\000\\000\\017\" region_epoch:<conf_ver:8 version:7 > peers:<id:47 store_id:4 > peers:<id:66 store_id:41 > peers:<id:13381 store_id:13133 > , peer: id:47 store_id:4 , addr: tidb-test-1-tikv-2.tidb-test-1-tikv-peer.tidb-test-1.svc:20160, idx: 0"]
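
One possible shape of the fix (a hypothetical helper; only the entities visible in the report above are handled):

```typescript
// Hypothetical sketch: decode HTML entities before the value reaches the
// clipboard. Only the entities that appear in this issue are mapped here.
function unescapeHtml(s: string): string {
  const entities: Record<string, string> = {
    "&quot;": '"',
    "&#x60;": "`",
    "&lt;": "<",
    "&gt;": ">",
    "&amp;": "&",
  };
  return s.replace(/&(?:quot|#x60|lt|gt|amp);/g, (m) => entities[m]);
}
```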

Recent search history

Allow users to access previously used queries. Related to #45

It could potentially be a button, or pressing up in the search bar, or in the help area.

Don't compile inferno in development mode

Hinted by this warning:

main.436520c6.js:sourcemap:8 It looks like you're using a minified copy of the development build of Inferno. When deploying Inferno apps to production, make sure to use the production build which skips development warnings and is faster. See http://infernojs.org for more details.

Autocompletion

Allow autocompletion when users type.

For example, when you focus the cursor on the input, a list of keys is immediately shown in a drop-down. As the user types, the list narrows down the options.

Once the user gets to the colon, e.g. key:, a set of values could be displayed.

We also need to consider previously searched terms for the user to reuse. Maybe it can be part of the autocomplete functionality, or live elsewhere.

#44 will help with this.

Some kind of column alignment for fields

This could just be a transform with the configuration specifying how much space a value should have; e.g. the logging field might be set to 4, for INFO, DEBU, etc., leaving a gap when the value is empty.
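
The transform could be sketched as follows (hypothetical; the width would come from configuration):

```typescript
// Hypothetical fixed-width transform: pad short values to the configured
// width and truncate long ones, so fields line up into columns. An empty or
// missing value becomes a gap of spaces.
function alignColumn(value: string | undefined, width: number): string {
  const v = value ?? "";
  return v.length > width ? v.slice(0, width) : v.padEnd(width, " ");
}
```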

Update the README with details on new features and add them to the demo

New features need to be documented. Maybe a separate page in the wiki?

  • Ability to show Java exceptions (javaException) and link it to your repository.
  • View in context feature (contextFilters) where you can add a term to the query while maintaining focus on a single log entry.
  • Transforms: mapValue, mapClass, uppercase, shortenJavaFqcn, randomStableColor
  • Copy link

Also add these examples to the docker-compose demo.

Context subtrees/fixes

  • Allow sub-tree contexts.
  • Allow regular expressions.
  • Handle arrays for Elasticsearch.
  • Don't show context links when they don't apply.

Automatically expand arrays in expanded view

Say a log entry has the following:

{
  "bugs": [
    {"sev": 0, "title": "🕷"},
    {"sev": 1, "title": "🦟"},
    {"sev": 2, "title": "🐞"}
  ]
}

Expanding bugs will display a fairly useless list of collapsed object items. The user has to manually click on each item to see its contents.

Ideally these would all be expanded when clicking on bugs.

Switching environments does not update the log frequency histogram

  1. Look up some logs (in AWS at least).
  2. Switch environment (e.g. from staging to production)

Expected behavior: Histogram changes to match new searched logs
Actual behavior: The histogram is the same as in the first selected environment, despite the logs being different. Note that switching regions will update the histogram to match the logs from the first environment in the newly selected region (instead of the second selected environment). A hard refresh of the page updates the histogram properly (but the problem persists if a different environment is then selected).

Selected filter value goes back to default after a refresh instead of the empty value

To reproduce:

  • Use this filter:
{
  "id": "region",
  "urlKey": "r",
  "title": "Region",
  "default": "us-west-2",
  "type": "singleValue",
  "items": [
    {
      "title": "All Regions",
      "id": null
    },
    {
      "title": "us-west-2",
      "id": "us-west-2"
    },
    {
      "title": "us-west-1",
      "id": "us-west-1"
    }
  ]
}
  • Select All Regions
  • Refresh
  • You'll see that us-west-2 is selected again (the default).

Custom HTML, JS, CSS

Allow users to specify locations of custom assets that they host themselves. e.g. in config.json:

{
  "extraJS": [
    "customScript.js",
    "googleAnalytics.js"
  ],
  "extraCSS": [
     "https://fonts.googleapis.com/icon?family=Material+Icons"
  ],
  "introHTML": "./intro.html",
  "errorHTML": "./error.html"
}

introHTML would replace the introduction area with your own company-specific intro. Similarly, errorHTML shows a custom error when a fetch request fails.

Also needs a documentation update.

Allow selection of expanded text values (remove href)

Unfortunately, browsers assume users want to drag links instead of selecting the text.

We need a good way to allow selection of values while still letting users easily add a value to the query.

One option is to style the value to appear as a link (highlighted text, pointer cursor) without it actually being an href. I like this option, but users might not try to select the text, out of habit from browsers not normally allowing that on links.

Another option is to remove the link and replace it with an icon to "add" to the query.

Logquacious UI freeze when rendering many large logs


For a time, some simple queries returned log lines with messages consistently around 1 MB in size. This causes lq's UI to freeze for about 20 seconds after the search returns.

Looking at the profile, the bottleneck is almost entirely browser rendering. It's interesting that the page is reflowed twice on render (maybe that's the chunking logic?).

Regardless, we should probably truncate long fields in the compact view.
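
A truncation sketch (the 1,000-character cutoff is an assumption; a real implementation would make it configurable):

```typescript
// Hypothetical: shorten a field value for the collapsed view, marking the
// cut with an ellipsis. MAX_COLLAPSED_LENGTH is an assumed limit.
const MAX_COLLAPSED_LENGTH = 1000;

function truncateForCollapsedView(value: string): string {
  if (value.length <= MAX_COLLAPSED_LENGTH) return value;
  return value.slice(0, MAX_COLLAPSED_LENGTH) + "…";
}
```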

Negate search terms

This would be the opposite of clicking a value to add the key:value to the query. Maybe a shift click or an icon to click.
