
cisco-ie / tdm


Telemetry Data Mapper to ease data discovery, correlation, and usage with YANG, MIBs, etc.

License: Apache License 2.0

Languages: Shell 1.05%, Dockerfile 0.27%, Python 56.57%, HTML 42.00%, CSS 0.11%
Topics: yang, mib, snmp, grpc, netconf, telemetry, mdt, ios-xr, ios-xe, nx-os

tdm's Introduction

Telemetry Data Mapper (TDM)

Telemetry Data Mapper (TDM) maps data identifiers from SNMP, gRPC, NETCONF, CLI, etc. to each other.

TDM provides an offline, immutable view of the data availability advertised by data models, with search to quickly identify data of interest and the ability to map data identifiers to one another, making it easier to keep track of what is roughly equivalent to what.

Index Screenshot

For example, discovering and mapping...

SNMP OID         | YANG XPath
bgpLocalAS       | Cisco-IOS-XR-ipv4-bgp-oper:bgp/instances/instance/instance-active/default-vrf/global-process-info/global/local-as
ifMTU            | Cisco-IOS-XR-pfi-im-cmd-oper:interfaces/interface-xr/interface/mtu
cdpCacheDeviceId | Cisco-IOS-XR-cdp-oper:cdp/nodes/node/neighbors/details/detail/device-id
...              | ...

Problem Statement

In its current state, network telemetry can be accessed in many different ways that are not easily reconciled; for instance, finding the same information in SNMP MIBs and NETCONF/YANG modules. Discovering the datapaths is often tedious and somewhat arcane, and there is no way to determine whether the information gathered will have the same values, or which source is more accurate than another. Further, the operational methods of deploying this monitoring vary across platforms and implementations. This makes network monitoring a fragmented ecosystem of inconsistent and unverified data. There needs to be manageability and cross-domain insight into data availability.

Solution

TDM seeks to solve this problem by providing an overlay platform that gives generic access to the network telemetry capabilities purported to be supported by an OS/release or platform, and that creates relationships between individual datapaths to demonstrate their consistency, validity, and interoperability. This is exposed both through a UI for human usage and through an API for automated usage. TDM does not seek to provide domain-specific manageability, but to serve as an overlay insight tool. There is significantly more documentation in doc/.

Architecture

Architecture

Schema

TDM Schema

Usage

System Requirements

This has only been lightly evaluated, but we recommend:

  • 4 cores
  • 16 GB RAM (ETL process is wildly inefficient at the moment)
  • 20 GB SSD/HDD

These specifications also depend somewhat on the expected load.

Prerequisites

  • Docker CE (don't you love it when everything is simplified).
    • There are DockerHub/external dependencies, so if your environment requires a proxy, it must be configured in the Docker daemon and, where necessary, in some Dockerfiles. This is not handled automatically; open an issue for assistance if required. A minimal proxy sketch follows this list.
  • docker-compose for deployment. Docker Swarm support has been dropped in favor of ElasticSearch provisioning, as Docker Swarm does not support the required ulimit settings.
  • Unix-like environment with bash et al. support.
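
If a proxy is needed, a common approach is a systemd drop-in for the Docker daemon. This is a minimal sketch, assuming Docker runs under systemd; the proxy URL proxy.example.com:8080 is a placeholder for your environment.

# Sketch: point a systemd-managed Docker daemon at a proxy.
# Replace proxy.example.com:8080 with your actual proxy.
sudo mkdir -p /etc/systemd/system/docker.service.d
cat <<'EOF' | sudo tee /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:8080"
Environment="HTTPS_PROXY=http://proxy.example.com:8080"
Environment="NO_PROXY=localhost,127.0.0.1"
EOF
sudo systemctl daemon-reload && sudo systemctl restart docker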

Commands

  • setup.sh to install docker-compose for your user.
  • start.sh [http|https] to start the Docker stack.
  • stop.sh [http|https] to stop the Docker stack.
  • reset.sh [http|https] to stop the Docker stack and delete the persisted storage volumes.

Installation

Ensure there are no port conflicts or change the forwarded ports in docker-compose*.yml.

git clone https://www.github.com/cisco-ie/tdm.git
# If docker-compose is not installed...
./setup.sh
# Start the stack!
./start.sh [http|https]
# You're good to go :)
# To monitor ETL process...
docker logs -f tdm_etl_1

Once the containers are built and running, it currently takes ~8 hours for all of the data to be fully parsed and for TDM to become fully available. Progress is visible through the etl Docker container logs, usually accessible via docker logs -f tdm_etl_1. Once the etl container has exited, TDM is loaded with the current snapshot of available data. Unfortunately there is no recurring ETL process at this time.
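
One quick way to tell whether the load has finished is to inspect the etl container's state; this is a sketch, with the container name tdm_etl_1 taken from the default docker-compose naming used above.

# Sketch: check whether the ETL container has exited and with what code.
docker inspect -f '{{.State.Status}} (exit code {{.State.ExitCode}})' tdm_etl_1
# "exited (exit code 0)" means the load completed; "running" means parsing is still in progress.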

If deploying with HTTPS, an SSL .crt and .key must be placed under nginx/ as tdm.cisco.com.crt and tdm.cisco.com.key respectively. Using different filenames requires changing them in docker-compose.https.yml and nginx/nginx.https.conf. This lets the components run over plain HTTP while NGINX terminates the encryption of public web traffic - convenient for now, albeit not as inherently secure.
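
For lab or test deployments without a CA-signed certificate, a self-signed pair can be generated straight into nginx/. This is a sketch only; the CN below is a placeholder, and browsers will warn about the self-signed certificate.

# Sketch: generate a self-signed certificate/key for testing only, using the filenames
# expected by docker-compose.https.yml and nginx/nginx.https.conf.
openssl req -x509 -nodes -newkey rsa:2048 -days 365 \
  -keyout nginx/tdm.cisco.com.key \
  -out nginx/tdm.cisco.com.crt \
  -subj "/CN=tdm.example.com"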

Access

All ports are exposed on your Docker host interface. Typically you don't need to worry about what this is; assume everything is available from 127.0.0.1/localhost. A few quick reachability checks are sketched after the list below.

  • Port 80 (HTTP) or 443 (HTTPS) exposes the TDM Web UI.
    • /goaccess_web.html exposes website access statistics.
    • /goaccess_db.html exposes ArangoDB access statistics.
    • /goaccess_kibana.html exposes Kibana access statistics.
  • Port 8529 exposes the ArangoDB Web UI and API.
  • Port 5601 exposes Kibana for exploring the TDM Search cache (Elasticsearch).
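
The following checks are a sketch, assuming the default ports above and that ArangoDB may require credentials depending on its authentication settings.

# Sketch: quick reachability checks from the Docker host.
curl -sI http://localhost/                   # TDM Web UI (use https:// if deployed with TLS)
curl -s  http://localhost:8529/_api/version  # ArangoDB (may need credentials, e.g. curl -u <user>:<password>)
curl -sI http://localhost:5601/              # Kibana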

Licensing

TDM is dual-licensed, with the Apache License, Version 2.0 applying to its software and the Community Data License Agreement – Sharing – Version 1.0 applying to its data, such as the mappings.

Related Projects

Special Thanks

Drew Pletcher, Einar Nilsen-Nygaard, Joe Clarke, Benoit Claise, Charlie Sestito, Glenn Matthews, Michael Ott

UTDM is dead, long live UTDM.

tdm's People

Contributors

byzek, remingtonc


tdm's Issues

Best effort Matchmaking

Regardless of whether a direct mapping exists for a supplied OID or XPath, attempt to provide best-effort matches. This provides more immediate value for the end user. There needs to be a very visible distinction between a best-effort match and a vetted mapping.

SNMP OIDs not linked to OS/Releases

This is a limitation of how we were able to find our MIBs. The support lists provided on ftp://ftp.cisco.com/ detailed, on a per-platform basis, what was supported - in handmade HTML. The handmade HTML part is the painful part.

Cut unused code/schema?

Some aspects of TDM aren't being used, or are implemented in such a way that they won't accommodate the true nuance of reality. Should review and look at cutting out things which haven't actually been used yet.

Goaccess fails on startup

It appears to be trying to use the global config file, which shouldn't be the case; the command must be wrong.

Parsing... [0] [0/s]
GoAccess - version 1.3 - Jul 29 2019 14:03:26
Config file: /etc/goaccess/goaccess.conf

Fatal error has occurred
Error occurred at: src/parser.c - parse_log - 2764
No time format was found on your conf file.
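
A possible workaround (a sketch only, not the project's actual invocation; the log path and output file are placeholders) is to supply the log format explicitly so GoAccess does not depend on /etc/goaccess/goaccess.conf:

# Sketch: pass the log format on the command line; COMBINED implies the date/time formats.
goaccess /var/log/nginx/access.log \
  --log-format=COMBINED \
  -o /srv/reports/goaccess_web.html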

Hard schema for well-defined entities

The current TDM schema is very generic: DataModel, DataPath, ... This is nice to work with, but it is also challenging, as we need attributes to qualify XPaths that are not necessary for OIDs. It might be worth looking at building a harder schema around these well-defined entities. It would be beneficial in some areas, such as reducing the search space in some cases, but it also increases complexity in other cases and requires treating things less generically.

Downloading MIBs from ftp.cisco.com/pub/mibs/v2 error

Hello
It seems that ftp.cisco.com/pub/mibs/v2 is no longer available.
Do you know any workaround to download MIBs to /data/extract/mib/ ?

Logs below.
Regards
Nuno

docker logs -f tdm_etl_1
Warning: Your Pipfile requires python_version 3.6, but you are using 3.8.1 (/root/.local/share/v/d/bin/python).
$ pipenv --rm and rebuilding the virtual environment may resolve the issue.
$ pipenv check will surely fail.
INFO:root:Loading configuration.
INFO:root:Awaiting DBMS availability.
INFO:root:Awaiting DBMS connectivity.
INFO:root:Creating database.
INFO:root:Creating database schema.
INFO:root:Populating static data.
INFO:root:Populating MIB data.
INFO:root:Downloading MIBs from ftp.cisco.com/pub/mibs/v2 to /data/extract/mib/
Traceback (most recent call last):
  File "main.py", line 118, in <module>
    main()
  File "main.py", line 96, in main
    populate_snmp(db)
  File "/data/snmp.py", line 296, in populate_snmp
    snmppop.download_mibs()
  File "/data/snmp.py", line 70, in download_mibs
    ftp.retrlines(
  File "/usr/local/lib/python3.8/ftplib.py", line 451, in retrlines
    with self.transfercmd(cmd) as conn,
  File "/usr/local/lib/python3.8/ftplib.py", line 382, in transfercmd
    return self.ntransfercmd(cmd, rest)[0]
  File "/usr/local/lib/python3.8/ftplib.py", line 343, in ntransfercmd
    conn = socket.create_connection((host, port), self.timeout,
  File "/usr/local/lib/python3.8/socket.py", line 808, in create_connection
    raise err
  File "/usr/local/lib/python3.8/socket.py", line 796, in create_connection
    sock.connect(sa)
TimeoutError: [Errno 110] Operation timed out
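
As a hedged, unofficial workaround: if you already have the required MIB files locally (for example from another mirror), they could be staged into the path the ETL log references. Whether the ETL then skips the FTP download depends on the snmp.py logic and is not verified here; the container name and local directory below are assumptions.

# Sketch: stage locally obtained MIB files into the ETL container's extract directory.
docker cp ./mibs/. tdm_etl_1:/data/extract/mib/
docker restart tdm_etl_1   # re-run the ETL so it can see the pre-staged files (untested)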

Illegal variable in search ETL

2019-06-19T14:28:55.424072466Z   File "/root/.local/share/virtualenvs/data-I7nS9QO2/lib/python3.7/site-packages/pyArango/query.py", line 155, in __init__
2019-06-19T14:28:55.424164398Z     Query.__init__(self, request, database, rawResults)
2019-06-19T14:28:55.424179155Z   File "/root/.local/share/virtualenvs/data-I7nS9QO2/lib/python3.7/site-packages/pyArango/query.py", line 40, in __init__
2019-06-19T14:28:55.424188838Z     raise QueryError(self.response["errorMessage"], self.response)
2019-06-19T14:28:55.424197037Z pyArango.theExceptions.QueryError: AQL: variable name '_v' has an invalid format (while parsing). Errors: {'error': True, 'errorMessage': "AQL: variable name '_v' has an invalid format (while parsing)", 'code': 400, 'errorNum': 1510}

Order should be respected

Hi,

Added ifIndex, ifOperStatus, and ifAdminStatus mappings for IETF/OC.
The variable order is not respected.

B.

Cannot resolve http://search:9200

While running start.sh I encountered the error shown below. Can you provide any direction to troubleshoot this? Thanks very much!

INFO:root:Populating search database with parsed data.
INFO:root:Acquiring DataPaths from TDM...
INFO:root:Setting up ES...
WARNING:elasticsearch:PUT http://search:9200/datapath [status:N/A request:0.001s]
Traceback (most recent call last):
  File "/root/.local/share/virtualenvs/data-I7nS9QO2/lib/python3.7/site-packages/urllib3/connection.py", line 159, in _new_conn
    (self._dns_host, self.port), self.timeout, **extra_kw)
  File "/root/.local/share/virtualenvs/data-I7nS9QO2/lib/python3.7/site-packages/urllib3/util/connection.py", line 57, in create_connection
    for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
  File "/usr/local/lib/python3.7/socket.py", line 748, in getaddrinfo
    for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno -2] Name does not resolve
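
A couple of hedged first checks (container and service names are assumed from the default compose naming): if the Elasticsearch container failed its boot checks and exited, the search hostname will not resolve on the compose network.

# Sketch: is the Elasticsearch ("search") service actually up?
docker ps -a --filter "name=search"
# If it exited, its logs usually say why (bootstrap checks, memory, ulimits):
docker logs tdm_search_1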

Add Data Healthcheck

Current mappings aren't necessarily verified or constrained to leaf <-> leaf, etc. Should provide something that gives insight into that.

Improve MIB parsing

Need to make MIB parsing as robust as the YANG model parsing. It currently is not: hierarchy, types, etc. are missing.

Comma separated values

If I insert the following, it works fine.
ifIndex
ifOperStatus
ifAdminStatus
However, the following doesn't work
ifIndex,
ifOperStatus,
ifAdminStatus

Actually it works only for the last entry, which is misleading.

Resolve leafref?

leafref has traditionally just been handled as the type; should we resolve that leafref in TDM for the full tree? Need to validate the language.

Recurring ETL

Set a recurring, dynamic ETL to continually sync db with reality.
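
Until a recurring ETL exists, one possible stopgap (a sketch only; the service name etl, the compose invocation, and the install path are assumptions to adapt to your deployment) is to re-run the ETL service on a schedule:

# Sketch: crontab entry to re-run the ETL weekly (Sunday 02:00); adjust path and service name.
# 0 2 * * 0  cd /opt/tdm && docker-compose up -d etl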

Check for ElasticSearch requirements

Per #24 and #36, ElasticSearch is causing some issues, as it has some very specific, non-out-of-the-box requirements which cause it to fail on boot. We can either 1) put it in development mode, or 2) check beforehand and fail explicitly if the necessary settings aren't configured.
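
The host setting most commonly missing (an assumption based on Elasticsearch's standard production bootstrap checks, not on TDM-specific code) is the mmap count, which can be verified and raised on the Docker host:

# Sketch: Elasticsearch's bootstrap checks typically require vm.max_map_count >= 262144.
sysctl vm.max_map_count                     # inspect the current value
sudo sysctl -w vm.max_map_count=262144      # raise it for the running system
echo 'vm.max_map_count=262144' | sudo tee /etc/sysctl.d/99-elasticsearch.conf   # persist across reboots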

YANG XPath Machine ID parsing is invalid

At the moment we parse XPaths according to the IOS XR MDT XPath specification, which isn't necessarily RFC compliant. We should switch to RFC 8040 XPaths of the form /module:element/.../..., but we also need to use /prefix:element/.../... for IOS XE as well, so it's a bit of a conundrum. This needs to be resolved downstream, but we can band-aid it in the meantime.

Which module revision do you use?

The reason you should point to the yc.o is that I don't know which ietf-interfaces revision you point to.
https://tdm.cisco.com/datapath/view/938244

bclaise@bclaise-VirtualBox:~/yanggithub/nadeau/yang/standard/ietf/RFC$ ll ietf-interfaces*
-rw-rw-r-- 1 bclaise bclaise 24219 Dez 7 04:00 ietf-interfaces@2013-12-23.yang
-rw-rw-r-- 1 bclaise bclaise 39365 Dez 7 04:00 ietf-interfaces@2014-05-08.yang
lrwxrwxrwx 1 bclaise bclaise 31 Dez 7 04:00 ietf-interfaces.yang -> ietf-interfaces@2014-05-08.yang
bclaise@bclaise-VirtualBox:~/yanggithub/nadeau/yang/standard/ietf/RFC$
The different revisions make a real difference. In the new revision, the -state is obsolete (or deprecated, I don't recall).

I see on the same page two revisions.
ietf-interfaces@2013-12-23
ietf-interfaces@2014-05-08
So maybe we support a draft version.
Anyway, you have to clearly mention which one the search was done on.

Create theoretical term for mappings and calculations

Should migrate to terms for mappings: when something is mapped as equivalent, a ghost term is created which everything maps to, so we get a notion of a "concept" that can later be named. Things which aren't directly equivalent are mapped via calculations, but only terms are calculated to terms, so we can create generic calculations which can substitute anything mapped to a term as a solution or variable.

Refactor web UI views/APIs

Current web views.py is terrifically unorganized. Need to refactor.

Probably the best idea is to go full API and rewrite as SPA with something like Vue. Most of the UI is already dynamic once the page is loaded anyways.

YANG XPath human_id invalid

XPaths are parsed incorrectly and do not include proper prefixing. For example, openconfig-network-instance:network-instances/network-instance/afts/ipv6-unicast/ipv6-entry/state/openconfig-aft-network-instance:origin-network-instance is currently loaded into TDM as simply openconfig-network-instance:network-instances/network-instance/afts/ipv6-unicast/ipv6-entry/state/origin-network-instance. This is less important for OpenConfig overall, given the desire to have no prefixing whatsoever when the origin is OpenConfig, but ignore the specific example.

Use Docker volumes instead of bind mounts

It was a good idea to use bind mounts when first building TDM and the ETL process, but the approach has outlasted its usefulness now that there is a greater understanding of how Docker works. Docker volumes will be just as easy to debug and will avoid permission issues, etc.

how to tell if ETL is done

I did a new install and everything went smoothly, but I have now been waiting about 24 hours for the ETL job to complete. I can tell it is likely not done because I didn't see anything related to NX-OS (the only models I am interested in, to be fair) in the logs.

What should I do next? If I try to search for "bgpLocalAS" I get a trace (see below).

The last log entries I get are these:

DEBUG:root:Parsed 1 revision(s) for openconfig-isis-lsp.
DEBUG:root:Parsed 1 revision(s) for openconfig-isis-lsdb-types.
DEBUG:root:Parsed 1 revision(s) for openconfig-isis-routing.
DEBUG:root:Parsed 1 revision(s) for openconfig-aft.
DEBUG:root:Parsed 1 revision(s) for openconfig-aft-ipv4.
DEBUG:root:Parsed 1 revision(s) for openconfig-aft-common.
DEBUG:root:Parsed 1 revision(s) for openconfig-aft-types.
DEBUG:root:Parsed 1 revision(s) for openconfig-aft-ipv6.
DEBUG:root:Parsed 1 revision(s) for openconfig-aft-mpls.
DEBUG:root:Parsed 1 revision(s) for openconfig-aft-pf.
DEBUG:root:Parsed 1 revision(s) for openconfig-aft-ethernet.
DEBUG:root:Parsed 1 revision(s) for openconfig-network-instance-l2.
DEBUG:root:Parsed 1 revision(s) for cisco-xr-openconfig-optical-amplifier-deviations.
DEBUG:root:Parsed 1 revision(s) for openconfig-optical-amplifier.
DEBUG:root:Parsed 1 revision(s) for openconfig-platform.
DEBUG:root:Parsed 1 revision(s) for openconfig-platform-types.
DEBUG:root:Parsed 1 revision(s) for openconfig-transport-line-common.
DEBUG:root:Parsed 1 revision(s) for openconfig-transport-types.
DEBUG:root:Parsed 1 revision(s) for cisco-xr-openconfig-platform-deviations.
DEBUG:root:Parsed 1 revision(s) for cisco-xr-openconfig-platform-transceiver-deviations.
DEBUG:root:Parsed 1 revision(s) for openconfig-platform-transceiver.
DEBUG:root:Parsed 1 revision(s) for cisco-xr-openconfig-rib-bgp-deviations.
DEBUG:root:Parsed 1 revision(s) for openconfig-rib-bgp.
DEBUG:root:Parsed 1 revision(s) for openconfig-rib-bgp-types.
DEBUG:root:Parsed 1 revision(s) for cisco-xr-openconfig-routing-policy-deviations.
DEBUG:root:Parsed 1 revision(s) for cisco-xr-openconfig-rsvp-sr-ext-deviations.
DEBUG:root:Parsed 1 revision(s) for openconfig-rsvp-sr-ext.
DEBUG:root:Parsed 1 revision(s) for cisco-xr-openconfig-telemetry-deviations.
DEBUG:root:Parsed 1 revision(s) for openconfig-telemetry.
DEBUG:root:Parsed 1 revision(s) for cisco-xr-openconfig-terminal-device-deviations.
DEBUG:root:Parsed 1 revision(s) for openconfig-terminal-device.
DEBUG:root:Parsed 1 revision(s) for cisco-xr-openconfig-transport-line-protection-deviations.
DEBUG:root:Parsed 1 revision(s) for openconfig-transport-line-protection.
DEBUG:root:Parsed 1 revision(s) for cisco-xr-openconfig-vlan-deviations.
DEBUG:root:Parsed 1 revision(s) for ietf-netconf.
DEBUG:root:Parsed 1 revision(s) for ietf-restconf-monitoring.
DEBUG:root:Parsed 1 revision(s) for ietf-yang-library.
DEBUG:root:Parsed 1 revision(s) for nc-notifications.
DEBUG:root:Parsed 1 revision(s) for notifications.
DEBUG:root:Parsed 1 revision(s) for openconfig-aft-network-instance.
DEBUG:root:Parsed 1 revision(s) for openconfig-channel-monitor.
DEBUG:root:Parsed 1 revision(s) for openconfig-isis-policy.
DEBUG:root:Parsed 1 revision(s) for tailf-actions.
DEBUG:root:Parsed 922 module(s) for 6.5.1.
DEBUG:root:Parsed 7 version(s).
DEBUG:root:Resetting session to mitigate session timeout (???).
INFO:root:Loading IOS_XR 6.0.2 data.
INFO:root:Loading IOS_XR 6.1.2 data.
INFO:root:Loading IOS_XR 6.1.3 data.
INFO:root:Loading IOS_XR 6.2.1 data.
INFO:root:Loading IOS_XR 6.2.2 data.
INFO:root:Loading IOS_XR 6.3.1 data.
INFO:root:Loading IOS_XR 6.5.1 data.
INFO:root:Awaiting Search availability.

trace output

web_1 | ::ffff:172.25.0.6 - - [2019-02-12 18:43:16] "POST /api/v1/search/es HTTP/1.0" 500 431 14.238257
web_1 | WARNING:elasticsearch:GET http://search:9200/datapath/_search [status:N/A request:0.080s]
web_1 | Traceback (most recent call last):
web_1 | File "/root/.local/share/virtualenvs/data-I7nS9QO2/lib/python3.7/site-packages/urllib3/connection.py", line 159, in _new_conn
web_1 | (self._dns_host, self.port), self.timeout, **extra_kw)
web_1 | File "/root/.local/share/virtualenvs/data-I7nS9QO2/lib/python3.7/site-packages/urllib3/util/connection.py", line 57, in create_connection
web_1 | for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
web_1 | File "/usr/local/lib/python3.7/socket.py", line 748, in getaddrinfo
web_1 | for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
web_1 | socket.gaierror: [Errno -2] Name does not resolve
web_1 |
web_1 | During handling of the above exception, another exception occurred:
web_1 |
web_1 | Traceback (most recent call last):
web_1 | File "/root/.local/share/virtualenvs/data-I7nS9QO2/lib/python3.7/site-packages/elasticsearch/connection/http_urllib3.py", line 172, in perform_request
web_1 | response = self.pool.urlopen(method, url, body, retries=Retry(False), headers=request_headers, **kw)
web_1 | File "/root/.local/share/virtualenvs/data-I7nS9QO2/lib/python3.7/site-packages/urllib3/connectionpool.py", line 638, in urlopen
web_1 | _stacktrace=sys.exc_info()[2])
web_1 | File "/root/.local/share/virtualenvs/data-I7nS9QO2/lib/python3.7/site-packages/urllib3/util/retry.py", line 343, in increment
web_1 | raise six.reraise(type(error), error, _stacktrace)
web_1 | File "/root/.local/share/virtualenvs/data-I7nS9QO2/lib/python3.7/site-packages/urllib3/packages/six.py", line 686, in reraise
web_1 | raise value
web_1 | File "/root/.local/share/virtualenvs/data-I7nS9QO2/lib/python3.7/site-packages/urllib3/connectionpool.py", line 600, in urlopen
web_1 | chunked=chunked)
web_1 | File "/root/.local/share/virtualenvs/data-I7nS9QO2/lib/python3.7/site-packages/urllib3/connectionpool.py", line 354, in _make_request
web_1 | conn.request(method, url, **httplib_request_kw)
web_1 | File "/usr/local/lib/python3.7/http/client.py", line 1229, in request
web_1 | self._send_request(method, url, body, headers, encode_chunked)
web_1 | File "/usr/local/lib/python3.7/http/client.py", line 1275, in _send_request
web_1 | self.endheaders(body, encode_chunked=encode_chunked)
web_1 | File "/usr/local/lib/python3.7/http/client.py", line 1224, in endheaders
web_1 | self._send_output(message_body, encode_chunked=encode_chunked)
web_1 | File "/usr/local/lib/python3.7/http/client.py", line 1016, in _send_output
web_1 | self.send(msg)
web_1 | File "/usr/local/lib/python3.7/http/client.py", line 956, in send
web_1 | self.connect()
web_1 | File "/root/.local/share/virtualenvs/data-I7nS9QO2/lib/python3.7/site-packages/urllib3/connection.py", line 181, in connect
web_1 | conn = self._new_conn()
web_1 | File "/root/.local/share/virtualenvs/data-I7nS9QO2/lib/python3.7/site-packages/urllib3/connection.py", line 168, in _new_conn
web_1 | self, "Failed to establish a new connection: %s" % e)
web_1 | urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x7f981121b6d8>: Failed to establish a new connection: [Errno -2] Name does not resolve
web_1 | WARNING:elasticsearch:GET http://search:9200/datapath/_search [status:N/A request:0.077s]
web_1 | Traceback (most recent call last):
web_1 | File "/root/.local/share/virtualenvs/data-I7nS9QO2/lib/python3.7/site-packages/urllib3/connection.py", line 159, in _new_conn
web_1 | (self._dns_host, self.port), self.timeout, **extra_kw)
web_1 | File "/root/.local/share/virtualenvs/data-I7nS9QO2/lib/python3.7/site-packages/urllib3/util/connection.py", line 57, in create_connection
web_1 | for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
web_1 | File "/usr/local/lib/python3.7/socket.py", line 748, in getaddrinfo
web_1 | for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
web_1 | socket.gaierror: [Errno -2] Name does not resolve
web_1 |
web_1 | During handling of the above exception, another exception occurred:
web_1 |
web_1 | Traceback (most recent call last):
web_1 | File "/root/.local/share/virtualenvs/data-I7nS9QO2/lib/python3.7/site-packages/elasticsearch/connection/http_urllib3.py", line 172, in perform_request
web_1 | response = self.pool.urlopen(method, url, body, retries=Retry(False), headers=request_headers, **kw)
web_1 | File "/root/.local/share/virtualenvs/data-I7nS9QO2/lib/python3.7/site-packages/urllib3/connectionpool.py", line 638, in urlopen
web_1 | _stacktrace=sys.exc_info()[2])
web_1 | File "/root/.local/share/virtualenvs/data-I7nS9QO2/lib/python3.7/site-packages/urllib3/util/retry.py", line 343, in increment
web_1 | raise six.reraise(type(error), error, _stacktrace)
web_1 | File "/root/.local/share/virtualenvs/data-I7nS9QO2/lib/python3.7/site-packages/urllib3/packages/six.py", line 686, in reraise
web_1 | raise value
web_1 | File "/root/.local/share/virtualenvs/data-I7nS9QO2/lib/python3.7/site-packages/urllib3/connectionpool.py", line 600, in urlopen
web_1 | chunked=chunked)
web_1 | File "/root/.local/share/virtualenvs/data-I7nS9QO2/lib/python3.7/site-packages/urllib3/connectionpool.py", line 354, in _make_request
web_1 | conn.request(method, url, **httplib_request_kw)
web_1 | File "/usr/local/lib/python3.7/http/client.py", line 1229, in request
web_1 | self._send_request(method, url, body, headers, encode_chunked)
web_1 | File "/usr/local/lib/python3.7/http/client.py", line 1275, in _send_request
web_1 | self.endheaders(body, encode_chunked=encode_chunked)
web_1 | File "/usr/local/lib/python3.7/http/client.py", line 1224, in endheaders
web_1 | self._send_output(message_body, encode_chunked=encode_chunked)
web_1 | File "/usr/local/lib/python3.7/http/client.py", line 1016, in _send_output
web_1 | self.send(msg)
web_1 | File "/usr/local/lib/python3.7/http/client.py", line 956, in send
web_1 | self.connect()
web_1 | File "/root/.local/share/virtualenvs/data-I7nS9QO2/lib/python3.7/site-packages/urllib3/connection.py", line 181, in connect
web_1 | conn = self._new_conn()
web_1 | File "/root/.local/share/virtualenvs/data-I7nS9QO2/lib/python3.7/site-packages/urllib3/connection.py", line 168, in _new_conn
web_1 | self, "Failed to establish a new connection: %s" % e)
web_1 | urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x7f981121b908>: Failed to establish a new connection: [Errno -2] Name does not resolve
kibana_1 | {"type":"log","@timestamp":"2019-02-12T18:43:35Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: http://search:9200/"}
web_1 | WARNING:elasticsearch:GET http://search:9200/datapath/_search [status:N/A request:0.106s]
web_1 | Traceback (most recent call last):
web_1 | File "/root/.local/share/virtualenvs/data-I7nS9QO2/lib/python3.7/site-packages/urllib3/connection.py", line 159, in _new_conn
web_1 | (self._dns_host, self.port), self.timeout, **extra_kw)
web_1 | File "/root/.local/share/virtualenvs/data-I7nS9QO2/lib/python3.7/site-packages/urllib3/util/connection.py", line 57, in create_connection
web_1 | for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
web_1 | File "/usr/local/lib/python3.7/socket.py", line 748, in getaddrinfo
web_1 | for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
web_1 | socket.gaierror: [Errno -2] Name does not resolve
web_1 |
web_1 | During handling of the above exception, another exception occurred:
web_1 |
web_1 | Traceback (most recent call last):
web_1 | File "/root/.local/share/virtualenvs/data-I7nS9QO2/lib/python3.7/site-packages/elasticsearch/connection/http_urllib3.py", line 172, in perform_request
web_1 | response = self.pool.urlopen(method, url, body, retries=Retry(False), headers=request_headers, **kw)
web_1 | File "/root/.local/share/virtualenvs/data-I7nS9QO2/lib/python3.7/site-packages/urllib3/connectionpool.py", line 638, in urlopen
web_1 | _stacktrace=sys.exc_info()[2])
web_1 | File "/root/.local/share/virtualenvs/data-I7nS9QO2/lib/python3.7/site-packages/urllib3/util/retry.py", line 343, in increment
web_1 | raise six.reraise(type(error), error, _stacktrace)
web_1 | File "/root/.local/share/virtualenvs/data-I7nS9QO2/lib/python3.7/site-packages/urllib3/packages/six.py", line 686, in reraise
web_1 | raise value
web_1 | File "/root/.local/share/virtualenvs/data-I7nS9QO2/lib/python3.7/site-packages/urllib3/connectionpool.py", line 600, in urlopen
web_1 | chunked=chunked)
web_1 | File "/root/.local/share/virtualenvs/data-I7nS9QO2/lib/python3.7/site-packages/urllib3/connectionpool.py", line 354, in _make_request
web_1 | conn.request(method, url, **httplib_request_kw)
web_1 | File "/usr/local/lib/python3.7/http/client.py", line 1229, in request
web_1 | self._send_request(method, url, body, headers, encode_chunked)
web_1 | File "/usr/local/lib/python3.7/http/client.py", line 1275, in _send_request
web_1 | self.endheaders(body, encode_chunked=encode_chunked)
web_1 | File "/usr/local/lib/python3.7/http/client.py", line 1224, in endheaders
web_1 | self._send_output(message_body, encode_chunked=encode_chunked)
web_1 | File "/usr/local/lib/python3.7/http/client.py", line 1016, in _send_output
web_1 | self.send(msg)
web_1 | File "/usr/local/lib/python3.7/http/client.py", line 956, in send
web_1 | self.connect()
web_1 | File "/root/.local/share/virtualenvs/data-I7nS9QO2/lib/python3.7/site-packages/urllib3/connection.py", line 181, in connect
web_1 | conn = self._new_conn()
web_1 | File "/root/.local/share/virtualenvs/data-I7nS9QO2/lib/python3.7/site-packages/urllib3/connection.py", line 168, in _new_conn
web_1 | self, "Failed to establish a new connection: %s" % e)
web_1 | urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x7f981121bb70>: Failed to establish a new connection: [Errno -2] Name does not resolve
kibana_1 | {"type":"log","@timestamp":"2019-02-12T18:43:40Z","tags":["warning","elasticsearch","data"],"pid":1,"message":"Unable to revive connection: http://search:9200/"}
kibana_1 | {"type":"log","@timestamp":"2019-02-12T18:43:40Z","tags":["warning","elasticsearch","data"],"pid":1,"message":"No living connections"}
kibana_1 | {"type":"log","@timestamp":"2019-02-12T18:43:40Z","tags":["license","warning","xpack"],"pid":1,"message":"License information from the X-Pack plugin could not be obtained from Elasticsearch for the [data] cluster. Error: No Living connections"}
kibana_1 | {"type":"log","@timestamp":"2019-02-12T18:43:40Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: http://search:9200/"}
