
lttng-analyses's People

Contributors

abusque, eepp, greenscientist, jdesfossez, jgalar, kienanstewart, milianw, mjeanson, psrcode, simark, stgraber


lttng-analyses's Issues

Can't run LTTng analysis on traced data

I tried using LTTng analyses for tracing, starting with the samples given in the manual. I use Ubuntu in a VM and installed all necessary packages. I collected the trace data as described in the manual:

lttng-analyses-record

After I recorded some events I tried analysing them:

sudo lttng-cputop lttng-analysis-17045-20200818-174021

When I execute this command, I get many warnings and an error at the end:

[warning] Unknown value 0 in enum.
[warning] Unknown value 0 in enum.
[warning] Unknown value 33 in enum.
[warning] Unknown value 33 in enum.
[warning] Unknown value 33 in enum.
[warning] Unknown value 33 in enum.
[warning] Unknown value 33 in enum.
[warning] Unknown value 33 in enum.
[warning] Unknown value 33 in enum.
[warning] Unknown value 33 in enum.
[warning] Unknown value 33 in enum.
[warning] Unknown value 33 in enum.
[warning] Unknown value 33 in enum.
[warning] Unknown value 33 in enum.
[warning] Unknown value 33 in enum.
[warning] Unknown value 33 in enum.
[warning] Unknown value 1569 in enum.
[warning] Unknown value 1569 in enum.
[warning] Unknown value 33 in enum.
[warning] Unknown value 33 in enum.
[warning] Unknown value 33 in enum.
[warning] Unknown value 33 in enum.
Error: Cannot run analysis: 'pid'  

I also tried the same command on the sample trace provided in the manual, and it works flawlessly.

I'm really not sure what I am doing wrong here. I also tried a clean VM and reinstalled all necessary packages, without success.

Validate event and field names in --period

Make sure the event and field names specified in the --period argument actually exist in the metadata of the trace(s). This saves the user from waiting for the analysis to complete only to discover a typo.

We have all the code needed in _check_period_args; we just need to wire it to the new period definition mechanism.
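A minimal sketch of the validation step, assuming the set of event names can be extracted from the trace metadata up front (the function name `check_period_names` and both argument lists are illustrative, not the project's API):

```python
def check_period_names(period_events, metadata_events):
    """Return the event names referenced by a period definition that do
    not exist in the trace metadata, so we can fail fast before running."""
    return sorted(set(period_events) - set(metadata_events))

# A typo like 'sched_switchh' is caught before the analysis runs:
unknown = check_period_names(['sched_switchh', 'sched_wakeup'],
                             ['sched_switch', 'sched_wakeup'])
# unknown == ['sched_switchh'] -> report it to the user immediately
```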

wrong argument order in parse_trace_collection_time_range

09:57 < milian> there is a bug in lttng-analyses when I try to use the timerange arg for e.g. lttng-schedlog: in command.py:627 the second arg passed to parse_trace_collection_time_range is self._handles, but the argument order for parse_trace_collection_time_range is different
09:57 < milian> collection, time_range, gmt, handles
09:57 < milian> it gets passed: collection, handles, time_range, gmt
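A hedged sketch of the fix: passing the arguments by keyword removes any dependency on positional order. The signature below mirrors the order quoted in the chat log, not the verified source:

```python
def parse_trace_collection_time_range(collection, time_range, gmt, handles):
    # Signature order as quoted above: collection, time_range, gmt, handles.
    return (collection, time_range, gmt, handles)

# The buggy call site passes positionally in the wrong order:
#   parse_trace_collection_time_range(collection, handles, time_range, gmt)
# Keyword arguments make the intent explicit and order-proof:
result = parse_trace_collection_time_range(
    collection='traces', time_range=(0, 100), gmt=False, handles=['h'])
```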

combined per-XYZ I/O usage stats

Right now when I use the lttng-iousagetop command, I get useful Per-process I/O, Per-file I/O and Block I/O reports. What I'm missing is a way to combine this, to get e.g. the per-process I/O for every file and ideally also for every block.

That would allow one to figure out which file was written to by what process, and on what block that file lives.

cputop histogram should be relative to 100%

The histogram bars are scaled relative to the largest value in the table, but since we print percentages, they should be scaled relative to 100 %.

Per-TID Usage Process Migrations Priorities

████████████████████████████████████████████████████████████████████████████████ 0.07 % rcu_sched (7) 0 [20]
█████████████████████████████████████████████████ 0.04 % sshd (9114) 0 [20]
██████████████████████████████████████████████ 0.04 % kworker/1:1 (42) 0 [20]
████████████████████████████████ 0.03 % kworker/u8:1 (9090) 0 [20]
█████████████████████████████ 0.02 % cpuburn (9917) 0 [-61]
███████████ 0.01 % iscsid (2309) 0 [10]
█████████ 0.01 % cpuburn (9916) 0 [-61]
████ 0.00 % ksoftirqd/3 (23) 0 [20]
████ 0.00 % ksoftirqd/1 (13) 0 [20]
██ 0.00 % kworker/u8:0 (8626) 0 [20]
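A sketch of the suggested scaling (illustrative code, not the project's rendering function): the bar width becomes proportional to the percentage itself rather than to the largest entry, so tiny values like the 0.07 % row above no longer fill the whole width.

```python
def bar(percent, width=80):
    """Render a histogram bar proportional to percent of 100,
    not to the largest value in the table."""
    return '█' * int(round(width * percent / 100.0))

# Scaled against 100 %, all of the sub-0.1 % rows above become
# (correctly) nearly empty, while a 50 % consumer fills half the width.
half = bar(50.0)
tiny = bar(0.07)
```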

python3-lttnganalyses does not exist in PPA

I have set up the Ubuntu PPA as described in the GitHub README.rst, but python3-lttnganalyses cannot be found. All other packages installed just fine.

OS: Linux Mint 17.3 Rosa
uname -a: Linux desktop 3.18.5-031805-generic #201501292218 SMP Fri Jan 30 03:19:17 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

tests.common.test_format_utils.TestFormatTimestamp fails on Ubuntu Trusty

On Ubuntu Trusty 14.04, tests.common.test_format_utils.TestFormatTimestamp fails with a delta of one second. The tzdata version on this system is 2016d-0ubuntu0.14.04, while the test passes on Xenial with tzdata 2016c-1.

======================================================================
FAIL: test_negative (tests.common.test_format_utils.TestFormatTimestamp)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/root/lttng-analyses/tests/common/test_format_utils.py", line 150, in test_negative
    self.assertEqual(result, '1948-05-08 23:02:51.876543211')
AssertionError: '1948-05-08 23:02:52.876543211' != '1948-05-08 23:02:51.876543211'
- 1948-05-08 23:02:52.876543211
?                   ^
+ 1948-05-08 23:02:51.876543211
?                   ^

This seems to indicate a leap second may have been introduced in this newer tzdata; not sure what we can do about this.

iousagetop not working due to net_if_receive_skb

The current project uses the net_dev_xmit and netif_receive_skb events to trace network usage. However, my machine (4.19.0-6-amd64 #1 SMP Debian 4.19.67-2+deb10u1 (2019-09-20) x86_64 GNU/Linux) reports net_if_receive_skb.

I've solved this for myself by replacing all string occurrences of netif_receive_skb with net_if_receive_skb.

diff --git a/lttnganalyses/core/io.py b/lttnganalyses/core/io.py
index 1337ab9..bb7297a 100644
--- a/lttnganalyses/core/io.py
+++ b/lttnganalyses/core/io.py
@@ -36,7 +36,7 @@ class IoAnalysis(Analysis):
     def __init__(self, state, conf):
         notification_cbs = {
             'net_dev_xmit': self._process_net_dev_xmit,
-            'netif_receive_skb': self._process_netif_receive_skb,
+            'net_if_receive_skb': self._process_netif_receive_skb,
             'block_rq_complete': self._process_block_rq_complete,
             'io_rq_exit': self._process_io_rq_exit,
             'create_fd': self._process_create_fd,
diff --git a/lttnganalyses/linuxautomaton/net.py b/lttnganalyses/linuxautomaton/net.py
index 9517422..6249159 100644
--- a/lttnganalyses/linuxautomaton/net.py
+++ b/lttnganalyses/linuxautomaton/net.py
@@ -28,7 +28,7 @@ class NetStateProvider(sp.StateProvider):
     def __init__(self, state):
         cbs = {
             'net_dev_xmit': self._process_net_dev_xmit,
-            'netif_receive_skb': self._process_netif_receive_skb,
+            'net_if_receive_skb': self._process_netif_receive_skb,
         }
 
         super().__init__(state, cbs)
@@ -63,7 +63,7 @@ class NetStateProvider(sp.StateProvider):
                 proc.fds[fd].fd_type = sv.FDType.maybe_net
 
     def _process_netif_receive_skb(self, event):
-        self._state.send_notification_cb('netif_receive_skb',
+        self._state.send_notification_cb('net_if_receive_skb',
                                          iface_name=event['name'],
                                          recv_bytes=event['len'],
                                          cpu_id=event['cpu_id'])
diff --git a/tests/integration/trace_writer.py b/tests/integration/trace_writer.py
index dc69051..a949d9b 100644
--- a/tests/integration/trace_writer.py
+++ b/tests/integration/trace_writer.py
@@ -261,7 +261,7 @@ class TraceWriter():
         self.add_event(self.net_dev_xmit)
 
     def define_netif_receive_skb(self):
-        self.netif_receive_skb = CTFWriter.EventClass("netif_receive_skb")
+        self.netif_receive_skb = CTFWriter.EventClass("net_if_receive_skb")
         self.netif_receive_skb.add_field(self.uint64_type, "_skbaddr")
         self.netif_receive_skb.add_field(self.uint32_type, "_len")
         self.netif_receive_skb.add_field(self.string_type, "_name")

I think it would be desirable to fix this in this repo as well. Is this solution sufficient? Would all functions need to be renamed as well?
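Rather than renaming, one option is to register the same callback under both event names, so traces from kernels emitting either spelling are handled without a code change per kernel. A sketch under that assumption (the handler is illustrative, not the project's code):

```python
def _process_netif_receive_skb(event):
    # Handle the packet-receive event regardless of which name emitted it.
    return event['len']

# Register one handler under both spellings seen in the wild, mirroring
# the cbs dict structure shown in the diff above.
cbs = {name: _process_netif_receive_skb
       for name in ('netif_receive_skb', 'net_if_receive_skb')}

rx_bytes = cbs['net_if_receive_skb']({'len': 1500})
```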

Tests fail when LC_ALL=C


$ python3 setup.py test  
running test
running egg_info
writing entry points to lttnganalyses.egg-info/entry_points.txt
writing requirements to lttnganalyses.egg-info/requires.txt
writing top-level names to lttnganalyses.egg-info/top_level.txt
writing dependency_links to lttnganalyses.egg-info/dependency_links.txt
writing lttnganalyses.egg-info/PKG-INFO
reading manifest file 'lttnganalyses.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
writing manifest file 'lttnganalyses.egg-info/SOURCES.txt'
running build_ext
test_parse_date_invalid (tests.common.test_parse_utils.TestParseDate) ... ok
test_parse_full_date (tests.common.test_parse_utils.TestParseDate) ... ok
test_parse_full_date_nsec (tests.common.test_parse_utils.TestParseDate) ... ok
test_parse_time (tests.common.test_parse_utils.TestParseDate) ... ok
test_parse_time_nsec (tests.common.test_parse_utils.TestParseDate) ... ok
test_parse_timestamp (tests.common.test_parse_utils.TestParseDate) ... ok
test_garbage (tests.common.test_parse_utils.TestParseDuration) ... ok
test_invalid_units (tests.common.test_parse_utils.TestParseDuration) ... ok
test_no_units (tests.common.test_parse_utils.TestParseDuration) ... ok
test_valid_units (tests.common.test_parse_utils.TestParseDuration) ... ok
test_binary_units (tests.common.test_parse_utils.TestParseSize) ... ok
test_coreutils_units (tests.common.test_parse_utils.TestParseSize) ... ok
test_garbage (tests.common.test_parse_utils.TestParseSize) ... ok
test_invalid_units (tests.common.test_parse_utils.TestParseSize) ... ok
test_no_units (tests.common.test_parse_utils.TestParseSize) ... ok
test_si_units (tests.common.test_parse_utils.TestParseSize) ... ok
test_invalid_date (tests.common.test_parse_utils.TestParseTraceCollectionDate) ... ok
test_multi_day_date (tests.common.test_parse_utils.TestParseTraceCollectionDate) ... ok
test_multi_day_time (tests.common.test_parse_utils.TestParseTraceCollectionDate) ... ok
test_single_day_date (tests.common.test_parse_utils.TestParseTraceCollectionDate) ... ok
test_single_day_time (tests.common.test_parse_utils.TestParseTraceCollectionDate) ... ok
test_invalid_format (tests.common.test_parse_utils.TestParseTraceCollectionTimeRange) ... ok
test_multi_day_date (tests.common.test_parse_utils.TestParseTraceCollectionTimeRange) ... ok
test_multi_day_time (tests.common.test_parse_utils.TestParseTraceCollectionTimeRange) ... ok
test_single_day_date (tests.common.test_parse_utils.TestParseTraceCollectionTimeRange) ... ok
test_single_day_time (tests.common.test_parse_utils.TestParseTraceCollectionTimeRange) ... ok
test_integer (tests.common.test_format_utils.TestFormatIpv4) ... ok
test_sequence (tests.common.test_format_utils.TestFormatIpv4) ... ok
test_with_port (tests.common.test_format_utils.TestFormatIpv4) ... ok
test_empty (tests.common.test_format_utils.TestFormatPrioList) ... ok
test_multiple_prios (tests.common.test_format_utils.TestFormatPrioList) ... ok
test_one_prio (tests.common.test_format_utils.TestFormatPrioList) ... ok
test_repeated_prio (tests.common.test_format_utils.TestFormatPrioList) ... ok
test_repeated_prios (tests.common.test_format_utils.TestFormatPrioList) ... ok
test_huge (tests.common.test_format_utils.TestFormatSize) ... ok
test_negative (tests.common.test_format_utils.TestFormatSize) ... ok
test_reasonable (tests.common.test_format_utils.TestFormatSize) ... ok
test_zero (tests.common.test_format_utils.TestFormatSize) ... ok
test_print_date (tests.common.test_format_utils.TestFormatTimeRange) ... ok
test_time_only (tests.common.test_format_utils.TestFormatTimeRange) ... ok
test_date (tests.common.test_format_utils.TestFormatTimestamp) ... ok
test_negative (tests.common.test_format_utils.TestFormatTimestamp) ... ok
test_time (tests.common.test_format_utils.TestFormatTimestamp) ... ok
test_not_syscall (tests.common.test_trace_utils.TestGetSyscallName) ... ok
test_sys (tests.common.test_trace_utils.TestGetSyscallName) ... ok
test_syscall_entry (tests.common.test_trace_utils.TestGetSyscallName) ... ok
test_multi_day (tests.common.test_trace_utils.TestGetTraceCollectionDate) ... ok
test_single_day (tests.common.test_trace_utils.TestGetTraceCollectionDate) ... ok
test_different_day (tests.common.test_trace_utils.TestIsMultiDayTraceCollection) ... ok
test_same_day (tests.common.test_trace_utils.TestIsMultiDayTraceCollection) ... ok
test_iolatencytop (tests.integration.test_io.IoTest) ... ok
test_iousagetop (tests.integration.test_io.IoTest) ... ERROR
test_cputop (tests.integration.test_cputop.CpuTest) ... ERROR
test_irqlog (tests.integration.test_irq.IrqTest) ... ok
test_irqstats (tests.integration.test_irq.IrqTest) ... ok
test_disable_intersect (tests.integration.test_intersect.IntersectTest) ... ok
test_no_intersection (tests.integration.test_intersect.IntersectTest) ... skipped 'not supported by Babeltrace < 1.4.0'

======================================================================
ERROR: test_iousagetop (tests.integration.test_io.IoTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/mjeanson/Git/Debian/lttnganalyses/tests/integration/test_io.py", line 69, in test_iousagetop
    expected = self.get_expected_output(test_name)
  File "/home/mjeanson/Git/Debian/lttnganalyses/tests/integration/analysis_test.py", line 61, in get_expected_output
    return expected_file.read()
  File "/usr/lib/python3.5/encodings/ascii.py", line 26, in decode
    return codecs.ascii_decode(input, self.errors)[0]
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 303: ordinal not in range(128)

======================================================================
ERROR: test_cputop (tests.integration.test_cputop.CpuTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/mjeanson/Git/Debian/lttnganalyses/tests/integration/test_cputop.py", line 45, in test_cputop
    expected = self.get_expected_output(test_name)
  File "/home/mjeanson/Git/Debian/lttnganalyses/tests/integration/analysis_test.py", line 61, in get_expected_output
    return expected_file.read()
  File "/usr/lib/python3.5/encodings/ascii.py", line 26, in decode
    return codecs.ascii_decode(input, self.errors)[0]
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 297: ordinal not in range(128)

----------------------------------------------------------------------
Ran 57 tests in 0.540s

FAILED (errors=2, skipped=1)
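The decode errors suggest the expected-output files, which contain the histogram bars (0xe2 is the first byte of '█' in UTF-8), are opened with the locale's default encoding, which is ASCII under LC_ALL=C. A hedged sketch of a fix: pass encoding='utf-8' explicitly so the tests no longer depend on the locale (the helper name is illustrative):

```python
import os
import tempfile

def read_expected_output(path):
    # Open with an explicit encoding instead of relying on the locale;
    # under LC_ALL=C the default codec is ASCII and '█' cannot decode.
    with open(path, encoding='utf-8') as expected_file:
        return expected_file.read()

# Round-trip a non-ASCII expected file to demonstrate.
fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, 'w', encoding='utf-8') as f:
    f.write('████ 0.00 %\n')
content = read_expected_output(path)
os.remove(path)
```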

Crash during statedump parsing

$ ~/projects/src/lttng/lttng-analyses/lttng-iousagetop --skip-validation --tid 2873 --debug lttng-startup-1551/
Traceback (most recent call last):      
  File "/ssd/milian/projects/src/lttng/lttng-analyses/lttnganalyses/cli/command.py", line 73, in _run_step
    fn()
  File "/ssd/milian/projects/src/lttng/lttng-analyses/lttnganalyses/cli/command.py", line 341, in _run_analysis
    self._automaton.process_event(event)
  File "/ssd/milian/projects/src/lttng/lttng-analyses/lttnganalyses/linuxautomaton/automaton.py", line 75, in process_event
    sp.process_event(ev)
  File "/ssd/milian/projects/src/lttng/lttng-analyses/lttnganalyses/linuxautomaton/sp.py", line 33, in process_event
    self._cbs[name](ev)
  File "/ssd/milian/projects/src/lttng/lttng-analyses/lttnganalyses/linuxautomaton/statedump.py", line 102, in _process_lttng_statedump_file_descriptor
    cpu_id=event['cpu_id'])
  File "/ssd/milian/projects/src/lttng/lttng-analyses/lttnganalyses/linuxautomaton/automaton.py", line 56, in send_notification_cb
    cb(**kwargs)
  File "/ssd/milian/projects/src/lttng/lttng-analyses/lttnganalyses/core/io.py", line 314, in _process_update_fd
    fd_list = self.tids[tid].fds[fd]
KeyError: 1662
Error: Cannot run analysis: 1662

The error message at the end is misleading: it has nothing to do with the analysis itself; 1662 is an FD. It looks like we are trying to update the filename of an FD that is not in the analysis state.

We do not have access to the trace that triggers this problem.
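A minimal sketch of a defensive fix for the crashing lookup in _process_update_fd, assuming the state layout implied by the traceback (tids maps TIDs to processes, each with an fds mapping; the names come from the traceback, the guard itself is illustrative): a statedump can reference a TID or FD the analysis has not seen yet, so the update should be skipped instead of raising KeyError.

```python
def process_update_fd(tids, tid, fd, new_filename):
    """Update an FD's filename, ignoring TIDs/FDs unknown to the
    analysis state (e.g. when tracing started mid-stream)."""
    proc = tids.get(tid)
    if proc is None or fd not in proc['fds']:
        return False  # unknown TID or FD: skip instead of crashing
    proc['fds'][fd] = new_filename
    return True

tids = {1234: {'fds': {3: '/var/log/syslog'}}}
skipped = process_update_fd(tids, 1662, 5, '/tmp/x')   # unknown TID: False
updated = process_update_fd(tids, 1234, 3, '/tmp/x')   # known FD: True
```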

Dependency on 'babeltrace' binary is not enforced

I installed the analyses using sudo ./setup.py install on a system that has the python3-babeltrace package, but not the babeltrace binary itself. No error was reported during installation.

Running, for example, the lttng-cputop-mi --metadata command works, but running the actual analysis does not, and the following message is printed:

Error: Cannot open trace: [Errno 2] No such file or directory: 'babeltrace'
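A sketch of an early check that would surface the missing dependency at startup instead of mid-analysis (illustrative, not the project's code):

```python
import shutil

def check_binary(name):
    """Return an error message if the given CLI tool is absent from
    PATH, else None, so callers can fail fast with a clear message."""
    if shutil.which(name) is None:
        return "Error: the '%s' binary is required but was not found in PATH" % name
    return None

# At startup, before opening the trace:
err = check_binary('babeltrace')
if err:
    print(err)
```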

lttng-schedtop output results outside the specified timerange

With that trace:
http://secretaire.dorsal.polymtl.ca/~fgiraldeau/traces/cyclictest-20160224.tar.gz

$ lttng-schedtop --timerange "[15:12:34.037491350, 15:12:34.477824044]" /tmp/cyclictest-20160224/snapshot-1-20160224-201238-0
Checking the trace for lost events...
Processing the trace: 100% [##################################################################] Time: 0:00:16 
Timerange: [2016-02-24 15:12:34.037491350, 2016-02-24 15:12:34.477824960]

Scheduling top
Wakeup               Switch                 Latency (us)   Priority  CPU   Wakee                      Waker                    
[15:12:19.174492582, 15:12:34.037505017]    14863012.435         20    3   lttng-consumerd (13358)    ktimersoftd/0 (4)        
[15:12:19.454490433, 15:12:34.037491350]    14583000.917         -2    3   ktimersoftd/3 (32)         ktimersoftd/0 (4)        
[15:12:20.049464909, 15:12:34.218199859]    14168734.950        -51    2   irq/22-Tegra PC (159)      stress (31182)           
[15:12:19.965607003, 15:12:34.064508073]    14098901.070         20    3   kworker/3:1 (1424)         kworker/1:2 (1402)       
[15:12:32.768959888, 15:12:34.037927682]     1268967.794       -100    3   cyclictest (27117)         ktimersoftd/2 (25)       
[15:12:34.064534489, 15:12:34.074489032]        9954.543         20    0   kworker/0:2 (1425)         kworker/1:2 (1402)       
[15:12:34.469802243, 15:12:34.477268463]        7466.220         -2    2   ktimersoftd/2 (25)         irq/154-hpd (161)        
[15:12:34.474464807, 15:12:34.477591545]        3126.738         20    2   rcuc/2 (24)                irq/154-hpd (161)        
[15:12:34.402909268, 15:12:34.403239100]         329.832         20    3   lttng-consumerd (13358)    ktimersoftd/3 (32)       
[15:12:34.477364879, 15:12:34.477608545]         243.666         20    2   rs:main Q:Reg (470)        in:imklog (469)          
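Several rows above (e.g. wakeups at 15:12:19 and 15:12:20) begin well before the requested range start of 15:12:34.037491350. A hedged sketch of the containment check one would expect (names and nanosecond timestamps are illustrative; whether such rows should be excluded or clamped to the range is a design decision):

```python
def in_time_range(range_begin, range_end, wakeup_ts, switch_ts):
    # Only report wakeup/switch pairs fully contained in the requested
    # timerange; pairs that start before range_begin are excluded.
    return wakeup_ts >= range_begin and switch_ts <= range_end

inside = in_time_range(10_000, 100_000, 20_000, 90_000)   # contained: True
outside = in_time_range(10_000, 100_000, 5_000, 90_000)   # starts early: False
```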

option --freq-series is not honored in lttng-schedfreq

We would expect lttng-schedfreq to combine all PIDs into a single table (or all priorities if --per-prio is specified), giving multiple series, but it currently does not (tested on stable-0.4 and master). It keeps outputting multiple tables (one per PID, or one per priority).

The feature seems to work fine with lttng-irqfreq, but not with lttng-schedfreq.

cputop doesn't work with Python 3.10

docker@a6e0d2a2abac:~/ansible$ lttng-cputop result/hostdpdk/600/config_s8_60/
Traceback (most recent call last):
  File "/usr/local/bin/lttng-cputop", line 5, in <module>
    from lttnganalyses.cli.cputop import run
  File "/usr/local/lib/python3.10/dist-packages/lttnganalyses/cli/cputop.py", line 27, in <module>
    from .command import Command
  File "/usr/local/lib/python3.10/dist-packages/lttnganalyses/cli/command.py", line 33, in <module>
    from . import mi, progressbar, period_parsing
  File "/usr/local/lib/python3.10/dist-packages/lttnganalyses/cli/period_parsing.py", line 24, in <module>
    from ..core import period
  File "/usr/local/lib/python3.10/dist-packages/lttnganalyses/core/period.py", line 23, in <module>
    from . import event as core_event
  File "/usr/local/lib/python3.10/dist-packages/lttnganalyses/core/event.py", line 40, in <module>
    class Event(collections.Mapping):
AttributeError: module 'collections' has no attribute 'Mapping'
docker@a6e0d2a2abac:~/GE/yocto-build/ansible$ 

With previous version:

<stdin>:1: DeprecationWarning: Using or importing the ABCs
  from 'collections' instead of from 'collections.abc' is deprecated
  since Python 3.3, and in 3.10 it will stop working
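The standard compatibility shim for this: collections.Mapping moved to collections.abc in Python 3.3 and was removed from collections in 3.10, so the import should come from collections.abc (the try/except fallback covers very old interpreters). The Event class below only mirrors the failing class definition from the traceback; its body is illustrative.

```python
try:
    from collections.abc import Mapping  # Python >= 3.3, required on 3.10+
except ImportError:
    from collections import Mapping      # legacy fallback

class Event(Mapping):
    """Minimal read-only Mapping, mirroring the failing class definition."""
    def __init__(self, fields):
        self._fields = fields

    def __getitem__(self, key):
        return self._fields[key]

    def __iter__(self):
        return iter(self._fields)

    def __len__(self):
        return len(self._fields)

ev = Event({'cpu_id': 2})
```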

filter_time_range compares int and str

Traceback (most recent call last):
  File "/usr/local/bin/lttng-iolatencytop-mi", line 9, in <module>
    load_entry_point('lttnganalyses==0.4.3+24.gccf1554', 'console_scripts', 'lttng-iolatencytop-mi')()
  File "/usr/local/lib/python3.5/dist-packages/lttnganalyses-0.4.3+24.gccf1554-py3.5.egg/lttnganalyses/cli/io.py", line 1179, in runlatencytop_mi
  File "/usr/local/lib/python3.5/dist-packages/lttnganalyses-0.4.3+24.gccf1554-py3.5.egg/lttnganalyses/cli/io.py", line 1138, in _runlatencytop
  File "/usr/local/lib/python3.5/dist-packages/lttnganalyses-0.4.3+24.gccf1554-py3.5.egg/lttnganalyses/cli/io.py", line 1118, in _run
  File "/usr/local/lib/python3.5/dist-packages/lttnganalyses-0.4.3+24.gccf1554-py3.5.egg/lttnganalyses/cli/command.py", line 70, in run
  File "/usr/local/lib/python3.5/dist-packages/lttnganalyses-0.4.3+24.gccf1554-py3.5.egg/lttnganalyses/cli/command.py", line 296, in _run_analysis
  File "/usr/local/lib/python3.5/dist-packages/lttnganalyses-0.4.3+24.gccf1554-py3.5.egg/lttnganalyses/core/analysis.py", line 83, in end
  File "/usr/local/lib/python3.5/dist-packages/lttnganalyses-0.4.3+24.gccf1554-py3.5.egg/lttnganalyses/core/analysis.py", line 178, in _end_period
  File "/usr/local/lib/python3.5/dist-packages/lttnganalyses-0.4.3+24.gccf1554-py3.5.egg/lttnganalyses/core/analysis.py", line 95, in _send_notification_cb
  File "/usr/local/lib/python3.5/dist-packages/lttnganalyses-0.4.3+24.gccf1554-py3.5.egg/lttnganalyses/cli/command.py", line 597, in _analysis_tick_cb
  File "/usr/local/lib/python3.5/dist-packages/lttnganalyses-0.4.3+24.gccf1554-py3.5.egg/lttnganalyses/cli/io.py", line 213, in _analysis_tick
  File "/usr/local/lib/python3.5/dist-packages/lttnganalyses-0.4.3+24.gccf1554-py3.5.egg/lttnganalyses/cli/io.py", line 900, in _get_top_result_tables
  File "/usr/local/lib/python3.5/dist-packages/lttnganalyses-0.4.3+24.gccf1554-py3.5.egg/lttnganalyses/cli/io.py", line 879, in _fill_log_result_table_from_io_requests
  File "/usr/local/lib/python3.5/dist-packages/lttnganalyses-0.4.3+24.gccf1554-py3.5.egg/lttnganalyses/cli/io.py", line 880, in
  File "/usr/local/lib/python3.5/dist-packages/lttnganalyses-0.4.3+24.gccf1554-py3.5.egg/lttnganalyses/cli/io.py", line 277, in _filter_io_request
  File "/usr/local/lib/python3.5/dist-packages/lttnganalyses-0.4.3+24.gccf1554-py3.5.egg/lttnganalyses/cli/io.py", line 272, in _filter_time_range
TypeError: unorderable types: int() > str()
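The failure in _filter_time_range suggests one bound arrives as a string (e.g. straight from argument parsing) while event timestamps are integers; Python 3 refuses ordering comparisons between int and str. A hedged sketch of normalizing before comparing (names are illustrative, not the project's signature):

```python
def filter_time_range(begin_ns, end_ns, ts):
    # Normalize the bounds to int up front: argparse hands over strings,
    # while event timestamps are integer nanoseconds.
    begin_ns = int(begin_ns)
    end_ns = int(end_ns)
    return begin_ns <= ts <= end_ns

kept = filter_time_range('100', '200', 150)    # str bounds now work: True
dropped = filter_time_range(100, 200, 250)     # outside the range: False
```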
