
salt-eventsd's Introduction

This project is no longer maintained. Current Salt versions can do the same out of the box with engines.

salt-eventsd

A project built on SaltStack, but not affiliated with the SaltStack project

The current stable release is tagged as: 0.9.3

If you are already using salt-eventsd, check the changelog for the latest changes and fixes.

Due to public request, I have pushed the develop branch to GitHub for everyone to try out. From now on, the latest bleeding-edge salt-eventsd will always live in the develop branch, with new releases getting tagged.

Please note that I reserve the right to break develop. Even though I always test all changes locally before pushing them to GitHub, it may happen.

Updating from 0.9 to 0.9.3

See the changelog for improvements in 0.9.3. For more info see installation.txt.

IMPORTANT: If you're coming from 0.9, make sure you make the following changes to your config:

  • Rename 'stat_upd' to 'stat_timer'
  • Add 'stat_worker: False' (see installation.txt for details)

Availability Notes

PyPI

As of Jan 22nd, we are on pypi: https://pypi.python.org/pypi/salt-eventsd/

Debian / Ubuntu

A Debian package can be built straight from the repo by running 'dpkg-buildpackage -b'. All build dependencies have to be installed, of course.

Red Hat / CentOS

There are no packages for Red Hat yet. If you have the knowledge and the resources to support that, feel free to submit the necessary changes.

What it does

An event-listener daemon for SaltStack that writes event data into MySQL, Postgres, etc., and statistical data into Graphite, Mongo, and so on. All events that occur on SaltStack's event bus can be handled and pushed to other daemons, databases, etc. You decide yourself!

The daemon connects to the salt-master's event bus and listens for all events. Depending on the configuration, certain events can be collected by their tag and/or function name and handed down to different workers. The workers then extract the desired data fields from the return and process them further in a user-definable way.
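
As a rough orientation only, here is a minimal, hypothetical sketch of what such a backend worker might look like. The hook names (setup(), send(), shutdown()) follow the tracebacks quoted in the issues further down; all argument lists and field names are assumptions, not the project's documented API, so consult the example workers in the doc directory for the real interface.

# Hypothetical sketch of a salt-eventsd backend worker, NOT the real interface.
# Hook names follow the tracebacks quoted in the issues below; argument lists
# and event fields are assumptions.
import logging

log = logging.getLogger(__name__)


class My_Custom_Worker(object):

    name = "My_Custom_Worker"

    def setup(self, thread_name):
        # called once per worker thread before events are handed over;
        # the daemon passes the thread's name (see the TypeError issue below)
        self.thread_name = thread_name
        log.info("%s# %s initiated", thread_name, self.name)

    def send(self, entry):
        # called for each matched event; 'entry' is assumed to be the decoded
        # event dict with 'tag' and 'data' keys
        data = entry.get("data", {})
        log.info("%s# got event for jid %s", self.thread_name, data.get("jid"))

    def shutdown(self):
        # called when the worker thread is joined
        log.info("%s# shutting down", self.thread_name)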

Dependencies

Required python runtime dependencies:

  • salt >= 0.16.2
  • mysql-python
  • argparse
  • pyzmq

Optional/useful dependencies

  • simplejson (Install with: pip install simplejson)

Usage Examples

  • collect all events with tag 'new_job' to have a job history that lasts longer than SaltStack's job cache
  • collect all job returns by matching on the job-return tag of events returned from minions, giving you a database of all returns that you can index, search, etc.
  • filter events into different backends like Graphite, MongoDB, MySQL, Postgres, whatever...
  • collect historic data like load average by collecting events with tag 'load' created by your own load-monitoring module
  • create and collect your own custom backends that process your event data
  • etc.

Why this is useful / Who needs this?

Currently SaltStack does not have an external job cache that works without a returner. Using returners, and thereby losing Salt's encryption, is not always desirable and may not even be an option. With this daemon, you can collect all data right where it is created and returned: on the salt-master.

While SaltStack's job cache works well in smaller environments, in larger environments it can become a burden for the salt-master, especially if the job cache should be kept for a longer period of time, and I'm talking weeks and months here. This is where salt-eventsd jumps in. With the default MySQL backend, it is easy to collect data for weeks and weeks without burdening the salt-master with keeping track of jobs and their results in the job cache.

SaltStack's job cache can be completely disabled because all the data lives in an independent database, fully indexed, searchable, and easily cleaned up and/or archived with a few queries.

In larger environments it is also a good idea to separate different services from one another. With salt-eventsd you can use SaltStack for communication and salt-eventsd to collect the actual data. The benefit is that the salt-master does not need to be restarted just because changes were made, for example, to a reactor or a runner.

Features

  • collect events from the salt event bus into different backends
  • collect a configurable number of events before pushing them into different backends
  • define Prio1 events that are pushed immediately without being queued first
  • write your own backends with ease (some Python knowledge required)
  • use regular expressions for matching on events, very flexible and powerful
  • have events sent to two backends, giving you a command+return history while also pushing the data elsewhere
  • create your own SQL query templates for inserting data into the database
  • a database fully independent of SaltStack's job cache that holds all the data you want in it
  • example workers can be found in the doc directory

Testing

py.test is used to run all available tests.

To install all test dependencies, run:

$ pip install -r dev-requirements.txt

It is recommended to install all dependencies inside a virtualenv for easy isolation.

To run all tests, simply run the following in the root folder:

py.test

Useful options are -x to stop at the first failure (combine with --pdb to drop into the debugger) and -s to show prints and log output.

Benchmark

There is a simple benchmark script that can be used to test the performance of the code manually.

The script sets up almost all required mocking and config internally.

Required dependencies:

  • mock (pip install mock)

Copy the worker file doc/share/doc/eventsd_workers/Bench_Worker.py to /etc/salt/eventsd_workers/Bench_Worker.py

Run the script with python benchmark.py

salt-eventsd's People

Contributors

cameronnemo, felskrone, grokzen, jasonhancock, kev009, khabi, syphernl, theksk

salt-eventsd's Issues

First event not pushed when event_limit > 1

I hope I got this reliably reproducible. Tested on tag 0.9.1.

Use the workers and config that exist in doc/, but set event_limit=10 and dump_timer=5.

Then have a salt-master and minion on the same machine, run salt <machine_id> test.ping, watch the output in EVD, and wait 5 seconds for the timer to trigger. Nothing happens, and no events are emitted from EVD.

I think the main problem is that the timer does not flush events that are already in the queue. Any number of events can be stuck in the queue, and nothing happens until the next event comes in.
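
For illustration, a minimal sketch of the behaviour I would expect: a dump timer that flushes whatever is queued whenever it fires, even if event_limit has not been reached. All names below are made up and do not mirror salt-eventsd's internals.

# Illustrative sketch only: a dump timer that always flushes queued events,
# regardless of whether event_limit was reached. Names are hypothetical.
import threading


class EventQueue(object):
    def __init__(self, event_limit, dump_timer, flush_cb):
        self.event_limit = event_limit
        self.dump_timer = dump_timer
        self.flush_cb = flush_cb
        self.events = []
        self.lock = threading.Lock()
        self._schedule()

    def _schedule(self):
        self.timer = threading.Timer(self.dump_timer, self._on_timer)
        self.timer.daemon = True
        self.timer.start()

    def _on_timer(self):
        # flush whatever is queued, even a single event, then rearm the timer
        self.flush()
        self._schedule()

    def add(self, event):
        with self.lock:
            self.events.append(event)
            full = len(self.events) >= self.event_limit
        if full:
            self.flush()

    def flush(self):
        with self.lock:
            batch, self.events = self.events, []
        if batch:
            self.flush_cb(batch)

# Usage sketch:
# q = EventQueue(event_limit=10, dump_timer=5, flush_cb=push_to_workers)
# q.add(event)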

Add support for elasticsearch output

Elasticsearch output would be great, because then it would be trivial to use kibana to access the logs and get statistics, whereas with other outputs you'd need to prepare something yourself.

https://github.com/comperiosearch/vagrant-elk-box seems to be an OK sample of elasticsearch with kibana, with logstash in the mix if you want to try that. Please poke me on IRC if there's anything I can assist with.

daemon fails to forward events

On one of my masters the daemon stopped forwarding events; only the stat-worker kept running. After running it in a screen session and observing the same behaviour again, I stopped it with CTRL+C. It seemed to be stuck creating and joining threads.

This needs investigating...

...
[INFO ] Stats_data: {'events_hdl': 425637, 'threads_joined': 437620, 'threads_created': 442582, 'events_rec': 1169759, 'events_hdl_sec': 0.0, 'events_tot_sec': 0.0}
[DEBUG ] Event-timer finished, calling reference
[DEBUG ] Stat-timer finished, calling reference
[INFO ] 442584# started
[INFO ] 442584# Stat_Worker initiated
[INFO ] Stats_data: {'events_hdl': 425637, 'threads_joined': 437620, 'threads_created': 442583, 'events_rec': 1169759, 'events_hdl_sec': 0.0, 'events_tot_sec': 0.0}
[DEBUG ] Stat-timer finished, calling reference
[INFO ] 442585# started
[INFO ] 442585# Stat_Worker initiated
[INFO ] Stats_data: {'events_hdl': 425637, 'threads_joined': 437620, 'threads_created': 442584, 'events_rec': 1169759, 'events_hdl_sec': 0.0, 'events_tot_sec': 0.0}
[DEBUG ] Event-timer finished, calling reference
[DEBUG ] Stat-timer finished, calling reference
[INFO ] 442586# started
[INFO ] 442586# Stat_Worker initiated
[INFO ] Stats_data: {'events_hdl': 425637, 'threads_joined': 437620, 'threads_created': 442585, 'events_rec': 1169759, 'events_hdl_sec': 0.0, 'events_tot_sec': 0.0}
[DEBUG ] Event-timer finished, calling reference
[DEBUG ] Stat-timer finished, calling reference
[INFO ] 442587# started
[INFO ] 442587# Stat_Worker initiated
[INFO ] Stats_data: {'events_hdl': 425637, 'threads_joined': 437620, 'threads_created': 442586, 'events_rec': 1169759, 'events_hdl_sec': 0.0, 'events_tot_sec': 0.0}

^C[INFO ] Received CTRL+C, shutting down
[INFO ] Received signal 15
[DEBUG ] Joined worker #437622
[DEBUG ] Joined worker #437623
[DEBUG ] Joined worker #437624
...
[DEBUG ] Joined worker #442585
[DEBUG ] Joined worker #442586
[DEBUG ] Joined worker #442587
[INFO ] salt-eventsd has shut down

Feature: make eventsd more like salt-* binaries

It would probably be wise to (at least) add some options such as:

  • --help
  • -d (run as daemon)
  • -l (loglevel)
  • --version

This would make it work like all the other salt-* binaries such as salt-call, salt-minion, salt-master, etc.

Then the "fg" function could be made the default behaviour, and the process could be turned into a background daemon by providing -d (and/or --daemon).
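
Since argparse is already a runtime dependency, wiring up such flags would be straightforward. A rough sketch follows; the option names simply mirror the other salt-* binaries, and the daemonize/log-level handling is only hinted at, not taken from the actual code base.

# Sketch of a salt-* style command line for salt-eventsd using argparse
# (already a dependency). The daemonize and log-level wiring is only hinted
# at here and is not taken from the real code base.
import argparse


def parse_args(argv=None):
    parser = argparse.ArgumentParser(prog="salt-eventsd")
    parser.add_argument("--version", action="version",
                        version="salt-eventsd 0.9.3")
    parser.add_argument("-d", "--daemon", action="store_true",
                        help="run in the background as a daemon")
    parser.add_argument("-l", "--log-level", default="warning",
                        choices=["debug", "info", "warning", "error"],
                        help="console log level when running in the foreground")
    return parser.parse_args(argv)


if __name__ == "__main__":
    opts = parse_args()
    if opts.daemon:
        pass  # daemonize here
    # otherwise stay attached to the console and log at opts.log_level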

TypeError: setup() takes exactly 1 argument (2 given)

Running the default / included configs (with modified DB credentials) results in errors like this when running in the foreground. In background mode they are neither shown nor logged.

Traceback (most recent call last):
  File "/usr/lib/python2.7/threading.py", line 810, in __bootstrap_inner
    self.run()
  File "/usr/local/lib/python2.7/dist-packages/salteventsd/worker.py", line 38, in run
    self._store_data()
  File "/usr/local/lib/python2.7/dist-packages/salteventsd/worker.py", line 117, in _store_data
    self._init_backend(event_set['backend'])
  File "/usr/local/lib/python2.7/dist-packages/salteventsd/worker.py", line 45, in _init_backend
    setup_backend.setup(self.name)
TypeError: setup() takes exactly 1 argument (2 given)
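
The traceback shows the daemon calling setup_backend.setup(self.name), i.e. setup() receives the worker thread's name in addition to self, so a backend whose setup() only accepts self raises exactly this error. A minimal sketch of the mismatch and one way to fix it in a custom worker:

# The daemon calls setup_backend.setup(self.name), so setup() is handed the
# worker thread's name on top of self.

class Broken_Worker(object):
    def setup(self):        # only 'self' -> TypeError: takes exactly 1 argument (2 given)
        pass


class Fixed_Worker(object):
    def setup(self, name):  # accept the name the daemon passes in
        self.thread_name = name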

Install workers by default.

I'd suggest using a workers directory of eventsd.d in the /etc/salt directory to fall more in line with how salt has their directories.

Pin max supported salt version

@felskrone What is your opinion on pinning the max version of salt that we currently support via future TravisCI tests?

My suggestion would be to change the requirements file to salt>=0.16.2,<=2014.7.0 for now, and when we have tested a newer version we can bump the upper bound. Or would this cause too many issues with backwards compatibility on older releases?

Sub-events raising KeyError: 'fun'

I am trying to set up highstate logging. The /etc/salt/eventsd file is based on the extended example with this added:

        subs:
           # name of the sub-event
            highstate:
               # the function name, test.ping. saltutil.sync_modules, cmd.run_all, etc.
                fun: state.highstate
                backend: Minion_Event_Worker
                dict_name: data
                fields: [jid, id, retcode, return]
                template: insert into {0} (jid, servername, ret_code, result) values ('{1}', '{2}', '{3}', '{4}') on duplicate key update jid='{1}', ret_code='{3}', result='{4}';
                mysql_tab: highstate

However, when a highstate is being done this results in the following error:

Traceback (most recent call last):
  File "/usr/lib/python2.7/threading.py", line 810, in __bootstrap_inner
    self.run()
  File "/usr/lib/python2.7/dist-packages/salteventsd/worker.py", line 38, in run
    self._store_data()
  File "/usr/lib/python2.7/dist-packages/salteventsd/worker.py", line 89, in _store_data
    if( self.event_map[event]['subs'][subevent]['fun'].match( entry['data']['fun'] ) ):
KeyError: 'fun'

This is based on the comments in the config file. Am I doing it wrong or ... ?

Master restart causes eventsd to stop processing

On Salt 2016.3.0 after I restarted the salt-master the events stopped being processed.
The logs showed the following AssertionError:

2016-06-13 08:22:35 [salt.transport.ipc     ][ERROR   ] Exception occurred in Subscriber while handling stream: Already reading
2016-06-13 08:22:35 [salt.log.setup         ][ERROR   ] An un-handled exception was caught by salt's global exception handler:
AssertionError: Already reading
Traceback (most recent call last):
  File "/usr/bin/salt-eventsd", line 118, in <module>
    main()
  File "/usr/bin/salt-eventsd", line 113, in main
    daemon.start()
  File "/usr/lib/pymodules/python2.7/salteventsd/daemon.py", line 301, in start
    super(SaltEventsDaemon, self).start()
  File "/usr/lib/pymodules/python2.7/salteventsd/daemon.py", line 139, in start
    self.run()
  File "/usr/lib/pymodules/python2.7/salteventsd/daemon.py", line 311, in run
    self.listen()
  File "/usr/lib/pymodules/python2.7/salteventsd/daemon.py", line 371, in listen
    ret = event.get_event(full=True)
  File "/usr/lib/python2.7/dist-packages/salt/utils/event.py", line 579, in get_event
    ret = self._get_event(wait, tag, match_func, no_block)
  File "/usr/lib/python2.7/dist-packages/salt/utils/event.py", line 484, in _get_event
    raw = self.subscriber.read_sync(timeout=wait)
  File "/usr/lib/python2.7/dist-packages/salt/transport/ipc.py", line 654, in read_sync
    return ret_future.result()
  File "/usr/lib/python2.7/dist-packages/tornado/concurrent.py", line 215, in result
    raise_exc_info(self._exc_info)
  File "/usr/lib/python2.7/dist-packages/tornado/gen.py", line 230, in wrapper
    yielded = next(result)
  File "/usr/lib/python2.7/dist-packages/salt/transport/ipc.py", line 631, in _read_sync
    raise exc_to_raise  # pylint: disable=E0702
AssertionError: Already reading

Prevent events from being lost when workers fail

It would be nice if eventsd could check whether the worker has completed its job and otherwise (temporarily) store the event for later processing.

Workers can fail (at least) due to:

  • A bug in the code resulting in an exception
  • An issue with an upstream system (e.g. a (local) MySQL/Redis instance or a remote service such as a REST API)

It would be nice if eventsd could store the events in such cases and reprocess them at a later date, with a configurable expiry and a filter for which tags are important enough to be kept/retried (unless this should apply to all of them).

This would probably also require a change in the workers themselves, which should return the status of their _store and send calls.

A (temporary) failure in a worker should no longer result in lost events.
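
One possible direction, sketched only and not existing salt-eventsd code: have the dispatcher spool a batch to disk whenever a worker reports failure, and replay spooled batches later with a configurable expiry. The spool directory and the send_batch() return value are assumptions.

# Hypothetical sketch: spool event batches to disk when a worker reports a
# failure and replay them later. Nothing below exists in salt-eventsd today.
import json
import os
import time


def dispatch(worker, events, spool_dir="/var/spool/salt-eventsd"):
    try:
        ok = worker.send_batch(events)  # workers would need to return a status
    except Exception:
        ok = False
    if not ok:
        if not os.path.isdir(spool_dir):
            os.makedirs(spool_dir)
        path = os.path.join(spool_dir, "%d.json" % int(time.time() * 1000))
        with open(path, "w") as fh:
            json.dump(events, fh)


def replay(worker, spool_dir="/var/spool/salt-eventsd", max_age=86400):
    """Retry spooled batches, dropping anything older than max_age seconds."""
    now = time.time()
    for name in sorted(os.listdir(spool_dir)):
        path = os.path.join(spool_dir, name)
        if now - os.path.getmtime(path) > max_age:
            os.remove(path)  # configurable expiry: drop stale events
            continue
        with open(path) as fh:
            events = json.load(fh)
        if worker.send_batch(events):
            os.remove(path)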

Worker: Centralize DB config & wrong comment?

The comments in the worker mention

    # the settings for the mysql-server to use
    # it should be a read-only user, it just collects data

I assume this is "write only" instead? Otherwise it cannot insert data into the database :)

The 3 included workers all need to be configured manually with DB credentials. It's probably more practical (unless there are reasons for not doing it) to move those to a central location instead (e.g. the /etc/salt/eventsd file)?

AttributeError: 'Minion_Return_Worker' object has no attribute 'hostname'

The default provided worker does not work:

Exception in thread 1:
Traceback (most recent call last):
  File "/usr/lib/python2.7/threading.py", line 810, in __bootstrap_inner
    self.run()
  File "/usr/local/lib/python2.7/dist-packages/salteventsd/worker.py", line 41, in run
    self._store_data()
  File "/usr/local/lib/python2.7/dist-packages/salteventsd/worker.py", line 124, in _store_data
    self._cleanup()
  File "/usr/local/lib/python2.7/dist-packages/salteventsd/worker.py", line 57, in _cleanup
    backend.shutdown()
  File "/etc/salt/eventsd_workers/Minion_Return_Worker.py", line 66, in shutdown
    self.hostname))
AttributeError: 'Minion_Return_Worker' object has no attribute 'hostname'

Credentials have been set globally in the config

AttributeError: 'New_Job_Worker' object has no attribute 'hostname'

Running salt-eventsd 0.8 results in:

Exception in thread 2:
Traceback (most recent call last):
  File "/usr/lib/python2.7/threading.py", line 810, in __bootstrap_inner
    self.run()
  File "/usr/lib/pymodules/python2.7/salteventsd/worker.py", line 41, in run
    self._store_data()
  File "/usr/lib/pymodules/python2.7/salteventsd/worker.py", line 124, in _store_data
    self._cleanup()
  File "/usr/lib/pymodules/python2.7/salteventsd/worker.py", line 57, in _cleanup
    backend.shutdown()
  File "/etc/salt/eventsd_workers/New_Job_Worker.py", line 81, in shutdown
    self.hostname))
AttributeError: 'New_Job_Worker' object has no attribute 'hostname'

I assume the hostname is for the MySQL config.
I am using the all config to push the MySQL credentials down to the worker like this:

worker_credentials:

    # this entry will be used by any worker that is not explicitly defined
    all:
        username: salt
        password: xxxxxxxxxx
        hostname: localhost
        database: salt
    # add a worker with its name here, to have it use its own credentials
    #Minion_Sub_Worker:
    #    username: <user>
    #    password: <password>
    #    hostname: <hostname>
    #    database: <database>

Add console logger for foreground mode

salt-eventsd currently does not have a console logger; all output is written to the daemon's logfile.

Since the daemon disconnects from the console (by forking), we need to add a console logger if possible to be able to start in the foreground.

Batching of events not sent as batch to worker send() method

I was playing around with the batching/queue feature already present in the code to see how it was performing, and I stumbled upon a small problem in how the batch is handed to the worker class.

I set 'event_limit': 100 in my config and simulated around 1000 events/sec, so it should start 10 new threads per second and pass along 100 events per thread.

By this logic I would expect that when the worker is triggered and send() is called, I would get all 100 events in a list or set and could do anything I want with them. This is not the case right now.

In worker.py the list of events (those 100 events that were batched earlier) is currently iterated and passed one by one to the worker thread's send() method.

This, however, is not what I expect when I set an event queue of 100. I would expect it to send all 100 events (or a filtered list of relevant events) to the send() method so I could process them in there however I want, for example batching them together and sending them off to an external server for long-term storage, or bulk-inserting them into MySQL.
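
With a true batch hand-off, a worker's send_batch() could issue a single bulk insert instead of one query per event. A rough sketch assuming the MySQL backend; the table, column names, and credentials are placeholders, not the project's actual schema.

# Sketch of what a batching worker could do with a real send_batch(events):
# one executemany() per batch instead of one INSERT per event. Table, column
# and credential values are placeholders.
import MySQLdb


class Bulk_Insert_Worker(object):
    def setup(self, name):
        self.conn = MySQLdb.connect(host="localhost", user="salt",
                                    passwd="xxx", db="salt")
        self.cursor = self.conn.cursor()

    def send_batch(self, events):
        rows = [(e.get("data", {}).get("jid"),
                 e.get("data", {}).get("id"),
                 str(e.get("data", {}).get("return"))) for e in events]
        self.cursor.executemany(
            "INSERT INTO returns (jid, minion, result) VALUES (%s, %s, %s)",
            rows,
        )
        self.conn.commit()

    def shutdown(self):
        self.cursor.close()
        self.conn.close()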

Add logrotate

The log for eventsd may grow very big, even with "normal" output and without debug enabled:

total 501M
drwxr-xr-x  2 root root   4,0K mrt 19 06:45 .
drwxrwxr-x 14 root syslog 4,0K mrt 19 06:45 ..
-rw-r--r--  1 root root   500M mrt 19 12:31 eventsd

It is perhaps a good idea to add logrotate configuration (e.g. /etc/logrotate.d/salt-eventsd) to the package to prevent this from happening.

Same for Salt itself, a 7-day rotate should suffice:

/var/log/salt/eventsd {
    weekly
    missingok
    rotate 7
    compress
    notifempty
}

Too many open files in Stat_Worker

Do we properly close all handles where necessary?

2015-03-08 13:22:43 [Stat_Worker                             ][ERROR   ] 23121378# Connecting to the mysql-server failed:
2015-03-08 13:22:43 [Stat_Worker                             ][ERROR   ] (2005, "Unknown MySQL server host 'custom_hostname' (24)")
2015-03-08 13:22:53 [salteventsd.worker                      ][INFO    ] 23121379# started
2015-03-08 13:22:53 [salteventsd.daemon                      ][CRITICAL] Failed to write state to /var/run/salt-eventsd.status
2015-03-08 13:22:53 [Stat_Worker                             ][ERROR   ] 23121379# Connecting to the mysql-server failed:
2015-03-08 13:22:53 [salteventsd.daemon                      ][ERROR   ] [Errno 24] Too many open files: '/var/run/salt-eventsd.status'
Traceback (most recent call last):
  File "/usr/lib/pymodules/python2.7/salteventsd/daemon.py", line 516, in _write_state
IOError: [Errno 24] Too many open files: '/var/run/salt-eventsd.status'
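
An illustrative sketch, not the actual Stat_Worker code, of making every attempt release its handles even when the connect or the insert fails, so that repeated failures cannot exhaust file descriptors:

# Illustrative only: release the MySQL handles in a finally block so repeated
# connection failures cannot leak file descriptors. Table and credential
# values are placeholders.
import MySQLdb


def store_stats(rows, db_opts):
    conn = None
    cursor = None
    try:
        conn = MySQLdb.connect(**db_opts)
        cursor = conn.cursor()
        cursor.executemany(
            "INSERT INTO stats (name, value) VALUES (%s, %s)", rows
        )
        conn.commit()
    finally:
        # always runs, even when connect() or executemany() raised
        if cursor is not None:
            cursor.close()
        if conn is not None:
            conn.close()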

Upload to pypi

Hi. I know you have stated that you do not want to maintain multiple package builds like pypi, deb, rpm, etc., but I think you should really reconsider and at least upload this package to PyPI so it can be easily installed via pip.

I have found a very easy and useful guide on how to upload a package to PyPI: http://peterdowns.com/posts/first-time-with-pypi.html

In the end when you have the proper setup and config on your system, uploading a new release is as easy as python setup.py sdist upload -r pypi and it is done.

The PR #21 is intended for this issue and to make it work better when uploaded to pypi.

Feature Request - Argument Field

Having recently deployed eventsd (working well, thanks!), I've noticed the argument field is base64-encoded. Presumably Salt stores it as base64; would it be possible for eventsd to convert it before writing to the DB?

Trigger multiple backends for same tag

Is it possible to use multiple backends for the same tag? For instance, two backends that react to state.highstate: one to insert data into a database and the other to send a message to Slack.

They could be combined into one backend, but that makes it less practical.

Should I simply configure it like:

events:
   inserter:
      backend: Insert_Worker
      tag: salt/job/[0-9]*/new
   notify:
      backend: Notify_Worker
      tag: salt/job/[0-9]*/new

or would this only trigger one of the two?

2 Workers with same tag cause events to be duplicated in worker

So I was implementing a new worker to send certain events, based on which function was run on the minion, to a new backend service. What I noticed when using two workers with the same tag (tag: salt/job/[0-9]*/ret/\w+ in this case) was something really strange: for each event that eventsd got from the Salt event bus, it would send 2 events into the worker's send_batch(self, events) method. The same set of 2 events would also be sent to both workers. I would guess this has something to do with how events are filtered and matched against the config for each worker.

The expected result would be that each event is sent once to each worker, no matter how many workers use the same tag config.

KeyError: 'fun' when using subevents

Once in a while the following error comes around:

[DEBUG   ] resetting the Event-timer
Exception in thread 2:
Traceback (most recent call last):
  File "/usr/lib/python2.7/threading.py", line 810, in __bootstrap_inner
    self.run()
  File "/usr/lib/pymodules/python2.7/salteventsd/worker.py", line 40, in run
    self._store_data()
  File "/usr/lib/pymodules/python2.7/salteventsd/worker.py", line 102, in _store_data
    if self.event_map[event]['subs'][subevent]['tag'].match(entry['data']['fun']):
KeyError: 'fun'

This is most likely due to the fact that returns from a command (salt '*' test.foo, salt-call test.foo) differ from returns produced by the scheduler.
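
A defensive lookup around the sub-event match would avoid the traceback for entries whose data dict carries no 'fun' key. This is a sketch only, not the actual worker.py code:

# Sketch of a defensive sub-event match: scheduler returns may not carry
# data['fun'], so look it up safely before applying the compiled regex.
# Not the actual worker.py code.

def match_subevent(sub_cfg, entry):
    """Return True if the entry matches this sub-event's compiled regex."""
    fun = entry.get("data", {}).get("fun")
    if fun is None:
        return False
    return bool(sub_cfg["tag"].match(fun))

# Example: sub_cfg['tag'] would be a precompiled regex such as
# re.compile('state.highstate'); an entry without data['fun'] is simply skipped.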
