
Comments (19)

Mitchellpkt commented on August 23, 2024

@neptuneresearch @serhack Thoughts?

Mitchellpkt commented on August 23, 2024

Actually, --log-level 2 is extremely verbose... Maybe just 1

serhack commented on August 23, 2024

I agree with you. --log-level 1 is fine 👍

Mitchellpkt commented on August 23, 2024

Before we finalize this...

Perhaps we could run a daemon for 24 h at varying log levels and compare the space requirements.

Can we fill out this table:

--log-level      daily_log_size
  level 0        ?? MB/day
  level 1        ?? MB/day
  level 2        ?? MB/day
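
One way to fill this in (a minimal sketch, assuming the default data directory ~/.bitmonero and that monerod is started separately with the --log-level under test):

#!/bin/sh
# Hypothetical helper: report how much the monerod log grows in 24 hours.
# Assumes the default log path and that the log is not rotated during the
# window; pass the level being tested as $1, for labelling only.
LOGFILE="$HOME/.bitmonero/bitmonero.log"
LEVEL="$1"

START=$(stat -c%s "$LOGFILE")   # log size in bytes at the start of the window
sleep 86400                     # wait 24 hours
END=$(stat -c%s "$LOGFILE")

echo "level $LEVEL: $(( (END - START) / 1024 / 1024 )) MB/day"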

Mitchellpkt commented on August 23, 2024

I can run this experiment, or let somebody else handle it. @serhack could we earmark one of the nodes for this kind of analysis for a few days (maybe Tokyo? Let me know if you have a preference or are working on one.)

serhack commented on August 23, 2024

The MAP-TOKYO-0 node could be okay. Before running the script, capture the disk size from Grafana. During the tests, MAP-TOKYO-0 can't install or update any programs or manage any files besides the log file.

neptuneresearch commented on August 23, 2024

We could use the codebase to see what messages appear in each level.

Internally, these are calls to log functions by their level name. At a quick glance I see Debug, Info, Warning, Error, Fatal, Trace. So you could grep the codebase for all Trace calls for instance to see everything it puts under the Trace log level.

I could get together a list for you to cherry-pick. Or you could let me know which parameters you're interested in, i.e. whether you need IP addresses, ... Or you can grep the codebase :)
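
For example, something like this could pull the list together (a sketch only; the MTRACE/MDEBUG/... macro names are an assumption based on the level names above and should be checked against the actual code):

# List every Trace-level log call in the source tree (macro name assumed).
grep -rn "MTRACE(" src/

# Rough call-site count per level, to compare relative verbosity
# (again, the macro names are assumptions to verify against the code).
for m in MTRACE MDEBUG MINFO MWARNING MERROR; do
  echo "$m: $(grep -rn "$m(" src/ | wc -l)"
done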

The log messages could be a significant data source, and this way we would have a definite specification tying the daemon's log messaging to our analysis: working from the set of all messages down to just what we want, instead of having to do multiple runs and work backwards from heterogeneous outputs.

FYI I have a 3 week logfile at log level 0 and its size is 16 MB.

Mitchellpkt commented on August 23, 2024

@serhack - nifty to watch from Grafana. I was going to compare the size of the log files themselves, but your method will also let us see whether anything else changes unexpectedly.

@neptuneresearch - I cannot find any good notes about the --log-level flag or its various levels. That sounds like a good side project, if you want to poke at the codebase and assemble some documentation. Perhaps we could start collecting that info in the wiki?

It would be good to see how --log-level 1 versus --log-level 2 record the following events (a quick grep sketch follows the list):

  • incoming transactions
  • incoming blocks
  • alternative blocks
  • reorgs
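
A quick way to eyeball those counts in an existing logfile (sketch only; the message substrings below are guesses at typical daemon output and need to be verified against the real logs or the source):

LOG="$HOME/.bitmonero/bitmonero.log"            # default log path assumed

grep -c "Transaction added to pool"  "$LOG"     # incoming transactions (string assumed)
grep -c "BLOCK SUCCESSFULLY ADDED"   "$LOG"     # incoming blocks (string assumed)
grep -c "BLOCK ADDED AS ALTERNATIVE" "$LOG"     # alternative blocks (string assumed)
grep -c "REORGANIZE"                 "$LOG"     # reorgs (string assumed)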

Mitchellpkt commented on August 23, 2024

Thanks @neptuneresearch - adding 16 MB / 21 d ~ 0.75 MB/day to the table

--log-level      daily_log_size
  level 0        0.75 MB/day (NR)
  level 1        ?? MB/day
  level 2        ?? MB/day

neptuneresearch commented on August 23, 2024

Ok I can make this list.

Not sure I can do it within 2 days though, so don't wait on me to get any logs going in the meantime. I can tell you more about any specific log messages as well; I'd just go look at the code surrounding them.

Mitchellpkt commented on August 23, 2024

Cool. @neptuneresearch - 2 days??? This is very low priority.

I'm going to do the logsize tests, and jot down the events listed above.

I'll write up notes based on that, then pass them off to you (well, the wiki) and you can add anything else that you find in the code.

Definitely no rush.

Mitchellpkt commented on August 23, 2024

Looks like logs compress pretty well...

$ ls -la
-rw-r--r--  1 m m   4022961 Jul 27 11:42 bitmonero-TOKYO-1day.7z
-rw-r--r--  1 m m 104850022 Jul 27 11:38 bitmonero-TOKYO-1day.log
-rw-r--r--  1 m m   4023824 Jul 27 11:42 bitmonero-TOKYO-1day.tar.xz
-rw-r--r--  1 m m   6801306 Jul 27 11:41 bitmonero-TOKYO-1day.zip

We could have a compressor script that keeps an eye on ~/.bitmonero/, and whenever it sees a <filename> that matches bitmonero.log-*, it executes:

tar -cf <filename>.tar <filename>
xz -z <filename>.tar
rm <filename>
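
A minimal sketch of such a watcher, assuming inotify-tools is available and rotated logs follow the bitmonero.log-* naming above (a periodic cron job doing the same glob would work just as well):

#!/bin/sh
# Hypothetical watcher: compress rotated monerod logs as they appear.
WATCH_DIR="$HOME/.bitmonero"

inotifywait -m -e create -e moved_to --format '%f' "$WATCH_DIR" | while read -r f; do
  case "$f" in
    bitmonero.log-*)
      tar -cf "$WATCH_DIR/$f.tar" -C "$WATCH_DIR" "$f"
      xz -z "$WATCH_DIR/$f.tar"          # leaves $f.tar.xz, removes $f.tar
      rm "$WATCH_DIR/$f"
      ;;
  esac
done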

neptuneresearch commented on August 23, 2024

Oh, please note the 16 MB figure was from the archive daemon which additionally logs:

  • 1 line per block
  • the alt_chain_info every block, which can be several lines long

So you will not see this extra size on the stock daemon.

I was actually just changing around the archive logging. In version 6 I currently have:

  • Block message:

    Block Archive [Chain=MAIN/ALT H=00000000 MRT=00000000 NRT=00000000]

Note: the 00.. stands for a number; the length may not be accurate in this example.

  • Altchaininfo:
    I'm taking this out; it duplicates the saved altchaininfo data, so it was really just a convenience for watching it from the daemon console.

Comments on the Block message? Anything else you would like to see in it?
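
For later analysis, lines in that format could be pulled apart with something like this (sketch only; the field layout follows the version 6 example above and may change, and the input/output filenames are just placeholders):

# Extract chain, height, MRT and NRT from archive block messages into CSV.
grep -o 'Block Archive \[Chain=[A-Z]* H=[0-9]* MRT=[0-9]* NRT=[0-9]*\]' bitmonero.log \
  | sed 's/.*Chain=\([A-Z]*\) H=\([0-9]*\) MRT=\([0-9]*\) NRT=\([0-9]*\).*/\1,\2,\3,\4/' \
  > block_archive.csv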

neptuneresearch commented on August 23, 2024

For the record, it looks like the archive daemon logs archive messages under the Info level to the global log.

Mitchellpkt commented on August 23, 2024

That looks good. I am extremely eager to have MRT/NRT information.

If it won't hold up the patch timeline, then it would be natural to record peer information alongside the NRT for each copy. However, if that would set you back more than an hour or two, just save it for the following version of the patch. Since we're totally blind at the moment, [patch without peer info, soon] >> [patch with peer info, later].

neptuneresearch commented on August 23, 2024

Yea, no peer info, I haven't researched that yet. Probably when I do the log messaging analysis.

Mitchellpkt commented on August 23, 2024

Realized that --log-level 4 contains data we've been looking for (txn_hash + txn_NRT for the first copy), so we can get more mileage out of the logging than we expected.

It would be cleaner to set custom categories rather than run the overkill --log-level 4 and extract what we need afterwards. Notes from stoffu in MRL about accomplishing this:

Stoffu Noether > See monero-project/monero@5833d66 for a detailed explanation.

For example, you can set --log-level 0,*net*:DEBUG to specifically make any log category containing net, such as net.p2p, print debug-level logs.
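
To illustrate the syntax from the quote (the *net* pattern is taken directly from Stoffu's example; any other category names would have to be checked against the log-category documentation):

# Keep the global level at 0, but print DEBUG for every category whose name
# contains "net" (e.g. net.p2p). Quoted so the shell doesn't glob the *.
monerod --log-level '0,*net*:DEBUG'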

Since I'm tied up with Mastering Monero right now, I just let a new node sync from scratch with monerod-archive and --log-level 4 ... Once I finish a few more chapters, I'll dig through those logfiles to document the desired information and how to specify it by input flag.

neptuneresearch commented on August 23, 2024

Edit: I moved this information to the Wiki at Monerod Log Levels.

IMO, *:TRACE (log level > 3) is a lot; maybe we could improve the signal-to-noise ratio if we only :TRACE what we need and otherwise use *:WARNING.
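
For instance, something along these lines (the syntax follows the *net*:DEBUG example quoted earlier; net.p2p is just one plausible category to trace and would need to be confirmed against the category list in the wiki doc):

# Sketch: quiet WARNING baseline, with TRACE only for a category we care about.
monerod --log-level '0,*:WARNING,net.p2p:TRACE'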

neptuneresearch commented on August 23, 2024

@IsthmusCrypto just to touch base on this, we are log level 1 currently until you go through the above doc and pick out more categories [if] you want.
