
Comments (9)

matthias-Q avatar matthias-Q commented on May 29, 2024

I am trying to find the root cause. The logs are not suspicious; they show some timeouts, but that's just because the DB has a high IO load. This is what:

SELECT
    *
FROM
    pg_statio_user_tables;

returns:

relid|schemaname|relname                       |heap_blks_read|heap_blks_hit|idx_blks_read|idx_blks_hit|toast_blks_read|toast_blks_hit|tidx_blks_read|tidx_blks_hit|
-----+----------+------------------------------+--------------+-------------+-------------+------------+---------------+--------------+--------------+-------------+
44424|public    |pending_steps                 |             0|            0|          216|        2295|              0|             0|             0|            0|
16460|public    |runs                          |        182784|     30826938|        28316|   109979645|          37406|       1568423|          1597|      3933090|
16516|public    |asset_keys                    |          5454|     38494342|          757|    21746971|            148|        424659|            38|       761821|
16539|public    |asset_event_tags              |         31473|       170285|        47013|      509394|              0|             0|             0|            0|
16555|public    |jobs                          |          9962|      9736603|          455|     6141862|              0|             0|             0|            0|
16583|public    |job_ticks                     |       3684634|   3470423073|      5073563|   335860867|             63|          1011|             9|         1031|
44435|public    |asset_check_executions        |             0|            0|            0|           0|              0|             0|             0|            0|
16496|public    |alembic_version               |             1|           23|            2|          17|               |              |              |             |
44402|public    |asset_daemon_asset_evaluations|     706935248|    535562840|        45411|     9730309|            418|          1181|            99|          875|
44414|public    |concurrency_slots             |             0|            0|            7|         187|              0|             0|             0|            0|
16416|public    |snapshots                     |          4078|       107544|         2941|      320093|           3945|         12561|           970|        17809|
16453|public    |kvs                           |          1264|     13882238|          238|     5910581|          24988|      42229431|           339|     10340129|
16569|public    |instigators                   |          2804|     42981206|          419|    31955612|              0|             0|             0|            0|
16528|public    |dynamic_partitions            |           517|        72381|          584|       60959|              0|             0|             0|            0|
16448|public    |instance_info                 |           191|       176981|          175|      119133|              0|             0|             0|            0|
16434|public    |bulk_actions                  |           154|       134958|          603|    12687122|             72|         12614|            20|        10072|
16426|public    |daemon_heartbeats             |          1378|     22333930|          237|    13688550|             43|          2238|             6|         2019|
16482|public    |run_tags                      |        137272|      8452773|        37721|      781590|              0|             0|             0|            0|
16502|public    |event_logs                    |      15061757|    221590786|      2693126|   981757487|         142356|        560068|          1505|       475894|
16404|public    |secondary_indexes             |            45|      8621166|          132|    25764533|              0|             0|             0|            0|

pg_stat_activity also does not show suspicious queries: no old queries, no idle_in_transaction sessions.
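For anyone repeating this check, a query along these lines surfaces long-running or idle-in-transaction backends (a sketch using only standard pg_stat_activity columns; the 5-minute threshold is an arbitrary illustration):

```sql
-- Find backends that are idle in a transaction or have held
-- a transaction open for a long time.
SELECT pid, state, now() - xact_start AS xact_age, query
FROM pg_stat_activity
WHERE state = 'idle in transaction'
   OR now() - xact_start > interval '5 minutes'
ORDER BY xact_age DESC NULLS LAST;
```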

Postgres runs on version 14. We are pinning dagster to 1.7.1 for now and will check if that helps.

from dagster.

matthias-Q avatar matthias-Q commented on May 29, 2024

Looks like the issue is the table asset_daemon_asset_evaluations.
I have switched off auto-materialization for the moment and the constant read IO is gone. This table is 1.2 GB, by the way; I have no clue why. The daemon tried to delete old evaluations but seemed to be stuck.
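For reference, the on-disk size of the table (including indexes and TOAST) can be checked with standard catalog functions (a sketch):

```sql
-- Total on-disk footprint of the evaluations table.
SELECT pg_size_pretty(
    pg_total_relation_size('public.asset_daemon_asset_evaluations')
);
```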


matthias-Q avatar matthias-Q commented on May 29, 2024

So, something happened to this table on the 19th, when dagster 1.7.2 was released:

select date(create_timestamp), count(*)
from public.asset_daemon_asset_evaluations   
group by 1

returns:

date      |count |
----------+------+
2024-03-30|    17|
2024-03-31|   231|
2024-04-01|   203|
2024-04-02|   200|
2024-04-03|   199|
2024-04-04|   201|
2024-04-05|   199|
2024-04-06|   146|
2024-04-07|   156|
2024-04-08|   196|
2024-04-09|    94|
2024-04-10|   270|
2024-04-11|   258|
2024-04-12|   260|
2024-04-13|   279|
2024-04-14|   245|
2024-04-15|   274|
2024-04-16|   260|
2024-04-17|   287|
2024-04-18|   259|
2024-04-19|135770|
2024-04-20|274080|
2024-04-21|141408|
2024-04-22|110208|
2024-04-23| 96576|
2024-04-24| 82752|
2024-04-25| 18875|
2024-04-26|   163|
2024-04-27|   425|
2024-04-28|   262|
2024-04-29|    71|


matthias-Q avatar matthias-Q commented on May 29, 2024

I was digging a bit into (maybe related) commits between 1.7.1 -> 1.7.2 and I stumbled over this: e4f394e
(referring to this line:

evaluation_state_by_key[neighbor_key] = dataclasses.replace(

and following)

@OwenKephart You have swapped replace with copy. Can this lead to the issue here, where the daemon creates way more entries in the tables job_ticks and asset_daemon_asset_evaluations?

This is a real issue for us, because we rely on auto-materializations. With them switched on, we get a lot of read IO on the Postgres DB.


OwenKephart avatar OwenKephart commented on May 29, 2024

Hi @matthias-Q -- apologies for not seeing this sooner, and I think you've found the exact problem. Luckily, there's already a fix in place for this: #21310.

Upgrading to the latest version of dagster (1.7.3) should resolve this issue, but feel free to reopen this if that is not the case.


matthias-Q avatar matthias-Q commented on May 29, 2024

Hey @OwenKephart, thanks for the response. It looks like the fix has no effect. I upgraded to 1.7.3 and switched auto-materialization back on, and the read IO goes up again.

Could it be that we have too many rows in some tables (job_ticks and asset_daemon_asset_evaluations)? Since there is no index on the create_timestamp column, this could lead to a lot of table scans. Would it be safe to delete rows where the create_timestamp is older than 3 days?

I will reopen this for now.

[EDIT] looks like I am not allowed to reopen this issue.


OwenKephart avatar OwenKephart commented on May 29, 2024

Hi @matthias-Q -- I think your hypothesis re: lacking an index on create_timestamp makes sense (and so even though there are no longer tons of new evaluations coming in, the old evaluations are slowing things down).

It's definitely safe to delete older records, and I'd encourage that option here. Leaving this issue open for now in case that doesn't resolve.
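A sketch of that cleanup (the 3-day cutoff mirrors the question above; the index name is illustrative and not part of Dagster's own schema migrations, so adding it is at your own risk):

```sql
-- Purge evaluations older than 3 days.
DELETE FROM public.asset_daemon_asset_evaluations
WHERE create_timestamp < now() - interval '3 days';

-- Optionally add the missing index so time-based queries
-- stop scanning the whole table. CONCURRENTLY avoids blocking writes.
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_adae_create_timestamp
    ON public.asset_daemon_asset_evaluations (create_timestamp);
```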


matthias-Q avatar matthias-Q commented on May 29, 2024

So in asset_daemon_asset_evaluations we have fewer entries after the upgrade to 1.7.3 (and also during the downgrade to 1.7.1). I have removed the old rows.

For job_ticks the row count is still higher than before (but not as severely as for asset_daemon_asset_evaluations):

select date(create_timestamp), count(*)
from public.job_ticks
where create_timestamp > '2024-04-15'
group by 1
order by 1;

returns:

date      |count|
----------+-----+
2024-04-15|   69|
2024-04-16|   53|
2024-04-17|   56|
2024-04-18|   48|
2024-04-19|   59|
2024-04-20|   63|
2024-04-21|   62|
2024-04-22|   64|
2024-04-23| 1717|
2024-04-24| 3378|
2024-04-25| 3397|
2024-04-26| 3945|
2024-04-27| 3547|
2024-04-28| 3568|
2024-04-29| 2525|
2024-04-30| 1602|

So I assume that there is still something going on.

I took a closer look at job_ticks, broken down by type:

select date(create_timestamp), type, count(*)
from public.job_ticks jt
group by 1, 2;

returns:

date      |type            |count|
----------+----------------+-----+
2024-04-30|AUTO_MATERIALIZE|  384|
2024-04-28|SCHEDULE        |   16|
2024-04-29|SENSOR          | 1715|
2024-04-28|AUTO_MATERIALIZE|  803|
2024-04-27|AUTO_MATERIALIZE|  792|
2024-04-30|SCHEDULE        |    8|
2024-04-29|AUTO_MATERIALIZE|  794|
2024-04-27|SENSOR          | 2738|
2024-04-29|SCHEDULE        |   16|
2024-04-27|SCHEDULE        |   17|
2024-04-30|SENSOR          | 1225|
2024-04-28|SENSOR          | 2749|

Looks like sensors are causing many of these ticks.

Turns out it is just one specific sensor with a specific parent id. It is the sensor that checks for job failures and posts them to an MS Teams channel.
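A query along these lines can attribute ticks to individual instigators (selector_id is an assumption about the job_ticks schema on this instance; verify the column names before running):

```sql
-- Count recent ticks per instigator to find the noisy one.
-- selector_id is assumed to identify the sensor/schedule.
SELECT selector_id, type, count(*)
FROM public.job_ticks
WHERE create_timestamp > now() - interval '7 days'
GROUP BY 1, 2
ORDER BY 3 DESC
LIMIT 10;
```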


matthias-Q avatar matthias-Q commented on May 29, 2024

The issue is resolved. I ran vacuum full verbose analyze asset_daemon_asset_evaluations, which drastically reduced the number of pages from 180k to 1k. Now I see very few IO reads.
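For anyone checking for the same bloat, the page count is visible in pg_class before and after the rewrite (a sketch; note that VACUUM FULL takes an exclusive lock on the table while it runs):

```sql
-- Pages on disk for the table, per the planner's statistics,
-- plus the actual heap size.
SELECT relname, relpages, pg_size_pretty(pg_relation_size(oid)) AS size
FROM pg_class
WHERE relname = 'asset_daemon_asset_evaluations';

-- Rewrite the table to reclaim dead space (exclusive lock!).
VACUUM FULL VERBOSE ANALYZE asset_daemon_asset_evaluations;
```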

