shivramsrivastava / firmament

This project forked from camsas/firmament

The Firmament cluster scheduling platform

Home Page: http://www.firmament.io

License: Apache License 2.0

CMake 3.07% Makefile 0.21% Shell 1.03% Python 2.89% C 0.77% C++ 89.07% Smarty 2.96%

firmament's People

Contributors

adamgleave, gustafa, icgog, joshuabambrick, mgrosvenor, ms705, pooya, sebastian, shivramsrivastava

firmament's Issues

Firmament Crashed with 'task_node' must be non NULL

I found this error when running the k8s e2e tests with the firmament-poseidon scheduler:
It seems that when a pod's state changes from Pending to Succeeded, Firmament crashes.
Poseidon Log:
I0803 13:37:13.513089 21692 poseidon.go:47] Scheduler returned 0 deltas
I0803 13:37:23.514032 21692 poseidon.go:47] Scheduler returned 0 deltas
I0803 13:37:28.553037 21692 podwatcher.go:300] enqueuePodAddition: Added pod {hostpath-symlink-prep-e2e-tests-subpath-p7vwr e2e-tests-subpath-p7vwr}
I0803 13:37:28.553124 21692 podwatcher.go:386] PodPending {hostpath-symlink-prep-e2e-tests-subpath-p7vwr e2e-tests-subpath-p7vwr}
I0803 13:37:28.553968 21692 firmament_client.go:94] Task Submitted
I0803 13:37:31.936804 21692 podwatcher.go:327] enqueuePodUpdate: Updated pod state change {hostpath-symlink-prep-e2e-tests-subpath-p7vwr e2e-tests-subpath-p7vwr} Succeeded
I0803 13:37:31.936907 21692 podwatcher.go:419] PodSucceeded {hostpath-symlink-prep-e2e-tests-subpath-p7vwr e2e-tests-subpath-p7vwr}
FATAL: 2018/08/03 13:37:32 &{0xc42011c280}.TaskCompleted(_) = _, rpc error: code = Unavailable desc = transport is closing:

Firmament dump
F0803 13:37:31.937496 21720 flow_graph_manager.cc:637] Check failed: 'task_node' Must be non NULL
*** Check failure stack trace: ***
@ 0x7fe9998fc5cd google::LogMessage::Fail()
@ 0x7fe9998fe433 google::LogMessage::SendToLog()
@ 0x7fe9998fc15b google::LogMessage::Flush()
@ 0x7fe9998fee1e google::LogMessageFatal::~LogMessageFatal()
@ 0xa871da google::CheckNotNull<>()
@ 0xa9b064 firmament::FlowGraphManager::TaskCompleted()
@ 0xaad9c1 firmament::scheduler::FlowScheduler::HandleTaskCompletion()
@ 0x94603e firmament::FirmamentSchedulerServiceImpl::TaskCompleted()
@ 0x91f0b9 std::_Mem_fn_base<>::operator()<>()
@ 0x91b5fb std::_Function_handler<>::_M_invoke()
@ 0x937d99 std::function<>::operator()()
@ 0x930d44 grpc::RpcMethodHandler<>::RunHandler()
@ 0xaf75c6 grpc::Server::SyncRequestThreadManager::DoWork()
@ 0xafa287 grpc::ThreadManager::MainWorkLoop()
@ 0xafa2ec grpc::ThreadManager::WorkerThread::Run()
@ 0x7fe998fb2c80 (unknown)
@ 0x7fe99a1cb6ba start_thread
@ 0x7fe99871841d clone
@ (nil) (unknown)
Aborted (core dumped)

Firmament crashed with 'res_id_ptr' Must be non NULL

I found this error when running the k8s e2e tests with the firmament-poseidon scheduler. The test case is "should handle the creation of 1000 pods".

Poseidon Log:
I0806 08:57:18.678808 4004 podwatcher.go:300] enqueuePodAddition: Added pod {b91ceedd-9956-11e8-b4f9-fa163ef4813d e2e-tests-pod-garbage-collector-qt5xg}
I0806 08:57:18.678900 4004 podwatcher.go:386] PodPending {b91ceedd-9956-11e8-b4f9-fa163ef4813d e2e-tests-pod-garbage-collector-qt5xg}
I0806 08:57:18.679877 4004 firmament_client.go:94] Task Submitted
I0806 08:57:18.681714 4004 podwatcher.go:327] enqueuePodUpdate: Updated pod state change {b91ceedd-9956-11e8-b4f9-fa163ef4813d e2e-tests-pod-garbage-collector-qt5xg} Failed
I0806 08:57:18.681830 4004 podwatcher.go:451] PodFailed {b91ceedd-9956-11e8-b4f9-fa163ef4813d e2e-tests-pod-garbage-collector-qt5xg}
Failed Task Id: 16692299497042353999
I0806 08:57:18.693689 4004 podwatcher.go:300] enqueuePodAddition: Added pod {b91e5eec-9956-11e8-b4f9-fa163ef4813d e2e-tests-pod-garbage-collector-qt5xg}
I0806 08:57:18.693777 4004 podwatcher.go:386] PodPending {b91e5eec-9956-11e8-b4f9-fa163ef4813d e2e-tests-pod-garbage-collector-qt5xg}
{node.kubernetes.io/not-ready Exists NoExecute 0xc420afdaa0}
{node.kubernetes.io/unreachable Exists NoExecute 0xc420afdb70}
I0806 08:57:18.694993 4004 podwatcher.go:327] enqueuePodUpdate: Updated pod state change {b91e5eec-9956-11e8-b4f9-fa163ef4813d e2e-tests-pod-garbage-collector-qt5xg} Failed
I0806 08:57:18.782707 4004 podwatcher.go:300] enqueuePodAddition: Added pod {b920797f-9956-11e8-b4f9-fa163ef4813d e2e-tests-pod-garbage-collector-qt5xg}
I0806 08:57:18.782764 4004 podwatcher.go:386] PodPending {b920797f-9956-11e8-b4f9-fa163ef4813d e2e-tests-pod-garbage-collector-qt5xg}
I0806 08:57:18.785951 4004 podwatcher.go:327] enqueuePodUpdate: Updated pod state change {b920797f-9956-11e8-b4f9-fa163ef4813d e2e-tests-pod-garbage-collector-qt5xg} Failed
I0806 08:57:18.796038 4004 podwatcher.go:300] enqueuePodAddition: Added pod {b92e4261-9956-11e8-b4f9-fa163ef4813d e2e-tests-pod-garbage-collector-qt5xg}
I0806 08:57:18.796101 4004 podwatcher.go:386] PodPending {b92e4261-9956-11e8-b4f9-fa163ef4813d e2e-tests-pod-garbage-collector-qt5xg}
I0806 08:57:18.799004 4004 podwatcher.go:327] enqueuePodUpdate: Updated pod state change {b92e4261-9956-11e8-b4f9-fa163ef4813d e2e-tests-pod-garbage-collector-qt5xg} Failed
I0806 08:57:18.802224 4004 podwatcher.go:300] enqueuePodAddition: Added pod {b9305768-9956-11e8-b4f9-fa163ef4813d e2e-tests-pod-garbage-collector-qt5xg}
I0806 08:57:18.802316 4004 podwatcher.go:386] PodPending {b9305768-9956-11e8-b4f9-fa163ef4813d e2e-tests-pod-garbage-collector-qt5xg}
I0806 08:57:18.804904 4004 podwatcher.go:327] enqueuePodUpdate: Updated pod state change {b9305768-9956-11e8-b4f9-fa163ef4813d e2e-tests-pod-garbage-collector-qt5xg} Failed
I0806 08:57:18.814963 4004 podwatcher.go:300] enqueuePodAddition: Added pod {b931338b-9956-11e8-b4f9-fa163ef4813d e2e-tests-pod-garbage-collector-qt5xg}
I0806 08:57:18.815157 4004 podwatcher.go:386] PodPending {b931338b-9956-11e8-b4f9-fa163ef4813d e2e-tests-pod-garbage-collector-qt5xg}
I0806 08:57:18.817780 4004 podwatcher.go:327] enqueuePodUpdate: Updated pod state change {b931338b-9956-11e8-b4f9-fa163ef4813d e2e-tests-pod-garbage-collector-qt5xg} Failed
I0806 08:57:18.820194 4004 podwatcher.go:300] enqueuePodAddition: Added pod {b9332b4c-9956-11e8-b4f9-fa163ef4813d e2e-tests-pod-garbage-collector-qt5xg}
I0806 08:57:18.820364 4004 podwatcher.go:386] PodPending {b9332b4c-9956-11e8-b4f9-fa163ef4813d e2e-tests-pod-garbage-collector-qt5xg}
I0806 08:57:18.824280 4004 podwatcher.go:327] enqueuePodUpdate: Updated pod state change {b9332b4c-9956-11e8-b4f9-fa163ef4813d e2e-tests-pod-garbage-collector-qt5xg} Failed
I0806 08:57:18.832809 4004 podwatcher.go:300] enqueuePodAddition: Added pod {b93428ba-9956-11e8-b4f9-fa163ef4813d e2e-tests-pod-garbage-collector-qt5xg}
I0806 08:57:18.832959 4004 podwatcher.go:386] PodPending {b93428ba-9956-11e8-b4f9-fa163ef4813d e2e-tests-pod-garbage-collector-qt5xg}
I0806 08:57:18.836302 4004 podwatcher.go:327] enqueuePodUpdate: Updated pod state change {b93428ba-9956-11e8-b4f9-fa163ef4813d e2e-tests-pod-garbage-collector-qt5xg} Failed
I0806 08:57:18.839174 4004 podwatcher.go:300] enqueuePodAddition: Added pod {b93606be-9956-11e8-b4f9-fa163ef4813d e2e-tests-pod-garbage-collector-qt5xg}
I0806 08:57:18.839233 4004 podwatcher.go:386] PodPending {b93606be-9956-11e8-b4f9-fa163ef4813d e2e-tests-pod-garbage-collector-qt5xg}
I0806 08:57:18.842177 4004 podwatcher.go:327] enqueuePodUpdate: Updated pod state change {b93606be-9956-11e8-b4f9-fa163ef4813d e2e-tests-pod-garbage-collector-qt5xg} Failed
I0806 08:57:18.853196 4004 podwatcher.go:300] enqueuePodAddition: Added pod {b936ea1d-9956-11e8-b4f9-fa163ef4813d e2e-tests-pod-garbage-collector-qt5xg}
I0806 08:57:18.853263 4004 podwatcher.go:386] PodPending {b936ea1d-9956-11e8-b4f9-fa163ef4813d e2e-tests-pod-garbage-collector-qt5xg}
I0806 08:57:18.855740 4004 podwatcher.go:327] enqueuePodUpdate: Updated pod state change {b936ea1d-9956-11e8-b4f9-fa163ef4813d e2e-tests-pod-garbage-collector-qt5xg} Failed
I0806 08:57:18.858839 4004 podwatcher.go:300] enqueuePodAddition: Added pod {b938fdfd-9956-11e8-b4f9-fa163ef4813d e2e-tests-pod-garbage-collector-qt5xg}
I0806 08:57:18.861190 4004 podwatcher.go:327] enqueuePodUpdate: Updated pod state change {b938fdfd-9956-11e8-b4f9-fa163ef4813d e2e-tests-pod-garbage-collector-qt5xg} Failed
FATAL: 2018/08/06 08:57:18 &{0xc4201ee780}.TaskSubmitted(_) = _, rpc error: code = Unavailable desc = transport is closing:

Firmament Dump
F0806 08:57:18.682215 4034 event_driven_scheduler.cc:393] Check failed: 'res_id_ptr' Must be non NULL
*** Check failure stack trace: ***
@ 0x7fc320f525cd google::LogMessage::Fail()
@ 0x7fc320f54433 google::LogMessage::SendToLog()
@ 0x7fc320f5215b google::LogMessage::Flush()
@ 0x7fc320f54e1e google::LogMessageFatal::~LogMessageFatal()
@ 0xa324d0 google::CheckNotNull<>()
@ 0xa2b304 firmament::scheduler::EventDrivenScheduler::HandleTaskFailure()
@ 0xaadbdd firmament::scheduler::FlowScheduler::HandleTaskFailure()
@ 0x946404 firmament::FirmamentSchedulerServiceImpl::TaskFailed()
@ 0x91f2c7 std::_Mem_fn_base<>::operator()<>()
@ 0x91b8db std::_Function_handler<>::_M_invoke()
@ 0x937c6d std::function<>::operator()()
@ 0x9309de grpc::RpcMethodHandler<>::RunHandler()
@ 0xaf75c6 grpc::Server::SyncRequestThreadManager::DoWork()
@ 0xafa287 grpc::ThreadManager::MainWorkLoop()
@ 0xafa2ec grpc::ThreadManager::WorkerThread::Run()
@ 0x7fc320608c80 (unknown)
@ 0x7fc3218216ba start_thread
@ 0x7fc31fd6e41d clone
@ (nil) (unknown)
Aborted (core dumped)

Pod updates feature in firmament

Pod updates are not yet fully implemented in Firmament. As of Kubernetes 1.11, the only fields that can be updated on a pod are spec.containers[].image, spec.initContainers[].image, spec.activeDeadlineSeconds and spec.tolerations; for a ReplicaSet, additional fields such as CPU and memory requests can also be updated.
This feature needs preemption support in Firmament.

F0203: firmament_client.go:96 Crashes - "Task already submitted"

Hello,
When I try to deploy Poseidon (Firmament is already deployed), I get the error below, which seems to be fatal and leaves the pod in the CrashLoopBackOff state.

Here is the log of poseidon pod:
kubectl logs poseidon-8687d7b597-7npgr -n kube-system
I0203 15:18:43.711381 1 config.go:190] ReadFromCommandLineFlags{poseidon firmament-service.kube-system 1.6 0.0.0.0:9091 10 9090 . false 0.0.0.0:8989 0.0.0.0:8989 0.0.0.0:8989 500 1000 false false}
W0203 15:18:43.711500 1 config.go:133] Config File "poseidon_config" Not Found in "[/]"unable to read poseidon_config, using command flags/default values
I0203 15:18:43.718054 1 k8sclient.go:104] k8sclient init called
I0203 15:18:43.718362 1 poseidon.go:117] Starting Poseidon with firmament address firmament-service.kube-system:9090.
I0203 15:18:43.720953 1 stats.go:164] Starting stats server...
I0203 15:18:43.724818 1 poseidon.go:47] Scheduler returned 0 deltas
I0203 15:18:43.725644 1 k8sclient.go:92] k8s newclient called
I0203 15:18:43.725689 1 nodewatcher.go:41] Starting NodeWatcher...
I0203 15:18:43.726084 1 nodewatcher.go:222] Getting node updates...
I0203 15:18:43.749886 1 nodewatcher.go:149] enqueueNodeAdition: Added node full-wahoo
I0203 15:18:43.749908 1 nodewatcher.go:149] enqueueNodeAdition: Added node liked-tick
I0203 15:18:43.749917 1 nodewatcher.go:149] enqueueNodeAdition: Added node viable-toad
I0203 15:18:43.827371 1 nodewatcher.go:231] Starting node watching workers
I0203 15:18:43.827653 1 nodewatcher.go:260] map[full-wahoo:resource_desc:<uuid:"9ca153c1-668f-4d68-b765-9002bc52c005" friendly_name:"full-wahoo" state:RESOURCE_IDLE type:RESOURCE_MACHINE available_resources:<cpu_cores:4000 ram_cap:4038254592000 ephemeral_cap:16048676019000 > reserved_resources:<ram_cap:104857600000 ephemeral_cap:1783186253000 > resource_capacity:<cpu_cores:4000 ram_cap:4143112192000 ephemeral_cap:17831862272000 > labels:<key:"juju-application" value:"kubernetes-worker" > labels:<key:"kubernetes.io/hostname" value:"full-wahoo" > labels:<key:"beta.kubernetes.io/arch" value:"amd64" > labels:<key:"beta.kubernetes.io/os" value:"linux" > max_pods:110 > children:<resource_desc:<uuid:"6bf16af5-5152-41fa-ba7a-29ed6d952aed" friendly_name:"full-wahoo_PU #0" state:RESOURCE_IDLE resource_capacity:<cpu_cores:4000 ram_cap:4143112192000 ephemeral_cap:17831862272000 > labels:<key:"juju-application" value:"kubernetes-worker" > labels:<key:"kubernetes.io/hostname" value:"full-wahoo" > labels:<key:"beta.kubernetes.io/arch" value:"amd64" > labels:<key:"beta.kubernetes.io/os" value:"linux" > > parent_id:"9ca153c1-668f-4d68-b765-9002bc52c005" > ] in NodeAdded
F0203 15:18:43.828111 1 firmament_client.go:96] Task (9ef2faa6-e39c-4e7f-aea9-15bff4323d79,14400535326635484788) already submitted

I would greatly appreciate it if you could tell me what might be causing this.

Thank you,
Daria

Segmentation fault

I hit a segmentation fault in the Firmament code when running the k8s e2e tests with the firmament-poseidon scheduler, during the test case "should update labels on modification".

Poseidon Log:
I0806 12:28:20.637813 19225 podwatcher.go:300] enqueuePodAddition: Added pod {labelsupdate343ad6f0-9974-11e8-aa0f-fa163ef4813d e2e-tests-downward-api-rgcpn}
I0806 12:28:20.637894 19225 podwatcher.go:386] PodPending {labelsupdate343ad6f0-9974-11e8-aa0f-fa163ef4813d e2e-tests-downward-api-rgcpn}
I0806 12:28:20.638777 19225 firmament_client.go:94] Task Submitted
I0806 12:28:23.267119 19225 poseidon.go:47] Scheduler returned 1 deltas
I0806 12:28:25.474937 19225 podwatcher.go:327] enqueuePodUpdate: Updated pod state change {labelsupdate343ad6f0-9974-11e8-aa0f-fa163ef4813d e2e-tests-downward-api-rgcpn} Running
I0806 12:28:25.475137 19225 podwatcher.go:461] PodRunning {labelsupdate343ad6f0-9974-11e8-aa0f-fa163ef4813d e2e-tests-downward-api-rgcpn}
I0806 12:28:27.164061 19225 podwatcher.go:340] enqueuePodUpdate: Updated pod {labelsupdate343ad6f0-9974-11e8-aa0f-fa163ef4813d e2e-tests-downward-api-rgcpn}
I0806 12:28:27.164204 19225 podwatcher.go:467] PodUpdated {labelsupdate343ad6f0-9974-11e8-aa0f-fa163ef4813d e2e-tests-downward-api-rgcpn}
I0806 12:28:33.268373 19225 poseidon.go:47] Scheduler returned 0 deltas
I0806 12:28:35.639014 19225 podwatcher.go:327] enqueuePodUpdate: Updated pod state change {labelsupdate343ad6f0-9974-11e8-aa0f-fa163ef4813d e2e-tests-downward-api-rgcpn} Pending
I0806 12:28:35.639118 19225 podwatcher.go:386] PodPending {labelsupdate343ad6f0-9974-11e8-aa0f-fa163ef4813d e2e-tests-downward-api-rgcpn}
I0806 12:28:35.639143 19225 podwatcher.go:396] Pod already added%!(EXTRA string=labelsupdate343ad6f0-9974-11e8-aa0f-fa163ef4813d, string=e2e-tests-downward-api-rgcpn)
I0806 12:28:36.653821 19225 podwatcher.go:316] enqueuePodDeletion: Added pod {labelsupdate343ad6f0-9974-11e8-aa0f-fa163ef4813d e2e-tests-downward-api-rgcpn}
I0806 12:28:36.653912 19225 podwatcher.go:428] PodDeleted {labelsupdate343ad6f0-9974-11e8-aa0f-fa163ef4813d e2e-tests-downward-api-rgcpn}
FATAL: 2018/08/06 12:28:36 &{0xc42011c280}.TaskRemoved(_) = _, rpc error: code = Unavailable desc = transport is closing:

Firmament Dump:
I0806 12:28:23.266592 19235 firmament_scheduler_service.cc:178] Got 1 scheduling deltas
Segmentation fault (core dumped)
Core Dump (For Reference)
ubuntu@worker2:~/CrashFix/firmament$ gdb build/src/firmament_scheduler
GNU gdb (Ubuntu 7.11.1-0ubuntu1~16.5) 7.11.1
Copyright (C) 2016 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later http://gnu.org/licenses/gpl.html
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law. Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
http://www.gnu.org/software/gdb/bugs/.
Find the GDB manual and other documentation resources online at:
http://www.gnu.org/software/gdb/documentation/.
For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from build/src/firmament_scheduler...done.
(gdb) core
No core file now.
(gdb) core core
warning: core file may not match specified executable file.
[New LWP 21720]
[New LWP 21658]
[New LWP 21721]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Core was generated by `./build/src/firmament_scheduler -flagfile config/firmament_scheduler_cpu_mem.cf'.
Program terminated with signal SIGABRT, Aborted.
#0  0x00007fe998646428 in __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:54
54      ../sysdeps/unix/sysv/linux/raise.c: No such file or directory.
[Current thread is 1 (Thread 0x7fe99339f700 (LWP 21720))]
(gdb) bt full
#0  0x00007fe998646428 in __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:54
        resultvar = 0
        pid = 21658
        selftid = 21720
#1  0x00007fe99864802a in __GI_abort () at abort.c:89
        save_stage = 2
        act = {__sigaction_handler = {sa_handler = 0x0, sa_sigaction = 0x0}, sa_mask = {__val = {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 140641289720974, 0, 140641174151504, 6, 0, 0}}, sa_flags = -1716456320, sa_restorer = 0x63}
        sigs = {__val = {32, 0 <repeats 15 times>}}
#2  0x00007fe99990512c in ?? () from /usr/lib/x86_64-linux-gnu/libglog.so.0
No symbol table info available.
#3  0x00007fe9998fc5cd in google::LogMessage::Fail() () from /usr/lib/x86_64-linux-gnu/libglog.so.0
No symbol table info available.
#4  0x00007fe9998fe433 in google::LogMessage::SendToLog() () from /usr/lib/x86_64-linux-gnu/libglog.so.0
No symbol table info available.
#5  0x00007fe9998fc15b in google::LogMessage::Flush() () from /usr/lib/x86_64-linux-gnu/libglog.so.0
No symbol table info available.
#6  0x00007fe9998fee1e in google::LogMessageFatal::~LogMessageFatal() () from /usr/lib/x86_64-linux-gnu/libglog.so.0
No symbol table info available.
#7  0x0000000000a871da in std::queue<unsigned long, std::deque<unsigned long, std::allocator<unsigned long> > >::queue(std::deque<unsigned long, std::allocator<unsigned long> >&&) (this=0x7fe99339ea70, __c=<unknown type in /home/ubuntu/CrashFix/firmament/build/src/firmament_scheduler, CU 0xb07a20, DIE 0xb2dbb7>) at /usr/include/c++/5/bits/stl_queue.h:146
No locals.
#8  0x0000000000a9b064 in firmament::FlowGraphManager::TaskRemoved (this=0x1e596a8, task_id=0) at /home/ubuntu/CrashFix/firmament/src/scheduling/flow/flow_graph_manager.cc:689
No locals.
#9  0x0000000000aad9c1 in firmament::scheduler::FlowScheduler::HandleTaskFinalReport (this=0x7fe988012a30, report=..., td_ptr=0x9a1a40 <firmament::TaskFinalReport::SharedCtor()+38>) at /home/ubuntu/CrashFix/firmament/src/scheduling/flow/flow_scheduler.cc:376
        lock = {m = @0x7fe988012a30}
        task_id = 140640985885232
        equiv_classes = 0x7fe99339e690
#10 0x000000000094603e in firmament::FirmamentSchedulerServiceImpl::TaskCompleted (this=0x7fff12e32bd0, context=0x7fe99339ec68, tid_ptr=0x7fe99339ea70, reply=0x7fe99339ea50) at /home/ubuntu/CrashFix/firmament/src/scheduling/firmament_scheduler_service.cc:253
        td_ptr = 0x7fe9880108b0
        job_id = {data = "\026\353\355\026\236\377M\v\262\237ZE\255g\265", <incomplete sequence \364>}
        jd_ptr = 0x7fe98800ad70
        report = warning: can't find linker symbol for virtual table for `firmament::TaskFinalReport' value
warning: found `protobuf_base_2ftask_5fstats_2eproto::TableStruct::offsets' instead
        {google::protobuf::Message = {}, static kIndexInFileMessages = 0, static kTaskIdFieldNumber = 1, static kStartTimeFieldNumber = 2, static kFinishTimeFieldNumber = 3, static kInstructionsFieldNumber = 4, static kCyclesFieldNumber = 5, static kLlcRefsFieldNumber = 6, static kLlcMissesFieldNumber = 7, static kRuntimeFieldNumber = 8, internal_metadata = {<google::protobuf::internal::InternalMetadataWithArenaBase<google::protobuf::UnknownFieldSet, google::protobuf::internal::InternalMetadataWithArena>> = {ptr_ = 0x0, static kPtrTagMask = , static kPtrValueMask = }, }, task_id_ = 0, start_time_ = 0, finish_time_ = 0, instructions_ = 0, cycles_ = 0, llc_refs_ = 0, llc_misses_ = 0, runtime_ = 0, cached_size = 0}
        num_incomplete_tasks = 0x7fe99339e850
#11 0x000000000091f0b9 in std::_Mem_fn_base<grpc::Status (firmament::FirmamentScheduler::Service::*)(grpc::ServerContext*, firmament::TaskUID const*, firmament::TaskCompletedResponse*), true>::operator()<grpc::ServerContext*, firmament::TaskUID const*, firmament::TaskCompletedResponse*, void>(firmament::FirmamentScheduler::Service*, grpc::ServerContext*&&, firmament::TaskUID const*&&, firmament::TaskCompletedResponse*&&) const (this=0x1e2ff08, __object=0x7fff12e32bd0) at /usr/include/c++/5/functional:600
No locals.
#12 0x000000000091b5fb in std::_Function_handler<grpc::Status (firmament::FirmamentScheduler::Service*, grpc::ServerContext*, firmament::TaskUID const*, firmament::TaskCompletedResponse*), std::_Mem_fn<grpc::Status (firmament::FirmamentScheduler::Service::*)(grpc::ServerContext*, firmament::TaskUID const*, firmament::TaskCompletedResponse*)> >::_M_invoke(std::_Any_data const&, firmament::FirmamentScheduler::Service*&&, grpc::ServerContext*&&, firmament::TaskUID const*&&, firmament::TaskCompletedResponse*&&) (__functor=...,
        __args#0=<unknown type in /home/ubuntu/CrashFix/firmament/build/src/firmament_scheduler, CU 0x32a12, DIE 0x939e2>,
        __args#1=<unknown type in /home/ubuntu/CrashFix/firmament/build/src/firmament_scheduler, CU 0x32a12, DIE 0x939e7>,
        __args#2=<unknown type in /home/ubuntu/CrashFix/firmament/build/src/firmament_scheduler, CU 0x32a12, DIE 0x939ec>,
        __args#3=<unknown type in /home/ubuntu/CrashFix/firmament/build/src/firmament_scheduler, CU 0x32a12, DIE 0x939f1>) at /usr/include/c++/5/functional:1857
No locals.
#13 0x0000000000937d99 in std::function<grpc::Status (firmament::FirmamentScheduler::Service*, grpc::ServerContext*, firmament::TaskUID const*, firmament::TaskCompletedResponse*)>::operator()(firmament::FirmamentScheduler::Service*, grpc::ServerContext*, firmament::TaskUID const*, firmament::TaskCompletedResponse*) const (this=0x1e2ff08, __args#0=0x7fff12e32bd0, __args#1=0x7fe99339ec68, __args#2=0x7fe99339ea70, __args#3=0x7fe99339ea50) at /usr/include/c++/5/functional:2267
No locals.
#14 0x0000000000930d44 in grpc::RpcMethodHandler<firmament::FirmamentScheduler::Service, firmament::TaskUID, firmament::TaskCompletedResponse>::RunHandler (this=0x1e2ff00, param=...) at /home/ubuntu/CrashFix/firmament/build/third_party/grpc/src/grpc/include/grpc++/impl/codegen/method_handler_impl.h:59
        req = {google::protobuf::Message = {}, static kIndexInFileMessages = 14, static kTaskUidFieldNumber = 1
