cloudsimpy's Issues

What is the state size for reinforcement learning?

The paper says the action size is M*N (M and N are the numbers of tasks and machines, respectively),
and that the state is a variable-length set of task-machine pairs. So what is the state size?
Any guidance would be appreciated. Thanks!
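For anyone landing on this question, a rough sketch of what a variable-length pair state can look like is below. This is an illustration, not the repository's exact code: the signature assumed for features_extract_func and the machine.accommodate validity check are hypothetical. The point is that only the per-pair feature dimension is fixed; the number of (machine, task) pairs changes at every scheduling step, so there is no single fixed state size.

    import numpy as np

    # Illustrative sketch only: the state is a variable-length list of candidate
    # (machine, task) pairs, each described by a fixed-length feature vector.
    def build_state(machines, tasks, features_extract_func):
        pairs, features = [], []
        for machine in machines:
            for task in tasks:
                if machine.accommodate(task):    # hypothetical validity check
                    pairs.append((machine, task))
                    features.append(features_extract_func(machine, task))
        # Shape is (num_valid_pairs, feature_dim): feature_dim is constant,
        # while num_valid_pairs varies from one scheduling step to the next.
        return pairs, np.asarray(features, dtype=np.float32)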

RuntimeError: An attempt has been made to start a new process before the current process has finished its bootstrapping phase.

python main-makespan.py

2021-06-27 22:57:21.987365: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX AVX2
Jobs number: 10
Tasks number: 93
Task instances number mean: 45.376344086021504
Task instances number std 85.48783652126134
Task instances cpu mean: 0.5264810426540284
Task instances cpu std: 0.10018140357202873
Task instances memory mean: 0.009175121384696406
Task instances memory std: 0.002757219144923028
Task instances duration mean: 74.68815165876777
Task instances duration std: 45.40343044250821
680 0.4279005527496338 62.05685685072286 1.4292964198242561
********** Iteration 0 ************
2021-06-27 22:57:31.490100: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX AVX2
Jobs number: 10
Tasks number: 93
Task instances number mean: 45.376344086021504
Task instances number std 85.48783652126134
Task instances cpu mean: 0.5264810426540284
Task instances cpu std: 0.10018140357202873
Task instances memory mean: 0.009175121384696406
Task instances memory std: 0.002757219144923028
Task instances duration mean: 74.68815165876777
Task instances duration std: 45.40343044250821
680 0.4458014965057373 62.05685685072286 1.4292964198242561
********** Iteration 0 ************
Traceback (most recent call last):
File "", line 1, in
File "D:\anaconda3\envs\deepjs\lib\multiprocessing\spawn.py", line 105, in spawn_main
exitcode = _main(fd)
File "D:\anaconda3\envs\deepjs\lib\multiprocessing\spawn.py", line 114, in _main
prepare(preparation_data)
File "D:\anaconda3\envs\deepjs\lib\multiprocessing\spawn.py", line 225, in prepare
_fixup_main_from_path(data['init_main_from_path'])
File "D:\anaconda3\envs\deepjs\lib\multiprocessing\spawn.py", line 277, in _fixup_main_from_path
run_name="mp_main")
File "D:\anaconda3\envs\deepjs\lib\runpy.py", line 263, in run_path
pkg_name=pkg_name, script_name=fname)
File "D:\anaconda3\envs\deepjs\lib\runpy.py", line 96, in _run_module_code
mod_name, mod_spec, pkg_name, script_name)
File "D:\anaconda3\envs\deepjs\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "D:\2021\Bin-packing\deepjs\CloudSimPy-master\playground\Non_DAG\launch_scripts\main-makespan2.py", line 78, in
manager = Manager()
File "D:\anaconda3\envs\deepjs\lib\multiprocessing\context.py", line 56, in Manager
m.start()
File "D:\anaconda3\envs\deepjs\lib\multiprocessing\managers.py", line 513, in start
self._process.start()
File "D:\anaconda3\envs\deepjs\lib\multiprocessing\process.py", line 105, in start
self._popen = self._Popen(self)
File "D:\anaconda3\envs\deepjs\lib\multiprocessing\context.py", line 322, in _Popen
return Popen(process_obj)
File "D:\anaconda3\envs\deepjs\lib\multiprocessing\popen_spawn_win32.py", line 33, in init
prep_data = spawn.get_preparation_data(process_obj._name)
File "D:\anaconda3\envs\deepjs\lib\multiprocessing\spawn.py", line 143, in get_preparation_data
_check_not_importing_main()
File "D:\anaconda3\envs\deepjs\lib\multiprocessing\spawn.py", line 136, in _check_not_importing_main
is not going to be frozen to produce an executable.''')
RuntimeError:
An attempt has been made to start a new process before the
current process has finished its bootstrapping phase.

    This probably means that you are not using fork to start your
    child processes and you have forgotten to use the proper idiom
    in the main module:

        if __name__ == '__main__':
            freeze_support()
            ...

    The "freeze_support()" line can be omitted if the program
    is not going to be frozen to produce an executable.

I'm running this code on Windows. Can anybody help me with this?
Thanks in advance!
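The error message itself describes the usual fix: on Windows, multiprocessing uses the spawn start method, which re-imports the main module in every child process, so anything that creates processes (including Manager()) must sit behind a main guard. A minimal sketch, assuming the script's setup code is wrapped in a hypothetical train() function:

    from multiprocessing import Manager, freeze_support

    def train():
        manager = Manager()   # now created only in the parent process
        # ... build the configs, then create and start the worker Processes here ...

    if __name__ == '__main__':
        freeze_support()      # only needed if the script is frozen into an executable
        train()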

Cannot import the core module

I couldn't import the core module while running main-makespan.py:

Traceback (most recent call last):
File "main-makespan.py", line 10, in
from core.machine import MachineConfig
ModuleNotFoundError: No module named 'core'
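This usually just means the repository root is not on the import path, since main-makespan.py sits several directories below it (playground/Non_DAG/launch_scripts/ in the tracebacks above). A minimal workaround, assuming that layout, is to append the repository root to sys.path before the project imports; running the script from the repository root with PYTHONPATH set to it works as well.

    import os
    import sys

    # launch_scripts/ -> Non_DAG/ -> playground/ -> repository root
    sys.path.append(os.path.join(os.path.dirname(os.path.abspath(__file__)), '..', '..', '..'))

    from core.machine import MachineConfig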

Problem setting up the Python environment

I have created a virtual environment with Python 3.6. While installing TensorFlow 1.12.0, it shows an error:

protobuf requires Python '>=3.7' but the running Python is 3.6.0

What should I do? I am using a Windows system with VS Code.
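One thing worth trying, assuming the failure comes from pip resolving a protobuf release that has already dropped Python 3.6 support (protobuf 3.20+ requires Python 3.7): pin protobuf to an older release in the same install command, for example:

    pip install "protobuf<3.20" tensorflow==1.12.0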

The process cannot start after it is created

The main program main-makespan.py is run on Windows.
I have already added:

    if __name__ == '__main__':
        freeze_support()

I can see that Process creates 13 processes, but the very first one fails at start().
The error says it cannot be converted to a numeric (numpy) value.

 for i in range(n_episode):
     algorithm = RLAlgorithm(agent, reward_giver, features_extract_func=features_extract_func,
                             features_normalize_func=features_normalize_func)
     episode = Episode(machine_configs, jobs_configs, algorithm, None)
     algorithm.reward_giver.attach(episode.simulation)
     p = Process(target=multiprocessing_run,
                 args=(episode, trajectories, makespans, average_completions, average_slowdowns))
     p.start()
     p.join()
 #

WARNING:tensorflow:From C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\training\checkpointable\util.py:1858: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
Traceback (most recent call last):
File "E:/DPL/CloudSimPy-master/CloudSimPy-master/playground/Non_DAG/launch_scripts/main-makespan.py", line 93, in
pj.start()
File "C:\ProgramData\Anaconda3\lib\multiprocessing\process.py", line 112, in start
self._popen = self._Popen(self)
File "C:\ProgramData\Anaconda3\lib\multiprocessing\context.py", line 223, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "C:\ProgramData\Anaconda3\lib\multiprocessing\context.py", line 322, in _Popen
return Popen(process_obj)
File "C:\ProgramData\Anaconda3\lib\multiprocessing\popen_spawn_win32.py", line 65, in init
reduction.dump(process_obj, to_child)
File "C:\ProgramData\Anaconda3\lib\multiprocessing\reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 745, in reduce
return (convert_to_tensor, (self.numpy(),))
File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 724, in numpy
raise ValueError("Resource handles are not convertible to numpy.")
ValueError: Resource handles are not convertible to numpy.
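The last frames of the traceback show what actually fails: every argument passed to Process() is pickled for the spawn start method, and the episode/algorithm objects hold TensorFlow eager variables whose resource handles cannot be pickled. One workaround is to pass only plain, picklable data across the process boundary and build the TensorFlow objects inside the child. The sketch below is hypothetical (worker and build_agent are made-up names); RLAlgorithm, Episode, reward_giver, the feature functions and multiprocessing_run are the names from the snippet above and are assumed to be available at module level in the launch script.

    # Hypothetical worker: everything that holds TF state is created inside the child.
    def worker(machine_configs, jobs_configs, agent_config, trajectories,
               makespans, average_completions, average_slowdowns):
        agent = build_agent(agent_config)   # hypothetical factory; builds the TF agent here
        algorithm = RLAlgorithm(agent, reward_giver,
                                features_extract_func=features_extract_func,
                                features_normalize_func=features_normalize_func)
        episode = Episode(machine_configs, jobs_configs, algorithm, None)
        algorithm.reward_giver.attach(episode.simulation)
        multiprocessing_run(episode, trajectories, makespans,
                            average_completions, average_slowdowns)

    # In the training loop, only picklable arguments cross the process boundary:
    # p = Process(target=worker, args=(machine_configs, jobs_configs, agent_config,
    #                                  trajectories, makespans,
    #                                  average_completions, average_slowdowns))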

Question

Hello, I would like to ask: do the tasks in your jobs not contain any dependency relationships?
The test file job.csv seems to be based on historical data? Where does the data come from?

Question about using the Alibaba Cloud trace

Hello, I see that your paper uses Alibaba Cloud trace data, and I have some questions about how the data is used.
My question is mainly about the relationship between tasks and instances. In the 2018 cluster trace, one DAG corresponds to multiple tasks, each task corresponds to multiple instances, and the number of instances differs from task to task. What is the relationship between these instances and their task? Does it mean that a task's data can be split up and processed by multiple instances running in a distributed fashion?
I hope to receive your answer. Thank you very much!

ValueError: Resource handles are not convertible to numpy.

python main-makespan.py

2021-06-27 23:37:25.581570: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX AVX2
Jobs number: 10
Tasks number: 93
Task instances number mean: 45.376344086021504
Task instances number std 85.48783652126134
Task instances cpu mean: 0.5264810426540284
Task instances cpu std: 0.10018140357202873
Task instances memory mean: 0.009175121384696406
Task instances memory std: 0.002757219144923028
Task instances duration mean: 74.68815165876777
Task instances duration std: 45.40343044250821
680 0.45179080963134766 62.05685685072286 1.4292964198242561
********** Iteration 0 ************
2021-06-27 23:37:36.427508: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX AVX2
Jobs number: 10
Tasks number: 93
Task instances number mean: 45.376344086021504
Task instances number std 85.48783652126134
Task instances cpu mean: 0.5264810426540284
Task instances cpu std: 0.10018140357202873
Task instances memory mean: 0.009175121384696406
Task instances memory std: 0.002757219144923028
Task instances duration mean: 74.68815165876777
Task instances duration std: 45.40343044250821
680 0.4897310733795166 62.05685685072286 1.4292964198242561
Traceback (most recent call last):
File "D:/2021/Bin-packing/deepjs/CloudSimPy-master/playground/Non_DAG/launch_scripts/main-makespan2.py", line 132, in
train()
File "D:/2021/Bin-packing/deepjs/CloudSimPy-master/playground/Non_DAG/launch_scripts/main-makespan2.py", line 96, in train
p.start()
File "D:\anaconda3\envs\deepjs\lib\multiprocessing\process.py", line 105, in start
self._popen = self._Popen(self)
File "D:\anaconda3\envs\deepjs\lib\multiprocessing\context.py", line 223, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "D:\anaconda3\envs\deepjs\lib\multiprocessing\context.py", line 322, in _Popen
return Popen(process_obj)
File "D:\anaconda3\envs\deepjs\lib\multiprocessing\popen_spawn_win32.py", line 65, in init
reduction.dump(process_obj, to_child)
File "D:\anaconda3\envs\deepjs\lib\multiprocessing\reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
File "D:\anaconda3\envs\deepjs\lib\site-packages\tensorflow\python\framework\ops.py", line 763, in reduce
return (convert_to_tensor, (self.numpy(),))
File "D:\anaconda3\envs\deepjs\lib\site-packages\tensorflow\python\framework\ops.py", line 742, in numpy
raise ValueError("Resource handles are not convertible to numpy.")
ValueError: Resource handles are not convertible to numpy.

This error occurs. Can anybody help me with this?
Thanks!!!
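This is the same serialization problem as in the earlier issue above: on Windows every argument handed to Process() has to be pickled, and TensorFlow resource handles refuse to be. A small diagnostic sketch (not from the repository) that reports which argument is the offender before the process is started:

    import pickle

    def find_unpicklable(**kwargs):
        """Report which keyword arguments cannot be pickled for a spawned Process."""
        for name, value in kwargs.items():
            try:
                pickle.dumps(value)
            except Exception as exc:
                print(f"{name} cannot be pickled: {exc!r}")

    # Example, using the names from the launch script:
    # find_unpicklable(episode=episode, trajectories=trajectories, makespans=makespans)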
